August 17, 2017 Marie H.

Prometheus + Grafana Monitoring on Kubernetes


Photo by AlphaTradeZone on Pexels

I spent way too long running Nagios and then various hosted monitoring SaaS tools before landing on Prometheus. The pull model felt wrong to me at first — every other monitoring tool I'd used pushed metrics somewhere — but it clicks once you see how well it fits Kubernetes. Pods come and go; you don't want to reconfigure your monitoring every time the scheduler moves something. Prometheus discovers targets from the Kubernetes API, scrapes them on its schedule, and just handles it. Combined with Grafana for dashboards, this is a solid monitoring stack that you own completely.

Why Prometheus on Kubernetes

Prometheus was basically built for this. It has native Kubernetes service discovery, understands pod annotations, and the whole kubectl mental model transfers over. It also stores metrics locally, which means it works even if your network is having a moment. The tradeoff is that it's not clustered out of the box — for serious HA setups you need Thanos or Cortex, but for most teams a single Prometheus instance with regular remote backups is fine.

We're running Prometheus 1.7.x here. (2.0 is in pre-release as I write this, with a new storage engine — worth upgrading to once it's stable, but that's a separate post.)

Installing Prometheus via Helm

If you're not using Helm yet, fix that first. We're on Helm 2, which means Tiller needs to be running in your cluster:

$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller \
    --clusterrole cluster-admin \
    --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

Then install Prometheus from the stable chart:

$ helm install stable/prometheus \
    --name prometheus \
    --namespace monitoring \
    --set server.persistentVolume.size=20Gi \
    --set server.retention=15d

NAME:   prometheus
LAST DEPLOYED: Thu Aug 17 10:32:44 2017
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                        CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
prometheus-server           10.3.240.15   <none>       80/TCP    1s
prometheus-alertmanager     10.3.251.22   <none>       80/TCP    1s

==> v1beta1/Deployment
NAME                        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
prometheus-server           1        1        1           0          1s

Configuring a Scrape Target

Prometheus already has Kubernetes discovery configured by the Helm chart. But if you have an app exposing /metrics on port 8080, you need to annotate the pod so Prometheus picks it up:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
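If your app doesn't expose /metrics yet, the exposition format is just plain text: one sample per line, with optional HELP/TYPE comments. A minimal sketch using only Python's standard library — a real app should use the official prometheus_client library instead, and the counter values here are illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative counters; a real app would use prometheus_client.
REQUEST_COUNT = {"200": 0, "500": 0}

def render_metrics():
    """Render counters in the Prometheus text exposition format."""
    lines = [
        "# HELP http_requests_total Total HTTP requests served.",
        "# TYPE http_requests_total counter",
    ]
    for code, count in sorted(REQUEST_COUNT.items()):
        lines.append('http_requests_total{status_code="%s"} %d' % (code, count))
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            REQUEST_COUNT["200"] += 1  # count everything else as a 200 for demo purposes
            self.send_response(200)
            self.end_headers()

# To serve on the annotated port:
# HTTPServer(("", 8080), MetricsHandler).serve_forever()
```

With the annotations above in place, Prometheus will hit this endpoint on port 8080 every scrape interval.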

The relevant section of prometheus.yml (configured by Helm, but good to understand) looks like this:

scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__

That relabeling block is doing the work: it reads the annotations, uses them to build the scrape address and path, and drops any pod without prometheus.io/scrape: "true".
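The address rule in particular is easy to misread. Here's a rough Python equivalent of what that last relabel rule does — the regex is copied from the config above, and the sample addresses are made up (Prometheus joins the source labels with ";" and anchors the regex, which is why fullmatch is the right analogue):

```python
import re

def rewrite_address(address, port_annotation):
    """Mimic the __address__ relabel rule: strip any existing port from
    the discovered address and append the port from the pod annotation."""
    # Prometheus joins source label values with ';' before matching.
    source = f"{address};{port_annotation}"
    match = re.fullmatch(r"([^:]+)(?::\d+)?;(\d+)", source)
    if match is None:
        return address  # no match: __address__ is left untouched
    return f"{match.group(1)}:{match.group(2)}"

# Discovered pod address already has a port; annotation says scrape 8080:
print(rewrite_address("10.8.2.7:10250", "8080"))  # 10.8.2.7:8080
# Discovered address has no port; the annotation's port is appended:
print(rewrite_address("10.8.2.7", "8080"))        # 10.8.2.7:8080
```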

Verify your target is being scraped by port-forwarding to the Prometheus UI:

$ kubectl port-forward -n monitoring \
    $(kubectl get pods -n monitoring -l app=prometheus,component=server -o jsonpath='{.items[0].metadata.name}') \
    9090:9090

Open http://localhost:9090/targets and you should see your pod listed as UP.

Installing Grafana via Helm

$ helm install stable/grafana \
    --name grafana \
    --namespace monitoring \
    --set adminPassword=changeme \
    --set service.type=LoadBalancer

$ kubectl get svc -n monitoring grafana
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
grafana   LoadBalancer   10.3.245.100   52.14.22.33     80:30234/TCP   2m

Connecting Grafana to Prometheus

Log into Grafana at the external IP. Go to Configuration → Data Sources → Add data source:

  • Name: Prometheus
  • Type: Prometheus
  • URL: http://prometheus-server.monitoring.svc.cluster.local
  • Access: proxy

Click Save & Test. If you see "Data source is working," you're done.
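"Proxy" access means the Grafana backend issues HTTP queries against that URL's /api/v1/query endpoint on your behalf. It's worth knowing the shape of what comes back — here's a sketch that parses a canned instant-query response (the metric values are made up for illustration):

```python
import json

# A canned /api/v1/query response in the shape Prometheus returns;
# the labels and values here are illustrative.
raw = json.dumps({
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"job": "my-app", "status_code": "200"},
             "value": [1502962364.0, "12.4"]},
        ],
    },
})

def extract_samples(body):
    """Return (labels, value) pairs from an instant-query response."""
    payload = json.loads(body)
    if payload["status"] != "success":
        raise RuntimeError("query failed")
    return [(r["metric"], float(r["value"][1]))
            for r in payload["data"]["result"]]

print(extract_samples(raw))
```

The same structure is what you'll see if you curl the Prometheus service directly, which is a handy way to debug a panel that renders "No data."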

Importing a Dashboard

The easiest way to get useful dashboards immediately is to import from Grafana's dashboard library. For Kubernetes cluster monitoring, dashboard ID 315 ("Kubernetes cluster monitoring (via Prometheus)" by the Instrumentisto team) is a solid starting point.

Go to Dashboards → Import, enter 315, select your Prometheus data source, and import. You'll immediately have CPU, memory, network, and pod count graphs for your entire cluster.

For a custom metric — say your app exports http_requests_total — a basic PromQL query to put in a Grafana panel:

rate(http_requests_total{job="my-app"}[5m])

This gives you requests per second averaged over a 5-minute window. To break it out by label, wrap the rate in an aggregation — `by` is only valid alongside an aggregation operator like `sum`:

sum by (status_code) (rate(http_requests_total{job="my-app"}[5m]))

Now you have per-status-code request rates in a single panel.
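Extrapolation details aside, the intuition behind rate() is simple: take the counter samples in the window and divide the increase by the elapsed time, treating any drop in the counter as a reset. A rough sketch (this ignores the boundary extrapolation real PromQL does, and the sample data is made up):

```python
def simple_rate(samples):
    """Approximate PromQL rate(): per-second increase of a counter
    over a window, accounting for counter resets (value drops)."""
    # samples: list of (timestamp_seconds, counter_value), oldest first
    if len(samples) < 2:
        return 0.0
    increase = 0.0
    for (_, prev), (_, cur) in zip(samples, samples[1:]):
        if cur >= prev:
            increase += cur - prev
        else:
            increase += cur  # counter reset: the new value counts from zero
    elapsed = samples[-1][0] - samples[0][0]
    return increase / elapsed

# Five samples over 4 minutes; the counter resets between t=120 and t=180.
samples = [(0, 100), (60, 160), (120, 220), (180, 30), (240, 90)]
print(simple_rate(samples))  # 0.875 requests/sec
```

This reset handling is why you always wrap counters in rate() rather than graphing them raw — a pod restart would otherwise look like a huge negative spike.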

Wrapping Up

The Helm-based setup gets you from zero to working monitoring in under 30 minutes, which is remarkable compared to anything I was doing before. The part that takes longer is building the right dashboards — don't try to monitor everything immediately. Start with the four golden signals (latency, traffic, errors, saturation), get those dashboards built, set up a couple of alerts in Alertmanager, and go from there.