Local Kafka Monitoring with Docker, Jolokia, Telegraf, InfluxDB & Grafana

Updated: May 15


The source for this project can be found here: https://github.com/mjones3/kafka-monitoring-visualization


Real-time monitoring of Kafka metrics on your local machine


Setting up a local observability stack for Kafka gives you real-time insight into broker health, throughput, and bottlenecks. In this guide we’ll wire together:

  • Kafka & Zookeeper (via Confluent images)

  • Kafka Manager for an easy cluster UI

  • Telegraf + Jolokia to scrape JMX metrics from Kafka

  • InfluxDB v2 to store time-series data

  • Grafana to visualize everything


By the end you’ll be able to:

  1. Spin up all services with one docker compose up -d.

  2. Verify Jolokia-scraped metrics are landing in InfluxDB.

  3. Explore and import a ready-made Kafka dashboard in Grafana.

Along the way we’ll call out gotchas we hit (mount-points, token scopes, plugin mismatches) so you don’t repeat our headaches.


Prerequisites

  • Docker & Compose installed on your machine

  • curl, jq (for testing)

  • Familiarity with shell / CLI is helpful


Create a project directory:

mkdir project-dir && cd project-dir
mkdir monitoring

1. Download the Jolokia JVM agent

Kafka exposes metrics over JMX; you’ll need Jolokia to turn that into HTTP JSON. Grab the JVM agent JAR (jolokia-jvm-<version>-javaagent.jar) from https://jolokia.org/download.html, drop it in monitoring/, and symlink it to the stable name jolokia-agent.jar so it can be mounted as /jolokia/jolokia-agent.jar inside the container:


# in project-dir/monitoring
curl -Lo jolokia-agent-jvm.jar https://repo1.maven.org/maven2/org/jolokia/jolokia-jvm/2.2.9/jolokia-jvm-2.2.9-javaagent.jar
ln -sf jolokia-agent-jvm.jar jolokia-agent.jar
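
The JAR does nothing until Kafka’s JVM actually loads it. In the compose file, mounting monitoring/ at /jolokia and setting KAFKA_OPTS wires it up. A minimal fragment, assuming the Confluent image and Jolokia’s default port 8778:

```yaml
# docker-compose.yml (kafka service, fragment); values are illustrative
  kafka:
    volumes:
      - ./monitoring:/jolokia   # makes jolokia-agent.jar visible in the container
    environment:
      # Confluent images append KAFKA_OPTS to the broker JVM's flags
      KAFKA_OPTS: "-javaagent:/jolokia/jolokia-agent.jar=port=8778,host=0.0.0.0"
```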

2. Write your Telegraf config
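
A minimal sketch of monitoring/telegraf.conf is below. The hostnames (kafka, influxdb), the token/org/bucket values, and the single MessagesInPerSec metric are assumptions; align them with your own compose file and add further MBeans as needed:

```toml
# monitoring/telegraf.conf (sketch)
[agent]
  interval = "10s"

[[inputs.jolokia2_agent]]
  # Jolokia agent running inside the Kafka broker
  urls = ["http://kafka:8778/jolokia"]

  [[inputs.jolokia2_agent.metric]]
    name  = "kafka_messages_in_total"
    mbean = "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec"
    paths = ["Count"]

[[outputs.influxdb_v2]]
  urls         = ["http://influxdb:8086"]
  token        = "my-super-secret-token"
  organization = "local"
  bucket       = "kafka_metrics"
```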


Validate:

docker exec telegraf-container telegraf --test --input-filter jolokia2_agent \
  --config /etc/telegraf/telegraf.conf | grep kafka_

You should see lines for all your kafka_* metrics.


3. Docker Compose everything
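
The full compose file lives in the repo; an abbreviated sketch follows. Image tags are assumptions, the kafka-manager service is omitted for brevity, and the credentials match the ones used later in this post:

```yaml
# docker-compose.yml (abbreviated sketch; illustrative image tags)
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    networks: [kafka_network]

  kafka:
    image: confluentinc/cp-kafka:7.5.0
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      # load the Jolokia agent mounted from ./monitoring
      KAFKA_OPTS: "-javaagent:/jolokia/jolokia-agent.jar=port=8778,host=0.0.0.0"
    volumes:
      - ./monitoring:/jolokia
      - kafka-data:/var/lib/kafka/data
    networks: [kafka_network]

  influxdb:
    image: influxdb:2.7
    container_name: influxdb-container
    environment:
      DOCKER_INFLUXDB_INIT_MODE: setup
      DOCKER_INFLUXDB_INIT_ORG: local
      DOCKER_INFLUXDB_INIT_BUCKET: kafka_metrics
      DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: my-super-secret-token
      DOCKER_INFLUXDB_INIT_USERNAME: admin
      DOCKER_INFLUXDB_INIT_PASSWORD: admin123
    volumes:
      - influxdb-data:/var/lib/influxdb2
    networks: [kafka_network]

  telegraf:
    image: telegraf:1.30
    container_name: telegraf-container
    depends_on: [kafka, influxdb]
    volumes:
      - ./monitoring/telegraf.conf:/etc/telegraf/telegraf.conf:ro
    networks: [kafka_network]

  grafana:
    image: grafana/grafana:10.4.2
    ports: ["3000:3000"]
    volumes:
      - grafana-data:/var/lib/grafana
    networks: [kafka_network]

networks:
  kafka_network:

volumes:
  kafka-data:
  influxdb-data:
  grafana-data:
```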


How the containers fit together

  • kafka_network: All services share this bridge network, so they can talk by container name.

  • Zookeeper → Kafka: Kafka waits for ZK (health-checked) to form its cluster.

  • Kafka → Kafka-Manager: Manager connects to ZK/Kafka via the same network.

  • Monitoring folder (./monitoring):

    • Jolokia agent (/jolokia/jolokia-agent.jar) is mounted into Kafka so Telegraf’s JMX plugin can scrape metrics.

    • telegraf.conf (in same folder) is mounted into Telegraf to configure inputs/outputs.

    • Influx init scripts in monitoring/init/ auto-bootstrap buckets, tokens, users at container start.

  • Volumes:

    • kafka-data persists your Kafka log segments across restarts.

    • influxdb-data holds the time-series engine’s data files and metadata.

    • grafana-data retains dashboards, plugin installs, and settings.

By tying them all to the same network and strategically mounting the monitoring configs, you achieve a fully connected, stateful Docker Compose stack that can:


  1. Expose Kafka’s internals via Jolokia → Telegraf,

  2. Store metrics in InfluxDB,

  3. Visualize everything in Grafana.


Bring up the stack

docker compose up -d

Verify each component


Jolokia

docker exec telegraf-container \
  curl -s http://kafka:8778/jolokia/version | jq .


Telegraf

Check Telegraf’s logs with docker logs telegraf-container and look for

Wrote batch of XX metrics to kafka_metrics


Inspect InfluxDB


docker exec -i influxdb-container influx query \
 --org local \
 --token my-super-secret-token << 'EOF'
import "influxdata/influxdb/schema"
schema.measurements(bucket:"kafka_metrics")
EOF

You should see your kafka_* metrics listed.


Configure Grafana


Browse to http://localhost:3000 and log in with admin / admin123.

Add Data Source → InfluxDB (Core)

Query Language: Flux

URL: http://influxdb-container:8086

Access: Server

Token: my-super-secret-token

Org: local

Bucket: kafka_metrics

Save & Test → you should get Data source is working.
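
If you’d rather skip the manual clicks, Grafana can provision the data source from a file at startup. A sketch, assuming the container hostname influxdb-container and a provisioning directory you’d mount into the grafana service:

```yaml
# monitoring/grafana/provisioning/datasources/influxdb.yml (illustrative;
# mount the provisioning dir at /etc/grafana/provisioning in the grafana service)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb-container:8086
    jsonData:
      version: Flux
      organization: local
      defaultBucket: kafka_metrics
    secureJsonData:
      token: my-super-secret-token
```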


Explore

Click the Explore icon, run a Flux query:

from(bucket: "kafka_metrics")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "kafka_messages_in_total")
  |> aggregateWindow(every: 10s, fn: max)
  |> yield()

You’ll see a live line chart of the message counter updating as data arrives.
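
One caveat: the Count attribute behind kafka_messages_in_total is a cumulative counter, so the query above plots a running total. For a true messages-per-second rate, take a derivative; this sketch assumes the same measurement name as above:

```flux
from(bucket: "kafka_metrics")
  |> range(start: -5m)
  |> filter(fn: (r) => r._measurement == "kafka_messages_in_total")
  |> derivative(unit: 1s, nonNegative: true)   // counter -> per-second rate
  |> aggregateWindow(every: 10s, fn: mean)
  |> yield(name: "msgs_per_sec")
```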

Import a dashboard

Go to Dashboards ➔ Manage ➔ Import

Enter 1860 ➔ Load

Pick your InfluxDB data source ➔ Import

Voilà—Kafka Overview panels for throughput, partitions, CPU, memory, and more.



Common Gotchas

  • Mounting Jolokia JAR: bind the directory, not the file.

  • InfluxDB tokens: a scoped token only covers the buckets it was created for; make sure Telegraf’s token can write to kafka_metrics, or just use the DOCKER_INFLUXDB_INIT_ADMIN_TOKEN for all writes.

  • Grafana plugins: the old “Flux” plugin is deprecated; use the built-in InfluxDB (Core) driver and Flux language mode.

  • Time ranges: always set Grafana’s timepicker wide enough (“Last 30m”) when panels look empty.

  • Telegraf wildcards: if you don’t see your metric in --test, curl Jolokia’s /list endpoint to discover the exact MBean path.


With this in place, you’ve got full, real-time visibility into your Kafka broker—all running locally via Docker Compose. Enjoy monitoring!
