A complete, containerized demo showcasing how an AI-style conversational interface can interact with Grafana metrics and logs via Grafana's Model Context Protocol (MCP) server.
- Overview
- Architecture
- Quick Start
- Using the Demo
- Pre-configured Dashboards
- Troubleshooting
- Demo Presentation Guide
- Cheat Sheet
- Project Structure
- Customization
- Technical Details
This demo provides a complete end-to-end solution for a Grafana & Friends meetup, demonstrating:
- AI Conversational Interface - A modern, Grafana-themed chat UI for natural language queries
- Real-time Metrics - Prometheus + Blackbox Exporter collecting application and endpoint metrics
- Log Aggregation - Loki + Promtail gathering structured logs
- Interactive Dashboards - Pre-configured Grafana OSS dashboards
- MCP Integration - Grafana's Model Context Protocol server enabling AI interactions
All components run as Docker containers for easy replication and demonstration.
Model Context Protocol (MCP) is an open standard for connecting AI assistants to data sources. Grafana has an official MCP server that bridges AI tools with your dashboards, metrics, and logs. Instead of manually navigating dashboards, you can just ask questions in natural language.
```
┌──────────────────────────────────────────────────────────────────┐
│                          Docker Network                          │
│                                                                  │
│   ┌──────────┐       ┌──────────┐       ┌──────────┐             │
│   │    UI    │──────▶│   MCP    │──────▶│ Grafana  │             │
│   │  :8888   │       │  Server  │       │   OSS    │             │
│   └──────────┘       │  :3001   │       │  :3000   │             │
│                      └──────────┘       └────┬─────┘             │
│                                              │                   │
│   ┌──────────┐       ┌──────────┐       ┌────▼─────┐             │
│   │   Demo   │──────▶│Prometheus│──────▶│   Loki   │             │
│   │   App    │       │  :9090   │       │  :3100   │             │
│   │  :8081   │       └────┬─────┘       └────▲─────┘             │
│   └──────────┘            │                  │                   │
│        │                  │                  │                   │
│        │             ┌────▼─────┐       ┌────┴─────┐             │
│        └────────────▶│ Blackbox │       │ Promtail │             │
│                      │ Exporter │       │          │             │
│                      │  :9115   │       └──────────┘             │
│                      └──────────┘                                │
└──────────────────────────────────────────────────────────────────┘
```
| Component | Image | Port | Purpose |
|---|---|---|---|
| Grafana OSS | `grafana/grafana:latest` | 3000 | Visualization & dashboards |
| Prometheus | `prom/prometheus:latest` | 9090 | Metrics collection & storage |
| Blackbox Exporter | `prom/blackbox-exporter:latest` | 9115 | Endpoint probing |
| Loki | `grafana/loki:latest` | 3100 | Log aggregation |
| Promtail | `grafana/promtail:latest` | - | Log collection agent |
| MCP Server | `mcp/grafana:latest` | 3001 | Official Grafana MCP server |
| Custom UI | `nginx:alpine` | 8888 | Conversational interface |
| Demo App | `python:3.11-slim` | 8081 | Metrics & log generator |
Note: Using the official `mcp/grafana` image from Docker Hub - no custom builds required!
- Docker Desktop or Docker Engine (20.10+)
- Docker Compose (v2.0+)
- 4GB+ RAM available
- Ports 3000, 3001, 8888, 8081, 9090, 9115, 3100 available
1. Start all services
```bash
# Navigate to the project directory
cd grafana-mcp

# Start all containers
docker-compose up -d
```

2. Wait for services to initialize (~30-60 seconds)

```bash
# Check container status
docker-compose ps
```

3. Access the services
| Service | URL | Credentials |
|---|---|---|
| Conversational UI | http://localhost:8888 | None |
| Grafana | http://localhost:3000 | admin / admin |
| Prometheus | http://localhost:9090 | None |
| Demo App | http://localhost:8081 | None |
```bash
# Check all containers are running
docker-compose ps

# Verify Grafana health
curl http://localhost:3000/api/health

# Verify Prometheus
curl http://localhost:9090/-/healthy

# Verify Loki (wait 15s after startup)
curl http://localhost:3100/ready

# Check demo app metrics
curl http://localhost:8081/metrics
```

The demo uses the official `mcp/grafana` image with SSE (Server-Sent Events) transport for HTTP/web API access.
The official documentation shows:

```bash
docker run -e GRAFANA_URL=http://localhost:3000 \
  -e GRAFANA_SERVICE_ACCOUNT_TOKEN=<token> \
  mcp/grafana -t stdio
```

However, this demo uses:

```yaml
command: ["-transport", "sse", "-address", "0.0.0.0:3001"]
```

Transport Options:
- `stdio` (default) - For CLI tools and desktop apps like Claude Desktop
- `sse` (our choice) - For HTTP/web APIs and browser-based UIs ✅
- `streamable-http` - For streaming HTTP connections
We use SSE because:
- The custom web UI needs HTTP endpoints
- Enables REST API access from JavaScript
- Works with browser-based conversational interfaces
- Allows multiple concurrent connections
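A quick way to confirm the SSE transport is reachable from the host is to open the stream and read a few events. This is a minimal sketch using the Python `requests` package; the `/sse` path is an assumption (a common default for Go-based MCP servers), so check the `mcp/grafana` documentation or the container logs for the exact endpoint.

```python
# Connectivity check for the MCP server's SSE transport.
# NOTE: the /sse path is an assumption - verify it against the mcp/grafana docs.
import requests

MCP_SSE_URL = "http://localhost:3001/sse"  # assumed path

with requests.get(MCP_SSE_URL, stream=True, timeout=(3.05, 10)) as resp:
    print("HTTP status:", resp.status_code)
    # Print the first few SSE lines the server sends, then stop.
    for i, line in enumerate(resp.iter_lines(decode_unicode=True)):
        if line:
            print(line)
        if i >= 5:
            break
```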
Option 1: Service Account Token (Recommended for Production)
1. Create a service account in Grafana:
   - Go to Configuration → Service Accounts
   - Click "Add service account"
   - Add a token with appropriate permissions
2. Set the environment variable: `GRAFANA_SERVICE_ACCOUNT_TOKEN=glsa_YourTokenHere`
3. Update `docker-compose.yml` or create a `.env` file (a scripted version of step 1 is sketched below)
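If you want to script step 1, recent Grafana versions expose a service-account HTTP API. The sketch below is an example under assumptions (the endpoint shapes and the admin/admin credentials come from this demo); double-check the API against your Grafana version before relying on it.

```python
# Create a service account and token via Grafana's HTTP API.
# Endpoints assumed: POST /api/serviceaccounts and POST /api/serviceaccounts/<id>/tokens.
import requests

GRAFANA = "http://localhost:3000"
AUTH = ("admin", "admin")  # demo credentials from this README

# 1. Create the service account
sa = requests.post(
    f"{GRAFANA}/api/serviceaccounts",
    json={"name": "mcp-demo", "role": "Viewer"},
    auth=AUTH,
    timeout=10,
).json()

# 2. Create a token for it
token = requests.post(
    f"{GRAFANA}/api/serviceaccounts/{sa['id']}/tokens",
    json={"name": "mcp-demo-token"},
    auth=AUTH,
    timeout=10,
).json()

# Use this value as GRAFANA_SERVICE_ACCOUNT_TOKEN (it starts with "glsa_")
print(token["key"])
```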
Option 2: Username/Password (Demo/Development)
Used by default in this demo for simplicity:
```yaml
environment:
  - GRAFANA_USERNAME=admin
  - GRAFANA_PASSWORD=admin
```

The MCP server will use the token if provided, and otherwise falls back to username/password.
- Open the UI at http://localhost:8888
- Wait for the "Connected" status in the top right corner
- Try these example queries:
  - "Show me all available dashboards"
  - "What are the current error rates?"
  - "Show me recent error logs from the demo application"
  - "What's the p95 latency for the demo app?"
  - "Are there any failing health checks?"
  - "What datasources are configured in Grafana?"
- Use Quick Actions in the sidebar for common queries
- Click example queries to populate the input field
Simulate traffic patterns to see real-time changes:
```bash
# Generate a traffic spike
curl "http://localhost:8081/simulate/load?pattern=spike"

# Generate errors
curl "http://localhost:8081/simulate/load?pattern=errors"
```

Then ask the UI: "What just happened with the error rate?"
Access dashboards in Grafana at http://localhost:3000
- Demo App Request Rate - Real-time request rates by method and status
- Error Rate - Percentage of 5xx errors with threshold indicators
- Response Time (Latency) - p50 and p95 percentiles over time
- Endpoint Health (Blackbox) - HTTP probe results for all endpoints
- Demo Application Logs - Real-time log viewer with filtering
- Log Volume by Level - Bar chart showing log distribution by severity
Navigate to http://localhost:9090 and try:
```
# Request rate
rate(demo_app_requests_total[5m])

# Error rate
sum(rate(demo_app_requests_total{status=~"5.."}[5m])) / sum(rate(demo_app_requests_total[5m]))

# p95 latency
histogram_quantile(0.95, sum(rate(demo_app_request_duration_seconds_bucket[5m])) by (le))

# Active connections
demo_app_active_connections
```
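The same queries can also be run programmatically against Prometheus's standard HTTP API; only the query string below comes from this demo, the rest is a minimal sketch using the `requests` package.

```python
# Run an instant PromQL query against the Prometheus HTTP API.
import requests

PROM = "http://localhost:9090"
query = 'histogram_quantile(0.95, sum(rate(demo_app_request_duration_seconds_bucket[5m])) by (le))'

resp = requests.get(f"{PROM}/api/v1/query", params={"query": query}, timeout=10)
resp.raise_for_status()

for sample in resp.json()["data"]["result"]:
    # Each sample is {"metric": {...}, "value": [timestamp, "value"]}
    print(sample["metric"], sample["value"][1])
```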
In Grafana Explore (http://localhost:3000/explore):
```
# All demo app logs
{job="demo-app"}

# Error logs only
{job="demo-app"} |~ "ERROR"

# Logs with specific text
{job="demo-app"} |~ "database"

# Count errors per minute
sum(count_over_time({job="demo-app"} |~ "ERROR" [1m]))
```
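LogQL queries can likewise be issued over Loki's standard HTTP API. The sketch below uses the `/loki/api/v1/query_range` endpoint with one of the queries from the list above, over a one-hour window (nanosecond timestamps); treat it as a convenience example rather than part of the demo.

```python
# Query Loki's HTTP API for recent demo-app error logs.
import time
import requests

LOKI = "http://localhost:3100"
now_ns = int(time.time() * 1e9)
hour_ns = int(3600 * 1e9)

resp = requests.get(
    f"{LOKI}/loki/api/v1/query_range",
    params={
        "query": '{job="demo-app"} |~ "ERROR"',
        "start": now_ns - hour_ns,
        "end": now_ns,
        "limit": 20,
    },
    timeout=10,
)
resp.raise_for_status()

for stream in resp.json()["data"]["result"]:
    for ts, line in stream["values"]:
        print(line)
```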
Symptom: `error during connect`

Solution:

- Start Docker Desktop
- Wait for it to fully initialize
- Run `docker ps` to verify
- Try `docker-compose up -d` again
Symptom: `port is already allocated`

Solution:

```powershell
# Find process using the port (e.g., 3000)
netstat -ano | findstr :3000

# Kill the process
taskkill /PID <PID> /F

# Or change port in docker-compose.yml
```

Diagnosis:
```bash
# Check logs for failing container
docker-compose logs [service-name]
```

Common fixes:

```bash
# Remove volumes and restart
docker-compose down -v
docker-compose up -d

# Increase Docker resources
# Docker Desktop → Settings → Resources
# Set Memory to 4GB+, CPU to 2+ cores
```

Solution:
```bash
# Check MCP server status
docker-compose logs mcp-server

# Verify Grafana is accessible
curl http://localhost:3000/api/health

# Restart MCP server
docker-compose restart mcp-server
```

Solution:
```bash
# Check Prometheus targets
# Visit http://localhost:9090/targets (all should be UP)

# Verify demo app is running
curl http://localhost:8081/metrics

# Restart Prometheus
docker-compose restart prometheus
```

Solution:
```bash
# Check Promtail status
docker-compose logs promtail

# Restart Promtail
docker-compose restart promtail
```

When nothing else works:
```powershell
# Stop and remove everything
docker-compose down -v

# Remove logs
Remove-Item -Recurse -Force logs\*

# Start fresh
docker-compose up -d
```

```bash
# Grafana
curl http://localhost:3000/api/health
# Prometheus
curl http://localhost:9090/-/healthy
# Loki (wait 15s after startup)
curl http://localhost:3100/ready
# Demo App
curl http://localhost:8081
# UI
curl http://localhost:8888
```

- Start all containers: `docker-compose up -d`
- Open browser tabs:
- Tab 1: http://localhost:8888 (Custom UI)
- Tab 2: http://localhost:3000 (Grafana)
- Tab 3: http://localhost:9090 (Prometheus)
- Login to Grafana (admin/admin, skip password change)
- Ensure metrics are flowing
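To run the whole pre-demo checklist in one step, a small helper script (not part of the repo, shown here as a convenience) can hit every health endpoint listed in this README:

```python
# Pre-demo smoke test: check every service endpoint used in this README.
import requests

CHECKS = {
    "Grafana":    "http://localhost:3000/api/health",
    "Prometheus": "http://localhost:9090/-/healthy",
    "Loki":       "http://localhost:3100/ready",
    "Demo App":   "http://localhost:8081/metrics",
    "UI":         "http://localhost:8888",
}

for name, url in CHECKS.items():
    try:
        status = requests.get(url, timeout=5).status_code
        print(f"{name:<11} {url:<45} HTTP {status}")
    except requests.RequestException as exc:
        print(f"{name:<11} {url:<45} FAILED ({exc.__class__.__name__})")
```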
Act 1: Introduction (3 minutes)
"Today I'm showing you how to use natural language to interact with Grafana's observability data through the Model Context Protocol."
- Explain MCP concept
- Show architecture diagram
- Point out all components running in Docker
Act 2: Component Tour (4 minutes)
```bash
# Show running services
docker-compose ps
```

Navigate through:
- Grafana datasources (Settings → Data Sources)
- Pre-configured dashboards
- Prometheus targets (http://localhost:9090/targets)
Act 3: Conversational Interface Demo (8 minutes)
Switch to Custom UI (Tab 1):
Query 1: Discovery

```
Show me all available dashboards
```

Query 2: Metrics

```
What are the current error rates from the demo application?
```

Query 3: Performance

```
What's the p95 latency for the demo app?
```

Query 4: Logs

```
Show me recent error logs from the demo application
```
Act 4: Live Incident Simulation (4 minutes)
```bash
# Generate traffic spike
curl "http://localhost:8081/simulate/load?pattern=spike"

# Generate errors
curl "http://localhost:8081/simulate/load?pattern=errors"
```

In the UI, type:
```
What's happening with the error rate?
```
Switch to Grafana to show the spike in dashboards.
Act 5: Q&A (3 minutes)
Common questions:
- Q: Is this using ChatGPT or Claude?
  A: The MCP server is model-agnostic. This demo simulates responses, but you could connect any LLM.
- Q: Does this work with Grafana Cloud?
  A: Yes! Works with any Grafana instance - OSS, Enterprise, or Cloud.
- Q: What about security?
  A: This is a demo. Production needs proper auth, TLS, and API token management.
```bash
# Quick reset (keeps data)
docker-compose restart

# Full reset (fresh data)
docker-compose down -v
docker-compose up -d
```

```bash
# Start all services
docker-compose up -d

# Watch logs
docker-compose logs -f

# Check status
docker-compose ps
```

| Service | URL |
|---|---|
| UI | http://localhost:8888 |
| Grafana | http://localhost:3000 (admin/admin) |
| Prometheus | http://localhost:9090 |
| Demo App | http://localhost:8081 |
Copy-paste these into the UI:
```
Show me all available dashboards
What are the current error rates from the demo application?
What's the p95 latency for the demo app?
Show me recent error logs from the demo application
What datasources are configured in Grafana?
Are there any failing health checks?
```
```bash
# Traffic spike
curl "http://localhost:8081/simulate/load?pattern=spike"

# Errors
curl "http://localhost:8081/simulate/load?pattern=errors"

# Normal traffic
curl http://localhost:8081/api/data
```

```bash
# View logs
docker-compose logs [service-name]
# Restart service
docker-compose restart [service-name]
# Stop all
docker-compose stop
# Complete teardown
docker-compose down -v
```

```
grafana-mcp/
├── docker-compose.yml            # Main orchestration file
├── README.md                     # This file
├── .env.example                  # Example environment variables
├── .gitignore                    # Git ignore patterns
│
├── config/                       # Configuration files
│   ├── prometheus/
│   │   └── prometheus.yml        # Prometheus scrape config
│   ├── blackbox/
│   │   └── blackbox.yml          # Blackbox exporter modules
│   ├── loki/
│   │   └── loki-config.yml       # Loki storage config
│   ├── promtail/
│   │   └── promtail-config.yml   # Log collection config
│   └── grafana/
│       └── provisioning/
│           ├── datasources/      # Auto-provisioned datasources
│           │   └── datasources.yml
│           └── dashboards/       # Auto-provisioned dashboards
│               ├── dashboards.yml
│               └── json/
│                   ├── demo-metrics.json
│                   └── demo-logs.json
│
├── ui/                           # Custom conversational UI
│   ├── Dockerfile
│   ├── nginx.conf
│   ├── index.html
│   ├── styles.css
│   └── app.js
│
├── demo-app/                     # Demo application
│   ├── Dockerfile
│   ├── requirements.txt
│   └── app.py
│
└── logs/                         # Application logs (created at runtime)
```
Edit `ui/styles.css`:
```css
:root {
  --grafana-orange: #YOUR_COLOR;
  --grafana-accent: #YOUR_COLOR;
}
```

- Create dashboard in Grafana UI
- Export JSON (see the export sketch below)
- Save to `config/grafana/provisioning/dashboards/json/`
- Restart Grafana: `docker-compose restart grafana`
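The "Export JSON" step can also be scripted: Grafana's HTTP API returns a dashboard's JSON by UID, which you can then drop into the provisioning folder. The UID below is a placeholder and the admin/admin credentials come from this demo; verify the exported payload against your Grafana version.

```python
# Export a dashboard by UID and save it into the provisioning folder.
import json
import requests

GRAFANA = "http://localhost:3000"
AUTH = ("admin", "admin")          # demo credentials
UID = "your-dashboard-uid"         # placeholder - copy it from the dashboard URL

resp = requests.get(f"{GRAFANA}/api/dashboards/uid/{UID}", auth=AUTH, timeout=10)
resp.raise_for_status()
dashboard = resp.json()["dashboard"]

out = f"config/grafana/provisioning/dashboards/json/{UID}.json"
with open(out, "w") as fh:
    json.dump(dashboard, fh, indent=2)
print(f"Saved {out} - now run: docker-compose restart grafana")
```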
Edit `demo-app/app.py` (see the sketch after this list) to:
- Add custom endpoints
- Change metric names
- Modify log formats
- Adjust error rates
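The actual `demo-app/app.py` isn't reproduced in this README; the sketch below only illustrates how metrics with the names used elsewhere in this document (`demo_app_requests_total`, `demo_app_request_duration_seconds`, `demo_app_active_connections`) could be defined with `prometheus_client`, as a starting point for customization rather than the real implementation.

```python
# Hypothetical sketch of demo-app style metrics (not the real app.py).
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

REQUESTS = Counter(
    "demo_app_requests_total", "Total requests", ["method", "status"]
)
LATENCY = Histogram(
    "demo_app_request_duration_seconds", "Request duration in seconds"
)
ACTIVE = Gauge("demo_app_active_connections", "Currently active connections")

if __name__ == "__main__":
    start_http_server(8081)  # serves /metrics
    while True:
        status = "500" if random.random() < 0.05 else "200"
        REQUESTS.labels(method="GET", status=status).inc()
        LATENCY.observe(random.uniform(0.01, 0.5))
        ACTIVE.set(random.randint(1, 20))
        time.sleep(1)
```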
Edit `config/prometheus/prometheus.yml`:

```yaml
scrape_configs:
  - job_name: 'my-custom-app'
    static_configs:
      - targets: ['my-app:8080']
```

Create a `.env` file:
```bash
# Grafana credentials
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=admin

# MCP Server configuration
MCP_PORT=3001
```

Metrics Collection:
```
Demo App → /metrics endpoint → Prometheus scrapes → Stores in TSDB
         → Grafana queries → MCP Server translates → User question
```

Logs Collection:

```
Demo App → Writes logs → Promtail tails files → Pushes to Loki
         → Grafana queries → MCP Server translates → User question
```

Conversational Query:

```
User types → UI (JavaScript) → HTTP POST → MCP Server
           → Grafana API → Datasources → Response → UI display
```
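Under the hood, MCP messages are JSON-RPC 2.0. The demo UI simulates its responses, but for reference this is roughly the shape of the requests a real client would send over the transport; the tool name `search_dashboards` is illustrative (list the server's actual tools with `tools/list` first).

```python
# Illustrative MCP request payloads (JSON-RPC 2.0). The demo UI simulates
# responses, so this only shows the message shape, not a working client.
import json

list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_dashboards",          # illustrative tool name
        "arguments": {"query": "demo"},
    },
}

print(json.dumps(list_tools, indent=2))
print(json.dumps(call_tool, indent=2))
```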
Official Image: `mcp/grafana:latest` from Docker Hub

Transport Configuration:

- Type: SSE (Server-Sent Events)
- Address: `0.0.0.0:3001`
- Why not stdio? The official docs show `-t stdio` for CLI/desktop apps. We use `-transport sse` for the web/HTTP API access needed by the browser-based UI.
Authentication Flow:

- MCP server checks for `GRAFANA_SERVICE_ACCOUNT_TOKEN`
- If token exists, uses token-based auth (recommended)
- If no token, falls back to `GRAFANA_USERNAME`/`GRAFANA_PASSWORD`
- Connects to Grafana API at `http://grafana:3000`

Available Transports:

- `stdio` - Standard input/output (for Claude Desktop, CLI tools)
- `sse` - Server-Sent Events (for web APIs, HTTP) ✅ Used in this demo
- `streamable-http` - Streaming HTTP (for long-lived connections)
| Component | CPU | Memory | Disk |
|---|---|---|---|
| Grafana | 0.5 | 512MB | 100MB |
| Prometheus | 0.5 | 1GB | 500MB |
| Loki | 0.5 | 512MB | 500MB |
| MCP Server | 0.2 | 256MB | 50MB |
| UI | 0.1 | 128MB | 50MB |
| Demo App | 0.2 | 256MB | 100MB |
| Blackbox | 0.1 | 128MB | 50MB |
| Promtail | 0.1 | 128MB | 50MB |
| Total | ~2.2 | ~3GB | ~1.4GB |
Recommended: 4 CPU cores, 4GB RAM, 5GB disk space
All services communicate via Docker bridge network (grafana-network). Internal DNS allows services to reference each other by name (e.g., http://grafana:3000).
External access is provided through port mappings:
- UI: 8888 → 80
- Grafana: 3000 → 3000
- Prometheus: 9090 → 9090
- etc.
- Default credentials are used (admin/admin)
- No TLS/HTTPS configured
- No authentication on MCP server
- Services exposed on all interfaces
- No resource limits configured
For production use:
- Change all default passwords
- Enable TLS/HTTPS
- Configure proper authentication
- Use secrets management
- Set resource limits
- Enable network policies
- Grafana & Friends Meetups
  - 20-30 minute presentation
  - Live demo of MCP capabilities
  - Interactive Q&A
- Internal Demos
  - Show what's possible with MCP
  - Inspire teams to build similar tools
  - Proof of concept for AI observability
- Learning & Experimentation
  - Understand Grafana architecture
  - Learn Prometheus and Loki
  - Explore MCP protocol
- Template for Projects
  - Foundation for custom integrations
  - Reference implementation
  - Docker Compose best practices
This demo is designed to be shared:
- ✅ Use at meetups and conferences
- ✅ Customize for your organization
- ✅ Fork and extend
- ✅ Learn and experiment
Ideas for contributions:
- Additional dashboard examples
- More realistic demo data
- Alternative LLM integrations
- Mobile-first UI
- Voice interface
- Multi-language support
MIT License - Feel free to use this demo for your meetups, presentations, and learning!
- Grafana Documentation
- Model Context Protocol
- Grafana MCP Server
- Prometheus Documentation
- Loki Documentation
- Docker Hub - mcp/grafana
- Start containers 5-10 minutes early
- Test all example queries
- Have browser tabs ready
- Prepare for common questions
- Test traffic simulation endpoints
- Keep Grafana open in background
- Show correlation between UI and dashboards
- Emphasize ease of replication
- Be honest about limitations
- Invite audience participation
```powershell
# Weekly cleanup
docker system prune -a

# Update images
docker-compose pull
docker-compose up -d

# Backup configuration
Copy-Item -Recurse config config-backup
```

Your demo is complete and includes:
- ✅ 8 containerized services working together
- ✅ Custom Grafana-themed conversational UI
- ✅ Real metrics and logs flowing
- ✅ Pre-configured dashboards
- ✅ Official MCP server integration
- ✅ Traffic simulation capabilities
- ✅ Comprehensive documentation
Start your demo:

```bash
docker-compose up -d
```

Access the UI: http://localhost:8888
Built for the Grafana & Friends community
For questions or issues, check the troubleshooting section or review container logs with docker-compose logs [service-name].
Happy demoing!