A comprehensive Model Context Protocol (MCP) server for advanced syslog analysis and monitoring using Elasticsearch. Built with ultra-focused modular architecture for maintainability and scalability.
- Device Health Monitoring - Comprehensive device status and activity analysis
- Security Analysis - Failed authentication detection, suspicious activity monitoring, and timeline analysis
- System Error Analysis - Error pattern detection and troubleshooting recommendations
- Authentication Timeline - Track authentication events and patterns over time
- Multi-field Search - Advanced search with time ranges, device filters, and log levels
- Full-text Search - Intelligent search with highlighting and relevance scoring
- Correlation Analysis - Discover relationships between log events across multiple fields
- Pattern Detection - Identify recurring patterns and anomalies
- Saved Searches - Save and reuse frequent search queries
- Daily Reports - Automated comprehensive system health reports
- Log Export - Export filtered logs in multiple formats (JSON, CSV)
- Alert Rules - Create monitoring rules with thresholds and severity levels
- Real-time Analysis - Fast async Elasticsearch integration
- Smart Aggregations - Device, program, and time-based groupings
- Performance Optimization - Efficient query execution and caching
Built with ultra-focused modular architecture:
├── Data Access Layer           # Pure Elasticsearch queries (~300 lines each)
│   ├── security_queries.py     # Authentication & security data
│   ├── device_queries.py       # Device health & activity data
│   ├── search_queries.py       # General search & correlation
│   └── storage_queries.py      # Saved searches & alert rules
├── Analysis Layer              # Pure business logic (~300 lines each)
│   ├── auth_analyzer.py        # Authentication analysis
│   ├── ip_analyzer.py          # IP reputation analysis
│   ├── suspicious_analyzer.py  # Suspicious activity detection
│   ├── device_analyzer.py      # Device health analysis
│   ├── correlation_analyzer.py # Event correlation analysis
│   ├── timeline_analyzer.py    # Timeline pattern analysis
│   └── report_analyzer.py      # Report generation logic
├── Presentation Layer          # Pure formatting (~200 lines each)
│   └── summary_formatters.py   # Markdown report generation
├── Interface Layer             # Thin orchestration (~150 lines each)
│   ├── security_tools.py       # Security tool interfaces
│   ├── device_tools.py         # Device tool interfaces
│   ├── search_tools.py         # Search tool interfaces
│   └── utility_tools.py        # Utility tool interfaces
└── Registry Layer              # MCP tool registration
    └── device_analysis.py      # Tool registration (227 lines vs. original 3,621)
Benefits:
- Single Responsibility - Each module has one focused purpose
- Easy Testing - Pure functions with clear inputs/outputs
- Simple Debugging - Issues isolated to specific layers
- Maintainable - Changes contained to relevant modules
- Extensible - New tools follow established patterns (see the flow sketch below)
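To make the layering concrete, here is a hypothetical sketch of one tool's flow through the stack; every function name below is illustrative, not the project's actual API:

```python
# Hypothetical flow of one tool through the layers; all names are
# illustrative, not the project's actual API.

async def query_failed_auth_events(hours: int) -> list[dict]:
    ...  # Data Access Layer: pure Elasticsearch query, no business logic

def analyze_failed_auth(hits: list[dict]) -> dict:
    ...  # Analysis Layer: pure function, no I/O, trivially unit-testable

def format_auth_summary(summary: dict) -> str:
    ...  # Presentation Layer: pure formatting, returns Markdown

async def failed_auth_summary(hours: int = 24) -> str:
    # Interface Layer: thin orchestration gluing the layers together
    hits = await query_failed_auth_events(hours)
    return format_auth_summary(analyze_failed_auth(hits))
```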
- `failed_auth_summary_tool` - Analyze failed authentication attempts
- `suspicious_activity_tool` - Detect suspicious system activities
- `auth_timeline_tool` - Track authentication events over time
- `get_device_summary_tool` - Comprehensive device health analysis
- `error_analysis_tool` - System error pattern analysis
- `search_logs` - General log search with filtering
- `search_by_timerange_tool` - Time-based log searches
- `full_text_search_tool` - Advanced full-text search with highlighting
- `search_correlate_tool` - NEW - Multi-field event correlation analysis
- `saved_searches_tool` - NEW - View all saved search queries
- `add_saved_search_tool` - NEW - Save frequently used searches
- `generate_daily_report_tool` - NEW - Automated daily system reports
- `export_logs_tool` - NEW - Export logs with analysis summaries
- `create_alert_rule_tool` - NEW - Create monitoring alert rules
- `alert_rules_tool` - NEW - View configured alert rules
- `check_alerts_tool` - NEW - Check all alerts now and send notifications
- `test_gotify_tool` - NEW - Test Gotify server connection
- Python 3.11+
- uv (Python package manager)
- Elasticsearch cluster with syslog data
- Install dependencies:
uv sync
- Configure Elasticsearch connection:
cp .env.example .env
# Edit .env with your Elasticsearch settings
- Run the MCP server:
uv run python -m syslog_mcp
Deploy a complete syslog collection and analysis infrastructure using Docker Compose.
- Docker and Docker Compose installed
- Network connectivity on ports 514 (UDP/TCP), 601, 6514
- At least 2GB RAM for Elasticsearch
- Sufficient disk space for log storage
- Create external network (one-time setup):
docker network create jakenet
- Start the infrastructure:
docker-compose up -d
- Verify services are running:
# Check container status
docker-compose ps
# Verify Elasticsearch health
curl http://localhost:9200/_cluster/health
# Check syslog-ng logs
docker logs syslog-ng
The Docker setup creates the following structure:
syslog-mcp/
├── docker-compose.yml             # Docker services configuration
├── syslog-ng.conf                 # Syslog-ng configuration
├── .env                           # Environment variables
└── /mnt/appdata/                  # Data persistence (configurable)
    ├── syslog-ng/                 # Syslog files by device
    └── syslog-ng_elasticsearch/   # Elasticsearch data
| Service | Ports | Purpose |
|---|---|---|
| syslog-ng | 514 (UDP/TCP), 601, 6514 | Receives and processes syslog messages |
| elasticsearch | 9200, 9300 | Stores and indexes log data |
If you need to use a different network instead of `jakenet`:
- Create your network:
docker network create your-network-name
- Update `docker-compose.yml`:
networks:
  your-network-name:
    external: true
Configure your devices to send syslog messages to the syslog-ng server.
Edit `/etc/rsyslog.conf` or `/etc/rsyslog.d/50-remote.conf`:
# Send all logs via TCP (recommended)
*.* @@your-syslog-server:514
# Or send via UDP (less reliable)
*.* @your-syslog-server:514
# Send only specific facilities
auth,authpriv.* @@your-syslog-server:514
kern.* @@your-syslog-server:514
# Restart rsyslog
sudo systemctl restart rsyslog
logging host your-syslog-server transport tcp port 514
logging trap informational
logging origin-id hostname
logging source-interface GigabitEthernet0/0
logging server your-syslog-server 5 port 514 use-vrf management
logging origin-id hostname
logging timestamp milliseconds
Via Controller UI:
- Settings → System Settings → Remote Logging
- Enable "Enable remote syslog server"
- Host: `your-syslog-server`
- Port: `514`
- Protocol: TCP/UDP as preferred
Via SSH (EdgeRouter):
set system syslog host your-syslog-server facility all level info
set system syslog host your-syslog-server port 514
commit
save
/system logging action
add name=remote target=remote remote=your-syslog-server remote-port=514
/system logging
add action=remote topics=info,warning,error,critical
- Status → System Logs → Settings
- Enable "Send log messages to remote syslog server"
- Remote Syslog Server: `your-syslog-server:514`
- Select log types to forward
- System → Settings → Logging / Targets
- Add new target:
- Transport: TCP4 or UDP4
- Application: syslog
- Program: `*`
- Level: Info
- Hostname: `your-syslog-server`
- Port: `514`
config log syslogd setting
set status enable
set server "your-syslog-server"
set port 514
set facility local7
set source-ip x.x.x.x
end
# For a single container
docker run --log-driver=syslog \
--log-opt syslog-address=tcp://your-syslog-server:514 \
--log-opt tag="{{.Name}}" \
your-image
# In docker-compose.yml
services:
app:
image: your-image
logging:
driver: syslog
options:
syslog-address: "tcp://your-syslog-server:514"
tag: "{{.Name}}/{{.ID}}"
- Install nxlog
- Configure `nxlog.conf`:
<Input eventlog>
Module im_msvistalog
</Input>
<Output syslog>
Module om_tcp
Host your-syslog-server
Port 514
Exec to_syslog_ietf();
</Output>
<Route 1>
Path eventlog => syslog
</Route>
# Configure Windows Event Collector
wecutil cs subscription.xml
# Where subscription.xml points to your syslog forwarder
import logging
import logging.handlers
import socket

logger = logging.getLogger(__name__)

syslog = logging.handlers.SysLogHandler(
    address=('your-syslog-server', 514),
    socktype=socket.SOCK_STREAM  # TCP
)
syslog.setFormatter(logging.Formatter(
    '%(name)s: %(levelname)s %(message)s'
))
logger.addHandler(syslog)
const winston = require('winston');
require('winston-syslog').Syslog;
winston.add(new winston.transports.Syslog({
host: 'your-syslog-server',
port: 514,
protocol: 'tcp4',
app_name: 'node-app'
}));
- Check syslog-ng is receiving logs:
# Watch syslog-ng logs
docker logs -f syslog-ng
# Check local log files
docker exec syslog-ng ls -la /var/log/
- Send a test message (a Python alternative follows this list):
# Using logger (from any Linux host)
logger -n your-syslog-server -P 514 "Test message from $(hostname)"
# Using netcat
echo "<14>Test syslog message" | nc your-syslog-server 514
- Verify Elasticsearch indexing:
# Check indices
curl "http://localhost:9200/_cat/indices/syslog-*"
# Search recent logs (quote the URL so the shell doesn't expand * or background on &)
curl "http://localhost:9200/syslog-*/_search?q=*&size=10"
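If `logger` and `nc` are unavailable, the same test message can be sent with Python's standard library; a minimal sketch assuming the server from the examples above listens on TCP 514:

```python
# Send one RFC 3164-style test message over TCP; replace SYSLOG_HOST with
# your server. <14> encodes facility user (1), severity info (6).
import socket

SYSLOG_HOST = "your-syslog-server"

with socket.create_connection((SYSLOG_HOST, 514), timeout=5) as sock:
    sock.sendall(b"<14>Test syslog message from python\n")
```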
- Check syslog-ng to Elasticsearch connection:
docker logs syslog-ng | grep -i elasticsearch
- Verify Elasticsearch is accessible from syslog-ng:
docker exec syslog-ng curl http://elasticsearch:9200
- Check syslog-ng configuration syntax:
docker exec syslog-ng syslog-ng --syntax-only
- Verify firewall rules:
# Check if port is open
sudo netstat -tulpn | grep 514
# Test connectivity
telnet your-syslog-server 514
- Check Docker port mapping:
docker port syslog-ng
- Set timezone in `docker-compose.yml`:
environment:
  - TZ=America/New_York
- Verify timezone in containers:
docker exec syslog-ng date
docker exec elasticsearch date
- Adjust Elasticsearch heap size in `docker-compose.yml`:
environment:
  - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- Monitor memory usage:
docker stats elasticsearch
options {
# Increase for high-volume environments
log_fifo_size(10000);
# Adjust based on CPU cores
threaded(yes);
};
# Increase indices refresh interval for better ingestion
curl -X PUT "localhost:9200/syslog-*/_settings" -H 'Content-Type: application/json' -d'{
"index": {
"refresh_interval": "30s"
}
}'
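The same setting can be applied from Python with the official client; a minimal sketch assuming `elasticsearch>=8` and a cluster reachable at localhost:9200:

```python
# Apply the slower refresh interval to all syslog indices via the Python
# client (elasticsearch>=8); host and index pattern mirror this README.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
es.indices.put_settings(
    index="syslog-*",
    settings={"index": {"refresh_interval": "30s"}},
)
```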
# Elasticsearch Configuration
# Use localhost:9200 when MCP server runs on host (Docker infrastructure)
# Use elasticsearch:9200 when MCP server runs in Docker network
# Use actual hostname/IP for external Elasticsearch (e.g., squirts:9200)
ELASTICSEARCH_HOST=localhost:9200
ELASTICSEARCH_USER=your-username
ELASTICSEARCH_PASSWORD=your-password
ELASTICSEARCH_USE_SSL=false
ELASTICSEARCH_VERIFY_CERTS=false
# Index Configuration
ELASTICSEARCH_INDEX=syslog-*
ELASTICSEARCH_TIMEOUT=30
# Optional: Security
ELASTICSEARCH_API_KEY=your-api-key
# Gotify Configuration (Alert Notifications)
GOTIFY_URL=https://gotify-server:443
GOTIFY_TOKEN=your_gotify_app_token_here
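A minimal sketch of how these variables might be read at startup; the helper below is illustrative, not the server's actual config loader:

```python
# Read the Elasticsearch/Gotify settings above from the environment.
# This loader is a hypothetical example, not the project's real code.
import os

def load_config() -> dict:
    return {
        "es_host": os.environ.get("ELASTICSEARCH_HOST", "localhost:9200"),
        "es_user": os.environ.get("ELASTICSEARCH_USER"),
        "es_password": os.environ.get("ELASTICSEARCH_PASSWORD"),
        "es_use_ssl": os.environ.get("ELASTICSEARCH_USE_SSL", "false").lower() == "true",
        "es_verify_certs": os.environ.get("ELASTICSEARCH_VERIFY_CERTS", "false").lower() == "true",
        "gotify_url": os.environ.get("GOTIFY_URL"),
        "gotify_token": os.environ.get("GOTIFY_TOKEN"),
    }
```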
Configure Claude Desktop or your MCP client (`.mcp.json` or `claude_desktop_config.json`):
{
"mcpServers": {
"syslog": {
"command": "uv",
"args": ["run", "python", "-m", "syslog_mcp"],
"cwd": "/path/to/syslog-mcp",
"env": {
"ELASTICSEARCH_HOST": "localhost:9200",
"ELASTICSEARCH_USE_SSL": "false"
}
}
}
}
If running the MCP server as a Docker container alongside the infrastructure:
{
"mcpServers": {
"syslog": {
"command": "docker",
"args": ["run", "--rm", "--network", "jakenet",
"-e", "ELASTICSEARCH_HOST=elasticsearch:9200",
"syslog-mcp:latest"],
"env": {
"DOCKER_HOST": "unix:///var/run/docker.sock"
}
}
}
}
If using an existing Elasticsearch cluster:
{
"mcpServers": {
"syslog": {
"command": "uv",
"args": ["run", "python", "-m", "syslog_mcp"],
"cwd": "/path/to/syslog-mcp",
"env": {
"ELASTICSEARCH_HOST": "your-elasticsearch-host:9200",
"ELASTICSEARCH_USER": "your-username",
"ELASTICSEARCH_PASSWORD": "your-password",
"ELASTICSEARCH_USE_SSL": "true",
"ELASTICSEARCH_VERIFY_CERTS": "true"
}
}
}
}
# Get comprehensive device health summary
await get_device_summary_tool(device="web-server-01", hours=24)
# Analyze system errors
await error_analysis_tool(device="web-server-01", hours=24, severity="error")
# Check for failed authentication attempts
await failed_auth_summary_tool(hours=24, top_ips=10)
# Detect suspicious activities
await suspicious_activity_tool(hours=24, sensitivity="high")
# Analyze IP reputation
await ip_reputation_tool(hours=24, min_attempts=5)
# Correlate events across multiple fields
await search_correlate_tool(
primary_query="error database",
correlation_fields="device,program,level",
time_window=300,
hours=12
)
# Full-text search with highlighting
await full_text_search_tool(
query="connection timeout",
search_type="fuzzy",
hours=6
)
# Save frequently used searches
await add_saved_search_tool(
name="Database Errors",
query="database AND (error OR timeout)",
description="Monitor database-related issues"
)
# Generate daily system report
await generate_daily_report_tool(target_date="2025-01-15")
# Export logs with analysis
await export_logs_tool(
query="level:error",
format_type="json",
start_time="2025-01-15T00:00:00Z",
limit=1000
)
# Create monitoring alert rules
await create_alert_rule_tool(
name="High Error Rate",
query="level:error",
threshold=100,
time_window=60,
severity="high"
)
# View all configured alerts
await alert_rules_tool()
# Check alerts now and send notifications
await check_alerts_tool()
# Test Gotify notification system
await test_gotify_tool()
The Syslog MCP server supports real-time alert notifications via Gotify, an open-source push notification service.
- Install Gotify server (Docker recommended):
docker run -d --name gotify-server \
  -p 80:80 \
  -v gotify-data:/app/data \
  gotify/server
- Create an application token in the Gotify admin interface
- Configure environment variables:
GOTIFY_URL=http://gotify-server:80
GOTIFY_TOKEN=your_gotify_app_token_here
- Automatic notifications when thresholds are exceeded
- Severity-based priorities (Low=3, Medium=5, High=8, Critical=10)
- Cooldown periods prevent notification spam (30-minute default)
- Rich messages with detailed alert context
- Manual testing with `test_gotify_tool()`

Typical alert workflow:
- Create alert rules with `create_alert_rule_tool()`
- Monitor continuously (or manually with `check_alerts_tool()`)
- Receive notifications via Gotify when thresholds are exceeded (see the sketch below)
- Review alert history and manage rules
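Under the hood, a Gotify push is one HTTP POST to Gotify's documented `/message` endpoint. A minimal sketch of the call an alert check might make; the helper name and the exact priority values are illustrative:

```python
# One Gotify push: POST {GOTIFY_URL}/message?token=... with a JSON body.
# send_alert and the severity-to-priority mapping are illustrative.
import json
import os
import urllib.request

def send_alert(title: str, message: str, priority: int = 5) -> None:
    url = f"{os.environ['GOTIFY_URL']}/message?token={os.environ['GOTIFY_TOKEN']}"
    payload = json.dumps({"title": title, "message": message, "priority": priority})
    req = urllib.request.Request(
        url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

# e.g. a "high" severity rule would send priority 8:
# send_alert("High Error Rate", "120 errors in the last 60 minutes", priority=8)
```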
- Async Operations - Non-blocking Elasticsearch queries (sketched below)
- Smart Caching - Optimized query performance
- Batch Processing - Efficient bulk operations
- Connection Pooling - Reusable database connections
- Query Optimization - Elasticsearch best practices
- Memory Efficient - Streaming large datasets
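As a flavor of the async, non-blocking query style, a minimal sketch using the official async client; it assumes `elasticsearch[async]>=8`, a cluster at localhost:9200, and an `@timestamp` field in the syslog indices:

```python
# Query the last hour of error-level logs without blocking the event loop.
import asyncio
from elasticsearch import AsyncElasticsearch

async def recent_errors() -> list[dict]:
    es = AsyncElasticsearch("http://localhost:9200")
    try:
        resp = await es.search(
            index="syslog-*",
            query={"bool": {"must": [
                {"match": {"level": "error"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]}},
            size=10,
        )
        return [hit["_source"] for hit in resp["hits"]["hits"]]
    finally:
        await es.close()

if __name__ == "__main__":
    print(asyncio.run(recent_errors()))
```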
src/syslog_mcp/
├── services/                  # Core services
│   └── elasticsearch_client.py
├── tools/                     # MCP tools (ultra-focused modules)
│   ├── data_access/           # Pure Elasticsearch queries
│   ├── analysis/              # Pure business logic
│   ├── presentation/          # Pure formatting
│   ├── interface/             # Thin orchestration
│   └── device_analysis.py     # Tool registry
├── utils/                     # Utilities
│   ├── logging.py
│   └── retry.py
└── main.py                    # MCP server entry point
# Run with hot reload
uv run python main.py
# Run tests
uv run pytest
# Type checking
uv run mypy src/
# Code formatting
uv run black src/
uv run ruff check --fix src/
Follow the ultra-focused modular pattern (a registration sketch follows this list):
- Data Access - Add pure Elasticsearch queries in `data_access/`
- Analysis - Add business logic in `analysis/`
- Presentation - Add formatters in `presentation/`
- Interface - Add orchestration in `interface/`
- Registry - Register the MCP tool in `device_analysis.py`
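A minimal sketch of the final registration step, assuming a FastMCP server object like the one this project is built on; the tool itself is a hypothetical example:

```python
# device_analysis.py (sketch): register a new interface-layer tool with
# the MCP server. disk_summary_tool is a hypothetical example tool.
from fastmcp import FastMCP

mcp = FastMCP("syslog-mcp")

@mcp.tool()
async def disk_summary_tool(hours: int = 24) -> str:
    """Summarize disk-related syslog events for the last N hours."""
    ...  # would delegate to orchestration code in interface/
```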
- Request Logging - All MCP requests logged with parameters
- Performance Metrics - Query execution times and success rates
- Error Tracking - Detailed error logging with context
- Health Checks - Elasticsearch connectivity monitoring
- Parameter Validation - Pydantic models for input validation (see the sketch below)
- Query Sanitization - Safe Elasticsearch query construction
- Rate Limiting - Configurable request throttling
- SSL Support - Encrypted Elasticsearch connections
- Audit Logging - Track all search operations
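For example, Pydantic-based parameter validation might look like this; `SearchParams` is a hypothetical model, not the server's actual schema:

```python
# Input validation sketch with Pydantic v2; SearchParams is hypothetical.
from pydantic import BaseModel, Field

class SearchParams(BaseModel):
    query: str = Field(min_length=1, max_length=1024)
    hours: int = Field(default=24, ge=1, le=720)
    device: str | None = None

# Invalid input raises pydantic.ValidationError before any query is built:
# SearchParams(query="", hours=-1)
```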
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Follow the ultra-focused modular architecture
- Add tests for new functionality
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: Report bugs and request features via GitHub Issues
- Discussions: Community support via GitHub Discussions
- Documentation: Comprehensive guides in `/docs/`
Built with ❤️ using FastMCP, Elasticsearch, and ultra-focused modular architecture