An uptime checker for my homelab ⏱️
Labtime is a Go-based monitoring system that checks HTTP endpoints, TLS certificates, and Docker containers, exposing the results as Prometheus metrics. Monitors are defined in a YAML configuration file with per-monitor check intervals, making it straightforward to integrate into Infrastructure-as-Code driven homelabs.
- HTTP Monitoring: Check website response codes with configurable HTTP methods
- TLS Certificate Monitoring: Monitor SSL/TLS certificate expiration dates
- Docker Container Monitoring: Track container status
- Dynamic Docker Monitoring: Automatically monitor containers with specific labels
- Prometheus Integration: Export metrics for monitoring dashboards
- Configurable Intervals: Set custom check intervals per monitor
- Distroless and Rootless Container: Secure, minimal container image
- JSON Schema Validation: A JSON schema for the YAML configuration is provided for in-editor validation
Run labtime with a configuration file:

```sh
labtime --config config.yaml
```

| CLI Flag | Environment Variable | Description | Default |
|---|---|---|---|
| `-config` | `CONFIG` | Path to configuration file | `config.yaml` |
| `-watch` | `WATCH` | Watch for changes in the configuration file | `false` |
| `-dynamic-docker-monitoring` | `DYNAMIC_DOCKER_MONITORING` | Enable dynamic Docker monitoring to automatically monitor containers with specific labels | `false` |
The application serves Prometheus metrics on port `:2112` at the `/metrics` endpoint (e.g., http://localhost:2112/metrics).

An example Docker Compose setup:

```yaml
services:
  labtime:
    image: ghcr.io/aire-one/labtime:latest
    ports:
      - 2112:2112
    volumes:
      - ./labtime-config.yaml:/config.yaml:ro
      # For Docker monitoring, the Docker socket needs to be mounted:
      # - /var/run/docker.sock:/var/run/docker.sock:ro
    # May need to run as root for Docker socket access:
    # user: root
    environment:
      # Enable dynamic Docker monitoring (optional)
      # - DYNAMIC_DOCKER_MONITORING=true
```

The container is distroless and runs as a non-root user by default. For Docker monitoring features (including dynamic monitoring), running with root privileges may be required.
Create a config.yaml file with your monitoring targets:

```yaml
# HTTP Status Code Monitoring
http_status_code:
  - name: "My Website"
    url: "https://example.com"
    method: "HEAD" # Optional: GET, POST, HEAD, etc. (default: HEAD)
    interval: 60 # Optional: seconds between checks (default: 60)
  - url: "https://api.example.com" # Name defaults to URL

# TLS Certificate Monitoring
tls_monitors:
  - name: "Main Site"
    domain: "example.com"
    interval: 3600 # Check every hour (default: 60)
  - domain: "api.example.com" # Name defaults to domain

# Docker Container Monitoring
docker_monitors:
  - name: "web-server"
    container_name: "nginx"
    interval: 30 # Check every 30 seconds (default: 60)
  - container_name: "database" # Name defaults to container_name
```

The configuration can be validated against the JSON schema in `labtime-configuration-schema.json` by adding the

```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/Aire-One/labtime/refs/heads/main/labtime-configuration-schema.json
```

comment to the top of the YAML file to enable schema validation in editors that support it.
Dynamic Docker monitoring enables automatic monitoring of Docker containers based on labels, without requiring static configuration.
Containers must have the following labels to be monitored:

- `labtime=true`: Required label to enable monitoring for the container
- `labtime_interval=<seconds>`: Optional monitoring interval in seconds (default: 60)
Run containers with the required labels to enable monitoring:

```sh
# Monitor a container with default 60-second interval
docker run -d --label labtime=true nginx

# Monitor with custom 30-second interval
docker run -d --label labtime=true --label labtime_interval=30 nginx
```

Or in Compose files:

```yaml
services:
  web:
    image: nginx
    labels:
      - labtime=true
      - labtime_interval=30
```

Labtime exports the following Prometheus metrics:

- `labtime_http_site_status_code` - HTTP response status codes
  - Labels: `http_monitor_site_name`, `http_site_url`
- `labtime_tls_certificate_expires_time` - TLS certificate expiration timestamp
  - Labels: `tls_monitor_name`, `tls_domain_name`
- `labtime_docker_container_status` - Docker container running status (1=running, 0=stopped)
  - Labels: `docker_monitor_name`, `docker_container_name`
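
With the example configuration shown earlier, the `/metrics` output contains series along these lines (the values are illustrative):

```text
labtime_http_site_status_code{http_monitor_site_name="My Website",http_site_url="https://example.com"} 200
labtime_tls_certificate_expires_time{tls_monitor_name="Main Site",tls_domain_name="example.com"} 1.7672256e+09
labtime_docker_container_status{docker_monitor_name="web-server",docker_container_name="nginx"} 1
```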

This project uses a devcontainer for a consistent development environment. The devcontainer includes Go, the Docker CLI (Docker-in-Docker), and all necessary tools.
To run the application locally:

```sh
# Run with example configuration and features enabled
make dev

# Or build and run manually
make build
./build/labtime --config configs/example-config.yaml
```

An example Prometheus/Grafana stack is provided in `configs/example-stack/` for testing metrics visualization:

```sh
cd configs/example-stack
docker compose up -d
```

The example stack includes:

- Prometheus on port `9090` (http://localhost:9090), configured to scrape Labtime metrics.
- Grafana on port `3000` (http://localhost:3000, default credentials: admin/admin) with a pre-configured dashboard from `configs/example-stack/labtime-dashboard.json` to visualize Labtime metrics from Prometheus.
- Example targets for HTTP, TLS, and Docker monitoring by Labtime.
Grafana may ask you to change the admin password on first login. After that,
navigate to the dashboard at Home > Dashboards > Labtime to see the metrics.

Available Make targets:

- `make all` - Full build pipeline (lint, test, generate, build)
- `make dev` - Run application with example config and features enabled
- `make test` - Run all tests
- `make lint` - Run golangci-lint
- `make fmt` - Format Go code
- `make yamllint` - Lint YAML files
- `make markdownlint` - Lint Markdown files
- `make cspell` - Spell check source and docs
- `make generate` - Required after config changes; updates the JSON schema
- `make build` - Build binary to `build/labtime`
- `make clean` - Remove build artifacts
- `make build-generator` - Build the config schema generator application to `build/labtime-generator`
- `make tidy-check` - Check for go.mod/go.sum changes (CI safe)
- `make tidy` - Update go.mod/go.sum to match imports
Labtime uses a generic plugin-based architecture with three main components:

```mermaid
flowchart TD
%% Configuration Input
CONFIG[YAML Config File]
DTO[ExampleMonitorDTO]
%% External Systems
SCHEDULER[Scheduler<br/>Runs jobs periodically]
PROMETHEUS[Prometheus Metrics<br/>Exposed via /metrics]
%% Plugin/Generic System Interfaces
subgraph INTERFACES["Plugin/Generic System Interfaces"]
IPROVIDER[TargetProvider Interface]
IFACTORY[MonitorFactory Interface]
ITARGET[Target Interface]
end
%% Plugin Registration System
subgraph REGISTRATION["Plugin Registration System"]
PROVIDER[ExampleProvider]
FACTORY[ExampleFactory]
WIRING["monitorConfigs"]
PROVIDER ---o WIRING
FACTORY ---o WIRING
end
subgraph Business
TARGET[ExampleTarget]
MONITOR[ExampleMonitor]
end
COLLECTOR[Prometheus Collector<br/>GaugeVec/...]
IJOB[Job Interface]
%% Implementation Flow
PROVIDER -..-> IPROVIDER
TARGET -..-> ITARGET
FACTORY -..-> IFACTORY
MONITOR -..-> IJOB
CONFIG --> DTO
DTO --> PROVIDER
PROVIDER -- maps DTO to Target --> TARGET
FACTORY --> COLLECTOR
TARGET --> MONITOR
FACTORY --> MONITOR
%% Plugin System Wiring
WIRING -- provides jobs --> SCHEDULER
SCHEDULER --> MONITOR
MONITOR --> PROMETHEUS
%% Styling
classDef config fill:#fff8e1,stroke:#f57c00,stroke-width:2px
classDef implementation fill:#e8f5e8,stroke:#388e3c,stroke-width:2px
classDef interface fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef external fill:#f5f5f5,stroke:#616161,stroke-width:2px
class DTO,WIRING config
class PROVIDER,TARGET,FACTORY,MONITOR implementation
class ITARGET,IFACTORY,IPROVIDER,IJOB interface
class CONFIG,SCHEDULER,PROMETHEUS,COLLECTOR external
subgraph Legend["Legend"]
direction TB
subgraph Arrows
direction LR
A -. implements .-> B
C -- compose --o D
classDef hide fill:#0000,stroke:#0000,stroke-width:0px,color:#0000;
class A,B,C,D hide;
end
subgraph Colors
direction LR
L_CONFIG[Configuration]:::config
L_IMPL[Implementation]:::implementation
L_INTERFACE[Interfaces]:::interface
L_EXTERNAL[External Systems]:::external
end
end
```

Each monitor type (HTTP, TLS, Docker) implements three core interfaces defined in `internal/monitors/monitor.go`:

- `Target` - Configuration data structure with `GetName()` and `GetInterval()` methods
- `MonitorFactory[T Target, C prometheus.Collector]` - Factory pattern for creating collectors and monitor instances
  - `CreateCollector()` - Creates Prometheus metrics collectors (e.g., GaugeVec for status codes)
  - `CreateMonitor(target, collector, logger)` - Instantiates the actual monitor job
- `TargetProvider[T]` - Extracts and validates targets from YAML configuration
  - `GetTargets(config)` - Parses YAML DTOs into strongly-typed Target structs and applies default values (60s intervals, fallback names)
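
A rough sketch of these interfaces, with names and signatures approximated from the descriptions above (the authoritative definitions are in `internal/monitors/monitor.go`; details such as the logger type and error returns are assumptions here):

```go
// Rough sketch only; see internal/monitors/monitor.go for the real definitions.
package monitors

import (
	"context"
	"log"

	"github.com/prometheus/client_golang/prometheus"
)

// Config stands in for the parsed YAML configuration (internal/yamlconfig).
type Config struct{}

// Target is the configuration data behind one monitor instance.
type Target interface {
	GetName() string
	GetInterval() int // seconds between checks; the real type may differ
}

// Job is the unit the scheduler runs periodically.
type Job interface {
	ID() string // unique identifier for the monitoring job
	Run(ctx context.Context)
}

// MonitorFactory creates the Prometheus collector and the monitor job
// for one monitor type.
type MonitorFactory[T Target, C prometheus.Collector] interface {
	CreateCollector() C
	CreateMonitor(target T, collector C, logger *log.Logger) Job
}

// TargetProvider extracts strongly typed targets from the configuration,
// applying defaults (60s interval, fallback names).
type TargetProvider[T Target] interface {
	GetTargets(cfg Config) ([]T, error)
}
```
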
The architecture leverages Go generics to ensure compile-time type safety and eliminate runtime type assertions:

- `MonitorConfig[T monitors.Target, C prometheus.Collector]` in `internal/monitorconfig/monitorconfig.go`
  - Ensures Target and Collector types match at compile time, eliminating `interface{}` usage and runtime type casting
  - Example: `MonitorConfig[HTTPTarget, *prometheus.GaugeVec]` guarantees HTTP-specific types and prevents mismatched configurations
- `MonitorFactory[T Target, C prometheus.Collector]` interface enforces type relationships
  - Factory methods must accept the correct Target type and compatible Collectors
  - Prevents mismatched target/collector combinations at compile time
- Unified `Job` interface from `internal/monitors/monitor.go` returned by all monitors for scheduler integration
  - `ID()` - Returns a unique identifier for the monitoring job
  - `Run(context.Context)` - Executes the health check and updates metrics
  - Enables polymorphic scheduling regardless of monitor type
- Single configuration in `internal/apps/labtime/monitors.go` via the `getMonitorConfigs()` function
  - Provides a single configuration entry point for all monitor types
  - Links monitor types to their factory and target provider, ensuring type safety and consistency
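
Building on the interface sketch above, the generic pairing can be pictured roughly like this (field names are hypothetical; the real definition lives in `internal/monitorconfig/monitorconfig.go`):

```go
// MonitorConfig pairs a factory with the provider that produces its targets.
// Because both fields share the type parameters T and C, a mismatched pairing
// (say, an HTTP factory with a TLS target provider) is rejected by the
// compiler instead of surfacing as a runtime type assertion failure.
type MonitorConfig[T Target, C prometheus.Collector] struct {
	Factory  MonitorFactory[T, C]
	Provider TargetProvider[T]
}

// Example instantiation, using a hypothetical HTTPTarget type:
// var httpConfig MonitorConfig[HTTPTarget, *prometheus.GaugeVec]
```
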
Monitor implementations accept interface or function types for external dependencies to enable testing without real network connections:

- TLS Monitor: Uses a `TLSDialFunc` function type to mock `tls.Dial` calls
  - Example: `DialFunc: func(_, _ string, _ *tls.Config) (*tls.Conn, error)`
- HTTP Monitor: Accepts an `HTTPClient` interface to mock HTTP requests
  - Allows injection of mock clients that return predetermined responses
- Docker Monitor: Uses a `DockerClient` interface to mock container API calls
  - Enables testing container status checks without a Docker daemon

All external dependencies are mockable via injection, preventing real network connections in unit tests. See test files like `TestTLSMonitor_tlsHandshake_DialError` for implementation examples.
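
For example, a unit test might inject a stub client along these lines (a minimal sketch; the exact `HTTPClient` interface shape and the monitor constructor in `internal/monitors` may differ):

```go
package monitors_test

import (
	"bytes"
	"io"
	"net/http"
)

// stubClient satisfies an assumed HTTPClient interface with a single
// Do(*http.Request) (*http.Response, error) method. It always returns a
// canned status code, so tests never open a real network connection.
type stubClient struct {
	status int
}

func (c stubClient) Do(_ *http.Request) (*http.Response, error) {
	return &http.Response{
		StatusCode: c.status,
		Body:       io.NopCloser(bytes.NewReader(nil)),
	}, nil
}
```

A test then constructs the HTTP monitor with `stubClient{status: 503}` in place of a real client and asserts that the exported gauge ends up with the canned status code.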

Project structure:

- `cmd/labtime/` - Main entry point
- `internal/apps/labtime/` - Application setup and HTTP server for metrics
- `internal/monitors/` - Monitor implementations (HTTP, TLS, Docker)
- `internal/monitorconfig/` - Generic monitor configuration system
- `internal/scheduler/` - Job scheduling and execution
- `internal/yamlconfig/` - Configuration parsing and monitor DTOs
- `internal/dynamicdockermonitoring/` - Dynamic Docker container discovery and event monitoring

Code conventions:

- Prometheus metrics use consistent label patterns: `{type}_monitor_name`, `{type}_{resource}_name`
- Error wrapping with `github.com/pkg/errors.Wrap()` for context
- Structured logging with prefixes and file/line numbers
- Default values applied at monitor level (60s interval, fallback names)
- Job tagging system distinguishes between static configuration jobs (`file_job`) and dynamic Docker jobs (`dynamic_docker_job`) for lifecycle management
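
For instance, the error-wrapping convention looks roughly like this (the helper function and message are made up for illustration):

```go
package example

import (
	"net/http"

	"github.com/pkg/errors"
)

// checkEndpoint is a hypothetical helper showing the convention: errors are
// wrapped with context via github.com/pkg/errors before being returned.
func checkEndpoint(client *http.Client, url string) (int, error) {
	resp, err := client.Get(url)
	if err != nil {
		return 0, errors.Wrap(err, "performing HTTP status check")
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}
```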

Typical change workflow:

- Make changes to code
- Run `make all` to validate (lint, test, generate schema, build)
- When modifying configuration structure, run `make generate` to update the JSON schema (part of `make all`)
- Ensure all tests pass with `make test` (part of `make all`)
See LICENSE file for details.