A bridge service that retrieves monitor statuses from a remote Uptime Kuma instance and exposes them through a web dashboard and individual health check endpoints, so a main Uptime Kuma instance can monitor them.
Problem: You have an Uptime Kuma instance monitoring your local infrastructure (Docker containers, services, etc.), but your main Uptime Kuma instance can't reach these local services directly.
Solution: Subtime Kuma acts as a bridge by:
- Connecting to your local Uptime Kuma instance
- Retrieving all monitor statuses via the API
- Creating individual HTTP endpoints for each monitor (even for non-HTTP services like Docker containers, ports, etc.)
- Allowing your main Uptime Kuma to monitor these endpoints
Result: A unified dashboard on your main Uptime Kuma instance with all monitors (local + remote), centralized notifications, and complete visibility of your entire infrastructure.
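Conceptually, the bridge is just a poller plus a tiny HTTP server. Here is a minimal TypeScript/Express sketch of that idea; the `fetchMonitors` helper and the in-memory cache are illustrative assumptions, not Subtime Kuma's actual internals:

```typescript
// Sketch of the bridge idea, not Subtime Kuma's actual source:
// poll the remote instance, cache monitor states, and expose one
// HTTP endpoint per monitor. `fetchMonitors` is a hypothetical
// stand-in for whatever call retrieves statuses from the remote API.
import express from "express";

interface MonitorStatus {
  name: string;
  up: boolean;
}

// In-memory cache, keyed by the monitor's slug.
const monitors = new Map<string, MonitorStatus>();

async function fetchMonitors(): Promise<MonitorStatus[]> {
  // Hypothetical: query the remote Uptime Kuma (e.g. its /metrics
  // output) and map each monitor to { name, up }.
  return [];
}

async function poll(): Promise<void> {
  for (const m of await fetchMonitors()) {
    const slug = m.name.toLowerCase().replace(/[^a-z0-9]+/g, "-");
    monitors.set(slug, m);
  }
}

const app = express();

// Per-monitor health check: 200/"ok" when UP, 503/"ko" when DOWN,
// 404/"ko" when the slug is unknown (the responses documented below).
app.get("/monitor/:slug", (req, res) => {
  const m = monitors.get(req.params.slug);
  if (!m) {
    res.status(404).send("ko");
  } else {
    res.status(m.up ? 200 : 503).send(m.up ? "ok" : "ko");
  }
});

poll().catch(console.error);
setInterval(poll, 30_000); // pollInterval: 30 seconds
app.listen(3000);
```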
Features:
- 🔄 Real-time monitoring: polls the remote Uptime Kuma instance at the configured interval (every 30 seconds in the examples below)
- 🌐 Web dashboard: a clean interface displaying all monitors with their current status
- 🔗 Health check endpoints: an individual endpoint per monitor (HTTP `200`/`ok` if UP, `503`/`ko` if DOWN)
Prerequisites:
- Docker (for the Docker installation)
- Node.js 14+ (for the manual installation)
- Access to an Uptime Kuma instance with an API key
Docker installation:

- Create a `config.yml` file:

```yaml
source: http://ip:port
apiKey: your_api_key_here
pollInterval: 30
port: 3000
host: http://your-public-domain.com:3000
```

- Run the container:
```bash
docker run -d \
  --restart=always \
  -p 3000:3000 \
  -v ./config.yml:/app/config.yml:ro \
  --name subtime-kuma \
  ghcr.io/bsdev90/subtime-kuma:latest
```

- View logs:

```bash
docker logs -f subtime-kuma
```

Manual installation:

- Clone the repository:
```bash
git clone https://github.com/bsdev90/subtime-kuma.git
cd subtime-kuma
```

- Install dependencies:

```bash
npm install
```

- Create the configuration file:

```bash
cp config.yml.example config.yml
```

- Edit `config.yml` with your settings:
```yaml
source: http://ip:port
apiKey: your_api_key_here
pollInterval: 30
port: 3000
host: http://your-public-domain.com:3000
```

- Start the service:

```bash
npm start
```

Configuration reference:

| Key | Description | Example |
|---|---|---|
| `source` | Remote Uptime Kuma instance URL | `https://uptime.example.com` |
| `apiKey` | API key from the remote Uptime Kuma | `xxxxx` |
| `pollInterval` | Polling interval in seconds | `30` |
| `port` | Local web server port | `3000` |
| `host` | Public URL for health check endpoints | `http://monitor.example.com:3000` |
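For illustration, here are the same keys as a typed config loader, assuming the js-yaml package (a sketch; the project's real config handling may differ):

```typescript
// Sketch: load config.yml into a typed structure using js-yaml.
// The interface mirrors the table above; the project's actual
// loader may differ.
import { readFileSync } from "node:fs";
import { load } from "js-yaml";

interface Config {
  source: string;       // remote Uptime Kuma instance URL
  apiKey: string;       // API key from the remote instance
  pollInterval: number; // polling interval in seconds
  port: number;         // local web server port
  host: string;         // public URL used in health check endpoint URLs
}

const config = load(readFileSync("config.yml", "utf8")) as Config;

if (!config.source || !config.apiKey) {
  throw new Error("config.yml must define `source` and `apiKey`");
}
```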
To create an API key:
- Log in to your Uptime Kuma instance
- Go to Settings → API Keys
- Click Add API Key
- Set a name and an optional expiration
- Copy the generated key (you can verify it works with the sketch below)
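To sanity-check the key, you can query the remote instance's `/metrics` endpoint directly; Uptime Kuma accepts the API key as the basic-auth password with an empty username (if that differs on your version, check the Uptime Kuma docs). A quick TypeScript check, assuming Node 18+ for the global `fetch`:

```typescript
// Quick check that the API key is accepted by the remote instance.
// Assumes Uptime Kuma's /metrics endpoint takes the API key as the
// basic-auth password (empty username), and Node 18+ for global fetch.
const source = "http://ip:port";    // same as `source` in config.yml
const apiKey = "your_api_key_here"; // same as `apiKey` in config.yml

const res = await fetch(`${source}/metrics`, {
  headers: {
    Authorization: "Basic " + Buffer.from(`:${apiKey}`).toString("base64"),
  },
});
console.log(res.ok ? "API key accepted" : `Rejected: HTTP ${res.status}`);
```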
Web dashboard: open your browser at http://localhost:3000 (or the `port` you configured).
The dashboard displays:
- Source Uptime Kuma instance
- Last update timestamp
- Poll interval
- Total number of monitors
- Status of each monitor with details
Each monitor gets an individual endpoint:
```
http://localhost:3000/monitor/{monitor-slug}
```
Responses:
- `200 OK` + `"ok"`: the monitor is UP
- `503 Service Unavailable` + `"ko"`: the monitor is DOWN
- `404 Not Found` + `"ko"`: the monitor doesn't exist
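You can also probe an endpoint by hand; a quick sketch (the slug below is hypothetical):

```typescript
// Probe one health check endpoint by hand (the slug is hypothetical).
// Assumes Node 18+ for the global fetch.
const res = await fetch("http://localhost:3000/monitor/my-docker-container");
console.log(res.status, await res.text()); // e.g. 200 "ok" or 503 "ko"
```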
Integration with main Uptime Kuma:
- Copy the health check URL from the dashboard (click "Copy" button)
- In your main Uptime Kuma instance, create a new HTTP monitor
- Paste the health check URL
- The main instance will now track that remote monitor's status through the bridge
Troubleshooting:

If no monitors appear:
- Verify your API key is correct in `config.yml`
- Ensure the API key hasn't expired
- Check that the remote Uptime Kuma instance is accessible
- Check that the remote instance has active monitors (paused monitors won't appear)
- Verify that the `/metrics` endpoint is accessible on the remote instance
- Check the console logs for connection errors
If a health check endpoint returns `404`:
- Monitor names are converted to slugs (lowercase, alphanumeric with hyphens); see the sketch after this list
- Check the exact URL shown in the dashboard
- Verify the monitor exists in the remote instance
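One plausible implementation of that name-to-slug conversion (the project's exact rules may differ):

```typescript
// Plausible slug conversion: lowercase, runs of non-alphanumerics
// collapsed to single hyphens, leading/trailing hyphens trimmed.
// The project's exact rules may differ.
function slugify(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

console.log(slugify("My Docker Container #1")); // "my-docker-container-1"
```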
License: MIT
Contributions are welcome! Please feel free to submit a Pull Request.