Enterprise-grade distributed Docker Hub cache registry with intelligent caching and seamless Kubernetes integration
Features • Quick Start • Architecture • Deployment • Documentation
English | 简体中文
Station is a high-performance, distributed Docker Hub caching registry built with Spring Boot 4.0 and WebFlux. It provides intelligent multi-level caching, automatic node discovery, and production-ready Kubernetes support to accelerate Docker image pulls across your infrastructure.
- Faster Builds: Reduce Docker image pull times by 10-50x with intelligent caching
- Cost Effective: Reduce bandwidth costs and avoid Docker Hub rate limits
- Production Ready: Battle-tested distributed architecture with graceful scaling
- Zero Config: Automatic node discovery and consistent hash-based load balancing
- Cloud Native: First-class Kubernetes support with StatefulSets and rolling updates
**Multi-Level Caching System**

- L1: Caffeine in-memory cache (~0ms latency)
- L2: Redis distributed index (~1ms latency)
- L3: Peer node gRPC queries (~10ms latency)
- L4: Docker Hub fallback (~100-500ms latency)
- Cache penetration protection with distributed locks
- LRU eviction and automatic cleanup (see the lookup sketch below)
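A lookup like this is essentially a chain of fallbacks. The following is a minimal Project Reactor sketch of that chain, assuming hypothetical `RedisIndex`, `PeerClient`, and `DockerHub` interfaces; it illustrates the idea rather than Station's actual `MultiLevelCacheManager`, and the distributed-lock protection against cache penetration is omitted for brevity.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import reactor.core.publisher.Mono;

import java.time.Duration;

/**
 * Illustrative L1 -> L2 -> L3 -> L4 manifest lookup chain.
 * Names and wiring are assumptions for this sketch, not Station's real classes.
 */
class ManifestLookupSketch {

    private final Cache<String, String> localCache = Caffeine.newBuilder()
            .maximumSize(10_000)                      // mirrors STATION_CACHE_LOCAL_MAX_SIZE
            .expireAfterWrite(Duration.ofHours(168))  // mirrors STATION_CACHE_TTL_HOURS
            .build();

    Mono<String> lookupManifest(String imageRef,
                                RedisIndex redisIndex,   // hypothetical L2 client
                                PeerClient peerClient,   // hypothetical L3 gRPC client
                                DockerHub dockerHub) {   // hypothetical L4 client
        return Mono.justOrEmpty(localCache.getIfPresent(imageRef))            // L1: in-memory
                .switchIfEmpty(redisIndex.get(imageRef))                      // L2: Redis index
                .switchIfEmpty(peerClient.fetchFromResponsibleNode(imageRef)) // L3: peer gRPC
                .switchIfEmpty(dockerHub.fetchManifest(imageRef))             // L4: Docker Hub
                .doOnNext(manifest -> localCache.put(imageRef, manifest));    // populate L1 on the way back
    }

    interface RedisIndex { Mono<String> get(String key); }
    interface PeerClient { Mono<String> fetchFromResponsibleNode(String key); }
    interface DockerHub { Mono<String> fetchManifest(String key); }
}
```

Each `switchIfEmpty` level is only subscribed when the previous one completes empty, so a fast L1 hit never touches Redis, the peers, or Docker Hub.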
**Distributed Architecture**

- Consistent hashing with 150 virtual nodes (see the ring sketch below)
- Automatic node discovery (Kubernetes and Redis modes)
- gRPC-based inter-node communication with streaming
- Distributed locking to prevent thundering herd
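For illustration, a consistent-hash ring with virtual nodes can be sketched as below. The 150 virtual-node figure comes from the list above; the hash function (SHA-256 folded into a `long`) and the class shape are assumptions for this sketch, not necessarily what Station's `ConsistentHashManager` does.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

/** Illustrative consistent-hash ring; not Station's ConsistentHashManager. */
class ConsistentHashRingSketch {

    private static final int VIRTUAL_NODES = 150; // matches the feature list above
    private final TreeMap<Long, String> ring = new TreeMap<>();

    ConsistentHashRingSketch(List<String> nodeIds) {
        for (String node : nodeIds) {
            for (int i = 0; i < VIRTUAL_NODES; i++) {
                ring.put(hash(node + "#" + i), node); // spread each node around the ring
            }
        }
    }

    /** Returns the node responsible for the given cache key (e.g. an image digest). */
    String responsibleNode(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private static long hash(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            // Fold the first 8 bytes of the digest into a long ring position.
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (digest[i] & 0xFF);
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because each physical node owns 150 scattered segments of the ring, adding or removing a node only remaps the keys on that node's segments, so routing for the rest of the cluster stays stable.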
**Reactive & Non-Blocking**

- Built on Spring WebFlux and Project Reactor
- Fully reactive from HTTP to Redis to gRPC
- Backpressure support for large file transfers (see the streaming sketch below)
- Java 21 Virtual Threads ready
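As a rough illustration of what "fully reactive with backpressure" means for blob transfers, the sketch below streams a file as a `Flux<DataBuffer>` using WebFlux utilities; the on-disk layout and class name are assumptions, not Station's storage layer.

```java
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.DefaultDataBufferFactory;
import reactor.core.publisher.Flux;

import java.nio.file.Path;

/** Illustrative reactive blob read; chunk size and path layout are assumed. */
class BlobStreamSketch {

    private static final int CHUNK_SIZE = 64 * 1024; // 64 KiB chunks, demand-driven

    /** Streams a stored blob without loading it fully into memory. */
    Flux<DataBuffer> readBlob(Path storageRoot, String digest) {
        Path blobPath = storageRoot.resolve(digest); // hypothetical on-disk layout
        return DataBufferUtils.read(blobPath, new DefaultDataBufferFactory(), CHUNK_SIZE);
        // Subscribers (e.g. the WebFlux response writer) pull chunks as the client
        // consumes them, so slow clients apply backpressure instead of forcing buffering.
    }
}
```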
**Docker Registry API v2**

- Full manifest support (v2 schema and OCI)
- Blob storage with range requests
- Docker Hub authentication integration
- HEAD requests for efficient existence checks (see the handler sketch below)
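The manifest endpoint of the Registry API v2 is `GET /v2/<name>/manifests/<reference>`. A simplified WebFlux handler for it might look like the sketch below; the `ManifestLookup` facade and response details are assumptions, and Station's actual `RegistryController` may differ.

```java
import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

/**
 * Illustrative Registry API v2 manifest endpoint; Station's real RegistryController
 * may differ. Note that real repository names contain slashes (e.g. library/nginx),
 * so a production mapping needs a more permissive path pattern than {name}.
 */
@RestController
@RequestMapping("/v2")
class ManifestEndpointSketch {

    private final ManifestLookup lookup; // hypothetical cache/lookup facade

    ManifestEndpointSketch(ManifestLookup lookup) {
        this.lookup = lookup;
    }

    @GetMapping("/{name}/manifests/{reference}")
    Mono<ResponseEntity<String>> getManifest(@PathVariable String name,
                                             @PathVariable String reference) {
        return lookup.manifest(name, reference)
                .map(m -> ResponseEntity.ok()
                        .header(HttpHeaders.CONTENT_TYPE,
                                "application/vnd.docker.distribution.manifest.v2+json")
                        .header("Docker-Content-Digest", m.digest())
                        .body(m.json()))
                .defaultIfEmpty(ResponseEntity.notFound().build());
    }

    interface ManifestLookup { Mono<Manifest> manifest(String name, String reference); }
    record Manifest(String digest, String json) {}
}
```

Spring WebFlux maps HEAD requests to the same GET handler and suppresses the body, which is what Docker clients rely on for existence checks.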
**Production Features**

- Graceful shutdown with pod draining (see the lifecycle sketch below)
- Prometheus metrics export
- Health checks and readiness probes
- Structured logging and request tracing
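A common way to implement shutdown draining in Spring is a `SmartLifecycle` bean that deregisters the node before the web server stops accepting traffic. The sketch below shows that pattern with a hypothetical `NodeRegistry` client and assumes `server.shutdown: graceful` is enabled; it is not a copy of Station's lifecycle classes.

```java
import org.springframework.context.SmartLifecycle;
import org.springframework.stereotype.Component;

/**
 * Illustrative shutdown hook: deregister from the node registry early so peers
 * and the load balancer stop routing to this pod while in-flight requests drain
 * (with server.shutdown=graceful enabled in application.yml).
 */
@Component
class NodeDrainLifecycleSketch implements SmartLifecycle {

    private final NodeRegistry registry; // hypothetical registry client (e.g. Redis-backed)
    private volatile boolean running;

    NodeDrainLifecycleSketch(NodeRegistry registry) {
        this.registry = registry;
    }

    @Override
    public void start() {
        registry.register();   // announce this node when the context starts
        running = true;
    }

    @Override
    public void stop() {
        registry.deregister(); // remove this node before the web server shuts down
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public int getPhase() {
        // High phase: started late, stopped early, so deregistration happens first.
        return Integer.MAX_VALUE;
    }

    interface NodeRegistry {
        void register();
        void deregister();
    }
}
```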
**Prerequisites**

- Java 21+
- Docker & Docker Compose (for local development)
- Kubernetes 1.24+ (for production deployment)
**1. Clone the repository**

```bash
git clone https://github.com/dingdangmaoup/station.git
cd station
```

**2. Build the project**

```bash
./gradlew clean build
```

**3. Start the cluster**

```bash
docker-compose up -d
```

This starts:
- 3 Station nodes (ports 5001-5003)
- 1 Redis instance
- 1 Nginx load balancer (port 5000)

**4. Configure Docker to use the registry**

Edit `/etc/docker/daemon.json`:

```json
{
  "registry-mirrors": ["http://localhost:5000"]
}
```

Restart Docker:

```bash
sudo systemctl restart docker
```

**5. Test it out**

```bash
# Pull an image through the cache
docker pull nginx:latest

# Check cache metrics
curl http://localhost:5000/actuator/prometheus | grep station_cache
```
Station provides three Redis deployment modes for Kubernetes:
Standalone Redis:

```bash
kubectl apply -f kubernetes/base/
kubectl apply -f kubernetes/standalone/
```

Redis Sentinel:

```bash
kubectl apply -f kubernetes/base/
kubectl apply -f kubernetes/sentinel/
kubectl apply -f kubernetes/standalone/station-service.yaml
```

Redis Cluster:

```bash
kubectl apply -f kubernetes/base/
kubectl apply -f kubernetes/cluster/
kubectl apply -f kubernetes/standalone/station-service.yaml
```

See kubernetes/README.md for detailed deployment instructions.
```
┌─────────────────────────────────────────────────────────────────┐
│                       Client (Docker CLI)                       │
└───────────────────────────────┬─────────────────────────────────┘
                                │
                    ┌───────────▼───────────┐
                    │  Nginx Load Balancer  │
                    └───────────┬───────────┘
                                │
         ┌──────────────────────┼──────────────────────┐
         │                      │                      │
┌────────▼────────┐    ┌────────▼────────┐    ┌────────▼────────┐
│  Station Node   │    │  Station Node   │    │  Station Node   │
│    (Pod 0)      │◄───┤    (Pod 1)      │◄───┤    (Pod 2)      │
└────────┬────────┘    └────────┬────────┘    └────────┬────────┘
         │        gRPC          │        gRPC          │
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                │
                     ┌──────────▼───────────┐
                     │  Redis (L2 Cache)    │
                     │  + Node Registry     │
                     └──────────────────────┘
```
**Request Flow**

1. Client Request: Docker client requests an image manifest or blob
2. Load Balancer: Nginx routes to an available Station node
3. L1 Cache Check: Check the local Caffeine cache
4. L2 Cache Check: Query Redis for cached metadata
5. L3 Peer Query: Use consistent hashing to find the responsible node, then query it via gRPC
6. L4 Docker Hub: Fall back to Docker Hub if the content is not cached
7. Cache Population: Store the result in the local and Redis caches on fetch
**Core Components**

- `RegistryController`: Docker Registry API v2 endpoints
- `MultiLevelCacheManager`: Orchestrates L1/L2/L3 cache lookups
- `ConsistentHashManager`: Routes requests to responsible nodes
- `StationGrpcService`: Inter-node communication
- `DockerHubClient`: Upstream Docker Hub integration
- `NodeDiscoveryService`: Automatic node registration
| Variable | Description | Default |
|---|---|---|
| `STATION_STORAGE_BASE_PATH` | Base directory for blob storage | `/data/registry` |
| `STATION_REDIS_HOST` | Redis host | `localhost` |
| `STATION_REDIS_PORT` | Redis port | `6379` |
| `STATION_REDIS_PASSWORD` | Redis password | (empty) |
| `STATION_NODE_DISCOVERY_MODE` | Node discovery mode | `kubernetes` |
| `STATION_CACHE_LOCAL_MAX_SIZE` | L1 cache max entries | `10000` |
| `STATION_CACHE_TTL_HOURS` | Cache TTL in hours | `168` (7 days) |
See src/main/resources/application.yml for full configuration options.
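The environment variables above follow Spring Boot's relaxed binding, so they could be backed by a `@ConfigurationProperties` class along these lines; the `station.*` property structure and field names are assumptions inferred from the variable names, not Station's actual configuration classes.

```java
import org.springframework.boot.context.properties.ConfigurationProperties;

/**
 * Illustrative binding for the variables in the table above, assuming a
 * "station" prefix (e.g. STATION_REDIS_HOST -> station.redis.host).
 * Register it with @ConfigurationPropertiesScan or @EnableConfigurationProperties.
 * Station's real property classes may be organised differently.
 */
@ConfigurationProperties(prefix = "station")
public record StationPropertiesSketch(
        Storage storage,
        Redis redis,
        Node node,
        Cache cache) {

    public record Storage(String basePath) {}                      // STATION_STORAGE_BASE_PATH
    public record Redis(String host, int port, String password) {} // STATION_REDIS_HOST / PORT / PASSWORD
    public record Node(String discoveryMode) {}                    // STATION_NODE_DISCOVERY_MODE
    public record Cache(long localMaxSize, int ttlHours) {}        // STATION_CACHE_LOCAL_MAX_SIZE / TTL_HOURS
}
```

With this in place, the same values can be set either from application.yml or from the environment variables in the table.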
Station exposes Prometheus metrics on /actuator/prometheus:
- `station_cache_hit_ratio`: Cache hit rate (0.0-1.0)
- `station_cache_requests_total`: Total cache requests by level and result
- `station_node_count`: Number of active nodes
- `http_server_requests_seconds`: HTTP request latency histogram
- `jvm_memory_used_bytes`: JVM memory usage
```promql
# Cache hit rate
rate(station_cache_requests_total{result="hit"}[5m]) / rate(station_cache_requests_total[5m])

# P95 request latency
histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m]))

# Node availability
up{job="station"}
```
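The cache counters behind these queries would typically be recorded through Micrometer, which Spring Boot exposes at /actuator/prometheus. The sketch below shows one way to do that; the `level` and `result` tags follow the metric descriptions above, but the code itself is an illustration, not Station's metrics classes.

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

/** Illustrative Micrometer instrumentation for station_cache_requests_total. */
class CacheMetricsSketch {

    private final MeterRegistry registry;

    CacheMetricsSketch(MeterRegistry registry) {
        this.registry = registry;
    }

    /** Counts one cache lookup, tagged by level (l1/l2/l3/l4) and result (hit/miss). */
    void recordLookup(String level, String result) {
        Counter.builder("station.cache.requests") // rendered as station_cache_requests_total by the Prometheus registry
                .tag("level", level)
                .tag("result", result)
                .register(registry)
                .increment();
    }
}
```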
| Scenario | Throughput | Latency (P95) |
|---|---|---|
| Cache Hit (L1) | 1000+ RPS | < 1ms |
| Cache Hit (L2) | 500+ RPS | < 5ms |
| Peer Hit (L3) | 200+ RPS | < 20ms |
| Docker Hub Fallback | ~100 RPS | < 500ms |
| Environment | CPU | Memory | Storage |
|---|---|---|---|
| Minimum | 1 core | 2GB | 100GB |
| Recommended | 2 cores | 4GB | 500GB |
| Production | 4 cores | 8GB | 1TB+ SSD |
Configure GitLab Runner:

```toml
[[runners]]
  [runners.docker]
    registry_mirrors = ["http://station.company.com"]
```

Configure Docker Desktop:

```json
{
  "registry-mirrors": ["http://localhost:5000"]
}
```

Configure containerd:

```toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["http://station-registry.default.svc.cluster.local"]
```

Development commands:

```bash
# Build JAR
./gradlew clean build

# Build Docker image
docker build -t station:latest .

# Run tests
./gradlew test

# Generate gRPC code
./gradlew generateProto
```

Project structure:

```
station/
├── src/main/java/com/dingdangmaoup/station/
│ ├── cache/ # Multi-level caching (5 classes)
│ ├── config/ # Configuration (5 classes)
│ ├── coordination/ # Distributed coordination (4 classes)
│ ├── docker/ # Docker Hub client (4 classes)
│ ├── grpc/ # gRPC services (2 classes)
│ ├── lifecycle/ # Lifecycle management (2 classes)
│ ├── metrics/ # Prometheus metrics (2 classes)
│ ├── node/discovery/ # Node discovery (4 classes)
│ ├── registry/ # Registry API (1 class)
│ └── storage/ # Storage layer (6 classes)
├── src/main/proto/ # gRPC definitions
├── kubernetes/ # K8s deployment files
└── docker/ # Local development setup
```
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Spring Boot and Project Reactor
- gRPC communication powered by grpc-java
- Caching with Caffeine and Redis
- Kubernetes deployment inspired by Docker Registry
- Documentation: kubernetes/README.md
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Built with ❤️ using Spring Boot 4.0 and Java 21