A Rust-based toy control plane for hosting PostgreSQL containers as a service on Azure Kubernetes Service (AKS).
- Durable Workflows: Uses Duroxide for reliable orchestration
- Kubernetes Native: Deploys PostgreSQL as pods in AKS
- Public & Private Access: Supports LoadBalancer (public IP) or ClusterIP (internal only)
- DNS Support: Automatic Azure DNS names for instances
- YAML Templates: Kubernetes resources defined in clean, readable YAML (see the sketch below)
- REST API: Full-featured API for instance management
- PostgreSQL Metadata: Complete CMS database for tracking instances
- Web UI: Modern React-based dashboard for visual management
- Health Monitoring: Continuous per-instance health checks via durable actors
- Deployment: PostgreSQL containers as StatefulSets in AKS
- Storage: Persistent volumes for durable data
- Networking: LoadBalancer services with Azure DNS names
- Workflow Engine: Duroxide for durable orchestrations
- Control Plane: Rust-based API server
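The per-instance Kubernetes resources come from the YAML templates in toygres-orchestrations, with instance-specific values substituted in before they are applied to the cluster. The snippet below is only an illustrative sketch of that idea; the template text, placeholder names, and the render_service helper are made up for this example and are not the project's actual templates or API:

```rust
/// Illustrative only: substitute instance parameters into a YAML template.
/// The real templates live in toygres-orchestrations/src/templates/.
fn render_service(name: &str, dns_label: &str, public: bool) -> String {
    // Hypothetical template; the actual manifests are more complete.
    let template = r#"
apiVersion: v1
kind: Service
metadata:
  name: {{name}}
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: {{dns_label}}
spec:
  type: {{service_type}}
  selector:
    app: postgres
    instance: {{name}}
  ports:
    - port: 5432
      targetPort: 5432
"#;
    template
        .replace("{{name}}", name)
        .replace("{{dns_label}}", dns_label)
        .replace("{{service_type}}", if public { "LoadBalancer" } else { "ClusterIP" })
}

fn main() {
    // Public instance: LoadBalancer with an Azure DNS label.
    println!("{}", render_service("adardb1", "adardb1", true));
}
```

The azure-dns-label-name annotation is what gives LoadBalancer services their <label>.<region>.cloudapp.azure.com names; ClusterIP services skip it and stay internal only.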
- Rust (1.85.0 or newer)
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
- Docker Desktop (for kind local testing)
  - Download from: https://www.docker.com/products/docker-desktop
  - Make sure Docker is running
- Azure CLI
  brew install azure-cli
  az login
- kubectl
  brew install kubectl
- kind (optional, for local testing)
  brew install kind
If you already have an AKS cluster:
# Get cluster credentials
az aks get-credentials --resource-group <your-rg> --name <your-cluster>
# Verify connection
kubectl cluster-info
# Create toygres namespace
kubectl create namespace toygres
# Update .env with your cluster details
cp .env.example .env
# Edit .env and set:
# AKS_CLUSTER_NAME=<your-cluster>
# AKS_RESOURCE_GROUP=<your-rg>
# AKS_NAMESPACE=toygres

Otherwise, use the provided infrastructure setup script:
# This will create:
# - Azure Resource Group
# - AKS Cluster (takes 10-15 minutes)
# - toygres namespace
# - Storage class configuration
./scripts/setup-infra.sh
# Script will prompt for configuration and output values for .env
1. Copy the example environment file:

   cp .env.example .env

2. Edit .env with your values.

   Required for Control Plane:

   DATABASE_URL=postgresql://user:password@host:5432/toygres
   AKS_CLUSTER_NAME=your-aks-cluster
   AKS_RESOURCE_GROUP=your-resource-group

   Optional (for examples/testing with manual_deploy):

   INSTANCE_NAME=my-test-pg
   POSTGRES_PASSWORD=your-secure-password
   USE_LOAD_BALANCER=true
3. Set up the metadata database (⚠️ REQUIRED before running toygres-server):

   ./scripts/db-init.sh
   ./scripts/db-migrate.sh   # (no-op until we add 0002+ migrations)

   This creates the toygres_cms schema and all required tables. The server verifies these tables exist on startup and fails with a clear error if this step is skipped (a sketch of that check follows this list).
4. Verify kubectl connection (required before running toygres-server):

   # Get AKS credentials (use values from your .env)
   az aks get-credentials --resource-group <your-rg> --name <your-cluster> --overwrite-existing
   # Verify connection works
   kubectl cluster-info
   # Verify namespace exists
   kubectl get namespace toygres

   ⚠️ Important: toygres-server requires kubectl to be configured to access your AKS cluster. If you see errors like "Failed to create K8s client", ensure you've run the az aks get-credentials command above.
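For reference, the startup verification mentioned in step 3 amounts to checking that the toygres_cms schema is populated. The snippet below is only a minimal sketch of such a check (assuming the sqlx crate with the postgres feature and the anyhow crate), not the server's actual implementation:

```rust
use sqlx::postgres::PgPoolOptions;

/// Sketch: fail fast if the toygres_cms schema has no tables yet.
async fn verify_metadata_schema(database_url: &str) -> anyhow::Result<()> {
    let pool = PgPoolOptions::new()
        .max_connections(1)
        .connect(database_url)
        .await?;

    // Count the tables that db-init.sh / db-migrate.sh should have created.
    let table_count: i64 = sqlx::query_scalar(
        "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'toygres_cms'",
    )
    .fetch_one(&pool)
    .await?;

    if table_count == 0 {
        anyhow::bail!(
            "toygres_cms schema is empty - run ./scripts/db-init.sh and ./scripts/db-migrate.sh first"
        );
    }
    Ok(())
}
```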
Test that everything works by deploying a PostgreSQL instance:
# Deploy with auto-generated DNS name
cargo run --example manual_deploy -- --dns-name mytest --clean
# Or deploy with defaults
cargo run --example manual_deploy
# Expected output:
# ✅ PostgreSQL deployed to AKS
# ✅ External IP: 4.249.xxx.xxx
# ✅ DNS name: mytest-toygres.westus3.cloudapp.azure.com
# ✅ Connection verified

Prerequisites: Ensure kubectl is configured (see step 4 above) and the database is initialized (step 3).
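The final "Connection verified" line corresponds to opening a real connection against the new instance. A minimal equivalent check you can run yourself is sketched below (assuming the sqlx, tokio, and anyhow crates; substitute the password and DNS name that manual_deploy printed):

```rust
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Substitute the password and DNS name printed by manual_deploy.
    let url = "postgresql://postgres:<password>@mytest-toygres.westus3.cloudapp.azure.com:5432/postgres";

    // Open a connection and run a trivial query to prove the instance is reachable.
    let pool = PgPoolOptions::new().connect(url).await?;
    let one: i32 = sqlx::query_scalar("SELECT 1").fetch_one(&pool).await?;
    assert_eq!(one, 1);
    println!("Connection verified");
    Ok(())
}
```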
# Build all crates
cargo build --workspace
# For convenience, use the wrapper script (in project root)
# Instead of: cargo run --bin toygres-server -- <command>
# Use: ./toygres <command>
# Start the server (API + Workers)
./toygres server start
# List instances
./toygres list
# Create a PostgreSQL instance
# The name you provide becomes the DNS name: <name>.<region>.cloudapp.azure.com
# Returns immediately - instance is created in the background
./toygres create adardb1 --password mySecurePass123
# Check instance status (state will show 'creating' → 'running')
./toygres get adardb1
# List all instances
./toygres list
# Delete an instance (use the same DNS name you used to create it)
# Returns immediately - instance is deleted in the background
./toygres delete adardb1
# Stop the server
./toygres server stop
# Advanced diagnostics (for debugging orchestrations)
./toygres server orchestrations # List all orchestrations
./toygres server orchestration <id> --history # Show execution details
# Or use the full cargo command:
cargo run --bin toygres-server -- create adardb1 --password mySecurePass123

Access the visual dashboard at http://localhost:3000:
# Start the backend server first
./toygres server start
# In a new terminal, start the Web UI
cd toygres-ui
npm install
npm start
# Open http://localhost:3000 in your browser

The Web UI provides:
- Dashboard - System overview and recent activity
- Instance Management - View all PostgreSQL instances
- System Monitoring - Real-time stats and worker status
- Debug Tools - Orchestration viewer and log browser
- Auto-refresh - Live updates every 5 seconds
Deploy the complete solution (control plane + data plane) to Azure Kubernetes Service:
# 1. Copy and configure environment
cp .env.example .env
# Edit .env with your values (see required variables below)
# 2. Deploy with HTTP only
./deploy/deploy-to-aks.sh
# 3. Or deploy with HTTPS (recommended for production)
./deploy/deploy-to-aks.sh --https --dns-label mytoygres
# This will create: https://mytoygres.<region>.cloudapp.azure.com

Required environment variables for AKS deployment:
| Variable | Description |
|---|---|
| DATABASE_URL | PostgreSQL connection string for metadata (can be Azure Database for PostgreSQL) |
| AKS_CLUSTER_NAME | Your AKS cluster name |
| AKS_RESOURCE_GROUP | Azure resource group containing the AKS cluster |
| AZURE_CLIENT_ID | Service Principal client ID |
| AZURE_CLIENT_SECRET | Service Principal secret |
| AZURE_TENANT_ID | Azure AD tenant ID |
| TOYGRES_ADMIN_USERNAME | Admin username for web UI login |
| TOYGRES_ADMIN_PASSWORD | Admin password for web UI login |
Create a Service Principal:
az ad sp create-for-rbac --name "toygres-sp" --role contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>

What gets deployed:
- toygres-server - Control plane API and orchestration workers
- toygres-ui - Web dashboard (proxies to server)
- RBAC - ServiceAccount with permissions to create PostgreSQL pods
- Secrets - Database credentials and Azure authentication
- (Optional) nginx-ingress + cert-manager for HTTPS
toygres/
├── toygres-models/            # Shared data structures
├── toygres-orchestrations/    # Duroxide orchestrations & activities
│   └── src/
│       ├── activities/        # Atomic K8s operations
│       ├── orchestrations/    # Durable workflows
│       └── templates/         # Kubernetes YAML templates
├── toygres-server/            # Control plane server
│   ├── src/                   # Main server code (API + CLI)
│   └── examples/              # Working examples (manual_deploy.rs)
├── toygres-ui/                # Web interface (React + TypeScript)
│   └── src/
│       ├── components/        # React components
│       └── lib/               # API client and utilities
├── migrations/                # Database schema migrations
├── docs/                      # Documentation
├── scripts/                   # Setup and management scripts
└── prompts/                   # AI assistant context docs
# Setup AKS cluster
./scripts/setup-infra.sh
# Setup metadata database schema + future migrations
./scripts/db-init.sh
./scripts/db-migrate.sh

# List all PostgreSQL deployments
./scripts/list-deployments.sh
# Clean up all deployments
./scripts/cleanup-deployments.sh
# Clean up a specific deployment
./scripts/cleanup-single.sh <instance-name>

# With custom DNS name and auto-cleanup
cargo run --example manual_deploy -- --dns-name mydb --clean
# Keep instance running for testing
cargo run --example manual_deploy -- --dns-name prod-db
# Deploy without public DNS (IP only)
# Remove DNS_LABEL from .env, then:
cargo run --example manual_deploy

The deployment tool outputs connection strings:
# Via DNS (recommended) - when using toygres-server
psql 'postgresql://postgres:<password>@<name>.<region>.cloudapp.azure.com:5432/postgres'
# Via DNS - when using manual_deploy example (with DNS_LABEL=toygres)
psql 'postgresql://postgres:<password>@<name>-toygres.<region>.cloudapp.azure.com:5432/postgres'
# Via IP
psql 'postgresql://postgres:<password>@<external-ip>:5432/postgres'

# List current deployments
AKS_NAMESPACE=toygres ./scripts/list-deployments.sh
# Clean up a specific instance
AKS_NAMESPACE=toygres ./scripts/cleanup-single.sh mydb
# Or clean up all instances
AKS_NAMESPACE=toygres ./scripts/cleanup-deployments.sh

- docs/plan.md - Detailed implementation plan with phases
- docs/getting-started.md - Development guide
- docs/phase0-complete.md - Phase 0 summary
- docs/phase1-activities-plan.md - Activities implementation plan
- prompts/project-context.md - AI assistant context
- POST /instances - Create a new PostgreSQL instance
- DELETE /instances/{id} - Delete an instance
- GET /instances - List all instances
- GET /instances/{id} - Get instance details
- GET /operations/{id} - Monitor operation status
- GET /health - Control plane health check
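As an illustration of the request flow, here is a minimal sketch of creating an instance and polling its operation against these endpoints, using the reqwest (with the json feature), tokio, serde_json, and anyhow crates. The base URL and port, and the operation_id and status field names, are assumptions inferred from the CLI behavior rather than a documented contract:

```rust
use serde_json::{json, Value};
use std::time::Duration;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Assumed base URL; point this at wherever toygres-server is listening.
    let base = "http://localhost:8080";
    let http = reqwest::Client::new();

    // POST /instances returns immediately; the instance is created in the background.
    let created: Value = http
        .post(format!("{base}/instances"))
        .json(&json!({ "name": "adardb1", "password": "mySecurePass123" }))
        .send()
        .await?
        .error_for_status()?
        .json()
        .await?;

    // Hypothetical field names: poll the operation until it completes.
    let op_id = created["operation_id"].as_str().unwrap_or_default().to_string();
    loop {
        let op: Value = http
            .get(format!("{base}/operations/{op_id}"))
            .send()
            .await?
            .json()
            .await?;
        println!("operation status: {}", op["status"]);
        if op["status"] == "completed" || op["status"] == "failed" {
            break;
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }
    Ok(())
}
```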
- Proof of concept working
- YAML-based K8s templates
- LoadBalancer with public IPs
- Azure DNS name support
- Connection testing
- Cleanup scripts
- Extracting into Duroxide activities
- Following cross-crate registry pattern
- Phase 2: Metadata database tracking
- Phase 3: REST API
- Phase 4: Duroxide orchestrations
- Phase 5: Health monitoring
Symptoms:
- Error: Failed to create K8s client: Failed to create Kubernetes client
- kubectl shows: The connection to the server localhost:8080 was refused
- Activity failures in duroxide logs
Solution:
# Get credentials (use your actual resource group and cluster name from .env)
az aks get-credentials --resource-group <rg> --name <cluster> --overwrite-existing
# Verify connection works
kubectl cluster-info
# Should show: Kubernetes control plane is running at https://...

Root Cause: The Kubernetes client (kube-rs) requires kubectl to be configured with valid cluster credentials in ~/.kube/config. Without this, it defaults to localhost:8080 and fails.
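To confirm that the credentials kube-rs will pick up are valid, you can run a small probe like the sketch below (assuming the kube, k8s-openapi, tokio, and anyhow crates). It loads the same kubeconfig the server relies on and lists pods in the toygres namespace:

```rust
use k8s_openapi::api::core::v1::Pod;
use kube::{api::ListParams, Api, Client};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Loads ~/.kube/config (or in-cluster config), just like the server's K8s client.
    let client = Client::try_default().await?;

    // List pods in the toygres namespace as a quick connectivity check.
    let pods: Api<Pod> = Api::namespaced(client, "toygres");
    for p in pods.list(&ListParams::default()).await? {
        println!("{}", p.metadata.name.unwrap_or_default());
    }
    Ok(())
}
```

If this probe fails with a connection-refused error against localhost:8080, rerun az aks get-credentials as shown above.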
# Check namespace exists
kubectl get namespace toygres
# Create if missing
kubectl create namespace toygres
# Check storage classes
kubectl get storageclass

# Force delete
kubectl delete statefulset,svc,pvc -n toygres -l app=postgres --grace-period=0

See docs/plan.md for the implementation roadmap.
MIT