This project sets up a Kubernetes cluster on Proxmox VMs and deploys a complete observability stack including Prometheus, Grafana, Loki, Tempo, Mimir, and Alertmanager.
The project is organized into Terraform modules with a clean directory structure:
```
.
├── modules/                      # Terraform modules
│   ├── proxmox/                  # Proxmox VM provisioning
│   ├── kubernetes/               # Kubernetes infrastructure
│   ├── monitoring/               # Observability stack
│   ├── cert-manager/             # Certificate management
│   └── ingress/                  # Ingress controllers
├── scripts/                      # Management and utility scripts
│   ├── setup-reverse-proxy.sh    # Nginx reverse proxy setup
│   ├── setup-letsencrypt.sh      # SSL certificate automation
│   ├── test-all-domains.sh       # Connectivity testing
│   ├── check-services.sh         # Health monitoring
│   ├── fix-nfs-connectivity.sh   # NFS diagnostics
│   ├── remove-airflow.sh         # Service cleanup
│   └── README.md                 # Scripts documentation
├── docs/                         # Documentation
│   ├── REVERSE-PROXY-SETUP.md    # Complete proxy setup guide
│   └── README.md                 # Documentation index
├── config/                       # Configuration files
│   ├── nginx/                    # Nginx configurations
│   │   ├── nginx-reverse-proxy.conf
│   │   ├── ssl-params.conf
│   │   └── security-headers.conf
│   └── README.md                 # Configuration documentation
├── kubernetes/                   # Kubernetes manifests
│   ├── metallb-fix.yaml          # MetalLB configuration
│   ├── nfs-provisioner.yaml      # NFS storage provisioner
│   ├── traefik-service.yaml      # Traefik service configuration
│   └── [application manifests]   # Various service deployments
├── templates/                    # Terraform templates
│   ├── cloud-init-userdata.tftpl # VM initialization template
│   └── ssh_config.tftpl          # SSH configuration template
├── main.tf                       # Root Terraform configuration
├── variables.tf                  # Root variables
├── outputs.tf                    # Root outputs
├── terraform.tfvars              # Variable values (not in git)
├── kubeconfig.yaml               # Kubernetes configuration
└── README.md                     # This file
```
```bash
# 1. Set up reverse proxy (run on control plane node)
./scripts/setup-reverse-proxy.sh

# 2. Test all services
./scripts/test-all-domains.sh

# 3. Get SSL certificates (optional)
./scripts/setup-letsencrypt.sh
```

- Complete Setup Guide: `docs/REVERSE-PROXY-SETUP.md`
- Scripts Reference: `scripts/README.md`
- Configuration Guide: `config/README.md`
- Proxmox server with API access
- SSH keypair for VM access
- Terraform installed locally
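Before the first apply, a quick pre-flight check can save a failed run. This is a minimal sketch; the Proxmox host, API token, and key path below are placeholders for your own values:

```bash
# Pre-flight sanity checks (host, token, and key path are placeholders)
terraform version                                  # Terraform installed locally
ssh-keygen -y -f ~/.ssh/id_ed25519 > /dev/null     # SSH private key is readable
curl -sk "https://proxmox.example.com:8006/api2/json/version" \
  -H "Authorization: PVEAPIToken=terraform@pve!mytoken=<secret>"  # Proxmox API reachable
```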
The deployment is split into two phases:
```bash
# Initialize Terraform
terraform init

# Create the VMs and basic infrastructure
terraform apply -var="deploy_kubernetes=false"
```
After the VMs are created:

```bash
# 1. SSH to the control node
ssh -F ssh_config gimli

# 2. Verify k3s is running on the control node
sudo systemctl status k3s

# 3. Copy the kubeconfig from the control node
scp -F ssh_config gimli:/etc/rancher/k3s/k3s.yaml ./kubeconfig.yaml

# 4. Update the server address in the kubeconfig
#    (BSD/macOS sed shown; on Linux, drop the empty '' after -i)
sed -i '' 's/127.0.0.1/CONTROL_NODE_PRIVATE_IP/g' kubeconfig.yaml
```

```bash
# Deploy Kubernetes resources and monitoring stack
terraform apply -var="deploy_kubernetes=true"
```

This diagram illustrates the complete architecture of the project, showing the relationships and dependencies between all components:
- Infrastructure Layer: Proxmox VMs, k3s Kubernetes, MetalLB, and NFS storage
- Core Services: Traefik Ingress, Cert-Manager, and HashiCorp Vault
- Observability Stack: Prometheus, Grafana, Loki, Tempo, Mimir, and Alertmanager
- Applications: WordPress, Obsidian Sync with CouchDB, n8n, and their respective monitoring components
The diagram visualizes key relationships including:
- Service dependencies
- Monitoring data flow (metrics and logs)
- Ingress routing paths
- Security integrations
- Dashboard connections
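Once both phases have been applied, a quick sanity check confirms the layers described above actually came up. The namespace names below are assumptions based on common defaults for these components:

```bash
# Verify the cluster and stack after phase 2 (namespaces are assumed defaults)
kubectl --kubeconfig ./kubeconfig.yaml get nodes
kubectl --kubeconfig ./kubeconfig.yaml get pods -n monitoring         # observability stack
kubectl --kubeconfig ./kubeconfig.yaml get pods -n metallb-system     # MetalLB controller/speakers
kubectl --kubeconfig ./kubeconfig.yaml get svc -n kube-system traefik # Traefik LoadBalancer IP
```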
The `proxmox` module creates Proxmox VMs with the following features:
- Public and private networking
- Cloud-init for initial configuration
- K3s installation
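As a quick check that cloud-init and the k3s install completed on a provisioned VM (using the `gimli` host from the SSH config above):

```bash
# Confirm cloud-init finished and k3s is installed on the control node
ssh -F ssh_config gimli 'cloud-init status --long && k3s --version'
```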
The `kubernetes` module sets up core Kubernetes infrastructure:
- MetalLB for LoadBalancer services
- NFS storage for persistent volumes
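A simple way to confirm MetalLB is handing out addresses is to expose a throwaway service and watch for an external IP. This is a disposable smoke test, not part of the deployment (assumes KUBECONFIG points at the cluster):

```bash
# MetalLB smoke test: a LoadBalancer service should receive an EXTERNAL-IP from the pool
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get svc lb-test --watch    # wait for EXTERNAL-IP, then Ctrl-C
kubectl delete svc/lb-test deployment/lb-test
```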
The `monitoring` module deploys a comprehensive observability stack:
- Prometheus for metrics
- Grafana for visualization
- Loki for logs
- Tempo for tracing
- Mimir for long-term metrics storage
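To spot-check that Prometheus is actually scraping, its HTTP API can be queried through a port-forward. The service name below is the kube-prometheus-stack default and is an assumption here:

```bash
# List active scrape targets via the Prometheus HTTP API
kubectl -n monitoring port-forward svc/kube-prometheus-stack-prometheus 9090:9090 &
sleep 2
curl -s 'http://localhost:9090/api/v1/targets?state=active' | head -c 500; echo
kill $!
```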
The project also sets up these additional services (via Kubernetes manifests):

- Traefik: ingress controller with automatic TLS and ACME/Let's Encrypt integration
- n8n: workflow automation tool with Prometheus metrics integration, configurable authentication, and Grafana dashboard integration
- Obsidian Sync: self-hosted Obsidian sync server with a CouchDB backend for data storage, monitored with Prometheus and visualized in Grafana dashboards
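A quick curl loop verifies that the ingress routes and certificates for these services respond; the hostnames are placeholders for your own domains:

```bash
# Check HTTP status and TLS for each published hostname (placeholders)
for host in grafana.your-domain.com automate.your-domain.com; do
  printf '%s: ' "$host"
  curl -sk -o /dev/null -w '%{http_code}\n' "https://$host"
done
```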
The project includes a comprehensive monitoring stack:

- Alertmanager: email notifications for alerts, configured with appropriate inhibition rules and customized for k3s environments
- Grafana dashboards: Kubernetes system resources, node resources, and custom dashboards for all services (n8n, CouchDB, Obsidian)
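To exercise the email route end to end, a synthetic alert can be posted to Alertmanager's v2 API; the service name below is the kube-prometheus-stack default and is an assumption:

```bash
# Fire a test alert through Alertmanager's v2 API
kubectl -n monitoring port-forward svc/kube-prometheus-stack-alertmanager 9093:9093 &
sleep 2
curl -s -XPOST http://localhost:9093/api/v2/alerts \
  -H 'Content-Type: application/json' \
  -d '[{"labels":{"alertname":"TestAlert","severity":"warning"},"annotations":{"summary":"Synthetic test alert"}}]'
kill $!
```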
Grafana is accessible via ingress at https://grafana.your-domain.com or:

```bash
kubectl port-forward svc/kube-prometheus-stack-grafana 3000:80 -n monitoring
# Open http://localhost:3000
# Username: admin, Password: from terraform.tfvars
```

The Traefik dashboard:

```bash
kubectl port-forward svc/traefik 9000:9000 -n kube-system
# Open http://localhost:9000/dashboard/
```

n8n is accessible via the configured Ingress at https://automate.your-domain.com.

CouchDB (the Obsidian sync backend):

```bash
kubectl port-forward svc/couchdb 9984:5984 -n obsidian
# Open http://localhost:9984
```

- Keep `terraform.tfvars` and secrets secure
- The node token file should not be committed to version control
- For k3s-specific monitoring configuration, see `kubernetes/README-monitoring.md`
- Alert notifications are configured to use email via Alertmanager
This project uses HashiCorp Vault for secure credential management. All sensitive information is stored in Vault and retrieved by applications at runtime, rather than being stored in manifest files.
To deploy and configure the Vault server:
```bash
# Deploy Vault with secure credentials
cd kubernetes/vault
VAULT_PASSWORD="your-secure-password" SMTP_PASSWORD="your-email-app-password" ./deploy-vault.sh

# Deploy the Vault Secrets Operator to sync credentials to Kubernetes
./deploy-secrets-operator.sh
```

- Web UI Access:
  - The Vault UI is available at https://vault.xalg.im
  - Use the initial root password: `********` (refer to the deployment script)
- CLI Access:
  - Source the credentials file to load environment variables: `source ~/.vault/credentials`
  - Access Vault using the CLI:

    ```bash
    export VAULT_ADDR=https://vault.xalg.im
    vault login -method=token "$VAULT_ROOT_TOKEN"
    ```
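After the operator is deployed, you can confirm secrets are syncing into Kubernetes. The resource kind below belongs to the Vault Secrets Operator, and the namespace is an assumption from the services above:

```bash
# Check that the Vault Secrets Operator is syncing Vault data into Kubernetes Secrets
kubectl get vaultstaticsecrets -A                 # VSO custom resources (assumed in use here)
kubectl get secret -n obsidian | grep -i couchdb  # example synced credential
```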
The following credentials are securely stored in Vault:
- AlertManager: Email SMTP configuration
- n8n: Admin username and password
- CouchDB: Database credentials for Obsidian sync
- K3s: Cluster token
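Reading a credential back follows the same kv paths, for example (a sketch, assuming the `secret/` mount used below):

```bash
# Read a stored secret (requires VAULT_ADDR and a valid token, as above)
vault kv get secret/n8n
vault kv get -field=admin_user secret/n8n
```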
To update a secret:
```bash
# Export variables from the credentials file
source ~/.vault/credentials

# Update a secret using the Vault CLI
export VAULT_ADDR=https://vault.xalg.im
vault kv put secret/n8n admin_password="********" admin_user="admin"
```

- After first login, change the root token and initial password
- Back up the `~/.vault/credentials` file to a secure location
- Avoid committing any plain-text credentials to version control