A complete toolkit for provisioning and managing clusters using Lima VMs on macOS. Features user-friendly shell script automation with comprehensive error handling, interactive setup wizards, and rich cluster management capabilities. Supports Kubernetes-based deployments and bare-metal MinIO clusters with enterprise-grade storage solutions.
✨ New! This project now uses intuitive shell scripts instead of Makefiles for better user experience, error handling, and flexibility. All previous functionality is preserved with improved usability.
This project provides a comprehensive toolkit for creating development and production clusters on your local machine using Lima VMs. It offers two complementary approaches:
- User-friendly Scripts: Interactive setup with comprehensive error handling
- Flexible Deployment: Step-by-step or full automation workflows
- Rich Management: Built-in cluster status, logs, and SSH utilities
- Enterprise Integration: AIStor (Commercial MinIO) support
- Direct Shell Scripts: Proven automation scripts from production use in `legacy/`
- Lima Templates: Ready-to-use VM configurations
- Quick Deployment: Fast cluster provisioning for immediate needs
- Kubernetes Clusters: Container orchestration with AIStor (Commercial MinIO) integration
- Bare-metal MinIO: High-performance object storage clusters for dedicated storage workloads
- Mixed Environments: Combine approaches based on specific needs
- macOS with Apple Silicon or Intel
- Lima installed (`brew install lima`)
- Ansible installed (`brew install ansible`)
- At least 8GB RAM and 50GB free disk space (for development clusters)
- Clone this repository:

  ```bash
  git clone https://github.com/pavelanni/lima-ops.git
  cd lima-ops
  ```

- Verify prerequisites:

  ```bash
  lima --version
  ansible --version
  ```

- Lima should be version 1.1+
- Ansible was tested with version 2.18 (core)
```bash
# Launch the interactive setup wizard
./scripts/interactive-setup.sh
```

```bash
# Small development cluster (1 control-plane + 1 worker)
./scripts/deploy-cluster.sh --config ansible/vars/dev-small.yml --name dev

# Check cluster status
./scripts/manage-cluster.sh status dev

# SSH into a node
./scripts/manage-cluster.sh ssh dev control-plane-01
```

```bash
# Main deployment orchestration
./scripts/deploy-cluster.sh --help

# Cluster management utilities
./scripts/manage-cluster.sh --help

# Interactive wizard
./scripts/interactive-setup.sh
```

The project includes several pre-configured cluster templates:
| Config File | Description | Nodes | Resources |
|---|---|---|---|
| `ansible/vars/dev-small.yml` | Small dev cluster | 1 control + 1 worker | 2 CPU, 2GB RAM each |
| `ansible/vars/prod-large.yml` | Large prod cluster | 3 control + 4 workers | 4 CPU, 8GB RAM each |
| `ansible/vars/baremetal-simple.yml` | Simple MinIO storage | 4 storage nodes | 2 CPU, 4GB RAM each |
| `ansible/vars/cluster_config.yml` | Default Kubernetes | 1 control + 2 workers | Variable resources |
| `ansible/vars/baremetal_config.yml` | Default MinIO | Variable nodes | Variable resources |
```bash
# Development cluster
./scripts/deploy-cluster.sh --config ansible/vars/dev-small.yml --name dev

# Production cluster
./scripts/deploy-cluster.sh --config ansible/vars/prod-large.yml --name production

# Bare-metal storage cluster
./scripts/deploy-cluster.sh --config ansible/vars/baremetal-simple.yml --name storage

# Interactive deployment (recommended)
./scripts/interactive-setup.sh
```

```bash
# 1. Validate configuration
./scripts/deploy-cluster.sh validate --config ansible/vars/dev-small.yml --name dev

# 2. Create storage disks
./scripts/deploy-cluster.sh create-disks --config ansible/vars/dev-small.yml --name dev

# 3. Provision VMs
./scripts/deploy-cluster.sh provision --config ansible/vars/dev-small.yml --name dev

# 4. Configure VMs
./scripts/deploy-cluster.sh configure --name dev

# 5. Mount storage disks
./scripts/deploy-cluster.sh mount-disks --name dev

# 6. Deploy applications
./scripts/deploy-cluster.sh deploy --name dev
```

```bash
# Show all clusters
./scripts/manage-cluster.sh list

# Show cluster status
./scripts/manage-cluster.sh status dev

# SSH into a VM
./scripts/manage-cluster.sh ssh dev control-plane-01

# Show VM logs
./scripts/manage-cluster.sh logs dev worker-01

# Destroy cluster
./scripts/deploy-cluster.sh destroy --name dev

# Get help
./scripts/deploy-cluster.sh --help
./scripts/manage-cluster.sh --help
```

For Kubernetes deployments, a kubeconfig file is automatically generated at `kubeconfig-{cluster_name}.yaml`. Use it to access your cluster from the host machine:
```bash
export KUBECONFIG=/path/to/lima-ops/kubeconfig-dev.yaml
kubectl get nodes
kubectl get pods -A
```

Security Note: Kubeconfig files contain certificates and private keys. They are:
- Automatically excluded from git commits (`.gitignore`)
- Should never be shared or committed to version control
- Generated fresh for each cluster deployment
- Specific to your local machine
For production or cloud deployments, ensure kubeconfig files are managed securely and never exposed in repositories.
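To confirm that a generated kubeconfig is actually covered by the repository's ignore rules, git can report which rule matches. A quick check, using the dev example above, might look like this:

```bash
# Ask git which ignore rule (if any) excludes the generated kubeconfig;
# empty output with a non-zero exit status means the file is NOT ignored.
git check-ignore -v kubeconfig-dev.yaml
```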
- Automated VM Provisioning: Creates Lima VMs with specified resources
- Dynamic Disk Management: Creates, formats, and mounts additional storage disks
- Smart Inventory Generation: Dynamic Ansible inventory with SSH configuration
- Multi-cluster Support: Deploy multiple isolated clusters simultaneously
- XFS Filesystem: High-performance filesystem for storage workloads
- UUID-based Mounting: Reliable disk mounting across reboots
- Automatic Cleanup: Handles existing mounts and filesystem signatures
- MinIO-optimized Paths: Storage mounted to `/mnt/minio{n}` for compatibility (see the mount sketch after this list)
- Kubernetes: K3s with AIStor (Commercial MinIO), DirectPV storage, ingress
- Bare-metal MinIO: Direct MinIO installation for maximum performance
- Enterprise Features: AIStor includes commercial support and advanced features
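For context, the UUID-based XFS mounting that the automation performs inside each VM is conceptually similar to the sketch below. The device name `/dev/vdb`, the mount options, and the single `minio1` mount point are illustrative assumptions; the playbooks handle this for you, per data disk.

```bash
# Illustrative sketch only -- the playbooks automate these steps per data disk.
# Assumes a hypothetical extra disk attached as /dev/vdb inside the VM.
sudo mkfs.xfs -f /dev/vdb                          # format the disk with XFS
uuid=$(sudo blkid -s UUID -o value /dev/vdb)       # read the filesystem UUID
sudo mkdir -p /mnt/minio1                          # MinIO-style mount point
echo "UUID=${uuid} /mnt/minio1 xfs defaults 0 0" | sudo tee -a /etc/fstab
sudo mount /mnt/minio1                             # mount via the new fstab entry
```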
The project follows a three-phase deployment approach:
- Infrastructure Phase: VM creation, disk provisioning, networking setup
- Configuration Phase: OS configuration, disk mounting, package installation
- Application Phase: Kubernetes or MinIO deployment and configuration
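In CLI terms, the step-by-step subcommands shown earlier line up with these phases roughly as follows (using the `dev-small` example; the grouping is an interpretation, not a strict requirement of the scripts):

```bash
# Infrastructure phase: VM creation, disk provisioning, networking setup
./scripts/deploy-cluster.sh validate     --config ansible/vars/dev-small.yml --name dev
./scripts/deploy-cluster.sh create-disks --config ansible/vars/dev-small.yml --name dev
./scripts/deploy-cluster.sh provision    --config ansible/vars/dev-small.yml --name dev

# Configuration phase: OS configuration, disk mounting, package installation
./scripts/deploy-cluster.sh configure   --name dev
./scripts/deploy-cluster.sh mount-disks --name dev

# Application phase: Kubernetes or MinIO deployment and configuration
./scripts/deploy-cluster.sh deploy --name dev
```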
```
lima-ops/
├── ansible/              # Modern Ansible automation
│   ├── playbooks/        # Infrastructure and deployment playbooks
│   ├── vars/             # Cluster configuration templates
│   ├── templates/        # Jinja2 templates
│   ├── tasks/            # Reusable Ansible tasks
│   └── Makefile          # Ansible workflow automation
├── legacy/               # Battle-tested shell scripts
│   ├── templates/        # Lima VM templates
│   └── scripts/          # Provisioning and management scripts
├── docs/                 # Documentation and guides
├── examples/             # Usage examples and tutorials
└── README.md             # This file
```
- Copy an existing configuration:

  ```bash
  cp ansible/vars/dev-small.yml ansible/vars/my-custom.yml
  ```

- Edit the configuration:

  ```yaml
  kubernetes_cluster:
    name: "my-custom"
    nodes:
      - name: "control-01"
        role: "control-plane"
        cpus: 4
        memory: "8GiB"
        disk_size: "40GiB"
        additional_disks:
          - name: "disk1"
            size: "100GiB"
  ```

- Deploy with your custom configuration:

  ```bash
  ./scripts/deploy-cluster.sh --config ansible/vars/my-custom.yml --name custom
  ```

Node configuration options:

- role: `control-plane` or `worker`
- cpus: Number of CPU cores (1-8)
- memory: RAM allocation (`2GiB`, `4GiB`, `8GiB`, etc.)
- disk_size: System disk size (`20GiB`, `40GiB`, etc.)
- additional_disks: Array of additional storage disks
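Before a full deployment, it can be worth running the `validate` subcommand against a custom configuration, as in the step-by-step workflow above:

```bash
# Check the custom configuration before provisioning anything
./scripts/deploy-cluster.sh validate --config ansible/vars/my-custom.yml --name custom
```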
Important: AIStor (Commercial MinIO) requires a valid SUBNET license.
- Visit the SUBNET Portal to obtain your license
- Add the license to your configuration:

  ```yaml
  aistor:
    license: "your-subnet-license-string-here"
  ```

- Keep the license secure and do not commit it to version control
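The example above uses a plain-text placeholder. One general Ansible technique for keeping the real license string out of plain-text vars files (not something this project requires) is to encrypt the value with Ansible Vault:

```bash
# Produce an encrypted value to paste into your vars file in place of the
# plain license string; you will be prompted for a vault password.
ansible-vault encrypt_string 'your-subnet-license-string-here' --name 'license'
```

Note that any playbook run consuming the encrypted value then needs a vault password (for example via `--ask-vault-pass`), which the wrapper scripts would have to pass through to Ansible.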
```bash
# Check Lima installation
lima --version

# Check available resources
vm_stat | grep "Pages free"

# Verify Lima directory permissions
ls -la ~/.lima/
```

```bash
# Check VM status
limactl list

# Verify SSH config
limactl shell CLUSTER_NAME-node-name

# Regenerate inventory
./scripts/deploy-cluster.sh provision --config CONFIG_FILE --name your-cluster
```

```bash
# Check disk status
limactl disk ls

# Verify disk creation
limactl disk ls

# Re-run disk mounting
./scripts/deploy-cluster.sh mount-disks --name your-cluster
```

```bash
# Check syntax
ansible-playbook --syntax-check ansible/playbooks/infrastructure/provision_vms.yml

# Run in dry-run mode
./scripts/deploy-cluster.sh --dry-run --config ansible/vars/dev-small.yml --name test

# Increase verbosity
ansible-playbook -vvv playbooks/infrastructure/provision_vms.yml
```

For Development:
- Use the `vars/dev-small.yml` configuration
- Allocate minimum resources (2GB RAM per node)
- Use smaller disk sizes
For Production Testing:
- Use the `vars/prod-large.yml` configuration
- Ensure sufficient host resources (32GB+ RAM recommended)
- Monitor Lima VM resource usage
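For a quick overview of what each VM has been allocated, `limactl` itself is enough:

```bash
# Lists every Lima instance with its CPU, memory, and disk allocation
limactl list
```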
- Lima logs: `~/.lima/CLUSTER_NAME-NODE_NAME/ha.stderr.log` (example below)
- Ansible logs: `ansible.log` (if configured)
- VM console: `limactl shell CLUSTER_NAME-NODE_NAME`
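For example, to follow the Lima host-agent log for the dev cluster's control-plane node while reproducing an issue (assuming the VM name follows the `CLUSTER_NAME-NODE_NAME` pattern above):

```bash
# Stream the Lima host-agent log for a single VM
tail -f ~/.lima/dev-control-plane-01/ha.stderr.log
```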
- Makefile: Ansible workflow automation (`ansible/Makefile`)
- CLAUDE.md: Detailed development documentation
- Playbooks: Modular Ansible automation
- Templates: Jinja2 templates for configuration generation
- Variables: YAML-based cluster definitions
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Test with multiple cluster configurations
- Update documentation as needed
- Submit a pull request
```bash
# Syntax validation
ansible-playbook --syntax-check ansible/playbooks/infrastructure/provision_vms.yml

# Dry run (safe testing)
./scripts/deploy-cluster.sh --dry-run --config ansible/vars/dev-small.yml --name test

# Full test with cleanup
./scripts/deploy-cluster.sh --config ansible/vars/dev-small.yml --name test
./scripts/deploy-cluster.sh destroy --name test
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Lima for lightweight VM management
- Ansible for infrastructure automation
- MinIO for object storage
- Kubernetes for container orchestration
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: See `CLAUDE.md` for detailed development information
Note: This tool is designed for development and testing purposes. For production deployments, consider using dedicated infrastructure providers and enterprise-grade solutions.