TODO LIST:
- [ ] Fix Task's preconditions with a Docker image to run this in... PLEASEEEE
- [ ] Fix secrets in democratic-csi.yaml
- [X] Add Ingress through Traefik / Kube-VIP (done?) - https://computingforgeeks.com/install-configure-traefik-ingress-controller-on-kubernetes/
  - Add annotations for the LB
  - Enable the dashboard proxy?
- [ ] Decide on a DNS registrar and set up automatic DNS - https://kubernetes-sigs.github.io/external-dns
- [X] Enable cert-manager & Let's Encrypt
- [ ] Deploy a Tailscale mesh - https://headscale.net/running-headscale-linux/#goal
- [ ] Deploy Headscale (for non-cloud Tailscale)
- [ ] Auto-update pipelines for Kairos
- [ ] Solve pipelines - this might just be easiest as GitHub runners in Kubernetes
- [ ] Fix ArgoCD init creds
- [ ] Fix ArgoCD install creds for GitHub
- [ ] Fix all pre-req passwords with 1Password?
- [ ] Fix democratic-csi secrets (currently has to be managed as a task): run `task argocd:secret` & repo with variables for user and pass
- [ ] Find a way to access ArgoCD with no ingress
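Until an ingress exists, ArgoCD can be reached through a port-forward. A minimal sketch, assuming the standard install in the default `argocd` namespace with the stock service and secret names:

```shell
# Forward the ArgoCD UI/API to localhost (run in a separate shell or background it)
kubectl -n argocd port-forward svc/argocd-server 8080:443 &

# Retrieve the initial admin password created by the standard install
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d
```

The UI is then available at https://localhost:8080 with user `admin`.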
- Define the manual steps?
- Attempt to automate them?
```shell
echo $(op read "op://Private/GitHub General Access Token/password") | docker login ghcr.io -u lordmuffin --password-stdin
docker run --rm -v ~/.kube/:/root/.kube:ro -v ${PWD}:/launcher -e TOKEN=<1Password Token> -ti homelab-launcher:v0.1.3 task 1password:install
```
Had to manually sync each Vault resource in ArgoCD. **Note:** port-forward to `vault-0` during configuration.
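The port-forward and one-time initialization can be sketched as follows, assuming the `vault` namespace and a local `vault` CLI (the actual steps here are wrapped by the `vault:init` task):

```shell
# Forward Vault's API locally during configuration (background it or use another shell)
kubectl -n vault port-forward vault-0 8200:8200 &

export VAULT_ADDR=http://127.0.0.1:8200

# One-time init; store the unseal key(s) and root token somewhere safe (e.g. 1Password)
vault operator init -key-shares=1 -key-threshold=1
vault operator unseal <unseal-key>
```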
SOLVED STEPS:
- run launcher in kairos cluster
- execute vault:init steps
```shell
docker run --rm -v ~/.kube/:/root/.kube:ro -v ${PWD}/terraform:/terraform -e ENV=$ENV -e TF_VAR_api_key=$(op read "op://HomeLab/Rackspace API Credentials/credential") --workdir=/terraform -ti hashicorp/terraform:light plan
docker run --rm -v ~/.kube/:/root/.kube:ro -v ${PWD}/terraform:/terraform -e ENV=$ENV -e TF_VAR_api_key=$(op read "op://HomeLab/Rackspace API Credentials/credential") --workdir=/terraform -ti hashicorp/terraform:light apply
```
Be sure to deploy the required VMs ahead of time, then run the Kairos steps for the control nodes first.
Run this on a Linux server to serve AuroraBoot. (An earlier image tag used here was `ghcr.io/lordmuffin/custom-ubuntu-22.04-standard-amd64-generic-v2.4.3-k3sv1.28.2-k3s1:v0.0.4`.)
```shell
cat <<EOF | sudo docker run --rm -i --net host quay.io/kairos/auroraboot \
  --cloud-config - \
  --set "container_image=ghcr.io/lordmuffin/k8s-kairos:v1.28"
#cloud-config
install:
  auto: true
  device: "auto"
  reboot: true
hostname: kairos-{{ trunc 4 .MachineID }}
users:
  - name: kairos
    # Change to your pass here
    passwd: kairos
    ssh_authorized_keys:
      # Replace with your GitHub user:
      - github:lordmuffin
k3s:
  enabled: true
  args:
    - --disable=traefik,servicelb,kube-proxy
    - --flannel-backend=none
    - --disable-network-policy
    - --node-taint dedicated=control:NoSchedule
  env:
    # Replace with your own cluster token
    K3S_TOKEN: K10a6c1c8c50f2d48e8c42b146dc197863b0b999acec022f2b4e5f993d8e94b552f::server:1wz8kq.piy4kdi3ofc14ilw
EOF
```
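`K3S_TOKEN` is a shared cluster-join secret, and any high-entropy string works, so it is worth generating a fresh one instead of reusing the value committed above. A minimal sketch:

```shell
# Generate a random 64-hex-character token suitable for K3S_TOKEN
K3S_TOKEN=$(openssl rand -hex 32)
echo "$K3S_TOKEN"
```

Every server and agent joining the cluster must be given the same token value.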
OLD:

```shell
sudo docker run --rm -ti --net host quay.io/kairos/auroraboot \
  --set "artifact_version=v2.4.3-k3sv1.28.2+k3s1" \
  --set "release_version=v2.4.3" \
  --set "flavor=ubuntu" \
  --set "flavor_release=22.04" \
  --set repository="kairos-io/kairos" \
  --cloud-config https://raw.githubusercontent.com/lordmuffin/homelab/main/launcher/kairos-config/k3s-HA-lab.yaml \
  --set "network.token=<TOKEN HERE>"
```
```shell
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
sudo mkdir -p /usr/local/bin

# Install the cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

# Install Cilium as CNI with kube-proxy replacement
API_SERVER_IP="192.168.10.20"
API_SERVER_PORT="6443"
cilium install --version 1.15.5 --namespace cilium --set=ipam.operator.clusterPoolIPv4PodCIDRList="10.42.0.0/16" --set kubeProxyReplacement=strict --set k8sServiceHost=${API_SERVER_IP} --set k8sServicePort=${API_SERVER_PORT}
cilium hubble enable --namespace cilium
# MAY NEED TO ALSO INSTALL THIS: https://docs.cilium.io/en/stable/gettingstarted/hubble_setup/#hubble-setup
```
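Before moving on it helps to confirm the CNI actually came up healthy. A sketch using the same `cilium` CLI installed above (the connectivity test is optional and deploys temporary test workloads):

```shell
# Block until Cilium reports all components healthy
cilium status --namespace cilium --wait

# Optional end-to-end validation; takes a few minutes
cilium connectivity test
```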
```shell
docker run --rm -v ~/.kube/:/root/.kube -v ${PWD}:/launcher -ti homelab-launcher:v0.2.0 task cluster:update-config
```
```shell
export IP=192.168.10.30
export USER=ubuntu
export NAME=dev-lab
export SSH_PRIV_KEY=~/.ssh/ubuntu.pem
rm -f $SSH_PRIV_KEY
rm -f ~/.kube/config
op read --out-file $SSH_PRIV_KEY "op://HomeLab/onarfzninuoetwe2hh2ni7m52q/private key?ssh-format=openssh"
k3sup install --ip $IP --user $USER --skip-install --ssh-key $SSH_PRIV_KEY --merge --local-path ~/.kube/config --context $NAME
```
```shell
export IP=192.168.11.30
export USER=ubuntu
export NAME=prod-lab
k3sup install --ip $IP --user $USER --skip-install --ssh-key $SSH_PRIV_KEY --merge --local-path ~/.kube/config --context $NAME
```
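After both merges, a quick sanity check that the `dev-lab` and `prod-lab` contexts landed in `~/.kube/config` and can reach their clusters:

```shell
# List merged contexts and confirm each cluster answers
kubectl config get-contexts
kubectl --context dev-lab get nodes
kubectl --context prod-lab get nodes
```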
```shell
export ENV="prod-lab"
export OP_TOKEN="$(op read "op://HomeLab/x65o3xuspdsumormc5ffp4p2v4/credential")"
export GH_USER="lordmuffin"
export GH_PASS="$(op read "op://Private/GitHub General Access Token/password")"
export NAS_API_KEY="$(op read "op://Private/TrueNAS API Key/password")"

docker run --rm -v ~/.kube/:/root/.kube -v ${PWD}:/launcher -e ENV=$ENV -e OP_TOKEN=$OP_TOKEN -e GH_USER=$GH_USER -e GH_PASS=$GH_PASS -e NAS_API_KEY=$NAS_API_KEY -ti homelab-launcher:v0.1.3 task cluster:pre-seed
docker run --rm -v ~/.kube/:/root/.kube:ro -v ${PWD}:/launcher -ti ghcr.io/lordmuffin/homelab-launcher:v0.2.0 task namespaces:create
docker run --rm -v ~/.kube/:/root/.kube -v ${PWD}:/launcher -e ENV=$ENV -e OP_TOKEN=$(op read "op://HomeLab/6kduu4inv7zuqgiursj7yvdmfi/credential") -ti ghcr.io/lordmuffin/homelab-launcher:v0.2.0 task 1password:install
docker run --rm -v ~/.kube/:/root/.kube -v ${PWD}:/launcher -e ENV=$ENV -e NAS_API_KEY=$NAS_API_KEY -ti homelab-launcher:v0.1.3 task secrets:democratic-csi-nfs-driver-config
docker run --rm -v ~/.kube/:/root/.kube -v ${PWD}:/launcher -e ENV=$ENV -e NAS_API_KEY=$NAS_API_KEY -ti homelab-launcher:v0.1.3 task secrets:democratic-csi-driver-config
docker run --rm -v ~/.kube/:/root/.kube:ro -v ${PWD}:/launcher -e ENV=$ENV -e GH_USER="lordmuffin" -e GH_PASS=$(op read "op://Private/GitHub General Access Token/password") -ti ghcr.io/lordmuffin/homelab-launcher:v0.2.0 task argocd:install
docker run --rm -v ~/.kube/:/root/.kube -v ${PWD}:/launcher -e ENV=$ENV -ti homelab-launcher:v0.1.3 task utilities:restart
```
Proxmox GPU passthrough guide: https://www.virtualizationhowto.com/2023/10/proxmox-gpu-passthrough-step-by-step-guide/
- Need to wait for the Kairos nodes to set up (requires the main control nodes to all be running).
- Wait for the cluster.
- Install Cilium via Helm to turn everything green. :D
CILIUM FIX FOR COREDNS:

```shell
echo 'net.ipv4.conf.lxc*.rp_filter = 0' | sudo tee -a /etc/sysctl.d/90-override.conf && sudo systemctl start systemd-sysctl
```
GitOps-managed k3s cluster.

Implemented applications:

| Application | Category |
| --- | --- |
| ArgoCD | GitOps |
| CertManager | Networking |
| Changedetection.io | Services |
| Crossplane | GitOps |
| External-DNS | Networking |
| Hashicorp's Vault | Security |
| Home Assistant | Smart Home |
| Kube-vip | Networking |
| kube-prometheus | Monitoring |
| Milvus | Databases |
| Gitea | GitOps |
| n8n | Services |
| Redis Operator | Databases |
| Unifi Controller | Networking |
| Unifi Poller | Monitoring |
| Uptime Kuma | Monitoring |
| Wyze API Bridge | Smart Home |
| Tailscale-operator | Networking |
| Cloudflared (as proxies) | Networking |
Cluster Utilities
- argocd-image-updater Automatically updates a deployment's image version tag and writes it back to a GitHub repository.
- Reflector Replicates a `Secret` or `configMap` between namespaces automatically.
- Descheduler Monitors whether workloads are evenly distributed across nodes and cleans up failed pods that remained as orphans/stuck.
- Eraser A daemonset responsible for cleaning up outdated images stored in the cluster nodes.
- Kube-fledged Allows for image caching on every node in the cluster, in order to speed up deployments of already existing applications.
- Kured All the cluster's nodes will be properly drained before rebooting and uncordoned once they're back online.
- Reloader Every time a `configMap` or a `Secret` resource is created or changed, the pods that use them will be reloaded.
- Trivy operator Generates security reports automatically in response to workload and other changes to the cluster.
- Democratic-CSI A CSI implementation for multiple ZFS-based storage systems.
- node-problem-detector Detects if a node has been affected by an issue such as faulty hardware or kernel deadlocks, preventing scheduling.
- Chaos Mesh A Cloud-native, lightweight, no-dependencies required Chaos Engineering Platform for Kubernetes.
- Wavy Patches Kubernetes resources with a VNC access using annotations to provide a GUI to any container.