diff --git a/manuscript/CHANGELOG.md b/manuscript/CHANGELOG.md index 39fc5c88..f689a20b 100644 --- a/manuscript/CHANGELOG.md +++ b/manuscript/CHANGELOG.md @@ -12,15 +12,15 @@ * Kubernetes recipes for UniFi controller, Miniflux, Kanboard and PrivateBin coming in March! (_19 Mar 2019_) ## Recently added recipes -* Added recipe for making your own [DIY Kubernetes Cluster](/kubernetes/diycluster/) (_14 December 2019_) -* Added recipe for [authenticating Traefik Forward Auth against KeyCloak](/ha-docker-swarm/traefik-forward-auth/keycloak/) (_16 May 2019_) -* Added [Bitwarden](/recipes/bitwarden/), an **awesome** open-source password manager, with great mobile sync support (_14 May 2019_) -* Added [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), replacing function of multiple [oauth_proxies](/reference/oauth_proxy/) with a single, 7MB Go application, which can authenticate against Google, [KeyCloak](/recipes/keycloak/), and other OIDC providers (_10 May 2019_) -* Added Kubernetes version of [Miniflux](/recipes/kubernetes/miniflux/) recipe, a minimalistic RSS reader supporting the Fever API (_26 Mar 2019_) +* Overhauled [Ceph (Shared Storage)](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph/) recipe for Ceph Octopus (v15) (_25 May 2020_) +* Added recipe for making your own [DIY Kubernetes Cluster](https://geek-cookbook.funkypenguin.co.nz/kubernetes/diycluster/) (_14 December 2019_) +* Added recipe for [authenticating Traefik Forward Auth against KeyCloak](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik-forward-auth/keycloak/) (_16 May 2019_) +* Added [Bitwarden](https://geek-cookbook.funkypenguin.co.nz/recipes/bitwarden/), an **awesome** open-source password manager, with great mobile sync support (_14 May 2019_) +* Added [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik-forward-auth/), replacing function of multiple [oauth_proxies](https://geek-cookbook.funkypenguin.co.nz/reference/oauth_proxy/) with a single, 7MB Go application, which can authenticate against Google, [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/), and other OIDC providers (_10 May 2019_) ## Recent improvements -* Added recipe for [automated snapshots of Kubernetes Persistent Volumes](/kubernetes/snapshots/), instructions for using [Helm](/kubernetes/helm/), and recipe for deploying [Traefik](/kubernetes/traefik/), which completes the Kubernetes cluster design! (_9 Feb 2019_) -* Added detailed description (_and diagram_) of our [Kubernetes design](/kubernetes/design/), plus a [simple load-balancer design](kubernetes/loadbalancer/) to avoid the complexities/costs of permitting ingress access to a cluster (_7 Feb 2019_) -* Added an [introductory/explanatory page, including a children's story, on Kubernetes](/kubernetes/start/) (_29 Jan 2019_) -* [NextCloud](/recipes/nextcloud/) updated to fix CalDAV/CardDAV service discovery behind Traefik reverse proxy (_12 Dec 2018_) +* Added recipe for [automated snapshots of Kubernetes Persistent Volumes](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/), instructions for using [Helm](https://geek-cookbook.funkypenguin.co.nz/kubernetes/helm/), and recipe for deploying [Traefik](https://geek-cookbook.funkypenguin.co.nz/kubernetes/traefik/), which completes the Kubernetes cluster design! 
(_9 Feb 2019_) +* Added detailed description (_and diagram_) of our [Kubernetes design](https://geek-cookbook.funkypenguin.co.nz/kubernetes/design/), plus a [simple load-balancer design](kubernetes/loadbalancer/) to avoid the complexities/costs of permitting ingress access to a cluster (_7 Feb 2019_) +* Added an [introductory/explanatory page, including a children's story, on Kubernetes](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) (_29 Jan 2019_) +* [NextCloud](https://geek-cookbook.funkypenguin.co.nz/recipes/nextcloud/) updated to fix CalDAV/CardDAV service discovery behind Traefik reverse proxy (_12 Dec 2018_) diff --git a/manuscript/Gemfile.lock b/manuscript/Gemfile.lock index f8d4775f..ac8ee0e2 100644 --- a/manuscript/Gemfile.lock +++ b/manuscript/Gemfile.lock @@ -1,7 +1,7 @@ GEM remote: https://rubygems.org/ specs: - activesupport (5.2.3) + activesupport (5.2.4.3) concurrent-ruby (~> 1.0, >= 1.0.2) i18n (>= 0.7, < 2) minitest (~> 5.1) @@ -9,7 +9,7 @@ GEM addressable (2.6.0) public_suffix (>= 2.0.2, < 4.0) colorize (0.8.1) - concurrent-ruby (1.1.5) + concurrent-ruby (1.1.6) ethon (0.12.0) ffi (>= 1.3.0) ffi (1.10.0) @@ -22,19 +22,19 @@ GEM parallel (~> 1.3) typhoeus (~> 1.3) yell (~> 2.0) - i18n (1.6.0) + i18n (1.8.2) concurrent-ruby (~> 1.0) mercenary (0.3.6) mini_portile2 (2.4.0) - minitest (5.11.3) - nokogiri (1.10.5) + minitest (5.14.1) + nokogiri (1.10.9) mini_portile2 (~> 2.4.0) parallel (1.17.0) public_suffix (3.0.3) thread_safe (0.3.6) typhoeus (1.3.1) ethon (>= 0.9.0) - tzinfo (1.2.5) + tzinfo (1.2.7) thread_safe (~> 0.1) yell (2.1.0) diff --git a/manuscript/book.txt b/manuscript/book.txt index fd7468d7..2a41eb85 100644 --- a/manuscript/book.txt +++ b/manuscript/book.txt @@ -48,21 +48,10 @@ recipes/phpipam.md recipes/plex.md recipes/privatebin.md recipes/swarmprom.md -recipes/turtle-pool.md sections/menu-docker.md recipes/bitwarden.md recipes/bookstack.md -recipes/cryptominer.md -recipes/cryptominer/mining-rig.md -recipes/cryptominer/amd-gpu.md -recipes/cryptominer/nvidia-gpu.md -recipes/cryptominer/mining-pool.md -recipes/cryptominer/wallet.md -recipes/cryptominer/exchange.md -recipes/cryptominer/minerhotel.md -recipes/cryptominer/monitor.md -recipes/cryptominer/profit.md recipes/calibre-web.md recipes/collabora-online.md recipes/ghost.md @@ -89,7 +78,6 @@ sections/reference.md reference/oauth_proxy.md reference/data_layout.md reference/networks.md -reference/containers.md reference/git-docker.md reference/openvpn.md -reference/troubleshooting.md +reference/troubleshooting.md \ No newline at end of file diff --git a/manuscript/extras/javascript/auto-expand-nav.js b/manuscript/extras/javascript/auto-expand-nav.js new file mode 100644 index 00000000..00c64e35 --- /dev/null +++ b/manuscript/extras/javascript/auto-expand-nav.js @@ -0,0 +1,27 @@ +document.addEventListener("DOMContentLoaded", function() { + load_navpane(); +}); + +function load_navpane() { + var width = window.innerWidth; + if (width <= 1200) { + return; + } + + var nav = document.getElementsByClassName("md-nav"); + for(var i = 0; i < nav.length; i++) { + if (typeof nav.item(i).style === "undefined") { + continue; + } + + if (nav.item(i).getAttribute("data-md-level") && nav.item(i).getAttribute("data-md-component")) { + nav.item(i).style.display = 'block'; + nav.item(i).style.overflow = 'visible'; + } + } + + var nav = document.getElementsByClassName("md-nav__toggle"); + for(var i = 0; i < nav.length; i++) { + nav.item(i).checked = true; + } +} \ No newline at end of file diff --git 
a/manuscript/extras/javascript/discord.js b/manuscript/extras/javascript/discord.js index 0b455f7a..6f377e28 100644 --- a/manuscript/extras/javascript/discord.js +++ b/manuscript/extras/javascript/discord.js @@ -2,7 +2,7 @@ const button = new Crate({ server: '396055506072109067', channel: '456689991326760973', - shard: 'https://disweb.deploys.io', + shard: 'https://e.widgetbot.io', color: '#795548', indicator: false, notifications: true diff --git a/manuscript/ha-docker-swarm/design.md b/manuscript/ha-docker-swarm/design.md index b11e9672..240baa22 100644 --- a/manuscript/ha-docker-swarm/design.md +++ b/manuscript/ha-docker-swarm/design.md @@ -5,7 +5,7 @@ In the design described below, our "private cloud" platform is: * **Highly-available** (_can tolerate the failure of a single component_) * **Scalable** (_can add resource or capacity as required_) * **Portable** (_run it on your garage server today, run it in AWS tomorrow_) -* **Secure** (_access protected with [LetsEncrypt certificates](/ha-docker-swarm/traefik/) and optional [OIDC with 2FA](/ha-docker-swarm/traefik-forward-auth/)_) +* **Secure** (_access protected with [LetsEncrypt certificates](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik/) and optional [OIDC with 2FA](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik-forward-auth/)_) * **Automated** (_requires minimal care and feeding_) ## Design Decisions @@ -15,9 +15,8 @@ In the design described below, our "private cloud" platform is: This means that: * At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure. -* [Ceph](/ha-docker-swarm/shared-storage-ceph/) is employed for share storage, because it too can be made tolerant of a single failure. +* [Ceph](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph/) is employed for share storage, because it too can be made tolerant of a single failure. -!!! note An exception to the 3-nodes decision is running a single-node configuration. If you only **have** one node, then obviously your swarm is only as resilient as that node. It's still a perfectly valid swarm configuration, ideal for starting your self-hosting journey. In fact, under the single-node configuration, you don't need ceph either, and you can simply use the local volume on your host for storage. You'll be able to migrate to ceph/more nodes if/when you expand. **Where multiple solutions to a requirement exist, preference will be given to the most portable solution.** @@ -38,8 +37,8 @@ Under this design, the only inbound connections we're permitting to our docker s ### Authentication -* Where the hosted application provides a trusted level of authentication (*i.e., [NextCloud](/recipes/nextcloud/)*), or where the application requires public exposure (*i.e. [Privatebin](/recipes/privatebin/)*), no additional layer of authentication will be required. -* Where the hosted application provides inadequate (*i.e. [NZBGet](/recipes/autopirate/nzbget/)*) or no authentication (*i.e. [Gollum](/recipes/gollum/)*), a further authentication against an OAuth provider will be required. +* Where the hosted application provides a trusted level of authentication (*i.e., [NextCloud](https://geek-cookbook.funkypenguin.co.nz/recipes/nextcloud/)*), or where the application requires public exposure (*i.e. [Privatebin](https://geek-cookbook.funkypenguin.co.nz/recipes/privatebin/)*), no additional layer of authentication will be required. +* Where the hosted application provides inadequate (*i.e. 
[NZBGet](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/nzbget/)*) or no authentication (*i.e. [Gollum](https://geek-cookbook.funkypenguin.co.nz/recipes/gollum/)*), a further authentication against an OAuth provider will be required. ## High availability @@ -92,4 +91,4 @@ In summary, although I suffered an **unplanned power outage to all of my infrast [^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient. -## Chef's Notes 📓 \ No newline at end of file +## Chef's Notes \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/docker-swarm-mode.md b/manuscript/ha-docker-swarm/docker-swarm-mode.md index c909fbbe..95f8bd89 100644 --- a/manuscript/ha-docker-swarm/docker-swarm-mode.md +++ b/manuscript/ha-docker-swarm/docker-swarm-mode.md @@ -4,7 +4,6 @@ For truly highly-available services with Docker containers, we need an orchestra ## Ingredients -!!! summary Existing * [X] 3 x nodes (*bare-metal or VMs*), each with: @@ -81,13 +80,13 @@ To add a manager to this swarm, run the following command: Run the command provided on your other nodes to join them to the swarm as managers. After addition of a node, the output of ```docker node ls``` (on either host) should reflect all the nodes: -```` +``` [root@ds2 davidy]# docker node ls ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS b54vls3wf8xztwfz79nlkivt8 ds1.funkypenguin.co.nz Ready Active Leader xmw49jt5a1j87a6ihul76gbgy * ds2.funkypenguin.co.nz Ready Active Reachable [root@ds2 davidy]# -```` +``` ### Setup automated cleanup @@ -127,8 +126,7 @@ networks: - subnet: 172.16.0.0/24 ``` -!!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. + Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](https://geek-cookbook.funkypenguin.co.nz/reference/networks/) here. Launch the cleanup stack by running ```docker stack deploy docker-cleanup -c ``` @@ -167,10 +165,9 @@ Launch shepherd by running ```docker stack deploy shepherd -c /var/data/config/s ### Summary -!!! summary - Created +After completing the above, you should have: - * [X] [Docker swarm cluster](/ha-docker-swarm/design/) +* [X] [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/design/) -## Chef's Notes 📓 \ No newline at end of file +## Chef's Notes \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/keepalived.md b/manuscript/ha-docker-swarm/keepalived.md index 84b9d777..40980d59 100644 --- a/manuscript/ha-docker-swarm/keepalived.md +++ b/manuscript/ha-docker-swarm/keepalived.md @@ -10,7 +10,6 @@ This is accomplished with the use of keepalived on at least two nodes. ## Ingredients -!!! summary "Ingredients" Already deployed: * [X] At least 2 x swarm nodes @@ -65,7 +64,7 @@ docker run -d --name keepalived --restart=always \ That's it. Each node will talk to the other via unicast (no need to un-firewall multicast addresses), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker's routing mesh will deliver it to the appropriate docker node. -## Chef's notes 📓 +## Chef's notes 1. Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. 
Each node is only able to receive traffic targetted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections. 2. More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master. \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/nodes.md b/manuscript/ha-docker-swarm/nodes.md index 373045c4..0ef2db93 100644 --- a/manuscript/ha-docker-swarm/nodes.md +++ b/manuscript/ha-docker-swarm/nodes.md @@ -2,12 +2,10 @@ Let's start building our cluster. You can use either bare-metal machines or virtual machines - the configuration would be the same. To avoid confusion, I'll be referring to these as "nodes" from now on. -!!! note - In 2017, I **initially** chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in [Plex](/recipes/plex/)), [Swarmprom](/recipes/swarmprom/), etc. In the end, I went mainstream and simply preferred a modern Ubuntu installation. + In 2017, I **initially** chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in [Plex](https://geek-cookbook.funkypenguin.co.nz/recipes/plex/)), [Swarmprom](https://geek-cookbook.funkypenguin.co.nz/recipes/swarmprom/), etc. In the end, I went mainstream and simply preferred a modern Ubuntu installation. ## Ingredients -!!! summary "Ingredients" New in this recipe: * [ ] 3 x nodes (*bare-metal or VMs*), each with: @@ -67,7 +65,6 @@ ln -sf /usr/share/zoneinfo/ /etc/localtime After completing the above, you should have: -!!! summary "Summary" Deployed in this recipe: * [X] 3 x nodes (*bare-metal or VMs*), each with: @@ -76,4 +73,4 @@ After completing the above, you should have: * At least 20GB disk space (_but it'll be tight_) * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_) -## Chef's Notes 📓 \ No newline at end of file +## Chef's Notes \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/registry.md b/manuscript/ha-docker-swarm/registry.md index 234e6f74..7cdfa845 100644 --- a/manuscript/ha-docker-swarm/registry.md +++ b/manuscript/ha-docker-swarm/registry.md @@ -10,8 +10,8 @@ The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Cu ## Ingredients -1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik) configured per design +1. [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph.md) +2. [Traefik](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik) configured per design 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP @@ -44,7 +44,6 @@ networks: external: true ``` -!!! 
note "Unencrypted registry" We create this registry without consideration for SSL, which will fail if we attempt to use the registry directly. However, we're going to use the HTTPS-proxied version via Traefik, leveraging Traefik to manage the LetsEncrypt certificates required. @@ -62,7 +61,7 @@ storage: delete: enabled: true http: - addr: :5000 + addr5000 headers: X-Content-Type-Options: [nosniff] health: @@ -103,11 +102,10 @@ To: ``` Then restart docker by running: -```` +``` systemctl restart docker-latest -```` +``` -!!! tip "" Note the extra comma required after "false" above -## Chef's notes 📓 \ No newline at end of file +## Chef's notes \ No newline at end of file diff --git a/manuscript/ha-docker-swarm/shared-storage-ceph.md b/manuscript/ha-docker-swarm/shared-storage-ceph.md index 717acd80..4416372e 100644 --- a/manuscript/ha-docker-swarm/shared-storage-ceph.md +++ b/manuscript/ha-docker-swarm/shared-storage-ceph.md @@ -2,196 +2,212 @@ While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node. -## Design - -### Why not GlusterFS? -I originally provided shared storage to my nodes using GlusterFS (see the next recipe for details), but found it difficult to deal with because: - -1. GlusterFS requires (n) "bricks", where (n) **has** to be a multiple of your replica count. I.e., if you want 2 copies of everything on shared storage (the minimum to provide redundancy), you **must** have either 2, 4, 6 (etc..) bricks. The HA swarm design calls for minimum of 3 nodes, and so under GlusterFS, my third node can't participate in shared storage at all, unless I start doubling up on bricks-per-node (which then impacts redundancy) -2. GlusterFS turns out to be a giant PITA when you want to restore a failed node. There are at [least 14 steps to follow](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Replacing_Hosts.html) to replace a brick. -3. I'm pretty sure I messed up the 14-step process above anyway. My replaced brick synced with my "original" brick, but produced errors when querying status via the CLI, and hogged 100% of 1 CPU on the replaced node. Inexperienced with GlusterFS, and unable to diagnose the fault, I switched to a Ceph cluster instead. - -### Why Ceph? - -1. I'm more familiar with Ceph - I use it in the OpenStack designs I manage -2. Replacing a failed node is **easy**, provided you can put up with the I/O load of rebalancing OSDs after the replacement. -3. CentOS Atomic includes the ceph client in the OS, so while the Ceph OSD/Mon/MSD are running under containers, I can keep an eye (and later, automatically monitor) the status of Ceph from the base OS. +![Ceph Screenshot](../images/ceph.png) ## Ingredients -!!! 
summary "Ingredients" 3 x Virtual Machines (configured earlier), each with: - * [X] CentOS/Fedora Atomic + * [X] Support for "modern" versions of Python and LVM * [X] At least 1GB RAM * [X] At least 20GB disk space (_but it'll be tight_) * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_) - * [ ] A second disk dedicated to the Ceph OSD + * [X] A second disk dedicated to the Ceph OSD + * [X] Each node should have the IP of every other participating node hard-coded in /etc/hosts (*including its own IP*) ## Preparation -### SELinux - -Since our Ceph components will be containerized, we need to ensure the SELinux context on the base OS's ceph files is set correctly: - -``` -mkdir /var/lib/ceph -chcon -Rt svirt_sandbox_file_t /etc/ceph -chcon -Rt svirt_sandbox_file_t /var/lib/ceph -``` -### Setup Monitors - -Pick a node, and run the following to stand up the first Ceph mon. Be sure to replace the values for **MON_IP** and **CEPH_PUBLIC_NETWORK** to those specific to your deployment: - -``` -docker run -d --net=host \ ---restart always \ --v /etc/ceph:/etc/ceph \ --v /var/lib/ceph/:/var/lib/ceph/ \ --e MON_IP=192.168.31.11 \ --e CEPH_PUBLIC_NETWORK=192.168.31.0/24 \ ---name="ceph-mon" \ -ceph/daemon mon -``` - -Now **copy** the contents of /etc/ceph on this first node to the remaining nodes, and **then** run the docker command above (_customizing MON_IP as you go_) on each remaining node. You'll end up with a cluster with 3 monitors (odd number is required for quorum, same as Docker Swarm), and no OSDs (yet) + Earlier iterations of this recipe (*based on [Ceph Jewel](https://docs.ceph.com/docs/master/releases/jewel/)*) required significant manual effort to install Ceph in a Docker environment. In the 2+ years since Jewel was released, significant improvements have been made to the ceph "deploy-in-docker" process, including the [introduction of the cephadm tool](https://ceph.io/ceph-management/introducing-cephadm/). Cephadm is the tool which now does all the heavy lifting, below, for the current version of ceph, codenamed "[Octopus](https://www.youtube.com/watch?v=Gi58pN8W3hY)". + +### Pick a master node + +One of your nodes will become the cephadm "master" node. Although all nodes will participate in the Ceph cluster, the master node will be the node which we bootstrap ceph on. It's also the node which will run the Ceph dashboard, and on which future upgrades will be processed. It doesn't matter _which_ node you pick, and the cluster itself will operate in the event of a loss of the master node (although you won't see the dashboard) + +### Install cephadm on master node + +Run the following on the ==master== node: + +``` +MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'` +curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm +chmod +x cephadm +mkdir -p /etc/ceph +./cephadm bootstrap --mon-ip $MYIP +``` + +The process takes about 30 seconds, after which, you'll have a MVC (*Minimum Viable Cluster*)[^1], encompassing a single monitor and mgr instance on your chosen node. Here's the complete output from a fresh install: + +??? 
"Example output from a fresh cephadm bootstrap" + ``` + root@raphael:~# MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'` + root@raphael:~# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm + + root@raphael:~# chmod +x cephadm + root@raphael:~# mkdir -p /etc/ceph + root@raphael:~# ./cephadm bootstrap --mon-ip $MYIP + INFO:cephadm:Verifying podman|docker is present... + INFO:cephadm:Verifying lvm2 is present... + INFO:cephadm:Verifying time synchronization is in place... + INFO:cephadm:Unit systemd-timesyncd.service is enabled and running + INFO:cephadm:Repeating the final host check... + INFO:cephadm:podman|docker (/usr/bin/docker) is present + INFO:cephadm:systemctl is present + INFO:cephadm:lvcreate is present + INFO:cephadm:Unit systemd-timesyncd.service is enabled and running + INFO:cephadm:Host looks OK + INFO:root:Cluster fsid: bf3eff78-9e27-11ea-b40a-525400380101 + INFO:cephadm:Verifying IP 192.168.38.101 port 3300 ... + INFO:cephadm:Verifying IP 192.168.38.101 port 6789 ... + INFO:cephadm:Mon IP 192.168.38.101 is in CIDR network 192.168.38.0/24 + INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container... + INFO:cephadm:Extracting ceph user uid/gid from container image... + INFO:cephadm:Creating initial keys... + INFO:cephadm:Creating initial monmap... + INFO:cephadm:Creating mon... + INFO:cephadm:Waiting for mon to start... + INFO:cephadm:Waiting for mon... + INFO:cephadm:mon is available + INFO:cephadm:Assimilating anything we can from ceph.conf... + INFO:cephadm:Generating new minimal ceph.conf... + INFO:cephadm:Restarting the monitor... + INFO:cephadm:Setting mon public_network... + INFO:cephadm:Creating mgr... + INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring + INFO:cephadm:Wrote config to /etc/ceph/ceph.conf + INFO:cephadm:Waiting for mgr to start... + INFO:cephadm:Waiting for mgr... + INFO:cephadm:mgr not available, waiting (1/10)... + INFO:cephadm:mgr not available, waiting (2/10)... + INFO:cephadm:mgr not available, waiting (3/10)... + INFO:cephadm:mgr is available + INFO:cephadm:Enabling cephadm module... + INFO:cephadm:Waiting for the mgr to restart... + INFO:cephadm:Waiting for Mgr epoch 5... + INFO:cephadm:Mgr epoch 5 is available + INFO:cephadm:Setting orchestrator backend to cephadm... + INFO:cephadm:Generating ssh key... + INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub + INFO:cephadm:Adding key to root@localhost's authorized_keys... + INFO:cephadm:Adding host raphael... + INFO:cephadm:Deploying mon service with default placement... + INFO:cephadm:Deploying mgr service with default placement... + INFO:cephadm:Deploying crash service with default placement... + INFO:cephadm:Enabling mgr prometheus module... + INFO:cephadm:Deploying prometheus service with default placement... + INFO:cephadm:Deploying grafana service with default placement... + INFO:cephadm:Deploying node-exporter service with default placement... + INFO:cephadm:Deploying alertmanager service with default placement... + INFO:cephadm:Enabling the dashboard module... + INFO:cephadm:Waiting for the mgr to restart... + INFO:cephadm:Waiting for Mgr epoch 13... + INFO:cephadm:Mgr epoch 13 is available + INFO:cephadm:Generating a dashboard self-signed certificate... + INFO:cephadm:Creating initial admin user... + INFO:cephadm:Fetching dashboard port number... 
+ INFO:cephadm:Ceph Dashboard is now available at: -### Setup Managers + URL: https://raphael:8443/ + User: admin + Password: mid28k0yg5 -Since Ceph v12 ("Luminous"), some of the non-realtime cluster management responsibilities are delegated to a "manager". Run the following on every node - only one node will be __active__, the others will be in standby: + INFO:cephadm:You can access the Ceph CLI with: -``` -docker run -d --net=host \ ---privileged=true \ ---pid=host \ --v /etc/ceph:/etc/ceph \ --v /var/lib/ceph/:/var/lib/ceph/ \ ---name="ceph-mgr" \ ---restart=always \ -ceph/daemon mgr -``` - -### Setup OSDs - -Since we have a OSD-less mon-only cluster currently, prepare for OSD creation by dumping the auth credentials for the OSDs into the appropriate location on the base OS: - -``` -ceph auth get client.bootstrap-osd -o \ -/var/lib/ceph/bootstrap-osd/ceph.keyring -``` + sudo ./cephadm shell --fsid bf3eff78-9e27-11ea-b40a-525400380101 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring -On each node, you need a dedicated disk for the OSD. In the example below, I used _/dev/vdd_ (the entire disk, no partitions) for the OSD. + INFO:cephadm:Please consider enabling telemetry to help improve Ceph: -Run the following command on every node: + ceph telemetry on + + For more information see: + + https://docs.ceph.com/docs/master/mgr/telemetry/ + + INFO:cephadm:Bootstrap complete. + root@raphael:~# + ``` + + +### Prepare other nodes + +It's now necessary to tranfer the following files to your ==other== nodes, so that cephadm can add them to your cluster, and so that they'll be able to mount the cephfs when we're done: + +| Path on master | Path on non-master | +|---------------------------------------|------------------------------------------------------------| +| `/etc/ceph/ceph.conf` | `/etc/ceph/ceph.conf` | +| `/etc/ceph/ceph.client.admin.keyring` | `/etc/ceph/ceph.client.admin.keyring` | +| `/etc/ceph/ceph.pub` | `/root/.ssh/authorized_keys` (append to anything existing) | + + +Back on the ==master== node, run `ceph orch host add ` once for each other node you want to join to the cluster. You can validate the results by running `ceph orch host ls` + + Not really. Docker is inherently insecure at the host-level anyway (*think what would happen if you launched a global-mode stack with a malicious container image which mounted `/root/.ssh`*), so worrying about cephadm seems a little barn-door-after-horses-bolted. If you take host-level security seriously, consider switching to [Kubernetes](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) :) + +### Add OSDs + +Now the best improvement since the days of ceph-deploy and manual disks.. on the ==master== node, run `ceph orch apply osd --all-available-devices`. This will identify any unloved (*unpartitioned, unmounted*) disks attached to each participating node, and configure these disks as OSDs. + +### Setup CephFS + +On the ==master== node, create a cephfs volume in your cluster, by running `ceph fs volume create data`. Ceph will handle the necessary orchestration itself, creating the necessary pool, mds daemon, etc. 
+ +You can watch the progress by running `ceph fs ls` (to see the fs is configured), and `ceph -s` to wait for `HEALTH_OK` + +### Mount CephFS volume + +On ==every== node, create a mountpoint for the data, by running ```mkdir /var/data```, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the gluster volume: ``` -docker run -d --net=host \ ---privileged=true \ ---pid=host \ --v /etc/ceph:/etc/ceph \ --v /var/lib/ceph/:/var/lib/ceph/ \ --v /dev/:/dev/ \ --e OSD_FORCE_ZAP=1 \ --e OSD_DEVICE=/dev/vdd \ --e OSD_TYPE=disk \ ---name="ceph-osd" \ ---restart=always \ -ceph/daemon osd_ceph_disk -``` - -Watch the output by running ```docker logs ceph-osd -f```, and confirm success. - -!!! warning "Zapping the device" - The Ceph OSD container will normally refuse to destroy a partition containing existing data, but above we are instructing ceph to zap (destroy) whatever is on the partition currently. Don't run this against a device you care about, and if you're unsure, omit the "OSD_FORCE_ZAP" variable - -### Setup MDSs - -In order to mount our ceph pools as filesystems, we'll need Ceph MDS(s). Run the following on each node: +mkdir /var/data +MYNODES=",," # Add your own nodes here, comma-delimited +MYHOST=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'` +echo -e " +# Mount cephfs volume \n +raphael,donatello,leonardo:/ /var/data ceph name=admin,noatime,_netdev 0 0" >> /etc/fstab +mount -a ``` -docker run -d --net=host \ ---name ceph-mds \ ---restart always \ --v /var/lib/ceph/:/var/lib/ceph/ \ --v /etc/ceph:/etc/ceph \ --e CEPHFS_CREATE=1 \ --e CEPHFS_DATA_POOL_PG=256 \ --e CEPHFS_METADATA_POOL_PG=256 \ -ceph/daemon mds -``` -### Apply tweaks -The ceph container seems to configure a pool default of 3 replicas (3 copies of each block are retained), which is one too many for our cluster (we are only protecting against the failure of a single node). +## Serving -Run the following on any node to reduce the size of the pool to 2 replicas: +### Sprinkle with tools -``` -ceph osd pool set cephfs_data size 2 -ceph osd pool set cephfs_metadata size 2 -``` - -Disabled "scrubbing" (which can be IO-intensive, and is unnecessary on a VM) with: +Although it's possible to use `cephadm shell` to exec into a container with the necessary ceph tools, it's more convenient to use the native CLI tools. To this end, on each node, run the following, which will install the appropriate apt repository, and install the latest ceph CLI tools: ``` -ceph osd set noscrub -ceph osd set nodeep-scrub +curl -L https://download.ceph.com/keys/release.asc | sudo apt-key add - +cephadm add-repo --release octopus +cephadm install ceph-common ``` +### Drool over dashboard -### Create credentials for swarm - -In order to mount the ceph volume onto our base host, we need to provide cephx authentication credentials. - -On **one** node, create a client for the docker swarm: - -``` -ceph auth get-or-create client.dockerswarm osd \ -'allow rw' mon 'allow r' mds 'allow' > /etc/ceph/keyring.dockerswarm -``` - -Grab the secret associated with the new user (you'll need this for the /etc/fstab entry below) by running: +Ceph now includes a comprehensive dashboard, provided by the mgr daemon. 
The dashboard will be accessible at https://[IP of your ceph master node]:8443, but you'll need to run `ceph dashboard ac-user-create administrator` first, to create an administrator account: ``` -ceph-authtool /etc/ceph/keyring.dockerswarm -p -n client.dockerswarm +root@raphael:~# ceph dashboard ac-user-create batman supermansucks administrator +{"username": "batman", "password": "$2b$12$3HkjY85mav.dq3HHAZiWP.KkMiuoV2TURZFH.6WFfo/BPZCT/0gr.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1590372281, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false} +root@raphael:~# ``` -### Mount MDS volume +## Summary -On each node, create a mountpoint for the data, by running ```mkdir /var/data```, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the gluster volume: +What have we achieved? -``` -mkdir /var/data + Created: -MYHOST=`hostname -s` -echo -e " -# Mount cephfs volume \n -$MYHOST:6789:/ /var/data/ ceph \ -name=dockerswarm\ -,secret=\ -,noatime,_netdev,context=system_u:object_r:svirt_sandbox_file_t:s0 \ -0 2" >> /etc/fstab -mount -a -``` -### Install docker-volume plugin + * [X] Persistent storage available to every node + * [X] Resiliency in the event of the failure of a single node + * [X] Beautiful dashboard -Upstream bug for docker-latest reported at https://bugs.centos.org/view.php?id=13609 +## The easy, 5-minute install -And the alpine fault: -https://github.com/gliderlabs/docker-alpine/issues/317 +I share (_with [sponsors][github_sponsor] and [patrons][patreon]_) a private "_premix_" GitHub repository, which includes an ansible playbook for deploying the entire Geek's Cookbook stack, automatically. This means that members can create the entire environment with just a ```git pull``` and an ```ansible-playbook deploy.yml``` +Here's a screencast of the playbook in action. I sped up the boring parts, it actually takes ==5 min== (*you can tell by the timestamps on the prompt*): -## Serving - -After completing the above, you should have: - -``` -[X] Persistent storage available to every node -[X] Resiliency in the event of the failure of a single node -``` - -## Chef's Notes 📓 +[patreon]: https://www.patreon.com/bePatron?u=6982506 +[github_sponsor]: https://github.com/sponsors/funkypenguin -Future enhancements to this recipe include: +## Chef's Notes -1. Rather than pasting a secret key into /etc/fstab (which feels wrong), I'd prefer to be able to set "secretfile" in /etc/fstab (which just points ceph.mount to a file containing the secret), but under the current CentOS Atomic, we're stuck with "secret", per https://bugzilla.redhat.com/show_bug.cgi?id=1030402 -2. This recipe was written with Ceph v11 "Jewel". Ceph have subsequently releaesd v12 "Kraken". I've updated the recipe for the addition of "Manager" daemons, but it should be noted that the [only reader so far](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley) to attempt a Ceph install using CentOS Atomic and Ceph v12 had issues with OSDs, which lead him to [move to Ubuntu 1604](https://discourse.geek-kitchen.funkypenguin.co.nz/t/shared-storage-ceph-funky-penguins-geek-cookbook/47/24?u=funkypenguin) instead. +[^1]: Minimum Viable Cluster acronym copyright, trademark, and whatever else, to Funky Penguin for 1,000,000 years. 
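Before building anything important on top of the new CephFS mount, it's worth a quick smoke-test to confirm the storage really is shared between nodes. Here's a minimal sketch (*assuming the `/var/data` mountpoint created above; the filename is arbitrary*) - write a file on one node, and read it back on another:

```
# On the first node: confirm /var/data is mounted as cephfs, then drop a marker file
df -hT /var/data
echo "hello from $(hostname -s)" > /var/data/cephfs-test.txt

# On any other node: the same file (and contents) should be visible immediately
cat /var/data/cephfs-test.txt

# Clean up once you're satisfied
rm /var/data/cephfs-test.txt
```

If the file shows up on every node, your swarm has working shared storage, and you're ready to move on to the rest of the stack.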
diff --git a/manuscript/ha-docker-swarm/shared-storage-gluster.md b/manuscript/ha-docker-swarm/shared-storage-gluster.md index 34cffb7b..137141c1 100644 --- a/manuscript/ha-docker-swarm/shared-storage-gluster.md +++ b/manuscript/ha-docker-swarm/shared-storage-gluster.md @@ -2,8 +2,7 @@ While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node. -!!! warning - This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](/ha-docker-swarm/shared-storage-ceph/) instead. - 2019 Chef + This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph/) instead. - 2019 Chef ## Design @@ -13,7 +12,6 @@ This GlusterFS recipe was my original design for shared storage, but I [found it ## Ingredients -!!! summary "Ingredients" 3 x Virtual Machines (configured earlier), each with: * [X] CentOS/Fedora Atomic @@ -30,7 +28,7 @@ To build our Gluster volume, we need 2 out of the 3 VMs to provide one "brick". On each host, run a variation following to create your bricks, adjusted for the path to your disk. -!!! note "The example below assumes /dev/vdb is dedicated to the gluster volume" + ``` ( echo o # Create a new empty DOS partition table @@ -50,7 +48,6 @@ echo '/dev/vdb1 /var/no-direct-write-here/brick1 xfs defaults 1 2' >> /etc/fstab mount -a && mount ``` -!!! warning "Don't provision all your LVM space" Atomic uses LVM to store docker data, and **automatically grows** Docker's volumes as requried. If you commit all your free LVM space to your brick, you'll quickly find (as I did) that docker will start to fail with error messages about insufficient space. If you're going to slice off a portion of your LVM space in /dev/atomicos, make sure you leave enough space for Docker storage, where "enough" depends on how much you plan to pull images, make volumes, etc. I ate through 20GB very quickly doing development, so I ended up provisioning 50GB for atomic alone, with a separate volume for the brick. ### Create glusterfs container @@ -58,7 +55,8 @@ mount -a && mount Atomic doesn't include the Gluster server components. This means we'll have to run glusterd from within a container, with privileged access to the host. Although convoluted, I've come to prefer this design since it once again makes the OS "disposable", moving all the config into containers and code. Run the following on each host: -```` + +``` docker run \ -h glusterfs-server \ -v /etc/glusterfs:/etc/glusterfs:z \ @@ -70,15 +68,16 @@ docker run \ --restart=always \ --name="glusterfs-server" \ gluster/gluster-centos -```` +``` + ### Create trusted pool On a single node (doesn't matter which), run ```docker exec -it glusterfs-server bash``` to launch a shell inside the container. -From the node, run -```gluster peer probe ``` +From the node, run `gluster peer probe `. Example output: + ``` [root@glusterfs-server /]# gluster peer probe ds1 peer probe: success. @@ -88,6 +87,7 @@ peer probe: success. 
Run ```gluster peer status``` on both nodes to confirm that they're properly connected to each other: Example output: + ``` [root@glusterfs-server /]# gluster peer status Number of Peers: 1 @@ -102,7 +102,8 @@ State: Peer in Cluster (Connected) Now we create a *replicated volume* out of our individual "bricks". -Create the gluster volume by running +Create the gluster volume by running: + ``` gluster volume create gv0 replica 2 \ server1:/var/no-direct-write-here/brick1 \ @@ -110,6 +111,7 @@ gluster volume create gv0 replica 2 \ ``` Example output: + ``` [root@glusterfs-server /]# gluster volume create gv0 replica 2 ds1:/var/no-direct-write-here/brick1/gv0 ds3:/var/no-direct-write-here/brick1/gv0 volume create: gv0: success: please start the volume to access data @@ -141,7 +143,8 @@ echo "$MYHOST:/gv0 /var/data glusterfs defaults,_netde mount -a ``` -For some reason, my nodes won't auto-mount this volume on boot. I even tried the trickery below, but they stubbornly refuse to automount. +For some reason, my nodes won't auto-mount this volume on boot. I even tried the trickery below, but they stubbornly refuse to automount: + ``` echo -e "\n\n# Give GlusterFS 10s to start before \ mounting\nsleep 10s && mount -a" >> /etc/rc.local @@ -154,12 +157,10 @@ For non-gluster nodes, you'll need to replace $MYHOST above with the name of one After completing the above, you should have: -``` -[X] Persistent storage available to every node -[X] Resiliency in the event of the failure of a single (gluster) node -``` +* [X] Persistent storage available to every node +* [X] Resiliency in the event of the failure of a single (gluster) node -## Chef's Notes 📓 +## Chef's Notes Future enhancements to this recipe include: diff --git a/manuscript/ha-docker-swarm/traefik-forward-auth.md b/manuscript/ha-docker-swarm/traefik-forward-auth.md index 10f7ef69..ae30fee5 100644 --- a/manuscript/ha-docker-swarm/traefik-forward-auth.md +++ b/manuscript/ha-docker-swarm/traefik-forward-auth.md @@ -2,28 +2,26 @@ Now that we have Traefik deployed, automatically exposing SSL access to our Docker Swarm services using LetsEncrypt wildcard certificates, let's pause to consider that we may not _want_ some services exposed directly to the internet... -..Wait, why not? Well, Traefik doesn't provide any form of authentication, it simply secures the **transmission** of the service between Docker Swarm and the end user. If you were to deploy a service with no native security (*[Radarr](/recipes/autopirate/radarr/) or [Sonarr](/recipes/autopirate/sonarr/) come to mind*), then anybody would be able to use it! Even services which _may_ have a layer of authentication **might** not be safe to expose publically - often open source projects may be maintained by enthusiasts who happily add extra features, but just pay lip service to security, on the basis that "*it's the user's problem to secure it in their own network*". +..Wait, why not? Well, Traefik doesn't provide any form of authentication, it simply secures the **transmission** of the service between Docker Swarm and the end user. If you were to deploy a service with no native security (*[Radarr](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/radarr/) or [Sonarr](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/sonarr/) come to mind*), then anybody would be able to use it! 
Even services which _may_ have a layer of authentication **might** not be safe to expose publically - often open source projects may be maintained by enthusiasts who happily add extra features, but just pay lip service to security, on the basis that "*it's the user's problem to secure it in their own network*". -To give us confidence that **we** can access our services, but BadGuys(tm) cannot, we'll deploy a layer of authentication **in front** of Traefik, using [Forward Authentication](https://docs.traefik.io/configuration/entrypoints/#forward-authentication). You can use your own [KeyCloak](/recipes/keycloak/) instance for authentication, but to lower the barrier to entry, this recipe will assume you're authenticating against your own Google account. +To give us confidence that **we** can access our services, but BadGuys(tm) cannot, we'll deploy a layer of authentication **in front** of Traefik, using [Forward Authentication](https://docs.traefik.io/configuration/entrypoints/#forward-authentication). You can use your own [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/) instance for authentication, but to lower the barrier to entry, this recipe will assume you're authenticating against your own Google account. ## Ingredients -!!! summary "Ingredients" Existing: - * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph) - * [X] [Traefik](/ha-docker-swarm/traefik/) configured per design + * [X] [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph) + * [X] [Traefik](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik/) configured per design New: - * [ ] Client ID and secret from an OpenID-Connect provider (Google, [KeyCloak](/recipes/keycloak/), Microsoft, etc..) + * [ ] Client ID and secret from an OpenID-Connect provider (Google, [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/), Microsoft, etc..) ## Preparation ### Obtain OAuth credentials -!!! note - This recipe will demonstrate using Google OAuth for traefik forward authentication, but it's also possible to use a self-hosted KeyCloak instance - see the [KeyCloak OIDC Provider](/recipes/keycloak/setup-oidc-provider/) recipe for more details! + This recipe will demonstrate using Google OAuth for traefik forward authentication, but it's also possible to use a self-hosted KeyCloak instance - see the [KeyCloak OIDC Provider](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/setup-oidc-provider/) recipe for more details! Log into https://console.developers.google.com/, create a new project then search for and select "Credentials" in the search bar. @@ -48,7 +46,7 @@ COOKIE_DOMAINS=example.com ### Prepare the docker service config -This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/recipes/traefik/) recipe: +This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](https://geek-cookbook.funkypenguin.co.nz/recipes/traefik/) recipe: ``` traefik-forward-auth: @@ -82,8 +80,7 @@ If you're not confident that forward authentication is working, add a simple "wh - traefik.frontend.auth.forward.trustForwardHeader=true ``` -!!! 
tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` @@ -101,16 +98,15 @@ Browse to https://whoami.example.com (*obviously, customized for your domain and What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our choice of OAuth provider, with minimal processing / handling overhead. -!!! summary "Summary" Created: * [X] Traefik-forward-auth configured to authenticate against an OIDC provider -## Chef's Notes 📓 +## Chef's Notes -1. Traefik forward auth replaces the use of [oauth_proxy containers](/reference/oauth_proxy/) found in some of the existing recipes +1. Traefik forward auth replaces the use of [oauth_proxy containers](https://geek-cookbook.funkypenguin.co.nz/reference/oauth_proxy/) found in some of the existing recipes 2. [@thomaseddon's original version](https://github.com/thomseddon/traefik-forward-auth) of traefik-forward-auth only works with Google currently, but I've created a [fork](https://www.github.com/funkypenguin/traefik-forward-auth) of a [fork](https://github.com/noelcatt/traefik-forward-auth), which implements generic OIDC providers. 3. I reviewed several implementations of forward authenticators for Traefik, but found most to be rather heavy-handed, or specific to a single auth provider. @thomaseddon's go-based docker image is 7MB in size, and with the generic OIDC patch (above), it can be extended to work with any OIDC provider. 4. No, not github natively, but you can ferderate GitHub into KeyCloak, and then use KeyCloak as the OIDC provider. diff --git a/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md b/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md index 126eaf8f..628f68f1 100644 --- a/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md +++ b/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md @@ -1,13 +1,12 @@ # Using Traefik Forward Auth with KeyCloak -While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure **any** URLs within your DNS domain. +While the [Traefik Forward Auth](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure **any** URLs within your DNS domain. ## Ingredients -!!! 
Summary Existing: - * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully, with a [local user](/recipes/keycloak/create-user/) and an [OIDC client](/recipes/keycloak/setup-oidc-provider/) + * [X] [KeyCloak](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/) recipe deployed successfully, with a [local user](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/create-user/) and an [OIDC client](https://geek-cookbook.funkypenguin.co.nz/recipes/keycloak/setup-oidc-provider/) New: @@ -48,7 +47,7 @@ COOKIE_DOMAIN= ### Prepare the docker service config -This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/recipes/traefik/) recipe: +This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](https://geek-cookbook.funkypenguin.co.nz/recipes/traefik/) recipe: ``` traefik-forward-auth: @@ -81,8 +80,7 @@ If you're not confident that forward authentication is working, add a simple "wh - traefik.frontend.auth.forward.trustForwardHeader=true ``` -!!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` ## Serving @@ -110,13 +108,12 @@ And re-deploy your services :) What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our KeyCloak OIDC provider, with minimal processing / handling overhead. -!!! summary "Summary" Created: * [X] Traefik-forward-auth configured to authenticate against KeyCloak -## Chef's Notes 📓 +## Chef's Notes 1. KeyCloak is very powerful. You can add 2FA and all other clever things outside of the scope of this simple recipe ;) diff --git a/manuscript/ha-docker-swarm/traefik.md b/manuscript/ha-docker-swarm/traefik.md index dfcc6ce6..f1306d6f 100644 --- a/manuscript/ha-docker-swarm/traefik.md +++ b/manuscript/ha-docker-swarm/traefik.md @@ -15,10 +15,9 @@ To deal with these gaps, we need a front-end load-balancer, and in this design, ## Ingredients -!!! summary "You'll need" Existing - * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph) + * [X] [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/design/) with [persistent shared storage](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph) New @@ -30,7 +29,6 @@ To deal with these gaps, we need a front-end load-balancer, and in this design, The traefik container is aware of the __other__ docker containers in the swarm, because it has access to the docker socket at **/var/run/docker.sock**. This allows traefik to dynamically configure itself based on the labels found on containers in the swarm, which is hugely useful. To make this functionality work on a SELinux-enabled CentOS7 host, we need to add custom SELinux policy. -!!! tip The following is only necessary if you're using SELinux! 
Run the following to build and activate policy to permit containers to access docker.sock: @@ -92,7 +90,6 @@ swarmmode = true ### Prepare the docker service config -!!! tip "We'll want an overlay network, independent of our traefik stack, so that we can attach/detach all our other stacks (including traefik) to the overlay network. This way, we can undeploy/redepoly the traefik stack without having to bring every other stack first!" - voice of experience Create `/var/data/config/traefik/traefik.yml` as follows: @@ -122,8 +119,7 @@ networks: - subnet: 172.16.200.0/24 ``` -!!! tip - I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 + I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` Create `/var/data/config/traefik/traefik-app.yml` as follows: @@ -181,7 +177,6 @@ touch /var/data/traefik/acme.json chmod 600 /var/data/traefik/acme.json ``` -!!! warning Pay attention above. You **must** set `acme.json`'s permissions to owner-readable-only, else the container will fail to start with an [ID-10T](https://en.wikipedia.org/wiki/User_error#ID-10-T_error) error! Traefik will populate acme.json itself when it runs, but it needs to exist before the container will start (_Chicken, meet egg._) @@ -222,11 +217,10 @@ ID NAME IMAGE You should now be able to access your traefik instance on http://:8080 - It'll look a little lonely currently (*below*), but we'll populate it as we add recipes :) -![Screenshot of Traefik, post-launch](/images/traefik-post-launch.png) +![Screenshot of Traefik, post-launch](https://geek-cookbook.funkypenguin.co.nz/images/traefik-post-launch.png) ### Summary -!!! summary We've achieved: * [X] An overlay network to permit traefik to access all future stacks we deploy @@ -234,6 +228,6 @@ You should now be able to access your traefik instance on http://:8080 * [X] Automatic SSL support for all proxied resources -## Chef's Notes 📓 +## Chef's Notes -1. Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/ha-docker-swarm/traefik-forward-auth/)! \ No newline at end of file +1. Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik-forward-auth/)! 
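2. If you'd rather check on Traefik from the CLI than from the dashboard, the Traefik 1.x API can be queried directly. A rough sketch (*assuming the dashboard/API is listening on port 8080 as configured above, and that `jq` is installed - both are assumptions, adjust to taste*):

```
# Overall health / uptime / request stats from the Traefik API
curl -s http://<node IP>:8080/health | jq .

# The frontends and backends Traefik has built from your container labels
curl -s http://<node IP>:8080/api/providers | jq 'keys'
```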
\ No newline at end of file diff --git a/manuscript/images/ceph.png b/manuscript/images/ceph.png new file mode 100644 index 00000000..10129557 Binary files /dev/null and b/manuscript/images/ceph.png differ diff --git a/manuscript/images/kubernetes-dashboard.png b/manuscript/images/kubernetes-dashboard.png new file mode 100644 index 00000000..8d842adb Binary files /dev/null and b/manuscript/images/kubernetes-dashboard.png differ diff --git a/manuscript/images/site-logo.svg b/manuscript/images/site-logo.svg new file mode 100644 index 00000000..970a3a50 --- /dev/null +++ b/manuscript/images/site-logo.svg @@ -0,0 +1,49 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/manuscript/index.md b/manuscript/index.md index 113da6d7..1c287da2 100644 --- a/manuscript/index.md +++ b/manuscript/index.md @@ -1,21 +1,21 @@ # What is this? -Funky Penguin's "**[Geek Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](/ha-docker-swarm/design/) or [Kubernetes](/kubernetes/start/). +Funky Penguin's "**[Geek Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/design/) or [Kubernetes](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/). -Running such a platform enables you to run self-hosted tools such as [AutoPirate](/recipes/autopirate/) (*Radarr, Sonarr, NZBGet and friends*), [Plex](/recipes/plex/), [NextCloud](/recipes/nextcloud/), and includes elements such as: +Running such a platform enables you to run self-hosted tools such as [AutoPirate](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/) (*Radarr, Sonarr, NZBGet and friends*), [Plex](https://www.plex.tv/), [NextCloud](https://nextcloud.com/), and includes elements such as: -* [Automatic SSL-secured access](/ha-docker-swarm/traefik/) to all services (*with LetsEncrypt*) -* [SSO / authentication layer](/ha-docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services -* [Automated backup](/recipes/elkarbackup/) of configuration and data -* [Monitoring and metrics](/recipes/swarmprom/) collection, graphing and alerting +* [Automatic SSL-secured access](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik/) to all services (*with LetsEncrypt*) +* [SSO / authentication layer](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services +* [Automated backup](https://geek-cookbook.funkypenguin.co.nz/recipes/elkarbackup/) of configuration and data +* [Monitoring and metrics](https://geek-cookbook.funkypenguin.co.nz/recipes/swarmprom/) collection, graphing and alerting -Recent updates and additions are posted on the [CHANGELOG](/CHANGELOG/), and there's a friendly community of like-minded geeks in the [Discord server](http://chat.funkypenguin.co.nz). +Recent updates and additions are posted on the [CHANGELOG](https://geek-cookbook.funkypenguin.co.nz/CHANGELOG/), and there's a friendly community of like-minded geeks in the [Discord server](http://chat.funkypenguin.co.nz). ## Who is this for? 
-You already have a familiarity with concepts such as [virtual](https://libvirt.org/) [machines](https://www.virtualbox.org/), [Docker](https://www.docker.com/) containers, [LetsEncrypt SSL certificates](https://letsencrypt.org/), databases, and command-line interfaces. +You already have a familiarity with concepts such as virtual machines, [Docker](https://www.docker.com/) containers, [LetsEncrypt SSL certificates](https://letsencrypt.org/), databases, and command-line interfaces. -You've probably played with self-hosting some mainstream apps yourself, like [Plex](https://www.plex.tv/), [OwnCloud](https://owncloud.org/), [Wordpress](https://wordpress.org/) or even [SandStorm](https://sandstorm.io/). +You've probably played with self-hosting some mainstream apps yourself, like [Plex](https://www.plex.tv/), [NextCloud](https://nextcloud.com/), [Wordpress](https://wordpress.org/) or [Ghost](https://ghost.io/). ## Why should I read this? @@ -25,38 +25,55 @@ So if you're familiar enough with the concepts above, and you've done self-hosti 2. You want to play. You want a safe sandbox to test new tools, keeping the ones you want and tossing the ones you don't. 3. You want reliability. Once you go from __playing__ with a tool to actually __using__ it, you want it to be available when you need it. Having to "*quickly ssh into the basement server and restart plex*" doesn't cut it when you finally convince your wife to sit down with you to watch sci-fi. + + + + ## What have you done for me lately? (CHANGELOG) -Check out recent change at [CHANGELOG](/CHANGELOG/) +Check out recent change at [CHANGELOG](https://geek-cookbook.funkypenguin.co.nz/CHANGELOG/) ## What do you want from me? -I want your [patronage](https://www.patreon.com/bePatron?u=6982506), either in the financial sense, or as a member of our [friendly geek community](http://chat.funkypenguin.co.nz) (*or both!*) +I want your [support](https://github.com/sponsors/funkypenguin), either in the [financial](https://github.com/sponsors/funkypenguin) sense, or as a member of our [friendly geek community](http://chat.funkypenguin.co.nz) (*or both!*) -### Get in touch 👋 +### Get in touch * Come and say hi to me and the friendly geeks in the [Discord](http://chat.funkypenguin.co.nz) chat or the [Discourse](https://discourse.geek-kitchen.funkypenguin.co.nz/) forums - say hi, ask a question, or suggest a new recipe! -* Tweet me up, I'm [@funkypenguin](https://twitter.com/funkypenguin)! 🐦 +* Tweet me up, I'm [@funkypenguin](https://twitter.com/funkypenguin)! * [Contact me](https://www.funkypenguin.co.nz/contact/) by a variety of channels -### Buy my book 📖 - -I'm also publishing the Geek Cookbook as a formal eBook (*PDF, mobi, epub*), on Leanpub (https://leanpub.com/geek-cookbook). Buy it for as little as $5 (_which is really just a token gesture of support, since all the content is available online anyway!_) or pay what you think it's worth! -### Donate / [Support me 💰](https://www.patreon.com/funkypenguin) +### [Sponsor](https://github.com/sponsors/funkypenguin) / [Patronize](https://www.patreon.com/bePatron?u=6982506) me -The best way to support this work is to become a [Patreon patron](https://www.patreon.com/bePatron?u=6982506) (_for as little as $1/month!_) - You get : +The best way to support this work is to become a [GitHub Sponsor](https://github.com/sponsors/funkypenguin) / [Patreon patron](https://www.patreon.com/bePatron?u=6982506). 
You get: * warm fuzzies, * access to the pre-mix repo, * an anonymous plug you can pull at any time, * and a bunch more loot based on tier -.. and I get some pocket money every month to buy wine, cheese, and cryptocurrency! 🍷 💰 +.. and I get some pocket money every month to buy wine, cheese, and cryptocurrency! + +Impulsively **[click here (NOW quick do it!)](https://github.com/sponsors/funkypenguin)** to [sponsor me](https://github.com/sponsors/funkypenguin) via GitHub, or [patronize me via Patreon](https://www.patreon.com/bePatron?u=6982506)! + + +### Work with me + +Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified](https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574) consultant, this stuff is my bread and butter! :breadfork_and_knife: [Get in touch](https://www.funkypenguin.co.nz/contact/), and let's talk business! + + + + By the time I had enlisted Funky Penguin's help, I'd architected myself into a bit of a nightmare with Kubernetes. I knew what I wanted to achieve, but I'd made a mess of it. Funky Penguin (David) was able to jump right in and offer a vital second-think on everything I'd done, pointing out where things could be simplified and streamlined, and better alternatives. + + He unblocked me on all the technical hurdles to launching my SaaS in GKE! + + With him delivering the container/Kubernetes architecture and helm CI/CD workflow, I was freed up to focus on coding and design, which fast-tracked me to launching on time. And now I have a simple deployment process that is easy for me to execute and maintain as a solo founder. -Impulsively **[click here (NOW quick do it!)](https://www.patreon.com/bePatron?u=6982506)** to patronize me, or instead thoughtfully and analytically review my Patreon page / history **[here](https://www.patreon.com/funkypenguin)** and make up your own mind. + I have no hesitation in recommending him for your project, and I'll certainly be calling on him again in the future. + -- John McDowall, Founder, [kiso.io](https://kiso.io) -### Hire me 🏢 +### Buy my book -Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified](https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574) consultant, this stuff is my bread and butter! :bread: :fork_and_knife: [Contact](https://www.funkypenguin.co.nz/contact/) me and let's talk! +I'm publishing the Geek Cookbook as a formal eBook (*PDF, mobi, epub*), on Leanpub (https://leanpub.com/geek-cookbook). Check it out! \ No newline at end of file diff --git a/manuscript/kubernetes/cluster.md b/manuscript/kubernetes/cluster.md index f38fe4f0..2f4ed7a8 100644 --- a/manuscript/kubernetes/cluster.md +++ b/manuscript/kubernetes/cluster.md @@ -2,12 +2,12 @@ IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean](https://m.do.co/c/e33b78ad621b) (_this is a referral link_). I've included instructions below to start a basic cluster. -![Kubernetes on Digital Ocean](/images/kubernetes-on-digitalocean.jpg) +![Kubernetes on Digital Ocean](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean.jpg) ## Ingredients -1. [DigitalOcean](https://www.digitalocean.com/?refcode=e33b78ad621b) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. (_yes, this is a referral link, making me some 💰 to buy 🍷_) -2. Geek-Fu required : 🐱 (easy - even has screenshots!) +1. 
[DigitalOcean](https://www.digitalocean.com/?refcode=e33b78ad621b) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. (_yes, this is a referral link, making me some money to buy wine_) +2. Geek-Fu required: easy (even has screenshots!) ## Preparation @@ -15,27 +15,27 @@ IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean]( Create a project, and then from your project page, click **Manage** -> **Kubernetes (LTD)** in the left-hand panel: -![Kubernetes on Digital Ocean Screenshot #1](/images/kubernetes-on-digitalocean-screenshot-1.png) +![Kubernetes on Digital Ocean Screenshot #1](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-1.png) Until DigitalOcean considers their Kubernetes offering to be "production ready", you'll need the additional step of clicking on **Enable Limited Access**: -![Kubernetes on Digital Ocean Screenshot #2](/images/kubernetes-on-digitalocean-screenshot-2.png) +![Kubernetes on Digital Ocean Screenshot #2](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-2.png) The _Enable Limited Access_ button changes to read _Create a Kubernetes Cluster_. Cleeeek it: -![Kubernetes on Digital Ocean Screenshot #3](/images/kubernetes-on-digitalocean-screenshot-3.png) +![Kubernetes on Digital Ocean Screenshot #3](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-3.png) When prompted, choose some defaults for your first node pool (_your pool of "compute" resources for your cluster_), and give it a name. In more complex deployments, you can use this concept of "node pools" to run certain applications (_like an inconsequential nightly batch job_) on a particular class of compute instance (_such as cheap, preemptible instances_) -![Kubernetes on Digital Ocean Screenshot #4](/images/kubernetes-on-digitalocean-screenshot-4.png) +![Kubernetes on Digital Ocean Screenshot #4](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-4.png) -That's it! Have a sip of your 🍷, a bite of your :cheese:, and wait for your cluster to build. While you wait, follow the instructions to setup kubectl (if you don't already have it) +That's it! Have a sip of your wine, a bite of your cheese, and wait for your cluster to build. While you wait, follow the instructions to set up kubectl (if you don't already have it) -![Kubernetes on Digital Ocean Screenshot #5](/images/kubernetes-on-digitalocean-screenshot-5.png) +![Kubernetes on Digital Ocean Screenshot #5](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-5.png) DigitalOcean will provide you with a "kubeconfig" file to use to access your cluster. It's at the bottom of the page (_illustrated below_), and easy to miss (_in my experience_). -![Kubernetes on Digital Ocean Screenshot #6](/images/kubernetes-on-digitalocean-screenshot-6.png) +![Kubernetes on Digital Ocean Screenshot #6](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-on-digitalocean-screenshot-6.png) ## Release the kubectl! @@ -72,21 +72,15 @@ That's it. You have a beautiful new kubernetes cluster ready for some action! Still with me? Good. Move on to creating your own external load balancer.. -* [Start](/kubernetes/start/) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? +* [Start](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) - Why Kubernetes? 
+* [Design](https://geek-cookbook.funkypenguin.co.nz/kubernetes/design/) - How does it fit together? * Cluster (this page) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/) - Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/kubernetes/traefik/) - Traefik Ingress via Helm ## Chef's Notes -1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come! - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come! \ No newline at end of file diff --git a/manuscript/kubernetes/design.md b/manuscript/kubernetes/design.md index d52ad3e7..e3f40627 100644 --- a/manuscript/kubernetes/design.md +++ b/manuscript/kubernetes/design.md @@ -42,7 +42,7 @@ Under this design, the only inbound connections we're permitting to our Kubernet ### Network Flows * HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_) -* Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_) +* Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](https://geek-cookbook.funkypenguin.co.nz/recipes/mqtt/)_) ### Authentication @@ -68,7 +68,7 @@ We use a phone-home container, which calls a simple webhook on our haproxy VM, a Here's a high-level diagram: -![Kubernetes Design](/images/kubernetes-cluster-design.png) +![Kubernetes Design](https://geek-cookbook.funkypenguin.co.nz/images/kubernetes-cluster-design.png) ## Overview @@ -80,7 +80,7 @@ In the diagram, we have a Kubernetes cluster comprised of 3 nodes. You'll notice Our nodes are partitioned into several namespaces, which logically separate our individual recipes. (_I.e., allowing both a "gitlab" and a "nextcloud" namespace to include a service named "db", which would be challenging without namespaces_) -Outside of our cluster (_could be anywhere on the internet_) is a single VM servicing as a load-balancer, running HAProxy and a webhook service. This load-balancer is described in detail, [in its own section](/kubernetes/loadbalancer/), but what's important up-front is that this VM is the **only element of the design for which we need to provide a fixed IP address**. +Outside of our cluster (_could be anywhere on the internet_) is a single VM serving as a load-balancer, running HAProxy and a webhook service. This load-balancer is described in detail, [in its own section](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/), but what's important up-front is that this VM is the **only element of the design for which we need to provide a fixed IP address**. 
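To make the load-balancer wiring a little more concrete: when a phone-home container registers a service, the values it reports are essentially just its node's address and the service's NodePort, both of which you can eyeball with kubectl. A minimal sketch, assuming the kubeconfig from the cluster section is loaded and using a made-up `mqtt` namespace/service (illustrative names, not part of the recipe):

```bash
# Sketch only: the two values HAProxy ultimately needs for a NodePort-exposed service
kubectl get nodes -o wide                                    # node IPs that HAProxy would forward to
kubectl -n mqtt get svc mqtt \
  -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'              # the NodePort to forward onto
```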
### 1 : The mosquitto pod @@ -92,7 +92,7 @@ The phone-home container calls the webhook, and tells HAProxy to listen on port ### 2 : The Traefik Ingress -In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Trafeik ingress does what [Traefik does for us under Docker Swarm](/docker-ha-swarm/traefik/). +In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Traefik ingress does what [Traefik does for us under Docker Swarm](https://geek-cookbook.funkypenguin.co.nz/docker-ha-swarm/traefik/). What's happening in the diagram is that a phone-home pod is tied to the traefik pod using affinity, so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAProxy to send any HTTPS traffic to its calling address and custom NodePort port number. @@ -120,19 +120,10 @@ Finally, the DNS for all externally-accessible services is pointed to the IP of Still with me? Good. Move on to creating your cluster! -* [Start](/kubernetes/start/) - Why Kubernetes? +* [Start](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) - Why Kubernetes? * Design (this page) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm - - -## Chef's Notes - -### Tip your waiter (support me) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏 - -### Your comments? 💬 +* [Cluster](https://geek-cookbook.funkypenguin.co.nz/kubernetes/cluster/) - Setup a basic cluster +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/) - Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/kubernetes/traefik/) - Traefik Ingress via Helm \ No newline at end of file diff --git a/manuscript/kubernetes/diycluster.md b/manuscript/kubernetes/diycluster.md index ad880366..b546b083 100644 --- a/manuscript/kubernetes/diycluster.md +++ b/manuscript/kubernetes/diycluster.md @@ -6,7 +6,7 @@ After all, DIY is in our DNA. ## Ingredients -1. Basic knowledge of Kubernetes terms (Will come in handy) [Start](/kubernetes/start) +1. Basic knowledge of Kubernetes terms (Will come in handy) [Start](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start) 2. Some Linux machines (Depends on what recipe you follow) ## Minikube @@ -23,13 +23,12 @@ If you want to use minikube, there is a guide below but again, I recommend using 1. A Fresh Linux Machine 2. 
Some basic Linux knowledge (or can just copy-paste) -!!! note Make sure you are running a systemd-based distro like Ubuntu. Although minikube will run on macOS and Windows, they add additional complexity to the installation, since they require a Linux-based image running in a VM which, although minikube will manage it, adds to the complexity. And - even then, who uses Windows or macOS in production anyways? 🙂 + even then, who uses Windows or macOS in production anyways? If you are serious about running on Windows/macOS, check the official MiniKube guides [here](https://minikube.sigs.k8s.io/docs/start/) @@ -56,7 +55,6 @@ sudo minikube config set vm-driver none #Set our default vm driver to none You are now set up with minikube! -!!! warning MiniKube is not a production-grade method of deploying Kubernetes ## K3S @@ -80,9 +78,8 @@ Ubuntu ticks all the boxes for k3s to run on and allows you to follow lots of ot Firstly, download yourself a version of Ubuntu Server from [here](https://ubuntu.com/download/server) (Whatever is latest) Then spin yourself up as many systems as you need with the following guide -!!! note I am running a 3 node cluster, with nodes running on Ubuntu 19.04, all virtualized with VMware ESXi - Your setup doesn't need to be as complex as mine, you can use 3 old Dell OptiPlex if you really want 🙂 + Your setup doesn't need to be as complex as mine, you can use 3 old Dell OptiPlex if you really want 1. Insert your installation medium into the machine, and boot it. 2. Select your language @@ -146,14 +143,12 @@ Number of key(s) added: 1 You will want to do this once for every machine, replacing the hostname with the next node's hostname each time. -!!! note If your hostnames aren't resolving correctly, try adding them to your `/etc/hosts` file ### Installation If you have access to the premix repository, you can download the ansible-playbook and follow the steps contained in there, if not, sit back and prepare to do it manually. -!!! tip Becoming a patron will allow you to get the ansible-playbook to set up k3s on your own hosts. For as little as $5/month you can get access to the ansible playbooks for this recipe, and more! See [funkypenguin's Patreon](https://www.patreon.com/funkypenguin) for more! That is all! You have yourself a Kubernetes cluster for you and your dog to enjoy. @@ -284,19 +278,19 @@ That is all! You have yourself a Kubernetes cluster for you and your dog to enjo DRP, or the Digital Rebar Provisioning Tool, is a tool designed to automatically set up your cluster, installing an operating system for you, and doing all the configuration like we did in the k3s setup. -This section is WIP, instead, try using the K3S guide above 🙂 +This section is WIP, instead, try using the K3S guide above ## Where from now Now that you have wasted half a lifetime on installing your very own cluster, you can add more to it. Like a load balancer! -* [Start](/kubernetes/start/) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? +* [Start](https://geek-cookbook.funkypenguin.co.nz/kubernetes/start/) - Why Kubernetes? +* [Design](https://geek-cookbook.funkypenguin.co.nz/kubernetes/design/) - How does it fit together? 
* Cluster (this page) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm +* [Load Balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/) - Setup inbound access +* [Snapshots](https://geek-cookbook.funkypenguin.co.nz/kubernetes/snapshots/) - Automatically backup your persistent data +* [Helm](https://geek-cookbook.funkypenguin.co.nz/kubernetes/helm/) - Uber-recipes from fellow geeks +* [Traefik](https://geek-cookbook.funkypenguin.co.nz/kubernetes/traefik/) - Traefik Ingress via Helm ## About your Chef @@ -304,7 +298,7 @@ This article, believe it or not, was not diced up by your regular chef (funkypen Instead, today's article was diced up by HexF, a fellow kiwi (hence a lot of kiwi references) who enjoys his sysadmin time. Feel free to talk to today's chef in the discord, or see one of his many other links that you can follow below -[Twitter](https://hexf.me/api/social/twitter/geekcookbook) • [Website](https://hexf.me/api/social/website/geekcookbook) • [Github](https://hexf.me/api/social/github/geekcookbook) +[Twitter](https://hexf.me/api/social/twitter/geekcookbook) [Website](https://hexf.me/api/social/website/geekcookbook) [Github](https://hexf.me/api/social/github/geekcookbook) -
-
-
- -
- -
-
- - -
- - -
-
-
- +
## Your comments? 💬 + +[patreon]: https://www.patreon.com/bePatron?u=6982506 +[github_sponsor]: https://github.com/sponsors/funkypenguin \ No newline at end of file diff --git a/scripts/serve.sh b/scripts/serve.sh new file mode 100755 index 00000000..4dee3a80 --- /dev/null +++ b/scripts/serve.sh @@ -0,0 +1,4 @@ +#!/bin/bash +docker pull squidfunk/mkdocs-material:latest +docker build . -t funkypenguin/mkdocs-material +docker run --rm --name mkdocs-material -it -p 8123:8000 -v ${PWD}:/docs funkypenguin/mkdocs-material
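For local previews, usage of the new `scripts/serve.sh` helper would look roughly like this; it assumes the script is run from the repository root (where the Dockerfile and mkdocs config are expected to live), since it bind-mounts `${PWD}` into the container:

```bash
# Hypothetical local preview workflow (paths are assumptions; the port mapping comes from the script above)
cd ~/src/geek-cookbook      # wherever the repo was cloned - an assumed path, not documented
./scripts/serve.sh          # pulls the base image, builds, and serves the docs
# then browse to http://localhost:8123 (the script maps host 8123 to the container's port 8000)
```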