To configure the project, we have created the folder /greenhouse, which will contain this repository with all the DevOps configs.
See below the tree of folders:
/greenhouse/
└── ivy-automation
├── ansible
│ ├── ansible.cfg
│ ├── ansible_vault_password
│ ├── inventory
│ │ ├── computers
│ │ └── host_vars
│ │ ├── debian.yml
│ │ ├── rpi.yml
│ │ ├── vault.yml (only on the machine. Ignored on repo.)
│ │ └── w3070.yml
│ ├── playbooks
│ │ ├── ping.yml
│ │ └── variable_checker.yml
│ └── ssh
│ ├── id_ansible
│ └── id_ansible.pub
├── LICENSE
├── profiles
│ └── ...
└── README.md
This file will contain the list of IPs, hostnames, or DNS names that Ansible will manage. In the ansible.cfg file, we have added the inventory variable, which holds the path to the main inventory file we will use.
all:
  children:
    windows:
      hosts:
        w3070:
    linux:
      hosts:
        rpi:
        debian:
    vbox:
      hosts:
        debian:
    greenhouse:
      hosts:
        w3070:
        rpi:
        debian:
Let's first run the next command to ensure that Ansible is able to reach all the machines listed in the inventory file.
$ ansible all --key-file /path/to/ssh/key -i /path/to/inventory/file -m ping --limit {host-name}
# ex
$ ansible all -i inventory.yaml -m win_ping --limit w3070
$ ansible all -i inventory.yaml -m ping --limit rpi
Using the official Ansible docs for Windows setup as a reference:
# Check versions available
> winget search Microsoft.PowerShell
# Install
> winget install --id Microsoft.PowerShell --source winget
> winget install --id Microsoft.PowerShell.Preview --source winget
To check the current keys, check the folder /home/{user}/.ssh. Inside should be located the known_hosts file plus the generated keys.
# To generate a key, execute the next command:
$ ssh-keygen -t ed25519 -C Ansible
# To copy the ssh key to a Server
$ ssh-copy-id -i {path of public ssh key, e.g.: /home/gh/.ssh/id.pub} {IP of the Server}
To make the setup, we created the file inventory/host_vars/vault.yml and added all the credentials there, so we can reference them later in playbooks.
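As an illustration, vault.yml holds the secret values and the per-host files reference them (the variable names below are hypothetical, not taken from the repo):
# inventory/host_vars/vault.yml — hypothetical variable names
vault_w3070_password: "super-secret"
# inventory/host_vars/w3070.yml — reference the vaulted value
ansible_password: "{{ vault_w3070_password }}"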
Once created, just run ansible-vault encrypt:
$ ansible-vault encrypt --vault-password-file ansible_vault_password inventory/host_vars/vault.yml
$ ansible-vault view --vault-password-file ansible_vault_password inventory/host_vars/vault.yml
$ ansible-vault edit --vault-password-file ansible_vault_password inventory/host_vars/vault.yml
In the ansible.cfg file, we have added the vault_password_file variable, which points to the file holding the password used to encrypt the vault. With this, the --vault-password-file ansible_vault_password flag is no longer required.
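For reference, a minimal ansible.cfg sketch covering the variables mentioned above (paths taken from the repository tree; treat it as illustrative, not the exact committed file):
[defaults]
inventory = inventory/computers
vault_password_file = ansible_vault_password
private_key_file = ssh/id_ansible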
The .bashrc file includes a few tiny functions that help make the environment more comfortable.
| Variable Name | Description | Example |
|---|---|---|
| BASE_GREENHOUSE_WORKSPACE | Main folder where the Greenhouse repositories are placed. | /c/Users/mike/Documents/Workspace |
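As an example of the kind of helper it contains (the function name here is hypothetical; the variable comes from the table above):
# Jump into a repository inside the Greenhouse workspace
export BASE_GREENHOUSE_WORKSPACE="/c/Users/mike/Documents/Workspace"
ghcd() {
  cd "${BASE_GREENHOUSE_WORKSPACE}/${1:-}" || return
}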
Explanation of commands:
- systemctl start <service>: starts the service immediately (in this case, SSH).
- systemctl enable <service>: enables the service to start automatically at system boot.
- systemctl status <service>: shows the current status of the service (running, stopped, etc.).
- systemctl is-enabled <service>: checks if the service is enabled to start on boot.
- systemctl stop <service>: stops the service immediately.
- systemctl disable <service>: disables the service from starting at boot.
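For example, a typical sequence to get SSH running now and on every boot (the service is named ssh on Raspberry Pi OS / Debian):
$ sudo systemctl start ssh       # start the service now
$ sudo systemctl enable ssh      # start it automatically at boot
$ systemctl status ssh           # confirm it is running
$ systemctl is-enabled ssh       # confirm it starts on boot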
The main file has been split to avoid having one big file with everything. This makes maintenance and reviewing changes more comfortable.
| Filename | Description & Content |
|---|---|
| docker-compose.yml | Networks, volumes, and includes of the rest of the docker-compose files. |
| docker-compose.critical.yml | CA Server & AdGuardHome |
| docker-compose.proxy.yml | Traefik |
| docker-compose.frontend.yml | Greenhouse Main Page. |
| docker-compose.apps.yml | NoIP, TeamSpeak & Traefik Dummy Whoami |
| docker-compose.vpn.yml | Wireguard EZ |
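As a sketch of how the top-level file can tie these together (this assumes a Docker Compose version that supports the top-level include key; the network values come from the .env shown below):
# docker-compose.yml — illustrative layout
include:
  - docker-compose.critical.yml
  - docker-compose.proxy.yml
  - docker-compose.frontend.yml
  - docker-compose.apps.yml
  - docker-compose.vpn.yml
networks:
  default:
    name: ${greenhouse_network_name}
    ipam:
      config:
        - subnet: ${greenhouse_network_subnet}
          gateway: ${greenhouse_network_gateway}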
To spin up this project, you will need to set up several environment files:
- The main environment file.
- One per service that requires its own environment file (for example, NoIP-duc for credentials).
You can follow the templates defined in .template.env. Each service that requires its own file should have a .template file as well.
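For example, assuming the main env file is simply named .env (the target file name is an assumption; follow the templates in the repo):
$ cp .template.env .env
The main environment file looks like this: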
ENV="dev"
DOMAIN="${ENV}.greenhouse.ogt"
# Scales
# Info pill: these are the numbers of instances that will be
# created when the Docker Compose is run. Most of the
# services will only accept one.
# So, summarizing:
# - Use 0 or 1 depending on whether you want the service deployed.
# - If you want multiple instances of one service... be sure it is going to work
greenhouse_scale_noip_sync=0
greenhouse_scale_traefik=1
greenhouse_scale_traefik_whoami=1
greenhouse_scale_ca=1
greenhouse_scale_adguard=1
greenhouse_scale_wireguard=1
greenhouse_scale_nginx=1
greenhouse_scale_teamspeak=0
# Networking
greenhouse_network_name="${ENV}-greenhouse-infra"
greenhouse_network_subnet="192.168.42.0/24"
greenhouse_network_gateway="192.168.42.42"
# Main Page
greenhouse_nginx_static_pages_ip="192.168.42.10"
greenhouse_nginx_static_pages_host="${DOMAIN}"
greenhouse_nginx_static_pages_volume_conf="${PWD}/nginx/${ENV}/conf"
greenhouse_nginx_static_pages_volume_html="${PWD}/nginx/${ENV}/html"
# Step CA - Certificate Authority Sever
greenhouse_ca_ip="192.168.42.70"
greenhouse_ca_port=9000
greenhouse_ca_host="ca.${DOMAIN}"
greenhouse_ca_volume_certs="${PWD}/step-ca/${ENV}/certs"
greenhouse_ca_volume_secrets="${PWD}/step-ca/${ENV}/secrets"
greenhouse_ca_volume_config="${PWD}/step-ca/${ENV}/config"
greenhouse_ca_config_name="Greenhouse ${ENV} CA Server"
greenhouse_ca_config_dns_names="localhost,*.${DOMAIN},${DOMAIN}"
greenhouse_ca_config_provisioner_name=admin
greenhouse_ca_config_ssh=greenhouse
greenhouse_ca_config_password=ogt-0123456789-@@
# AdGuardHome
greenhouse_adguard_ip="192.168.42.30"
greenhouse_adguard_host="adguard.${DOMAIN}"
greenhouse_adguard_volume_work="${PWD}/adguard/${ENV}/work"
greenhouse_adguard_volume_conf="${PWD}/adguard/${ENV}/conf"
# TeamSpeak
greenhouse_teamspeak_ip="192.168.42.40"
greenhouse_teamspeak_port_voice=9987
greenhouse_teamspeak_port_query=10011
greenhouse_teamspeak_port_file=30033
greenhouse_teamspeak_image="ertagh/teamspeak3-server"
# Traefik
greenhouse_traefik_log_level=INFO # Default INFO. Available: DEBUG INFO WARN ERROR FATAL PANIC
greenhouse_traefik_api_dashboard=false
greenhouse_traefik_api_insecure=false
greenhouse_traefik_ip="192.168.42.50"
greenhouse_traefik_host="traefik.${DOMAIN}"
[email protected]
greenhouse_traefik_acme_certificates_duration=168 # Weekly Refresh
greenhouse_traefik_whoami_ip="192.168.42.60"
greenhouse_traefik_whoami_host="traefik.${DOMAIN}"
# Wireguard VPN
greenhouse_wireguard_ip="192.168.42.20"
greenhouse_wireguard_port_ui=51821
greenhouse_wireguard_port_vpn=51820
greenhouse_wireguard_host="vpn.${DOMAIN}"
greenhouse_wireguard_volume="${PWD}/wireguard/${ENV}"
greenhouse_wireguard_ui_insecure=false
First of all, log in to your router and apply the port forwarding for the machine that you will use as the host of this project.
Each router has its own way of configuring this, so time to use Google.
| Application | Default Ports | Description | Port Forwarding |
|---|---|---|---|
| Wireguard | 51820 | VPN connectivity | ✅ |
| Wireguard | 51821 | UI | ❌ |
| Adguard | 3000 | Initial config | ❌ |
| Adguard | 53 | DNS | ❌ |
| Adguard | 80 | UI | ❌ |
| Main Nginx | 80 | Dummy UI | ❌ |
| NoIP-duc | - | No-IP sync | ❌ |
| Teamspeak (ertagh) | 9987 | Voice | ✅ |
| Teamspeak (ertagh) | 10011 | Server Query | ❌ |
| Teamspeak (ertagh) | 30033 | File transfer | ❌ |
| Traefik | 8080 | Dashboard | ❌ |
| Traefik | Any port required to forward | You should move the port from the service to Traefik so it handles the request, instead of exposing it directly | ❓ |
At the moment, the only port that needs to be opened in the firewall is the WireGuard connectivity port, 51820.
Search and open Windows Defender Firewall. Go to Advanced settings
Go to Inbound Rules and New Rule... as we are allowing external connections.
Click on Port, add the list of ports provided above, and select the protocol (note that WireGuard's 51820 is UDP, as the lsof output further below shows).
For the moment, let's go with the Allow the connection option.
Once completed, you will see the new rule on the Inbound Rules window. In this sample, Greenhouse Ports.
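If you prefer the terminal, a roughly equivalent rule can be created from an elevated PowerShell prompt (rule name as in the sample above):
> New-NetFirewallRule -DisplayName "Greenhouse Ports" -Direction Inbound -Protocol UDP -LocalPort 51820 -Action Allow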
Open the terminal and modify the next file:
> sudo nano /etc/pf.conf
For each port that you want to open, add the next line:
pass in proto tcp from any to any port [PORT]
(Use proto udp for WireGuard's 51820.) The next lines are used to load the rules and activate/deactivate pf:
> sudo pfctl -f /etc/pf.conf
# Activate
> sudo pfctl -e
# Deactivate
> sudo pfctl -d
To test if it is working or not:
> sudo lsof -i :[PORT]
# The expected response should be something similar to this:
> sudo lsof -i :51820
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 3836 usename 200u IPv6 0x0000000000000001 0t0 UDP *:51820
> sudo lsof -i :51821
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 3836 usename 199u IPv6 0x0000000000000001 0t0 TCP *:51821 (LISTEN)
How to forward traffic from No-IP to the computer
The way Internet providers manage our public IP can differ: they can update it when we restart the router, or at any moment.
There are lots of webpages that can provide this info (ipinfo.io, ipaddress.my, showmyip.com, whatismyip.com...).
Why No-IP? They offer one hostname for free, which we will use to forward our traffic. For, here it comes, FREE.
The screenshot below shows how the No-IP hostname page looks. Here you will see your hostname plus the IP it is currently pointing at. When you are developing, you can either (a) ping your public IP directly or (b) use this domain.
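A quick way to check that the hostname is tracking your public IP (the hostname below is a placeholder for your own):
$ curl -s https://ipinfo.io/ip           # your current public IP
$ nslookup {your-hostname}.ddns.net      # the IP your No-IP hostname resolves to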
For this, we will be using the smallstep/step-ca image.
Follow their documentation page to make an initial setup of the server before continuing.
Greenhouse aims to give you your own local domain, accessible only once you are connected to the VPN. Because of that, you will not need to buy any domain or trust any external CA, no matter how open-source they are.
Be aware, if you are using a Raspberry Pi as I am, to check this link. In my case, two issues happened:
The first one was fixed by applying the changes from the link regarding the DB:
"db": {
  "type": "badger",
  "dataSource": "/home/step/db",
  "badgerFileLoadingMode": "FileIO"
},
Later, I had to update the permissions of my volume folders. But this is my own issue, as my users are not very well configured:
docker run --rm -v prod-ca-db:/data alpine chown -R 1000:1000 /data
If everything works as expected, execute the commands below to include the new provisioner.
# Log in to the container
> docker exec -it ca sh
# Add the new ACME provisioner. After this, restart the container to ensure the config has been applied.
> step ca provisioner add greenhouse-acme --type ACME
With this, it should be enough to make it work!
DON'T FORGET TO INCLUDE THE NEW CERTIFICATES ON YOUR DEVICES!
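As a sketch, one way to fetch the root certificate for installing on devices is the step CLI itself (host and port come from the .env above; the fingerprint is printed during step ca init):
$ step ca root root_ca.crt --ca-url https://ca.dev.greenhouse.ogt:9000 --fingerprint {fingerprint}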
Additionally, I added a few improvements to the ca.json. Not sure if they are necessary, but I will list them below:
...
"dnsNames": [
"localhost",
"ca.dev.greenhouse.ogt",
"traefik.dev.greenhouse.ogt",
"vpn.dev.greenhouse.ogt",
"traefik.dev.greenhouse.ogt",
"dev.greenhouse.ogt"
],
...
"policy": {
"x509": {
"allow": {
"dns": ["*.dev.greenhouse.ogt"]
},
"allowWildcardNames": false
},
"host": {
"allow": {
"dns": ["*.dev.greenhouse.ogt"]
}
}
},
...
{
"type": "ACME",
"name": "greenhouse-acme",
"claims": {
...
},
"challenges": [
"http-01"
],
"attestationFormats": [
"apple",
"step",
"tpm"
],
"options": {
"x509": {},
"ssh": {}
}
}
For the dev environment, the files are already committed to the repository, but when it is time to deploy on your server, you will have to do it from scratch.
The current configurations that you will have to handle are:
- DNS configuration.
- Host (in our case provided by NoIP).
- User creation.
Make sure to do all the changes before creating any user. Once a user is imported to the client, there are configurations that, if they vary (like DNS changes), will force you to reimport it.
A quick note on the DNS configuration: add the IP of AdGuard first, and some extra DNS after. In our case, we are using the DNS provided by the EU.
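In the client config, that translates to the DNS line of the [Interface] section, something along these lines (the AdGuard IP comes from the .env above; the second resolver is a placeholder for your extra DNS):
DNS = 192.168.42.30, {extra-dns-ip}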
We will be using this service as a reverse proxy.
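As a minimal sketch of what routing a service through Traefik looks like with labels (the router name and certresolver value are illustrative; the whoami service mirrors the dummy one from the compose table above):
services:
  traefik-whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`${greenhouse_traefik_whoami_host}`)"
      - "traefik.http.routers.whoami.tls.certresolver=greenhouse"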
On the first try, I went for the official image provided by TeamSpeak, but it does not support the RPi architecture. I did a little research, and after looking at the numbers, the ertagh version is the one I like the most. The bad side is that it does not use any volume, so if it goes down or similar, the configuration will go bananas :D
In any case, the port forwarding is only applied for port 9987 (voice channel); for security reasons, access to ports 10011 (Server Query) & 30033 (File Transfer) will remain closed, local network only.