Configuration of my Proxmox Server
Download the latest Proxmox ISO from this link. You can either use IPMI to mount the ISO to the host over the network, or write the ISO to a USB stick of sufficient size. Following the official instructions, run these commands:
Plug in the USB stick and find the correct /dev/sdX path:
lsblk

You might need to delete all the partitions on the USB stick first. Use Gnome Disks for that. Once the partitions have been deleted, run the following command to write the ISO to the USB stick:
dd bs=1M conv=fdatasync if=./proxmox-ve_*.iso of=/dev/XYZ

Once the USB stick is created, eject it and plug it into the server.
Connect to the host's IPMI web page and start up the iKVM CONSOLE. Boot the machine and enter the BIOS (typically DEL or F2). Verify the boot order for the host so that the main disk(s) will be booted first. When you go to save the BIOS config, choose the SAVE option that also includes a next-boot override to boot from the USB stick.
When the server boots again, it will use the USB stick to boot the Proxmox installer. Proceed through the GUI installer. Make sure to choose ZFS MIRROR on your designated OS disks. Once the installer is finished, reboot the node. Remove the USB stick before the BIOS screen appears.
While still using the IPMI iKVM Console, log in to the Proxmox CLI using the root creds you created during the install. Add the non-free-firmware Debian repos to the host's /etc/apt/sources.list. Edit that file and add non-free-firmware to the end of each Debian repo line. They should look like this (for PVE 8.x):
deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main contrib non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main contrib non-free-firmware

Save the file. Now update your sources and install the microcode package for your CPU type:
apt update
apt install amd64-microcode

Use intel-microcode instead if the host has an Intel CPU. Follow the steps below to configure the host in preparation for installing containers and VMs.
The interfaces on the host have to be configured into bonds and/or bridges for use by containers and VMs. The most redundant network configuration is a bond of two or more NICs in an 802.3ad LACP group. Your switch has to support this configuration. Typically, the Linux host (Proxmox) will be the "active" side of the connection. A Cisco switch can be configured as the ACTIVE side (mode active) or the PASSIVE side (mode passive); as long as at least one side is active, the port-channel interface and bond will come up. These are the commands to run on the Cisco switch to create an LACP etherchannel.
First you must configure the HASH type that the LACP etherchannel will use:
conf t
port-channel load-balance src-dst-ip
end

conf t
int gX/X/X
description Proxmox Host eno1
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk allowed vlan x,y-z
switchport nonegotiate
channel-group NUMBER mode passive

Run those commands on the two or more interfaces you will connect to the Proxmox host. Now configure the actual port-channel NUMBER interface.
interface port-channel NUMBER
description Proxmox Host LACP
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 10-13,19-24,666,997,999
switchport nonegotiate

Don't forget to write your changes to the switch's memory.
wri

Now it is time to configure the network settings on the Proxmox host. Look at this repo's configuration files under /etc/systemd/network/. Add these to the node, then run update-initramfs -u -k all and reboot. When the machine comes back up, edit /etc/network/interfaces and replace its contents with the config in the repo's file; this is the config you need to enable LACP on the Proxmox host. Once you've copied the config onto the host and restarted the network stack, try pinging your local gateway from the host, then try pinging an external IP.
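For reference, here is a minimal sketch of the kind of LACP config that ends up in /etc/network/interfaces. The NIC names (eno1/eno2), the bridge addressing, and the VLAN-aware settings are assumptions for illustration only; treat the repo's file as the source of truth.

# Sketch only: substitute your own NIC names, VLANs, and addressing
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

You can restart the network stack with systemctl restart networking (or ifreload -a, since PVE ships ifupdown2 by default).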
Now that the PVE host is accessible over the network, you will need to configure secure SSH access. The best method for this is to copy your pre-created SSH keys from your laptop to the PVE root account.
ssh-copy-id -i ~/.ssh/id_ed25519.pub [email protected]

You can then disable password-based SSH access for the root user.
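One way to do that (a sketch; adjust to your own SSH hardening policy) is to set these options in /etc/ssh/sshd_config on the host:

# /etc/ssh/sshd_config
PermitRootLogin prohibit-password   # root may still log in with keys, but not with a password
PasswordAuthentication no           # optional: disable password logins for all users

Then restart the daemon with systemctl restart ssh, and confirm you can still log in with your key before closing your existing session.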
Note
ALWAYS examine a script before blindly running it. It could contain malicious code and a few moments reviewing the script will help you learn what it's actually doing.
There are community-developed scripts that will assist you with installing CTs or VMs on your new host. These scripts are hosted here. This is the link for the base repo for these scripts. Specifically, we're going to use the Post-Install script. This script will update the APT sources list, removing any references to the Enterprise repos and correcting the Debian repos, and it will remove the subscription nag screen on PVE login. It will then perform an apt update and apt dist-upgrade and finally prompt to reboot the host. You can get and run the script by SSHing to the host and running the following command on the host:
bash -c "$(wget -qLO - https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/post-pve-install.sh)"Choose yes for most options, except the TESTING repo and anything else that you don't want. The script will prompt you to reboot the host.
We want Proxmox to send an email when there's an issue on the host. There are several steps that are pulled from here.
apt update
apt install -y libsasl2-modules mailutils postfix-pcre

Now, follow the directions in the link to create an app password for sending emails. Once that's done, go back to Proxmox and perform:
cd /etc/postfix
nano sasl_passwd

In the new file, enter the information you created from Gmail in the following format:
smtp.gmail.com [email protected]:password

Now you need to set permissions for the file and tell postfix to hash it into a lookup database:
chmod 600 sasl_passwd
postmap hash:sasl_passwd

Now you have to add some configuration to main.cf:
nano /etc/postfix/main.cf

Comment out (with a #) or remove the existing line that starts with relayhost and the one that starts with mydestination. Add the following lines to the bottom of the file.
relayhost = smtp.gmail.com:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/Entrust_Root_Certification_Authority.pem
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_tls_session_cache
smtp_tls_session_cache_timeout = 3600s

Save the file and reload postfix:
postfix reload

Now you can change the headers of the sent email so it appears to come from Proxmox itself. This might increase the chance of the email being flagged as spam by Google. Create a config file that we'll use to modify the email headers:
nano /etc/postfix/smtp_header_checks

Enter this information, modifying as needed:
/^From:.*/ REPLACE From: PVE00 <[email protected]>

Now create the postfix DB file of this config file:
postmap hash:/etc/postfix/smtp_header_checks

Edit the postfix main.cf file again:
nano /etc/postfix/main.cf

Add this config to include the header re-writing:
smtp_header_checks = pcre:/etc/postfix/smtp_header_checks

Save the config and reload postfix:
postfix reload

Install the intel-gpu-tools package:
apt install intel-gpu-tools

The intel_gpu_top utility uses special performance-monitoring permissions. Read more about that here. By default, these permissions prevent intel_gpu_top from gathering performance metrics inside containers, which means the Frigate LXC can't run that command. To allow it, you must change the permission level. I got this information from this forum post. Run this command to verify the current permission level:
root@pve01:~# sysctl -n kernel.perf_event_paranoid
4
root@pve01:~#

This must be changed to the more permissive setting of 0. Run this command to make that change; it takes effect immediately.
root@pve01:~# sysctl kernel.perf_event_paranoid=0
root@pve01:~# sysctl -n kernel.perf_event_paranoid
0
root@pve01:~#

To make this change permanent, run the following command:
sysctl -w kernel.perf_event_paranoid=0 >> /etc/sysctl.d/98-intel_gpu_top.conf

The intel_gpu_top command will now run in all containers where you have installed it and passed through the /dev/dri/renderD12x device. You will need to add specific sections to the container's compose.yaml to pass CAP_PERFMON through to the docker container. That will be covered in FRIGATE-CONFIG.MD.
When you create a CT or VM, you will want to assign a network to it. You can do that by assigning a VMBR and then adding a VLAN tag. This can get confusing if you don't remember the VLAN IDs. To get around that, Proxmox can use SDN to configure named zones containing named VNETs that are already assigned the VLAN ID; all you have to do is name the VNET appropriately. First create the zone, then create the VNET for that zone. You can do this from the GUI, or from the CLI with these commands.
pvesh create /cluster/sdn/zones/ --type vlan --zone HomeLAN --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet HomeLAN --alias "Home Client Wired VLAN" --zone HomeLAN --tag 10
pvesh create /cluster/sdn/zones/ --type vlan --zone HomeVoIP --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet HomeVoIP --alias "Home VoIP Phones VLAN" --zone HomeVoIP --tag 11
pvesh create /cluster/sdn/zones/ --type vlan --zone HomeWLAN --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet HomeWLAN --alias "Home Client Wireless VLAN" --zone HomeWLAN --tag 12
pvesh create /cluster/sdn/zones/ --type vlan --zone VidConf --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet VidConf --alias "Home Video Conferencing Devices" --zone VidConf --tag 13
pvesh create /cluster/sdn/zones/ --type vlan --zone Media --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet Media --alias "Home Media VLAN" --zone Media --tag 19
pvesh create /cluster/sdn/zones/ --type vlan --zone Server --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet Server --alias "Home Server VLAN" --zone Server --tag 20
pvesh create /cluster/sdn/zones/ --type vlan --zone IoT --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet HomeIoT --alias "Home Automation VLAN" --zone IoT --tag 21
pvesh create /cluster/sdn/zones/ --type vlan --zone DMZ1 --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet DMZ1 --alias "Home DMZ 1" --zone DMZ1 --tag 22
pvesh create /cluster/sdn/zones/ --type vlan --zone DMZ2 --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet DMZ2 --alias "Home DMZ 2" --zone DMZ2 --tag 23
pvesh create /cluster/sdn/zones/ --type vlan --zone MGMT --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet MGMT --alias "Firewall/Switch Management" --zone MGMT --tag 24
pvesh create /cluster/sdn/zones/ --type vlan --zone Guest --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet Guest --alias "Home Guest VLAN" --zone Guest --tag 666
pvesh create /cluster/sdn/zones/ --type vlan --zone DMZSIP --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet DMZSIP --alias "SIP Gateway DMZ" --zone DMZSIP --tag 997
pvesh create /cluster/sdn/zones/ --type vlan --zone PubInet --bridge vmbr0 --ipam pve
pvesh create /cluster/sdn/vnets --vnet PubInet --alias "Public Internet VLAN" --zone PubInet --tag 999

Deleting a VNET:
pvesh delete /cluster/sdn/vnets/VNETNAMEHERE

Deleting a Zone:
pvesh delete /cluster/sdn/zones/ZONENAMEHERE

You can then apply these SDN changes with this command:
pvesh set /cluster/sdn

Proxmox will need storage created and defined. To do that, you must create local storage on the host. I prefer ZFS file systems.
Note
If you have drives that already had partitions, you won't see those drives as available when you create new storage. Delete the partitions on these drives first by navigating to the host name > Disks. Highlight the disk and click on WIPE.
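If you prefer the CLI, you can clear a disk's partition signatures from the shell instead. This is a sketch, and /dev/sdX is a placeholder you must verify (with lsblk, for example) before running it:

# DESTRUCTIVE: wipes all filesystem/partition signatures on the disk
wipefs -a /dev/sdX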
You can create a storage pool from the web GUI by navigating to HOST > DISKS > ZFS. Click on CREATE: ZFS. Give the following parameters:
NAME: your pool name
RAID LEVEL: MIRROR recommended for 2 disks, or RAIDZ2 for 4+ disks
COMPRESSION: ON
ASHIFT: 12 (default)
In the Web GUI click on DATACENTER > STORAGE. Look at the new storage device you just created. Verify that it is set up for DISK IMAGE, CONTAINER.
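If you prefer the shell, here is a rough CLI equivalent of the GUI steps above. The pool name (tank) and disk paths are assumptions, and the pvesm registration mirrors what the GUI does for you:

# Create a mirrored pool with ashift=12, then enable compression (placeholder disks)
zpool create -o ashift=12 tank mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2
zfs set compression=on tank
# Register the pool with Proxmox for disk images and containers
pvesm add zfspool tank --pool tank --content images,rootdir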
You can also create a new DIRECTORY that can contain ISOs and CT Templates and store that on the ZFS pool you created. From the STORAGE menu, click on ADD > DIRECTORY. Fill out the following fields:
ID: a name with no spaces, ex: ISO_CT
DIRECTORY: /mnt/ZFSPOOLNAMEHERE
CONTENT: click the drop-down and choose VZDump, ISO Image, Container Templates
Click on ADD.
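The same thing can be done from the CLI. A sketch, assuming the pool is mounted at /mnt/ZFSPOOLNAMEHERE and you want the storage ID ISO_CT:

# Register a directory storage for backups, ISOs, and CT templates
pvesm add dir ISO_CT --path /mnt/ZFSPOOLNAMEHERE --content iso,vztmpl,backup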
Note
You can also use the storage on the Proxmox OS root pool for ISOs and CT Templates. That is not recommended as it will be deleted when you upgrade the OS to a new PVE version.
Sometimes it will be required to install special drivers on the Proxmox host in order to share that device with a container. For this host, we'll be configuring the mini PCIe Coral Dual TPU adapter for use in an unprivileged Frigate LXC. This adapter is installed on a PCIe x1 card that is specifically designed for the dual Coral TPU. We will be following the instructions listed here and here. This is also a good thread on the PCIe Coral TPU. We will be substituting in a different GitHub repo for the Gasket driver.
We need to download software on the PVE host in order to compile the DKMS module for the Coral TPU. Follow these steps:
cd ~/
apt update
apt upgrade
apt install pve-headers-$(uname -r) proxmox-default-headers dh-dkms devscripts git dkms lsb-release sudo
mkdir Packages
cd Packages
ssh-keygen -t ed25519 -C "[email protected]"
git clone https://github.com/feranick/gasket-driver.git
cd gasket-driver
debuild -us -uc -tc -b -d
cd ..
dpkg -i gasket-dkms_1.0-18.2_all.deb
reboot

After the reboot, check whether the gasket module was installed by verifying that you have two APEX devices:
ls /dev/apex*

If you see both devices as shown below, you've successfully installed the module!
root@pve01:~# ls /dev/apex*
/dev/apex_0 /dev/apex_1
root@pve01:~#

To upgrade the gasket drivers, run the following commands:
cd Packages/gasket-driver
git pull https://github.com/feranick/gasket-driver.git
debuild -us -uc -tc -b -d
cd ..
dpkg -i gasket-dkms_1.0-18.2_all.deb

If you're planning a PVE upgrade, do this first; if you don't, the upgrade could error during the kernel install. If it does, run the above commands, then run apt dist-upgrade again.
We will be building an unprivileged container for use in managing the host. This container will be built using Debian Bookworm as the template, and then docker and docker compose will be installed. We will then install Portainer as the management tool for the rest of the LXCs that will be built. The Portainer Agent will then be deployed on the other LXCs on the node so they can be managed by the main Portainer container.
From the Web GUI, click on the PVE host name and scroll down to the storage directory you created called ISO_CT. Select CT Templates from the menu and click on the TEMPLATES button. Choose the Debian 12 Bookworm template and download it.
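You can also pull the template from the CLI with pveam. This is a sketch; the exact template filename changes over time, so copy the name that pveam available prints rather than the placeholder below:

pveam update
pveam available | grep debian-12
pveam download ISO_CT debian-12-standard_<version>_amd64.tar.zst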
From the top right-hand corner of the Web GUI, click on CREATE CT. Fill out all the fields as needed. For the initial management template, choose 2 CPUs, 2GB of RAM, 4GB of disk space, and choose UNPRIVILEGED. Create a new root password and copy in your SSH public key.
You must update the packages in the container first.
apt update
apt dist-upgrade

When that is done, reboot the machine. There is no need to install an NTP client as the container takes its time from the host per this document. However, you must set the correct timezone:
timedatectl set-timezone America/Denver

Configure the locales for the Debian container:
dpkg-reconfigure locales

Create the skeleton directories needed for when useradd is run.
mkdir /etc/skel/Documents /etc/skel/Downloads

If creating a CT with multiple vNICs, you should choose one interface to be the default by changing its metric.
apt install ifmetric

Edit /etc/network/interfaces and add metric 1 to your chosen default interface. Restart the networking stack:
systemctl restart networking.service

Boot the container. Once the system is booted, follow the steps from README.md to update the OS and assign the correct timezone.
Create a new user called administrator.
useradd --uid 1000 --user-group --groups sudo --create-home --home-dir /home/administrator --shell /bin/bash administrator

We didn't set the password for the new user in the above command for security reasons: it would be visible to any user listing processes. Use the following command to set a password for the new user:
echo 'administrator:mypass' | chpasswd

Do the same process again to create a user on the container that will run your app. Switch to your new app_user by running su app_user. Create an SSH keypair for that user:
ssh-keygen -t ed25519

Shut down the container.
I modified the instructions from this link and this link.
We will be using systemd unit files to create our mounts on PVE. However, it would be nice if we could verify that the NAS is online before the mounts are started. Pulled from this link. Create a new systemd unit file with this command:
nano /etc/systemd/system/nas01-available.service

[Unit]
Description=Wait for NAS01 to be available
After=network-online.target
Requires=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/bin/bash -c 'until /usr/bin/ping -qW 1 -c1 nas01.theoltmanfamily.net; do sleep 1; done'
RemainAfterExit=yes
TimeoutStartSec=30
[Install]
WantedBy=multi-user.target

Reload the systemd daemon so it sees this new unit file:
systemctl daemon-reload

Enable and start the systemd unit file:
systemctl enable --now nas01-available.service

Do NOT create the app_user above on the PVE host. When you create the systemd file, use the same LXC UID but add 100000 to it. Example:
LXC user: app_user
LXC UID: 2010
PVE UID: 102010
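The 100000 offset comes from the default ID mapping Proxmox uses for unprivileged containers. You can confirm it on the PVE host (assuming the stock mapping is in place):

cat /etc/subuid /etc/subgid
# Default entries look like root:100000:65536, so container UID 2010 becomes 102010 on the host.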
I prefer to create mount points using systemd mount files. To create the mount file, you must name it after the exact path where you want the mount, using dashes instead of forward slashes (the systemd-escape sketch below shows one way to generate the name). To mount a music folder, use the following:
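If you don't want to work out the dash-escaped unit name by hand, systemd can generate it for you; a quick sketch using the music path from the mount below:

systemd-escape -p --suffix=mount /mnt/app_roon/music
# prints: mnt-app_roon-music.mount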
nano /etc/systemd/system/mnt-app_roon-music.mount

In this file, place the following config:
[Unit]
Description=Mount NAS01 Music share at boot
After=nas01-available.service
[Mount]
What=//nas01.theoltmanfamily.net/music
Where=/mnt/app_roon/music
Options=uid=102010,gid=102010,file_mode=0770,dir_mode=0770,credentials=/etc/smbcreds-app_roon,vers=3.0,iocharset=utf8,sec=ntlmssp
Type=cifs
TimeoutSec=30
[Install]
WantedBy=multi-user.target

There are several things we need to create before we can enable and start this systemd unit file. First, you must create the user and group above, then you must create the folder /mnt/app_roon/music, and then you must create the smbcredentials file.
Now create the /etc/smbcreds-app_roon file.
sudo nano /etc/smbcreds-app_roon

Enter the following format, but change the user and password to your requirements:
username=usernamehere
password=passwordhere

Now set the permissions on this file to 660 so only the root user and group can read/write the contents.
chmod 660 /etc/smbcreds-app_roon

Next, you have to create the folder paths referenced in the mount files.
mkdir -p /mnt/app_arrs/{audio,backup,downloads,videos00,videos01,videos04,videos05}
mkdir -p /mnt/app_plex/{audio,backup,videos00,videos01,videos04,videos05}
mkdir -p /mnt/app_roon/{audio,backup}
mkdir /mnt/{joltman,voltman}

Assign the correct UID/GID permissions to each folder recursively.
chown -R 102093:102093 /mnt/app_arrs/*
chown -R 102092:102092 /mnt/app_plex/*
chown -R 102010:102010 /mnt/app_roon/*
chown -R 101001:101001 /mnt/joltman/
chown -R 101002:101002 /mnt/voltman/

You can now start the mount systemd services:
systemctl enable --now mnt-app_roon-music.mount

Now you need to modify the LXC conf file for your container. Example:
nano /etc/pve/lxc/110.conf

At the bottom of the file, add the following mount point as RO:
mp0: /mnt/app_roon/music,mp=/mnt/music,ro=1

To mount the folder read-write, omit the ,ro=1.
Stop then start your LXC container. SSH into the container and verify that the correct folders/files are present in /mnt/music and they have the correct permissions by running ls -lha.
SSH into the container and follow the Docker guide for installing Docker in Debian Bookworm.
# Add Docker's official GPG key:
apt-get update
apt-get dist-upgrade
reboot
apt-get install ca-certificates curl sudo
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin rsync

Now create a folder for all docker container configs and the associated compose.yaml files.
mkdir /docker

Change ownership of this folder:
chown -R app_user:app_user /docker

Don't forget to add your administrator and app_user to the docker group.
usermod -aG docker administrator
usermod -aG docker app_user

You can now use VS Code to connect to this container and create your compose files.
Once you have your main-compose.yaml and separate compose.yaml files created, along with their associated directories, you can start up your containers!
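How you structure main-compose.yaml is up to you. One approach (a sketch that assumes Docker Compose v2.20+ for the top-level include element, and made-up per-app paths) is to have it simply pull in each app's own compose file:

# /docker/main-compose.yaml
include:
  - /docker/portainer/compose.yaml
  - /docker/someapp/compose.yaml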
docker compose -f /docker/main-compose.yaml up -d
I'd like the containers to auto-update the Linux software on a weekly basis at 2AM. To do that, edit your crontab with crontab -e and add:
0 2 * * SUN /usr/bin/apt-get update && /usr/bin/apt-get -y dist-upgrade

Must write this section.
As root in the container, run the following:
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/docker.list
apt update
apt dist-upgrade
reboot

After the reboot, verify the upgrade by running:
cat /etc/os-release