Virtual DSM in a Docker container.
- Multiple disks
- KVM acceleration
- Upgrades supported
Via Docker Compose:

```yaml
services:
  dsm:
    container_name: dsm
    image: vdsm/virtual-dsm
    environment:
      DISK_SIZE: "256G"
    devices:
      - /dev/kvm
      - /dev/net/tun
    cap_add:
      - NET_ADMIN
    ports:
      - 5000:5000
    volumes:
      - ./dsm:/storage
    restart: always
    stop_grace_period: 2m
```

Via Docker CLI:

```bash
docker run -it --rm --name dsm -e "DISK_SIZE=256G" -p 5000:5000 --device=/dev/kvm --device=/dev/net/tun --cap-add NET_ADMIN -v "${PWD:-.}/dsm:/storage" --stop-timeout 120 docker.io/vdsm/virtual-dsm
```

Via Kubernetes:

```bash
kubectl apply -f https://raw.githubusercontent.com/vdsm/virtual-dsm/refs/heads/master/kubernetes.yml
```

Very simple! These are the steps:
- Start the container and connect to port 5000 using your web browser.
- Wait until DSM finishes its installation.
- Choose a username and password, and you will be taken to the desktop.

Enjoy your brand new NAS, and don't forget to star this repo!
To change the storage location, include the following bind mount in your compose file:
```yaml
volumes:
  - ./dsm:/storage
```

Replace the example path `./dsm` with the desired storage folder or named volume.
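As a minimal sketch of the named-volume variant (the volume name `dsm_storage` is just an example, not something the image requires):

```yaml
services:
  dsm:
    volumes:
      - dsm_storage:/storage

volumes:
  dsm_storage:
```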
To expand the default size of 256 GB, locate the DISK_SIZE setting in your compose file and modify it to your preferred capacity:
```yaml
environment:
  DISK_SIZE: "512G"
```

Tip: This can also be used to resize the existing disk to a larger capacity without any data loss.
To create additional disks, modify your compose file like this:
```yaml
environment:
  DISK2_SIZE: "500G"
  DISK3_SIZE: "750G"
volumes:
  - ./example2:/storage2
  - ./example3:/storage3
```

It is possible to pass through disk devices or partitions directly by adding them to your compose file in this way:
```yaml
devices:
  - /dev/sdb:/disk1
  - /dev/sdc1:/disk2
```

Make sure the device is completely empty (without any filesystem), otherwise DSM may not format it as a volume.
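As a sanity check, you can confirm on the host that the device carries no filesystem signatures before passing it through. A minimal sketch, assuming `/dev/sdb` is the disk you intend to use (note that `wipefs` is destructive):

```bash
# List the device and any filesystem signatures it contains
lsblk -f /dev/sdb

# If old signatures are present, erase them (this DESTROYS any existing data on /dev/sdb)
sudo wipefs --all /dev/sdb
```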
By default, Virtual DSM will be allowed to use 2 CPU cores and 2 GB of RAM.
If you want to adjust this, you can specify the desired amount using the following environment variables:
```yaml
environment:
  RAM_SIZE: "4G"
  CPU_CORES: "4"
```

To verify whether your system supports KVM, first check if your software is compatible using this chart:
| Product | Linux | Win11 | Win10 | macOS |
|---|---|---|---|---|
| Docker CLI | ✅ | ✅ | ❌ | ❌ |
| Docker Desktop | ❌ | ✅ | ❌ | ❌ |
| Podman CLI | ✅ | ✅ | ❌ | ❌ |
| Podman Desktop | ✅ | ✅ | ❌ | ❌ |
After that you can run the following commands in Linux to check your system:
```bash
sudo apt install cpu-checker
sudo kvm-ok
```

If you receive an error from `kvm-ok` indicating that KVM cannot be used, please check whether:
- the virtualization extensions (Intel VT-x or AMD SVM) are enabled in your BIOS.
- you enabled "nested virtualization" if you are running the container inside a virtual machine.
- you are not using a cloud provider, as most of them do not allow nested virtualization for their VPS's.
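If the `cpu-checker` package is not available on your distribution, a rough alternative is to inspect the CPU flags and the KVM device node directly (a minimal sketch; `vmx` is the Intel flag, `svm` the AMD one):

```bash
# A non-zero count means the CPU advertises the VT-x (vmx) or AMD-V (svm) extension
grep -Ec '(vmx|svm)' /proc/cpuinfo

# The KVM device node should exist and be readable/writable by your user
ls -l /dev/kvm
```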
If you did not receive any error from `kvm-ok` but the container still complains about a missing KVM device, it could help to add `privileged: true` to your compose file (or `sudo` to your `docker` command) to rule out any permission issue.
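For example, a minimal sketch of that troubleshooting change in the compose file (remember to remove it again once the permission problem has been found):

```yaml
services:
  dsm:
    # Grants the container broad access to host devices; use only to rule out permission issues
    privileged: true
```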
By default, the container uses bridge networking, which shares the IP address with the host.
If you want to assign an individual IP address to the container, you can create a macvlan network as follows:
```bash
docker network create -d macvlan \
    --subnet=192.168.0.0/24 \
    --gateway=192.168.0.1 \
    --ip-range=192.168.0.100/28 \
    -o parent=eth0 vdsm
```

Be sure to modify these values to match your local subnet.
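If you are unsure which parent interface and subnet to use, you can usually read them from the host's routing table. A minimal sketch (interface names such as `eth0` vary per system):

```bash
# Shows the default gateway and the name of the parent interface (eth0, enp3s0, ...)
ip route show default

# Shows the host's IPv4 address and subnet on that interface
ip -4 addr show eth0
```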
Once you have created the network, change your compose file to look as follows:
```yaml
services:
  dsm:
    container_name: dsm
    ..<snip>..
    networks:
      vdsm:
        ipv4_address: 192.168.0.100

networks:
  vdsm:
    external: true
```

An added benefit of this approach is that you won't have to perform any port mapping anymore, since all ports will be exposed by default.
Important: This IP address won't be accessible from the Docker host due to the design of macvlan, which doesn't permit communication between the two. If this is a concern, you need to create a second macvlan as a workaround.
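A minimal sketch of such a workaround on the host, assuming `eth0` is the parent interface, `192.168.0.200/32` is a free address in your subnet, and `192.168.0.100` is the container's address (adjust to your network; this setup does not persist across reboots unless added to your network configuration):

```bash
# Create a second macvlan interface on the host, attached to the same parent interface
sudo ip link add macvlan-shim link eth0 type macvlan mode bridge

# Give it an unused address from the LAN and bring it up
sudo ip addr add 192.168.0.200/32 dev macvlan-shim
sudo ip link set macvlan-shim up

# Route traffic for the container's address through this interface
sudo ip route add 192.168.0.100/32 dev macvlan-shim
```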
After configuring the container for macvlan, it is possible for DSM to become part of your home network by requesting an IP from your router, just like your other devices.
To enable this mode, in which the container and DSM will have separate IP addresses, add the following lines to your compose file:
```yaml
environment:
  DHCP: "Y"
devices:
  - /dev/vhost-net
device_cgroup_rules:
  - 'c *:* rwm'
```

To pass through your Intel GPU, add the following lines to your compose file:
```yaml
environment:
  GPU: "Y"
devices:
  - /dev/dri
```

Note: This can be used to enable the facial recognition function in Synology Photos, but does not provide hardware transcoding for video.
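Before enabling this, you can check on the host whether the Intel GPU device nodes are actually exposed (a quick sanity check; the exact node names vary):

```bash
# The directory should contain a card* and a renderD* node when the GPU driver is loaded
ls -l /dev/dri
```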
By default, version 7.2 will be installed, but if you prefer an older version, you can add the download URL of the .pat file to your compose file as follows:
```yaml
environment:
  URL: "https://global.synologydownload.com/download/DSM/release/7.0.1/42218/DSM_VirtualDSM_42218.pat"
```

With this method, it is even possible to switch back and forth between versions while keeping your file data intact.
Alternatively, you can also skip the download and use a local file instead, by binding it in your compose file in this way:
```yaml
volumes:
  - ./DSM_VirtualDSM_42218.pat:/boot.pat
```

Replace the example path `./DSM_VirtualDSM_42218.pat` with the filename of your desired `.pat` file. The value of `URL` will be ignored in this case.
Compared to a standard DSM installation, there are only two minor differences: the Virtual Machine Manager package is not available, and Surveillance Station will not include any free licenses.
This project contains only open-source code and does not distribute any copyrighted material, nor does it try to circumvent any copyright protection measures. So under all applicable laws, this project will be considered legal.
However, by installing Synology's Virtual DSM, you must accept their end-user license agreement, which does not permit installation on non-Synology hardware. So only run this container on an official Synology NAS, as any other use will be a violation of their terms and conditions.
Only run this container on Synology hardware; any other use is not permitted by their EULA. The product names, logos, brands, and other trademarks referred to within this project are the property of their respective trademark holders. This project is not affiliated with, sponsored, or endorsed by Synology, Inc.