agentenv is a lightweight Rust control plane that manages Firecracker
microVMs and exposes a simple HTTP API for lifecycle operations. It was built
by translating the existing shell scripts used for VM orchestration into a
type-safe service that can be embedded in larger systems.
The API currently supports:
- Creating a VM from a fresh overlay or an existing disk snapshot
- Pausing, resuming, and shutting down running microVMs
- Creating read-only disk snapshots (initially stored on tmpfs and optionally persisted to an HDFS mount)
- Persisting previously created snapshots to durable storage
- Linux host with KVM support and permission to run the Firecracker binary
- `mkfs.ext4` available in `PATH` for provisioning overlay disks and host-share volumes
- Firecracker binary placed at the path referenced by `firecracker_binary` in the configuration file
- Base rootfs images and optional disk snapshots stored in the directories configured via `rootfs_root` and `snapshots_root`
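A quick way to sanity-check these prerequisites (the firecracker path below assumes the example configuration shown later in this README):

```bash
# KVM device present and writable by the current user
test -w /dev/kvm && echo "KVM: ok"

# mkfs.ext4 reachable on PATH
command -v mkfs.ext4

# Firecracker binary executable at the configured location
test -x /usr/local/bin/firecracker && /usr/local/bin/firecracker --version
```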
The fastest path to a working VM on localhost:
- Build or fetch the artifacts (only once)
# Firecracker binary
scripts/get-firecracker.sh --version 1.13.1 --output artifacts/firecracker
# Linux kernel (uncompressed vmlinux)
scripts/build-kernel.sh --version 6.6.30 \
--fragment resources/kernel/firecracker.fragment \
--output artifacts/vmlinux
# Rootfs image
scripts/create-rootfs.sh \
--dockerfile resources/rootfs/Dockerfile \
--output artifacts/rootfs.ext4 \
--size-mib 4096

- Prepare runtime dirs and start the API
scripts/setup-runtime.sh --clean --force --config config/hypervisor.artifacts.toml
cargo build --release
# Grant CAP_NET_ADMIN so agentenv can create TAP/iptables rules
sudo setcap cap_net_admin+ep target/release/agentenv
target/release/agentenv --config config/hypervisor.artifacts.toml --listen 127.0.0.1:8080

- Create a VM (auto-networking; omit the network block)
curl -X POST http://127.0.0.1:8080/v1/vm \
-H "content-type: application/json" \
-d '{
"vm_id": "vm-demo",
"slots": 1,
"disk": {
"kind": "overlay",
"rootfs_id": "rootfs.ext4",
"disk_size_gib": 16
}
}'

- Inspect, pause/resume, snapshot, shutdown
# List VMs and note assigned tap + guest_ip
curl http://127.0.0.1:8080/v1/vm | jq
# Pause / resume
curl -X POST http://127.0.0.1:8080/v1/vm/vm-demo/pause
curl -X POST http://127.0.0.1:8080/v1/vm/vm-demo/resume
# Snapshot and optionally persist
curl -X POST http://127.0.0.1:8080/v1/vm/vm-demo/snapshot \
-H "content-type: application/json" \
-d '{"snapshot_id":"vm-demo-snap1"}'
curl -X POST http://127.0.0.1:8080/v1/snapshots/vm-demo-snap1/persist
# Shutdown
curl -X POST http://127.0.0.1:8080/v1/vm/vm-demo/shutdown

The service reads configuration from a TOML file. Pass the path via the
--config CLI flag or AGENTENV_CONFIG. When no file is provided, sane
defaults (relative data/ directories) are used.
session_root = "/mnt/nvme/sessions"
socket_root = "/mnt/nvme/sockets"
rootfs_root = "/mnt/data/rootfs"
snapshots_root = "/mnt/data/snapshots"
tmpfs_snapshot_root = "/dev/shm/agentenv"
hdfs_snapshot_root = "/mnt/hdfs/firecracker"
kernel_root = "/mnt/data/kernels"
initramfs_root = "/mnt/data/initramfs"
firecracker_binary = "/usr/local/bin/firecracker"
default_kernel = "vmlinux"
default_boot_args = "console=ttyS0 reboot=k panic=1 pci=off acpi=off"
memory_mib_per_slot = 1024
default_slots = 2
default_disk_size_gib = 32
snapshot_state_overhead_mib = 64
[network]
enable = true
tap_prefix = "tap"
subnet_prefix = "172.20"
netmask = 24
# host_interface = "eth0"

Each VM gets its own session directory containing the generated Firecracker configuration, disk files, host-share images, and a JSON metadata record.
When networking is enabled (default), Agentenv automatically provisions TAP devices
(tap0, tap1, ...), assigns /24 subnets rooted at subnet_prefix, enables
NAT through the host interface, and blocks cross-VM traffic. Run the service as
root or grant it CAP_NET_ADMIN so it can create TAP devices and manage
iptables. The REST API response for each VM now includes host_ip/guest_ip
fields describing the assigned /24. When [network].enable = true, any
network block provided in POST /v1/vm is ignored—Agentenv always provisions its
own TAP device and MAC.
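To check the host-side plumbing after a VM comes up, the TAP devices and NAT rules are visible with ordinary tools (the device name and the 172.20 prefix follow the defaults in the example configuration above):

```bash
# TAP device provisioned for the first VM
ip addr show tap0

# NAT rules covering the VM subnets
sudo iptables -t nat -L -n | grep 172.20
```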
Use the helper below to initialize or wipe the runtime folders referenced in
your config (defaults to config/hypervisor.artifacts.toml):
scripts/setup-runtime.sh --clean --force

Drop --clean to simply ensure the directories exist, or pass --config to
point at a different TOML file.
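For example, to make sure the directories referenced by a different config exist without wiping anything:

```bash
scripts/setup-runtime.sh --config /etc/agentenv/hypervisor.toml
```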
cargo run -- --config /etc/agentenv/hypervisor.toml --listen 0.0.0.0:8080

Environment variables:
- `AGENTENV_CONFIG` – optional path to the TOML file
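The same startup can be driven by the environment variable instead of the flag:

```bash
AGENTENV_CONFIG=/etc/agentenv/hypervisor.toml cargo run -- --listen 0.0.0.0:8080
```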
Dedicated helper scripts live under scripts/ to bootstrap artifacts:
- Build the guest rootfs from the sample Dockerfile (or your own):

  scripts/create-rootfs.sh \
    --dockerfile resources/rootfs/Dockerfile \
    --output artifacts/rootfs.ext4 \
    --size-mib 4096

  The Dockerfile installs a minimal Ubuntu environment with OpenSSH and a placeholder
  `/sbin/overlay-init`. Edit it to add packages, SSH keys, or other initialization logic.

- Compile an uncompressed kernel suitable for Firecracker:

  scripts/build-kernel.sh \
    --version 6.6.30 \
    --fragment resources/kernel/firecracker.fragment \
    --output artifacts/vmlinux

  The script downloads the kernel, applies the Firecracker fragment on top of
  `virt_defconfig`, and emits `vmlinux` in `artifacts/`.

- Download the Firecracker binary matching your target architecture:

  scripts/get-firecracker.sh \
    --version 1.13.1 \
    --output artifacts/firecracker

  The script downloads the official `.tgz` bundle (for example,
  https://github.com/firecracker-microvm/firecracker/releases/download/v1.13.1/firecracker-v1.13.1-x86_64.tgz),
  extracts the binary, and copies it to the output path. Pass `--arch aarch64` or `--force` as needed;
  downloads are cached under `build/firecracker/` and copied to the output path with executable permissions.

- Deploy artifacts to a remote host (installs Firecracker + agentenv under `/opt/agentenv` by default):

  cargo build --release
  scripts/setup-remote-host.sh \
    --host [email protected] \
    --firecracker ./firecracker \
    --hypervisor target/release/agentenv \
    --kernel artifacts/vmlinux \
    --rootfs artifacts/rootfs.ext4 \
    --config config/hypervisor.example.toml

  Pass `--no-service` to skip installing the systemd unit, or `--ssh-opts` to supply a private key.
All endpoints accept and return JSON.
`POST /v1/vm` creates a VM. Example request for an overlay rootfs:
{
"vm_id": "vm-demo",
"slots": 2,
"disk": {
"kind": "overlay",
"rootfs_id": "al2023.img",
"disk_size_gib": 64
}
}

`network` is optional, but when `[network].enable = true` the request value is
ignored and Agentenv provisions a dedicated TAP device and reports the assigned
tap_name, guest_mac, and the host/guest IP pair. The response includes the
VM ID, API socket path, memory size, disk path, and the auto-generated
networking information.
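For illustration, a create response has roughly the following shape; the field names and values below are indicative only, and the authoritative shape is defined by the API types:

```json
{
  "vm_id": "vm-demo",
  "api_socket": "/mnt/nvme/sockets/vm-demo.sock",
  "memory_mib": 2048,
  "disk_path": "/mnt/nvme/sessions/vm-demo/disk.ext4",
  "network": {
    "tap_name": "tap0",
    "guest_mac": "06:00:AC:14:00:02",
    "host_ip": "172.20.0.1",
    "guest_ip": "172.20.0.2"
  }
}
```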
`GET /v1/vm` returns all tracked VMs and their current states.
`POST /v1/vm/{vm_id}/shutdown` gracefully shuts down the VM (issues SendCtrlAltDel and waits up to 10s).
`POST /v1/vm/{vm_id}/pause` and `POST /v1/vm/{vm_id}/resume` pause and resume a running microVM.
`POST /v1/vm/{vm_id}/snapshot` creates a disk snapshot. The snapshot is written to `tmpfs_snapshot_root` when
there is enough free space; otherwise it falls back to the configured HDFS
mount. The response contains the snapshot ID, storage tier, and artifact paths.
{
"snapshot_id": "optional-custom-name"
}

`POST /v1/snapshots/{snapshot_id}/persist` copies an existing snapshot from tmpfs to HDFS storage.
Snapshots capture the writable disk backing file of the VM. They can be used to
bootstrap future microVMs by passing the snapshot ID in a new DiskRequest.
- Overlay VMs create `.ext4` files inside the session and snapshots simply copy those files.
- Direct disks (`DiskRequest::Direct`) are snapshotted by copying the provided path as-is.
- Snapshot metadata (guest ID, backend type, and timestamp) is written next to the disk artifact as a JSON file.
When snapshot_id is omitted, the service generates a unique name using the VM
ID and timestamp.
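As a sketch, booting a new microVM from an earlier snapshot could look like the request below; the exact `kind` value and field names depend on the `DiskRequest` variants in the API, so treat this as illustrative:

```bash
curl -X POST http://127.0.0.1:8080/v1/vm \
  -H "content-type: application/json" \
  -d '{
        "vm_id": "vm-demo-2",
        "slots": 1,
        "disk": {
          "kind": "snapshot",
          "snapshot_id": "vm-demo-snap1"
        }
      }'
```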
To keep the implementation host-agnostic when automatic networking is disabled
(`[network].enable = false`), the API expects that tap devices and MAC addresses
are supplied as part of the create request. Hooks for automated
network provisioning can be added by extending Hypervisor::prepare_disk and
introducing a NetworkAllocator interface.
- Format code with `cargo fmt`
- Validate builds via `cargo check`
The project intentionally avoids touching unrelated parts of the Firecracker codebase and focuses solely on orchestrating microVMs via an HTTP service.