Complete workflow for running GPU-accelerated Jupyter Lab on NVIDIA Jetson devices with Docker, from startup to teardown.
- NVIDIA Jetson device (Orin/Xavier/Nano) running JetPack 6.x
- Docker installed with nvidia-container-runtime
- SSH or direct terminal access to your Jetson
- ~15GB free disk space for container image
Copy-paste these commands to start immediately with full optimizations:
# Terminal 1 (keep open): Start Jupyter with shared memory
docker run -it --rm --gpus all --shm-size=2g -p 8888:8888 \
-v ~/my_jupyter_work:/workspace \
dustynv/l4t-ml:r36.4.0 \
jupyter lab --ip=0.0.0.0 --port=8888 --allow-root \
--IdentityProvider.token='mynotebook' \
--ServerApp.root_dir=/workspace
# Browser: Access Jupyter Lab
# URL: http://127.0.0.1:8888/lab?token=mynotebook
ZRAM provides roughly 50% more effective memory via compressed swap. Run in Terminal 2 (new terminal):
sudo systemctl enable nvzramconfig.service
sudo systemctl start nvzramconfig.service
zramctl   # Verify: should show compressed swap devices
Example output (8GB Orin Nano):
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo-rle 635M 4K 74B 12K 6 [SWAP]
/dev/zram1 lzo-rle 635M 4K 74B 12K 6 [SWAP]
/dev/zram2 lzo-rle 635M 4K 74B 12K 6 [SWAP]
...
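The per-device sizes above follow from how nvzramconfig sizes ZRAM; a sketch, assuming it allocates half of physical RAM as compressed swap and splits it evenly across CPU cores:

```python
# Sketch: estimate nvzramconfig's per-device ZRAM size (assumption:
# total ZRAM = half of physical RAM, split evenly across CPU cores).
def zram_device_size_mb(total_ram_mb: int, num_cpus: int) -> int:
    return (total_ram_mb // 2) // num_cpus

# 8GB Orin Nano (~7620 MB usable RAM) with 6 CPU cores:
print(zram_device_size_mb(7620, 6))  # -> 635, matching the 635M above
```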
Critical: Use --shm-size=2g for PyTorch DataLoader shared memory.
In Terminal 1 (keep this window open):
docker run -it --rm --gpus all --shm-size=2g -p 8888:8888 \
-v ~/my_jupyter_work:/workspace \
dustynv/l4t-ml:r36.4.0 \
jupyter lab --ip=0.0.0.0 --port=8888 --allow-root \
--IdentityProvider.token='mynotebook' \
--ServerApp.root_dir=/workspace
Key flags explained:
- --gpus all: enables GPU access
- --shm-size=2g: required for PyTorch DataLoader shared memory
- -v ~/my_jupyter_work:/workspace: persists notebooks to the host
- --IdentityProvider.token='mynotebook': sets the login token
- --ServerApp.root_dir=/workspace: sets the default directory
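For scripting, the same invocation can be assembled programmatically; a minimal sketch (the helper name and defaults are illustrative, the flag values mirror the startup command above):

```python
# Sketch: build the docker run command as an argv list for a launcher script.
import shlex

def jupyter_docker_cmd(token="mynotebook", port=8888,
                       workdir="~/my_jupyter_work"):
    return [
        "docker", "run", "-it", "--rm",
        "--gpus", "all",             # GPU access
        "--shm-size=2g",             # DataLoader shared memory
        "-p", f"{port}:{port}",
        "-v", f"{workdir}:/workspace",
        "dustynv/l4t-ml:r36.4.0",
        "jupyter", "lab", "--ip=0.0.0.0", f"--port={port}", "--allow-root",
        f"--IdentityProvider.token={token}",
        "--ServerApp.root_dir=/workspace",
    ]

print(shlex.join(jupyter_docker_cmd()))
```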
Open your web browser and navigate to:
http://127.0.0.1:8888/lab?token=mynotebook
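If the page does not load right away, the server may still be starting; a small stdlib poll (hypothetical helper) can wait for it:

```python
# Sketch: poll the Jupyter URL until it answers or a timeout expires.
import time
import urllib.error
import urllib.request

def wait_for_jupyter(url, timeout_s=30.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except (urllib.error.URLError, OSError):
            time.sleep(1)
    return False

# wait_for_jupyter("http://127.0.0.1:8888/lab")
```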
From another device on your network, find your Jetson's IP first:
hostname -I | awk '{print $1}'
Then use: http://<JETSON_IP>:8888/lab?token=mynotebook
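An alternative to parsing hostname -I, usable from a Python startup script (illustrative helper; the UDP connect only asks the kernel which source address it would use, no packets are sent):

```python
# Sketch: discover the LAN IP via the kernel's routing decision;
# falls back to loopback if there is no default route.
import socket

def lan_ip(fallback="127.0.0.1"):
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("8.8.8.8", 80))  # no traffic is actually sent
            return s.getsockname()[0]
    except OSError:
        return fallback

print(f"http://{lan_ip()}:8888/lab?token=mynotebook")
```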
Inside Jupyter Lab, create a new notebook and run:
import torch
print(f"GPU available: {torch.cuda.is_available()}")
print(f"GPU name: {torch.cuda.get_device_name(0)}")
Expected output:
GPU available: True
GPU name: Orin
In Terminal 2, while Jupyter is running:
# Set MAXN power mode (maximum performance, adjust -m number if needed)
sudo nvpmodel -m 0
# Maximize CPU/GPU clocks (increases power & heat - ensure cooling)
sudo jetson_clocks
# Monitor real-time performance
tegrastats
Press Ctrl+C to exit tegrastats.
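tegrastats prints one dense line per sample; a sketch of pulling the RAM field out with a regex (the field layout varies slightly between JetPack releases, so the sample line here is illustrative):

```python
# Sketch: extract "used/total" RAM (MB) from a tegrastats sample line.
import re

def ram_usage_mb(line):
    m = re.search(r"RAM (\d+)/(\d+)MB", line)
    return (int(m.group(1)), int(m.group(2))) if m else None

sample = "RAM 3072/7620MB (lfb 4x4MB) SWAP 0/3810MB CPU [12%@1510]"
print(ram_usage_mb(sample))  # -> (3072, 7620)
```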
Warning: Increases power consumption and heat. To restore defaults:
sudo jetson_clocks --restore

# List running containers
docker ps --filter ancestor=dustynv/l4t-ml:r36.4.0
# View container resource usage
docker stats $(docker ps -q --filter ancestor=dustynv/l4t-ml:r36.4.0)
# Get token while Jupyter is running
docker exec $(docker ps -q --filter ancestor=dustynv/l4t-ml:r36.4.0) \
python3 -c "import json; d=json.load(open('/root/.local/share/jupyter/runtime/jpserver-1.json')); print(d['token'])"
# Check GPU inside container
docker exec $(docker ps -q --filter ancestor=dustynv/l4t-ml:r36.4.0) nvidia-smi

# Overall memory (including ZRAM)
free -h
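free -h gets its numbers from /proc/meminfo; reading it directly is handy in monitoring scripts (Linux-only sketch; SwapTotal includes the ZRAM devices):

```python
# Sketch: parse /proc/meminfo into a dict of values in kB.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # value in kB
    return info

m = meminfo()
print(f"MemTotal: {m['MemTotal'] // 1024} MiB, "
      f"SwapTotal: {m['SwapTotal'] // 1024} MiB")
```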
# GPU memory usage
tegrastats

# Error: "Bind for 0.0.0.0:8888 failed"
# Solution: Stop existing container
docker stop $(docker ps -q --filter ancestor=dustynv/l4t-ml:r36.4.0)

# Error: No token in jpserver-*.json
# Solution: Start with explicit token (already in startup command)
--IdentityProvider.token='mynotebook'

# Check Docker runtime
docker info | grep -i runtime
# Should show: nvidia
# If not, restart with explicit GPU runtime:
docker run --runtime nvidia -it --rm --gpus all ...

# Ensure ZRAM is enabled
zramctl
# Check container memory usage
docker stats $(docker ps -q --filter ancestor=dustynv/l4t-ml:r36.4.0)
# Reduce batch size in your code

# Error: "RuntimeError: DataLoader worker exited unexpectedly"
# Solution: Ensure --shm-size=2g is in your docker run command

| Window | Purpose | Keep Open? | Commands |
|---|---|---|---|
| Terminal 1 | Run Jupyter server | YES | docker run ... (startup command) |
| Terminal 2 | Optimization & monitoring | Optional | sudo jetson_clocks, tegrastats, zramctl |
| Browser | Jupyter Lab interface | YES (while working) | http://127.0.0.1:8888/lab?token=mynotebook |
Shutdown from the browser:
- In Jupyter Lab: File → Shut Down
- Verify in Terminal 2: docker ps (should show no containers)
- The container is removed automatically (due to the --rm flag)

Shutdown from the terminal:
- In Jupyter Lab: File → Save All (or Ctrl+S)
- Close the browser tab (optional)
- In Terminal 1: press Ctrl+C twice
- Verify in Terminal 2: docker ps (should be empty)
# Run on a different port with a custom token:
docker run -it --rm --gpus all --shm-size=2g -p 8889:8889 \
dustynv/l4t-ml:r36.4.0 \
jupyter lab --ip=0.0.0.0 --port=8889 --allow-root \
--IdentityProvider.token='your_secure_token_here'

# Mount additional data and model directories:
docker run ... \
-v ~/my_jupyter_work:/workspace \
-v /path/to/data:/data \
-v /path/to/models:/models \
...

To use the classic Notebook UI, replace jupyter lab with jupyter notebook in the startup command.
docker run ... \
--memory=6g --memory-swap=10g \
...
Note: --memory-swap is the total of RAM plus swap, so 6g/10g allows up to 4 GB of swap.

- Image: dustynv/l4t-ml:r36.4.0
- Built for: JetPack 6.x (L4T R36.4.0)
- Includes: PyTorch, TensorFlow, CUDA, cuDNN, NumPy, SciPy, scikit-learn
- Size: ~12GB (first download required)
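Given the ~12 GB image and ~15 GB free-space requirement, a quick pre-pull check can save a failed download (hypothetical helper; assumes Docker's data root lives on the root filesystem):

```python
# Sketch: verify there is enough free disk space before pulling the image.
import shutil

def enough_space(path="/", need_gb=15.0):
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= need_gb

print(enough_space())
```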
==========================================================
            JETSON JUPYTER LAB QUICK REFERENCE
==========================================================
ONE-TIME SETUP:
sudo systemctl enable nvzramconfig.service
sudo systemctl start nvzramconfig.service
STARTUP (Terminal 1):
docker run -it --rm --gpus all --shm-size=2g -p 8888:8888 \
-v ~/my_jupyter_work:/workspace \
dustynv/l4t-ml:r36.4.0 \
jupyter lab --ip=0.0.0.0 --port=8888 --allow-root \
--IdentityProvider.token='mynotebook' \
--ServerApp.root_dir=/workspace
ACCESS:
http://127.0.0.1:8888/lab?token=mynotebook
GPU TEST:
import torch; print(torch.cuda.is_available())
PERFORMANCE (Terminal 2):
sudo nvpmodel -m 0
sudo jetson_clocks
tegrastats
WORKSPACE:
Host: ~/my_jupyter_work
Jupyter: /workspace
SHUTDOWN:
File → Shut Down (in browser)
# Or: Ctrl+C twice in Terminal 1
VERIFY:
docker ps # Should return nothing
ls ~/my_jupyter_work # Check saved files
zramctl # Check ZRAM active
Version: 2.0 (Optimized)
Tested On: Jetson Orin Nano, JetPack 6.1, L4T R36.4.0
Last Updated: 2025-11-10