This repository provides binaries and Docker containers for teeonnx, a hybrid Trusted Execution Environment (TEE) approach to zero-knowledge proof witness generation. Run ML model inference inside Intel SGX enclaves with DCAP attestation, supporting both traditional zkp workflows and TEE-based verification.
teeonnx extends the standard workflow by moving witness generation into a secure Intel SGX enclave. The TEE approach provides cryptographic guarantees about the execution environment and computation integrity using tract for ONNX inference.
- Verifiable Computation: SGX enclave provides hardware-backed proof of correct execution
- Hybrid zkp Support: Generated witnesses work with standard proving systems, while DCAP quotes enable TEE-based verification
- Cryptographic Binding: keccak256 hashes in the DCAP quote cryptographically bind input, circuit, and witness
- Compressed Verification: RISC0 zkVM creates succinct proofs of quote validity for efficient on-chain verification
The system operates as a stateless "pure function" that verifiably runs inference:
```
┌─────────────┐    ┌─────────────────────┐    ┌──────────────┐
│ input.json  │───▶│ SGX Enclave         │───▶│ output.json  │
│ circuit.bin │    │                     │    │ quote.bin    │
└─────────────┘    │ 1. Hash inputs      │    └──────────────┘
                   │ 2. Generate output  │
                   │ 3. Hash outputs     │
                   │ 4. Create DCAP quote│
                   └─────────────────────┘
                              │
                              ▼
                   ┌─────────────────────┐
                   │         zk          │
                   │ Quote Verification  │
                   │       → Proof       │
                   └─────────────────────┘
```
SGX-Enabled Hardware: You need a machine with Intel SGX support. For cloud deployment, we recommend Azure DCsv3 instances.
SGX Runtime Installation (Ubuntu 22.04):
```bash
# Install SGX runtime libraries
echo 'deb [arch=amd64] https://download.01.org/intel-sgx/sgx_repo/ubuntu jammy main' | sudo tee /etc/apt/sources.list.d/intel-sgx.list
wget -qO - https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key | sudo apt-key add -
sudo apt-get update
sudo apt-get install libsgx-urts libsgx-dcap-ql
```

The easiest way to get started is with the prebuilt Docker container:
```bash
# Pull the latest SGX container
docker pull ghcr.io/zkonduit/teeonnx-sgx:latest

# Run inference in the SGX enclave
docker run --device /dev/sgx_enclave --device /dev/sgx_provision \
  -v $(pwd):/workspace \
  ghcr.io/zkonduit/teeonnx-sgx:latest \
  gen-output \
  --input /workspace/input.json \
  --model /workspace/network.onnx \
  --output /workspace/output.json \
  --quote /workspace/quote.bin
```

Required Docker Arguments:
- `--device /dev/sgx_enclave`: Access to the SGX enclave device
- `--device /dev/sgx_provision`: Access to the SGX provisioning service
- `-v $(pwd):/workspace`: Mount your working directory
Download the appropriate binary for your system from the releases page:
CPU-only verification binary:
- `teeonnx-zk-cpu-linux`: Works on any x86_64 Linux system

CUDA-enabled verification binaries (faster proving):
- `teeonnx-zk-cuda-linux-sm70`: Tesla V100, GTX 1080 Ti
- `teeonnx-zk-cuda-linux-sm75`: RTX 2080, RTX 2080 Ti, Tesla T4
- `teeonnx-zk-cuda-linux-sm80`: RTX 3080, RTX 3090, A100
- `teeonnx-zk-cuda-linux-sm86`: RTX 3050, RTX 3060, RTX 3070
- `teeonnx-zk-cuda-linux-sm89`: RTX 4090, RTX 4080
- `teeonnx-zk-cuda-linux-sm90`: H100
- `teeonnx-zk-cuda-linux-sm100`: Future architecture support
- `teeonnx-zk-cuda-linux-sm100a`: Future architecture support
- `teeonnx-zk-cuda-linux-sm120`: Future architecture support
- `teeonnx-zk-cuda-linux-sm120a`: Future architecture support
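The compute capability reported by nvidia-smi maps directly onto these sm suffixes (for example, 8.6 → sm86). A small, hypothetical helper for picking the right download (the function name is illustrative; the target list mirrors the releases above):

```python
# Map an NVIDIA compute capability (as printed by
# `nvidia-smi --query-gpu=compute_cap --format=csv,noheader,nounits`,
# e.g. "8.6") to the matching teeonnx CUDA binary name.
# The sm targets below mirror the release list in this README.
KNOWN_SM_TARGETS = ["70", "75", "80", "86", "89", "90", "100", "100a", "120", "120a"]

def cuda_binary_for(compute_cap: str) -> str:
    """Return the release binary name for a given compute capability."""
    sm = compute_cap.strip().replace(".", "")  # "8.6" -> "86"
    if sm not in KNOWN_SM_TARGETS:
        raise ValueError(f"no prebuilt binary for sm{sm}; fall back to teeonnx-zk-cpu-linux")
    return f"teeonnx-zk-cuda-linux-sm{sm}"

print(cuda_binary_for("8.0"))  # teeonnx-zk-cuda-linux-sm80
```

If your GPU is not in the table, the CPU binary always works as a fallback.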
```bash
# Download CPU-only binary
wget https://github.com/zkonduit/teeonnx-p/releases/latest/download/teeonnx-zk-cpu-linux
chmod +x teeonnx-zk-cpu-linux

# Or download the CUDA binary for your GPU architecture (example for RTX 3080/3090/A100)
wget https://github.com/zkonduit/teeonnx-p/releases/latest/download/teeonnx-zk-cuda-linux-sm80
chmod +x teeonnx-zk-cuda-linux-sm80
```

Find your GPU's compute capability:
```bash
# Check your GPU model
nvidia-smi

# Or use this command to get compute capability directly
nvidia-smi --query-gpu=compute_cap --format=csv,noheader,nounits
```

Using Docker (recommended):
```bash
docker run --device /dev/sgx_enclave --device /dev/sgx_provision \
  -v $(pwd):/workspace \
  ghcr.io/zkonduit/teeonnx-sgx:latest \
  gen-output \
  --input /workspace/input.json \
  --model /workspace/network.onnx \
  --output /workspace/output.json \
  --quote /workspace/quote.bin
```

This command:
- Loads your input and ONNX model into the SGX enclave
- Uses tract for ONNX inference to generate output
- Creates a DCAP quote with computation hashes
- Outputs both result and quote files
```bash
# Using CPU binary
./teeonnx-zk-cpu-linux prove \
  --quote quote.bin \
  --proof proof.json

# Using CUDA binary (faster)
./teeonnx-zk-cuda-linux-sm80 prove \
  --quote quote.bin \
  --proof proof.json
```

This creates a proof that the DCAP quote is valid.
```bash
./teeonnx-zk-cpu-linux verify --proof proof.json
```

You can also verify with additional hash checks:
```bash
./teeonnx-zk-cpu-linux verify --proof proof.json \
  --input-hash "INPUT_HASH_HEX" \
  --model-hash "MODEL_HASH_HEX" \
  --output-hash "OUTPUT_HASH_HEX"
```

```bash
./teeonnx-zk-cpu-linux hash-check \
  --input input.json \
  --model network.onnx \
  --output output.json \
  --quote quote.bin
```

This verifies that the quote's user_data contains the correct hashes of your computation.
```bash
./teeonnx-zk-cpu-linux mrenclave-check \
  --quote quote.bin \
  --mrenclave "EXPECTED_MRENCLAVE_HEX"
```

The full verification process involves three checks:
- Quote Validity: Verify that the DCAP quote is cryptographically valid (via the RISC0 proof); this establishes the integrity of the enclave execution
- (optional) MRENCLAVE Check: Confirm the quote was generated by the expected enclave code
- (optional) Hash Verification: Ensure the quote's user_data matches your input/model/output hashes
Example verification script:
```bash
#!/bin/bash
# Complete verification workflow

# Download verification binary
wget https://github.com/zkonduit/teeonnx-p/releases/latest/download/teeonnx-zk-cpu-linux
chmod +x teeonnx-zk-cpu-linux

# 1. Verify RISC0 proof of quote validity
./teeonnx-zk-cpu-linux verify --proof proof.json

# 2. Check MRENCLAVE matches expected value
./teeonnx-zk-cpu-linux mrenclave-check --quote quote.bin --mrenclave "$EXPECTED_MRENCLAVE"

# 3. Verify hash bindings
./teeonnx-zk-cpu-linux hash-check \
  --input input.json \
  --model network.onnx \
  --output output.json \
  --quote quote.bin

# Alternative: Verify proof with hash checks in one command
./teeonnx-zk-cpu-linux verify --proof proof.json \
  --input-hash "INPUT_HASH_HEX" \
  --model-hash "MODEL_HASH_HEX" \
  --output-hash "OUTPUT_HASH_HEX"

echo "✅ All verification checks passed!"
```

Example `input.json`:

```json
{
  "data": [
    [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
  ],
  "shapes": [
    [2, 3]
  ]
}
```

Example `output.json`:

```json
{
  "data": [
    [0.0, 0.0, 0.0]
  ],
  "shapes": [
    [1, 3]
  ]
}
```

Generated files:
- `output.json`: Inference results from the ONNX model
- `quote.bin`: DCAP quote containing the cryptographic attestation
- `proof.json`: RISC0 Groth16 proof of quote validity
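The JSON files above pair a flattened data array with an explicit shape. A minimal consistency check, assuming row-major flattening (so a [2, 3] shape needs 6 values; the function name here is illustrative), could look like:

```python
import json
from math import prod

# Sanity-check a teeonnx-style input/output JSON document: each flattened
# "data" entry should contain exactly prod(shape) values for its
# corresponding entry in "shapes" (assumption: row-major flattening).
def check_tensor_file(text: str) -> None:
    doc = json.loads(text)
    for values, shape in zip(doc["data"], doc["shapes"]):
        if len(values) != prod(shape):
            raise ValueError(
                f"shape {shape} needs {prod(shape)} values, got {len(values)}"
            )

# Matches the example input.json above: 6 values for shape [2, 3].
check_tensor_file('{"data": [[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]], "shapes": [[2, 3]]}')
```

Running this before `gen-output` catches malformed inputs without spending an enclave round trip.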
```bash
# Pull and run CPU proving container
docker pull ghcr.io/zkonduit/teeonnx-cpu:latest

# Generate proof using CPU
docker run -v $(pwd):/workspace ghcr.io/zkonduit/teeonnx-cpu:latest \
  prove --quote /workspace/quote.bin --proof /workspace/proof.json

# Verify proof
docker run -v $(pwd):/workspace ghcr.io/zkonduit/teeonnx-cpu:latest \
  verify --proof /workspace/proof.json
```

Prerequisites:
- NVIDIA GPU with CUDA support
- NVIDIA Container Toolkit installed
Install NVIDIA Container Toolkit:
```bash
# Install NVIDIA Container Toolkit
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

# Configure Docker to use the NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Use the GPU container for faster proving:
```bash
# Pull GPU container for your architecture (example: sm80 for RTX 3080/3090/A100)
docker pull ghcr.io/zkonduit/teeonnx-gpu-sm80:latest

# Generate proof using GPU (much faster than CPU)
docker run --runtime=nvidia -v $(pwd):/workspace ghcr.io/zkonduit/teeonnx-gpu-sm80:latest \
  prove --quote /workspace/quote.bin --proof /workspace/proof.json

# Verify proof
docker run --runtime=nvidia -v $(pwd):/workspace ghcr.io/zkonduit/teeonnx-gpu-sm80:latest \
  verify --proof /workspace/proof.json
```

Available GPU containers:
- `ghcr.io/zkonduit/teeonnx-gpu-sm70:latest`: Tesla V100, GTX 1080 Ti
- `ghcr.io/zkonduit/teeonnx-gpu-sm75:latest`: RTX 2080, RTX 2080 Ti, Tesla T4
- `ghcr.io/zkonduit/teeonnx-gpu-sm80:latest`: RTX 3080, RTX 3090, A100
- `ghcr.io/zkonduit/teeonnx-gpu-sm86:latest`: RTX 3050, RTX 3060, RTX 3070
- `ghcr.io/zkonduit/teeonnx-gpu-sm89:latest`: RTX 4090, RTX 4080
- `ghcr.io/zkonduit/teeonnx-gpu-sm90:latest`: H100
For development and testing without SGX hardware:
```bash
# Pull and run in simulation mode
docker run -e SGX_MODE=SW \
  -v $(pwd):/workspace \
  ghcr.io/zkonduit/teeonnx-sgx:latest \
  gen-output \
  --input /workspace/input.json \
  --model /workspace/network.onnx \
  --output /workspace/output.json \
  --quote /workspace/quote.bin
```

The verification binary supports these commands:
- `prove`: Generate a RISC0 proof of DCAP quote validity
- `verify`: Verify a RISC0 proof
- `hash-check`: Verify hash bindings in quote user_data
- `mrenclave-check`: Validate the enclave measurement
- `help`: Show detailed help for each command
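For integrating these subcommands into a larger pipeline, a hypothetical Python wrapper might look like the sketch below (the `BINARY` path and helper names are assumptions; the flags mirror those documented in this README):

```python
import subprocess

# Assumption: you have downloaded one of the release binaries; point
# BINARY at whichever one you use.
BINARY = "./teeonnx-zk-cpu-linux"

def build_cmd(subcommand: str, **flags: str) -> list:
    """Build an argv list such as [BINARY, "prove", "--quote", "quote.bin", ...].

    Keyword names are converted to CLI flags (input_hash -> --input-hash).
    """
    cmd = [BINARY, subcommand]
    for name, value in flags.items():
        cmd += [f"--{name.replace('_', '-')}", value]
    return cmd

def run(subcommand: str, **flags: str) -> None:
    """Invoke the verification binary, raising on a non-zero exit code."""
    subprocess.run(build_cmd(subcommand, **flags), check=True)

# Example argv (construction only; not executed here):
print(build_cmd("prove", quote="quote.bin", proof="proof.json"))
```

A CI job could then chain `run("verify", ...)`, `run("mrenclave-check", ...)`, and `run("hash-check", ...)` and fail the build on the first error.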
- Privacy-Preserving ML: Run sensitive models with hardware-backed privacy guarantees
- Verifiable AI: Prove model execution without revealing the model or inputs
- Compliance: Meet regulatory requirements for secure computation environments
- Hybrid Verification: Combine traditional zkps with TEE attestations for enhanced security
- Quote Freshness: Verify quotes against current Intel collateral to prevent replay attacks
- MRENCLAVE Validation: Always verify the enclave measurement matches the expected code
- Hardware Requirements: Ensure SGX hardware is properly provisioned and updated
The enclave creates keccak256 hashes embedded in the 64-byte DCAP quote user_data:
- Bytes 0-31: `P(witness)`, the hash of the witness outputs
- Bytes 32-63: `keccak256(P(circuit) || P(input))`, the combined input hash

Where:
- `P(circuit)`: keccak256 hash of the compiled circuit data
- `P(input)`: keccak256 hash of the input field elements
This enables verifiers to confirm computations without accessing input or circuit data in the clear.
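As an illustration of the layout above, the sketch below recomputes the expected 64-byte user_data. Note the loud caveat: Python's stdlib `hashlib.sha3_256` is used as a stand-in because keccak256 (what the enclave actually uses) is not in the stdlib; a real verifier must substitute a true keccak256 implementation, e.g. from pycryptodome.

```python
from hashlib import sha3_256  # STAND-IN: the quote uses keccak256, not SHA3-256

def expected_user_data(witness: bytes, circuit: bytes, input_data: bytes) -> bytes:
    """Recompute the 64-byte user_data binding (hash function is a stand-in).

    Layout per the README: bytes 0-31 hold P(witness); bytes 32-63 hold
    H(P(circuit) || P(input)).
    """
    h = sha3_256  # swap in keccak256 for a real check
    p_witness = h(witness).digest()                                  # bytes 0-31
    combined = h(h(circuit).digest() + h(input_data).digest()).digest()  # bytes 32-63
    return p_witness + combined

ud = expected_user_data(b"witness", b"circuit", b"input")
assert len(ud) == 64
```

A verifier compares this value against the user_data field extracted from quote.bin; any mismatch means the quote does not attest to these exact artifacts.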
- CPU vs CUDA: CUDA binaries provide significant speedup for proof generation
- Memory Requirements: Ensure adequate RAM for large models (8GB+ recommended)
- SGX Memory: Large models may require SGX memory configuration adjustments
- SGX Device Not Found: Ensure SGX is enabled in BIOS and drivers are installed
- Permission Denied: Check SGX device permissions (`/dev/sgx_enclave`, `/dev/sgx_provision`)
- Quote Generation Failed: Verify the DCAP service is running and configured
- Docker Issues: Ensure SGX devices are properly mounted in container
Run with debug output:

```bash
RUST_LOG=debug ./teeonnx-zk-cpu-linux prove --quote quote.bin --proof proof.json
```

This binary distribution is built from the latest stable release of teeonnx. For version-specific information, check the releases page.
Copyright 2025 Zkonduit Inc. Production use requires a license. For licensing inquiries, please contact [email protected].
- tract - Neural network inference library for ONNX
- zkdcap - DCAP quote verification library by Datachain Lab
- Automata Network - SGX SDK and infrastructure
- RISC Zero - zkVM technology for quote verification
- Intel SGX - Trusted execution environment