Universal LLM Deployment Engine with ML Compilation
Open Machine Learning Compiler Framework
yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM, and NCNN.
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
Benchmark scripts for TVM
A curated list of awesome inference deployment framework of artificial intelligence (AI) models. OpenVINO, TensorRT, MediaPipe, TensorFlow Lite, TensorFlow Serving, ONNX Runtime, LibTorch, NCNN, TNN, MNN, TVM, MACE, Paddle Lite, MegEngine Lite, OpenPPL, Bolt, ExecuTorch.
TVM Relay IR Visualization Tool
A hands-on tutorial on the core principles of TVM
ANT framework's model database, providing DNN models for a wide range of IoT devices
Canopy is a machine learning compiler stack capable of targeting high-end FPGAs. Part of the OpenAIOS project, Canopy is an evolved version of Apache TVM and supports a variety of hardware backends, including PCIe-based cloud FPGAs, CPUs, and GPUs.
Pruner: A Draft-then-Verify Exploration Mechanism to Accelerate Tensor Program Tuning (ASPLOS'25)
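The projects above all build on Apache TVM's declare-schedule-compile-run workflow. As a rough illustration, here is a minimal sketch of that workflow, assuming the classic `tvm.te` tensor-expression API from TVM 0.x releases (newer releases center on Relax and TensorIR instead):

```python
import numpy as np
import tvm
from tvm import te

# Declare a symbolic vector-add computation: C[i] = A[i] + B[i]
n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

# Lower the computation with a default schedule and compile it
# to native code for the local CPU via the LLVM backend
s = te.create_schedule(C.op)
vector_add = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Run the compiled kernel and verify the result against NumPy
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
c = tvm.nd.empty((1024,), "float32", dev)
vector_add(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```

The same declare, schedule (or auto-tune), build, deploy pattern is what tuning tools such as Pruner accelerate and what deployment stacks such as yolort package for specific targets.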