ort is a Rust interface for performing hardware-accelerated inference & training on machine learning models in the Open Neural Network Exchange (ONNX) format.
Based on the now-inactive onnxruntime-rs crate, ort is primarily a wrapper for Microsoft's ONNX Runtime library, but offers support for other pure-Rust runtimes.
ort with ONNX Runtime is super quick - and it supports almost any hardware accelerator you can think of. Even still, it's light enough to run on your users' devices.
When you need to deploy a PyTorch/TensorFlow/Keras/scikit-learn/PaddlePaddle model either on-device or in the datacenter, ort has you covered.
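As a rough illustration of the workflow described above, here is a minimal sketch of loading an exported ONNX model and running inference with `ort`. The exact API differs between `ort` versions (this loosely follows the 2.x API), and the model path `"model.onnx"`, the input name `"input"`, and the tensor shape are placeholders you would replace with your own model's details:

```rust
use ort::session::Session;
use ort::value::Tensor;

fn main() -> ort::Result<()> {
    // Load an ONNX model exported from PyTorch/TensorFlow/etc.
    // ("model.onnx" is a placeholder path.)
    let session = Session::builder()?.commit_from_file("model.onnx")?;

    // Build a dummy f32 input tensor; the shape and the input name
    // ("input") depend entirely on your model.
    let input = Tensor::from_array(([1_usize, 4], vec![0.0_f32; 4]))?;

    // Run inference and inspect the outputs.
    let outputs = session.run(ort::inputs!["input" => input])?;
    println!("{outputs:?}");
    Ok(())
}
```

Hardware acceleration is opt-in: execution providers (CUDA, TensorRT, DirectML, CoreML, and others) are enabled through the session builder, falling back to CPU when unavailable.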
Open a PR to add your project here!
- edge-transformers uses `ort` for accelerated transformer model inference at the edge.
- Ortex uses `ort` for safe ONNX Runtime bindings in Elixir.
- Lantern uses `ort` to provide embedding model inference inside Postgres.
- Magika uses `ort` for content type detection.
- sbv2-api is a fast implementation of Style-BERT-VITS2 text-to-speech using `ort`.
- Ahnlich uses `ort` to power their AI proxy for semantic search applications.
- Spacedrive is a cross-platform file manager with AI features powered by `ort`.
- BoquilaHUB uses `ort` for local AI deployment in biodiversity conservation efforts.
- FastEmbed-rs uses `ort` for generating vector embeddings and reranking locally.
- Valentinus uses `ort` to provide embedding model inference inside LMDB.
- retto uses `ort` for reliable, fast ONNX inference of PaddleOCR models on desktop and WASM platforms.
- oar-ocr is a comprehensive OCR library, built in Rust with `ort` for efficient inference.
- Text Embeddings Inference (TEI) uses `ort` to deliver high-performance ONNX Runtime inference for text embedding models.
- Flow-Like uses `ort` to enable local ML inference inside its typed workflow engine.
- CamTrap Detector uses `ort` to detect animals, humans, and vehicles in trail camera imagery.