Stop rewriting your app every time you switch LLMs. ORBIT unifies 20+ AI providers with your databases, vector stores, and APIs, all through one self-hosted gateway.
Ship faster. Stay portable. Keep your data private.
- Questions? Open an issue
- Updates: Check the changelog
- Commercial Support: schmitech.ai
- Maintained by: Remsy Schmilinsky
- Highlights
- Why ORBIT
- Why Star This Repo?
- Quick Start
- Chat Clients
- Commercial Support
- Documentation
- License
- Unified AI gateway supporting 20+ LLM providers (OpenAI, Anthropic, Gemini, Cohere, Mistral, Ollama, Groq, DeepSeek, xAI, OpenRouter, and more) plus local models via Ollama, llama.cpp, and vLLM.
- Data integration with RAG adapters for SQL databases (PostgreSQL, MySQL, SQLite, DuckDB, Oracle, SQL Server, Cassandra), vector stores (Chroma, Qdrant, Pinecone, Milvus, Elasticsearch, Redis), MongoDB, HTTP APIs, and file uploads with multimodal support.
- Intelligent query processing with intent-based adapters that translate natural language to SQL, Elasticsearch queries, MongoDB queries, and HTTP API calls.
- Vision capabilities with support for vLLM, OpenAI, Gemini, and Anthropic vision models for image analysis and OCR.
- Secure by default with token-based auth, role-aware API keys, and pluggable content moderation.
- Ready for teams thanks to batteries-included clients (CLI, React widget, Node/Python SDKs).
- Avoid vendor lock-in by switching between LLM providers without rewriting your application code: change providers in configuration, not code.
- Keep your data private with support for on-prem deployments, air-gapped installs, and local models that never leave your infrastructure.
- Query your data naturally in any language instead of hand-writing SQL, Elasticsearch queries, or API calls: intent-based adapters handle the translation automatically.
- Platform & infra teams who need a stable control plane for LLM workloads across multiple providers and data sources.
- Product teams shipping AI copilots that depend on reliable retrieval, intent-based querying, and guardrails.
- Data teams building RAG applications that need to query SQL databases, vector stores, and APIs through natural language.
- Researchers & tinkerers exploring local-first stacks, evaluating different foundation models, or building multimodal AI applications.
Your star isn't just a vanity metric; it directly helps the project:
- Visibility: Stars help other developers discover ORBIT in search results
- Releases: Get notified when we ship new features and providers
- Open source: Support independent AI infrastructure development
Sandbox Environment: https://orbit.schmitech.ai/
This environment is unstable and may be unavailable at any time. It is not for production use; it only hosts examples showing ORBIT functionality. Prompts are not retained.
There are three ways to get started with ORBIT.
docker pull schmitech/orbit:basic
docker run -d --name orbit-basic -p 5173:5173 schmitech/orbit:basic

Open http://localhost:5173 in your browser and start chatting.
The Docker image includes:
- ORBIT server (API on port 3000)
- orbitchat web app (browser UI on port 5173)
- Ollama with pre-pulled models
- Pre-configured API key (no setup needed)
For more Docker options, see docker/README.md.
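If you prefer to sanity-check the container from code rather than the browser, you can target the OpenAI-compatible endpoint covered later in this README. This sketch only builds the request; assumptions are flagged in comments: it presumes you also published the API port (e.g. `-p 3000:3000`, since the `docker run` above maps only 5173), that the Bearer header scheme follows the OpenAI convention, and `YOUR_API_KEY` is a placeholder for the key configured in your instance:

```python
# Sketch of a programmatic smoke test for a running ORBIT instance, using the
# OpenAI-compatible endpoint described later in this README.
# Assumptions: API reachable on port 3000, OpenAI-style Bearer auth,
# YOUR_API_KEY replaced with a real key.
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request."""
    payload = {
        "model": "orbit",  # required field; ORBIT routes via your API key
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("http://localhost:3000", "YOUR_API_KEY", "Hello, ORBIT!")
print(req.full_url)  # http://localhost:3000/v1/chat/completions

# With the server running, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```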
Download and install the latest stable release. Best for production deployments.
- Python 3.12+
- Node.js 18+ and npm
- AI Provider (choose one or more):
- Optional: MongoDB, Redis, and a vector DB (Chroma, Qdrant, etc.)
# Download the latest release archive
# Replace v2.2.0 with the latest version from https://github.com/schmitech/orbit/releases
curl -L https://github.com/schmitech/orbit/releases/download/v2.2.0/orbit-2.2.0.tar.gz -o orbit-2.2.0.tar.gz
tar -xzf orbit-2.2.0.tar.gz
cd orbit-2.2.0

# Add API keys if using proprietary services like OpenAI, Cohere, Anthropic, etc.
cp env.example .env
# Install packages
./install/setup.sh
# Activate Python environment
source venv/bin/activate

1. Install Ollama (if not already installed):

   # macOS/Linux
   curl -fsSL https://ollama.com/install.sh | sh
   # Windows: download from https://ollama.com/download

2. Pull the model:

   ollama pull granite4:1b

3. Configure the model:

   - The default model is set to granite4:1b in config/adapters/passthrough.yaml and config/adapters/multimodal.yaml.
   - Model settings can be adjusted in config/ollama.yaml.
# Start the ORBIT server
./bin/orbit.sh start
# Check the logs
cat ./logs/orbit.log

Once the server is running, open your browser and navigate to:
Dashboard: http://localhost:3000/dashboard
The dashboard provides a visual interface to manage adapters, monitor conversations, and configure your ORBIT instance.
Video (dashboard.mp4): The ORBIT Dashboard in action
For contributing or modifying ORBIT, clone and run from source.
- Python 3.12+
- Node.js 18+ and npm
- Docker 20.10+ and Docker Compose 2.0+
- AI Provider (choose one or more):
- Optional: MongoDB, Redis, and a vector DB (Chroma, Qdrant, etc.)
# Clone the repository
git clone https://github.com/schmitech/orbit.git
cd orbit
# Add API keys if using proprietary services like OpenAI, Cohere, etc.
cp env.example .env
# Install packages
./install/setup.sh
# Activate Python environment
source venv/bin/activate
# Start the ORBIT server
./bin/orbit.sh start
# Check the logs
tail -f ./logs/orbit.log

Note: After starting the server, you'll need to create an API key using ./bin/orbit.sh key create before you can use the chat clients.
Once your ORBIT server is running (via Docker or manual installation), you can interact with it using one of these clients:
The orbit-chat Python CLI provides a terminal-based chat interface.
# Install the client from PyPI
pip install schmitech-orbit-client
orbit-chat --api-key YOUR_API_KEY

Video (orbit-cli-chat.mp4): Using the orbit-chat CLI. Run orbit-chat -h for options.
npm install -g orbitchat
orbitchat --api-url http://localhost:3000 --api-key YOUR_API_KEY --open

Video (orbit-chat-gui.mp4): Chatting with ORBIT using the React client.
Add an AI chatbot to any website with the @schmitech/chatbot-widget. Supports floating and embedded modes with full theme customization.
<!-- Add to your HTML -->
<script src="https://unpkg.com/react@18/umd/react.production.min.js"></script>
<script src="https://unpkg.com/react-dom@18/umd/react-dom.production.min.js"></script>
<script src="https://unpkg.com/@schmitech/chatbot-widget@latest/dist/chatbot-widget.umd.js"></script>
<link rel="stylesheet" href="https://unpkg.com/@schmitech/chatbot-widget@latest/dist/chatbot-widget.css">
<script>
window.addEventListener('load', function() {
window.initChatbotWidget({
apiUrl: 'http://localhost:3000',
apiKey: 'YOUR_API_KEY',
sessionId: 'session-' + Date.now(),
widgetConfig: {
header: { title: "ORBIT Assistant" },
welcome: { title: "Hello!", description: "How can I help you today?" }
}
});
});
</script>

Video (theming-app.mp4): The embeddable chat widget in action. Try the theming app to customize and preview your widget.
For full configuration options, themes, and integration guides, see clients/chat-widget/README.md.
ORBIT provides a native TypeScript/JavaScript client for seamless integration into Node.js, web, or mobile applications.
npm install @schmitech/chatbot-api

import { ApiClient } from '@schmitech/chatbot-api';
const client = new ApiClient({
apiUrl: "http://localhost:3000",
apiKey: "YOUR_API_KEY"
});
async function chat() {
const stream = client.streamChat("How can I integrate ORBIT into my application?");
for await (const chunk of stream) {
process.stdout.write(chunk.text);
}
}
chat();

ORBIT exposes an OpenAI-compatible /v1/chat/completions endpoint, letting you use the official openai Python library with your ORBIT server as a drop-in backend.
from openai import OpenAI
client = OpenAI(
api_key="ORBIT_API_KEY",
base_url="http://localhost:3000/v1"
)
completion = client.chat.completions.create(
model="orbit", # Value is required but ORBIT routes via your API key
messages=[
{"role": "system", "content": "You are a helpful ORBIT assistant."},
{"role": "user", "content": "Summarize the latest deployment status."}
],
)
print(completion.choices[0].message.content)
# ORBIT-specific metadata (sources, threading info, audio, etc.) is available via completion.orbit
if completion.orbit.get("sources"):
    print("Sources:", completion.orbit["sources"])

Streaming works as well:
stream = client.chat.completions.create(
model="orbit",
messages=[{"role": "user", "content": "Give me an onboarding checklist."}],
stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)

See the Tutorial: set up the HR example in 5 minutes and start chatting with your data.
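If you need the full response text after streaming (say, for logging or caching), the deltas can simply be concatenated. A self-contained sketch using stand-in chunk objects shaped like those the openai client yields (`chunk.choices[0].delta.content`); swap in the real stream when running against a live ORBIT server:

```python
# Accumulate streamed deltas into the full response text.
# fake_chunk mimics the shape of chunks yielded by the openai client;
# replace the demo list with a real stream from client.chat.completions.create(...).
from types import SimpleNamespace

def collect_stream(stream) -> str:
    """Join non-empty delta contents from an OpenAI-style chat stream."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta content is typically None
            parts.append(delta)
    return "".join(parts)

def fake_chunk(text):
    delta = SimpleNamespace(content=text)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

demo = [fake_chunk("Onboarding "), fake_chunk("checklist"), fake_chunk(None)]
print(collect_stream(demo))  # Onboarding checklist
```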
Your support keeps ORBIT independent and focused on open-source innovation.
- Star the repo to signal that ORBIT matters to you.
- Share a demo, blog, or tweet so other builders discover it.
- Open issues and PRs; your feedback directly shapes the roadmap.
For more detailed information, please refer to the official documentation.
- Tutorial: Chat with Your Data
- Installation Guide
- Configuration
- Authentication
- RAG & Adapters
- Development Roadmap
- Contributing Guide
Apache 2.0 - See LICENSE for details.