
🎭 MoM (Mixture of Models) Service

Python 3.9+ · FastAPI · License: MIT · Docker · Ask DeepWiki

Transform multiple AI perspectives into superior answers through intelligent synthesis

MoM Service is an OpenAI-compatible API that revolutionizes LLM usage by orchestrating multiple AI models simultaneously. Instead of relying on a single model's perspective, it queries several LLMs in parallel and synthesizes their responses into a single, superior answer using a dedicated "concluding" model.

Think of it as assembling an expert panel: you get the creativity of GPT-5, the reasoning of Claude Sonnet 4.5, and the versatility of Gemini 2.5 Pro, all combined into one comprehensive response that's more reliable and nuanced than any individual model could produce.

🌟 Why a Mixture of Models?

In today's AI landscape with hundreds of specialized LLMs, relying on a single model is limiting. A Mixture of Models (MoM) approach delivers compelling advantages:

Each AI model brings its own unique perspective and reasoning style to the same question. MoM synthesizes these diverse viewpoints into a single, more comprehensive answer.

| Benefit | Description |
| --- | --- |
| 🎯 Superior Quality | Synthesize multiple perspectives to mitigate individual model weaknesses (hallucinations, biases, knowledge gaps) |
| 🛡️ Enhanced Reliability | If one LLM fails or underperforms, others compensate to maintain high-quality output |
| 💰 Cost Optimization | Route queries strategically: use cost-effective models where appropriate, premium ones when needed |
| 🔄 Maximum Flexibility | Hot-swap models via configuration without code changes. Create specialized "meta-models" for different tasks |

Real-World Use Cases

  • πŸ“ Content Creation: Combine creative and factual models for balanced, engaging content
  • πŸ’» Code Generation: Merge multiple coding assistants for more robust solutions
  • πŸ” Research & Analysis: Get comprehensive answers by consulting multiple AI "experts"
  • πŸŽ“ Educational Applications: Provide students with well-rounded explanations from diverse perspectives

🔄 How It Works

MoM Service uses an elegant fan-out, fan-in architecture for parallel processing and intelligent synthesis:

graph TD
    A[Client Request via OpenAI-Compatible API] --> B{MoM Service - FastAPI};
    B --> C[Fan-Out to Multiple LLMs];
    subgraph "Parallel LLM Inference"
        C --> D1[GPT-4o];
        C --> D2[Claude 3.5 Sonnet];
        C --> D3[Gemini 1.5 Pro];
        C --> D4[Llama 3.1 405B];
    end
    subgraph "Response Synthesis"
        D1 --> E{Concluding LLM};
        D2 --> E;
        D3 --> E;
        D4 --> E;
    end
    E --> F[Final Response Streamed to User];

    style B fill:#f9f,stroke:#333,stroke-width:2px
    style C fill:#ccf,stroke:#333,stroke-width:2px
    style E fill:#cfc,stroke:#333,stroke-width:2px

Processing Flow

  1. 📥 Request In: Client makes a request to the OpenAI-compatible endpoint (/v1/chat/completions)
  2. 🎯 Fan-Out: Service identifies the MoM configuration and forwards the request to all configured LLMs
  3. ⚡ Concurrent Processing: All LLMs process the request simultaneously (non-blocking; see the sketch below)
  4. 🧠 Synthesize: Responses are collected and passed to the "Concluding LLM"
  5. 📤 Stream Response: The final synthesized answer is streamed back to the client in real-time
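
In code, the fan-out/fan-in pattern boils down to something like the following minimal asyncio sketch. This is an illustration only; the model names, prompts, and function names are placeholders, not the service's actual core_logic.py:

import asyncio

async def query_llm(model: str, messages: list) -> str:
    # Placeholder: the real service performs this call via LiteLLM.
    return f"[{model}] draft answer"

async def mom_answer(messages: list) -> str:
    models = ["gpt4", "claude", "gemini"]  # names come from config.yaml
    # Fan-out: query every configured LLM concurrently (non-blocking).
    results = await asyncio.gather(
        *(query_llm(m, messages) for m in models),
        return_exceptions=True,  # one failing model must not sink the request
    )
    drafts = [r for r in results if isinstance(r, str)]
    # Fan-in: the concluding LLM synthesizes the drafts into one answer.
    return await query_llm("gpt4", [
        {"role": "system", "content": "Synthesize these responses..."},
        {"role": "user", "content": "\n---\n".join(drafts)},
    ])

print(asyncio.run(mom_answer([{"role": "user", "content": "Hello"}])))

Because asyncio.gather runs the calls concurrently, total latency tracks the slowest model rather than the sum of all models.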

✨ Features

  • 🔌 OpenAI-Compatible API: Drop-in replacement with /v1/chat/completions and /v1/models endpoints
  • 🎭 Multi-Model Orchestration: Query multiple LLMs in parallel with intelligent synthesis
  • 🖼️ Multimodal Vision Support: Send images alongside text using OpenAI Vision API format
  • ⚡ Real-Time Streaming: Stream synthesized responses back to clients with low latency
  • ⚙️ Configuration-Driven: Define everything in a single config.yaml file; no code changes needed
  • 💰 Advanced Pricing & Cost Tracking:
    • Custom pricing configurations for reasoning tokens
    • Automatic model filtering based on multimodal capabilities
    • Detailed cost breakdowns with normalized token reporting
    • Per-request cost calculation and logging
  • 📊 Advanced Observability:
    • Built-in Langfuse integration for distributed tracing
    • Comprehensive metrics API with cost tracking and usage analytics
    • Detailed health check endpoints for monitoring system components
  • 🔒 Enterprise Security:
    • Centralized Bearer token authentication with structured error responses
    • Clear distinction between service misconfiguration (503) and auth failures (401), as sketched after this list
    • Flexible CORS policies for cross-origin requests
  • 🐳 Production Ready:
    • Multi-stage Docker builds with non-root users
    • Docker Compose for local development
    • Advanced health checks for orchestration
  • 💾 Response Caching: Automatic LLM response caching to reduce costs and latency
  • 🧪 Comprehensive Testing: Full test suite with pytest for reliability
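
The 503-versus-401 distinction above can be sketched as a minimal FastAPI dependency. This illustrates the described behavior; it is not the service's actual auth.py:

import os
from typing import Optional

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer(auto_error=False)

async def require_token(
    creds: Optional[HTTPAuthorizationCredentials] = Depends(bearer),
) -> None:
    expected = os.getenv("API_TOKEN")
    if not expected:
        # Service misconfiguration: no token configured on the server side.
        raise HTTPException(status_code=503, detail="API_TOKEN is not configured")
    if creds is None or creds.credentials != expected:
        # Client error: missing or incorrect bearer token.
        raise HTTPException(status_code=401, detail="Invalid bearer token")

@app.get("/v1/models", dependencies=[Depends(require_token)])
async def list_models() -> dict:
    return {"object": "list", "data": []}

Note that HTTPBearer(auto_error=False) lets the dependency make the 401/503 distinction itself instead of falling back to FastAPI's default error for missing credentials.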

πŸ“ Project Structure

mom-llm/
├── 📄 Dockerfile               # Multi-stage Docker build for production
├── 🐳 docker-compose.yml       # Docker Compose for local development
├── ⚙️ config.yaml              # Main configuration (gitignored - use template)
├── 📋 config.yaml_template     # Configuration template with examples
├── 📦 requirements.txt         # Python dependencies
├── 📝 LICENSE                  # MIT License
├── 🔒 .env                     # Environment variables (gitignored)
├── 📂 mom_service/
│   ├── 🎯 main.py              # FastAPI application & middleware
│   ├── 🔒 auth.py              # Authentication & token validation
│   ├── ⚙️ config.py            # Configuration loader & models
│   ├── 🧠 core_logic.py        # Fan-out & synthesis engine
│   ├── 📞 llm_calls.py         # LLM communication via LiteLLM
│   ├── 🖼️ multimodal_utils.py  # Multimodal content & message sanitization
│   ├── 💰 cost_calculation.py  # Cost tracking with reasoning tokens
│   ├── 💵 pricing_utils.py     # Pricing conversions & normalization
│   ├── 📊 metrics_db.py        # Metrics persistence & analytics
│   ├── 🏥 health.py            # Health check utilities
│   └── 📂 endpoints/
│       ├── 📋 models.py        # Pydantic request/response models
│       ├── 🔌 openai_v1.py     # OpenAI-compatible endpoints
│       └── 📈 metrics_api.py   # Usage metrics API
└── 📂 tests/
    ├── ⚙️ conftest.py          # Pytest fixtures & configuration
    ├── 🧪 test_config.py       # Configuration tests
    ├── 🧪 test_core_logic.py   # Core logic tests
    ├── 🧪 test_llm_calls.py    # LLM integration tests
    ├── 🧪 test_endpoints.py    # API endpoint tests
    └── 🧪 test_health.py       # Health check tests

🚀 Quick Start

Prerequisites

  • Python 3.9 or higher
  • Docker (optional, for containerized deployment)
  • API keys for your chosen LLM providers (OpenAI, Google Gemini, Anthropic, etc.)

Installation

  1. Clone the repository

    git clone https://github.com/arashbehmand/mom-llm.git
    cd mom-llm
  2. Set up environment variables

    Create a .env file in the project root:

    # Service Configuration
    API_TOKEN="your-secret-bearer-token"
    ALLOWED_CORS_ORIGINS=""  # Comma-separated origins, or empty for no CORS
    LITELLM_VERBOSE="false"
    
    # LLM API Keys (add the ones you need)
    OPENAI_API_KEY="sk-..."
    GOOGLE_API_KEY="..."
    ANTHROPIC_API_KEY="..."
    
    # Optional: Langfuse for observability
    LANGFUSE_PUBLIC_KEY=""
    LANGFUSE_SECRET_KEY=""
    LANGFUSE_HOST="https://cloud.langfuse.com"
  3. Configure your models

    Copy the template and customize:

    • macOS/Linux:
      cp config.yaml_template config.yaml
      # Edit config.yaml to define your LLMs and MoM configurations
    • Windows (PowerShell):
      Copy-Item config.yaml_template config.yaml
      # Then edit config.yaml to define your LLMs and MoM configurations
  4. Install dependencies

    pip install -r requirements.txt
  5. Run the service

    uvicorn mom_service.main:app --reload --host 0.0.0.0 --port 8000

🐳 Docker Deployment

Using Docker Compose (Recommended):

# Start the service
docker-compose up -d

# View logs
docker-compose logs -f mom-service

# Stop the service
docker-compose down

Using Docker directly:

# Build the image
docker build -t mom-service .

# Run the container
docker run -d \
  --name mom-service \
  -p 8000:8000 \
  --env-file .env \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -v $(pwd)/data:/app/data \
  mom-service

πŸ“ Basic Usage

Test the service:

curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer your-secret-bearer-token"

Make a chat completion request:

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-bearer-token" \
  -d '{
    "model": "mom",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    "stream": true
  }'

Note: Set "stream": false to get a single JSON response instead of an SSE stream.

Send an image (multimodal vision request):

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-bearer-token" \
  -d '{
    "model": "mom",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What'\''s in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://example.com/image.jpg",
              "detail": "high"
            }
          }
        ]
      }
    ],
    "stream": false
  }'

Note: Vision requests automatically filter to multimodal-capable models. Non-capable models are skipped, and messages are sanitized for each provider to ensure compatibility.
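
A simplified sketch of that filtering step (the supports_vision flag is illustrative; the real logic lives in multimodal_utils.py and core_logic.py):

def request_has_images(messages: list) -> bool:
    # In the OpenAI Vision format, content may be a list of typed parts.
    return any(
        isinstance(msg.get("content"), list)
        and any(part.get("type") == "image_url" for part in msg["content"])
        for msg in messages
    )

def eligible_llms(llms: list, messages: list) -> list:
    # Text-only requests may go to every configured model; vision requests
    # are restricted to models flagged as multimodal-capable.
    if not request_has_images(messages):
        return llms
    return [llm for llm in llms if llm.get("supports_vision", False)]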

βš™οΈ Configuration

The service is configured through config.yaml and environment variables (.env file).

Quick Configuration Overview

1. Environment Variables - API keys and service settings:

# Required
API_TOKEN="your-secret-bearer-token"

# LLM Provider Keys (add the ones you need)
OPENAI_API_KEY="sk-..."
GOOGLE_API_KEY="..."
ANTHROPIC_API_KEY="..."

2. Configuration File - Define your LLMs and MoM models:

# Define individual LLMs
llm_definitions:
  - name: "gpt4"
    model: "openai/gpt-4"
    api_key_env: "OPENAI_API_KEY"
  # ...define the "claude" and "gemini" entries referenced below the same way...

# Define synthesis prompts
prompt_definitions:
  - name: "synth_default"
    content: "Synthesize responses into a cohesive answer..."

# Create MoM models
models:
  - name: "mom"
    llms_to_query: ["gpt4", "claude", "gemini"]
    concluding_llm: "gpt4"
    concluding_prompt: "synth_default"

For detailed configuration options, custom pricing, advanced features, and complete examples, see the Configuration Guide.

🔌 API Reference

The MoM Service provides OpenAI-compatible endpoints plus additional metrics and health check endpoints.

Quick API Overview

Core Endpoints:

  • GET /v1/models - List available MoM models
  • POST /v1/chat/completions - Chat completions (streaming and non-streaming)
  • GET /v1/metrics/usage - Usage metrics and cost tracking
  • GET /health - Health check

Example Request:

curl http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mom",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'

For complete API documentation including all endpoints, parameters, response formats, and code examples in multiple languages, see the API Reference.

Using with OpenAI SDK

The service is fully compatible with the OpenAI Python SDK:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-secret-bearer-token"
)

response = client.chat.completions.create(
    model="mom",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    stream=True
)

for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
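
A non-streaming call is identical apart from the chunk loop; set stream=False and read the complete message from the standard response object:

response = client.chat.completions.create(
    model="mom",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    stream=False,  # one complete JSON response instead of an SSE stream
)
print(response.choices[0].message.content)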

See the API Reference for more examples, including multimodal requests.

🎯 Advanced Features

Thinking Context

Set include_thinking_context: true in your model configuration to see intermediate responses from all LLMs before synthesis:

<think>
Model: gpt-4o
Content: [GPT-4o's response]
---
Model: claude-3-5-sonnet
Content: [Claude's response]
---
</think>

[Final synthesized answer]

Useful for understanding synthesis logic, debugging, and transparency.

Message Sanitization

The service automatically sanitizes messages for provider compatibility, removing empty fields and preserving multimodal content appropriately. This ensures reliable operation across all LLM providers without manual adjustments.
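
In spirit, the sanitization pass looks like this simplified sketch (not the actual multimodal_utils.py implementation):

def sanitize_message(message: dict) -> dict:
    # Drop fields with empty values, which some providers reject outright.
    cleaned = {k: v for k, v in message.items() if v not in (None, "", [], {})}
    # Multimodal content (a list of typed parts) passes through unchanged,
    # as does ordinary string content.
    return cleaned

def sanitize_messages(messages: list) -> list:
    return [sanitize_message(m) for m in messages]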

Cost Tracking & Observability

  • Automatic cost calculation for every request with detailed breakdowns
  • Langfuse integration for distributed tracing: Add credentials to .env and view detailed traces at Langfuse
  • Metrics API at /v1/metrics/usage for usage analytics (see the example below)
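
For a quick look at the data, a minimal client sketch (the exact response schema is covered in the API Reference):

import requests

resp = requests.get(
    "http://localhost:8000/v1/metrics/usage",
    headers={"Authorization": "Bearer your-secret-bearer-token"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # raw usage and cost metrics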

πŸ› οΈ Development

Running in Development Mode

uvicorn mom_service.main:app --reload --reload-include "config.yaml"

The --reload-include flag watches config.yaml for changes and automatically reloads the service.

Health Checks

# Basic health check
curl http://localhost:8000/health

# Detailed health check with component validation
curl http://localhost:8000/health/detailed

# Include LLM connectivity test
curl "http://localhost:8000/health/detailed?check_llm=true"

Running Tests

# Run all tests
pytest

# Run with coverage report
pytest --cov=mom_service --cov-report=html

# Run specific test file
pytest tests/test_endpoints.py

The test suite includes unit tests, integration tests, API tests, and health check validation.

📚 Documentation

For more detailed information, check out these guides:

  • Configuration Guide - detailed configuration options, custom pricing, and complete examples
  • API Reference - all endpoints, parameters, response formats, and code examples in multiple languages
  • CONTRIBUTING.md - development environment setup and contribution guidelines

🤝 Contributing

Contributions are welcome! Whether you're fixing bugs, improving documentation, or proposing new features, your help is appreciated.

Please see CONTRIBUTING.md for detailed guidelines on:

  • Setting up your development environment
  • Code style and standards
  • Running tests and quality checks
  • Submitting pull requests
  • Reporting issues

Quick start for contributors:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes with tests
  4. Run the test suite (pytest)
  5. Commit your changes
  6. Push to your branch
  7. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • This project was developed with the assistance of multiple AI tools, including Anthropic's Claude, GitHub Copilot, and Kilo Code.
  • Built with FastAPI and LiteLLM
  • Inspired by ensemble learning and multi-agent AI systems
  • Observability powered by Langfuse

📬 Contact

Arash Behmand


⭐ If you find this project useful, please consider giving it a star on GitHub!
