Instead of asking a question to a single LLM, this app lets you assemble your own LLM Council from multiple models, whether local (via Ollama), cloud-based (OpenRouter), or custom endpoints (OpenAI-compatible APIs). The web interface sends your query to all council members simultaneously, collects their responses, then has them review and rank each other's answers anonymously. Finally, a designated Chairman model synthesizes the ranked outputs into a single, polished response.
In a bit more detail, here is what happens when you submit a query:
- Stage 1: First opinions – The user query is sent to all LLMs individually and the responses are collected. Each response is shown in a tab view for easy inspection.
- Stage 2: Review – Every LLM receives the other models' responses (identities are anonymised) and is asked to rank them on accuracy and insight.
- Stage 3: Final response – The designated Chairman model aggregates the ranked outputs into a single, polished answer for the user.
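To make the three stages concrete, here is a minimal sketch of the orchestration loop. The helper coroutines `ask_model`, `rank_responses`, and `synthesize` are hypothetical stand-ins, not the actual backend functions.

```python
import asyncio

# Hypothetical stand-ins for the backend's real model-calling helpers.
async def ask_model(model_id: str, prompt: str) -> str: ...
async def rank_responses(model_id: str, query: str, peer_answers: list[str]) -> str: ...
async def synthesize(model_id: str, query: str, answers: dict, rankings: list) -> str: ...

async def run_council(query: str, council: list[str], chairman: str) -> str:
    # Stage 1: send the query to every council member concurrently.
    replies = await asyncio.gather(*(ask_model(m, query) for m in council))
    answers = dict(zip(council, replies))

    # Stage 2: each member ranks its peers' answers, passed without attribution.
    rankings = await asyncio.gather(*(
        rank_responses(m, query, [a for peer, a in answers.items() if peer != m])
        for m in council
    ))

    # Stage 3: the Chairman synthesizes answers and rankings into one reply.
    return await synthesize(chairman, query, answers, rankings)
```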
This project was 99% vibe‑coded as a fun Saturday hack while exploring side‑by‑side LLM comparisons (see the original tweet here). The code is intentionally lightweight and may contain shortcuts. It is provided as‑is for inspiration; no ongoing support is guaranteed.
- API Keys via Environment Variables – You can now store model credentials outside `config.json`. Each API key field supports a Direct | Env Var toggle so you can decide per model; selecting Env Var saves the value as `env:YOUR_VAR_NAME` and the backend resolves the secret from the environment at runtime (see the sketch after this list).
- General Setting for Defaults – In Settings → Other Settings, a new “Store API Key as ENV variable, not in json” checkbox controls the default mode when adding future models (existing entries stay untouched). Even with the box unchecked, you can still opt into env vars per model via the toggle.
- Masked Config Responses – All config/API responses now mask plain API keys and preserve `env:` references, so the UI can show which environment variable is referenced without revealing the actual secret.
- Model Registry – Complete redesign of model configuration! Now you define individual model instances (e.g., "My Local Llama", "GPT-4 via OpenRouter") with their own credentials in a central registry, then select from these pre-configured models for your Council and Chairman.
- Multi-Provider Support – Supports Local Ollama, OpenRouter, and OpenAI-Compatible endpoints (e.g., LM Studio, vLLM).
- Settings UI Redesign – Three-tab interface:
- Models Tab: Add, edit, and delete model configurations with labels and type badges
- Ollama Settings Tab: Configure global Ollama parameters (context window, serialization)
- Council Configuration Tab: Select models from your registry for Council and Chairman
- Automated Setup Scripts – New `setup.sh` and `setup.bat` scripts automate dependency installation and Redis setup for faster onboarding.
- Dark Theme – A sleek dark UI is now the default.
- Frontend Improvements – Fixed tab naming display issues in peer rankings and evaluations views.
- Docker Setup – A minimal Dockerfile and compose script for quick containerised deployment.
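For illustration, resolving an `env:`-prefixed API key at runtime could look roughly like this; the function name and error handling are assumptions, not the project's actual code.

```python
import os

def resolve_api_key(stored_value: str | None) -> str | None:
    """Return the usable API key for a model entry.

    Values saved as 'env:MY_VAR' are looked up in the environment at
    runtime; anything else is treated as the key itself.
    """
    if stored_value and stored_value.startswith("env:"):
        var_name = stored_value[len("env:"):]
        return os.environ.get(var_name)  # None if the variable is unset
    return stored_value or None
```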
Choose the setup method that works best for you:
- 🐳 Docker (Recommended) – Easiest option, everything containerized. Best for quick demos and production.
- ⚡ Automated Scripts – One-command setup with `./setup.sh` (macOS/Linux) or `setup.bat` (Windows). Handles dependencies and Redis automatically.
- 🛠️ Manual Installation – Full control over each component. Best for development and customization.
> **Tip:** New to the project? Start with Docker or the automated setup scripts for the fastest experience.
Prerequisites:
- Docker and Docker Compose installed on your system
All Operating Systems (macOS / Linux / Windows):
- Clone the repository and navigate to the project directory.

- Create a `.env` file in the project root (optional, only if using OpenRouter):

  ```
  OPENROUTER_API_KEY=sk-or-v1-...
  REDIS_HOST=redis
  REDIS_PORT=6379
  ```

  Get your API key at openrouter.ai.

- Build and run with Docker Compose:

  ```
  docker compose up --build
  ```

- Open http://localhost:5173 in your browser.
This brings up Redis, the backend, the background worker, and the frontend—all configured and ready to use.
Use the provided setup scripts to automatically install dependencies and configure Redis:
macOS / Linux:

```
./setup.sh
```

Windows:

```
.\setup.bat
```

These scripts will:
- ✓ Check for required dependencies (Python, Node.js, Docker)
- ✓ Install Python and frontend dependencies
- ✓ Create a Redis container via Docker
- ✓ Generate a `.env` file from the template
After setup completes:
```
# macOS / Linux
./start-background.sh

# Windows
.\start-background.bat
```

Then open http://localhost:5173 in your browser.
If you prefer not to use Docker, follow these OS-specific instructions:
- Python 3.10+ with uv or pip
- Node.js 16+ and npm
- Redis server
1. Install Dependencies
Redis:

```
brew install redis
```

Python dependencies:

```
uv sync
# or
pip install -r requirements.txt
```

Frontend:

```
cd frontend
npm install
cd ..
```

2. Configure Environment
Create a `.env` file:

```
OPENROUTER_API_KEY=sk-or-v1-...
REDIS_HOST=localhost
REDIS_PORT=6380
```

3. Start Services
Start Redis:

```
redis-server --port 6380
```

In separate terminal windows, start:

Background worker:

```
uv run rq worker council --url redis://localhost:6380/0
```

Backend:

```
uv run python -m backend.main
```

Frontend:

```
cd frontend
npm run dev
```

4. Access the App
Open http://localhost:5173 in your browser.
1. Install Dependencies
Redis (Ubuntu/Debian):

```
sudo apt update
sudo apt install redis-server
```

Redis (Fedora/RHEL):

```
sudo dnf install redis
```

Python dependencies:

```
uv sync
# or
pip install -r requirements.txt
```

Frontend:

```
cd frontend
npm install
cd ..
```

2. Configure Environment
Create a `.env` file:

```
OPENROUTER_API_KEY=sk-or-v1-...
REDIS_HOST=localhost
REDIS_PORT=6380
```

3. Start Services
Start Redis:

```
redis-server --port 6380
```

In separate terminal windows, start:

Background worker:

```
uv run rq worker council --url redis://localhost:6380/0
```

Backend:

```
uv run python -m backend.main
```

Frontend:

```
cd frontend
npm run dev
```

4. Access the App
Open http://localhost:5173 in your browser.
1. Install Dependencies
- Option A: Download and install Memurai (Redis for Windows)
- Option B: Use WSL and follow the Linux instructions above
Python dependencies (PowerShell):

```
uv sync
# or
pip install -r requirements.txt
```

Frontend:

```
cd frontend
npm install
cd ..
```

2. Configure Environment
Create a `.env` file:

```
OPENROUTER_API_KEY=sk-or-v1-...
REDIS_HOST=localhost
REDIS_PORT=6380
```
3. Start Services
Start Redis (Memurai or WSL):

```
# If using Memurai, it runs as a service automatically
# If using WSL: wsl redis-server --port 6380
```

In separate PowerShell windows, start:

Background worker:

```
uv run rq worker council --url redis://localhost:6380/0
```

Backend:

```
uv run python -m backend.main
```

Frontend:

```
cd frontend
npm run dev
```

4. Access the App
Open http://localhost:5173 in your browser.
After starting the app, configure your models using the Model Registry:
- Click the ⚙️ Settings icon in the sidebar
- Go to the Models tab
- Click + Add Model
- Fill in the model details:
- Label: A friendly name (e.g., "My Local Llama")
- Type: Choose Ollama, OpenRouter, or OpenAI Compatible
- Model Name: The actual model identifier (e.g., `llama3`, `openai/gpt-4o`)
- Base URL: For Ollama and OpenAI Compatible (e.g., `http://localhost:11434`)
- API Key: For OpenRouter and OpenAI Compatible
- Click Save Model
Global Ollama settings are now in a dedicated tab:
- Context Window (num_ctx): Default context size for all Ollama models (e.g., 4096, 8192)
- Serialize Requests: Run Ollama models sequentially to avoid GPU thrashing
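As a rough illustration of how these two settings could feed into a request (a sketch under assumptions, not the project's actual client code): the context window is passed through Ollama's `options.num_ctx` field, and serialization can be approximated with a shared lock around each call.

```python
import asyncio
import httpx

_ollama_lock = asyncio.Lock()  # used only when serialize_requests is enabled

async def ollama_chat(base_url: str, model: str, prompt: str,
                      num_ctx: int = 4096, serialize: bool = False) -> str:
    async def _send() -> str:
        async with httpx.AsyncClient(timeout=None) as client:
            resp = await client.post(f"{base_url}/api/chat", json={
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,
                "options": {"num_ctx": num_ctx},  # global context-window setting
            })
            resp.raise_for_status()
            return resp.json()["message"]["content"]

    if serialize:
        async with _ollama_lock:  # run Ollama requests one at a time
            return await _send()
    return await _send()
```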
- Go to the Council Configuration tab
- Check the boxes for models you want in your Council
- Select a Chairman model from the dropdown
- Click Save Changes
Local Ollama Setup:
- Ensure Ollama is running (`ollama serve`)
- Pull models you want to use (e.g., `ollama pull mistral`, `ollama pull llama3`)
- In Settings → Models tab, add each model with:
  - Type: Ollama
  - Base URL: `http://localhost:11434`
  - Model Name: The model you pulled (e.g., `mistral`)
Default Configuration: The app starts with free OpenRouter models configured. You can add your own models or modify these in the Settings UI.
The application can work with the following model providers:
- Ollama – Run local models via the Ollama server (e.g., `http://localhost:11434`). Ideal for offline use.
- OpenRouter – Cloud‑based models with a free tier. Requires an API key set in `.env` (`OPENROUTER_API_KEY`).
- OpenAI‑compatible – Any OpenAI‑style endpoint such as LM Studio, vLLM, or custom deployments. Configure the base URL and API key in the Model Registry.
Configure these models in the Models tab of the Settings UI. See the Setup section for details on adding each type.
Models are now stored in a Model Registry in `data/config.json`:

```
{
  "models": {
    "unique-model-id": {
      "label": "My Model Label",
      "type": "ollama|openrouter|openai-compatible",
      "model_name": "actual-model-name",
      "base_url": "http://localhost:11434",
      "api_key": "sk-..."
    }
  },
  "ollama_settings": {
    "num_ctx": 4096,
    "serialize_requests": false
  },
  "council_models": ["model-id-1", "model-id-2"],
  "chairman_model": "model-id-3"
}
```

Key Features:
- Each model has its own credentials and settings
- Models are referenced by ID throughout the app
- Global Ollama settings apply to all Ollama models
- Automatic migration from old configuration format
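For orientation, reading the registry and resolving the configured council from this JSON could look roughly like the sketch below; the loader shown here is an assumption, not the backend's actual code.

```python
import json
from pathlib import Path

def load_council(config_path: str = "data/config.json"):
    """Resolve council and chairman entries from the Model Registry."""
    config = json.loads(Path(config_path).read_text())
    models = config["models"]  # registry keyed by model ID
    council = [models[model_id] for model_id in config["council_models"]]
    chairman = models[config["chairman_model"]]
    return council, chairman, config.get("ollama_settings", {})
```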
Redis Connection Errors
```
rq.exceptions.ConnectionError: Error while reading from socket
```

- Solution: Ensure Redis is running on the correct port (default: 6380)
- Check Docker: `docker ps | grep redis`
- Restart Redis: `docker restart llm-council-redis`
Ollama Service Not Reachable
```
Unable to reach Ollama at http://localhost:11434
```

- Solution: Start the Ollama service: `ollama serve`
- Verify models are pulled: `ollama list`
- If using Docker, use `http://host.docker.internal:11434` as the base URL
OpenRouter API Errors
```
401 Unauthorized or Invalid API Key
```

- Solution: Check that your `.env` file has the correct `OPENROUTER_API_KEY`
- Get a new key at openrouter.ai/keys
- Restart the backend after updating `.env`
Port Already in Use
```
Address already in use: 5173, 8010, or 6380
```

- Solution: Change the port in the respective config:
  - Frontend (5173): Edit `frontend/vite.config.js`
  - Backend (8010): Edit `backend/main.py` or use an env var
  - Redis (6380): Update `REDIS_PORT` in `.env`
Frontend Tab Names Showing "(OpenRouter)" Repeatedly
- Solution: This was a known bug that has been fixed in recent updates
- Update to the latest version: `git pull origin main`
- Clear browser cache and reload
Docker Networking Issues (localhost vs host.docker.internal)
- Solution: The app automatically rewrites URLs when running in Docker:
  - `localhost` → `host.docker.internal` (when in a container)
  - `host.docker.internal` → `127.0.0.1` (when on the host)
- For manual override, set the correct base URL in Settings → Models
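A rough sketch of what that rewriting could look like follows; the detection flag and function name are assumptions, not the backend's actual implementation.

```python
import os

def adjust_base_url(base_url: str) -> str:
    """Rewrite host names so model endpoints stay reachable.

    Assumes a hypothetical IN_DOCKER environment flag for detection.
    """
    if os.environ.get("IN_DOCKER") == "1":
        # Inside a container, 'localhost' points at the container itself.
        return base_url.replace("localhost", "host.docker.internal")
    # On the host, 'host.docker.internal' usually does not resolve.
    return base_url.replace("host.docker.internal", "127.0.0.1")
```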
If you encounter other issues:
- Check the backend logs in your terminal
- Check browser console for frontend errors
- Review the GitHub Issues
- Open a new issue with:
- Your OS and setup method (Docker/Scripts/Manual)
- Error messages and logs
- Steps to reproduce
- Backend: FastAPI (Python 3.10+), async httpx, Redis + RQ for job queuing
- Frontend: React + Vite, react‑markdown for rendering
- Storage: JSON files in `data/` for config and conversations
- Package Management: uv for Python, npm for JavaScript
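To make the queuing piece concrete, a job can be pushed onto the `council` queue roughly like this; the dotted job path `backend.jobs.run_council` is a placeholder, not necessarily the real module layout.

```python
from redis import Redis
from rq import Queue

# Connect to the same Redis instance the worker listens on
# (port 6380 in the manual setup above).
redis_conn = Redis(host="localhost", port=6380)
queue = Queue("council", connection=redis_conn)

# Enqueue by dotted path; 'backend.jobs.run_council' is a placeholder here.
job = queue.enqueue("backend.jobs.run_council", "What is the capital of France?")
print(job.id)  # a worker started with `rq worker council` will pick this up
```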
Contributions are welcome! Feel free to open issues or submit pull requests. Please ensure that any new code follows the existing style and includes appropriate documentation.
This project is licensed under the MIT License. See LICENSE for details.