The idea of this repo is that instead of asking a question to your favorite LLM provider (e.g. OpenAI GPT 5.1, Google Gemini 3.0 Pro, Anthropic Claude Sonnet 4.5, xAI Grok 4, etc.), you can group them into your "LLM Council". This repo is a simple, local web app that essentially looks like ChatGPT except it uses LiteLLM to send your query to multiple LLM providers, asks them to review and rank each other's work, and finally has a Chairman LLM produce the final response.
In a bit more detail, here is what happens when you submit a query (a rough code sketch of the flow follows the list):
- Stage 1: First opinions. The user query is given to all LLMs individually, and the responses are collected. The individual responses are shown in a "tab view", so that the user can inspect them all one by one.
- Stage 2: Review. Each individual LLM is given the responses of the other LLMs. Under the hood, the LLM identities are anonymized so that a model can't play favorites when judging the outputs. Each LLM is asked to rank the responses on accuracy and insight.
- Stage 3: Final response. The designated Chairman of the LLM Council takes all of the models' responses and compiles them into a single final answer that is presented to the user.
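Under the hood this is a straightforward fan-out/fan-in over LiteLLM. Here is a minimal sketch of the three stages; the function names, prompts, and "Response A/B/..." anonymization labels are illustrative, not the repo's exact code:

```python
import asyncio
import litellm

# Illustrative subset of the council; see backend/config.py for the real list.
COUNCIL = [
    "openrouter/openai/gpt-5.1",
    "openrouter/anthropic/claude-sonnet-4.5",
]
CHAIRMAN = "openrouter/google/gemini-3-pro-preview"

async def ask(model: str, prompt: str) -> str:
    resp = await litellm.acompletion(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content or ""

async def council_pipeline(query: str) -> str:
    # Stage 1: first opinions, collected from all council models in parallel.
    answers = await asyncio.gather(*(ask(m, query) for m in COUNCIL))

    # Stage 2: each model reviews the *other* models' answers, presented
    # anonymously ("Response A", "Response B", ...) so it can't play favorites.
    async def review(i: int) -> str:
        others = [a for j, a in enumerate(answers) if j != i]
        labeled = "\n\n".join(
            f"Response {chr(65 + k)}:\n{a}" for k, a in enumerate(others)
        )
        prompt = (
            f"Question: {query}\n\n{labeled}\n\n"
            "Rank these responses by accuracy and insight, best first."
        )
        return await ask(COUNCIL[i], prompt)

    reviews = await asyncio.gather(*(review(i) for i in range(len(COUNCIL))))

    # Stage 3: the Chairman compiles everything into one final answer.
    all_answers = "\n\n".join(
        f"Response {chr(65 + k)}:\n{a}" for k, a in enumerate(answers)
    )
    final_prompt = (
        f"Question: {query}\n\nCouncil responses:\n{all_answers}\n\n"
        "Peer rankings:\n" + "\n\n".join(reviews) +
        "\n\nSynthesize the single best final answer."
    )
    return await ask(CHAIRMAN, final_prompt)

if __name__ == "__main__":
    print(asyncio.run(council_pipeline("Why is the sky blue?")))
```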
This project was 99% vibe coded as a fun Saturday hack because I wanted to explore and evaluate a number of LLMs side by side while reading books together with LLMs. It's nice and useful to see multiple responses side by side, along with the LLMs' cross-opinions of each other's outputs. I'm not going to support it in any way; it's provided here as-is for other people's inspiration, and I don't intend to improve it. Code is ephemeral now and libraries are over; ask your LLM to change it in whatever way you like.
The project uses uv for project management.
Backend:

```bash
uv sync
```

Frontend:

```bash
cd frontend
npm install
cd ..
```

Create a `.env` file in the project root:

```
OPENAI_API_KEY=...
```

Set the appropriate environment variables for whichever providers you want to use (e.g. `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, etc.).
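For example, since the default council routes every model through OpenRouter, a typical `.env` might look like the snippet below. `OPENROUTER_API_KEY` is the variable LiteLLM reads for `openrouter/` model IDs; the direct provider keys are only needed if you point the council at those providers instead:

```
OPENROUTER_API_KEY=...
# only needed if you swap in direct (non-OpenRouter) model IDs:
ANTHROPIC_API_KEY=...
GEMINI_API_KEY=...
```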
Edit `backend/config.py` (or set env vars) to customize the council:

```python
COUNCIL_MODELS = [
    "openrouter/google/gemini-3-pro-preview",
    "openrouter/openai/gpt-5.1",
    "openrouter/anthropic/claude-sonnet-4.5",
    "openrouter/x-ai/grok-4",
]
CHAIRMAN_MODEL = "openrouter/google/gemini-3-pro-preview"
TITLE_MODEL = "openrouter/google/gemini-2.5-flash"
```

You can also override via env vars:
```bash
COUNCIL_MODELS=openrouter/google/gemini-3-pro-preview,openrouter/openai/gpt-5.1
# use first council model as chairman if not configured
CHAIRMAN_MODEL=openrouter/google/gemini-3-pro-preview
# use last council model as title model if not configured
TITLE_MODEL=openrouter/google/gemini-2.5-flash
```
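For reference, the override logic can be as simple as the sketch below. This is an illustrative reconstruction from the fallback rules noted in the comments above (chairman defaults to the first council model, title model to the last), not necessarily the exact code in `backend/config.py`:

```python
import os

_DEFAULT_COUNCIL = [
    "openrouter/google/gemini-3-pro-preview",
    "openrouter/openai/gpt-5.1",
    "openrouter/anthropic/claude-sonnet-4.5",
    "openrouter/x-ai/grok-4",
]

# COUNCIL_MODELS is a comma-separated list in the environment.
_env_council = os.getenv("COUNCIL_MODELS", "")
COUNCIL_MODELS = [m.strip() for m in _env_council.split(",") if m.strip()] or _DEFAULT_COUNCIL

# Fallbacks: first council model as chairman, last as title model.
CHAIRMAN_MODEL = os.getenv("CHAIRMAN_MODEL") or COUNCIL_MODELS[0]
TITLE_MODEL = os.getenv("TITLE_MODEL") or COUNCIL_MODELS[-1]
```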
First time (or after pulling updates):

```bash
cd frontend
npm ci
cd ..
```

Option 1: Use the start script
```bash
./start.sh
```

Option 2: Run manually
Terminal 1 (Backend):
```bash
uv run python -m backend.main
```

Terminal 2 (Frontend):
```bash
cd frontend
npm run dev
```

Then open http://localhost:5173 in your browser.
- Backend: FastAPI (Python 3.10+), LiteLLM
- Frontend: React + Vite, react-markdown for rendering
- Storage: JSON files in `data/conversations/` (see the snippet after this list)
- Package Management: uv for Python, npm for JavaScript
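Because storage is just JSON on disk, conversations are easy to poke at outside the app. For example (a hypothetical snippet that makes no assumptions about each file's schema):

```python
import json
from pathlib import Path

# Each file under data/conversations/ holds one conversation as JSON.
for path in sorted(Path("data/conversations").glob("*.json")):
    data = json.loads(path.read_text())
    print(path.name, "->", type(data).__name__)
```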