Multi-agent scaling through intelligent collaboration in Grok Heavy style
MassGen is a cutting-edge multi-agent system that leverages the power of collaborative AI to solve complex tasks. It assigns a task to multiple AI agents that work in parallel, observe each other's progress, and refine their approaches to converge on the best solution, delivering a comprehensive, high-quality result. The power of this "parallel study group" approach is exemplified by advanced systems like xAI's Grok Heavy and Google DeepMind's Gemini Deep Think.
This project started with the "threads of thought" and "iterative refinement" ideas presented in The Myth of Reasoning, and extends the classic "multi-agent conversation" idea in AG2. Here is a video recording of the background context introduction presented at the Berkeley Agentic AI Summit 2025.
- Recent Achievements
- Key Future Enhancements
- Bug Fixes & Backend Improvements
- Advanced Agent Collaboration
- Expanded Model, Tool & Agent Integrations
- Improved Performance & Scalability
- Enhanced Developer Experience
- v0.1.3 Roadmap
Feature | Description |
---|---|
Cross-Model/Agent Synergy | Harness strengths from diverse frontier model-powered agents |
Parallel Processing | Multiple agents tackle problems simultaneously |
Intelligence Sharing | Agents share and learn from each other's work |
Consensus Building | Natural convergence through collaborative refinement |
Live Visualization | See agents' working processes in real time |
Released: October 22, 2025
What's New in v0.1.2:
- Intelligent Planning Mode - Automatic question analysis for safe MCP tool blocking
- Claude 4.5 Haiku Support - Access to the latest Claude Haiku model
- Grok Web Search Fix - Improved web search functionality in the Grok backend
Key Improvements:
- Automatically determines if questions require irreversible operations
- Read-only MCP operations allowed during coordination for better decisions
- Write operations automatically blocked for safety
- Zero configuration required - works transparently
- Enhanced model support with latest Claude 4.5 Haiku
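The blocking behavior described above can be pictured with a small sketch: an MCP tool whose operation name looks read-only passes during coordination, and anything else is held until the final agent runs. The prefix list and function below are illustrative assumptions, not MassGen's actual implementation.

```python
# Hypothetical sketch of planning-mode tool gating: read-only MCP tools are
# allowed during coordination, write-style tools are blocked until the final
# agent executes. Illustration only - not MassGen's real code.

READ_ONLY_PREFIXES = ("get_", "list_", "read_", "search_", "fetch_")

def is_blocked_during_planning(tool_name: str) -> bool:
    """Block any tool whose operation is not clearly read-only."""
    op = tool_name.split("__")[-1]  # MCP names look like mcp__server__operation
    return not op.startswith(READ_ONLY_PREFIXES)

# Reads pass through; writes are blocked during coordination.
assert not is_blocked_during_planning("mcp__discord__list_messages")
assert is_blocked_during_planning("mcp__discord__send_message")
```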
Get Started with v0.1.2:
# Install or upgrade from PyPI
pip install --upgrade massgen
# Try intelligent planning mode with MCP tools
# (Please read the YAML file for required API keys: DISCORD_TOKEN, OPENAI_API_KEY, etc.)
massgen --config @examples/tools/planning/five_agents_discord_mcp_planning_mode \
"Check recent messages in our development channel, summarize the discussion, and post a helpful response about the current topic."
# Use latest Claude 4.5 Haiku model
# (Requires ANTHROPIC_API_KEY in .env)
massgen --model claude-haiku-4-5-20251001 \
"Summarize the latest AI developments"
→ See full release history and examples
MassGen operates through an architecture designed for seamless multi-agent collaboration:
```mermaid
graph TB
    O[MassGen Orchestrator<br/>Task Distribution & Coordination]
    subgraph Collaborative Agents
        A1[Agent 1<br/>Anthropic/Claude + Tools]
        A2[Agent 2<br/>Google/Gemini + Tools]
        A3[Agent 3<br/>OpenAI/GPT + Tools]
        A4[Agent 4<br/>xAI/Grok + Tools]
    end
    H[Shared Collaboration Hub<br/>Real-time Notification & Consensus]
    O --> A1 & A2 & A3 & A4
    A1 & A2 & A3 & A4 <--> H
    classDef orchestrator fill:#e1f5fe,stroke:#0288d1,stroke-width:3px
    classDef agent fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    classDef hub fill:#e8f5e8,stroke:#388e3c,stroke-width:2px
    class O orchestrator
    class A1,A2,A3,A4 agent
    class H hub
```
The system's workflow is defined by the following key principles:
Parallel Processing - Multiple agents tackle the same task simultaneously, each leveraging their unique capabilities (different models, tools, and specialized approaches).
Real-time Collaboration - Agents continuously share their working summaries and insights through a notification system, allowing them to learn from each other's approaches and build upon collective knowledge.
Convergence Detection - The system intelligently monitors when agents have reached stability in their solutions and achieved consensus through natural collaboration rather than forced agreement.
Adaptive Coordination - Agents can restart and refine their work when they receive new insights from others, creating a dynamic and responsive problem-solving environment.
This collaborative approach ensures that the final output leverages collective intelligence from multiple AI systems, leading to more robust and well-rounded results than any single agent could achieve alone.
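As a rough illustration of these principles, the loop below runs several agents over the same task, shares each answer through a hub, and picks a winner by vote. All names and the selection heuristic are invented for the sketch, and it runs sequentially where MassGen runs agents in parallel:

```python
# Toy sketch of the coordination workflow: parallel answers, a shared hub,
# and vote-based convergence. Names and logic are illustrative assumptions,
# not MassGen's real orchestrator API.
from collections import Counter

def coordinate(agents, task):
    hub = []  # shared collaboration hub: every agent's latest answer
    for name, solve in agents.items():
        answer = solve(task, hub)  # each agent can see others' shared work
        hub.append((name, answer))
    # Consensus: each agent votes for the answer it finds best
    # (here, naively, the most detailed one).
    votes = Counter(max(hub, key=lambda p: len(p[1]))[0] for _ in agents)
    winner = votes.most_common(1)[0][0]
    return dict(hub)[winner]

agents = {
    "gemini": lambda task, hub: f"draft: {task}",
    "gpt": lambda task, hub: f"refined answer building on {len(hub)} prior drafts: {task}",
}
print(coordinate(agents, "explain MCP"))
```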
Complete Documentation: For comprehensive guides, API reference, and detailed examples, visit the MassGen Official Documentation
Method 1: PyPI Installation (Recommended - Python 3.11+):
# Install MassGen via pip
pip install massgen
# Or with uv (faster)
uv pip install massgen
# Run the interactive setup wizard
massgen
The wizard will guide you through:
- Configuring API keys
- Selecting your use case (Research, Code, Q&A, etc.)
- Choosing AI models
- Saving your configuration
After setup, you can run MassGen with:
# Interactive mode
massgen
# Single query
massgen "Your question here"
# With example configurations
massgen --config @examples/basic/multi/three_agents_default "Your question"
→ See the Installation Guide for complete setup instructions.
Method 2: Development Installation (for contributors):
# Clone the repository
git clone https://github.com/Leezekun/MassGen.git
cd MassGen
# Install in editable mode with pip
pip install -e .
# Or with uv (faster)
uv pip install -e .
# Optional: External framework integration
pip install -e ".[external]"
Alternative Installation Methods (click to expand)
Using uv with venv:
git clone https://github.com/Leezekun/MassGen.git
cd MassGen
uv venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
uv pip install -e .
Using traditional Python venv:
git clone https://github.com/Leezekun/MassGen.git
cd MassGen
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -e .
Global installation with uv tool:
git clone https://github.com/Leezekun/MassGen.git
cd MassGen
uv tool install -e .
# Now run from any directory
uv tool run massgen --config @examples/basic/multi/three_agents_default "Question"
Backwards compatibility (uv run):
cd /path/to/MassGen
uv run massgen --config @examples/basic/multi/three_agents_default "Question"
uv run python -m massgen.cli --config config.yaml "Question"
Optional CLI Tools:
# Claude Code CLI - Advanced coding assistant
npm install -g @anthropic-ai/claude-code
# LM Studio - Local model inference
# MacOS/Linux:
sudo ~/.lmstudio/bin/lms bootstrap
# Windows:
cmd /c %USERPROFILE%\.lmstudio\bin\lms.exe bootstrap
Create a `.env` file in your working directory with your API keys:
# Copy this template to .env and add your API keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
XAI_API_KEY=...
# Optional: Additional providers
CEREBRAS_API_KEY=...
TOGETHER_API_KEY=...
GROQ_API_KEY=...
OPENROUTER_API_KEY=...
MassGen automatically loads API keys from `.env` in your current directory.
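For illustration, `.env` loading of this kind boils down to a few lines. This simplified, stdlib-only sketch (real tooling such as python-dotenv handles quoting and edge cases) parses `KEY=VALUE` lines and leaves already-set environment variables alone:

```python
# Simplified sketch of .env loading - illustrative, not MassGen's loader.
import os

def load_env(path: str = ".env") -> None:
    try:
        lines = open(path, encoding="utf-8").read().splitlines()
    except FileNotFoundError:
        return  # no .env file: rely on the existing environment
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        # setdefault: never overwrite variables already set in the shell
        os.environ.setdefault(key.strip(), value.strip())
```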
→ Complete setup guide with all providers: See API Key Configuration in the docs
Get API keys:
- OpenAI | Claude | Gemini | Grok
- Azure OpenAI | Cerebras | More providers...
The system currently supports multiple model providers with advanced capabilities:
API-based Models:
- Azure OpenAI (NEW in v0.0.10): GPT-4, GPT-4o, GPT-3.5-turbo, GPT-4.1, GPT-5-chat
- Cerebras AI: GPT-OSS-120B...
- Claude: Claude Haiku 3.5, Claude Sonnet 4, Claude Opus 4...
- Claude Code: Native Claude Code SDK with comprehensive dev tools
- Gemini: Gemini 2.5 Flash, Gemini 2.5 Pro...
- Grok: Grok-4, Grok-3, Grok-3-mini...
- OpenAI: GPT-5 series (GPT-5, GPT-5-mini, GPT-5-nano)...
- Together AI, Fireworks AI, Groq, Kimi/Moonshot, Nebius AI Studio, OpenRouter, POE: LLaMA, Mistral, Qwen...
- Z AI: GLM-4.5
Local Model Support:
- vLLM & SGLang (ENHANCED in v0.0.25): Unified inference backend supporting both vLLM and SGLang servers
  - Auto-detection between vLLM (port 8000) and SGLang (port 30000) servers
  - Support for vLLM- and SGLang-specific parameters (`top_k`, `repetition_penalty`, `separate_reasoning`)
  - Mixed server deployments, with configuration example `two_qwen_vllm_sglang.yaml`
- LM Studio (v0.0.7+): Run open-weight models locally with automatic server management
  - Automatic LM Studio CLI installation
  - Auto-download and loading of models
  - Zero-cost usage reporting
  - Support for LLaMA, Mistral, Qwen, and other open-weight models
→ For the complete model list and configuration details, see Supported Models
MassGen agents can leverage various tools to enhance their problem-solving capabilities. Both API-based and CLI-based backends support different tool capabilities.
Supported Built-in Tools by Backend:
Backend | Live Search | Code Execution | File Operations | MCP Support | Multimodal (Image/Audio/Video) | Advanced Features |
---|---|---|---|---|---|---|
Azure OpenAI (NEW in v0.0.10) | ❌ | ✅ | ❌ | ❌ | ❌ | Code interpreter, Azure deployment management |
Claude API | ✅ | ✅ | ❌ | ✅ | ❌ | Web search, code interpreter, MCP integration |
Claude Code | ✅ | ✅ | ✅ | ✅ | ✅ Image | Native Claude Code SDK, comprehensive dev tools, MCP integration |
Gemini API | ✅ | ✅ | ❌ | ✅ | ✅ Image | Web search, code execution, MCP integration |
Grok API | ✅ | ❌ | ❌ | ✅ | ❌ | Web search, MCP integration |
OpenAI API | ✅ | ✅ | ❌ | ✅ | ✅ Image | Web search, code interpreter, MCP integration |
ZAI API | ❌ | ❌ | ❌ | ✅ | ❌ | MCP integration |
Note: Audio/video multimodal support (NEW in v0.0.30) is available through Chat Completions-based providers like OpenRouter and Qwen API. See configuration examples: `single_openrouter_audio_understanding.yaml`, `single_qwen_video_understanding.yaml`
→ For detailed backend capabilities and tool integration guides, see User Guide - Backends
Complete Usage Guide: For all usage modes, advanced features, and interactive multi-turn sessions, see Running MassGen
Parameter | Description |
---|---|
`--config` | Path to a YAML configuration file with agent definitions, model parameters, backend parameters, and UI settings |
`--backend` | Backend type for quick setup without a config file (`claude`, `claude_code`, `gemini`, `grok`, `openai`, `azure_openai`, `zai`). Optional for models with default backends. |
`--model` | Model name for quick setup (e.g., `gemini-2.5-flash`, `gpt-5-nano`, ...). `--config` and `--model` are mutually exclusive; use one or the other. |
`--system-message` | System prompt for the agent in quick setup mode. Ignored when `--config` is provided. |
`--no-display` | Disable the real-time streaming coordination display (falls back to simple text output). |
`--no-logs` | Disable real-time logging. |
`--debug` | Enable debug mode with verbose logging (NEW in v0.0.13). Shows detailed orchestrator activities, agent messages, backend operations, and tool calls. Debug logs are saved to `agent_outputs/log_{time}/massgen_debug.log`. |
`"<your question>"` | Optional single-question input; if omitted, MassGen enters interactive chat mode. |
Quick Start Commands:
# Quick test with any supported model - no configuration needed
uv run python -m massgen.cli --model claude-3-5-sonnet-latest "What is machine learning?"
uv run python -m massgen.cli --model gemini-2.5-flash "Explain quantum computing"
uv run python -m massgen.cli --model gpt-5-nano "Summarize the latest AI developments"
Configuration:
Use the `agent` field to define a single agent with its backend and settings:
agent:
  id: "<agent_name>"
  backend:
    type: "azure_openai" | "chatcompletion" | "claude" | "claude_code" | "gemini" | "grok" | "openai" | "zai" | "lmstudio" # Type of backend
    model: "<model_name>" # Model name
    api_key: "<optional_key>" # API key for backend; uses env vars by default
  system_message: "..." # System message for the single agent
→ See all single-agent configs
Configuration:
Use the `agents` field to define multiple agents, each with its own backend and config:
Quick Start Commands:
# Three powerful agents working together - Gemini, GPT-5, and Grok
massgen --config @examples/basic/multi/three_agents_default \
"Analyze the pros and cons of renewable energy"
This showcases MassGen's core strength:
- Gemini 2.5 Flash - Fast research with web search
- GPT-5 Nano - Advanced reasoning with code execution
- Grok-3 Mini - Real-time information and alternative perspectives
agents: # Multiple agents (alternative to 'agent')
  - id: "<agent1 name>"
    backend:
      type: "azure_openai" | "chatcompletion" | "claude" | "claude_code" | "gemini" | "grok" | "openai" | "zai" | "lmstudio" # Type of backend
      model: "<model_name>" # Model name
      api_key: "<optional_key>" # API key for backend; uses env vars by default
    system_message: "..." # System message for this agent
  - id: "..."
    backend:
      type: "..."
      model: "..."
      ...
    system_message: "..."
→ Explore more multi-agent setups
The Model Context Protocol (MCP) standardizes how applications expose tools and context to language models. From the official documentation:
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
MCP Configuration Parameters:
Parameter | Type | Required | Description |
---|---|---|---|
`mcp_servers` | dict | Yes (for MCP) | Container for MCP server definitions |
`└─ type` | string | Yes | Transport: `"stdio"` or `"streamable-http"` |
`└─ command` | string | stdio only | Command to run the MCP server |
`└─ args` | list | stdio only | Arguments for the command |
`└─ url` | string | http only | Server endpoint URL |
`└─ env` | dict | No | Environment variables to pass |
`allowed_tools` | list | No | Whitelist specific tools (if omitted, all tools are available) |
`exclude_tools` | list | No | Blacklist dangerous or unwanted tools |
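The whitelist/blacklist semantics in the table reduce to a small predicate. The helper below is a sketch of that behavior (the blacklist wins, and a missing whitelist means everything is available), not MassGen's internal code:

```python
# Sketch of allowed_tools / exclude_tools filtering - illustrative only.
from typing import Iterable, Optional

def tool_available(name: str,
                   allowed_tools: Optional[Iterable[str]] = None,
                   exclude_tools: Iterable[str] = ()) -> bool:
    if name in set(exclude_tools):
        return False   # blacklist takes precedence over everything
    if allowed_tools is None:
        return True    # no whitelist configured: all discovered tools available
    return name in set(allowed_tools)

assert tool_available("mcp__weather__get_current_weather")
assert not tool_available("mcp__test_server__current_time",
                          exclude_tools=["mcp__test_server__current_time"])
```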
Quick Start Commands (Check backend MCP support here):
# Weather service with GPT-5
massgen --config @examples/tools/mcp/gpt5_nano_mcp_example \
"What's the weather forecast for New York this week?"
# Multi-tool MCP with Gemini - Search + Weather + Filesystem (Requires BRAVE_API_KEY in .env)
massgen --config @examples/tools/mcp/multimcp_gemini \
"Find the best restaurants in Paris and save the recommendations to a file"
Configuration:
agents:
  # Basic MCP Configuration:
  - backend:
      type: "openai"        # Your backend choice
      model: "gpt-5-mini"   # Your model choice
      # Add MCP servers here
      mcp_servers:
        weather:            # Server name (you choose this)
          type: "stdio"     # Communication type
          command: "npx"    # Command to run
          args: ["-y", "@modelcontextprotocol/server-weather"] # MCP server package
      # That's it! The agent can now check weather.

  # Multiple MCP Tools Example:
  - backend:
      type: "gemini"
      model: "gemini-2.5-flash"
      mcp_servers:
        # Web search
        search:
          type: "stdio"
          command: "npx"
          args: ["-y", "@modelcontextprotocol/server-brave-search"]
          env:
            BRAVE_API_KEY: "${BRAVE_API_KEY}" # Set in .env file
        # HTTP-based MCP server (streamable-http transport)
        custom_api:
          type: "streamable-http"              # For HTTP/SSE servers
          url: "http://localhost:8080/mcp/sse" # Server endpoint
      # Tool configuration (MCP tools are auto-discovered)
      allowed_tools: # Optional: whitelist specific tools
        - "mcp__weather__get_current_weather"
        - "mcp__test_server__mcp_echo"
        - "mcp__test_server__add_numbers"
      exclude_tools: # Optional: blacklist specific tools
        - "mcp__test_server__current_time"
→ For a comprehensive MCP integration guide, see MCP Integration
MassGen provides comprehensive file system support through multiple backends, enabling agents to read, write, and manipulate files in organized workspaces.
Filesystem Configuration Parameters:
Parameter | Type | Required | Description |
---|---|---|---|
`cwd` | string | Yes (for file ops) | Working directory for file operations (agent-specific workspace) |
`snapshot_storage` | string | Yes | Directory for workspace snapshots |
`agent_temporary_workspace` | string | Yes | Parent directory for temporary workspaces |
Quick Start Commands:
# File operations with Claude Code
massgen --config @examples/tools/filesystem/claude_code_single \
"Create a Python web scraper and save results to CSV"
# Multi-agent file collaboration
massgen --config @examples/tools/filesystem/claude_code_context_sharing \
"Generate a comprehensive project report with charts and analysis"
Configuration:
# Basic Workspace Setup:
agents:
  - id: "file-agent"
    backend:
      type: "claude_code"      # Backend with file support
      model: "claude-sonnet-4" # Your model choice
      cwd: "workspace"         # Isolated workspace for file operations

# Multi-Agent Workspace Isolation:
agents:
  - id: "agent_a"
    backend:
      type: "claude_code"
      cwd: "workspace1"        # Agent-specific workspace
  - id: "agent_b"
    backend:
      type: "gemini"
      cwd: "workspace2"        # Separate workspace
orchestrator:
  snapshot_storage: "snapshots"                # Shared snapshots directory
  agent_temporary_workspace: "temp_workspaces" # Temporary workspace management
Available File Operations:
- Claude Code: Built-in tools (Read, Write, Edit, MultiEdit, Bash, Grep, Glob, LS, TodoWrite)
- Other Backends: Via MCP Filesystem Server
Workspace Management:
- Isolated Workspaces: Each agent's `cwd` is fully isolated and writable
- Snapshot Storage: Share workspace context between Claude Code agents
- Temporary Workspaces: Agents can access previous coordination results
→ View more filesystem examples
⚠️ IMPORTANT SAFETY WARNING: MassGen agents can autonomously read, write, modify, and delete files within their permitted directories.
Before running MassGen with filesystem access:
- Only grant access to directories you're comfortable with agents modifying
- Use the permission system to restrict write access where needed
- Consider testing in an isolated directory or virtual environment first
- Back up important files before granting write access
- Review the `context_paths` configuration carefully

The agents will execute file operations without additional confirmation once permissions are granted.
→ For a comprehensive file operations guide, see File Operations
Work directly with your existing projects! User Context Paths allow you to share specific directories with all agents while maintaining granular permission control. This enables secure multi-agent collaboration on your real codebases, documentation, and data.
MassGen automatically organizes all its working files under a .massgen/
directory in your project root, keeping your project clean and making it easy to exclude MassGen's temporary files from version control.
Project Integration Parameters:
Parameter | Type | Required | Description |
---|---|---|---|
`context_paths` | list | Yes (for project integration) | Shared directories for all agents |
`└─ path` | string | Yes | Absolute or relative path to your project directory (must be a directory, not a file) |
`└─ permission` | string | Yes | Access level: `"read"` or `"write"` (write applies only to the final agent) |
`└─ protected_paths` | list | No | Files/directories immune from modification (relative to the context path) |
- Context paths must point to directories, not individual files
- Paths can be absolute or relative (resolved against current working directory)
- Write permissions apply only to the final agent during presentation phase
- During coordination, all context paths are read-only to protect your files
- MassGen validates all paths during startup and will show clear error messages for missing paths or file paths
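The validation rules above can be sketched as a small startup check; `validate_context_path` is a hypothetical name used only for this illustration:

```python
# Sketch of startup validation implied by the rules above: each context path
# must resolve (relative paths against the current working directory) to an
# existing directory, never a file. Hypothetical helper, not MassGen's code.
from pathlib import Path

def validate_context_path(raw: str) -> Path:
    path = Path(raw).expanduser().resolve()  # relative paths resolve vs. cwd
    if not path.exists():
        raise ValueError(f"context path does not exist: {path}")
    if not path.is_dir():
        raise ValueError(f"context path must be a directory, not a file: {path}")
    return path
```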
Quick Start Commands:
# Multi-agent collaboration to improve the website in massgen/configs/resources/v0.0.21-example
massgen --config @examples/tools/filesystem/gpt5mini_cc_fs_context_path "Enhance the website with: 1) A dark/light theme toggle with smooth transitions, 2) An interactive feature that helps users engage with the blog content (your choice - could be search, filtering by topic, reading time estimates, social sharing, reactions, etc.), and 3) Visual polish with CSS animations or transitions that make the site feel more modern and responsive. Use vanilla JavaScript and be creative with the implementation details."
Configuration:
# Basic Project Integration:
agents:
  - id: "code-reviewer"
    backend:
      type: "claude_code"
      cwd: "workspace"      # Agent's isolated work area
orchestrator:
  context_paths:
    - path: "."             # Current directory (relative path)
      permission: "write"   # Final agent can create/modify files
      protected_paths:      # Optional: files immune from modification
        - ".env"
        - "config.json"
    - path: "/home/user/my-project/src" # Absolute path example
      permission: "read"    # Agents can analyze your code
# Advanced: Multi-Agent Project Collaboration
agents:
  - id: "analyzer"
    backend:
      type: "gemini"
      cwd: "analysis_workspace"
  - id: "implementer"
    backend:
      type: "claude_code"
      cwd: "implementation_workspace"
orchestrator:
  context_paths:
    - path: "../legacy-app/src"  # Relative path to existing codebase
      permission: "read"         # Read existing codebase
    - path: "../legacy-app/tests"
      permission: "write"        # Final agent can write new tests
      protected_paths:           # Protect specific test files
        - "integration_tests/production_data_test.py"
    - path: "/home/user/modernized-app" # Absolute path
      permission: "write"        # Final agent can create modernized version
This showcases project integration:
- Real Project Access - Work with your actual codebases, not copies
- Secure Permissions - Granular control over what agents can read/modify
- Multi-Agent Collaboration - Multiple agents safely work on the same project
- Context Agents (during coordination): always read-only access to protect your files
- Final Agent (final execution): gets the configured permission (read or write)
Use Cases:
- Code Review: Agents analyze your source code and suggest improvements
- Documentation: Agents read project docs to understand context and generate updates
- Data Processing: Agents access shared datasets and generate analysis reports
- Project Migration: Agents examine existing projects and create modernized versions
Clean Project Organization:
your-project/
├── .massgen/                        # All MassGen state
│   ├── sessions/                    # Multi-turn conversation history (if using interactively)
│   │   └── session_20240101_143022/
│   │       ├── turn_1/              # Results from turn 1
│   │       ├── turn_2/              # Results from turn 2
│   │       └── SESSION_SUMMARY.txt  # Human-readable summary
│   ├── workspaces/                  # Agent working directories
│   │   ├── agent1/                  # Individual agent workspaces
│   │   └── agent2/
│   ├── snapshots/                   # Workspace snapshots for coordination
│   └── temp_workspaces/             # Previous turn results for context
└── massgen/
    └── ...
Benefits:
- ✅ Clean Projects - All MassGen files contained in one directory
- ✅ Easy Gitignore - Just add `.massgen/` to `.gitignore`
- ✅ Portable - Move or delete `.massgen/` without affecting your project
- ✅ Multi-Turn Sessions - Conversation history preserved across sessions
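For example, the gitignore rule can be added idempotently from the shell:

```shell
# Add the MassGen state directory to .gitignore, but only once
grep -qxF '.massgen/' .gitignore 2>/dev/null || echo '.massgen/' >> .gitignore
```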
Configuration Auto-Organization:
orchestrator:
  # User specifies simple names - MassGen organizes under .massgen/
  snapshot_storage: "snapshots"     # → .massgen/snapshots/
  session_storage: "sessions"       # → .massgen/sessions/
  agent_temporary_workspace: "temp" # → .massgen/temp/
agents:
  - backend:
      cwd: "workspace1"             # → .massgen/workspaces/workspace1/
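The mapping shown in those comments can be sketched as a tiny helper; `organize` is a hypothetical function mirroring how simple names end up under `.massgen/`, with workspaces getting their own subdirectory:

```python
# Sketch of the auto-organization rule: simple user-supplied names are nested
# under .massgen/. Hypothetical helper, not MassGen's real function.
from pathlib import Path

def organize(name: str, kind: str = "storage") -> Path:
    root = Path(".massgen")
    if kind == "workspace":
        return root / "workspaces" / name  # workspaces get a subdirectory
    return root / name                     # snapshots, sessions, temp, ...

assert str(organize("snapshots")) == ".massgen/snapshots"
assert str(organize("workspace1", kind="workspace")) == ".massgen/workspaces/workspace1"
```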
→ For a comprehensive project integration guide, see Project Integration
Security Considerations:
- Agent ID Safety: Avoid agent IDs that are just "agent" plus an incremental digit (e.g., `agent1`, `agent2`), as this may cause ID exposure during voting
- File Access Control: Restrict file access using MCP server configurations when needed
- Path Validation: All context paths are validated to ensure they exist and are directories (not files)
- Directory-Only Context Paths: Context paths must point to directories, not individual files
Claude (Recursive MCP Execution - v0.0.20+)
# Claude with advanced tool chaining
massgen --config @examples/tools/mcp/claude_mcp_example \
"Research and compare weather in Beijing and Shanghai"
OpenAI (GPT-5 Series with MCP - v0.0.17+)
# GPT-5 with weather and external tools
massgen --config @examples/tools/mcp/gpt5_nano_mcp_example \
"What's the weather in Tokyo?"
Gemini (Multi-Server MCP - v0.0.15+)
# Gemini with multiple MCP services
massgen --config @examples/tools/mcp/multimcp_gemini \
"Find accommodations in Paris with neighborhood analysis" # (requires BRAVE_API_KEY in .env)
Claude Code (Development Tools)
# Professional development environment with auto-configured workspace
uv run python -m massgen.cli \
--backend claude_code \
--model sonnet \
"Create a Flask web app with authentication"
# Default workspace directories created automatically:
# - workspace1/ (working directory)
# - snapshots/ (workspace snapshots)
# - temp_workspaces/ (temporary agent workspaces)
Local Models (LM Studio - v0.0.7+)
# Run open-source models locally
massgen --config @examples/providers/local/lmstudio \
"Explain machine learning concepts"
→ Browse by provider | Browse by tools | Browse teams
Question Answering & Research:
# Complex research with multiple perspectives
massgen --config @examples/basic/multi/gemini_4o_claude \
"What's best to do in Stockholm in October 2025"
# Specific research requirements
massgen --config @examples/basic/multi/gemini_4o_claude \
"Give me all the talks on agent frameworks in Berkeley Agentic AI Summit 2025"
Creative Writing:
# Story generation with multiple creative agents
massgen --config @examples/basic/multi/gemini_4o_claude \
"Write a short story about a robot who discovers music"
Development & Coding:
# Full-stack development with file operations
massgen --config @examples/tools/filesystem/claude_code_single \
"Create a Flask web app with authentication"
Web Automation (still in testing):
# Browser automation with screenshots and reporting
# Prerequisites: npm install @playwright/mcp@latest (for Playwright MCP server)
massgen --config @examples/tools/code-execution/multi_agent_playwright_automation \
"Browse three issues in https://github.com/Leezekun/MassGen and suggest documentation improvements. Include screenshots and suggestions in a website."
# Data extraction and analysis
massgen --config @examples/tools/code-execution/multi_agent_playwright_automation \
"Navigate to https://news.ycombinator.com, extract the top 10 stories, and create a summary report"
→ See detailed case studies with real session logs and outcomes
Multi-Turn Conversations:
# Start interactive chat (no initial question)
massgen --config @examples/basic/multi/three_agents_default
# Debug mode for troubleshooting
massgen --config @examples/basic/multi/three_agents_default \
--debug "Your question"
MassGen configurations are organized by features and use cases. See the Configuration Guide for detailed organization and examples.
Quick navigation:
- Basic setups: Single agent | Multi-agent
- Tool integrations: MCP servers | Web search | Filesystem
- Provider examples: OpenAI | Claude | Gemini
- Specialized teams: Creative | Research | Development
See MCP server setup guides: Discord MCP | Twitter MCP
For detailed configuration of all supported backends (OpenAI, Claude, Gemini, Grok, etc.), see:
→ Backend Configuration Guide
MassGen supports an interactive mode where you can have ongoing conversations with the system:
# Start interactive mode with a single agent (no tool enabled by default)
uv run python -m massgen.cli --model gpt-5-mini
# Start interactive mode with configuration file
uv run python -m massgen.cli \
--config massgen/configs/basic/multi/three_agents_default.yaml
Interactive Mode Features:
- Multi-turn conversations: Multiple agents collaborate with you in an ongoing conversation
- Real-time coordination tracking: Live visualization of agent interactions, votes, and decision-making processes
- Interactive coordination table: Press `r` to view the complete history of agent coordination events and state transitions
- Real-time feedback: Displays live agent and system status with enhanced coordination visualization
- Clear conversation history: Type `/clear` to reset the conversation and start fresh
- Easy exit: Type `/quit`, `/exit`, `/q`, or press `Ctrl+C` to stop
Watch the recorded demo:
The system provides multiple ways to view and analyze results:
- Live Collaboration View: See agents working in parallel through a multi-region terminal display
- Status Updates: Real-time phase transitions, voting progress, and consensus building
- Streaming Output: Watch agents' reasoning and responses as they develop
Watch an example here:
All sessions are automatically logged with detailed information for debugging and analysis.
Real-time Interaction:
- Press `r` during execution to view the coordination table in your terminal
- Watch agents collaborate, vote, and reach consensus in real time
.massgen/
└── massgen_logs/
    └── log_YYYYMMDD_HHMMSS/                     # Timestamped log directory
        ├── agent_<id>/                          # Agent-specific coordination logs
        │   └── YYYYMMDD_HHMMSS_NNNNNN/          # Timestamped coordination steps
        │       ├── answer.txt                   # Agent's answer at this step
        │       ├── context.txt                  # Context available to agent
        │       └── workspace/                   # Agent workspace (if filesystem tools used)
        ├── agent_outputs/                       # Consolidated output files
        │   ├── agent_<id>.txt                   # Complete output from each agent
        │   ├── final_presentation_agent_<id>.txt         # Winning agent's final answer
        │   ├── final_presentation_agent_<id>_latest.txt  # Symlink to latest
        │   └── system_status.txt                # System status and metadata
        ├── final/                               # Final presentation phase
        │   └── agent_<id>/                      # Winning agent's final work
        │       ├── answer.txt                   # Final answer
        │       └── context.txt                  # Final context
        ├── coordination_events.json             # Structured coordination events
        ├── coordination_table.txt               # Human-readable coordination table
        ├── vote.json                            # Final vote tallies and consensus data
        ├── massgen.log                          # Complete debug log (or massgen_debug.log in debug mode)
        ├── snapshot_mappings.json               # Workspace snapshot metadata
        └── execution_metadata.yaml              # Query, config, and execution details
- Coordination Table (`coordination_table.txt`): Complete visualization of multi-agent coordination with event timeline, voting patterns, and consensus building
- Coordination Events (`coordination_events.json`): Structured JSON log of all events (started_streaming, new_answer, vote, restart, final_answer)
- Vote Summary (`vote.json`): Final vote tallies, winning agent, and consensus information
- Execution Metadata (`execution_metadata.yaml`): Original query, timestamp, configuration, and execution context for reproducibility
- Agent Outputs (`agent_outputs/`): Complete output history and final presentations from all agents
- Debug Log (`massgen.log`): Complete system operations, API calls, tool usage, and error traces (use `--debug` for verbose logging)
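Because `coordination_events.json` is structured, post-hoc analysis takes only a few lines of Python. The field names (`type`, `agent`) below are assumptions about the schema made for this sketch; the event types come from the list above:

```python
# Sketch of post-hoc analysis over coordination events. Field names are
# assumed for illustration - adapt them to your actual log files.
import json
from collections import Counter

sample = json.loads("""[
  {"type": "started_streaming", "agent": "agent_a"},
  {"type": "new_answer", "agent": "agent_a"},
  {"type": "vote", "agent": "agent_b"},
  {"type": "vote", "agent": "agent_c"},
  {"type": "final_answer", "agent": "agent_a"}
]""")

# Tally event types to see how coordination unfolded
counts = Counter(event["type"] for event in sample)
print(counts["vote"], "votes;", counts["final_answer"], "final answer")
```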
→ For a comprehensive logging guide and debugging techniques, see Logging & Debugging
To see how MassGen works in practice, check out these detailed case studies based on real session logs:
- MassGen Case Studies
- Case Studies Documentation - Browse case studies online
MassGen is currently in its foundational stage, with a focus on parallel, asynchronous multi-agent collaboration and orchestration. Our roadmap is centered on transforming this foundation into a highly robust, intelligent, and user-friendly system, while enabling frontier research and exploration. An earlier version of MassGen can be found here.
Released: October 22, 2025
- Automatic Question Analysis: New `_analyze_question_irreversibility()` method in the orchestrator determines whether MCP operations are reversible
- Selective Tool Blocking: Granular control with the `set_planning_mode_blocked_tools()`, `get_planning_mode_blocked_tools()`, and `is_mcp_tool_blocked()` methods
- Dynamic Behavior: Read-only MCP operations are allowed during coordination; write operations are blocked for safety
- Zero Configuration: Works transparently without setup
- Multi-Workspace Support: Planning mode works across different workspaces without conflicts
- Test Coverage: Comprehensive tests in `massgen/tests/test_intelligent_planning_mode.py`
- Documentation: Complete guide in `docs/case_studies/INTELLIGENT_PLANNING_MODE.md`
- Claude 4.5 Haiku: Added the latest Claude Haiku model, `claude-haiku-4-5-20251001`
- Model Priority Updates: Reorganized the Claude model list with updated defaults (`claude-sonnet-4-5-20250929`)
- Grok Web Search Fix: Resolved `extra_body` parameter handling for Grok's Live Search API with the new `_add_grok_search_params()` method
- Planning Mode Configs: Updated 5 configurations in `massgen/configs/tools/planning/` with selective blocking examples
- Default Configuration: Updated `three_agents_default.yaml` to use the Grok-4-fast model
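The selective blocking idea above can be pictured with a small sketch. The method names mirror those listed in the changelog, but the prefix heuristic, tool names, and internal logic here are illustrative assumptions, not MassGen's actual implementation.

```python
# Illustrative sketch of selective planning-mode tool blocking.
# The read-only prefix heuristic and tool names are assumptions.
class PlanningModeGuard:
    READ_ONLY_PREFIXES = ("read_", "list_", "get_", "compare_")

    def __init__(self):
        self._blocked: set = set()

    def set_planning_mode_blocked_tools(self, tools):
        # Block only tools that look irreversible (writes, deletes, sends);
        # read-only operations stay available during coordination.
        self._blocked = {
            t for t in tools if not t.startswith(self.READ_ONLY_PREFIXES)
        }

    def get_planning_mode_blocked_tools(self):
        return set(self._blocked)

    def is_mcp_tool_blocked(self, tool_name: str) -> bool:
        return tool_name in self._blocked
```

With this guard, an agent in planning mode could still call `read_file` to inform its vote while `delete_file` stays blocked until coordination finishes.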
✅ Custom Tools System (v0.1.1): User-defined Python function registration using the `ToolManager` class in `massgen/tool/_manager.py`, cross-backend support alongside MCP servers, builtin/MCP/custom tool categories with automatic discovery, 40+ examples in `massgen/configs/tools/custom_tools/`, voting sensitivity controls with a three-tier quality system (lenient/balanced/strict), answer novelty detection preventing duplicates
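Registering a plain Python function as a tool typically looks something like the following sketch. This is a hypothetical registry for illustration only; the real `ToolManager` API in `massgen/tool/_manager.py` may differ in names and signatures.

```python
# Hypothetical sketch of user-defined tool registration; the real
# ToolManager API in massgen/tool/_manager.py may differ.
import inspect

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, func, category="custom"):
        """Register a plain Python function as a tool, recording its signature."""
        self._tools[func.__name__] = {
            "func": func,
            "category": category,
            "params": list(inspect.signature(func).parameters),
        }
        return func  # return the function so this works as a decorator

    def call(self, name, **kwargs):
        return self._tools[name]["func"](**kwargs)

registry = ToolRegistry()

@registry.register
def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())
```

The decorator form keeps the tool usable as an ordinary function while making its name and parameters discoverable to the orchestrator.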
✅ Backend Enhancements (v0.1.1): Gemini architecture refactoring with extracted MCP management (`gemini_mcp_manager.py`), tracking (`gemini_trackers.py`), and utilities; new capabilities registry in `massgen/backend/capabilities.py` documenting feature support across backends
✅ PyPI Package Release (v0.1.0): Official distribution via `pip install massgen` with simplified installation, global `massgen` command accessible from any directory, comprehensive Sphinx documentation at docs.massgen.ai, interactive setup wizard with use case presets and API key management, enhanced CLI with the `@examples/` prefix for built-in configurations
✅ Docker Execution Mode (v0.0.32): Container-based isolation with secure command execution in isolated Docker containers preventing host filesystem access, persistent state management with packages and dependencies persisting across conversation turns, multi-agent support with dedicated isolated containers for each agent, configurable security with resource limits (CPU, memory), network isolation modes, and read-only volume mounts
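A configuration for this mode might be sketched as below. All key names here are hypothetical assumptions to show the shape of the options described (resource limits, network isolation, read-only mounts); consult the shipped configs for the actual schema.

```yaml
# Hypothetical sketch only - actual MassGen config keys may differ.
docker:
  enabled: true
  cpu_limit: 2          # resource limit per agent container
  memory_limit: "2g"
  network_mode: "none"  # network isolation
  volumes:
    - host: ./context
      container: /workspace/context
      read_only: true   # read-only mount protects host files
```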
✅ MCP Architecture Refactoring (v0.0.32): Simplified client with `MultiMCPClient` renamed to `MCPClient` reflecting the streamlined architecture, code consolidation by removing deprecated modules and consolidating duplicate MCP protocol handling, improved maintainability with standardized type hints, enhanced error handling, and cleaner code organization
✅ Claude Code Docker Integration (v0.0.32): Automatic tool management with the Bash tool automatically disabled in Docker mode, routing commands through `execute_command`; MCP auto-permissions with automatic approval for MCP tools while preserving security validation; enhanced guidance with system messages preventing git repository confusion between host and container environments
✅ Universal Command Execution (v0.0.31): MCP-based `execute_command` tool works across Claude, Gemini, OpenAI, and Chat Completions providers, comprehensive security with permission management and command filtering, code execution in planning mode for safer coordination
✅ External Framework Integration (v0.0.31): Multi-agent conversations using external framework group chat patterns, smart speaker selection (automatic, round-robin, manual) powered by LLMs, enhanced adapter supporting native group chat coordination
✅ Audio & Video Generation (v0.0.31): Audio tools for text-to-speech and transcription, video generation using OpenAI's Sora-2 API, multimodal expansion beyond text and images
✅ Multimodal Support Extension (v0.0.30): Audio and video processing for the Chat Completions and Claude backends (WAV, MP3, MP4, AVI, MOV, WEBM formats), flexible media input via local paths or URLs, extended base64 encoding for audio/video files, configurable file size limits
✅ Claude Agent SDK Migration (v0.0.30): Package migration from `claude-code-sdk` to `claude-agent-sdk>=0.0.22`, improved bash tool permission validation, enhanced system message handling
✅ Qwen API Integration (v0.0.30): Added the Qwen API provider to the Chat Completions ecosystem with `QWEN_API_KEY` support, video understanding configuration examples
✅ MCP Planning Mode (v0.0.29): Strategic planning coordination strategy for safer MCP tool usage, multi-backend support (Response API, Chat Completions, Gemini), agents plan without execution during coordination, 5 planning mode configurations
✅ File Operation Safety (v0.0.29): Read-before-delete enforcement with the `FileOperationTracker` class, `PathPermissionManager` integration with operation tracking methods, enhanced file operation safety mechanisms
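The read-before-delete rule can be sketched in a few lines. This is an illustrative minimum, not MassGen's `FileOperationTracker`, which may track more operation types and integrate with `PathPermissionManager`.

```python
# Illustrative read-before-delete enforcement; the real FileOperationTracker
# in MassGen may track more operations and differ in API.
class ReadBeforeDeleteTracker:
    def __init__(self):
        self._read_files: set = set()

    def record_read(self, path: str) -> None:
        """Note that an agent has read this file's contents."""
        self._read_files.add(path)

    def can_delete(self, path: str) -> bool:
        # A file may only be deleted after it has been read, so an agent
        # cannot blindly destroy content it has never inspected.
        return path in self._read_files
```

The invariant is simple: deletion requests for unread paths are rejected, which turns accidental destructive calls into recoverable errors.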
✅ External Framework Integration (v0.0.28): Adapter system for external agent frameworks with async execution, code execution in multiple environments (Local, Docker, Jupyter, YepCode), ready-to-use configurations for framework integration
✅ Multimodal Support - Image Processing (v0.0.27): New `stream_chunk` module for multimodal content, image generation and understanding capabilities, file upload and search for document Q&A, Claude Sonnet 4.5 support, enhanced workspace multimodal tools
✅ File Deletion and Workspace Management (v0.0.26): New MCP tools (`delete_file`, `delete_files_batch`, `compare_directories`, `compare_files`) for workspace cleanup and file comparison, consolidated `_workspace_tools_server.py`, enhanced path permission manager
✅ Protected Paths and File-Based Context Paths (v0.0.26): Protect specific files within write-permitted directories, grant access to individual files instead of entire directories
✅ Multi-Turn Filesystem Support (v0.0.25): Multi-turn conversation support with persistent context across turns, automatic `.massgen` directory structure, workspace snapshots and restoration, enhanced path permission system with smart exclusions, and comprehensive backend improvements
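Per-turn snapshots of a workspace can be pictured with the sketch below. The `.massgen` layout and snapshot mechanics here are assumptions for illustration; MassGen's actual directory structure and restore logic may differ.

```python
# Hypothetical sketch of per-turn workspace snapshots; MassGen's actual
# .massgen directory layout and snapshot logic may differ.
import shutil
from pathlib import Path

def snapshot_workspace(workspace: str, turn: int) -> Path:
    """Copy the workspace into .massgen/snapshots/turn_<n> for later restore."""
    src = Path(workspace)
    dst = src / ".massgen" / "snapshots" / f"turn_{turn}"
    shutil.copytree(
        src, dst,
        ignore=shutil.ignore_patterns(".massgen"),  # don't snapshot snapshots
        dirs_exist_ok=True,
    )
    return dst

def restore_workspace(workspace: str, turn: int) -> None:
    """Overwrite workspace files with the snapshot from an earlier turn."""
    src = Path(workspace) / ".massgen" / "snapshots" / f"turn_{turn}"
    shutil.copytree(src, Path(workspace), dirs_exist_ok=True)
```

Excluding the `.massgen` directory from the copy keeps snapshots from recursively containing themselves, and `dirs_exist_ok=True` lets a restore overwrite files in place.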
✅ SGLang Backend Integration (v0.0.25): Unified vLLM/SGLang backend with auto-detection, support for SGLang-specific parameters like `separate_reasoning`, and dual server support for mixed vLLM and SGLang deployments
✅ vLLM Backend Support (v0.0.24): Complete integration with vLLM for high-performance local model serving, POE provider support, GPT-5-Codex model recognition, backend utility modules refactoring, and comprehensive bug fixes including streaming chunk processing
✅ Backend Architecture Refactoring (v0.0.23): Major code consolidation with a new `base_with_mcp.py` base class reducing ~1,932 lines across backends, an extracted formatter module for better code organization, and improved maintainability through unified MCP integration
✅ Workspace Copy Tools via MCP (v0.0.22): Seamless file copying between workspaces, hierarchical configuration organization, and enhanced file operations for large-scale collaboration
✅ Grok MCP Integration (v0.0.21): Unified backend architecture with full MCP server support, filesystem capabilities through MCP servers, and enhanced configuration files
✅ Claude Backend MCP Support (v0.0.20): Extended MCP integration to the Claude backend, full MCP protocol and filesystem support, robust error handling, and comprehensive documentation
✅ Comprehensive Coordination Tracking (v0.0.19): Complete coordination tracking and visualization system with event-based tracking, an interactive coordination table display, and advanced debugging capabilities for multi-agent collaboration patterns
✅ Comprehensive MCP Integration (v0.0.18): Extended MCP to all Chat Completions backends (Cerebras AI, Together AI, Fireworks AI, Groq, Nebius AI Studio, OpenRouter), cross-provider function calling compatibility, 9 new MCP configuration examples
✅ OpenAI MCP Integration (v0.0.17): Extended MCP (Model Context Protocol) support to the OpenAI backend with full tool discovery and execution capabilities for GPT models, a unified MCP architecture across multiple backends, and enhanced debugging
✅ Unified Filesystem Support with MCP Integration (v0.0.16): Complete `FilesystemManager` class providing unified filesystem access for the Gemini and Claude Code backends, with MCP-based operations for file manipulation and cross-agent collaboration
✅ MCP Integration Framework (v0.0.15): Complete MCP implementation for the Gemini backend with multi-server support, circuit breaker patterns, and a comprehensive security framework
✅ Enhanced Logging (v0.0.14): Improved logging system for easier debugging of agents' answers, a new final answer directory structure, and detailed architecture documentation
✅ Unified Logging System (v0.0.13): Centralized logging infrastructure with debug mode and enhanced terminal display formatting
✅ Windows Platform Support (v0.0.13): Windows compatibility with improved path handling and process management
✅ Enhanced Claude Code Agent Context Sharing (v0.0.12): Claude Code agents now share workspace context by maintaining snapshots and a temporary workspace on the orchestrator's side
✅ Documentation Improvement (v0.0.12): Updated README with current features and improved setup instructions
✅ Custom System Messages (v0.0.11): Enhanced system message configuration and preservation with backend-specific system prompt customization
✅ Claude Code Backend Enhancements (v0.0.11): Improved integration with better system message handling, JSON response parsing, and coordination action descriptions
✅ Azure OpenAI Support (v0.0.10): Integration with Azure OpenAI services, including GPT-4.1 and GPT-5-chat models with async streaming
✅ MCP (Model Context Protocol) Support (v0.0.9): MCP integration for advanced tool capabilities in the Claude Code agent, including Discord and Twitter integration
✅ Timeout Management System (v0.0.8): Orchestrator-level timeout with graceful fallback and enhanced error messages
✅ Local Model Support (v0.0.7): Complete LM Studio integration for running open-weight models locally with automatic server management
✅ GPT-5 Series Integration (v0.0.6): Support for OpenAI's GPT-5, GPT-5-mini, and GPT-5-nano with advanced reasoning parameters
✅ Claude Code Integration (v0.0.5): Native Claude Code backend with streaming capabilities and tool support
✅ GLM-4.5 Model Support (v0.0.4): Integration with ZhipuAI's GLM-4.5 model family
✅ Foundation Architecture (v0.0.3): Complete multi-agent orchestration system with async streaming, builtin tools, and multi-backend support
✅ Extended Provider Ecosystem: Support for 15+ providers including Cerebras AI, Together AI, Fireworks AI, Groq, Nebius AI Studio, and OpenRouter
- Bug Fixes & Backend Improvements: Fixing image generation path issues and adding Claude multimodal support
- Advanced Agent Collaboration: Exploring improved communication patterns and consensus-building protocols to improve agent synergy
- Expanded Model Integration: Adding support for more frontier models and local inference engines
- Improved Performance & Scalability: Optimizing the streaming and logging mechanisms for better performance and resource management
- Enhanced Developer Experience: Completing tool registration system and web interface for better visualization
We welcome community contributions to achieve these goals.
Version 0.1.3 focuses on general interoperability and enterprise collaboration:
- General Interoperability: Enable MassGen to orchestrate agents from multiple external frameworks with unified interface
- Final Agent Submit/Restart Tools: Enable final agent to decide whether to submit or restart orchestration
Key technical approach:
- Framework Integration: Multi-agent coordination supporting external agent frameworks with specialized agent roles (researcher, analyst, critic, synthesizer)
- Submit/Restart: Multi-step task verification with access to previous agents' responses and workspaces
For detailed milestones and technical specifications, see the full v0.1.3 roadmap.
We welcome contributions! Please see our Contributing Guidelines for details.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
⭐ Star this repo if you find it useful! ⭐
Made with ❤️ by the MassGen team