An interactive Tic Tac Toe game where three AI agents work together using CrewAI as the agent framework and MCP (Model Context Protocol) for distributed communication. This project showcases how multiple LLMs can collaborate through structured communication protocols - each agent runs as both a CrewAI Agent and an MCP Server.
- Game: Interactive Tic Tac Toe vs AI team
- AI Team: Three MCP agents (Scout, Strategist, Executor) - each a CrewAI Agent + MCP Server
- Hot-Swappable Models: Switch LLMs mid-game without restart via the MCP protocol
- Real-time Analytics: MCP protocol monitoring and performance analytics
- Modern UI: Streamlit dashboard with live updates
- Distributed: Each agent runs as an independent MCP server for scalable deployment
Get started in 5 minutes!
This project supports multiple deployment modes and agent frameworks:
- Simple Mode (Fastest) - Direct LLM calls, < 1 second per move, perfect for Tic Tac Toe
- Optimized Mode (Recommended) - Shared resources, LangChain direct calls, < 1 second per move
- Local Mode (Default) - All agents run in the same Python process with direct method calls
- Distributed Mode - Agents run as separate processes communicating via HTTP/JSON-RPC (true MCP transport)
| Mode | Framework | Speed | Architecture | Resources | Use Case |
|---|---|---|---|---|---|
| Simple | Direct LLM | < 1s | Single LLM call | 1 connection | Fastest, simplest |
| Optimized | LangChain | < 1s | Shared resources | 1 shared connection | Best balance |
| Local | CrewAI | 3-8s | MCP simulation | 3 LLM connections | Agent coordination |
| Distributed | CrewAI + MCP | 3-8s | Full MCP protocol | 3 separate processes | Multi-machine |
# Simple mode - fastest and most reliable for Tic Tac Toe
git clone https://github.com/arun-gupta/mcp-multiplayer-game.git
cd mcp-multiplayer-game
chmod +x quickstart.sh
./quickstart.sh --simple # or --s for short
Benefits:
- < 1 second per move - 8-19x faster than complex mode
- 10x simpler - No CrewAI/MCP overhead
- 5x easier maintenance - Direct LLM calls only
- Perfect for Tic Tac Toe - No over-engineering
Access the game: http://localhost:8501
API Documentation: http://localhost:8000/docs
# Optimized mode - best balance of speed and structure
git clone https://github.com/arun-gupta/mcp-multiplayer-game.git
cd mcp-multiplayer-game
chmod +x quickstart.sh
./quickstart.sh --optimized # or --o for short
Benefits:
- < 1 second per move - Shared resources, no MCP servers
- LangChain direct calls - No CrewAI overhead
- Shared Ollama connection - Memory efficient
- Pre-created tasks - No runtime creation overhead
- Best balance - Speed + structure
Access the game: http://localhost:8501
API Documentation: http://localhost:8000/docs
# Clone and setup MCP hybrid architecture automatically
git clone https://github.com/arun-gupta/mcp-multiplayer-game.git
cd mcp-multiplayer-game
chmod +x quickstart.sh
./quickstart.sh
Access the game: http://localhost:8501
API Documentation: http://localhost:8000/docs
Choose between different agent frameworks:
# Simple mode (fastest, recommended for Tic Tac Toe)
./quickstart.sh --simple # or --s for short
# Optimized mode (best balance, recommended)
./quickstart.sh --optimized # or --o for short
# LangChain agents (faster than CrewAI)
./quickstart.sh --langchain
# CrewAI agents with MCP protocol (complex, full coordination)
./quickstart.sh --crewai
Framework Comparison:
- Simple: Direct LLM calls, < 1 second per move, perfect for Tic Tac Toe
- Optimized: LangChain with shared resources, < 1 second per move, best balance
- LangChain: Direct LLM calls, faster than CrewAI, good balance
- CrewAI: Full agent coordination with MCP protocol, most complex
For true MCP protocol transport between agents:
# Clone and setup
git clone https://github.com/arun-gupta/mcp-multiplayer-game.git
cd mcp-multiplayer-game
chmod +x quickstart.sh
./quickstart.sh -d # or --d, --dist, --distributed all work
This starts:
- Scout Agent on port 3001
- Strategist Agent on port 3002
- Executor Agent on port 3003
- Main API Server on port 8000 (with the `--distributed` flag)
Access the game: http://localhost:8501
API Documentation: http://localhost:8000/docs
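Once the distributed stack is running, each agent can be probed directly. Below is a minimal sketch, assuming the agents accept JSON-RPC 2.0 requests over plain HTTP at the root path of their listed ports and expose the `get_status` capability described in the tools tables further down; the exact transport path and envelope may differ.

```python
# Hypothetical health probe of the Scout agent's MCP server in distributed mode.
# Assumes a JSON-RPC 2.0 envelope over plain HTTP on port 3001; adjust to the actual transport.
import requests

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "get_status",  # standard capability listed in the MCP tools tables below
    "params": {},
}
response = requests.post("http://localhost:3001", json=payload, timeout=5)
print(response.json())
```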
# Clone and setup MCP hybrid architecture
git clone https://github.com/arun-gupta/mcp-multiplayer-game.git
cd mcp-multiplayer-game
# Install dependencies
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -r requirements.txt
# Install Ollama models (optional)
ollama pull llama2:7b
ollama pull mistral
# Optimize Ollama for instant AI responses (recommended)
OLLAMA_KEEP_ALIVE=-1 ollama run llama3.2:1b
# Start MCP API server
python main.py &
# Start Streamlit UI (in another terminal)
python run_streamlit.py
The quickstart.sh script automatically handles:
- Process cleanup - Kills existing processes on ports 8000/8501
- Environment setup - Creates venv and installs dependencies
- Python version checking - Validates Python 3.11+
- Dependency installation - Installs all requirements with Python 3.13 compatibility
- Ollama model setup - Optional local model installation
- File validation - Checks for all required files
- Application startup - Starts both backend and frontend services
- Error handling - Comprehensive error checking and colored output
# Full setup and launch (default)
./quickstart.sh
# Launch only (skip setup, venv must exist)
./quickstart.sh --skip-setup
# Setup and launch without cleanup
./quickstart.sh --skip-cleanup
# Show help
./quickstart.sh --help
Complete Setup Guide - Detailed instructions and troubleshooting
- Quick Start Guide - Complete setup and troubleshooting
- Streamlit UI Guide - Frontend features and customization
- Architecture - System architecture and design
- API Reference - Complete API documentation and examples
- User Guide - Game experience and setup instructions
- Features - Detailed feature explanations and capabilities
- Development - Development workflow and contribution guidelines
- MCP Query Guide - All methods to query MCP servers (recommended starting point)
- REST API Guide - Detailed REST/HTTP API reference with Python examples
- MCP Protocol - Complete MCP protocol implementation details
To use the AI agents, you'll need API keys for the LLM providers. See the User Guide for detailed setup instructions.
The application uses config.json for all configuration settings. Copy the example file and customize as needed:
cp config.example.json config.json
{
"mcp": {
"ports": {
"scout": 3001, // MCP server port for Scout agent
"strategist": 3002, // MCP server port for Strategist agent
"executor": 3003 // MCP server port for Executor agent
},
"host": "localhost",
"protocol": "http"
},
"api": {
"host": "0.0.0.0",
"port": 8000 // FastAPI server port
},
"streamlit": {
"host": "0.0.0.0",
"port": 8501 // Streamlit UI port
},
"models": {
"default": "gpt-5-mini", // Default model for all agents
"fallback": ["gpt-4", "claude-3-sonnet", "llama3.2:3b"]
},
"performance": {
"mcp_coordination_timeout": 15, // Timeout for MCP coordination (seconds)
"agent_execution_timeout": 8, // Timeout for individual agent tasks (seconds)
"enable_metrics": true // Enable/disable performance metrics
}
}
Note: config.json is gitignored for security. Always use config.example.json as a template.
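For orientation, here is a hedged sketch of reading these settings in Python. The key names are taken from the example above, but the project's own config loader may differ, and the inline // comments shown above are annotations that would have to be removed for strict JSON parsing.

```python
# Minimal sketch: load config.json and build the agent MCP endpoints from it.
# Assumes the key layout shown in config.example.json above; a real config.json
# must not contain the illustrative // comments, or json.load() will fail.
import json

with open("config.json") as f:
    config = json.load(f)

mcp = config["mcp"]
agent_urls = {
    name: f"{mcp['protocol']}://{mcp['host']}:{port}"
    for name, port in mcp["ports"].items()
}
print(agent_urls)  # e.g. {'scout': 'http://localhost:3001', ...}
print("API port:", config["api"]["port"])
```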
The system uses MCP (Model Context Protocol) for distributed communication between CrewAI agents. Each agent runs as both a CrewAI Agent and an MCP Server, enabling modular, scalable deployment.
- MCP Agents: Scout, Strategist, Executor (ports 3001-3003)
- FastAPI Server: Main application server (port 8000)
- Streamlit UI: Interactive game interface (port 8501)
- MCP Coordinator: Orchestrates agent communication with streamlined real-time coordination
For optimal real-time gaming performance, the system uses a lightweight MCP coordination approach:
- Fast Response Times: Sub-second AI moves via optimized agent communication
- Strategic Logic: Direct blocking/winning move detection for immediate threats
- Real-Time Metrics: Accurate request tracking with microsecond precision
- Auto-AI Moves: Automatic AI turn triggering via the dedicated `/ai-move` endpoint (see the sketch after this list)
- Seamless UX: No delays or timeouts during gameplay
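A minimal sketch of that flow over the REST API, using endpoint paths from the FastAPI table below; the response fields are assumptions:

```python
# Hypothetical sketch: trigger an AI turn and read back the game state.
# Endpoint paths come from the FastAPI endpoints table below; response fields are assumptions.
import requests

BASE = "http://localhost:8000"

requests.post(f"{BASE}/ai-move", timeout=30)            # ask the AI team to play its turn
state = requests.get(f"{BASE}/state", timeout=5).json() # current board and game status
print(state)
```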
Main application server that coordinates everything
| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Root endpoint |
| `/state` | GET | Get current game state |
| `/make-move` | POST | Make a player move and get AI response |
| `/ai-move` | POST | Trigger AI move (auto-called on the AI's turn) |
| `/reset-game` | POST | Reset game |
| `/agents/status` | GET | Get all agent statuses |
| `/agents/{agent_id}/switch-model` | POST | Switch agent model |
| `/mcp-logs` | GET | Get MCP protocol logs |
| `/agents/{agent_id}/metrics` | GET | Get agent performance metrics (real-time) |
| `/health` | GET | Health check |
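As an illustration of the player-facing endpoints, a hedged Python sketch follows; the request body for `/make-move` is an assumption, so check the interactive docs at http://localhost:8000/docs for the real schema.

```python
# Hypothetical REST usage; the request body field "position" is an assumption,
# so verify the real schemas at http://localhost:8000/docs.
import requests

BASE = "http://localhost:8000"

requests.post(f"{BASE}/reset-game", timeout=5)                   # start a fresh game
move = requests.post(f"{BASE}/make-move", json={"position": 4}, timeout=30)
print(move.json())                                               # player move + AI response
print(requests.get(f"{BASE}/agents/status", timeout=5).json())   # per-agent status
```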
Individual agent MCP servers exposing tools for direct communication
MCP Tools: These are tools (actions/operations) that agents can perform, representing capabilities like "analyze", "create", "execute".
The Scout agent analyzes the game board and identifies patterns, threats, and opportunities.
| Tool | Description | Parameters |
|---|---|---|
| `analyze_board` | Analyze board state and provide comprehensive insights | `board`, `current_player`, `move_number` |
| `detect_threats` | Identify immediate threats from opponent | `board_state` |
| `identify_opportunities` | Find winning opportunities and strategic positions | `board_state` |
| `get_pattern_analysis` | Analyze game patterns and trends | `board_state`, `move_history` |
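In distributed mode these tools could also be called directly against the Scout agent's MCP server. A hedged sketch follows; the JSON-RPC envelope and the 9-cell board encoding are illustrative assumptions, since moves are normally driven through the coordinator.

```python
# Hypothetical direct tool call to the Scout agent (port 3001 in distributed mode).
# The JSON-RPC envelope and row-major 3x3 board encoding are assumptions.
import requests

payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "analyze_board",
    "params": {
        "board": ["X", "", "", "", "O", "", "", "", "X"],  # assumed 9-cell board, row-major
        "current_player": "O",
        "move_number": 4,
    },
}
print(requests.post("http://localhost:3001", json=payload, timeout=10).json())
```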
The Strategist agent creates game plans and recommends optimal moves.
| Tool | Description | Parameters |
|---|---|---|
| `create_strategy` | Generate strategic plan based on Scout's analysis | `observation_data` |
| `evaluate_position` | Evaluate current position strength | `board_state`, `player` |
| `recommend_move` | Recommend best move with detailed reasoning | `board_state`, `available_moves` |
| `assess_win_probability` | Calculate win probability for current state | `board_state`, `player` |
The Executor agent validates and executes moves on the game board.
| Tool | Description | Parameters |
|---|---|---|
| `execute_move` | Execute strategic move on the board | `move_data`, `board_state` |
| `validate_move` | Validate move legality and game rules | `move`, `board_state` |
| `update_game_state` | Update game state after move execution | `move`, `current_state` |
| `confirm_execution` | Confirm move execution and return results | `execution_result` |
All agents share these standard MCP capabilities:
| Tool | Description | Purpose |
|---|---|---|
| `execute_task` | Execute CrewAI task via MCP protocol | Task execution |
| `get_status` | Get agent status and current state | Health monitoring |
| `get_memory` | Retrieve agent memory and context | State management |
| `switch_model` | Hot-swap LLM model without restart | Model switching |
| `get_metrics` | Get real-time performance metrics | Performance tracking |
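The hot-swap capability also surfaces through the REST endpoint in the FastAPI table above. A minimal sketch, assuming the agent IDs match the config keys (scout/strategist/executor) and that the request body is a simple model name:

```python
# Hypothetical mid-game model swap for the Scout agent; the agent_id value and
# the {"model": ...} request body are assumptions based on the endpoint shape above.
import requests

resp = requests.post(
    "http://localhost:8000/agents/scout/switch-model",
    json={"model": "llama3.2:3b"},
    timeout=30,
)
print(resp.json())

# Read back real-time metrics for the same agent after the swap.
print(requests.get("http://localhost:8000/agents/scout/metrics", timeout=5).json())
```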
Complete Architecture & API Documentation - Detailed architecture diagrams, communication flows, and complete API reference.
The Streamlit dashboard provides comprehensive monitoring with real-time analytics, performance tracking, and MCP protocol logging.
Features Documentation - Detailed monitoring capabilities, analytics, and feature status.
This project is licensed under the Apache License, Version 2.0 - see the LICENSE file for details.