This project demonstrates how to integrate the Model Context Protocol (MCP) with a customized LLM (e.g., Qwen), creating a powerful chatbot that can interact with various tools through MCP servers. The implementation showcases the flexibility of MCP by enabling LLMs to use external tools seamlessly.
For the Chinese version, please refer to README_ZH.md.
This project includes:
- A simple CLI chatbot interface
- Integration with Markdown processing tools via MCP
- Support for customized LLMs (e.g., Qwen)
- An example implementation for processing and summarizing Markdown files (deliberately simple, for demonstration purposes)
## Requirements

- Python 3.10+
- Dependencies (automatically installed via `requirements.txt`):
  - python-dotenv
  - mcp[cli]
  - openai
  - colorama
## Installation

1. Clone the repository:

   ```bash
   git clone git@github.com:keli-wen/mcp_chatbot.git
   cd mcp_chatbot
   ```
2. Set up a virtual environment (recommended):

   ```bash
   cd mcp_chatbot  # if you are not already in the project folder

   # Install uv if you don't have it already
   pip install uv

   # Create a virtual environment and install dependencies
   uv venv .venv --python=3.10

   # Activate the virtual environment
   # For macOS/Linux
   source .venv/bin/activate
   # For Windows
   .venv\Scripts\activate
   ```

   To leave the virtual environment later, run `deactivate`.
3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   # or use uv for faster installation
   uv pip install -r requirements.txt
   ```
4. Configure your environment:

   - Copy the `.env.example` file to `.env`:

     ```bash
     cp .env.example .env
     ```

   - Edit the `.env` file to add your Qwen API key (just for the demo; you can use any LLM API key, as long as you set the matching `base_url` and `api_key`) and set the paths:

     ```
     LLM_MODEL_NAME=your_llm_model_name_here
     LLM_BASE_URL=your_llm_base_url_here
     LLM_API_KEY=your_llm_api_key_here
     MARKDOWN_FOLDER_PATH=/path/to/your/markdown/folder
     RESULT_FOLDER_PATH=/path/to/your/result/folder
     ```
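Once `.env` is filled in, the chatbot can read these settings at startup. A minimal sketch of that check, assuming the variables are already in the process environment (the project loads them via python-dotenv; `load_llm_config` is an illustrative name, not the project's API):

```python
import os

# Hypothetical helper: the project itself loads .env via python-dotenv;
# here os.environ stands in for that step.
REQUIRED_VARS = ("LLM_MODEL_NAME", "LLM_BASE_URL", "LLM_API_KEY")

def load_llm_config() -> dict:
    """Collect the required LLM settings, failing fast if any are missing."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```

These values would then typically be passed as `base_url` and `api_key` to an OpenAI-compatible client.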
## Configuration

Before running the application, you need to modify the following:

1. **MCP Server Configuration**: Edit `mcp_servers/servers_config.json` to match your local setup:

   ```json
   {
     "mcpServers": {
       "markdown_processor": {
         "command": "/path/to/your/uv",
         "args": [
           "--directory",
           "/path/to/your/project/mcp_servers",
           "run",
           "markdown_processor.py"
         ]
       }
     }
   }
   ```

   Replace `/path/to/your/uv` with the actual path to your `uv` executable (you can use `which uv` to get the path), and `/path/to/your/project/mcp_servers` with the absolute path to the `mcp_servers` directory in your project.
2. **Environment Variables**: Make sure to set proper paths in your `.env` file:

   ```
   MARKDOWN_FOLDER_PATH="/path/to/your/markdown/folder"
   RESULT_FOLDER_PATH="/path/to/your/result/folder"
   ```

   The application will validate these paths and throw an error if they still contain placeholder values.
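That placeholder check can be sketched as follows. This is a hypothetical stdlib-only helper for illustration, not the project's actual validation code (the function name and error messages are assumptions):

```python
from pathlib import Path

# Placeholder prefix used in the sample .env values above.
PLACEHOLDER = "/path/to/your"

def validate_folder(value: str, var_name: str) -> Path:
    """Reject placeholder values and paths that are not existing directories."""
    if not value or PLACEHOLDER in value:
        raise ValueError(f"{var_name} still contains a placeholder value: {value!r}")
    path = Path(value).expanduser()
    if not path.is_dir():
        raise ValueError(f"{var_name} is not an existing directory: {path}")
    return path
```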
You can run the following command to check your configuration:

```bash
bash check.sh
```

## Usage

To run the basic chatbot interface:
```bash
python main.py
```

This will start an interactive session where you can chat with the AI. The AI has access to the tools provided by the configured MCP servers.
To run the provided example that summarizes Markdown content:
```bash
python run_example.py
```

This script will:
- Initialize the MCP servers
- Connect to the Qwen API
- Process the Markdown files from the configured directory
- Generate a summary in Chinese
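At its core, the processing step amounts to reading every `.md` file under `MARKDOWN_FOLDER_PATH` and handing the combined text to the LLM for summarization. A simplified, stdlib-only sketch of that gathering step (illustrative only; `collect_markdown` is not the project's actual function):

```python
from pathlib import Path

def collect_markdown(folder: str) -> str:
    """Concatenate all .md files in a folder, each prefixed with its
    file name so the model can attribute content to a source file."""
    sections = []
    for md_file in sorted(Path(folder).glob("*.md")):
        sections.append(f"# File: {md_file.name}\n{md_file.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)
```

The resulting string would then be embedded in a summarization prompt sent to the configured LLM.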
## Project Structure

- `main.py`: Entry point for the interactive chatbot
- `run_example.py`: Example script showing how to use the system for a specific task
- `mcp_chatbot/`: Core library code
  - `chat/`: Chat session management
  - `config/`: Configuration handling
  - `llm/`: LLM client implementation
  - `mcp_server/`: MCP server and tool integration
- `mcp_servers/`: Custom MCP server implementations
  - `markdown_processor.py`: Server for processing Markdown files
  - `servers_config.json`: Configuration for MCP servers
- `data-example/`: Example Markdown files for testing
## Extending the Project

You can extend this project by:

- Adding new MCP servers in the `mcp_servers/` directory
- Updating `servers_config.json` to include your new servers
- Implementing new functionality in the existing servers
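For example, registering a hypothetical new server (the `note_taker` name and script are placeholders, not part of this project) means adding a second entry alongside `markdown_processor` in `mcp_servers/servers_config.json`:

```json
{
  "mcpServers": {
    "markdown_processor": { "...": "existing entry unchanged" },
    "note_taker": {
      "command": "/path/to/your/uv",
      "args": [
        "--directory",
        "/path/to/your/project/mcp_servers",
        "run",
        "note_taker.py"
      ]
    }
  }
}
```

The chatbot will then expose any tools the new server advertises alongside the Markdown-processing tools.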
## Troubleshooting

- **Path Issues**: Ensure all paths in the configuration files are absolute paths appropriate for your system
- **MCP Server Errors**: Make sure the tools are properly installed and configured
- **API Key Errors**: Verify your API key is correctly set in the `.env` file