Chinese Documentation | Contributing | Documentation
Kode is a powerful AI assistant that lives in your terminal. It can understand your codebase, edit files, run commands, and handle entire workflows for you.
- π€ AI-Powered Assistance - Uses advanced AI models to understand and respond to your requests
- π Multi-Model Collaboration - Flexibly switch and combine multiple AI models to leverage their unique strengths
- π Code Editing - Directly edit files with intelligent suggestions and improvements
- π Codebase Understanding - Analyzes your project structure and code relationships
- π Command Execution - Run shell commands and see results in real-time
- π οΈ Workflow Automation - Handle complex development tasks with simple prompts
- π¨ Interactive UI - Beautiful terminal interface with syntax highlighting
- π Tool System - Extensible architecture with specialized tools for different tasks
- πΎ Context Management - Smart context handling to maintain conversation continuity
```bash
npm install -g @shareai-lab/kode
```

After installation, you can use any of these commands:

- `kode` - Primary command
- `kwa` - Kode With Agent (alternative)
- `kd` - Ultra-short alias
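To quickly verify the install, the standard `--help` flag prints usage information (assuming the global npm bin directory is on your `PATH`):

```bash
kode --help
```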
Start an interactive session:
```bash
kode
# or
kwa
# or
kd
```

Get a quick response:

```bash
kode -p "explain this function" main.js
# or
kwa -p "explain this function" main.js
```

Available slash commands:

- `/help` - Show available commands
- `/model` - Change AI model settings
- `/config` - Open configuration panel
- `/cost` - Show token usage and costs
- `/clear` - Clear conversation history
- `/init` - Initialize project context
Unlike the official Claude, which supports only a single model, Kode implements true multi-model collaboration, allowing you to fully leverage the unique strengths of different AI models.
We designed a unified ModelManager system that supports:
- Model Profiles: Each model has an independent configuration file containing API endpoints, authentication, context window size, cost parameters, etc.
- Model Pointers: Users can configure default models for different purposes in the `/model` command (see the sketch after this list):
  - `main`: Default model for the main Agent
  - `task`: Default model for SubAgents
  - `reasoning`: Reserved for future ThinkTool usage
  - `quick`: Fast model for simple NLP tasks (security identification, title generation, etc.)
- Dynamic Model Switching: Supports runtime model switching without restarting the session, maintaining context continuity
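To make the profile/pointer relationship concrete, here is a minimal TypeScript sketch; the type and function names are illustrative, not Kode's actual internals:

```typescript
// Illustrative types only; not the actual Kode source.
interface ModelProfile {
  provider: string;       // e.g. "openai", "anthropic"
  model: string;          // provider-specific model id
  apiKey: string;
  contextWindow?: number; // max tokens the model accepts
}

type PointerName = "main" | "task" | "reasoning" | "quick";

interface ModelConfig {
  modelProfiles: Record<string, ModelProfile>;
  modelPointers: Record<PointerName, string>; // pointer -> profile key
}

// Resolve a pointer such as "task" to its concrete profile.
function resolveModel(config: ModelConfig, pointer: PointerName): ModelProfile {
  const profile = config.modelProfiles[config.modelPointers[pointer]];
  if (!profile) throw new Error(`No profile behind pointer "${pointer}"`);
  return profile;
}
```

Resolving `task` here yields whatever profile the user has pointed it at, which is how SubAgents pick up their default model.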
Our specially designed TaskTool (Architect tool) implements:
- SubAgent Mechanism: Can launch multiple SubAgents to process tasks in parallel
- Model Parameter Passing: Users can specify which model SubAgents should use in their requests
- Default Model Configuration: SubAgents use the model configured by the `task` pointer by default (a fan-out sketch follows this list)
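Conceptually, the parallel SubAgent mechanism amounts to fanning subtasks out to independent agent runs and gathering the results. The sketch below is a hypothetical illustration, with `runSubAgent` standing in for Kode's real agent launch:

```typescript
// Hypothetical illustration of the SubAgent fan-out idea; runSubAgent
// stands in for launching a real agent loop and is not the Kode API.
async function runSubAgent(task: string, model: string): Promise<string> {
  return `[${model}] finished: ${task}`; // placeholder for real agent work
}

// Each subtask runs as its own SubAgent; all results are gathered together.
async function fanOut(tasks: string[], model: string): Promise<string[]> {
  return Promise.all(tasks.map(task => runSubAgent(task, model)));
}

fanOut(["refactor module A", "refactor module B", "refactor module C"], "qwen-coder")
  .then(results => results.forEach(r => console.log(r)));
```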
We specially designed the AskExpertModel tool:
- Expert Model Invocation: Allows temporarily calling specific expert models to solve difficult problems during conversations
- Model Isolation Execution: Expert model responses are processed independently without affecting the main conversation flow
- Knowledge Integration: Integrates expert model insights into the current task
- Tab Key Quick Switch: Press Tab in the input box to quickly switch the model for the current conversation
- `/model` Command: Use the `/model` command to configure and manage multiple model profiles and set default models for different purposes
- User Control: Users can specify particular models for task processing at any time
**Architecture Design Phase**
- Use the o3 or GPT-5 models to explore system architecture and formulate sharp, clear technical solutions
- These models excel at abstract thinking and system design

**Solution Refinement Phase**
- Use a Gemini model to explore production-environment design details in depth
- Leverage its deep practical engineering experience and balanced reasoning capabilities

**Code Implementation Phase**
- Use Qwen Coder, Kimi K2, GLM-4.5, or Claude Sonnet 4 for the actual code writing
- These models perform strongly at code generation, file editing, and engineering implementation
- Support parallel processing of multiple coding tasks through subagents

**Problem Solving**
- When encountering complex problems, consult expert models such as o3, Claude Opus 4.1, or Grok 4
- Obtain deep technical insights and innovative solutions
```
# Example 1: Architecture Design
"Use o3 model to help me design a high-concurrency message queue system architecture"

# Example 2: Multi-Model Collaboration
"First use GPT-5 model to analyze the root cause of this performance issue, then use Claude Sonnet 4 model to write optimization code"

# Example 3: Parallel Task Processing
"Use Qwen Coder model as subagent to refactor these three modules simultaneously"

# Example 4: Expert Consultation
"This memory leak issue is tricky, ask Claude Opus 4.1 model separately for solutions"

# Example 5: Code Review
"Have Kimi k2 model review the code quality of this PR"

# Example 6: Complex Reasoning
"Use Grok 4 model to help me derive the time complexity of this algorithm"

# Example 7: Solution Design
"Have GLM-4.5 model design a microservice decomposition plan"
```

Example of multi-model configuration support:

```jsonc
{
  "modelProfiles": {
    "o3": { "provider": "openai", "model": "o3", "apiKey": "..." },
    "claude4": { "provider": "anthropic", "model": "claude-sonnet-4", "apiKey": "..." },
    "qwen": { "provider": "alibaba", "model": "qwen-coder", "apiKey": "..." },
    // Profile added so the "quick" pointer below resolves; provider name illustrative.
    "glm-4.5": { "provider": "zhipu", "model": "glm-4.5", "apiKey": "..." }
  },
  "modelPointers": {
    "main": "claude4",   // Main conversation model
    "task": "qwen",      // Task execution model
    "reasoning": "o3",   // Reasoning model
    "quick": "glm-4.5"   // Quick response model
  }
}
```

- Usage Statistics: Use the `/cost` command to view token usage and costs for each model
- Multi-Model Cost Comparison: Track usage costs of different models in real-time
- History Records: Save cost data for each session
- Context Inheritance: Maintain conversation continuity when switching models
- Context Window Adaptation: Automatically adjust based on different models' context window sizes (see the sketch after this list)
- Session State Preservation: Ensure information consistency during multi-model collaboration
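One way to picture the context-window adaptation is trimming conversation history to fit whichever model is active. The sketch below uses hypothetical names and a crude characters-per-token estimate; it is not Kode's actual implementation:

```typescript
// Hypothetical sketch: trim the oldest messages until the history fits
// the active model's context window. Token counting is approximated by
// characters / 4; a real implementation would use a tokenizer.
interface Message { role: "user" | "assistant"; content: string; }

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitToWindow(history: Message[], contextWindow: number, reserved = 1024): Message[] {
  const budget = contextWindow - reserved; // leave room for the reply
  const kept: Message[] = [];
  let used = 0;
  // Walk from newest to oldest, keeping as much recent context as fits.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > budget) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```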
- Maximized Efficiency: Each task is handled by the most suitable model
- Cost Optimization: Use lightweight models for simple tasks, powerful models for complex tasks
- Parallel Processing: Multiple models can work on different subtasks simultaneously
- Flexible Switching: Switch models based on task requirements without restarting sessions
- Leveraging Strengths: Combine advantages of different models for optimal overall results
| Feature | Kode | Official Claude |
|---|---|---|
| Number of Supported Models | Unlimited, configurable for any model | Only supports single Claude model |
| Model Switching | β Tab key quick switch | β Requires session restart |
| Parallel Processing | β Multiple SubAgents work in parallel | β Single-threaded processing |
| Cost Tracking | β Separate statistics for multiple models | β Single model cost |
| Task Model Configuration | β Different default models for different purposes | β Same model for all tasks |
| Expert Consultation | β AskExpertModel tool | β Not supported |
This multi-model collaboration capability makes Kode a true AI Development Workbench, not just a single AI assistant.
Kode is built with modern tools and requires Bun for development.
```bash
# macOS/Linux
curl -fsSL https://bun.sh/install | bash

# Windows
powershell -c "irm bun.sh/install.ps1 | iex"
```

```bash
# Clone the repository
git clone https://github.com/shareAI-lab/kode.git
cd kode

# Install dependencies
bun install

# Run in development mode
bun run dev
```

Build the project:

```bash
bun run build
```

```bash
# Run tests
bun test

# Test the CLI
./cli.js --help
```

We welcome contributions! Please see our Contributing Guide for details.
ISC License - see LICENSE for details.
- π Documentation
- π Report Issues
- π¬ Discussions