A model-agnostic agentic coding CLI tool that works with local LLMs via Ollama. OpenCoder enables AI-assisted coding directly in your terminal with file operations, bash commands, and intelligent code analysis.
- Agentic Workflow: AI that can read, write, and edit files autonomously
- Tool Calling: Built-in tools for file operations, bash commands, and glob patterns
- Model Agnostic: Works with any Ollama-compatible model
- Planning System: Break down complex tasks into executable steps
- Interactive REPL: Conversational interface with slash commands
- Node.js >= 18.0.0
- Ollama running locally (or an accessible remote server)
```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/opencoder.git
cd opencoder

# Install dependencies
npm install

# Build the project
npm run build

# Link globally (optional)
npm link
```

First, install Ollama from ollama.ai, then pull a model:
```bash
# Pull a recommended model
ollama pull deepseek-r1:8b

# Start Ollama (if not already running)
ollama serve
```

```bash
# If installed globally
opencoder

# Or run directly
npm start

# Or with specific options
opencoder --model deepseek-r1:8b --url http://localhost:11434
```

Once in the REPL, you can ask the AI to help with coding tasks:
```
You: Read the main.py file and explain what it does
You: Create a new function that validates email addresses
You: Find all TODO comments in the src directory
```
OpenCoder uses prompt-based tool calling since most local models don't have native function calling. Results vary by model:
| Model | Tool Use | Notes |
|---|---|---|
| `deepseek-r1:8b` | Excellent | Fast, follows instructions well. Recommended for most use cases. |
| `deepseek-r1:32b` | Good | Better reasoning but slower; may time out on complex tasks. |
| `deepseek-coder:33b` | Mixed | Strong coding capabilities but may not follow agentic instructions consistently. |
| `codellama:13b` | Mixed | Decent for simple tasks. |
| `llama3:8b` | Mixed | General purpose; varying results with tools. |
Tip: Start with `deepseek-r1:8b` for the best balance of speed and capability. Experiment with larger models for complex reasoning tasks.
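Because tool use is prompt-driven, the agent must recognize tool calls embedded in ordinary model text. As a minimal sketch, assuming a hypothetical `<tool_call>` JSON wrapper (the actual format expected by `src/agent/parser.ts` may differ):

```typescript
// Hypothetical wire format: the model wraps a JSON object in <tool_call> tags.
// OpenCoder's real format (see src/agent/parser.ts) may differ.
interface ToolCall {
  tool: string;                  // e.g. "read_file"
  args: Record<string, unknown>; // e.g. { "path": "main.py" }
}

function extractToolCalls(reply: string): ToolCall[] {
  const calls: ToolCall[] = [];
  for (const match of reply.matchAll(/<tool_call>([\s\S]*?)<\/tool_call>/g)) {
    try {
      calls.push(JSON.parse(match[1]) as ToolCall);
    } catch {
      // Malformed JSON from the model: skip it rather than crash the loop.
    }
  }
  return calls;
}
```

A model's "Tool Use" rating above largely reflects how reliably it emits this kind of structured block when the system prompt asks for it.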
Configuration is loaded in priority order (highest to lowest):
1. Command-line flags:

```bash
opencoder --model deepseek-r1:32b --url http://192.168.1.100:11434
```

2. Environment variables:

```bash
export OPENCODER_MODEL=deepseek-r1:8b
export OPENCODER_BASE_URL=http://localhost:11434
export OPENCODER_PROVIDER=ollama
```

3. Config file. Create `~/.opencoder/config.json`:

```json
{
  "provider": "ollama",
  "model": "deepseek-r1:8b",
  "baseUrl": "http://localhost:11434",
  "timeout": 300000
}
```

4. Built-in defaults:

- Provider: `ollama`
- Model: `deepseek-r1:8b`
- URL: `http://localhost:11434`
- Timeout: 300000 ms (5 minutes)
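This precedence means a setting can be overridden at launch without touching the config file. As a minimal sketch of how one setting could be resolved under this ordering (the helper names here are hypothetical, not OpenCoder's actual internals):

```typescript
// Hypothetical resolution sketch: CLI flag > env var > config file > default.
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

function loadFileConfig(): { model?: string } {
  try {
    return JSON.parse(
      readFileSync(join(homedir(), '.opencoder', 'config.json'), 'utf8'),
    );
  } catch {
    return {}; // Missing or unreadable config file: fall through to defaults.
  }
}

function resolveModel(cliModel?: string): string {
  return (
    cliModel ??                    // 1. --model flag
    process.env.OPENCODER_MODEL ?? // 2. environment variable
    loadFileConfig().model ??      // 3. ~/.opencoder/config.json
    'deepseek-r1:8b'               // 4. built-in default
  );
}
```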
```
opencoder [options]

Options:
  -m, --model <model>     Model to use (default: deepseek-r1:8b)
  -p, --provider <name>   AI provider (default: ollama)
  -u, --url <url>         Base URL for API (default: http://localhost:11434)
  -h, --help              Display help
```

Inside the REPL:
| Command | Description |
|---|---|
| `/help` | Show available commands |
| `/init` | Analyze and summarize the current codebase |
| `/plan <goal>` | Create an execution plan for a complex task |
| `/clear` | Clear conversation history |
| `/readonly` | Toggle read-only mode (disables write operations) |
| `/exit` | Exit the application |
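Slash commands are typically intercepted before input ever reaches the model, so they work even when the provider is unreachable. A minimal sketch of that dispatch, with illustrative handlers rather than OpenCoder's actual ones:

```typescript
// Illustrative dispatcher only; the real handlers live in src/cli/.
type Handler = (arg: string) => void | Promise<void>;

const commands: Record<string, Handler> = {
  '/help': () => console.log('Commands: /help /init /plan /clear /readonly /exit'),
  '/plan': (goal) => console.log(`Planning: ${goal}`), // would invoke src/planning/
  '/exit': () => process.exit(0),
};

async function handleInput(
  line: string,
  askAgent: (prompt: string) => Promise<void>, // forwards plain text to the agent
): Promise<void> {
  if (!line.startsWith('/')) return askAgent(line);
  const [cmd, ...rest] = line.split(/\s+/);
  const handler = commands[cmd];
  if (handler) await handler(rest.join(' '));
  else console.log(`Unknown command: ${cmd} (try /help)`);
}
```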
The AI has access to these tools:
| Tool | Description |
|---|---|
| `read_file` | Read file contents with optional line range |
| `write_file` | Create or overwrite files |
| `edit_file` | Edit files (replace, insert, append modes) |
| `bash` | Execute bash commands (with safety checks) |
| `glob` | Find files matching patterns |
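Conceptually, each tool pairs the name and description advertised to the model with an `execute` function the agent calls when the model requests it. A rough sketch of that shape, assuming a hypothetical interface rather than the exact one in `src/tools/`:

```typescript
// Hypothetical tool shape; the actual interface in src/tools/ may differ.
import { readFile } from 'node:fs/promises';

interface Tool {
  name: string;        // advertised to the model in the system prompt
  description: string; // tells the model when the tool applies
  execute(args: Record<string, unknown>): Promise<string>;
}

const readFileTool: Tool = {
  name: 'read_file',
  description: 'Read file contents with optional line range',
  async execute(args) {
    const lines = (await readFile(String(args.path), 'utf8')).split('\n');
    const start = Number(args.start ?? 1); // optional 1-based range
    const end = Number(args.end ?? lines.length);
    return lines.slice(start - 1, end).join('\n');
  },
};
```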
```
src/
├── index.ts          # CLI entry point
├── agent/            # Core agentic loop
│   ├── agent.ts      # Main agent with tool execution
│   ├── context.ts    # Conversation management
│   └── parser.ts     # Tool call extraction
├── providers/        # AI provider abstraction
│   ├── base.ts       # Provider interface
│   └── ollama.ts     # Ollama implementation
├── tools/            # Agent capabilities
│   ├── read.ts       # Read files
│   ├── write.ts      # Write files
│   ├── edit.ts       # Edit files
│   ├── bash.ts       # Execute commands
│   └── glob.ts       # File patterns
├── cli/              # REPL interface
└── planning/         # Task planning system
```
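The core of `agent.ts` is the agentic loop: send the conversation to the provider, extract any tool calls from the reply, execute them, feed the results back, and repeat until the model answers without requesting tools. A simplified sketch of that loop, with stubbed stand-ins for the parser and tool dispatch:

```typescript
// Simplified agentic loop; the real src/agent/agent.ts adds safety checks,
// read-only mode, and richer error handling. extractToolCalls and executeTool
// are stand-ins for src/agent/parser.ts and the tool dispatcher.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };
type ToolCall = { tool: string; args: Record<string, unknown> };

declare function extractToolCalls(reply: string): ToolCall[];
declare function executeTool(call: ToolCall): Promise<string>;

async function runAgent(
  userPrompt: string,
  provider: { chat(messages: Message[]): Promise<{ content: string }> },
  maxSteps = 10,
): Promise<string> {
  const messages: Message[] = [{ role: 'user', content: userPrompt }];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await provider.chat(messages);
    messages.push({ role: 'assistant', content: reply.content });

    const calls = extractToolCalls(reply.content);
    if (calls.length === 0) return reply.content; // no tool calls: final answer

    for (const call of calls) {
      const result = await executeTool(call); // read_file, bash, glob, ...
      messages.push({ role: 'user', content: `Tool ${call.tool} returned:\n${result}` });
    }
  }
  throw new Error('Agent exceeded maximum steps');
}
```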
```bash
# Watch mode for development
npm run dev

# Build
npm run build

# Run
npm start
```

Extend the `AIProvider` base class in `src/providers/`:
```typescript
import { AIProvider, Message, ChatResponse } from './base.js';

export class MyProvider extends AIProvider {
  async chat(messages: Message[]): Promise<ChatResponse> {
    // Implement your provider logic: send `messages` to your service
    // and map its reply into a ChatResponse.
    throw new Error('Not implemented yet');
  }

  async checkConnection(): Promise<boolean> {
    // Verify connection to the service (e.g. a lightweight health check);
    // return false if it is unreachable.
    return false;
  }
}
```

Make sure Ollama is running:
```bash
ollama serve
```

Pull the model first:

```bash
ollama pull deepseek-r1:8b
```

If responses are slow or time out:

- Try a smaller model (8b instead of 32b)
- Increase the timeout in the config
- Check system resources

Some models don't follow tool-calling instructions well. Try:

- Using `deepseek-r1:8b` (best tool compliance)
- Being more explicit in your prompts
- Breaking complex tasks into smaller steps
MIT
Contributions are welcome! Please feel free to submit issues and pull requests.