An autonomous agentic coding assistant powered by Ollama with native tool calling. Similar to OpenCode or Claude Code, but using local LLMs through Ollama. Works completely offline and respects your privacy.
- 🤖 Autonomous Agentic Coding - Works independently to complete tasks without constant supervision
- 🛠️ Native Ollama Tool Calling - Uses Ollama's built-in tool calling capabilities for structured function execution
- 📁 File Operations - Read, write, create, delete files and directories
- 💻 Terminal Commands - Execute shell commands (cd, ls, git, npm, pip, etc.)
- 🌐 Web Search - Search the internet using DuckDuckGo (no API keys required)
- 🌍 Browser Automation - Control Chrome/Selenium for web interactions and scraping
- 💬 Beautiful Terminal UI - Rich library-based interface with syntax highlighting
- 🔒 Privacy-First - All processing happens locally, no data sent to external services
- 🚀 Fast & Efficient - Uses local LLM inference, works offline
- Python 3.8+ (`python3`)
- Ollama - install from [ollama.ai](https://ollama.ai)
- Chrome/Chromium and ChromeDriver (optional, for browser automation)
- Arch Linux: `sudo pacman -S chromium chromium-driver`
- Ubuntu/Debian: `sudo apt-get install chromium-browser chromium-chromedriver`
- macOS: `brew install chromium chromedriver`
```bash
git clone https://github.com/r3dg0d/ollamacode.git
cd ollamacode
chmod +x setup.sh
./setup.sh
```

The setup script will:
- Create a Python virtual environment
- Install all dependencies
- Check for Ollama installation
- Optionally pull the default model
- Create a launcher script
```bash
git clone https://github.com/r3dg0d/ollamacode.git
cd ollamacode

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install --upgrade pip
pip install -r requirements.txt

# Make launcher executable
chmod +x ollamacode
```

Arch users can also install from the AUR:

```bash
# Using yay
yay -S ollamacode

# Using paru
paru -S ollamacode

# Or manually
git clone https://aur.archlinux.org/ollamacode.git
cd ollamacode
makepkg -si
```

Add to your PATH:
```bash
echo 'export PATH="$PATH:$HOME/Documents/ollamacode"' >> ~/.bashrc
source ~/.bashrc
```

Or create a symlink:

```bash
ln -s ~/Documents/ollamacode/ollamacode ~/.local/bin/ollamacode
```

Then start Ollama and pull the recommended model:

```bash
# Start Ollama (if not already running)
ollama serve

# Pull the recommended model (if not already installed)
ollama pull huihui_ai/qwen3-abliterated:32b
```

Note: You can use any Ollama model that supports tool calling. Check Ollama's model library for compatible models.
Interactive Mode:

```bash
ollamacode
# or
python main.py
```

Autonomous Mode:

```bash
ollamacode --task "Create a Python script that prints 'Hello World'"
# or
python main.py --task "Create a new React project with TypeScript"
```

Usage:

```
ollamacode [OPTIONS]

Options:
  --model MODEL     Ollama model to use
                    (default: huihui_ai/qwen3-abliterated:32b)
  --endpoint URL    Ollama API endpoint
                    (default: http://127.0.0.1:11434)
  --base-dir PATH   Base directory for file operations
                    (default: ~/Documents)
  --task TASK       Initial task to execute (autonomous mode)
```

Examples:

```bash
# Scaffold a project
ollamacode --task "Create a new Python project called 'myapp' with a main.py file and a requirements.txt"

# Research, then write code
ollamacode --task "Search for information about Python decorators and create a file with examples"

# Browser automation
ollamacode --task "Open a browser, navigate to example.com, and save the page HTML to a file"

# Interactive session
ollamacode
# Then type your tasks interactively
Task: Create a Flask REST API with endpoints for users
Task: Add authentication middleware
Task: Write unit tests
```

The assistant has access to these tools:
File operations:

- `read_file(filepath)` - Read file contents
- `write_file(filepath, content)` - Write to a file (creates parent directories)
- `create_directory(dirpath)` - Create directories
- `list_directory(dirpath)` - List directory contents
- `delete_file(filepath)` - Delete a file
- `delete_directory(dirpath, recursive)` - Delete directories

Terminal:

- `run_command(command, cwd)` - Execute shell commands (cd, ls, git, npm, pip, etc.)

Web and browser:

- `web_search(query, max_results)` - Search the web using DuckDuckGo
- `open_browser(headless)` - Open Chrome browser
- `navigate_to(url)` - Navigate to a URL
- `browser_click(selector, by)` - Click an element
- `browser_type(selector, text, by)` - Type text into an element
- `browser_get_text(selector, by)` - Get element text
- `browser_get_html()` - Get page HTML
- `close_browser()` - Close the browser
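Ollama's native tool calling expects each tool to be described by an OpenAI-style JSON schema. As a sketch, a definition for `web_search` might look like this (field names follow Ollama's chat API; the exact definition in `tools.py` may differ):

```python
# Hypothetical tool definition for web_search, in the JSON-schema
# format that Ollama's chat API accepts for native tool calling.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web using DuckDuckGo.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query",
                },
                "max_results": {
                    "type": "integer",
                    "description": "Maximum number of results to return",
                },
            },
            "required": ["query"],
        },
    },
}
```

A list of such dictionaries is passed as the `tools=` argument when calling the model, and the model responds with structured tool calls instead of free-form text.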
1. You provide a task or question - either interactively or via the command line
2. The assistant calls Ollama with tool definitions - it plans and executes actions autonomously
3. Tools are executed automatically - file operations, commands, web searches, etc.
4. Results are fed back to the model - the model processes the results and continues
5. The process repeats until the task is complete - up to 50 iterations per task
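The loop above can be sketched as follows. `call_model` and `execute_tool` are illustrative stand-ins for the real Ollama client and tool registry; the actual implementation in `main.py` and `ollama_client.py` may differ:

```python
# Minimal agentic loop: call the model, run any requested tools,
# feed results back, and stop when no more tool calls are made.
MAX_ITERATIONS = 50

def run_task(task, call_model, execute_tool):
    messages = [{"role": "user", "content": task}]
    for _ in range(MAX_ITERATIONS):
        reply = call_model(messages)          # e.g. a wrapper around ollama.chat(..., tools=...)
        messages.append(reply)
        tool_calls = reply.get("tool_calls") or []
        if not tool_calls:                    # no tools requested: the task is done
            return reply.get("content", "")
        for call in tool_calls:
            result = execute_tool(call["name"], call.get("arguments", {}))
            # Tool output goes back into the conversation as a "tool" message
            messages.append({"role": "tool", "content": str(result)})
    return "stopped after max iterations"
```

The iteration cap keeps a confused model from looping forever on an impossible task.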
Edit `main.py` or pass options on the command line:

```bash
ollamacode --model llama3.2:3b --task "your task"
ollamacode --base-dir ~/Projects --task "create a new project"
```

If Ollama is running on a different host/port:

```bash
ollamacode --endpoint http://192.168.1.100:11434 --task "your task"
```

Problem: Failed to connect to Ollama at http://127.0.0.1:11434

Solution:

```bash
# Make sure Ollama is running
ollama serve

# Check if Ollama is accessible
curl http://127.0.0.1:11434/api/tags
```

Problem: Selenium/Chrome errors when using browser tools
Solution:

```bash
# Install Chrome/Chromium and ChromeDriver
# Arch Linux
sudo pacman -S chromium chromium-driver

# Ubuntu/Debian
sudo apt-get install chromium-browser chromium-chromedriver

# macOS
brew install chromium chromedriver
```

Problem: Error: model not found
Solution:

```bash
# Pull the model
ollama pull huihui_ai/qwen3-abliterated:32b

# Or use a different model
ollamacode --model llama3.2:3b --task "your task"
```

Problem: Permission denied when accessing files/directories
Solution:
- Check file/directory permissions in the base directory
- Ensure the base directory path is correct
- Use absolute paths if relative paths fail
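A quick way to check the base directory yourself is a small script like this (a diagnostic sketch, not part of OllamaCode):

```python
import os

def check_base_dir(path):
    """Report whether the expanded base directory exists and is read/writable."""
    path = os.path.abspath(os.path.expanduser(path))
    return {
        "path": path,                          # the resolved absolute path actually used
        "exists": os.path.isdir(path),
        "readable": os.access(path, os.R_OK),
        "writable": os.access(path, os.W_OK),
    }

print(check_base_dir("~/Documents"))
```

If `exists` is False or `writable` is False, fix the path or permissions before re-running the assistant.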
Problem: Module not found errors

Solution:

```bash
# Make sure the virtual environment is activated
source venv/bin/activate   # Linux/macOS
# or
venv\Scripts\activate      # Windows

# Reinstall dependencies
pip install -r requirements.txt
```

Project structure:

```
ollamacode/
├── main.py             # Main entry point
├── ollama_client.py    # Ollama API client with tool calling
├── tools.py            # Tool implementations (file ops, terminal, web)
├── ui.py               # Terminal UI using Rich library
├── ollamacode          # Launcher script (bash)
├── setup.sh            # Setup script
├── requirements.txt    # Python dependencies
├── README.md           # This file
├── PKGBUILD            # AUR package build file
└── LICENSE             # MIT License
```
Running from source:

```bash
# Activate virtual environment
source venv/bin/activate

# Run directly with Python
python main.py

# Or use the launcher
./ollamacode
```

To add a new tool:

- Add the tool implementation to the `ToolRegistry` class in `tools.py`
- Add the tool definition to the `get_tool_definitions()` method
- Tools are automatically available to the LLM
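The registration pattern might look roughly like this (a sketch only; the actual `ToolRegistry` in `tools.py` may be structured differently):

```python
class ToolRegistry:
    """Sketch of a registry mapping tool names to callables plus their schemas."""

    def __init__(self):
        self._tools = {}  # name -> (callable, definition dict)

    def register(self, name, func, definition):
        """Register a tool implementation together with its JSON-schema definition."""
        self._tools[name] = (func, definition)

    def get_tool_definitions(self):
        # This list is what gets passed to Ollama as the `tools=` argument.
        return [definition for _, definition in self._tools.values()]

    def execute(self, name, arguments):
        """Look up a tool by name and call it with the model-supplied arguments."""
        func, _ = self._tools[name]
        return func(**arguments)
```

Keeping the implementation and its schema together means a newly registered tool is exposed to the model on the next request with no other wiring.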
Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
OllamaCode works with any Ollama model that supports tool calling. Recommended models:
- `huihui_ai/qwen3-abliterated:32b` - Default, excellent coding capabilities
- `llama3.2:3b` - Lightweight, fast
- `codellama:7b` - Code-specialized model
- `deepseek-coder:6.7b` - Strong coding model
Check Ollama's model library for more options.
- 100% Local - All processing happens on your machine
- No Data Collection - No telemetry, no analytics, no external requests (except web search)
- No API Keys Required - Uses local Ollama instance
- File System Access - Be careful with commands; the assistant can read and write files on your machine
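A common safeguard for agents with file access is to resolve every requested path and reject anything that escapes the configured base directory. A sketch of such a guard (illustrative; not necessarily how `tools.py` implements it):

```python
from pathlib import Path

def resolve_safe(base_dir, requested):
    """Resolve `requested` under `base_dir`, rejecting path-traversal escapes."""
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    # Allow the base directory itself or anything strictly inside it.
    if target != base and base not in target.parents:
        raise ValueError(f"path escapes base directory: {requested}")
    return target
```

Running every `read_file`/`write_file` path through a check like this blocks inputs such as `../../etc/passwd` even if the model is tricked into requesting them.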
MIT License - see LICENSE file for details
- Ollama - Amazing local LLM runtime
- Rich - Beautiful terminal UI
- Selenium - Browser automation
- DuckDuckGo Search - Web search
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made with ❤️ by r3dg0d