Orla is a unix tool for running lightweight open-source agents. It is easy to add to a script, use with pipes, or build things on top of.
Install via Homebrew on macOS or Linux:
```shell
brew install --cask dorcha-inc/orla/orla
```

Or install orla via a helper script (you might need sudo):

```shell
curl -fsSL https://raw.githubusercontent.com/dorcha-inc/orla/main/scripts/install.sh | sh
```

Try orla:

```
>>> orla agent "Hello"
Hello! How can I assist you today? Could you please provide some details or specify what you need help with?
```

All done!
Side note: if needed, this will install Go and Ollama, and pull in a lightweight open-source model. To skip that, you can use:
```shell
HOMEBREW_ORLA_SKIP_OLLAMA=1 brew install --cask dorcha-inc/orla/orla
```

or via the install script:

```shell
curl -fsSL https://raw.githubusercontent.com/dorcha-inc/orla/main/scripts/install.sh | sh -s -- --skip-ollama
```

Simple and usable tools are a key part of the Unix philosophy. Tools like grep, curl, and git have become second nature and are huge wins for an inclusive and productive ecosystem. They are fast, reliable, and composable. By contrast, the current ecosystem around AI and AI agents feels like using a bloated, monolithic piece of proprietary software with overpriced and Kafkaesque licensing fees.
Orla is built on a simple premise: AI should be a (free software) tool you own, not a service you rent. It treats simplicity, reliability, and composability as first-order priorities. Orla uses models running on your own machine and automatically discovers the tools you already have, making it powerful and private right out of the box. It requires no API keys, subscriptions, or power-hungry data centers. To summarize,
- Orla runs locally. Your data, queries, and tools never leave your machine without your explicit instruction. It's private by default.
- Orla brings the power of modern LLMs to your terminal with a dead-simple interface. If you know how to use grep, you know how to use Orla.
- Orla is free and open-source software. No subscriptions, no vendor lock-in.
See the RFCs in docs/rfcs/ for more details on the roadmap.
The easiest and recommended way to install Orla on macOS and Linux is using Homebrew:
```shell
brew install --cask dorcha-inc/orla/orla
```

Alternatively, you can use our installation script, which installs Orla and Ollama and sets everything up for you:

```shell
curl -fsSL https://raw.githubusercontent.com/dorcha-inc/orla/main/scripts/install.sh | sh
```

If you already have a remote Ollama server or prefer to manage Ollama separately, you can skip the local Ollama installation.
Using Homebrew:
```shell
HOMEBREW_ORLA_SKIP_OLLAMA=1 brew install --cask dorcha-inc/orla/orla
```

Using the install script:

```shell
curl -fsSL https://raw.githubusercontent.com/dorcha-inc/orla/main/scripts/install.sh | sh -s -- --skip-ollama
```

After installation, configure Orla to use your remote Ollama server by setting either the `OLLAMA_HOST` or the `ORLA_OLLAMA_HOST` environment variable, or by using the `llm_backend` configuration in your orla.yaml:

```shell
export ORLA_OLLAMA_HOST=http://your-ollama-server:11434
```

Or add to your orla.yaml:
```yaml
llm_backend:
  endpoint: http://your-ollama-server:11434
  type: ollama
```

To remove orla, see uninstalling orla.
Orla supports two modes of operation: agent for direct terminal interaction, and serve for integration with MCP clients.
The simplest way to use Orla is through agent. Just ask Orla to do something, and it will use local models to reason and execute commands:
You can do a one-shot task like this:
```shell
orla agent "summarize this code" < main.go
```

You can run it in a pipeline, like this:

```shell
cat data.json | orla agent "extract all email addresses" | sort -u
```

This lets you pipe context directly into orla. Here's a second example:

```shell
git status | orla agent "Draft a short, imperative-mood commit message for these changes"
```

You can install one of Orla's tools (fs) and do file operations like this:

```shell
orla tool install fs
orla agent "find all TODO comments in *.c files in `pwd`" > todos.txt
```

You can also override the model:

```shell
orla agent "List all files in the current directory" --model ollama:ministral-3:3b
```

For integration with external MCP clients (like Claude Desktop), run Orla as a server:
```shell
# Start server on default port (8080)
orla serve

# Use stdio transport
orla serve --stdio
```

If no configuration file is specified, Orla will automatically check for orla.yaml in the current directory. If not found, the default configuration is used.
You can hot reload Orla to refresh tools and configuration without restarting:
```shell
kill -HUP $(pgrep orla)
```

The easiest way to get started is to install tools from the Orla Tool Registry:
```shell
# Install the latest version of a tool
orla install fs

# Install a specific version
orla install coinflip --version v0.1.0

# Search for available tools
orla search $search_term
```

Installed tools are automatically placed in the default tools directory and will be discovered by Orla when you start the server or use agent mode.
You can also create your own tools. Any executable can be a tool:
```shell
mkdir tools
cat > tools/hello.sh << 'EOF'
#!/bin/bash
echo "Hello from orla!"
EOF
chmod +x tools/hello.sh
```

Orla will automatically discover these tools and make them available.
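Since any executable can be a tool, the same pattern works for tools that process input. As a sketch, here is a hypothetical `wordcount` tool (the name and behavior are illustrative, not part of the registry) that simply wraps `wc`:

```shell
# Hypothetical custom tool: count words on stdin
mkdir -p tools
cat > tools/wordcount.sh << 'EOF'
#!/bin/bash
# Any executable like this can serve as an orla tool
wc -w
EOF
chmod +x tools/wordcount.sh
```

Because the tool is just a script, you can test it directly from your shell before handing it to Orla.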
Orla works out of the box with zero configuration, but you can customize it with a YAML config file. Configuration follows a precedence order:
- Environment variables (highest precedence), e.g. `ORLA_PORT=3000`
- Project config (`./orla.yaml` in the current directory)
- User config (`~/.orla/config.yaml`)
- Orla's defaults (lowest precedence)
If you create an orla.yaml file in your project directory, it will override the global user config for that project. This allows project-specific settings while maintaining global defaults.
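As a sketch, a project-level `orla.yaml` might override just one setting (the model choice below is illustrative), with everything else falling back to `~/.orla/config.yaml` or Orla's defaults:

```yaml
# ./orla.yaml: project-specific override
model: ollama:qwen3:0.6b
```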
- `tools_dir`: Directory containing executable tools (default: `.orla/tools`)
- `port`: HTTP server port (default: `8080`; ignored in stdio mode)
- `timeout`: Tool execution timeout in seconds (default: `30`)
- `log_format`: `"json"` or `"pretty"` (default: `"json"`)
- `log_level`: `"debug"`, `"info"`, `"warn"`, `"error"`, or `"fatal"` (default: `"info"`)
- `log_file`: Optional log file path (default: empty; logs to stderr)
- `model`: Model identifier (e.g., `"ollama:ministral-3:3b"`, `"ollama:qwen3:0.6b"`) (default: `"ollama:qwen3:0.6b"`)
- `max_tool_calls`: Maximum tool calls per prompt (default: `10`)
- `streaming`: Enable streaming responses (default: `true`)
- `output_format`: Output format, `"auto"`, `"rich"`, or `"plain"` (default: `"auto"`)
- `confirm_destructive`: Prompt for confirmation on destructive actions (default: `true`)
- `dry_run`: Default to dry-run mode (default: `false`)
- `show_thinking`: Show thinking trace output for thinking-capable models (default: `false`)
- `show_tool_calls`: Show detailed tool call information (default: `false`)
- `show_progress`: Show progress messages even when the UI is disabled (e.g., when stdin is piped) (default: `false`)
Create an orla.yaml file in your project directory:
```yaml
# Server mode configuration
tools_dir: ./tools
port: 8080
timeout: 30
log_format: json
log_level: info

# Agent mode configuration
model: ollama:llama3
max_tool_calls: 10
streaming: true
output_format: auto
confirm_destructive: true
show_thinking: false
show_tool_calls: true
```

You can also set configuration via environment variables. For example:
```shell
export ORLA_PORT=3000
export ORLA_MODEL=ollama:qwen3:1.7b
export ORLA_SHOW_TOOL_CALLS=true
```

If you prefer to install manually, make sure you have Go (1.25+) installed, then:
```shell
go install github.com/dorcha-inc/orla/cmd/orla@latest
```

Or build it locally by cloning this repository and running:

```shell
make build
```

and then install locally:

```shell
make install
```

Orla includes pre-commit hooks for secret detection, linting, and testing. To enable them, run this once:

```shell
git config core.hooksPath .githooks
```

This configures git to use hooks from `.githooks/` automatically; no setup script needed!
Orla comes with extensive tests, which can be run using:

```shell
make test
```

For integration tests, use:

```shell
make test-integration
```

For end-to-end tests, use:

```shell
make test-e2e
```

Orla is built for the community. Contributions are not just welcome; they are essential. Whether it's reporting a bug, suggesting a feature, or writing code, we'd love your help.
- Report a bug or request a feature
- Join us on Discord.
- Check out our CONTRIBUTING.md to get started.
All the amazing folks who have taken their time to contribute something cool to orla are listed in CONTRIBUTORS.md.
If Orla becomes a tool you love, please consider sponsoring the project. Your support helps us dedicate more time to maintenance and building the future of local AI.
If installed via Homebrew:

```shell
brew uninstall --cask orla
```

If installed via the install script:

```shell
curl -fsSL https://raw.githubusercontent.com/dorcha-inc/orla/main/scripts/uninstall.sh | sh
```

Note: The uninstall script only removes Orla; Ollama and downloaded models are left intact. To remove Ollama:
- If installed via Homebrew: run `brew uninstall ollama`
- Otherwise: visit https://ollama.ai or check your system's package manager