Interactive terminal CLI for building and running LLM agents. Built with LangChain, LangGraph, Prompt Toolkit, and Rich.
- Deep Agent Architecture - Planning tools, virtual filesystem, and sub-agent delegation for complex multi-step tasks
- LangGraph Server Mode - Run agents as API servers with LangGraph Studio integration for visual debugging
- Multi-Provider LLM Support - OpenAI, Anthropic, Google, AWS Bedrock, DeepSeek, ZhipuAI, and local models (LM Studio, Ollama)
- Extensible Tool System - File operations, web search, terminal access, grep search, and MCP server integration
- Persistent Conversations - SQLite-backed thread storage with resume, replay, and compression
- User Memory - Project-specific custom instructions and preferences that persist across conversations
- Human-in-the-Loop - Configurable tool approval system with regex-based allow/deny rules
- Cost Tracking (Beta) - Token usage and cost calculation per conversation
- MCP Server Support - Integrate external tool servers via the MCP protocol
- Python 3.13+ - Required for the project
- uv - Fast Python package installer (install instructions)
- ripgrep (rg) - Fast search tool used by the grep_search functionality:
  - macOS: `brew install ripgrep`
  - Ubuntu/Debian: `sudo apt install ripgrep`
  - Arch Linux: `sudo pacman -S ripgrep`
  - Windows: `choco install ripgrep` or download from releases
- Node.js & npm (optional) - Required only if using MCP servers that run via npx
The .langrepl config directory is created in your working directory (or use -w to specify a location).
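A rough sketch of what that directory typically holds once generated; the exact file names below are assumptions inferred from the config examples later in this README:

```
.langrepl/
├── agents.yaml      # agent definitions
├── llms.yaml        # model aliases, limits, pricing
├── mcp.json         # MCP server definitions
├── approval.json    # always_allow / always_deny rules
├── memory.md        # user memory injected into prompts
└── prompts/         # custom agent prompt files
```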
Aliases: langrepl or lg
Quick try (no installation):

```bash
uvx --from git+https://github.com/midodimori/langrepl langrepl
uvx --from git+https://github.com/midodimori/langrepl langrepl -w /path  # specify working dir
```

Install globally:

```bash
uv tool install git+https://github.com/midodimori/langrepl
```

Then run from any directory:

```bash
langrepl           # or: lg
langrepl -w /path  # specify working directory
```

Clone and install:

```bash
git clone https://github.com/midodimori/langrepl.git
cd langrepl
make install
uv tool install --editable .
```

Then run from any directory (same as above).
Set API keys via .env:

```bash
LLM__OPENAI_API_KEY=your_openai_api_key_here
LLM__ANTHROPIC_API_KEY=your_anthropic_api_key_here
LLM__GOOGLE_API_KEY=your_google_api_key_here
LLM__DEEPSEEK_API_KEY=your_deepseek_api_key_here
LLM__ZHIPUAI_API_KEY=your_zhipuai_api_key_here
```

To enable LangSmith tracing, add to .env:

```bash
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY="your_langsmith_api_key"
LANGCHAIN_PROJECT="your_project_name"
```

Langrepl ships with multiple prebuilt agents:
- general (default) - General-purpose agent for research, writing, analysis, and planning
- claude-style-coder - Software development agent mimicking Claude Code's behavior
- code-reviewer - Code review agent focusing on quality and best practices
```bash
langrepl               # Start interactive session (general agent by default)
langrepl -a general    # Use specific agent
langrepl -r            # Resume last conversation
langrepl -am ACTIVE    # Set approval mode (SEMI_ACTIVE, ACTIVE, AGGRESSIVE)
langrepl -w /path      # Set working directory
lg                     # Quick alias
```

```bash
langrepl -s -a general             # Start LangGraph server
langrepl -s -a general -am ACTIVE  # With approval mode
# Server: http://localhost:2024
# Studio: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
# API Docs: http://localhost:2024/docs
```

Server features:
- Auto-generates langgraph.json configuration
- Creates/updates assistants via LangGraph API (see the client sketch below)
- Enables visual debugging with LangGraph Studio
- Supports all agent configs and MCP servers
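Once the server is running, any LangGraph SDK client can talk to it. A minimal sketch, assuming the `langgraph_sdk` package is installed and the server above is up; which assistants appear depends on your agent config:

```python
from langgraph_sdk import get_sync_client

# Connect to the locally running langrepl server
client = get_sync_client(url="http://localhost:2024")

# List the assistants langrepl registered
for assistant in client.assistants.search():
    print(assistant["assistant_id"], assistant["name"])
```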
/resume - Switch between conversation threads
Shows a list of all saved threads with timestamps. Select one to continue that conversation.
/replay - Branch from previous message
Shows all previous human messages in current thread. Select one to branch from that point while preserving the original conversation.
/compress - Compress conversation history
Compresses messages using LLM summarization to reduce token usage. Creates new thread with compressed history (e.g., 150 messages/45K tokens → 3 messages/8K tokens).
/clear - Start new conversation
Clear screen and start a new conversation thread while keeping previous thread saved.
/agents - Switch agent
Shows all configured agents with an interactive selector. Switch between specialized agents (e.g., coder, researcher, analyst).
/model - Switch LLM model
Shows all configured models with an interactive selector. Switch between models for cost/quality tradeoffs.
/tools - View available tools
Lists all tools from impl/, internal/, and MCP servers.
/mcp - Manage MCP servers
View and toggle enabled/disabled MCP servers interactively.
/memory - Edit user memory
Opens .langrepl/memory.md for custom instructions and preferences. Content is automatically injected into agent prompts.
Advanced: Use the {user_memory} placeholder in custom agent prompts to control placement. If omitted, memory is auto-appended to the end.
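For instance, a custom prompt file might place memory mid-prompt; the file name and contents here are illustrative:

```markdown
<!-- .langrepl/prompts/my_agent.md -->
You are a focused research assistant.

{user_memory}

Keep answers concise and cite sources.
```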
/graph [--browser] - Visualize agent graph
Renders in terminal (ASCII) or opens in browser with --browser flag.
/help - Show help
/exit - Exit application
Configs are auto-generated in .langrepl/ on first run.
```yaml
agents:
  - name: my-agent
    prompt: prompts/my_agent.md  # Single file or array of files; resolves to `.langrepl/prompts/my_agent.md`
    llm: haiku-4.5
    checkpointer: sqlite
    recursion_limit: 40
    tool_output_max_tokens: 10000
    default: true
    tools:
      - impl:file_system:read_file
      - mcp:context7:resolve-library-id
    subagents:
      - general-purpose
    compression:
      auto_compress_enabled: true
      auto_compress_threshold: 0.8
      compression_llm: haiku-4.5
```

Tool naming: `<category>:<module>:<function>` with wildcard support (`*`, `?`, `[seq]`); matching is illustrated in the sketch after the examples below.
- `impl:*:*` - All built-in tools
- `impl:file_system:read_*` - All read_* tools in file_system
- `mcp:server:*` - All tools from an MCP server
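The wildcard set matches Python's fnmatch semantics, so pattern resolution can be pictured like this; a sketch only, with invented tool names, not langrepl's actual resolver:

```python
from fnmatch import fnmatch

tools = [
    "impl:file_system:read_file",
    "impl:file_system:read_many",
    "impl:terminal:run_command",
    "mcp:context7:resolve-library-id",
]

def select(pattern: str) -> list[str]:
    """Return tool names matching a <category>:<module>:<function> pattern."""
    return [t for t in tools if fnmatch(t, pattern)]

print(select("impl:file_system:read_*"))  # both read_* tools
print(select("mcp:*:*"))                  # all MCP tools
```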
```yaml
llms:
  - model: claude-haiku-4-5
    alias: haiku-4.5
    provider: anthropic
    max_tokens: 10000
    temperature: 0.1
    context_window: 200000
    input_cost_per_mtok: 1.00
    output_cost_per_mtok: 5.00
```
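The per-million-token rates feed the cost tracking feature; the arithmetic amounts to the sketch below. The helper name is hypothetical and the token counts are invented; rates are the haiku-4.5 example above:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_cost_per_mtok: float, output_cost_per_mtok: float) -> float:
    """Cost in dollars, given token counts and per-million-token rates."""
    return (input_tokens * input_cost_per_mtok
            + output_tokens * output_cost_per_mtok) / 1_000_000

# 45K input + 8K output tokens at $1.00/$5.00 per Mtok
print(f"${estimate_cost(45_000, 8_000, 1.00, 5.00):.4f}")  # $0.0850
```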
1. Implement in src/tools/impl/my_tool.py:

```python
from src.tools.wrapper import approval_tool


@approval_tool()
def my_tool(query: str) -> str:
    """Tool description."""
    result = f"Processed: {query}"  # replace with real tool logic
    return result
```

2. Register in src/tools/factory.py:

```python
MY_TOOLS = [my_tool]
self.impl_tools.extend(MY_TOOLS)
```

3. Reference: impl:my_tool:my_tool
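To hand the new tool to an agent, it would then be listed under the agent's tools, following the agents config shown earlier:

```yaml
tools:
  - impl:my_tool:my_tool
```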
```json
{
  "mcpServers": {
    "my-server": {
      "command": "uvx",
      "args": ["my-mcp-package"],
      "transport": "stdio",
      "enabled": true,
      "include": ["tool1"],
      "exclude": [],
      "repair_command": ["sh", "-c", "rm -rf .some_cache"]
    }
  }
}
```

- repair_command: If the server fails to start, this command runs before the retry
- Suppress stderr: `"command": "sh", "args": ["-c", "npx pkg 2>/dev/null"]`
- Reference: mcp:my-server:tool1
- Examples: useful-mcp-servers.json
Sub-agents use the same config structure as main agents.
```yaml
subagents:
  - name: code-reviewer
    prompt: prompts/code-reviewer.md
    tools: [impl:file_system:read_file]
```

Add custom: create a prompt, add it to the config above, and reference it in the parent agent's subagents list.
```json
{
  "always_allow": [
    {
      "name": "read_file",
      "args": null
    },
    {
      "name": "run_command",
      "args": "pwd"
    }
  ],
  "always_deny": [
    {
      "name": "run_command",
      "args": "rm -rf /.*"
    }
  ]
}
```

Modes: SEMI_ACTIVE (ask unless whitelisted), ACTIVE (auto-approve except denied), AGGRESSIVE (bypass all)
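Since rules are regex-based, deny matching presumably behaves like a full-match test against the rendered tool arguments; an illustrative sketch under that assumption, not langrepl's actual logic:

```python
import re

always_deny = [{"name": "run_command", "args": r"rm -rf /.*"}]

def is_denied(tool_name: str, args: str) -> bool:
    """True if any deny rule's name matches and its args regex matches fully."""
    return any(
        rule["name"] == tool_name and re.fullmatch(rule["args"], args)
        for rule in always_deny
        if rule["args"] is not None
    )

print(is_denied("run_command", "rm -rf /home"))  # True
print(is_denied("run_command", "ls -la"))        # False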
For local development without a global install:

```bash
git clone https://github.com/midodimori/langrepl.git
cd langrepl
make install
```

Run from within the repository:

```bash
uv run langrepl                 # Start interactive session
uv run langrepl -w /path        # Specify working directory
uv run langrepl -s -a general   # Start LangGraph server
```

Development commands:

```bash
make install     # Install dependencies + pre-commit hooks
make lint-fix    # Format and lint code
make test        # Run tests
make pre-commit  # Run pre-commit on all files
make bump-patch  # Bump version (0.1.0 → 0.1.1)
make clean       # Remove cache/build artifacts
```

This project is licensed under the MIT License - see the LICENSE file for details.