Langrepl

Interactive terminal CLI for building and running LLM agents. Built with LangChain, LangGraph, Prompt Toolkit, and Rich.

[Demo video: demo.mov]

Features

  • Deep Agent Architecture - Planning tools, virtual filesystem, and sub-agent delegation for complex multi-step tasks
  • LangGraph Server Mode - Run agents as API servers with LangGraph Studio integration for visual debugging
  • Multi-Provider LLM Support - OpenAI, Anthropic, Google, AWS Bedrock, DeepSeek, ZhipuAI, and local models (LM Studio, Ollama)
  • Extensible Tool System - File operations, web search, terminal access, grep search, and MCP server integration
  • Persistent Conversations - SQLite-backed thread storage with resume, replay, and compression
  • User Memory - Project-specific custom instructions and preferences that persist across conversations
  • Human-in-the-Loop - Configurable tool approval system with regex-based allow/deny rules
  • Cost Tracking (Beta) - Token usage and cost calculation per conversation
  • MCP Server Support - Integrate external tool servers via the MCP protocol

Prerequisites

  • Python 3.13+ - Required for the project
  • uv - Fast Python package installer (see https://docs.astral.sh/uv/ for install instructions)
  • ripgrep (rg) - Fast search tool used by the grep_search functionality:
    • macOS: brew install ripgrep
    • Ubuntu/Debian: sudo apt install ripgrep
    • Arch Linux: sudo pacman -S ripgrep
    • Windows: choco install ripgrep or download from the ripgrep GitHub releases page
  • Node.js & npm (optional) - Required only if using MCP servers that run via npx
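
To sanity-check these prerequisites, here is a minimal Python sketch (not part of the project):

# Quick check that the required tooling is present
import shutil
import sys

assert sys.version_info >= (3, 13), "Python 3.13+ required"
for tool in ("uv", "rg"):
    if shutil.which(tool) is None:
        print(f"missing: {tool}")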

Installation

The .langrepl config directory is created in your working directory (use -w to point it elsewhere). The CLI is available under two aliases: langrepl and lg.

From GitHub

Quick try (no installation):

uvx --from git+https://github.com/midodimori/langrepl langrepl
uvx --from git+https://github.com/midodimori/langrepl langrepl -w /path  # specify working dir

Install globally:

uv tool install git+https://github.com/midodimori/langrepl

Then run from any directory:

langrepl              # or: lg
langrepl -w /path     # specify working directory

From Source

Clone and install:

git clone https://github.com/midodimori/langrepl.git
cd langrepl
make install
uv tool install --editable .

Then run from any directory (same as above).

Configure API Keys

Set API keys via .env:

LLM__OPENAI_API_KEY=your_openai_api_key_here
LLM__ANTHROPIC_API_KEY=your_anthropic_api_key_here
LLM__GOOGLE_API_KEY=your_google_api_key_here
LLM__DEEPSEEK_API_KEY=your_deepseek_api_key_here
LLM__ZHIPUAI_API_KEY=your_zhipuai_api_key_here
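
The double-underscore prefix suggests nested settings. A minimal sketch of how such variables could map using pydantic-settings (an assumption — the class and field names below are hypothetical, not the project's):

# Hypothetical mapping of LLM__* variables via pydantic-settings
from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict

class LLMKeys(BaseModel):
    openai_api_key: str | None = None
    anthropic_api_key: str | None = None

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", env_nested_delimiter="__")
    llm: LLMKeys = LLMKeys()

# LLM__OPENAI_API_KEY=...  ->  Settings().llm.openai_api_key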

Tracing

LangSmith

Add to .env:

LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY="your_langsmith_api_key"
LANGCHAIN_PROJECT="your_project_name"

Quick Start

Langrepl ships with multiple prebuilt agents:

  • general (default) - General-purpose agent for research, writing, analysis, and planning
  • claude-style-coder - Software development agent mimicking Claude Code's behavior
  • code-reviewer - Code review agent focusing on quality and best practices

Interactive Chat Mode

langrepl              # Start interactive session (general agent by default)
langrepl -a general   # Use specific agent
langrepl -r           # Resume last conversation
langrepl -am ACTIVE   # Set approval mode (SEMI_ACTIVE, ACTIVE, AGGRESSIVE)
langrepl -w /path     # Set working directory
lg                    # Quick alias

LangGraph Server Mode

langrepl -s -a general                # Start LangGraph server
langrepl -s -a general -am ACTIVE     # With approval mode

# Server: http://localhost:2024
# Studio: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
# API Docs: http://localhost:2024/docs

Server features:

  • Auto-generates langgraph.json configuration
  • Creates/updates assistants via LangGraph API
  • Enables visual debugging with LangGraph Studio
  • Supports all agent configs and MCP servers
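
Once the server is running, it can also be queried programmatically. A minimal sketch using the langgraph_sdk client (the printed field is an assumption about the assistant payload):

# Query the local LangGraph server started above
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")
    assistants = await client.assistants.search()
    print([a["name"] for a in assistants])

asyncio.run(main())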

Interactive Commands

Conversation Management

/resume - Switch between conversation threads

Shows a list of all saved threads with timestamps. Select one to continue that conversation.

/replay - Branch from previous message

Shows all previous human messages in the current thread. Select one to branch from that point while preserving the original conversation.

/compress - Compress conversation history

Compresses messages using LLM summarization to reduce token usage, creating a new thread with the compressed history (e.g., 150 messages/45K tokens → 3 messages/8K tokens).

/clear - Start new conversation

Clears the screen and starts a new conversation thread while keeping the previous thread saved.

Configuration

/agents - Switch agent

Shows all configured agents with interactive selector. Switch between specialized agents (e.g., coder, researcher, analyst).

/model - Switch LLM model

Shows all configured models with interactive selector. Switch between models for cost/quality tradeoffs.

/tools - View available tools

Lists all tools from impl/, internal/, and MCP servers.

/mcp - Manage MCP servers

View and toggle enabled/disabled MCP servers interactively.

/memory - Edit user memory

Opens .langrepl/memory.md for custom instructions and preferences. Content is automatically injected into agent prompts.

Advanced: Use the {user_memory} placeholder in custom agent prompts to control placement. If omitted, memory is appended to the end of the prompt. An example follows.
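
For example, a hypothetical .langrepl/prompts/my_agent.md that places memory explicitly:

You are a helpful assistant for this project.

{user_memory}

Follow the preferences above when answering.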

Utilities

/graph [--browser] - Visualize agent graph

Renders in terminal (ASCII) or opens in browser with --browser flag.

/help - Show help
/exit - Exit application

Usage

Configs are auto-generated in .langrepl/ on first run.

Agents (config.agents.yml)

agents:
  - name: my-agent
    prompt: prompts/my_agent.md  # single file or array of files; resolved under .langrepl/ (here: .langrepl/prompts/my_agent.md)
    llm: haiku-4.5
    checkpointer: sqlite
    recursion_limit: 40
    tool_output_max_tokens: 10000
    default: true
    tools:
      - impl:file_system:read_file
      - mcp:context7:resolve-library-id
    subagents:
      - general-purpose
    compression:
      auto_compress_enabled: true
      auto_compress_threshold: 0.8
      compression_llm: haiku-4.5

Tool naming: <category>:<module>:<function> with wildcard support (*, ?, [seq])

  • impl:*:* - All built-in tools
  • impl:file_system:read_* - All read_* tools in file_system
  • mcp:server:* - All tools from MCP server
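
The (*, ?, [seq]) syntax is the same as Python's fnmatch module; a sketch of how such patterns resolve (illustrative — not necessarily the project's actual resolver):

# Wildcard matching over tool names, fnmatch-style
from fnmatch import fnmatch

tools = [
    "impl:file_system:read_file",
    "impl:file_system:write_file",
    "mcp:context7:resolve-library-id",
]
print([t for t in tools if fnmatch(t, "impl:file_system:read_*")])
# -> ['impl:file_system:read_file']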

LLMs (config.llms.yml)

llms:
  - model: claude-haiku-4-5
    alias: haiku-4.5
    provider: anthropic
    max_tokens: 10000
    temperature: 0.1
    context_window: 200000
    input_cost_per_mtok: 1.00
    output_cost_per_mtok: 5.00
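
Given these per-million-token rates, per-conversation cost (as reported by the beta cost tracking) reduces to simple arithmetic; an illustrative sketch:

# Cost arithmetic implied by the fields above (illustrative only)
input_cost_per_mtok, output_cost_per_mtok = 1.00, 5.00
input_tokens, output_tokens = 45_000, 8_000  # example counts
cost = (input_tokens * input_cost_per_mtok + output_tokens * output_cost_per_mtok) / 1_000_000
print(f"${cost:.4f}")  # -> $0.0850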

Custom Tools

  1. Implement in src/tools/impl/my_tool.py:

    from src.tools.wrapper import approval_tool

    @approval_tool()
    def my_tool(query: str) -> str:
        """Tool description."""
        return f"Processed: {query}"
  2. Register in src/tools/factory.py:

    MY_TOOLS = [my_tool]
    self.impl_tools.extend(MY_TOOLS)  # inside the factory class, hence `self`
  3. Reference: impl:my_tool:my_tool

MCP Servers (config.mcp.json)

{
  "mcpServers": {
    "my-server": {
      "command": "uvx",
      "args": ["my-mcp-package"],
      "transport": "stdio",
      "enabled": true,
      "include": ["tool1"],
      "exclude": [],
      "repair_command": ["sh", "-c", "rm -rf .some_cache"]
    }
  }
}
  • repair_command: If the server fails to start, this command runs before the connection is retried
  • Suppress stderr: "command": "sh", "args": ["-c", "npx pkg 2>/dev/null"]
  • Reference: mcp:my-server:tool1
  • Examples: useful-mcp-servers.json

Sub-Agents (config.subagents.yml)

Sub-agents use the same config structure as main agents.

subagents:
  - name: code-reviewer
    prompt: prompts/code-reviewer.md
    tools: [impl:file_system:read_file]

To add a custom sub-agent: create a prompt file, add an entry to the config above, and reference it in the parent agent's subagents list.

Tool Approval (config.approval.json)

{
  "always_allow": [
    {
      "name": "read_file",
      "args": null
    },
    {
      "name": "run_command",
      "args": "pwd"
    }
  ],
  "always_deny": [
    {
      "name": "run_command",
      "args": "rm -rf /.*"
    }
  ]
}

Modes: SEMI_ACTIVE (ask unless whitelisted), ACTIVE (auto-approve except denied), AGGRESSIVE (bypass all)
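
A sketch of how these regex rules could be evaluated (illustrative — the project's actual matching logic may differ):

import re

always_deny = [{"name": "run_command", "args": r"rm -rf /.*"}]

def denied(tool_name: str, args: str) -> bool:
    # A rule matches when the tool name is equal and the regex covers the args
    return any(
        rule["name"] == tool_name and re.fullmatch(rule["args"], args)
        for rule in always_deny
    )

print(denied("run_command", "rm -rf /tmp"))  # -> True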

Development

For local development without global install:

git clone https://github.com/midodimori/langrepl.git
cd langrepl
make install

Run from within repository:

uv run langrepl              # Start interactive session
uv run langrepl -w /path     # Specify working directory
uv run langrepl -s -a general  # Start LangGraph server

Development commands:

make install      # Install dependencies + pre-commit hooks
make lint-fix     # Format and lint code
make test         # Run tests
make pre-commit   # Run pre-commit on all files
make bump-patch   # Bump version (0.1.0 → 0.1.1)
make clean        # Remove cache/build artifacts

License

This project is licensed under the MIT License - see the LICENSE file for details.
