Getting Started | CLI Reference | Configuration | Changelog | Roadmap
Demo video (PTC-CLI-DEMO.mp4): Analyzing 2 years of NVDA, AMD & SPY stock data (15,000+ lines of raw JSON) using DeepSeek V3.2
This project is an open-source implementation of Anthropic's recently introduced Programmatic Tool Calling (PTC), which enables agents to invoke tools through code execution rather than individual JSON tool calls. The paradigm is also featured in Anthropic's earlier engineering blog, Code execution with MCP.
- LLMs are exceptionally good at writing code! They excel at understanding context, reasoning about data flows, and generating precise logic. PTC lets them do what they do best - write code that orchestrates entire workflows rather than reasoning through one tool call at a time.
- Traditional tool calling returns full results to the model's context window. Suppose you fetch one year of daily stock prices for 10 tickers: at roughly 252 trading days each, that is 2,500+ OHLCV data points polluting the context - tens of thousands of tokens just to compute a portfolio summary. With PTC, code runs in a sandbox, processes the data locally, and only the final output returns to the model. Result: 85-98% token reduction.
- PTC particularly shines when working with large volumes of structured data, time series data (like financial market data), and scenarios requiring further data processing - filtering, aggregating, transforming, or visualizing results before returning them to the model.
User Task
|
v
+-------------------+
| PTCAgent | Tool discovery -> Writes Python code
+-------------------+
| ^
v |
+-------------------+
| Daytona Sandbox | Executes code
| +-------------+ |
| | MCP Tools | | tool() -> process / filter / aggregate -> dump to data/ directory
| | (Python) | |
| +-------------+ |
+-------------------+
|
v
+-------------------+
|Final deliverables | Files and data can be downloaded from sandbox
+-------------------+
Built on LangChain DeepAgents - This project uses many components from DeepAgents, and the CLI feature was bootstrapped from deepagent-cli. Special thanks to the LangChain team!
Sandbox environment provided by Daytona.
- Interactive CLI - New `ptc-agent` command for terminal-based interaction with session persistence, plan mode, themes, and rich UI
- Background Subagent Execution - Subagents now run asynchronously using a "fire and collect" pattern, giving the agent proactive control over dispatched tasks. Results are automatically returned to the agent upon completion, even without explicitly calling `wait()`
- Task Monitoring - New `wait()` and `task_progress()` tools for monitoring and collecting background task results
- Vision/Multimodal Support - New `view_image` tool enables vision-capable LLMs to analyze images from URLs, base64 data, or sandbox files
- Universal MCP Support - Auto-converts any MCP server tools to Python functions
- Progressive Tool Discovery - Tools are discovered on demand, avoiding the large token cost of upfront tool definitions
- Custom MCP Upload - Deploy Python MCP implementations directly into sandbox sessions
- Enhanced File Tools - Refined glob, grep, and other file operation tools optimized for the sandbox environment
- Daytona Backend - Secure code execution with filesystem isolation and snapshot support
- Auto Image Upload - Charts and images auto-uploaded to cloud storage (Cloudflare R2, AWS S3, Alibaba OSS)
- LangGraph Ready - Compatible with LangGraph Cloud/Studio deployment
- Multi-LLM Support - Works with Anthropic, OpenAI, and any LLM provider you configure in `llms.json`
├── libs/
│ ├── ptc-agent/ # Core agent library
│ │ └── ptc_agent/
│ │ ├── core/ # Sandbox, MCP registry, tool generator, session
│ │ ├── config/ # Configuration classes and loaders
│ │ ├── agent/ # PTCAgent, tools, prompts, middleware, subagents
│ │ └── utils/ # Cloud storage uploaders
│ │
│ └── ptc-cli/ # Interactive CLI application
│ └── ptc_cli/
│ ├── core/ # State, config, theming
│ ├── commands/ # Slash commands, bash execution
│ ├── display/ # Rich terminal rendering
│ ├── input/ # Prompt, completers, file mentions
│ └── streaming/ # Tool approval, execution
│
├── mcp_servers/ # Demo MCP server implementations
│ ├── yfinance_mcp_server.py
│ └── tickertick_mcp_server.py
│
├── example/ # Demo notebooks and scripts
│ ├── PTC_Agent.ipynb
│ ├── Subagent_demo.ipynb
│ └── quickstart.py
│
├── config.yaml # Main configuration
└── llms.json # LLM provider definitions
The agent has access to native tools plus middleware capabilities from DeepAgents:
| Tool | Description | Key Parameters |
|---|---|---|
| execute_code | Execute Python with MCP tool access | code |
| Bash | Run shell commands | command, timeout, working_dir |
| Read | Read file with line numbers | file_path, offset, limit |
| Write | Write/overwrite file | file_path, content |
| Edit | Exact string replacement | file_path, old_string, new_string |
| Glob | File pattern matching | pattern, path |
| Grep | Content search (ripgrep) | pattern, path, output_mode |
| Middleware | Description | Tools Provided |
|---|---|---|
| SubagentsMiddleware | Delegates specialized tasks to sub-agents with isolated execution | task() |
| BackgroundSubagentMiddleware | Async subagent execution with fire and collect pattern | wait(), task_progress() |
| ViewImageMiddleware | Injects images into conversation for multimodal LLMs | view_image() |
| FilesystemMiddleware | File operations | read_file, write_file, edit_file, glob, grep, ls |
| TodoListMiddleware | Task planning and progress tracking (auto-enabled) | write_todos |
| SummarizationMiddleware | Auto-summarizes conversation history (auto-enabled) | - |
Available Subagents (Default):
- `research` - Web search with Tavily + think tool for strategic reflection
- `general-purpose` - Full execute_code, filesystem, and vision tools for complex multi-step tasks
Subagents run in the background by default - the main agent can continue working while delegated tasks execute asynchronously.
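To picture the "fire and collect" pattern, here is a minimal, self-contained asyncio sketch. It illustrates the concept only (dispatch tasks, keep working, then gather results as they finish); it is not the project's actual `wait()`/`task_progress()` implementation:

```python
import asyncio

async def subagent(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)          # stand-in for a delegated task
    return f"{name}: finished after {seconds}s"

async def main() -> None:
    # Fire: dispatch subagents without awaiting them right away
    tasks = [
        asyncio.create_task(subagent("research", 2)),
        asyncio.create_task(subagent("general-purpose", 1)),
    ]

    print("main agent keeps working while subagents run...")

    # Collect: gather results as each task completes (loosely analogous to wait())
    for finished in asyncio.as_completed(tasks):
        print(await finished)

asyncio.run(main())
```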
The demo includes 3 enabled MCP servers configured in config.yaml:
| Server | Transport | Tools | Purpose |
|---|---|---|---|
| tavily | stdio (npx) | 4 | Web search |
| yfinance | stdio (python) | 21 | Stock prices, financials |
| tickertick | stdio (python) | 7 | Financial news |
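For reference, a stdio server along the lines of mcp_servers/yfinance_mcp_server.py can be built with the official MCP Python SDK. The sketch below is illustrative only (one tool rather than the real server's 21) and assumes the `mcp` and `yfinance` packages are installed:

```python
# Minimal stdio MCP server sketch (illustration, not the bundled demo server)
from mcp.server.fastmcp import FastMCP
import yfinance as yf

mcp = FastMCP("yfinance-demo")

@mcp.tool()
def get_stock_history(ticker: str, period: str = "1y") -> list[dict]:
    """Return daily OHLCV rows for a ticker as plain dicts."""
    df = yf.Ticker(ticker).history(period=period)
    df.index = df.index.strftime("%Y-%m-%d")   # make dates JSON-friendly
    return df.reset_index().to_dict(orient="records")

if __name__ == "__main__":
    mcp.run()   # stdio transport by default, matching the table above
```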
In Prompts - Tool summaries are injected into the system prompt:
tavily: Web search engine for finding current information
- Module: tools/tavily.py
- Tools: 4 tools available
- Import: from tools.tavily import <tool_name>
In Sandbox - Full Python modules are generated:
/home/daytona/
├── tools/
│ ├── mcp_client.py # MCP communication layer
│ ├── tavily.py # from tools.tavily import search
│ ├── yfinance.py # from tools.yfinance import get_stock_history
│ └── docs/ # Auto-generated documentation
│ ├── tavily/*.md
│ └── yfinance/*.md
├── results/ # Agent output
└── data/ # Input data
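Conceptually, each generated module is a thin proxy over the MCP communication layer. A hypothetical sketch of what tools/yfinance.py might look like - the actual generated code may differ, and `call_tool` is an assumed helper name, not a documented API:

```python
# Hypothetical generated wrapper (tools/yfinance.py); illustrative only.
# call_tool is an assumed helper in the generated mcp_client.py.
from tools.mcp_client import call_tool

def get_stock_history(ticker: str, period: str = "1y") -> dict:
    """Forward the call to the yfinance MCP server and return its result."""
    return call_tool(server="yfinance", tool="get_stock_history",
                     arguments={"ticker": ticker, "period": period})
```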
In Code - Agent imports and uses tools directly:
from tools.yfinance import get_stock_history
import pandas as pd
# Fetch data - stays in sandbox
history = get_stock_history(ticker="AAPL", period="1y")
# Process locally - no tokens wasted
df = pd.DataFrame(history)
summary = {"mean": df["close"].mean(), "volatility": df["close"].std()}
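# Optionally persist the full dataset to the sandbox results/ directory so it
# can be downloaded later (illustrative addition; the relative path assumes the
# /home/daytona working directory shown above)
df.to_csv("results/aapl_history.csv", index=False)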
# Only summary returns to model
print(summary)
Prerequisites:
- Python 3.12+
- Node.js (for MCP servers)
- uv package manager
git clone https://github.com/Chen-zexi/open-ptc-agent.git
cd open-ptc-agent
uv sync
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
Create a .env file with the minimum required keys:
# One LLM provider (choose one)
ANTHROPIC_API_KEY=your-key
# or
OPENAI_API_KEY=your-key
# or
# Any model you configured in llms.json and config.yaml
# You can also use Coding plans from Minimax and GLM here!
# Daytona (required)
DAYTONA_API_KEY=your-key
Get your Daytona API key from the Daytona Dashboard. They provide free credits for new users!
For full functionality, add optional keys:
# MCP Servers
TAVILY_API_KEY=your-key # Web search
ALPHA_VANTAGE_API_KEY=your-key # Financial data
# Cloud Storage (choose one provider)
R2_ACCESS_KEY_ID=... # Cloudflare R2
AWS_ACCESS_KEY_ID=... # AWS S3
OSS_ACCESS_KEY_ID=... # Alibaba OSS
# Tracing (optional)
LANGSMITH_API_KEY=your-key
See .env.example for the complete list of environment variables.
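If you call the agent library programmatically (for example from the notebooks or quickstart.py below), these keys must be present in the process environment. A minimal sketch, assuming python-dotenv is available in the environment created by `uv sync`:

```python
# Load .env into the process environment before constructing the agent
# (assumption: python-dotenv is installed; the CLI may handle this for you)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
assert os.getenv("DAYTONA_API_KEY"), "DAYTONA_API_KEY is required for the sandbox"
```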
Start the interactive CLI:
ptc-agent
See the ptc-cli documentation for all commands and options.
For programmatic usage of PTC Agent, see the ptc-agent documentation.
For Jupyter notebook examples:
- PTC_Agent.ipynb - Quick demo with open-ptc-agent
- Subagent_demo.ipynb - Background subagent execution
- quickstart.py - Python script quickstart
Optionally, use the LangGraph API to deploy the agent.
The project uses two configuration files:
- config.yaml - Main configuration (LLM selection, MCP servers, Daytona, security, storage)
- llms.json - LLM provider definitions
Select your LLM in config.yaml:
llm:
  name: "claude-sonnet-4-5"  # Options: claude-sonnet-4-5, gpt-5.1-codex-mini, gemini-3-pro
Enable/disable MCP servers:
mcp:
  servers:
    - name: "tavily"
      enabled: true  # Set to false to disable
For complete configuration options including Daytona settings, security policies, and adding custom LLM providers, see the Configuration Guide.
The ptc-agent command provides an interactive terminal interface with:
- Session persistence and sandbox reuse
- Slash commands (`/help`, `/files`, `/view`, `/download`)
- Bash execution with `!command`
- File mentions with `@path/to/file`
- Customizable themes and color palettes
Quick start:
ptc-agent # Start interactive session
ptc-agent --plan-mode # Enable plan approval before execution
ptc-agent list       # List available agents
For complete CLI documentation including all options, commands, keyboard shortcuts, and theming configuration, see the CLI Reference.
Planned features and improvements:
- CLI Version for PTC Agent
- CI/CD pipeline for automated testing
- Additional MCP server integrations / More example notebooks
- Performance benchmarks and optimizations
- Improved search tool for smoother tool discovery
- Claude skill integration
We welcome contributions from the community! Here are some ways you can help:
- Code Contributions - Bug fixes, new features, improvements (CI/CD coming soon)
- Use Cases - Share how you're using PTC in production or research
- Example Notebooks - Create demos showcasing different workflows
- MCP Servers - Build or recommend MCP servers that work well with PTC (data processing, APIs, etc.)
- Prompt Tricks - Share prompting techniques that improve agent performance
Open an issue or PR on GitHub to contribute!
This project builds on research and tools from:
Research/Articles
- Introducing advanced tool use on the Claude Developer Platform - Anthropic
- Code execution with MCP: building more efficient AI agents - Anthropic
- CodeAct: Executable Code Actions Elicit Better LLM Agents - Wang et al.
Frameworks and Infrastructure
- LangChain DeepAgents - Base Agent Framework
- Daytona - Sandbox infrastructure
If you find this project useful, please consider giving it a star! It helps others discover this work.
MIT License