A transparent, minimal, and hackable agent framework for developers who want full control.
While large agent frameworks offer extensive features, they often become black boxes that are hard to understand, customize, or debug. AgentSilex takes a different approach:
- Transparent: Every line of code is readable and understandable. No magic, no hidden complexity.
- Minimal: Core implementation in ~300 lines. You can read the entire codebase in one sitting.
- Hackable: Designed for modification. Fork it, customize it, make it yours.
- Universal LLM Support: Built on LiteLLM, so you can switch among 100+ models (OpenAI, Anthropic, Google Gemini, DeepSeek, Azure, Mistral, local LLMs, and more) by changing a single model string; see the sketch after this list.
- Educational: Perfect for learning how agents actually work under the hood.
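For instance, switching providers is just a different model identifier in the `provider/model` format LiteLLM uses. A minimal sketch (the specific model names are illustrative):

```python
from agentsilex import Agent

# Identical agent definition; only the model string selects the provider.
agent = Agent(
    name="Assistant",
    model="openai/gpt-4o-mini",  # or "gemini/gemini-2.0-flash",
                                 # "deepseek/deepseek-chat", a local model, ...
    instructions="You are a helpful assistant.",
)
```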
AgentSilex is built for:
- Companies who need a customizable foundation for their agent systems
- Developers who want to understand agent internals, not just use them
- Educators teaching AI agent concepts
- Researchers prototyping new agent architectures
*Real-time streaming demo: see how AgentSilex processes a query and streams the result.*
Install from PyPI:

```bash
pip install agentsilex
```

Or with uv:

```bash
uv add agentsilex
```

A minimal agent with a single tool:

```python
from agentsilex import Agent, Runner, Session, tool

# Define a simple tool
@tool
def get_weather(city: str) -> str:
    """Get weather information for a city."""
    # In production, this would call a real weather API
    return "SUNNY"

# Create an agent with the weather tool
agent = Agent(
    name="Weather Assistant",
    model="gemini/gemini-2.0-flash",  # Switch models: openai/gpt-4, anthropic/claude-3-5-sonnet, deepseek/deepseek-chat, etc.
    instructions="Help users find weather information using the available tools.",
    tools=[get_weather],
)

# Create a session to track conversation history
session = Session()

# Run the agent with a user query
runner = Runner(session)
result = runner.run(agent, "What's the weather in Monte Cristo?")

# Output the result
print("Final output:", result.final_output)

# Access the conversation history
for message in session.get_dialogs():
    print(message)
```
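Because the `Session` holds the dialog history, reusing the same session across calls gives you a multi-turn conversation. A small sketch using only the API shown above (the follow-up question is illustrative):

```python
session = Session()
runner = Runner(session)

runner.run(agent, "What's the weather in Paris?")
# Same session, so the agent can resolve "there" from the previous turn.
result = runner.run(agent, "Should I bring an umbrella there?")
print(result.final_output)
```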
AgentSilex supports intelligent agent handoffs, allowing a main agent to route requests to specialized sub-agents:

```python
from agentsilex import Agent, Runner, Session, tool

# Specialized weather agent with tools
@tool
def get_weather(city: str) -> str:
    """Get weather information for a city."""
    return "SUNNY"

weather_agent = Agent(
    name="Weather Agent",
    model="openai/gpt-4o-mini",
    instructions="Help users find weather information using tools",
    tools=[get_weather],
)

# Specialized FAQ agent
faq_agent = Agent(
    name="FAQ Agent",
    model="openai/gpt-4o-mini",
    instructions="Answer frequently asked questions about our products",
)

# Main orchestrator agent
main_agent = Agent(
    name="Main Agent",
    model="openai/gpt-4o-mini",
    instructions="Route user questions to the appropriate specialist agent",
    handoffs=[weather_agent, faq_agent],
)

# Execute multi-agent workflow
session = Session()
result = Runner(session).run(main_agent, "What's the weather in Paris?")
print(result.final_output)
```

AgentSilex includes built-in OpenTelemetry tracing to visualize agent execution, tool calls, and handoffs.
To try it with Arize Phoenix:

```bash
# Install and start Phoenix
pip install arize-phoenix
python -m phoenix.server.main serve

# Set the OTLP endpoint
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"

# Run your agent, then view traces at http://localhost:6006
```

Phoenix will show a complete trace tree of your agent workflow, including all tool calls and agent handoffs.

Features:
- Single Agent Execution - Create and run individual agents with custom instructions
- Tool Calling - Agents can call external tools/functions to perform actions
- Tool Definition - Simple `@tool` decorator to define callable functions with automatic schema extraction
- Conversation Management - Session-based dialog history tracking across multiple turns
- Multi-Agent Handoff - Main agent can intelligently route requests to specialized sub-agents
- Agents as Tools - Any agent can be converted into a tool for another agent with `agent.as_tool()` (see the sketch after this list)
- Context Management - Share mutable state across tools and conversation turns
- Universal LLM Support - Built on LiteLLM for seamless model switching (OpenAI, Anthropic, Google, DeepSeek, Azure, and 100+ models)
- Type-Safe Tool Definitions - Automatic parameter schema extraction from Python type hints
- Transparent Architecture - ~300 lines of readable, hackable code
- Simple API - Intuitive `Agent`, `Runner`, `Session`, and `@tool` abstractions
- OpenTelemetry Observability - Built-in tracing compatible with Phoenix and other OTLP backends
- Streaming Support - Real-time response streaming with event-based architecture for better UX
- Agent Memory - Callback-based memory management for conversation history control
- MCP Client Support - Connect to Model Context Protocol servers to extend agent capabilities with external tools
- Custom Agent Behaviors - Pluggable callback system for implementing custom behaviors (ReAct, Chain-of-Thought, logging, etc.)
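A minimal sketch of the agents-as-tools pattern, reusing `weather_agent` from the handoff example; it assumes `as_tool()` can be called with no arguments (any naming or description parameters it may accept are not shown):

```python
# Reuse weather_agent from the handoff example above.
support_agent = Agent(
    name="Support Agent",
    model="openai/gpt-4o-mini",
    instructions="Answer user questions; call your tools for weather queries.",
    tools=[weather_agent.as_tool()],  # the whole agent becomes one callable tool
)

result = Runner(Session()).run(support_agent, "Is it sunny in Paris right now?")
print(result.final_output)
```

Unlike a handoff, which transfers the conversation to the sub-agent, an agent-as-tool returns its answer to the calling agent, which stays in control of the dialog.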
On the roadmap:
- Async Support - Asynchronous execution for improved performance
- Tool Call Error Handling - Graceful handling of failed tool executions
- Parallel Tool Execution - Execute multiple tool calls concurrently
- State Persistence - Save and restore agent sessions
- Built-in Tools Library - Common tools (web search, file operations, etc.)
- Human-in-the-Loop - Built-in approval flows for sensitive operations
- Agent Evaluation Framework - Test and evaluate agent performance