Build production-ready AI agents with structured outputs, tool integration, and multi-LLM support
GitHub • Website • Documentation
Flo AI is a Python framework that makes building production-ready AI agents and teams as easy as writing YAML. Think "Kubernetes for AI Agents" - compose complex AI architectures using pre-built components while maintaining the flexibility to create your own.
- **Truly Composable**: Build complex AI systems by combining smaller, reusable components
- **Production-Ready**: Built-in best practices and optimizations for production deployments
- **YAML-First**: Define your entire agent architecture in simple YAML
- **LLM-Powered Routing**: Intelligent routing decisions made by LLMs, no code required
- **Flexible**: Use pre-built components or create your own
- **Team-Oriented**: Create and manage teams of AI agents working together
- **OpenTelemetry Integration**: Built-in observability with automatic instrumentation
- Quick Start
- Flo AI Studio - Visual Workflow Designer
- Core Features
- Agent Orchestration with Arium
- OpenTelemetry Integration
- Examples & Documentation
- Why Flo AI?
- Contributing
```bash
pip install flo-ai
# or using poetry
poetry add flo-ai
```

```python
import asyncio
from flo_ai.builder.agent_builder import AgentBuilder
from flo_ai.llm import OpenAI

async def main():
    # Create a simple conversational agent
    agent = (
        AgentBuilder()
        .with_name('Math Tutor')
        .with_prompt('You are a helpful math tutor.')
        .with_llm(OpenAI(model='gpt-4o-mini'))
        .build()
    )

    response = await agent.run('What is the formula for the area of a circle?')
    print(f'Response: {response}')

asyncio.run(main())
```

```python
import asyncio
from flo_ai.builder.agent_builder import AgentBuilder
from flo_ai.tool import flo_tool
from flo_ai.llm import Anthropic

@flo_tool(description="Perform mathematical calculations")
async def calculate(operation: str, x: float, y: float) -> float:
    """Calculate mathematical operations between two numbers."""
    operations = {
        'add': lambda: x + y,
        'subtract': lambda: x - y,
        'multiply': lambda: x * y,
        'divide': lambda: x / y if y != 0 else 0,
    }
    return operations.get(operation, lambda: 0)()

async def main():
    agent = (
        AgentBuilder()
        .with_name('Calculator Assistant')
        .with_prompt('You are a math assistant that can perform calculations.')
        .with_llm(Anthropic(model='claude-3-5-sonnet-20240620'))
        .with_tools([calculate.tool])
        .build()
    )

    response = await agent.run('Calculate 5 plus 3')
    print(f'Response: {response}')

asyncio.run(main())
```

```python
import asyncio
from pydantic import BaseModel, Field
from flo_ai.builder.agent_builder import AgentBuilder
from flo_ai.llm import OpenAI

class MathSolution(BaseModel):
    solution: str = Field(description="Step-by-step solution")
    answer: str = Field(description="Final answer")
    confidence: float = Field(description="Confidence level (0-1)")

async def main():
    agent = (
        AgentBuilder()
        .with_name('Math Solver')
        .with_llm(OpenAI(model='gpt-4o'))
        .with_output_schema(MathSolution)
        .build()
    )

    response = await agent.run('Solve: 2x + 5 = 15')
    print(f'Structured Response: {response}')

asyncio.run(main())
```

Create AI workflows visually with our powerful React-based studio!
Flo AI Studio is a modern, intuitive visual editor that allows you to design complex multi-agent workflows through a drag-and-drop interface. Build sophisticated AI systems without writing code, then export them as production-ready YAML configurations.
- **Visual Design**: Drag-and-drop interface for creating agent workflows
- **Agent Management**: Configure AI agents with different roles, models, and tools
- **Smart Routing**: Visual router configuration for intelligent workflow decisions
- **YAML Export**: Export workflows as Flo AI-compatible YAML configurations
- **YAML Import**: Import existing workflows for further editing
- **Workflow Validation**: Real-time validation and error checking
- **Tool Integration**: Connect agents to external tools and APIs
- **Template System**: Quick start with pre-built agent and router templates
1. Start the Studio:

   ```bash
   cd studio
   pnpm install
   pnpm dev
   ```

2. Design Your Workflow:
   - Add agents, routers, and tools to the canvas
   - Configure their properties and connections
   - Test with the built-in validation

3. Export & Run:

   ```python
   from flo_ai.arium import AriumBuilder

   builder = AriumBuilder.from_yaml(yaml_file='your_workflow.yaml')
   result = await builder.build_and_run(['Your input here'])
   ```

Flo AI supports multiple LLM providers with consistent interfaces:
```python
# OpenAI
from flo_ai.llm import OpenAI
llm = OpenAI(model='gpt-4o', temperature=0.7)

# Anthropic Claude
from flo_ai.llm import Anthropic
llm = Anthropic(model='claude-3-5-sonnet-20240620', temperature=0.7)

# Google Gemini
from flo_ai.llm import Gemini
llm = Gemini(model='gemini-2.5-flash', temperature=0.7)

# Google VertexAI
from flo_ai.llm import VertexAI
llm = VertexAI(model='gemini-2.5-flash', project='your-project')

# Ollama (Local)
from flo_ai.llm import Ollama
llm = Ollama(model='llama2', base_url='http://localhost:11434')
```

Create custom tools easily with the `@flo_tool` decorator:
```python
from flo_ai.tool import flo_tool

@flo_tool(description="Get current weather for a city")
async def get_weather(city: str, country: str = None) -> str:
    """Get weather information for a specific city."""
    # Your weather API implementation
    return f"Weather in {city}: sunny, 25°C"

# Use in agent
agent = (
    AgentBuilder()
    .with_name('Weather Assistant')
    .with_llm(OpenAI(model='gpt-4o-mini'))
    .with_tools([get_weather.tool])
    .build()
)
```

Dynamic variable resolution in agent prompts using `<variable_name>` syntax:
```python
# Create agent with variables
agent = (
    AgentBuilder()
    .with_name('Data Analyst')
    .with_prompt('Analyze <dataset_path> and focus on <key_metric>. Generate insights for <target_audience>.')
    .with_llm(OpenAI(model='gpt-4o-mini'))
    .build()
)

# Define variables at runtime
variables = {
    'dataset_path': '/data/sales_q4_2024.csv',
    'key_metric': 'revenue growth',
    'target_audience': 'executive team'
}

result = await agent.run(
    'Please provide a comprehensive analysis with actionable recommendations.',
    variables=variables
)
```

Process PDF and TXT documents with AI agents:
```python
from flo_ai.models.document import DocumentMessage, DocumentType

# Create document message
document = DocumentMessage(
    document_type=DocumentType.PDF,
    document_file_path='business_report.pdf'
)

# Process with agent
agent = (
    AgentBuilder()
    .with_name('Document Analyzer')
    .with_prompt('Analyze the provided document and extract key insights.')
    .with_llm(OpenAI(model='gpt-4o-mini'))
    .build()
)

result = await agent.run([document])
```

Use Pydantic models for structured outputs:
```python
from pydantic import BaseModel, Field

class AnalysisResult(BaseModel):
    summary: str = Field(description="Executive summary")
    key_findings: list = Field(description="List of key findings")
    recommendations: list = Field(description="Actionable recommendations")

agent = (
    AgentBuilder()
    .with_name('Business Analyst')
    .with_llm(OpenAI(model='gpt-4o'))
    .with_output_schema(AnalysisResult)
    .build()
)
```

Built-in retry mechanisms and error recovery:
```python
agent = (
    AgentBuilder()
    .with_name('Robust Agent')
    .with_llm(OpenAI(model='gpt-4o'))
    .with_retries(3)  # Retry up to 3 times on failure
    .build()
)
```

Arium is Flo AI's powerful workflow orchestration engine for creating complex multi-agent workflows.
```python
from flo_ai.arium import AriumBuilder
from flo_ai.models.agent import Agent
from flo_ai.llm import OpenAI

async def simple_chain():
    llm = OpenAI(model='gpt-4o-mini')

    # Create agents
    analyst = Agent(
        name='content_analyst',
        system_prompt='Analyze the input and extract key insights.',
        llm=llm
    )
    summarizer = Agent(
        name='summarizer',
        system_prompt='Create a concise summary based on the analysis.',
        llm=llm
    )

    # Build and run workflow
    result = await (
        AriumBuilder()
        .add_agents([analyst, summarizer])
        .start_with(analyst)
        .connect(analyst, summarizer)
        .end_with(summarizer)
        .build_and_run(["Analyze this complex business report..."])
    )
    return result
```

```python
from flo_ai.arium.memory import BaseMemory

def route_by_type(memory: BaseMemory) -> str:
    """Route based on classification result"""
    messages = memory.get()
    last_message = str(messages[-1]) if messages else ""
    if "technical" in last_message.lower():
        return "tech_specialist"
    else:
        return "business_specialist"

# Build workflow with conditional routing
result = await (
    AriumBuilder()
    .add_agents([classifier, tech_specialist, business_specialist, final_agent])
    .start_with(classifier)
    .add_edge(classifier, [tech_specialist, business_specialist], route_by_type)
    .connect(tech_specialist, final_agent)
    .connect(business_specialist, final_agent)
    .end_with(final_agent)
    .build_and_run(["How can we optimize our database performance?"])
)
```

Define entire workflows in YAML:
```yaml
metadata:
  name: "content-analysis-workflow"
  version: "1.0.0"
  description: "Multi-agent content analysis pipeline"

arium:
  agents:
    - name: "analyzer"
      role: "Content Analyst"
      job: "Analyze the input content and extract key insights."
      model:
        provider: "openai"
        name: "gpt-4o-mini"
    - name: "summarizer"
      role: "Content Summarizer"
      job: "Create a concise summary based on the analysis."
      model:
        provider: "anthropic"
        name: "claude-3-5-sonnet-20240620"

  workflow:
    start: "analyzer"
    edges:
      - from: "analyzer"
        to: ["summarizer"]
    end: ["summarizer"]
```

```python
# Run YAML workflow
result = await (
    AriumBuilder()
    .from_yaml(yaml_str=workflow_yaml)
    .build_and_run(["Analyze this quarterly business report..."])
)
```

Define intelligent routing logic directly in YAML:
```yaml
routers:
  - name: "content_type_router"
    type: "smart"  # Uses LLM for intelligent routing
    routing_options:
      technical_writer: "Technical content, documentation, tutorials"
      creative_writer: "Creative writing, storytelling, fiction"
      marketing_writer: "Marketing copy, sales content, campaigns"
    model:
      provider: "openai"
      name: "gpt-4o-mini"
```

ReflectionRouter for A → B → A → C feedback patterns:
```yaml
routers:
  - name: "reflection_router"
    type: "reflection"
    flow_pattern: [writer, critic, writer]  # A → B → A pattern
    model:
      provider: "openai"
      name: "gpt-4o-mini"
```

PlanExecuteRouter for Cursor-style plan-and-execute workflows:
```yaml
routers:
  - name: "plan_router"
    type: "plan_execute"
    agents:
      planner: "Creates detailed execution plans"
      developer: "Implements features according to plan"
      tester: "Tests implementations and validates functionality"
      reviewer: "Reviews and approves completed work"
    settings:
      planner_agent: planner
      executor_agent: developer
      reviewer_agent: reviewer
```

Built-in observability for production monitoring:
```python
from flo_ai import configure_telemetry, shutdown_telemetry

# Configure at startup
configure_telemetry(
    service_name="my_ai_app",
    service_version="1.0.0",
    console_export=True  # For debugging
)

# Your application code here...

# Shutdown to flush data
shutdown_telemetry()
```

Complete Telemetry Guide →
Check out the examples/ directory for comprehensive examples:

- `agent_builder_usage.py` - Basic agent creation patterns
- `yaml_agent_example.py` - YAML-based agent configuration
- `output_formatter.py` - Structured output examples
- `multi_tool_example.py` - Multi-tool agent examples
- `document_processing_example.py` - Document processing with PDF and TXT files

Visit our website to learn more.

Additional Resources:

- @flo_tool Decorator Guide - Complete guide to the `@flo_tool` decorator
- Examples Directory - Ready-to-run code examples
- Contributing Guide - How to contribute to Flo AI
- Simple Setup: Get started in minutes with minimal configuration
- Flexible: Use YAML or code-based configuration
- Production Ready: Built-in error handling and retry mechanisms
- Multi-LLM: Switch between providers easily
- Maintainable: YAML-first approach makes configurations versionable
- Testable: Each component can be tested independently
- Scalable: From simple agents to complex multi-tool systems
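The testability point can be made concrete: a `@flo_tool`-decorated function remains callable as an ordinary coroutine, so its logic can be unit-tested with no agent or LLM in the loop. A minimal sketch, reusing the quick-start `calculate` logic (reproduced standalone here so it runs without flo_ai installed):

```python
import asyncio

# Standalone copy of the quick-start `calculate` tool's core logic.
# In the real code this function carries the @flo_tool decorator,
# which still leaves it callable as a plain coroutine.
async def calculate(operation: str, x: float, y: float) -> float:
    operations = {
        'add': lambda: x + y,
        'subtract': lambda: x - y,
        'multiply': lambda: x * y,
        'divide': lambda: x / y if y != 0 else 0,
    }
    return operations.get(operation, lambda: 0)()

# Exercise the tool logic directly; no agent, no network
assert asyncio.run(calculate('add', 5, 3)) == 8
assert asyncio.run(calculate('divide', 1, 0)) == 0
print('tool logic OK')
```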
- Customer Service Automation
- Data Analysis and Processing
- Content Generation and Summarization
- Research and Information Retrieval
- Task-Specific AI Assistants
- Email Analysis and Classification
We love your input! Check out our Contributing Guide to get started. Ways to contribute:

- Report bugs
- Propose new features
- Improve documentation
- Submit PRs
Flo AI is MIT Licensed.