# fastapi-ai-sdk

A Pythonic helper library for building FastAPI applications that integrate with the Vercel AI SDK. This library provides a seamless way to stream AI responses from your FastAPI backend to your Next.js frontend.

## Features
- Full Vercel AI SDK Compatibility - Implements the complete AI SDK protocol specification
- Type-Safe with Pydantic - Full type hints and validation for all events
- Streaming Support - Built-in Server-Sent Events (SSE) streaming
- Easy Integration - Simple decorators and utilities for FastAPI
- Flexible Builder Pattern - Intuitive API for constructing AI streams
- Well Tested - Comprehensive test coverage
- Fully Documented - Complete documentation with examples
## Installation

```bash
pip install fastapi-ai-sdk
```

## Quick Start

### Backend (FastAPI)

```python
from fastapi import FastAPI
from fastapi_ai_sdk import AIStreamBuilder, ai_endpoint

app = FastAPI()

@app.post("/api/chat")
@ai_endpoint()
async def chat(message: str):
    """Simple chat endpoint that streams a response."""
    builder = AIStreamBuilder()
    builder.text(f"You said: {message}")
    return builder
```

### Frontend (Next.js)

```tsx
import { useChat } from "@ai-sdk/react";
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "http://localhost:8000/api/chat",
  });

  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.id}>{msg.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

## Supported Event Types

The library supports all Vercel AI SDK event types:
- Message Lifecycle: `start`, `finish`
- Text Streaming: `text-start`, `text-delta`, `text-end`
- Reasoning: `reasoning-start`, `reasoning-delta`, `reasoning-end`
- Tool Calls: `tool-input-start`, `tool-input-delta`, `tool-input-available`, `tool-output-available`
- Structured Data: custom `data-*` events
- File References: URLs and documents
- Error Handling: error events with messages
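On the wire, each of these events is serialized as a JSON object inside a Server-Sent Events `data:` line. The sketch below shows what a minimal text-streaming sequence might look like; the exact payload field names (e.g. `messageId`) are assumptions about the AI SDK protocol, not taken from this library's source:

```python
import json

def to_sse_line(event: dict) -> str:
    """Serialize one protocol event as a Server-Sent Events data line."""
    return f"data: {json.dumps(event)}\n\n"

# A minimal text-streaming sequence: start, text lifecycle, finish.
events = [
    {"type": "start", "messageId": "msg_1"},       # field name assumed
    {"type": "text-start", "id": "txt_1"},
    {"type": "text-delta", "id": "txt_1", "delta": "Hello"},
    {"type": "text-end", "id": "txt_1"},
    {"type": "finish"},
]

stream = "".join(to_sse_line(e) for e in events)
```

The builder and decorators below produce this framing for you, so you normally never assemble it by hand.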
## Using the Stream Builder

```python
from fastapi_ai_sdk import AIStreamBuilder

# Create a builder
builder = AIStreamBuilder(message_id="optional_id")

# Add different types of content
builder.start()  # Start the stream
builder.text("Here's some text")  # Add text content
builder.reasoning("Let me think about this...")  # Add reasoning
builder.data("weather", {"temperature": 20, "city": "Berlin"})  # Add structured data
builder.tool_call(  # Add tool usage
    "get_weather",
    input_data={"city": "Berlin"},
    output_data={"temperature": 20},
)
builder.finish()  # End the stream

# Build and return the stream
return builder.build()
```

## Decorators

### `@ai_endpoint`

```python
@app.post("/chat")
@ai_endpoint()
async def chat(message: str):
    builder = AIStreamBuilder()
    builder.text(f"Response: {message}")
    return builder
```

### `@streaming_endpoint`

```python
@app.get("/stream")
@streaming_endpoint(chunk_size=10, delay=0.1)
async def stream():
    return "This text will be streamed chunk by chunk"
```

### `@tool_endpoint`

```python
@app.post("/tools/weather")
@tool_endpoint("get_weather")
async def get_weather(city: str):
    # Your tool logic here
    return {"temperature": 20, "condition": "sunny"}
```

## Advanced Examples

### Combining reasoning, tool calls, and text

```python
@app.post("/api/advanced-chat")
@ai_endpoint()
async def advanced_chat(query: str):
    builder = AIStreamBuilder()

    # Start with reasoning
    builder.reasoning("Analyzing your query...")

    # Make a tool call
    weather_data = await get_weather_data("Berlin")
    builder.tool_call(
        "get_weather",
        input_data={"city": "Berlin"},
        output_data=weather_data,
    )

    # Stream the response
    builder.text(f"Based on the weather data: {weather_data}")
    return builder
```

### Manual event streaming

```python
import asyncio

from fastapi_ai_sdk import create_ai_stream_response
@app.get("/api/generate")
async def generate():
    async def event_generator():
        from fastapi_ai_sdk.models import StartEvent, TextDeltaEvent, FinishEvent

        yield StartEvent(message_id="gen_1")
        for word in ["Hello", " ", "from", " ", "FastAPI"]:
            yield TextDeltaEvent(id="txt_1", delta=word)
            await asyncio.sleep(0.1)
        yield FinishEvent()
    return create_ai_stream_response(event_generator())
```

### Chunked text streaming

```python
@app.post("/api/story")
@ai_endpoint()
async def generate_story(prompt: str):
    builder = AIStreamBuilder()
    story = await generate_long_story(prompt)  # Your story generation logic

    # Stream with custom chunk size
    builder.text(story, chunk_size=50)  # Streams in 50-character chunks
    return builder
```
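For intuition, fixed-size chunking like `chunk_size=50` above amounts to plain string slicing. This is a sketch of the likely behavior, not the library's actual implementation:

```python
def chunk_text(text: str, chunk_size: int) -> list[str]:
    """Split text into consecutive chunks of at most chunk_size characters."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# Each chunk would then be emitted as its own text-delta event.
chunks = chunk_text("This text will be streamed chunk by chunk", 10)
```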
## Testing

Run the test suite:

```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run tests with coverage
pytest --cov=fastapi_ai_sdk --cov-report=term-missing

# Run specific test file
pytest tests/test_models.py

# Run with verbose output
pytest -v
```

## Development

```bash
# Clone the repository
git clone https://github.com/doganarif/fastapi-ai-sdk.git
cd fastapi-ai-sdk

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Run linting
black fastapi_ai_sdk tests
isort fastapi_ai_sdk tests
flake8 fastapi_ai_sdk tests
mypy fastapi_ai_sdk
```

### Code Style

This project uses:
- Black for code formatting
- isort for import sorting
- flake8 for linting
- mypy for type checking
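Black, isort, and mypy can share configuration in `pyproject.toml`; a minimal sketch (the values here are illustrative defaults, not this project's actual settings, and flake8 reads its own `.flake8`/`setup.cfg` rather than `pyproject.toml`):

```toml
[tool.black]
line-length = 88

[tool.isort]
profile = "black"  # avoid import-sorting conflicts with Black

[tool.mypy]
strict = true
```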
## License

MIT License - see LICENSE file for details.
## Author

Arif Dogan

- Email: [email protected]
- GitHub: @doganarif
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
## Acknowledgments

- Vercel AI SDK for the excellent frontend SDK
- FastAPI for the amazing web framework
- Pydantic for data validation

Made with ❤️ for the FastAPI and AI community by Arif