The TypeScript SDK for Multi-Provider AI Agents
Build agents that chain LLM reasoning with MCP tools. Mix OpenAI, Claude, Mistral in one workflow. Parallel execution, branching, loops. Native retries, streaming, and typed errors.
📖 Read the full documentation at volcano.dev →
- Pipeline API - Chain steps with `.then()`.
- Automatic tool selection - The LLM selects and calls the appropriate MCP tools based on the prompt. No manual routing required.
- Multi-provider - OpenAI, Anthropic, Mistral, Llama, Bedrock, Vertex, Azure. Switch providers per step or set one globally.
- Type-safe - Full TypeScript support with type inference and IntelliSense for all APIs.
- Advanced workflows - Parallel execution, conditional branching, loops, and sub-agent composition for complex workflows.
- Retries and timeouts - Three retry strategies (immediate, delayed, and exponential backoff), plus per-step timeout configuration; see the sketch after this list.
- Streaming - Stream step results as they complete using async generators. Perfect for real-time UIs and long-running tasks.
- MCP native - Native Model Context Protocol support with connection pooling, tool discovery, and authentication.
- Composable - Build reusable agent components and compose them into larger workflows. Modular and testable.
- Observability - Production-ready distributed tracing and metrics. Monitor performance, debug failures, and export to Jaeger, Prometheus, Datadog, or New Relic.
- Authentication - OAuth 2.1 and Bearer token authentication per the MCP specification. Agent-level or handle-level configuration with automatic token refresh.
- Performance - Intelligent connection pooling for MCP servers, tool-discovery caching with TTL, and JSON Schema validation for reliability.
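A rough sketch of how per-step retry, timeout, and streaming configuration could look. The `timeoutMs` and `retry` option names and the `stream()` method are illustrative assumptions, not confirmed volcano-sdk API (the examples below use only `agent`, `.then`, and `.run`); check the documentation for the exact names.

```ts
import { agent, llmOpenAI } from "volcano-sdk";

const llm = llmOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

// NOTE: `timeoutMs`, `retry`, and `stream()` are assumptions made for
// illustration; consult the volcano.dev docs for the real names.
const pipeline = agent({ llm })
  .then({
    prompt: "Summarize the latest deploy logs",
    timeoutMs: 30_000,                  // hypothetical per-step timeout
    retry: { strategy: "exponential" }, // hypothetical retry strategy
  })
  .then({ prompt: "Draft a status update from that summary" });

// The feature list says step results stream via async generators, so
// consuming them as they complete would look roughly like this:
for await (const step of pipeline.stream()) {
  console.log(step.llmOutput);
}
```

Exponential backoff is usually the right default against rate-limited LLM APIs, since immediate retry bursts only deepen 429 responses.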
Install:

```sh
npm install volcano-sdk
```

That's it! Includes MCP support and all common LLM providers (OpenAI, Anthropic, Mistral, Llama, Vertex).

Quick start: chain an MCP tool step into a follow-up LLM step.

```ts
import { agent, llmOpenAI, mcp } from "volcano-sdk";
const llm = llmOpenAI({
apiKey: process.env.OPENAI_API_KEY!,
model: "gpt-4o-mini"
});
const astro = mcp("http://localhost:3211/mcp");
const results = await agent({ llm })
.then({
prompt: "Find the astrological sign for birthdate 1993-07-11",
mcps: [astro] // Automatic tool selection
})
.then({
prompt: "Write a one-line fortune for that sign"
})
.run();
console.log(results[1].llmOutput);
// Output: "Fortune based on the astrological sign"import { agent, llmOpenAI, llmAnthropic, llmMistral } from "volcano-sdk";
const gpt = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const claude = llmAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });
const mistral = llmMistral({ apiKey: process.env.MISTRAL_API_KEY! });
// Use different LLMs for different steps
await agent()
.then({ llm: gpt, prompt: "Extract data from report" })
.then({ llm: claude, prompt: "Analyze for patterns" })
.then({ llm: mistral, prompt: "Write creative summary" })
  .run();
```

Documentation:

- Getting Started - Installation, quick start, core concepts
- LLM Providers - OpenAI, Anthropic, Mistral, Llama, Bedrock, Vertex, Azure
- MCP Tools - Automatic selection, OAuth authentication, connection pooling
- Advanced Patterns - Parallel, branching, loops, multi-LLM workflows (a plain-TypeScript parallel sketch follows this list)
- Features - Streaming, retries, timeouts, hooks, error handling
- Observability - OpenTelemetry traces and metrics
- API Reference - Complete API documentation
- Examples - Ready-to-run code examples
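The SDK's built-in parallel and branching primitives are documented under Advanced Patterns; as a plain-TypeScript approximation using only the calls shown on this page, you can fan out independent pipelines with `Promise.all` and feed both results into a synthesis step:

```ts
import { agent, llmOpenAI, llmAnthropic } from "volcano-sdk";

const gpt = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const claude = llmAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });

// Fan out: run two independent pipelines concurrently.
const [pros, cons] = await Promise.all([
  agent({ llm: gpt })
    .then({ prompt: "List three benefits of rolling deploys" })
    .run(),
  agent({ llm: claude })
    .then({ prompt: "List three risks of rolling deploys" })
    .run(),
]);

// Fan in: feed both results into a final synthesis step.
const synthesis = await agent({ llm: gpt })
  .then({
    prompt: "Reconcile these notes into one paragraph:\n" +
      `${pros[0].llmOutput}\n${cons[0].llmOutput}`,
  })
  .run();

console.log(synthesis[0].llmOutput);
```

Each `.run()` resolves to an array of per-step results (indexed as in the quick start above), so fan-out/fan-in composes with ordinary async code.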
We welcome contributions! Please see our Contributing Guide for details.
- 🐛 Report bugs or issues
- 💡 Request features or ask questions
- ⭐ Star the project if you find it useful
Apache 2.0 - see LICENSE file for details.