AgentMesh is a Go-first framework for orchestrating AI agents with predictable flows, typed tool calls, and built-in observability.
- Deterministic orchestration – Compose sequential, parallel, or looping agents with explicit error handling.
- Typed tools & retrieval – Wrap Go functions, sub-agents, or search connectors with JSON Schema validation.
- Streaming-first runtime – Stream partial events, route transfers, and merge structured output deterministically.
- Pluggable observability – Bring your own logging, tracing, metrics, and persistence layers.
```shell
go get github.com/hupe1980/agentmesh
```
Requirements:

- Go 1.24 or newer
- `OPENAI_API_KEY` exported in your environment

The snippet below mirrors `examples/basic_agent/main.go` and creates a model-backed agent with streaming output and logging:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	am "github.com/hupe1980/agentmesh"
	"github.com/hupe1980/agentmesh/core"
	"github.com/hupe1980/agentmesh/logging"
	"github.com/hupe1980/agentmesh/model/openai"
)

func main() {
	if os.Getenv("OPENAI_API_KEY") == "" {
		log.Fatal("OPENAI_API_KEY is required")
	}

	model := openai.NewModel()

	agent, err := am.NewModelAgent("basic", model, func(o *am.ModelAgentOptions) {
		o.Instructions = core.NewInstructionsFromText("Keep responses short and helpful.")
	})
	if err != nil {
		log.Fatalf("build agent: %v", err)
	}

	app := am.NewApp("basic_app", agent)

	runner := am.NewRunner(app, func(o *am.RunnerOptions) {
		o.Logger = logging.NewSlogLogger(logging.LogLevelInfo, logging.LogFormatText, false)
	})
	defer runner.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	parts := []core.Part{core.NewPartFromText("Hello! What can you help with?")}

	runID, text, err := runner.RunFinalText(ctx, "user1", "session1", parts)
	if err != nil {
		log.Fatalf("run failed: %v", err)
	}

	fmt.Printf("=== Run %s ===\n%s\n", runID, text)
}
```
From here you can add tools, switch to sequential/parallel flows, or persist sessions. Check the Getting Started guide for a deeper walkthrough.
- Compose `Sequential`, `Parallel`, and `Loop` agents for complex flows.
- Use transfer events to hand off control across agents or escalate when necessary.
- Rely on the event stream to interleave partial outputs with tool activity.
- Create function tools with `tool.NewFuncTool` or wrap an entire agent via `tool.NewAgentTool`.
- Promote search connectors into tools using `retrieval.NewTool` and merge multiple sources with `retrieval.NewMergerRetriever`.
- Choose from built-in adapters for Amazon Bedrock, Amazon Kendra, and LangChainGo retrievers, or implement your own.
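The practical effect of typed tools with schema validation can be sketched with the standard library alone. This is a hypothetical illustration, not `tool.NewFuncTool` itself: the framework generates and enforces a JSON Schema for you, while the sketch below approximates that contract by strictly decoding the model's raw JSON arguments into a typed struct before the Go function ever runs.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// WeatherArgs is the typed argument struct a function tool would declare.
type WeatherArgs struct {
	City string `json:"city"`
	Unit string `json:"unit,omitempty"`
}

// callTool decodes raw JSON arguments into the typed struct before
// invoking the Go function, so malformed payloads fail fast instead of
// reaching tool logic. DisallowUnknownFields rejects stray keys, which
// approximates what schema validation buys you.
func callTool(raw []byte, fn func(WeatherArgs) (string, error)) (string, error) {
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	var args WeatherArgs
	if err := dec.Decode(&args); err != nil {
		return "", fmt.Errorf("invalid tool arguments: %w", err)
	}
	if args.City == "" {
		return "", fmt.Errorf("invalid tool arguments: city is required")
	}
	return fn(args)
}

func main() {
	weather := func(a WeatherArgs) (string, error) {
		return fmt.Sprintf("22°C and sunny in %s", a.City), nil
	}

	out, err := callTool([]byte(`{"city":"Berlin"}`), weather)
	fmt.Println(out, err) // 22°C and sunny in Berlin <nil>

	_, err = callTool([]byte(`{"town":"Berlin"}`), weather)
	fmt.Println(err != nil) // true: unknown field rejected
}
```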
- Plug in custom loggers, metrics, and tracers using the `logging`, `metrics`, and `trace` packages.
- Store artifacts, session data, and memories with pluggable backends under `artifact` and `session`.
- Control concurrency and buffering through runner options for predictable throughput.
- Getting Started – Install, run the basic agent, and learn the runner lifecycle.
- Agents Guide – Build sequential, parallel, loop, and functional agents.
- Models Guide – Connect OpenAI, LangChainGo, gateway, or functional models with structured outputs.
- Tools Guide – Author function tools, toolsets, retrieval mergers, and MCP adapters.
- Observability – Wire logging, metrics, tracing, and inspect emitted events.
- Architecture – Understand flows, the runner, plugins, and execution context.
Run any example locally with `go run`:

```shell
go run ./examples/basic_agent/main.go
```

Featured samples:

- Basic model agent – `examples/basic_agent`
- Tooling bundle – `examples/tool_usage`
- Agent-as-a-tool – `examples/agent_tool`
- Structured output – `examples/output_schema`
- Multi-agent fan-out – `examples/multi_agent`
- Agent transfer – `examples/transfer_agent`
- OpenTelemetry wiring – `examples/opentelemetry`

More integrations live under `examples/`.
Use the `just` recipes provided in the repo:

```shell
just test       # go test ./...
just test-race  # go test ./... -race
just lint       # golangci-lint with project config
just cover      # generate HTML coverage report
```

Prefer raw tools?

```shell
go test ./... -race
golangci-lint run --config .golangci.yml
```
To iterate on the docs locally:

```shell
just docs-serve
```

This launches Jekyll with livereload at `http://localhost:4000` using the bundled `docs/_config.dev.yml`.
- Swap the in-memory stores (session, memory, artifact) for your durable implementations.
- Enforce tool allow-lists and sanitize tool responses before surfacing them to models.
- Tune runner buffers, flow concurrency, and limiter settings to match downstream capacity.
- Instrument with OpenTelemetry exporters or your preferred providers through runner options.
- Cache model responses and trim session history to manage cost.
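The caching point can be sketched as a decorator around the model call. Everything here is illustrative — `cachedModel` and its fields are hypothetical, and a production cache should also key on model parameters, bound its size, and expire entries — but it shows the shape: memoize by prompt so repeated identical calls skip the paid round-trip.

```go
package main

import (
	"fmt"
	"sync"
)

// cachedModel memoizes responses by prompt. A hypothetical sketch:
// real caching should also key on model parameters and bound growth.
type cachedModel struct {
	mu    sync.Mutex
	cache map[string]string
	calls int                        // counts actual model invocations
	run   func(prompt string) string // the underlying model call
}

func (m *cachedModel) Generate(prompt string) string {
	m.mu.Lock()
	defer m.mu.Unlock()
	if out, ok := m.cache[prompt]; ok {
		return out // cache hit: no model round-trip
	}
	m.calls++
	out := m.run(prompt)
	m.cache[prompt] = out
	return out
}

func main() {
	m := &cachedModel{
		cache: map[string]string{},
		run:   func(p string) string { return "echo: " + p },
	}
	m.Generate("hello")
	m.Generate("hello") // served from cache
	fmt.Println(m.calls) // 1
}
```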
We welcome issues and pull requests! Review CONTRIBUTING.md for guidelines, coding style, and the release process.
MIT © AgentMesh contributors