Rapida is an open-source platform for designing, building, and deploying voice agents at scale.
It’s built around three core principles:
- Reliable — designed for production workloads, real-time audio, and fault-tolerant execution
- Observable — deep visibility into calls, latency, metrics, and tool usage
- Customizable — flexible architecture that adapts to any LLM, workflow, or enterprise stack
Rapida provides both a platform and a framework for building real-world voice agents—from low-latency audio streaming to orchestration, monitoring, and integrations.
Rapida is written in Go and uses gRPC for fast, efficient, bidirectional streaming communication.
- Real-time Voice Orchestration: Stream and process audio with low latency over gRPC.
- LLM-Agnostic Architecture: Bring your own model, whether OpenAI, Anthropic, open-source models, or custom inference.
- Production-grade Reliability: Built-in retries, error handling, call lifecycle management, and health checks.
- Full Observability: Call logs, streaming events, tool traces, latency breakdowns, metrics, and dashboards.
- Flexible Tooling System: Build custom tools and actions for your agents, or integrate with any backend.
- Developer-friendly: Clear APIs, modular components, and simple configuration.
- Enterprise-ready: Scalable design, efficient protocol, and predictable performance.
Prerequisites:
- Docker & Docker Compose (Install)
- 16GB+ RAM (for all services)
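To confirm the prerequisites before starting, a quick check like the following works (the memory check assumes a Linux host; Docker Desktop users should review resource limits in settings instead):

```bash
# Verify Docker and the Compose plugin are installed
docker --version
docker compose version

# Check available memory (Linux); on macOS/Windows, check Docker Desktop's resource settings
free -h
```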
Get all services running in 4 commands:
```bash
# Clone repo
git clone https://github.com/rapidaai/voice-ai.git && cd voice-ai

# Setup & build
make setup-local && make build-all

# Start all services
make up-all

# View running services
docker compose ps
```

Services Ready:
- UI: http://localhost:3000
- Web API: http://localhost:9001
- Assistant API: http://localhost:9007
- Endpoint API: http://localhost:9005
- Integration API: http://localhost:9004
- Document API: http://localhost:9010
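If you prefer to confirm each service is listening before opening the UI, a quick TCP probe against the default ports above works; this is a minimal sketch assuming `nc` (netcat) is available on your machine:

```bash
# Probe the default ports listed above; adjust if you changed any port mappings
for port in 3000 9001 9007 9005 9004 9010; do
  nc -z localhost "$port" && echo "port $port: listening" || echo "port $port: not responding"
done
```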
Stop services:
```bash
make down-all
```

Start individual services:

```bash
# Start only database
make up-db

# Start only UI
make up-ui

# Start only Assistant API
make up-assistant

# List all start commands
make help
```

View logs:

```bash
# All services
make logs-all

# Specific service
make logs-web
make logs-assistant
```

Rebuild services:

```bash
# Rebuild and restart one service
make rebuild-assistant

# Rebuild all
make rebuild-all
```

Edit environment files before starting:
- `docker/web-api/.web.env` - Web API (port 9001)
- `docker/assistant-api/.assistant.env` - Assistant API (port 9007)
- `docker/endpoint-api/.endpoint.env` - Endpoint API (port 9005)
- `docker/integration-api/.integration.env` - Integration API (port 9004)
- `docker/document-api/config.yaml` - Document API (port 9010)
Add your API keys (OpenAI, Anthropic, Deepgram, Twilio, etc.) in these files.
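As a rough illustration only, provider credentials go in as plain key=value lines; the variable names below are hypothetical, so check each file for the exact keys it expects:

```env
# Hypothetical variable names; use the keys your env file actually defines
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
DEEPGRAM_API_KEY=...
TWILIO_ACCOUNT_SID=AC...
TWILIO_AUTH_TOKEN=...
```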
Build a service from source:

```bash
# Install dependencies
go mod download

# Build service
go build -o bin/web ./cmd/web

# Run service
./bin/web
```

Requires PostgreSQL, Redis, and OpenSearch running separately.
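If you don't already have those stores running locally, one way to bring them up for development is with stock Docker images; this is a minimal sketch with placeholder credentials and versions, so align them with whatever your service env files expect:

```bash
# Placeholder credentials and versions; match these to your service configuration
docker run -d --name rapida-postgres -e POSTGRES_USER=rapida -e POSTGRES_PASSWORD=rapida -p 5432:5432 postgres:16
docker run -d --name rapida-redis -p 6379:6379 redis:7
docker run -d --name rapida-opensearch -e discovery.type=single-node -e DISABLE_SECURITY_PLUGIN=true -p 9200:9200 opensearchproject/opensearch:2.11.0
```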
Run the UI locally:

```bash
cd ui

# Install & run
yarn install
yarn start:dev

# Build for production
yarn build
```

Port already in use:
```bash
lsof -i :3000   # Find process
kill -9 <PID>   # Kill it
```

Services won't start:

```bash
make logs-all       # Check logs
docker compose ps   # Verify status
```

Database issues:
```bash
# Test connection
docker compose exec postgres psql -U rapida -d web_db -c "SELECT 1"

# Reset everything
make clean
make setup-local
make build-all
make up-all
```

Command reference:

```bash
make help          # Show all available commands
make setup-local   # Create data directories
make build-all     # Build all Docker images
make up-all        # Start all services
make down-all      # Stop all services
make logs-all      # View all logs
make clean         # Remove containers & volumes
make restart-all   # Restart all services
```

See CONTRIBUTING.md for guidelines.
Want to add:
- A new STT/TTS provider? Check `api/assistant-api/internal/transformer/`
- A new telephony channel? Check `api/assistant-api/internal/telephony/`
Client SDKs enable your frontend to include interactive, multi-user experiences.
| Language | Repo | Docs |
|---|---|---|
| Web (React) | rapida-react | docs |
| Web Widget (React) | react-widget | |
Server SDKs enable your backend to build and manage agents.
| Language | Repo | Docs |
|---|---|---|
| Go | rapida-go | docs |
| Python | rapida-python | docs |
For those who'd like to contribute code, see our Contribution Guide. At the same time, please consider supporting Rapida by sharing it on social media and at events and conferences.
To protect your privacy, please avoid posting security issues on GitHub. Instead, report them to [email protected], and our team will respond with a detailed answer.
Rapida is open-source under the GPL-2.0 license, with additional conditions:
- Open-source users must keep the Rapida logo visible in UI components.
- Future license terms may change; this does not affect released versions.
A commercial license is available for enterprise use, which allows:
- Removal of branding
- Closed-source usage
- Private modifications

Contact [email protected] for details.