
Rapida: End-to-End Voice Orchestration Platform

Rapida is an open-source, end-to-end platform for designing, building, and deploying real-time conversational voice agents at scale, covering audio streaming, speech-to-text (STT), text-to-speech (TTS), voice activity detection (VAD), multi-channel integration, agent state management, and observability.
It’s built around three core principles:

  • Reliable — designed for production workloads, real-time audio, and fault-tolerant execution
  • Observable — deep visibility into calls, latency, metrics, and tool usage
  • Customizable — flexible architecture that adapts to any LLM, workflow, or enterprise stack

Rapida provides both a platform and a framework for building real-world voice agents—from low-latency audio streaming to orchestration, monitoring, and integrations.

Rapida is written in Go and uses gRPC for fast, efficient, bidirectional communication.
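In practice, bidirectional gRPC means a client keeps one stream open, pushes audio frames upstream, and reads transcripts, events, and synthesized audio downstream. The Go sketch below only illustrates that shape and is not Rapida's published API: the voiceagent proto package, the VoiceAgent service, the StreamAudio RPC, the AudioFrame message, and the address are placeholder assumptions; use the official SDKs and https://doc.rapida.ai for the real interfaces.

// Sketch of a bidirectional gRPC audio stream. All service-specific names here
// (the voiceagent package, VoiceAgent service, StreamAudio RPC, AudioFrame
// message) are hypothetical placeholders, not Rapida's real API.
package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/voiceagent" // hypothetical generated protobuf stubs
)

func main() {
	// Connect to a voice service endpoint (address is a placeholder).
	conn, err := grpc.NewClient("localhost:9007",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Open one long-lived bidirectional stream.
	stream, err := pb.NewVoiceAgentClient(conn).StreamAudio(context.Background())
	if err != nil {
		log.Fatalf("open stream: %v", err)
	}

	// Uplink: send captured PCM frames (dummy silence here), then close our side.
	go func() {
		for i := 0; i < 10; i++ {
			frame := make([]byte, 320) // 20 ms of 8 kHz, 16-bit mono silence
			if err := stream.Send(&pb.AudioFrame{Pcm: frame}); err != nil {
				return
			}
		}
		_ = stream.CloseSend()
	}()

	// Downlink: read transcripts, tool events, and synthesized audio until EOF.
	for {
		ev, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("recv: %v", err)
		}
		log.Printf("event: %v", ev)
	}
}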



Features

  • Real-time Voice Orchestration
    Stream and process audio with low latency over gRPC.

  • LLM-Agnostic Architecture
    Bring your own model—OpenAI, Anthropic, open-source models, or custom inference.

  • Production-grade Reliability
    Built-in retries, error handling, call lifecycle management, and health checks.

  • Full Observability
    Call logs, streaming events, tool traces, latency breakdowns, metrics, and dashboards.

  • Flexible Tooling System
    Build custom tools and actions for your agents, or integrate with any backend (see the sketch after this list).

  • Developer-friendly
    Clear APIs, modular components, and simple configuration.

  • Enterprise-ready
    Scalable design, efficient protocol, and predictable performance.
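To make the tooling item concrete: one common integration pattern, shown below as an illustration and not as Rapida's actual tool API, is to expose a custom action as a small JSON-over-HTTP endpoint that receives arguments from the agent and returns a result the agent can relay to the caller. The route, types, and port are made up for the example.

// Hypothetical backend "tool": a JSON endpoint an agent could call to look up
// an order status. None of these names come from Rapida.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type lookupRequest struct {
	OrderID string `json:"order_id"`
}

type lookupResponse struct {
	Status string `json:"status"`
}

func main() {
	http.HandleFunc("/tools/order-status", func(w http.ResponseWriter, r *http.Request) {
		// Decode the JSON arguments supplied by the agent.
		var req lookupRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		// Replace this stub with a real lookup against your backend.
		resp := lookupResponse{Status: "shipped"}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(resp)
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // port chosen for the example
}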

Documentation & Guides

https://doc.rapida.ai

Prerequisites

  • Docker & Docker Compose (Install)
  • 16GB+ RAM (for all services)

Quick Start

Get all services running in 4 commands:

# Clone repo
git clone https://github.com/rapidaai/voice-ai.git && cd voice-ai

# Setup & build
make setup-local && make build-all

# Start all services
make up-all

# View running services
docker compose ps

Services ready: when docker compose ps shows every container healthy, the UI and the APIs listed under Configure Services below are available on their respective ports.

Stop services:

make down-all

Development

Work on Specific Services

# Start only database
make up-db

# Start only UI
make up-ui

# Start only Assistant API
make up-assistant

# List all start commands
make help

View Logs

# All services
make logs-all

# Specific service
make logs-web
make logs-assistant

Rebuild After Code Changes

# Rebuild and restart one service
make rebuild-assistant

# Rebuild all
make rebuild-all

Configure Services

Edit environment files before starting:

  • docker/web-api/.web.env - Web API (port 9001)
  • docker/assistant-api/.assistant.env - Assistant API (port 9007)
  • docker/endpoint-api/.endpoint.env - Endpoint API (port 9005)
  • docker/integration-api/.integration.env - Integration API (port 9004)
  • docker/document-api/config.yaml - Document API (port 9010)

Add your API keys (OpenAI, Anthropic, Deepgram, Twilio, etc.) in these files.
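As a sketch only, a provider entry in docker/assistant-api/.assistant.env could look like the lines below; the variable names are assumptions, so confirm the exact keys against the env templates shipped in the repository.

# Illustrative variable names only; check the shipped template for the real keys.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
DEEPGRAM_API_KEY=...
TWILIO_ACCOUNT_SID=AC...
TWILIO_AUTH_TOKEN=...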


Local Development (Without Docker)

Go Services

# Install dependencies
go mod download

# Build service
go build -o bin/web ./cmd/web

# Run service
./bin/web

Running the Go services this way requires PostgreSQL, Redis, and OpenSearch to be available separately; one way to start them with Docker is sketched below.
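A sketch of starting those dependencies with Docker. Image tags and credentials are placeholders (the rapida user and web_db database mirror the Troubleshooting example); align everything with your env files.

# PostgreSQL
docker run -d --name rapida-postgres -p 5432:5432 \
  -e POSTGRES_USER=rapida -e POSTGRES_PASSWORD=change-me -e POSTGRES_DB=web_db \
  postgres:16

# Redis
docker run -d --name rapida-redis -p 6379:6379 redis:7

# OpenSearch (single node, local development only)
docker run -d --name rapida-opensearch -p 9200:9200 \
  -e discovery.type=single-node \
  -e OPENSEARCH_INITIAL_ADMIN_PASSWORD='Choose-A-Strong-Passw0rd1!' \
  opensearchproject/opensearch:2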

React UI

cd ui

# Install & run
yarn install
yarn start:dev

# Build for production
yarn build

Troubleshooting

Port already in use:

lsof -i :3000    # Find process
kill -9 <PID>    # Kill it

Services won't start:

make logs-all    # Check logs
docker compose ps  # Verify status

Database issues:

# Test connection
docker compose exec postgres psql -U rapida -d web_db -c "SELECT 1"

# Reset everything
make clean
make setup-local
make build-all
make up-all

All Commands

make help          # Show all available commands
make setup-local   # Create data directories
make build-all     # Build all Docker images
make up-all        # Start all services
make down-all      # Stop all services
make logs-all      # View all logs
make clean         # Remove containers & volumes
make restart-all   # Restart all services

Contributing

See CONTRIBUTING.md for guidelines. Please also consider supporting Rapida by sharing it on social media and at events and conferences.

Want to add:

  • New STT/TTS provider? Check api/assistant-api/internal/transformer/
  • New telephony channel? Check api/assistant-api/internal/telephony/

SDKs & Tools

Client SDKs

Client SDKs let your frontend embed interactive, multi-user experiences.

Language             Repo           Docs
Web (React)          rapida-react   docs
Web Widget (React)   react-widget

Server SDKs

Server SDKs enable your backend to build and manage agents.

Language   Repo            Docs
Go         rapida-go       docs
Python     rapida-python   docs


Security disclosure

To protect your privacy, please avoid posting security issues on GitHub. Instead, report them to [email protected], and our team will respond with a detailed answer.

License

Rapida is open-source under the GPL-2.0 license, with additional conditions:

  • Open-source users must keep the Rapida logo visible in UI components.
  • Future license terms may change; this does not affect released versions.

A commercial license is available for enterprise use, which allows:

  • Removal of branding
  • Closed-source usage
  • Private modifications

Contact [email protected] for details.