# apflow-demo

Demo deployment of apflow with rate limiting and quota management.
This is an independent application that wraps apflow (v0.10.0+) as a core library, adding demo-specific features:

- **LLM Quota Management**: Per-user task tree limits with restrictions on LLM-consuming tasks
- **Rate Limiting**: Per-user and per-IP daily limits
- **Built-in Demo Mode**: Uses apflow v0.6.0's `use_demo` parameter for automatic demo data fallback
- **User Identification**: Browser fingerprinting + session cookie hybrid approach (no registration required)
- **Demo-specific API Middleware**: Quota checking and demo data injection
- **Usage Tracking**: Task execution and quota usage statistics
- **Concurrency Control**: System-wide and per-user concurrent task tree limits
- **LLM API Key Support**: Accepts an `X-LLM-API-KEY` header in prefixed format (`openai:sk-...` or `anthropic:sk-ant-...`) or direct format (`sk-...`)
- **Executor Metadata API**: Query executor metadata and schemas using apflow's executor_metadata utilities
- **Executor Demo Tasks**: Automatically generate demo tasks for all executors based on executor_metadata
- **User Management CLI**: Built-in commands to analyze user statistics and activity
- **Automatic Database Setup**: Zero-config initialization with local DuckDB fallback
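The two `X-LLM-API-KEY` formats listed above can be parsed roughly as follows. This is a sketch only: the function name and provider-detection rules are assumptions, not the demo's actual code.

```python
def parse_llm_api_key(header_value: str) -> tuple[str, str]:
    """Return (provider, key) for prefixed or direct key formats."""
    if ":" in header_value:
        # Prefixed format, e.g. "openai:sk-..." or "anthropic:sk-ant-..."
        provider, key = header_value.split(":", 1)
        return provider.lower(), key
    # Direct format: infer the provider from the key prefix.
    # Note: "sk-ant-" must be checked before the more general "sk-".
    if header_value.startswith("sk-ant-"):
        return "anthropic", header_value
    if header_value.startswith("sk-"):
        return "openai", header_value
    raise ValueError("Unrecognized API key format")
```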
This application uses `apflow[all]>=0.6.0` as a dependency and leverages new v0.6.0 features:

- **TaskRoutes Extension**: Uses the `task_routes_class` parameter (no monkey patching)
- **Task Tree Lifecycle Hooks**: Uses `register_task_tree_hook()` for explicit lifecycle events
- **Executor-Specific Hooks**: Uses `add_executor_hook()` for quota checks at the executor level
- **Built-in Demo Mode**: Uses the `use_demo` parameter for automatic demo data
- **Automatic User ID Extraction**: Leverages JWT extraction with browser fingerprinting fallback
- **Database Storage**: Uses the same database as apflow (DuckDB/PostgreSQL) for quota tracking, no Redis required
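Conceptually, the executor-level hook ties quota checking to the demo fallback: before an LLM-consuming executor runs, it flips `use_demo` on when the user's daily LLM quota is spent. A minimal, framework-free sketch (the hook shape and names are illustrative assumptions; the real integration is wired through apflow's `add_executor_hook()`):

```python
def llm_quota_hook(params: dict, llm_used_today: int, llm_daily_limit: int) -> dict:
    """Hypothetical pre-execution hook: if the user's daily LLM quota is
    exhausted, return params with use_demo=True so the executor serves
    demo data instead of calling an LLM."""
    if llm_used_today >= llm_daily_limit:
        return {**params, "use_demo": True}
    return params
```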
```bash
# Install dependencies
pip install -e ".[dev]"

# Start with docker-compose
docker-compose up

# Or run directly
python -m apflow_demo.main

# Build Docker image
docker build -f docker/Dockerfile -t apflow-demo .

# Run with docker-compose
docker-compose up -d
```

See `.env.example` for configuration options.
Key environment variables:
- `DEMO_MODE=true`: Enable demo mode
- `RATE_LIMIT_ENABLED=true`: Enable rate limiting
- `RATE_LIMIT_DAILY_PER_USER=10`: Total task trees per day (free users)
- `RATE_LIMIT_DAILY_LLM_PER_USER=1`: LLM-consuming task trees per day (free users)
- `RATE_LIMIT_DAILY_PER_USER_PREMIUM=10`: Total task trees per day (premium users)
- `MAX_CONCURRENT_TASK_TREES=10`: System-wide concurrent task trees
- `MAX_CONCURRENT_TASK_TREES_PER_USER=1`: Per-user concurrent task trees
- `RATE_LIMIT_DAILY_PER_IP=50`: Daily limit per IP
Note: Rate limiting uses the same database as apflow (DuckDB/PostgreSQL), no Redis required.
The demo includes a comprehensive LLM quota management system:
No Registration Required: The demo uses a session cookie + browser fingerprinting hybrid approach:
- **Session Cookie**: Set on first request (`demo_session_id`), persists for 30 days.
- **Browser Fingerprinting**: Generated from `User-Agent` + IP + headers (fallback if the cookie is cleared).
- **Auto-Login**: Transparently handles guest user creation and session persistence across visits.
- **User-Agent Tracking**: Captures browser/OS metadata to generate descriptive guest usernames (e.g., `Guest_Mac_Chrome_abc123`).
- **Privacy-Friendly**: No personal data is collected; fingerprints are hashed.
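The fingerprint fallback amounts to hashing request attributes into a stable, non-reversible ID. A sketch, assuming SHA-256 over User-Agent, IP, and a header value (the exact inputs, salt, and truncation are assumptions):

```python
import hashlib

def browser_fingerprint(user_agent: str, ip: str, accept_language: str = "") -> str:
    """Hash request attributes into a stable guest identifier.
    Same inputs always yield the same ID; the hash is not reversible."""
    raw = "|".join([user_agent, ip, accept_language])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]
```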
**Free users** (no LLM API key):
- Total Quota: 10 task trees per day
- LLM-consuming Limit: Only 1 LLM-consuming task tree per day
- Concurrency: 1 task tree at a time
- Behavior: When the LLM quota is exceeded, falls back to built-in demo mode (`use_demo=True`)
**Premium users** (with LLM API key):
- Total Quota: 10 task trees per day
- LLM-consuming Limit: All 10 can be LLM-consuming (no separate limit)
- Concurrency: 1 task tree at a time
- Behavior: Uses own LLM API keys, no demo data fallback
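Putting the two tiers together, the admission decision for a new LLM-consuming task tree can be sketched as below. The names, limits as parameters, and the `(allowed, use_demo)` return shape are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class QuotaState:
    trees_today: int        # task trees started today
    llm_trees_today: int    # LLM-consuming trees started today
    has_own_api_key: bool   # premium: user supplied X-LLM-API-KEY

def admit(state: QuotaState, total_limit: int = 10, llm_limit: int = 1):
    """Decide whether a new LLM-consuming task tree runs, and whether
    it should fall back to demo data. Returns (allowed, use_demo)."""
    if state.trees_today >= total_limit:
        return False, False          # total daily quota exhausted
    if state.has_own_api_key:
        return True, False           # premium: all trees may consume LLM
    if state.llm_trees_today >= llm_limit:
        return True, True            # free: serve demo data instead
    return True, False
```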
Free User Example (no authentication required):

```bash
# First LLM-consuming task tree - succeeds
# User ID is automatically generated from browser fingerprint
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"method": "tasks.generate", "params": {"requirement": "..."}}'

# Second LLM-consuming task tree - uses built-in demo mode
# Executor hooks automatically set use_demo=True when quota exceeded
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"method": "tasks.generate", "params": {"requirement": "..."}}'
```

Premium User Example (with LLM API key):
```bash
# Provide LLM API key in header
# All 10 task trees can be LLM-consuming
# Supported formats:
# - Prefixed: "openai:sk-xxx..." or "anthropic:sk-ant-xxx..."
# - Direct: "sk-xxx..." (auto-detected as OpenAI) or "sk-ant-xxx..." (auto-detected as Anthropic)
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -H "X-LLM-API-KEY: openai:sk-xxx..." \
  -d '{"method": "tasks.generate", "params": {"requirement": "..."}}'
```

Check Quota Status:
```bash
# User ID is automatically detected from session cookie or browser fingerprint
curl http://localhost:8000/api/quota/status
```

The demo provides endpoints to query executor metadata using apflow's executor_metadata utilities:
Get All Executor Metadata:

```bash
curl http://localhost:8000/api/executors/metadata
```

Get Specific Executor Metadata:

```bash
curl http://localhost:8000/api/executors/metadata/system_info_executor
```

The metadata includes:

- `id`: Executor ID
- `name`: Executor name
- `description`: Executor description
- `input_schema`: JSON schema for task inputs
- `examples`: List of example descriptions
- `tags`: List of tags
- `type`: Executor type (optional)
The demo can automatically create demo tasks for all executors based on executor_metadata:
Check Demo Init Status:

```bash
# Check which executors already have demo tasks and which ones can be initialized
curl http://localhost:8000/api/demo/tasks/init-status
```

The response includes:

- `can_init`: Whether demo init can be performed (i.e., there are executors without demo tasks)
- `total_executors`: Total number of executors
- `existing_executors`: List of executor IDs that already have demo tasks
- `missing_executors`: List of executor IDs that don't have demo tasks yet
- `executor_details`: Details for each executor (id, name, has_demo_task)
- `message`: Status description
Initialize Executor Demo Tasks:

```bash
# Creates one demo task per executor with inputs generated from input_schema
# Skips executors that already have demo tasks to avoid duplicates
curl -X POST http://localhost:8000/api/demo/tasks/init-executors
```

Each executor gets a demo task with:

- `schemas.method` = executor_id
- `inputs` = generated from the executor's `input_schema` (uses examples or default values)
- `name` = "Demo: {executor_name}"
- `user_id` = current user ID (from session cookie or browser fingerprint)
Note: The initialization process automatically skips executors that already have demo tasks for the current user, preventing duplicate task creation.
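Generating demo `inputs` from an executor's `input_schema` might look like the sketch below: prefer per-property `examples`, then `default`, then a type-based stub. The demo's actual generation rules may differ.

```python
def demo_inputs_from_schema(input_schema: dict) -> dict:
    """Build demo inputs from a JSON-schema-style input_schema:
    examples > default > type-based placeholder."""
    stub = {"string": "demo", "integer": 0, "number": 0.0,
            "boolean": False, "array": [], "object": {}}
    inputs = {}
    for name, prop in input_schema.get("properties", {}).items():
        if prop.get("examples"):
            inputs[name] = prop["examples"][0]
        elif "default" in prop:
            inputs[name] = prop["default"]
        else:
            inputs[name] = stub.get(prop.get("type", "string"), "demo")
    return inputs
```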
The demo includes a plugin for the apflow-demo CLI to manage and analyze users.
List recently active users with their status and source:

```bash
apflow-demo users list --limit 10
```

Options:

- `--limit` (`-l`): Number of users to display (default: 20)
- `--status` (`-s`): Filter by status (`active`, `inactive`)
- `--format` (`-f`): Output format (`table`, `json`)
- `--show-ua`: Show the full User-Agent string in the output

Display aggregate user statistics for different time periods:

```bash
apflow-demo users stat day
```

Available periods: `all`, `day`, `week`, `month`, `year`.
The application features automatic database initialization:

- **Zero-Config**: If `DATABASE_URL` is not set in `.env` or the environment, it automatically creates a DuckDB database at `.data/apflow-demo.duckdb`.
- **Sync/Async Support**: Fully compatible with both synchronous (DuckDB) and asynchronous (PostgreSQL) engines.
- **Auto-Migration**: Automatically adds missing columns (like `user_agent`) to existing tables during startup.
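The zero-config fallback boils down to checking `DATABASE_URL` and defaulting to a local DuckDB file. A sketch, where the function name is an assumption and the path mirrors the default above:

```python
import os
from pathlib import Path

def resolve_database_url(data_dir: str = ".data") -> str:
    """Return DATABASE_URL if set; otherwise create data_dir and
    fall back to a local DuckDB file inside it."""
    url = os.getenv("DATABASE_URL")
    if url:
        return url
    Path(data_dir).mkdir(parents=True, exist_ok=True)
    return f"duckdb:///{data_dir}/apflow-demo.duckdb"
```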
```bash
# Start with docker-compose
docker-compose up

# Or run directly (uses same database as apflow)
# Option 1: run module directly
python -m apflow_demo.main

# Option 2: use the packaged CLI wrapper (recommended for demo features)
# After `pip install -e .`, run the wrapper which preloads demo extensions:
apflow-demo tasks all --limit 3
```
### Production Deployment
1. **Build Docker image**:

   ```bash
   docker build -f docker/Dockerfile -t apflow-demo:latest .
   ```

2. **Deploy with docker-compose**:

   ```bash
   docker-compose up -d
   ```

3. **Or deploy to cloud**:

   - Update environment variables in `.env` or `docker-compose.yml`
   - Set `DEMO_MODE=true` and `RATE_LIMIT_ENABLED=true`
   - Configure database connection (same as apflow)
   - Deploy to your cloud provider
- Deploy the demo API (this repository) to your server
- Deploy apflow-webapp and configure it to point to the demo API:

  ```
  NEXT_PUBLIC_API_URL=https://demo-api.aipartnerup.com
  ```

- Add the demo link in aipartnerup-website (already configured)
### License

Apache-2.0