A low-level asynchronous job queue system built with FastAPI, PostgreSQL, and asyncio. This project demonstrates modern async Python patterns for building a basic background job processing system.
- Async-First Design: Built with asyncio and async/await patterns throughout
- Job Queue System: PostgreSQL-backed job queue with atomic job claiming
- REST API: FastAPI endpoints for job submission and status tracking
- Background Worker: Continuous job processing with graceful shutdown
- Connection Pooling: Efficient database connection management
- Duplicate Detection: Smart handling of duplicate job submissions (see the sketch after this list)
- Concurrent Processing: Multiple jobs processed simultaneously
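One common way to implement the duplicate handling listed above is an upsert keyed on a hash of the payload. The sketch below is illustrative only: the `payload_hash` column and its UNIQUE constraint are assumptions, not the project's actual schema.

```python
import hashlib
import json
import asyncpg

async def submit_job(pool: asyncpg.Pool, payload: dict) -> None:
    # Hypothetical sketch: detect duplicates via a unique hash of the
    # payload and replace the stale submission in place.
    payload_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    await pool.execute(
        """
        INSERT INTO jobs (payload, payload_hash, status)
        VALUES ($1, $2, 'queued')
        ON CONFLICT (payload_hash)
        DO UPDATE SET payload = EXCLUDED.payload, status = 'queued'
        """,
        json.dumps(payload), payload_hash,
    )
```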
Architecture:

```
┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│   FastAPI App   │      │    PostgreSQL    │      │   Background    │
│                 │      │                  │      │     Worker      │
│ • Job Creation  │─────►│ • Job Queue      │─────►│                 │
│ • Status Check  │      │ • Connection     │      │ • Job Processing│
│ • Health Check  │      │   Pool           │      │ • Error Handling│
└─────────────────┘      └──────────────────┘      └─────────────────┘
```
Prerequisites:

- Python 3.8+
- PostgreSQL 12+
- Docker (optional, for easy PostgreSQL setup)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd async-worker-project
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up PostgreSQL:

  Option A: Using Docker:

  ```bash
  docker-compose up -d
  ```

  Option B: Local PostgreSQL:

  ```bash
  createdb async_worker_db
  ```

- Run database migrations:

  ```bash
  psql -d async_worker_db -f migrations/init.sql
  ```

- Configure environment:

  ```bash
  export DATABASE_URL="postgresql://user:password@localhost/async_worker_db"
  ```

- Start the application:

  ```bash
  uvicorn app.main:app --reload
  ```

- Create a job:

  ```bash
  curl -X POST "http://localhost:8000/jobs" \
    -H "Content-Type: application/json" \
    -d '{"payload": {"task": "hello_world", "data": "test message"}}'
  ```

- Check job status:

  ```bash
  curl "http://localhost:8000/jobs/{job_id}"
  ```

- Health check:

  ```bash
  curl "http://localhost:8000/health"
  ```
```
async_worker/
├── app/
│   ├── __init__.py
│   ├── main.py            # FastAPI application and lifespan management
│   ├── models.py          # Pydantic models for request/response
│   ├── database.py        # Database connection and operations
│   └── worker.py          # Background job processor
├── migrations/
│   └── init.sql           # Database schema
├── tests/
│   ├── __init__.py
│   ├── test_api.py        # API endpoint tests
│   └── test_worker.py     # Worker logic tests
├── requirements.txt
├── docker-compose.yml     # PostgreSQL for development
└── README.md
```
Job lifecycle:

```
┌────────┐     ┌────────────┐     ┌───────────┐     ┌────────┐
│ queued │────►│ processing │────►│ completed │     │ failed │
└────────┘     └────────────┘     └───────────┘     └────────┘
     ▲               │
     └───────────────┘
        (on error)
```
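A minimal sketch of one trip through this lifecycle, assuming asyncpg and a `jobs` table with `id`, `status`, `attempt_count`, `payload`, and `created_at` columns (names inferred from the API responses below, not taken from the real schema):

```python
import asyncpg

async def handle(payload) -> None:
    ...  # task logic goes here

async def process_one(pool: asyncpg.Pool) -> None:
    async with pool.acquire() as conn:
        # queued -> processing: FOR UPDATE SKIP LOCKED lets concurrent
        # workers skip rows another worker has already claimed.
        job = await conn.fetchrow(
            """
            UPDATE jobs
            SET status = 'processing', attempt_count = attempt_count + 1
            WHERE id = (
                SELECT id FROM jobs
                WHERE status = 'queued'
                ORDER BY created_at
                LIMIT 1
                FOR UPDATE SKIP LOCKED
            )
            RETURNING id, payload
            """
        )
        if job is None:
            return  # queue is empty
        try:
            await handle(job["payload"])
            new_status = "completed"  # processing -> completed
        except Exception:
            new_status = "queued"     # processing -> queued (retry), or 'failed'
        await conn.execute(
            "UPDATE jobs SET status = $1 WHERE id = $2", new_status, job["id"]
        )
```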
API endpoints:

- `POST /jobs`
  - Body: `{"payload": {"key": "value"}}`
  - Response: `{"job_id": "uuid", "status": "queued"}`
- `GET /jobs/{job_id}`
  - Response: `{"job_id": "uuid", "status": "completed", "attempt_count": 1, "payload": {"key": "value"}}`
- `GET /health`
  - Response: `{"status": "healthy"}`
Run the test suite:

```bash
# Install test dependencies
uv pip install -r requirements.txt

# Run tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=app --cov-report=html
```

Key features:

- Connection pooling for concurrent request handling
- Atomic job claiming with `FOR UPDATE SKIP LOCKED`
- Proper resource cleanup with async context managers
- PostgreSQL-based persistence for reliability
- Duplicate job detection and replacement
- Worker polling with graceful shutdown (see the sketch after this list)
- Structured error responses
- Basic retry logic (extensible for advanced patterns)
- Connection failure recovery
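A sketch of the polling-with-graceful-shutdown idea, assuming a `process_one` coroutine like the lifecycle sketch earlier; the event-based wakeup is one of several reasonable designs, not necessarily the project's.

```python
import asyncio
import contextlib

async def process_one(pool) -> None:
    ...  # claim and run at most one job (see the lifecycle sketch above)

async def worker_loop(pool, stop: asyncio.Event, poll_interval: float = 1.0) -> None:
    """Poll for jobs until asked to stop, finishing the in-flight job first."""
    while not stop.is_set():
        await process_one(pool)
        # Sleep between polls, but wake immediately if shutdown is requested.
        with contextlib.suppress(asyncio.TimeoutError):
            await asyncio.wait_for(stop.wait(), timeout=poll_interval)

# Typical wiring: start the loop at app startup, set the event at shutdown,
# then await the task so the current job is not cut off mid-flight.
```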
Performance characteristics:

- Concurrent Jobs: Handles multiple jobs simultaneously
- Database Connections: Efficient pooling (2-5 connections by default)
- Response Time: Sub-100ms for job submission
- Throughput: Depends on job complexity and database performance
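The 2-5 connection default mentioned above maps onto asyncpg pool options roughly as follows; the timeout is an illustrative choice, and the real settings presumably live in `app/database.py`.

```python
import asyncpg

async def create_pool(dsn: str) -> asyncpg.Pool:
    return await asyncpg.create_pool(
        dsn,
        min_size=2,          # keep a small warm baseline
        max_size=5,          # cap connections under concurrent load
        command_timeout=30,  # illustrative: fail slow queries rather than hang
    )
```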
Planned enhancements:

- Exponential backoff with jitter (see the sketch after this list)
- Circuit breaker pattern
- Dead letter queue for failed jobs
- Comprehensive timeout hierarchy
- Task registry system
- Email sending tasks
- File processing tasks
- Rate limiting and resource management
- Structured logging and monitoring
- Health check improvements
- Horizontal scaling support
- Admin dashboard
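As an example of the first roadmap item, exponential backoff with "full jitter" is often written like the sketch below; attempt counts and delay bounds are arbitrary illustrations.

```python
import asyncio
import random

async def retry_with_backoff(func, max_attempts: int = 5,
                             base: float = 0.5, cap: float = 30.0):
    """Retry an async callable, sleeping base * 2**attempt plus jitter."""
    for attempt in range(max_attempts):
        try:
            return await func()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller handle it
            # "Full jitter": random delay in [0, exponential ceiling].
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
            await asyncio.sleep(delay)
```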
Contributing:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project demonstrates several advanced async Python concepts:

- Control Flow Management: Just-in-time connection acquisition, proper use of `await` vs `asyncio.gather()`
- Critical Section Protection: Atomic job claiming prevents race conditions
- Task Lifecycle Management: Proper cleanup with async context managers
- Resource Management: Connection pooling with health monitoring
- Graceful Degradation: Error handling with structured responses
- Connection as Parameter Pattern: Enables flexible transaction boundaries (see the sketch after this list)
- Atomic Operations: Single SQL queries for consistency
- Pool-Level Health Checks: More efficient than per-connection validation
- Structured Return Values: Clear distinction between business logic and system errors
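The connection-as-parameter pattern can be sketched as follows: operations accept a connection rather than acquiring their own, so the caller picks the transaction boundary. The function and table names here are hypothetical.

```python
import json
import asyncpg

async def enqueue_job(conn: asyncpg.Connection, payload: dict) -> None:
    # Takes a connection instead of a pool, so it can join any transaction.
    await conn.execute(
        "INSERT INTO jobs (payload, status) VALUES ($1, 'queued')",
        json.dumps(payload),  # jsonb column assumed
    )

async def record_audit(conn: asyncpg.Connection, event: str) -> None:
    await conn.execute("INSERT INTO audit_log (event) VALUES ($1)", event)

async def enqueue_with_audit(pool: asyncpg.Pool, payload: dict) -> None:
    async with pool.acquire() as conn:
        async with conn.transaction():
            # Both writes commit or roll back together.
            await enqueue_job(conn, payload)
            await record_audit(conn, "job_enqueued")
```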
This project is licensed under the MIT License - see the LICENSE file for details.
Database Connection Errors:

```bash
# Check PostgreSQL is running
pg_isready -h localhost -p 5432

# Verify database exists
psql -l | grep async_worker_db
```

Import Errors:

```bash
# Ensure you're in the project root and PYTHONPATH is set
export PYTHONPATH=$(pwd)
```

Worker Not Processing Jobs:
- Check database connectivity
- Verify worker is running (check logs)
- Ensure jobs are in 'queued' status
Run with debug logging:

```bash
uvicorn app.main:app --reload --log-level debug
```

Basic monitoring endpoints:

- `/health` - Application health status
- `/metrics` - Basic application metrics (planned)
- Database queries for job queue depth and processing rates (see the sketch below)
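For the queue-depth item above, a monitoring query might look like this; the `jobs` table and `status` values are assumptions consistent with the lifecycle section.

```python
import asyncpg

async def queue_depth(pool: asyncpg.Pool) -> dict:
    """Return counts per status, e.g. {'queued': 12, 'processing': 3}."""
    rows = await pool.fetch(
        "SELECT status, count(*) AS n FROM jobs GROUP BY status"
    )
    return {r["status"]: r["n"] for r in rows}
```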
For production deployments, consider integrating with:
- Prometheus for metrics collection
- Grafana for dashboards
- Sentry for error tracking
- Structured logging with ELK stack