# Task Manager Backend

A comprehensive FastAPI-based backend for task management, with user authentication, CRUD operations, integrated LLM features via Ollama, and vector search.
## Features

- Task Management: Full CRUD operations for tasks with advanced filtering and pagination
- User Authentication: JWT-based authentication system with secure password handling
- LLM Integration: Built-in Ollama integration for AI-powered task assistance and chat completion
- Vector Search: AI-powered task search using sentence embeddings and ChromaDB
- Database: SQLAlchemy ORM with Alembic for database migrations
- RESTful API: Well-structured API endpoints with proper documentation
- CORS Support: Configured for cross-origin requests from frontend applications
## Tech Stack

- Backend: FastAPI
- Database: SQLite (configurable to other databases)
- ORM: SQLAlchemy
- Authentication: JWT tokens with bcrypt password hashing
- LLM: Ollama integration
- Vector Search: ChromaDB with sentence-transformers
- Migrations: Alembic
- Python: 3.10+
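In practice the JWT handling above is done with a library (e.g. python-jose or PyJWT) rather than by hand. Purely as an illustration of what signing and verifying an HS256 token involves, here is a minimal standard-library sketch; the key and claim names are placeholders, not this app's actual values:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = "CHANGE_THIS"  # placeholder; the real key comes from configuration


def _b64url(data: bytes) -> str:
    # JWTs use URL-safe base64 with the trailing padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def create_token(sub: str, expires_in: int = 3600) -> str:
    """Build a signed header.payload.signature token for a subject."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({"sub": sub, "exp": int(time.time()) + expires_in}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(SECRET_KEY.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_token(token: str) -> dict:
    """Check the signature and expiry, returning the claims on success."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(SECRET_KEY.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

A real implementation should also rely on the library's handling of algorithms and clock skew rather than rolling its own.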
## Prerequisites

- Python 3.10 or higher
- uv for dependency management (used by `uv sync` below)
- Ollama installed and running (for LLM features)
## Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd task-manager-backend
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   uv sync
   ```

4. Set up environment variables (optional):

   ```bash
   cp .env.example .env  # If .env.example exists
   # Edit .env with your configuration
   ```

5. Run database migrations:

   ```bash
   alembic upgrade head
   ```

6. Start the application:

   ```bash
   uvicorn app.main:app --reload
   ```

The API will be available at http://localhost:8000.
## API Documentation

Once the server is running, you can access:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
## API Endpoints

The API endpoints are organized into the following tags for better navigation and documentation:

- `users`: User authentication and management endpoints
- `items`: General item management endpoints
- `tasks`: Task management and CRUD operations
- `llm`: LLM integration and AI-powered features
### Users

- `POST /users/register` - Register a new user
- `POST /users/login` - User login and JWT token generation
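As a sketch of how a client might call these endpoints, the snippet below only *builds* the requests with `urllib` to show the payload and header shapes; the field names (`username`, `password`, `access_token`) are assumptions about `app/schemas.py`, not confirmed from the code:

```python
import json
import urllib.request

BASE = "http://localhost:8000"

# Register a new user (field names are assumed, check app/schemas.py)
register = urllib.request.Request(
    f"{BASE}/users/register",
    data=json.dumps({"username": "alice", "password": "s3cret"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# After /users/login returns something like {"access_token": ...},
# send the token on protected endpoints as a Bearer header:
token = "<paste access_token here>"
list_tasks = urllib.request.Request(
    f"{BASE}/tasks/",
    headers={"Authorization": f"Bearer {token}"},
)
# urllib.request.urlopen(register) would actually send the request
```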
### Tasks

- `POST /tasks/` - Create a new task
- `GET /tasks/` - Get all tasks with pagination
- `GET /tasks/{task_id}` - Get a specific task
- `PUT /tasks/{task_id}` - Update a task
- `DELETE /tasks/{task_id}` - Delete a task
- `GET /tasks/user/{user_id}` - Get tasks by user
- `GET /tasks/status/{status}` - Get tasks by status
- `GET /tasks/search/` - Vector search tasks by title or description
### Items

- `POST /items/` - Create a new item
- `GET /items/` - Get all items
- `GET /items/{item_id}` - Get a specific item
- `PUT /items/{item_id}` - Update an item
- `DELETE /items/{item_id}` - Delete an item
### LLM

- `POST /llm/chat` - Chat completion with Ollama
- `POST /llm/completion` - Text completion with Ollama
- `GET /llm/models` - List available Ollama models
- `POST /llm/models/pull` - Pull a new model
- `DELETE /llm/models/{model_name}` - Delete a model
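The exact request schema for the chat endpoint lives in `app/schemas.py`; it likely resembles Ollama's own chat message format. An illustrative payload, with all field names assumed rather than confirmed:

```python
import json

# Illustrative /llm/chat request body; field names are assumptions
# modeled on Ollama's chat API, not confirmed from app/schemas.py.
chat_request = {
    "model": "llama3.2",
    "messages": [
        {"role": "system", "content": "You are a helpful task assistant."},
        {"role": "user", "content": "Suggest subtasks for 'plan team offsite'."},
    ],
}
body = json.dumps(chat_request)  # what would be sent as the POST body
```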
## Vector Search

The application now includes vector search capabilities for tasks using:
- ChromaDB: Vector database for storing and searching task embeddings
- sentence-transformers: Pre-trained models for generating text embeddings
- all-MiniLM-L6-v2: Default model for encoding task descriptions and queries
### How It Works

- When tasks are created, their titles and descriptions are encoded into vector embeddings
- These embeddings are stored in ChromaDB for efficient similarity search
- When searching, the query is also encoded and compared against stored embeddings
- Results are ranked by semantic similarity, providing more relevant matches than traditional text search
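In production the embedding and ranking are handled by sentence-transformers and ChromaDB, but the core idea behind the steps above — compare a query vector against stored task vectors by cosine similarity — can be sketched with the standard library. The toy 3-dimensional "embeddings" below stand in for real model output (all-MiniLM-L6-v2 produces 384-dimensional vectors):

```python
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


# Toy stand-ins for the stored task embeddings
task_embeddings = {
    "Buy groceries": [0.9, 0.1, 0.0],
    "Write quarterly report": [0.1, 0.9, 0.2],
    "Plan team offsite": [0.2, 0.3, 0.9],
}

# Pretend encoding of the search query "finish the Q3 report"
query_embedding = [0.15, 0.85, 0.25]

# Rank tasks by semantic similarity to the query, most similar first
ranked = sorted(
    task_embeddings,
    key=lambda title: cosine_similarity(task_embeddings[title], query_embedding),
    reverse=True,
)
```

With these toy vectors, "Write quarterly report" ranks first even though it shares no words with the query, which is exactly the advantage over traditional text search.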
### Usage

```
GET /tasks/search/?query=your_search_query
```

The search endpoint accepts a query parameter and returns tasks ranked by semantic similarity to the search query.
## Project Structure

```
task-manager-backend/
├── alembic/
│   ├── versions/          # Database migration files
│   ├── env.py             # Alembic environment configuration
│   ├── script.py.mako     # Migration script template
│   └── README
├── app/
│   ├── routers/           # API route modules
│   │   ├── items.py       # Item-related endpoints
│   │   ├── llm.py         # LLM integration endpoints
│   │   ├── tasks.py       # Task management endpoints
│   │   └── users.py       # User authentication endpoints
│   ├── auth.py            # Authentication utilities
│   ├── config.py          # Application configuration
│   ├── crud.py            # Database CRUD operations
│   ├── database.py        # Database connection and vector DB setup
│   ├── main.py            # FastAPI application entry point
│   ├── models.py          # SQLAlchemy database models
│   └── schemas.py         # Pydantic schemas for request/response
├── alembic.ini            # Alembic configuration
├── requirements.txt       # Python dependencies
└── README.md              # This file
```
## Configuration

The application can be configured through environment variables. Key configuration options:
- `DATABASE_URL`: Database connection string (default: `sqlite:///./test.db`)
- `SECRET_KEY`: JWT secret key (default: `CHANGE_THIS` - change in production!)
- `OLLAMA_HOST`: Ollama server host (default: `http://localhost:11434`)
- `OLLAMA_MODEL`: Default Ollama model (default: `llama3.2`)
- `OLLAMA_TIMEOUT`: Ollama request timeout in seconds (default: `30`)
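FastAPI projects often load such settings with pydantic-settings; whether `app/config.py` does so is not confirmed here, but the defaults above can be sketched with plain `os.getenv`:

```python
import os


class Settings:
    """Environment-driven configuration mirroring the defaults listed above."""

    def __init__(self) -> None:
        self.database_url = os.getenv("DATABASE_URL", "sqlite:///./test.db")
        self.secret_key = os.getenv("SECRET_KEY", "CHANGE_THIS")  # change in production!
        self.ollama_host = os.getenv("OLLAMA_HOST", "http://localhost:11434")
        self.ollama_model = os.getenv("OLLAMA_MODEL", "llama3.2")
        # Timeout arrives as a string from the environment; coerce to int
        self.ollama_timeout = int(os.getenv("OLLAMA_TIMEOUT", "30"))


settings = Settings()
```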
## Development

### Database Migrations

To create new database migrations:

1. Make changes to your models in `app/models.py`
2. Generate a migration:

   ```bash
   alembic revision --autogenerate -m "Description of changes"
   ```

3. Apply the migration:

   ```bash
   alembic upgrade head
   ```

To roll back the most recent migration:

```bash
alembic downgrade -1
```

### Running Tests

```bash
pytest
```

### Code Style

The project follows PEP 8 coding standards. You can check code style with:
```bash
flake8 app/
```

### Adding New Features

1. Add new models in `app/models.py`
2. Create corresponding schemas in `app/schemas.py`
3. Add CRUD operations in `app/crud.py`
4. Create API endpoints in appropriate router files under `app/routers/`
5. Generate and apply database migrations
## Security Considerations

- Change the default `SECRET_KEY` in production
- Use HTTPS in production environments
- Implement rate limiting for API endpoints
- Validate all user inputs
- Keep dependencies updated
## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
## License

This project is licensed under the MIT License.