- Purpose & Features
- Architecture & Repository Structure
- Getting Started
- Usage
- Testing
- GenAI Teacher Chat Mode
- Troubleshooting & Codespaces Notes
- Further Reading
This repository is a starting point for building custom LLM applications with open-source tooling and models. It incorporates Ollama, Open WebUI, LangChain, Streamlit, Chroma, and PGVector, using Docker to containerize the application and Docker Compose to run the various service dependencies.
Key Features:
- LLM integration (Ollama, LangChain)
- RAG (Retrieval Augmented Generation) examples
- Agentic AI patterns (ReAct, multi-tool agents)
- Streamlit web interface
- Vector database support (Chroma, PGVector)
- Integrated teaching assistant (GenAI Teacher)
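At its core, the RAG pattern listed above retrieves the documents most relevant to a query by embedding similarity before handing them to the LLM. A minimal, library-free sketch of that retrieval step (the toy word-count "embedding" here stands in for the real embedding models and Chroma/PGVector stores used in the repo's examples):

```python
# Toy retrieval step behind RAG: embed documents and a query,
# then rank documents by cosine similarity.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Ollama serves large language models locally.",
    "Streamlit builds web interfaces in Python.",
    "PGVector adds vector similarity search to Postgres.",
]
print(retrieve("local language models", docs))
```

The retrieved documents would then be injected into the prompt as context, which is the "augmented generation" half of RAG.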
Main Components:
- src - Application code and examples
- etc - Scripts and configuration
- tests - Unit and integration tests
- Dockerfile, compose.yml - Containerization and orchestration
- Docker Engine installed
- Option 1: Docker Engine configured with >= 12 GB memory
- Option 2: Docker Engine configured with >= 8 GB memory and Ollama Executable installed
brew install ollama
- Register for a free Brave Search API key here
- 2000 calls/month free, affordable scaling
- Brave Search API info
- Add your key to a .env file in the project root:
echo "BRAVE_SEARCH_API_KEY=your_key_here" >> .env
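For reference, the application can pick the key up from `.env` at startup. A hedged, standard-library-only sketch of that loading step (the app itself may use a package such as python-dotenv instead):

```python
# Minimal .env loader: parse KEY=value lines into os.environ,
# skipping comments and blank lines.
import os

def load_dotenv_minimal(path: str = ".env") -> None:
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    os.environ.setdefault(key.strip(), value.strip().strip('"'))
    except FileNotFoundError:
        pass  # no .env file is fine; the variable may be set elsewhere

load_dotenv_minimal()
api_key = os.environ.get("BRAVE_SEARCH_API_KEY")
```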
Option 1 (CPU-only, easiest):
docker compose --profile=cpu up -d
Option 2 (GPU, fastest):
docker compose up -d
./etc/ollama_entrypoint.sh
Option 3 (GitHub Codespaces):
- Open in Codespaces: Click "Code" → "Codespaces" → "Create codespace"
- Setup environment:
docker compose --profile=cpu up -d
- Access the application:
- Streamlit App: Auto-opens or check "Ports" tab
- Open WebUI: Available on port 3000
docker compose --profile=cpu up -d # Start all services
docker compose down # Stop all services
docker compose logs -f # View service logs
docker compose ps # Check service status
docker compose run --rm app pytest tests/ -v # Run tests
- Streamlit: http://localhost:8501/
- Open WebUI: http://localhost:3000/
- List Ollama Models:
curl http://localhost:11434/api/tags
- Ollama API Docs
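The `/api/tags` endpoint returns JSON of the form `{"models": [{"name": ...}, ...]}`. A small sketch of consuming that response from Python (the sample payload below is illustrative; against a live server you would fetch it with `urllib.request.urlopen("http://localhost:11434/api/tags")`):

```python
# Parse the JSON returned by Ollama's /api/tags endpoint
# and extract the installed model names.
import json

def model_names(tags_json: str) -> list[str]:
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

sample = '{"models": [{"name": "llama3.2:latest"}, {"name": "nomic-embed-text:latest"}]}'
print(model_names(sample))  # → ['llama3.2:latest', 'nomic-embed-text:latest']
```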
This application includes a comprehensive test suite that runs in Docker for consistency. Unit tests use mocks, while integration tests connect to real services.
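As an illustration of the mocking approach, a pytest-style unit test might look like the following. Note that `summarize` is a hypothetical app helper, not code from this repo; the point is that the LLM client is replaced by a `MagicMock`, so the test runs without any services:

```python
# Pytest-style unit test with a mocked LLM client.
from unittest.mock import MagicMock

def summarize(client, text: str) -> str:
    """Hypothetical app helper that asks an LLM client for a summary."""
    return client.generate(prompt=f"Summarize: {text}")

def test_summarize_calls_client():
    client = MagicMock()
    client.generate.return_value = "a short summary"
    # The function returns whatever the (mocked) LLM produced...
    assert summarize(client, "long document text") == "a short summary"
    # ...and built the expected prompt.
    client.generate.assert_called_once_with(prompt="Summarize: long document text")
```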
# Run all tests
docker compose run --rm app pytest tests/ -v
# Run just unit tests
docker compose run --rm app pytest tests/unit/ -v
# Run integration tests (requires services running)
docker compose run --rm app pytest tests/integration/ -m integration -v
# Run with coverage
docker compose run --rm app pytest tests/ --cov=src --cov-report=term
# Test a specific file
docker compose run --rm app pytest tests/unit/test_app.py -v
# Test a specific class
docker compose run --rm app pytest tests/unit/example/test_simple_chat.py::TestSimpleChat -v
# Test a specific method
docker compose run --rm app pytest tests/unit/test_app.py::TestApp::test_model_selection_logic -v
- Run the full test suite:
docker compose run --rm app pytest tests/ -v
- Ensure all tests pass before making any changes
- Update packages incrementally and test after each change
This repository includes an integrated AI teaching assistant accessible through GitHub Copilot Chat. The GenAI Teacher provides personalized guidance for learning LLM, RAG, and Agentic AI patterns.
In any GitHub interface (VS Code, GitHub.com, or GitHub Mobile), use the #genai-teacher chat mode:
#genai-teacher [your question or request]
Get detailed explanations of demo files and concepts:
#genai-teacher explain simple_chat.py
#genai-teacher explain RAG patterns
#genai-teacher explain the ReAct agent pattern
Receive structured learning guidance through topics:
#genai-teacher guide me through RAG
#genai-teacher guide me through the basics
#genai-teacher guide me through agent development
Get personalized learning progressions:
#genai-teacher learning path for beginners
#genai-teacher learning path for RAG
#genai-teacher learning path for agents
Get help implementing new features:
#genai-teacher how to create a new RAG example
#genai-teacher implement a custom retriever
#genai-teacher add error handling to my chain
Beginner Starting Point:
#genai-teacher guide me through the basics
Understanding a Specific File:
#genai-teacher explain src/example/agentic_chat.py
Getting Implementation Help:
#genai-teacher how to add a new vector database example
The GenAI Teacher understands the repository structure and coding patterns, and can provide context-aware guidance tailored to your current learning needs.
- CPU-Only: Codespaces uses CPU-only mode (no GPU acceleration)
- Model Loading: Initial model downloads may take 5-10 minutes
- Port Forwarding: All ports are automatically forwarded and accessible via HTTPS
- Persistence: Your workspace persists across sessions
For the web search example, add your API key to the .env file:
echo "BRAVE_SEARCH_API_KEY=your_key_here" >> .env
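For orientation, a hedged sketch of how the web search example might call the Brave Search API. The endpoint and the `X-Subscription-Token` header follow Brave's API documentation, but verify them there before relying on this; the request is built but not sent, since sending requires a valid key:

```python
# Build (but do not send) a Brave Search API web-search request.
import os
import urllib.parse
import urllib.request

def build_brave_request(query: str) -> urllib.request.Request:
    url = ("https://api.search.brave.com/res/v1/web/search?"
           + urllib.parse.urlencode({"q": query}))
    return urllib.request.Request(url, headers={
        "Accept": "application/json",
        # The key loaded from .env, per the echo command above.
        "X-Subscription-Token": os.environ.get("BRAVE_SEARCH_API_KEY", ""),
    })

req = build_brave_request("open source llm tooling")
# To execute: urllib.request.urlopen(req) — requires a valid API key.
```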
docker compose up -d
./etc/ollama_entrypoint.sh
- Ollama - Open source tool for running LLM models locally, with support for prompts, tools, and functions
- Website: https://ollama.com/
- GitHub: https://github.com/ollama/ollama
- Open WebUI - Web UI wrapper for Ollama
- Website: https://openwebui.com/
- GitHub: https://github.com/open-webui/open-webui
- RAG
- LangChain
- Hugging Face
- Chroma
- PGVector
- Wikipedia
- Web - Brave API Search Loader
- Arxiv