A modern chatbot interface built with Next.js that connects to your local Ollama models via FastAPI.
Install Ollama (choose the method for your platform):

```bash
# macOS (Homebrew)
brew install ollama

# Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows/macOS installer: https://ollama.ai/download
```
Start the Ollama server and pull the model used by this project:

```bash
# Start the Ollama server (keep this terminal open)
ollama serve

# Pull the default model
ollama pull llama3.2
```
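Before wiring anything else up, you can confirm that Ollama is reachable and the model has been pulled. The snippet below is an optional, stdlib-only sanity check (the script name is illustrative and not part of this repo; `/api/tags` is Ollama's endpoint for listing locally available models):

```python
# check_ollama.py: optional sanity check, not part of the repo
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434"

# /api/tags lists the models that have been pulled locally
with urllib.request.urlopen(f"{OLLAMA_BASE_URL}/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print("Ollama is running. Local models:", models)
```

If `llama3.2` does not appear in the output, pull it again with `ollama pull llama3.2`.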
Install the Python dependencies for the FastAPI backend:

```bash
pip install fastapi uvicorn httpx
```
Start the FastAPI server:

```bash
python scripts/fastapi_server.py
```
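For orientation, here is a minimal sketch of what a FastAPI proxy in front of Ollama can look like. This is not the project's actual `scripts/fastapi_server.py`; the `/chat` endpoint, request shape, and defaults are assumptions, but the Ollama `/api/chat` call and the CORS setup for `localhost:3000` reflect the general pattern:

```python
# minimal_proxy.py: illustrative sketch, not the project's scripts/fastapi_server.py
import os

import httpx
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel

OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3.2")

app = FastAPI()

# Allow the Next.js dev server (localhost:3000) to call this API from the browser
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
async def chat(req: ChatRequest):
    # Forward the prompt to Ollama's chat endpoint and return the reply text
    async with httpx.AsyncClient(timeout=120) as client:
        resp = await client.post(
            f"{OLLAMA_BASE_URL}/api/chat",
            json={
                "model": OLLAMA_MODEL,
                "messages": [{"role": "user", "content": req.message}],
                "stream": False,
            },
        )
        resp.raise_for_status()
        return {"reply": resp.json()["message"]["content"]}
```

A single backend endpoint like this keeps the Next.js app talking only to `FASTAPI_URL`, with the Ollama details confined to the Python side.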
In a separate terminal, start the Next.js dev server:

```bash
npm run dev
```
Create a `.env.local` file:
```
FASTAPI_URL=http://localhost:8000
OLLAMA_MODEL=llama3.2
OLLAMA_BASE_URL=http://localhost:11434
```
- 🤖 Local AI inference with Ollama
- 💾 Auto-save conversations to localStorage
- 📁 Export chats as JSON files
- 🔄 Switch between conversation sessions
- 🎨 Professional red-themed UI
- ⚡ Fast local processing (no API costs!)
- Open this project in VSCode
- Install the Python extension
- Use the integrated terminal to run both servers
- Debug FastAPI with breakpoints if needed
- Ollama not connecting: Make sure `ollama serve` is running
- Model not found: Run `ollama pull <model-name>` first
- FastAPI errors: Check the terminal running the Python server
- CORS issues: The FastAPI server is configured for localhost:3000