
Local Chatbot (Streamlit + Ollama)

A simple, privacy‑friendly chatbot UI that runs entirely on your machine, built with Streamlit and powered by Ollama for local LLM inference.


✨ Features

  • Multi‑chat sessions: create, switch, rename, and delete conversations.
  • Per‑chat system prompt: customize the assistant’s behavior for each chat.
  • Streaming responses: see answers appear in real time (a request sketch follows this list).
  • Temperature & Max tokens controls: fine‑tune generation.
  • Local storage: chats are saved as JSON files in ./chats/.
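
For reference, this is roughly the kind of streaming request the app sends to Ollama. It is a minimal sketch based on Ollama's /api/chat HTTP API and the settings described above (system prompt, temperature, max tokens); the exact code in app.py may differ.

import json
import requests

# Sketch of a streaming chat request against Ollama's /api/chat endpoint.
# The system prompt and options mirror the per-chat settings listed above;
# num_predict is Ollama's "max tokens" option.
payload = {
    "model": "llama3.2",
    "messages": [
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Hello"},
    ],
    "stream": True,
    "options": {"temperature": 0.7, "num_predict": 512},
}

with requests.post("http://localhost:11434/api/chat", json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        if chunk.get("done"):
            break
        # Each line is a JSON object carrying a partial assistant message.
        print(chunk["message"]["content"], end="", flush=True)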

🧩 Requirements

  • Python 3.9+
  • Ollama (https://ollama.ai) installed and running locally
  • The streamlit and requests Python packages

🚀 Quick Start

  1. Install dependencies

    pip install streamlit requests
  2. Start Ollama and pull a model

    ollama serve
    ollama pull llama3.2
  3. Run the app

    streamlit run app.py
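
Streamlit serves the UI at http://localhost:8501 by default. If the page loads but responses fail, a quick sanity check (not part of the app) is to ask the local Ollama server which models it has pulled via its /api/tags endpoint:

import requests

# Query the local Ollama server for its installed models (default port 11434).
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama is running; local models:", models)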

💾 Data Persistence

Chats are stored in ./chats/<chat_id>.json with this structure:

{
  "name": "Chat 1",
  "created_at": "2025-11-07T15:42:10",
  "model": "llama3.2",
  "system_prompt": "You are a helpful AI assistant.",
  "messages": [
    { "role": "user", "content": "Hello" },
    { "role": "assistant", "content": "Hi! How can I help?" }
  ]
}
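
A minimal sketch of how such files can be written and read back, assuming the ./chats/ layout above; the helper names are illustrative and not necessarily those used in app.py:

import json
from pathlib import Path

SAVE_DIR = Path("chats")  # the ./chats/ folder described above
SAVE_DIR.mkdir(exist_ok=True)

def save_chat(chat_id: str, chat: dict) -> None:
    # Persist one chat session as ./chats/<chat_id>.json.
    path = SAVE_DIR / f"{chat_id}.json"
    path.write_text(json.dumps(chat, ensure_ascii=False, indent=2), encoding="utf-8")

def load_chat(chat_id: str) -> dict:
    # Load a previously saved chat session back into a dict.
    path = SAVE_DIR / f"{chat_id}.json"
    return json.loads(path.read_text(encoding="utf-8"))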

⚙️ Configuration

Edit app.py to change:

  • OLLAMA_CHAT_API: Ollama API endpoint (default: http://localhost:11434/api/chat)
  • MODEL_NAME: default model name
  • SAVE_DIR: folder for chat files
  • Logo path and size
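
For illustration, the constants above might look like this near the top of app.py; the endpoint and model values are the documented defaults, while the logo constant names and values are assumptions (the README only mentions a logo path and size):

OLLAMA_CHAT_API = "http://localhost:11434/api/chat"  # Ollama chat endpoint (default)
MODEL_NAME = "llama3.2"                               # default model
SAVE_DIR = "chats"                                    # folder for saved chat JSON files
LOGO_PATH = "logo.png"                                # assumed name for the logo path setting
LOGO_WIDTH = 120                                      # assumed name for the logo size (pixels)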

📚 Tech Stack

  • Frontend: Streamlit
  • Backend: Ollama (local LLM inference)
  • Storage: JSON files
