An Open-Source AI Assistant Platform.
- Modern web application built with React and Material-UI.
- API service built with FastAPI.
- Local AI inference with Ollama.
This application uses ephemeral browser storage for user chat data. This ensures that your data is kept private, but it also means that your chat history and AI profile settings will be lost when your browser session ends.
Hello! Welcome to the KenGPT setup walkthrough. This guide will walk you through deploying KenGPT on a local machine with Docker, in a single-user configuration. It's perfect for personal use or small teams.
First, install Docker. If you want to use a GPU for AI inference, you will also need the NVIDIA Container Toolkit. You will also need Ollama installed on the host, since models are downloaded with the `ollama pull` command-line tool. However, to avoid conflicts when deploying with Docker, shut down the host's Ollama service before deploying KenGPT.
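For example, on a Linux host with a systemd-based Ollama install, these two steps might look roughly like the following (the model name `llama3` is just an illustration; pull whichever model you plan to use):

```bash
# Download a model into the local Ollama store (model name is an example)
ollama pull llama3

# Stop the host's Ollama service so it doesn't conflict with the
# containerized one started by Docker Compose (systemd-based installs)
sudo systemctl stop ollama
```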
NOTE: When deploying with Docker Compose, the Ollama service will expect an available volume at `/usr/share/ollama/.ollama` for storing models. This is also the default location for the `ollama pull` command.
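If you're unsure whether your models ended up there, a quick sanity check (assuming a Linux host with the system-wide Ollama install) might look like:

```bash
# List the models Ollama knows about
ollama list

# Inspect the default system-wide model store directly
ls -lh /usr/share/ollama/.ollama/models
```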
Installed everything? Let's check that the prerequisites are ready.
```bash
# Docker and Docker Compose
docker version
docker compose version

# NVIDIA Container Toolkit
nvidia-container-cli --version
```

Ready to get started? Let's deploy KenGPT with Docker Compose.
```bash
# Deploy with Docker Compose
docker compose up --build -d
```

That's it! You can continue to the next step, Hello KenGPT, to access the web app.
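Before moving on, you may want to confirm that the stack came up cleanly. A couple of standard Docker Compose commands (run from the project directory) are enough:

```bash
# List the services and their current status
docker compose ps

# Tail the logs if something looks off (Ctrl+C to stop)
docker compose logs -f
```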
You should now be able to access the web app at http://localhost:80/ (or whichever host you deployed to). Open it in your browser and you should see the KenGPT web app.
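If the page doesn't load, a quick way to check whether the web server is responding (assuming a local deployment on port 80) is:

```bash
# Request just the response headers from the web app
curl -I http://localhost:80/
```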
You can change your KenGPT settings by clicking the gear icon in the top-left corner of the app. Here you can define and edit your own AI profiles, including custom instructions and which AI model to use.