- Local AI Chat – All inference is performed locally, no cloud required.
- Multi-Model Support – Choose between Llama 3.2, Mistral, and Gemma 2 2B.
- Modern UI – Built with React, TypeScript, and Electron.
- FastAPI Backend – A Python backend streams responses from locally downloaded models (see the sketch after this list).
- Streaming Responses – Real-time conversational flow.
- Safe Shutdown – Gracefully closes background processes.
- Installer – Simple setup with NSIS; includes an Ollama download prompt.
_(⚠️ Mandatory: Ollama must be installed and running.)_
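
As a rough illustration of how a FastAPI backend can stream tokens from a locally running Ollama instance, here is a minimal sketch. It assumes Ollama's default HTTP API on port 11434; the `/chat` route, the `ChatRequest` shape, and the use of `httpx` are illustrative assumptions, not ISAC's actual implementation.

```python
# Hypothetical sketch of a streaming chat endpoint (not ISAC's real API).
import json

import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

app = FastAPI()


class ChatRequest(BaseModel):
    model: str   # e.g. "llama3.2", "mistral", "gemma2:2b"
    prompt: str


@app.post("/chat")
async def chat(req: ChatRequest) -> StreamingResponse:
    async def token_stream():
        async with httpx.AsyncClient(timeout=None) as client:
            # Ollama emits one JSON object per line while generating.
            async with client.stream(
                "POST",
                OLLAMA_URL,
                json={"model": req.model, "prompt": req.prompt, "stream": True},
            ) as resp:
                async for line in resp.aiter_lines():
                    if not line:
                        continue
                    chunk = json.loads(line)
                    if chunk.get("done"):
                        break
                    yield chunk.get("response", "")

    # Forward tokens to the UI as they arrive instead of waiting for the full reply.
    return StreamingResponse(token_stream(), media_type="text/plain")
```

The Electron/React frontend can consume this route with `fetch` and a streamed reader to render the reply incrementally.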
Developed by Mirang Bhandari
Licensed under the [MIT License](LICENSE).
ISAC uses Ollama to run AI models locally.
You must install Ollama before using ISAC. Ollama downloads the models from the internet on first use; after that, inference and streaming run fully offline (see the sketch below).
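
For reference, this is roughly what the first-use flow looks like against Ollama's local HTTP API. It is a hedged sketch, not ISAC's code: the model tag `llama3.2` is just an example, and the `/api/pull` request body follows Ollama's documented API (the model can equally be pulled once with `ollama pull` from the CLI).

```python
# Minimal sketch of "download once, then run offline" via Ollama's local API.
import json

import requests

OLLAMA = "http://localhost:11434"
MODEL = "llama3.2"  # example tag; Mistral and Gemma 2 2B work the same way

# First use: ask Ollama to download (pull) the model; this step needs internet.
with requests.post(f"{OLLAMA}/api/pull", json={"model": MODEL}, stream=True) as r:
    for line in r.iter_lines():
        if line:
            print(json.loads(line).get("status"))

# Afterwards, inference runs fully offline, streaming tokens as they are generated.
with requests.post(
    f"{OLLAMA}/api/generate",
    json={"model": MODEL, "prompt": "Hello!", "stream": True},
    stream=True,
) as r:
    for line in r.iter_lines():
        chunk = json.loads(line)
        if chunk.get("done"):
            break
        print(chunk.get("response", ""), end="", flush=True)
```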
- Beta by @Bloodwingv2 in #1
Full Changelog: https://github.com/Bloodwingv2/I.S.A.C/commits/1.0.0