A browser-based AI application that runs entirely on your device without sending data to external servers. This project uses Svelte 5, SvelteKit, WebAssembly, and various AI libraries for chat, transcription, text-to-speech, and image processing!
You can try the application at: https://nook.software
- Chat - LLM conversations using models like Gemma3 via llama.cpp
- Transcribe - Speech-to-text using Whisper AI with subtitle export
- Text-to-Speech - Voice synthesis with Kitten TTS, Piper, and Kokoro models
- Background Remover - AI-powered background removal
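To give a sense of how these features run entirely in the browser, here is a minimal sketch of speech-to-text with Transformers.js (credited in the acknowledgements below). The package import and model ID are illustrative assumptions, not the project's actual code:

```ts
// Minimal sketch of in-browser transcription with Transformers.js.
// The model ID below is an illustrative assumption, not necessarily what nook ships.
import { pipeline } from '@xenova/transformers';

// The first call downloads the model into the browser; inference then runs locally.
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'Xenova/whisper-tiny.en'
);

// Accepts an audio URL (or a Float32Array of samples) and returns the recognized text.
const { text } = await transcriber('https://example.com/audio.wav');
console.log(text);
```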
To run the project locally:

- Clone the repository.
- Install dependencies: `nvm use`, then `npm install`.
- Configure environment variables: `cp .env.example .env`. Available variables:
  - `PUBLIC_DISABLE_OPFS=true` - disables OPFS caching, useful for testing the fallback behavior (see the sketch after this list).
- Start the development server: `npm run dev`.
- Open your browser and navigate to `http://localhost:5173`.
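As a rough sketch of how such a public flag can be read in a SvelteKit app (the helper below is hypothetical; nook's actual fallback logic may differ):

```ts
// Hypothetical helper, not nook's actual code: decide whether OPFS caching should be used.
import { PUBLIC_DISABLE_OPFS } from '$env/static/public';

export function opfsEnabled(): boolean {
  // Honor the explicit opt-out used to exercise the non-OPFS fallback path.
  if (PUBLIC_DISABLE_OPFS === 'true') return false;

  // Otherwise require actual browser support for the Origin Private File System.
  return typeof navigator !== 'undefined' &&
    'storage' in navigator &&
    typeof navigator.storage.getDirectory === 'function';
}
```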
Build for production with `npm run build`.

Build and run with Docker:

```sh
# Build the Docker image
docker build -t nook .

# Run the container
docker run -p 3000:3000 nook
```
The application downloads AI models directly to your browser and runs inference using WebAssembly. Models are cached locally using OPFS (the Origin Private File System) when available. This approach keeps your data private, and the app can work offline after the initial model download.
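A minimal sketch of what OPFS-backed model caching can look like; the function and file names are assumptions for illustration, not nook's actual implementation:

```ts
// Sketch: fetch a model once, cache it in the Origin Private File System, reuse it afterwards.
// Names are hypothetical; the real caching layer in nook may differ.
async function loadModelBuffer(url: string, fileName: string): Promise<ArrayBuffer> {
  const root = await navigator.storage.getDirectory();

  try {
    // Serve from the local OPFS cache if the file already exists.
    const handle = await root.getFileHandle(fileName);
    const cached = await handle.getFile();
    return await cached.arrayBuffer();
  } catch {
    // Not cached yet: download once, persist to OPFS, then reuse it offline later.
    const buffer = await (await fetch(url)).arrayBuffer();
    const handle = await root.getFileHandle(fileName, { create: true });
    const writable = await handle.createWritable();
    await writable.write(buffer);
    await writable.close();
    return buffer;
  }
}
```

In browsers without OPFS, or with `PUBLIC_DISABLE_OPFS=true`, the same download path can simply skip the write and re-fetch the model on each visit.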
- Thanks to the creator of https://clowerweb.github.io/tts-studio/ for excellent TTS examples!
- Thanks to the creators of Wllama and Transformers.js