An AI-powered tool that automatically transcribes, summarizes, and extracts action items from meeting recordings, lectures, and interviews.
Katip helps you save time by turning long audio recordings into useful summaries and to-do lists. Upload your meeting, lecture, or interview recording, and Katip will:
- Transcribe the audio to text using OpenAI's Whisper
- Summarize the key points and important decisions
- Extract action items and create a task list
Available as a web app, desktop app (Windows, macOS, Linux), and mobile app (Android).
- Audio Transcription - Convert speech to text with Whisper
- Smart Summaries - Get structured summaries of main topics and decisions
- Local LLM Support - Use Ollama, LM Studio, or Llama.cpp for private, offline summarization (see the example after this list)
- Task Extraction - Automatically identify and list action items
- Multi-language - Support for 10 languages
- Cross-platform - Web, desktop, and mobile apps
- Modern UI - Clean interface with dark mode support
- Open Source - Fully transparent and customizable
- GPU Acceleration - Vulkan support for faster transcription
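The local LLM feature talks to whichever provider you run yourself. As a minimal sketch of the Ollama case (the model name `llama3.2` is only an illustration; any model Ollama can serve should work the same way):

```bash
# Download a model and start the local Ollama server
# (it listens on localhost:11434 by default)
ollama pull llama3.2
ollama serve
```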
Prerequisites:

- Node.js (v20 or higher)
- pnpm (v10 or higher)
- Rust (latest stable)
For mobile development:
- Android Studio (for Android)
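To confirm your toolchain meets these requirements, the tools' own version flags are enough:

```bash
# Verify the required toolchain versions
node --version    # expect v20 or higher
pnpm --version    # expect 10 or higher
rustc --version   # expect the latest stable release
```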
```bash
# Clone the repository
git clone https://github.com/odest/katip.git
cd katip

# Install dependencies
pnpm install

# Start development
pnpm dev
```

Desktop App:
```bash
# CPU-only (default)
pnpm tauri dev

# With Vulkan GPU acceleration (recommended for AMD/NVIDIA GPUs)
pnpm tauri dev -- --features vulkan
```

Web App:
```bash
pnpm --filter web dev
```

Build for Production:
```bash
# CPU-only build
pnpm build

# Desktop with GPU acceleration
pnpm tauri build -- --features vulkan
```

How it works:

- Upload Audio - Drop your meeting or lecture recording
- Transcription - Whisper converts speech to text
- AI Processing - LLM analyzes the transcript
- Get Results - View summary and action items
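As a rough, purely illustrative approximation of that pipeline (Katip's actual implementation differs), the same steps can be run by hand with OpenAI's whisper CLI and a local Ollama server; the file and model names here are placeholders:

```bash
# 1. Transcribe the recording to plain text (writes meeting.txt)
whisper meeting.mp3 --model base --output_format txt

# 2. Ask a local LLM for a summary and action items
#    (jq builds valid JSON even when the transcript spans many lines)
jq -n --rawfile transcript meeting.txt \
  '{model: "llama3.2", stream: false,
    prompt: ("Summarize the key points and list the action items:\n\n" + $transcript)}' \
  | curl -s http://localhost:11434/api/generate -d @-
```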
If you are using the Web version and want to connect to a local LLM provider like Ollama, you need to configure CORS to allow requests from the browser.
For Ollama, set the OLLAMA_ORIGINS environment variable before starting the server:
```bash
# Windows (PowerShell)
$env:OLLAMA_ORIGINS="*"; ollama serve

# Mac/Linux
OLLAMA_ORIGINS="*" ollama serve
```
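To verify the CORS setting took effect, send a request with an explicit Origin header and check the response; with OLLAMA_ORIGINS="*" you should see an Access-Control-Allow-Origin header echoed back:

```bash
# 11434 is Ollama's default port; /api/tags simply lists local models
curl -i -H "Origin: http://localhost:3000" http://localhost:11434/api/tags
```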
Tech stack:

- Frontend: Next.js, React, TypeScript
- Desktop/Mobile: Tauri, Rust
- AI: OpenAI Whisper, LLM integration
- Styling: Tailwind CSS, shadcn/ui
- State: Zustand
- Database: PostgreSQL, SQLite
- Build: pnpm, Turborepo
```
katip/
├── apps/
│   ├── native/   # Desktop & mobile (Tauri + Next.js)
│   └── web/      # Web app (Next.js)
└── packages/
    ├── ui/       # Shared UI components
    └── i18n/     # Translations
```
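Because the repo is a pnpm workspace driven by Turborepo, you can target a single app or package with `--filter`; the package names below are assumed to follow the directory names above (only `web` is confirmed by the earlier commands):

```bash
# Run only the web app's dev server
pnpm --filter web dev

# Build a single workspace package (name assumed from the layout above)
pnpm --filter ui build
```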
We welcome contributions! Please check CONTRIBUTING.md for guidelines.
This project is licensed under GPL-3.0. See LICENSE for details.
Built with tauri-nextjs-template