The simplest way to run LLaMA on your local machine
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
🕵️‍♂️ Library designed for developers eager to explore the potential of Large Language Models (LLMs) and other generative AI through a clean, effective, and Go-idiomatic approach.
Distribute and run LLMs with a single file.
A playground for creative exploration that uses SDXL Turbo.
The subtitles and translations are generated in real-time and displayed as pop-ups.
Foundational Models for State-of-the-Art Speech and Text Translation
Enhanced ChatGPT Clone: Features Agents, MCP, DeepSeek, Anthropic, AWS, OpenAI, Responses API, Azure, Groq, o1, GPT-5, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message…
Command line artificial intelligence - Your local LLM context-feeder
Fully private LLM chatbot that runs entirely in the browser, with no server needed. Supports Mistral and Llama 3.
THIS PROJECT HAS MOVED TO https://github.com/1backend/1backend. Build AI products faster. A language-agnostic microservices platform for building AI applications.
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
Repository of model demos using TT-Buda
An open-source implementation of NotebookLM with more flexibility and features.
A holistic way of understanding how Llama and its components run in practice, with code and detailed documentation.
Transcribe any audio to text, then translate and edit subtitles, 100% locally with a web UI. Powered by Whisper models!
Robust Speech Recognition via Large-Scale Weak Supervision
KAI Scheduler is an open-source, Kubernetes-native scheduler for large-scale AI workloads.