Summarize emails received by Thunderbird mail client extension via locally run LLM. Early development.
Use your open-source local model from the terminal
This project allows you to run your own local Large Language Model (LLM) chatbot using an API like Ollama.
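A minimal sketch of that pattern in Python, assuming a stock Ollama install listening on its default port 11434 with the llama3 model already pulled; the function name and chat loop are illustrative, not this project's code:

import json
import urllib.request

def chat(messages, model="llama3"):
    # POST the running conversation to Ollama's /api/chat endpoint (non-streaming).
    payload = json.dumps({"model": model, "messages": messages, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

history = []
while True:
    history.append({"role": "user", "content": input("you> ")})
    reply = chat(history)
    history.append({"role": "assistant", "content": reply})
    print(reply)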
A powerful shell that's powered by a locally running LLM (ideally Llama 3.x or Qwen 2.5)
ramalama-based model-swapping server
A narrative/roleplay engine with TCOD levels driven by unreliable narrators, in both the literal and the literary sense. Currently hooks into LMStudio and Gemini for responses, and allows individual tasks to be overridden and handed to player control.
Lightweight Python tool that uses Optuna to tune llama.cpp flags toward optimal tok/s on your machine
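A rough sketch of that idea, assuming Optuna is installed and a local llama-bench binary plus a GGUF model are available; the flag ranges, file paths, and output parsing are assumptions, not this tool's actual code:

import re
import subprocess
import optuna

MODEL = "models/example-7b-q4_k_m.gguf"  # hypothetical model path

def objective(trial):
    # Sample a few llama.cpp runtime flags to benchmark.
    threads = trial.suggest_int("threads", 2, 16)
    batch = trial.suggest_categorical("batch", [128, 256, 512, 1024])
    gpu_layers = trial.suggest_int("gpu_layers", 0, 32)
    out = subprocess.run(
        ["./llama-bench", "-m", MODEL, "-t", str(threads),
         "-b", str(batch), "-ngl", str(gpu_layers)],
        capture_output=True, text=True, check=True,
    ).stdout
    # Pull a "NN.NN t/s" figure out of the benchmark output (format is an assumption).
    match = re.search(r"([\d.]+)\s*t/s", out)
    return float(match.group(1)) if match else 0.0

study = optuna.create_study(direction="maximize")  # maximize tokens per second
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)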
📚 LocalLLaMA Archive — Community-powered static archive for r/LocalLLaMA
TransFire is a simple tool that lets you use your locally running LLMs while away from home, without requiring port forwarding
Run GGUF LLM models in the latest versions of TextGen-webui and koboldcpp
Copilot hack for running local copilot without auth and proxying
Local AI Search assistant web or CLI for ollama and llama.cpp. Lightweight and easy to run, providing a Perplexity-like experience.
Secure Flutter desktop app connecting Auth0 authentication with local Ollama AI models via encrypted tunneling. Access your private AI instances remotely while keeping data on your hardware.
Full featured demo application for OllamaSharp
Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
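As a sketch of how such a tool might work, assuming the llama-cpp-python bindings and a local Mistral GGUF file; the model path, prompt wording, and yes/no filtering are illustrative, not the project's implementation:

import sys
from llama_cpp import Llama

# Hypothetical local model path; any instruct-tuned Mistral GGUF would do.
llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf", verbose=False)

question, path = sys.argv[1], sys.argv[2]
with open(path) as fh:
    for lineno, line in enumerate(fh, 1):
        prompt = (f"Question: {question}\nText: {line.strip()}\n"
                  "Reply with only yes or no: does the text help answer the question?\nAnswer:")
        reply = llm(prompt, max_tokens=3)["choices"][0]["text"]
        if "yes" in reply.lower():
            print(f"{lineno}: {line.rstrip()}")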
A local and uncensored AI entity.
AubAI brings you on-device gen-AI capabilities, including offline text generation and more, directly within your app.
LLM RAG application with cross-encoder re-ranking for YouTube videos 🎥
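The re-ranking step itself is only a few lines. A hedged sketch, assuming the sentence-transformers package and a public ms-marco cross-encoder checkpoint; the query and candidate chunks are made up for illustration:

from sentence_transformers import CrossEncoder

query = "How does the video define retrieval-augmented generation?"
retrieved_chunks = [
    "RAG pairs a retriever with a generator so answers stay grounded in source documents.",
    "The video opens with channel announcements and a sponsor message.",
    "Cross-encoders score each query-document pair jointly, trading speed for accuracy.",
]

# Score every (query, chunk) pair jointly, then print chunks best-first.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, chunk) for chunk in retrieved_chunks])
for score, chunk in sorted(zip(scores, retrieved_chunks), reverse=True):
    print(f"{score:.3f}  {chunk}")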