The easiest way to use Ollama in .NET (a short usage sketch follows this list)
Reliable model swapping for any local OpenAI-compatible server (llama.cpp, vLLM, etc.); a minimal client sketch against such a server also follows the list
✨ Kubectl plugin to create manifests with LLMs
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
[NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
Social and customizable AI writing assistant! ✍️
LLM RAG application with cross-encoder re-ranking for YouTube videos 🎥
AubAI brings you on-device gen-AI capabilities, including offline text generation and more, directly within your app.
A local and uncensored AI entity.
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
Full-featured demo application for OllamaSharp
Unified management and routing for llama.cpp, MLX, and vLLM models, with a web dashboard.
Local AI search assistant (web or CLI) for Ollama and llama.cpp. Lightweight and easy to run, providing a Perplexity-like experience.
Secure Flutter desktop app connecting Auth0 authentication with local Ollama AI models via encrypted tunneling. Access your private AI instances remotely while keeping data on your hardware.
📚 LocalLLaMA Archive — Community-powered static archive for r/LocalLLaMA
TransFire is a simple tool that lets you use your locally running LLMs while away from home, without requiring port forwarding.
Run GGUF LLM models in the latest versions of TextGen-webui and koboldcpp.
Copilot hack for running local copilot without auth and proxying
Thunderbird mail client extension that summarizes received emails via a locally run LLM. Early development.
Use your open-source local model from the terminal.
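For the Ollama-in-.NET entry at the top of this list, here is a minimal usage sketch, assuming the OllamaSharp NuGet package, an Ollama server on its default port 11434, and a llama3 model that has already been pulled; adjust the model name to whatever you have installed.

```csharp
using OllamaSharp;

// Connect to a local Ollama instance (default port assumed).
var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
{
    SelectedModel = "llama3" // assumption: this model is pulled locally
};

// Stream a chat response token by token.
var chat = new Chat(ollama);
await foreach (var token in chat.SendAsync("Why is the sky blue?"))
    Console.Write(token);
```

Streaming the reply token by token mirrors how the Ollama HTTP API itself returns output, so long responses print as they are generated instead of arriving in one block.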
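Several entries above (model swapping, routing, local search) target any local OpenAI-compatible server such as llama.cpp's llama-server or vLLM. The sketch below shows the plain /v1/chat/completions call those tools build on; the port and model name are assumptions, so substitute whatever your server actually exposes.

```csharp
using System.Net.Http.Json;
using System.Text.Json;

// Any local OpenAI-compatible server works here; llama.cpp's llama-server
// defaults to port 8080, vLLM to 8000. Adjust the base address as needed.
using var http = new HttpClient { BaseAddress = new Uri("http://localhost:8080") };

var payload = new
{
    model = "local-model", // placeholder; use a model your server serves
    messages = new[] { new { role = "user", content = "Hello!" } }
};

var response = await http.PostAsJsonAsync("/v1/chat/completions", payload);
response.EnsureSuccessStatusCode();

// Pull the assistant's reply out of the standard OpenAI response shape.
using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
Console.WriteLine(doc.RootElement
    .GetProperty("choices")[0]
    .GetProperty("message")
    .GetProperty("content")
    .GetString());
```

Because every server in this list speaks the same request/response shape, swapping backends is just a matter of changing the base address and model name.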