Input text from speech in any Linux window, the lean, fast and accurate way, using whisper.cpp OFFLINE. Speak with local LLMs via llama.cpp.
Updated Jul 25, 2025 - Shell
The Ollama Toolkit is a collection of powerful tools designed to enhance your experience with the Ollama project, an open-source framework for deploying and scaling machine learning models. Think of it as your one-stop shop for streamlining workflows and unlocking the full potential of Ollama!
This project is based on llama.cpp and compiles only the RPC server, along with the helper utilities that run in RPC-client mode, which are needed for distributed inference of Large Language Models (LLMs) and embedding models converted to the GGUF format.
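Upstream llama.cpp exposes this distributed mode through an `rpc-server` binary (built with `GGML_RPC` enabled) and a `--rpc` flag on the client tools. A minimal sketch of wiring workers to a client follows; the addresses and model path are placeholders, not taken from the repo:

```shell
#!/usr/bin/env bash
# Each worker machine runs an RPC server, e.g.:
#   rpc-server --host 0.0.0.0 --port 50052
# The client then lists all workers in a single comma-separated --rpc argument.

# Join "host:port" entries with commas, as llama.cpp's --rpc flag expects.
join_rpc_hosts() {
  local IFS=,
  echo "$*"
}

workers=(192.168.1.10:50052 192.168.1.11:50052)  # placeholder addresses
echo "llama-cli -m model.gguf --rpc $(join_rpc_hosts "${workers[@]}") -p 'Hello'"
```

The client splits the model's layers across the listed workers, so each machine only needs enough memory for its share.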
Zsh-centric command-line interface for interacting with local Large Language Models (LLMs). Chat directly from the command line, with conversation context carried across separate invocations.
Runpod-LLM provides ready-to-use container scripts for running large language models (LLMs) easily on RunPod.
A fully auto configured, self-hosted local AI & database stack on Debian WSL2.
Containerized LLM for any use-case big or small
A Bash script that automatically launches llama-server, detects available .gguf models, and selects GPU layers based on your free VRAM.
Minimal GUI to invoke Llama.cpp
A simple Bash script to make running Llama.cpp simpler and easier to deploy across various models and configurations
A cross-platform template for running and managing llama-swap with pre-configured scripts and OS detection. Easily deploy, control, and interact with multiple LLM models on macOS, Linux, and Windows using llama.cpp or any OpenAI-compatible server.
Scripts for locally building, installing and using llama.cpp
Install latest llama.cpp, whisper.cpp, and llama-swap on macOS in one command: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/huajiejin/ggml-tools-installer/main/install_all.sh)"