-
llama-cpp-2
llama.cpp bindings for Rust
-
gbnf
working with GBNF
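GBNF (GGML BNF) is llama.cpp's grammar format for constraining model output to a formal syntax. As a rough illustration of what such a grammar looks like (a hypothetical example, not taken from the gbnf crate's documentation), this forces the model to emit a small JSON object:

```gbnf
# Only {"answer": "yes"} or {"answer": "no"} can be generated
root   ::= "{" ws "\"answer\":" ws answer ws "}"
answer ::= "\"yes\"" | "\"no\""
ws     ::= [ \t\n]*
```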
-
gguf-utils
handling gguf files
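GGUF is the single-file model container that these crates read and write. As a sketch of what such tooling has to parse, the fixed header fields per the GGUF spec are a 4-byte magic, a u32 version, and two u64 counts, all little-endian. This std-only example builds a fabricated header in memory rather than reading a real model file:

```rust
// Parse the fixed-size GGUF header: magic "GGUF", u32 version,
// u64 tensor_count, u64 metadata_kv_count (all little-endian).
fn parse_gguf_header(bytes: &[u8]) -> Option<(u32, u64, u64)> {
    if bytes.len() < 24 || &bytes[0..4] != b"GGUF" {
        return None;
    }
    let version = u32::from_le_bytes(bytes[4..8].try_into().ok()?);
    let tensor_count = u64::from_le_bytes(bytes[8..16].try_into().ok()?);
    let metadata_kv_count = u64::from_le_bytes(bytes[16..24].try_into().ok()?);
    Some((version, tensor_count, metadata_kv_count))
}

fn main() {
    // Fabricated header values, for illustration only.
    let mut header = Vec::new();
    header.extend_from_slice(b"GGUF");
    header.extend_from_slice(&3u32.to_le_bytes());   // version 3
    header.extend_from_slice(&201u64.to_le_bytes()); // tensor count
    header.extend_from_slice(&19u64.to_le_bytes());  // metadata kv count
    let (version, tensors, kvs) = parse_gguf_header(&header).unwrap();
    println!("GGUF v{version}: {tensors} tensors, {kvs} metadata pairs");
}
```

After the header comes the metadata key/value section and the tensor info table, which is where the crates above diverge in how much structure they expose.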
-
deepthought
Functional wrapper around llama.cpp with Rust dynamic datatypes and vector-store support for creating RAG applications

-
tenere
TUI interface for LLMs written in Rust
-
llama-cpp-sys-2
Low Level Bindings to llama.cpp
-
helios-engine
A powerful and flexible Rust framework for building LLM-powered agents with tool support, both locally and online
-
toktrie_hf_tokenizers
HuggingFace tokenizers library support for toktrie and llguidance
-
ggus
GGUF in Rust🦀
-
llm_client
easiest Rust interface for local LLMs
-
ggufy
Unified GGUF wrapper for llama.cpp and Ollama
-
kalosm-streams
A set of streams for pretrained models in Kalosm
-
paddler
Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
-
llama-cpp-4
llama.cpp bindings for Rust
-
rower
Stateful load balancer custom-tailored for llama.cpp and focused on simplicity, forked from distantmagic/paddler
-
llama_cpp
High-level bindings to llama.cpp with a focus on just being really, really easy to use
-
ggml-quants
GGML-defined quantized data types and their quant/dequant algorithms
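To give a sense of what these quantized types are, here is an illustrative std-only reimplementation of GGML's Q8_0 scheme (not this crate's API): values are grouped into blocks of 32, each block storing one scale d = max(|x|)/127 plus 32 signed bytes q, with x ≈ q · d. GGML stores d as f16; it is kept as f32 here for simplicity:

```rust
// Block size for Q8_0 in GGML.
const QK8_0: usize = 32;

// Quantize one block: compute the per-block scale, then round each
// value to a signed byte in [-127, 127].
fn quantize_q8_0(block: &[f32; QK8_0]) -> (f32, [i8; QK8_0]) {
    let amax = block.iter().fold(0.0f32, |m, &v| m.max(v.abs()));
    let d = amax / 127.0;
    let id = if d == 0.0 { 0.0 } else { 1.0 / d };
    let mut q = [0i8; QK8_0];
    for (qi, &x) in q.iter_mut().zip(block) {
        *qi = (x * id).round() as i8;
    }
    (d, q)
}

// Dequantize: multiply each stored byte by the block scale.
fn dequantize_q8_0(d: f32, q: &[i8; QK8_0]) -> [f32; QK8_0] {
    let mut out = [0.0f32; QK8_0];
    for (o, &qi) in out.iter_mut().zip(q) {
        *o = qi as f32 * d;
    }
    out
}

fn main() {
    let mut xs = [0.0f32; QK8_0];
    for (i, x) in xs.iter_mut().enumerate() {
        *x = (i as f32 - 16.0) / 4.0; // sample values in [-4.0, 3.75]
    }
    let (d, q) = quantize_q8_0(&xs);
    let ys = dequantize_q8_0(d, &q);
    let max_err = xs
        .iter()
        .zip(&ys)
        .map(|(a, b)| (a - b).abs())
        .fold(0.0f32, f32::max);
    println!("scale = {d:.5}, max round-trip error = {max_err:.5}");
}
```

The round-trip error is bounded by half the block scale, which is why Q8_0 is close to lossless for well-scaled activations.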
-
epistemology
A clear way of hosting llama.cpp as a private HTTP API
-
toktrie_hf_downloader
HuggingFace Hub download library support for toktrie and llguidance
-
drama_llama
language modeling and text generation
-
toktrie_tiktoken
HuggingFace tokenizers library support for toktrie and llguidance
-
shimmy-llama-cpp-2
llama.cpp bindings for Rust with MoE CPU offloading support
-
fellhorn-llama-cpp-2
llama.cpp bindings for Rust
-
kalosm-common
Helpers for kalosm downloads and candle utilities
-
lmcpp
Rust bindings for llama.cpp's server with managed toolchain, typed endpoints, and UDS/HTTP support
-
llama-cpp-sys-4
Low Level Bindings to llama.cpp
-
lsp-ai
open-source language server that serves as a backend for AI-powered functionality, designed to assist and empower software engineers, not replace them
-
hayride-llama-rs-sys
Hayride llama.cpp rust bindings
-
kproc-llm
Knowledge Processing library, using LLMs
-
fellhorn-llama-cpp-sys-2
Low Level Bindings to llama.cpp
-
rmistral
interface for Mistral models
-
shimmy-llama-cpp-sys-2
Low Level Bindings to llama.cpp with MoE CPU offloading support
-
simple_llama
Run llama.cpp in Rust; based on llama-cpp-2
-
rs-llama-cpp
Automated Rust bindings generation for LLaMA.cpp
-
llama-cpp-sys-3
llama.cpp bindings
-
thalamus
A deep learning mesh node server platform for Linux/macOS/Unix
-
crabml
The crabml core package
-
icebreaker
A local AI chat app powered by 🦀Rust, 🧊iced, 🤗Hugging Face, and 🦙llama.cpp
-
crabml-cli
The crabml CLI
-
llm_devices
Device management and build system for LLM inference
-
babichjacob-llama-cpp-2
llama.cpp bindings for Rust
-
llm_interface
backend for the llm_client crate
-
localllm
Qwen2 model library built on llama.cpp
-
babichjacob-llama-cpp-sys-2
Low Level Bindings to llama.cpp
-
htmx_llamacpp_server
server for the htmx_llamacpp project
-
llama_link
A llama.cpp server interface
-
llm-neox
GPT-NeoX for the llm ecosystem
-
llm-bloom
BLOOM (BigScience Large Open-science Open-access Multilingual Language Model) for the llm ecosystem
-
llm-gpt2
GPT-2 for the llm ecosystem
-
llm-gptj
GPT-J for the llm ecosystem
-
llm-base
The base for llm; provides common structure for model implementations. Not intended for use by end-users.
-
llama-sys
bindings for llama.cpp
-
utils-tree-sitter
Utils for working with splitter-tree-sitter
-
llm-cli
A CLI for running inference on supported Large Language Models. Powered by the llm library.
-
infa-gguf
A minimal Rust machine learning library (work in progress)
-
infa
A minimal Rust machine learning library (work in progress)
-
infa-core
A minimal Rust machine learning library (work in progress)
-
infa-impl
A minimal Rust machine learning library (work in progress)