-
toon-format
Token-Oriented Object Notation (TOON) - a token-efficient JSON alternative for LLM prompts
-
error-toon
Compress verbose browser errors for LLM consumption. Save 70-90% tokens.
-
kalosm-sample
A common interface for token sampling and helpers for structured LLM sampling
-
infiniloom
High-performance repository context generator for LLMs - Claude, GPT-4, Gemini optimized
-
mxp
MXP (Mesh eXchange Protocol) - High-performance protocol for agent-to-agent communication
-
hedl-core
Core parser and data model for HEDL format
-
clay-cli
An AI-powered developer assistant and TUI to streamline your git and project workflows
-
unifictl
CLI for UniFi Site Manager (API v1/EA)
-
mdstream
Streaming-first Markdown middleware for LLM output (committed + pending blocks, render-agnostic)
-
toon
Token-Oriented Object Notation – a token-efficient JSON alternative for LLM prompts
-
chace
CHamal's AutoComplete Engine - An LLM-based code completion engine
-
iron_cli
Command-line interface for Iron Cage agent management
-
regexr
A high-performance regex engine built from scratch with JIT compilation and SIMD acceleration
-
toktrie
LLM Token Trie library
-
lnmp-llb
LNMP-LLM Bridge Layer - Optimization layer for LLM prompt visibility and token efficiency
-
toon-rust
Token-Oriented Object Notation (TOON) - JSON for LLM prompts at half the tokens. Rust implementation.
-
flyllm
An abstraction layer unifying LLM backends, with load balancing
-
bundle_repo
Pack a local or remote Git repository into XML for LLM consumption
-
ort-rs
Object Record Table - a CSV-like structured data format with native object and array support
-
toktop
A terminal-based LLM cost and usage monitor
-
llm-samplers
Token samplers for large language models
-
zed_llm_client
A client for interacting with the Zed LLM API
-
pyscription
Token-efficient README generator that parses Python APIs and docstrings
-
loctok
Count LOC (lines of code) & TOK (LLM tokens), fast
-
toktrie_hf_downloader
HuggingFace Hub download library support for toktrie and llguidance
-
zeta-salience
Salience analysis engine for intelligent token prioritization in LLM inference
-
quicktok
Minimal, fast, multi-threaded implementation of the Byte Pair Encoding (BPE) for LLM tokenization
-
asterai
Client library for asterai
-
llm-weaver
Manage long conversations with any LLM
-
llm_prompt
Low Level Prompt System for API LLMs and local LLMs
-
language-barrier-core
Abstractions for Large Language Models
-
alith-prompt
LLM Prompting
-
llm-observatory-core
Core types, traits, and utilities for LLM Observatory
-
rtoon
Token-Oriented Object Notation - A compact, human-readable format for LLM data with 30-60% fewer tokens than JSON
-
agent-chain-core
Core library for agent-chain crate
-
catgrad-llm
Tools for LLMs built with catgrad
-
llm-sentinel-core
Core types, error handling, and configuration for LLM-Sentinel anomaly detection system
-
gpt4
A CLI to interact with the OpenAI GPT-4 API
-
wordchuck
LLM Tokenizer Library