Stars
A Geometric Coincidence: How a Deterministic Global Phase Model Reproduces the CHSH Value Through Trigonometric Averaging
Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙
This module offers a practical introduction to the principles of 2D mapping and localization for a mobile robot equipped with an LD06 Lidar and an ESP32 microcontroller.
XLeRobot: Practical Dual-Arm Mobile Home Robot for $660
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
MCP server paired with a browser extension that enables AI agents to control the user's browser.
A TypeScript framework for building MCP servers.
Open source codebase powering the HuggingChat app
💫 Toolkit to help you get started with Spec-Driven Development
Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.
Trae Agent is an LLM-based agent for general purpose software engineering tasks.
The 100 line AI agent that solves GitHub issues or helps you in your command line. Radically simple, no huge configs, no giant monorepo—but scores >74% on SWE-bench verified!
Code at the speed of thought – Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.
Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audio codec.
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows.
Lightweight coding agent that runs in your terminal
A new lightweight, hybrid routing mesh protocol for packet radios
An open-source AI agent that lives in your terminal.
The SmythOS Runtime Environment (SRE) is an open-source, cloud-native runtime for agentic AI. Secure, modular, and production-ready, it lets developers build, run, and manage intelligent agents across…
Share a single keyboard and mouse between multiple computers.
allozaur / llama.cpp
Forked from ggml-org/llama.cpp: LLM inference in C/C++
LostRuins / koboldcpp
Forked from ggml-org/llama.cpp: Run GGUF models easily with a KoboldAI UI. One File. Zero Install.
Self-hosted game stream host for Moonlight.
fastllm is a high-performance LLM inference library with no backend dependencies. It supports tensor-parallel inference for dense models and mixed-mode inference for MoE models; any GPU with 10 GB+ of VRAM can run full DeepSeek. A dual-socket 9004/9005 server plus a single GPU can serve the original full-precision DeepSeek model at 20 tps per request; the INT4-quantized model reaches 30 tps per request and 60+ tps under concurrency.
Terminal chat client compatible with Meshtastic firmware