Professional Antigravity Account Manager & Switcher. One-click seamless account switching for Antigravity Tools. Built with Tauri v2 + React (Rust).
The First Systematic Vibe Coding Open-Source Tutorial | From Zero to Full-Stack, Empowering Everyone to Build Products with AI | Live at: www.vibevibe.cn
📚 "Building Agents from Scratch": a from-the-ground-up tutorial on agent principles and practice
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
Debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards.
LLM Council works together to answer your hardest questions
This is the homepage of a new book entitled "Mathematical Foundations of Reinforcement Learning."
Awesome-LLM: a curated list of Large Language Model resources
Large Language Model (LLM) Systems Paper List
Lightweight coding agent that runs in your terminal
Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible.
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithms with minimal intrusion.
A Flexible Framework for Experiencing Heterogeneous LLM Inference/Fine-tune Optimizations
An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO, GRPO, REINFORCE++, TIS, vLLM, dynamic sampling, async agentic RL)
Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B
🐫 CAMEL: The first and the best multi-agent framework. Finding the Scaling Law of Agents. https://www.camel-ai.org
Stable Diffusion web UI
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
A high-throughput and memory-efficient inference and serving engine for LLMs
A simple technical explainer project focused on interesting, cutting-edge technology concepts and principles. Each article is written to be read in under 5 minutes.