Stars
OceanGym: A Benchmark Environment for Underwater Embodied Agents
We are committed to open-sourcing quantitative knowledge, aiming to bridge the information gap between the domestic and international quantitative finance industries.
An LLM agent that conducts deep research (local and web) on any given topic and generates a long report with citations.
verl: Volcano Engine Reinforcement Learning for LLMs
🌐 Make websites accessible for AI agents. Automate tasks online with ease.
[WSDM 2026] LookAhead Tuning: Safer Language Models via Partial Answer Previews
[WWW 2025] A Dockerized Schema-Guided LLM Agent-based Knowledge Extraction System.
A high-throughput and memory-efficient inference and serving engine for LLMs
🚀🤖 Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper. Don't be shy, join here: https://discord.gg/jP8KfhDhyN
Fully open reproduction of DeepSeek-R1
DSPy: The framework for programming—not prompting—language models
🦜🔗 The platform for reliable agents.
[EMNLP 2025] Circuit-Aware Editing Enables Generalizable Knowledge Learners
Train transformer language models with reinforcement learning.
[EMNLP 2025] OmniThink: Expanding Knowledge Boundaries in Machine Writing through Thinking
Tongyi Deep Research, the Leading Open-source Deep Research Agent
An Open-source Framework for Data-centric, Self-evolving Autonomous Language Agents
An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
[ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
[ACL 2024] DINER: Debiasing Aspect-based Sentiment Analysis with Multi-variable Causal Inference
[ICLR 2025] Benchmarking Agentic Workflow Generation
Official style files for papers submitted to venues of the Association for Computational Linguistics
A library for mechanistic interpretability of GPT-style language models
[EMNLP 2024] To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models
OneEdit: A Neural-Symbolic Collaborative Knowledge Editing System.
Must-read Papers on Knowledge Editing for Large Language Models.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers