
Reasoning Beyond Language: A Comprehensive Survey on Latent Chain-of-Thought Reasoning

Xinghao Chen1,2, Anhao Zhao2, Heming Xia1, Xuan Lu2, Hanlin Wang1,
Yanjun Chen1,2, Wei Zhang2, Jian Wang1, Wenjie Li1, Xiaoyu Shen2
1Department of Computing, The Hong Kong Polytechnic University
2Ningbo Digital Twin Institute, Eastern Institute of Technology, Ningbo, China

Intro

This repository contains a regularly updated paper list for Latent CoT Reasoning.


Whereof one cannot speak, thereof one must be silent. -- Ludwig Wittgenstein

Reasoning in latent space shifts how AI models think: instead of verbalizing every intermediate step as language tokens, the thought process is represented in a more abstract, non-linguistic space. Just as humans often think without words, latent space allows for more flexible and efficient reasoning.

Key advantages include:

  1. Richer Thought Representation: Latent space captures complex, non-verbal thoughts that language alone cannot express.
  2. Lower Latency: Latent representations carry more information per step, reducing the number of token-decoding steps and speeding up inference.

This approach brings AI closer to human-like cognition, enabling faster, more flexible, and more capable models for real-world tasks.
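
To make the idea concrete, below is a minimal sketch (ours, not code from any paper listed here) of the continuous-thought pattern popularized by Coconut (Hao et al., 2024, listed under Continuous Tokens): for a few silent steps the model's last hidden state is fed back as the next input embedding, and a language token is decoded only for the final answer. The tiny GRU model, its dimensions, and all names are illustrative assumptions, not any method's actual implementation.

```python
import torch
import torch.nn as nn


class TinyLatentReasoner(nn.Module):
    """Toy model: reason for a few steps in hidden space, then decode once."""

    def __init__(self, vocab_size=100, d_model=64, n_latent_steps=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)
        self.n_latent_steps = n_latent_steps

    def forward(self, input_ids):
        x = self.embed(input_ids)        # (batch, seq, d_model): encode the prompt
        out, h = self.rnn(x)             # h: (1, batch, d_model)
        # Latent CoT: for a few "silent" steps, feed the last output state back
        # in as the next input embedding. No token is sampled or decoded here.
        step_in = out[:, -1:, :]         # (batch, 1, d_model)
        for _ in range(self.n_latent_steps):
            step_out, h = self.rnn(step_in, h)
            step_in = step_out           # the "continuous thought"
        # Decode language only for the final answer.
        return self.head(step_out[:, -1, :])


model = TinyLatentReasoner()
prompt = torch.randint(0, 100, (2, 8))   # batch of 2 prompts, length 8
logits = model(prompt)
print(logits.shape)                      # torch.Size([2, 100])
```

Because no token is sampled during the latent steps, the loop stays fully differentiable, which is what allows continuous-token methods to train the silent steps end-to-end.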

Citation

If you find our survey useful for your research, please consider citing the following paper:

@article{eit2025latentcot,
      title={Reasoning Beyond Language: A Comprehensive Survey on Latent Chain-of-Thought Reasoning}, 
      author={Xinghao Chen and Anhao Zhao and Heming Xia and Xuan Lu and Hanlin Wang and Yanjun Chen and Wei Zhang and Jian Wang and Wenjie Li and Xiaoyu Shen},
      year={2025},
      eprint={2505.16782},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.16782}, 
}

Updates

  • 2025-05-22: 📝 The survey is now available on arXiv!
  • 2025-02-16: 🚀 Latent CoT Repo launched!

Keywords Convention

Each paper entry may carry badges indicating the method's abbreviation, the publication venue (conference), and its main features.

Papers

Token-wise Strategies

Discrete Tokens

  • Think before you speak: Training language models with pause tokens
    Sachin Goyal, Ziwei Ji, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar, Vaishnavh Nagarajan. [pdf], 2023.10.
  • Guiding Language Model Reasoning with Planning Tokens
    Xinyi Wang, Lucas Caccia, Oleksiy Ostapenko, Xingdi Yuan, William Yang Wang, Alessandro Sordoni. [pdf], [code], 2023.10.
  • Thinking Tokens for Language Modeling
    David Herel, Tomas Mikolov. [pdf], 2024.05.
  • Let's think dot by dot: Hidden computation in transformer language models
    Jacob Pfau, William Merrill, Samuel R. Bowman. [pdf], [code], 2024.04.
  • Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
    Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, Noah D. Goodman. [pdf], 2024.03.
  • Reasoning to Learn from Latent Thoughts
    Yangjun Ruan, Neil Band, Chris J. Maddison, Tatsunori Hashimoto. [pdf], [code], 2025.03.
  • Mining Hidden Thoughts from Texts: Evaluating Continual Pretraining with Synthetic Data for LLM Reasoning
    Yoichi Ishibashi, Taro Yano, Masafumi Oyamada. [pdf], 2025.03.
  • Disentangling Memory and Reasoning Ability in Large Language Models
    Mingyu Jin, Weidi Luo, Sitao Cheng, Xinyi Wang, Wenyue Hua, Ruixiang Tang, William Yang Wang, Yongfeng Zhang. [pdf], [code], 2024.11.
  • Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning
    DiJia Su, Hanlin Zhu, Yingchen Xu, Jiantao Jiao, Yuandong Tian, Qinqing Zheng. [pdf], 2025.02.
  • Latent Preference Coding: Aligning Large Language Models via Discrete Latent Codes
    Zhuocheng Gong, Jian Guan, Wei Wu, Huishuai Zhang, Dongyan Zhao. [pdf], 2025.02.
  • Efficient Pretraining Length Scaling
    Bohong Wu, Shen Yan, Sijun Zhang, Jianqiao Lu, Yutao Zeng, Ya Wang, Xun Zhou. [pdf], 2025.04.
  • Fast Thinking for Large Language Models
    Haoyu Zheng, Zhuonan Wang, Yuqian Yuan, Tianwei Lin, Wenqiao Zhang, Zheqi Lv, Juncheng Li, Siliang Tang, Yueting Zhuang, Hongyang He. [pdf], 2025.09.

Continuous Tokens

  • Training Large Language Models to Reason in a Continuous Latent Space
    Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, Yuandong Tian. [pdf], [code], 2024.12.
  • Compressed Chain of Thought: Efficient Reasoning Through Dense Representations
    Jeffrey Cheng, Benjamin Van Durme. [pdf], 2024.12.
  • Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding
    Tianqiao Liu, Zui Chen, Zitao Liu, Mi Tian, Weiqi Luo. [pdf], 2024.09.
  • LightThinker: Thinking Step-by-Step Compression
    Jintian Zhang, Yuqi Zhu, Mengshu Sun, Yujie Luo, Shuofei Qiao, Lun Du, Da Zheng, Huajun Chen, Ningyu Zhang. [pdf], 2025.02.
  • CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation
    Zhenyi Shen, Hanqi Yan, Linhai Zhang, Zhanghao Hu, Yali Du, Yulan He. [pdf], 2025.02.
  • SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs
    Yige Xu, Xu Guo, Zhiwei Zeng, Chunyan Miao. [pdf], 2025.02.
  • SoftCoT++: Test-Time Scaling with Soft Chain-of-Thought Reasoning
    Yige Xu, Xu Guo, Zhiwei Zeng, Chunyan Miao. [pdf], 2025.05.
  • LLM Pretraining with Continuous Concepts
    Jihoon Tack, Jack Lanchantin, Jane Yu, Andrew Cohen, Ilia Kulikov, Janice Lan, Shibo Hao, Yuandong Tian, Jason Weston, Xian Li. [pdf], [code], 2025.02.
  • Soft Thinking: Unlocking the Reasoning Potential of LLMs in Continuous Concept Space
    Zhen Zhang, Xuehai He, Weixiang Yan, Ao Shen, Chenyang Zhao, Shuohang Wang, Yelong Shen, Xin Eric Wang. [pdf], [code], 2025.05.
  • Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains
    Wenhui Tan, Jiaze Li, Jianzhong Ju, Zhenbo Luo, Jian Luan, Ruihua Song. [pdf], [code], 2025.05.
  • Hybrid Latent Reasoning via Reinforcement Learning
    Zhenrui Yue, Bowen Jin, Huimin Zeng, Honglei Zhuang, Zhen Qin, Jinsung Yoon, Lanyu Shang, Jiawei Han, Dong Wang. [pdf], [code], 2025.05.
  • Seek in the Dark: Reasoning via Test-Time Instance-Level Policy Gradient in Latent Space
    Zhenrui Yue, Bowen Jin, Huimin Zeng, Honglei Zhuang, Zhen Qin, Jinsung Yoon, Lanyu Shang, Jiawei Han, Dong Wang. [pdf], [code], 2025.05.
  • Enhancing Latent Computation in Transformers with Latent Tokens
    Yuchang Sun, Yanxi Chen, Yaliang Li, Bolin Ding. [pdf], [code], 2025.05.
  • System-1.5 Reasoning: Traversal in Language and Latent Spaces with Dynamic Shortcuts
    Xiaoqiang Wang, Suyuchen Wang, Yun Zhu, Bang Liu. [pdf], 2025.05.
  • Text Generation Beyond Discrete Token Sampling
    Yufan Zhuang, Liyuan Liu, Chandan Singh, Jingbo Shang, Jianfeng Gao. [pdf], 2025.05.
  • Efficient Post-Training Refinement of Latent Reasoning in Large Language Models
    Xinyuan Wang, Dongjie Wang, Wangyang Ying, Haoyue Bai, Nanxu Gong, Sixun Dong, Kunpeng Liu, Yanjie Fu. [pdf], [code], 2025.06.
  • DART: Distilling Autoregressive Reasoning to Silent Thought
    Nan Jiang, Ziming Wu, De-Chuan Zhan, Fuming Lai, Shaobing Lian. [pdf], 2025.06.
  • Parallel Continuous Chain-of-Thought with Jacobi Iteration
    Haoyi Wu, Zhihao Teng, Kewei Tu. [pdf], 2025.06.
  • LLMs are Single-threaded Reasoners: Demystifying the Working Mechanism of Soft Thinking
    Junhong Wu, Jinliang Lu, Zixuan Ren, Gangqiang Hu, Zhi Wu, Dai Dai, Hua Wu. [pdf], 2025.08.
  • SynAdapt: Learning Adaptive Reasoning in Large Language Models via Synthetic Continuous Chain-of-Thought
    Jianwei Wang, Ziming Wu, Fuming Lai, Shaobing Lian, Ziqian Zeng. [pdf], 2025.08.
  • LTA-thinker: Latent Thought-Augmented Training Framework for Large Language Models on Complex Reasoning
    Jiaqi Wang, Binquan Ji, Haibo Luo, Yiyang Qi, Ruiting Li, Huiyan Wang, Yuantao Han, Cangyi Yang, Jiaxu Zhang, Feiliang Ren. [pdf], 2025.09.
  • Soft Tokens, Hard Truths
    Natasha Butt, Ariel Kwiatkowski, Ismail Labiad, Julia Kempe, Yann Ollivier. [pdf], 2025.09.
  • SIM-CoT: Supervised Implicit Chain-of-Thought
    Xilin Wei, Xiaoran Liu, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Jiaqi Wang, Xipeng Qiu, Dahua Lin. [pdf], [code], 2025.09.
  • R-Capsule: Compressing High-Level Plans for Efficient Large Language Model Reasoning
    Hongyu Shan, Mingyang Song, Chang Dai, Di Liang, Han Chen. [pdf], 2025.09.
  • Pretraining LLM with Latent Thoughts in Continuous Space
    Boyi Zeng, He Li, Shixiang Song, Yixuan Wang, Ziwei He, Xinbing Wang, Zhouhan Lin. [pdf], 2025.09. (Pythia architecture)
  • MARCOS: Deep Thinking by Markov Chain of Continuous Thoughts
    Jiayu Liu, Zhenya Huang, Anya Sims, Enhong Chen, Yee Whye Teh, Ning Miao. [pdf], 2025.09.
  • Latent Thinking Optimization: Your Latent Reasoning Language Model Secretly Encodes Reward Signals in its Latent Thoughts
    Hanwen Du, Yuxin Dong, Xia Ning. [pdf], 2025.09.
  • LatentEvolve: Self-Evolving Test-Time Scaling in Latent Space
    Guibin Zhang, Fanci Meng, Guancheng Wan, Zherui Li, Kun Wang, Zhenfei Yin, Lei Bai, Shuicheng Yan. [pdf], [code], 2025.09.
  • SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs
    Dachuan Shi, Abedelkadir Asi, Keying Li, Xiangchi Yuan, Leyan Pan, Wenke Lee, Wen Xiao. [pdf], [code], 2025.10.
  • Parallel Test-Time Scaling for Latent Reasoning Models
    Runyang You, Yongqi Li, Meng Liu, Wenjie Wang, Liqiang Nie, Wenjie Li. [pdf], [code], 2025.10.
  • Thinking on the Fly: Test-Time Reasoning Enhancement via Latent Thought Policy Optimization
    Wengao Ye, Yan Liang, Lianlei Shan. [pdf], [code], 2025.10.
  • Towards Inference-time Scaling for Continuous Space Reasoning
    Minghan Wang, Thuy-Trang Vu, Ehsan Shareghi, Gholamreza Haffari. [pdf], 2025.10.
  • Latent Reasoning in LLMs as a Vocabulary-Space Superposition
    Jingcheng Deng, Liang Pang, Zihao Wei, Shichen Xu, Zenghao Duan, Kun Xu, Yang Song, Huawei Shen, Xueqi Cheng. [pdf], [code], 2025.10.
  • LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning
    Haoqiang Kang, Yizhe Zhang, Nikki Lijing Kuang, Nicklas Majamaki, Navdeep Jaitly, Yi-An Ma, Lianhui Qin. [pdf], [code], 2025.10.
  • SemCoT: Accelerating Chain-of-Thought Reasoning through Semantically-Aligned Implicit Tokens
    Yinhan He, Wendy Zheng, Yaochen Zhu, Zaiyi Zheng, Lin Su, Sriram Vasudevan, Qi Guo, Liangjie Hong, Jundong Li. [pdf], [code], 2025.10.
  • SofT-GRPO: Surpassing Discrete-Token LLM Reinforcement Learning via Gumbel-Reparameterized Soft-Thinking Policy Optimization
    Zhi Zheng, Wee Sun Lee. [pdf], [code], 2025.11.
  • Think Consistently, Reason Efficiently: Energy-Based Calibration for Implicit Chain-of-Thought
    Zhikang Chen, Sen Cui, Deheng Ye, Yu Zhang, Yatao Bian, Tingting Zhu. [pdf], 2025.11.

Internal Mechanisms

Structural CoT

  • CoTFormer: A Chain-of-Thought Driven Architecture with Budget-Adaptive Computation Cost at Inference
    Amirkeivan Mohtashami, Matteo Pagliardini, Martin Jaggi. [pdf], 2024.08.
  • Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
    Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Tom Goldstein. [pdf], [code], [model], 2025.02.
  • Enhancing Auto-regressive Chain-of-Thought through Loop-Aligned Reasoning
    Qifan Yu, Zhenyu He, Sijie Li, Xun Zhou, Jun Zhang, Jingjing Xu, Di He. [pdf], [code], 2025.02.
  • Inner Thinking Transformer: Leveraging Dynamic Depth Scaling to Foster Adaptive Internal Thinking
    Yilong Chen, Junyuan Shang, Zhenyu Zhang, Yanxi Xie, Jiawei Sheng, Tingwen Liu, Shuohuan Wang, Yu Sun, Hua Wu, Haifeng Wang. [pdf], 2025.02.
  • Reasoning with Latent Thoughts: On the Power of Looped Transformers
    Nikunj Saunshi, Nishanth Dikkala, Zhiyuan Li, Sashank J. Reddi, Sanjiv Kumar. [pdf], 2025.01.
  • Pretraining Language Models to Ponder in Continuous Space
    Boyi Zeng, Shixiang Song, Siyuan Huang, Yixuan Wang, He Li, Ziwei He, Xinbing Wang, Zhiyu Li, Zhouhan Lin. [pdf], 2025.05.
  • The 4th Dimension for Scaling Model Size
    Ruike Zhu, Hanwen Zhang, Tianyu Shi, Chi Wang, Tianyi Zhou, Zengyi Qin. [pdf], 2025.05.
  • Hierarchical Reasoning Model
    Guan Wang, Jin Li, Yuhao Sun, Xing Chen, Changling Liu, Yue Wu, Meng Lu, Sen Song, Yasin Abbasi Yadkori. [pdf], [code], 2025.06.
  • Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs
    Ziyue Li, Yang Li, Tianyi Zhou. [pdf], 2025.07.
  • Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation
    Sangmin Bae, Yujin Kim, Reza Bayat, Sungnyun Kim, Jiyoun Ha, Tal Schuster, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Aaron Courville, Se-Young Yun. [pdf], 2025.07.
  • Less is More: Recursive Reasoning with Tiny Networks
    Alexia Jolicoeur-Martineau. [pdf], 2025.10.
  • Encode, Think, Decode: Scaling test-time reasoning with recursive latent thoughts
    Yeskendir Koishekenov, Aldo Lipani, Nicola Cancedda. [pdf], 2025.10.
  • Unlocking Out-of-Distribution Generalization in Transformers via Recursive Latent Space Reasoning
    Awni Altabaa, Siyu Chen, John Lafferty, Zhuoran Yang. [pdf], [code], 2025.10.
  • Scaling Latent Reasoning via Looped Language Models
    Rui-Jie Zhu, Zixuan Wang, Kai Hua, Tianyu Zhang, Ziniu Li, Haoran Que, Boyi Wei, Zixin Wen, Fan Yin, He Xing, Lu Li, Jiajun Shi, Kaijing Ma, Shanda Li, Taylor Kergan, Andrew Smith, Xingwei Qu, Mude Hui, Bohong Wu, Qiyang Min, Hongzhi Huang, Xun Zhou, Wei Ye, Jiaheng Liu, Jian Yang, Yunfeng Shi, Chenghua Lin, Enduo Zhao, Tianle Cai, Ge Zhang, Wenhao Huang, Yoshua Bengio, Jason Eshraghian. [pdf], [code], 2025.10.
  • Parallel Loop Transformer for Efficient Test-Time Computation Scaling
    Bohong Wu, Mengzhao Chen, Xiang Luo, Shen Yan, Qifan Yu, Fan Xia, Tianqi Zhang, Hongrui Zhan, Zheng Zhong, Xun Zhou, Siyuan Qiao, Xingyan Bin. [pdf], 2025.10.
  • LSRL: Process-Supervised GRPO on Latent Recurrent States Improves Mathematical Reasoning
    Hangliang Ren. [pdf], 2025.11.
  • Teaching Pretrained Language Models to Think Deeper with Retrofitted Recurrence
    Sean McLeish, Ang Li, John Kirchenbauer, Dayal Singh Kalra, Brian R. Bartoldson, Bhavya Kailkhura, Avi Schwarzschild, Jonas Geiping, Tom Goldstein, Micah Goldblum. [pdf], [code], 2025.11.

Representational CoT

  • Implicit Chain of Thought Reasoning via Knowledge Distillation
    Yuntian Deng, Kiran Prasad, Roland Fernandez, Paul Smolensky, Vishrav Chaudhary, Stuart Shieber. [pdf], [code], 2023.11.
  • From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step
    Yuntian Deng, Yejin Choi, Stuart Shieber. [pdf], [code], 2024.05.
  • Distilling System 2 into System 1
    Ping Yu, Jing Xu, Jason Weston, Ilia Kulikov. [pdf], 2024.06.

Analysis and Interpretability

  • On the Biology of a Large Language Model
    Anthropic. [pdf], 2025.03.
  • Jump to Conclusions: Short-Cutting Transformers with Linear Transformations
    Alexander Yom Din, Taelin Karidi, Leshem Choshen, Mor Geva. [pdf], 2023.03.
  • Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models
    Yifan Hou, Jiaoda Li, Yu Fei, Alessandro Stolfo, Wangchunshu Zhou, Guangtao Zeng, Antoine Bosselut, Mrinmaya Sachan. [pdf], 2023.10.
  • A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step Reasoning Task
    Jannik Brinkmann, Abhay Sheshadri, Victor Levoso, Paul Swoboda, Christian Bartelt. [pdf], 2024.02.
  • Do Large Language Models Latently Perform Multi-Hop Reasoning?
    Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, Sebastian Riedel. [pdf], 2024.02.
  • Understanding and Patching Compositional Reasoning in LLMs
    Zhaoyi Li, Gangwei Jiang, Hong Xie, Linqi Song, Defu Lian, Ying Wei. [pdf], 2024.02.
  • Distributional reasoning in LLMs: Parallel reasoning processes in multi-hop reasoning
    Yuval Shalev, Amir Feder, Ariel Goldstein. [pdf], 2024.06.
  • Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
    Boshi Wang, Xiang Yue, Yu Su, Huan Sun. [pdf], 2024.05.
  • Can Language Models Learn to Skip Steps?
    Tengxiao Liu, Qipeng Guo, Xiangkun Hu, Cheng Jiayang, Yue Zhang, Xipeng Qiu, Zheng Zhang. [pdf], [code], 2024.09.
  • Think-to-Talk or Talk-to-Think? When LLMs Come Up with an Answer in Multi-Step Reasoning
    Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Ana Brassard, Keisuke Sakaguchi, Kentaro Inui. [pdf], 2024.12.
  • Do LLMs Really Think Step-by-step In Implicit Reasoning?
    Yijiong Yu. [pdf], [code], 2024.11.
  • Implicit Reasoning in Transformers is Reasoning through Shortcuts
    Tianhe Lin, Jian Xie, Siyu Yuan, Deqing Yang. [pdf], 2025.03.
  • Uncovering Latent Chain of Thought Vectors in Language Models
    Jason Zhang, Scott Viteri. [pdf], 2024.09.
  • Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation
    Yiming Wang, Pei Zhang, Baosong Yang, Derek F. Wong, Rui Wang. [pdf], 2024.10.
  • Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs
    Zhipeng Yang, Junzhuo Li, Siyu Xia, Xuming Hu. [pdf], [code], 2025.05.
  • To CoT or To Loop? A Formal Comparison Between Chain-of-Thought and Looped Transformers
    Kevin Xu, Issei Sato. [pdf], 2025.05.
  • Reasoning by Superposition: A Theoretical Perspective on Chain of Continuous Thought
    Hanlin Zhu, Shibo Hao, Zhiting Hu, Jiantao Jiao, Stuart Russell, Yuandong Tian. [pdf], 2025.05.
  • Continuous Chain of Thought Enables Parallel Exploration and Reasoning
    Halil Alperen Gozeten, M. Emrullah Ildiz, Xuechen Zhang, Hrayr Harutyunyan, Ankit Singh Rawat, Samet Oymak. [pdf], 2025.05.
  • Do Language Models Use Their Depth Efficiently?
    Róbert Csordás, Christopher D. Manning, Christopher Potts. [pdf], 2025.05.
  • Language models can learn implicit multi-hop reasoning, but only if they have lots of training data
    Yuekun Yao, Yupei Du, Dawei Zhu, Michael Hahn, Alexander Koller. [pdf], 2025.05.
  • Latent Chain-of-Thought? Decoding the Depth-Recurrent Transformer
    Wenquan Lu, Yuechuan Yang, Kyle Lee, Yanshu Li, Enqi Liu. [pdf], 2025.07.
  • LLMs Have a Heart of Stone: Demystifying the Soft Thinking Ability of Large Reasoning Models
    Chünhung Wu, Jinliang Lu, Zixuan Ren, Gangqiang Hu, Zhi Wu, Dai Dai, Hua Wu. [pdf], 2025.08.
  • A Formal Comparison Between Chain-of-Thought and Latent Thought
    Kevin Xu, Issei Sato. [pdf], 2025.09.
  • Emergence of Superposition: Unveiling the Training Dynamics of Chain of Continuous Thought
    Hanlin Zhu, Shibo Hao, Zhiting Hu, Jiantao Jiao, Stuart Russell, Yuandong Tian. [pdf], 2025.09.
  • Hierarchical Reasoning Models: Perspectives and Misconceptions
    Renee Ge, Qianli Liao, Tomaso Poggio. [pdf], 2025.09.
  • Interpreting the Latent Structure of Operator Precedence in Language Models
    Dharunish Yugeswardeenoo, Harshil Nukala, Cole Blondin, Sean O Brien, Vasu Sharma, Kevin Zhu. [pdf], 2025.10.

Applications and Future Directions

  • Efficient Reasoning with Hidden Thinking
    Xuan Shen, Yizhou Wang, Xiangxi Shi, Yanzhi Wang, Pu Zhao, Jiuxiang Gu. [pdf], [code], 2025.01.
  • Learning More Effective Representations for Dense Retrieval through Deliberate Thinking Before Search
    Yifan Ji, Zhipeng Xu, Zhenghao Liu, Yukun Yan, Shi Yu, Yishan Li, Zhiyuan Liu, Yu Gu, Ge Yu, Maosong Sun. [pdf], [code], 2025.02.
  • Think Before Recommend: Unleashing the Latent Reasoning Power for Sequential Recommendation
    Jiakai Tang, Sunhao Dai, Teng Shi, Jun Xu, Xu Chen, Wen Chen, Wu Jian, Yuning Jiang. [pdf], [code], 2025.03.
  • Enhancing Non-Core Language Instruction-Following in Speech LLMs via Semi-Implicit Cross-Lingual CoT Reasoning
    Hongfei Xue, Yufeng Tang, Hexin Liu, Jun Zhang, Xuelong Geng, Lei Xie. [pdf], 2025.04.
  • Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models
    Jiacheng Ye, Shansan Gong, Liheng Chen, Lin Zheng, Jiahui Gao, Han Shi, Chuan Wu, Xin Jiang, Zhenguo Li, Wei Bi, Lingpeng Kong. [pdf], 2024.02.
  • Reinforcing the Diffusion Chain of Lateral Thought with Diffusion Language Models
    Zemin Huang, Zhiyang Chen, Zijun Wang, Tiancheng Li, Guo-Jun Qi. [pdf], 2025.05.
  • Multimodal Latent Language Modeling with Next-Token Diffusion
    Yutao Sun, Hangbo Bao, Wenhui Wang, Zhiliang Peng, Li Dong, Shaohan Huang, Jianyong Wang, Furu Wei. [pdf], 2024.12.
  • SEAL: Steerable Reasoning Calibration of Large Language Models for Free
    Runjin Chen, Zhenyu Zhang, Junyuan Hong, Souvik Kundu, Zhangyang Wang. [pdf], [code], 2025.04.
  • SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning
    Yang Liu, Ming Ma, Xiaomin Yu, Pengxiang Ding, Han Zhao, Mingyang Sun, Siteng Huang, Donglin Wang. [pdf], [code], 2025.05.
  • Beyond Chains of Thought: Benchmarking Latent-Space Reasoning Abilities in Large Language Models
    Thilo Hagendorff, Sarah Fabi. [pdf], 2025.04.
  • Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens
    Zeyuan Yang, Xueyang Yu, Delin Chen, Maohao Shen, Chuang Gan. [pdf], [code], 2025.06.
  • Bridging Search and Recommendation through Latent Cross Reasoning
    Teng Shi, Weicong Qin, Weijie Yu, Xiao Zhang, Ming He, Jianping Fan, Jun Xu. [pdf], 2025.08.
  • LARES: Latent Reasoning for Sequential Recommendation
    Enze Liu, Bowen Zheng, Xiaolei Wang, Wayne Xin Zhao, Jinpeng Wang, Sheng Chen, Ji-Rong Wen. [pdf], 2025.06.
  • Reinforced Latent Reasoning for LLM-based Recommendation
    Yang Zhang, Wenxin Xu, Xiaoyan Zhao, Wenjie Wang, Fuli Feng, Xiangnan He, Tat-Seng Chua. [pdf], 2025.05.
  • Multimodal Chain of Continuous Thought for Latent-Space Reasoning in Vision-Language Models
    Tan-Hanh Pham, Chris Ngo. [pdf], 2025.08.
  • Latent Visual Reasoning
    Bangzheng Li, Ximeng Sun, Jiang Liu, Ze Wang, Jialian Wu, Xiaodong Yu, Hao Chen, Emad Barsoum, Muhao Chen, Zicheng Liu. [pdf], 2025.09.
  • Reasoning in the Dark: Interleaved Vision-Text Reasoning in Latent Space
    Chao Chen, Zhixin Ma, Yongqi Li, Yupeng Hu, Yinwei Wei, Wenjie Li, Liqiang Nie. [pdf], [code], 2025.10.
  • Latent Chain-of-Thought for Visual Reasoning
    Guohao Sun, Hang Hua, Jian Wang, Jiebo Luo, Sohail Dianat, Majid Rabbani, Raghuveer Rao, Zhiqiang Tao. [pdf], [code], 2025.10.
  • Latent Sketchpad: Sketching Visual Thoughts to Elicit Multimodal Reasoning in MLLMs
    Huanyu Zhang, Wenshan Wu, Chengzu Li, Ning Shang, Yan Xia, Yangyu Huang, Yifan Zhang, Li Dong, Zhang Zhang, Liang Wang, Tieniu Tan, Furu Wei. [pdf], [code], 2025.10.
  • CoCoVa: Chain of Continuous Vision-Language Thought for Latent Space Reasoning
    Jizheng Ma, Xiaofei Zhou, Yanlong Song, Han Yan. [pdf], 2025.11.

Resources

For the most recent research on efficient reasoning, see Awesome-Efficient-Reasoning and Awesome-Efficient-Reasoning-Models [Paper].

LatentCoT-Horizon: a paper list covering a broader scope of latent reasoning.

awesome-llm-implicit-reasoning: a paper list on implicit reasoning in LLMs.

Acknowledgements

If we’ve accidentally missed your paper, please reach out to us and we’ll add it to the list as soon as possible!
