Hi👋 I am Zhouqi Hua (华洲琦).
I am a first-year PhD student at Fudan University in a joint program with the Large Model Center of Shanghai AI Laboratory, advised by Dr. Wenwei Zhang, Dr. Kai Chen, and Prof. Dahua Lin. Before that, I received my bachelor's degree from Tongji University in 2025.
My research focuses on generalization in LLMs, including length generalization and compositional generalization. Currently, I am interested in investigating the mathematical abilities of LLMs.
Lei Bai, Zhongrui Cai, ..., Zhouqi Hua, ..., Yu Qiao et al.
Preprint
Intern-S1 is a large multimodal MoE foundation model trained with massive scientific data and mixture-of-rewards reinforcement learning, achieving SOTA performance in scientific reasoning and professional tasks while remaining competitive in general reasoning among open-source models.
Zhouqi Hua*, Wenwei Zhang*#, Chengqi Lyu, Yuzhe Gu, Songyang Gao, Kuikun Liu, Dahua Lin#, Kai Chen# (* equal contribution, # corresponding author)
Under review.
Turing Machine Imitation Learning (TAIL) is a synthetic CoT framework that enhances the length generalization of LLMs on computable reasoning tasks by imitating Turing Machine execution, achieving state-of-the-art performance across 18 challenging tasks.