Zhouqi Hua
Ph.D. Student
Fudan University & Shanghai AI Lab
About Me

Hi👋 I am Zhouqi Hua (华洲琦).

I am a first-year Ph.D. student at Fudan University, in a joint program with the Large Model Center of Shanghai AI Laboratory, advised by Dr. Wenwei Zhang, Dr. Kai Chen, and Prof. Dahua Lin. Before that, I received my bachelor's degree from Tongji University in 2025.

My research focuses on generalization in LLMs, including length generalization and compositional generalization. Currently, I am interested in investigating the mathematical abilities of LLMs.

Education
  • Shanghai AI Lab
    Research Intern @ Large Model Center
    Joint Ph.D. Student
    Sep. 2025 - present
  • Fudan University
    Ph.D. Student in Computer Science
    Sep. 2025 - present
  • Tongji University
    B.S. in Computer Science
    Sep. 2021 - Jul. 2025
Honors & Awards
  • Tongji Excellent Student Award
    2024
  • Tongji Excellent Student Scholarship (First Prize)
    2024
  • National Second Prize in CCCC-MAIC
    2024
  • Tongji Excellent Student Scholarship (First Prize)
    2023
News
2025
  • Aug 21: We release Intern-S1, an advanced open-source scientific multimodal reasoning model.
  • Jul 21: We release TAIL, a new paper proposing a programmatic approach to enhancing length generalization in LLMs.
Selected Publications
Intern-S1: A Scientific Multimodal Foundation Model

Lei Bai, Zhongrui Cai, ..., Zhouqi Hua, ..., Yu Qiao et al.

Preprint

Intern-S1 is a large multimodal MoE foundation model trained on massive scientific data with mixture-of-rewards reinforcement learning, achieving state-of-the-art performance on scientific reasoning and professional tasks while remaining competitive in general reasoning among open-source models.
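
To make the "mixture of rewards" idea concrete, here is a minimal, purely illustrative sketch: each training prompt is routed to a task-appropriate reward signal (an exact verifier for checkable answers, a learned scorer for open-ended ones), and the RL loop consumes a single scalar per sample. The function names, routing scheme, and placeholder scorer are my own assumptions for illustration, not Intern-S1's actual implementation.

```python
from typing import Callable, Dict

def verifiable_reward(response: str, reference: str) -> float:
    """Binary reward from an exact-match verifier (e.g., a math answer checker)."""
    return 1.0 if response.strip() == reference.strip() else 0.0

def model_based_reward(response: str, reference: str) -> float:
    """Placeholder heuristic standing in for a trained reward model."""
    return min(len(response) / max(len(reference), 1), 1.0)

# Hypothetical routing table: task type -> reward function.
REWARD_ROUTER: Dict[str, Callable[[str, str], float]] = {
    "math": verifiable_reward,
    "open_ended": model_based_reward,
}

def mixed_reward(task_type: str, response: str, reference: str) -> float:
    """Dispatch each sample to its task's reward function, yielding one scalar."""
    return REWARD_ROUTER[task_type](response, reference)

print(mixed_reward("math", "42", "42"))  # 1.0: verifier match
print(mixed_reward("open_ended", "a draft answer", "a longer reference answer"))
```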

The Imitation Game: Turing Machine Imitator is Length Generalizable Reasoner

Zhouqi Hua*, Wenwei Zhang*#, Chengqi Lyu, Yuzhe Gu, Songyang Gao, Kuikun Liu, Dahua Lin#, Kai Chen# (* equal contribution, # corresponding author)

Under review.

Turing Machine Imitation Learning (TAIL) is a synthetic CoT framework that enhances the length generalization of LLMs on computable reasoning tasks by imitating Turing Machine execution, achieving state-of-the-art performance across 18 challenging tasks.
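
As a toy illustration of the idea, the sketch below serializes every step of a simple Turing machine (binary increment) into a line of text: an explicit, step-by-step execution trace of the kind that synthetic TM-imitation CoT data consists of. The specific machine and trace format are illustrative choices of mine, not the paper's actual data format.

```python
# Transition table for binary increment: scan right to the end of the input,
# then carry 1s back to the left. state -> {symbol: (write, move, next_state)}.
INCREMENT = {
    "scan":  {"0": ("0", +1, "scan"), "1": ("1", +1, "scan"), "_": ("_", -1, "carry")},
    "carry": {"0": ("1", -1, "done"), "1": ("0", -1, "carry"), "_": ("1", -1, "done")},
}

def run_with_trace(table, tape, state="scan", halt="done"):
    """Run the machine and emit one CoT line per execution step."""
    tape, head, lines = list(tape), 0, []
    while state != halt:
        symbol = tape[head] if 0 <= head < len(tape) else "_"
        write, move, nxt = table[state][symbol]
        if 0 <= head < len(tape):
            tape[head] = write
        elif head == len(tape):       # fell off the right end: grow tape
            tape.append(write)
        else:                         # head == -1: grow tape on the left
            tape.insert(0, write)
            head = 0
        lines.append(
            f"state={state} read={symbol} write={write} "
            f"move={'R' if move > 0 else 'L'} tape={''.join(tape)}"
        )
        head += move
        state = nxt
    return "\n".join(lines)

print(run_with_trace(INCREMENT, "1011"))  # traces 1011 -> 1100
```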
