Weiji Xie* 1,2,
Jinrui Han* 1,2,
Jiakun Zheng* 1,3,
Huanyu Li1,4,
Xinzhe Liu1,5,
Jiyuan Shi1,
Weinan Zhang2,
Chenjia Bai†1,
Xuelong Li1
* Equal Contribution † Corresponding Author
1Institute of Artificial Intelligence (TeleAI), China Telecom
2Shanghai Jiao Tong University
3East China University of Science and Technology
4Harbin Institute of Technology
5ShanghaiTech University
- [2025-06] We release the code and paper for PBHC.
This is the official implementation of the paper *KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills*.
Our paper introduces a physics-based control framework that enables humanoid robots to learn and reproduce challenging motions through multi-stage motion processing and adaptive policy training.
This repository includes:
- Motion processing pipeline
  - Collect human motion from various sources (video, LAFAN, AMASS, etc.) into a unified SMPL format (`motion_source/`)
  - Filter, correct, and retarget human motion to the robot (`smpl_retarget/`)
  - Visualize and analyze the processed motions (`smpl_vis/`, `robot_motion_process/`)
- RL-based motion imitation framework (`humanoidverse/`)
  - Train the policy in IsaacGym
  - Deploy trained policies in MuJoCo for sim2sim verification. The framework is designed for easy extension: custom policies and real-world deployment modules can be plugged in with minimal effort.
- Example data (`example/`)
  - Sample motion data from our experiments (`example/motion_data/`; you can visualize it with the tools in `robot_motion_process/`, or see the inspection sketch after this list)
  - A pretrained policy checkpoint (`example/pretrained_hors_stance_pose/`)
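For a quick first look at the bundled motion data, you can inspect a clip directly in Python. The sketch below is illustrative only: the file name and dictionary keys (`fps`, `dof_pos`) are assumptions about the clip schema, which is defined by the retargeting pipeline; use the viewers in `robot_motion_process/` for proper visualization.

```python
import pickle

# Illustrative path; pick any clip shipped under example/motion_data/.
CLIP = "example/motion_data/some_clip.pkl"

with open(CLIP, "rb") as f:
    motion = pickle.load(f)

# A retargeted clip typically stores per-frame joint targets plus the
# capture frame rate; the key names below are assumptions, so print the
# real ones first.
print("keys:", list(motion.keys()))

fps = motion.get("fps", 30)       # assumed key; fall back to 30 Hz
dof = motion.get("dof_pos")       # assumed key for per-frame joint positions
if dof is not None:
    print(f"frames: {len(dof)}  (~{len(dof) / fps:.1f} s at {fps} Hz)")
```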
- Refer to `INSTALL.md` for environment setup and installation instructions.
- Each module folder (e.g., `humanoidverse`, `smpl_retarget`) contains a dedicated `README.md` explaining its purpose and usage.
- How to let your robot perform a new motion?
  1. Collect the motion data from the source and process it into the SMPL format (`motion_source/`).
  2. Retarget the motion data to the robot (`smpl_retarget/`; choose the Mink or PHC pipeline as you like).
  3. Visualize the processed motion to check whether the motion quality is satisfactory (`smpl_vis/`, `robot_motion_process/`).
  4. Train a policy for the processed motion in IsaacGym (`humanoidverse/`).
  5. Deploy the policy in MuJoCo or on a real robot (`humanoidverse/`); a minimal sim2sim sketch follows this list.
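To make step 5 concrete, a bare-bones sim2sim check in MuJoCo might look like the sketch below. This is not the repository's deployment script: the model path, checkpoint path, and observation layout are placeholders, and the real observation/action conventions are fixed by the training config in `humanoidverse/`.

```python
import mujoco
import torch

# Placeholders: substitute the actual G1 MJCF and an exported policy checkpoint.
MODEL_XML = "path/to/g1.xml"
POLICY_PT = "path/to/policy.pt"

model = mujoco.MjModel.from_xml_path(MODEL_XML)
data = mujoco.MjData(model)
policy = torch.jit.load(POLICY_PT).eval()  # assumes a TorchScript export

OBS_DIM = 93  # hypothetical; must match the training observation layout
for _ in range(1000):
    # A real deployment builds the observation from qpos/qvel, the commanded
    # motion frame, and previous actions; zeros here only exercise the plumbing.
    obs = torch.zeros(1, OBS_DIM)
    with torch.no_grad():
        action = policy(obs).numpy().squeeze()
    data.ctrl[:] = action[: model.nu]  # assumes action dim covers all actuators
    mujoco.mj_step(model, data)
```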
The repository is organized as follows:
- `description/`: description files for the SMPL model and the G1 robot.
- `motion_source/`: docs for obtaining SMPL-format data.
- `smpl_retarget/`: tools for SMPL-to-G1-robot retargeting.
- `smpl_vis/`: tools for visualizing SMPL-format data.
- `robot_motion_process/`: tools for processing robot-format motion, including visualization, interpolation, and trajectory analysis.
- `humanoidverse/`: RL policy training.
- `example/`: example motion data and checkpoint for using PBHC.
If you find our work helpful, please cite:
```bibtex
@article{xie2025kungfubot,
  title={KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills},
  author={Xie, Weiji and Han, Jinrui and Zheng, Jiakun and Li, Huanyu and Liu, Xinzhe and Shi, Jiyuan and Zhang, Weinan and Bai, Chenjia and Li, Xuelong},
  journal={arXiv preprint arXiv:2506.12851},
  year={2025}
}
```

This codebase is released under the CC BY-NC 4.0 license. You may not use the material for commercial purposes, e.g., to make demos to advertise your commercial products.
- ASAP: We build our RL codebase on ASAP.
- RSL_RL: We use the `rsl_rl` library for the PPO implementation.
- Unitree: We use the Unitree G1 as our testbed robot.
- MaskedMimic: We use the retargeting pipeline in MaskedMimic, which is based on Mink.
- PHC: We incorporate the retargeting pipeline from PHC into our implementation.
- GVHMR: We use GVHMR to extract motions from videos.
- IPMAN: We filter motions based on the IPMAN codebase.
Feel free to open an issue or discussion if you encounter any problems or have questions about this project.
For collaborations, feedback, or further inquiries, please reach out to:
- Weiji Xie: [email protected] or Weixin: shisoul
- Jinrui Han: [email protected] or Weixin: Bw_rooneY
- Chenjia Bai (Corresponding Author): [email protected]
- You can also join our Weixin discussion group for timely Q&A. Since the group already exceeds 200 members, you'll need to first add one of the authors on Weixin to receive an invitation to join.
We welcome contributions and are happy to support the community in building upon this work!