Official implementation of the paper by Yilin Wen, Kechuan Dong, and Yusuke Sugano: "Robust Long-Term Test-Time Adaptation for 3D Human Pose Estimation Through Motion Discretization", AAAI 2026.
[Paper and Appendix] | [Supplementary Video] | [Project Page]
If you find this work helpful, please consider citing:
```bibtex
@inproceedings{wen2026robust,
  title={Robust Long-Term Test-Time Adaptation for 3D Human Pose Estimation Through Motion Discretization},
  author={Wen, Yilin and Dong, Kechuan and Sugano, Yusuke},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}
```

The code is tested with the following environment:
- Ubuntu 22.04
- Python 3.10
- PyTorch 2.4.0
You can create a conda environment for this project by running:
```
conda env create -f environment.yml
conda activate OTTPA
```

If you encounter issues with `torchgeometry`, you may modify its kernel code by referring to the instructions in I2L-MeshNet or CycleAdapt.
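For reference, the patch those repositories point to targets `rotation_matrix_to_quaternion` in `torchgeometry/core/conversions.py`; the sketch below assumes the error you hit is the usual "Subtraction, the `-` operator, with a bool tensor is not supported" and may need adjusting for your installed version:

```python
# Inside torchgeometry/core/conversions.py, rotation_matrix_to_quaternion():
# the original mask computation subtracts bool tensors, which newer PyTorch
# versions reject, e.g.
#   mask_c1 = mask_d2 * (1 - mask_d0_d1)
#   mask_c2 = (1 - mask_d2) * mask_d0_nd1
#   mask_c3 = (1 - mask_d2) * (1 - mask_d0_nd1)
# Replacing the subtractions with logical negation resolves the error:
mask_c0 = mask_d2 * mask_d0_d1
mask_c1 = mask_d2 * (~mask_d0_d1)
mask_c2 = (~mask_d2) * mask_d0_nd1
mask_c3 = (~mask_d2) * (~mask_d0_nd1)
```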
Please prepare the pretrained models and other SMPL-related files in the ./assets/ directory following this structure:
```
${root_dir_of_this_repo}/
├── assets/
│   ├── human_models/
│   │   ├── J_regressor_h36m_smpl.npy
│   │   ├── smpl_mean_params.npz
│   │   └── smpl/
│   │       ├── SMPL_FEMALE.pkl
│   │       ├── SMPL_MALE.pkl
│   │       └── SMPL_NEUTRAL.pkl
│   ├── pose_prior/
│   │   └── gmm_08.pkl
│   └── pretrained_models/
│       ├── hmr_basemodel.pt
│       ├── motionnet.pth
│       └── motionnet_stats/
│           ├── mean.pt
│           └── std.pt
├── configs/
├── core/
├── dataset/
├── exp/
├── models/
├── utils/
└── ...
```
All files except those under smpl/ can be downloaded from: Link.
For SMPL models, please download them from the official SMPL website.
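Once everything is downloaded, you can optionally sanity-check the layout with a short script like the one below (not part of the repository; the paths simply mirror the tree above):

```python
# Optional check: verify that the expected asset files are in place.
from pathlib import Path

ASSETS = Path("./assets")
expected = [
    "human_models/J_regressor_h36m_smpl.npy",
    "human_models/smpl_mean_params.npz",
    "human_models/smpl/SMPL_FEMALE.pkl",
    "human_models/smpl/SMPL_MALE.pkl",
    "human_models/smpl/SMPL_NEUTRAL.pkl",
    "pose_prior/gmm_08.pkl",
    "pretrained_models/hmr_basemodel.pt",
    "pretrained_models/motionnet.pth",
    "pretrained_models/motionnet_stats/mean.pt",
    "pretrained_models/motionnet_stats/std.pt",
]
missing = [p for p in expected if not (ASSETS / p).exists()]
print("All assets found." if not missing else f"Missing files: {missing}")
```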
Please download 3DPW and Ego-Exo4D from their official websites.
We provide precomputed 2D detections (OpenPose and ViTPose), as well as other miscellaneous files at: Link.
We save images to LMDB for adaptation using convert_to_lmdb.py:
```
python convert_to_lmdb.py --cfg <path_to_cfg> --participant_uid <uid>
```

Parameters:
- `--cfg`: We provide two example configuration files under the `configs` directory for running on the 3DPW and Ego-Exo4D datasets. While you can ignore most parameters at this step, you need to properly set the following (see the sketch after this list):
  - `dir_root`: root directory of the downloaded dataset
  - `dir_lmdb`: root directory of the output image LMDB
  - `dir_misc`: root directory of the 2D detections and miscellaneous files obtained above
- `--participant_uid`:
  - For the Ego-Exo4D dataset, this indicates the participant UID to be processed.
  - For the 3DPW dataset, this parameter is ignored at this stage, as the images of all participants are processed in a single run.
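As a quick way to confirm the three directories are set, you can load a config with PyYAML and print them; this is a minimal sketch (not part of the repository) that assumes the keys sit at the top level of the YAML file, so adjust it if your config nests them:

```python
# Print the machine-specific path entries from an example config.
import yaml  # PyYAML

with open("./configs/3dpw.yml") as f:
    cfg = yaml.safe_load(f)

for key in ("dir_root", "dir_lmdb", "dir_misc"):
    print(f"{key}: {cfg.get(key, '<not found at top level>')}")
```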
Examples:

For 3DPW subjects:

```
python convert_to_lmdb.py --cfg ./configs/3dpw.yml
```

For an Ego-Exo4D subject:

```
python convert_to_lmdb.py --cfg ./configs/egoexo.yml --participant_uid 161
```
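To confirm a conversion finished, you can peek into the resulting database with the standard `lmdb` package; the path below is only a placeholder for the location you configured as `dir_lmdb`:

```python
# Quick check (not part of the repository) that an image LMDB was written.
import lmdb

env = lmdb.open("/path/to/dir_lmdb/<your_output_db>", readonly=True, lock=False)
with env.begin() as txn:
    print("number of stored entries:", txn.stat()["entries"])
env.close()
```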
Use `adapt.py` to run adaptation on input videos:

```
python adapt.py --gpu 0 --cfg <path_to_config> --participant_uid <uid>
```

Parameters:
- `--cfg`: In the configuration file, you also need to properly indicate which 2D joint detections (`vitpose` or `openpose`) are used for adaptation.
- `--participant_uid`:
  - On Ego-Exo4D, we test with the participants with UIDs 388, 386, 387, 384, 383, 323, 318, 376, 320, 316, 382, 558, 317, 161, 549, 525, 544, 34, 40, 244, 243, 245, 239, 240, 46, 226, 24, 254, 227, 256 (as reported in Table 5 of our appendix).
  - For 3DPW, use values 0-4, indicating the five participants in the test set.
Examples:

For a 3DPW subject:

```
python adapt.py --gpu 0 --cfg ./configs/3dpw.yml --participant_uid 1
```

For an Ego-Exo4D subject:

```
python adapt.py --gpu 0 --cfg ./configs/egoexo.yml --participant_uid 161
```
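If you want to process all five 3DPW participants back to back, a simple wrapper such as the following works (a hypothetical convenience script, not part of the repository):

```python
# Run adaptation for all five 3DPW test participants sequentially on GPU 0.
import subprocess

for uid in range(5):  # 3DPW test participants are indexed 0-4
    subprocess.run(
        ["python", "adapt.py", "--gpu", "0",
         "--cfg", "./configs/3dpw.yml", "--participant_uid", str(uid)],
        check=True,
    )
```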
Use `visualize_result.py` to visualize estimation results after adaptation. This will produce figures similar to Figure 3 in our main text.

```
python visualize_result.py --dir_exp <dir_to_exp>
```

Parameters:

- `--dir_exp`: Root directory of experiment results, which is the `cfg.output_dir` used in adaptation. The script will load the log file and estimation results saved during adaptation, and save the visualization results under `<dir_to_exp>/results`.
Part of the codebase relies on the following repositories:
The pretrained MotionNet (motionnet.pth) is under the ModelGo Attribution-NonCommercial-ResponsibleAI License, Version 2.0 (MG-NC-RAI-2.0).