You don't need a clean dataset to train a motion cleanup model.
StableMotion learns to fix corrupted motions directly from raw mocap data: no handcrafted data pairs, no synthetic artifact augmentation.
Raw corrupted data → Clean results!
- StableMotion: Training Motion Cleanup Models with Unpaired Corrupted Data
Create and activate a new conda environment:
conda create --name stablemotion python=3.11.8
conda activate stablemotion
Install the required packages:
pip install -r requirements.txt
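As a quick sanity check that the environment resolved correctly (this assumes PyTorch is among the pinned requirements, which the .pt checkpoints and data files used below imply):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"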
The SMPL model is required for preprocessing, evaluation, and visualization.
Please follow the README from TEMOS to obtain the deps folder with SMPL+H downloaded, then place the deps folder under ./data_loaders/amasstools.
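After this step, the layout should look roughly as follows (a sketch; we assume the TEMOS instructions place the SMPL+H models under deps/smplh, and exact file names may differ):

data_loaders/amasstools/
└── deps/
    └── smplh/   # SMPL+H body models obtained via the TEMOS README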
Text-to-Motion Retrieval (TMR) is used for evaluation.
Please follow the README from TMR to download pretrained TMR models. After downloading, place the models in the following structure:
StableMotion/
└── tmr_models/
    ├── tmr_humanml3d_guoh3dfeats
    └── tmr_kitml_guoh3dfeats
To fix a path discrepancy, replace the config file tmr_models/tmr_humanml3d_guoh3dfeats/config.json with misc/tmr_humanml3d_guoh3dfeats_config/config.json.
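For example, from the repository root:

cp misc/tmr_humanml3d_guoh3dfeats_config/config.json tmr_models/tmr_humanml3d_guoh3dfeats/config.json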
To play around, download a StableMotion checkpoint trained on BrokenAMASS from OneDrive and place it under the ./save directory.
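If the directory does not exist yet, create it first; the checkpoint filename is whatever the OneDrive download provides:

mkdir -p save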
Please follow the README for DATA to download and preprocess the original AMASS dataset.
Then, run the following scripts to build BrokenAMASS:
python -m data_loaders.corrupting_globsmpl_dataset --mode train
python -m data_loaders.corrupting_globsmpl_dataset --mode test
After preprocessing and corruption, your dataset folder should look like this:
dataset/
├── AMASS
├── AMASS_20.0_fps_nh
├── AMASS_20.0_fps_nh_smpljoints_neutral_nobetas
├── AMASS_20.0_fps_nh_globsmpl_base_cano
├── AMASS_20.0_fps_nh_globsmpl_corrupted_cano
└── meta_AMASS_20.0_fps_nh_globsmpl_corrupted_cano/
    ├── mean.pt
    ├── std.pt
    └── misc.
The released version of BrokenAMASS may differ slightly from the version used in the experiments reported in the paper, due to different random seeds. Contact [email protected] for further questions.
If you want to clean up your own motion data, we strongly recommend preparing the training data with quality labels and training your own StableMotion model on that dataset; this is exactly the setting StableMotion was designed for!
Train the StableMotion model on BrokenAMASS:
python -m train.train_stablemotion_smpl_glob \
--save_dir save/stablemotion \
--data_dir dataset/AMASS_20.0_fps_nh_globsmpl_corrupted_cano \
--normalizer_dir dataset/meta_AMASS_20.0_fps_nh_globsmpl_corrupted_cano \
--l1_loss \
--model_ema \
--gradient_clip \
--batch_size 128 \
--num_steps 1_000_000 \
--train_platform_type TensorboardPlatform
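To train on your own labeled data instead, the same entry point can be pointed at a different dataset. The paths below are hypothetical placeholders; your data must be preprocessed into the same format and quality-label convention as BrokenAMASS:

python -m train.train_stablemotion_smpl_glob \
    --save_dir save/stablemotion_custom \
    --data_dir dataset/MY_MOCAP_globsmpl_corrupted_cano \
    --normalizer_dir dataset/meta_MY_MOCAP_globsmpl_corrupted_cano \
    --l1_loss \
    --model_ema \
    --gradient_clip \
    --batch_size 128 \
    --num_steps 1_000_000 \
    --train_platform_type TensorboardPlatform

With TensorboardPlatform enabled, progress can be monitored with TensorBoard (assuming logs are written under the save directory):

tensorboard --logdir save/stablemotion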
Clean up corrupted motion sequences using the trained model:
# Basic inference
python -m sample.fix_globsmpl \
--model_path save/stablemotion/ema001000000.pt \
--use_ema \
--batch_size 32 \
--testdata_dir dataset/AMASS_20.0_fps_nh_globsmpl_corrupted_cano \
--output_dir ./output/stablemotion_vanilla
# Enhanced inference with ensemble and adaptive cleanup
python -m sample.fix_globsmpl \
--model_path save/stablemotion/ema001000000.pt \
--use_ema \
--batch_size 32 \
--testdata_dir dataset/AMASS_20.0_fps_nh_globsmpl_corrupted_cano \
--ensemble \
--enable_sits \
--classifier_scale 100 \
--output_dir ./output/stablemotion_hack
Evaluate the quality of cleaned motion sequences:
python -m eval.eval_scripts --data_path ./output/stablemotion_vanilla/results.npy
Content preservation metrics:
Collect clean ground truth
To evaluate content preservation, first record the clean ground-truth data from dataset/AMASS_20.0_fps_nh_globsmpl_base_cano:
python -m sample.fix_globsmpl \
--model_path save/stablemotion/ema001000000.pt \
--use_ema \
--batch_size 32 \
--testdata_dir dataset/AMASS_20.0_fps_nh_globsmpl_base_cano \
--output_dir ./output/benchmark_clean \
--collect_dataset
Then run evaluation with ground truth:
python -m eval.eval_scripts \
--data_path ./output/stablemotion_vanilla/results.npy \
--gt_data_path ./output/benchmark_clean/results_collected.npy
Generate visual renderings of the cleaned motion data:
python -m visualize.render_scripts \
--data_path ./output/stablemotion_vanilla/results.npy \
--rendersmpl
We sincerely thank the authors of the open-source projects our code builds on: MDM, stmc, diffusers, TMR, humor, PixArt-α, and stable-audio-tools.
This code is distributed under the MIT license.
Note that our code depends on other libraries (including TMR, SMPL, and SMPL-X) and uses datasets that each have their own respective licenses, which must also be followed.
If you find our work helpful, please cite:
@inproceedings{mu2025StableMotion,
  author    = {Mu, Yuxuan and Ling, Hung Yu and Shi, Yi and Baira Ojeda, Ismael and Xi, Pengcheng and Shu, Chang and Zinno, Fabio and Peng, Xue Bin},
  title     = {StableMotion: Training Motion Cleanup Models with Unpaired Corrupted Data},
  year      = {2025},
  booktitle = {SIGGRAPH Asia 2025 Conference Papers (SIGGRAPH Asia '25 Conference Papers)}
}