StableMotion: Training Motion Cleanup Models with Unpaired Corrupted Data


πŸ§‘β€πŸ”§ You don’t need a clean dataset to train a motion cleanup model.
StableMotion learns to fix corrupted motions directly from raw mocap data β€” no handcrafted data pairs, no synthetic artifact augmentation.

Teaser

❌ Raw corrupted data → ✅ Clean results!


Installation ⚙️

Environment Setup

Create and activate a new conda environment:

conda create --name stablemotion python=3.11.8
conda activate stablemotion

Dependencies

Install the required packages:

pip install -r requirements.txt 

SMPL Dependency

The SMPL model is required for preprocessing, evaluation, and visualization.

Please follow the README from TEMOS to obtain the deps folder with the SMPL+H models downloaded, and place the deps folder under ./data_loaders/amasstools.

TMR Dependency

Text-to-Motion Retrieval (TMR) is used for evaluation.

Please follow the README from TMR to download pretrained TMR models. After downloading, place the models in the following structure:

StableMotion/
└── tmr_models/
    ├── tmr_humanml3d_guoh3dfeats
    └── tmr_kitml_guoh3dfeats

To fix a path discrepancy, replace the config file tmr_models/tmr_humanml3d_guoh3dfeats/config.json with misc/tmr_humanml3d_guoh3dfeats_config/config.json.
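The same replacement can be scripted; a minimal sketch using the paths from this README, guarded so it no-ops if the files are not in place yet:

```python
import shutil
from pathlib import Path

# Paths taken from this README; run from the repo root.
src = Path("misc/tmr_humanml3d_guoh3dfeats_config/config.json")
dst = Path("tmr_models/tmr_humanml3d_guoh3dfeats/config.json")

if src.exists():
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(src, dst)  # overwrite the downloaded config with the fixed one
```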

Pretrained Checkpoint: StableMotion-BrokenAMASS

To play around, download a StableMotion checkpoint trained on BrokenAMASS from OneDrive and place it under the ./save directory.

Quick Start

0. Get Benchmark Dataset: BrokenAMASS

Please follow the README for DATA to download and preprocess the original AMASS dataset.

Then, run the following scripts to build BrokenAMASS:

python -m data_loaders.corrupting_globsmpl_dataset --mode train
python -m data_loaders.corrupting_globsmpl_dataset --mode test

After preprocessing and corruption, your dataset folder should look like this:

dataset/
├── AMASS
├── AMASS_20.0_fps_nh
├── AMASS_20.0_fps_nh_smpljoints_neutral_nobetas
├── AMASS_20.0_fps_nh_globsmpl_base_cano
├── AMASS_20.0_fps_nh_globsmpl_corrupted_cano
└── meta_AMASS_20.0_fps_nh_globsmpl_corrupted_cano/
    ├── mean.pt
    └── std.pt
misc.
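The mean.pt and std.pt files hold per-feature normalization statistics from preprocessing. As a rough illustration of how such statistics are typically applied (a numpy stand-in; the repo stores torch tensors, and the exact feature layout is not shown here):

```python
import numpy as np

# Hypothetical motion features, (num_frames, feature_dim); stands in for
# the pose representation stored in the corrupted-AMASS folders.
motion = np.random.randn(200, 135).astype(np.float32)

# In the repo these would be loaded from meta_.../mean.pt and meta_.../std.pt;
# here we simply compute them from the data for illustration.
mean = motion.mean(axis=0)
std = motion.std(axis=0) + 1e-8  # epsilon guards against zero variance

normalized = (motion - mean) / std      # what the model consumes
denormalized = normalized * std + mean  # applied to model outputs

assert np.allclose(denormalized, motion, atol=1e-4)
```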

The released version of BrokenAMASS may differ slightly from the version used in the experiments reported in the paper, due to different random seeds. Contact [email protected] for further questions.

0.5 Customized Dataset 🎯

If you want to clean up your own motion data, we strongly recommend preparing the training data with quality labels and training your own StableMotion model on that dataset; this is exactly what StableMotion was designed for!
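StableMotion trains on raw data with per-frame quality labels rather than clean/corrupted pairs. A hedged sketch of one possible label layout (the names and schema here are illustrative, not the repo's actual data format):

```python
import numpy as np

# Hypothetical per-frame quality labels for one motion clip:
# 1 = frame judged clean, 0 = frame judged corrupted.
num_frames = 150
quality = np.ones(num_frames, dtype=np.int64)
quality[40:65] = 0    # e.g. an annotator flagged a popping artifact here
quality[110:120] = 0  # and a foot-skate segment here

# One clip record; a dataset would be a collection of such records.
clip = {
    "motion": np.zeros((num_frames, 135), dtype=np.float32),  # pose features
    "quality": quality,
}

corrupted_ratio = 1.0 - quality.mean()
print(f"{corrupted_ratio:.1%} of frames flagged corrupted")
```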

1. Training 🚀

Train the StableMotion model on BrokenAMASS:

python -m train.train_stablemotion_smpl_glob \
  --save_dir save/stablemotion \
  --data_dir dataset/AMASS_20.0_fps_nh_globsmpl_corrupted_cano \
  --normalizer_dir dataset/meta_AMASS_20.0_fps_nh_globsmpl_corrupted_cano \
  --l1_loss \
  --model_ema \
  --gradient_clip \
  --batch_size 128 \
  --num_steps 1_000_000 \
  --train_platform_type TensorboardPlatform

2. Inference 🧼

Clean up corrupted motion sequences using the trained model:

# Basic inference
python -m sample.fix_globsmpl \
  --model_path save/stablemotion/ema001000000.pt \
  --use_ema \
  --batch_size 32 \
  --testdata_dir dataset/AMASS_20.0_fps_nh_globsmpl_corrupted_cano \
  --output_dir ./output/stablemotion_vanilla

# Enhanced inference with ensemble and adaptive cleanup
python -m sample.fix_globsmpl \
  --model_path save/stablemotion/ema001000000.pt \
  --use_ema \
  --batch_size 32 \
  --testdata_dir dataset/AMASS_20.0_fps_nh_globsmpl_corrupted_cano \
  --ensemble \
  --enable_sits \
  --classifier_scale 100 \
  --output_dir ./output/stablemotion_hack
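The --ensemble flag aggregates multiple cleanup samples into a single output. As a generic illustration of the idea (a per-frame median over samples; this sketch is not the repo's exact aggregation scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend we drew 5 cleanup samples for a 100-frame, 135-dim motion.
num_samples, num_frames, dim = 5, 100, 135
samples = rng.normal(size=(num_samples, num_frames, dim))

# A per-frame median is robust to any single bad sample.
ensembled = np.median(samples, axis=0)
assert ensembled.shape == (num_frames, dim)
```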

Evaluation 📊

Evaluate the quality of cleaned motion sequences:

python -m eval.eval_scripts --data_path ./output/stablemotion_vanilla/results.npy

Content preservation metrics:

Collect clean ground truth

To evaluate content preservation, first record the clean ground-truth data from dataset/AMASS_20.0_fps_nh_globsmpl_base_cano:

python -m sample.fix_globsmpl \
  --model_path save/stablemotion/ema001000000.pt \
  --use_ema \
  --batch_size 32 \
  --testdata_dir dataset/AMASS_20.0_fps_nh_globsmpl_base_cano \
  --output_dir ./output/benchmark_clean \
  --collect_dataset

Then run evaluation with ground truth:

python -m eval.eval_scripts \
  --data_path ./output/stablemotion_vanilla/results.npy \
  --gt_data_path ./output/benchmark_clean/results_collected.npy
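Content preservation asks how closely the cleaned motion tracks the clean ground truth. A common way to quantify this is a mean per-joint position error; a generic numpy sketch (not necessarily the exact metric computed by eval.eval_scripts):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error between (frames, joints, 3) arrays."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

gt = np.zeros((100, 22, 3))     # hypothetical ground-truth joint positions
pred = gt + 0.01                # a uniform 1 cm offset on every axis
err = mpjpe(pred, gt)           # sqrt(3) * 0.01 ≈ 0.0173
```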

Visualization 👭

Generate visual renderings of the cleaned motion data:

python -m visualize.render_scripts \
  --data_path ./output/stablemotion_vanilla/results.npy \
  --rendersmpl

Acknowledgements

We sincerely thank the authors of the open-source works on which our code is based: MDM, stmc, diffusers, TMR, humor, PixArt-α, and stable-audio-tools.

License

This code is distributed under the MIT License.

Note that our code depends on other libraries, including TMR, SMPL, and SMPL-X, and uses datasets that each have their own licenses, which must also be followed.

Citation

If you find our work helpful, please cite:

@inproceedings{mu2025StableMotion,
    author = {Mu, Yuxuan and Ling, Hung Yu and Shi, Yi and Baira Ojeda, Ismael and Xi, Pengcheng and Shu, Chang and Zinno, Fabio and Peng, Xue Bin},
    title = {StableMotion: Training Motion Cleanup Models with Unpaired Corrupted Data},
    year = {2025},
    booktitle = {SIGGRAPH Asia 2025 Conference Papers (SIGGRAPH Asia '25 Conference Papers)}
}

About

[SIGGRAPH Asia 2025] Implementation of "StableMotion: Training Motion Cleanup Models with Unpaired Corrupted Data"
