
SMF: Template-free and Rig-free Animation Transfer using Kinetic Codes

SIGGRAPH Asia 2025 (ACM Transactions on Graphics)

Project Page | Paper | Model

SMF Teaser

Overview

Self-Supervised Motion Fields (SMF) is a novel, rig-free, and template-free framework for transferring motion to 3D characters from either 3D animations or monocular 2D videos.

We represent motion by sampling sparse keypoints tracked over time and learn to encode this information into a latent Kinetic Code. Our deformation module then uses this Kinetic Code to directly manipulate the target character's mesh. By learning a mapping from sparse keypoints to a dense surface deformation, SMF bypasses the need for predefined rigs or registration to a template.

This approach allows SMF to generalize across a wide variety of target shapes and geometries, enabling the transfer of complex, unseen motions to new characters.
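At a high level, the transfer pipeline looks roughly like the sketch below. This is illustrative pseudocode only: the encoder, deformer, and tensor shapes are placeholders, not the actual API of this repository.

# Illustrative sketch of the SMF pipeline; all names here are placeholders, not this repo's API.
import torch

def transfer_motion(source_keypoints, target_vertices, encoder, deformer):
    # source_keypoints: (T, K, 3) sparse keypoints tracked over the source motion
    # target_vertices:  (V, 3) vertices of the target character in its rest pose
    kinetic_code = encoder(source_keypoints)            # latent Kinetic Code summarizing the motion
    deformed = deformer(target_vertices, kinetic_code)  # (T, V, 3) dense per-frame surface deformation
    return deformed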

🚀 Quick Start

Installation

# Clone the repository
git clone https://github.com/sanjeevmk/smf.git
cd smf

# Create virtual env and install packages
conda create -n smf python=3.10
conda activate smf
pip3 install torch==2.5.0 torchvision --index-url https://download.pytorch.org/whl/cu121
pip3 install omegaconf
cd src/fast-transformers/ ; pip3 install -e . --no-build-isolation --config-settings editable_mode=compat
pip3 install trimesh scipy libigl cholespy
pip3 install torch_sparse torch_scatter
pip3 install "git+https://github.com/facebookresearch/pytorch3d.git@stable"
pip3 install torchdiffeq termcolor matplotlib
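After installation, a quick sanity check that the core dependencies import and CUDA is visible (a minimal, illustrative snippet; check_install.py is not part of this repository):

# check_install.py -- hypothetical helper, not part of the repo
import torch
import pytorch3d
import trimesh
import igl            # provided by the libigl package
import torchdiffeq

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d", pytorch3d.__version__)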

Training Data (AMASS motion sequences)

We trained our model on human motions from AMASS.

Our training set can be found here: Drive Link. These files contain the AMASS motion parameters. To parse them, you will need the SMPLH body model: Drive Link.

(Note that we only use the AMASS motion parameters to generate the mesh motion dataset; the parameters themselves are not used by our template-free training/inference method.)
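One common way to turn an AMASS sequence into per-frame meshes is via the smplx package together with the SMPLH model files. The sketch below is illustrative only: smplx is not part of the installation above, the file paths are placeholders, and the exact parsing code used for our dataset may differ.

# Sketch: per-frame meshes from one AMASS sequence with the SMPLH body model (assumes the smplx package).
import numpy as np
import torch
import smplx

data = np.load("amass_sequence.npz")                  # AMASS motion parameters
poses = torch.from_numpy(data["poses"]).float()       # (T, 156) SMPLH pose parameters
trans = torch.from_numpy(data["trans"]).float()       # (T, 3) global translation
betas = torch.from_numpy(data["betas"][:10]).float()  # body shape coefficients

T = poses.shape[0]
body = smplx.create("path/to/body_models", model_type="smplh", gender="neutral",
                    use_pca=False, ext="npz", batch_size=T)
out = body(global_orient=poses[:, :3], body_pose=poses[:, 3:66],
           left_hand_pose=poses[:, 66:111], right_hand_pose=poses[:, 111:156],
           transl=trans, betas=betas.unsqueeze(0).expand(T, -1))
vertices = out.vertices.detach()                       # (T, 6890, 3) per-frame mesh vertices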

To use your own dataset:

  • Ensure all meshes are manifold and watertight.
  • Remove global translation/rotation from every frame (see the preprocessing sketch below).
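A minimal preprocessing sketch for your own mesh sequence (illustrative only; the directory layout and the exact normalization used for our data are assumptions):

# Sketch: verify each frame is watertight and remove its global translation (assumes one OBJ per frame).
import glob
import trimesh

for path in sorted(glob.glob("my_sequence/*.obj")):
    mesh = trimesh.load(path, process=False)
    assert mesh.is_watertight, f"{path} is not watertight"
    mesh.vertices -= mesh.vertices.mean(axis=0)   # recenter to remove global translation
    # (removing global rotation, e.g. by aligning each frame to the first, is not shown here)
    mesh.export(path)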

Trained Models

3D to 3D motion transfer: Hugging Face

2D to 3D motion transfer: TBA

Training

cd src
python3 main.py configs/ode/*.json

Inference

Run the same command as for training, with train=False set in the config.
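For example, the relevant change in the JSON config might look like the snippet below. Only the train flag comes from the description above; the other entry is a placeholder, since the config keys are not yet documented (see the TODO below).

{
  "train": false,
  "...": "all other keys identical to the training configuration"
}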

TODO

  • Config file key details
  • Demo code

📄 Citation

If you find SMF useful in your research, please consider citing our paper:

@article{muralikrishnan2025smf,
  title={SMF: Template-free and Rig-free Animation Transfer using Kinetic Codes},
  author={Muralikrishnan, Sanjeev and Dutt, Niladri Shekhar and Mitra, Niloy J},
  journal={ACM Transactions on Graphics (TOG)},
  volume={44},
  number={6},
  article={261},
  year={2025},
  publisher={ACM New York, NY, USA}
}

📜 License

This project is released under the MIT License.
