
Sketch2Anim: Towards Transferring Sketch Storyboards into 3D Animation

Lei Zhong, Chuan Guo, Yiming Xie, Jiawei Wang, Changjian Li

(Teaser figure)

Citation

If you find our code or paper helpful, please consider starring our repository and citing:

@article{Zhong:2025:Sketch2Anim,
    title     = {Sketch2Anim: Towards Transferring Sketch Storyboards into 3D Animation},
    author    = {Lei Zhong and Chuan Guo and Yiming Xie and Jiawei Wang and Changjian Li},
    journal   = {ACM Transactions on Graphics (TOG)},
    volume    = {44},
    number    = {4},
    pages     = {1--15},
    year      = {2025},
    publisher = {ACM New York, NY, USA}
}

TODO List

  • [x] Code for inference and pretrained model.
  • [ ] Blender plugin.
  • [ ] Evaluation code and metrics.
  • [ ] Code for training.

Pretrained Weights

Download the pretrained model from Google Drive and then copy it to ./save/checkpoints/.
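
For example, assuming the downloaded checkpoint is named sketch2anim.ckpt (a placeholder name; use whatever filename the Google Drive file actually has):

# Create the target directory and copy the checkpoint into place.
# "sketch2anim.ckpt" is a hypothetical filename.
mkdir -p ./save/checkpoints/
cp ~/Downloads/sketch2anim.ckpt ./save/checkpoints/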

Getting started

This code requires:

  • Python 3.9
  • conda3 or miniconda3
  • CUDA-capable GPU (one is enough)

1. Setup environment

Install ffmpeg (if not already installed):

sudo apt update
sudo apt install ffmpeg
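
You can verify the installation with:

# Prints the installed ffmpeg version if the install succeeded.
ffmpeg -version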

For Windows, use this instead.

Setup conda env:

conda env create -f environment.yml
conda activate sketch2anim
python -m spacy download en_core_web_sm
pip install git+https://github.com/openai/CLIP.git
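
As a quick sanity check that the environment and GPU are usable (this assumes environment.yml installs PyTorch, which this codebase builds on):

# Should print "True" if a CUDA-capable GPU is visible to PyTorch.
python -c "import torch; print(torch.cuda.is_available())"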

Download dependencies:

bash prepare/download_glove.sh
bash prepare/download_t2m_evaluators.sh
bash prepare/prepare_t5.sh
bash prepare/download_smpl_models.sh

2. Get data

Full data (text + motion capture)

HumanML3D - Follow the instructions in HumanML3D, then copy the resulting dataset to our repository:

cp -r ../HumanML3D/HumanML3D ./dataset/HumanML3D

3. 2D Sketch Motion Synthesis

Add the text content to ./demo/demo.json (see the sketch below), then run:

bash demo.sh
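
For reference, ./demo/demo.json could be populated like this. The single "text" key and list layout are assumptions on our part; match the schema of the sample file shipped in ./demo/:

# Hypothetical demo.json layout; adjust the keys to match the repository's sample file.
cat > ./demo/demo.json << 'EOF'
{
  "text": ["a person walks forward and waves with the right hand"]
}
EOF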

Example output:

(2D motion synthesis example figure)

4. Blender Plugin

We plan to open-source the plugin within this week.

(Plugin preview figure)

Acknowledgments

Our code is heavily based on MLD and MotionLCM.
The motion visualization is based on MLD and TEMOS. We also thank the following works: guided-diffusion, MotionCLIP, text-to-motion, ACTOR, joints2smpl, MoDi, HumanML3D, OmniControl.

License

This code is distributed under the MIT LICENSE.

Note that our code depends on several other libraries, including SMPL, SMPL-X, and PyTorch3D, and uses the HumanML3D dataset. Each of these has its own license that must also be adhered to.
