Sketch2Anim: Towards Transferring Sketch Storyboards into 3D Animation
If you find our code or paper helpful, please consider starring our repository and citing:
```bibtex
@article{Zhong:2025:Sketch2Anim,
  title     = {Sketch2Anim: Towards Transferring Sketch Storyboards into 3D Animation},
  author    = {Lei Zhong and Chuan Guo and Yiming Xie and Jiawei Wang and Changjian Li},
  journal   = {ACM Transactions on Graphics (TOG)},
  volume    = {44},
  number    = {4},
  pages     = {1--15},
  year      = {2025},
  publisher = {ACM New York, NY, USA}
}
```
- [x] Code for inference and pretrained model.
- [ ] Blender plugin.
- [ ] Evaluation code and metrics.
- [ ] Code for training.
Download the pretrained model from Google Drive and copy it to `./save/checkpoints/`.
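For example, assuming the downloaded checkpoint file is named `sketch2anim.ckpt` (the actual filename on Google Drive may differ), the copy step could look like this:

```bash
# Create the target directory if it does not exist yet
mkdir -p ./save/checkpoints/
# Copy the downloaded checkpoint (filename is illustrative; use the actual file from Google Drive)
cp ~/Downloads/sketch2anim.ckpt ./save/checkpoints/
```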
This code requires:
- Python 3.9
- conda3 or miniconda3
- CUDA-capable GPU (one is enough)
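Once the conda environment below is set up, a quick sanity check that PyTorch (assumed to be installed via `environment.yml`) can see a CUDA-capable GPU is:

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```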
Install ffmpeg (if not already installed):
```bash
sudo apt update
sudo apt install ffmpeg
```

For Windows, use this instead.
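To confirm that ffmpeg is available on your PATH:

```bash
ffmpeg -version
```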
Setup conda env:
```bash
conda env create -f environment.yml
conda activate sketch2anim
python -m spacy download en_core_web_sm
pip install git+https://github.com/openai/CLIP.git
```

Download dependencies:
```bash
bash prepare/download_glove.sh
bash prepare/download_t2m_evaluators.sh
bash prepare/prepare_t5.sh
bash prepare/download_smpl_models.sh
```

HumanML3D - Follow the instructions in HumanML3D, then copy the resulting dataset to our repository:
```bash
cp -r ../HumanML3D/HumanML3D ./dataset/HumanML3D
```

Please add the content text to `./demo/demo.json`.
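The exact schema of `demo.json` is defined by this repository; the snippet below is only a hypothetical sketch (the `text` field name and the single-prompt layout are assumptions, not the confirmed format):

```bash
# Hypothetical example only -- check the repository's demo files for the real schema
cat > ./demo/demo.json <<'EOF'
{
  "text": "a person jumps forward and then waves with the right hand"
}
EOF
```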
Then run:

```bash
bash demo.sh
```

Example output:
We plan to open-source the Blender plugin within this week.

Our code is heavily based on MLD and MotionLCM.
The motion visualization is based on MLD and TEMOS.
We also thank the following works:
guided-diffusion, MotionCLIP, text-to-motion, actor, joints2smpl, MoDi, HumanML3D, OmniControl.
This code is distributed under an MIT license.
Note that our code depends on several other libraries, including SMPL, SMPL-X, and PyTorch3D, and uses the HumanML3D dataset. Each of these has its own license that must also be adhered to.

