Official implementation of CVPR 2025 Highlight paper "RGBAvatar: Reduced Gaussian Blendshapes for Online Modeling of Head Avatars".
- Clone this repository.

  ```shell
  git clone https://github.com/gapszju/RGBAvatar.git
  cd RGBAvatar
  ```

- Create the conda environment.

  ```shell
  conda create -n rgbavatar python=3.10
  conda activate rgbavatar
  ```

- Install PyTorch and nvdiffrast. Please make sure that the PyTorch CUDA version matches your system's CUDA version. We use CUDA 11.8 here.

  ```shell
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  pip install git+https://github.com/NVlabs/nvdiffrast
  ```

- Install the other packages.

  ```shell
  pip install -r requirements.txt
  ```

- Compile the PyTorch CUDA extension.

  ```shell
  pip install submodules/diff-gaussian-rasterization
  ```
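Mismatched CUDA builds are the most common install problem. With PyTorch installed you can compare `torch.__version__` against `torch.version.cuda`; the small parsing helper below is our own illustration (not part of this repository) of how the wheel tag encodes the CUDA version:

```python
import re

def cuda_tag(torch_version: str) -> str:
    """Extract the CUDA version from a PyTorch wheel tag, e.g. '2.1.0+cu118' -> '11.8'."""
    m = re.search(r"\+cu(\d+)", torch_version)
    if m is None:
        return "cpu-only or unknown build"
    digits = m.group(1)              # '118' -> major '11', minor '8'
    return f"{digits[:-1]}.{digits[-1]}"

if __name__ == "__main__":
    # With PyTorch installed, check the real values instead:
    #   import torch; print(torch.__version__, torch.version.cuda)
    print(cuda_tag("2.1.0+cu118"))   # -> 11.8
```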
For offline reconstruction, we use the FLAME template model and follow INSTA to preprocess the video sequence.

- You need to create an account on the FLAME website and download the FLAME 2020 model. Please unzip `FLAME2020.zip` and put `generic_model.pkl` under `./data/FLAME2020`.
- Please follow the instructions in INSTA. You may first use the Metrical Photometric Tracker to track the sequence, then run `generate.sh` provided by INSTA to mask the head.
- Organize INSTA's output in the following form, and modify `data_dir` in the config file to point to the dataset path.

  ```
  <DATA_DIR>
  ├── <SUBJECT_NAME>
  │   ├── checkpoint   # FLAME parameters for each frame, generated by the tracker
  │   └── images       # generated by the script of INSTA
  ```
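Before launching training, it can save time to verify that each subject folder matches the layout above. This small checker is our own sketch, not part of the repository:

```python
from pathlib import Path

def check_subject(data_dir: str, subject: str) -> list[str]:
    """Return the missing entries for <DATA_DIR>/<SUBJECT_NAME>, if any."""
    root = Path(data_dir) / subject
    missing = []
    # Subdirectories expected by the offline training pipeline.
    for sub in ("checkpoint", "images"):
        if not (root / sub).is_dir():
            missing.append(str(root / sub))
    return missing

if __name__ == "__main__":
    problems = check_subject("./data", "bala")
    print("missing:", problems) if problems else print("dataset layout looks good")
```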
For online reconstruction, we use the FaceWareHouse template model and a real-time face tracker, DDE, to compute the expression coefficients in real time. We will release the code of this version in the future.
```shell
python train_offline.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH --preload
```

Command line arguments for `train_offline.py`:

- `--subject`: subject name for training (`bala` by default).
- `--work_name`: name of the experiment; training results will be saved under `output/WORK_NAME`.
- `--config`: config file path (`config/offline.yaml` by default).
- Which split of the image sequence to use: `train`, `test`, or `all` (`train` by default).
- `--preload`: whether to preload image data into CPU memory, which accelerates training.
- Whether to output log information during training.
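The options above map naturally onto a standard `argparse` interface. The sketch below is our own illustration of that mapping, not the repository's code; only the flags shown in the command line (`--subject`, `--work_name`, `--config`, `--preload`) are confirmed, and the `--split`/`--log` names are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of a train_offline.py-style interface; --split and --log are guessed names.
    p = argparse.ArgumentParser(description="Offline avatar training (illustrative sketch)")
    p.add_argument("--subject", default="bala", help="subject name for training")
    p.add_argument("--work_name", required=True, help="results saved under output/WORK_NAME")
    p.add_argument("--config", default="config/offline.yaml", help="config file path")
    p.add_argument("--split", default="train", choices=["train", "test", "all"],
                   help="which part of the image sequence to use")
    p.add_argument("--preload", action="store_true",
                   help="preload images into CPU memory to speed up training")
    p.add_argument("--log", action="store_true", help="print log information during training")
    return p

if __name__ == "__main__":
    args = build_parser().parse_args(["--work_name", "demo", "--preload"])
    print(args.subject, args.work_name, args.split, args.preload)
```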
We provide 12 pretrained avatar models here.
```shell
python train_online.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH --video_fps 25
```

Command line arguments for `train_online.py`:

- `--subject`: subject name for training (`bala` by default).
- `--work_name`: name of the experiment; training results will be saved under `output/WORK_NAME`.
- `--config`: config file path (`config/online.yaml` by default).
- `--video_fps`: FPS of the input video stream (25 by default).
- Whether to output log information during training.
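The `--video_fps` value directly fixes the per-frame time budget for the online pipeline: at 25 fps, tracking, model update, and rendering must together stay under 40 ms per frame. A trivial helper (ours, for illustration) makes the arithmetic explicit:

```python
def frame_budget_ms(video_fps: float) -> float:
    """Time available per frame, in milliseconds, for an online pipeline."""
    if video_fps <= 0:
        raise ValueError("fps must be positive")
    return 1000.0 / video_fps

if __name__ == "__main__":
    print(frame_budget_ms(25))  # -> 40.0
```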
```shell
python calculate_metrics.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH
```

Command line arguments for `calculate_metrics.py`:

- `--subject`: subject name for training (`bala` by default).
- Path of the experiment output folder (`output` by default).
- `--work_name`: name of the experiment to be evaluated.
- Frame number at which the sequence is split into training and test sets (`-350` by default).
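The negative default suggests Python-style indexing from the end of the sequence, i.e. the last 350 frames form the test set. The sketch below shows that interpretation; it is our reading, not the repository's code, so check `calculate_metrics.py` for the exact behavior:

```python
def split_frames(frames: list, split: int = -350):
    """Split a frame list into (train, test) at the given index.

    A negative index counts from the end of the sequence, so the
    default -350 keeps the last 350 frames for testing (standard
    Python slicing semantics).
    """
    return frames[:split], frames[split:]

if __name__ == "__main__":
    train, test = split_frames(list(range(1000)))
    print(len(train), len(test))  # -> 650 350
```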
```shell
python render.py --subject SUBJECT_NAME --work_name WORK_NAME
```

Command line arguments for `render.py`:

- `--subject`: subject name for training (`bala` by default).
- Path of the experiment output folder (`output` by default).
- `--work_name`: name of the experiment to be rendered.
- Whether to use a white background (black by default).
- Whether to render the alpha channel.
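The background option is ordinary alpha compositing: each rendered RGBA pixel is blended over an opaque white or black background. A minimal per-pixel sketch (our helper, not the repository's renderer):

```python
def composite_over(rgba, background):
    """Alpha-composite one RGBA pixel (floats in [0, 1]) over an opaque RGB background."""
    r, g, b, a = rgba
    br, bg_, bb = background
    # Standard "over" operator with an opaque background: c*a + bg*(1-a).
    return (r * a + br * (1 - a),
            g * a + bg_ * (1 - a),
            b * a + bb * (1 - a))

WHITE, BLACK = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)

if __name__ == "__main__":
    half_red = (1.0, 0.0, 0.0, 0.5)
    print(composite_over(half_red, WHITE))  # -> (1.0, 0.5, 0.5)
```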
TBD
[update on 2025.08.14] We provide training and rendering scripts for the NeRSemble dataset, using the preprocessed data provided by GaussianAvatars. Please set the dataset root path in `config/nersemble.yaml` and put the `flame2023.pkl` file under the `data/FLAME2023` folder. The FLAME 2023 model can be downloaded from the FLAME website.
```shell
python train_offline_nersemble.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH
```

Command line arguments for `train_offline_nersemble.py`:

- `--subject`: subject name for training (`074` by default).
- `--work_name`: name of the experiment; training results will be saved under `output/WORK_NAME`.
- `--config`: config file path (`config/nersemble.yaml` by default).
- Whether to output log information during training.
```shell
python render_nersemble.py --subject SUBJECT_NAME --work_name WORK_NAME
```

Command line arguments for `render_nersemble.py`:

- `--subject`: subject name for training (`074` by default).
- Path of the experiment output folder (`output` by default).
- `--work_name`: name of the experiment to be rendered.
- Whether to use a white background (black by default).
- Whether to render the alpha channel.
```bibtex
@InProceedings{Li_2025_CVPR,
    author    = {Li, Linzhou and Li, Yumeng and Weng, Yanlin and Zheng, Youyi and Zhou, Kun},
    title     = {RGBAvatar: Reduced Gaussian Blendshapes for Online Modeling of Head Avatars},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {10747-10757}
}
```