[ICCV 2025] Official implementation of PRM, a feed-forward framework for high-quality 3D mesh generation with photometric stereo images.
- [✅] Release inference and training code.
- [✅] Release model weights.
- [✅] Release Hugging Face Gradio demo. Please try it at the demo link.
- [ ] Release ComfyUI demo.
We recommend using Python>=3.10, PyTorch>=2.1.0, and CUDA>=12.1.
conda create --name PRM python=3.10
conda activate PRM
pip install -U pip
# Ensure Ninja is installed
conda install Ninja
# Install the correct version of CUDA
conda install cuda -c nvidia/label/cuda-12.1.0
# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.22.post7
# Install Triton
pip install triton
# Install other requirements
pip install -r requirements.txt
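After installation, a quick sanity check (our own suggestion, not part of the repo) confirms that PyTorch was built against CUDA 12.1 and can see a GPU:

# Verify the PyTorch version, its CUDA build, and GPU visibility
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"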
The pretrained model can be found on the model card. Our inference script will download the models automatically. Alternatively, you can manually download the models and put them under the ckpts/ directory.
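For the manual route, one option is the Hugging Face CLI; the repository id below is a placeholder, so substitute the id shown on the model card:

# Hypothetical repo id -- replace <org>/PRM with the id from the model card
huggingface-cli download <org>/PRM --local-dir ckpts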
To run inference:

bash run.sh

We provide our training code to facilitate future research. We use a filtered subset of Objaverse as training data. Before training, you need to pre-process the environment maps and OBJ files into formats that fit our dataloader. To convert the OBJ files, please run:
# Convert OBJ files into mesh files that can be read by the dataloader
python obj2mesh.py path_to_obj save_path

For preprocessing environment maps, please run:
# Pre-process environment maps
python light2map.py path_to_env save_path
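Both scripts take one input path at a time in the invocations above. If your assets live in flat directories, a shell loop such as the following (our own sketch, assuming each script accepts a single file per call) batches the conversion; the same pattern applies to light2map.py over a directory of environment maps:

# Sketch: batch-convert every OBJ under raw_objs/ (directory names are placeholders)
for f in raw_objs/*.obj; do
  python obj2mesh.py "$f" meshes/
done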
To train the sparse-view reconstruction models, please run:

# Training on Mesh representation
python train.py --base configs/PRM.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1

Note that you need to change root_dir and light_dir to the paths where you saved the preprocessed GLB files and environment maps.
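For reference, the relevant entries in configs/PRM.yaml would look roughly like this; the exact key nesting is an assumption on our part, so match it against the actual file:

# Sketch only: the real key nesting in configs/PRM.yaml may differ
data:
  params:
    root_dir: /path/to/preprocessed_meshes  # where obj2mesh.py outputs were saved
    light_dir: /path/to/env_maps            # where light2map.py outputs were saved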
If you find our work useful for your research or applications, please cite using this BibTeX:
@article{ge2024prm,
  title={PRM: Photometric Stereo based Large Reconstruction Model},
  author={Ge, Wenhang and Lin, Jiantao and Shen, Guibao and Feng, Jiawei and Hu, Tao and Xu, Xinli and Chen, Ying-Cong},
  journal={arXiv preprint arXiv:2412.07371},
  year={2024}
}

We thank the authors of the following projects for their excellent contributions to 3D generative AI!