🪄DiGA3D: Coarse-to-Fine Diffusional Propagation of Geometry and Appearance for Versatile 3D Inpainting
Jingyi Pan1, Dan Xu2, Qiong Luo1,2
1The Hong Kong University of Science and Technology (Guangzhou)
2The Hong Kong University of Science and Technology
ICCV 2025
We introduce DiGA3D, a novel and versatile 3D inpainting pipeline that leverages diffusion models to propagate consistent appearance and geometry in a coarse-to-fine manner. First, DiGA3D develops a robust strategy for selecting multiple reference views to reduce errors during propagation. Next, DiGA3D designs an Attention Feature Propagation (AFP) mechanism that propagates attention features from the selected reference views to other views via diffusion models to maintain appearance consistency. Furthermore, DiGA3D introduces a Texture-Geometry Score Distillation Sampling (TG-SDS) loss to further improve the geometric consistency of inpainted 3D scenes. Extensive experiments on multiple 3D inpainting tasks demonstrate the effectiveness of our method.
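For readers unfamiliar with score distillation, the TG-SDS loss builds on the standard Score Distillation Sampling (SDS) gradient popularized by DreamFusion. Below is a minimal NumPy sketch of that *generic* SDS gradient, not the paper's TG-SDS variant; the function name `sds_gradient`, the noise-predictor callable `eps_pred_fn`, and the weighting `w(t) = 1 - alpha_bar[t]` are illustrative assumptions standing in for a real diffusion model.

```python
import numpy as np

def sds_gradient(rendered, eps_pred_fn, t, alpha_bar, guidance_w=1.0, rng=None):
    """Generic SDS gradient sketch (DreamFusion-style), NOT the paper's TG-SDS.

    rendered:    rendered view x from the 3D scene (np.ndarray)
    eps_pred_fn: noise predictor eps_phi(x_t, t); a placeholder for a
                 pretrained diffusion model (assumed interface)
    t:           diffusion timestep index
    alpha_bar:   cumulative noise schedule, alpha_bar[t] in (0, 1)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal(rendered.shape)              # sample Gaussian noise
    a = alpha_bar[t]
    x_t = np.sqrt(a) * rendered + np.sqrt(1.0 - a) * eps   # forward-diffuse the render
    w_t = 1.0 - a                                          # a common choice of w(t)
    # SDS gradient w(t) * (eps_phi(x_t, t) - eps), which would be
    # back-propagated through the renderer to the scene parameters.
    return guidance_w * w_t * (eps_pred_fn(x_t, t) - eps)
```

Intuitively, the gradient pushes the rendered view toward images the diffusion model finds likely; if the predictor exactly recovers the injected noise, the gradient vanishes and the render is left unchanged.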
The evaluation results for object removal, object replacement, and object re-texturing are available here. You can download the data as a .zip file or browse the folders directly from the link.
- Release paper.
- Release evaluation results.
- Release code.
Code will be released. Please stay tuned :D
If you find our paper or code useful in your research, please consider giving us a star and citing our paper.
@inproceedings{pan2025diga3d,
  title={DiGA3D: Coarse-to-Fine Diffusional Propagation of Geometry and Appearance for Versatile 3D Inpainting},
  author={Pan, Jingyi and Xu, Dan and Luo, Qiong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2025}
}