PyTorch implementation of "Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis", ICCV 2025, Oral.
## Installation

```bash
# Create conda environment
conda create -n gaussian python=3.9
conda activate gaussian

# Install dependencies
bash setup.sh
```
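Optionally, a quick check (not part of the repository) can confirm that PyTorch and CUDA are visible after installation:

```python
# Minimal environment check: verifies that PyTorch was installed and that
# a CUDA-capable GPU is visible to it.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```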
## Datasets

We evaluate our method on the LLFF, DTU, Mip-NeRF360, and MVImgNet datasets. Note that, due to the stochastic nature of 3DGS, the evaluation results might differ slightly from those reported in the main paper.

### LLFF

- Download LLFF from here.
- Update `base_path` in `tools/colmap_llff.py` to the actual path of your data.
- Run COLMAP to initialize point clouds and camera parameters (a generic sketch of such a pipeline is shown after this list):

```bash
python tools/colmap_llff.py
```

- Then run:

```bash
bash scripts/run_llff.sh {your data path}
```
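For reference, a COLMAP initialization script along these lines typically chains feature extraction, matching, and sparse mapping. The sketch below is a hypothetical illustration, not the contents of `tools/colmap_llff.py`; it assumes the `colmap` CLI is on your `PATH`, `base_path` is a placeholder, and each scene folder contains an `images/` subdirectory.

```python
import os
import subprocess

# Hypothetical sketch of a COLMAP initialization pipeline; the actual
# tools/colmap_llff.py may differ. Assumes the `colmap` CLI is installed
# and every scene folder under base_path has an `images/` subdirectory.
base_path = "/path/to/llff"  # placeholder: set to your data path

def run_colmap(scene_dir: str) -> None:
    database = os.path.join(scene_dir, "database.db")
    images = os.path.join(scene_dir, "images")
    sparse = os.path.join(scene_dir, "sparse")
    os.makedirs(sparse, exist_ok=True)

    # 1) SIFT feature extraction
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", database,
                    "--image_path", images], check=True)
    # 2) Exhaustive feature matching
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", database], check=True)
    # 3) Sparse reconstruction: camera poses + initial point cloud
    subprocess.run(["colmap", "mapper",
                    "--database_path", database,
                    "--image_path", images,
                    "--output_path", sparse], check=True)

for scene in sorted(os.listdir(base_path)):
    scene_dir = os.path.join(base_path, scene)
    if os.path.isdir(scene_dir):
        run_colmap(scene_dir)
```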
### Mip-NeRF360

- Download Mip-NeRF360 from here.
- Update `base_path` in `tools/colmap_360.py` to the actual path of your data.
- Run COLMAP to initialize point clouds and camera parameters (an optional sanity check on the output follows this list):

```bash
python tools/colmap_360.py
```

- Then run:

```bash
bash scripts/run_360.sh {your data path}
```
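As an optional sanity check on the COLMAP initialization, the reconstruction can be inspected with `pycolmap`. This assumes `pycolmap` is installed (it is not required by the repository) and that the COLMAP mapper wrote its output to `<scene>/sparse/0`; the scene path below is a placeholder.

```python
# Optional sanity check (not part of the repository): load the COLMAP
# reconstruction and report how many cameras, images, and 3D points it
# contains.
import pycolmap

rec = pycolmap.Reconstruction("/path/to/360/garden/sparse/0")  # placeholder path
print("cameras :", len(rec.cameras))
print("images  :", len(rec.images))
print("points3D:", len(rec.points3D))  # initial point cloud used by 3DGS
```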
### DTU

- Follow the instructions here to download and organize the dataset. Download the masks from here.
- Update `base_path` in `tools/colmap_dtu.py` to the actual path of your data.
- Run COLMAP to initialize point clouds and camera parameters:

```bash
python tools/colmap_dtu.py
```

Note that COLMAP fails in some cases (scan8, scan40, and scan110 with 3 views; scan21 with 6 views), so we use randomly initialized point clouds instead (a sketch of such an initialization is shown after this section).

- Update `mask_path` in `copy_mask_dtu.sh` accordingly, and then:
```bash
bash scripts/run_dtu.sh {your data path}
```
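For the scans where COLMAP fails, a random initialization can look roughly like the sketch below: sample points uniformly inside a bounding box, assign random colors, and save them as a PLY file for the 3DGS initialization. This is a hypothetical illustration with made-up bounds, point count, and file name, not the repository's actual fallback; it assumes the `plyfile` package is available.

```python
import numpy as np
from plyfile import PlyData, PlyElement  # assumes plyfile is installed

# Hypothetical random point-cloud initialization (the repo's fallback may
# differ): uniform points in a cube with random RGB colors, saved as PLY.
def random_point_cloud(num_points: int = 100_000, extent: float = 3.0) -> np.ndarray:
    # Points uniformly distributed in [-extent, extent]^3
    return (np.random.rand(num_points, 3) * 2.0 - 1.0) * extent

def save_ply(path: str, xyz: np.ndarray) -> None:
    rgb = (np.random.rand(*xyz.shape) * 255).astype(np.uint8)  # random colors
    dtype = [("x", "f4"), ("y", "f4"), ("z", "f4"),
             ("red", "u1"), ("green", "u1"), ("blue", "u1")]
    vertices = np.empty(xyz.shape[0], dtype=dtype)
    vertices["x"], vertices["y"], vertices["z"] = xyz.T.astype(np.float32)
    vertices["red"], vertices["green"], vertices["blue"] = rgb.T
    PlyData([PlyElement.describe(vertices, "vertex")]).write(path)

save_ply("points3d_random.ply", random_point_cloud())  # placeholder file name
```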
## Evaluation

- Evaluation on LLFF
- Evaluation on Mip-NeRF360
- Evaluation on DTU
- Evaluation on MVImgNet

## Citation

If you find the project useful, please consider citing:
```bibtex
@article{zhao2024self,
  title={Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis},
  author={Zhao, Chen and Wang, Xuan and Zhang, Tong and Javed, Saqib and Salzmann, Mathieu},
  journal={arXiv preprint arXiv:2411.00144},
  year={2024}
}
```