I used a GCP instance with an A100-80GB GPU, JupyterLab 4.0, and CUDA 11.8 (the default template in Vertex AI Workbench).
First, convert the input frames into an mp4 video with ffmpeg:

```bash
ffmpeg -framerate 30 -pattern_type glob -i 'frames/*.jpg' \
  -c:v libx264 -crf 23 -pix_fmt yuv420p \
  -vf "scale=1920:-2" \
  output_ready.mp4
```

Clone the ViPE repository:

```bash
git clone https://github.com/Omal1k/vipe.git
```

ViPE's Python dependencies are built against CUDA 12.8, so install that toolkit and put it on the path (the exported paths must match the installed version, 12.8):

```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install cuda-toolkit-12-8

echo 'export PATH=/usr/local/cuda-12.8/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
nvcc --version
```
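Since this walkthrough ends up with two CUDA toolkits installed side by side (12.8 here, 11.8 later for Gaussian Splatting), it is easy for `PATH` to point at the wrong one. A small sketch like this (the helper name is mine) extracts the release number that the `nvcc` on your path actually reports:

```bash
# Extract the "release X.Y" number from the output of `nvcc --version`.
cuda_release() {
  nvcc --version | sed -n 's/.*release \([0-9.]*\),.*/\1/p'
}
```

After sourcing `~/.bashrc`, `cuda_release` should print `12.8`; if it prints something else, the exported paths do not match the toolkit you just installed.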
Create a new conda environment and install the third-party dependencies:

```bash
conda env create -f envs/base.yml
conda activate vipe

# You can switch to your own PyPI index if you want.
pip install -r envs/requirements.txt --extra-index-url https://download.pytorch.org/whl/cu128

# Build the project and install it into the current environment.
# Omit the -e flag to install the project as a regular package.
pip install --no-build-isolation -e .
```

Once the Python package is installed, you can use the `vipe` CLI to process raw videos in mp4 format:
```bash
vipe infer YOUR_VIDEO.mp4
```
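If you have several clips to process, a minimal batch wrapper over the same CLI works; `vipe_batch` is a hypothetical helper name, and I assume `vipe infer` takes one file at a time as shown above:

```bash
# Run `vipe infer` on every mp4 in a directory (hypothetical batch helper).
vipe_batch() {
  local dir="$1"
  for f in "$dir"/*.mp4; do
    [ -e "$f" ] || continue  # skip the literal pattern when no mp4s match
    vipe infer "$f"
  done
}
```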
You can use the following script to convert the ViPE results to COLMAP format. For example:

```bash
python scripts/vipe_to_colmap.py vipe_results/ --sequence dog_example
```

The Gaussian Splatting code expects CUDA 11.8, so install that toolkit and switch the exported paths to it:

```bash
sudo apt-get install cuda-toolkit-11-8
echo 'export PATH=/usr/local/cuda-11.8/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
nvcc --version
```

Then deactivate the ViPE environment, clone the Gaussian Splatting repository, and create its conda environment:

```bash
conda deactivate
cd
git clone https://github.com/graphdeco-inria/gaussian-splatting --recursive
cd gaussian-splatting
conda env create --file environment.yml
```
Activate the environment and rearrange the ViPE output into the directory layout Gaussian Splatting expects (`sparse/0/*.bin` and a nested `images/` folder):

```bash
conda activate gaussian_splatting

mkdir -p /home/jupyter/vipe_results_colmap/output_ready/sparse/0
mv /home/jupyter/vipe_results_colmap/output_ready/sparse/*.bin /home/jupyter/vipe_results_colmap/output_ready/sparse/0/

cd /home/jupyter/vipe_results_colmap/output_ready/
mv images temp_images
mkdir -p images/images
mv temp_images/* images/images/
rmdir temp_images
```
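The same restructuring, wrapped in a function so it can be rerun on other exports; the function name and the assumption that the export contains flat `sparse/*.bin` files plus an `images/` folder are mine:

```bash
# Rearrange a ViPE COLMAP export into the layout gaussian-splatting reads:
#   sparse/*.bin -> sparse/0/*.bin
#   images/*     -> images/images/*
restructure_colmap() {
  local root="$1"
  mkdir -p "$root/sparse/0"
  mv "$root"/sparse/*.bin "$root/sparse/0/"
  mv "$root/images" "$root/temp_images"
  mkdir -p "$root/images/images"
  mv "$root"/temp_images/* "$root/images/images/"
  rmdir "$root/temp_images"
}
```

For the paths above it would be called as `restructure_colmap /home/jupyter/vipe_results_colmap/output_ready`.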
Train on the converted dataset, render the result, and assemble the rendered frames into a video:

```bash
python train.py -s <path to COLMAP or NeRF Synthetic dataset>
python render.py -m output/[take the latest folder]
ffmpeg -framerate 30 -i output/<ID>/train/renders/%05d.png -c:v libx264 -pix_fmt yuv420p demo_video.mp4
```

And finally you have the rendered video; I added it to this repo as demo_video.mp4.
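A footnote on the "take the latest folder" step: since `render.py -m` wants the newest run directory under `output/`, a one-liner can pick it automatically. This assumes directory modification times reflect training order, and the helper name is mine:

```bash
# Print the most recently modified subdirectory of the given directory.
latest_run() {
  ls -td "$1"/*/ | head -n 1
}
```

With that, the render step could be something like `python render.py -m "$(latest_run output)"`.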