


Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models

Shanghai Artificial Intelligence Laboratory

👋 Join our Lark and Discord



🔥 The technical report is coming soon!

🔥 Update and News

  • [2024.09.14] 🔥 Inference code and checkpoint are released.

😲 Gallery

Installation

1. Create a conda environment and install PyTorch

Note: You may want to adjust the CUDA version according to your driver version.

conda create -n VchitectXL -y
conda activate VchitectXL
conda install python=3.11 pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y
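
Before moving on, a quick sanity check (a minimal sketch; the exact version strings depend on your driver and install) confirms that PyTorch was installed with CUDA support:

conda activate VchitectXL
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expected output resembles: 2.1.0 True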

2. Install dependencies

pip install -r requirements.txt

Inference

First, download the checkpoint, then run the inference script below.
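
The released weights are hosted on Hugging Face; a minimal sketch using huggingface-cli (the repo id below is an assumption, so check the project page for the exact name):

pip install -U "huggingface_hub[cli]"
# Repo id is assumed; replace it with the id from the model card
huggingface-cli download Vchitect/Vchitect-2.0-2B --local-dir ./Vchitect-2.0-2B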

test_file=$1
save_dir=$2
ckpt_path=$3

python inference.py --test_file "${test_file}" --save_dir "${save_dir}" --ckpt_path "${ckpt_path}"
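
For example, a hypothetical invocation (test_file is assumed to be a plain-text file with one prompt per line; all paths are placeholders):

# Hypothetical example run; adjust the paths to your setup
python inference.py --test_file prompts.txt --save_dir ./results --ckpt_path ./Vchitect-2.0-2B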

🔑 License

This code is licensed under Apache-2.0. The framework is fully open for academic research, and commercial use is also permitted free of charge.
