[📄 Paper] [🌐 Project Page] [🤗 Model Weights]
Yihao Meng1,2, Hao Ouyang2, Yue Yu1,2, Qiuyu Wang2, Wen Wang2,3, Ka Leong Cheng2,
Hanlin Wang1,2, Yixuan Li2,4, Cheng Chen2,5, Yanhong Zeng2, Yujun Shen2, Huamin Qu1
1HKUST, 2Ant Group, 3ZJU, 4CUHK, 5NTU
- What it is: A text-to-video model that generates full scenes, not just isolated clips.
- Key Feature: It maintains consistency of characters, objects, and style across all shots in a scene.
- How it works: You provide shot-by-shot text prompts, giving you directorial control over the final video.
We strongly recommend checking out our demo page.
If you enjoyed the videos we created, please consider giving us a star 🌟.
- Full inference code
- HoloCine-14B-full
- HoloCine-14B-sparse
- HoloCine-14B-full-l (for videos longer than 1 minute)
- HoloCine-14B-sparse-l (for videos longer than 1 minute)
- HoloCine-5B-full (for limited-memory users)
- HoloCine-5B-sparse (for limited-memory users)
- Support first frame and key-frame input
- HoloCine-audio
Thanks to Dango233 for implementing a ComfyUI node for HoloCine (kijai/ComfyUI-WanVideoWrapper#1566 and https://github.com/Dango233/ComfyUI-WanVideoWrapper-Multishot/). This part is still under test, so feel free to open an issue if you encounter any problems.
git clone https://github.com/yihao-meng/HoloCine.git
cd HoloCine
We use an environment similar to DiffSynth. If you already have a DiffSynth environment, you can probably reuse it.
conda create -n HoloCine python=3.10
conda activate HoloCine
pip install -e .
We use FlashAttention-3 to implement the sparse inter-shot attention and highly recommend it for its speed. Below is a brief guide to installing FlashAttention-3.
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
cd hopper
python setup.py install
If you encounter environment problems when installing FlashAttention-3, refer to the official GitHub page: https://github.com/Dao-AILab/flash-attention.
If you cannot install FlashAttention-3, you can use FlashAttention-2 as an alternative; our code automatically detects the installed FlashAttention version. FlashAttention-2 is slower than FlashAttention-3 but still produces correct results.
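As a rough illustration of what this automatic detection looks like, here is a minimal sketch (not HoloCine's actual code; it assumes the Hopper build of FlashAttention-3 installs the flash_attn_interface module and FlashAttention-2 installs flash_attn):
# Sketch only: probe which FlashAttention build is importable.
def flash_attention_version():
    try:
        import flash_attn_interface  # FlashAttention-3 (Hopper build), assumed module name
        return 3
    except ImportError:
        pass
    try:
        import flash_attn  # FlashAttention-2
        return 2
    except ImportError:
        return None  # neither build installed

print(flash_attention_version())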
If you want to install FlashAttention-2, you can use the following command:
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
If you have already downloaded Wan 2.2 14B T2V, skip this section.
If not, you need the T5 text encoder and the VAE from the original Wan 2.2 repository: https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B
Based on the repository's file structure, you only need to download models_t5_umt5-xxl-enc-bf16.pth and Wan2.1_VAE.pth.
You do not need to download the google, high_noise_model, or low_noise_model folders, nor any other files.
We recommend using huggingface-cli to download only the necessary files. Make sure you have huggingface_hub installed (pip install huggingface_hub).
This command will download only the required T5 and VAE models into the correct directory:
huggingface-cli download Wan-AI/Wan2.2-T2V-A14B \
--local-dir checkpoints/Wan2.2-T2V-A14B \
--include "models_t5_*.pth" "Wan2.1_VAE.pth"
Alternatively, go to the "Files" tab on the Hugging Face repo and manually download the following two files:
models_t5_umt5-xxl-enc-bf16.pth
Wan2.1_VAE.pth
Place both files inside a new folder named checkpoints/Wan2.2-T2V-A14B/.
Download our fine-tuned high-noise and low-noise DiT checkpoints from the following link:
[➡️ Download HoloCine_dit Model Checkpoints Here]
This download contains four fine-tuned model files: two for the full-attention version (full_high_noise.safetensors, full_low_noise.safetensors) and two for the sparse inter-shot attention version (sparse_high_noise.safetensors, sparse_low_noise.safetensors). The sparse version is still uploading.
You can choose one version to download, or try both if you like.
The full-attention version performs better, so we suggest starting with it. The sparse inter-shot attention version is slightly less stable (though still strong in most cases) but faster than the full-attention version.
For full attention version:
Create a new folder named checkpoints/HoloCine_dit/full/ and place both high and low noise files inside.
For sparse attention version:
Create a new folder named checkpoints/HoloCine_dit/sparse/ and place both high and low noise files inside.
If you downloaded the full model, your checkpoints directory should look like this:
checkpoints/
├── Wan2.2-T2V-A14B/
│ ├── models_t5_umt5-xxl-enc-bf16.pth
│ └── Wan2.1_VAE.pth
└── HoloCine_dit/
└── full/
├── full_high_noise.safetensors
└── full_low_noise.safetensors
(If you downloaded the sparse model, replace full with sparse.)
We release two versions of the model: one uses full attention to model the multi-shot sequence (our default), the other uses sparse inter-shot attention.
To use the full attention version:
python HoloCine_inference_full_attention.py
To use the sparse inter-shot attention version:
python HoloCine_inference_sparse_attention.py
If you don't have enough VRAM, you can reduce the frame count from 241 to 81 (about 15 s to 5 s).
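For reference, these frame counts translate to duration roughly as follows (16 fps is inferred from the "241 frames ≈ 15 s" figure above, not read from the config):
# Rough duration estimate; FPS is an assumption inferred from the numbers above.
FPS = 16
for num_frames in (241, 81):
    print(f"{num_frames} frames ≈ {(num_frames - 1) / FPS:.0f} s")  # 15 s and 5 s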
To achieve precise control over the content of each shot, our prompt follows a specific format. Our inference script is flexible and supports two ways to input the text prompt. Note that the text encoder currently truncates any prompt that exceeds its 512-token limit, so keep the prompt concise and under 512 tokens.
This is the easiest way to create new multi-shot prompts. You provide the components as separate arguments inside the script, and our helper function will format them correctly.
- global_caption: A string describing the entire scene, characters, and setting.
- shot_captions: A list of strings, where each string describes one shot in sequential order.
- num_frames: The total number of frames for the video (the default is 241, the sequence length we train on).
- shot_cut_frames: (Optional) A list of frame numbers where you want cuts to happen. By default, the script automatically calculates evenly spaced cuts (see the sketch after this list). If you customize it, make sure the number of cuts given in shot_cut_frames is consistent with the number of shots in shot_captions.
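For illustration, evenly spaced cuts can be derived from the frame and shot counts roughly like this (a hypothetical helper, not the repo's actual function; the script's own spacing logic may differ):
# Hypothetical helper: one cut per shot boundary, evenly spaced over num_frames.
# The last shot runs to the end of the video, so there are num_shots - 1 cuts.
def evenly_spaced_cuts(num_frames, num_shots):
    return [round(i * num_frames / num_shots) for i in range(1, num_shots)]

print(evenly_spaced_cuts(241, 5))  # [48, 96, 145, 193]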
Example (inside HoloCine_inference_full_attention.py):
run_inference(
pipe=pipe,
negative_prompt=scene_negative_prompt,
output_path="test_structured_output.mp4",
# Choice 1 inputs
global_caption="The scene is set in a lavish, 1920s Art Deco ballroom during a masquerade party. [character1] is a mysterious woman with a sleek bob, wearing a sequined silver dress and an ornate feather mask. [character2] is a dapper gentleman in a black tuxedo, his face half-hidden by a simple black domino mask. The environment is filled with champagne fountains, a live jazz band, and dancing couples in extravagant costumes. This scene contains 5 shots.",
shot_captions=[
"Medium shot of [character1] standing by a pillar, observing the crowd, a champagne flute in her hand.",
"Close-up of [character2] watching her from across the room, a look of intrigue on his visible features.",
"Medium shot as [character2] navigates the crowd and approaches [character1], offering a polite bow. ",
"Close-up on [character1]'s eyes through her mask, as they crinkle in a subtle, amused smile.",
"A stylish medium two-shot of them standing together, the swirling party out of focus behind them, as they begin to converse."
],
num_frames=241
)
Example output: 008_seed0.mp4
This mode allows you to provide the full, concatenated prompt string, just like in our original script. This is useful if you want to reuse our provided prompts.
The format must be exact:
[global caption] ... [per shot caption] ... [shot cut] ... [shot cut] ...
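If you prefer to build this raw string programmatically from the same components used in Choice 1, a minimal helper (not part of the repo, shown only to make the format explicit) could look like this:
# Assemble the raw prompt in the "[global caption] ... [per shot caption] ... [shot cut] ..." format.
def build_raw_prompt(global_caption, shot_captions):
    shots = " [shot cut] ".join(shot_captions)
    return f"[global caption] {global_caption} [per shot caption] {shots}"

prompt = build_raw_prompt(
    "A quiet seaside town at dawn. This scene contains 2 shots.",
    [
        "Wide shot of empty streets as the sun rises over the harbor.",
        "Close-up of a fisherman untying his boat, mist drifting past.",
    ],
)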
Example (inside HoloCine_inference_full_attention.py):
run_inference(
pipe=pipe,
negative_prompt=scene_negative_prompt,
output_path="test_raw_string_output.mp4",
# Choice 2 inputs
prompt="[global caption] The scene features a young painter, [character1], with paint-smudged cheeks and intense, focused eyes. Her hair is tied up messily. The setting is a bright, sun-drenched art studio with large windows, canvases, and the smell of oil paint. This scene contains 6 shots. [per shot caption] Medium shot of [character1] standing back from a large canvas, brush in hand, critically observing her work. [shot cut] Close-up of her hand holding the brush, dabbing it thoughtfully onto a palette of vibrant colors. [shot cut] Extreme close-up of her eyes, narrowed in concentration as she studies the canvas. [shot cut] Close-up on the canvas, showing a detailed, textured brushstroke being slowly applied. [shot cut] Medium close-up of [character1]'s face, a small, satisfied smile appears as she finds the right color. [shot cut] Over-the-shoulder shot showing her add a final, delicate highlight to the painting.",
num_frames=241,
shot_cut_frames=[37, 73, 113, 169, 205]
)
Example output: 010_seed0.mp4
We provide several commented-out examples directly within the HoloCine_inference_full_attention.py and HoloCine_inference_sparse_attention.py scripts. You can uncomment any of these examples to try them out immediately.
If you want to quickly test the model on your own idea without designing the prompt yourself, you can use an LLM such as Gemini 2.5 Pro to generate a prompt in our format. Based on our tests, the model is quite stable across diverse genres of text prompts.
If you find this work useful, please consider citing our paper:
@article{meng2025holocine,
title={HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives},
author={Meng, Yihao and Ouyang, Hao and Yu, Yue and Wang, Qiuyu and Wang, Wen and Cheng, Ka Leong and Wang, Hanlin and Li, Yixuan and Chen, Cheng and Zeng, Yanhong and Shen, Yujun and Qu, Huamin},
journal={arXiv preprint arXiv:2510.20822},
year={2025}
}
This project is licensed under CC BY-NC-SA 4.0 (Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License).
The code is provided for academic research purposes only.
For any questions, please contact [email protected].