DiCache: Let Diffusion Model Determine Its Own Cache


Shanghai Jiao Tong University, University of Science and Technology of China, Fudan University,
The Chinese University of Hong Kong, Shanghai Artificial Intelligence Laboratory

arXiv · Project Page


DiCache is a training-free adaptive caching strategy for accelerating diffusion models at runtime.

📖 Abstract

Recent years have witnessed the rapid development of acceleration techniques for diffusion models, especially caching-based acceleration methods. These studies seek to answer two fundamental questions: "When to cache" and "How to use cache", typically relying on predefined empirical laws or dataset-level priors to determine caching timings and adopting handcrafted rules for multi-step cache utilization. However, given the highly dynamic nature of the diffusion process, they often exhibit limited generalizability and fail to cope with diverse samples. In this paper, a strong sample-specific correlation is revealed between the variation patterns of the shallow-layer feature differences in the diffusion model and those of deep-layer features. Moreover, we have observed that the features from different model layers form similar trajectories. Based on these observations, we present DiCache, a novel training-free adaptive caching strategy for accelerating diffusion models at runtime, answering both when and how to cache within a unified framework. Specifically, DiCache is composed of two principal components: (1) Online Probe Profiling Scheme leverages a shallow-layer online probe to obtain an on-the-fly indicator for the caching error in real time, enabling the model to dynamically customize the caching schedule for each sample. (2) Dynamic Cache Trajectory Alignment adaptively approximates the deep-layer feature output from multi-step historical caches based on the shallow-layer feature trajectory, facilitating higher visual quality. Extensive experiments validate DiCache's capability in achieving higher efficiency and improved fidelity over state-of-the-art approaches on various leading diffusion models including WAN 2.1, HunyuanVideo and Flux.
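The abstract's key observation is that the step-to-step variation of a shallow-layer feature tracks the variation of the deep-layer output. A minimal way to check an observation of this kind is to correlate the per-step change magnitudes of the two trajectories. The sketch below is a hypothetical illustration with synthetic stand-in features, not code from this repository:

```python
# Hypothetical check of the paper's observation: the step-to-step change of a
# shallow-layer feature tracks that of the deep-layer feature. The synthetic
# vectors below stand in for real diffusion-model activations.

def step_differences(trajectory):
    """L2 norm of the change between consecutive feature vectors."""
    return [
        sum((b - a) ** 2 for a, b in zip(prev, curr)) ** 0.5
        for prev, curr in zip(trajectory, trajectory[1:])
    ]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / max(vx * vy, 1e-12)

# A high correlation between the two difference sequences would indicate that
# the cheap shallow probe is a usable proxy for the expensive deep feature.
```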

🎨 Overview


🎬 Demo Video

dicache_demo_video_compressed.mp4

💻 Method


DiCache consists of Online Probe Profiling Strategy and Dynamic Cache Trajectory Alignment. The former dynamically determines the caching timing with an online shallow-layer probe at runtime, while the latter combines multi-step caches based on the probe feature trajectory to adaptively approximate the feature at the current timestep. By integrating the above two techniques, DiCache answers "when" and "how" to cache for diffusion models within a unified framework.
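The two components above can be sketched in a few lines. Everything here is a conceptual illustration under assumed names and a made-up threshold; the actual implementation lives in the per-model runners (FLUX, HunyuanVideo, WAN 2.1) in this repository:

```python
# Conceptual sketch of DiCache's two components (hypothetical function names
# and threshold; features are plain Python lists standing in for tensors).

def probe_indicator(shallow_prev, shallow_curr):
    """Online Probe Profiling: relative change of the shallow-layer probe
    feature, used as an on-the-fly proxy for the caching error."""
    diff = sum((b - a) ** 2 for a, b in zip(shallow_prev, shallow_curr)) ** 0.5
    norm = sum(a ** 2 for a in shallow_prev) ** 0.5
    return diff / max(norm, 1e-8)

def should_reuse_cache(shallow_prev, shallow_curr, threshold=0.05):
    """Decide "when to cache": skip the deep layers and reuse the cached
    output whenever the probe indicator stays below a threshold."""
    return probe_indicator(shallow_prev, shallow_curr) < threshold

def align_cache_trajectory(deep_caches, shallow_caches, shallow_curr):
    """Dynamic Cache Trajectory Alignment ("how to use cache"): approximate
    the current deep-layer feature as a combination of two historical caches,
    with the interpolation weight read off the shallow-layer trajectory."""
    (d0, d1), (s0, s1) = deep_caches, shallow_caches
    # Project the current shallow feature onto the segment s0 -> s1.
    seg = [b - a for a, b in zip(s0, s1)]
    rel = [c - a for a, c in zip(s0, shallow_curr)]
    denom = sum(x * x for x in seg)
    w = sum(r * x for r, x in zip(rel, seg)) / max(denom, 1e-8)
    # Carry the same relative position over to the deep-layer caches.
    return [a + w * (b - a) for a, b in zip(d0, d1)]
```

At each denoising step the cheap shallow probe always runs; only when `should_reuse_cache` fires is the expensive deep computation replaced by the aligned cache output, which is how the schedule adapts per sample.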

🔧 Installation

Set up the repository and conda environment

git clone https://github.com/Bujiazi/DiCache.git
cd DiCache

conda create -n dicache python=3.11
conda activate dicache

pip install -r requirements.txt

🎈 Quick Start

DiCache + FLUX

Model downloading is automatic for FLUX.

cd FLUX
python run_flux_dicache.py

DiCache + HunyuanVideo

Follow here to manually download model checkpoints and store them in HunyuanVideo/ckpts.

cd HunyuanVideo
sh run_hunyuanvideo_dicache.sh

DiCache + HunyuanVideo + Sparse VideoGen

Coming Soon

DiCache + WAN 2.1

Follow here to manually download model checkpoints and store them in WAN2.1/ckpts.

cd WAN2.1
sh run_wan_dicache.sh

🖋 News

  • Code for WAN2.1 (V1.0) is released! (2025.10.7)
  • Our Project page is released! (2025.8.30)
  • Code for HunyuanVideo (V1.0) is released! (2025.8.30)
  • Code for FLUX (V1.0) is released! (2025.8.28)
  • Paper is available on arXiv! (2025.8.24)

🏗️ Todo

  • 🚀 Release DiCache for HunyuanVideo + Sparse VideoGen
  • 🚀 Release DiCache for WAN2.1
  • 🚀 Release the project page
  • 🚀 Release DiCache for HunyuanVideo
  • 🚀 Release DiCache for FLUX
  • 🚀 Release paper

📎 Citation

If you find our work helpful, please consider giving a star ⭐ and a citation 📝.

@article{bu2025dicache,
  title={DiCache: Let Diffusion Model Determine Its Own Cache},
  author={Bu, Jiazi and Ling, Pengyang and Zhou, Yujie and Wang, Yibin and Zang, Yuhang and Wu, Tong and Lin, Dahua and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2508.17356},
  year={2025}
}

📣 Disclaimer

This is the official code of DiCache. All copyrights of the demo images and audio belong to community users. Feel free to contact us if you would like them removed.

💞 Acknowledgements

The code is built upon the repositories below; we thank all the contributors for open-sourcing their work.
