
I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models

Zhenxing Mi$^1$, Kuan-Chieh Wang$^2$, Guocheng Qian$^2$, Hanrong Ye$^1$, Runtao Liu$^1$, Sergey Tulyakov$^2$, Kfir Aberman$^2$, Dan Xu$^1$

$^1\text{HKUST}$, $^2\text{Snap Inc.}$

TL;DR

  • Align the VLM to an LLM decoder instead of a diffusion decoder.
  • This is based on the finding that the LLM decoder shares the same input feature space as the diffusion decoder.
  • ThinkDiff-LVLM aligns the deep features of the LVLM's generated tokens, rather than those of its input tokens, to the decoders.
  • This transfers reasoning capabilities to diffusion decoders (generated tokens are answers, while input tokens are only questions).

Introduction

This paper presents ThinkDiff, a novel alignment paradigm that enables multimodal in-context understanding and reasoning capabilities in text-to-image diffusion models by integrating the capabilities of vision-language models (VLMs). Directly aligning VLMs with diffusion decoders via diffusion loss requires complex and costly reasoning-based data pairs with multimodal inputs and image outputs. Instead, ThinkDiff leverages vision-language training as a proxy task, aligning VLMs to a large language model (LLM) decoder. This proxy task is feasible because the LLM decoder shares the same input feature space as diffusion decoders that use the corresponding LLM encoder for text embedding. As a result, alignment with diffusion decoders can be achieved by alignment with the LLM decoder. ThinkDiff effectively transfers multimodal in-context understanding and reasoning capabilities from VLMs to diffusion models, eliminating the need for complex reasoning-based multimodal datasets by using only readily available image-text pairs for training. Experimental results demonstrate that ThinkDiff significantly improves performance on the challenging CoBSAT benchmark for multimodal in-context reasoning generation, raising the best accuracy from 19.2% to 46.3%, with only 5 hours of training on 4 A100 GPUs.
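
To make the proxy task concrete, here is a minimal conceptual sketch in PyTorch: a small aligner maps VLM token features into the LLM decoder's input-embedding space and is trained with the decoder's ordinary next-token loss on captions, in place of a diffusion loss. The aligner architecture, dimensions, and names below are hypothetical illustrations and are not taken from this repository.

import torch
import torch.nn as nn

class Aligner(nn.Module):
    """Hypothetical aligner network: maps VLM token features into the input
    feature space of the LLM decoder (shared with the diffusion decoder)."""
    def __init__(self, vlm_dim=3584, llm_dim=4096, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=vlm_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Linear(vlm_dim, llm_dim)

    def forward(self, vlm_feats):                    # (B, T, vlm_dim)
        return self.proj(self.encoder(vlm_feats))    # (B, T, llm_dim)

def proxy_loss(llm_decoder, aligner, vlm_feats, caption_ids):
    """Vision-language proxy task: a frozen Hugging-Face-style causal LLM decoder
    must reproduce the caption from the aligned features, supervised by the
    ordinary next-token cross-entropy loss."""
    prefix = aligner(vlm_feats)                                  # aligned multimodal prefix
    cap_embeds = llm_decoder.get_input_embeddings()(caption_ids)
    inputs_embeds = torch.cat([prefix, cap_embeds], dim=1)
    ignore = torch.full(prefix.shape[:2], -100,                  # mask prefix positions from the loss
                        dtype=torch.long, device=caption_ids.device)
    labels = torch.cat([ignore, caption_ids], dim=1)
    return llm_decoder(inputs_embeds=inputs_embeds, labels=labels).loss

Because the diffusion decoder consumes the same feature space as the LLM decoder, an aligner trained this way can be placed in front of the diffusion decoder at inference time without ever computing a diffusion loss.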

🌟Multimodal in-context reasoning generation

Multimodal in-context composition

🌟Single image + text for video

🌟Click here🌟 for the videos!

🌟Single image + text

🌟Two images

🌟Two images + text

More results are on the Project Page!

Dataset

Follow the MiniGPT-4 dataset guidance to download image datasets in WebDataset format.
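
The downloaded shards are plain WebDataset tar files. As a rough, illustrative sketch (not the repository's loader), they can be inspected with the webdataset package; the shard pattern and the jpg/txt field keys below are assumptions and depend on how the data was packaged.

import webdataset as wds

# Shard pattern and sample keys (jpg/txt) are assumptions; adjust them to
# match the downloaded WebDataset shards.
dataset = (
    wds.WebDataset("cc_sbu/{00000..00010}.tar")
    .decode("pil")                # decode images to PIL.Image
    .to_tuple("jpg", "txt")       # yield (image, caption) pairs
)

for image, caption in dataset:
    print(image.size, caption[:60])
    break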

Environment Setup

  1. Create and activate environment
conda create -n thinkdiff python==3.9.21 -y
conda activate thinkdiff
  2. Install PyTorch (adjust CUDA version as needed)
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia

PyTorch 2.4.0 is also supported

Install dependencies

pip install -r requirements.txt

If you encounter errors with pyairports, install it manually:

pip install git+https://github.com/ozeliger/pyairports.git@dev

Install our modified vLLM:

Please refer to vLLM for Embedding for installation instructions.

Checkpoints

Please download our checkpoints here.

ThinkDiff-LVLM

  1. Data Preprocessing

Generate a WIDS JSON list similar to this:

python scripts/get_wids_input_json_para.py
  2. Precompute embeddings (a minimal illustration follows these steps)
bash runs/run_qwen2_vl_embed_ccsbu.sh
  3. Training
bash runs/train_thinkdiff_lvlm_ccsbu.sh
  4. Testing
bash runs/test_thinkdiff_lvlm.sh
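
The embedding step is where the "generated tokens, not input tokens" idea from the TL;DR materializes. Below is a minimal, illustrative sketch using the Hugging Face transformers API rather than the repository's modified vLLM pipeline; the model ID, image path, and prompt are assumptions, and the actual precompute script is runs/run_qwen2_vl_embed_ccsbu.sh.

# Illustrative only: extract last-layer features of an LVLM's *generated* tokens.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"            # assumed checkpoint
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")                  # placeholder image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the image."},
]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64,
                         return_dict_in_generate=True, output_hidden_states=True)

# out.hidden_states holds one tuple of per-layer tensors per decoding step.
# Taking the last layer's final position at each step gives the deep features
# from which each generated token (the "answer") is predicted; these, not the
# input-token features, are what ThinkDiff-LVLM aligns to the decoders.
gen_feats = torch.cat([step[-1][:, -1:, :] for step in out.hidden_states], dim=1)
print(gen_feats.shape)   # (1, num_generated_steps, hidden_dim)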

ThinkDiff-CLIP

  1. Training (No Preprocessing Required)
bash runs/train_thinkdiff_clip.sh
  2. Testing
bash runs/test_thinkdiff_clip_image_text.sh
bash runs/test_thinkdiff_clip_two_images.sh
bash runs/test_thinkdiff_clip_video_text.sh

More detailed training and testing instructions will be updated soon.

Citation

@article{mi2025thinkdiff,
  title={I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models},
  author={Mi, Zhenxing and Wang, Kuan-Chieh and Qian, Guocheng and Ye, Hanrong and Liu, Runtao and Tulyakov, Sergey and Aberman, Kfir and Xu, Dan},
  journal={ICML},
  year={2025}
}

License

  • MIT (LICENSE)
  • BSD-3-Clause (LICENSE_Lavis.md)
  • BSD-3-Clause (LICENSE_MiniGPT4.md)
