
IPO: Iterative Preference Optimization for Text-to-Video Generation

[arXiv] [Project Page] [Hugging Face 2B ckpt] [Hugging Face 5B ckpt]

This repo contains the IPO pre-trained weights and the training/sampling code for our paper *IPO: Iterative Preference Optimization for Text-to-Video Generation*.

News

  • (🔥 New) 2025/2/26 💥: We open-source **IPOC-5B-v1.0**, our CogVideoX-5B model post-trained with the IPO method; for this release, the reward model was additionally trained on more data.

  • (🔥 New) 2025/2/26 💥: We open-source **IPOC-2B-v1.0**, our CogVideoX-2B model post-trained with the IPO method.

✅ TODO List

  • Release IPOC-2B-v1.0 weights
  • Release IPOC-5B-v1.0 weights
  • Support WANX 2.1
  • Open-source training code
  • Open-source training data

Table of Contents

  1. Comparison of video demos
  2. Introduction
  3. Model Usage
  4. Citation
  5. Acknowledgements

Comparison of video demos

Side-by-side demos (001.mp4–008.mp4) comparing IPO-2B with the CogVideoX-2B baseline.

Introduction

In this paper, we propose to align video foundation models with human preferences through post-training. To this end, we introduce an Iterative Preference Optimization (IPO) strategy that improves generated video quality by incorporating human feedback. Specifically, IPO uses a critic model to judge video generations, either through pairwise ranking as in Direct Preference Optimization (DPO) or through point-wise scoring as in Kahneman-Tversky Optimization (KTO). Guided by these preference signals, IPO optimizes the video foundation model, improving generated videos in subject consistency, motion smoothness, aesthetic quality, and other dimensions. In addition, IPO builds the critic on a multi-modal large language model, which lets it assign preference labels automatically, without retraining or relabeling. As a result, IPO can efficiently perform multi-round preference optimization in an iterative manner, without tedious manual labeling. Comprehensive experiments demonstrate that IPO effectively improves the video generation quality of a pretrained model and enables a model with only 2B parameters to surpass one with 5B parameters. Moreover, IPO achieves new state-of-the-art performance on the VBench benchmark.
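The pairwise-ranking objective mentioned above can be sketched in plain Python. This is a minimal, generic form of the DPO-style loss, not the paper's exact training objective; the function name, arguments, and `beta` value are illustrative, and in practice the log-probabilities would come from the video model and a frozen reference copy.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_pairwise_loss(logp_chosen, logp_rejected,
                      ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Simplified pairwise preference loss in the style of DPO.

    The policy is rewarded for raising the log-likelihood of the preferred
    video (relative to the frozen reference model) more than that of the
    rejected one.
    """
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -math.log(sigmoid(beta * margin))

# With zero margin the loss is log(2) ~= 0.6931; it shrinks as the
# preferred sample becomes relatively more likely under the policy.
print(round(dpo_pairwise_loss(0.0, 0.0, 0.0, 0.0), 4))  # → 0.6931
print(round(dpo_pairwise_loss(5.0, 0.0, 0.0, 0.0), 4))
```

In the iterative setting described above, the MLLM-based critic supplies the chosen/rejected labels for each new batch of generations, so this loss can be re-applied round after round without human annotation.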

Model Usage

Inference

```shell
pip install -r requirements.txt
python scripte/inference.py --prompts ""
```

🔗 Citation

If you find IPO useful for your research and applications, please cite using this BibTeX:

@article{yang2025ipo,
  title={{IPO}: Iterative Preference Optimization for Text-to-Video Generation},
  author={Yang, Xiaomeng and Tan, Zhiyu and Li, Hao},
  journal={arXiv preprint arXiv:2502.02088},
  year={2025}
}

Acknowledgements

We greatly appreciate the open-source contribution of CogVideoX. IPO performs post-training on the CogVideoX model, and its usage is consistent with CogVideoX.
