This repo contains the pre-trained IPO weights and the training/sampling code for our paper IPO: Iterative Preference Optimization for Text-to-Video Generation.
- (🔥 New) 2025/2/26: We open-source **IPOC-5B-v1.0**, the CogVideoX-5B model post-trained with the IPO method; for this release, the reward model was further optimized with additional training data.
- (🔥 New) 2025/2/26: We open-source **IPOC-2B-v1.0**, the CogVideoX-2B model post-trained with the IPO method.
- Release IPOC-2B-v1.0 weights
- Release IPOC-5B-v1.0 weights
- Support WANX 2.1
- Open-source training code
- Open-source training data
- Comparison of video demos
- Introduction
- Model Usage
- Citation
- Acknowledgements
| IPO-2B | | | |
| --- | --- | --- | --- |
| 001.mp4 | 002.mp4 | 003.mp4 | 004.mp4 |
| 005.mp4 | 006.mp4 | 007.mp4 | 008.mp4 |

| CogVideoX-2B | | | |
| --- | --- | --- | --- |
| 001.mp4 | 002.mp4 | 003.mp4 | 004.mp4 |
| 005.mp4 | 006.mp4 | 007.mp4 | 008.mp4 |
In this paper, we propose to align video foundation models with human preferences through post-training. To this end, we introduce an Iterative Preference Optimization (IPO) strategy that incorporates human feedback to enhance generated video quality. Specifically, IPO exploits a critic model to judge video generations, either by pairwise ranking as in Direct Preference Optimization or by point-wise scoring as in Kahneman-Tversky Optimization. Guided by these preference-feedback signals, IPO optimizes video foundation models to improve generated videos in subject consistency, motion smoothness, aesthetic quality, and other dimensions. In addition, IPO builds the critic model on a multi-modal large language model, which allows it to automatically assign preference labels without retraining or relabeling. In this way, IPO can efficiently perform multi-round preference optimization in an iterative manner, without tedious manual labeling. Comprehensive experiments demonstrate that IPO effectively improves the video generation quality of a pretrained model and helps a model with only 2B parameters surpass one with 5B parameters. IPO also achieves new state-of-the-art performance on the VBench benchmark.
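The pairwise-ranking branch of the method follows the standard Direct Preference Optimization objective. Below is a minimal sketch of that loss, not this repository's actual implementation: the function name `dpo_loss`, the value `beta=0.1`, and the assumption that per-video log-likelihoods under the policy and a frozen reference model are already computed are all illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Pairwise DPO loss over critic-labeled (chosen, rejected) video pairs.

    Each tensor holds per-sample log-likelihoods of the generated videos under
    the policy being post-trained or under the frozen reference model.
    """
    # How far the policy has moved from the reference on each sample.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # Bradley-Terry style objective: push the critic's chosen video above the rejected one.
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()

if __name__ == "__main__":
    # Illustrative call with random stand-in log-likelihoods for a batch of 4 pairs.
    logps = [torch.randn(4) for _ in range(4)]
    print(dpo_loss(*logps))
```

In each iteration, the MLLM-based critic labels freshly sampled video pairs, and the model is updated with this objective before the next sampling round, which is what makes the optimization iterative without manual relabeling.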
pip install -r requirements.txt
python scripts/inference.py --prompts ""
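Since IPO post-trains CogVideoX and (as noted in the acknowledgements) usage is consistent with CogVideoX, the released checkpoints should also be loadable with the standard CogVideoX pipeline from `diffusers`. The sketch below is an assumption, not the repository's official entry point: the checkpoint path and the prompt are placeholders, and the sampling parameters mirror common CogVideoX-2B settings.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Placeholder path: point this at the released IPOC-2B-v1.0 weights.
pipe = CogVideoXPipeline.from_pretrained("path/to/IPOC-2B-v1.0", torch_dtype=torch.float16)
pipe.to("cuda")

video = pipe(
    prompt="a close-up of raindrops hitting a window at dusk",  # example prompt
    num_inference_steps=50,
    guidance_scale=6.0,
    num_frames=49,
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```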
If you find IPO useful for your research and applications, please cite using this BibTeX:
@article{yang2025ipo,
  title={IPO: Iterative Preference Optimization for Text-to-Video Generation},
author={Yang, Xiaomeng and Tan, Zhiyu and Li, Hao},
journal={arXiv preprint arXiv:2502.02088},
year={2025}
}

We greatly appreciate the CogVideoX team's contribution to open source. Our IPO models are post-trained from CogVideoX, so their usage is consistent with CogVideoX.