jp-guo/CODiff


Compression-Aware One-Step Diffusion Model for JPEG Artifact Removal

Jinpei Guo, Zheng Chen, Wenbo Li, Yong Guo, and Yulun Zhang, "Compression-Aware One-Step Diffusion Model for JPEG Artifact Removal", ICCV, 2025

[paper] [supplementary material]

🔥🔥🔥 News

  • 2025-02-14: This repo is released.

Abstract: Diffusion models have demonstrated remarkable success in image restoration tasks. However, their multi-step denoising process introduces significant computational overhead, limiting their practical deployment. Furthermore, existing methods struggle to effectively remove severe JPEG artifacts, especially in highly compressed images. To address these challenges, we propose CODiff, a compression-aware one-step diffusion model for JPEG artifact removal. The core of CODiff is the compression-aware visual embedder (CaVE), which extracts and leverages JPEG compression priors to guide the diffusion model. Moreover, we propose a dual learning strategy for CaVE, which combines explicit and implicit learning. Specifically, explicit learning enforces a quality prediction objective to differentiate low-quality images with different compression levels. Implicit learning employs a reconstruction objective that enhances the model's generalization. This dual learning allows for a deeper and more comprehensive understanding of JPEG compression. Experimental results demonstrate that CODiff surpasses recent leading methods in both quantitative and visual quality metrics.
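The dual learning strategy described above can be sketched as a combined objective: a cross-entropy term for the explicit quality-prediction task plus a reconstruction term for the implicit task. The following is a minimal NumPy illustration of that idea only; the loss weight `lam`, the shapes, and the function names are assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cave_dual_loss(qf_logits, qf_labels, recon, target, lam=1.0):
    """Sketch of the dual objective: explicit quality prediction plus
    implicit reconstruction. `lam` (the balance weight) is an assumption."""
    probs = softmax(qf_logits)                      # (batch, num_qf_levels)
    n = qf_labels.shape[0]
    # Explicit: cross-entropy on the compression-level (quality factor) label.
    explicit = -np.log(probs[np.arange(n), qf_labels] + 1e-12).mean()
    # Implicit: reconstruction objective on the restored output.
    implicit = np.mean((recon - target) ** 2)
    return explicit + lam * implicit
```

In this sketch, a model that both classifies the compression level correctly and reconstructs the target exactly drives the loss toward zero.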


CODiff reconstruction demos on JPEG images with QF=1

CODiff reconstruction demos on JPEG images with QF=5

CODiff reconstruction demos on JPEG images with QF=10
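The quality factors (QF) above follow the standard libjpeg convention, where a lower QF means coarser quantization and therefore stronger blocking artifacts. As background context only (this is the IJG `jpeg_quality_scaling` rule applied to the baseline luminance table, not code from this repository):

```python
# How the JPEG quality factor (QF) scales quantization, per the
# standard IJG/libjpeg convention. Background illustration only.

# First row of the baseline JPEG luminance quantization table (Annex K).
BASE_LUMA_ROW = [16, 11, 10, 16, 24, 40, 51, 61]

def scaled_quant_row(qf):
    """Scale the base quantization values for a quality factor in 1..100."""
    qf = min(max(qf, 1), 100)
    scale = 5000 // qf if qf < 50 else 200 - 2 * qf
    return [min(max((q * scale + 50) // 100, 1), 255) for q in BASE_LUMA_ROW]

# Lower QF -> much larger quantization steps -> more severe artifacts.
print(scaled_quant_row(1))   # saturates at the maximum step size of 255
print(scaled_quant_row(10))  # still very coarse
print(scaled_quant_row(90))  # mild quantization
```

At QF=1 every quantization step saturates at 255, which is why restoration at that setting is the hardest case shown in the demos.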



Setup

Environment

The implementation is primarily built on top of the OSEDiff codebase.

conda env create -f environment.yml
conda activate codiff

Models

Please download the following models and place them in the model_zoo directory.

  1. SD-2.1-base
  2. CODiff
  3. CaVE

Training

Training consists of two stages. In the first stage, we train CaVE. In the second stage, we freeze the parameters of CaVE and fine-tune CODiff's UNet with LoRA.
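The second-stage setup, a frozen backbone with trainable low-rank adapters, can be illustrated with a minimal NumPy sketch of one LoRA-adapted linear layer. The sizes, rank, and scaling below are illustrative assumptions, not this repository's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 4                 # illustrative sizes only

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight (never updated)
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable LoRA factor
B = np.zeros((d_out, rank))                   # trainable, zero-initialized so the
                                              # adapted layer equals the original at start

def lora_forward(x, alpha=1.0):
    """Forward pass through W + alpha * (B @ A); only A and B would be trained."""
    return x @ (W + alpha * (B @ A)).T
```

Because `B` starts at zero, the adapted model is initially identical to the pretrained one, and training only updates the small `A`/`B` factors rather than the full UNet weights.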

First Stage

Update the training configuration file with appropriate values, then run:

python main_train_cave.py

Second Stage

Update the configuration file with appropriate values, specify the CaVE checkpoint from the first stage in train_codiff.sh, and launch training:

bash train_codiff.sh

Testing

Specify the paths to the CaVE and CODiff checkpoints, as well as the dataset directory in test_codiff.sh, then run:

bash test_codiff.sh

Results

We achieve state-of-the-art performance on the LIVE-1, Urban100, and DIV2K-val datasets. Detailed results can be found in the paper.

Quantitative Comparisons (click to expand)

  • Quantitative results on LIVE-1 dataset from the main paper.

  • Quantitative results on Urban100 dataset from the main paper.

  • Quantitative results on DIV2K-val dataset from the main paper.

Visual Comparisons (click to expand)

  • Visual results on LIVE-1 dataset from the main paper.

  • Visual results on Urban100 dataset from the main paper.

  • Visual results on DIV2K-val dataset from the main paper.

Citation

If you find the code helpful in your research or work, please cite the following paper.

@article{guo2025compression,
    title={Compression-Aware One-Step Diffusion Model for JPEG Artifact Removal},
    author={Guo, Jinpei and Chen, Zheng and Li, Wenbo and Guo, Yong and Zhang, Yulun},
    journal={arXiv preprint arXiv:2502.09873},
    year={2025}
}

Acknowledgements

This code is built on FBCNN and OSEDiff.
