News | Install | Models Zoo | Quick Demo | Train & Test | Datasets | License | Contact | Acknowledgements
This is the official PyTorch code for the paper.
Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration
Chujie Qin, Ruiqi Wu, Zikun Liu, Xin Lin, Chunle Guo, Hyun Hee Park, Chongyi Li†
(† indicates corresponding author)
In ECCV 2024, [Paper Link]
RAM++: Robust Representation Learning via Adaptive Mask for All-in-One Image Restoration
Zilong Zhang*, Chujie Qin*, Chunle Guo, Yong Zhang, Chao Xue, Ming-Ming Cheng, Chongyi Li†
(* indicates equal contribution; † indicates corresponding author)
arXiv preprint, [HomePage], [Paper Link]
- RAM is a blind all-in-one image restoration framework that simultaneously handles 7 restoration tasks and achieves SOTA performance!
- RAM focuses on extracting image priors, rather than degradation priors, from diverse corrupted images by leveraging mask image modeling.
- A persistent distribution shift between standard AIOIR (and broader low-level vision) training sets and real-world degradations constrains generalization; we plan to mitigate this gap in subsequent work.
- DINOv3 maintains dense feature maps over long training schedules, which benefits image restoration; accordingly, we plan to replace DINOv2 with DINOv3.
- RAM++ shows minimal performance decay as task count grows, indicating strong scaling potential; we encourage fine-tuning or re-training on larger, real-world datasets.
- Sep 20, 2025: Released an extended version of our ECCV 2024 paper (Project Page / Paper).
- Feb 24, 2025: A Jittor version is available at RAM-Jittor.
- Oct 20, 2024: Released pretrained weights on Google Drive.
- Oct 3, 2024: Released the code for our paper.
- Clone and enter our repository:
  ```bash
  git clone https://github.com/Dragonisss/RAM.git RAM
  cd RAM
  ```
- Simply run `install.sh` for installation:
  ```bash
  source install.sh
  ```
- Activate the environment whenever you test:
  ```bash
  conda activate RAM
  ```
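As an optional sanity check (our suggestion, not part of the original instructions), you can confirm that PyTorch was installed correctly and can see your GPUs:

```bash
# Optional: print the PyTorch version and whether CUDA is available.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```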
If you would like to benchmark our method for academic research, please refer to pretrained_models.md, where we provide a rich variety of models covering different training strategies, including both pre-trained and fine-tuned models.
Our pipeline can be applied to any image restoration network. We provide the pre-trained and fine-tuned model files for SwinIR and PromptIR mentioned in the paper.
New: RAM++ pretrained and finetuned models are now available! You can download all of the model weights below via Hugging Face, or simply download only the models you are interested in.
| Method | Phase | Framework | Download Links | Config File |
|---|---|---|---|---|
| RAM++ | Pretrain | Restormer | [GoogleDrive] | [options/RAM_Plus/7task/7task_pretrain.yaml] |
| RAM++ | Finetune | Restormer | [GoogleDrive] | [options/RAM_Plus/7task/7task_finetune.yaml] |
| RAM | Pretrain | SwinIR | [GoogleDrive] | [options/RAM_SwinIR/ram_swinir_pretrain.yaml] |
| RAM | Finetune | SwinIR | [GoogleDrive] | [options/RAM_SwinIR/ram_swinir_finetune.yaml] |
| RAM | Pretrain | PromptIR | [GoogleDrive] | [options/RAM_PromptIR/ram_promptir_pretrain.yaml] |
| RAM | Finetune | PromptIR | [GoogleDrive] | [options/RAM_PromptIR/ram_promptir_finetune.yaml] |
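For scripted downloads, the sketch below uses the `huggingface-cli download` command from `huggingface_hub`; the repo id and filename are hypothetical placeholders, so substitute the actual repository and checkpoint names from the links above.

```bash
# NOTE: <org>/<repo> and the checkpoint filename are hypothetical placeholders;
# use the actual repository and file names from the download links above.
huggingface-cli download <org>/<repo> ram_plus_7task_finetune.pth \
    --local-dir ./pretrained_model
```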
To build RAM++, please first download the weights above and the DINOv2 weights, and place them under `./pretrained_model/`.
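For reference, the resulting layout might look like the sketch below; the filenames are illustrative, so keep whatever names the downloaded files actually have.

```
pretrained_model/
├── dinov2_vitl14_pretrain.pth   # DINOv2 backbone weights (illustrative filename)
└── ram_plus_7task_finetune.pth  # RAM++ checkpoint (illustrative filename)
```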
We provide a script for running inference on your own images in `inference/inference.py`.
You can run `python inference/inference.py --help` for detailed usage information.
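For illustration, an invocation might look like the sketch below; the flag names are hypothetical placeholders, so consult `--help` for the script's actual arguments.

```bash
# NOTE: flag names below are hypothetical placeholders; run
# `python inference/inference.py --help` for the actual arguments.
python inference/inference.py \
    --input ./examples/degraded.png \
    --output ./results/ \
    --opt options/RAM_SwinIR/ram_swinir_finetune.yaml
```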
Before proceeding, please ensure the relevant datasets have been prepared as required. You can download the required datasets by following benchmarks.md.
1. Pretraining with MIM

We use the collected datasets for model training. First, execute the following command:
```bash
torchrun \
    --nproc_per_node=[num of gpus] \
    --master_port=[PORT] ram/train.py \
    -opt [OPT] \
    --launcher pytorch

# e.g.
torchrun \
    --nproc_per_node=8 \
    --master_port=4321 ram/train.py \
    -opt options/RAM_SwinIR/ram_swinir_pretrain.yaml \
    --launcher pytorch
```

2. Mask Attribute Conductance Analysis
We use the proposed Mask Attribute Conductance (MAC) analysis to assess the importance of different layers for finetuning. You can run the following command to conduct MAC analysis:
```bash
#============ MAC Analysis For RAM ============#
python scripts/mac_analysis.py -opt [OPT]
# e.g.
python scripts/mac_analysis.py \
    -opt options/RAM_SwinIR/ram_swinir_mac.yml

#============ MAC Analysis For RAM++ ============#
python scripts/adaSAM_mac_analysis.py -opt [OPT]
# e.g.
python scripts/adaSAM_mac_analysis.py \
    -opt options/RAM_Plus/3task/3task_mac.yaml
```

For convenience, we have provided the analysis results for RAM-SwinIR, RAM-PromptIR, and RAM++ mentioned in the paper. You can find them in `./mac_analysis_result/`.
3. Finetuning
```bash
torchrun \
    --nproc_per_node=[num of gpus] \
    --master_port=[PORT] ram/train.py \
    -opt [OPT] \
    --launcher pytorch

# e.g.
torchrun \
    --nproc_per_node=8 \
    --master_port=4321 ram/train.py \
    -opt options/RAM_SwinIR/ram_swinir_finetune.yaml \
    --launcher pytorch
```

You can also prepend `CUDA_VISIBLE_DEVICES=` to choose which GPUs to use, as shown below.
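For example, to restrict finetuning to two specific GPUs:

```bash
# Use only GPUs 0 and 1, launching one process per GPU.
CUDA_VISIBLE_DEVICES=0,1 torchrun \
    --nproc_per_node=2 \
    --master_port=4321 ram/train.py \
    -opt options/RAM_SwinIR/ram_swinir_finetune.yaml \
    --launcher pytorch
```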
We have provided a script for fast evaluation:
```bash
torchrun \
    --nproc_per_node=1 \
    --master_port=[PORT] ram/test.py \
    -opt [OPT] --launcher pytorch
```

To benchmark the performance of RAM on the test datasets, run the following command:
```bash
# RAM-SwinIR
torchrun \
    --nproc_per_node=1 \
    --master_port=4321 ram/test.py \
    -opt options/test/ram_swinir_benchmark.yml --launcher pytorch

# RAM-PromptIR
torchrun \
    --nproc_per_node=1 \
    --master_port=4321 ram/test.py \
    -opt options/test/ram_promptir_benchmark.yml --launcher pytorch
```

To benchmark the performance of RAM++ on the test datasets, run the following command:
```bash
# 3-task
torchrun \
    --nproc_per_node=1 \
    --master_port=4321 ram/test.py \
    -opt options/3task/3task_benchmark.yaml --launcher pytorch

# 5-task
torchrun \
    --nproc_per_node=1 \
    --master_port=4321 ram/test.py \
    -opt options/5task/5task_benchmark.yaml --launcher pytorch

# 7-task
torchrun \
    --nproc_per_node=1 \
    --master_port=4321 ram/test.py \
    -opt options/7task/7task_benchmark.yaml --launcher pytorch
```

This code is licensed under the Pi-Lab License 1.0 for non-commercial use only. Please note that any commercial use of this code requires formal permission prior to use.
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@inproceedings{qin2024restore,
  title={Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration},
  author={Qin, Chu-Jie and Wu, Rui-Qi and Liu, Zikun and Lin, Xin and Guo, Chun-Le and Park, Hyun Hee and Li, Chongyi},
  booktitle={European Conference on Computer Vision},
  pages={364--380},
  year={2024},
  organization={Springer}
}

@misc{zhang2025ramrobustrepresentationlearning,
  title={RAM++: Robust Representation Learning via Adaptive Mask for All-in-One Image Restoration},
  author={Zilong Zhang and Chujie Qin and Chunle Guo and Yong Zhang and Chao Xue and Ming-Ming Cheng and Chongyi Li},
  year={2025},
  eprint={2509.12039},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.12039},
}
```

For technical questions, please contact chujie.qin[AT]mail.nankai.edu.cn or zhangzilong[AT]mail.nankai.edu.cn.
This work is built on BasicSR. Some code is borrowed from Restormer and AdaMAE. We are grateful to their authors and contributors for their outstanding open-source efforts and support.
We also thank all of our contributors.