
OmniLens: Towards Universal Lens Aberration Correction via LensLib-to-Specific Domain Adaptation


Qi Jiang1*, Yao Gao1*, Shaohua Gao1, Zhonghua Yi1, Xiaolong Qian1, Hao Shi1, Kailun Yang2,3, Lei Sun1,4, Kaiwei Wang1, Jian Bai1

1State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University
2School of Artificial Intelligence and Robotics, Hunan University
3National Engineering Research Center of Robot Visual Perception and Control Technology, Hunan University
4Sofia University "St. Kliment Ohridski", INSAIT
* Equal contribution.

⭐ If OmniLens is helpful for you, please help star this repo. Thanks!

🆕 Update

  • 2025.11.24: The AODLib-EAOD is released 🔥
  • 2025.11.24: The DA-Training code is released 🔥
  • 2025.11.24: This repo is released 🔥

⌛ TODO

  • Release Test Images
  • Release Lens Data for AODLib-EAOD
  • Release Code for EAOD

🎆 Abstract

Emerging universal Computational Aberration Correction (CAC) paradigms provide an inspiring solution to lightweight, high-quality imaging: a universal model trained on a lens library (LensLib) blindly addresses the optical aberrations of arbitrary lenses. However, the limited coverage of existing LensLibs leads to poor generalization of the trained models to unseen lenses, and existing fine-tuning pipelines are confined to the case where lens descriptions are known. In this work, we introduce OmniLens, a flexible solution to universal CAC via (i) establishing a convincing LensLib with comprehensive coverage for pre-training a robust base model, and (ii) adapting the model to any specific lens design with unknown lens descriptions via fast LensLib-to-specific domain adaptation. To achieve this, an Evolution-based Automatic Optical Design (EAOD) pipeline is proposed to generate a rich variety of lens samples with realistic aberration behaviors. We then design an unsupervised regularization term for efficient domain adaptation on a few easily accessible real-captured images, based on the statistical observation of dark channel priors in degradation induced by lens aberrations. Extensive experiments demonstrate that the LensLib generated by EAOD yields a universal CAC model with strong generalization capabilities, which can also improve non-blind lens-specific methods by 0.35~1.81 dB in PSNR. Additionally, the proposed domain adaptation method significantly improves the base model, especially in severe aberration cases (by up to 2.59 dB in PSNR).

👀 Introduction

What is OmniLens?

A flexible framework for aberration correction of any optical lens. Whether or not you know the specific lens design, you can benefit from OmniLens. It supports zero-shot correction of unknown lenses and can also serve as a pre-trained model that enables better lens-specific aberration correction models. In addition, OmniLens is the first to show that, with only a pre-trained model and a few casually captured images of the target optical system, unsupervised domain-adaptation finetuning can yield impressive real-world results.

What can OmniLens do?

  • For users without optical expertise: OmniLens provides pre-trained models, and direct zero-shot inference can address many lens aberration degradation cases. In addition, you can simply capture 25 to 50 images with the target system and use our framework's DA-Training mode to quickly adapt the model to your optical system.
  • For users targeting lens-specific aberration correction (non-blind): OmniLens serves as a strong pre-training foundation that boosts lens-specific models while markedly reducing specific data needs and training time. You can load the OmniLens pre-trained weights and finetune on your own lens data, or use our released dataset to train a pre-training model that matches your chosen architecture.
  • For researchers of blind aberration correction: OmniLens releases the AODLib-EAOD dataset, covering diverse optical degradation patterns from different lens types and enabling models that generalize impressively across varied aberration distributions. We anticipate further work on universal blind aberration correction model designs built on this dataset.

How OmniLens achieves these?

  • We introduce Evolution-based Automatic Optical Design (EAOD), an automatic pipeline that generates manufacturable optical designs satisfying given specifications. This enables the construction of a large lens library that broadly covers real-world aberration distributions.
  • We demonstrate that Unsupervised Domain Adaptation (UDA) effectively transfers models from a broadly generalizable LensLib pre-training domain to target lens-specific domains. We also identify and validate the Dark Channel Prior (DCP) property of optical degradation and leverage it to devise our DA framework; a minimal sketch of the dark-channel computation follows below.
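
For intuition, the dark channel of an image is the per-pixel minimum over the color channels followed by a minimum filter over a local patch; aberration-induced blur lifts it away from zero. The following is a minimal PyTorch sketch of such a term, not the exact regularization used in the paper; the patch size and the simple mean penalty are assumptions:

```python
import torch
import torch.nn.functional as F

def dark_channel(img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """Dark channel of a batch of RGB images in [0, 1], shape (B, 3, H, W):
    per-pixel minimum over channels, then a local minimum filter
    (implemented as max-pooling on the negated map)."""
    min_rgb = img.min(dim=1, keepdim=True).values  # (B, 1, H, W)
    pad = patch // 2
    # Replicate padding keeps the output the same spatial size as the input.
    min_rgb = F.pad(min_rgb, (pad, pad, pad, pad), mode="replicate")
    return -F.max_pool2d(-min_rgb, kernel_size=patch, stride=1)

def dcp_regularizer(restored: torch.Tensor) -> torch.Tensor:
    """Illustrative unsupervised penalty: push the dark channel of the
    restored image toward zero, as sharp natural images tend to have."""
    return dark_channel(restored).mean()

# Usage sketch: add `lambda_dcp * dcp_regularizer(output)` to the training
# loss on the unpaired real captures (lambda_dcp is a guessed weight).
```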

📈 Results

AODLib-EAOD vs. other LensLibs

Quantitative Evaluation of the OmniLens Framework

Visual Results on Real-World Dataset

⚙️ Setup

The implementation of our work is based on BasicSR, an open-source toolbox for image/video restoration tasks.

  • Clone this repo or download the project:

```bash
git clone https://github.com/zju-jiangqi/OmniLens
cd OmniLens
```

  • Requirements: Python 3.10, PyTorch 1.12.1, CUDA 11.3 (a quick environment check follows the commands below):

```bash
conda create -n OmniLens python=3.10
conda activate OmniLens
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
python setup.py develop
```
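
If the install succeeded, PyTorch should report the expected version and see your GPU. The snippet below is a generic sanity check, not part of the repo:

```python
import torch

print("torch:", torch.__version__)            # expect 1.12.1+cu113
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```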

📖 LensLib Data

Please download the AODLib-EAOD dataset from our Hugging Face page. The downloaded AODLib-EAOD data contains three parts:

  • -Imgs: This training-ready dataset comprises 2,111 pairs of degraded and ground-truth images. Each pair is produced by randomly sampling one lens from the 1,000 sampled AODLib-EAOD lenses and simulating aberration, distortion, and the ISP. All pre-trained models reported in the paper are trained on this paired set.
  • -Lens_Descriptions: PSFs (-psf) computed at 64 uniformly sampled normalized fields of view for the 1,000 selected lenses, together with relative illumination distributions (-ill) and distortion maps (-distort), which can be used directly to simulate aberration degradation and distortion for each lens.
  • -All_Lenses: The full, unsampled lens library. It likewise includes the PSFs, relative illumination distributions, and distortion maps for direct simulation. The RMS spot radius and related specifications of each lens are annotated in the filenames. We recommend that interested researchers use these data to filter desired aberration degradations and construct their own datasets; a hedged loading sketch follows this list.
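
As an illustration of how the lens descriptions might be consumed, the sketch below blurs an image patch with one field-of-view slice of a PSF stack. The file name, array shapes, and the commented Hugging Face download are assumptions; check the downloaded files for the actual layout, and see the DeepLens recommendation in the Acknowledgement for a faithful simulation pipeline:

```python
import numpy as np
from scipy.signal import fftconvolve
# from huggingface_hub import snapshot_download
# snapshot_download(repo_id=...)  # use the repo id from the project page

# Hypothetical layout: one stack per lens with shape (64, k, k, 3),
# one PSF per normalized field of view; verify against the real files.
psfs = np.load("Lens_Descriptions/psf/lens_0001.npy")
img = np.random.rand(256, 256, 3)  # stand-in for a ground-truth patch

fov = 32  # treat the whole patch as a single field of view
blurred = np.stack(
    [fftconvolve(img[..., c], psfs[fov, ..., c], mode="same") for c in range(3)],
    axis=-1,
)
```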

🔧 Training

If you would like to train your own universal blind aberration correction model (Pre-Training Phase)

Step 1: Prepare LensLib Data

Follow the instructions in LensLib Data to prepare AODLib-EAOD. Then modify the paths to the training image pairs (-Imgs) and your target specific lens test data in options/train/pretrain/train_SwinIR_PSNR.yml or options/train/pretrain/train_FeMaSR_lib.yml.
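
If you prefer to script these edits, the option files are plain YAML. The snippet below is one hedged way to rewrite the data roots; the dataroot_gt/dataroot_lq key names follow common BasicSR conventions and the paths are placeholders, so confirm both against the shipped YAML:

```python
import yaml  # pip install pyyaml

opt_path = "options/train/pretrain/train_SwinIR_PSNR.yml"
with open(opt_path) as f:
    opt = yaml.safe_load(f)  # if custom tags break safe_load, edit by hand

# Placeholder paths; key names assume the usual BasicSR dataset layout.
opt["datasets"]["train"]["dataroot_gt"] = "/data/AODLib-EAOD/Imgs/gt"
opt["datasets"]["train"]["dataroot_lq"] = "/data/AODLib-EAOD/Imgs/lq"
opt["datasets"]["val"]["dataroot_gt"] = "/data/my_lens/test/gt"
opt["datasets"]["val"]["dataroot_lq"] = "/data/my_lens/test/lq"

with open(opt_path, "w") as f:
    yaml.safe_dump(opt, f, sort_keys=False)
```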

Step 2: Training a Universal Model

We use SwinIR and FeMaSR as examples, but you can also train a universal model on our data with any other architecture you prefer; a registration sketch follows below.
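
If you bring your own network, BasicSR discovers architectures through a registry. The placeholder below shows the registration pattern; the network body is a toy stand-in, and if this fork diverges from upstream BasicSR, follow the existing files under basicsr/archs instead:

```python
# basicsr/archs/mynet_arch.py -- the `_arch.py` suffix lets BasicSR
# auto-import the file and find the registered class.
import torch.nn as nn
from basicsr.utils.registry import ARCH_REGISTRY

@ARCH_REGISTRY.register()
class MyNet(nn.Module):
    """Toy restoration network; replace with your own design."""
    def __init__(self, num_feat=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, num_feat, 3, 1, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_feat, 3, 3, 1, 1),
        )

    def forward(self, x):
        return x + self.body(x)  # predict a residual correction
```

Then point network_g in your option file at the registered name (e.g. type: MyNet) and reuse the rest of the pre-training config.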

For SwinIR, run:

```bash
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python basicsr/train.py -opt options/train/pretrain/train_SwinIR_PSNR.yml --auto_resume
```

For FeMaSR, run:

```bash
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python basicsr/train.py -opt options/train/pretrain/train_FeMaSR_lib.yml --auto_resume
```

If you want to finetune a pre-trained universal model for a specific lens, or perform unsupervised finetuning using our DA pipeline (Finetuning and DA Phase)

Lens-Specific Finetuning

Step 1: Prepare Your Specific Data

Prepare your own paired training data captured under the specific lens.

Please modify the paths to the training image pairs and your target specific lens test data in options/train/specific/train_SwinIR_Specific.yml or options/train/specific/train_FemaSR_Specific.yml.

Step 2: Prepare the Pre-Trained Universal Model

You can download our pre-trained SwinIR and FeMaSR weights from our Hugging Face page and place them in the pretrain/ folder. Please modify the paths to the pretrained model in options/train/specific/train_SwinIR_Specific.yml or options/train/specific/train_FemaSR_Specific.yml. A quick way to sanity-check a downloaded checkpoint is sketched below.
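
Before editing the YAML, it can help to confirm the checkpoint loads. BasicSR-style files usually nest the weights under a 'params' or 'params_ema' key, though the exact structure of the released weights is an assumption here (the file name below is a placeholder):

```python
import torch

ckpt = torch.load("pretrain/OmniLens_SwinIR.pth", map_location="cpu")
if isinstance(ckpt, dict) and ("params" in ckpt or "params_ema" in ckpt):
    state = ckpt.get("params_ema", ckpt.get("params"))
else:
    state = ckpt  # assume a plain state_dict
print(len(state), "tensors; first keys:", list(state)[:3])
```

The same check applies to the pre-trained weights used in the DA-Training and Inference steps below.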

Step 3: Finetuning a Specific Model

For SwinIR, run:

```bash
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python basicsr/train.py -opt options/train/specific/train_SwinIR_Specific.yml --auto_resume
```

For FeMaSR, run:

```bash
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python basicsr/train.py -opt options/train/specific/train_FemaSR_Specific.yml --auto_resume
```

Lens-Specific DA-Training

Step 1: Prepare LensLib Data and the Unpaired Data

Follow the instructions in LensLib Data to prepare AODLib-EAOD (only the portion needed for DA-Training).

Prepare your own lens-specific training data: several real-world images captured with the target lens (for DA-Training).

Please modify the data paths in options/train/uda/train_SwinIR_DA.yml or options/train/uda/train_FeMaFA_DA.yml.

Step 2: Prepare the Pre-Trained Universal Model

You can download our pre-trained SwinIR and FeMaSR weights from our Hugging Face page and place them in the pretrain/ folder. Please modify the paths to the pretrained model in options/train/uda/train_SwinIR_DA.yml or options/train/uda/train_FeMaFA_DA.yml.

Step 3: DA-Training

For SwinIR, run:

```bash
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0,1 python basicsr/train.py -opt options/train/uda/train_SwinIR_DA.yml --auto_resume
```

For FeMaSR, run:

```bash
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0,1 python basicsr/train.py -opt options/train/uda/train_FeMaFA_DA.yml --auto_resume
```

💫 Inference

Step 1: Prepare Your Optically Degraded Images

Please modify the data paths in options/test/test_SwinIR.yml or options/test/test_FeMaSR.yml.

Step 2: Prepare the Pre-Trained Universal Model

You can download our pre-trained SwinIR and FeMaSR weights from our Hugging Face page and place them in the pretrain/ folder. Please modify the paths to the pretrained model in options/test/test_SwinIR.yml or options/test/test_FeMaSR.yml.

Step 3: Zero-Shot Inference

For SwinIR, run:

```bash
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python basicsr/test.py -opt options/test/test_SwinIR.yml --auto_resume
```

For FeMaSR, run:

```bash
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python basicsr/test.py -opt options/test/test_FeMaSR.yml --auto_resume
```
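
Real captures are often larger than the training patch size; if zero-shot inference runs out of GPU memory, tiling is a common workaround. The sketch below is written against a generic model callable rather than the repo's test script, and the tile and overlap sizes are assumptions:

```python
import torch

@torch.no_grad()
def tiled_forward(model, img, tile=256, overlap=32):
    """Apply `model` ((1,3,h,w) -> (1,3,h,w)) tile by tile to bound memory.
    Later tiles overwrite the overlap of earlier ones; a feathered
    blend would further reduce visible seams."""
    _, _, h, w = img.shape
    out = torch.zeros_like(img)
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y0 = min(y, max(h - tile, 0))  # clamp so tiles stay in bounds
            x0 = min(x, max(w - tile, 0))
            patch = img[:, :, y0:y0 + tile, x0:x0 + tile]
            out[:, :, y0:y0 + tile, x0:x0 + tile] = model(patch)
    return out
```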

😄 Acknowledgement

This project is built on the excellent BasicSR project. We further recommend using DeepLens to simulate optical degradations with our released PSFs and lens data.

😃 Citation

Please cite us if our work is useful for your research.

```bibtex
@article{jiang2024flexible,
  title={A Flexible Framework for Universal Computational Aberration Correction via Automatic Lens Library Generation and Domain Adaptation},
  author={Jiang, Qi and Gao, Yao and Gao, Shaohua and Yi, Zhonghua and Sun, Lei and Shi, Hao and Yang, Kailun and Wang, Kaiwei and Bai, Jian},
  journal={arXiv preprint arXiv:2409.05809},
  year={2024}
}
```

📓 License

This project is released under the Apache 2.0 license.

✉️ Contact

If you have any questions, please feel free to contact [email protected].
