Official PyTorch implementation for "Luminance-Aware Statistical Quantization: Unsupervised Hierarchical Learning for Illumination Enhancement"
🚀 LASQ: Unsupervised Hierarchical Learning for Illumination Enhancement


🏗️ 1. Introduction

LASQ reformulates low-light image enhancement as a statistical sampling process over hierarchical luminance distributions. It leverages a diffusion-based forward process to autonomously model luminance transitions, achieving unsupervised and generalizable light restoration across diverse illumination conditions. The overall architecture is illustrated below 👇
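As a toy illustration of the idea of "statistical quantization over hierarchical luminance distributions" (this is only a sketch of the intuition, not the LASQ algorithm; all names below are hypothetical):

```python
# Toy sketch: partition pixel luminances into hierarchical levels by
# empirical quantiles. Purely illustrative of "statistical quantization
# over luminance distributions" -- NOT the actual LASQ method.

def quantile_levels(luminances, num_levels):
    """Return quantile boundaries splitting the data into num_levels bins."""
    xs = sorted(luminances)
    return [xs[int(len(xs) * k / num_levels)] for k in range(1, num_levels)]

def quantize(value, boundaries):
    """Map a luminance value to its bin index given sorted boundaries."""
    for i, b in enumerate(boundaries):
        if value < b:
            return i
    return len(boundaries)

# Hierarchy: coarse (2 levels) to fine (8 levels) partitions of the same data.
lums = [0.05, 0.1, 0.12, 0.3, 0.35, 0.5, 0.7, 0.9]
for levels in (2, 4, 8):
    bounds = quantile_levels(lums, levels)
    codes = [quantize(v, bounds) for v in lums]
```

Each pass refines the previous partition, mirroring the hierarchical structure the paper samples over.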

(Figure: overall LASQ pipeline)


📦 2. Create Environment

```bash
# Environment setup
conda create -n LASQ python=3.11
conda activate LASQ

# Install dependencies
pip install -r requirements.txt
```

📂 3. Data Preparation

3.1 💾 Datasets Download

LOLv1 dataset: Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. "Deep Retinex Decomposition for Low-Light Enhancement". BMVC, 2018. 🌐Google Drive

LSRW dataset: Jiang Hai, Zhu Xuan, Ren Yang, Yutong Hao, Fengzhu Zou, Fang Lin, and Songchen Han. "R2RNet: Low-light Image Enhancement via Real-low to Real-normal Network". Journal of Visual Communication and Image Representation, 2023. 🌐Baiduyun (extracted code: wmrr)

Test datasets without GT: 🌐Google Drive

Challenging Scenes: 🌐Google Drive

3.2 🗂️ Datasets Organization

We provide a script TXT_Generation.py to automatically generate dataset path files that are compatible with our code. Please place the generated files according to the directory structure shown below 👇

```
data/
 ├── Image_restoration/
 │    └── LOL-v1/
 │        ├── LOLv1_val.txt
 │        └── unpaired_train.txt
```
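If you want to generate such path files by hand, they are presumably plain lists of image paths, one per line. A minimal stand-in sketch (the helper name and the one-path-per-line format are assumptions about what the training code expects, not the guaranteed interface of TXT_Generation.py):

```python
from pathlib import Path

# Minimal stand-in for TXT_Generation.py: collect image files under a
# dataset root and write their paths, one per line, into a .txt list.
# The one-path-per-line format is an assumption, not a documented spec.

def write_path_list(image_dir, out_txt, exts=(".png", ".jpg", ".jpeg")):
    paths = sorted(
        str(p) for p in Path(image_dir).rglob("*") if p.suffix.lower() in exts
    )
    Path(out_txt).write_text("\n".join(paths) + "\n")
    return paths
```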

🧩 4. Pre-trained Models

You can download our pre-trained models from 🌐Google Drive and place them according to the following directory structure 👇

```
ckpt/
 ├── stage1/
 │    └── stage1_weight.pth.tar
 └── stage2/
      └── stage2_weight.pth.tar
```
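A quick sanity check that the weights are in place before running evaluation (the expected paths are taken from the tree above; the helper itself is just a convenience sketch):

```python
from pathlib import Path

# Sanity-check that the pre-trained weights sit where the code expects
# them; the two paths mirror the directory tree above.
EXPECTED_CKPTS = [
    "ckpt/stage1/stage1_weight.pth.tar",
    "ckpt/stage2/stage2_weight.pth.tar",
]

def missing_checkpoints(root="."):
    """Return the expected checkpoint paths that do not exist under root."""
    return [p for p in EXPECTED_CKPTS if not (Path(root) / p).exists()]
```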

🧪 5. Testing

```bash
python3 evaluate.py
```

🔬 6. Training

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 train.py
```

🖼️ 7. Visual Comparison

(Figures: Visual Result 1 and Visual Result 2)

📚 8. Citation

If you use this code or ideas from the paper for your research, please cite our paper:

@article{kong2025luminance,
  title={Luminance-Aware Statistical Quantization: Unsupervised Hierarchical Learning for Illumination Enhancement},
  author={Kong, Derong and Yang, Zhixiong and Li, Shengxi and Zhi, Shuaifeng and Liu, Li and Liu, Zhen and Xia, Jingyuan},
  journal={arXiv preprint arXiv:2511.01510},
  year={2025}
}

🙏 9. Acknowledgement

Our code is based on LightenDiffusion; please also cite their paper. We thank all the authors for their contributions.
