LASQ reformulates low-light image enhancement as a statistical sampling process over hierarchical luminance distributions, leveraging a diffusion-based forward process to autonomously model luminance transitions and achieve unsupervised, generalizable light restoration across diverse illumination conditions. The overall architecture is illustrated below 👇
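As a rough intuition for the diffusion-based forward process mentioned above, the snippet below sketches a generic DDPM-style forward noising step applied to the luminance channel. The Rec. 601 luminance conversion, the linear noise schedule, and the step count are illustrative assumptions for exposition, not the released LASQ implementation.

```python
# Illustrative sketch only: a standard DDPM-style forward process applied to the
# luminance channel. Schedule, channel conversion, and step count are assumptions.
import torch

def rgb_to_luminance(x: torch.Tensor) -> torch.Tensor:
    """x: (B, 3, H, W) in [0, 1] -> (B, 1, H, W) luminance (Rec. 601 weights)."""
    r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b

def forward_diffuse(y0: torch.Tensor, t: torch.Tensor, betas: torch.Tensor):
    """Sample y_t ~ q(y_t | y_0) = N(sqrt(a_bar_t) * y_0, (1 - a_bar_t) * I)."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(y0)
    return alpha_bar.sqrt() * y0 + (1.0 - alpha_bar).sqrt() * noise, noise

if __name__ == "__main__":
    betas = torch.linspace(1e-4, 2e-2, 1000)   # assumed linear noise schedule
    low_light = torch.rand(4, 3, 128, 128)     # dummy batch of low-light images
    y0 = rgb_to_luminance(low_light)
    t = torch.randint(0, 1000, (4,))           # random diffusion step per sample
    yt, eps = forward_diffuse(y0, t, betas)
    print(yt.shape, eps.shape)                 # torch.Size([4, 1, 128, 128]) twice
```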
```bash
# Environment setup
conda create -n LASQ python=3.11
conda activate LASQ

# Install dependencies
pip install -r requirements.txt
```

The following datasets are used:

LOLv1 dataset: Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. "Deep Retinex Decomposition for Low-Light Enhancement". BMVC, 2018. 🌐Google Drive
LSRW dataset: Jiang Hai, Zhu Xuan, Ren Yang, Yutong Hao, Fengzhu Zou, Fang Lin, and Songchen Han. "R2RNet: Low-light Image Enhancement via Real-low to Real-normal Network". Journal of Visual Communication and Image Representation, 2023. 🌐Baiduyun (extracted code: wmrr)
Test datasets without GT: 🌐Google Drive
Challenging Scenes: 🌐Google Drive
We provide a script TXT_Generation.py to automatically generate dataset path files that are compatible with our code. Please place the generated files according to the directory structure shown below 👇
```
data/
├── Image_restoration/
│   └── LOL-v1/
│       ├── LOLv1_val.txt
│       └── unpaired_train.txt
```
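For context, here is a hypothetical sketch of what such a path-file generator does: it writes one image path per line into a .txt file. The directory layout and one-path-per-line format are assumptions for illustration; use the provided TXT_Generation.py for the exact format expected by our code.

```python
# Hypothetical sketch of generating a dataset path file (one image path per line).
# Directory names and file format here are assumptions, not the official script.
from pathlib import Path

def write_path_file(image_dir: str, out_txt: str, exts=(".png", ".jpg", ".jpeg")):
    """Collect image paths under image_dir and write them, one per line, to out_txt."""
    paths = sorted(p for p in Path(image_dir).rglob("*") if p.suffix.lower() in exts)
    Path(out_txt).parent.mkdir(parents=True, exist_ok=True)
    Path(out_txt).write_text("\n".join(str(p.resolve()) for p in paths) + "\n")
    print(f"Wrote {len(paths)} paths to {out_txt}")

if __name__ == "__main__":
    # Example calls; the LOL-v1 subfolder layout below is assumed.
    write_path_file("datasets/LOL-v1/our485/low",
                    "data/Image_restoration/LOL-v1/unpaired_train.txt")
    write_path_file("datasets/LOL-v1/eval15/low",
                    "data/Image_restoration/LOL-v1/LOLv1_val.txt")
```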
You can download our pre-trained models from 🌐Google Drive and place them according to the following directory structure 👇
```
ckpt/
├── stage1/
│   └── stage1_weight.pth.tar
└── stage2/
    └── stage2_weight.pth.tar
```
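As a quick sanity check after downloading, a checkpoint can be inspected as follows; the internal key names of the .pth.tar files are assumptions and may differ from the released weights.

```python
# Sanity check: verify a downloaded checkpoint loads on CPU and peek at its keys.
import torch

ckpt = torch.load("ckpt/stage1/stage1_weight.pth.tar", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:5])  # top-level keys (assumed structure)
```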
```bash
# Evaluation
python3 evaluate.py

# Multi-GPU training (4 GPUs)
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 train.py
```
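For reference, the following is the generic PyTorch DDP boilerplate that a `torchrun --nproc_per_node=4` launch drives: each worker reads its LOCAL_RANK, joins a NCCL process group, and wraps the model in DistributedDataParallel. This is a minimal sketch, not the repository's train.py.

```python
# Minimal sketch of the distributed setup driven by `torchrun --nproc_per_node=4`.
# Generic DDP boilerplate for illustration only; not the repository's train.py.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun per worker
    dist.init_process_group(backend="nccl")        # rendezvous info comes from torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Conv2d(3, 3, 3, padding=1).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])

    x = torch.rand(2, 3, 64, 64, device=local_rank)
    loss = model(x).mean()
    loss.backward()                                # gradients all-reduced across ranks

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```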
If you use this code or ideas from the paper for your research, please cite our paper:
```bibtex
@article{kong2025luminance,
  title={Luminance-Aware Statistical Quantization: Unsupervised Hierarchical Learning for Illumination Enhancement},
  author={Kong, Derong and Yang, Zhixiong and Li, Shengxi and Zhi, Shuaifeng and Liu, Li and Liu, Zhen and Xia, Jingyuan},
  journal={arXiv preprint arXiv:2511.01510},
  year={2025}
}
```

Our code is built on LightenDiffusion; please also cite their paper. We thank all the authors for their contributions.