
Tech Science Press
Computer Modeling in Engineering & Sciences

DOI: 10.32604/cmes.2025.063595

ARTICLE

A Low Light Image Enhancement Method Based on Dehazing Physical Model


Wencheng Wang1,2,*, Baoxin Yin1,2, Lei Li2,*, Lun Li1 and Hongtao Liu1

1 College of Machinery and Automation, Weifang University, Weifang, 261061, China
2 College of Engineering, Qufu Normal University, Rizhao, 276827, China
*Corresponding Authors: Wencheng Wang. Email: [email protected]; Lei Li. Email: [email protected]
Received: 18 January 2025; Accepted: 25 March 2025; Published: 30 May 2025

Copyright © 2025 The Authors. Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

ABSTRACT: In low-light environments, captured images often exhibit issues such as insufficient clarity and detail loss,
which significantly degrade the accuracy of subsequent target recognition tasks. To tackle these challenges, this study
presents a novel low-light image enhancement algorithm that leverages virtual hazy image generation through dehazing
models based on statistical analysis. The proposed algorithm initiates the enhancement process by transforming the
low-light image into a virtual hazy image, followed by image segmentation using a quadtree method. To improve the
accuracy and robustness of atmospheric light estimation, the algorithm incorporates a genetic algorithm to optimize
the quadtree-based estimation of atmospheric light regions. Additionally, this method employs an adaptive window
adjustment mechanism to derive the dark channel prior image, which is subsequently refined using morphological
operations and guided filtering. The final enhanced image is reconstructed through the hazy image degradation model.
Extensive experimental evaluations across multiple datasets verify the superiority of the designed framework, achieving
a peak signal-to-noise ratio (PSNR) of 17.09 and a structural similarity index (SSIM) of 0.74. These results indicate that
the proposed algorithm not only effectively enhances image contrast and brightness but also outperforms traditional
methods in terms of subjective and objective evaluation metrics.

KEYWORDS: Dark channel prior; quadtree decomposition; genetic algorithm; atmospheric light; image enhancement

1 Introduction
During the image acquisition process, limitations in ambient lighting conditions or imaging equipment
often result in various quality degradation problems, including insufficient brightness, low contrast, and loss
of fine details. These deficiencies significantly reduce the accuracy of subsequent machine vision tasks such
as image segmentation, target detection and object recognition [1]. In response to these limitations, low-light
image enhancement technology has emerged as a crucial research focus [2,3]. By employing advanced image
enhancement algorithms, it becomes possible to substantially improve image quality and clarity, thereby
enhancing the accuracy of object detection and recognition systems [4–7].
In recent years, low-light image enhancement has emerged as a hot topic, garnering significant attention
from numerous research institutions and scholars [8]. Historically, algorithmic approaches in this field can
be divided into two categories: traditional algorithms and deep learning-based algorithms [9,10]. Traditional
algorithms encompass techniques like histogram equalization, gamma correction, Retinex methods, and
image fusion approaches [11,12]. For instance, Fan et al. [13] introduced histogram adjustable algorithms
to mitigate issues of over-enhancement and artifacts commonly associated with traditional histogram
equalization. Ye et al. [14] presented a dual histogram equalization algorithm, which enhances image

Copyright © 2025 The Authors. Published by Tech Science Press.


This work is licensed under a Creative Commons Attribution 4.0 International License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1596 Comput Model Eng Sci. 2025;143(2)

quality by reducing brightness offset and grayscale merging. To address the over-enhancement of bright
areas, Pattanayak et al. [15] developed a bilinear segmented gamma correction method that employs two
linear functions to independently optimize dark and bright regions. Additionally, Acharya et al. [16]
introduced a region-specific gamma adjustment approach utilizing neighborhood intensity statistics for
parameter adaptation, achieving effective contrast enhancement. Du et al. [17] presented an improved
Retinex method that balances denoising and detail preservation through dual constraints and reflectance
mapping reweighting. Similarly, Jia et al. [18] proposed a robust Retinex-based approach with reflectance
mapping reweighting, significantly improving the brightness of low-light images. Ye et al. [19] introduced a
dual-domain enhanced fusion method that effectively addresses issues of low contrast and detail loss. Sun
et al. [20] developed a multi-weight fusion algorithm that optimally balances brightness enhancement and
computational efficiency, providing a practical solution for image processing tasks. Based on the Weber-
Fechner principle, Wang et al. [21] implemented logarithmic grayscale mapping, introducing a simple yet
effective method for adaptive color image improvement. Xin et al. [22] proposed a dark channel-based
specular enhancement algorithm to tackle the challenge of data degradation in specular reflection images
obtained under practical conditions. Inspired by the concept that fusing multi-exposure images can produce
high-quality results, Wang et al. [23] introduced a novel enhancement framework based on a virtual exposure
strategy, achieving remarkable results in color restoration.
Owing to the success of deep learning in image classification, object detection, and other areas,
more and more researchers are exploring its potential in low-light image enhancement.
Chen et al. [24] utilized Transformer and CNN (Convolutional Neural Network) architectures to design
models that address issues such as low PSNR and detail loss in low-light environments. Wang et al. [25]
proposed a Parallel Multi-Scale Network (PMSNet), which effectively preserves details and utilizes angular
correlations through parallel multi-scale feature processing and 3D convolution streams. Experiments
demonstrate that its performance surpasses existing methods. The low-light image enhancement approach
utilizing Generative Adversarial Networks (GANs), introduced by Xu et al. [26], performs well in brightness,
contrast, and detail, particularly outstanding in lighting restoration and noise reduction. The dual branch
generative adversarial network proposed by Jin et al. [27] provides an effective improvement solution for
low contrast and noise issues in low-light images, while the deep autoencoder proposed by Luo et al. [28]
adaptively enhances image details by recognizing signal features. In addition, Dong et al. [29] proposed
an approach incorporating attention mechanisms and adversarial autoencoder architecture to effectively
improve image quality in response to color shift and low contrast phenomena, demonstrating the application
prospects of deep learning in complex scenes. Wang et al. [30] proposed a novel self-adversarial GAN (SA-
GAN) to enhance images, which adds further constraints to the generation process through a designed
self-adversarial mode. Han et al. [31] introduced a rapid and effective image enhancement framework
utilizing conditional GAN, which fully utilizes limited hierarchical features through aggregation strategies
and connection operations, and has good generalization ability. These deep learning algorithms have become
active research directions, but they demand substantial amounts of data for training.
In 2011, Dong et al. [32] observed the image features of numerous low-light images and their inverted
images and discovered that the inverted images of low-light images exhibited striking similarities to hazy
images. Based on this correlation, they proposed an image enhancement algorithm using dark channel prior
(DCP) theory. While this approach enhances low-light image quality to a certain degree, it ignores the
influence of strong light sources or white objects in estimating atmospheric light values. In images containing
multiple lighting conditions and complex scenes, it is difficult to accurately distinguish pixels representing
atmospheric light values. To tackle these challenges, this study introduces a DCP-based image enhancement
algorithm improved with a quadtree and a genetic algorithm. During the atmospheric light estimation
process, our algorithm uses quadtree decomposition to divide the image into small blocks
and optimizes the estimation of atmospheric light values using a genetic algorithm. It evaluates the quality
of each solution based on a fitness function, generates a new generation of individuals through selection,
crossover, and mutation operations, and finally selects the solution with the best fitness as the atmospheric
light value. In addition, in the estimation of transmittance, the image dark channel is calculated by adaptive
patch size and subjected to erosion and dilation operations. Finally, adaptive guided filtering is chosen to
achieve better results. The contributions of this approach can be summarized as follows:
(1) A novel hybrid approach that combines quadtree decomposition with genetic algorithm optimization
for more accurate atmospheric light estimation.
(2) An adaptive dark channel prior extraction method that dynamically adjusts to varying lighting condi-
tions.
(3) A refinement process using morphological operations and guided filtering to reduce artifacts and
improve image clarity.
(4) A design concept for virtual hazy images was proposed, and low-light image enhancement was
achieved based on the hazy degradation model, with a simple and effective process.
The remaining sections are organized as follows: Section 2 introduces the relevant theories of hazy image
degradation models, Section 3 elaborates on the details of the proposed algorithm, Section 4 is experiments
and analysis, and the last section is conclusions and future work.

2 Related Work
2.1 Atmospheric Scattering Model
The physical model most commonly used for image restoration in the dehazing field is the
atmospheric scattering model, which consists of an incident-light attenuation model and an
atmospheric imaging model, as shown in Fig. 1.

Figure 1: Diagram of atmospheric scattering model



$$E(d,\lambda) = E_0(\lambda)\, e^{-\beta(\lambda)d} + E_\infty(\lambda)\left(1 - e^{-\beta(\lambda)d}\right) \tag{1}$$

Here, d is the distance between the scene point and the imaging system, i.e., the depth of the scene;
λ represents the wavelength of the light; E_0(λ) is the irradiance of the light at d = 0; β(λ) is the
atmospheric scattering coefficient; E_0(λ)e^{−β(λ)d} and E_∞(λ)(1 − e^{−β(λ)d}) are the direct attenuation
term and the ambient light term, respectively.
The simplified model expression is as follows:

$$I(x) = J(x)\,t(x) + A\,(1 - t(x)) \tag{2}$$

Among them, A is the atmospheric light value; x represents a pixel of the image; t(x) is the
transmittance; I(x) is a hazy image captured in hazy weather, and J(x) is a clear image obtained
under normal lighting conditions.
The transmittance is expressed by the following equation:

$$t(x) = e^{-\beta d(x)} \tag{3}$$

where β represents the atmospheric attenuation coefficient, and d (x) represents the depth of the scene and
can be understood as the distance from a target point to an image capture device.
For ease of understanding, Eq. (2) is usually expressed as:

$$J(x) = \frac{I(x) - A}{t(x)} + A \tag{4}$$

2.2 Dark Channel Prior Dehazing


The dark channel prior is a statistical rule obtained by He et al. [33] after analyzing a large number
of haze-free outdoor images: in non-sky regions, at least one color channel has a very low intensity,
close to 0, at some pixels. This can be expressed as:

$$J^{dark}(x) = \min_{y\in\Omega(x)}\left(\min_{c\in\{r,g,b\}} J^c(y)\right) \to 0 \tag{5}$$

where Ω(x) is a local window centered at x, c is one of the RGB color channels, and J^c(y) is the
corresponding color channel of the image.
Dividing both sides of Eq. (2) by A^c, we obtain:

$$\frac{I^c(x)}{A^c} = t(x)\,\frac{J^c(x)}{A^c} + (1 - t(x)) \tag{6}$$

Taking the minimum twice on both sides of Eq. (6):

$$\min_{y\in\Omega(x)}\left[\min_{c\in\{r,g,b\}} \frac{I^c(y)}{A^c}\right] = t(x)\,\min_{y\in\Omega(x)}\left[\min_{c\in\{r,g,b\}} \frac{J^c(y)}{A^c}\right] + 1 - t(x) \tag{7}$$

According to the dark channel prior, it follows that:

$$J^{dark}(x) = \min_{y\in\Omega(x)}\left[\min_{c\in\{r,g,b\}} \frac{J^c(y)}{A^c}\right] \to 0 \tag{8}$$

The transmission is then estimated as:

$$t(x) = 1 - \min_{y\in\Omega(x)}\left[\min_{c\in\{r,g,b\}} \frac{I^c(y)}{A^c}\right] \tag{9}$$

In practice, a correction coefficient ω (0 < ω < 1), usually set to 0.95, is employed to modulate the
strength of dehazing and preserve a small amount of haze so that the result looks natural:

$$t(x) = 1 - \omega \min_{y\in\Omega(x)}\left[\min_{c\in\{r,g,b\}} \frac{I^c(y)}{A^c}\right] \tag{10}$$

At this point, the transmittance can be obtained. The atmospheric light value A is taken as the maximum
value among the brightest 0.1% of pixels in the dark channel image. After obtaining t(x) and A, they are
substituted into Eq. (4) to restore the dehazed image.
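For reference, a minimal NumPy/OpenCV sketch of this classic DCP pipeline is given below. The 15 × 15 patch, the per-channel maximum over the brightest dark-channel candidates, and ω = 0.95 follow common practice for He et al.'s method and are assumptions rather than values fixed by this paper.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a patch minimum (Eq. (5))."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)  # min filter == grayscale erosion

def estimate_A(img, dark):
    """Atmospheric light from the brightest 0.1% of dark-channel pixels."""
    n = max(1, int(dark.size * 0.001))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].max(axis=0)  # per-channel max over candidates

def dcp_dehaze(img_u8, omega=0.95, t0=0.1, patch=15):
    img = img_u8.astype(np.float64) / 255.0
    A = estimate_A(img, dark_channel(img, patch))
    t = 1.0 - omega * dark_channel(img / A, patch)   # Eq. (10)
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A                            # Eq. (4)
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```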
In recent years, numerous advanced dehazing methods based on DCP have continued to emerge. Wang
et al. [34] proposed a four-point interpolation method to compute accurate atmospheric light and optimize
transmittance by combining the grayscale image with the input image. Li et al. [35] developed an approach
combining enhanced bright channel prior with DCP, employing a particle swarm-optimized Otsu algorithm
for sky/non-sky segmentation to facilitate physical model parameter estimation. Zhang et al. [36] utilized
multi-channel phase activation and constrained DCP to integrate phase-adjusted Gaussian kernel functions
in the Fourier transform frequency domain, optimizing the brightness channel and improving color fidelity
and brightness.
However, these methods have not fully solved the problem of color fidelity: owing to inaccurate
parameter extraction, the output image often suffers from color distortion and increased noise. Motivated
by this, we propose an optimized method for calculating the atmospheric light value and transmittance,
which substantially boosts image enhancement performance.

3 Algorithm Process
This work applies the DCP theory and the atmospheric scattering model to image enhancement. First,
the input low-light image is inverted to obtain a virtual hazy image. An improved DCP dehazing algorithm
is applied to this virtual hazy image, and the result is inverted again to obtain the final enhanced image.
In this method, the dark channel image is computed with an adaptive patch-size adjustment mechanism
and refined by morphological operations and guided filtering, improving the accuracy of the transmittance
estimate. A combination of quadtree decomposition and a genetic algorithm is used to obtain the
atmospheric light value: the image is segmented with a quadtree, and the quadtree estimate of the
atmospheric light region is further optimized by a genetic algorithm, improving the accuracy and
robustness of atmospheric light estimation. Fig. 2 presents the workflow diagram of the proposed approach.

Figure 2: The flowchart of the proposed method

3.1 Generate Virtual Hazy Images


To obtain a virtual hazy image, the low-light image is first inverted; the inverted image bears strong
similarities to a hazy image. The calculation is:

$$I(i,j) = 255 - J(i,j) \tag{11}$$

where I(i,j) and J(i,j) are the virtual hazy image and the source low-light image, respectively. Fig. 3
shows a comparison, where (a) is the low-light image, (b) is the virtual hazy image obtained by inverting
(a), and (c) is a real hazy image. The histograms on the right are the average grayscale distribution
histograms of (a), (b), and (c), respectively. The grayscale distribution of the virtual hazy image is
similar to that of the real hazy image, so an image dehazing method can be applied to the virtual hazy
image, and the dehazed result can then be inverted to produce the low-light enhancement result.

Figure 3: Comparison of low-light image and hazy image
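The inversion in Eq. (11) is a single array operation; a minimal sketch, assuming an 8-bit image:

```python
import numpy as np

def to_virtual_hazy(low_light: np.ndarray) -> np.ndarray:
    """Eq. (11): invert an 8-bit low-light image into a virtual hazy image."""
    return 255 - low_light  # applied again later to invert the dehazed result
```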



3.2 Atmospheric Light Estimation Using Quadtree Decomposition and Genetic Algorithm
Accurately extracting regions of atmospheric light values is of significant importance in image dehazing.
Conventional DCP-based dehazing methods typically use the brightest 0.1% of pixels in the dark
channel map as the atmospheric light estimate. However, this method is susceptible to the influence of noise
and outliers, which may result in the inaccurate selection of atmospheric light values. To address this issue,
this paper proposes a method that combines quadtree segmentation and a genetic algorithm to obtain more
accurate atmospheric light values. The flowchart is shown in Fig. 4.

Figure 4: Flow chart of atmospheric light value calculation

(1) Rough localization of quadtree decomposition region


A quadtree is an effective data structure for recursively segmenting images. In this module, the
program gradually divides the image into smaller regions, with a segmentation threshold determining
the termination condition: if a subquadrant is smaller than 100 pixels along either its length or width,
decomposition stops.
During each segmentation process, the program checks the current area to determine whether it should
be further subdivided or saved as a leaf node. Finally, the program extracts the segmented effective minimum
regions from the leaf nodes through recursive traversal of the quadtree, ensuring that subsequent processing
only focuses on these meaningful parts for analysis. Partial testing results are shown in Fig. 5.

Figure 5: Quadtree decomposition results
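A recursive sketch of this rough localization step is given below; the 100-pixel side-length threshold follows the text, while the unconditional split (rather than a content-based split test) and the function name are simplifying assumptions.

```python
import numpy as np

MIN_SIDE = 100  # stop decomposing below 100 pixels per side, as in the text

def quadtree_leaves(gray: np.ndarray, x=0, y=0, leaves=None):
    """Recursively split the image into quadrants; collect leaf regions (x, y, w, h)."""
    if leaves is None:
        leaves = []
    h, w = gray.shape
    if w < MIN_SIDE or h < MIN_SIDE:
        leaves.append((x, y, w, h))  # save as a leaf node
        return leaves
    hw, hh = w // 2, h // 2
    quadtree_leaves(gray[:hh, :hw], x,      y,      leaves)
    quadtree_leaves(gray[:hh, hw:], x + hw, y,      leaves)
    quadtree_leaves(gray[hh:, :hw], x,      y + hh, leaves)
    quadtree_leaves(gray[hh:, hw:], x + hw, y + hh, leaves)
    return leaves
```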

(2) Genetic algorithm precise localization


After the initial segmentation of image regions through quadtree decomposition, finer localization is
required, for which methods such as the genetic algorithm (GA), particle swarm optimization (PSO),
ant colony optimization (ACO), and crow search optimization (CSO) can be employed. Each of
these methods has its own characteristics. For example, GA is suitable for complex, multimodal optimization
problems but incurs higher computational costs. PSO is well-suited for continuous optimization problems,
with fast convergence, though it is prone to falling into local optima. ACO is ideal for discrete optimization
problems, particularly those related to path planning. CSO is simple to implement and suitable for small to
medium-scale continuous optimization problems, though its research is still in the early stages. Considering
the specifics of this study, we adopted the GA algorithm for optimization.
The generation of the initial population depends on the leaf-node data obtained from quadtree
segmentation. At this stage, regions are randomly selected from these leaf nodes to initialize a population
of candidate solutions, where each solution corresponds to a potential atmospheric light region. This
initial population provides diversity for the genetic algorithm and lays the foundation for subsequent
optimization. Assume the number of leaf-node regions c_i after quadtree segmentation is N; the number
of individuals in the initial population is set to P, usually with P < N. The genes of each individual are
represented by the coordinates of a leaf node:

$$c_i = (x_i, y_i, w_i, h_i) \tag{12}$$

where (x_i, y_i) is the coordinate of the upper-left corner, and (w_i, h_i) are the width and height of
the region.
The evolutionary process is the core of the genetic algorithm, and several steps are taken at this
stage to optimize the atmospheric light region. The fitness function evaluates the quality of each
individual in the population, i.e., each candidate atmospheric light region. Fitness combines the mean
and variance of the pixel brightness within the region: the brighter the region, the more likely it is a
reliable atmospheric light region, while a small variance means the brightness within the region is more
uniform and hence more representative of the illumination. The evaluation formulas are:

$$\text{Fitness} = \frac{\mu_i}{\sigma_i^2 + \varphi} \tag{13}$$

$$\mu_i = \frac{1}{w_i \times h_i}\sum_{x,y \in c_i} I(x,y) \tag{14}$$

$$\sigma_i^2 = \frac{1}{w_i \times h_i}\sum_{x,y \in c_i} \left(I(x,y) - \mu_i\right)^2 \tag{15}$$

where μ_i is the average brightness within region c_i, σ_i² is the brightness variance within region c_i,
and φ is a small constant added to avoid division by zero. The higher the average brightness μ_i and the
smaller the variance σ_i², the more likely the region represents the atmospheric light; a higher
fitness value therefore indicates a greater probability of the region being selected.
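A direct sketch of Eqs. (13)-(15), assuming a grayscale image and regions encoded as (x, y, w, h) tuples:

```python
import numpy as np

PHI = 1e-6  # small constant φ in Eq. (13), avoids division by zero

def fitness(gray: np.ndarray, region) -> float:
    """Eq. (13): mean brightness of a region divided by (variance + φ)."""
    x, y, w, h = region
    block = gray[y:y + h, x:x + w].astype(np.float64)
    if block.size == 0:
        return 0.0  # degenerate region (e.g., mutated out of bounds)
    return block.mean() / (block.var() + PHI)  # Eqs. (14) and (15)
```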
Next, selection, crossover, and mutation operations are performed. For selection, a tournament
mechanism randomly picks K individuals (K = 3 in this algorithm), compares their fitness, and advances
the fittest candidate to the next generation, ensuring that high-fitness individuals are retained in the
offspring population:

$$c_{next} = \arg\max_{c_j} \text{Fitness}(c_j) \tag{16}$$

The crossover operation selects multiple crossover points between two individuals and exchanges the
corresponding gene intervals to generate new individuals, increasing population diversity. Two individuals
c_1 and c_2 are randomly selected as parents, crossover points are chosen in their gene sequences, and
gene fragments are exchanged to produce two new individuals c_1^new and c_2^new:

$$c^{new} = \text{Crossover}(c_1, c_2) \tag{17}$$

The mutation operation randomly adjusts some genes (coordinates) of newly generated individuals with
a small probability. The mutation probability is set to prevent the algorithm from converging prematurely
to local optima and to preserve the ability to explore new regions of the search space:

$$c^{mut} = c + \Delta c \tag{18}$$

where Δc is a random vector used to make small adjustments to an individual's coordinates; the mutation
probability is set to 0.01.
To enhance the stability and robustness of the algorithm, the genetic algorithm is run multiple times,
and the average brightness of the optimal region across runs is taken as the final atmospheric light value.
The genetic algorithm is run n times; run i records the average brightness μ_best^i of its best solution,
and the mean over all runs gives the final estimate of the atmospheric light value:

$$\mu_{final} = \frac{1}{n}\sum_{i=1}^{n} \mu_{best}^{i} \tag{19}$$

In this algorithm, n = 10, and the optimal atmospheric light region is marked as a rectangular box on
the original image. By combining quadtree decomposition with the global optimization capability of the
genetic algorithm, the proposed method can efficiently and accurately estimate the atmospheric light
region: quadtree decomposition effectively reduces the search space and improves processing efficiency,
while the genetic algorithm ensures the accuracy and stability of the final estimate. This approach
performs well in complex, changing scenes and provides a reliable foundation for the subsequent image
enhancement operations. The experimental results are presented in Fig. 6.

Figure 6: The optimal region for atmospheric light values
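Combining the pieces, a compact sketch of the evolutionary loop is shown below, reusing the `fitness` and `quadtree_leaves` helpers sketched earlier. Tournament size K = 3, mutation probability 0.01, and n = 10 follow the text; the population size, generation count, and mutation magnitude are illustrative assumptions, and enough leaf regions are assumed to fill a tournament.

```python
import random
import numpy as np

def tournament(pop, gray, k=3):
    """Eq. (16): advance the fittest of K randomly chosen individuals."""
    return max(random.sample(pop, k), key=lambda c: fitness(gray, c))

def crossover(c1, c2):
    """Eq. (17): exchange gene fragments (coordinates) between two parents."""
    p = random.randint(1, 3)
    return c1[:p] + c2[p:], c2[:p] + c1[p:]

def mutate(c, prob=0.01, delta=5):
    """Eq. (18): shift some genes by a small random Δc with low probability."""
    return tuple(max(1, g + random.randint(-delta, delta))
                 if random.random() < prob else g for g in c)

def run_ga(gray, leaves, pop_size=20, generations=30):
    pop = random.sample(leaves, min(pop_size, len(leaves)))
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(pop, gray), tournament(pop, gray)
            nxt.extend(mutate(child) for child in crossover(a, b))
        pop = nxt[:pop_size]
    x, y, w, h = max(pop, key=lambda c: fitness(gray, c))
    return gray[y:y + h, x:x + w].mean()  # μ_best of this run

def atmospheric_light(gray, leaves, n=10):
    """Eq. (19): average the best-region brightness over n independent runs."""
    return float(np.mean([run_ga(gray, leaves) for _ in range(n)]))
```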



3.3 Calculation of Coarse Transmittance Based on the Dark Channel Prior


In the DCP calculation, each dark channel pixel is obtained by taking the minimum over the RGB
channels of every pixel within a small patch centered on that point. The patch size used for this
calculation is adapted to the brightness distribution of the image: the brightness distribution is assessed
first, and the patch size is then adjusted accordingly. When the brightness is concentrated in highlighted
areas (most pixel values are high), the patch size should be appropriately increased; when the brightness
distribution is relatively uniform, a medium-sized patch is kept; when the image contains many dark
areas (most pixel values are low), the patch size is reduced. Concretely, when the average brightness is
above 180, a 9 × 9 patch is selected; when it is between 80 and 180, a 7 × 7 patch is used; and when it
is below 80, a 3 × 3 patch is selected.
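This brightness-to-patch-size rule reduces to a small lookup; a sketch:

```python
import numpy as np

def adaptive_patch_size(gray: np.ndarray) -> int:
    """Select the dark-channel patch size from the mean image brightness."""
    mean = gray.mean()
    if mean > 180:      # brightness concentrated in highlights
        return 9
    if mean >= 80:      # relatively uniform brightness
        return 7
    return 3            # predominantly dark image
```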
To further improve the accuracy of the dark channel image, weighted minimum filtering is adopted.
The traditional dark channel calculation can be affected by noise or by misjudging non-hazy areas. To
address this, each pixel is assigned a weight ω(x′, y′) defined from its brightness and color; darker
pixels near the center of a dark area receive higher weights. The weighted minimum filter is:

$$J^{weighted}_{dark}(x,y) = \min_{c\in\{R,G,B\}}\left(\min_{(x',y')\in\Omega(x,y)} \omega(x',y') \times J^c(x',y')\right) \tag{20}$$

Computing this weighted minimum yields a more accurate dark channel image. Next, morphological
operations are applied to the dark channel image, starting with erosion. The erosion operation slides a
structuring element over the image and takes the minimum value within the current local window as the
center pixel value of the new image, reducing the influence of bright areas on the dark channel result
and preserving the local minimum structure. Erosion can be expressed as:

$$J^{eroded}_{dark}(x,y) = \min_{(x',y')\in B(x,y)} J^{weighted}_{dark}(x',y') \tag{21}$$

where B(x,y) is the structuring element centered at (x,y). Erosion preserves the lowest values in local
areas and effectively reduces the interference of bright areas in the result.
To avoid losing image detail during erosion, the dark channel image is then dilated using the same
structuring element. Dilation is the inverse operation of erosion: by sliding the structuring element and
taking the highest value within the window as the center pixel value, it smooths the dark channel image
and compensates for information lost during erosion. Dilation can be expressed as:

$$J^{dilated}_{dark}(x,y) = \max_{(x',y')\in B(x,y)} J^{eroded}_{dark}(x',y') \tag{22}$$

The dilation operation smooths the dark channel image by expanding the local minima, preventing
extreme values in the generated image from affecting the final enhancement. The calculation result is
shown in Fig. 7.

Figure 7: Transmittance calculation

Through the above steps, combined with adaptive patch size dark channel calculation, weighted
minimum filtering, and morphological operations, it is possible to more accurately calculate the dark
channels of low-light images, thereby optimizing the processing effect of image enhancement.
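With OpenCV, the refinement of Eqs. (21) and (22) amounts to an erosion followed by a dilation with the same structuring element; the 5 × 5 size below is an assumption, as the paper does not state it.

```python
import cv2
import numpy as np

def refine_dark_channel(dark: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Eqs. (21)-(22): erode to suppress bright outliers, then dilate to smooth."""
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    eroded = cv2.erode(dark, se)    # local minimum, Eq. (21)
    return cv2.dilate(eroded, se)   # local maximum of the eroded map, Eq. (22)
```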

3.4 Refinement of the Transmittance Map


Transmittance represents the degree to which light intensity is attenuated as it passes through a medium.
The classic transmittance calculation is based on the dark channel prior, i.e., the dark channel of a
haze-free natural image tends toward zero, so the transmittance allows restoration of a haze-free image
close to the real scene. It is computed as:

$$t(x,y) = 1 - \omega \cdot J^{dilated}_{dark}(x,y)/A \tag{23}$$

To improve the transmittance estimate and preserve edge details, guided filtering is employed to refine
the initial transmittance (computed with ω = 0.95). Guided filtering is an edge-preserving filter that
smooths while retaining image edge detail. Its formula is:

$$q_i = \frac{1}{|\omega|}\sum_{j\in\omega(i)} (a_j I_i + b_j) \tag{24}$$

where q_i is the filtered transmittance, I_i is the guidance image (the original input image), and a_j and
b_j are the linear coefficients of the filter, determined by:

$$a_j = \frac{\sum_{i\in\omega(j)} (I_i - \mu_j)\,(t_i - \bar{t}_j)}{\sigma_j^2 + \epsilon} \tag{25}$$

$$b_j = \bar{t}_j - a_j \mu_j \tag{26}$$

where μ_j and σ_j² are the local mean and local variance of the guide in window ω(j), t̄_j is the mean
transmittance in the window, and ε is a regularization parameter used to balance filtering effectiveness
and noise suppression. In this work, the radius and regularization parameter of the guided filter are
adaptively adjusted according to the image size, and the original image is used as the guidance image
I(x) in combination with the transmittance map t(x). This ensures the smoothness of the transmittance
map while preserving the edge details of the image. An example of the operation is shown in Fig. 7e.
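A box-filter sketch of Eqs. (24)-(26), assuming a single-channel guide I and a transmittance map t, both floating point in [0, 1]; in the paper the radius and ε are adapted to the image size, whereas fixed illustrative values are used here.

```python
import cv2
import numpy as np

def guided_filter(I: np.ndarray, t: np.ndarray, radius: int = 40,
                  eps: float = 1e-3) -> np.ndarray:
    """Refine transmittance t with guide I (Eqs. (24)-(26))."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda m: cv2.boxFilter(m, -1, ksize)  # normalized local means
    mu_I, mu_t = mean(I), mean(t)
    var_I = mean(I * I) - mu_I * mu_I             # local variance of the guide
    cov_It = mean(I * t) - mu_I * mu_t            # local covariance of I and t
    a = cov_It / (var_I + eps)                    # Eq. (25)
    b = mu_t - a * mu_I                           # Eq. (26)
    return mean(a) * I + mean(b)                  # Eq. (24) with averaged coefficients
```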

3.5 Virtual Hazy Image Dehazing and Low Light Enhancement


After refining the transmittance, it must be constrained to avoid image artifacts caused by excessively
small transmittance values. The following tolerance mechanism is adopted to correct the transmittance:

$$t(x) = \max(t_0, t(x)) \tag{27}$$

where t_0 = 0.1 is a lower bound that keeps the transmittance from becoming too small, thereby avoiding
artifacts during dehazing. The final haze-free image is then reconstructed as:

$$J(x) = \frac{I(x) - A}{\max(t(x), t_0)} + A \tag{28}$$

After dehazing the virtual hazy image, the original low-light image is enhanced by applying the inverse
operation of Eq. (11).
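End to end, Sections 3.1-3.5 chain together roughly as follows; `atmospheric_light`, `quadtree_leaves`, `adaptive_patch_size`, `refine_dark_channel`, `guided_filter`, and `dark_channel` stand for the components sketched earlier, so this is an outline under those assumptions rather than a drop-in implementation.

```python
import cv2
import numpy as np

def enhance_low_light(img_bgr: np.ndarray, omega=0.95, t0=0.1) -> np.ndarray:
    """Low-light enhancement via the dehazing model (Sections 3.1-3.5)."""
    hazy = 255 - img_bgr                                  # Eq. (11): virtual hazy image
    hazy_f = hazy.astype(np.float64) / 255.0
    gray = cv2.cvtColor(hazy, cv2.COLOR_BGR2GRAY)

    A = atmospheric_light(gray, quadtree_leaves(gray)) / 255.0   # Section 3.2
    patch = adaptive_patch_size(gray)                            # Section 3.3
    dark = refine_dark_channel(dark_channel(hazy_f, patch))
    t = 1.0 - omega * dark / A                                   # Eq. (23)
    t = guided_filter(gray.astype(np.float64) / 255.0, t)        # Section 3.4
    t = np.clip(t, t0, 1.0)[..., None]                           # Eq. (27)

    J = (hazy_f - A) / t + A                                     # Eq. (28)
    J = np.clip(J * 255.0, 0, 255).astype(np.uint8)
    return 255 - J                                               # invert back to the result
```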

4 Experiments and Analysis


4.1 Experimental Environment
To verify the effectiveness of the proposed method, we built a testing platform: MATLAB 2022b on
Windows 11 with a 2.10 GHz processor and 32.0 GB of RAM. Multiple advanced image enhancement
algorithms were run on low-light images from several datasets. Dataset M is a self-built dataset of 1000
authentic low-light images of diverse resolutions, encompassing indoor objects and furnishings as well as
outdoor structures, urban scenes, and natural environments. In addition, we conducted experimental
comparisons on the publicly available datasets DICM [37], LIME [38], LOW, and FUSION [39] by running
the Dong algorithm [32], IBOOST [40], LDR [37], PLM [41], RetinexNet [42], and IceNet [43].

4.2 Subjective Evaluation


First, the algorithm results are analyzed from the perspective of subjective visual perception. Some
results obtained by running the algorithm on dataset M are shown in Fig. 8.

Figure 8: Experimental results of proposed method

In order to fully demonstrate the superiority of the proposed algorithm, multiple advanced algorithms
were run on dataset M, and some details were enlarged and displayed. Some of the results are shown in Fig. 9.
To validate the efficacy of the algorithm proposed in this study, various advanced image enhancement
algorithms were run on the LOW dataset, DICM dataset and LIME dataset, and comparative experiments
were conducted. Partial experimental results were selected, as shown in Figs. 10–12.
From the above figures, it can be seen that the Dong algorithm performs well in improving the overall
brightness of low-light images, allowing hidden details to be revealed and significantly improving the
visibility of the image. However, the details of the image may be excessively smoothed or blurred during
the processing, indicating the shortcomings of the algorithm in preserving details. The IBOOST algorithm
also performs well at enhancing the clarity of object boundaries, making the image more vivid and able to
display more image information. However, there is a problem of excessive sharpening, which can result in
slight artifacts in the edge areas. The LDR algorithm performs moderately in improving image brightness
and contrast and can enhance image brightness without appearing too aggressive. It exhibits natural color
reproduction without obvious distortion, making the image look more realistic and natural. In terms of detail
restoration, the LDR algorithm performs well but may appear slightly blurry in certain areas. Overall, the
LDR algorithm can improve image brightness and contrast while maintaining good naturalness, making it
suitable for scenes that require realistic and natural color reproduction. However, the enhancement effect
may not be ideal when processing extremely low-light images. The PLM algorithm improves image
brightness and contrast markedly, effectively enhancing the overall visual effect of the image.
However, in terms of detail restoration, PLM introduces some noise that affects the naturalness of the
image. In addition, color processing may not be balanced enough, resulting in a slight color cast. The image
enhanced by the RetinexNet algorithm may exhibit color distortion. The proposed algorithm performs well
in improving image brightness and contrast, and can significantly enhance the overall visual effect of the
image. It performs well in color reproduction, avoiding serious color cast problems and making the image
look more realistic and natural. In terms of detail restoration, the proposed algorithm performs excellently,
displaying more image information while effectively suppressing noise. Overall, the proposed algorithm
demonstrates significant advantages in the field of low-light image enhancement, making it suitable for
scenes that require significant improvements in brightness and contrast while maintaining good naturalness
and detail preservation.

Figure 9: Comparison of local enhancement effects with multiple algorithms (columns: input image, Dong, IBOOST, LDR, PLM, our method)

Figure 10: Enhancement results of different algorithms on the LOW dataset



Figure 11: Enhancement results of different algorithms on the DICM dataset

Figure 12: Enhancement results of different algorithms on the LIME dataset

4.3 Objective Evaluation


For objective image quality evaluation, both full-reference and no-reference metrics were selected to
verify the effectiveness of the algorithm [44–46]. The full-reference metrics are PSNR, SSIM, and mean
square error (MSE); the no-reference metrics are entropy, average gradient, and edge intensity. The
full-reference results are averages over the 1000 low-light images in dataset M. The no-reference results
were obtained by running the different low-light enhancement algorithms on the DICM, LIME, LOW,
and FUSION datasets; ten images were randomly selected from each dataset, and the results were averaged.

4.3.1 Image Quality Evaluation with Reference Images


The proposed method is compared with several existing algorithms using detailed experimental data,
with PSNR, SSIM, and MSE quantifying each algorithm's image enhancement performance.

(1) PSNR
PSNR is the ratio of the peak value of the image signal to the noise. It is based on the error between
corresponding pixels and reflects the reconstruction quality of the image signal: a higher value indicates
less distortion and superior image quality. It is computed as:

$$MSE(x,y) = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} \left|x(i,j) - y(i,j)\right|^2 \tag{29}$$

$$PSNR(x,y) = 10\log_{10}\frac{255^2}{MSE(x,y)} \tag{30}$$

where M and N denote the length and width, respectively, (i,j) is the pixel position, and x and y are the
input image and the reference image, respectively.
(2) SSIM
SSIM is grounded in the premise that the human visual system is adept at extracting structural details
from images, and it quantifies the similarity between two images by jointly evaluating their brightness,
contrast, and structure. SSIM values lie in the interval [0, 1]; a higher value signifies a smaller
discrepancy between the output image and the undistorted image, reflecting superior image quality and
enhanced visual performance. The calculation formula is:

$$SSIM(x,y) = \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} \cdot \frac{2\sigma_{xy} + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} \tag{31}$$

where μ_x and μ_y are the average gray values, σ_x² and σ_y² are the variances, and σ_xy is the covariance.
(3) MSE
Mean square error quantifies the average squared difference between each pixel in the input image and
the corresponding pixel in the reference image. A low MSE indicates small differences between the two
images, i.e., high image quality. The formula is:

$$MSE = \frac{1}{N}\sum_{i=1}^{N} (x_i - \hat{x}_i)^2 \tag{32}$$

where x_i and x̂_i are the pixel values of the input image and the reference image, respectively.
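Both reference metrics are available in scikit-image (version ≥ 0.19 for the `channel_axis` argument), and MSE is one line of NumPy; a sketch assuming 8-bit RGB arrays:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reference_metrics(result: np.ndarray, reference: np.ndarray) -> dict:
    """PSNR (Eq. (30)), SSIM (Eq. (31)), and MSE (Eq. (32)) against a reference."""
    mse = np.mean((result.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return {
        "PSNR": peak_signal_noise_ratio(reference, result, data_range=255),
        "SSIM": structural_similarity(reference, result,
                                      channel_axis=-1, data_range=255),
        "MSE": mse,
    }
```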
PSNR is usually used to measure the quality of image restoration, with larger values signifying more
effective restoration. The PSNR of the proposed method is 17.09 dB, the highest among the listed
algorithms (Table 1), indicating that the new algorithm performs well in noise suppression and detail
restoration, effectively reducing noise while preserving the information of the original image. The SSIM
of the proposed method is 0.74, indicating excellent preservation of image structural information and a
natural appearance. The MSE of the new algorithm is 1617.19, indicating that the disparity between the
processed image and the original image is reduced the most; the algorithm outperforms all other listed
algorithms on this metric, demonstrating excellent overall error control. A low MSE value also makes
the image more accurate in its detail representation.

Table 1: Image quality evaluation table with reference images

Metric       Dong      IBOOST    LDR       PLM       Our method
PSNR (dB)    16.85     16.51     13.81     14.91     17.09
SSIM         0.68      0.72      0.59      0.65      0.74
MSE          1751.92   2156.48   4142.64   2508.38   1617.19

In summary, our proposed method performs excellently in terms of image quality, achieving the
best results in PSNR, SSIM, and MSE, indicating its outstanding abilities in noise suppression, structure
preservation, and error minimization. Due to effective noise suppression, the enhanced image is clearer and
more detailed, and good structural preservation makes the enhanced image look natural without distortion
caused by enhancement. Minimizing errors ensures that image details are preserved and textures and edges
are clearer. Less noise and distortion make the enhanced image visually more comfortable and suitable for
human viewing.

4.3.2 Image Quality Evaluation without Reference Images


(1) Entropy. Entropy is an important metric for evaluating image quality, reflecting the information
    content and complexity of an image by measuring the randomness of the pixel distribution: the
    higher the entropy, the richer the details. The calculation formula is:

$$H = -\sum_{i=1}^{n} p(x_i)\log_2 p(x_i) \tag{33}$$

where p(x_i) is the probability of pixel value x_i.


(2) Average gradient. The average gradient is computed from the grayscale values of the image and
    measures its clarity and ability to express detail, characterizing the relative sharpness of the image:
    the larger the average gradient, the richer the image details and edge information. The calculation
    formula is:

$$AG(F) = \frac{1}{(M-1)(N-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{\left(I(i+1,j)-I(i,j)\right)^2 + \left(I(i,j+1)-I(i,j)\right)^2}{2}} \tag{34}$$

where M and N are the width and height of the image, and I(i,j) is the pixel value at position (i,j).
(3) Edge intensity. Edge intensity measures the clarity and detail representation of an image and is an
    important concept in image processing. It is essentially the gradient magnitude at edge points,
    reflecting the edges of the image: the higher the edge intensity, the better the image quality.

$$EI(F) = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\sqrt{s_x(i,j)^2 + s_y(i,j)^2} \tag{35}$$

$$s_x = F * h_x \tag{36}$$

$$s_y = F * h_y \tag{37}$$

where M and N are the width and height of the image, h_x and h_y are the Sobel operators in the x and
y directions, and s_x and s_y are the corresponding Sobel convolution results.
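The three no-reference metrics can be computed directly from a grayscale image; a sketch of Eqs. (33)-(35), assuming an 8-bit input:

```python
import cv2
import numpy as np

def entropy(gray: np.ndarray) -> float:
    """Eq. (33): Shannon entropy of the grayscale histogram."""
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(gray: np.ndarray) -> float:
    """Eq. (34): mean RMS of horizontal and vertical forward differences."""
    g = gray.astype(np.float64)
    dx = g[1:, :-1] - g[:-1, :-1]
    dy = g[:-1, 1:] - g[:-1, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))

def edge_intensity(gray: np.ndarray) -> float:
    """Eq. (35): mean magnitude of the Sobel gradients (Eqs. (36)-(37))."""
    sx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    sy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    return float(np.mean(np.sqrt(sx ** 2 + sy ** 2)))
```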
The entropy of our proposed method is 7.31 (Table 2), indicating that the enhanced image contains rich
information and retains complete details; the algorithm effectively preserves and enhances the detailed
information of the image during processing. The average gradient of our method is 9.21, demonstrating
a significant advantage in image clarity and a stronger ability to enhance the details and contrast of the
image. The edge intensity of our algorithm is much higher than that of the other algorithms, indicating
that the enhanced image has clear edges and that the algorithm excels at preserving and enhancing them.

Table 2: Image quality evaluation table without reference images

Metric             Dong     IBOOST   LDR      PLM      Our method
Entropy            7.03     7.14     7.11     7.21     7.31
Average gradient   7.01     6.80     6.27     7.67     9.21
Edge intensity     69.34    67.34    63.13    77.06    91.44

The results of the different algorithms on the four datasets are shown in Fig. 13. Based on a
comprehensive analysis of the above metrics, the proposed algorithm has multiple significant advantages
in image processing, demonstrating excellent performance in noise suppression, information retention,
clarity improvement, and edge enhancement.

Figure 13: Entropy, average gradient, and edge intensity of the different algorithms (Dong, IBOOST, LDR, PLM, our method) on four datasets: (a) DICM; (b) LIME; (c) LOW; (d) FUSION

5 Conclusions
This work proposes an improved dark channel prior algorithm for improving the quality of low-light
images. Through the integration of quadtree segmentation and genetic algorithm techniques, the
estimation of the atmospheric light value has been refined, significantly improving image contrast and
brightness. Adaptive adjustment of the window size improves the accuracy of the dark channel image,
while guided filtering is used to refine the transmittance map, improving the dehazing effect. Experiments
on multiple datasets achieved a PSNR of 17.09 and an SSIM of 0.74, showing that the algorithm has
significant advantages in enhancing image details and preserving structural information. Because a
genetic algorithm is used, the computational complexity remains high; further reducing this complexity
for real-time applications is the focus of our future research.
Acknowledgement: We acknowledge the editors and the anonymous reviewers for insightful suggestions on this work.

Funding Statement: This work was supported by the Natural Science Foundation of Shandong Province (nos.
ZR2023MF047, ZR2024MA055 and ZR2023QF139), the Enterprise Commissioned Project (nos. 2024HX104 and
2024HX140), the China University Industry-University-Research Innovation Foundation (nos. 2021ZYA11003 and
2021ITA05032), the Science and Technology Plan for Youth Innovation of Shandong’s Universities (no. 2019KJN012).

Author Contributions: Wencheng Wang wrote the initial manuscript and developed the main system. Baoxin Yin
co-wrote the manuscript. Lei Li and Lun Li helped design the algorithm and participated in the testing. Hongtao Liu
revised the manuscript. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The dataset and source code used in this research are available from the first author
on reasonable request.

Ethics Approval: Not applicable.

Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.

References
1. Liu H, Deng X, Shao H. Advancements in remote sensing image dehazing: introducing URA-net with multi-scale
dense feature fusion clusters and gated jump connection. Comput Model Eng Sci. 2024;140(3):2397–424. doi:10.
32604/cmes.2024.049737.
2. Upadhyay BB, Sarawadekar K. A low cost FPGA implementation of retinex based low-light image enhancement
algorithm. IEEE Trans Circuits Syst II Express Briefs. 2024;71(7):3503–7. doi:10.1109/TCSII.2024.3361561.

3. Wang W, Yuan X, Chen Z, Wu X, Gao Z. Weak-light image enhancement method based on adaptive local gamma
transform and color compensation. J Sens. 2021;2021(1):5563698. doi:10.1155/2021/5563698.
4. Qu J, Liu RW, Gao Y, Guo Y, Zhu F, Wang FY. Double domain guided real-time low-light image enhancement for
ultra-high-definition transportation surveillance. IEEE Trans Intell Transp Syst. 2024;25(8):9550–62. doi:10.1109/
TITS.2024.3359755.
5. Xu X, Wang R, Lu J. Low-light image enhancement via structure modeling and guidance. In: 2023 IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR); 2023 Jun 17–24; Vancouver, BC, Canada. p.
9893–903. doi:10.1109/CVPR52729.2023.00954.
6. Fan Y, Wang Y, Liang D, Chen Y, Xie H, Wang FL, et al. Low-FaceNet: face recognition-driven low-light image
enhancement. IEEE Trans Instrum Meas. 2024;73:5019413. doi:10.1109/TIM.2024.3372230.
7. Zhou J, Shi T, Zhang W, Chu W. Underwater diver image enhancement via dual-guided filtering. Comput Model
Eng Sci. 2022;131(2):1063–81. doi:10.32604/cmes.2022.019447.
8. Wang W, Yuan X, Wu X, Liu Y. Fast image dehazing method based on linear transformation. IEEE Trans Multimed.
2017;19(6):1142–55. doi:10.1109/TMM.2017.2652069.
9. Wang W, Wu X, Yuan X, Gao Z. An experiment-based review of low-light image enhancement methods. IEEE
Access. 2020;8:87884–917. doi:10.1109/ACCESS.2020.2992749.
10. Li X, Liu M, Ling Q. Pixel-wise gamma correction mapping for low-light image enhancement. IEEE Trans Circuits
Syst Video Technol. 2024;34(2):681–94. doi:10.1109/TCSVT.2023.3286802.
11. Guo L, Wan R, Yang W, Kot AC, Wen B. Cross-image disentanglement for low-light enhancement in real world.
IEEE Trans Circuits Syst Video Technol. 2024;34(4):2550–63. doi:10.1109/TCSVT.2023.3303574.
12. Liu X, Xie Q, Zhao Q, Wang H, Meng D. Low-light image enhancement by retinex-based algorithm unrolling and
adjustment. IEEE Trans Neural Netw Learn Syst. 2024;35(11):15758–71. doi:10.1109/TNNLS.2023.3289626.
13. Fan X, Wang J, Wang H, Xia C. Contrast-controllable image enhancement based on limited histogram. Electronics.
2022;11(22):3822. doi:10.3390/electronics11223822.
14. Ye B, Jin S, Li B, Yan S, Zhang D. Dual histogram equalization algorithm based on adaptive image correction. Appl
Sci. 2023;13(19):10649. doi:10.3390/app131910649.
15. Pattanayak A, Acharya A, Panda NR. Local gamma correction using bi-linear function for dark image enhance-
ment. In: 2021 19th OITS International Conference on Information Technology (OCIT); 2021 Dec 16–18;
Bhubaneswar, India. p. 84–9. doi:10.1109/OCIT53463.2021.00027.
16. Acharya A, Giri AV. Contrast improvement using local gamma correction. In: 2020 6th International Conference
on Advanced Computing and Communication Systems (ICACCS); 2020 Mar 6–7; Coimbatore, India. p. 110–4.
doi:10.1109/icaccs48705.2020.9074386.
17. Du S, Zhao M, Liu Y, You Z, Shi Z, Li J, et al. Low-light image enhancement and denoising via dual-constrained
Retinex model. Appl Math Model. 2023;116(2):1–15. doi:10.1016/j.apm.2022.11.022.
18. Jia F, Wong HS, Wang T, Zeng T. A reflectance re-weighted Retinex model for non-uniform and low-light image
enhancement. Pattern Recognit. 2023;144(12):109823. doi:10.1016/j.patcog.2023.109823.
19. Ye Y, Zhang J, Li Z. Infrared and visible image fusion method based on dual domain enhancement in low
illumination environment. In: 2022 IEEE International Conference on Dependable, Autonomic and Secure
Computing, International Conference on Pervasive Intelligence and Computing, International Conference
on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress
(DASC/PiCom/CBDCom/CyberSciTech); 2022 Sep 12–15; Falerna, Italy. p. 1–7. doi:10.1109/dasc/picom/cbdcom/
cy55231.2022.9927867.
20. Sun L, Chen S, Yao X, Zhang Y, Tao Z, Liang P. Image enhancement method and application for intelligent
monitoring target recognition in coal mines. J China Coal Soc. 2023;112:495–504. doi:10.13225/j.cnki.jccs.2023.
0489.
21. Wang W, Chen Z, Yuan X. Simple low-light image enhancement based on Weber-Fechner law in logarithmic space.
Signal Process Image Commun. 2022;106(6):116742. doi:10.1016/j.image.2022.116742.
22. Xin Y, Jia Z, Yang J, Kasabov NK. Specular reflection image enhancement based on a dark channel prior. IEEE
Photonics J. 2021;13(1):6500211. doi:10.1109/JPHOT.2021.3053906.

23. Wang W, Yan D, Wu X, He W, Chen Z, Yuan X, et al. Low-light image enhancement based on virtual exposure.
Signal Process Image Commun. 2023;118(12):117016. doi:10.1016/j.image.2023.117016.
24. Chen K, Chen B, Wu S. Low-light image enhancement based on Transformer and CNN architecture. In: 2023 35th
Chinese Control and Decision Conference (CCDC); 2023 May 20–22; Yichang, China. p. 3628–33. doi:10.1109/
CCDC58219.2023.10326484.
25. Wang X, Chen K, Wang Z, Huang W. PMSNet: parallel multi-scale network for accurate low-light light-field image
enhancement. IEEE Trans Multimed. 2023;26:2041–55. doi:10.1109/TMM.2023.3291498.
26. Xu B, Zhou D, Li W. Image enhancement algorithm based on GAN neural network. IEEE Access.
2022;10:36766–77. doi:10.1109/ACCESS.2022.3163241.
27. Jin H, Wang Q, Su H, Xiao Z. Event-guided low light image enhancement via a dual branch GAN. J Vis Commun
Image Represent. 2023;95(2):103887. doi:10.1016/j.jvcir.2023.103887.
28. Luo G, He G, Jiang Z, Luo C. Attention-based mechanism and adversarial autoencoder for underwater image
enhancement. Appl Sci. 2023;13(17):9956. doi:10.3390/app13179956.
29. Dong W, Wang C, Sun H, Teng Y, Xu X. Multi-scale attention feature enhancement network for single image
dehazing. Sensors. 2023;23(19):8102. doi:10.3390/s23198102.
30. Wang H, Yang M, Yin G, Dong J. Self-adversarial generative adversarial network for underwater image enhance-
ment. IEEE J Ocean Eng. 2024;49(1):237–48. doi:10.1109/JOE.2023.3297731.
31. Han J, Zhou J, Wang L, Wang Y, Ding Z. FE-GAN: fast and efficient underwater image enhancement model based
on conditional GAN. Electronics. 2023;12(5):1227. doi:10.3390/electronics12051227.
32. Dong X, Wang G, Pang Y, Li W, Wen J, Meng W, et al. Fast efficient algorithm for enhancement of low lighting
video. In: 2011 IEEE International Conference on Multimedia and Expo; 2011 Jul 11–15; Barcelona, Spain. p. 1–6.
doi:10.1109/ICME.2011.6012107.
33. He K, Sun J, Tang X. Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell.
2011;33(12):2341–53. doi:10.1109/TPAMI.2010.168.
34. Wang C, Ding M, Zhang Y, Wang L. A single image enhancement technique using dark channel prior. Appl Sci.
2021;11(6):2712. doi:10.3390/app11062712.
35. Li C, Yuan C, Pan H, Yang Y, Wang Z, Zhou H, et al. Single-image dehazing based on improved bright channel
prior and dark channel prior. Electronics. 2023;12(2):299. doi:10.3390/electronics12020299.
36. Zhang L, Yan L, Li S, Li S. MMDCP: an image enhancement algorithm incorporating multi-channel phase
activation and multi-constrained dark channel prior. Int J Patt Recogn Artif Intell. 2024;38(4):2454005. doi:10.1142/
S0218001424540053.
37. Lee C, Lee C, Kim CS. Contrast enhancement based on layered difference representation. In: 2012 19th IEEE
International Conference on Image Processing; 2012 Sep 30–Oct 3; Orlando, FL, USA. p. 965–8. doi:10.1109/ICIP.
2012.6467022.
38. Guo X, Li Y, Ling H. LIME: low-light image enhancement via illumination map estimation. IEEE Trans Image
Process. 2017;26(2):982–93. doi:10.1109/TIP.2016.2639450.
39. Wang Q, Fu X, Zhang XP, Ding X. A fusion-based method for single backlit image enhancement. In: 2016 IEEE
International Conference on Image Processing (ICIP); 2016 Sep 25–28; Phoenix, AZ, USA. p. 4077–81. doi:10.1109/
ICIP.2016.7533126.
40. Al-Ameen Z. Nighttime image enhancement using a new illumination boost algorithm. IET Image Process.
2019;13(8):1314–20. doi:10.1049/iet-ipr.2018.6585.
41. Yu SY, Zhu H. Low-illumination image enhancement algorithm based on a physical lighting model. IEEE Trans
Circuits Syst Video Technol. 2019;29(1):28–37. doi:10.1109/TCSVT.2017.2763180.
42. Wei C, Wang W, Yang W, Liu J. Deep retinex decomposition for low-light enhancement. arXiv:1808.04560. 2018.
43. Ko K, Kim CS. IceNet for interactive contrast enhancement. IEEE Access. 2021;9:168342–54. doi:10.1109/ACCESS.
2021.3137993.
44. Wang W, Chen Z, Yuan X, Wu X. Adaptive image enhancement method for correcting low-illumination images.
Inf Sci. 2019;496(4):25–41. doi:10.1016/j.ins.2019.05.015.

45. Xu L, Hu C, Hu Y, Jing X, Cai Z, Lu X. UPT-Flow: multi-scale transformer-guided normalizing flow for low-light
image enhancement. Pattern Recognit. 2025;158(12):111076. doi:10.1016/j.patcog.2024.111076.
46. Zhang S, Zhao S, An D, Li D, Zhao R. LiteEnhanceNet: a lightweight network for real-time single underwater
image enhancement. Expert Syst Appl. 2024;240(2):122546. doi:10.1016/j.eswa.2023.122546.
