DIP Module 2
IMAGE ENHANCEMENT IN
SPATIAL DOMAIN
Mrs. Harshitha S
Assistant Professor
Dept. of E&IE., JSSATE
Bengaluru
Introduction
► Spatial Domain: the image plane itself
► Image processing methods are based on direct manipulation of pixels
► Two principal categories
► Intensity Transformations: operate on single pixels of an image
(contrast manipulation, thresholding)
► Spatial Filtering: operates on the neighborhood of every pixel in the image
(e.g., sharpening)
► Spatial Domain processes are denoted by-
► g(x,y) = T [f(x,y)]
► ‘T’ is an operator on ‘f’ defined over a neighborhood of point (x,y)
Introduction
► ‘T’ applied at each individual pixel: POINT PROCESSING
► s = T (r)
► ‘s’ and ‘r’ are variables denoting the intensities of ‘g’ and ‘f’ respectively at point (x,y)
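As a minimal sketch of point processing, a thresholding transformation (the helper name `threshold` is illustrative, not from the slides) applies s = T(r) at every pixel independently:

```python
# Point processing: s = T(r) is applied to each pixel on its own.
# Illustrative T: thresholding at level m, producing a binary image.
L = 256  # assuming an 8-bit image

def threshold(r, m):
    """Map intensity r to L-1 if r >= m, else to 0."""
    return L - 1 if r >= m else 0

# Apply T to every pixel of a tiny 2x2 image f to obtain g.
f = [[12, 200], [130, 90]]
g = [[threshold(r, 128) for r in row] for row in f]
```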
Transformation functions for contrast stretching and thresholding
Introduction
► ‘T’ applied on the pixels in a neighborhood of (x, y): SPATIAL DOMAIN FILTERING
► In point processing, by contrast, ‘g’ depends only on the value of ‘f’ at (x, y)
Basic Intensity Transformation Functions
► Linear (Negative and Identity Transformations)
► Logarithmic (Log and Inverse Log Transformations)
► Power Law (nth Power and nth Root Transformations)
► The negative of an image with intensity levels in the range [0, L-1] is-
► s=L–1–r
► If r = 0, s = L-1; If r = L-1, s = 0
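The negative transformation above can be sketched in a few lines, assuming an 8-bit image (L = 256):

```python
# Image negative: s = L - 1 - r, for intensities in [0, L-1].
L = 256  # assuming an 8-bit image

def negative(r):
    return L - 1 - r

# Dark pixels become bright and vice versa.
img = [[0, 64], [191, 255]]
neg = [[negative(r) for r in row] for row in img]
```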
► Log Transformation: s = c log(1 + r), where c is a positive constant
► Maps a narrow range of low intensity values to a wider range of output levels
► Expands the dark pixel values while compressing the higher-level values
Fourier spectrum; result of applying the log transformation
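A sketch of the log transformation, with the scaling c = (L-1)/log(L) chosen here (an assumption, not specified in the slides) so that [0, L-1] maps onto [0, L-1]:

```python
import math

# Log transformation: s = c * log(1 + r).
# c = (L-1)/log(L) maps input range [0, L-1] onto output range [0, L-1].
L = 256  # assuming an 8-bit image
c = (L - 1) / math.log(L)

def log_transform(r):
    return round(c * math.log(1 + r))
```

Note how even r = 1 is pushed well up the output scale, illustrating the expansion of dark values.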
Magnetic resonance (MR) image of a fractured human spine; results of applying the transformation with c = 1 and γ = 0.6, 0.4, and 0.3
Power Law Transformation
► The given image is predominantly dark
► An expansion of gray levels is desirable
► Can be accomplished using a power-law transformation with a fractional exponent
► As gamma decreased from 0.6 to 0.4, more detail became visible
► Further decrease of gamma to 0.3 enhanced a little more detail in the
background
► Began to reduce contrast to the point where the image started to have a very
slight “washed-out” look, especially in the background
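The power-law transformation discussed above can be sketched as follows, normalizing intensities to [0, 1] before raising to the power γ (assuming an 8-bit range):

```python
# Power-law (gamma) transformation: s = c * r**gamma,
# applied on intensities normalized to [0, 1], then rescaled to [0, L-1].
L = 256  # assuming an 8-bit image

def gamma_transform(r, gamma, c=1.0):
    return round(c * ((r / (L - 1)) ** gamma) * (L - 1))
```

A fractional γ brightens dark regions; γ > 1 compresses them, matching the two example images.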
Power Law Transformation
Aerial image; results of applying the transformation with c = 1 and γ = 3, 4, and 5
Power Law Transformation
► The image to be enhanced now has a washed-out appearance
► A compression of gray levels is desirable
► Suitable results were obtained with gamma values of 3.0 and 4.0
► Gamma = 4.0 has a slightly more appealing appearance because it has higher
contrast
► The result obtained with gamma=5.0 has areas that are too dark, in which
some detail is lost
Piecewise-Linear Transformations
Contrast Stretching
► Low-contrast images result from poor illumination, a wrong lens-aperture setting, etc.
► If r1 = s1 and r2 = s2
► Linear (identity) function
► Produces no change in the intensity levels
► If r1 = r2, s1 = 0 and s2 = L-1
► Thresholding function
► Creates a binary image
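A sketch of the piecewise-linear stretching function with control points (r1, s1) and (r2, s2); the degenerate choices above (identity and thresholding) fall out as special cases. The helper name `stretch` is illustrative:

```python
# Piecewise-linear contrast stretching with control points (r1, s1), (r2, s2).
# r1 = s1, r2 = s2 gives the identity; r1 = r2 = m with s1 = 0, s2 = L-1
# degenerates into thresholding at m.
L = 256  # assuming an 8-bit image

def stretch(r, r1, s1, r2, s2):
    if r < r1:                                   # lower segment
        return round(s1 * r / r1)
    if r <= r2:                                  # middle segment
        if r2 == r1:
            return s2                            # thresholding case
        return round(s1 + (s2 - s1) * (r - r1) / (r2 - r1))
    return round(s2 + (L - 1 - s2) * (r - r2) / (L - 1 - r2))  # upper segment
```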
A Low-contrast Image Result of contrast stretching Result of thresholding
Piecewise-Linear Transformations
Bit-Plane Slicing
► It is an 8-bit image with low contrast
A fractal is an image generated from mathematical expressions
The eight bit planes of the image
Histogram Processing
► Histograms are the basis for numerous spatial domain processing
techniques
► Histogram
► Provides useful image statistics
► Is also quite useful in other image processing applications, such as
image compression and segmentation
► Is simple to calculate in software
► Lends itself to economical hardware implementations
► Normalize a histogram
► By dividing each of its values by the total number of pixels in the image
► p(rk)= nk /MN, for k=0,1,……..,L-1 , M & N are the row and column
dimensions
► p(rk) gives an estimate of the probability of occurrence of gray level rk
► The sum of all components of a normalized histogram is equal to 1
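The normalization step above is a one-pass computation; a minimal sketch (helper name illustrative):

```python
# Normalized histogram: p(r_k) = n_k / (M*N); components sum to 1.
def normalized_histogram(img, L):
    MN = len(img) * len(img[0])      # total number of pixels, M*N
    p = [0.0] * L
    for row in img:
        for r in row:
            p[r] += 1 / MN           # each pixel contributes 1/MN
    return p
```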
Histogram Processing
► The horizontal axis of each histogram plot corresponds to gray level values, rk
► The vertical axis corresponds to values of h(rk) = nk or p(rk)= nk /MN (normalized)
► Histogram plots are plots of h(rk) = nk versus rk or p(rk)= nk /MN versus rk
Histogram Processing
► In the bright image the components of the histogram are biased toward the high
side of the gray scale
Histogram Processing
► An image with low contrast has a histogram that will be narrow and
will be centered towards the middle of the gray scale
► For a monochrome image this implies a dull, washed-out gray look
Histogram Processing
Histogram of the Original Image
► 3-bit image. Hence L=8. Intensity levels are integers in the range [0, 7]
► 64 X 64 pixels. Hence MN=4096
Histogram Equalization-Example
► sk = T(rk) = (L-1) Σj=0..k pr(rj) = ((L-1)/MN) Σj=0..k nj , for k = 0, 1, …, L-1
Histogram Equalization-Example
Transformation Function
Histogram Equalization-Example
► We round off the ‘s’ values to the nearest integer
► These are the values of the equalized histogram
► There are only five distinct intensity levels
rk       nk     sk    ps(sk)
r0 = 0   790    1     790/4096 ≈ 0.19
r1 = 1   1023   3     1023/4096 ≈ 0.25
r2 = 2   850    5     850/4096 ≈ 0.21
r3 = 3   656    6     (656+329)/4096 ≈ 0.24
r4 = 4   329    6
r5 = 5   245    7     (245+122+81)/4096 ≈ 0.11
r6 = 6   122    7
r7 = 7   81     7
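The equalization mapping in the table can be reproduced directly from the counts, as the short sketch below shows for the 3-bit, 64×64 example:

```python
# Histogram equalization: s_k = round((L-1) * sum_{j=0..k} n_j / MN).
# Data from the 3-bit, 64x64 example: L = 8, MN = 4096.
L, MN = 8, 4096
n = [790, 1023, 850, 656, 329, 245, 122, 81]   # n_k for r_k = 0..7

s, cum = [], 0.0
for nk in n:
    cum += nk / MN                             # cumulative probability
    s.append(round((L - 1) * cum))             # equalized intensity level
```

Only five distinct output levels remain: 1, 3, 5, 6, and 7.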
Histogram of the Equalized Image
Histogram Equalization
Example-Dark Image
Histogram Equalization
Example-Bright Image
Histogram Equalization
► Sometimes useful to be able to specify the shape of the histogram that we wish the processed
image to have
► The method used to generate a processed image that has a specified histogram is called
histogram matching or histogram specification
Specified Histogram
Histogram Matching-Example
► Compute all the values of the transformation function ‘G’
Histogram Matching-Example
rk       nk     sk   pr(rk)     zq       G(zq)   nz(zq)               pz(zq)
r0 = 0   790    1    0.19       z0 = 0   0       0                    0
r1 = 1   1023   3    0.25       z1 = 1   0       0                    0
r2 = 2   850    5    0.21       z2 = 2   0       0                    0
r3 = 3   656    6    0.16       z3 = 3   1       790/4096             0.19
r4 = 4   329    6    0.08       z4 = 4   2       1023/4096            0.25
r5 = 5   245    7    0.06       z5 = 5   5       850/4096             0.21
r6 = 6   122    7    0.03       z6 = 6   6       (656+329)/4096       0.24
r7 = 7   81     7    0.02       z7 = 7   7       (245+122+81)/4096    0.11
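The G values and the s→z mapping can be computed as below. The specified-histogram values pz are an assumption here (the slide shows them only as a figure); they are the ones that reproduce the G column of the table. Exact fractions avoid floating-point rounding at the 4.55 boundary:

```python
from fractions import Fraction

# Histogram matching: G(z_q) = (L-1) * cdf of the specified histogram,
# then map each equalized level s_k to the z whose G(z) is closest.
# The specified p_z values below are an assumption, not given in the slides.
L = 8
s = [1, 3, 5, 6, 6, 7, 7, 7]                               # equalized levels
pz = [Fraction(x, 100) for x in (0, 0, 0, 15, 20, 30, 20, 15)]

G, cum = [], Fraction(0)
for p in pz:
    cum += p
    G.append(round((L - 1) * cum))                         # exact, then rounded

# Closest G value wins; ties resolved toward the smaller z.
z_map = [min(range(L), key=lambda z: (abs(G[z] - sk), z)) for sk in s]
```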
► Filtering
► Creates a new pixel with coordinates equal to the coordinates of the center of the
neighborhood
► With value equal to the result of the filtering operation
► The center of the filter visits each pixel in the input image
► A processed (filtered) image is given by
► g(x, y) = Σs Σt w(s, t) f(x+s, y+t), the sum taken over the mask coordinates (s, t)
► ‘x’ and ‘y’ are varied so that each pixel in ‘w’ visits every pixel in ‘f’
Smoothing Spatial Filters
► Smoothing filters are used for
► Blurring
► Noise reduction
► By replacing the value of every pixel in an image by the average of the gray levels in the
neighborhood defined by the filter mask
► This process results in an image with reduced “sharp” transitions in gray levels
► Since random noise typically consists of sharp transitions in gray levels, the most obvious application of
smoothing is noise reduction
► However, edges (which almost always are desirable features of an image) also are characterized
by sharp transitions in gray levels, so averaging filters have the undesirable side effect that they
blur edges
► Instead of being 1/9, the coefficients of the filter are all 1’s
► It is computationally more efficient to have coefficients valued 1
► At the end of the filtering process the entire image is divided by 9
► In this mask the pixel at the center of the mask is multiplied by a higher
value than any other, thus giving this pixel more importance in the
calculation of the average
► The diagonal terms are further away from the center than the orthogonal
neighbors and, thus, are weighed less than these immediate neighbors
of the center pixel
► The basic strategy behind weighing the center point the highest and
then reducing the value of the coefficients as a function of increasing
distance from the origin is simply an attempt to reduce blurring in the
smoothing process
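The two averaging masks discussed above can be compared at a single pixel with the sketch below (helper name illustrative); the normalization by the sum of the coefficients corresponds to dividing by 9 and by 16 respectively:

```python
# Response of a 3x3 smoothing filter at one pixel: box mask (all 1s,
# divide by 9) vs. the weighted mask [[1,2,1],[2,4,2],[1,2,1]] / 16.
BOX      = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
WEIGHTED = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]

def filter3x3(nbhd, mask):
    """nbhd, mask: 3x3 lists; returns the normalized filter response."""
    total = sum(mask[i][j] * nbhd[i][j] for i in range(3) for j in range(3))
    return total / sum(mask[i][j] for i in range(3) for j in range(3))
```

On an isolated bright pixel the weighted mask preserves more of the center value, illustrating the reduced blurring.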
Order-Statistics Filters
► These are nonlinear spatial filters whose response is based on
► Ordering (ranking) the pixels contained in the image area encompassed by the filter
► Then replacing the value of the center pixel with the value determined by the ranking result
► Example is the median filter
► Replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the
original value of the pixel is included in the computation of the median)
► Median filters are quite popular because, they provide excellent noise-reduction capabilities, with
considerably less blurring than linear smoothing filters of similar size
► Median filters are particularly effective in the presence of impulse noise (salt-and-pepper noise)
► The median ξ of a set of values is such that half the values in the set are less than or equal to ξ and half are greater than or equal to ξ
► In order to perform median filtering at a point in an image,
► First sort the values of the pixel in question and its neighbors
► Determine their median
► Assign this value to that pixel
Order-Statistics Filters
► For example
► In a 3 X 3 neighborhood the median is the 5th largest value
► In a 5 X 5 neighborhood the 13th largest value
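Median filtering at one pixel reduces to a sort, as this sketch shows; an impulse ("salt") value is discarded entirely rather than averaged in:

```python
# Median filtering at one pixel: sort the 9 neighborhood values
# (center pixel included) and take the 5th in rank order.
def median3x3(neighborhood):
    """neighborhood: flat list of 9 gray levels."""
    return sorted(neighborhood)[4]
```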
► Sharpening is accomplished by spatial differentiation: image differentiation enhances edges and other discontinuities (such as noise)
Foundation
► We consider sharpening filters based on first and second order derivatives
► Since we are dealing with digital quantities whose values are finite, the maximum
possible gray-level change also is finite, and the shortest distance over which that
change can occur is between adjacent pixels
Foundation
► A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference ∂f/∂x = f(x+1) − f(x)
(a) A Simple Image (b) 1-D horizontal gray level profile along the center of the image and
including the isolated noise point
► Figure (a) shows a simple image that contains various solid objects, a line, and a single
noise point
► Figure (b) shows a horizontal gray-level profile (scan line) of the image along the
center and including the noise point
► This profile is the one-dimensional function
Foundation
(c) Simplified profile (the points are joined by dashed lines to simplify interpretation)
Figure (c) shows a simplification of the profile, with just enough numbers in order to
analyze how the first- and second-order derivatives behave
Foundation
► In the simplified diagram
► The transition in the ramp spans four pixels
► The noise point is a single pixel
► The line is three pixels thick
► The transition into the gray-level step takes place between adjacent pixels
► The number of gray levels was simplified to only eight levels
► We note that
► The first-order derivative is nonzero along the entire ramp
► The second-order derivative is nonzero only at the onset and end of the ramp
► Since edges in an image resemble this type of transition
► The first-order derivatives produce “thick” edges
► The second-order derivatives produce much finer ones
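The behavior noted above can be checked numerically on a simplified descending ramp (the profile values here are illustrative, not the slide's exact numbers):

```python
# First- and second-order differences of a 1-D gray-level profile:
#   f'(x)  = f(x+1) - f(x)
#   f''(x) = f(x+1) + f(x-1) - 2*f(x)
def first_derivative(f):
    return [f[x + 1] - f[x] for x in range(len(f) - 1)]

def second_derivative(f):
    return [f[x + 1] + f[x - 1] - 2 * f[x] for x in range(1, len(f) - 1)]

# A descending ramp: the first derivative is nonzero along the whole ramp,
# the second derivative only at its onset and end.
ramp = [6, 6, 5, 4, 3, 2, 2, 2]
```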
Foundation
► Next we encounter the isolated noise point
► The response at and around the point is much stronger for the second-order derivative than
for the first-order derivative
► A second-order derivative is much more aggressive than a first-order derivative in enhancing
sharp changes
► Thus, we can expect a second-order derivative to enhance fine detail (including noise) much
more than a first-order derivative
► In most applications, the second derivative is better suited than the first derivative for
image enhancement because of its ability to enhance fine detail
► The principal use of first derivatives in image processing is edge extraction
Use of Second Derivatives for
Enhancement–The Laplacian
► The approach consists of defining a discrete formulation of the
second-order derivative and then constructing a filter mask based on
that formulation
► Isotropic filters are rotation invariant, in the sense that rotating the image
and then applying the filter gives the same result as applying the filter to
the image first and then rotating the result
Development of the method
► ∇²f = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4 f(x, y)
► This discrete Laplacian corresponds to the filter mask
0  1  0
1 -4  1
0  1  0
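The mask above is equivalent to the following per-pixel computation (interior pixels only; border handling is omitted in this sketch):

```python
# Discrete Laplacian at (x, y):
#   f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y)
# i.e. the response of the mask [[0,1,0],[1,-4,1],[0,1,0]].
def laplacian(img, x, y):
    return (img[x + 1][y] + img[x - 1][y]
            + img[x][y + 1] + img[x][y - 1]
            - 4 * img[x][y])
```

The response is zero in flat regions and strongly negative at an isolated bright point, which is why the sign matters when the result is combined with the original image.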
► The difference in sign must be kept in mind when combining (by addition
or subtraction) a Laplacian-filtered image with another image
Image Sharpening using the Laplacian
Image of the North Pole of the moon; Laplacian-filtered image; image enhanced by using the equation
Use of First Derivatives for Enhancement—
The Gradient
► For a function f (x, y),the gradient of ‘f’ at coordinates (x, y)is defined as
the two-dimensional column vector
► Labeling a 3×3 neighborhood of (x, y) with z1 … z9:
z1 = f(x-1, y-1)   z2 = f(x-1, y)   z3 = f(x-1, y+1)
z4 = f(x, y-1)     z5 = f(x, y)     z6 = f(x, y+1)
z7 = f(x+1, y-1)   z8 = f(x+1, y)   z9 = f(x+1, y+1)
► This equation can be implemented with the two masks shown below
► Roberts cross-gradient operators:
Gx:  -1  0        Gy:   0  -1
      0  1              1   0
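A sketch of the Roberts cross-gradient computation on the z-labeled neighborhood, using the common |gx| + |gy| approximation to the gradient magnitude:

```python
# Roberts cross-gradients on the lower-right 2x2 of the neighborhood:
#   gx = z9 - z5,  gy = z8 - z6
# The gradient magnitude is approximated by |gx| + |gy|.
def roberts_magnitude(z5, z6, z8, z9):
    return abs(z9 - z5) + abs(z8 - z6)
```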
Development of the method
► This equation can be implemented with the two masks shown below
► The Sobel operators
Gx:  -1  -2  -1      Gy:  -1  0  1        z1  z2  z3
      0   0   0           -2  0  2        z4  z5  z6
      1   2   1           -1  0  1        z7  z8  z9
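The Sobel response at one pixel can be sketched as below, again using the |gx| + |gy| approximation to the gradient magnitude:

```python
# Sobel gradient on a 3x3 neighborhood z1..z9 (row-major order):
#   gx = (z7 + 2*z8 + z9) - (z1 + 2*z2 + z3)
#   gy = (z3 + 2*z6 + z9) - (z1 + 2*z4 + z7)
def sobel_magnitude(z):
    z1, z2, z3, z4, z5, z6, z7, z8, z9 = z   # z5 (center) does not enter the sums
    gx = (z7 + 2 * z8 + z9) - (z1 + 2 * z2 + z3)
    gy = (z3 + 2 * z6 + z9) - (z1 + 2 * z4 + z7)
    return abs(gx) + abs(gy)
```

Constant and slowly varying areas give a response near zero, which is why flat shading disappears in the gradient image.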
Use of Gradient for Edge Detection
► Figure (b) shows the gradient obtained using two Sobel masks
► The edge defects also are quite visible in this image, but with the added
advantage that constant or slowly varying shades of gray have been
eliminated, thus simplifying considerably the computational task required for
automated inspection
► The gradient process also highlighted small specks that are not readily visible in the gray-scale image
► Specks like these can be foreign matter, air pockets in a supporting solution, or minuscule imperfections in the lens
THANK YOU