DIP Module 2

The document discusses image enhancement techniques in the spatial domain, focusing on methods such as intensity transformations and spatial filtering. It covers various transformation functions including linear, logarithmic, power law, and histogram processing, explaining their applications and effects on image quality. Additionally, it introduces histogram equalization and matching as techniques to improve image contrast and achieve desired histogram shapes.

MODULE-2

IMAGE ENHANCEMENT IN
SPATIAL DOMAIN
Mrs. Harshitha S
Assistant Professor
Dept. of E&IE., JSSATE
Bengaluru
Introduction
► Spatial Domain-image plane itself
► Image processing methods are based on direct manipulation of pixels
► Two principal categories
► Intensity Transformations: Operate on single pixels of an image
(Contrast Manipulation, Thresholding)
► Spatial Filtering: Operate on the neighborhood of every pixel
(e.g., image sharpening)
► Spatial Domain processes are denoted by-
► g(x,y) = T [f(x,y)]
► ‘T’ is an operator on ‘f’ defined over a neighborhood of point (x,y)
Introduction
► ‘T’ is applied on each pixel-POINT PROCESSING
► s = T (r)
► ‘s’ and ‘r’ are variables denoting the intensities of ‘g’ and ‘f’ respectively at point (x,y)

Contrast Stretching                    Thresholding
Introduction
► When ‘T’ is applied to the pixels in a neighborhood, the process is called SPATIAL DOMAIN FILTERING
► In point processing, by contrast, ‘g’ depends only on the value of ‘f’ at (x, y)
Basic Intensity Transformation Functions
► Linear (Negative and Identity Transformations)
► Logarithmic (Log and Inverse Log Transformations)
► Power Law (nth Power and nth Root Transformations)

Basic Intensity Transformation Curves


Image Negatives

► The negative of an image with intensity levels in the range [0, L-1] is-
► s=L–1–r
► If r = 0, s = L-1; If r = L-1, s = 0

► Reversing the intensity levels produces the equivalent of a photographic negative

► Suited for enhancing white or gray detail embedded in dark regions
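The negative mapping above can be sketched in a few lines of Python (a minimal pure-Python helper; production code would operate on NumPy arrays, and the function name is illustrative):

```python
# Negative transformation s = L - 1 - r for an 8-bit image (L = 256).
L = 256

def negative(image):
    """Return the photographic negative of a grayscale image (list of rows)."""
    return [[L - 1 - r for r in row] for row in image]

img = [[0, 128, 255],
       [64, 200, 10]]
print(negative(img))  # [[255, 127, 0], [191, 55, 245]]
```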


Image Negatives

Original mammogram image                    Digital negative

► Original image is a digital mammogram showing a small lesion


► The visual content is the same in both images
► However it is much easier to analyze the breast tissue in the
negative image
Log Transformations

► The general form of log transformation is-


► s = c log (1+r)
► c is a constant and it is assumed that r ≥ 0

► Maps a narrow range of low intensity values to a wider range of output levels

► Expands the dark pixel values while compressing the higher-level values

► Opposite is true for inverse log transformation


Log Transformations

Fourier spectrum                    Result of applying the log transformation

► Fourier spectrum with values in the range 0 to 1.5 × 10^6


► When these values are scaled linearly for display in an 8-bit system, the brightest pixels will dominate
the display, at the expense of lower values of the spectrum
► Instead, we apply the log transformation to the spectrum values; the range of values of the result becomes 0 to 6.2
► The wealth of detail visible in this image as compared to a straight display of the spectrum is evident
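The log mapping can be sketched as follows (a hypothetical helper; choosing the constant c so that the maximum input maps to L−1 is an assumption, one common convention):

```python
import math

# Log transformation s = c * log(1 + r). With c = (L-1)/log(1 + r_max),
# the largest input maps to L - 1 (an assumed scaling convention).
def log_transform(values, L=256):
    c = (L - 1) / math.log(1 + max(values))
    return [c * math.log(1 + r) for r in values]

spectrum = [0, 10, 1000, 1.5e6]          # huge dynamic range, as in the
out = log_transform(spectrum)            # Fourier-spectrum example
print([round(v, 1) for v in out])        # dark values expanded, bright compressed
```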
Power Law (Gamma) Transformations

► The general form of this transformation is-


► s = c r^γ
► c and γ are positive constants
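A sketch of the power-law mapping on normalized intensities (the helper name and the rescaling to [0, 255] are assumptions for illustration):

```python
# Power-law (gamma) transformation s = c * r**gamma, applied to intensities
# normalized to [0, 1] and rescaled to 8 bits; c = 1 as in the examples.
def gamma_transform(values, gamma, c=1.0, L=256):
    return [round(c * (r / (L - 1)) ** gamma * (L - 1)) for r in values]

# gamma < 1 expands dark levels; gamma > 1 compresses bright levels.
print(gamma_transform([0, 64, 128, 255], 0.4))
print(gamma_transform([0, 64, 128, 255], 3.0))
```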
Power Law Transformation

Magnetic resonance (MR) image of a fractured human spine, followed by results of applying the transformation with c = 1 and γ = 0.6, 0.4, and 0.3
Power Law Transformation
► The given image is predominantly dark
► An expansion of gray levels is desirable
► Can be accomplished using power-law transformation with a fractional
exponent
► As gamma decreased from 0.6 to 0.4, more detail became visible
► Further decrease of gamma to 0.3 enhanced a little more detail in the
background
► Began to reduce contrast to the point where the image started to have a very
slight “washed-out” look, especially in the background
Power Law Transformation

Aerial image, followed by results of applying the transformation with c = 1 and γ = 3, 4, and 5
Power Law Transformation
► The image to be enhanced now has a washed-out appearance
► A compression of gray levels is desirable
► Suitable results were obtained with gamma values of 3.0 and 4.0
► Gamma = 4.0 has a slightly more appealing appearance because it has higher
contrast
► The result obtained with gamma=5.0 has areas that are too dark, in which
some detail is lost
Piecewise-Linear Transformations
Contrast Stretching
► Causes of low-contrast images: poor illumination, wrong setting of a lens aperture, etc.

► Expands the range of intensity levels in an


image so that it spans the full intensity range

► r1 = s1; r2 = s2
► Linear Function
► Produces no changes in the intensity levels

► r1 = r2; s1 = 0; s2 = L-1
► Thresholding Function
► Creates a binary image
A Low-contrast Image Result of contrast stretching Result of thresholding
Piecewise-Linear Transformations
Contrast Stretching
► It is an 8-bit image with low contrast

► Second figure shows the result of contrast stretching


► By setting (r1, s1) = (rmin, 0) and (r2, s2) =(rmax,L-1)
► rmin and rmax denote the minimum and maximum gray levels in the
image, respectively
► Thus, the transformation function stretched the levels linearly from
their original range to the full range [0, L-1]

► Third figure shows the result of using the thresholding function


► With r1 = r2 = m, the mean gray level in the image
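The min-max stretching described above can be sketched as (hypothetical helper on a flat list of pixel values):

```python
# Contrast stretching with (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1):
# linearly maps the occupied intensity range onto the full range [0, L-1].
def contrast_stretch(values, L=256):
    r_min, r_max = min(values), max(values)
    return [round((r - r_min) * (L - 1) / (r_max - r_min)) for r in values]

print(contrast_stretch([90, 100, 110, 120]))  # -> [0, 85, 170, 255]
```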
Piecewise-Linear Transformations
Gray Level/Intensity Level Slicing
► Highlighting/enhancing the specific range of intensities

► Applications include enhancing features such as masses


of water in satellite imagery and enhancing flaws in X-ray
images

► Two Variations of Implementation-

1. Display a high value for all gray levels in the range of


interest and a low value for all other gray levels- Binary
Image

► This transformation highlights range [A, B] of gray levels


and reduces all others to a constant level
Piecewise-Linear Transformations
Gray Level/Intensity Level Slicing
► Two Variations of Implementation-

2. Brightens the desired range of gray levels but preserves the


background and gray-level tonalities in the image

► This transformation highlights range [A, B] but preserves all


other levels
Aortic Angiogram Result of Slicing-1 Result of Slicing-2
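The two slicing variants can be sketched as follows (hypothetical helpers; the range [A, B] and the output levels are illustrative):

```python
# Variant 1: binary image -- high value inside the range [A, B], low elsewhere.
def slice_binary(values, A, B, high=255, low=0):
    return [high if A <= r <= B else low for r in values]

# Variant 2: brighten [A, B] but preserve all other gray levels.
def slice_preserve(values, A, B, high=255):
    return [high if A <= r <= B else r for r in values]

pixels = [10, 120, 150, 200, 240]
print(slice_binary(pixels, 100, 180))    # [0, 255, 255, 0, 0]
print(slice_preserve(pixels, 100, 180))  # [10, 255, 255, 200, 240]
```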
Piecewise-Linear Transformations
Bit Plane Slicing
► Instead of highlighting gray-level ranges, highlighting the contribution made to
total image appearance by specific bits might be desired
► Let each pixel in an image be represented by 8 bits
► The image is composed of eight 1-bit planes
► Bit-plane 0 for the least significant bit
► Bit plane 7 for the most significant bit
► The higher-order bits (especially the top four) contain the majority of the visually
significant data
► The other bit planes contribute to more subtle details in the image
► Separating a digital image into its bit planes
► Analyzing the relative importance played by each bit of the image,
► Aids in determining the adequacy of the number of bits used to quantize each pixel
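Extracting a single bit plane is a shift and a mask, as this sketch shows:

```python
# Bit plane k of an 8-bit pixel: shift right by k, keep the lowest bit.
def bit_plane(values, k):
    return [(r >> k) & 1 for r in values]

pixels = [0b10110010, 0b01111111]   # 178, 127
print(bit_plane(pixels, 7))         # most significant plane -> [1, 0]
print(bit_plane(pixels, 0))         # least significant plane -> [0, 1]
```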
Piecewise-Linear Transformations
Bit Plane Slicing
Bit Plane Slicing

An 8-bit fractal image

A fractal is an image
generated from
mathematical
expressions
The eight bit planes of the image
Histogram Processing
► Histograms are the basis for numerous spatial domain processing
techniques

► Histogram manipulation can be used effectively for image


enhancement

► Histogram
► Provides useful image statistics
► Is also quite useful in other image processing applications, such as
image compression and segmentation
► Is simple to calculate in software
► Lends itself to economical hardware implementations

► Popular tool for real-time image processing


Histogram Processing
► The histogram of a digital image with gray levels in the range [0,L-1] is
a discrete function
► h(rk) = nk
► rk is the kth gray level
► nk is the number of pixels in the image having gray level rk

► Normalize a histogram
► By dividing each of its values by the total number of pixels in the image
► p(rk)= nk /MN, for k=0,1,……..,L-1 , M & N are the row and column
dimensions
► p(rk) gives an estimate of the probability of occurrence of gray level rk
► The sum of all components of a normalized histogram is equal to 1
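Computing h(rk) = nk and the normalized histogram p(rk) = nk/MN can be sketched as:

```python
# Histogram h(r_k) = n_k of a small 3-bit image, then normalization by MN.
def histogram(image, L=8):
    h = [0] * L
    for row in image:
        for r in row:
            h[r] += 1
    return h

img = [[0, 1, 1],
       [2, 1, 0]]                 # MN = 6 pixels
h = histogram(img)
p = [n / 6 for n in h]            # normalized histogram
print(h)                          # [2, 3, 1, 0, 0, 0, 0, 0]
print(sum(p))                     # components sum to 1
```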
Histogram Processing
► The horizontal axis of each histogram plot corresponds to gray level values, rk
► The vertical axis corresponds to values of h(rk) = nk or p(rk) = nk/MN (normalized)
► Histogram plots are plots of h(rk) = nk versus rk or p(rk) = nk/MN versus rk
Histogram Processing

► In the bright image the components of the histogram are biased toward the high
side of the gray scale
Histogram Processing

► An image with low contrast has a histogram that will be narrow and
will be centered towards the middle of the gray scale
► For a monochrome image this implies a dull, washed-out gray look
Histogram Processing

► In the high-contrast image the components of the histogram cover a broad


range of the gray scale and also the distribution of pixels is uniform
► An image whose pixels tend to occupy the entire range of possible gray levels
and tend to be distributed uniformly, will have an appearance of high contrast
and will exhibit a large variety of gray tones
Histogram Equalization
► Consider continuous intensity values
► ‘r’ denotes the intensities of an image to be processed
► ‘r’ is normalized and is in the range [0, 1]
► r=0-Black, r=1-White
► We use the transformation to produce an output intensity level ‘s’ for every pixel
in the input image having intensity ‘r’
s = T (r) 0≤r≤1
► Assume that the transformation function T(r) satisfies the following conditions
► T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
► 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1
Histogram Equalization
► The first condition guarantees that the output intensity values will never be less than the
corresponding input values
► Prevents artifacts created by reversals of intensity
► The second condition guarantees that the range of output intensities is same as that of
the input
► A monotonic (but not strictly monotonic) transformation function satisfies
both conditions, but may map multiple values of ‘r’ to a single value of ‘s’
► One-to-one mapping
► Many-to-one mapping
► Perfectly fine when mapping ‘r’ to ‘s’
Histogram Equalization
► The inverse transformation from s back to r is denoted as
r = T-1 (s) 0≤s≤1
► Presents a problem when we want to recover the values of ‘r’ uniquely from the mapped
values
► This requires that T(r) should be strictly monotonically
increasing
► Guarantees that inverse mappings will be single valued
► One-to-one mapping in both the directions
Histogram Equalization
► For discrete values we deal with probabilities and summations instead of probability density
functions and integrals

► MN is the total number of pixels in the image


► nk is the number of pixels that have gray level rk
► L is the total number of possible gray levels in the image
► The discrete version of the transformation function is

sk = T(rk) = (L-1) · Σj=0..k pr(rj) = ((L-1)/MN) · Σj=0..k nj,   k = 0, 1, ..., L-1
► A plot of pr (rk) versus rk is called a histogram


► The transformation (mapping) given in Equation above is called histogram equalization or
histogram linearization
Histogram Equalization-Example
► Consider a 3-bit image of size 64 X 64 pixels which has the intensity distribution as shown
below. Obtain the histogram equalization of this image

Histogram of
the Original
Image

► 3-bit image. Hence L=8. Intensity levels are integers in the range [0, 7]
► 64 X 64 pixels. Hence MN=4096
Histogram Equalization-Example

Histogram Equalization-Example

Transformation Function
Histogram Equalization-Example
► We round off the ‘s’ values to the nearest integer
► These are the values of the equalized histogram
► There are only five distinct intensity levels

rk       nk      sk    ps(sk) as fraction        ps(sk)
r0 = 0   790     1     790/4096                  0.19
r1 = 1   1023    3     1023/4096                 0.25
r2 = 2   850     5     850/4096                  0.21
r3 = 3   656     6     (656+329)/4096            0.24
r4 = 4   329     6
r5 = 5   245     7     (245+122+81)/4096         0.11
r6 = 6   122     7
r7 = 7   81      7
Histogram of the Equalized Image
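The worked example above can be verified in code; this sketch reproduces the sk column from the histogram counts:

```python
# Discrete histogram equalization: s_k = round((L-1) * sum_{j<=k} n_j / MN).
def equalize(counts, L):
    MN = sum(counts)
    s, cum = [], 0
    for n in counts:
        cum += n
        s.append(round((L - 1) * cum / MN))
    return s

nk = [790, 1023, 850, 656, 329, 245, 122, 81]   # histogram of the example
print(equalize(nk, L=8))                        # -> [1, 3, 5, 6, 6, 7, 7, 7]
```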
Histogram Equalization

Example-Dark Image
Histogram Equalization

Example-Bright Image
Histogram Equalization

Example-Low Contrast Image


Histogram Equalization

Example-High Contrast Image


Histogram Equalization

Histogram transformation functions for the previous four images


Histogram Matching
► Histogram equalization automatically determines a transformation function that seeks to
produce an output image that has a uniform histogram

► Sometimes useful to be able to specify the shape of the histogram that we wish the processed
image to have

► The method used to generate a processed image that has a specified histogram is called
histogram matching or histogram specification

► Development of the method


► Consider continuous gray levels r and z and let pr(r) and pz(z) denote their corresponding continuous
probability density functions
► r and z denote the gray levels of the input and output (processed) images, respectively
► We can estimate pr(r) from the given input image, while pz(z) is the specified probability density
function that we wish the output image to have
Histogram Matching
► The discrete version of the equalization transformation function is

sk = T(rk) = (L-1) · Σj=0..k pr(rj) = ((L-1)/MN) · Σj=0..k nj        ... (1)

► MN is the total number of pixels in the image
► nk is the number of pixels that have gray level rk
► L is the total number of possible gray levels in the image
► The discrete formulation of the transformation function for the specified histogram pz(z) is

G(zq) = (L-1) · Σi=0..q pz(zi)        ... (2)
Histogram Matching-Procedure
► Step 1: Obtain pr(r) from the input image and use Eq. (1) to obtain the values of ‘sk’.
Round the resulting values to the nearest integer
► Step 2: Use Eqn (2) to obtain the transformation function G(zq). Round the values of ‘G’ to
the nearest integer and store the values in a table
► Step 3:
► For every value of ‘sk’, use the stored values of ‘G’ to find the corresponding value of ‘zq’ so that G(zq) is closest to ‘sk’
► This process is a mapping from ‘s’ to ‘z’
► When more than one value of ‘zq‘ satisfies the given ‘sk’ , choose the smallest value
► Step 4:
► Obtain the output image by first equalizing the input image using Eq. (1)
► For each pixel with value ‘sk’ in the equalized image perform inverse mapping zq = G–1(sk) to
obtain the corresponding pixel in the output image
► When all pixels have been processed, the histogram of the output image will approximate the specified histogram
Histogram Matching-Example
► Consider a 3-bit image of size 64 X 64 pixels which has the intensity distribution shown
below. Transform the image so that its histogram matches the specified histogram

Histogram of the Original Image


► 3-bit image. Hence L=8. Intensity levels are integers in the range [0, 7]

► 64 X 64 pixels. Hence MN=4096


Histogram Matching-Example

Specified Histogram
Histogram Matching-Example
► Compute all the values of the transformation function ‘G’
Histogram Matching-Example
► Compute all the values of the transformation function ‘G’

Transformation Function obtained from the


specified histogram
Histogram Matching-Example
rk       nk      sk    pr(rk)    zq       G(zq)   pz(zq) as fraction        pz(zq)
r0 = 0   790     1     0.19      z0 = 0   0       0                         0
r1 = 1   1023    3     0.25      z1 = 1   0       0                         0
r2 = 2   850     5     0.21      z2 = 2   0       0                         0
r3 = 3   656     6     0.16      z3 = 3   1       790/4096                  0.19
r4 = 4   329     6     0.08      z4 = 4   2       1023/4096                 0.25
r5 = 5   245     7     0.06      z5 = 5   5       850/4096                  0.21
r6 = 6   122     7     0.03      z6 = 6   6       (656+329)/4096            0.24
r7 = 7   81      7     0.02      z7 = 7   7       (245+122+81)/4096         0.11

Result of performing histogram specification


Comparison between histogram
equalization and histogram matching

Image of the Mars moon Histogram of Original Image


Photos taken by NASA’s Mars Global Surveyor
Comparison between histogram
equalization and histogram matching

Histogram-Equalized Image          Equalization Transformation Function          Histogram of Equalized Image
Comparison between histogram
equalization and histogram matching

Enhanced Image          Specified Histogram          Histogram Curves          Histogram of Enhanced Image
Fundamentals of Spatial Filtering
► Spatial Filtering
► One of the principal tools
► Used for a broad spectrum of applications
► Filter
► Accepts (Passes) or rejects certain frequency components
► Ex: Low pass filter passes low frequencies
► Net effect produced is to Blur (Smooth) an Image
► Spatial filters
► Spatial Masks
► Kernels
► Templates
► Windows
Mechanics of Spatial Filtering
► Spatial Filter consists of
► A neighborhood (typically a small rectangle)
► Predefined operation
► Performed on the image pixels encompassed by the neighborhood

► Filtering
► Creates a new pixel with coordinates equal to the coordinates of the center of the
neighborhood
► With value equal to the result of the filtering operation
► The center of the filter visits each pixel in the input image
► A processed/Filtered Image

► Linear Spatial Filter


► Non-linear spatial Filter
Mechanics of Linear
Spatial Filtering

► At any point (x, y) in the


image, the response g(x, y) of
the filter is
► Sum of the products of the
filter coefficients and the
image pixels
Mechanics of Linear Spatial Filtering
► The center coefficient of the filter w(0,0) aligns with the pixel at location
(x, y)
► In general, linear spatial filtering of an image of size M x N with a filter of
size m x n is given by

g(x, y) = Σs=-a..a Σt=-b..b w(s, t) f(x + s, y + t)

► where a = (m-1)/2 and b = (n-1)/2
► ‘x’ and ‘y’ are varied so that the center of ‘w’ visits every pixel in ‘f’
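A direct (unoptimized) sketch of this double sum, skipping border pixels for simplicity:

```python
# Linear spatial filtering: g(x, y) = sum_s sum_t w(s, t) f(x+s, y+t).
# Borders are left at 0 here; padding strategies are a separate topic.
def filter2d(f, w):
    a, b = len(w) // 2, len(w[0]) // 2
    M, N = len(f), len(f[0])
    g = [[0.0] * N for _ in range(M)]
    for x in range(a, M - a):
        for y in range(b, N - b):
            g[x][y] = sum(w[s + a][t + b] * f[x + s][y + t]
                          for s in range(-a, a + 1)
                          for t in range(-b, b + 1))
    return g

box = [[1 / 9] * 3 for _ in range(3)]   # 3 x 3 averaging (box) filter
img = [[9] * 3 for _ in range(3)]
out = filter2d(img, box)
print(out[1][1])                        # averaging a constant region ~ 9.0
```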
Smoothing Spatial Filters
► Smoothing filters are used for
► Blurring
► Noise reduction

► Blurring is used in preprocessing steps


► Removal of small details from an image prior to object extraction
► Bridging of small gaps in lines or curves

► Noise reduction can be accomplished by


► Blurring with a linear filter
► Nonlinear filtering
Smoothing Linear Filters
► The output (response) of a smoothing, linear spatial filter is simply the average of the pixels
contained in the neighborhood of the filter mask
► Averaging filters/low pass filters

► By replacing the value of every pixel in an image by the average of the gray levels in the
neighborhood defined by the filter mask

► This process results in an image with reduced “sharp” transitions in gray levels
► Since random noise typically consists of sharp transitions in gray levels, the most obvious application of
smoothing is noise reduction

► However, edges (which almost always are desirable features of an image) also are characterized
by sharp transitions in gray levels, so averaging filters have the undesirable side effect that they
blur edges

► A major use of averaging filters is in the reduction of “irrelevant” detail in an image


► “Irrelevant” means pixel regions that are small with respect to the size of the filter mask
Smoothing Linear Filters
► Use of this filter yields the standard average of the pixels under the
mask

► This is the average of the gray levels of the pixels in the 3 x 3


neighborhood defined by the mask

► Instead of being 1/9, the coefficients of the filter are all 1’s
► It is computationally more efficient to have coefficients valued 1
► At the end of the filtering process the entire image is divided by 9

► An m x n mask would have a normalizing constant equal to 1/mn

► A spatial averaging filter in which all coefficients are equal is


sometimes called a box filter
Smoothing Linear Filters
► This mask yields a weighted average
► The pixels are multiplied by different coefficients, thus giving more
importance (weight) to some pixels at the expense of others

► In this mask the pixel at the center of the mask is multiplied by a higher
value than any other, thus giving this pixel more importance in the
calculation of the average

► The other pixels are inversely weighted as a function of their distance


from the center of the mask

► The diagonal terms are further away from the center than the orthogonal
neighbors and, thus, are weighed less than these immediate neighbors
of the center pixel

► The basic strategy behind weighing the center point the highest and
then reducing the value of the coefficients as a function of increasing
distance from the origin is simply an attempt to reduce blurring in the
smoothing process
Order-Statistics Filters
► These are nonlinear spatial filters whose response is based on
► Ordering (ranking) the pixels contained in the image area encompassed by the filter
► Then replacing the value of the center pixel with the value determined by the ranking result
► Example is the median filter
► Replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the
original value of the pixel is included in the computation of the median)
► Median filters are quite popular because, they provide excellent noise-reduction capabilities, with
considerably less blurring than linear smoothing filters of similar size
► Median filters are particularly effective in the presence of impulse noise (salt-and-pepper noise)
► The median ξ of a set of values is such that half the values in the set are less than or equal to ξ
and half are greater than or equal to ξ
► In order to perform median filtering at a point in an image,
► First sort the values of the pixel in question and its neighbors
► Determine their median
► Assign this value to that pixel
Order-Statistics Filters
► For example
► In a 3 X 3 neighborhood the median is the 5th largest value
► In a 5 X 5 neighborhood the 13th largest value
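Median filtering at a single pixel, following the sort-and-pick procedure above (a minimal sketch):

```python
import statistics

# 3 x 3 median filter at pixel (x, y): take the median of the 9 values in
# the neighborhood (the 5th largest), which suppresses impulse noise.
def median3x3(f, x, y):
    window = [f[x + s][y + t] for s in (-1, 0, 1) for t in (-1, 0, 1)]
    return statistics.median(window)

img = [[10, 10, 10],
       [10, 255, 10],    # isolated "salt" impulse at the center
       [10, 10, 10]]
print(median3x3(img, 1, 1))   # -> 10 (the impulse is removed)
```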

X-ray image of a circuit board corrupted by salt-and-pepper noise          Noise reduction with a 3 X 3 averaging mask          Noise reduction with a 3 X 3 median filter
Order-Statistics Filters
► Figure (a) shows an X-ray image of a circuit board heavily corrupted by salt-and-pepper noise
► Figure (b) is the result of processing the noisy image with a 3 X 3 neighborhood
averaging mask
► Figure (c) is the result of using a 3 X 3 median filter
► The image processed with the averaging filter has less visible noise, but the price paid is
significant blurring
► The superiority in all respects of median over average filtering in this case is quite evident
► Median filtering is much better suited than averaging for the removal of additive
salt-and-pepper noise
► Other examples are Max and Min filters
Sharpening Spatial Filters
► The principal objective of sharpening is to
► Highlight fine detail in an image or
► To enhance detail that has been blurred, either in error or as a natural effect of a particular
method of image acquisition

► Image averaging is analogous to integration and sharpening is analogous to


differentiation

► The strength of the response of a derivative operator is proportional to the degree of


discontinuity of the image at the point at which the operator is applied

► Thus, image differentiation enhances edges and other discontinuities (such as noise)
Foundation
► We consider sharpening filters based on first and second order derivatives

► The derivatives of a digital function are defined in terms of differences

► There are various ways to define these differences

► We require that any definition we use for a first derivative


1) must be zero in flat segments (areas of constant gray-level values)
2) must be nonzero at the onset of a gray-level step or ramp
3) must be nonzero along ramps
Foundation
► Similarly, any definition of a second derivative
1) must be zero in flat areas
2) must be nonzero at the onset and end of a gray-level step or ramp
3) must be zero along ramps of constant slope

► Since we are dealing with digital quantities whose values are finite, the maximum
possible gray-level change also is finite, and the shortest distance over which that
change can occur is between adjacent pixels
Foundation
► A basic definition of the first-order derivative of a one-dimensional function f(x) is the
difference

∂f/∂x = f(x+1) - f(x)

► Similarly, we define a second-order derivative as the difference

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
Foundation

(a) A Simple Image (b) 1-D horizontal gray level profile along the center of the image and
including the isolated noise point

► Figure (a) shows a simple image that contains various solid objects, a line, and a single
noise point
► Figure (b) shows a horizontal gray-level profile (scan line) of the image along the
center and including the noise point
► This profile is the one-dimensional function
Foundation

(c) Simplified profile (the points are joined by dashed lines to simplify interpretation)

Figure (c) shows a simplification of the profile, with just enough numbers in order to
analyze how the first- and second-order derivatives behave
Foundation
► In the simplified diagram
► The transition in the ramp spans four pixels
► The noise point is a single pixel
► The line is three pixels thick
► The transition into the gray-level step takes place between adjacent pixels
► The number of gray levels was simplified to only eight levels

► We note that
► The first-order derivative is nonzero along the entire ramp
► The second-order derivative is nonzero only at the onset and end of the ramp
► Since edges in an image resemble this type of transition
► The first-order derivatives produce “thick” edges
► The second-order derivatives produce much finer ones
Foundation
► Next we encounter the isolated noise point
► The response at and around the point is much stronger for the second-order derivative than
for the first-order derivative
► A second-order derivative is much more aggressive than a first-order derivative in enhancing
sharp changes
► Thus, we can expect a second-order derivative to enhance fine detail (including noise) much
more than a first-order derivative

► The thin line is a fine detail


► We see essentially the same difference between the two derivatives

► Finally, at the gray-level step


► The response of the two derivatives is the same
► The second derivative has a transition from positive back to negative
Foundation
► We arrive at the following conclusions
1) First-order derivatives generally produce thicker edges in an image
2) Second-order derivatives have a stronger response to fine detail, such as thin lines and
isolated points
3) First-order derivatives generally have a stronger response to a gray-level step
4) Second-order derivatives produce a double response at step changes in gray level

► In most applications, the second derivative is better suited than the first derivative for
image enhancement because of its ability to enhance fine detail

► The principal use of first derivatives in image processing is edge extraction
Use of Second Derivatives for
Enhancement–The Laplacian
► The approach consists of defining a discrete formulation of the
second-order derivative and then constructing a filter mask based on
that formulation

► We are interested in isotropic filters, whose response is independent of


the direction of the discontinuities in the image to which the filter is
applied

► Isotropic filters are rotation invariant, in the sense that rotating the image
and then applying the filter gives the same result as applying the filter to
the image first and then rotating the result
Development of the method

► The simplest isotropic derivative operator is the Laplacian, which, for a
function (image) f(x, y) of two variables, is defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²

► For a 3 x 3 neighborhood centered at (x, y):

(x-1, y-1)   (x-1, y)   (x-1, y+1)
(x, y-1)     (x, y)     (x, y+1)
(x+1, y-1)   (x+1, y)   (x+1, y+1)

► The partial second-order derivative in the x-direction is

∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)

► The partial second-order derivative in the y-direction is

∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)

Development of the method
► Summing the two partial derivatives gives the discrete Laplacian

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

► This equation can be implemented using the mask shown below

0 1 0

1 -4 1

0 1 0

► The difference in sign must be kept in mind when combining (by addition
or subtraction) a Laplacian-filtered image with another image
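A sketch of the discrete Laplacian at one pixel using the mask above; the sign caveat shows up when combining it with the original image:

```python
# Discrete Laplacian with mask [[0,1,0],[1,-4,1],[0,1,0]]:
#   lap = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y)
# With a negative center coefficient, sharpening is g = f - lap.
def laplacian(f, x, y):
    return (f[x + 1][y] + f[x - 1][y]
            + f[x][y + 1] + f[x][y - 1]
            - 4 * f[x][y])

img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 10]]
lap = laplacian(img, 1, 1)
print(lap)               # -> -160: strong response at the bright spot
print(img[1][1] - lap)   # sharpened center value: 210
```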
Image Sharpening using the Laplacian

Image of the North Pole of the moon          Laplacian-filtered image          Image enhanced using the Laplacian
Use of First Derivatives for Enhancement—
The Gradient
► For a function f(x, y), the gradient of ‘f’ at coordinates (x, y) is defined as
the two-dimensional column vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

► The magnitude of this vector is given by

∇f = mag(∇f) = [Gx² + Gy²]^(1/2)

► It is common practice to approximate the magnitude of the gradient by
using absolute values instead of squares and square roots

∇f ≈ |Gx| + |Gy|
Development of the method
► Consider the following image points in a 3 X 3 region

z1           z2          z3
(x-1, y-1)   (x-1, y)    (x-1, y+1)
z4           z5          z6
(x, y-1)     (x, y)      (x, y+1)
z7           z8          z9
(x+1, y-1)   (x+1, y)    (x+1, y+1)

► The center point, z5, denotes f(x, y), z1 denotes f(x-1, y-1), and so on

► The simplest approximations to a first-order derivative are

Gx = z8 - z5 and Gy = z6 - z5

Development of the method

► Other definitions proposed by Roberts [1965] in the early development of
digital image processing use cross differences

Gx = z9 - z5 and Gy = z8 - z6

► We compute the gradient as

∇f ≈ |z9 - z5| + |z8 - z6|

z1  z2  z3
z4  z5  z6
z7  z8  z9

► This equation can be implemented with the two masks shown below
► Roberts Cross-Gradient Operators

-1   0        0  -1
 0   1        1   0
Development of the method

► The smallest filter mask in which we are interested is of size 3 X 3

► An approximation using absolute values, at point z5, but using a 3 X 3
mask, is

∇f ≈ |(z7 + 2z8 + z9) - (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) - (z1 + 2z4 + z7)|

► This equation can be implemented with the two masks shown below
► The Sobel operators

-1  -2  -1        -1   0   1        z1  z2  z3
 0   0   0        -2   0   2        z4  z5  z6
 1   2   1        -1   0   1        z7  z8  z9
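A sketch of the Sobel magnitude at the center point z5, using the masks above with the absolute-value approximation:

```python
# Sobel gradient at z5 of a 3x3 neighborhood z1..z9 (row-major order):
#   gx = (z7 + 2*z8 + z9) - (z1 + 2*z2 + z3)
#   gy = (z3 + 2*z6 + z9) - (z1 + 2*z4 + z7)
# magnitude approximated as |gx| + |gy|.
def sobel_magnitude(z):
    z1, z2, z3, z4, z5, z6, z7, z8, z9 = z
    gx = (z7 + 2 * z8 + z9) - (z1 + 2 * z2 + z3)
    gy = (z3 + 2 * z6 + z9) - (z1 + 2 * z4 + z7)
    return abs(gx) + abs(gy)

edge = [0, 0, 100,
        0, 0, 100,
        0, 0, 100]               # vertical edge: bright right column
print(sobel_magnitude(edge))     # -> 400
print(sobel_magnitude([50] * 9)) # flat region -> 0
```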
Use of Gradient for Edge Detection

Optical Image of Contact Lens Sobel Gradient


Use of Gradient for Edge Detection
► Fig (a) shows optical image of a contact lens, illuminated by a lighting
arrangement designed to highlight imperfections
► The two edge defects in the lens boundary

► Figure (b) shows the gradient obtained using two Sobel masks

► The edge defects also are quite visible in this image, but with the added
advantage that constant or slowly varying shades of gray have been
eliminated, thus simplifying considerably the computational task required for
automated inspection

► The gradient process also highlighted small specks that are not readily visible in
the gray-scale image
► Specks like these can be foreign matter, air pockets in a supporting solution, or
minuscule imperfections in the lens
THANK YOU
