CS589-04 Digital Image Processing
Lecture 6. Image Segmentation
Spring 2008
New Mexico Tech
Fundamentals
► Let R represent the entire spatial region occupied by an
image. Image segmentation is a process that partitions R
into n sub-regions, R1, R2, …, Rn, such that
(a) ∪_{i=1}^{n} R_i = R.
(b) R_i is a connected set, i = 1, 2, ..., n.
(c) R_i ∩ R_j = ∅ for all i and j, i ≠ j.
(d) Q(R_i) = TRUE for i = 1, 2, ..., n.
(e) Q(R_i ∪ R_j) = FALSE for any adjacent regions R_i and R_j.
Here Q(R_k) is a logical predicate defined over the points in set R_k.
12/26/20 2
Background
► First-order derivative
  ∂f/∂x = f′(x) = f(x+1) − f(x)
► Second-order derivative
  ∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)
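These discrete approximations are easy to check numerically; the 1-D profile below (a ramp, a step, and an isolated bright point) is invented for illustration:

```python
import numpy as np

# A 1-D intensity profile with a ramp, a step, and an isolated bright point
# (values chosen for illustration only).
f = np.array([5, 5, 4, 3, 2, 1, 1, 1, 6, 1, 1], dtype=float)

first = f[1:] - f[:-1]                 # f'(x) = f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]  # f''(x) = f(x+1) + f(x-1) - 2f(x)

print(first)   # the ramp gives a constant -1 response along its length
print(second)  # the isolated point gives the strong +5, -10, +5 response
```

Note how the second derivative responds twice as strongly at the isolated point, matching the "stronger response to fine detail" claim on the next slide.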
Characteristics of First and Second Order
Derivatives
► First-order derivatives generally produce thicker edges in an image
► Second-order derivatives have a stronger response to fine
detail, such as thin lines, isolated points, and noise
► Second-order derivatives produce a double-edge response
at ramp and step transitions in intensity
► The sign of the second derivative can be used to
determine whether a transition into an edge is from light to
dark or dark to light
Detection of Isolated Points
► The Laplacian
∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²
          = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)

A point is detected where the mask response exceeds a threshold:

g(x, y) = 1 if |R(x, y)| ≥ T
          0 otherwise

where R(x, y) = Σ_{k=1}^{9} w_k z_k is the response of the 3×3 mask at (x, y) and T is a nonnegative threshold.
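A minimal sketch of this detector, assuming the 4-neighbor Laplacian mask implied by the expansion above (T is user-chosen):

```python
import numpy as np

def detect_points(f, T):
    """Flag pixels where |Laplacian response| >= T (4-neighbor mask)."""
    mask = np.array([[0,  1, 0],
                     [1, -4, 1],
                     [0,  1, 0]], dtype=float)
    pad = np.pad(f.astype(float), 1, mode="edge")
    R = np.zeros(f.shape, dtype=float)
    for dy in range(3):          # correlate the 3x3 mask with the image
        for dx in range(3):
            R += mask[dy, dx] * pad[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    return (np.abs(R) >= T).astype(np.uint8)

f = np.zeros((7, 7)); f[3, 3] = 100    # one isolated bright point
g = detect_points(f, T=300)            # only the isolated point survives
```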
Line Detection
► Second derivatives result in a stronger response and
produce thinner lines than first derivatives
► The double-line effect of the second derivative must be
handled properly
Detecting Lines in Specified Directions
► Let R1, R2, R3, and R4 denote the responses of the masks in
Fig. 10.6. If, at a given point in the image, |Rk|>|Rj|, for all
j≠k, that point is said to be more likely associated with a
line in the direction of mask k.
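A sketch of this comparison; the four coefficient sets below are the commonly used line-detection masks and are assumed to match Fig. 10.6:

```python
import numpy as np

# horizontal, +45 deg, vertical, -45 deg line-detection masks (assumed standard)
MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    "+45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    "-45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
}

def line_direction(z):
    """Return the mask direction k with the largest |R_k| on a 3x3 patch z."""
    responses = {name: float((m * z).sum()) for name, m in MASKS.items()}
    return max(responses, key=lambda name: abs(responses[name]))

patch = np.array([[0, 9, 0],
                  [0, 9, 0],
                  [0, 9, 0]], dtype=float)   # a bright vertical line
```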
Edge Detection
► Edges are pixels where the brightness function changes
abruptly
► Edge models
Basic Edge Detection by Using First-Order
Derivative
∇f = grad(f) = [g_x, g_y]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

The magnitude of ∇f:
M(x, y) = mag(∇f) = √(g_x² + g_y²)

The direction of ∇f:
α(x, y) = tan⁻¹(g_y / g_x)

The direction of the edge is perpendicular to the direction of the gradient: α(x, y) − 90°.
Basic Edge Detection by Using First-Order
Derivative
Edge normal: ∇f = grad(f) = [g_x, g_y]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
Edge unit normal: ∇f / mag(∇f)
In practice, the magnitude is sometimes approximated by
mag(∇f) ≈ |∂f/∂x| + |∂f/∂y|  or  mag(∇f) ≈ max(|∂f/∂x|, |∂f/∂y|)
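These quantities can be sketched with plain one-pixel differences (in practice the Sobel-style mask pairs of Fig. 10.14 would be used instead):

```python
import numpy as np

def gradient(f):
    """Gradient components, magnitude, angle, and the |gx|+|gy| approximation."""
    f = f.astype(float)
    gx = np.zeros_like(f); gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]     # df/dx (one-pixel difference)
    gy[:-1, :] = f[1:, :] - f[:-1, :]     # df/dy
    M = np.sqrt(gx**2 + gy**2)            # mag(grad f)
    alpha = np.arctan2(gy, gx)            # gradient direction
    M_approx = np.abs(gx) + np.abs(gy)    # cheaper approximation
    return M, alpha, M_approx

f = np.zeros((4, 4)); f[:, 2:] = 10       # a vertical step edge
M, alpha, M_approx = gradient(f)          # gradient points horizontally (alpha = 0)
```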
Advanced Techniques for Edge Detection
► The Marr-Hildreth edge detector
G(x, y) = e^{−(x² + y²)/(2σ²)},  σ: space constant.

Laplacian of Gaussian (LoG):

∇²G(x, y) = ∂²G/∂x² + ∂²G/∂y²
          = ∂/∂x [ (−x/σ²) e^{−(x²+y²)/(2σ²)} ] + ∂/∂y [ (−y/σ²) e^{−(x²+y²)/(2σ²)} ]
          = [ x²/σ⁴ − 1/σ² ] e^{−(x²+y²)/(2σ²)} + [ y²/σ⁴ − 1/σ² ] e^{−(x²+y²)/(2σ²)}
          = [ (x² + y² − 2σ²) / σ⁴ ] e^{−(x²+y²)/(2σ²)}
Marr-Hildreth Algorithm
1. Filter the input image with an n×n Gaussian lowpass
filter, where n is the smallest odd integer greater than or
equal to 6σ
2. Compute the Laplacian of the image resulting from step 1
3. Find the zero crossings of the image from step 2

g(x, y) = ∇²[G(x, y) ⋆ f(x, y)]
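The three steps can be sketched with SciPy (`gaussian_laplace` fuses steps 1 and 2; the zero-crossing test here simply looks for sign flips between horizontal or vertical neighbors, a simplification):

```python
import numpy as np
from scipy import ndimage

def marr_hildreth(f, sigma=2.0):
    """Zero crossings of the LoG-filtered image (steps 1-3 above, sketched)."""
    log = ndimage.gaussian_laplace(f.astype(float), sigma)  # steps 1-2
    sign = np.sign(log)
    edges = np.zeros(f.shape, dtype=bool)                   # step 3
    edges[:, :-1] |= sign[:, :-1] * sign[:, 1:] < 0         # horizontal sign flip
    edges[:-1, :] |= sign[:-1, :] * sign[1:, :] < 0         # vertical sign flip
    return edges

f = np.zeros((20, 20)); f[:, 10:] = 100.0    # a vertical step edge
e = marr_hildreth(f, sigma=1.5)              # zero crossings hug the edge
```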
The Canny Edge Detector
► Optimal for step edges corrupted by white noise.
► The Objective
1. Low error rate
The edges detected must be as close as possible to the true edge
2. Edge points should be well localized
The edges located must be as close as possible to the true edges
3. Single edge point response
The number of local maxima around the true edge should be
minimum
The Canny Edge Detector: Algorithm (1)
Let f(x, y) denote the input image and
G(x, y) denote the Gaussian function:

G(x, y) = e^{−(x² + y²)/(2σ²)}

We form a smoothed image, f_s(x, y), by
convolving G and f:

f_s(x, y) = G(x, y) ⋆ f(x, y)
The Canny Edge Detector: Algorithm (2)
Compute the gradient magnitude and direction (angle):
M(x, y) = √(g_x² + g_y²)
and
α(x, y) = arctan(g_y / g_x)
where g_x = ∂f_s/∂x and g_y = ∂f_s/∂y.
Note: any of the filter mask pairs in Fig. 10.14 can be used
to obtain g_x and g_y.
The Canny Edge Detector: Algorithm (3)
The gradient image M(x, y) typically contains wide ridges around
local maxima. The next step is to thin those ridges.
Nonmaxima suppression:
Let d₁, d₂, d₃, and d₄ denote the four basic edge directions for
a 3×3 region: horizontal, −45°, vertical, and +45°, respectively.
1. Find the direction d_k that is closest to α(x, y).
2. If the value of M(x, y) is less than at least one of its two
neighbors along d_k, let g_N(x, y) = 0 (suppression);
otherwise, let g_N(x, y) = M(x, y).
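A direct, unoptimized sketch of these two steps (α is quantized to the nearest of the four directions):

```python
import numpy as np

def nonmax_suppress(M, alpha):
    """Suppress pixels that are not local maxima along the quantized
    gradient direction (alpha in radians); border pixels are left at 0."""
    gN = np.zeros_like(M)
    angle = (np.rad2deg(alpha) + 180.0) % 180.0   # fold into [0, 180)
    for y in range(1, M.shape[0] - 1):
        for x in range(1, M.shape[1] - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:            # horizontal comparison
                n1, n2 = M[y, x - 1], M[y, x + 1]
            elif a < 67.5:                        # +45 diagonal
                n1, n2 = M[y - 1, x + 1], M[y + 1, x - 1]
            elif a < 112.5:                       # vertical comparison
                n1, n2 = M[y - 1, x], M[y + 1, x]
            else:                                 # -45 diagonal
                n1, n2 = M[y - 1, x - 1], M[y + 1, x + 1]
            if M[y, x] >= n1 and M[y, x] >= n2:   # keep only local maxima
                gN[y, x] = M[y, x]
    return gN

M = np.zeros((5, 5)); M[:, 2] = 5.0; M[:, 1] = 2.0; M[:, 3] = 2.0
gN = nonmax_suppress(M, np.zeros((5, 5)))   # gradient everywhere horizontal
```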
The Canny Edge Detector: Algorithm (4)
The final operation is to threshold g_N(x, y) to reduce
false edge points.
Hysteresis thresholding:
g_NH(x, y) = g_N(x, y) ≥ T_H
g_NL(x, y) = g_N(x, y) ≥ T_L
and
g_NL(x, y) = g_NL(x, y) − g_NH(x, y)
(after this subtraction, g_NL contains only the "weak" edge pixels)
The Canny Edge Detector: Algorithm (5)
Depending on the value of T_H, the edges in g_NH(x, y)
typically have gaps. Longer edges are formed using
the following procedure:
(a). Locate the next unvisited edge pixel, p, in g_NH(x, y).
(b). Mark as valid edge pixels all the weak pixels in g_NL(x, y)
that are connected to p using 8-connectivity.
(c). If all nonzero pixels in g_NH(x, y) have been visited, go to
step (d); else return to (a).
(d). Set to zero all pixels in g_NL(x, y) that were not marked as
valid edge pixels.
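Steps (a)–(d) can be sketched with a connected-component shortcut: keeping every weak component that contains a strong pixel is equivalent to visiting each strong pixel p and marking its 8-connected weak neighbors:

```python
import numpy as np
from scipy import ndimage

def hysteresis_link(gN, TL, TH):
    """Keep strong pixels (>= TH) plus weak pixels (>= TL) that are
    8-connected to a strong pixel, per steps (a)-(d) above."""
    strong = gN >= TH
    weak = gN >= TL                      # note: includes the strong pixels
    labels, _ = ndimage.label(weak, structure=np.ones((3, 3)))  # 8-connectivity
    keep = np.unique(labels[strong])     # components containing a strong pixel
    keep = keep[keep > 0]
    return np.isin(labels, keep)

gN = np.zeros((5, 8))
gN[2, 1] = 0.20                          # strong pixel
gN[2, 2] = 0.06; gN[3, 3] = 0.06         # weak pixels 8-connected to it
gN[0, 7] = 0.06                          # isolated weak pixel (discarded)
edges = hysteresis_link(gN, TL=0.05, TH=0.10)
```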
The Canny Edge Detector: Summary
► Smooth the input image with a Gaussian filter
► Compute the gradient magnitude and angle images
► Apply nonmaxima suppression to the gradient magnitude
image
► Use double thresholding and connectivity analysis to detect
and link edges
T_L = 0.05 might differ; here: T_L = 0.04; T_H = 0.10; σ = 4 and a mask of size 25 × 25
T_L = 0.05; T_H = 0.15; σ = 2 and a mask of size 13 × 13
Edge Linking and Boundary Detection
► Edge detection typically is followed by linking algorithms
designed to assemble edge pixels into meaningful edges
and/or region boundaries
► Three approaches to edge linking
Local processing
Regional processing
Global processing
Local Processing
► Analyze the characteristics of pixels in a small
neighborhood about every point (x,y) that has been
declared an edge point
► All points that are similar according to predefined criteria are
linked, forming an edge of pixels.
Establishing similarity: (1) the strength (magnitude) and
(2) the direction of the gradient vector.
A pixel with coordinates (s,t) in Sxy is linked to the pixel at
(x,y) if both magnitude and direction criteria are satisfied.
Local Processing
Let S_xy denote the set of coordinates of a neighborhood
centered at point (x, y) in an image. An edge pixel with
coordinates (s, t) in S_xy is similar in magnitude to the pixel
at (x, y) if
|M(s, t) − M(x, y)| ≤ E
An edge pixel with coordinates (s, t) in S_xy is similar in angle
to the pixel at (x, y) if
|α(s, t) − α(x, y)| ≤ A
Local Processing: Steps (1)
1. Compute the gradient magnitude and angle arrays,
M(x, y) and α(x, y), of the input image f(x, y)
2. Form a binary image, g, whose value at any pair of
coordinates (x, y) is given by

g(x, y) = 1 if M(x, y) > T_M AND α(x, y) = A ± T_A
          0 otherwise

T_M: threshold;  A: specified angle direction;
T_A: a "band" of acceptable directions about A
Local Processing: Steps (2)
3. Scan the rows of g and fill (set to 1) all gaps (sets of 0s)
in each row that do not exceed a specified length, K.
4. To detect gaps in any other direction, θ, rotate g by this
angle and apply the horizontal scanning procedure in
step 3.
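Step 3 can be sketched as follows (K and the test row are arbitrary):

```python
import numpy as np

def fill_row_gaps(g, K):
    """Fill (set to 1) runs of 0s no longer than K lying between 1s in each row."""
    out = g.copy()
    for row in out:                       # each row is a view into out
        ones = np.flatnonzero(row)
        for a, b in zip(ones[:-1], ones[1:]):
            if 1 < b - a <= K + 1:        # gap of length b - a - 1, at most K
                row[a:b] = 1
    return out

g = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 1]])   # gaps of length 2 and 4
```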
Regional Processing
► The locations of regions of interest in an image are known
or can be determined
► Polygonal approximations can capture the essential shape
features of a region while keeping the representation of
the boundary relatively simple
► Open or closed curve
Open curve: a large distance between two consecutive
points in the ordered sequence relative to the distance
between other points
Regional Processing: Steps
1. Let P be the sequence of ordered, distinct, 1-valued
points of a binary image. Specify two starting points,
A and B.
2. Specify a threshold, T, and two empty stacks, OPEN
and CLOSED.
3. If the points in P correspond to a closed curve, put A
into OPEN and put B into OPEN and CLOSED. If the
points correspond to an open curve, put A into OPEN
and B into CLOSED.
4. Compute the parameters of the line passing from the
last vertex in CLOSED to the last vertex in OPEN.
Regional Processing: Steps
5. Compute the distances from the line in Step 4 to all
the points in P whose sequence places them
between the vertices from Step 4. Select the point,
Vmax, with the maximum distance, Dmax
6. If Dmax> T, place Vmax at the end of the OPEN stack as
a new vertex. Go to step 4.
7. Else, remove the last vertex from OPEN and insert it
as the last vertex of CLOSED.
8. If OPEN is not empty, go to step 4.
9. Else, exit. The vertices in CLOSED are the vertices of
the polygonal fit to the points in P.
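A recursive (Douglas–Peucker-style) sketch of steps 4–9 for an open curve; the OPEN/CLOSED stack bookkeeping is replaced by recursion, which visits the same split points:

```python
import numpy as np

def _dist_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    if (ax, ay) == (bx, by):
        return float(np.hypot(px - ax, py - ay))
    return abs((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / np.hypot(bx - ax, by - ay)

def poly_fit(points, T):
    """Split at the farthest point Vmax while Dmax > T (steps 4-6); endpoints kept."""
    if len(points) <= 2:
        return list(points)
    d = [_dist_to_line(p, points[0], points[-1]) for p in points[1:-1]]
    k = int(np.argmax(d)) + 1             # index of Vmax within points
    if d[k - 1] <= T:                     # Dmax <= T: keep just the endpoints
        return [points[0], points[-1]]
    return poly_fit(points[:k + 1], T)[:-1] + poly_fit(points[k:], T)

curve = [(0, 0), (1, 1), (2, 0)]          # a small test curve
```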
Global Processing Using the Hough Transform
► “The Hough transform is a general technique for
identifying the locations and orientations of certain types
of features in a digital image. Developed by Paul Hough in
1962 and patented by IBM, the transform consists of
parameterizing a description of a feature at any given
location in the original image’s space. A mesh in the space
defined by these parameters is then generated, and at each
mesh point a value is accumulated, indicating how well an
object generated by the parameters defined at that point
fits the given image. Mesh points that accumulate
relatively larger values then describe features that may be
projected back onto the image, fitting to some degree the
features actually present in the image.”
http://planetmath.org/encyclopedia/HoughTransform.html
Edge-linking Based on the Hough Transform
1. Obtain a binary edge image
2. Specify subdivisions in the ρθ-plane
3. Examine the counts of the accumulator cells for high
pixel concentrations
4. Examine the relationship between pixels in a chosen cell
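A sketch of the accumulation in step 2, using the normal line representation ρ = x cos θ + y sin θ with unit ρ spacing:

```python
import numpy as np

def hough_accumulate(edges, n_theta=180):
    """Vote in (rho, theta) space; each edge point adds one vote per theta cell."""
    ys, xs = np.nonzero(edges)
    diag = int(np.ceil(np.hypot(*edges.shape)))       # max possible |rho|
    n_rho = 2 * diag + 1
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for y, x in zip(ys, xs):
        r = x * np.cos(thetas) + y * np.sin(thetas)   # rho = x cos(t) + y sin(t)
        idx = np.round(r + diag).astype(int)          # nearest rho cell (unit spacing)
        acc[idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas

edges = np.zeros((20, 20), dtype=bool); edges[:, 5] = True   # vertical line x = 5
acc, rhos, thetas = hough_accumulate(edges)
```

The accumulator peak lands at ρ = 5, θ ≈ 0, i.e. the vertical line through x = 5.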
Thresholding
g(x, y) = 1 if f(x, y) > T  (object point)
          0 if f(x, y) ≤ T  (background point)
T: global threshold

Multiple thresholding:
g(x, y) = a if f(x, y) > T₂
          b if T₁ < f(x, y) ≤ T₂
          c if f(x, y) ≤ T₁
The Role of Noise in Image Thresholding
The Role of Illumination and Reflectance
Basic Global Thresholding
1. Select an initial estimate for the global threshold, T.
2. Segment the image using T. This produces two groups of pixels: G1,
consisting of all pixels with intensity values > T, and G2, consisting of
pixels with values ≤ T.
3. Compute the average intensity values m1 and m2 for the pixels in
G1 and G2, respectively.
4. Compute a new threshold value:
   T = ½ (m1 + m2)
5. Repeat steps 2 through 4 until the difference between values of T in
successive iterations is smaller than a predefined parameter ΔT.
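The five steps can be sketched as follows (ΔT and the sample data are arbitrary; the sketch assumes both groups stay non-empty):

```python
import numpy as np

def basic_global_threshold(f, dT=0.5):
    """Iterate T = (m1 + m2)/2 until T changes by less than dT.
    Assumes f contains both classes so each group stays non-empty."""
    T = float(f.mean())                    # step 1: initial estimate
    while True:
        m1 = f[f > T].mean()               # steps 2-3: group means
        m2 = f[f <= T].mean()
        T_new = 0.5 * (m1 + m2)            # step 4: new threshold
        if abs(T_new - T) < dT:            # step 5: convergence test
            return T_new
        T = T_new

f = np.array([10] * 50 + [200] * 50, dtype=float)   # a clearly bimodal sample
```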
Optimum Global Thresholding Using Otsu’s
Method
► Principle: maximizing the between-class variance
Let {0, 1, 2, ..., L−1} denote the L distinct intensity levels
in a digital image of size M × N pixels, and let n_i denote the
number of pixels with intensity i.

p_i = n_i / MN  and  Σ_{i=0}^{L−1} p_i = 1

If k is a threshold value, C₁ = [0, k] and C₂ = [k+1, L−1], then

P₁(k) = Σ_{i=0}^{k} p_i  and  P₂(k) = Σ_{i=k+1}^{L−1} p_i = 1 − P₁(k)
Optimum Global Thresholding Using Otsu’s
Method
The mean intensity value of the pixels assigned to class
C1 is
m₁(k) = Σ_{i=0}^{k} i P(i|C₁) = (1/P₁(k)) Σ_{i=0}^{k} i p_i
The mean intensity value of the pixels assigned to class
C2 is
m₂(k) = Σ_{i=k+1}^{L−1} i P(i|C₂) = (1/P₂(k)) Σ_{i=k+1}^{L−1} i p_i

P₁m₁ + P₂m₂ = m_G  (global mean value)
Optimum Global Thresholding Using Otsu’s
Method
Between-class variance, σ_B², is defined as

σ_B² = P₁(m₁ − m_G)² + P₂(m₂ − m_G)²
     = P₁P₂(m₁ − m₂)²
     = (m_G P₁ − m₁P₁)² / (P₁(1 − P₁))
     = (m_G P₁ − m)² / (P₁(1 − P₁))

where m = m(k) = Σ_{i=0}^{k} i p_i is the cumulative mean up to level k.
Optimum Global Thresholding Using Otsu’s
Method
The optimum threshold is the value, k*, that maximizes σ_B²(k):

σ_B²(k*) = max_{0 ≤ k ≤ L−1} σ_B²(k)

g(x, y) = 1 if f(x, y) > k*
          0 if f(x, y) ≤ k*

Separability measure: η = σ_B² / σ_G²
Otsu’s Algorithm: Summary
1. Compute the normalized histogram of the input
image. Denote the components of the histogram by
pi, i=0, 1, …, L-1.
2. Compute the cumulative sums, P1(k), for k = 0, 1,
…, L-1.
3. Compute the cumulative means, m(k), for k = 0, 1,
…, L-1.
4. Compute the global intensity mean, mG.
5. Compute the between-class variance, σ_B²(k), for k = 0, 1,
…, L-1.
Otsu’s Algorithm: Summary
6. Obtain Otsu’s threshold, k*, as the value of k that maximizes σ_B²(k).
7. Obtain the separability measure.
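The seven steps map directly onto a few NumPy lines (a sketch; assumes 8-bit, non-negative integer input):

```python
import numpy as np

def otsu_threshold(f, L=256):
    """Otsu's method following the seven summary steps above."""
    hist = np.bincount(f.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()                        # step 1: normalized histogram
    P1 = np.cumsum(p)                            # step 2: cumulative sums P1(k)
    m = np.cumsum(np.arange(L) * p)              # step 3: cumulative means m(k)
    mG = m[-1]                                   # step 4: global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_B2 = (mG * P1 - m) ** 2 / (P1 * (1 - P1))  # step 5
    sigma_B2 = np.nan_to_num(sigma_B2)           # 0/0 cells carry no information
    k_star = int(np.argmax(sigma_B2))            # step 6: Otsu's threshold
    sigma_G2 = (np.arange(L) - mG) ** 2 @ p      # global variance
    eta = sigma_B2[k_star] / sigma_G2            # step 7: separability measure
    return k_star, eta

f = np.array([10] * 50 + [200] * 50, dtype=np.uint8)  # perfectly separable sample
k_star, eta = otsu_threshold(f)                       # eta = 1 for this sample
```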
Using Image Smoothing to Improve Global Thresholding
Using Edges to Improve Global Thresholding
Using Edges to Improve Global Thresholding
1. Compute an edge image as either the magnitude of the
gradient or the absolute value of the Laplacian of f(x,y)
2. Specify a threshold value T
3. Threshold the image and produce a binary image, which
is used as a mask image; and select pixels from f(x,y)
corresponding to “strong” edge pixels
4. Compute a histogram using only the chosen pixels in
f(x,y)
5. Use the histogram from step 4 to segment f(x,y) globally
Multiple Thresholds
In the case of K classes, C1 , C2 , ..., CK , the between-class
variance is
σ_B² = Σ_{k=1}^{K} P_k (m_k − m_G)²

where P_k = Σ_{i∈C_k} p_i  and  m_k = (1/P_k) Σ_{i∈C_k} i p_i

The optimum threshold values, k₁*, k₂*, ..., k_{K−1}*, maximize σ_B²:

σ_B²(k₁*, k₂*, ..., k_{K−1}*) = max_{0 < k₁ < k₂ < ... < L−1} σ_B²(k₁, k₂, ..., k_{K−1})
Variable Thresholding: Image Partitioning
► Subdivide an image into nonoverlapping rectangles
► The rectangles are chosen small enough so that the
illumination of each is approximately uniform.
Variable Thresholding Based on Local Image
Properties
Let σ_xy and m_xy denote the standard deviation and mean value
of the set of pixels contained in a neighborhood S_xy, centered
at coordinates (x, y) in an image. The local thresholds:

T_xy = a σ_xy + b m_xy

If the background is nearly constant,

T_xy = a σ_xy + b m_G

g(x, y) = 1 if f(x, y) > T_xy
          0 if f(x, y) ≤ T_xy
Variable Thresholding Based on Local Image
Properties
A modified thresholding
g(x, y) = 1 if Q(local parameters) is true
          0 otherwise
e.g.,
Q(σ_xy, m_xy) = true  if f(x, y) > a σ_xy AND f(x, y) > b m_xy
                false otherwise
a = 30; b = 1.5; m_xy = m_G
Variable Thresholding Using Moving Averages
► Thresholding based on moving averages works well when
the objects are small with respect to the image size
► Quite useful in document processing
► The scanning (moving) typically is carried out line by line
in a zigzag pattern to reduce illumination bias
Variable Thresholding Using Moving Averages
Let z_{k+1} denote the intensity of the point encountered in
the scanning sequence at step k+1. The moving average
(mean intensity) at this new point is given by

m(k+1) = (1/n) Σ_{i=k+2−n}^{k+1} z_i = m(k) + (1/n)(z_{k+1} − z_{k+1−n})

where n denotes the number of points used in computing
the average and m(1) = z₁/n. The border of the image is
padded with n − 1 zeros.
Variable Thresholding Using Moving Averages
g(x, y) = 1 if f(x, y) > T_xy
          0 if f(x, y) ≤ T_xy
where T_xy = b m_xy
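A sketch of the zigzag scan with the recursive mean update (the border padding from the formula above is omitted for brevity, so the first few pixels see a smaller effective average):

```python
import numpy as np

def moving_average_threshold(f, n=20, b=0.5):
    """Threshold each pixel against b times the moving average of the
    last n intensities along a zigzag (boustrophedon) scan."""
    g = np.zeros(f.shape, dtype=np.uint8)
    window = np.zeros(n)                 # circular buffer of the last n samples
    m, idx = 0.0, 0
    for y in range(f.shape[0]):
        cols = range(f.shape[1]) if y % 2 == 0 else reversed(range(f.shape[1]))
        for x in cols:
            z = float(f[y, x])
            m += (z - window[idx]) / n   # m(k+1) = m(k) + (z_{k+1} - z_{k+1-n})/n
            window[idx] = z
            idx = (idx + 1) % n
            g[y, x] = 1 if z > b * m else 0
    return g

f = np.full((5, 20), 100.0)
f[2, 5:8] = 10.0                          # dark "text" strokes on a light page
g = moving_average_threshold(f, n=20, b=0.5)   # strokes come out as 0
```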
n = 20; b = 0.5
Region-Based Segmentation
► Region Growing
1. Region growing is a procedure that groups pixels or subregions into
larger regions.
2. The simplest of these approaches is pixel aggregation, which starts
with a set of “seed” points and from these grows regions by
appending to each seed point those neighboring pixels that have
similar properties (such as gray level, texture, color, shape).
3. Region growing based techniques are better than the edge-based
techniques in noisy images where edges are difficult to detect.
Region-Based Segmentation
Example: Region Growing based on 8-connectivity
f ( x, y ) : input image array
S ( x, y ): seed array containing 1s (seeds) and 0s
Q( x, y ): predicate
Region Growing based on 8-connectivity
1. Find all connected components in S(x, y) and erode each
connected component to one pixel; label all such pixels
found as 1. All other pixels in S are labeled 0.
2. Form an image f_Q such that, at each pair of coordinates (x, y),
f_Q(x, y) = 1 if the predicate Q is satisfied and f_Q(x, y) = 0 otherwise.
3. Let g be an image formed by appending to each seed point
in S all the 1-valued points in f_Q that are 8-connected to that
seed point.
4. Label each connected component in g with a different region
label. This is the segmented image obtained by region growing.
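A sketch of the four steps with SciPy's connected-component labeling; reducing each seed component to its first pixel stands in for the erosion of step 1, and Q is taken to be an absolute-difference test against the seed intensity:

```python
import numpy as np
from scipy import ndimage

EIGHT = np.ones((3, 3))   # 8-connectivity structuring element

def region_grow(f, S, T):
    """Region growing per the 4 steps above; Q: |f(x,y) - f(seed)| <= T."""
    lab, n = ndimage.label(S, structure=EIGHT)                 # step 1
    out = np.zeros(f.shape, dtype=int)
    for region_id in range(1, n + 1):
        sy, sx = np.argwhere(lab == region_id)[0]              # one pixel per seed
        fQ = np.abs(f.astype(float) - float(f[sy, sx])) <= T   # step 2
        comp, _ = ndimage.label(fQ, structure=EIGHT)           # step 3 (8-connected)
        out[comp == comp[sy, sx]] = region_id                  # step 4
    return out

f = np.zeros((5, 5)); f[0:2, 0:2] = 100; f[3:5, 3:5] = 100   # two bright blobs
S = np.zeros((5, 5), dtype=int); S[0, 0] = 1; S[4, 4] = 1    # one seed per blob
labels = region_grow(f, S, T=10)
```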
Q = TRUE  if the absolute difference of the intensities
          between the seed and the pixel at (x, y) is ≤ T
    FALSE otherwise
4-connectivity
8-connectivity
Region Splitting and Merging
R: entire image region;  R_i: a subregion of R;  Q: predicate
1. For any region R_i, if Q(R_i) = FALSE,
divide the region R_i into quadrants.
2. When no further splitting is possible,
merge any adjacent regions R_j and R_k
for which Q(R_j ∪ R_k) = TRUE.
3. Stop when no further merging is possible.
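Step 1 (splitting) can be sketched with a recursive quadtree; the merging pass of step 2 is omitted for brevity, and the uniformity predicate Q below is just an example choice:

```python
import numpy as np

def quadtree_split(f, Q, min_size=2):
    """Recursively split regions where Q is FALSE (step 1 only; merging of
    adjacent regions is omitted). Returns (y, x, height, width) leaf regions."""
    leaves = []
    def split(y, x, h, w):
        if Q(f[y:y + h, x:x + w]) or h <= min_size or w <= min_size:
            leaves.append((y, x, h, w))      # region accepted (or too small)
            return
        h2, w2 = h // 2, w // 2              # divide into quadrants
        split(y, x, h2, w2);          split(y, x + w2, h2, w - w2)
        split(y + h2, x, h - h2, w2); split(y + h2, x + w2, h - h2, w - w2)
    split(0, 0, *f.shape)
    return leaves

Q = lambda r: r.std() <= 5                   # "uniform enough" (example predicate)
f = np.zeros((8, 8)); f[:4, :4] = 100        # one bright quadrant
regions = quadtree_split(f, Q)               # splits once into 4 uniform quadrants
```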
Q = TRUE  if σ > a AND 0 < m < b
    FALSE otherwise
(σ, m: standard deviation and mean of the region)
Segmentation Using Morphological
Watersheds
► Three types of points in a topographic interpretation:
Points belonging to a regional minimum
Points at which a drop of water would fall to a single
minimum. (The catchment basin or watershed of that
minimum.)
Points at which a drop of water would be equally likely
to fall to more than one minimum. (The divide lines
or watershed lines.)
Segmentation Using Morphological
Watersheds: Backgrounds
http://www.icaen.uiowa.edu/~dip/LECTURE/Segmentation3.html#watershed
Watershed Segmentation: Example
► The objective is to find watershed lines.
► The idea is simple:
Suppose that a hole is punched in each regional minimum and that
the entire topography is flooded from below by letting water rise
through the holes at a uniform rate.
When rising water in distinct catchment basins is about to merge, a
dam is built to prevent merging. These dam boundaries correspond
to the watershed lines.
Watershed Segmentation Algorithm
► Start with all pixels with the lowest possible value.
These form the basis for initial watersheds
► For each intensity level k:
For each group of pixels of intensity k
1. If adjacent to exactly one existing region, add these pixels to that region
2. Else if adjacent to more than one existing regions, mark as boundary
3. Else start a new region
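A didactic (and deliberately inefficient) sketch of this flooding scheme; plateau handling and the processing order within a level are simplified:

```python
import numpy as np
from scipy import ndimage

def flood_watershed(f):
    """Level-by-level flooding per the three rules above.
    Returns a label image; -1 marks watershed (boundary) pixels."""
    eight = np.ones((3, 3))
    labels = np.zeros(f.shape, dtype=int)
    next_label = 1
    for k in np.unique(f):                                  # rising water level
        groups, n = ndimage.label(f == k, structure=eight)  # plateaus at level k
        for i in range(1, n + 1):
            pix = groups == i
            ring = ndimage.binary_dilation(pix, structure=eight) & ~pix
            adj = np.unique(labels[ring])                   # neighboring regions
            adj = adj[adj > 0]
            if len(adj) == 1:
                labels[pix] = adj[0]      # rule 1: extend the single adjacent region
            elif len(adj) > 1:
                labels[pix] = -1          # rule 2: dam between competing regions
            else:
                labels[pix] = next_label  # rule 3: a new regional minimum
                next_label += 1
    return labels

f = np.array([[0, 1, 2, 1, 0]])           # two minima separated by a crest
```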
Watershed Segmentation: Examples
The watershed algorithm is often applied to the gradient image instead of the original image.
Watershed Segmentation: Examples
Due to noise and other local irregularities of the gradient, over-segmentation
might occur.
Watershed Segmentation: Examples
A solution is to limit the number of regional minima. Use markers to specify
the only allowed regional minima.
Watershed Segmentation: Examples
A solution is to limit the number of regional minima. Use markers to specify
the only allowed regional minima. (For example, gray-level values might be
used as a marker.)
Use of Motion in Segmentation
K-means Clustering
► Partition the data points into K clusters randomly. Find
the centroids of each cluster.
► For each data point:
Calculate the distance from the data point to each cluster centroid.
Assign the data point to the closest cluster.
► Recompute the centroid of each cluster.
► Repeat steps 2 and 3 until there is no further change in
the assignment of data points (or in the centroids).
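The steps above can be sketched as plain K-means; a deterministic round-robin partition stands in for the random initial partition of step 1 so the sketch is reproducible:

```python
import numpy as np

def kmeans(X, K, n_iter=100):
    """Plain K-means following the steps above."""
    assign = np.arange(len(X)) % K               # round-robin initial partition
    for _ in range(n_iter):
        centroids = np.array([X[assign == k].mean(axis=0) for k in range(K)])
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_assign = dists.argmin(axis=1)        # closest centroid per point
        if np.array_equal(new_assign, assign):   # no change: converged
            break
        assign = new_assign
    return assign, centroids

# two well-separated blobs of 2-D points
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], dtype=float)
assign, centroids = kmeans(X, K=2)               # recovers the two blobs
```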
Clustering
► Example
D. Comaniciu and P.
Meer, Robust Analysis
of Feature Spaces:
Color Image
Segmentation, 1997.