IMAGE ANALYSIS
Image analysis methods extract information from an image
by using automatic or semiautomatic techniques, variously
termed scene analysis, image description, image
understanding, pattern recognition, computer/machine
vision, etc.
Image analysis differs from other types of image
processing methods, such as enhancement or
restoration, in that the final result of image analysis
procedures is a numerical output rather than a picture.
© P. Strumillo
IMAGE ANALYSIS
[Figure: classification ("search the database for this
fingerprint") vs. image understanding ("Door open?")]
Image analysis steps
• Preprocessing
• Segmentation
• Feature extraction
• Classification and interpretation (with a database query)
Example applications: fingerprint identification ("The best
enemy spy!"), industrial robotics, cartography, geology.
Feature description  | Segmentation         | Classification
---------------------|----------------------|---------------------
Spatial features     | Thresholding         | Clustering
Transform features   | Boundary-based segm. | Statistical classif.
Edges and boundaries | Region-based segm.   | Decision trees
Shape features       | Template matching    | Neural networks
Moments              | Texture segmentation | Similarity measures
Texture              |                      |
Segmentation
Image segmentation is a key step in image analysis.
Segmentation subdivides an image into its components. It
distinguishes objects of interest from background, e.g.
Optical Character Recognition (OCR) systems first
segment character shapes from an image before they start
to recognise them.
The segmentation operation only subdivides an image;
it does not attempt to recognise the segmented image
parts.
[Example images: aerial photos, microscope image of cells]
Thresholding
Amplitude thresholding (i.e. thresholding in the brightness
domain) is the basic approach to image segmentation.
A threshold T is selected that separates the two histogram
modes: any image point for which f(x,y) > T is considered
an object point; otherwise, the point is called a
background point.
The thresholded image (binary image) is defined by:
g(x,y) = 1 for f(x,y) ≥ T
         0 for f(x,y) < T
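The binary thresholding rule above can be sketched in a few lines of Python/NumPy (the slides' own demos use MATLAB); the function name and the sample array are illustrative, not from the slides:

```python
import numpy as np

def threshold_image(f, T):
    """Binary thresholding: g(x,y) = 1 where f(x,y) >= T, else 0."""
    return (f >= T).astype(np.uint8)

# Bright object (~200) on a dark background (~30); T chosen between the modes.
f = np.array([[30, 32, 200],
              [31, 198, 202],
              [29, 33, 35]])
g = threshold_image(f, T=128)
```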
Thresholding
Suppose an image f(x,y) contains bright objects
surrounded by dark background and its gray-level
histogram is shown in the figure.
[Figure: bimodal gray-level histogram of f(x,y) with the threshold T
between the two modes; for display, the thresholded image is shown as
g(x,y) = 255 for f(x,y) ≥ T and g(x,y) = 0 for f(x,y) < T]
DEMO MATLAB
%MATLAB
x=imread('cameraman.tif'); figure(1), imshow(x);   % load and display the image
bw=im2bw(x,0.5); figure(2), imshow(bw)             % global threshold at half the gray range
Thresholding
When T is set on the basis of the entire image f(x,y), the
threshold is called global.
If T depends on the spatial coordinates (x,y), the threshold
is termed dynamic.
If T depends on both f(x,y) and some local image property
p(x,y), e.g., the average gray level in a neighbourhood
centred on (x,y), the threshold is called local and T is set
according to a test function:
T = T[p( x , y),f ( x , y)]
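The local threshold T = T[p(x,y), f(x,y)] can be sketched in Python/NumPy with p(x,y) taken as the neighbourhood mean; the window size `win` and offset `c` are illustrative parameters, not from the slides:

```python
import numpy as np

def local_threshold(f, win=3, c=0):
    """Local thresholding: T(x,y) = mean gray level in a win x win
    neighbourhood centred on (x,y), plus an offset c."""
    pad = win // 2
    fp = np.pad(f.astype(float), pad, mode='edge')   # replicate borders
    g = np.zeros(f.shape, dtype=np.uint8)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            p = fp[x:x + win, y:y + win].mean()      # local property p(x,y)
            g[x, y] = 1 if f[x, y] >= p + c else 0
    return g

# A single bright pixel exceeds its local mean; its dark neighbours do not.
f = np.array([[10, 10, 10],
              [10, 100, 10],
              [10, 10, 10]])
g = local_threshold(f, win=3)
```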
Thresholding
In the case of multilevel thresholding the threshold becomes
a vector T = [T1, T2, …, TN] and the image is partitioned into
N+1 sub-regions; e.g. for two-level thresholding, a point (x,y)
belongs to the middle sub-region if:
T1 ≤ f(x,y) ≤ T2
Image thresholding can be extended to more than one
dimension.
This may apply to multi-band
thresholding of colour images
in any colour coordinates,
e.g. RGB or HSI.
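Multilevel thresholding with a vector T = [T1, …, TN] amounts to binning the gray levels; a minimal Python/NumPy sketch (the function name and thresholds are illustrative):

```python
import numpy as np

def multilevel_threshold(f, thresholds):
    """Multilevel thresholding with T = [T1, ..., TN]: partitions the
    gray-level range into N+1 sub-regions, labelled 0..N."""
    return np.digitize(f, bins=sorted(thresholds))

# Three thresholds split the 0..255 range into four labelled sub-regions.
f = np.array([10, 90, 170, 250])
labels = multilevel_threshold(f, [64, 128, 192])
```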
Multilevel thresholding - example
%MATLAB
x=imread('blood1.tif');
figure(1), imshow(x);                     % original gray-level image
figure(2), imshow(x), colormap(jet(16))   % 16-level pseudo-colour display
Thresholding - revisited
[Figure: mixture of two Gaussian class densities with means m1 and m2,
weighted by the priors P1 and P2; the sought threshold T lies between
the two modes]
Optimum Threshold
Suppose an image contains two intensity values m1 and
m2, combined with additive Gaussian noise N(0,σ), that
appear in the image with a priori probabilities P1 and P2
respectively, P1 + P2 = 1.
The task is to define a threshold level T that
minimises the overall segmentation error.
Optimum Threshold
For equal class variances σ1 = σ2 = σ, the solution to this
optimisation problem is the following:
T = (m1 + m2)/2 + (σ²/(m1 − m2)) · ln(P2/P1)
If P1 = P2, the optimal threshold is simply the average of
the two mean intensities m1 and m2.
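The optimal-threshold formula can be checked numerically; a minimal Python sketch, with all numbers purely illustrative:

```python
import math

def optimal_threshold(m1, m2, sigma, P1, P2):
    """Minimum-error threshold for two Gaussian classes N(m1, sigma) and
    N(m2, sigma) with priors P1 + P2 = 1 (equal class variances assumed)."""
    return (m1 + m2) / 2 + sigma ** 2 / (m1 - m2) * math.log(P2 / P1)

# Equal priors: the logarithmic term vanishes and T is midway between the means.
T_equal = optimal_threshold(m1=50, m2=150, sigma=20, P1=0.5, P2=0.5)

# Unequal priors shift T toward the mean of the less probable class,
# enlarging the region assigned to the more probable one.
T_skewed = optimal_threshold(m1=50, m2=150, sigma=20, P1=0.7, P2=0.3)
```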
Textures from the Brodatz album
[Figure: sample textures and a gray-level histogram over the 0..255 range]
Region-oriented segmentation
The main idea in region-based segmentation
techniques is to identify different regions in an
image that have similar features (gray level, colour,
texture, etc.).
There are two main region-based image
segmentation techniques:
• region growing (merging),
• region splitting.
General formulation
Let R represent the entire image. Segmentation may be
viewed as a process that partitions R into N disjoint regions,
R1, R2, …, RN, such that:
a) ∪i=1..N Ri = R,
b) Ri is a connected region, i = 1, 2, …, N,
c) Ri ∩ Rj = ∅ for all i and j, i ≠ j,
d) P(Ri) = TRUE for i = 1, 2, …, N,
e) P(Ri ∪ Rj) = FALSE for i ≠ j,
where P(Ri) is a logical predicate over the points in set Ri and
∅ is the empty set.
General formulation
Condition a) indicates that the segmentation must be
complete (all points assigned a region).
Condition c) indicates that the regions must be disjoint.
Condition d) states that all pixels in a segmented region
must satisfy the assumed predicate.
Condition e) indicates that distinct regions must be different
according to predicate P. An example predicate:
| z(x,y) − mi | ≤ 2σi
where z(x,y) is the gray level at coordinates (x,y), and mi, σi
are the mean and standard deviation of the analysed region.
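The two-standard-deviation predicate above is easy to sketch in Python/NumPy; the function name and sample values are illustrative:

```python
import numpy as np

def region_predicate(region, z):
    """Example predicate P: gray level z is consistent with region Ri if it
    lies within two standard deviations of the region's mean."""
    m, s = np.mean(region), np.std(region)
    return abs(z - m) <= 2 * s

# Region with mean 100 and small spread: 102 fits, 110 does not.
ok = region_predicate([100, 102, 98, 101, 99], 102)
bad = region_predicate([100, 102, 98, 101, 99], 110)
```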
Region growing
In region growing the image is first divided into atomic
regions (e.g., single pixels, small templates). These "seed"
regions grow by appending to each region neighbouring
points that have similar properties. The key problem lies in
selecting a proper criterion (predicate) for merging. A
frequently used merging rule is:
Merge two regions if they are "similar"
(in terms of predefined features)
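Region growing from a seed can be sketched as a breadth-first flood fill; the 4-connectivity and the tolerance `tol` are illustrative choices, not prescribed by the slides:

```python
import numpy as np
from collections import deque

def region_grow(f, seed, tol=10):
    """Grow a region from a seed pixel by appending 4-connected neighbours
    whose gray level differs from the seed's by at most tol."""
    rows, cols = f.shape
    mask = np.zeros((rows, cols), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if (0 <= nx < rows and 0 <= ny < cols and not mask[nx, ny]
                    and abs(int(f[nx, ny]) - int(f[seed])) <= tol):
                mask[nx, ny] = True
                queue.append((nx, ny))
    return mask

# Dark left half grows from the seed; the bright column is excluded.
f = np.array([[12, 14, 90],
              [11, 13, 92],
              [10, 15, 91]])
mask = region_grow(f, seed=(0, 0), tol=10)
```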
Similarity measures
Popular similarity measures are:
• dot product (projection of one vector onto the
direction of the other):
xiᵀ xj = ‖xi‖ ‖xj‖ cos∠(xi, xj)
• and Euclidean distance:
d(xi, xj) = √( Σk (xi(k) − xj(k))² )
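Both similarity measures are one-liners in Python; the function names and sample vectors are illustrative:

```python
import math

def dot_product(xi, xj):
    """Projection-based similarity: xi^T xj."""
    return sum(a * b for a, b in zip(xi, xj))

def euclidean(xi, xj):
    """Euclidean distance between feature vectors xi and xj."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))

p = dot_product([1, 2], [3, 4])   # 1*3 + 2*4
d = euclidean([3, 0], [0, 4])     # length of the (3, 4) difference vector
```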
Image segmentation example
© Univ. of Washington
Region splitting
Initially, the image is subdivided into a set of arbitrary
disjoint regions; these regions are then split in an
attempt to satisfy the region uniformity criteria a)-e).
An example split algorithm works as follows:
Subdivide the entire image successively into smaller
and smaller quadrant regions until P(Ri) = TRUE for
every region; that is, if P is FALSE for a quadrant,
subdivide that quadrant into sub-quadrants, and so on.
This splitting technique is called the quad-tree
decomposition.
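The quad-tree split can be sketched recursively in Python/NumPy; the uniformity predicate (gray-level range below a tolerance) and the power-of-two image size are illustrative assumptions:

```python
import numpy as np

def quadtree_split(f, predicate, x=0, y=0, size=None):
    """Recursively split a square image into quadrants until the uniformity
    predicate P(Ri) holds for every leaf; returns leaves as (x, y, size)
    tuples. Assumes the image side is a power of two."""
    if size is None:
        size = f.shape[0]
    block = f[x:x + size, y:y + size]
    if predicate(block) or size == 1:
        return [(x, y, size)]
    h = size // 2
    leaves = []
    for dx, dy in ((0, 0), (0, h), (h, 0), (h, h)):
        leaves += quadtree_split(f, predicate, x + dx, y + dy, h)
    return leaves

# Example predicate: a block is uniform if its gray-level range is small.
uniform = lambda b: b.max() - b.min() <= 5

f = np.zeros((4, 4), dtype=int)
f[2:, 2:] = 100    # one quadrant differs, forcing a single split
leaves = quadtree_split(f, uniform)
```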
The quad tree
[Figure: image partitioned into quadrants R1, R2, R3, R4, with R4 further
split into sub-quadrants R41, R42, R43, R44, and the corresponding quad tree]
DEMO MATLAB (quad tree)
Region splitting and merging
However, if only splitting is used, the final result would
likely contain adjacent regions with identical properties.
This can be improved by allowing merging as well as
splitting. The following procedure can be implemented:
• Split any region Ri if P(Ri) = FALSE,
• Merge any adjacent regions for which P(Ri ∪ Rj) = TRUE,
• Stop when no further merging or splitting is possible.
Template matching
An important problem in image analysis is detecting the
presence of an object in a scene.
This problem can be solved with a special type of image
segmentation technique in which a priori knowledge
about the detected object (a template) is used to identify
its location in a given scene.
Template matching
The image correlation technique can be used as the basis
for finding matches of a searched pattern w(x,y) of size
J×K within an image f(x,y) of a larger size M×N:
c(s,t) = Σx Σy f(x,y) w(x−s, y−t), for all (s,t)
where the summation is taken over the image region where
w and f overlap.
Template matching
The correlation function has the disadvantage of being
sensitive to the local intensities of w(x,y) and f(x,y). To
remove this difficulty, pattern matching using the
correlation coefficient is used:
γ(s,t) = Σx Σy [f(x,y) − f̄(x,y)][w(x−s, y−t) − w̄] /
         { Σx Σy [f(x,y) − f̄(x,y)]² · Σx Σy [w(x−s, y−t) − w̄]² }^(1/2)
where f̄(x,y) is the mean of f in the region under the
template and w̄ is the mean of the template.
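The correlation coefficient can be sketched in Python/NumPy with a direct double loop (library routines such as MATLAB's normxcorr2 do the same thing faster); the indexing convention and the tiny test image are illustrative:

```python
import numpy as np

def corr_coefficient(f, w):
    """Correlation coefficient gamma(s,t) between image f and template w at
    every shift where w fits entirely inside f; values lie in [-1, 1]."""
    J, K = w.shape
    wz = w - w.mean()                       # zero-mean template
    out = np.zeros((f.shape[0] - J + 1, f.shape[1] - K + 1))
    for s in range(out.shape[0]):
        for t in range(out.shape[1]):
            patch = f[s:s + J, t:t + K].astype(float)
            pz = patch - patch.mean()       # zero-mean image window
            denom = np.sqrt((pz ** 2).sum() * (wz ** 2).sum())
            out[s, t] = (pz * wz).sum() / denom if denom else 0.0
    return out

f = np.array([[0, 0, 0, 0],
              [0, 5, 6, 0],
              [0, 8, 9, 0],
              [0, 0, 0, 0]], dtype=float)
w = np.array([[5, 6],
              [8, 9]], dtype=float)
gamma = corr_coefficient(f, w)
# The peak of gamma marks the template location (shift (1, 1) here).
```

Because each window is normalised by its own mean and energy, a uniform brightness or contrast change in f leaves gamma unchanged, which is exactly what plain correlation lacks.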
Template matching
%MATLAB
z=xcorr2(c,x);   % cross-correlate template c with image x
imshow(z,[ ]);   % display the correlation surface
Illustration of the template matching technique: the
detected pattern is the letter "C".
Template matching - example
[Figure panels: ballot image, correlation result, thresholding result]