Digital Image Processing SCSVMV Dept of ECE
Unit V
IMAGE REPRESENTATION AND
RECOGNITION
LEARNING OBJECTIVES:
Image representation is the process of generating descriptions from the
visual contents of an image. After reading this unit, the reader should be
familiar with the following concepts:
1. Boundary representation
2. Chain Code
3. Descriptors
4. Pattern Recognition
BOUNDARY REPRESENTATION:
Boundary representation (B-rep) models are a more explicit representation than constructive solid geometry (CSG). The object is represented by a data structure giving information about each of the object's faces, edges and vertices and about how they are joined together.
B-rep appears to be a natural representation for vision, since surface information is readily available. The description of the object can be divided into two parts:
Topology: It records the connectivity of the faces, edges and vertices by means of pointers in the data structure.
Geometry: It describes the exact shape and position of each of the edges, faces and vertices. The geometry of a vertex is just its position in space as given by its (x, y, z) coordinates. Edges may be straight lines, circular arcs, etc. A face is represented by some description of its surface (algebraic or parametric forms are used).
Fig. Faces, edges and vertices
CHAIN CODE:
A chain code is a lossless compression algorithm for monochrome images.
The basic principle of chain codes is to separately encode each connected
component, or "blob", in the image. For each such region, a point on the
boundary is selected and its coordinates are transmitted. The encoder
then moves along the boundary of the region and, at each step, transmits
a symbol representing the direction of this movement. This continues
until the encoder returns to the starting position, at which point the blob
has been completely described, and encoding continues with the next blob
in the image.
This encoding method is particularly effective for images consisting of a
reasonable number of large connected components. Some popular chain
codes include the Freeman Chain Code of Eight Directions (FCCE),
Vertex Chain Code (VCC), Three Orthogonal symbol chain code (3OT) and
Directional Freeman Chain Code of Eight Directions (DFCCE).
A related blob encoding method is the crack code. Algorithms exist to convert
between chain code, crack code, and run-length encoding.
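As a sketch of the idea, the Freeman eight-direction code can be generated from an already-traced boundary. Boundary tracing itself (e.g. Moore tracing) is assumed to have been done; the `chain_code` helper below is illustrative, not part of any standard library.

```python
# Freeman 8-direction chain code for a closed boundary given as an
# ordered list of pixel coordinates (illustrative input; a real encoder
# would first trace the boundary).
# Directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (x right, y up).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Encode the consecutive unit moves along a closed boundary."""
    code = []
    n = len(boundary)
    for i in range(n):
        (x0, y0), (x1, y1) = boundary[i], boundary[(i + 1) % n]
        code.append(DIRS[(x1 - x0, y1 - y0)])
    return code

# A 2x2 square traced counter-clockwise from (0, 0):
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chain_code(square))  # [0, 2, 4, 6]
```

Only the starting coordinates and one 3-bit symbol per boundary step need to be stored, which is the source of the compression.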
POLYGONAL APPROXIMATION:
A boundary can be approximated by a polygon using the following splitting procedure:
1. Let i = 3
2. Mark the two points in the object that are furthest from each other with p1 and p2
3. Connect the points in the order they are numbered with lines
4. For each segment in the polygon, find the point on the perimeter
between its endpoints that is furthest from the polygonal line
segment. If this distance is larger than a threshold, mark the point
with a label pi
5. Renumber the points so that they are consecutive
6. Increase i
7. If no points have been added, stop; otherwise go to 3
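The splitting step above is essentially a recursive subdivision (the same idea as the Ramer-Douglas-Peucker algorithm). The sketch below, with function names of our own choosing, works on an open run of boundary points between the two initial farthest points:

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def split(points, i, j, threshold, keep):
    """Recursively add the farthest point between indices i and j
    whenever its distance to the chord exceeds the threshold."""
    if j <= i + 1:
        return
    k = max(range(i + 1, j),
            key=lambda m: point_line_dist(points[m], points[i], points[j]))
    if point_line_dist(points[k], points[i], points[j]) > threshold:
        keep.add(k)
        split(points, i, k, threshold, keep)
        split(points, k, j, threshold, keep)

def approximate(points, threshold):
    keep = {0, len(points) - 1}
    split(points, 0, len(points) - 1, threshold, keep)
    return [points[m] for m in sorted(keep)]

print(approximate([(0, 0), (1, 0), (2, 2), (3, 0), (4, 0)], 1.0))
# [(0, 0), (2, 2), (4, 0)]
```

For a closed boundary, the routine would be applied to each of the two runs between p1 and p2.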
The convex hull H of a set S is defined as the smallest convex set that
contains S. We define the convex deficiency
D = H \ S
The boundary points where the boundary of S enters or leaves a region of D
are used for representation of the object. To limit the number of points
found, it is possible to smooth the edge of the object first.
We define an object's skeleton, or medial axis, as the set of points in the
object that have more than one nearest neighbor on the edge of the object.
Finding the skeleton directly from this definition is very costly, since it
involves calculating the distance from every point in the object to every
point on the perimeter. Usually some iterative method is used instead:
perimeter pixels are removed until only the skeleton remains.
In order to achieve this, the following rules must be adhered to:
1. Erosion of the object must be kept to a minimum
2. Connected components must not be broken
3. End points must not be removed
The purpose of description is to quantify a representation of an object.
This means that instead of talking about areas in the image, we can talk
about their properties, such as length, curviness and so on.
One of the simplest descriptors is the length P of the perimeter of an
object. The obvious measure of perimeter length is the number of edge
pixels, that is, pixels that belong to the object but have a neighbor
belonging to the background.
A more precise measure is to assume that each pixel center is a corner in
a polygon. The length of the perimeter is then given by
P = a·Ne + b·No
where Ne is the number of even (horizontal or vertical) steps and No the
number of odd (diagonal) steps in the chain code.
Intuitively we would like to set a = 1 and b = √2 (the length of a diagonal
step). It is, however, possible to show that the length of the perimeter
will be slightly overestimated with these values. The optimal weights for
a and b (in the least-squares sense) depend on the curviness of the
perimeter. If it is assumed that the directions of two consecutive line
segments are uncorrelated, a = 0.948 and b = 1.343 will be optimal.
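The estimate P = a·Ne + b·No can be computed directly from a chain code. The helper below is an illustrative sketch (the function name is ours):

```python
import math

def perimeter_length(chain, a=0.948, b=1.343):
    """Estimate P = a*Ne + b*No from a Freeman 8-direction chain code:
    even codes are axial (horizontal/vertical) steps, odd codes are
    diagonal. Default weights are the least-squares values quoted above;
    a=1, b=sqrt(2) gives the naive geometric estimate."""
    ne = sum(1 for c in chain if c % 2 == 0)
    no = len(chain) - ne
    return a * ne + b * no

# Chain code of a small square boundary: 8 axial steps, no diagonals.
print(perimeter_length([0, 0, 2, 2, 4, 4, 6, 6]))                   # ≈ 7.58
print(perimeter_length([0, 0, 2, 2, 4, 4, 6, 6], 1, math.sqrt(2)))  # 8.0
```

Note that the least-squares weights give a slightly shorter estimate than the naive a = 1, b = √2 choice, as the text predicts.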
The diameter of an object B is defined as
Diam(B) = max over i, j of D(p_i, p_j)
where D is a distance metric and p_i, p_j are boundary points. The line
that passes through the two points p_i and p_j that define the diameter is
called the main axis of the object.
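A minimal sketch of this definition, using the Euclidean metric and a brute-force O(n²) search over all point pairs (the function name is ours):

```python
import math

def diameter(points):
    """Maximum Euclidean distance over all point pairs; the maximizing
    pair defines the main axis of the object."""
    best = (0.0, None, None)
    for i, p in enumerate(points):
        for q in points[i + 1:]:
            d = math.dist(p, q)
            if d > best[0]:
                best = (d, p, q)
    return best

d, p, q = diameter([(0, 0), (1, 0), (3, 4)])
print(d, p, q)  # 5.0 (0, 0) (3, 4)
```

For large boundaries, only the points on the convex hull need to be considered, which speeds up the search considerably.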
The curviness of the perimeter can be obtained by calculating the angle
between two consecutive line segments of the polygonal approximation.
The curvature at a point p_j of the curve is the rate of change of
direction at that point.
We can approximate the area of an object by the number of pixels
belonging to the object. More accurate measures are, however, obtained by
using a polygonal approximation. The area of a polygon segment (a
triangle with one corner in the origin and the other two in consecutive
vertices (x_i, y_i) and (x_{i+1}, y_{i+1})) is given by
A_i = (1/2)·(x_i·y_{i+1} − x_{i+1}·y_i)
The area of the entire polygon is then the magnitude of the sum of these
signed segment areas:
A = (1/2)·|Σ_i (x_i·y_{i+1} − x_{i+1}·y_i)|
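This sum is the shoelace formula; a short sketch (function name ours):

```python
def polygon_area(vertices):
    """Shoelace formula: A = (1/2)*|sum(x_i*y_{i+1} - x_{i+1}*y_i)|,
    with indices taken modulo the number of vertices."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# A 4x3 axis-aligned rectangle:
print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```

The absolute value makes the result independent of whether the vertices are listed clockwise or counter-clockwise.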
A circle of radius r has area A = πr² and perimeter length P = 2πr. So, by
defining the quotient
γ = P² / (4πA)
we have a measurement that is 1 if the object is a circle. The larger the
measurement, the less circle-like the object is.
A measure of the shape of an object can be obtained as follows:
1. Calculate the chain code for the object
2. Calculate the difference code for the chain code
3. Rotate the code so that it is minimal
4. This number is called the shape number of the object
5. The length of the number is called the order of the shape
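The steps above can be sketched directly. The first difference counts the counter-clockwise direction changes between consecutive chain-code symbols (mod 8), which makes the result rotation-independent; rotating to the minimum makes it independent of the starting point (function names are ours):

```python
def first_difference(chain):
    """Direction changes between consecutive codes, counted
    counter-clockwise, modulo 8 (closed boundary assumed)."""
    n = len(chain)
    return [(chain[i] - chain[i - 1]) % 8 for i in range(n)]

def shape_number(chain):
    """Circular rotation of the difference code that forms the smallest
    number; its length is the order of the shape."""
    diff = first_difference(chain)
    n = len(diff)
    return min(diff[i:] + diff[:i] for i in range(n))

# A 2x2 square gives the same shape number regardless of start point:
print(shape_number([0, 2, 4, 6]))  # [2, 2, 2, 2]
print(shape_number([2, 4, 6, 0]))  # [2, 2, 2, 2]
```

Here the order of the shape is 4, the length of the shape number.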
The quotient between the length of the main axis and the largest width
orthogonal to the main axis is called the eccentricity of the object.
Given an order of the shape, we can find the rectangle of that order that
best approximates the eccentricity of the object. The rectangle is rotated
so that its main axis coincides with that of the object. The rectangle
defines a resampling grid.
The shape number is then calculated on the re-sampled object.
FOURIER DESCRIPTORS:
Suppose we have a perimeter of length N made up of the coordinates
(x_i, y_i). We then define the functions x(k) = x_k and y(k) = y_k and form
the complex sequence
s(k) = x(k) + j·y(k), for k = 0, 1, …, N−1
The discrete Fourier transform of s(k) is
a(u) = (1/N) Σ_{k=0}^{N−1} s(k) e^{−j2πuk/N}, u = 0, 1, …, N−1
The complex coefficients a(u) are called the Fourier descriptors of the
object. By using only M ≤ N coefficients in the reconstruction of the
object, an approximation is obtained. Observe that there are still N
points in the contour, but only M frequencies are used to construct the
object.
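A compact sketch with NumPy (function names ours; `np.fft.fft` uses the unnormalized convention, so we divide by N to match the formula above):

```python
import numpy as np

def fourier_descriptors(xs, ys):
    """a(u) = (1/N) * sum_k s(k) * exp(-j*2*pi*u*k/N), s(k) = x(k) + j*y(k)."""
    s = np.asarray(xs) + 1j * np.asarray(ys)
    return np.fft.fft(s) / len(s)

def reconstruct(a, M):
    """Inverse DFT keeping only the M lowest-frequency coefficients
    (the rest zeroed); the contour still has N points."""
    a = a.copy()
    n = len(a)
    mask = np.zeros(n, dtype=bool)
    mask[list(range((M + 1) // 2)) + list(range(n - M // 2, n))] = True
    a[~mask] = 0
    return np.fft.ifft(a * n)

# With M = N the contour is reconstructed exactly:
xs, ys = [0, 2, 2, 0], [0, 0, 2, 2]
s_hat = reconstruct(fourier_descriptors(xs, ys), 4)
print(np.allclose(s_hat, np.array(xs) + 1j * np.array(ys)))  # True
```

Smaller M keeps only the coarse shape; because low frequencies carry the gross outline and high frequencies the fine detail, a handful of descriptors often suffices for recognition.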
TEXTURES:
"Texture" is an ambiguous word and, in the context of texture synthesis,
may have one of the following meanings. In common speech, "texture" is used
as a synonym for "surface structure". Texture has been described by five
different properties in the psychology of perception: coarseness, contrast,
directionality, line-likeness and roughness.
In 3D computer graphics, a texture is a digital image applied to the
surface of a three-dimensional model by texture mapping to give the
model a more realistic appearance. Often, the image is a photograph of a
"real" texture, such as wood grain. In image processing, every digital
image composed of repeated elements is called a "texture". Textures can be
arranged along a spectrum going from stochastic to regular:
Stochastic Textures: Texture images of stochastic textures look like noise:
colour dots that are randomly scattered over the image, barely specified
by attributes such as minimum and maximum brightness and average colour.
Many textures look like stochastic textures when viewed from a distance.
An example of a stochastic texture is roughcast.
Structured Textures: These textures look like somewhat regular patterns.
An example of a structured texture is a stone wall or a floor tiled with
paving stones.
Visual Descriptors: In computer vision, visual descriptors or image
descriptors are descriptions of the visual features of the contents of
images or videos, or of the algorithms or applications that produce such
descriptions. They describe elementary characteristics such as shape,
color, texture or motion, among others.
Introduction: As a result of new communication technologies and the
massive use of the Internet in our society, the amount of audio-visual
information available in digital format is increasing considerably.
Therefore, it has been necessary to design systems that allow us to
describe the content of several types of multimedia information in order
to search and classify them.
The audio-visual descriptors are in charge of the content description.
These descriptors capture the objects and events found in a video, image
or audio clip, and they allow quick and efficient searches of the
audio-visual content. Such a system can be compared to search engines for
textual content: although it is relatively easy to find text with a
computer, it is much more difficult to find concrete audio and video parts.
For instance, imagine somebody searching for a scene of a happy person.
Happiness is a feeling, and its description in terms of the shape, color
and texture of images is not evident. The description of audio-visual
content is not a trivial task, and it is essential for the effective use
of this type of archive. The standardization system that deals with
audio-visual descriptors is MPEG-7 (Moving Picture Experts Group - 7).
Types of Visual Descriptors:
Descriptors are the first step towards finding the connection between the
pixels contained in a digital image and what humans recall, some minutes
after having observed an image or a group of images.
Visual descriptors are divided into two main groups:
General Information Descriptors: They contain low-level descriptors which
give a description of color, shape, regions, textures and motion.
Specific Domain Information Descriptors: They give information about
objects and events in the scene. A concrete example would be face
recognition.
General information descriptors consist of a set of descriptors that
covers different basic and elementary features such as color, texture,
shape, motion, location and others. This description is automatically
generated by means of signal processing.
COLOR: The most basic quality of visual content. Five tools are defined to
describe color. The first three represent the color distribution and the
last ones describe the color relation between sequences or groups of
images:
1. Dominant Color Descriptor (DCD)
2. Scalable Color Descriptor (SCD)
3. Color Structure Descriptor (CSD)
4. Color Layout Descriptor (CLD)
5. Group of Frames (GoF) or Group of Pictures (GoP) Descriptor
TEXTURE: Also an important quality for describing an image. The texture
descriptors characterize image textures or regions. They observe the
region homogeneity and the histograms of the region borders. The set of
descriptors is formed by:
1. Homogeneous Texture Descriptor (HTD)
2. Texture Browsing Descriptor (TBD)
3. Edge Histogram Descriptor (EHD)
SHAPE: Contains important semantic information, due to humans' ability to
recognize objects through their shape. However, this information can only
be extracted by means of a segmentation similar to the one that the human
visual system implements. Such a segmentation system is not available
yet; however, there exists a series of algorithms that are considered a
good approximation. These descriptors describe regions, contours and
shapes for 2D images and 3D volumes. The shape descriptors are the
following:
1. Region-based Shape Descriptor (RSD)
2. Contour-based Shape Descriptor (CSD)
3. 3-D Shape Descriptor (3-D SD)
MOTION: Defined by four different descriptors which describe motion in a
video sequence. Motion is related both to the motion of objects in the
sequence and to the camera motion. The latter information is provided by
the capture device, whereas the rest is computed by means of image
processing. The descriptor set is the following:
1. Motion Activity Descriptor (MAD)
2. Camera Motion Descriptor (CMD)
3. Motion Trajectory Descriptor (MTD)
4. Warping and Parametric Motion Descriptors (WMD and PMD)
LOCATION: The location of elements in the image is used to describe them
in the spatial domain. In addition, elements can also be located in the
temporal domain:
1. Region Locator Descriptor (RLD)
2. Spatio-Temporal Locator Descriptor (STLD)
Specific Domain Information Descriptors: These descriptors, which give
information about objects and events in the scene, are not easily
extractable, especially when the extraction is to be done automatically.
Nevertheless, they can be manually processed.
As mentioned before, face recognition is a concrete example of an
application that tries to obtain this information automatically.
Descriptor Applications:
Among all applications, the most important ones are:
• Multimedia document search engines and classifiers.
• Digital libraries: visual descriptors allow a very detailed and concrete
search of any video or image by means of different search parameters,
for instance, the search of films where a known actor appears, or of
videos containing Mount Everest.
• Personalized electronic news services.
• The possibility of an automatic connection to a TV channel broadcasting,
for example, a soccer match whenever a player approaches the goal area.
• Control and filtering of concrete audio-visual content, like violent or
pornographic material, as well as authorization for some multimedia
content.
What is Pattern Recognition?
Pattern recognition is the process of recognizing patterns by using
machine learning algorithms. Pattern recognition can be defined as the
classification of data based on knowledge already gained or on statistical
information extracted from patterns and/or their representation. One of
the important aspects of pattern recognition is its application potential.
Examples: Speech recognition, speaker identification, multimedia
document recognition (MDR), automatic medical diagnosis.
In a typical pattern recognition application, the raw data is processed and
converted into a form that is amenable for a machine to use. Pattern
recognition involves the classification and clustering of patterns. In
classification, an appropriate class label is assigned to a pattern based
on an abstraction that is generated using a set of training patterns or
domain knowledge; classification is used in supervised learning.
Clustering generates a partition of the data which helps decision making,
the specific decision-making activity of interest to us; clustering is
used in unsupervised learning.
Features may be represented as continuous, discrete or binary variables.
A feature is a function of one or more measurements, computed so that it
quantifies some significant characteristic of the object.
Example: if we consider a face, then the eyes, ears, nose etc. are
features of the face.
A set of features taken together forms a feature vector.
Example: in the above example of a face, if all the features (eyes, ears,
nose etc.) are taken together, the sequence is a feature vector
([eyes, ears, nose]).
A feature vector is a sequence of features represented as a d-dimensional
column vector. In the case of speech, MFCCs (Mel-Frequency Cepstral
Coefficients) are spectral features of the speech signal; the sequence of
the first 13 coefficients forms a feature vector.
A pattern recognition system should possess the following capabilities:
• Recognize familiar patterns quickly and accurately
• Recognize and classify unfamiliar objects
• Accurately recognize shapes and objects from different angles
• Identify patterns and objects even when partly hidden
• Recognize patterns quickly, with ease, and with automaticity
Pattern Recognition System: Patterns are everywhere in this digital
world. A pattern can either be seen physically or it can be observed
mathematically by applying algorithms.
In pattern recognition, a pattern comprises the following two fundamental
things:
• A collection of observations
• The concept behind the observations
Feature Vector: The collection of observations is also known as a feature
vector. A feature is a distinctive characteristic that sets an item apart
from similar items. A feature vector is the combination of n features in
an n-dimensional column vector. Different classes may have different
feature values, but members of the same class have similar feature
values.
Classifier and Decision Boundaries:
In a statistical classification problem, a decision boundary is a
hypersurface that partitions the underlying vector space into two sets. A
decision boundary is the region of a problem space in which the output
label of a classifier is ambiguous. A classifier is a hypothesis or
discrete-valued function that is used to assign (categorical) class labels
to particular data points. The classifier partitions the feature space
into class-labeled decision regions, while decision boundaries are the
borders between decision regions.
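As a minimal illustration of decision regions, consider a nearest-centroid classifier: each class is summarized by its mean feature vector, and for two classes the decision boundary is the perpendicular bisector of the two means. The data and function names below are made up for the example:

```python
import math

def nearest_centroid(train, x):
    """Assign feature vector x to the class whose mean (centroid) is
    closest in Euclidean distance. `train` maps class label -> list of
    feature vectors."""
    centroids = {
        label: tuple(sum(v[i] for v in vecs) / len(vecs)
                     for i in range(len(vecs[0])))
        for label, vecs in train.items()
    }
    return min(centroids, key=lambda label: math.dist(centroids[label], x))

# Two toy classes in a 2-D feature space:
train = {"A": [(0, 0), (1, 0)], "B": [(5, 5), (6, 5)]}
print(nearest_centroid(train, (1, 1)))  # A
print(nearest_centroid(train, (5, 4)))  # B
```

Points equidistant from both centroids lie exactly on the decision boundary, where the classifier's output is ambiguous in the sense described above.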
Components in a Pattern Recognition System:
Pattern recognition systems can be partitioned into components. There
are five typical components for various pattern recognition systems.
These are as follows:
A Sensor: A sensor is a device used to measure a property, such as
pressure, position, temperature, or acceleration, and respond with
feedback.
A Preprocessing Mechanism: Segmentation is used here; it is the process
of dividing or partitioning data into multiple parts called segments.
A Feature Extraction Mechanism: Feature extraction starts from an
initial set of measured data and builds derived values (features) intended
to be informative and non-redundant, facilitating the subsequent learning
and generalization steps, and in some cases leading to better human
interpretations. It can be manual or automated.
A Description Algorithm: Pattern recognition algorithms generally aim to
provide a reasonable answer for all possible inputs and to perform "most
likely" matching of the inputs, taking into account their statistical
variation.
A Training Set: Training data is a certain percentage of the overall
dataset, set aside along with a testing set. As a rule, the better the
training data, the better the algorithm or classifier performs.
MULTIPLE CHOICE QUESTIONS
1. To convert a continuous sensed data into Digital form, which of the
following is required?
a) Sampling
b) Quantization
c) Both Sampling and Quantization
d) Neither Sampling nor Quantization
2. For a continuous image f(x, y), Quantization is defined as
a) Digitizing the coordinate values
b) Digitizing the amplitude values
c) All of the mentioned
d) None of the mentioned
3. What is pixel?
a) Pixel is the elements of a digital image
b) Pixel is the elements of an analog image
c) Pixel is the cluster of a digital image
d) Pixel is the cluster of an analog image
4. Which is a color attribute that describes a pure color?
a) Saturation
b) Hue
c) Brightness
d) Intensity
6. A typical size comparable in quality to monochromatic TV image is of
size
a) 256 X 256
b) 512 X 512
c) 1920 X 1080
d) 1080 X 1080
7. The number of grey values is an integer power of:
a) 4
b) 2
c) 8
d) 1
8. The section of the real plane spanned by the coordinates of an image is
called the __________
a) Spatial Domain
b) Coordinate Axes
c) Plane of Symmetry
d) None of the Mentioned
9. The procedure done on a digital image to alter the values of its
individual pixels is
a) Neighborhood Operations
b) Image Registration
c) Geometric Spatial Transformation
d) Single Pixel Operation
10. What is the first and foremost step in Image Processing?
a) Image restoration
b) Image enhancement
c) Image acquisition
d) Segmentation
11. How many steps are involved in image processing?
a) 10
b) 9
c) 11
d) 12
12. After the digitization process, a digital image has M rows and N
columns, and the number of gray levels L is an integer power of 2 for
each pixel, L = 2^k. The number of bits, b, required to store the
digitized image is:
a) b=M*N*k
b) b=M*N*L
c) b=M*L*k
d) b=L*N*k
13. In digital image of M rows and N columns and L discrete gray levels,
calculate the bits required to store a digitized image for M=N=32 and
L=16.
a) 16384
b) 4096
c) 8192
d) 512
14. The quality of a digital image is well determined by ___________
a) The number of samples
b) The discrete gray levels
c) All of the mentioned
d) None of the mentioned
15. The most familiar single sensor used for Image Acquisition is
a) Microdensitometer
b) Photodiode
c) CMOS
d) None of the Mentioned
16. The difference in intensity between the highest and the lowest
intensity levels in an image is ___________
a) Noise
b) Saturation
c) Contrast
d) Brightness
17. Which of the following expression is used to denote spatial domain?
a) g(x,y)=T[f(x,y)]
b) f(x+y)=T[g(x+y)]
c) g(xy)=T[f(x,y)]
d) g(x-y)=T[f(x-y)]
18. What is the general form of representation of power-law transformation?
a) s = cr^γ
b) c = sr^γ
c) s = rc
d) s = rc^γ
19. What is the general form of representation of log transformation?
a) s = c log10(1/r)
b) s = c log10(1+r)
c) s = c log10(1*r)
d) s = c log10(1−r)
20. Which expression is obtained by performing the negative
transformation on the negative of an image with gray levels in the range
[0, L-1]?
a) s=L+1-r
b) s=L+1+r
c) s=L-1-r
d) s=L-1+r
21. What is the full form for PDF, a fundamental descriptor of random
variables i.e. gray values in an image?
a) Pixel distribution function
b) Portable document format
c) Pel deriving function
d) Probability density function
22. What is the output of a smoothing, linear spatial filter?
a) Median of pixels
b) Maximum of pixels
c) Minimum of pixels
d) Average of pixels
23. Which of the following is the disadvantage of using smoothing filter?
a) Blur edges
b) Blur inner pixels
c) Remove sharp transitions
d) Sharp edges
24. Which of the following comes under the application of image blurring?
a) Object detection
b) Gross representation
c) Object motion
d) Image segmentation
25. Which of the following derivatives produce a double response at step
changes in gray level?
a) First order derivative
b) Third order derivative
c) Second order derivative
d) First and second order derivatives
26. What is the name of process used to correct the power-law response
phenomena?
a) Beta correction
b) Alpha correction
c) Gamma correction
d) Pie correction
27. Which of the following transformation function requires much
information to be specified at the time of input?
a) Log transformation
b) Power transformation
c) Piece-wise transformation
d) Linear transformation
28. In which type of slicing, highlighting a specific range of gray levels in
an image often is desired?
a) Gray-level slicing
b) Bit-plane slicing
c) Contrast stretching
d) Byte-level slicing
29. In general, which of the following assures of no ringing in the output?
a) Gaussian Lowpass Filter
b) Ideal Lowpass Filter
c) Butterworth Lowpass Filter
d) All of the mentioned
30. The lowpass filtering process can be applied in which of the following
area(s)?
a) The field of machine perception, with application of character
recognition
b) In field of printing and publishing industry
c) In field of processing satellite and aerial images
d) All of the mentioned
31. Image sharpening filters in the frequency domain perform the reverse
operation of which of the following lowpass filters?
a) Gaussian Lowpass filter
b) Butterworth Lowpass filter
c) Ideal Lowpass filter
d) None of the Mentioned
32. Which of the following is a second-order derivative operator?
a) Histogram
b) Laplacian
c) Gaussian
d) None of the mentioned
33. Dark characteristics in an image are better enhanced using ___________
a) Laplacian Transform
b) Gaussian Transform
c) Histogram Specification
d) Power-law Transformation
34. _____________ is used to detect diseases such as bone infection and
tumors.
a) MRI Scan
b) PET Scan
c) Nuclear Whole-Body Scan
d) X-Ray
35. An alternate approach to median filtering is ______________
a) Use a mask
b) Gaussian filter
c) Sharpening
d) Laplacian filter
36. Sobel is better than Prewitt in image
a) Sharpening
b) Blurring
c) Smoothing
d) Contrast
37. If the inner region of the object is textured, then the approach we use is
a) Discontinuity
b) Similarity
c) Extraction
d) Recognition
38. Gradient magnitude images are more useful in
a) Point Detection
b) Line Detection
c) Area Detection
d) Edge Detection
39. Diagonal lines are at angles of
a) 0
b) 30
c) 45
d) 90
40. The transition between objects and background shows
a) Ramp Edges
b) Step Edges
c) Sharp Edges
d) Both A And B
41. For edge detection we use
a) First Derivative
b) Second Derivative
c) Third Derivative
42. Sobel gradient is not that good for detection of
a) Horizontal Lines
b) Vertical Lines
c) Diagonal Lines
d) Edges
43. Method in which images are input and attributes are output is called
a) Low Level Processes
b) High Level Processes
c) Mid-Level Processes
d) Edge Level Processes
44. Computation of derivatives in segmentation is also called
a) Spatial Filtering
b) Frequency Filtering
c) Low Pass Filtering
d) High Pass Filtering
45. Transition of intensity takes place between
a) Adjacent Pixels
b) Near Pixels
c) Edge Pixels
d) Line Pixels
46. Discontinuity approach of segmentation depends upon
a) Low Frequencies
b) Smooth Changes
c) Abrupt Changes
d) Contrast
47. Two regions are said to be adjacent if their union forms
a) Connected Set
b) Boundaries
c) Region
d) Image
47. Example of similarity approach in image segmentation is
a) Edge Based Segmentation
b) Boundary Based Segmentation
c) Region Based Segmentation
d) Both A And B
48. LOG stands for
a) Laplacian of Gaussian
b) Length of Gaussian
c) Laplacian of Gray Level
d) Length of Gray level
49. The double-line effect is produced by
a) First Derivative
b) Second Derivative
c) Third Derivative
d) Both A And B
50. The point where the second derivative crosses zero, between its two
extremes, is called
a) Discontinuity
b) Similarity
c) Continuity
d) Zero Crossing
51. DWT stands for
a) Discrete Wavelet Transform
b) Discrete Wavelet Transformation
c) Digital Wavelet Transform
d) Digital Wavelet Transformation
52. Discarding every other sample is called
a) Up Sampling
b) Filtering
c) Down Sampling
d) Blurring
53. High contrast images are considered as
a) Low Resolution
b) High Resolution
c) Intense
d) Blurred
54. In multiresolution processing, * represents the
a) Complete Conjugate Operation
b) Complex Conjugate Operation
c) Complete Complex Operation
d) Complex Complex Operation
55. Representing an image at more than one resolution is
a) Histogram
b) Image Pyramid
c) Local Histogram
d) Equalized Histogram
56. The subband of the input image showing dD(m,n) is called
a) Approximation
b) Vertical Detail
c) Horizontal Detail
d) Diagonal Detail
57. Which of the following measures are not used to describe a region?
a) Mean and median of grey values
b) Minimum and maximum of grey values
c) Number of pixels alone
d) Number of pixels above and below mean
58. Which of the following techniques is based on the Fourier transform?
a) Structural
b) Spectral
c) Statistical
d) Topological
59. Which of the following of a boundary is defined as the line
perpendicular to the major axis?
a) Equilateral axis
b) Equidistant axis
c) Minor axis
d) Median axis
60. Which of the following techniques of boundary descriptions have the
physical interpretation of boundary shape?
a) Fourier transform
b) Statistical moments
c) Laplace transform
d) Curvature