Image Feature Extraction & Representation
CHAPTER FOUR
Feature Extraction
Feature extraction is an important technique in DIP & Computer Vision, widely used for tasks like:
• Object recognition
• Image alignment and stitching (to create a panorama)
• 3D stereo reconstruction
• Navigation for robots/self-driving cars
• and more…
What are Features?
Features are parts or patterns of an object in an image that help to identify it.
For example, a square has 4 corners and 4 edges; these can be called features of the square, and they help us humans identify that it is a square.
Features include properties like corners, edges, regions of interest, points, ridges, etc.
Cont’d
Features are the basic attributes or aspects that help us clearly identify a particular object or image.
Features are marked properties that are unique.
There is no exact definition of the features of an image, but properties such as shape, size, orientation, etc. constitute features of the image.
Features may be specific structures in the image, such as points, edges or objects.
Extracting these features can be done using different techniques.
Traditional feature detection techniques
• Harris Corner Detection — uses a Gaussian window function to detect corners.
• Shi-Tomasi Corner Detector — the authors modified the scoring function used in Harris Corner Detection to achieve a better corner detector.
• Scale-Invariant Feature Transform (SIFT) — this technique is scale invariant, unlike the previous two.
• Speeded-Up Robust Features (SURF) — as the name says, this is a faster version of SIFT.
• Features from Accelerated Segment Test (FAST) — a much faster corner detection technique than SURF.
• Binary Robust Independent Elementary Features (BRIEF) — this is only a feature descriptor and can be used with any feature detector. It reduces memory usage by converting floating-point descriptors into binary strings.
• Oriented FAST and Rotated BRIEF (ORB) — SIFT and SURF are patented, and this algorithm from OpenCV Labs is a free alternative that uses the FAST keypoint detector and the BRIEF descriptor (see the sketch after this list).
Feature Detection
Feature detection includes methods for computing abstractions of image information and making a local decision at every image point whether there is an image feature of a given type at that point or not.
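A minimal sketch of such a per-pixel decision, using the Harris corner response mentioned on the previous slide (OpenCV, Python; the image file name and the threshold are placeholders):

# The Harris response is computed for every pixel and thresholded:
# "is there a corner at this point or not?"
import cv2
import numpy as np

gray = cv2.imread("checkerboard.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
gray = np.float32(gray)

# Response image: one Harris score per pixel
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Local decision at every image point
corners = response > 0.01 * response.max()
print("corner pixels:", int(corners.sum()))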
Classification system
Camera → Feature Extractor (Image Processing) → Classifier
Image Analysis
Typical steps:
• Pre-processing
• Segmentation (object detection)
• Feature extraction
• Feature selection
• Classifier training
• Evaluation of classifier performance.
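A minimal sketch of the last three steps (feature selection, classifier training, evaluation), assuming features have already been extracted into a matrix X with one row per segmented object. The data here is synthetic, and the choice of ANOVA feature scoring and a k-NN classifier is purely illustrative.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))        # placeholder: 200 objects, 30 candidate features
y = rng.integers(0, 2, size=200)      # placeholder: 2 classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(f_classif, k=5).fit(X_train, y_train)                          # feature selection
clf = KNeighborsClassifier(n_neighbors=3).fit(selector.transform(X_train), y_train)   # classifier training

y_pred = clf.predict(selector.transform(X_test))                                      # evaluation
print("accuracy:", accuracy_score(y_test, y_pred))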
Features for image analysis
Applications:
• Remote sensing
• Medical imaging
• Character recognition
• Robot Vision
• …
Major goal of image feature extraction:
Given an image, or a region within an image, generate the features that will subsequently be
fed to a classifier in order to classify the image in one of the possible classes.
(Theodoridis & Koutroumbas: «Pattern Recognition», Elsevier 2006).
Feature extraction
The goal is to generate features that exhibit high information-packing properties:
• Extract the information from the raw data that is most relevant for discrimination between the classes
• Extract features with low within-class variability and high between-class variability
• Discard redundant information
• The information in an image f[i,j] must be reduced to enable reliable classification (generalization)
• Even a 64×64 image gives a 4096-dimensional feature space!
“Curse of dimensionality”
[Figure: error rate as a function of feature dimensionality d, shown for training data and for new data.]
Feature Types (Regional Features)
• Colour features
• Gray level features
• Shape features
• Histogram (texture) features
Color histograms
A color histogram h is a D-dimensional vector, obtained by quantizing the color space into D distinct colors.
Typical values of D are 32, 64, 256, 1024, …
Example: the HSV color space can be quantized into D = 32 colors: H is divided into 8 intervals and S into 4 (8 × 4 = 32); V is not used, which guarantees invariance to light intensity.
The i-th component (also called bin) of h stores the percentage (or number) of pixels in the image whose color is mapped to the i-th color.
Although conceptually simple, color histograms are widely used since they are relatively invariant to translation, rotation, scale changes and partial occlusions.
[Example figure: a color histogram with D = 64 bins.]
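A minimal sketch of the 32-bin HSV histogram described above (8 intervals for H, 4 for S, V discarded), using OpenCV in Python. The file name "scene.jpg" is a placeholder.

import cv2

bgr = cv2.imread("scene.jpg")                    # placeholder file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# 8 x 4 = 32 bins over the H and S channels (OpenCV ranges: H in [0,180), S in [0,256))
hist = cv2.calcHist([hsv], [0, 1], None, [8, 4], [0, 180, 0, 256])

h = hist.flatten()          # D = 32 dimensional vector
h = h / h.sum()             # store percentages rather than raw counts
print(h.shape, h.sum())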
Shape features - example
Closest fitting ellipse
• Orientation
• Eccentricity
Both can be computed from the second-order central moments of the region (see the sketch below).
Major and Minor Axes
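The formulas themselves are not reproduced on these slides; the sketch below uses the standard moment-based definitions (an assumption about what was shown) to compute the orientation, the semi-major and semi-minor axes, and the eccentricity of the closest fitting ellipse of a binary region, using OpenCV's moments.

import cv2
import numpy as np

mask = np.zeros((200, 200), np.uint8)
cv2.ellipse(mask, (100, 100), (60, 25), 30, 0, 360, 255, -1)   # a synthetic test region

m = cv2.moments(mask, binaryImage=True)
mu20 = m["mu20"] / m["m00"]            # normalized central moments
mu02 = m["mu02"] / m["m00"]
mu11 = m["mu11"] / m["m00"]

theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)        # orientation of the major axis

common = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
a = np.sqrt(2 * (mu20 + mu02 + common))                # semi-major axis
b = np.sqrt(2 * (mu20 + mu02 - common))                # semi-minor axis
ecc = np.sqrt(1 - (b / a) ** 2)                        # eccentricity

print(np.degrees(theta), a, b, ecc)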
Representing Texture
Texture provides information about the uniformity, granularity and regularity of the image surface.
It is usually computed considering only the gray-scale values of pixels (e.g. the V channel in HSV).
Histogram (texture) features
• First order statistics (information related to the gray level distribution)
• Second order statistics (information related to the spatial/relative distribution of gray levels), e.g. the second order histogram, the co-occurrence matrix
Histogram: P(i) = n_i / n, i = 0, …, G−1 (n_i = number of pixels with gray level i, n = total number of pixels)
Moments from the gray level histogram: m_k = Σ_i i^k · P(i)
Entropy: H = −Σ_i P(i) · log2 P(i)
Histogram (texture) features
Central moments: μ_k = Σ_i (i − m_1)^k · P(i)
Features derived from these moments include the variance (μ_2), skewness (μ_3) and kurtosis (μ_4); see the sketch below.
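A minimal sketch of such first-order (histogram-based) texture features for a gray-level image: the normalized histogram P(i), the mean, two central moments, and the entropy. The file name "texture.png" is a placeholder.

import cv2
import numpy as np

gray = cv2.imread("texture.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

counts = np.bincount(gray.ravel(), minlength=256)
P = counts / counts.sum()                      # normalized histogram P(i)

i = np.arange(256)
mean = np.sum(i * P)                           # m_1
mu2 = np.sum((i - mean) ** 2 * P)              # variance (2nd central moment)
mu3 = np.sum((i - mean) ** 3 * P)              # 3rd central moment (skewness-related)
entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))

print(mean, mu2, mu3, entropy)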
Feature selection
• A number of feature candidates may have been generated.
• Using all candidates will easily lead to overtraining (unreliable classification of new data).
• Dimensionality reduction is therefore required, i.e. feature selection!
• Exhaustive search is impossible!
• Trial and error (select a feature combination, train the classifier, estimate the error rate).
• Suboptimal search (see the sketch below)
• «Branch and Bound» search
• Linear or non-linear mappings to a lower-dimensional feature space.
[Figure: scatter plot of features]
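One possible suboptimal search is a greedy forward selection: repeatedly train a classifier, estimate its performance by cross-validation, and keep the best new feature. A minimal sketch with scikit-learn, on synthetic placeholder data and with an illustrative k-NN classifier:

import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))          # placeholder: 150 samples, 20 candidate features
y = rng.integers(0, 2, size=150)        # placeholder: 2 classes

sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=3),
                                n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, y)
print("selected features:", np.flatnonzero(sfs.get_support()))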
Dimensionality reduction – linear transformations
• Projection of multidimensional feature vectors onto a lower-dimensional feature space.
• Example: Fisher's linear discriminant provides a projection from a d-dimensional space (d > 1) to a one-dimensional space in such a way that the separation between the classes is maximized (see the sketch below).
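A minimal sketch of Fisher's linear discriminant as a dimensionality-reducing projection, using scikit-learn on synthetic 5-dimensional two-class data:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
class0 = rng.normal(loc=0.0, scale=1.0, size=(100, 5))   # placeholder class 0
class1 = rng.normal(loc=2.0, scale=1.0, size=(100, 5))   # placeholder class 1
X = np.vstack([class0, class1])
y = np.array([0] * 100 + [1] * 100)

lda = LinearDiscriminantAnalysis(n_components=1)   # 5-D -> 1-D projection
X_1d = lda.fit_transform(X, y)
print(X_1d.shape)                                  # (200, 1)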
Object Recognition
Object Detection: Is there a face in the image? Where is a face? Where is Jane?
Object Identification: Who is it? Is it Jane or Erik?
General Problems of Recognition
Invariance:
•“External parameters”
• Pose
• Illumination
•“Internal parameters”
• Person identity
• Facial expression
Applicable to many classes of objects.
Object Detection
Task: Given an input image, determine whether there are objects of a given class (e.g. faces, people, cars, …) in the image and where they are located.
Face Detection – basic scheme
[Diagram: off-line training uses face examples and non-face examples. At run time the system searches for faces at different resolutions and locations: pixel pattern → feature extraction → feature vector (x1, x2, …, xn) → classifier → classification result.]
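A minimal sketch of the "search at different resolutions and locations" step, using OpenCV's pretrained Haar-cascade face detector as a stand-in classifier (not necessarily the one in the scheme above); the image file names are placeholders.

import cv2

img = cv2.imread("group_photo.jpg")                      # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# scaleFactor controls the resolution pyramid; the window slides over all locations
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("group_photo_faces.jpg", img)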
Image Features
• Gray: 19×19 patch, histogram equalization, masking → 283 values in [0, 1]
• Gradient: x-y Sobel filtering, histogram equalization → 17×17 values in [0, 1]
• Wavelets: Haar wavelet transform, normalization → 1740 values in [0, 1]
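A minimal sketch of the three feature types for a single 19×19 face patch: histogram-equalized gray values, Sobel gradient magnitude, and Haar wavelet coefficients (here via the PyWavelets package, an assumed choice). The masking step and the exact dimensionalities (283 / 17×17 / 1740) from the slide are not reproduced.

import cv2
import numpy as np
import pywt

patch = cv2.imread("face_patch.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
patch = cv2.resize(patch, (19, 19))

# 1) Gray: histogram equalization, values scaled to [0, 1]
gray_feat = cv2.equalizeHist(patch).astype(np.float32).ravel() / 255.0

# 2) Gradient: x-y Sobel filtering, magnitude scaled to [0, 1]
gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
mag = np.sqrt(gx ** 2 + gy ** 2)
grad_feat = (mag / (mag.max() + 1e-9)).ravel()

# 3) Wavelets: Haar wavelet transform, coefficients rescaled to [0, 1]
coeffs = pywt.wavedec2(patch.astype(np.float32), "haar", level=2)
wav = np.concatenate([c.ravel() for c in [coeffs[0]] + [d for lvl in coeffs[1:] for d in lvl]])
wav_feat = (wav - wav.min()) / (wav.max() - wav.min() + 1e-9)

print(gray_feat.shape, grad_feat.shape, wav_feat.shape)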
Identification
Task: Given an image of an object of a particular class (e.g. a face), identify which exemplar it is (e.g. A, B, C or D).
Problems in Face Identification
Limited information in a single face image
Examples of variation: illumination, rotation.
System Architecture
[Diagram: face image → feature extraction (gray, gradient, wavelets, …) → classifier (support vector machine, …; trained on training data) → identification result (e.g. "A").]
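A minimal sketch of the classifier stage of this architecture: a support vector machine trained on feature vectors (in practice the gray, gradient and wavelet features would be stacked into X) that outputs an identity label. The data below is a random placeholder.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 283))              # placeholder: 40 training faces, 283-D features
y_train = np.repeat(["A", "B", "C", "D"], 10)     # identities

clf = SVC(kernel="linear").fit(X_train, y_train)

x_probe = rng.normal(size=(1, 283))               # feature vector of a probe face image
print("identification result:", clf.predict(x_probe)[0])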