Feature detection and matching

Points and patches

Two main approaches to finding feature points and their correspondences:
• The first is to find features in one image that can be accurately tracked using a local search technique, such as correlation or least squares.
• The second is to independently detect features in all the images under consideration and then match features based on their local appearance.
Feature extraction: Corners and blobs

Motivation: Automatic panoramas (credit: Matt Brown)
• HD View: http://research.microsoft.com/en-us/um/redmond/groups/ivm/HDView/HDGigapixel.htm
• Also see GigaPan: http://gigapan.org/
Why extract features?
• Motivation: panorama stitching
– We have two images – how do we combine them?

Step 1: extract features


Step 2: match features
Step 3: align images
Comparing the two approaches to correspondence:
• The former (tracking-based) approach is more suitable when images are taken from nearby viewpoints or in rapid succession (e.g., video sequences), while the latter is more suitable when a large amount of motion or appearance change is expected, e.g., in stitching together panoramas.
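
As a rough illustration of the first (tracking-based) approach, the sketch below detects corners in one frame and tracks them into the next with OpenCV's pyramidal Lucas–Kanade tracker. The file names and parameter values are illustrative assumptions, not part of the original slides.

```python
# Sketch of approach 1: detect features in one frame, then track them
# into a nearby frame with pyramidal Lucas-Kanade (least-squares) search.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi corners in the first frame ("good features to track")
pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)

# Track each point into the next frame by local search around its old position
pts1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts0, None,
                                             winSize=(21, 21), maxLevel=3)

# Keep only the points that were tracked successfully
good0 = pts0[status.ravel() == 1]
good1 = pts1[status.ravel() == 1]
print(f"tracked {len(good1)} of {len(pts0)} features")
```

This only searches a small neighborhood around each feature, which is why it works best when the motion between frames is small.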
Key point matching
Four separate stages:
• [1] During the feature detection (extraction) stage, each image is searched for locations that are likely to match well in other images.
• [2] At the feature description stage, each region around a detected keypoint location is converted into a more compact and stable (invariant) descriptor that can be matched against other descriptors.
• [3] The feature matching stage efficiently searches for likely matching candidates in other images.
• [4] The feature tracking stage is an alternative to the third stage that only searches a small neighborhood around each detected feature and is therefore more suitable for video processing.
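
To make stages 1–3 concrete, here is a minimal OpenCV sketch of the second (detect-and-match) approach, using ORB keypoints, binary descriptors, and brute-force matching with a ratio test. The image names and the 0.75 ratio threshold are assumptions for illustration only.

```python
# Sketch of approach 2: detect + describe features independently in each
# image, then match descriptors (stages 1-3 of the pipeline above).
import cv2

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)

# Stage 1 + 2: detect keypoints and compute (rotation-invariant) descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Stage 3: search for likely matching candidates (2-NN + ratio test)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative matches")
```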
Image matching

(Example image pair: photos by Diva Sian and swashford)

Harder case
(Photos by Diva Sian and scgbt)

Harder still?
NASA Mars Rover images; answer below (look for tiny colored squares…)

NASA Mars Rover images with SIFT feature matches
Feature Matching
Invariant local features
Find features that are invariant to transformations
– geometric invariance: translation, rotation, scale
– photometric invariance: brightness, exposure, …

Feature Descriptors

Advantages of local features
– Locality: features are local, so robust to occlusion and clutter
– Quantity: hundreds or thousands in a single image
– Distinctiveness: can differentiate a large database of objects
– Efficiency: real-time performance achievable
More motivation…
Feature points are used for:
– Image alignment (e.g., mosaics)
– 3D reconstruction
– Motion tracking
– Object recognition
– Indexing and database retrieval
– Robot navigation
– … other
What makes a good feature?

(Figure: Snoop demo)
Want uniqueness
Look for image regions that are unusual
– Lead to unambiguous matches in other images

How to define “unusual”?


Local measures of uniqueness
Suppose we only consider a small window of pixels
– What defines whether a feature is a good or bad candidate?

Credit: S. Seitz, D. Frolova, D. Simakov
Local measure of feature uniqueness
• How does the window change when you shift it?
• A good feature is one where shifting the window in any direction causes a big change:
– "flat" region: no change in any direction
– "edge": no change along the edge direction
– "corner": significant change in all directions

Credit: S. Seitz, D. Frolova, D. Simakov
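
These observations are usually formalized with an auto-correlation error E and the second-moment matrix M that the following slides refer to. The slides themselves do not write the formulas out, so the following is the standard formulation, with w a Gaussian or box window and I_x, I_y the image gradients:

```latex
% SSD error when the window is shifted by (u, v), and its quadratic approximation
E(u,v) = \sum_{x,y} w(x,y)\,\bigl[\,I(x+u,\,y+v) - I(x,y)\,\bigr]^2
       \;\approx\;
       \begin{pmatrix} u & v \end{pmatrix}\, M \begin{pmatrix} u \\ v \end{pmatrix}

% Second-moment (auto-correlation) matrix built from image gradients
M = \sum_{x,y} w(x,y)
    \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}
```

How E behaves under shifts (flat, edge, or corner) can then be read off from the eigenvalues of M, which is what the next slides discuss.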
Importance of corners
Corners are good features to match.

Corner Points
Interpreting the eigenvalues
Classification of image points using the eigenvalues λ1, λ2 of M:
– "Corner": λ1 and λ2 are both large, λ1 ~ λ2; E increases in all directions
– "Edge": λ1 >> λ2 (or λ2 >> λ1)
– "Flat" region: λ1 and λ2 are small; E is almost constant in all directions
Eigenvalue computation is costly
Instead of computing λ1 and λ2 explicitly, the Harris detector uses the response R = det(M) − k·(trace M)², where det(M) = λ1·λ2, trace(M) = λ1 + λ2, and k ≈ 0.04–0.06:
– Corner: R is large and positive
– Edge: R is negative
– Flat: |R| is small

(Figure: Robert Collins, CSE486, Penn State)
Thresholding R

(Figure: Robert Collins, CSE486, Penn State)
Algorithm
1. Compute the image gradients I_x and I_y.
2. Form the products I_x², I_y², and I_x·I_y, and smooth each with a window function to build M.
3. Compute the response R at every pixel.

(Figure: Robert Collins, CSE486, Penn State)
Harris detector example
– f value (red high, blue low)
– Threshold (f > value)
– Find local maxima of f
– Harris features (in red)
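
A minimal sketch of this example pipeline (response, threshold, local maxima), assuming OpenCV's built-in Harris response; the input file name, threshold factor, and neighborhood size are illustrative choices rather than values from the slides.

```python
# Sketch of the Harris example above: compute the response f (= R),
# threshold it, and keep local maxima as corner features.
import cv2
import numpy as np

img = cv2.imread("building.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder input
gray = np.float32(img)

# Harris response R = det(M) - k * trace(M)^2 over a 3x3 window
R = cv2.cornerHarris(gray, blockSize=3, ksize=3, k=0.04)

# Threshold: keep only strong responses (the 0.01 factor is an assumption)
strong = R > 0.01 * R.max()

# Local maxima: a pixel must equal the maximum of its 5x5 neighborhood
local_max = (R == cv2.dilate(R, np.ones((5, 5), np.uint8)))
corners = np.argwhere(strong & local_max)   # (row, col) corner locations
print(f"{len(corners)} Harris corners")
```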
