EECE 5639 Computer Vision I
Lecture 1
Edge Detection
Project 1 has been posted
Next Class: Corners, Model Fitting
Canny Edge Detector
Canny found a linear, continuous filter that maximizes the three given criteria.
There is no closed-form solution for the optimal filter.
However, it looks VERY SIMILAR to the derivative of a Gaussian.
1D Gaussian
First Derivative of a Gaussian
(Plot: the positive and negative lobes.)
As a mask, it is also computing a difference (derivative).
Another interpretation:
I(x) → [Noise Filter: Gaussian Smoothing] → [Edge Detection: First Derivative] → E(x)
Canny Edge Detector 2D
First derivative → Gradient vector
Absolute value → Gradient magnitude
Algorithm CANNY_ENHANCER
The input is image I; G is a zero-mean Gaussian filter (std = σ).
1. J = I * G (smoothing)
2. For each pixel (i,j): (edge enhancement)
Compute the image gradient:
∇J(i,j) = (Jx(i,j), Jy(i,j))′
Estimate the edge strength:
es(i,j) = (Jx²(i,j) + Jy²(i,j))^(1/2)
Estimate the edge orientation:
eo(i,j) = arctan(Jx(i,j)/Jy(i,j))
The outputs are the images Es and Eo.
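A minimal NumPy/SciPy sketch of this step, assuming Sobel masks for the derivatives and arctan2 for the orientation (the function name and these choices are mine, not the slide's):

```python
import numpy as np
from scipy import ndimage

def canny_enhancer(I, sigma=1.0):
    """Smooth with a zero-mean Gaussian, then return edge strength Es
    and edge orientation Eo at every pixel."""
    J = ndimage.gaussian_filter(I.astype(float), sigma)  # J = I * G
    Jx = ndimage.sobel(J, axis=1)   # horizontal derivative (assumed mask)
    Jy = ndimage.sobel(J, axis=0)   # vertical derivative (assumed mask)
    Es = np.hypot(Jx, Jy)           # es = (Jx^2 + Jy^2)^(1/2)
    Eo = np.arctan2(Jx, Jy)         # eo, keeping the slide's Jx/Jy convention
    return Es, Eo
```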
Efficiency Considerations
(Cost of convolution in the 1D and 2D cases.)
Efficiency Considerations
We can also use Gaussian separability:
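A quick sketch of the saving (the 512×512 image and the 13-tap kernel are made-up sizes): an n-tap 2D Gaussian costs n² multiplications per pixel, while two 1D passes cost only 2n.

```python
import numpy as np
from scipy import ndimage

I = np.random.rand(512, 512)                  # stand-in image
x = np.linspace(-3, 3, 13)
g = np.exp(-x**2 / 2)
g /= g.sum()                                  # 1D Gaussian, n = 13 taps

G2 = np.outer(g, g)                           # full 2D kernel: n*n per pixel
out_2d = ndimage.convolve(I, G2)

tmp = ndimage.convolve1d(I, g, axis=0)        # columns: n per pixel
out_sep = ndimage.convolve1d(tmp, g, axis=1)  # rows: n per pixel

assert np.allclose(out_2d, out_sep)           # identical result, far cheaper
```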
CANNY_ENHANCER
The output image Es contains the magnitudes of the smoothed gradient.
Sigma determines the amount of smoothing.
Es has large values at edges: it is an edge ENHANCER.
How do we “detect” edges?
Es has large values at edges.
Find local maxima above a threshold Th.
… but Es may also have wide ridges around the local maxima (large values around the edges).
NONMAX_SUPPRESSION
The inputs are Es & Eo (the outputs of CANNY_ENHANCER).
Consider 4 directions D = {0°, 45°, 90°, 135°} wrt the horizontal axis.
For each pixel (i,j) do:
1. Find the direction d ∈ D s.t. d ≅ Eo(i,j) (normal to the edge).
2. If Es(i,j) is smaller than at least one of its neighbors along d, set IN(i,j) = 0; otherwise, IN(i,j) = Es(i,j).
The output is the thinned edge image IN.
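A direct, unoptimized Python sketch of NONMAX_SUPPRESSION; the quantization of Eo and the neighbor offsets per direction are one reasonable reading of the algorithm, not the only one:

```python
import numpy as np

def nonmax_suppression(Es, Eo):
    """Keep a pixel only if Es is a local maximum along the direction
    d that best matches the edge normal Eo."""
    H, W = Es.shape
    IN = np.zeros_like(Es)
    offsets = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    ang = np.rad2deg(Eo) % 180                     # fold into [0, 180)
    d = (np.round(ang / 45).astype(int) % 4) * 45  # nearest of D = {0,45,90,135}
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            di, dj = offsets[int(d[i, j])]
            if Es[i, j] >= Es[i + di, j + dj] and Es[i, j] >= Es[i - di, j - dj]:
                IN[i, j] = Es[i, j]                # local maximum: keep it
    return IN
```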
Graphical Interpretation
Thresholding
Edges are found by thresholding the output of NONMAX_SUPPRESSION.
If the threshold is too high:
Very few (or no) edges
High MISDETECTIONS, many gaps
If the threshold is too low:
Too many edges (all pixels)
High FALSE POSITIVES, many extra edges
SOLUTION:
Hysteresis Thresholding
Use two thresholds L < H: accept pixels with Es(i,j) > H, reject pixels with Es(i,j) < L, and accept pixels with L < Es(i,j) < H only if they connect to an accepted pixel.
Strong edges reinforce adjacent weak edges.
HYSTERESIS_THRESH
Inputs:
IN (output of NONMAX_SUPPRESSION),
Eo (output of CANNY_ENHANCER),
thresholds L and H.
For all pixels in IN, scanning in a fixed order:
Locate the next unvisited pixel s.t. IN(i,j) > H.
Starting from IN(i,j), follow the chains of connected local maxima, in both directions perpendicular to the edge normal, as long as IN > L.
Mark all visited points, and save the locations of the contour points.
Output: a set of lists describing the contours.
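A simplified sketch of hysteresis: rather than tracing chains perpendicular to the edge normal as HYSTERESIS_THRESH specifies, this version uses the common shortcut of growing weak responses (> L) out from strong seeds (> H) over the 8-connected neighborhood:

```python
import numpy as np
from collections import deque

def hysteresis_thresh(IN, L, H):
    """Return a boolean edge map: strong pixels plus any weak pixels
    connected to them."""
    strong = IN > H
    weak = IN > L                        # note: includes the strong pixels
    out = np.zeros(IN.shape, dtype=bool)
    q = deque(zip(*np.nonzero(strong)))  # seed the search with strong pixels
    while q:
        i, j = q.popleft()
        if out[i, j]:
            continue
        out[i, j] = True
        for di in (-1, 0, 1):            # visit the 8-connected neighbors
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < IN.shape[0] and 0 <= nj < IN.shape[1]
                        and weak[ni, nj] and not out[ni, nj]):
                    q.append((ni, nj))
    return out
```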
Hysteresis Thresholding
(Figure: weak pixels with Es(i,j) > L linked to strong pixels with Es(i,j) > H.)
Other Edge Detectors
(2nd-order derivative filters)
First-order derivative filters (1D)
Sharp changes in the gray level of the input image correspond to “peaks” of the first derivative of the input signal.
(Plot: F(x) and F′(x).)
Second-order derivative filters (1D)
Peaks of the first derivative of the input signal correspond to “zero-crossings” of the second derivative of the input signal.
(Plot: F(x), F′(x), and F′′(x).)
NOTE:
F′′(x) = 0 is not enough.
F′(x) = c has F′′(x) = 0, but there is no edge.
The second derivative must change sign, i.e. from (+) to (-) or from (-) to (+).
The sign transition depends on the intensity change of the image, i.e. from dark to bright or vice versa.
Edge Detection (2D)
1D, signal I(x):
dI(x)/dx > Th
d²I(x)/dx² = 0
2D, image I(x,y):
|∇I(x,y)| = (Ix²(x,y) + Iy²(x,y))^(1/2) > Th, with tan θ = Ix(x,y)/Iy(x,y)
∇²I(x,y) = Ixx(x,y) + Iyy(x,y) = 0 (the Laplacian)
Notes about the Laplacian:
∇²I(x,y) is a SCALAR
↑ Can be found using a SINGLE mask
↓ Orientation information is lost
∇²I(x,y) is the sum of SECOND-order derivatives
But taking derivatives increases noise!
Very noise sensitive.
It is always combined with a smoothing operation.
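To make the SINGLE-mask point concrete, here is the common 4-neighbor discrete Laplacian (one standard choice among several; the image is a random stand-in):

```python
import numpy as np
from scipy import ndimage

I = np.random.rand(64, 64)             # stand-in image

lap = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]], dtype=float)
lap_I = ndimage.convolve(I, lap)       # scalar Ixx + Iyy at every pixel
```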
LOG Filter
First smooth (Gaussian filter),
then find zero-crossings (Laplacian filter):
O(x,y) = ∇²(I(x,y) * G(x,y))
Using linearity:
O(x,y) = ∇²G(x,y) * I(x,y)
This filter is called the “Laplacian of the Gaussian” (LOG).
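A sketch of the LOG pipeline using SciPy's gaussian_laplace, which applies ∇²G * I in one call; the zero-crossing test and the optional strength threshold are my additions:

```python
import numpy as np
from scipy import ndimage

def log_edges(I, sigma=2.0, thresh=0.0):
    """Mark pixels where the LOG response changes sign (zero-crossings)."""
    O = ndimage.gaussian_laplace(I.astype(float), sigma)  # O = lap(G) * I
    zc = np.zeros(O.shape, dtype=bool)
    zc[:-1, :] |= O[:-1, :] * O[1:, :] < 0   # sign change, vertical neighbors
    zc[:, :-1] |= O[:, :-1] * O[:, 1:] < 0   # sign change, horizontal neighbors
    return zc & (np.abs(O) > thresh)         # optionally require some strength
```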
1D Gaussian
First Derivative of a Gaussian
(Plot: the positive and negative lobes.)
As a mask, it is also computing a difference (derivative).
Second Derivative of a Gaussian
In 2D: the “Mexican Hat”.
Modern Edge Detection
P. Dollar and C. Zitnick, “Structured Forests for Fast Edge Detection,” ICCV 2013.
Let’s look at Canny’s output
Task: Find the cows in the image
Canny Edges
Image Gradients + NMS vs. Canny’s Edges (several example images)
Many distractors, missing edges!
Can we do better?
Imagine … that someone goes and ANNOTATES which edges matter
Annotate
Some people have done that!
The Berkeley Segmentation Dataset and Benchmark
by D. Martin, C. Fowlkes, D. Tal and J. Malik
Using Annotations
How can we use annotations to improve (task oriented) edge detection?
We can use Machine Learning!
Very over-simplified explanation
Supervised Classification Problem:
Samples are represented by “features”.
We have “training” labeled data from 2 classes: Class 1 and Class -1.
Very over-simplified explanation
Supervised Classification Problem:
We have “training” labeled data from 2 classes.
Given a new sample, we want to assign a label to it.
Very over-simplified explanation
Supervised Classification Problem:
We have “training” labeled data from 2 classes.
Given a new sample, we want to assign a label to it.
The classifier learns a class boundary that separates the classes.
Supervised Classification
There are many strategies to learn classifiers:
Nearest neighbor
Support Vector Machines
Classification Trees
Classification Forests
Neural networks (deep learning)
Classifiers provide:
a class “label” for new samples
a “score” of how well the label fits the new sample
Training an Edge Detector
Training an Edge Detector
Labeled Samples!
Recipe for Training an Edge Detector
• Extract LOTS of image patches
• Assign labels to the samples using annotations
• Represent each patch using features (measurements) of the patch
• Simplest possibility would be to vectorize the patch
Recipe for Training an Edge Detector
• Extract LOTS of image patches
• Assign labels to the samples using annotations
• Represent each patch using features (measurements) of the patch
• Simplest possibility would be to vectorize the patch
• Better: extract meaningful features such as gradients, color histograms, etc.
Recipe for Training an Edge Detector
• Extract LOTS of image patches
• Assign labels to the samples using annotations
• Represent each patch using features (measurements) of the patch
• Simplest possibility would be to vectorize the patch
• Better: extract meaningful features such as gradients, color histograms, etc.
• Train the classifier (see the sketch below)
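A minimal sketch of this recipe; scikit-learn's RandomForestClassifier stands in for the structured forest of Dollar & Zitnick, and the patches and labels below are random placeholders, not real annotated data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-ins for real data: N patches of size 16x16 with annotation-derived
# labels (1 = edge at the patch center, -1 = no edge).
N, P = 10000, 16
patches = np.random.rand(N, P, P)
labels = np.random.choice([-1, 1], size=N)

X = patches.reshape(N, -1)              # simplest features: vectorize each patch
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

# Test time: classify a new patch and read off a label and a score.
new_patch = np.random.rand(1, P * P)
label = clf.predict(new_patch)[0]
score = clf.predict_proba(new_patch)[0].max()
```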
Using the Classifier
Using the Classifier
Using the Classifier
The classifier assigns a “score” to each patch.
Canny vs Structured Edge Detector
(Side-by-side comparisons on several example images.)
Evaluating Results
Evaluation: Recall & Precision
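For reference, the standard definitions (TP, FP, FN = true positives, false positives, false negatives):
Precision = TP / (TP + FP): the fraction of detected edge pixels that are true edges.
Recall = TP / (TP + FN): the fraction of true edge pixels that are detected.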
Evaluation