Sahar Aldhaheri
“Learning is any process by
which a system improves
performance from experience.”
Herbert Simon
What is Learning?
Definition: Machine learning is the study of algorithms that:
o Improve their performance P
o At some task T
o With experience E.
A well-defined learning task is given by {P, T, E}.
Defining the Learning Task
Improve on task T, with respect to performance metric P, based on
experience E
T: Playing checkers
P: Percentage of games won against an arbitrary opponent
E: Playing practice games against itself
T: Recognizing hand-written words
P: Percentage of words correctly classified
E: Database of human-labeled images of handwritten words
Defining the Learning Task
Improve on task T, with respect to performance metric P, based on
experience E
T: Driving on four-lane highways using vision sensors
P: Average distance traveled before a human-judged error
E: A sequence of images and steering commands recorded while
observing a human driver.
T: Categorize email messages as spam or legitimate.
P: Percentage of email messages correctly classified.
E: Database of emails, some with human-given labels
Machine Learning Problems
What is classification?
o A machine learning task that deals with identifying the class to which an
instance belongs
o A classifier performs classification
o Training phase: learning the classifier from the available labeled data (the ‘training set’)
o Testing phase: testing how well the classifier performs (the ‘testing set’)
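As a minimal sketch of the two phases, assuming scikit-learn and its built-in iris dataset (neither appears in the slides):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                   # labeled instances

# Split the labeled data into a training set and a testing set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)             # the classifier
clf.fit(X_train, y_train)                           # training phase: learn from the training set
print("test accuracy:", clf.score(X_test, y_test))  # testing phase: how well it performs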
The classification learning framework
● Apply a prediction function to a feature representation of
the image to get the desired output:
f( [image] ) = “apple”
f( [image] ) = “tomato”
f( [image] ) = “cow”
Slide credit: L. Lazebnik
The classification learning framework
y = f(x)
o y: the output (predicted label)
o f: the prediction function
o x: the image feature
● Training: given a training set of labeled examples {(x1,y1), …,
(xN,yN)}, estimate the prediction function f by minimizing the
prediction error on the training set
● Testing: apply f to a never-before-seen test example x and output the predicted value y = f(x)
Slide credit: L. Lazebnik
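A from-scratch sketch of this setup on made-up 2-D features, with f chosen as a nearest-class-mean rule (the data and the choice of f are illustrative assumptions, not from the slides):

import numpy as np

# Training set {(x1, y1), ..., (xN, yN)} -- made-up feature vectors
X_train = np.array([[1.0, 1.2], [0.9, 1.0], [3.1, 2.9], [3.0, 3.2]])
y_train = np.array(["apple", "apple", "tomato", "tomato"])

# Training: estimate f from the labeled examples (here, one mean per class)
means = {label: X_train[y_train == label].mean(axis=0)
         for label in np.unique(y_train)}

def f(x):
    # Prediction function: return the label of the closest class mean
    return min(means, key=lambda label: np.linalg.norm(x - means[label]))

# Testing: apply f to a never-before-seen example x
print(f(np.array([2.8, 3.0])))   # -> "tomato"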
Steps
o Training: training images + training labels → image features → learned model
o Testing: test image → image features → learned model → prediction
Slide credit: D. Hoiem and L. Lazebnik
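The steps above can be sketched as two functions; extract_features is a hypothetical placeholder (here it just flattens pixels), and the linear SVM is only one possible learned model:

import numpy as np
from sklearn.svm import LinearSVC

def extract_features(image):
    # Placeholder feature extractor: flatten the pixels into a vector
    return np.asarray(image, dtype=float).ravel()

def train(train_images, train_labels):
    # Training: image features + training labels -> learned model
    X = np.stack([extract_features(im) for im in train_images])
    model = LinearSVC()
    model.fit(X, train_labels)
    return model

def predict(model, test_image):
    # Testing: test image features + learned model -> prediction
    return model.predict(extract_features(test_image)[None, :])[0]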
Recognition task and supervision
• Images in the training set must be annotated with the
“correct answer” that the model is expected to produce
(Example image annotation: “Contains a motorbike”)
Slide credit: L. Lazebnik
Classification vs. Regression
Deep Belief Net on Face Images
(Learned feature hierarchy, bottom to top: pixels → edges → object parts (combinations of edges) → object models)
Based on materials by Andrew Ng
Learning of Object Parts
Slide credit: Andrew Ng
Training on Multiple Objects
● Trained on 4 classes (cars, faces, motorbikes, airplanes).
● Second layer: shared features and object-specific features.
● Third layer: More specific features.
Slide credit: Andrew Ng
Scene Labeling via Deep Learning
[Farabet et al. ICML 2012, PAMI 2013]
Bias Problems
What is Classification?
Modeling
Multi-Class Classification
Classification Algorithms
K-Nearest Neighbours (KNN)
Why KNN?
What is KNN Algorithm?
How do we choose the factor ‘k’?
When do we use KNN Algorithm?
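A from-scratch sketch of the KNN rule on made-up 2-D points (the data and k = 3 are illustrative assumptions, not from the slides):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Label x by majority vote among its k nearest training points
    distances = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(distances)[:k]               # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[1, 1], [1, 2], [2, 1], [6, 5], [7, 7], [6, 6]])
y_train = np.array(["A", "A", "A", "B", "B", "B"])
print(knn_predict(X_train, y_train, np.array([5, 5]), k=3))   # -> "B"

In practice the factor ‘k’ is usually chosen by comparing a few candidate values on held-out (validation) data; an odd k also avoids ties in two-class problems.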
Confusion Matrix
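A minimal confusion-matrix sketch, assuming scikit-learn and made-up spam/ham label vectors:

from sklearn.metrics import confusion_matrix

y_true = ["spam", "spam", "ham", "ham", "ham", "spam"]
y_pred = ["spam", "ham",  "ham", "ham", "spam", "spam"]

# Rows are the true class, columns the predicted class, ordered ham, spam
print(confusion_matrix(y_true, y_pred, labels=["ham", "spam"]))
# [[2 1]
#  [1 2]]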
What to remember about classifiers
● No free lunch: machine learning algorithms are tools, not
dogmas
● Try simple classifiers first
● Better to have smart features and simple classifiers than
simple features and smart classifiers
Slide credit: D. Hoiem