Machine Learning Lecture
By: Dr. Rehan Ashraf
Logistic Regression: Classification

Classification
Email: Spam / Not Spam?
Online Transactions: Fraudulent (Yes / No)?
Tumor: Malignant / Benign?
0: “Negative Class” (e.g., benign tumor)
1: “Positive Class” (e.g., malignant tumor)
[Figure: Malignant? (1 = Yes, 0 = No) plotted against Tumor Size, shown in two panels.]
Threshold the classifier output h_θ(x) at 0.5:
  If h_θ(x) ≥ 0.5, predict “y = 1”
  If h_θ(x) < 0.5, predict “y = 0”
Classification: y = 0 or 1, but linear regression’s h_θ(x) can be > 1 or < 0.
Logistic Regression: 0 ≤ h_θ(x) ≤ 1
Logistic Regression: Hypothesis Representation
Logistic Regression Model
Want 0 ≤ h_θ(x) ≤ 1:
  h_θ(x) = g(θ^T x)
  g(z) = 1 / (1 + e^(−z))
[Figure: the sigmoid g(z), which passes through 0.5 at z = 0 and asymptotes to 0 and 1.]
g is called the sigmoid function or logistic function.
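A minimal Octave sketch of this hypothesis (the helper names sigmoid and hypothesis, and the design matrix X, are illustrative rather than from the slides):

% Sigmoid / logistic function, applied elementwise
function g = sigmoid(z)
  g = 1 ./ (1 + exp(-z));
end

% Hypothesis h_θ(x) = g(θ^T x) for a design matrix X (m x (n+1), first column all ones)
% and parameter vector theta ((n+1) x 1); returns m x 1 estimated probabilities that y = 1.
function h = hypothesis(theta, X)
  h = sigmoid(X * theta);
end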
Interpretation of Hypothesis Output
h_θ(x) = estimated probability that y = 1 on input x
Example: If x = [x_0; x_1] = [1; tumorSize] and h_θ(x) = 0.7,
tell the patient there is a 70% chance of the tumor being malignant.
h_θ(x) = P(y = 1 | x; θ)
“probability that y = 1, given x, parameterized by θ”
Logistic Regression: Decision Boundary
Logistic regression: h_θ(x) = g(θ^T x), with g(z) = 1 / (1 + e^(−z)).
Suppose we predict “y = 1” if h_θ(x) ≥ 0.5, which happens exactly when θ^T x ≥ 0;
we predict “y = 0” if h_θ(x) < 0.5, i.e. when θ^T x < 0.
Decision Boundary
Example: h_θ(x) = g(θ_0 + θ_1 x_1 + θ_2 x_2), with θ = [−3; 1; 1].
[Figure: the (x_1, x_2) plane with the line x_1 + x_2 = 3 separating the two classes.]
Predict “y = 1” if −3 + x_1 + x_2 ≥ 0, i.e. x_1 + x_2 ≥ 3.
The line x_1 + x_2 = 3 is the decision boundary.
Non-linear decision boundaries
Example: h_θ(x) = g(θ_0 + θ_1 x_1 + θ_2 x_2 + θ_3 x_1^2 + θ_4 x_2^2), with θ = [−1; 0; 0; 1; 1].
[Figure: the (x_1, x_2) plane with the unit circle x_1^2 + x_2^2 = 1 as the decision boundary.]
Predict “y = 1” if −1 + x_1^2 + x_2^2 ≥ 0, i.e. x_1^2 + x_2^2 ≥ 1.
With higher-order polynomial features, even more complex decision boundaries are possible.
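A small Octave sketch of the circular-boundary example (the specific test point and the variable names are illustrative):

theta = [-1; 0; 0; 1; 1];                    % θ_0 ... θ_4 from the example above
x1 = 0.5;  x2 = 0.5;                         % a point inside the unit circle
features = [1; x1; x2; x1^2; x2^2];          % feature vector [1, x1, x2, x1^2, x2^2]
h = 1 / (1 + exp(-(theta' * features)));     % about 0.38, i.e. below the 0.5 threshold
prediction = (theta' * features) >= 0;       % 0 here, since x1^2 + x2^2 < 1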
Logistic Regression: Cost Function
Training set: {(x^(1), y^(1)), (x^(2), y^(2)), …, (x^(m), y^(m))}   (m examples)
x ∈ [x_0; x_1; …; x_n], with x_0 = 1 and y ∈ {0, 1}
h_θ(x) = 1 / (1 + e^(−θ^T x))
How to choose parameters θ?

Cost function
Linear regression: J(θ) = (1/m) Σ_{i=1}^{m} (1/2)(h_θ(x^(i)) − y^(i))^2
With the sigmoid hypothesis, this squared-error J(θ) is “non-convex” (many local optima); we want a “convex” cost so gradient descent reaches the global minimum.
Logistic regression cost function
Cost(h_θ(x), y) = −log(h_θ(x))       if y = 1
                  −log(1 − h_θ(x))   if y = 0
If y = 1:
[Figure: −log(h_θ(x)) over h_θ(x) ∈ (0, 1]; the cost is 0 when h_θ(x) = 1 and grows without bound as h_θ(x) → 0.]
Logistic regression cost function
If y = 0:
[Figure: −log(1 − h_θ(x)) over h_θ(x) ∈ [0, 1); the cost is 0 when h_θ(x) = 0 and grows without bound as h_θ(x) → 1.]
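A tiny Octave illustration of this per-example cost (the helper name costExample is illustrative):

% Per-example cost: small when the prediction h agrees with the label y,
% very large when the model is confidently wrong.
function c = costExample(h, y)
  if y == 1
    c = -log(h);
  else
    c = -log(1 - h);
  end
end

costExample(0.99, 1)   % about 0.01: confident and correct
costExample(0.01, 1)   % about 4.6:  confident and wrong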
Logistic Regression: Simplified Cost Function and Gradient Descent
Logistic regression cost function
J(θ) = (1/m) Σ_{i=1}^{m} Cost(h_θ(x^(i)), y^(i))
Because y is always 0 or 1, the two cases can be written as one expression:
Cost(h_θ(x), y) = −y log(h_θ(x)) − (1 − y) log(1 − h_θ(x))
J(θ) = −(1/m) [ Σ_{i=1}^{m} y^(i) log h_θ(x^(i)) + (1 − y^(i)) log(1 − h_θ(x^(i))) ]
To fit parameters θ: min_θ J(θ)
To make a prediction given a new x:
Output h_θ(x) = 1 / (1 + e^(−θ^T x))   (the estimated probability that y = 1)
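A vectorized Octave sketch of this cost (logisticCost is an illustrative name; X is assumed to be m x (n+1) with a leading column of ones, y an m x 1 vector of 0/1 labels):

function J = logisticCost(theta, X, y)
  m = length(y);
  h = 1 ./ (1 + exp(-(X * theta)));                     % h_θ(x^(i)) for all i
  J = -(1/m) * (y' * log(h) + (1 - y)' * log(1 - h));   % cross-entropy cost J(θ)
end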
Gradient Descent
J(θ) = −(1/m) [ Σ_{i=1}^{m} y^(i) log h_θ(x^(i)) + (1 − y^(i)) log(1 − h_θ(x^(i))) ]
Want min_θ J(θ):
Repeat {
  θ_j := θ_j − α ∂/∂θ_j J(θ)
}   (simultaneously update all θ_j)

The partial derivatives work out to ∂/∂θ_j J(θ) = (1/m) Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i)) x_j^(i), so:
Repeat {
  θ_j := θ_j − α (1/m) Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i)) x_j^(i)
}   (simultaneously update all θ_j)

The update rule looks identical to linear regression! The difference is that h_θ(x) is now the sigmoid 1 / (1 + e^(−θ^T x)) rather than θ^T x.
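A minimal batch gradient-descent sketch in Octave (gradientDescent, alpha, and num_iters are illustrative names, not from the slides):

function theta = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y);
  for iter = 1:num_iters
    h = 1 ./ (1 + exp(-(X * theta)));   % current predictions h_θ(x^(i))
    grad = (1/m) * (X' * (h - y));      % (n+1) x 1 vector of partial derivatives
    theta = theta - alpha * grad;       % simultaneous update of all θ_j
  end
end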
Logistic Regression: Advanced Optimization
Optimization algorithm
Cost function J(θ). Want min_θ J(θ).
Given θ, we have code that can compute
- J(θ)
- ∂/∂θ_j J(θ)   (for j = 0, 1, …, n)
Gradient descent:
Repeat {
  θ_j := θ_j − α ∂/∂θ_j J(θ)
}
Optimization algorithm
Given θ, we have code that can compute
- J(θ)
- ∂/∂θ_j J(θ)   (for j = 0, 1, …, n)

Optimization algorithms:
- Gradient descent
- Conjugate gradient
- BFGS
- L-BFGS

Advantages of the last three:
- No need to manually pick the learning rate α
- Often faster than gradient descent
Disadvantages:
- More complex
Example: θ = [θ_1; θ_2], J(θ) = (θ_1 − 5)^2 + (θ_2 − 5)^2, whose minimum is at θ_1 = 5, θ_2 = 5.

function [jVal, gradient] = costFunction(theta)
  % Cost value and gradient for the toy example above
  jVal = (theta(1)-5)^2 + (theta(2)-5)^2;
  gradient = zeros(2,1);
  gradient(1) = 2*(theta(1)-5);
  gradient(2) = 2*(theta(2)-5);
end

options = optimset('GradObj', 'on', 'MaxIter', 100);
initialTheta = zeros(2,1);
[optTheta, functionVal, exitFlag] = ...
    fminunc(@costFunction, initialTheta, options);
In general, theta = [θ_0; θ_1; …; θ_n] is an (n+1)-dimensional vector, and the cost function has the form:

function [jVal, gradient] = costFunction(theta)
  jVal = [ code to compute J(θ) ];
  gradient(1) = [ code to compute ∂J/∂θ_0 ];
  gradient(2) = [ code to compute ∂J/∂θ_1 ];
  ...
  gradient(n+1) = [ code to compute ∂J/∂θ_n ];
end
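Filling in that template for logistic regression might look roughly like this (a sketch; X and y are assumed to be the training data, passed in through an anonymous-function wrapper):

function [jVal, gradient] = costFunction(theta, X, y)
  m = length(y);
  h = 1 ./ (1 + exp(-(X * theta)));                      % sigmoid hypothesis
  jVal = -(1/m) * (y' * log(h) + (1 - y)' * log(1 - h)); % J(θ)
  gradient = (1/m) * (X' * (h - y));                     % all partial derivatives at once
end

options = optimset('GradObj', 'on', 'MaxIter', 400);
initialTheta = zeros(size(X, 2), 1);
[optTheta, functionVal] = fminunc(@(t) costFunction(t, X, y), initialTheta, options);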
Logistic Regression: Multi-class Classification (One-vs-all)
Multiclass classification
Email foldering/tagging: Work, Friends, Family, Hobby
Medical diagnosis: Not ill, Cold, Flu
Weather: Sunny, Cloudy, Rain, Snow
Binary classification vs. multi-class classification:
[Figure: two scatter plots in the (x_1, x_2) plane, one with a two-class dataset and one with a three-class dataset.]
One-vs-all (one-vs-rest):
[Figure: the three-class dataset in the (x_1, x_2) plane split into three binary problems, one panel per class.]
Class 1: h_θ^(1)(x)
Class 2: h_θ^(2)(x)
Class 3: h_θ^(3)(x)
h_θ^(i)(x) = P(y = i | x; θ)   (i = 1, 2, 3)
One-vs-all
Train a logistic regression classifier h_θ^(i)(x) for each
class i to predict the probability that y = i.
On a new input x, to make a prediction, pick the
class i that maximizes h_θ^(i)(x).
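A one-vs-all sketch in Octave (oneVsAll, predictOneVsAll, and the helper trainLogistic, assumed to return the fitted θ for a binary 0/1 problem, are illustrative names):

% Train K binary classifiers; row i of all_theta holds θ for "class i vs. rest".
function all_theta = oneVsAll(X, y, K)
  all_theta = zeros(K, size(X, 2));
  for i = 1:K
    all_theta(i, :) = trainLogistic(X, double(y == i))';  % relabel: class i -> 1, rest -> 0
  end
end

% Predict by picking the class whose classifier outputs the highest probability.
function p = predictOneVsAll(all_theta, X)
  h = 1 ./ (1 + exp(-(X * all_theta')));   % m x K matrix of h_θ^(i)(x)
  [~, p] = max(h, [], 2);                  % argmax over classes for each example
end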