Logistic Regression Basics

The document discusses how to classify data points as 0 or 1 using a logistic regression model. It explains that the hypothesis function hθ(x) will output a value greater than or equal to 0.5 if θTx is greater than or equal to 0, classifying the point as 1, and less than 0.5 if θTx is less than 0, classifying it as 0. The decision boundary is the line that separates points classified as 0 versus 1. An example shows a decision boundary as a vertical line where x1 is equal to 5.

In order to get our discrete 0 or 1 classification, we can translate the output of the hypothesis function as follows:

h_\theta(x) \ge 0.5 \rightarrow y = 1
h_\theta(x) < 0.5 \rightarrow y = 0
The way our logistic function g behaves is that when its input is greater than or equal to zero, its
output is greater than or equal to 0.5:

g(z) \ge 0.5 \text{ when } z \ge 0
Remember:

z = 0, \ e^{0} = 1 \Rightarrow g(z) = 1/2
z \to \infty, \ e^{-\infty} \to 0 \Rightarrow g(z) = 1
z \to -\infty, \ e^{\infty} \to \infty \Rightarrow g(z) = 0
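
As a quick numerical check (a minimal Python sketch, not part of the original notes), the sigmoid can be evaluated at z = 0 and at large positive and negative z:

import math

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))     # 0.5, since e^0 = 1
print(sigmoid(50))    # ~1.0, since e^(-50) is nearly 0
print(sigmoid(-50))   # ~0.0, since e^(50) is huge
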
So if our input to g is \theta^T x, then that means:

h_\theta(x) = g(\theta^T x) \ge 0.5 \text{ when } \theta^T x \ge 0
From these statements we can now say:

\theta^T x \ge 0 \Rightarrow y = 1
\theta^T x < 0 \Rightarrow y = 0
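
As an illustration (the helper name and the plain-list representation are my own, not from the notes), this rule amounts to thresholding \theta^T x at zero:

def predict(theta, x):
    # z = theta^T x, computed as a plain dot product over Python lists
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1 if z >= 0 else 0   # y = 1 when z >= 0, otherwise y = 0

print(predict([1.0, 2.0], [1.0, -0.3]))   # 1, since 1 - 0.6 = 0.4 >= 0
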
The decision boundary is the line that separates the area where y = 0 and where y = 1. It is
created by our hypothesis function.

Example:

\theta = \begin{bmatrix} 5 \\ -1 \\ 0 \end{bmatrix}

y = 1 \text{ if } 5 + (-1)x_1 + 0x_2 \ge 0
5 - x_1 \ge 0
-x_1 \ge -5
x_1 \le 5
In this case, our decision boundary is a straight vertical line placed on the graph where x_1 = 5, and everything to the left of that denotes y = 1, while everything to the right denotes y = 0.
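
Plugging in the example's values (a small sketch; the test points are arbitrary), \theta^T x = 5 - x_1, so any point with x_1 \le 5 is classified as 1:

theta = [5, -1, 0]
for x1 in (3, 5, 8):
    x = [1, x1, 0]                                # x = [x0, x1, x2] with x0 = 1
    z = sum(t * xi for t, xi in zip(theta, x))    # equals 5 - x1
    print(x1, 1 if z >= 0 else 0)                 # prints: 3 1, 5 1, 8 0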

Again, the input to the sigmoid function g(z) (e.g. \theta^T x) doesn't need to be linear, and could be a function that describes a circle (e.g. z = \theta_0 + \theta_1 x_1^2 + \theta_2 x_2^2) or any shape to fit our data.
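
For instance (the coefficient values here are purely illustrative, not from the notes), with z = -1 + x_1^2 + x_2^2 the decision boundary is the unit circle x_1^2 + x_2^2 = 1, and points outside it are classified as 1:

theta = [-1, 1, 1]   # illustrative values: z = -1 + x1^2 + x2^2

def predict_circle(x1, x2):
    z = theta[0] + theta[1] * x1**2 + theta[2] * x2**2
    return 1 if z >= 0 else 0

print(predict_circle(0.0, 0.0))   # 0: inside the unit circle
print(predict_circle(1.0, 0.0))   # 1: exactly on the boundary
print(predict_circle(2.0, 2.0))   # 1: outside the circle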
