What is Regression Analysis?
Regression analysis is a set of statistical methods used to estimate the relationships between a
dependent variable and one or more independent variables. It can be used to assess the strength of
the relationship between variables and to model the future relationship between them.
Regression analysis includes several variations, such as linear, multiple linear,
and nonlinear. The most common models are simple linear and multiple linear.
Nonlinear regression analysis is commonly used for more complicated data
sets in which the dependent and independent variables show a nonlinear
relationship.
Regression analysis offers numerous applications in various disciplines,
including finance.
Regression Analysis – Linear model assumptions
Linear regression analysis is based on six fundamental assumptions (a rough residual-based check of
several of them is sketched in code after this list):
1. The dependent and independent variables have a linear relationship.
2. The independent variable is not random.
3. The mean value of the residual (error) is zero.
4. The variance of the residual (error) is constant across all observations (homoscedasticity).
5. The residual (error) values are not correlated across observations.
6. The residual (error) values follow the normal distribution.
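The following is a minimal, non-authoritative sketch, assuming NumPy and entirely made-up data, of how
assumptions 3 through 6 can be checked informally by fitting a line and inspecting its residuals.

```python
# Minimal sketch (made-up data, NumPy assumed) of informal checks for
# assumptions 3-6: fit a line, then inspect the residuals.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)                 # fixed (non-random) independent variable
y = 2.0 + 0.5 * x + rng.normal(0, 1, 50)   # linear relationship plus noise

b1, b0 = np.polyfit(x, y, 1)               # slope, intercept of a degree-1 fit
residuals = y - (b0 + b1 * x)

print("mean of residuals (should be near 0):", residuals.mean())
print("residual variance, first half:      ", residuals[:25].var())
print("residual variance, second half:     ", residuals[25:].var())
print("lag-1 residual correlation:         ", np.corrcoef(residuals[:-1], residuals[1:])[0, 1])
```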
Regression Analysis – Simple linear regression
Simple linear regression is a model that assesses the relationship between a
dependent variable and an independent variable. The simple linear model is
expressed using the following equation:
Y = a + bX + ϵ
Where:
Y – Dependent variable
X – Independent (explanatory) variable
a – Intercept
b – Slope
ϵ – Residual (error)
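As an illustration only (not part of the source text), the intercept a and slope b can be estimated by
ordinary least squares. The NumPy sketch below uses invented x and y values.

```python
# Minimal sketch (invented data, NumPy assumed) of estimating a and b
# in Y = a + bX + ϵ by ordinary least squares.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)  # slope
a = y.mean() - b * x.mean()                                                # intercept
print(f"a = {a:.3f}, b = {b:.3f}")  # fitted line: ŷ = a + b*x
```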
Regression Analysis – Multiple linear regression
Multiple linear regression analysis is essentially the same as the simple linear model, except that the
model uses multiple independent variables. The mathematical representation of multiple linear
regression is:
Y = a + bX1 + cX2 + dX3 + ϵ
Where:
Y – Dependent variable
X1, X2, X3 – Independent (explanatory) variables
a – Intercept
b, c, d – Slopes
ϵ – Residual (error)
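A minimal sketch, assuming NumPy and using invented toy data and coefficients, of fitting an equation of
this form with a least-squares solver:

```python
# Minimal sketch (invented data and coefficients, NumPy assumed) of fitting
# Y = a + bX1 + cX2 + dX3 + ϵ with a least-squares solver.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))                              # columns play the role of X1, X2, X3
y = 1.0 + X @ np.array([0.5, -2.0, 3.0]) + rng.normal(0, 0.1, 20)

design = np.column_stack([np.ones(len(X)), X])            # column of 1s for the intercept a
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
a, b, c, d = coef
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}, d={d:.2f}")
```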
Multiple linear regression follows the same conditions as the simple linear model. However, since there
are several independent variables in multiple linear analysis, there is one additional mandatory
condition for the model:
Non-collinearity: The independent variables should show minimal correlation with each other. If the
independent variables are highly correlated, it will be difficult to assess the true relationships
between the dependent and independent variables.
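One informal way to screen for collinearity is to inspect the pairwise correlations among the independent
variables. The sketch below is an illustration only, with invented data in which X3 is deliberately built
to nearly duplicate X1.

```python
# Minimal sketch (invented data, NumPy assumed): X3 is built to nearly duplicate X1,
# and the correlation matrix makes the collinearity visible.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
X[:, 2] = X[:, 0] + rng.normal(0, 0.01, 20)   # X3 is almost a copy of X1

corr = np.corrcoef(X, rowvar=False)           # 3x3 correlation matrix of the columns
print(corr.round(2))                          # entries near ±1 off the diagonal signal collinearity
```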
Source: https://corporatefinanceinstitute.com/resources/knowledge/finance/regression-analysis/
Linear Regression Example
In this lesson, we apply regression analysis to some fictitious data, and we show how to interpret the
results of our analysis.
Note: Regression computations are usually handled by a software package or a graphing calculator. For
this example, however, we will do the computations "manually", since the gory details have educational
value.
Problem Statement
Last year, five randomly selected students took a math aptitude test before they began their statistics
course. The Statistics Department has three questions.
1. What linear regression equation best predicts statistics performance, based on math aptitude scores?
2. If a student made an 80 on the aptitude test, what grade would we expect her to make in statistics?
3. How well does the regression equation fit the data?
How to Find the Regression Equation
In the table below, the xi column shows scores on the aptitude test. Similarly, the yi column shows
statistics grades. The last two columns show deviation scores - the difference between each student's
score and the average score on that test. The last two rows show the sums and mean scores that we will
use to conduct the regression analysis.
Student    xi    yi    (xi - x̄)    (yi - ȳ)
1          95    85        17           8
2          85    95         7          18
3          80    70         2          -7
4          70    65        -8         -12
5          60    70       -18          -7
Sum       390   385
Mean       78    77
And for each student, we also need to compute the squares of the deviation scores (the last two columns
in the table below).
Student    xi    yi    (xi - x̄)²    (yi - ȳ)²
1          95    85        289           64
2          85    95         49          324
3          80    70          4           49
4          70    65         64          144
5          60    70        324           49
Sum       390   385        730          630
Mean       78    77
And finally, for each student, we need to compute the product of the deviation scores.
Student    xi    yi    (xi - x̄)(yi - ȳ)
1          95    85              136
2          85    95              126
3          80    70              -14
4          70    65               96
5          60    70              126
Sum       390   385              470
Mean       78    77
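For readers who want to verify the arithmetic programmatically, the short NumPy sketch below (an
illustration, not part of the original lesson) reproduces the sums 730, 630, and 470 from the raw scores.

```python
# Minimal sketch (NumPy assumed) reproducing the sums from the three tables.
import numpy as np

x = np.array([95, 85, 80, 70, 60], dtype=float)   # aptitude scores
y = np.array([85, 95, 70, 65, 70], dtype=float)   # statistics grades

dx, dy = x - x.mean(), y - y.mean()               # deviation scores (x̄ = 78, ȳ = 77)
print("Σ(xi - x̄)² =", np.sum(dx ** 2))            # 730
print("Σ(yi - ȳ)² =", np.sum(dy ** 2))            # 630
print("Σ(xi - x̄)(yi - ȳ) =", np.sum(dx * dy))     # 470
```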
The regression equation is a linear equation of the form: ŷ = b0 + b1x. To conduct a regression analysis,
we need to solve for b0 and b1. Computations are shown below. Notice that all of our inputs for the
regression analysis come from the above three tables.
First, we solve for the regression slope (b1):
b1 = Σ [ (xi - x̄)(yi - ȳ) ] / Σ [ (xi - x̄)² ]
b1 = 470/730
b1 = 0.644
Once we know the value of the regression slope (b1), we can solve for the regression intercept (b0):
b0 = ȳ - b1 * x̄
b0 = 77 - (0.644)(78)
b0 = 26.768
Therefore, the regression equation is: ŷ = 26.768 + 0.644x .
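The same coefficients can be reproduced in a few lines of NumPy. The sketch below simply restates the
formulas above in code and cross-checks them against np.polyfit; it is an illustration, not part of the
original lesson.

```python
# Minimal sketch (NumPy assumed) restating the formulas above in code.
import numpy as np

x = np.array([95, 85, 80, 70, 60], dtype=float)
y = np.array([85, 95, 70, 65, 70], dtype=float)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)  # 470/730 ≈ 0.644
b0 = y.mean() - b1 * x.mean()                                               # 77 - 0.644*78 ≈ 26.768
print(f"ŷ = {b0:.3f} + {b1:.3f}x")

slope, intercept = np.polyfit(x, y, 1)   # built-in degree-1 fit should agree
print(slope, intercept)
```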
How to Use the Regression Equation
Once you have the regression equation, using it is a snap. Choose a value for the independent variable (x),
perform the computation, and you have an estimated value (ŷ) for the dependent variable.
In our example, the independent variable is the student's score on the aptitude test. The dependent
variable is the student's statistics grade. If a student made an 80 on the aptitude test, the estimated
statistics grade (ŷ) would be:
ŷ = b0 + b1x
ŷ = 26.768 + 0.644x = 26.768 + 0.644 * 80
ŷ = 26.768 + 51.52 = 78.288
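The same prediction, expressed as a quick code check using the rounded coefficients from the text:

```python
# Quick check of the prediction, using the rounded coefficients from the text.
b0, b1 = 26.768, 0.644
x_new = 80
print(b0 + b1 * x_new)   # 78.288
```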
Warning: When you use a regression equation, do not use values for the independent variable that are
outside the range of values used to create the equation. That is called extrapolation, and it can produce
unreasonable estimates.
In this example, the aptitude test scores used to create the regression equation ranged from 60 to 95.
Therefore, only use values inside that range to estimate statistics grades. Using values outside that range
(less than 60 or greater than 95) is problematic.
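One simple way to respect this range restriction in code is to refuse predictions outside the observed
x-range. The helper below, predict_grade, is a hypothetical illustration, not something defined in the
source.

```python
# Hypothetical helper (not from the source) that refuses to extrapolate
# outside the observed aptitude-score range of 60 to 95.
def predict_grade(x_new, b0=26.768, b1=0.644, x_min=60, x_max=95):
    if not (x_min <= x_new <= x_max):
        raise ValueError(f"x = {x_new} is outside the fitted range [{x_min}, {x_max}]")
    return b0 + b1 * x_new

print(predict_grade(80))    # ≈ 78.288
# predict_grade(40) would raise a ValueError instead of extrapolating.
```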
How to Find the Coefficient of Determination
Whenever you use a regression equation, you should ask how well the equation fits the data. One way to
assess fit is to check the coefficient of determination, which can be computed from the following formula.
R² = { ( 1 / N ) * Σ [ (xi - x̄)(yi - ȳ) ] / (σx * σy) }²
where N is the number of observations used to fit the model, Σ is the summation symbol, xi is the x value
for observation i, x̄ is the mean x value, yi is the y value for observation i, ȳ is the mean y value, σx is
the standard deviation of x, and σy is the standard deviation of y.
Computations for the sample problem of this lesson are shown below. We begin by computing the
standard deviation of x (σx):
σx = sqrt [ Σ ( xi - x̄ )² / N ]
σx = sqrt( 730/5 ) = sqrt(146) = 12.083
Next, we find the standard deviation of y, (σy):
σy = sqrt [ Σ ( yi - ȳ )² / N ]
σy = sqrt( 630/5 ) = sqrt(126) = 11.225
And finally, we compute the coefficient of determination (R²):
R² = { ( 1 / N ) * Σ [ (xi - x̄)(yi - ȳ) ] / (σx * σy) }²
R² = [ ( 1/5 ) * 470 / ( 12.083 * 11.225 ) ]²
R² = ( 94 / 135.632 )² = ( 0.693 )² = 0.48
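The sketch below (an illustration assuming NumPy and the same five-student data) repeats this
computation, using population standard deviations as the text does, i.e., dividing by N rather than N - 1.

```python
# Minimal sketch (NumPy assumed) of the R² computation, using population
# standard deviations (divide by N, not N - 1) as the text does.
import numpy as np

x = np.array([95, 85, 80, 70, 60], dtype=float)
y = np.array([85, 95, 70, 65, 70], dtype=float)

sigma_x = np.std(x)                                   # sqrt(730/5) ≈ 12.083
sigma_y = np.std(y)                                   # sqrt(630/5) ≈ 11.225
r = np.mean((x - x.mean()) * (y - y.mean())) / (sigma_x * sigma_y)
print(sigma_x, sigma_y, r ** 2)                       # R² ≈ 0.48
```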
A coefficient of determination equal to 0.48 indicates that about 48% of the variation in statistics grades
(the dependent variable) can be explained by the relationship to math aptitude scores (the independent
variable). This would be considered a good fit to the data, in the sense that it would substantially improve
an educator's ability to predict student performance in statistics class.
Source: https://stattrek.com/regression/regression-example.aspx