Chapter 2: Econometrics
➢The term ‘simple’ refers to the fact that we use only two variables
(one dependent and one independent variable).
➢'Linear' refers to linearity in the parameters; the model may or may not be linear in
the variables. Each parameter appears with a power of one and is not
multiplied or divided by another parameter.
The true relationship which connects the variables involved is split into
two parts: a systematic part represented by a line (which the fitted model
estimates) and a part represented by the random disturbance term 'u'.
$Y = \alpha + \beta X + u$

For example, a model such as $Y = \alpha + \beta_1 X_1 + \beta_2 X_1 X_2$ is still linear in the parameters even though it is not linear in the variables.
Cont'd
ii. The number of observations n must be greater than the number of parameters to be
estimated.
• Alternatively, the number of observations n must be greater than the number of
explanatory variables.
• From a single observation there is no way to estimate the two unknowns, α and β.
• We need at least two pairs of observations to estimate the two unknowns.
iii. The regression model is correctly specified.
• Alternatively, there is no specification bias or error in the model used in empirical
analysis.
• Some important questions that arise in the specification of the model include the
following:
a. What variables should be included in the model?
b. What is the functional form of the model? Is it linear in the parameters, the variables, or both?
c. What are the probabilistic assumptions made about the Yᵢ, the Xᵢ, and the uᵢ entering the model?
2.4. Methods of estimation
• The next step is the estimation of the numerical values of the parameters of
economic relationships.
• The parameters of the simple linear regression model can be estimated by
various methods. Three of the most commonly used methods are:
1. Ordinary least square method (OLS)
2. Maximum likelihood method (MLM)
3. Method of moments (MM)
▪ Here, however, we will deal only with the OLS method of estimation.
$\sum e_i^2 = \sum (Y_i - \hat{\alpha} - \hat{\beta} X_i)^2$ ..............................(2.9)
To find the values of α̂ and β̂ that minimize this sum, we have to partially differentiate
$\sum e_i^2$ with respect to α̂ and β̂ and set the partial derivatives equal to zero.
$\frac{\partial \sum e_i^2}{\partial \hat{\alpha}} = -2\sum (Y_i - \hat{\alpha} - \hat{\beta} X_i) = 0$ .......................................................(2.10)

which gives the first normal equation:

$\sum Y_i = n\hat{\alpha} + \hat{\beta} \sum X_i$

$\hat{\alpha} = \bar{Y} - \hat{\beta}\bar{X}$ ..........................................................................(2.11)

$\frac{\partial \sum e_i^2}{\partial \hat{\beta}} = -2\sum X_i (Y_i - \hat{\alpha} - \hat{\beta} X_i) = 0$ ..................................................(2.12)
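As a quick check of this differentiation step, the following minimal Python sketch (assuming the sympy library is available) reproduces the first-order conditions (2.10) and (2.12) symbolically for n = 3 observations:

```python
# Symbolic derivation sketch: differentiate the sum of squared residuals
# (equation 2.9) with respect to alpha-hat and beta-hat.
import sympy as sp

n = 3
alpha, beta = sp.symbols('alpha beta')   # stand in for alpha-hat, beta-hat
X = sp.symbols('X1:4')                   # X1, X2, X3
Y = sp.symbols('Y1:4')                   # Y1, Y2, Y3

# Sum of squared residuals, equation (2.9)
S = sum((Y[i] - alpha - beta * X[i])**2 for i in range(n))

# Setting the partial derivatives to zero gives conditions (2.10) and (2.12)
print(sp.Eq(sp.expand(sp.diff(S, alpha)), 0))
print(sp.Eq(sp.expand(sp.diff(S, beta)), 0))
```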
Expanding the first-order conditions (2.10) and (2.12):

$-2\sum Y_i + 2n\hat{\alpha} + 2\hat{\beta}\sum X_i = 0$

$-2\sum X_i Y_i + 2\hat{\alpha}\sum X_i + 2\hat{\beta}\sum X_i^2 = 0$

These simplify to the two normal equations:

$\sum Y_i = n\hat{\alpha} + \hat{\beta}\sum X_i$

$\sum X_i Y_i = \hat{\alpha}\sum X_i + \hat{\beta}\sum X_i^2$

Dividing the first normal equation by n gives $\bar{Y} = \hat{\alpha} + \hat{\beta}\bar{X}$, i.e. equation (2.11) again: $\hat{\alpha} = \bar{Y} - \hat{\beta}\bar{X}$.

Substituting this value of α̂ into the second normal equation, and using $\sum X_i = n\bar{X}$:

$\sum X_i Y_i = (\bar{Y} - \hat{\beta}\bar{X})\sum X_i + \hat{\beta}\sum X_i^2 = n\bar{X}\bar{Y} - \hat{\beta} n\bar{X}^2 + \hat{\beta}\sum X_i^2$

Solving for β̂:

$\hat{\beta} = \frac{\sum X_i Y_i - n\bar{X}\bar{Y}}{\sum X_i^2 - n\bar{X}^2}$

where X̄ and Ȳ are the sample means of X and Y. We now define the deviations from the means, $x_i = X_i - \bar{X}$ and $y_i = Y_i - \bar{Y}$, for which:

$\sum x_i^2 = \sum (X_i - \bar{X})^2 = \sum X_i^2 - n\bar{X}^2$

$\sum x_i y_i = \sum (X_i - \bar{X})(Y_i - \bar{Y}) = \sum X_i Y_i - n\bar{X}\bar{Y}$

Hence the slope estimator can be written compactly in deviation form:

$\hat{\beta} = \frac{\sum x_i y_i}{\sum x_i^2}$

Henceforth we adopt the convention of letting lowercase letters denote deviations from mean values. Since $\sum X_i = n\bar{X}$ and $\sum Y_i = n\bar{Y}$, multiplying the numerator and denominator by n yields the equivalent computational form:

$\hat{\beta} = \frac{n\sum X_i Y_i - \sum X_i \sum Y_i}{n\sum X_i^2 - (\sum X_i)^2}$

Finally, if Y is the dependent variable and X is an explanatory variable, then the sample regression function (SRF) of Y on X is written formally as $\hat{Y}_i = \hat{\alpha} + \hat{\beta} X_i$.
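To make these formulas concrete, here is a minimal Python sketch that computes α̂ and β̂ via the deviation-form expressions; the data values are hypothetical, chosen purely for illustration:

```python
# OLS estimates via the deviation-form formulas:
# beta_hat = sum(x_i * y_i) / sum(x_i**2),  alpha_hat = Ybar - beta_hat * Xbar.
# Hypothetical data, for illustration only.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(X)
Xbar = sum(X) / n
Ybar = sum(Y) / n

x = [Xi - Xbar for Xi in X]   # deviations of X from its mean
y = [Yi - Ybar for Yi in Y]   # deviations of Y from its mean

beta_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi**2 for xi in x)
alpha_hat = Ybar - beta_hat * Xbar   # equation (2.11)

print(f"beta_hat = {beta_hat:.4f}, alpha_hat = {alpha_hat:.4f}")
```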
Estimation of a function with zero intercept
• Suppose it is desired to fit the line $Y_i = \alpha + \beta X_i + U_i$, subject to the restriction $\alpha = 0$.
• To estimate β̂, the problem is put in the form of a restricted minimization problem and the Lagrange method is applied.

We minimize: $\sum_{i=1}^{n} e_i^2 = \sum (Y_i - \hat{\alpha} - \hat{\beta} X_i)^2$

Subject to: $\hat{\alpha} = 0$

The Lagrangian function is $Z = \sum (Y_i - \hat{\alpha} - \hat{\beta} X_i)^2 - \lambda\hat{\alpha}$, which we minimize with respect to α̂, β̂, and λ:

$\frac{\partial Z}{\partial \hat{\alpha}} = -2\sum (Y_i - \hat{\alpha} - \hat{\beta} X_i) - \lambda = 0$ --------(i)

$\frac{\partial Z}{\partial \hat{\beta}} = -2\sum X_i (Y_i - \hat{\alpha} - \hat{\beta} X_i) = 0$ --------(ii)

$\frac{\partial Z}{\partial \lambda} = -\hat{\alpha} = 0$ --------(iii)

Substituting (iii) into (ii) and rearranging, we obtain:

$\sum X_i (Y_i - \hat{\beta} X_i) = 0$

$\sum X_i Y_i - \hat{\beta}\sum X_i^2 = 0$

$\hat{\beta} = \frac{\sum X_i Y_i}{\sum X_i^2}$
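A minimal Python sketch of the restricted (through-the-origin) estimator, again on hypothetical data:

```python
# Regression through the origin: beta_hat = sum(X_i * Y_i) / sum(X_i**2).
# Hypothetical data, for illustration only.
X = [1.0, 2.0, 3.0, 4.0]
Y = [2.2, 3.9, 6.1, 8.0]

beta_hat = sum(Xi * Yi for Xi, Yi in zip(X, Y)) / sum(Xi**2 for Xi in X)
print(f"beta_hat (no intercept) = {beta_hat:.4f}")
```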
2.4.2. Properties of OLS estimators
• The ideal or optimum properties that the OLS estimates possess may be summarized by a
well-known theorem, the Gauss–Markov theorem.
• Statement of the theorem: “Given the assumptions of the classical linear regression model, the
OLS estimators, in the class of linear and unbiased estimators, have the minimum variance;
i.e. the OLS estimators are BLUE.”
• According to this theorem, under the basic assumptions of the classical linear regression
model, the least squares estimators are linear, unbiased and have minimum variance (i.e. they are
the best of all linear unbiased estimators). Sometimes the theorem is referred to as the BLUE
theorem, i.e. Best, Linear, Unbiased Estimator. An estimator is called BLUE if:
a. Linear: it is a linear function of a random variable, such as the dependent variable Y.
b. Unbiased: its average or expected value is equal to the true population parameter.
c. Minimum variance: It has a minimum variance in the class of linear and unbiased
estimators. An unbiased estimator with the least variance is known as an efficient estimator.
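The unbiasedness property can be illustrated by simulation: averaging β̂ across many samples drawn from a model with known parameters should come close to the true β. A minimal Python sketch (the true values α = 1 and β = 2 are assumed purely for illustration):

```python
# Monte Carlo sketch of unbiasedness: the average of the OLS slope estimates
# across many simulated samples should be close to the true beta.
import random

random.seed(0)
alpha_true, beta_true = 1.0, 2.0            # hypothetical true parameters
X = [float(i) for i in range(1, 21)]        # fixed regressor values
Xbar = sum(X) / len(X)
Sxx = sum((Xi - Xbar)**2 for Xi in X)

n_reps = 5000
slopes = []
for _ in range(n_reps):
    # Generate Y from the true model with a standard normal disturbance u
    Y = [alpha_true + beta_true * Xi + random.gauss(0.0, 1.0) for Xi in X]
    Ybar = sum(Y) / len(Y)
    beta_hat = sum((Xi - Xbar) * (Yi - Ybar) for Xi, Yi in zip(X, Y)) / Sxx
    slopes.append(beta_hat)

print(f"mean of beta_hat over {n_reps} samples: {sum(slopes) / n_reps:.4f}")
```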
From the fitted model, the observed deviation of Y from its mean splits into an explained part and a residual; in deviation form, $y_i = \hat{y}_i + e_i$.

By squaring and summing both sides, we obtain the following expression:

$\sum y_i^2 = \sum (\hat{y}_i^2 + e_i^2 + 2\hat{y}_i e_i) = \sum \hat{y}_i^2 + \sum e_i^2$ ...........................(2.49)

since the cross-product term vanishes: $\sum \hat{y}_i e_i = 0$.

From equation (2.37) we have $\hat{y}_i = \hat{\beta} x_i$. Squaring and summing both sides gives us:

$\sum \hat{y}_i^2 = \hat{\beta}^2 \sum x_i^2$ ...........................(2.50)
We can substitute (2.50) into (2.49) and obtain:

$ESS/TSS = \frac{\hat{\beta}^2 \sum x_i^2}{\sum y_i^2}$ ......................................(2.51)

Since $\hat{\beta} = \frac{\sum x_i y_i}{\sum x_i^2}$, this becomes:

$ESS/TSS = \left(\frac{\sum x_i y_i}{\sum x_i^2}\right)^2 \frac{\sum x_i^2}{\sum y_i^2} = \frac{(\sum x_i y_i)^2}{\sum x_i^2 \sum y_i^2}$ ............................(2.52)

Comparing (2.52) and (2.54), we see exactly the same expressions. Therefore:

$ESS/TSS = \frac{(\sum x_i y_i)^2}{\sum x_i^2 \sum y_i^2} = r^2$

From (2.48), RSS = TSS − ESS. Hence R² becomes:

$R^2 = \frac{TSS - RSS}{TSS} = 1 - \frac{RSS}{TSS} = 1 - \frac{\sum e_i^2}{\sum y_i^2}$ ...............(2.55)
The limits of R²: the value of R² falls between zero and one, i.e. 0 ≤ R² ≤ 1.
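A minimal Python sketch of the decomposition, computing TSS, RSS, ESS and R² for a fitted line on hypothetical data:

```python
# R^2 = ESS/TSS = 1 - RSS/TSS, using TSS = ESS + RSS (equation 2.48).
# Hypothetical data, for illustration only.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(X)
Xbar, Ybar = sum(X) / n, sum(Y) / n
beta_hat = sum((Xi - Xbar) * (Yi - Ybar) for Xi, Yi in zip(X, Y)) \
         / sum((Xi - Xbar)**2 for Xi in X)
alpha_hat = Ybar - beta_hat * Xbar

Yhat = [alpha_hat + beta_hat * Xi for Xi in X]        # fitted values
TSS = sum((Yi - Ybar)**2 for Yi in Y)                 # total sum of squares
RSS = sum((Yi - Yh)**2 for Yi, Yh in zip(Y, Yhat))    # residual sum of squares
ESS = TSS - RSS                                       # explained sum of squares
print(f"R^2 = {ESS / TSS:.4f}")   # equals 1 - RSS/TSS, equation (2.55)
```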
ii. Choose the level of significance, which is the probability of rejecting the null
hypothesis when it is true; in other words, the level of significance refers to the
probability of the researcher committing a Type I error.