
Econometrics II Part I

Solutions for the Problem Set

Problem 1 (30 points)


a) Find the MA(∞) representation of the following processes:
i) Yt = 2 + 0.5Yt−1 + εt ;
Solution. Using the lag operator, the process can be re-written as

\[
(1 - 0.5L)Y_t = 2 + \varepsilon_t,
\]

and since |0.5| < 1, the process for {Yt} is weakly stationary and hence can be written as an MA(∞):

\[
\begin{aligned}
Y_t &= (1 - 0.5L)^{-1}(2 + \varepsilon_t) \\
&= (1 + 0.5L + 0.5^2L^2 + \dots)\,2 + (1 + 0.5L + 0.5^2L^2 + \dots)\,\varepsilon_t \\
&= 2(1 + 0.5 + 0.5^2 + \dots) + \sum_{j=0}^{+\infty} 0.5^j L^j \varepsilon_t \\
&= \frac{2}{1 - 0.5} + \sum_{j=0}^{+\infty} 0.5^j \varepsilon_{t-j} \\
&= 4 + \sum_{j=0}^{+\infty} 0.5^j \varepsilon_{t-j},
\end{aligned}
\]

where we used the fact that L^j c = c for any constant c and all j.


ii) Yt = −3 − 0.8Yt−1 + εt .
Solution. Since |−0.8| < 1, the process for {Yt} is weakly stationary and hence can be written as an MA(∞):

\[
\begin{aligned}
(1 + 0.8L)Y_t &= -3 + \varepsilon_t \\
Y_t &= (1 + 0.8L)^{-1}(-3 + \varepsilon_t) \\
&= (1 - 0.8L + 0.8^2L^2 - \dots)(-3) + (1 + (-0.8)L + (-0.8)^2L^2 + \dots)\,\varepsilon_t \\
&= -3(1 - 0.8 + 0.8^2 - \dots) + \sum_{j=0}^{+\infty} (-0.8)^j L^j \varepsilon_t \\
&= -\frac{3}{1 + 0.8} + \sum_{j=0}^{+\infty} (-0.8)^j \varepsilon_{t-j} \\
&= -\frac{5}{3} + \sum_{j=0}^{+\infty} (-0.8)^j \varepsilon_{t-j}.
\end{aligned}
\]
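As a quick numerical sanity check of part i) (a sketch, not part of the graded solution; it assumes numpy is available), we can simulate the AR(1) and compare it with a truncated version of the MA(∞) representation derived above; the check for part ii) is identical up to the constants.

```python
import numpy as np

rng = np.random.default_rng(0)
T, J = 10_000, 60                 # sample length and MA truncation order
eps = rng.standard_normal(T)

# Simulate Y_t = 2 + 0.5 Y_{t-1} + eps_t, starting from the unconditional mean 4
Y = np.empty(T)
Y[0] = 4.0
for t in range(1, T):
    Y[t] = 2 + 0.5 * Y[t - 1] + eps[t]

# Truncated MA(infinity): Y_t ~= 4 + sum_{j=0}^{J} 0.5^j eps_{t-j}
coefs = 0.5 ** np.arange(J + 1)
Y_ma = 4 + np.array([coefs @ eps[t - J : t + 1][::-1] for t in range(J, T)])

print(np.max(np.abs(Y[J:] - Y_ma)))   # tiny: truncation error is O(0.5^J)
```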

b) Write the following stochastic processes using the lag operator and find their ARIMA(p, d, q) classification (i.e. find p, d, and q):

i) Yt = −2 + 0.1Yt−2 + 0.3Yt−3 + 0.5Yt−5 + εt ;


Solution. Using the lag operator, the process can be re-written as

\[
(1 - 0.1L^2 - 0.3L^3 - 0.5L^5)Y_t = -2 + \varepsilon_t.
\]

Note that 0.1 + 0.3 + 0.5 = 0.9 < 1, i.e. the sufficient condition for stationarity of an AR(p) process is satisfied, and so the roots of the characteristic polynomial 1 − 0.1x² − 0.3x³ − 0.5x⁵ lie outside the unit circle. The order of the lag polynomial applied to Yt is 5, and the order of the lag polynomial applied to εt is 0, so the process is AR(5), or ARIMA(5, 0, 0).
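The root condition can be confirmed directly (a sketch assuming numpy; note that numpy.roots takes coefficients ordered from the highest power down):

```python
import numpy as np

# p(x) = 1 - 0.1 x^2 - 0.3 x^3 - 0.5 x^5, coefficients from x^5 down to x^0
roots = np.roots([-0.5, 0.0, -0.3, -0.1, 0.0, 1.0])
print(np.abs(roots))               # every modulus is strictly greater than 1
assert np.all(np.abs(roots) > 1)
```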
ii) Yt = 5 + 2.5Yt−1 − 2Yt−2 + 0.5Yt−3 + εt + 0.3εt−3 + 0.09εt−6 + ... + 0.3^s εt−3s.
Solution. Using the lag operator, the process can be re-written as

\[
(1 - 2.5L + 2L^2 - 0.5L^3)Y_t = 5 + (1 + 0.3L^3 + 0.3^2L^6 + \dots + 0.3^sL^{3s})\,\varepsilon_t.
\]

The roots of the characteristic polynomial for the lag operator applied to Yt can be found as the solutions to the following equation:

\[
1 - 2.5x + 2x^2 - 0.5x^3 = 0.
\]

The left-hand side can be factored:

\[
\begin{aligned}
1 - 0.5x - 2x + x^2 + x^2 - 0.5x^3 &= 0 \\
(1 - 0.5x) - 2x(1 - 0.5x) + x^2(1 - 0.5x) &= 0 \\
(1 - 0.5x)(1 - 2x + x^2) &= 0 \\
(1 - 0.5x)(1 - x)^2 &= 0.
\end{aligned}
\]

We can see that one root of this equation is equal to 2, and there are
two roots equal to 1. So the process has two unit roots, and one root
outside the unit circle.
Hence, the process for Yt can be written as

\[
(1 - L)^2(1 - 0.5L)Y_t = 5 + (1 + 0.3L^3 + 0.3^2L^6 + \dots + 0.3^sL^{3s})\,\varepsilon_t,
\]

and since the order of the lag polynomial applied to εt is 3s, this process is ARIMA(1, 2, 3s).
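The factorization can be double-checked with polynomial multiplication (a sketch assuming numpy; np.polymul also uses highest-power-first coefficients):

```python
import numpy as np

one_minus_half_x = [-0.5, 1.0]                      # 1 - 0.5x
one_minus_x_squared = np.polymul([-1, 1], [-1, 1])  # (1 - x)^2
print(np.polymul(one_minus_half_x, one_minus_x_squared))
# [-0.5  2.  -2.5  1. ]  i.e. -0.5x^3 + 2x^2 - 2.5x + 1, as claimed
```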
c) Are the following processes weakly stationary? Are they invertible? Explain your answers.

i) Yt = 5 + 0.3Yt−1 − Yt−2 + 0.3Yt−3 + εt − εt−1

Solution. The process can be re-written as

\[
(1 - 0.3L + L^2 - 0.3L^3)Y_t = 5 + (1 - L)\,\varepsilon_t.
\]

The characteristic equation for the MA part has a root on the unit circle:

\[
1 - x = 0 \;\Rightarrow\; x = 1,
\]

so the MA part is not invertible.
For the AR part, the characteristic equation is

\[
\begin{aligned}
1 - 0.3x + x^2 - 0.3x^3 &= 0 \\
(1 - 0.3x) + x^2(1 - 0.3x) &= 0 \\
(1 - 0.3x)(1 + x^2) &= 0.
\end{aligned}
\]

The roots are x_{1,2} = ±i and x_3 = 10/3. The last root lies outside the unit circle, but the first two lie exactly on the unit circle: |±i| = √(0² + 1²) = 1.
Hence, the process is neither weakly stationary nor invertible.
ii) Yt = 0.8Yt−1 − 0.25Yt−2 + εt + 1.3εt−1
Solution. The process can be re-written as

\[
(1 - 0.8L + 0.25L^2)Y_t = (1 + 1.3L)\,\varepsilon_t.
\]

The characteristic equation for the MA part has a root inside the unit circle:

\[
1 + 1.3x = 0 \;\Rightarrow\; x = -\frac{10}{13},
\]

so the MA part is not invertible.
For the AR part, the characteristic equation is

\[
1 - 0.8x + 0.25x^2 = 0, \qquad x_{1,2} = \frac{0.8 \pm 0.6i}{0.5} = 1.6 \pm 1.2i.
\]

The roots are complex numbers, but they lie outside the unit circle:

\[
|1.6 \pm 1.2i| = \sqrt{1.6^2 + 1.2^2} = 2 > 1.
\]

Hence, the process is weakly stationary but not invertible.
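All four root computations in part c) are easy to confirm numerically (a sketch assuming numpy):

```python
import numpy as np

def root_moduli(coeffs_low_to_high):
    """Moduli of polynomial roots, coefficients given constant term first."""
    return np.abs(np.roots(coeffs_low_to_high[::-1]))

# i): AR part 1 - 0.3x + x^2 - 0.3x^3 and MA part 1 - x
print(root_moduli([1.0, -0.3, 1.0, -0.3]))  # moduli {3.33, 1, 1}: two on the circle
print(root_moduli([1.0, -1.0]))             # modulus 1: on the unit circle

# ii): AR part 1 - 0.8x + 0.25x^2 and MA part 1 + 1.3x
print(root_moduli([1.0, -0.8, 0.25]))       # moduli {2, 2}: outside
print(root_moduli([1.0, 1.3]))              # modulus 10/13: inside
```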

Problem 2 (20 points)


Derive the h-step-ahead conditional forecast for the following processes, using the general formula we derived in class:
a) Yt = 5 + εt + 0.5εt−1

Solution. Here µ = 5, Θ(L) = 1, and Φ(L) = 1 + 0.5L. According to the general formula derived in class,

\[
\hat{Y}_{t+h|t} = \mu + \left[L^{-h}\Theta(L)^{-1}\Phi(L)\right]_+ \Phi(L)^{-1}\Theta(L)(Y_t - \mu).
\]

Next,

\[
\left[L^{-h}\Theta(L)^{-1}\Phi(L)\right]_+ = \left[L^{-h}(1 + 0.5L)\right]_+ = \left[L^{-h} + 0.5L^{-h+1}\right]_+.
\]

This expression is equal to 0.5 when h = 1, and to 0 for all h ≥ 2.


Hence,

\[
\hat{Y}_{t+1|t} = 5 + 0.5(1 + 0.5L)^{-1}(Y_t - 5) = 5 + 0.5\sum_{j=0}^{+\infty} (-0.5)^j (Y_{t-j} - 5),
\]

and Ŷt+h|t = 5 for all h ≥ 2.
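A small numerical illustration of part a) (a sketch assuming numpy): on simulated data, the truncated geometric sum above reproduces 5 + 0.5εt, the best one-step predictor.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5_000
eps = rng.standard_normal(T)
Y = 5 + eps
Y[1:] += 0.5 * eps[:-1]              # Y_t = 5 + eps_t + 0.5 eps_{t-1}

t, J = T - 2, 200                    # forecast origin and truncation order
y_hat = 5 + 0.5 * sum((-0.5) ** j * (Y[t - j] - 5) for j in range(J + 1))

print(y_hat, 5 + 0.5 * eps[t])       # the two values coincide
```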

b) Yt = 0.8Yt−1 + εt
Solution. Here µ = 0, Θ(L) = 1 − 0.8L, and Φ(L) = 1.

\[
\begin{aligned}
\left[L^{-h}\Theta(L)^{-1}\Phi(L)\right]_+ &= \Big[L^{-h}\sum_{j=0}^{\infty} 0.8^j L^j\Big]_+ \\
&= \Big[\sum_{j=0}^{\infty} 0.8^j L^{j-h}\Big]_+ \\
&= \sum_{j=h}^{\infty} 0.8^j L^{j-h} \\
&= 0.8^h \sum_{j=h}^{\infty} 0.8^{j-h} L^{j-h} \\
&= 0.8^h \sum_{k=0}^{\infty} 0.8^k L^k \\
&= 0.8^h (1 - 0.8L)^{-1}.
\end{aligned}
\]

Hence,

\[
\hat{Y}_{t+h|t} = 0.8^h (1 - 0.8L)^{-1}(1 - 0.8L)Y_t = 0.8^h Y_t.
\]
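And a Monte Carlo check of part b) (a sketch assuming numpy): starting many independent paths from the same value Yt, the average of Yt+h should approach 0.8^h Yt.

```python
import numpy as np

rng = np.random.default_rng(2)
y_t, h, n_paths = 3.0, 5, 200_000

y = np.full(n_paths, y_t)            # many paths, all starting at Y_t = 3
for _ in range(h):
    y = 0.8 * y + rng.standard_normal(n_paths)

print(y.mean(), 0.8 ** h * y_t)      # both approximately 0.983
```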

Problem 3 (25 points)


a) (15 points) Suppose {Xt} is i.i.d., {Yt} is i.i.d., and {Xt} is independent of {Yt}, with P(Xt = 1) = P(Xt = 0) = 1/2 and E[Yt] = 0, Var(Yt) = σ² < ∞. Let Zt = Xt(1 − Xt−1)Yt. Is {Zt} a white noise? Is it i.i.d.?
Solution. The process {Zt} is a white noise if 1) E[Zt] = 0 for all t; 2) Var(Zt) = σ_Z² < ∞ for all t; and 3) Cov(Zt, Zs) = 0 for all t ≠ s.
Let us check these conditions:

(a) E[Zt] = E[Xt(1 − Xt−1)Yt] = E[Xt]E[1 − Xt−1]E[Yt] = 0 for all t, where the penultimate equality holds because Xt, Xt−1, Yt are independent.
(b) Similarly, Var(Zt) = E[Xt²(1 − Xt−1)²Yt²] = E[Xt²]E[(1 − Xt−1)²]E[Yt²] = (1/2)·(1/2)·σ² = σ²/4 for all t.
(c) Finally, for all t ≠ s,

\[
\begin{aligned}
\operatorname{Cov}(Z_t, Z_s) &= E[Z_t Z_s] \\
&= E[X_t(1 - X_{t-1})Y_t X_s(1 - X_{s-1})Y_s] \\
&= E[X_t(1 - X_{t-1})X_s(1 - X_{s-1})]\,E[Y_t]\,E[Y_s] \\
&= 0,
\end{aligned}
\]

since E[Yt] = E[Ys] = 0.


Hence, {Zt } is indeed a white noise.
Next, let us see whether this process is necessarily i.i.d. Let Yt be such that P(Yt = 1) = P(Yt = −1) = 1/2. Then E[Yt] = 0 and Var(Yt) = 1 < ∞ for all t.
Let us find the distribution of Zt. Zt can take on only 3 values: −1, 0, and 1, with probabilities 1/8, 3/4, and 1/8, since Xt(1 − Xt−1) = 1 with probability 1/4 and Yt = ±1 with probability 1/2 each.
Let us now look at P(Zt = 1, Zt+1 = 1). Zt = 1 only if Xt = 1, Xt−1 = 0, and Yt = 1. Similarly, Zt+1 = 1 only if Xt+1 = 1, Xt = 0, and Yt+1 = 1. These two events are incompatible (they require Xt = 1 and Xt = 0 simultaneously), so P(Zt = 1, Zt+1 = 1) = 0 ≠ 1/8 · 1/8 = P(Zt = 1)P(Zt+1 = 1), and so Zt and Zt+1 are not independent.
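A simulation makes this "white noise but not i.i.d." behaviour visible (a sketch assuming numpy, using the Rademacher Yt from the counterexample above):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1_000_000
X = rng.integers(0, 2, T)                    # P(X_t = 0) = P(X_t = 1) = 1/2
Y = rng.choice([-1.0, 1.0], T)               # Rademacher Y_t
Z = X[1:] * (1 - X[:-1]) * Y[1:]             # Z_t = X_t (1 - X_{t-1}) Y_t

print(Z.mean(), Z.var())                     # approx 0 and 1/4
print(np.mean(Z[1:] * Z[:-1]))               # lag-1 autocovariance, approx 0
print(np.sum((Z[1:] == 1) & (Z[:-1] == 1)))  # exactly 0: two 1s never occur in a row
```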

b) (10 points) Suppose {Xt} are i.i.d. with mean µ and variance σ², and are independent of a r.v. Y with mean 0 and variance 1. Let Zt = Y + (1/t)∑_{j=1}^t Xj. Is {Zt} weakly stationary? Is it ergodic for the mean?
Solution. Note that E[Zt] = E[Y] + E[Xj] = µ, and Var(Zt) = Var(Y) + Var((1/t)∑_{j=1}^t Xj) = 1 + σ²/t, which is not constant over t. Hence {Zt} is not weakly stationary, and so it is not ergodic for the mean either. Indeed, the sample mean (1/T)∑_{t=1}^T Zt converges to Y + µ rather than to µ, because the common component Y never averages out.
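A quick check of Var(Zt) = 1 + σ²/t across Monte Carlo draws (a sketch assuming numpy, with standard normal Xt and Y so that µ = 0 and σ² = 1):

```python
import numpy as np

rng = np.random.default_rng(4)
n_draws = 200_000

for t in (1, 5, 25):
    X = rng.standard_normal((n_draws, t))   # X_1, ..., X_t across draws
    Y = rng.standard_normal(n_draws)        # Y with mean 0, variance 1
    Z = Y + X.mean(axis=1)                  # Z_t = Y + (1/t) sum_j X_j
    print(t, Z.var(), 1 + 1.0 / t)          # sample variance vs 1 + sigma^2/t
```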

Problem 4 (25 points)


Suppose Xt = θXt−1 + εt + ϕεt−1, and let {ηt} be a process such that {(εt, ηt)} is a (vector) white noise process with variance-covariance matrix

\[
\begin{pmatrix} \sigma_\varepsilon^2 & \sigma_{\varepsilon\eta} \\ \sigma_{\varepsilon\eta} & \sigma_\eta^2 \end{pmatrix}.
\]

i) (10 points) Compute the first 10 autocovariances of {Xt }.

Solution. First, we need to assume that |θ| < 1 in order for {Xt} to be weakly stationary. Next, note that E[Xt] = 0. Then

\[
\begin{aligned}
\gamma(1) &= \operatorname{Cov}(X_t, X_{t-1}) \\
&= \operatorname{Cov}(\theta X_{t-1} + \varepsilon_t + \varphi\varepsilon_{t-1},\, X_{t-1}) \\
&= \theta\gamma(0) + \varphi\operatorname{Cov}(\varepsilon_{t-1}, X_{t-1}).
\end{aligned}
\]

Next,

\[
\operatorname{Cov}(X_{t-1}, \varepsilon_{t-1}) = \operatorname{Cov}(\theta X_{t-2} + \varepsilon_{t-1} + \varphi\varepsilon_{t-2},\, \varepsilon_{t-1}) = \operatorname{Var}(\varepsilon_{t-1}) = \sigma_\varepsilon^2 \quad \text{for all } t.
\]

Hence,

\[
\gamma(1) = \theta\gamma(0) + \varphi\sigma_\varepsilon^2.
\]

Next,

\[
\begin{aligned}
\gamma(2) &= \operatorname{Cov}(X_t, X_{t-2}) \\
&= \operatorname{Cov}(\theta X_{t-1} + \varepsilon_t + \varphi\varepsilon_{t-1},\, X_{t-2}) \\
&= \theta\operatorname{Cov}(X_{t-1}, X_{t-2}) + \operatorname{Cov}(\varepsilon_t, X_{t-2}) + \varphi\operatorname{Cov}(\varepsilon_{t-1}, X_{t-2}) \\
&= \theta\gamma(1).
\end{aligned}
\]

And so, for any h > 1,

\[
\gamma(h) = \operatorname{Cov}(X_t, X_{t-h}) = \operatorname{Cov}(\theta X_{t-1} + \varepsilon_t + \varphi\varepsilon_{t-1},\, X_{t-h}) = \theta\gamma(h-1).
\]

So all we need to do is to find γ(0):

\[
\begin{aligned}
\gamma(0) &= \operatorname{Var}(X_t) \\
&= \operatorname{Var}(\theta X_{t-1} + \varepsilon_t + \varphi\varepsilon_{t-1}) \\
&= \theta^2\gamma(0) + \sigma_\varepsilon^2 + \varphi^2\sigma_\varepsilon^2 + 2\theta\varphi\operatorname{Cov}(X_{t-1}, \varepsilon_{t-1}) \\
&= \theta^2\gamma(0) + (1 + \varphi^2 + 2\theta\varphi)\sigma_\varepsilon^2.
\end{aligned}
\]

Hence,

\[
\begin{aligned}
\gamma(0) &= \frac{1 + \varphi^2 + 2\theta\varphi}{1 - \theta^2}\,\sigma_\varepsilon^2, \\
\gamma(1) &= \frac{(\theta + \varphi)(1 + \theta\varphi)}{1 - \theta^2}\,\sigma_\varepsilon^2, \\
\gamma(2) &= \theta\,\frac{(\theta + \varphi)(1 + \theta\varphi)}{1 - \theta^2}\,\sigma_\varepsilon^2, \\
&\ \,\vdots \\
\gamma(10) &= \theta^9\,\frac{(\theta + \varphi)(1 + \theta\varphi)}{1 - \theta^2}\,\sigma_\varepsilon^2.
\end{aligned}
\]
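The autocovariance formulas can be verified by simulation (a sketch assuming numpy, with illustrative values θ = 0.6, ϕ = 0.4, and σε² = 1):

```python
import numpy as np

rng = np.random.default_rng(5)
theta, phi, T = 0.6, 0.4, 500_000
eps = rng.standard_normal(T)                  # sigma_eps^2 = 1

X = np.zeros(T)
for t in range(1, T):
    X[t] = theta * X[t - 1] + eps[t] + phi * eps[t - 1]

g0 = (1 + phi**2 + 2 * theta * phi) / (1 - theta**2)
g1 = (theta + phi) * (1 + theta * phi) / (1 - theta**2)
for h, g in [(0, g0), (1, g1), (2, theta * g1), (3, theta**2 * g1)]:
    print(h, np.mean(X[h:] * X[: T - h]), g)  # sample vs theoretical gamma(h)
```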

ii) (15 points) Let Yt = Xt + ηt . Show that {Yt } is ARMA(p,q), and find p
and q.

Solution. Yt differs from Xt only by the white noise ηt, so it is natural to guess that the autoregressive part of Yt is the same as that of Xt. Let us verify this:

\[
\begin{aligned}
(1 - \theta L)Y_t &= (1 - \theta L)(X_t + \eta_t) \\
&= (1 - \theta L)X_t + (1 - \theta L)\eta_t \\
&= \varepsilon_t + \varphi\varepsilon_{t-1} + \eta_t - \theta\eta_{t-1}.
\end{aligned}
\]

So (1 − θL)Yt is a sum of two MA(1) processes. Let us find its covariance structure.
First, the variance:

\[
\operatorname{Var}(\varepsilon_t + \varphi\varepsilon_{t-1} + \eta_t - \theta\eta_{t-1}) = (1 + \varphi^2)\sigma_\varepsilon^2 + (1 + \theta^2)\sigma_\eta^2 + 2(1 - \theta\varphi)\sigma_{\varepsilon\eta}.
\]

Next, the first autocovariance:

\[
\operatorname{Cov}(\varepsilon_t + \varphi\varepsilon_{t-1} + \eta_t - \theta\eta_{t-1},\; \varepsilon_{t-1} + \varphi\varepsilon_{t-2} + \eta_{t-1} - \theta\eta_{t-2}) = \varphi\sigma_\varepsilon^2 - \theta\sigma_\eta^2 + (\varphi - \theta)\sigma_{\varepsilon\eta}.
\]

Finally, for any h ≥ 2,

\[
\operatorname{Cov}(\varepsilon_t + \varphi\varepsilon_{t-1} + \eta_t - \theta\eta_{t-1},\; \varepsilon_{t-h} + \varphi\varepsilon_{t-h-1} + \eta_{t-h} - \theta\eta_{t-h-1}) = 0,
\]

so (1 − θL)Yt has the autocovariance structure of an MA(1). Hence {Yt} is ARMA(1, 1), i.e. p = q = 1.
Getting to this point is enough to get the full points for this question. However, we might also want to ask whether there is a white noise {ut} such that, for some constant α, we can write

\[
Y_t = \theta Y_{t-1} + u_t + \alpha u_{t-1},
\]

so that

\[
u_t + \alpha u_{t-1} = \varepsilon_t + \varphi\varepsilon_{t-1} + \eta_t - \theta\eta_{t-1}.
\]

Since Var(ut + αut−1) = (1 + α²)σu² and Cov(ut + αut−1, ut−1 + αut−2) = ασu², we can write the following two equations:

\[
(1 + \alpha^2)\sigma_u^2 = (1 + \varphi^2)\sigma_\varepsilon^2 + (1 + \theta^2)\sigma_\eta^2 + 2(1 - \theta\varphi)\sigma_{\varepsilon\eta}
\]

and

\[
\alpha\sigma_u^2 = \varphi\sigma_\varepsilon^2 - \theta\sigma_\eta^2 + (\varphi - \theta)\sigma_{\varepsilon\eta}.
\]

Taking the ratio of the two equations to eliminate σu² gives

\[
\alpha\left[(1 + \varphi^2)\sigma_\varepsilon^2 + (1 + \theta^2)\sigma_\eta^2 + 2(1 - \theta\varphi)\sigma_{\varepsilon\eta}\right] = (1 + \alpha^2)\left[\varphi\sigma_\varepsilon^2 - \theta\sigma_\eta^2 + (\varphi - \theta)\sigma_{\varepsilon\eta}\right].
\]

This is a quadratic equation in α whose two solutions are reciprocals of each other, so one is larger and one is smaller than 1 in absolute value. Picking the solution smaller than 1 in absolute value ensures that the process is invertible. Denote it by α*. Then ut can be recovered as

\[
u_t = (1 + \alpha^* L)^{-1}(\varepsilon_t + \varphi\varepsilon_{t-1} + \eta_t - \theta\eta_{t-1}).
\]
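To make the last step concrete (a sketch with illustrative parameter values, assuming numpy), one can solve the quadratic for α* and verify that the implied MA(1) matches both autocovariances:

```python
import numpy as np

theta, phi = 0.6, 0.4
s_e2, s_n2, s_en = 1.0, 0.5, 0.2   # sigma_eps^2, sigma_eta^2, sigma_eps,eta

A = (1 + phi**2) * s_e2 + (1 + theta**2) * s_n2 + 2 * (1 - theta * phi) * s_en
B = phi * s_e2 - theta * s_n2 + (phi - theta) * s_en

# The quadratic B*alpha^2 - A*alpha + B = 0 has reciprocal roots; pick |alpha| < 1
alpha = [r for r in np.roots([B, -A, B]) if abs(r) < 1][0]
s_u2 = B / alpha                    # from alpha * sigma_u^2 = B

print(alpha, s_u2)
print((1 + alpha**2) * s_u2, A)     # lag-0 autocovariance matches
print(alpha * s_u2, B)              # lag-1 autocovariance matches
```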
