Predictors
Antonio Ferramosca - DSI 2021
Exercise 1
Consider the system given by:
x1 (t + 1) = 0.6x1 (t) + u(t) + e(t)
x2 (t + 1) = x1 (t) + 0.5x2 (t) + 4e(t)
y(t) = x2 (t)
with e(t) ∼ W N (0, 1)
1. Classify the system, find the state-space matrices, and study the stability of the deterministic portion.
2. Obtain its ARMAX representation.
3. Is the system in canonical form? If not, find the canonical representation of the process.
4. Find the optimal 2-steps predictor and the associated prediction error variance.
SOLUTION:
1.
The state-space model is second order (2 states), LTI (the parameters are not time-varying and the equations are linear), and strictly proper (u(t) does not appear in the output transformation). The deterministic part of the system is SISO, but there is also a stochastic portion.
The state-space matrices are given by:
A = [0.6 0; 1 0.5],   B = [1; 0],   C = [0 1],   D = 0,   E = [1; 4]
In compact form, defining x = (x1, x2)' we have
x(t + 1) = Ax(t) + Bu(t) + Ee(t)
y(t) = Cx(t)
As for stability of the deterministic portion (assuming e(t) = 0, ∀t) we have to check the eigenvalues of matrix A: they should be inside the unit circle, |λ1,2| < 1.
To find the eigenvalues we have to solve
det(A − λI) = 0
In this case λ1 = 0.6 and λ2 = 0.5 (A is triangular, so the eigenvalues are the elements of the diagonal).
Since |λ1,2| < 1 is fulfilled, the system is asymptotically stable.
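The stability check above can be reproduced numerically; a minimal sketch using NumPy, with the matrices as defined above:

```python
import numpy as np

# Deterministic part: x(t+1) = A x(t) + B u(t)
A = np.array([[0.6, 0.0],
              [1.0, 0.5]])

# Eigenvalues of a (lower) triangular matrix are its diagonal entries
eigvals = np.linalg.eigvals(A)
print(sorted(abs(eigvals)))          # moduli of the eigenvalues

# Asymptotic stability: all eigenvalues strictly inside the unit circle
assert all(abs(l) < 1 for l in eigvals)
```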
2.
The ARMAX model can be easily computed:
zx1 (t) = 0.6x1 (t) + u(t) + e(t)
zx2 (t) = x1 (t) + 0.5x2 (t) + 4e(t)
y(t) = x2 (t)
then
(z − 0.6)x1 (t) = u(t) + e(t)
(z − 0.5)x2 (t) = x1 (t) + 4e(t)
y(t) = x2 (t)
Since y(t) = x2(t), let's determine x2(t):
x2(t) = 1/(z − 0.5) x1(t) + 4/(z − 0.5) e(t)
x1(t) = 1/(z − 0.6) u(t) + 1/(z − 0.6) e(t)
Then:
y(t) = x2(t) = 1/((z − 0.5)(z − 0.6)) u(t) + 1/((z − 0.5)(z − 0.6)) e(t) + 4/(z − 0.5) e(t)
= 1/(z^2 − 1.1z + 0.3) u(t) + (4z − 1.4)/(z^2 − 1.1z + 0.3) e(t)
Multiplying numerators and denominators by z^-2, we get:
y(t) = 1/(1 − 1.1z^-1 + 0.3z^-2) u(t − 2) + (4z^-1 − 1.4z^-2)/(1 − 1.1z^-1 + 0.3z^-2) e(t)    (1)
This is the ARMAX representation in which we can recognize the following terms:
A(z) = 1 − 1.1z^-1 + 0.3z^-2
B(z) = 1
C(z) = 4z^-1 − 1.4z^-2
k = 2
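The polynomial algebra above can be cross-checked numerically; a sketch using NumPy (np.poly applied to the A matrix gives its characteristic polynomial, which is exactly A(z) in positive powers of z):

```python
import numpy as np

A = np.array([[0.6, 0.0],
              [1.0, 0.5]])

# A(z): characteristic polynomial det(zI - A) = z^2 - 1.1z + 0.3
a_poly = np.poly(A)
assert np.allclose(a_poly, [1.0, -1.1, 0.3])

# Noise numerator: 1 + 4(z - 0.6) = 4z - 1.4
# (the e(t) path through x1 plus the direct 4e(t) path into x2)
c_poly = np.polyadd([0.0, 1.0], 4 * np.array([1.0, -0.6]))
assert np.allclose(c_poly, [4.0, -1.4])

# And A(z) factorizes as (z - 0.5)(z - 0.6)
assert np.allclose(np.convolve([1, -0.5], [1, -0.6]), a_poly)
```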
3.
So,
W(z) = (4z^-1 − 1.4z^-2)/(1 − 1.1z^-1 + 0.3z^-2)
is not in canonical form. In fact:
• C(z) and A(z) have the same degree: NO
• C(z) and A(z) are monic: NO
• C(z) and A(z) are coprime: YES. The root of C(z) is given by z = 1.4/4 = 0.35. As for the roots of A(z), note that A(z) = (z − 0.5)(z − 0.6), so they are z = 0.5 and z = 0.6 (which are exactly the eigenvalues of the A matrix, see Question 1).
• C(z) and A(z) have roots inside the unit circle: YES
We have to deal with the first two conditions in order to obtain the canonical form. As for the first condition, if we multiply and divide by z we get
W(z) = (z/z) · (4z^-1 − 1.4z^-2)/(1 − 1.1z^-1 + 0.3z^-2) = z^-1 (4 − 1.4z^-1)/(1 − 1.1z^-1 + 0.3z^-2)
For the second condition we can factor 4 out of the numerator:
W(z) = z^-1 (4 − 1.4z^-1)/(1 − 1.1z^-1 + 0.3z^-2) = 4z^-1 (1 − 0.35z^-1)/(1 − 1.1z^-1 + 0.3z^-2)
Then we can rewrite the stochastic portion of the process as:
y(t) = W(z)e(t)
     = (1 − 0.35z^-1)/(1 − 1.1z^-1 + 0.3z^-2) · 4z^-1 e(t)
     = (1 − 0.35z^-1)/(1 − 1.1z^-1 + 0.3z^-2) · η(t)
with W1(z) = (1 − 0.35z^-1)/(1 − 1.1z^-1 + 0.3z^-2) and η(t) = 4e(t − 1) the new white noise, which is such that
mη = E[η(t)] = E[4e(t − 1)] = 0
γη(0) = E[η(t)^2] = E[(4e(t − 1))^2] = 16
So η(t) ∼ W N (0, 16) and
y(t) = 1/(1 − 1.1z^-1 + 0.3z^-2) u(t − 2) + (1 − 0.35z^-1)/(1 − 1.1z^-1 + 0.3z^-2) η(t)
which is in canonical form.
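To convince ourselves that the canonical form describes the same process, we can compare the output spectra of the two stochastic representations on the unit circle; a numerical sketch (the two must coincide, since η(t) = 4e(t − 1) only moves the gain and the delay into the noise):

```python
import numpy as np

w = np.linspace(0, np.pi, 200)
z = np.exp(1j * w)                    # points on the unit circle
den = 1 - 1.1 * z**-1 + 0.3 * z**-2   # A(z), common to both forms

# Original form: C(z) = 4z^-1 - 1.4z^-2 driven by e ~ WN(0, 1)
S_orig = np.abs((4 * z**-1 - 1.4 * z**-2) / den) ** 2 * 1.0

# Canonical form: W1(z) = (1 - 0.35z^-1)/A(z) driven by eta ~ WN(0, 16)
S_canon = np.abs((1 - 0.35 * z**-1) / den) ** 2 * 16.0

assert np.allclose(S_orig, S_canon)   # identical spectra
```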
4.
In order to compute the optimal 2-steps predictor, we can simply apply the formula for ARMAX:
ŷ(t|t − 2) = (B(z)Q2(z)/C(z)) u(t − 2) + (R2(z)/C(z)) y(t)
In order to find the Q2 (z) and R2 (z) we have to solve a 2-steps long division between C(z)
and A(z). We get:
Q2(z) = 1 + 0.75z^-1,   R2(z) = 0.525z^-2 − 0.225z^-3
with R2(z) = z^-2 R(z) and R(z) = 0.525 − 0.225z^-1.
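The 2-steps long division can be reproduced numerically; a sketch where polynomials are stored as coefficient lists in powers of z^-1:

```python
import numpy as np

def long_division(c, a, k):
    """k-step long division of C(z) by A(z) (coefficients in powers of z^-1).
    Returns Q_k (first k quotient coefficients) and the remainder R_k."""
    rem = np.array(c, dtype=float)
    q = []
    for i in range(k):
        coef = rem[i] if i < len(rem) else 0.0
        q.append(coef)
        # subtract coef * z^-i * A(z) from the current remainder
        term = np.zeros(i + len(a))
        term[i:] = coef * np.array(a)
        n = max(len(rem), len(term))
        rem = np.pad(rem, (0, n - len(rem))) - np.pad(term, (0, n - len(term)))
    return np.array(q), rem

C = [1.0, -0.35]           # C(z) = 1 - 0.35 z^-1 (canonical form)
A = [1.0, -1.1, 0.3]       # A(z) = 1 - 1.1 z^-1 + 0.3 z^-2
Q2, R2 = long_division(C, A, 2)
print(Q2)                  # quotient: 1 + 0.75 z^-1
print(R2)                  # remainder: 0.525 z^-2 - 0.225 z^-3
```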
So:
ŷ(t|t − 2) = (B(z)Q2(z)/C(z)) u(t − 2) + (R2(z)/C(z)) y(t)
           = ((1 + 0.75z^-1)/(1 − 0.35z^-1)) u(t − 2) + ((0.525z^-2 − 0.225z^-3)/(1 − 0.35z^-1)) y(t)
           = ((1 + 0.75z^-1)/(1 − 0.35z^-1)) u(t − 2) + ((0.525 − 0.225z^-1)/(1 − 0.35z^-1)) y(t − 2)
That is, in the recursive representation:
ŷ(t|t − 2) = 0.35ŷ(t − 1|t − 3) + u(t − 2) + 0.75u(t − 3) + 0.525y(t − 2) − 0.225y(t − 3)
which only depends on information up to time t − 2.
As for the variance of the prediction error ε(t) = Q2(z)η(t), we have:
Var[ε(t)] = E[ε(t)^2] = E[(η(t) + 0.75η(t − 1))^2] = 16 + 0.75^2 · 16 = 25
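As a sanity check, we can simulate the ARMAX recursion (with u ≡ 0) together with the recursive predictor derived above and estimate the prediction error variance; a Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
e = rng.standard_normal(N)            # e ~ WN(0, 1)

# System (u = 0): y(t) = 1.1 y(t-1) - 0.3 y(t-2) + 4 e(t-1) - 1.4 e(t-2)
y = np.zeros(N)
for t in range(2, N):
    y[t] = 1.1 * y[t-1] - 0.3 * y[t-2] + 4 * e[t-1] - 1.4 * e[t-2]

# 2-steps predictor: yhat(t) = 0.35 yhat(t-1) + 0.525 y(t-2) - 0.225 y(t-3)
yhat = np.zeros(N)
for t in range(3, N):
    yhat[t] = 0.35 * yhat[t-1] + 0.525 * y[t-2] - 0.225 * y[t-3]

err = y[1000:] - yhat[1000:]          # discard the initial transient
print(err.var())                      # should be close to 25
```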
Exercise 2
Consider the process given by:
y(t) = 0.2y(t − 3) + e(t), e(t) ∼ W N (0, 1)
1. Compute the mean and the variance of the process.
2. Compute the 1-step, 2-steps, 3-steps predictors.
SOLUTION:
1.
The process is an AR(3) with a pure delay of 3 time instants. The transfer function is:
W(z) = 1/(1 − 0.2z^-3) = z^3/(z^3 − 0.2)
which has 3 poles (the cube roots of 0.2), all with modulus 0.2^(1/3) ≈ 0.58 < 1, thus inside the unit circle.
Since e(t) is a WN and W(z) is asymptotically stable, y(t) is a WSS process.
Since the noise is zero-mean, my = E[y(t)] = 0.
As for the variance:
γy(0) = E[y(t)^2] = E[(0.2y(t − 3) + e(t))^2] = 0.04γy(0) + E[e(t)^2]
Then:
(1 − 0.04)γy(0) = 1  ⇒  γy(0) = 1/0.96 ≈ 1.042
Let’s compute γy (1):
γy (1) = E[y(t)y(t − 1)] = E[(0.2y(t − 3) + e(t)) y(t − 1)]
= E[0.2y(t − 3)y(t − 1) + e(t)y(t − 1)]
= 0.2γy (−2)
= 0.2γy (2)
So we need to know γy (2) in order to compute γy (1):
γy (2) = E[y(t)y(t − 2)] = E[(0.2y(t − 3) + e(t)) y(t − 2)]
= E[0.2y(t − 3)y(t − 2) + e(t)y(t − 2)]
= 0.2γy (−1)
= 0.2γy (1)
Thus:
γy (1) = 0.2γy (2)
γy (2) = 0.2γy (1)
which implies γy (1) = γy (2) = 0.
So, is the covariance null for any|τ | > 0? Let’s find out...
γy(3) = E[y(t)y(t − 3)] = E[(0.2y(t − 3) + e(t)) y(t − 3)]
= E[0.2y(t − 3)^2 + e(t)y(t − 3)]
= 0.2γy(0)
≈ 0.208
which is not null.
γy (4) = E[y(t)y(t − 4)] = E[(0.2y(t − 3) + e(t)) y(t − 4)]
= E[0.2y(t − 3)y(t − 4) + e(t)y(t − 4)]
= 0.2γy (1)
= 0
γy (5) = E[y(t)y(t − 5)] = E[(0.2y(t − 3) + e(t)) y(t − 5)]
= E[0.2y(t − 3)y(t − 5) + e(t)y(t − 5)]
= 0.2γy (2)
= 0
γy (6) = E[y(t)y(t − 6)] = E[(0.2y(t − 3) + e(t)) y(t − 6)]
= E[0.2y(t − 3)y(t − 6) + e(t)y(t − 6)]
= 0.2γy (3)
≈ 0.042
which is not null. Let’s compute a few more values:
γy (7) = E[y(t)y(t − 7)] = E[(0.2y(t − 3) + e(t)) y(t − 7)]
= E[0.2y(t − 3)y(t − 7) + e(t)y(t − 7)]
= 0.2γy (4)
= 0
γy (8) = E[y(t)y(t − 8)] = E[(0.2y(t − 3) + e(t)) y(t − 8)]
= E[0.2y(t − 3)y(t − 8) + e(t)y(t − 8)]
= 0.2γy (5)
= 0
γy (9) = E[y(t)y(t − 9)] = E[(0.2y(t − 3) + e(t)) y(t − 9)]
= E[0.2y(t − 3)y(t − 9) + e(t)y(t − 9)]
= 0.2γy (6)
≈ 0.0083
which is not null. The pattern is now clear: γy(τ) ≠ 0 only when τ is a multiple of 3, and γy(3k) = 0.2^k γy(0).
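The pattern can be checked by simulating the process and estimating the autocovariance; a Monte Carlo sketch (tolerances would need to be loose to absorb the sampling error):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500_000
e = rng.standard_normal(N)            # e ~ WN(0, 1)

# AR process with a pure 3-step delay: y(t) = 0.2 y(t-3) + e(t)
y = np.zeros(N)
for t in range(3, N):
    y[t] = 0.2 * y[t-3] + e[t]
y = y[1000:]                          # discard the initial transient

def gamma(tau):
    """Sample autocovariance of the (zero-mean) process at lag tau."""
    return np.mean(y[tau:] * y[:len(y) - tau])

print(gamma(0))                       # ~ 1/0.96   = 1.042
print(gamma(3))                       # ~ 0.2/0.96 = 0.208
print(gamma(1), gamma(2))             # ~ 0
```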
2.
First of all, we need to check if the process is in canonical form. We know that the transfer function is asymptotically stable from our previous study, so the poles are inside the unit circle. Also, there are no zeros.
If we recall it:
W(z) = 1/(1 − 0.2z^-3)
we can see that the two polynomials C(z) = 1 and A(z) = 1 − 0.2z^-3 (i.e., z^3 and z^3 − 0.2 in positive powers of z) have the same degree, are monic, and are coprime. Thus the process is in canonical form.
So, applying the formula for ARMA, we can see that:
ŷ(t|t − r) = (Rr(z)/C(z)) y(t),   and   ε(t) = Qr(z)e(t)
It’s easy to see that the 1-step predictor is such that
Q1(z) = 1,   R1(z) = C(z) − A(z) = 0.2z^-3
Thus:
ŷ(t|t − 1) = 0.2y(t − 3), and ε(t) = e(t)
As for the 2-steps predictor, notice that, due to the particular structure of the process (a pure delay of 3 time steps), the 2-steps long division still gives:
Q2(z) = 1,   R2(z) = 0.2z^-3
The same occurs with the 3-steps predictor: Q3(z) = 1 and R3(z) = 0.2z^-3.
Thus:
ŷ(t|t − 2) = 0.2y(t − 3), and ε(t) = e(t)
ŷ(t|t − 3) = 0.2y(t − 3), and ε(t) = e(t)
This is so because the process is an AR with a pure delay of 3 time steps: no new information about y(t) arrives between t − 3 and t − 1, so the predictors must satisfy the constraint of the pure delay. Hence the 1-step and 2-steps predictors are equal to the 3-steps predictor.
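The claim can be verified with the same r-steps long division used for the ARMAX predictors of Exercise 1; a self-contained sketch (coefficients in powers of z^-1):

```python
import numpy as np

def long_division(c, a, k):
    """k-step long division of C(z) by A(z); returns (Q_k, R_k) coefficients."""
    rem = np.pad(np.array(c, float), (0, k + len(a)))
    q = np.zeros(k)
    for i in range(k):
        q[i] = rem[i]
        rem[i:i + len(a)] -= q[i] * np.array(a)
    return q, rem

A = [1.0, 0.0, 0.0, -0.2]             # A(z) = 1 - 0.2 z^-3, C(z) = 1
for r in (1, 2, 3):
    Q, R = long_division([1.0], A, r)
    # Q_r = 1 (all later quotient coefficients vanish), R_r = 0.2 z^-3
    assert Q[0] == 1.0 and np.allclose(Q[1:], 0.0)
    assert np.allclose(R[3], 0.2) and np.allclose(np.delete(R, 3), 0.0)
print("Q_r = 1 and R_r = 0.2 z^-3 for r = 1, 2, 3")
```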