Stationary Processes for Stat Students


STAT 520 Stationary Stochastic Processes 1

Stationary Stochastic Process

The behavior of a stochastic process, or simply a process, z(t) on a domain T is characterized by the probability distributions of its finite-dimensional restrictions (z(t_1), ..., z(t_m)),
    p(z(t_1), ..., z(t_m)),
for all t_1, ..., t_m ∈ T.

A process is (strictly) stationary if
    p(z(t_1), ..., z(t_m)) = p(z(t_1 + h), ..., z(t_m + h)),
for all t_1, ..., t_m, h ∈ T.

In (discrete-time) time series analysis, T is the integer lattice, so the t_i and h are integers. We write z(t) as z_t.
C. Gu Spring 2024

Moments of Stationary Process

For m = 1 with a stationary process, p(z_t) = p(z) is the same for all t. Its mean and variance are
    µ = E[z_t] = ∫ z p(z) dz,    σ² = E[(z_t − µ)²] = ∫ (z − µ)² p(z) dz.

The autocovariance of the process at lag k is
    γ_k = cov[z_t, z_{t+k}] = E[(z_t − µ)(z_{t+k} − µ)].

The autocorrelation of the process is
    ρ_k = E[(z_t − µ)(z_{t+k} − µ)] / √(E[(z_t − µ)²] E[(z_{t+k} − µ)²]) = γ_k / γ_0,
where γ_0 = σ².

Autocovariance and Positive Definiteness

The covariance matrix of (z_1, ..., z_n),

    | γ_0      γ_1      ...  γ_{n−1} |        | ρ_0      ρ_1      ...  ρ_{n−1} |
    | γ_1      γ_0      ...  γ_{n−2} |        | ρ_1      ρ_0      ...  ρ_{n−2} |
    |  ...      ...     ...    ...   |  = σ²  |  ...      ...     ...    ...   |
    | γ_{n−1}  γ_{n−2}  ...  γ_0     |        | ρ_{n−1}  ρ_{n−2}  ...  ρ_0     |,

is positive definite. In general, a bivariate function R(t, s) on T is nonnegative definite if, for all real l_i and all t_i ∈ T,
    Σ_{i,j} l_i l_j R(t_i, t_j) ≥ 0.

For a 1-D stationary process, the autocovariance R(t, s) = R(|t − s|) is generated from some univariate function R(h).
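The nonnegative-definiteness condition can be checked numerically. Below is a minimal sketch in Python (not from the slides), using R(h) = 0.6^|h| — the AR(1) autocorrelation introduced on a later slide — as the test function; the function names and the choice φ = 0.6 are illustrative assumptions.

```python
import random

def ar1_rho(h, phi=0.6):
    """Autocorrelation of an AR(1) process: rho_h = phi^|h| (an assumed example)."""
    return phi ** abs(h)

def quad_form(l, rho):
    """Compute sum_{i,j} l_i l_j rho(|t_i - t_j|) with t_i = i on the integer lattice."""
    n = len(l)
    return sum(l[i] * l[j] * rho(abs(i - j)) for i in range(n) for j in range(n))

# The quadratic form stays nonnegative for any real coefficients l_i.
random.seed(0)
for _ in range(1000):
    l = [random.uniform(-1, 1) for _ in range(8)]
    assert quad_form(l, ar1_rho) >= 0
```

The same check with a function R(h) that is not a valid autocovariance (say R(h) = 1 − |h|/2 extended past h = 2) would eventually produce a negative quadratic form.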

Weak Stationarity, Gaussian Process

A process is a Gaussian process if its restrictions (z_{t_1}, ..., z_{t_m}) follow multivariate normal distributions.

A process z_t on T is weakly stationary of second order if
    E[z_t] = E[z_0] = µ,
    cov[z_t, z_{t+h}] = cov[z_0, z_h] = γ_h,
for all t, h ∈ T. A Gaussian process that is weakly stationary of second order is also strictly stationary.

For z_t stationary, a linear function with coefficients l_1, ..., l_n,
    L_t = l_1 z_t + l_2 z_{t−1} + ... + l_n z_{t−n+1},
is stationary. Examples include the difference ∇z_t = z_t − z_{t−1} and higher-order differences ∇^d z_t.
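The stationarity of L_t can be made concrete: cov[L_t, L_{t+h}] = Σ_{i,j} l_i l_j γ_{h+i−j} depends only on the lag h, not on t. A small Python sketch (illustrative, not from the slides; the AR(1) autocovariance with φ = 0.5 and σ_a² = 1 is an assumed example):

```python
def ar1_gamma(k, phi=0.5, sa2=1.0):
    """Autocovariance of AR(1): gamma_k = sa2 * phi^|k| / (1 - phi^2) (assumed example)."""
    return sa2 * phi ** abs(k) / (1 - phi ** 2)

def filtered_autocov(l, gamma, h):
    """Autocovariance at lag h of L_t = sum_i l[i] * z_{t-i}, for z_t stationary
    with autocovariance gamma; the formula involves h only, never t."""
    n = len(l)
    return sum(l[i] * l[j] * gamma(h + i - j) for i in range(n) for j in range(n))

# First difference: l = (1, -1), i.e. grad z_t = z_t - z_{t-1}
diff = [1.0, -1.0]
g0 = filtered_autocov(diff, ar1_gamma, 0)  # variance of the differenced series
# var[grad z_t] = 2*(gamma_0 - gamma_1), the same for every t
assert abs(g0 - 2 * (ar1_gamma(0) - ar1_gamma(1))) < 1e-12
```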


Examples: AR(1) and MA(1) Processes

Let a_t be independent with E[a_t] = 0 and E[a_t²] = σ_a². The process a_t is called a white noise process.

Suppose z_t satisfies z_t = φz_{t−1} + a_t, a first-order autoregressive (AR) process, with |φ| < 1 and z_{t−1} independent of a_t. It is easy to verify that E[z_t] = 0 and
    γ_0 = σ_a²/(1 − φ²),    ρ_k = φρ_{k−1},    so ρ_k = φ^|k|.

Let z_t = a_t − θa_{t−1}, a first-order moving average (MA) process. It is easy to verify that E[z_t] = 0 and
    γ_0 = σ_a²(1 + θ²),    γ_1 = −θσ_a²,    γ_k = 0, k > 1.
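These moments are easy to confirm by simulation. A minimal Monte Carlo sketch in Python (not from the slides; the values φ = 0.5, σ_a = 1, the seed, and the series length are illustrative assumptions):

```python
import random

random.seed(1)
phi, sa, N = 0.5, 1.0, 200_000
z, x = [], 0.0
for _ in range(N):
    x = phi * x + random.gauss(0.0, sa)  # z_t = phi * z_{t-1} + a_t
    z.append(x)

mean = sum(z) / N
c0 = sum((v - mean) ** 2 for v in z) / N                          # estimates gamma_0
c1 = sum((z[t] - mean) * (z[t + 1] - mean) for t in range(N - 1)) / N  # estimates gamma_1

# Theory: gamma_0 = sa^2/(1 - phi^2) = 4/3 and rho_1 = phi = 0.5
assert abs(c0 - sa ** 2 / (1 - phi ** 2)) < 0.05
assert abs(c1 / c0 - phi) < 0.02
```

Starting the recursion at z_0 = 0 rather than from the stationary distribution introduces a burn-in effect that is negligible at this series length.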


Examples of Nonstationary Processes

Consider a random walk with drift, z_t = z_{t−1} + δ + a_t, t > 0, with z_0 = 0, δ a constant, and a_t white noise. It is easy to calculate
    E[z_t] = δt,    var[z_t] = tσ_a²,
so z_t is nonstationary. The difference w_t = z_t − z_{t−1} = δ + a_t, however, is stationary.

Consider z_t = µ_t + y_t, with µ_t a deterministic function and y_t a stationary process. E[z_t] = µ_t + E[y] depends on t, so z_t is nonstationary.
For µ_t = δt, w_t = z_t − z_{t−1} is stationary.
For µ_t = A cos(2πt/k) + B sin(2πt/k), w_t = z_t − z_{t−k} is stationary.
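The growing variance of the random walk shows up directly in simulation. A sketch in Python (illustrative, not from the slides; δ = 0.1, σ_a = 1, t = 50, the replicate count, and the seed are assumptions):

```python
import random

random.seed(2)
delta, t_end, R = 0.1, 50, 20_000
finals = []
for _ in range(R):
    z = 0.0
    for _ in range(t_end):
        z = z + delta + random.gauss(0.0, 1.0)  # z_t = z_{t-1} + delta + a_t
    finals.append(z)

m = sum(finals) / R
v = sum((x - m) ** 2 for x in finals) / R
assert abs(m - delta * t_end) < 0.3  # E[z_t] = delta * t, here 5.0
assert abs(v - t_end) < 3.0          # var[z_t] = t * sa^2, here 50.0
```

Repeating the experiment at a larger t shows the variance scaling linearly in t, the hallmark of nonstationarity here.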


Estimation of Mean

Observing z_1, ..., z_N, one estimates µ = E[z_t] using z̄ = Σ_{t=1}^N z_t / N, with the variance
    var[z̄] = (1/N²) Σ_{t=1}^N Σ_{s=1}^N γ_{t−s} = (γ_0/N) (1 + 2 Σ_{k=1}^{N−1} (1 − k/N) ρ_k).

Assuming Σ_k |ρ_k| < ∞, it can be shown that as N → ∞,
    N var[z̄] → γ_0 (1 + 2 Σ_{k=1}^∞ ρ_k),
which yields the "large sample" variance (γ_0/N)(1 + 2 Σ_{k=1}^∞ ρ_k).
Compared with the familiar i.i.d. result var[z̄] = σ²/N, the effective sample size becomes N / (1 + 2 Σ_{k=1}^∞ ρ_k) due to autocorrelation.

The factor 1 + 2 Σ_{k=1}^∞ ρ_k is 1 for white noise, (1 + φ)/(1 − φ) for AR(1), and (1 − θ)²/(1 + θ²) for MA(1).
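The finite-N formula and the large-sample factor agree numerically. A Python sketch (not from the slides; φ = 0.6, γ_0 = 1, and N = 10000 are assumed values):

```python
def ar1_rho(k, phi):
    """AR(1) autocorrelation rho_k = phi^|k|."""
    return phi ** abs(k)

def var_zbar(N, phi, gamma0=1.0):
    """Exact var[zbar] = (gamma0/N) * (1 + 2*sum_{k=1}^{N-1} (1 - k/N) * rho_k)."""
    s = 1.0 + 2.0 * sum((1 - k / N) * ar1_rho(k, phi) for k in range(1, N))
    return gamma0 / N * s

phi, N = 0.6, 10_000
# Large-sample factor for AR(1): 1 + 2 * sum_k phi^k = (1 + phi)/(1 - phi)
factor = (1 + phi) / (1 - phi)
assert abs(N * var_zbar(N, phi) - factor) < 1e-2
# The effective sample size N / factor is smaller than N for phi > 0
assert N / factor < N
```

With φ = 0.6 the factor is 4, so 10,000 correlated observations carry roughly the information of 2,500 independent ones.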


Estimation of Autocorrelation Function

To estimate γ_k, one uses the sample covariance function
    c_k = (1/N) Σ_{t=1}^{N−k} (z_t − z̄)(z_{t+k} − z̄).
To estimate ρ_k, one uses the sample autocorrelation function
    r_k = c_k / c_0.
Both c_{|t−s|} and r_{|t−s|} are nonnegative definite.

It can be shown that
    E[c_k] ≈ γ_k − (k/N) γ_k − (γ_0/N) (1 + 2 Σ_{v=1}^∞ ρ_v) = γ_k + O(N^{−1}),
and that
    E[r_k] ≈ ρ_k − (k/N) ρ_k − (1/N) (1 + 2 Σ_{v=1}^∞ ρ_v) = ρ_k + O(N^{−1}),
so c_k and r_k are asymptotically unbiased estimates of γ_k and ρ_k.
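The definitions of c_k and r_k translate directly into code. A minimal Python sketch (not from the slides; the data series is an arbitrary assumed example):

```python
def sample_acf(z, max_lag):
    """Sample autocovariance c_k (with divisor N) and autocorrelation r_k = c_k/c_0."""
    N = len(z)
    zbar = sum(z) / N
    c = [sum((z[t] - zbar) * (z[t + k] - zbar) for t in range(N - k)) / N
         for k in range(max_lag + 1)]
    r = [ck / c[0] for ck in c]
    return c, r

z = [2.0, 4.0, 3.0, 5.0, 4.0, 6.0, 5.0, 7.0]  # an assumed toy series
c, r = sample_acf(z, 3)
assert r[0] == 1.0
assert all(abs(rk) <= 1.0 for rk in r)  # guaranteed since c_{|t-s|} is nonnegative definite
```

Note the divisor N (not N − k): this is what makes the resulting sequence nonnegative definite, at the price of the O(N^{−1}) bias quantified above.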


Variance of Sample ACF

For a Gaussian process with N large, it can be shown that
    cov[c_k, c_{k+s}] ≈ (1/N) Σ_{v=−∞}^∞ (γ_v γ_{v+s} + γ_v γ_{v+2k+s}),
so N var[c_k] ≈ Σ_{v=−∞}^∞ (γ_v² + γ_v γ_{v+2k}). Similarly, one has
    cov[r_k, r_{k+s}] ≈ (1/N) Σ_{v=−∞}^∞ (ρ_v ρ_{v+s} + ρ_v ρ_{v+2k+s} + 2ρ_k ρ_{k+s} ρ_v² − 2ρ_k ρ_v ρ_{v+k+s} − 2ρ_{k+s} ρ_v ρ_{v+k}),
and N var[r_k] ≈ Σ_{v=−∞}^∞ (ρ_v² + ρ_v ρ_{v+2k} + 2ρ_k² ρ_v² − 4ρ_k ρ_v ρ_{v+k}).

For k large, ρ_k often "dies out", leaving only the first terms contributing to the large-lag (co)variance.
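The large-lag simplification can be verified numerically for AR(1): at large k the ρ_k-terms vanish and N var[r_k] reduces to Σ_v ρ_v², which for ρ_v = φ^|v| sums to (1 + φ²)/(1 − φ²). A sketch in Python (not from the slides; φ = 0.5, the truncation point V, and k = 40 are assumptions):

```python
def nvar_rk(k, rho, V=200):
    """Truncated Bartlett sum: N*var[r_k] ~ sum_v (rho_v^2 + rho_v*rho_{v+2k}
    + 2*rho_k^2*rho_v^2 - 4*rho_k*rho_v*rho_{v+k}), summed over |v| <= V."""
    return sum(rho(v) ** 2 + rho(v) * rho(v + 2 * k)
               + 2 * rho(k) ** 2 * rho(v) ** 2
               - 4 * rho(k) * rho(v) * rho(v + k)
               for v in range(-V, V + 1))

phi = 0.5
rho = lambda v: phi ** abs(v)  # AR(1) autocorrelation (assumed example)
# Large-lag limit: sum_v rho_v^2 = (1 + phi^2)/(1 - phi^2)
large_lag = (1 + phi ** 2) / (1 - phi ** 2)
assert abs(nvar_rk(40, rho) - large_lag) < 1e-6
```

At k = 40 every term involving ρ_k is of order 0.5^40 and contributes nothing visible, which is exactly the "dies out" behavior described above.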

Sample ACF in R

The acf function in the R library ts can be used to calculate and plot c_k and r_k, with standard errors superimposed for r_k. Part of the arguments and default options are listed below.

    acf(x, lag.max=NULL, type=c("correlation","covariance","partial"),
        plot=TRUE, demean=TRUE, ...)
    plot.acf(acf.obj, ci=0.95, ci.col="blue",
        ci.type=c("white","ma"), ...)

The default lag.max is 10 log10(N). The estimated variances for r_k are 1/N for ci.type="white" (white noise model, with ρ_v = 0, v > 0), or (1/N)(1 + 2 Σ_{v=1}^{k−1} r_v²) for ci.type="ma" (MA(k−1) model, with ρ_v = 0, v > k − 1).

The option type="partial" concerns partial autocorrelation, to be discussed later.
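The two variance recipes behind the confidence bands can be sketched outside R as well. Below is an illustrative Python approximation of what plot.acf draws (the function name acf_ci and its exact interface are assumptions, not the R API; r is a list of sample autocorrelations with r[0] = 1):

```python
import math

def acf_ci(r, N, ci_type="white"):
    """Half-widths of approximate 95% bands for r_k, k = 1, ..., len(r)-1:
    'white': var = 1/N at every lag (white noise model);
    'ma':    var = (1/N)*(1 + 2*sum_{v=1}^{k-1} r_v^2) at lag k (MA(k-1) model)."""
    zq = 1.959963984540054  # 97.5% standard normal quantile
    out = []
    for k in range(1, len(r)):
        if ci_type == "white":
            var = 1.0 / N
        else:  # "ma"
            var = (1.0 + 2.0 * sum(r[v] ** 2 for v in range(1, k))) / N
        out.append(zq * math.sqrt(var))
    return out

# White-noise bands are flat at 1.96/sqrt(N); "ma" bands widen with lag
white = acf_ci([1.0, 0.4, 0.1], N=100, ci_type="white")
ma = acf_ci([1.0, 0.4, 0.1], N=100, ci_type="ma")
assert all(abs(b - 1.959963984540054 / 10.0) < 1e-12 for b in white)
assert ma[0] < ma[1]
```

The widening of the "ma" bands reflects the extra r_v² terms entering the Bartlett variance as each lower lag is admitted into the assumed MA model.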


Some Elementary Fourier Analysis

A complex number a + ib can be written as Ae^{iθ} = A(cos θ + i sin θ), where A = |a + ib| = √(a² + b²), cos θ = a/A, and sin θ = b/A. Note |e^{iθ}| = 1.

For any f(x) on (−1/2, 1/2) satisfying ∫_{−1/2}^{1/2} f²(x) dx < ∞, one has the Fourier series expansion
    f(x) = Σ_{v=−∞}^∞ f_v e^{−i2πvx},
where f_v = ∫_{−1/2}^{1/2} f(x) e^{i2πvx} dx are the Fourier coefficients. Parseval's identity asserts that Σ_{v=−∞}^∞ |f_v|² = ∫_{−1/2}^{1/2} f²(x) dx.

For a vector (z_1, ..., z_N), the discrete Fourier transform (DFT) is given by
    ζ_v = (1/√N) Σ_{t=1}^N z_t e^{−i2πtv/N},    v = 1, ..., N,
and the inverse DFT is given by
    z_t = (1/√N) Σ_{v=1}^N ζ_v e^{i2πtv/N},    t = 1, ..., N.
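With the symmetric 1/√N normalization the DFT is unitary, so the inverse recovers the data exactly and Parseval's identity Σ_v |ζ_v|² = Σ_t z_t² holds. A direct O(N²) sketch in Python (not from the slides; the test vector is an arbitrary assumption):

```python
import cmath

def dft(z):
    """zeta_v = (1/sqrt(N)) * sum_t z_t e^{-i 2 pi t v / N}, v = 1, ..., N."""
    N = len(z)
    return [sum(z[t] * cmath.exp(-2j * cmath.pi * (t + 1) * v / N)
                for t in range(N)) / N ** 0.5 for v in range(1, N + 1)]

def idft(zeta):
    """z_t = (1/sqrt(N)) * sum_v zeta_v e^{i 2 pi t v / N}, t = 1, ..., N."""
    N = len(zeta)
    return [sum(zeta[v] * cmath.exp(2j * cmath.pi * t * (v + 1) / N)
                for v in range(N)) / N ** 0.5 for t in range(1, N + 1)]

z = [1.0, -2.0, 3.0, 0.5, -1.5]  # an assumed test vector
zeta = dft(z)
back = idft(zeta)
# Inversion and Parseval's identity, up to floating-point error
assert all(abs(b - a) < 1e-10 for a, b in zip(z, back))
assert abs(sum(abs(c) ** 2 for c in zeta) - sum(a ** 2 for a in z)) < 1e-10
```

In practice one would use an FFT, which computes the same quantities in O(N log N); the direct sums here simply mirror the definitions above.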

Spectral Density of Stationary Process

Herglotz's Theorem. A necessary and sufficient condition for ρ_k, k = 0, ±1, ±2, ..., to be the autocorrelation function of some stationary process z_t is that there exists a probability distribution function (cdf) F(ω) on (−1/2, 1/2) such that
    ρ_k = ∫_{−1/2}^{1/2} e^{i2πkω} dF(ω).

When F(ω) has a density f(ω), the ρ_k are the Fourier coefficients of f(ω). The spectral density f(ω) has the expression
    f(ω) = Σ_{k=−∞}^∞ ρ_k e^{−i2πkω}.

For z_t real with ρ_k = ρ_{−k}, one has
    f(ω) = 1 + 2 Σ_{k=1}^∞ ρ_k cos 2πkω.
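The cosine series can be evaluated by truncation and checked against a known closed form. A Python sketch (not from the slides), using the AR(1) autocorrelation ρ_k = φ^k with the assumed value φ = 0.5, whose spectral density has the closed form (1 − φ²)/(1 − 2φ cos 2πω + φ²):

```python
import math

def f_series(omega, rho, K=200):
    """Spectral density via the truncated series f(w) = 1 + 2*sum_{k=1}^K rho_k cos(2 pi k w)."""
    return 1.0 + 2.0 * sum(rho(k) * math.cos(2 * math.pi * k * omega)
                           for k in range(1, K + 1))

phi = 0.5
rho = lambda k: phi ** k  # AR(1), assumed example
for omega in [0.0, 0.1, 0.25, 0.4]:
    closed = (1 - phi ** 2) / (1 - 2 * phi * math.cos(2 * math.pi * omega) + phi ** 2)
    assert abs(f_series(omega, rho) - closed) < 1e-10
```

The geometric decay of ρ_k makes the truncation error at K = 200 far below the asserted tolerance.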

Examples of Spectral Density

For white noise, f(ω) = 1 is uniform.

For AR(1), f(ω) = (1 − φ²)/(1 − 2φ cos 2πω + φ²). Plots are with φ = ±0.5.

For MA(1), f(ω) = 1 − 2θ cos 2πω/(1 + θ²). Plots are with θ = ±0.7.

[Figure: spectral density plots, f(ω) versus ω on (−1/2, 1/2); left panel "AR(1) Spectrum" with φ = ±0.5, right panel "MA(1) Spectrum" with θ = ±0.7.]


Continuous and Discrete Spectrum

Consider z_t = a_1 cos 2πλt + a_2 sin 2πλt, where a_1, a_2 are independent N(0, σ²). One has E[z_t] = 0 and
    cov[z_t, z_s] = σ²{cos 2πλt cos 2πλs + sin 2πλt sin 2πλs} = σ² cos 2πλ|t − s|,
so z_t is stationary with ρ_k = cos 2πλk. The spectral distribution F(ω) is discrete, with mass at ω = ±λ.

The above example shows that a discrete spectrum corresponds to a sinusoidal deterministic process; a purely random process, in contrast, has a spectral density. In general, a stationary process may have both deterministic and purely random components.
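The covariance identity for the sinusoidal process can be checked by Monte Carlo. A Python sketch (not from the slides; λ = 0.1, σ = 1, the lags t = 7 and s = 3, the replicate count, and the seed are assumed values):

```python
import math, random

random.seed(3)
lam, sigma, R = 0.1, 1.0, 200_000
t, s = 7, 3
acc = 0.0
for _ in range(R):
    a1 = random.gauss(0.0, sigma)
    a2 = random.gauss(0.0, sigma)
    zt = a1 * math.cos(2 * math.pi * lam * t) + a2 * math.sin(2 * math.pi * lam * t)
    zs = a1 * math.cos(2 * math.pi * lam * s) + a2 * math.sin(2 * math.pi * lam * s)
    acc += zt * zs
cov = acc / R

# Theory: cov[z_t, z_s] = sigma^2 * cos(2 pi lam |t - s|)
assert abs(cov - sigma ** 2 * math.cos(2 * math.pi * lam * abs(t - s))) < 0.02
```

Each simulated path is a pure sinusoid with random amplitude and phase; the randomness lies entirely in (a_1, a_2), which is what makes the spectrum discrete.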
