
EEC 605 Probability and Stochastic Processes
Dr. Ndeh Ntomambang Ningo

Module 5: Introduction to Stochastic Processes
▪ Recall that a random variable, X, is a function of the possible
outcomes, ζ , of an experiment. Now, we would like to extend
this concept so that a function of time x(t) is assigned to every
outcome, ζ, of an experiment.
▪ The function, x(t), may be real or complex and it can be discrete
or continuous in amplitude. Strictly speaking, the function is
really a function of two variables, x(t, ζ), but to keep the
notation simple, we typically do not explicitly show the
dependence on the outcome, just as we have not in the case of
random variables.
▪ The function may have the same general dependence on time
for every outcome of the experiment or each outcome could
produce a completely different waveform.
▪ In general, the function is a member of an ensemble (family, set,
collection) of functions. Just as we did for random variables, an
ensemble of member functions, X(t), is denoted with an upper
case letter. Thus, X(t) represents the random process, while x(t)
is one particular member or realization of the random process.
From the point of view of probability theory, the random process model
is similar to that of a random variable. A random variable is a
mapping of events in the sample space to points on the real line,

Illustration of a random variable.
while a random or stochastic process is a mapping of the sample
space into function space where the functions are dependent on
time.

The random process model.
Definition: A random or stochastic process is a function of the
elements of a sample space, S, as well as another independent
variable, t. Given an experiment, E, with sample space, S, the
random process, X(t), maps each possible outcome, ζ , to a
function of t, x(t, ζ ), as specified by some rule.
We have defined a random process as a family of functions X(t,
ζ), as a function of both time t and the outcome ζ of a random
experiment. This family of functions is known as an ensemble.
Members of the ensemble are also referred to as sample functions
or realizations. Ideally, an ensemble may consist of an infinite
number of sample functions.
Some realizations of a random process. Note that both
independent variables t and ζ are changing.
Graphically, an ensemble can be considered as having two directions, one
corresponding to time and the other corresponding to the realization.
Note that the following functions can be derived from a random process:
▪ Given a sample function, i.e., for a fixed realization ζ , one can proceed
along the time direction. Therefore, one has a function of time.
▪ On the other hand, if time is fixed, we have a collection of values, one
per realization; that is, a random variable.
▪ If both time t and realization ζ are fixed, we have a number or a
constant.
▪ If time t and realization ζ vary, then we have a random process.
Example 1:
Suppose an experiment consists of flipping a coin. If the outcome
is heads, ζ = H, the random process takes the form xH(t) = sin(ω0t);
whereas, if the outcome is tails, ζ = T, the realization is
xT(t) = sin(2ω0t), where ω0 is some frequency.

The graph below shows two realizations of the random
process.

Member functions for the random process of Example 1 above.
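The two member functions can be sketched numerically. A minimal Python sketch, where the frequency ω0 and the time grid are arbitrary illustrative choices:

```python
import math

def coin_flip_process(outcome, t, omega0=2 * math.pi):
    # Map each experimental outcome to a deterministic function of time:
    # heads -> sin(omega0 * t), tails -> sin(2 * omega0 * t).
    if outcome == "H":
        return math.sin(omega0 * t)
    elif outcome == "T":
        return math.sin(2 * omega0 * t)
    raise ValueError("outcome must be 'H' or 'T'")

# Two sample functions (realizations) of the process on a common time grid.
times = [k / 100 for k in range(101)]
x_heads = [coin_flip_process("H", t) for t in times]
x_tails = [coin_flip_process("T", t) for t in times]
```

Plotting `x_heads` and `x_tails` against `times` reproduces the two member functions shown in the figure.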
Example 2:
Suppose an experiment results in a random variable A that is
uniformly distributed over [0, 1). A random process is then
constructed according to X(t) = A sin(ω0t). Since the random
variable is continuous, there are an uncountably infinite number
of realizations of the random process. A few realizations are
shown below.

Some member functions for the random process of Example 2.
Note that given an observation of the realization of the random
process X(t1) at time t1, we can determine the rest of the
realization from the ensemble so long as ω0t1 ≠ nπ.
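This single-sample reconstruction can be checked directly: whenever sin(ω0t1) ≠ 0, the observed value X(t1) determines A, and with it the entire realization. A minimal sketch (the frequency and sample time are illustrative choices):

```python
import math
import random

omega0 = 2 * math.pi   # illustrative frequency
t1 = 0.1               # sample time chosen so that omega0 * t1 != n * pi

A = random.random()                 # hidden amplitude, A ~ Uniform[0, 1)
x_t1 = A * math.sin(omega0 * t1)    # the single observed sample X(t1)

# The ensemble contains only functions of the form A*sin(omega0*t),
# so one sample determines A and hence the whole realization.
A_recovered = x_t1 / math.sin(omega0 * t1)
```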
EXAMPLE 3: This example is a generalization of that given in
Example 1. Suppose now that the experiment consists of flipping a
coin repeatedly and observing the sequence of outcomes. The random
process X(t) is then constructed as

X(t) = sin(Ωit) , (i – 1)T ≤ t < iT,

where Ωi = ω0 if the ith flip of the coin results in “heads” and Ωi = 2ω0
if the ith flip results in “tails”.
This is the sort of signal that might be produced by a frequency shift
keying (FSK) modem. In that application, the frequencies are not
determined by coin tosses, but by random data bits instead.
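One realization of this FSK-like process can be sketched as follows (the symbol period T, the frequency ω0, and the number of flips are illustrative choices):

```python
import math
import random

def fsk_realization(num_flips, T=1.0, omega0=2 * math.pi, seed=0):
    # One realization of Example 3: one coin flip per symbol interval;
    # "heads" holds frequency omega0 and "tails" holds 2*omega0 on [(i-1)T, iT).
    rng = random.Random(seed)
    freqs = [omega0 if rng.random() < 0.5 else 2 * omega0
             for _ in range(num_flips)]

    def x(t):
        i = min(int(t // T), num_flips - 1)   # symbol interval containing t
        return math.sin(freqs[i] * t)

    return freqs, x

freqs, x = fsk_realization(4)
```

Each call with a different seed plays the role of a different outcome of the repeated coin-flip experiment, i.e. a different member of the ensemble.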
First Order Distribution and Density Functions of Random Processes
▪ As with random variables, we can mathematically describe a
random process in terms of a cumulative distribution function,
probability density function, or a probability mass function.
▪ In fact, given a random process, X(t), which is sampled at some
specified point in time, t = tk , the result is a random variable,
Xk = X(tk). This random variable can then be described in terms of its
PDF, fX(xk;tk).
▪ Note that an additional time variable has been added to the PDF due
to the fact that the PDF of the sample of the random process may
depend on when the process is sampled.
Since a random process is a random variable for any fixed time t,
we can define a probability distribution and density functions as

FX(x; t) = P[{ζ : X(ζ; t) ≤ x}]   (M5.1)

for a fixed t, and the corresponding first-order PDF as

fX(x; t) = ∂FX(x; t)/∂x   (M5.2)
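For a concrete instance, take Example 2's process X(t) = A sin(ω0t) with A ~ Uniform[0, 1). At a fixed t with s = sin(ω0t) > 0, the sample X(t) is uniform on [0, s), so FX(x; t) = x/s for 0 ≤ x < s. A Monte Carlo sketch of this first-order CDF (the frequency, sample time, and sample count are illustrative choices):

```python
import math
import random

rng = random.Random(42)
omega0 = 2 * math.pi
t = 0.05                      # fixed sample time, with s = sin(omega0*t) > 0
s = math.sin(omega0 * t)

# Sample the random variable X(t) across many realizations (draws of A).
samples = [rng.random() * s for _ in range(200_000)]

# Empirical CDF at x = s/2: since X(t) ~ Uniform[0, s), F_X(s/2; t) = 1/2.
F_emp = sum(v <= s / 2 for v in samples) / len(samples)
```

Note how the distribution depends on the sampling instant t through s = sin(ω0t), which is exactly why the extra time variable appears in FX(x; t).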
Second Order Distribution and Density Functions of Random Processes
▪ The PDF (or CDF or PMF) of a sample of a random process taken at
an arbitrary point in time goes a long way toward describing the
random process, but it is not a complete description.
▪ To see this, consider two samples, X1 = X(t1) and X2 = X(t2), taken at
two arbitrary points in time. The PDF, fX(x;t), describes both X1 and
X2 , but it does not describe the relationship between X1 and X2 . For
some random processes, it might be reasonable to expect that X1 and
X2 would be highly correlated if t1 is near t2, while X1 and X2 might
be virtually uncorrelated if t1 and t2 are far apart.
▪ To characterize relationships of this sort, a joint PDF of the two
samples would be needed. That is, it would be necessary to
construct a joint PDF of the form

fX1,X2(x1, x2; t1, t2)   (M5.3)

This is referred to as a second-order PDF of the random process.



If t1 and t2 are different times, then X1 = X(t1) and X2 = X(t2) are


two different random variables as shown

A second-order distribution function FX(x1, x2; t1, t2) for X1 and X2


is defined as

FX(x1, x2; t1, t2) = P[X(t1) ≤ x1, X(t2) ≤ x2]   (M5.4)

The corresponding joint PDF is

fX(x1, x2; t1, t2) = ∂²FX(x1, x2; t1, t2)/∂x1∂x2   (M5.5)
nth Order Distribution and Density Functions of Random Processes

Continuing with this reasoning, in order to completely describe


the random process, it is necessary to specify an nth order PDF for
an arbitrary n. That is, suppose the random process is sampled at
time instants t1, t2, …, tn , producing the random variables
X1 = X(t1), X2 = X(t2), …, Xn = X(tn).

The nth order CDF is defined as

FX(x1, x2, …, xn; t1, t2, …, tn) = P[X(t1) ≤ x1, X(t2) ≤ x2, …, X(tn) ≤ xn]   (M5.6)

The corresponding nth order PDF is obtained from (M5.6) as

fX(x1, x2, …, xn; t1, t2, …, tn) = ∂ⁿFX(x1, x2, …, xn; t1, t2, …, tn)/∂x1∂x2…∂xn   (M5.7)

Unfortunately, for many realistic random processes, the prospects


of writing down an nth order PDF are rather daunting, except for
a Gaussian process. We are forced to rely on rather incomplete
but simpler descriptions of random processes and usually restrict
the definition to second-order distribution functions. Using
second order descriptions, we can define the mean and variance
of a random process.
Mean and Variance of Random Processes
As for random variables, we can define the mean and variance of
a random process.

Definition: The mean function of a random process is simply the


expected value of the process. For continuous time processes, this
is written as

μX(t) = E[X(t)] = ∫ x fX(x; t) dx   (M5.8)

For a discrete-time random process, the mean function is

μX[n] = E(X[n]) = ∫ x fX(x; n) dx   (M5.9)
The variance of the random process is

σX²(t) = E[(X(t) − μX(t))²] = ∫ (x − μX(t))² fX(x; t) dx

= E[X²(t)] − μX²(t)   (M5.10)

where

E[X²(t)] = ∫ x² fX(x; t) dx   (M5.11)

is the second moment of the random process.

Since the density is a function of time, the means and variances of


random processes are also functions of time.
Example 3: Suppose an experiment results in a random variable A that
is uniformly distributed over [0, 1). A random process is then
constructed according to X(t) = A sin(ω0t). Find the mean value and
variance of X(t).

μX(t) = E[X(t)] = E[A sin(ω0t)] = E[A] sin(ω0t) = (1/2) sin(ω0t)


E[X2(t)] = E[A2 sin2(ω0t)] = E[A2] sin2(ω0t) = (1/3) sin2(ω0t)
Therefore, the variance is
σX2(t) = E[X2(t)] – μX2(t) = (1/3) sin2(ω0t) - (1/4) sin2(ω0t) = (1/12) sin2(ω0t)
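These ensemble averages can be verified by Monte Carlo simulation, averaging over many draws of A at a fixed time t (the frequency, time instant, and sample count below are illustrative choices):

```python
import math
import random

rng = random.Random(1)
omega0, t = 2 * math.pi, 0.1
N = 400_000

# Ensemble average at fixed t: draw A ~ Uniform[0, 1) for each realization.
xs = [rng.random() * math.sin(omega0 * t) for _ in range(N)]
mean_est = sum(xs) / N
var_est = sum(v * v for v in xs) / N - mean_est ** 2

s = math.sin(omega0 * t)
mean_exact = 0.5 * s        # (1/2) sin(omega0 t)
var_exact = s * s / 12      # (1/12) sin^2(omega0 t)
```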

Exercise 1: Suppose a random process X(t) = cos(2πFt), where F is


a random variable uniformly distributed over some interval (0,
f0). Find the mean value and variance of X(t).
Second Order Distribution and Density Functions of Random Processes

Limiting the discussion to second order descriptions, we can


define the joint distribution and density of two different random
processes X(t) and Y(t) as follows. Let X(t) and Y(t) be sampled at
time instants t1 and t2 respectively, to obtain X(t1) and Y(t2). Then
the joint CDF of X(t1) and Y(t2) is

FX,Y(x1, y2; t1, t2) = P[X(t1) ≤ x1, Y(t2) ≤ y2]   (M5.12)

and the corresponding joint PDF of X(t1) and Y(t2) is

fX,Y(x1, y2; t1, t2) = ∂²FX,Y(x1, y2; t1, t2)/∂x1∂y2   (M5.13)

Joint Moments of Random Processes:


Autocorrelation Function RXX(t1,t2)
In (M5.8) and (M5.10) we defined the mean and variance of a
random process; the second moment was defined in (M5.11).
Since X(t1) and X(t2) are random
variables, various types of joint moments can be defined.
Definition: Autocorrelation Function RXX(t1,t2)
The autocorrelation function, RXX(t1,t2), of a continuous-time
random process, X(t), is defined as the expected value of the
product X(t1)X(t2):

RXX(t1, t2) = E[X(t1)X(t2)] = ∫∫ x1x2 fX1X2(x1, x2; t1, t2) dx1dx2   (M5.14)

where both integrals run over (−∞, ∞).
The autocorrelation function, RXX[n1, n2], of a discrete-time random
process, X[n], is defined as the expected value of the product
X[n1]X[n2]:

RXX[n1, n2] = E[X[n1]X[n2]] = ∫∫ x1x2 fX1X2(x1, x2; n1, n2) dx1dx2   (M5.15)
Autocorrelation Function
▪ The autocorrelation function describes the relationship (correlation)
between two samples of a random process. This correlation will
depend on when the samples are taken; thus, the autocorrelation
function is, in general, a function of two time variables.
▪ Quite often we are interested in how the correlation between two
samples depends on how far apart the samples are spaced. To
explicitly draw out this relationship, we define a time difference
variable,

τ = t2 − t1   (M5.16)
The autocorrelation function of the random process X(t) can then
be expressed as
RXX(t, t + τ) = E[X(t)X(t + τ)]   (M5.17)
where we have replaced t1 with t.
Example 4: Consider the sine wave process with a uniformly
distributed amplitude as described in Example 2 above, where
X(t) = A sin(ω0t). The autocorrelation function is found as

RXX(t1, t2) = E[X(t1)X(t2)] = E[A² sin(ω0t1) sin(ω0t2)]
= (1/3) sin(ω0t1) sin(ω0t2)   (M5.18)
Example 5: Consider the random process X(t) = A sin(ω0t), where
the amplitude A is uniformly distributed in [0,1). Calculate the
autocorrelation function of X(t).
RXX(t1,t2) = E[X(t1)X(t2)]= E[A2 sin(ω0t1) sin(ω0t2)]
= (1/3) sin(ω0t1) sin(ω0t2)
In other words,
RXX(t,t+τ) = (1/3) sin(ω0t) sin(ω0(t+τ))
Example 6: Suppose a random process X(t) is given by
X(t) = A sin(ω0t + θ), where θ is uniformly distributed over [0, 2π).
Determine the autocorrelation and mean value functions of X(t).

RXX(t1,t2) = E[X(t1)X(t2)] = E[A2 sin(ω0t1 + θ) sin(ω0t2 + θ)]


= (A2/2)E[cos(ω0(t2 – t1))] + (A2/2) E[cos(ω0(t1 + t2 + 2θ))]
= (A2/2) cos(ω0(t2 – t1)) = (A2/2) cos(ω0 τ)
μX(t) = E[A sin(ω0t + θ)] = 0
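The result RXX(t1, t2) = (A²/2) cos(ω0(t2 − t1)) and the zero mean can be checked by averaging over many draws of θ, with A held constant (the values of A, ω0, and the two sample times below are illustrative):

```python
import math
import random

rng = random.Random(7)
A, omega0 = 2.0, 2 * math.pi     # constant amplitude, illustrative frequency
t1, t2 = 0.3, 0.5
tau = t2 - t1
N = 400_000

# Ensemble average over theta ~ Uniform[0, 2*pi).
acc_R, acc_mu = 0.0, 0.0
for _ in range(N):
    theta = rng.uniform(0.0, 2 * math.pi)
    x1 = A * math.sin(omega0 * t1 + theta)
    x2 = A * math.sin(omega0 * t2 + theta)
    acc_R += x1 * x2
    acc_mu += x1

R_est = acc_R / N
mu_est = acc_mu / N
R_exact = (A ** 2 / 2) * math.cos(omega0 * tau)
```

Note that the estimate depends on t1 and t2 only through τ = t2 − t1, which foreshadows the wide-sense stationarity discussion later in the module.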
Exercise 2: Consider a random process X(t) = A sin(ω0t + θ),
where the phase is uniformly distributed over [0, 2π) and the
amplitude A is a constant. Calculate the mean value, variance and
autocorrelation function of this random process.

Exercise 3: If X(t) has a mean μX and autocorrelation function RXX(τ)


and Y(t) = c + dX(t), where c and d are constants, find the mean
and autocorrelation of the process Y(t) in terms of the mean and
autocorrelation of X(t).
Exercise 4: (Modulation) A random process Y(t) is given by
Y(t) = X(t)cos(ωt + Ф), where X(t) is a zero mean wide-sense
stationary random process with autocorrelation function
RX(τ) = 2e^(−2λ|τ|), which is modulating the carrier cos(ωt + Ф). The
random variable Ф is uniformly distributed in the interval (0,2π),
and is independent of X(t). Find the mean, variance, and
autocorrelation of Y(t).
Exercise 5:
Peebles 6.3-8

Joint Moments of Random Processes:


Autocovariance Function CXX(t1,t2)
Autocovariance Function
Definition: The autocovariance function, CXX(t1,t2), of a continuous-
time random process, X(t), is defined as the covariance of X(t1)
and X(t2)

CXX(t1, t2) = Cov[X(t1), X(t2)] = E[(X(t1) − μX(t1))(X(t2) − μX(t2))]
= ∫∫ (x1 − μX(t1))(x2 − μX(t2)) fXX(x1, x2; t1, t2) dx1dx2
= E[X(t1)X(t2)] − μX(t1)μX(t2)
= RXX(t1, t2) − μX(t1)μX(t2)   (M5.19)

Note that the variance of the random process X(t) can be obtained
from its autocovariance function by setting t1 = t2 = t in (M5.19) to
obtain

CXX(t, t) = E[X(t)X(t)] − μX(t)μX(t)
= E[X²(t)] − μX²(t)   (M5.20)
= σX²(t)

which is identical to (M5.10).
The autocovariance function is helpful when studying random
processes X(t) which can be represented as the sum of a
deterministic signal, s(t), plus a zero-mean noise process, N(t).
That is
X(t) = s(t) + N(t) (M5.21)

Then the autocorrelation function of X(t) is


RXX(t1, t2) = E[(s(t1) + N(t1))(s(t2) + N(t2))] = s(t1)s(t2) + RNN(t1, t2)   (M5.22)

since N(t) is zero mean.
If the signal s(t) is strong compared to the noise N(t), the
deterministic part, s(t1) s(t2), of the autocorrelation function of
X(t) in (M5.22) will dominate the autocorrelation function, and
thus the autocorrelation function of X(t) will not tell us much
about the randomness in the process X(t).
On the other hand, the autocovariance function of X(t) is
CXX(t1, t2) = RXX(t1, t2) − s(t1)s(t2) = RNN(t1, t2) = CNN(t1, t2)   (M5.23)
Because the autocovariance function of the random process X(t) is
equal to the autocovariance function of the noise process N(t), the
autocovariance function allows us to isolate the noise which is the
source of randomness in the process.
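A numeric sketch of this point, using a constant signal s = 5 and zero-mean unit-variance Gaussian noise, sampled at a single time t1 = t2 = t (all constants are illustrative): the autocorrelation at (t, t) is dominated by s² = 25, while the autocovariance recovers the noise variance CNN(t, t) = 1.

```python
import random

rng = random.Random(3)
s = 5.0                  # strong deterministic signal value at t1 = t2 = t
N_real = 300_000         # realizations used for the ensemble averages

# One sample X(t) = s + N(t) per realization, with N(t) ~ Normal(0, 1).
xs = [s + rng.gauss(0.0, 1.0) for _ in range(N_real)]
mu = sum(xs) / N_real

R_XX = sum(v * v for v in xs) / N_real          # ~ s^2 + R_NN = 26
C_XX = sum((v - mu) ** 2 for v in xs) / N_real  # ~ C_NN = 1
```

The autocorrelation estimate is swamped by the deterministic term s², while the autocovariance isolates the noise contribution, as claimed above.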

Joint Moments of Random Processes:


Cross-correlation Function RXY(t1,t2)
Cross-covariance Function CXY(t1,t2)
Cross-correlation Function
Consider two random processes X(t) and Y(t).
Definition: The cross-correlation function, RXY(t1, t2), is defined as
the expected value of the product X(t1)Y(t2):

RXY(t1, t2) = E[X(t1)Y(t2)] = ∫∫ x1y2 fXY(x1, y2; t1, t2) dx1dy2   (M5.24)
Cross-covariance Function
The cross-covariance function CXY(t1, t2) is defined as the
covariance between X(t1) and Y(t2)

CXY(t1, t2) = E[(X(t1) − μX(t1))(Y(t2) − μY(t2))]
= ∫∫ (x1 − μX(t1))(y2 − μY(t2)) fXY(x1, y2; t1, t2) dx1dy2
= E[X(t1)Y(t2)] − μX(t1)μY(t2)
= RXY(t1, t2) − μX(t1)μY(t2)   (M5.25)
Cross-correlation and Cross-covariance Functions
(M5.25) establishes the relationship between the cross-correlation
and cross-covariance functions as
CXY(t1, t2) = RXY(t1, t2) − μX(t1)μY(t2)   (M5.26)
RXY(t1, t2) = CXY(t1, t2) + μX(t1)μY(t2)
Correlation Coefficient
The correlation coefficient ρXY(t1, t2) between two random
processes X(t) and Y(t) is defined as the covariance of the two
processes normalized by the product of their standard deviations.

ρXY(t1, t2) = CXY(t1, t2)/√(σX²(t1) σY²(t2)) = CXY(t1, t2)/(σX(t1) σY(t2))   (M5.27)
The range of values taken by the correlation coefficient so defined
is limited to
−1 ≤ ρXY(t1, t2) ≤ 1   (M5.28)

Therefore the correlation coefficient may be more appropriate to


use in comparative analyses of the relationship between random
processes.
Exercise 6
Suppose that X(t) and Y(t) are independent random processes
and let
U(t) = X(t) – Y(t)
V(t) = X(t) + Y(t)

Find CUX(t1,t2), CUY(t1,t2), and CUV(t1,t2).


Exercise 7: Let X have zero mean and unit variance and let Y =
3X. Find the correlation between X and Y.

Exercise 8: A random process X(t) is given by X(t) = A sin(ωt + ϕ),


where A is a uniformly distributed random variable with mean
mA and variance σA2. Find the mean, variance, autocorrelation,
autocovariance, and correlation coefficient of X(t).

Stationary and Ergodic Processes


Stationary and Ergodic Random Processes - Stationarity

Definition: A continuous time random process is strict sense


stationary if the statistics of the process are invariant to a time
shift. Specifically, for any time shift τ and any integer n ≥ 1,
fX1,X2,…,Xn(x1, x2, …, xn; t1, t2, …, tn) = fX1,X2,…,Xn(x1, x2, …, xn; t1 + τ, t2 + τ, …, tn + τ)   (M5.29)

▪ In general, it is often quite difficult to show that a random


process is strict sense stationary because to do so, one needs to
be able to express the general nth order PDF. But, to show that a
process is not strict sense stationary, one needs to show only
that one PDF of any order is not invariant to a time shift.
▪ Since determining stationarity in the strict sense can be very
tedious, we often settle for a looser form of stationarity.

Definition: A random process is wide sense stationary (WSS) if the


mean function and autocorrelation function are invariant to a
time shift. In particular, this implies that

μX(t) = μX = constant
RXX(t1, t2) = RXX(t2 − t1) = RXX(τ)   (a function only of τ)   (M5.30)

▪ All strict sense stationary random processes are also WSS,


provided that the mean and autocorrelation function exist. The
converse is not true.
▪ A WSS process does not necessarily need to be stationary in the
strict sense. We refer to a process which is not WSS as
non-stationary.

Note that
▪ Wide-sense stationary processes are also called second-order
stationary processes or weakly stationary.
▪ A process is first-order stationary if the distribution and density
functions are independent of time.
FX(x; t) = FX(x; t + τ) = FX(x)   (M5.31)
fX(x; t) = fX(x; t + τ) = fX(x)

Many of the processes encountered in practice are WSS and hence have a


constant mean function and an autocorrelation function that
depends only on a single time variable. Hence, in the remainder
of this course, when a process X(t) is known to be WSS or if we
are assuming it to be WSS, then we will represent its
autocorrelation function by RXX(τ).

Example 7: Let X(t) = At + B, where A and B are independent random
variables, both uniformly distributed over the interval (−1, 1). Is X(t)
WSS?

μX(t) = E[At + B] = tE[A] + E[B] = 0

RXX(t, t + τ) = E[(At + B)(A(t + τ) + B)]
= E[A²t² + A²tτ + 2ABt + ABτ + B²] = (1/3)(t² + tτ + 1)

using E[A²] = E[B²] = 1/3 and E[AB] = 0.
Since the autocorrelation function depends on the time t, the process is not
WSS.
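A Monte Carlo check of this time dependence (note that E[A²] = E[B²] = Var(A) = (1 − (−1))²/12 = 1/3 for A, B ~ Uniform(−1, 1), and E[AB] = 0), confirming that RXX(t, t + τ) changes with t for a fixed τ:

```python
import random

rng = random.Random(11)
N = 300_000   # realizations per ensemble average

def R_XX_est(t, tau):
    # Monte Carlo estimate of E[X(t) X(t + tau)] for X(t) = A*t + B,
    # with independent A, B ~ Uniform(-1, 1).
    acc = 0.0
    for _ in range(N):
        A = rng.uniform(-1.0, 1.0)
        B = rng.uniform(-1.0, 1.0)
        acc += (A * t + B) * (A * (t + tau) + B)
    return acc / N

tau = 0.5
R_at_0 = R_XX_est(0.0, tau)   # theory: (1/3)(0 + 0 + 1) = 1/3
R_at_2 = R_XX_est(2.0, tau)   # theory: (1/3)(4 + 1 + 1) = 2
```

Because the estimates at t = 0 and t = 2 differ for the same τ, the autocorrelation is not a function of τ alone, so the process cannot be WSS.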

Exercise 9: Suppose we form a random process Y(t) by


modulating a carrier with another random process, X(t). That is,
let
Y(t) = X(t) cos(ω0t + θ), where θ is uniformly distributed over
[0, 2π) and independent of X(t). Under what conditions is Y(t) WSS?

Exercise 10: A random process is given by


X(t) = A cos(ωt) + B sin(ωt), where A and B are independent zero
mean random variables.
(a) Find the mean function, µX(t).
(b) Find the autocorrelation function, RXX(t1, t2).
(c) Under what conditions (on the variances of A and B) is X(t)
WSS?

Exercise 11: A random process is defined by X(t) = X0 +Vt where


X0 and V are statistically independent random variables
uniformly distributed on intervals [X01 , X02] and [V1, V2],
respectively. Find (a) the mean, (b) the autocorrelation, and (c) the
autocovariance functions of X(t ).
(d) Is X(t) stationary in any sense? If so, state the type.
Stationary and Ergodic Random Processes - Ergodicity

Ergodicity deals with the relationship between statistical (ensemble)
averages and time averages computed from a single realization.

For an ergodic process, all time averages equal the corresponding
statistical averages.

▪ In order to calculate the mean or autocorrelation function of a random


process, it is necessary to perform an ensemble average. In many cases,
this may not be possible as we may not be able to observe all realizations
(or a large number of realizations) of a random process. In fact, quite often
we may be able to observe only a single realization.
▪ We are forced to ask whether it is possible to calculate the mean and/or
autocorrelation function from a single realization of a random process.
The answer is sometimes, depending on the nature of the process.

To begin, let us consider the mean. Suppose a WSS random


process X(t) has a mean μX. But we are able to observe only one
realization of the random process, x(t), and wish to try to
determine μX from this realization. One obvious approach would
be to calculate the time average of the realization:
⟨x(t)⟩ = lim t0→∞ (1/(2t0)) ∫[−t0, t0] x(t) dt   (M5.32)

where ⟨·⟩ denotes the time average.

However, it is not obvious that the time average of one


realization is necessarily equal to the ensemble average. If the
two averages are the same, then we say that the random process
is ergodic in the mean.

We shall take the same approach for the autocorrelation function.


Thus, given a single realization, x(t), we form the time-average
autocorrelation function:
R̄XX(τ) = ⟨x(t)x(t + τ)⟩ = lim t0→∞ (1/(2t0)) ∫[−t0, t0] x(t)x(t + τ) dt   (M5.33)

If for any realization,

R̄XX(τ) = RXX(τ)   (M5.34)

then the random process is said to be ergodic in the autocorrelation.



Definition of Ergodicity

A WSS random process is ergodic if ensemble averages involving


the process can be calculated using time averages of any
realization of the process. Two limited forms of ergodicity are:
Ergodic in the mean: ⟨x(t)⟩ = E[X(t)]
Ergodic in the autocorrelation: ⟨x(t)x(t + τ)⟩ = E[X(t)X(t + τ)]
Stationary and Ergodic Random Processes

Example 8: Suppose a random process X(t) is given by


X(t) = A sin(ω0t + θ), where θ is uniformly distributed over [0, 2π).
Is X(t) ergodic?

It was shown in Example 6 that this process is WSS. To determine


ergodicity, we must calculate the time average value and
autocorrelation function to see whether they are equal to the ensemble
mean and autocorrelation values in Example 6.

The time average is

⟨x(t)⟩ = ⟨A sin(ω0t + θ)⟩ = (ω0/2π) ∫[0, 2π/ω0] A sin(ω0t + θ) dt
= −(A/2π) cos(ω0t + θ) |[0, 2π/ω0] = 0,

because the time average of any sinusoid is zero.
The time autocorrelation function is

⟨x(t)x(t + τ)⟩ = ⟨A² sin(ω0t + θ) sin(ω0(t + τ) + θ)⟩
= (A²/2) cos(ω0τ) − (A²/2) ⟨cos(2ω0t + ω0τ + 2θ)⟩
= (A²/2) cos(ω0τ)
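The two time averages can also be approximated numerically from one realization (one fixed θ), using a long Riemann sum; the values of A, ω0, θ, the window length, and the step size are illustrative choices:

```python
import math

A, omega0, theta = 1.5, 2 * math.pi, 0.7   # one fixed realization (theta fixed)
T = 200.0                                  # long window, whole number of periods
dt = 1e-3
n = int(T / dt)

# Time average of x(t) = A sin(omega0*t + theta) over the single realization.
time_mean = sum(A * math.sin(omega0 * (k * dt) + theta) for k in range(n)) * dt / T

# Time-average autocorrelation at lag tau.
tau = 0.3
time_acorr = sum(
    A * math.sin(omega0 * (k * dt) + theta)
    * A * math.sin(omega0 * (k * dt + tau) + theta)
    for k in range(n)
) * dt / T

# Ensemble autocorrelation from Example 6: (A^2 / 2) cos(omega0 * tau).
ensemble_acorr = (A ** 2 / 2) * math.cos(omega0 * tau)
```

The time averages match the ensemble mean (zero) and the ensemble autocorrelation, which is exactly the numerical content of the ergodicity claim.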

Since the ensemble average of Example 6 is equal to the time


average of Example 8 and the ensemble autocorrelation of
Example 6 is equal to the time autocorrelation function of
Example 8, the process is ergodic in the mean and in the autocorrelation.
Properties of Autocorrelation Functions of WSS Processes

Since the autocorrelation function, along with the mean, is


considered to be a principal statistical descriptor of a WSS
random process, we will now consider some properties of the
autocorrelation function.

Recall that henceforth we shall be concerned only with WSS


processes.
PROPERTY 1: The autocorrelation function evaluated at τ = 0,
RXX(0), is the average normalized power in the random process, X(t).

Recall that the autocorrelation of a WSS process is


RXX ( ) = E  X ( t ) X ( t +  )  (M5.35)
Evaluating (M5.35) at τ = 0, we obtain
RXX ( ) = E  X ( t ) X ( t )  = E  X 2 ( t ) 
(M5.36)

PROPERTY 2: The autocorrelation function of a WSS random process


is an even function; that is,
RXX(τ) = RXX(-τ) (M5.37)

PROPERTY 3: The autocorrelation function of a WSS random process


is maximum at the origin; that is,
|RXX(τ)| ≤ RXX(0) (M5.38)
for all τ.

PROPERTY 4: If X(t) is ergodic and has no periodic components,


then
lim τ→∞ RXX(τ) = μX²   (M5.39)

PROPERTY 5: If X(t) has a periodic component, then RXX(τ) will


have a periodic component with the same period.

Example 9: An ergodic process has an autocorrelation function


given by
3 + 2
RXX ( ) =
2 + 2 2

Find the mean, the average power and variance of the random
process X(t).
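Taking the autocorrelation to be RXX(τ) = (3 + 2τ²)/(2 + 2τ²) (one reading of the formula above), Properties 1 and 4 solve the example directly: the average power is RXX(0), μX² is the limit of RXX(τ) as τ → ∞, and σX² is their difference. A numeric sketch:

```python
def R_XX(tau):
    # Autocorrelation function of the ergodic process in Example 9.
    return (3 + 2 * tau ** 2) / (2 + 2 * tau ** 2)

power = R_XX(0.0)            # Property 1: average power = R_XX(0) = 3/2
mean_sq = R_XX(1e9)          # Property 4: mu_X^2 = lim R_XX(tau), tau -> inf = 1
variance = power - mean_sq   # sigma_X^2 = E[X^2] - mu_X^2 = 1/2
```

Property 4 only yields μX² = 1, so μX = ±1; the sign of the mean is not determined by the autocorrelation function alone.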

Exercise 12