EEE 461 Topic 4-1
Topic 4
Markov Chains and Processes
A S M Jahid Hasan
Assistant Professor
Department of Electrical and Computer Engineering
North South University
1 / 45
Overview
2 / 45
Discrete Markov Chains
3 / 45
Introduction
4 / 45
Introduction
• In order for the basic Markov approach to be applicable, the behaviour of the system
must be characterized by a lack of memory
• Lack of memory → the future states of a system are independent of all past
states except the immediately preceding one
• The future random behaviour of a system only depends on where it is at present, not
on where it has been in the past or how it arrived at its present position.
• In addition, the process must be stationary, sometimes called homogeneous, for
the approach to be applicable.
• This means that the behaviour of the system must be the same at all points of time
irrespective of the point of time being considered
• In other words, the probability of making a transition from one given state to
another is the same (stationary) at all times in the past and future.
5 / 45
Introduction
6 / 45
General Modeling Concepts
• The only requirements needed for the technique to be applicable are that:
• the system must be stationary
• the process must lack memory
• the states of the system must be identifiable
• Consider a simple system with two system states
• Probabilities of remaining in or leaving a particular state shown in Figure
• These probabilities are assumed to be constant for all times into future
• System is stationary and movement between states occurs in discrete steps →
Markov Chain
7 / 45
General Modeling Concepts
• Assume the system shown is initially in state 1, and consider the first time interval
• System can remain in state 1 with a probability of 1/2 or it can move into state 2 with
a probability of 1/2
• Sum of these probabilities must be unity, i.e., system must either remain in the state
being considered or move out of that state
• This principle applies equally to all systems irrespective of degree of complexity or
ways of moving out of a given state → sum of probabilities of remaining in or
moving out of a state must be unity
• Once the system is in state 2, it can remain in it with a probability of 3/4 or it can
make a transition back to state 1 with a probability of 1/4 during the next time interval
8 / 45
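To make the discrete-step, memoryless behaviour concrete, here is a minimal Monte-Carlo sketch (not part of the original slides) that steps the two-state system of the figure through time intervals using only the present state; the transition probabilities are those quoted above, and states are numbered 0 and 1 in the code (state 0 here is state 1 of the slides).

```python
import numpy as np

rng = np.random.default_rng(0)

# P[i, j] = probability of moving from state i to state j in one time interval
P = np.array([[0.5, 0.5],     # from state 1: remain (1/2) or move to state 2 (1/2)
              [0.25, 0.75]])  # from state 2: return to state 1 (1/4) or remain (3/4)

def simulate(start, steps, trials=100_000):
    """Estimate state probabilities after `steps` intervals, starting in `start` (0-based)."""
    state = np.full(trials, start)
    for _ in range(steps):
        # the next state depends only on the present state -> lack of memory
        u = rng.random(trials)
        state = np.where(u < P[state, 0], 0, 1)
    return np.bincount(state, minlength=2) / trials

print(simulate(start=0, steps=2))  # ≈ [0.375, 0.625], i.e. 3/8 and 5/8
```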
General Modeling Concepts
• The behaviour of this system can be easily illustrated by the tree diagram shown in
the previous slide
• This figure assumes the system starts in state 1, shows the states in which the system
can reside after each step or time interval and considers up to 4 such time intervals.
• Probability of following any one branch of the tree evaluated by multiplying appropriate
probabilities of each step of the branch
• Probability of residing in a particular state of system after a certain number of time
intervals evaluated by summating branch probabilities that lead to that state
• Probability of residing in State 1 after 2 time intervals: 1/2 × 1/2 + 1/2 × 1/4 = 3/8
• If all these probabilities are summated at any time step they add up to unity
• Probability of residing in state 1 and state 2 after 4 time intervals: 43/128 and 85/128,
respectively
10 / 45
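A small sketch (not from the slides) that enumerates every branch of the tree diagram and sums the branch probabilities ending in each state, reproducing the 3/8 after two intervals and the 43/128 and 85/128 after four intervals quoted above.

```python
from itertools import product

# one-step transition probabilities, states numbered 1 and 2 as in the slides
p = {(1, 1): 1/2, (1, 2): 1/2, (2, 1): 1/4, (2, 2): 3/4}

def state_probs(start, steps):
    totals = {1: 0.0, 2: 0.0}
    for path in product((1, 2), repeat=steps):   # every branch of the tree
        prob, current = 1.0, start
        for nxt in path:                         # multiply the step probabilities
            prob *= p[(current, nxt)]
            current = nxt
        totals[current] += prob                  # sum branches that end in each state
    return totals

print(state_probs(1, 2))   # {1: 0.375, 2: 0.625}          -> 3/8, 5/8
print(state_probs(1, 4))   # {1: 0.3359375, 2: 0.6640625}  -> 43/128, 85/128
```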
General Modeling Concepts
11 / 45
General Modeling Concepts
• The state probabilities after each time interval are shown in the table in the previous
slide.
• The results shown in Table are represented in graphical form in the figure on the
right.
• These characteristics are known as the transient behaviour or time-dependent values
of the state probabilities.
• It is evident from Figure that as the number of time intervals is increased, the values
of state probabilities tend to a constant or limiting value.
• This is characteristic of most systems which satisfy the conditions of the Markov
approach
• These limiting values of probability are known as the limiting-state or
time-independent values of the state probabilities
12 / 45
General Modeling Concepts
13 / 45
General Modeling Concepts
14 / 45
Stochastic Transitional Probability Matrix
• Tree diagram method useful for illustrating concepts of Markov chains, but impractical
for large systems or even small systems with a large number of time intervals.
• Matrix solution techniques are used for these analyses
• Transition probabilities of the system can be represented by matrix P:
$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{4} & \frac{3}{4} \end{bmatrix}$$
• Here, Pij = probability of making a transition to state j after a time interval given
that it was in state i at the beginning of time interval
• For the first time interval → P11 = 1/2, P12 = 1/2, P21 = 1/4, P22 = 3/4
15 / 45
Stochastic Transitional Probability Matrix
• Definition of Pij indicates:
• Row position of matrix is the state from which transition occurs
• Column position of matrix is the state to which transition occurs
• General form of the matrix for an n-state system:
• For the two-state example, P² = P · P gives the transition probabilities after two time intervals
• First element of row 1 of P², 3/8 → probability of being in state 1 after two time intervals
given that it started in state 1
• Second element of row 1 of P², 5/8 → probability of being in state 2 after two time intervals
given that it started in state 1
17 / 45
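As a quick check, a short sketch (not from the slides) that builds P, confirms each row summates to unity, and forms P², whose first row contains the 3/8 and 5/8 referred to above.

```python
import numpy as np

P = np.array([[1/2, 1/2],
              [1/4, 3/4]])

# each row of a stochastic transitional probability matrix must sum to 1
assert np.allclose(P.sum(axis=1), 1.0)

P2 = P @ P
print(P2)  # [[0.375  0.625 ]   -> 3/8, 5/8
           #  [0.3125 0.6875]]  -> 5/16, 11/16
```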
Time Dependent Probability Evaluation
19 / 45
Time Dependent Probability Evaluation
Case 1: System starts in state 1
Initial probability vector:
$$P(0) = \begin{bmatrix} 1 & 0 \end{bmatrix}$$
$$P(2) = P(0)P^2 = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{3}{8} & \frac{5}{8} \\ \frac{5}{16} & \frac{11}{16} \end{bmatrix} = \begin{bmatrix} \frac{3}{8} & \frac{5}{8} \end{bmatrix}$$
20 / 45
Time Dependent Probability Evaluation
Case 2: System equally likely to start in state 1 or state 2
Initial probability vector:
$$P(0) = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \end{bmatrix}$$
$$P(2) = P(0)P^2 = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} \frac{3}{8} & \frac{5}{8} \\ \frac{5}{16} & \frac{11}{16} \end{bmatrix} = \begin{bmatrix} \frac{11}{32} & \frac{21}{32} \end{bmatrix}$$
• The principle shown for the two cases can again be extended to give: P(n) = P(0)P^n
• State probabilities at any time interval → multiply stochastic transitional
probability matrix by itself relevant number of times
• Transient behaviour → Continue the process sequentially
• Limiting values of state probabilities → continue multiplication process sufficient
number of times
22 / 45
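The relation P(n) = P(0)P^n can be exercised directly; the sketch below (not from the slides) reproduces both cases above and shows the transient values settling toward the limiting-state probabilities as n grows.

```python
import numpy as np

P = np.array([[1/2, 1/2],
              [1/4, 3/4]])

def P_n(P0, n):
    """State probability vector after n time intervals: P(n) = P(0) P^n."""
    return P0 @ np.linalg.matrix_power(P, n)

print(P_n(np.array([1.0, 0.0]), 2))   # Case 1: [0.375   0.625 ]  -> 3/8, 5/8
print(P_n(np.array([0.5, 0.5]), 2))   # Case 2: [0.34375 0.65625] -> 11/32, 21/32
print(P_n(np.array([1.0, 0.0]), 50))  # ≈ [0.3333 0.6667], the limiting values
```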
Limiting State Probability Evaluation
Underlying Principle
Once limiting state probabilities have been reached by matrix multiplication, any further
multiplication by stochastic transitional probability matrix does not change values of
limiting state probabilities
∴ αP = α
Here, α → limiting probability vector
and P → stochastic transitional probability matrix
23 / 45
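Rather than multiplying P by itself many times, αP = α together with the condition that the probabilities summate to unity can be solved directly; a minimal sketch (not from the slides) for the two-state example follows.

```python
import numpy as np

P = np.array([[1/2, 1/2],
              [1/4, 3/4]])
n = P.shape[0]

# alpha (P - I) = 0 plus sum(alpha) = 1, stacked as an overdetermined linear system
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.append(np.zeros(n), 1.0)

alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
print(alpha)   # [0.3333... 0.6666...] -> limiting-state probabilities 1/3 and 2/3
```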
Limiting State Probability Evaluation
24 / 45
Application of Discrete Markov Techniques
Examples
Q: Consider the 3-state system shown in Figure 8.4 and the transition probabilities
indicated. Evaluate the limiting state probabilities associated with each state.
25 / 45
Application of Discrete Markov Techniques
Examples
Let P1, P2, P3 → limiting state probabilities.
$$\therefore \begin{bmatrix} P_1 & P_2 & P_3 \end{bmatrix} \begin{bmatrix} \frac{3}{4} & \frac{1}{4} & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{bmatrix} = \begin{bmatrix} P_1 & P_2 & P_3 \end{bmatrix}$$
Examples
We rearrange the first two equations and replace the third one to get:
$$-\frac{1}{4}P_1 + \frac{1}{3}P_3 = 0 \qquad (4)$$
$$\frac{1}{4}P_1 - \frac{1}{2}P_2 + \frac{1}{3}P_3 = 0 \qquad (5)$$
$$P_1 + P_2 + P_3 = 1 \qquad (6)$$
Solving these equations we get: P1 = 4/11, P2 = 4/11, P3 = 3/11
27 / 45
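A short sketch (not from the slides) solving equations (4)-(6) numerically to confirm the limiting state probabilities.

```python
import numpy as np

A = np.array([[-1/4,    0, 1/3],   # -(1/4)P1           + (1/3)P3 = 0   (4)
              [ 1/4, -1/2, 1/3],   #  (1/4)P1 - (1/2)P2 + (1/3)P3 = 0   (5)
              [ 1.0,  1.0, 1.0]])  #       P1 +      P2 +      P3 = 1   (6)
b = np.array([0.0, 0.0, 1.0])

print(np.linalg.solve(A, b))   # [0.3636... 0.3636... 0.2727...] = 4/11, 4/11, 3/11
```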
Application of Discrete Markov Techniques
Examples
Q: A man either drives his car to work or catches a train. Assume that he never takes the
train two days in a row but if he drives to work, then the next day he is just as likely to
drive again as he is to catch the train. Evaluate (a) the probability that he drives to work
after (i) 2 days (ii) a long time, (b) the probability that he drives to work after (i) 2 days,
(ii) a long time if on the first day of work he tosses a fair die and drives to work only if a
2 appears.
Ans: Stochastic transitional probability matrix (state 1 = train, state 2 = drive):
$$P = \begin{bmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix}$$
28 / 45
Application of Discrete Markov Techniques
Examples
The transition probabilities after 2 days, i.e., 2 time intervals:
$$P^2 = \begin{bmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{4} & \frac{3}{4} \end{bmatrix}$$
(a) If he takes the train on the first day of work, then P(0) = [1 0] and
$$P(2) = P(0)P^2 = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{4} & \frac{3}{4} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \end{bmatrix}$$
If he drives on the first day of work, then P(0) = [0 1] and
$$P(2) = P(0)P^2 = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{4} & \frac{3}{4} \end{bmatrix} = \begin{bmatrix} \frac{1}{4} & \frac{3}{4} \end{bmatrix}$$
29 / 45
Application of Discrete Markov Techniques
Examples
Let these limiting probabilities be Pt and Pd for catching the train and driving,
respectively.
$$\begin{bmatrix} P_t & P_d \end{bmatrix} \begin{bmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix} = \begin{bmatrix} P_t & P_d \end{bmatrix}$$
We get
$$\frac{1}{2}P_d = P_t$$
and also
$$P_d + P_t = 1$$
Solving we get, Pd = 2/3 and Pt = 1/3
30 / 45
Application of Discrete Markov Techniques
Examples
(b) Here, P(0) = [5/6  1/6]
$$\therefore P(2) = P(0)P^2 = \begin{bmatrix} \frac{5}{6} & \frac{1}{6} \end{bmatrix} \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{4} & \frac{3}{4} \end{bmatrix} = \begin{bmatrix} \frac{11}{24} & \frac{13}{24} \end{bmatrix}$$
Since this is an ergodic process, the limiting values of probability do not depend on the
initial conditions.
∴ Pd = 2/3 and Pt = 1/3
31 / 45
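A quick numerical check (not from the slides) of both parts of this example; the entries of P follow from the stated behaviour, with state 1 = train and state 2 = drive.

```python
import numpy as np

P = np.array([[0.0, 1.0],    # never takes the train two days in a row
              [0.5, 0.5]])   # after driving, equally likely to drive or take the train

# after 2 days starting from train, from drive, and from P(0) = [5/6, 1/6] in part (b)
for P0 in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([5/6, 1/6])):
    print(P0 @ np.linalg.matrix_power(P, 2))
# -> [0.5 0.5], [0.25 0.75], [0.4583... 0.5416...]  (the last pair is 11/24, 13/24)

# limiting values, independent of the start because the chain is ergodic
print(np.array([1.0, 0.0]) @ np.linalg.matrix_power(P, 50))   # ≈ [1/3, 2/3] -> Pt, Pd
```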
Continuous Markov Processes
32 / 45
Introduction
• Reliability problems normally concerned with systems that are:
• discrete in space, i.e., they can exist in one of a number of discrete and identifiable
states
• continuous in time; i.e., they exist continuously in one of the system states until a
transition occurs which takes them discretely to another state in which they then exist
continuously until another transition occurs
• Systems under consideration are stationary Markov processes → conditional
probability of failure or repair during any fixed interval of time is constant
• This implies failure and repair characteristics of components associated with
(negative) exponential distributions
• Limiting or steady-state probabilities are not dependent on the state residence time
distributions, only upon their mean values
• However, considerable differences can exist in the values of time-dependent state
probabilities as these are very dependent on the distributional assumptions
• Markov approach can be used for a wide range of reliability problems →
non-repairable or repairable; series-connected, parallel redundant or standby
redundant
33 / 45
General Modeling Concepts
• Consider a single repairable component with constant failure rate and repair rate →
characterized by exponential distribution
Figure: Single component repairable system: Variation of reliability and time-dependent availability.
34 / 45
General Modeling Concepts
Let's define:
• P0 (t) = probability that the component is operable at time t
• P1 (t) = probability that the component is failed at time t
• λ = failure rate and µ = repair rate
• Density functions for the operating and failed states of the system are exponential: $\lambda e^{-\lambda t}$ and $\mu e^{-\mu t}$, respectively
• Parameters λ and µ also referred to as state transition rates → rate at which system
transits from one state of system to another
35 / 45
General Modeling Concepts
36 / 45
General Modeling Concepts
• Failure and repair rates are sometimes incorrectly evaluated by counting the number
of failures or repairs in a given period of time, and dividing by the elapsed time
• Correct interpretation of state residence time is important
• The correct time value to use in the denominator is the portion of time in which the
component was in the state being considered
• Transition rate leads to the definition:
$$\text{transition rate} = \frac{\text{number of times a transition occurs from a given state}}{\text{time spent in that state}}$$
37 / 45
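An illustration of this definition with hypothetical operating and repair durations; the numbers below are invented for illustration, not taken from the slides.

```python
# assumed observed residence times for a single component, in hours
up_durations   = [900.0, 1100.0, 1000.0]   # times spent in the operating state
down_durations = [8.0, 12.0, 10.0]         # times spent in the failed state

failures = len(up_durations)     # each up period assumed to end in a failure transition
repairs  = len(down_durations)   # each down period assumed to end in a repair transition

# divide by the time spent IN THAT STATE, not by the total elapsed time
failure_rate = failures / sum(up_durations)    # lambda = 0.001 per hour
repair_rate  = repairs / sum(down_durations)   # mu = 0.1 per hour

print(failure_rate, repair_rate)
```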
Evaluating Time Dependent Probabilities
38 / 45
Evaluating Time Dependent Probabilities
• We can write:
$$P_0(t + dt) = P_0(t)(1 - \lambda\, dt) + P_1(t)\,\mu\, dt$$
$$\Rightarrow \frac{P_0(t + dt) - P_0(t)}{dt} = -\lambda P_0(t) + \mu P_1(t) \qquad (1)$$
$$\therefore \frac{dP_0(t)}{dt} = P_0'(t) = -\lambda P_0(t) + \mu P_1(t) \quad [\text{if } dt \to 0]$$
• Similarly, we can write:
$$P_1(t + dt) = P_1(t)(1 - \mu\, dt) + P_0(t)\,\lambda\, dt$$
$$\therefore \frac{dP_1(t)}{dt} = P_1'(t) = \lambda P_0(t) - \mu P_1(t) \qquad (2)$$
$$\therefore \begin{bmatrix} P_0'(t) & P_1'(t) \end{bmatrix} = \begin{bmatrix} P_0(t) & P_1(t) \end{bmatrix} \begin{bmatrix} -\lambda & \lambda \\ \mu & -\mu \end{bmatrix}$$
Note: The coefficient matrix is not a stochastic transitional probability matrix → its rows
summate to zero
39 / 45
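A minimal numerical sketch of these state equations using a simple forward-Euler step, with assumed rates λ = 0.01 and µ = 0.1 per hour (illustrative values, not from the slides); the probabilities settle to µ/(λ+µ) and λ/(λ+µ).

```python
import numpy as np

lam, mu = 0.01, 0.1
A = np.array([[-lam,  lam],
              [  mu,  -mu]])   # rows of the coefficient matrix summate to zero

p = np.array([1.0, 0.0])       # [P0(0), P1(0)]: start in the operating state
dt, T = 0.1, 500.0
for _ in range(int(T / dt)):
    p = p + dt * (p @ A)       # [P0'(t) P1'(t)] = [P0(t) P1(t)] A

print(p)                       # ≈ [0.909, 0.091] = [mu, lam] / (lam + mu)
```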
Evaluating Time Dependent Probabilities
Taking Laplace transforms of equations (1) and (2) and rearranging:
$$P_0(s) = \frac{\mu}{s + \lambda} P_1(s) + \frac{1}{s + \lambda} P_0(0) \qquad (3)$$
$$P_1(s) = \frac{\lambda}{s + \mu} P_0(s) + \frac{1}{s + \mu} P_1(0) \qquad (4)$$
• P0(s) and P1(s) are the Laplace transforms of P0(t) and P1(t)
• P0(0) and P1(0) are the initial probabilities of being in states 0 and 1
40 / 45
Evaluating Time Dependent Probabilities
• Equations 3 and 4 can now be solved for P0(s) and P1(s) as linear simultaneous
equations:
$$P_0(s) = \frac{\mu}{\lambda + \mu} \cdot \frac{P_0(0) + P_1(0)}{s} + \frac{1}{\lambda + \mu} \cdot \frac{1}{s + \lambda + \mu} [\lambda P_0(0) - \mu P_1(0)]$$
$$P_1(s) = \frac{\lambda}{\lambda + \mu} \cdot \frac{P_0(0) + P_1(0)}{s} + \frac{1}{\lambda + \mu} \cdot \frac{1}{s + \lambda + \mu} [\mu P_1(0) - \lambda P_0(0)]$$
• Performing inverse Laplace transformation we get:
$$P_0(t) = \frac{\mu}{\lambda + \mu} [P_0(0) + P_1(0)] + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu} [\lambda P_0(0) - \mu P_1(0)]$$
$$P_1(t) = \frac{\lambda}{\lambda + \mu} [P_0(0) + P_1(0)] + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu} [\mu P_1(0) - \lambda P_0(0)]$$
41 / 45
Evaluating Time Dependent Probabilities
• Note that, P0 (t) + P1 (t) = 1, thus, P0 (0) + P1 (0) = 1. We get:
$$P_0(t) = \frac{\mu}{\lambda + \mu} + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu} [\lambda P_0(0) - \mu P_1(0)]$$
$$P_1(t) = \frac{\lambda}{\lambda + \mu} + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu} [\mu P_1(0) - \lambda P_0(0)]$$
• In practice the most likely state in which the system starts is state 0, i.e. the system
is in an operable condition at zero time → P0 (0) = 1 and P1 (0) = 0
$$P_0(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda e^{-(\lambda + \mu)t}}{\lambda + \mu} \qquad (5)$$
$$P_1(t) = \frac{\lambda}{\lambda + \mu} - \frac{\lambda e^{-(\lambda + \mu)t}}{\lambda + \mu} \qquad (6)$$
42 / 45
Evaluating Limiting State Probabilities
• Equations 5 and 6 represent probabilities of system being in operating state and
failed state, respectively, as a function of time given that system started at time t =
0 in operating state
• Limiting state probabilities will be non-zero for a continuous Markov process
provided system is ergodic
• For the single component system limiting probabilities can be evaluated by letting
t→∞
• If the values of limiting state probabilities are defined as P0 and P1 for the operating
state and the failed state, then:
$$P_0 = P_0(\infty) = \frac{\mu}{\lambda + \mu} \quad \text{and} \quad P_1 = P_1(\infty) = \frac{\lambda}{\lambda + \mu}$$
• These limiting state probability expressions applicable irrespective of whether system
starts in operating state or in failed state
43 / 45
Evaluating Limiting State Probabilities
• For exponential distributions, MTTF = m = 1/λ and MTTR = r = 1/µ. Then,
$$P_0 = \frac{m}{m + r} \quad \text{and} \quad P_1 = \frac{r}{m + r}$$
• Values of P0 and P1 are generally referred to as steady-state or limiting availability
A and unavailability U, respectively
• The time dependent availability A(t) and unavailability U(t) of the system:
$$A(t) = P_0(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda e^{-(\lambda + \mu)t}}{\lambda + \mu} \qquad (5)$$
$$U(t) = P_1(t) = \frac{\lambda}{\lambda + \mu} - \frac{\lambda e^{-(\lambda + \mu)t}}{\lambda + \mu} \qquad (6)$$
• Note that A(t) + U(t) = P0 (t) + P1 (t) = 1
44 / 45
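A short sketch tying these expressions together, assuming MTTF = 1000 h and MTTR = 10 h as illustrative values (not from the slides); the limiting availability follows from m and r, and the time-dependent values from λ = 1/m and µ = 1/r.

```python
import numpy as np

m, r = 1000.0, 10.0          # assumed MTTF and MTTR, in hours
lam, mu = 1/m, 1/r           # failure rate and repair rate

A_ss = m / (m + r)           # steady-state availability   = mu/(lam+mu) ≈ 0.9901
U_ss = r / (m + r)           # steady-state unavailability = lam/(lam+mu) ≈ 0.0099

def A_t(t):   # time-dependent availability, equation (5)
    return mu/(lam + mu) + lam/(lam + mu) * np.exp(-(lam + mu)*t)

def U_t(t):   # time-dependent unavailability, equation (6)
    return lam/(lam + mu) - lam/(lam + mu) * np.exp(-(lam + mu)*t)

for t in (0.0, 10.0, 100.0):
    print(t, A_t(t), U_t(t), A_t(t) + U_t(t))   # the last column is always 1.0
print(A_ss, U_ss)
```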
Evaluating Limiting State Probabilities
45 / 45