
EEE 461: Power Systems Reliability

Topic 4
Markov Chains and Processes

A S M Jahid Hasan
Assistant Professor
Department of Electrical and Computer Engineering
North South University

August 18, 2025

1 / 45
Overview

1. Discrete Markov Chains

2. Continuous Markov Processes

2 / 45
Discrete Markov Chains

3 / 45
Introduction

• Analytical techniques for reliability evaluation described before → applicable to both non-repairable and repairable systems
• For repairable systems → assumed that the repair process is instantaneous or
negligible compared with the operating time
• Additional techniques are required if this assumption is not valid
• One very important technique that overcomes this problem → Markov approach or
Markov modelling.
• Can be applied to the random behaviour of systems that vary discretely or
continuously with respect to time and space
• The discrete case is generally known as a Markov chain and the continuous case is
generally known as a Markov process

4 / 45
Introduction

• In order for the basic Markov approach to be applicable, the behaviour of the system
must be characterized by a lack of memory
• Lack of memory → the future states of a system are independent of all past states except the immediately preceding one
• The future random behaviour of a system only depends on where it is at present, not
on where it has been in the past or how it arrived at its present position.
• In addition, the process must be stationary, sometimes called homogeneous, for
the approach to be applicable.
• This means that the behaviour of the system must be the same at all points of time
irrespective of the point of time being considered
• In other words, the probability of making a transition from one given state to
another is the same (stationary) at all times in the past and future.

5 / 45
Introduction

• Markov approach is applicable to those systems whose behaviour can be described by a probability distribution that is characterized by a constant hazard rate, i.e., Poisson and exponential distributions
• If the hazard rate is constant, the probability of making a transition between two states remains constant at all points of time
• If this probability is a function of time or the number of discrete steps, then the
process is non-stationary and designated as non-Markovian.
• In the general case of Markov models, both time and space may either be discrete or
continuous.
• In the particular case of system reliability evaluation, space is normally represented
only as a discrete function since this represents the discrete and identifiable states
in which the system and its components can reside
• Time may either be discrete or continuous

6 / 45
General Modeling Concepts
• The only requirements needed for the technique to be applicable are that:
• the system must be stationary
• the process must lack memory
• the states of the system must be identifiable
• Consider a simple system with two system states
• Probabilities of remaining in or leaving a particular state shown in Figure
• These probabilities are assumed to be constant for all times into future
• System is stationary and movement between states occurs in discrete steps →
Markov Chain

Figure: A two state Markov Chain system

7 / 45
General Modeling Concepts

• Assume the system shown is initially in state 1, and consider the first time interval
• System can remain in state 1 with a probability of 1/2 or it can move into state 2 with a probability of 1/2
• Sum of these probabilities must be unity, i.e., system must either remain in the state being considered or move out of it
• This principle applies equally to all systems irrespective of degree of complexity or ways of moving out of a given state → sum of probabilities of remaining in or moving out of a state must be unity
• Once the system is in state 2, it can remain in it with a probability of 3/4 or it can make a transition back to state 1 with a probability of 1/4 during the next time interval

8 / 45
General Modeling Concepts

Figure: Tree diagram of the two state system


9 / 45
General Modeling Concepts

• The behaviour of this system can be easily illustrated by the tree diagram shown in
the previous slide
• This figure assumes the system starts in state 1, shows the states in which the system
can reside after each step or time interval and considers up to 4 such time intervals.
• Probability of following any one branch of the tree is evaluated by multiplying the appropriate probabilities of each step of the branch
• Probability of residing in a particular state of the system after a certain number of time intervals is evaluated by summating the branch probabilities that lead to that state
• Probability of residing in state 1 after 2 time intervals: $\frac{1}{2} \times \frac{1}{2} + \frac{1}{2} \times \frac{1}{4} = \frac{3}{8}$
• If all these probabilities are summated at any time step they add up to unity
• Probability of residing in state 1 and 2 after 4 time intervals: $\frac{43}{128}$ and $\frac{85}{128}$, respectively
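These branch calculations can be checked mechanically. A minimal sketch in pure Python (the dictionary encoding of the transition probabilities and the function name are illustrative only) that enumerates every branch of the tree:

```python
from itertools import product

# One-step transition probabilities of the two-state chain:
# state 1 stays with 1/2 and moves with 1/2; state 2 stays with 3/4 and returns with 1/4
P = {1: {1: 0.5, 2: 0.5},
     2: {1: 0.25, 2: 0.75}}

def state_prob(start, target, n):
    """Probability of residing in `target` after n intervals, starting from `start`,
    found by summating the probabilities of all tree branches ending in `target`."""
    total = 0.0
    for path in product([1, 2], repeat=n):   # every possible branch of the tree
        p, current = 1.0, start
        for nxt in path:                     # multiply the probability of each step
            p *= P[current][nxt]
            current = nxt
        if current == target:
            total += p
    return total

print(state_prob(1, 1, 2))   # 0.375     = 3/8
print(state_prob(1, 1, 4))   # 0.3359375 = 43/128
print(state_prob(1, 2, 4))   # 0.6640625 = 85/128
```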

10 / 45
General Modeling Concepts

Figure: System transient behaviour

11 / 45
General Modeling Concepts

• The state probabilities after each time interval are shown in the table in the previous slide.
• The results shown in the table are represented in graphical form in the figure on the right.
• These characteristics are known as the transient behaviour or time-dependent values
of the state probabilities.
• It is evident from Figure that as the number of time intervals is increased, the values
of state probabilities tend to a constant or limiting value.
• This is characteristic of most systems which satisfy the conditions of the Markov
approach
• These limiting values of probability are known as the limiting-state or
time-independent values of the state probabilities

12 / 45
General Modeling Concepts

• Initial conditions → state of system at step 0 or zero time


• Initial conditions are generally known → problem centres around evaluating system
reliability in the future
• Transient behaviour is very dependent on initial conditions
• In previous example → system started in state 1
• If system started in state 2 rather than state 1, similar transient behaviour is obtained with different probability values
• However, limiting values of state probabilities totally independent of initial conditions
and both will tend to same limiting-state values

13 / 45
General Modeling Concepts

• A system or process for which the limiting values of state probabilities are independent of initial conditions is called ergodic
• For ergodicity → essential that every state of a system can be reached from all
other states of system either directly or indirectly through intermediate states
• If this is not possible and a particular state or states, once entered, cannot be left:
• System is not ergodic
• Above mentioned states known as absorbing states

Ergodic Systems Characteristics


• Limiting or steady-state probabilities for states independent of initial conditions
• Rate of convergence to limiting-state value:
• can be dependent on initial conditions
• very dependent on probabilities of making transitions between states of system

14 / 45
Stochastic Transitional Probability Matrix

• Tree diagram method is useful for illustrating concepts of Markov chains, but impractical for large systems or even small systems with a large number of time intervals
• Matrix solution techniques are used for these analyses
• Transition probabilities of the system can be represented by matrix P:

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix}$$

• Here, $P_{ij}$ = probability of making a transition to state j after a time interval given that it was in state i at the beginning of the time interval
• For the first time interval → $P_{11} = 1/2$, $P_{12} = 1/2$, $P_{21} = 1/4$, $P_{22} = 3/4$

15 / 45
Stochastic Transitional Probability Matrix
• Definition of $P_{ij}$ indicates:
• Row position of matrix is the state from which transition occurs
• Column position of matrix is the state to which transition occurs
• General form of the matrix for an n-state system:

$$P = \begin{bmatrix} P_{11} & P_{12} & \cdots & P_{1n} \\ P_{21} & P_{22} & \cdots & P_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ P_{n1} & P_{n2} & \cdots & P_{nn} \end{bmatrix}$$

• Summation of probabilities in each row must be unity: $\sum_{j=1}^{n} P_{ij} = 1$


16 / 45
Time Dependent Probability Evaluation

• Stochastic transitional probability matrix for first time interval:

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix}$$

• Multiplying matrix P by itself:

$$P^2 = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} = \begin{bmatrix} P_{11}P_{11} + P_{12}P_{21} & P_{11}P_{12} + P_{12}P_{22} \\ P_{21}P_{11} + P_{22}P_{21} & P_{21}P_{12} + P_{22}P_{22} \end{bmatrix} = \begin{bmatrix} 3/8 & 5/8 \\ 5/16 & 11/16 \end{bmatrix}$$

• First element of row 1, 3/8 → probability of being in state 1 after two time intervals given that it started in state 1
• Second element of row 1, 5/8 → probability of being in state 2 after two time intervals given that it started in state 1
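The same numbers fall out of a direct matrix multiplication; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Stochastic transitional probability matrix of the two-state chain
P = np.array([[1/2, 1/2],
              [1/4, 3/4]])

# Element (i, j) of P @ P: probability of being in state j after two
# time intervals given the system started in state i
print(P @ P)   # [[0.375  0.625 ]
               #  [0.3125 0.6875]]  i.e. [[3/8, 5/8], [5/16, 11/16]]
```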

17 / 45
Time Dependent Probability Evaluation

• Values in row 1 are identical to the state probabilities in the table shown in slide 11, evaluated after two time intervals given that the system commenced in state 1
• A similar comparison could be made between the values in row 2 and the probabilities that would have been evaluated after two time intervals if the system had started in state 2
• Elements of $P^2$ give all the state probabilities of the system after two time intervals, both those when starting in state 1 and those when starting in state 2
18 / 45
Time Dependent Probability Evaluation

• The principle illustrated by this example can be extended to any power of P
• Matrix $P^n$ can be defined as the matrix whose element $P_{ij}^{(n)}$ represents the probability that the system will be in state j after n time intervals given that it started in state i
• Probability of residing in any state can be evaluated provided it is known for certain in which state the system started:
• i.e., probability of starting in a particular state is unity and probability of starting in all others is zero
• Frequently this is the case in practice because, at time zero, the deterministic state of the system is known
• If initial conditions are not known deterministically → premultiply $P^n$ by the initial probability vector P(0)
• P(0) represents the probability of being in each of the system states at the start
• Values of probability contained in P(0) must summate to unity

19 / 45
Time Dependent Probability Evaluation

Case 1: System starts in state 1

Initial probability vector:

$$P(0) = \begin{bmatrix} 1 & 0 \end{bmatrix}$$

Probability vector representing state probabilities after two time intervals:

$$P(2) = P(0)P^2 = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 3/8 & 5/8 \\ 5/16 & 11/16 \end{bmatrix} = \begin{bmatrix} 3/8 & 5/8 \end{bmatrix}$$

20 / 45
Time Dependent Probability Evaluation
Case 2: System equally likely to start in state 1 or state 2

Initial probability vector:

$$P(0) = \begin{bmatrix} 1/2 & 1/2 \end{bmatrix}$$

Probability vector representing state probabilities after two time intervals:

$$P(2) = P(0)P^2 = \begin{bmatrix} 1/2 & 1/2 \end{bmatrix} \begin{bmatrix} 3/8 & 5/8 \\ 5/16 & 11/16 \end{bmatrix} = \begin{bmatrix} 11/32 & 21/32 \end{bmatrix}$$

This principle can again be extended to give: $P(n) = P(0)P^n$
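Both cases reduce to one line each of matrix algebra; a short sketch (NumPy assumed) reproducing the two probability vectors:

```python
import numpy as np

P = np.array([[1/2, 1/2],
              [1/4, 3/4]])
P2 = np.linalg.matrix_power(P, 2)

# Case 1: certain start in state 1
print(np.array([1, 0]) @ P2)       # [0.375   0.625  ]  = [3/8, 5/8]

# Case 2: equally likely start in either state
print(np.array([0.5, 0.5]) @ P2)   # [0.34375 0.65625]  = [11/32, 21/32]
```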


21 / 45
Time Dependent Probability Evaluation

• The principle shown for the two cases can again be extended to give: $P(n) = P(0)P^n$
• State probabilities at any time interval → multiply stochastic transitional
probability matrix by itself relevant number of times
• Transient behaviour → Continue the process sequentially
• Limiting values of state probabilities → continue multiplication process sufficient
number of times

22 / 45
Limiting State Probability Evaluation

• Limiting values of state probabilities of an ergodic system can be evaluated using the matrix multiplication technique
• Sensible to use this technique if transient behaviour is also required
• If only limiting state probabilities required, matrix multiplication can be tedious and
time-consuming → Efficient alternative method exists

Underlying Principle
Once limiting state probabilities have been reached by matrix multiplication, any further
multiplication by stochastic transitional probability matrix does not change values of
limiting state probabilities
∴ αP = α
Here, α → limiting probability vector
and P → stochastic transitional probability matrix

23 / 45
Limiting State Probability Evaluation

• Let $P_1$, $P_2$ be the limiting probabilities of being in states 1 and 2 respectively, then

$$\begin{bmatrix} P_1 & P_2 \end{bmatrix} P = \begin{bmatrix} P_1 & P_2 \end{bmatrix}$$

$$\Rightarrow \begin{bmatrix} P_1 & P_2 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} P_1 & P_2 \end{bmatrix}$$

$$\therefore \frac{1}{2}P_1 + \frac{1}{4}P_2 = P_1 \quad (1)$$

$$\text{and} \quad \frac{1}{2}P_1 + \frac{3}{4}P_2 = P_2 \quad (2)$$

• (1) and (2) are essentially the same equation
• Two unknowns, $P_1$, $P_2$ → another equation needed → $P_1 + P_2 = 1$ (3)
• Solving (1) and (3) gives $P_1 = 1/3$ and $P_2 = 2/3$, the limiting values approached by the transient behaviour shown earlier
• Equation 3 can be extended to any number of states
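Numerically, αP = α together with the normalizing condition is a small linear system. A hedged sketch (NumPy assumed; dropping the redundant balance equation and substituting the normalization is one of several equivalent set-ups):

```python
import numpy as np

P = np.array([[1/2, 1/2],
              [1/4, 3/4]])
n = P.shape[0]

# alpha P = alpha  <=>  (P^T - I) alpha^T = 0; drop one redundant row and
# replace it with the normalizing condition sum(alpha) = 1
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1
alpha = np.linalg.solve(A, b)
print(alpha)   # [0.33333333 0.66666667]  i.e. P1 = 1/3, P2 = 2/3
```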

24 / 45
Application of Discrete Markov Techniques
Examples
Q: Consider the 3-state system shown in Figure 8.4 and the transition probabilities
indicated. Evaluate the limiting state probabilities associated with each state.

Ans: Stochastic transitional probability matrix for this system:

$$P = \begin{bmatrix} 3/4 & 1/4 & 0 \\ 0 & 1/2 & 1/2 \\ 1/3 & 1/3 & 1/3 \end{bmatrix}$$

25 / 45
Application of Discrete Markov Techniques

Examples
Let $P_1$, $P_2$, $P_3$ → limiting state probabilities.

$$\therefore \begin{bmatrix} P_1 & P_2 & P_3 \end{bmatrix} \begin{bmatrix} 3/4 & 1/4 & 0 \\ 0 & 1/2 & 1/2 \\ 1/3 & 1/3 & 1/3 \end{bmatrix} = \begin{bmatrix} P_1 & P_2 & P_3 \end{bmatrix}$$

We get the following equations from there:

$$\frac{3}{4}P_1 + \frac{1}{3}P_3 = P_1 \quad (1)$$

$$\frac{1}{4}P_1 + \frac{1}{2}P_2 + \frac{1}{3}P_3 = P_2 \quad (2)$$

$$\frac{1}{2}P_2 + \frac{1}{3}P_3 = P_3 \quad (3)$$
26 / 45
Application of Discrete Markov Techniques

Examples
We rearrange the first two equations and replace the third one to get:

$$-\frac{1}{4}P_1 + \frac{1}{3}P_3 = 0 \quad (4)$$

$$\frac{1}{4}P_1 - \frac{1}{2}P_2 + \frac{1}{3}P_3 = 0 \quad (5)$$

$$P_1 + P_2 + P_3 = 1 \quad (6)$$

Solving these equations we get: $P_1 = \frac{4}{11}$, $P_2 = \frac{4}{11}$, $P_3 = \frac{3}{11}$
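A quick numerical cross-check (NumPy assumed): because the system is ergodic, repeated multiplication of P converges to the same limiting probabilities from any starting state:

```python
import numpy as np

P = np.array([[3/4, 1/4, 0],
              [0,   1/2, 1/2],
              [1/3, 1/3, 1/3]])

# A high matrix power approximates the limiting behaviour; any initial vector works
print(np.array([1, 0, 0]) @ np.linalg.matrix_power(P, 50))
# [0.36363636 0.36363636 0.27272727]  i.e. [4/11, 4/11, 3/11]
```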

27 / 45
Application of Discrete Markov Techniques

Examples
Q: A man either drives his car to work or catches a train. Assume that he never takes the
train two days in a row but if he drives to work, then the next day he is just as likely to
drive again as he is to catch the train. Evaluate (a) the probability that he drives to work
after (i) 2 days (ii) a long time, (b) the probability that he drives to work after (i) 2 days,
(ii) a long time if on the first day of work he tosses a fair die and drives to work only if a
2 appears.
Ans: Stochastic transitional probability matrix (state 1 = takes the train, state 2 = drives):

$$P = \begin{bmatrix} 0 & 1 \\ 1/2 & 1/2 \end{bmatrix}$$

28 / 45
Application of Discrete Markov Techniques

Examples
The transition probabilities after 2 days, i.e., 2 time intervals:

$$P^2 = \begin{bmatrix} 0 & 1 \\ 1/2 & 1/2 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1/2 & 1/2 \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix}$$

(a) If he takes the train on the first day of work, then $P(0) = \begin{bmatrix} 1 & 0 \end{bmatrix}$ and

$$P(2) = P(0)P^2 = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \end{bmatrix}$$

If he drives on the first day of work, then $P(0) = \begin{bmatrix} 0 & 1 \end{bmatrix}$ and

$$P(2) = P(0)P^2 = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} 1/4 & 3/4 \end{bmatrix}$$

29 / 45
Application of Discrete Markov Techniques

Examples
Let these limiting probabilities be $P_t$ and $P_d$ for catching the train and driving, respectively.

$$\begin{bmatrix} P_t & P_d \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1/2 & 1/2 \end{bmatrix} = \begin{bmatrix} P_t & P_d \end{bmatrix}$$

We get

$$\frac{1}{2}P_d = P_t$$

and also

$$P_d + P_t = 1$$

Solving we get, $P_d = \frac{2}{3}$ and $P_t = \frac{1}{3}$

30 / 45
Application of Discrete Markov Techniques

Examples
(b) Here, $P(0) = \begin{bmatrix} 5/6 & 1/6 \end{bmatrix}$

$$\therefore P(2) = P(0)P^2 = \begin{bmatrix} 5/6 & 1/6 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{bmatrix} = \begin{bmatrix} 11/24 & 13/24 \end{bmatrix}$$

Since this problem is an ergodic problem, the limiting values of probability do not depend on the initial conditions.
$\therefore P_d = \frac{2}{3}$ and $P_t = \frac{1}{3}$
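Both parts of this example can be verified in a few lines (NumPy assumed; states ordered train, drive as in the matrix above):

```python
import numpy as np

P = np.array([[0,   1  ],    # from train: never the train two days in a row
              [1/2, 1/2]])   # from drive: equally likely to drive or take the train

P0 = np.array([5/6, 1/6])    # part (b): drives on day 1 only if the die shows a 2

print(P0 @ np.linalg.matrix_power(P, 2))    # [0.4583... 0.5416...] = [11/24, 13/24]
print(P0 @ np.linalg.matrix_power(P, 40))   # -> [1/3, 2/3], the limiting values
```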

31 / 45
Continuous Markov Processes

32 / 45
Introduction
• Reliability problems normally concerned with systems that are:
• discrete in space, i.e., they can exist in one of a number of discrete and identifiable
states
• continuous in time; i.e., they exist continuously in one of the system states until a
transition occurs which takes them discretely to another state in which they then exist
continuously until another transition occurs
• Systems under consideration are stationary Markov processes → conditional
probability of failure or repair during any fixed interval of time is constant
• This implies failure and repair characteristics of components associated with
(negative) exponential distributions
• Limiting or steady-state probabilities are not dependent on the state residence time
distributions, only upon their mean values
• However, considerable differences can exist in the values of time-dependent state
probabilities as these are very dependent on the distributional assumptions
• Markov approach can be used for a wide range of reliability problems → non-repairable or repairable; series-connected, parallel redundant or standby redundant

33 / 45
General Modeling Concepts
• Consider a single repairable component with constant failure rate and repair rate →
characterized by exponential distribution

Figure: Single component repairable system: State space diagram.

Figure: Single component repairable system: Variation of reliability and time-dependent availability.
34 / 45
General Modeling Concepts

Let's define:
• $P_0(t)$ = probability that the component is operable at time t
• $P_1(t)$ = probability that the component is failed at time t
• $\lambda$ = failure rate and $\mu$ = repair rate
• Density functions for the operating and failed states of the system:

$$f_0(t) = \lambda e^{-\lambda t} \quad \text{and} \quad f_1(t) = \mu e^{-\mu t}$$

• Parameters λ and µ also referred to as state transition rates → rate at which system
transits from one state of system to another

35 / 45
General Modeling Concepts

• Failure rate λ → reciprocal of mean time to failure (MTTF)


• Repair rate µ → reciprocal of mean time to repair (MTTR)
• MTTF (m): Mean of times to failure counted from moment the component begins
to operate to the moment it fails
• MTTR (r): Mean of the times counted from the moment component fails to the
moment it is returned to an operable condition
$$\lambda = \frac{\text{number of failures of a component in the given period of time}}{\text{total period of time the component was operating}}$$

$$\mu = \frac{\text{number of repairs of a component in the given period of time}}{\text{total period of time the component was being repaired}}$$

36 / 45
General Modeling Concepts

• Failure and repair rates are sometimes incorrectly evaluated by counting the number
of failures or repairs in a given period of time, and dividing by the elapsed time
• Correct interpretation of state residence time is important
• The correct time value to use in the denominator is the portion of time in which the
component was in the state being considered
• Transition rate leads to the definition:
$$\text{transition rate} = \frac{\text{number of times a transition occurs from a given state}}{\text{time spent in that state}}$$
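As a worked illustration of this definition, the sketch below estimates λ and µ from hypothetical up-time and down-time records; the duration values are invented purely for illustration:

```python
# Hypothetical operating and repair durations (hours) for one component
up_times   = [950.0, 1100.0, 980.0]   # observed times to failure
down_times = [8.0, 12.0, 10.0]        # observed times to repair

# The denominator is only the time spent in the state in question,
# not the total elapsed calendar time
lam = len(up_times) / sum(up_times)       # failures per operating hour
mu  = len(down_times) / sum(down_times)   # repairs per repair hour

print(lam, mu)        # ~0.00099 /h and 0.1 /h
print(1/lam, 1/mu)    # MTTF (m) = 1010 h and MTTR (r) = 10 h
```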

37 / 45
Evaluating Time Dependent Probabilities

• Consider an incremental interval of time dt


• dt → sufficiently small so that probability of two or more events occurring during this
increment of time negligible
• Probability of being in operating state after this time interval dt, i.e., probability of
being in state 0 at time (t+dt) → [Probability of being operative at time t AND of
not failing in time dt] + [probability of being failed at time t AND of being repaired
in time dt]

38 / 45
Evaluating Time Dependent Probabilities
• We can write:

$$P_0(t + dt) = P_0(t)(1 - \lambda\, dt) + P_1(t)\,\mu\, dt$$

$$\Rightarrow \frac{P_0(t + dt) - P_0(t)}{dt} = -\lambda P_0(t) + \mu P_1(t) \quad (1)$$

$$\therefore \frac{dP_0(t)}{dt} = P_0'(t) = -\lambda P_0(t) + \mu P_1(t) \quad [\text{if } dt \to 0]$$

• Similarly, we can write:

$$P_1(t + dt) = P_1(t)(1 - \mu\, dt) + P_0(t)\,\lambda\, dt$$

$$\therefore \frac{dP_1(t)}{dt} = P_1'(t) = \lambda P_0(t) - \mu P_1(t) \quad (2)$$

$$\therefore \begin{bmatrix} P_0'(t) & P_1'(t) \end{bmatrix} = \begin{bmatrix} P_0(t) & P_1(t) \end{bmatrix} \begin{bmatrix} -\lambda & \lambda \\ \mu & -\mu \end{bmatrix}$$

Note: The coefficient matrix is not a stochastic transitional probability matrix → its rows summate to zero
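Before turning to the Laplace transform, these two equations can be integrated numerically. A minimal forward-Euler sketch (NumPy assumed; the rates λ, µ, the step size and the horizon are illustrative choices, not values from the text):

```python
import numpy as np

lam, mu = 0.01, 0.1            # illustrative failure and repair rates (per hour)
dt, T = 0.01, 200.0            # small increment dt and time horizon

A = np.array([[-lam,  lam],
              [  mu,  -mu]])   # coefficient matrix; rows summate to zero
P = np.array([1.0, 0.0])       # start operable: P0(0) = 1, P1(0) = 0

for _ in range(int(T / dt)):   # forward-Euler step of [P0' P1'] = [P0 P1] A
    P = P + dt * (P @ A)

print(P)   # approaches [mu, lam] / (lam + mu) = [0.9091, 0.0909]
```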
39 / 45
Evaluating Time Dependent Probabilities

• Equations 1 and 2 are linear differential equations with constant coefficients
• Can be easily solved using Laplace transformation:

$$sP_0(s) - P_0(0) = -\lambda P_0(s) + \mu P_1(s)$$

$$\Rightarrow P_0(s) = \frac{\mu}{s + \lambda} P_1(s) + \frac{1}{s + \lambda} P_0(0) \quad (3)$$

• Similarly, we can write:

$$P_1(s) = \frac{\lambda}{s + \mu} P_0(s) + \frac{1}{s + \mu} P_1(0) \quad (4)$$

• $P_0(s)$ and $P_1(s)$ are the Laplace transforms of $P_0(t)$ and $P_1(t)$
• $P_0(0)$ and $P_1(0)$ are the initial probabilities of being in states 0 and 1

40 / 45
Evaluating Time Dependent Probabilities

• Equations 3 and 4 can now be solved for $P_0(s)$ and $P_1(s)$ as linear simultaneous equations:

$$P_0(s) = \frac{\mu}{\lambda + \mu} \cdot \frac{P_0(0) + P_1(0)}{s} + \frac{1}{\lambda + \mu} \cdot \frac{1}{s + \lambda + \mu} \left[\lambda P_0(0) - \mu P_1(0)\right]$$

$$P_1(s) = \frac{\lambda}{\lambda + \mu} \cdot \frac{P_0(0) + P_1(0)}{s} + \frac{1}{\lambda + \mu} \cdot \frac{1}{s + \lambda + \mu} \left[\mu P_1(0) - \lambda P_0(0)\right]$$

• Performing inverse Laplace transformation we get:

$$P_0(t) = \frac{\mu}{\lambda + \mu} \left[P_0(0) + P_1(0)\right] + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu} \left[\lambda P_0(0) - \mu P_1(0)\right]$$

$$P_1(t) = \frac{\lambda}{\lambda + \mu} \left[P_0(0) + P_1(0)\right] + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu} \left[\mu P_1(0) - \lambda P_0(0)\right]$$

41 / 45
Evaluating Time Dependent Probabilities
• Note that $P_0(t) + P_1(t) = 1$, thus $P_0(0) + P_1(0) = 1$. We get:

$$P_0(t) = \frac{\mu}{\lambda + \mu} + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu} \left[\lambda P_0(0) - \mu P_1(0)\right]$$

$$P_1(t) = \frac{\lambda}{\lambda + \mu} + \frac{e^{-(\lambda + \mu)t}}{\lambda + \mu} \left[\mu P_1(0) - \lambda P_0(0)\right]$$

• In practice the most likely state in which the system starts is state 0, i.e., the system is in an operable condition at zero time → $P_0(0) = 1$ and $P_1(0) = 0$

$$P_0(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda e^{-(\lambda + \mu)t}}{\lambda + \mu} \quad (5)$$

$$P_1(t) = \frac{\lambda}{\lambda + \mu} - \frac{\lambda e^{-(\lambda + \mu)t}}{\lambda + \mu} \quad (6)$$
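Equations 5 and 6 can be evaluated directly; a short sketch (NumPy assumed, with the same illustrative rates as the integration sketch above) showing the decay from $P_0(0) = 1$ toward the limiting value $\mu/(\lambda + \mu)$:

```python
import numpy as np

lam, mu = 0.01, 0.1   # illustrative rates only

def P0(t):            # Equation 5: system starts in the operating state
    return mu/(lam + mu) + lam*np.exp(-(lam + mu)*t)/(lam + mu)

def P1(t):            # Equation 6
    return lam/(lam + mu) - lam*np.exp(-(lam + mu)*t)/(lam + mu)

for t in [0.0, 10.0, 50.0, 1e6]:
    print(t, P0(t), P1(t))   # P0 decays from 1 toward mu/(lam+mu) ~ 0.9091
```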

42 / 45
Evaluating Limiting State Probabilities
• Equations 5 and 6 represent probabilities of system being in operating state and
failed state, respectively, as a function of time given that system started at time t =
0 in operating state
• Limiting state probabilities will be non-zero for a continuous Markov process
provided system is ergodic
• For the single component system limiting probabilities can be evaluated by letting
t→∞
• If the values of limiting state probabilities are defined as P0 and P1 for the operating
state and the failed state, then:
$$P_0 = P_0(\infty) = \frac{\mu}{\lambda + \mu} \quad \text{and} \quad P_1 = P_1(\infty) = \frac{\lambda}{\lambda + \mu}$$
• These limiting state probability expressions applicable irrespective of whether system
starts in operating state or in failed state
43 / 45
Evaluating Limiting State Probabilities
• For exponential distribution, MTTF = $m = \frac{1}{\lambda}$ and MTTR = $r = \frac{1}{\mu}$. Then,

$$P_0 = \frac{m}{m + r} \quad \text{and} \quad P_1 = \frac{r}{m + r}$$
• Values of P0 and P1 are generally referred to as steady-state or limiting availability
A and unavailability U, respectively
• The time-dependent availability A(t) and unavailability U(t) of the system:

$$A(t) = P_0(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda e^{-(\lambda + \mu)t}}{\lambda + \mu} \quad (5)$$

$$U(t) = P_1(t) = \frac{\lambda}{\lambda + \mu} - \frac{\lambda e^{-(\lambda + \mu)t}}{\lambda + \mu} \quad (6)$$
• Note that $A(t) + U(t) = P_0(t) + P_1(t) = 1$
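A one-line numerical illustration (the MTTF and MTTR values are hypothetical): a component with m = 1000 h and r = 10 h is available just over 99% of the time in the long run:

```python
m, r = 1000.0, 10.0   # hypothetical MTTF and MTTR in hours
A = m / (m + r)       # limiting availability
U = r / (m + r)       # limiting unavailability
print(A, U)           # 0.990099... and 0.009900...
```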

44 / 45
Evaluating Limiting State Probabilities

• Availability is the probability of being found in the operating state at some time t in the future given that the system started in the operating state at time t = 0
• Quite different from reliability $R(t) = e^{-\lambda t}$
• Reliability is the probability of staying in operating state as a function of time
given that system started in operating state at time t = 0
• Similar conceptual relationship exists between unavailability U(t) and unreliability
Q(t)

45 / 45
