Probability and Statistics
Chapter 4 - Markov Chains
Yoonseon Oh
[email protected]
7.1. Discrete-Time Markov Chains
Markov property
Transition probability matrix
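The formal statements behind these two headings were lost in extraction; a standard formulation, in the chapter's notation (X_n the state after n transitions, p_ij the transition probabilities), is:

```latex
% Markov property: the next state depends on the past only through the present.
P(X_{n+1} = j \mid X_n = i,\, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0)
  = P(X_{n+1} = j \mid X_n = i) = p_{ij} .
% Transition probability matrix: row i collects the next-state distribution
% out of state i, so each row sums to 1.
\sum_{j=1}^{m} p_{ij} = 1 \quad \text{for all } i .
```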
Example 7.1. Alice is taking a probability class and in each week, she can be either up-to-date or she may have fallen behind. If she is up-to-date in a given week, the probability that she will be up-to-date (or behind) in the next week is 0.8 (or 0.2, respectively). If she is behind in the given week, the probability that she will be up-to-date (or behind) in the next week is 0.6 (or 0.4, respectively). We assume that these probabilities do not depend on whether she was up-to-date or behind in previous weeks, so the problem has the typical Markov chain character (the future depends on the past only through the present).
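As a minimal sketch (the state numbering is my own convention, not the slide's), Alice's two-state chain can be written as a row-stochastic matrix:

```python
# Example 7.1 as a transition matrix; row i is the distribution of next
# week's state given this week's state (0 = up-to-date, 1 = behind).
P = [[0.8, 0.2],
     [0.6, 0.4]]

# Each row is a conditional probability distribution, so it sums to 1.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12

# E.g. if Alice is behind this week, she is up-to-date next week w.p. 0.6:
print(P[1][0])  # 0.6
```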
Example 7.2. Spiders and Fly. A fly moves along a straight line in unit increments. At
each time period, it moves one unit to the left with probability 0.3, one unit to the
right with probability 0.3, and stays in place with probability 0.4, independent of the
past history of movements. Two spiders are lurking at positions 1 and m; if the fly
lands there, it is captured by a spider, and the process terminates. We want to
construct a Markov chain model, assuming that the fly starts in a position between 1
and m.
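A sketch of the construction for general m (the helper name `fly_chain` is mine; 0-based indices stand in for positions 1..m):

```python
def fly_chain(m):
    """One-step transition matrix for the spiders-and-fly model on
    positions 1..m (indices 0..m-1). Positions 1 and m are absorbing."""
    P = [[0.0] * m for _ in range(m)]
    P[0][0] = 1.0          # spider at position 1: the fly stays captured
    P[m - 1][m - 1] = 1.0  # spider at position m
    for i in range(1, m - 1):
        P[i][i - 1] = 0.3  # one unit to the left
        P[i][i] = 0.4      # stays in place
        P[i][i + 1] = 0.3  # one unit to the right
    return P

print(fly_chain(4)[1])  # [0.3, 0.4, 0.3, 0.0]
```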
Example 7.3. Machine Failure, Repair, and Replacement. A machine can be either working or broken down on a given day. If it is working, it will break down the next day with probability b, and will continue working with probability 1-b. If it is broken down on a given day, it will be repaired and be working the next day with probability r, and will continue to be broken down with probability 1-r. Model the machine by a Markov chain with the following two states.
State 1: Machine is working.
State 2: Machine is broken down.
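One can check numerically that in the long run the machine is working a fraction r/(b+r) of the days; the values of b and r below are illustrative (the slide leaves them symbolic):

```python
b, r = 0.1, 0.8  # illustrative breakdown/repair probabilities

# State 0 = working, state 1 = broken down.
P = [[1 - b, b],
     [r, 1 - r]]

# Push a distribution through the chain many times (power iteration).
dist = [1.0, 0.0]
for _ in range(200):
    dist = [dist[0] * P[0][0] + dist[1] * P[1][0],
            dist[0] * P[0][1] + dist[1] * P[1][1]]

print(round(dist[0], 4))  # long-run fraction of days working: r / (b + r)
```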
Example 7.3. (continued) Suppose that whenever the machine remains broken down for a given number of days, despite the repair efforts, it is replaced by a new working machine.
The Probability of a Path
Given a Markov chain model, we can compute the probability of any particular
sequence of future states.
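Concretely, the multiplication rule together with the Markov property reduces a path's probability to a product of one-step transition probabilities; a sketch (the function name is mine):

```python
def path_probability(P, init, path):
    """P(X_0 = path[0], ..., X_n = path[n])
       = init[path[0]] * p_{path[0] path[1]} * ... * p_{path[n-1] path[n]}."""
    prob = init[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

# Alice's chain (0 = up-to-date, 1 = behind), starting up-to-date for sure:
P = [[0.8, 0.2], [0.6, 0.4]]
print(round(path_probability(P, [1.0, 0.0], [0, 0, 1, 0]), 6))  # 0.8*0.2*0.6
```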
n-Step Transition Probabilities
• The probability law of the state at some future time, conditioned on the current state.
• r_ij(n): the probability that the state after n time periods will be j, given that the current state is i.
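The n-step probabilities r_ij(n) satisfy the Chapman-Kolmogorov recursion r_ij(n) = sum_k r_ik(n-1) p_kj, with r_ij(1) = p_ij; equivalently, the n-step matrix is the n-th power of the transition matrix. A sketch:

```python
def n_step(P, n):
    """r_ij(n) via the Chapman-Kolmogorov recursion: r(n) = r(n-1) P,
    starting from r(1) = P (so the result is the matrix power P^n)."""
    m = len(P)
    R = [row[:] for row in P]
    for _ in range(n - 1):
        R = [[sum(R[i][k] * P[k][j] for k in range(m)) for j in range(m)]
             for i in range(m)]
    return R

# Alice's chain: r_11(2) = 0.8*0.8 + 0.2*0.6 = 0.76.
P = [[0.8, 0.2], [0.6, 0.4]]
print(round(n_step(P, 2)[0][0], 4))  # 0.76
```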
Example 7.1. (continued) Recall Alice's chain from above: up-to-date this week means up-to-date next week with probability 0.8, while behind this week means up-to-date next week with probability 0.6. We now compute her n-step transition probabilities.
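Iterating the recursion shows the rows of r_ij(n) converging to the same limit regardless of the starting state, previewing the steady-state behavior of Section 7.3; a numerical check:

```python
# n-step transition probabilities for Alice's chain, large n.
P = [[0.8, 0.2], [0.6, 0.4]]

R = [row[:] for row in P]
for _ in range(49):  # compute r_ij(50)
    R = [[sum(R[i][k] * P[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]

# Both rows converge to the same distribution (0.75, 0.25).
print([round(x, 4) for x in R[0]])  # [0.75, 0.25]
print([round(x, 4) for x in R[1]])  # [0.75, 0.25]
```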
Example 7.2. (continued) Spiders and Fly. Recall the fly chain from above: states 1 and m are absorbing (spiders), and each interior state moves one unit left or right with probability 0.3 and stays in place with probability 0.4. For m = 4:
One-step transition probabilities p_ij:

      j=1   j=2   j=3   j=4
i=1   1.0   0     0     0
i=2   0.3   0.4   0.3   0
i=3   0     0.3   0.4   0.3
i=4   0     0     0     1.0

Limiting n-step transition probabilities, lim r_ij(n) as n → ∞:

      j=1   j=2   j=3   j=4
i=1   1.0   0     0     0
i=2   2/3   0     0     1/3
i=3   1/3   0     0     2/3
i=4   0     0     0     1.0
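As a numerical check of the limiting values for m = 4, one can raise the one-step matrix to a large power:

```python
P = [[1.0, 0.0, 0.0, 0.0],
     [0.3, 0.4, 0.3, 0.0],
     [0.0, 0.3, 0.4, 0.3],
     [0.0, 0.0, 0.0, 1.0]]

R = [row[:] for row in P]
for _ in range(199):  # R = P^200, effectively the n -> infinity limit
    R = [[sum(R[i][k] * P[k][j] for k in range(4)) for j in range(4)]
         for i in range(4)]

# From position 2 the fly ends at spider 1 w.p. 2/3 and at spider 4 w.p. 1/3.
print([round(x, 4) for x in R[1]])  # [0.6667, 0.0, 0.0, 0.3333]
```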
7.2 Classification of States
1. Accessible: state j is accessible from state i if r_ij(n) > 0 for some n.
2. Recurrent: state i is recurrent if, for every state j that is accessible from i, i is also accessible from j.
3. Transient: a state that is not recurrent.
Markov Chain Decomposition
• A Markov chain can be decomposed into one or more recurrent classes, plus
possibly some transient states.
• A recurrent state is accessible from all states in its class, but is not accessible
from recurrent states in other classes.
• A transient state is not accessible from any recurrent state.
• At least one, possibly more, recurrent states are accessible from a given transient state.
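The decomposition can be computed mechanically from accessibility alone; a brute-force sketch suitable for small chains (the function name is mine):

```python
def classify(P):
    """Classify each state as recurrent or transient: i is recurrent iff
    every state accessible from i can access i back."""
    m = len(P)
    # reach[i] = set of states accessible from i (reflexive transitive closure).
    reach = [{i} for i in range(m)]
    changed = True
    while changed:
        changed = False
        for i in range(m):
            for j in list(reach[i]):
                for k in range(m):
                    if P[j][k] > 0 and k not in reach[i]:
                        reach[i].add(k)
                        changed = True
    return ["recurrent" if all(i in reach[j] for j in reach[i]) else "transient"
            for i in range(m)]

# Fly chain, m = 4: the spider states are recurrent, the interior transient.
P = [[1.0, 0, 0, 0], [0.3, 0.4, 0.3, 0], [0, 0.3, 0.4, 0.3], [0, 0, 0, 1.0]]
print(classify(P))  # ['recurrent', 'transient', 'transient', 'recurrent']
```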
Properties
(a) Once the state enters (or starts in) a class of recurrent states, it stays within that class.
(b) If the initial state is transient, then the state trajectory contains an initial
portion consisting of transient states and a final portion consisting of recurrent
states from the same class.
Periodicity
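The body of this slide did not survive extraction; the standard definition, in the textbook's style, is: a recurrent class is periodic if its states can be grouped into d > 1 disjoint subsets S_1, ..., S_d so that all transitions from one subset lead to the next:

```latex
i \in S_k \ \text{and} \ p_{ij} > 0
\quad\Longrightarrow\quad
\begin{cases}
  j \in S_{k+1}, & k = 1, \ldots, d - 1,\\
  j \in S_1,     & k = d .
\end{cases}
```

A recurrent class that is not periodic is called aperiodic; only then do the n-step probabilities r_ij(n) converge.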
7.3 Steady-State Behavior
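The key result, stated here from the standard steady-state convergence theorem since the slide's formulas were lost: for a chain with a single recurrent class that is aperiodic, the r_ij(n) converge to steady-state probabilities π_j independent of the starting state i, and the π_j are the unique solution of the balance equations together with normalization:

```latex
\pi_j = \lim_{n \to \infty} r_{ij}(n), \qquad
\pi_j = \sum_{k=1}^{m} \pi_k \, p_{kj}, \qquad
\sum_{j=1}^{m} \pi_j = 1 .
```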
Example 7.5. Consider a two-state Markov chain with transition probabilities
p_11   p_12
p_21   p_22
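For the two-state case the balance equations collapse to π_1 p_12 = π_2 p_21, which gives a closed form; a sketch (the function name is mine):

```python
def two_state_steady(p12, p21):
    """Steady state of a two-state chain: the balance equation
    pi1 * p12 = pi2 * p21 plus pi1 + pi2 = 1 gives the closed form."""
    return p21 / (p12 + p21), p12 / (p12 + p21)

# With Alice's numbers from Example 7.1 (p12 = 0.2, p21 = 0.6):
pi1, pi2 = two_state_steady(0.2, 0.6)
print(round(pi1, 4), round(pi2, 4))  # 0.75 0.25
```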
Example 7.6. An absent-minded professor has two umbrellas that she uses
when commuting from home to office and back. If it rains and an umbrella is
available in her location, she takes it. If it is not raining, she always forgets to
take an umbrella. Suppose that it rains with probability p each time she
commutes, independent of other times. What is the steady-state probability that she gets wet during a commute?
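A sketch of the answer under one common state choice (my modeling convention, not necessarily the slide's): let the state be the number of umbrellas at the professor's current location. From state 0 she necessarily arrives where both umbrellas are; from state i ≥ 1 she takes one with probability p. She gets wet exactly when the state is 0 and it rains.

```python
p = 0.5  # illustrative rain probability; the slide leaves p symbolic

# State = number of umbrellas at the current location (0, 1, or 2).
P = [[0.0,     0.0,     1.0],   # 0 here -> the other location has both
     [0.0, 1.0 - p,       p],   # takes one w.p. p -> other side then has 2
     [1.0 - p,   p,     0.0]]   # takes one w.p. p -> other side then has 1

# Power-iterate an initial distribution toward the steady state.
dist = [1.0, 0.0, 0.0]
for _ in range(1000):
    dist = [sum(dist[k] * P[k][j] for k in range(3)) for j in range(3)]

wet = dist[0] * p  # wet = no umbrella available and it rains
print(round(wet, 4))  # matches p*(1-p)/(3-p) from the balance equations
```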
Long-Term Frequency Interpretations
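The slide's content is missing; the standard interpretation (for a chain with a single aperiodic recurrent class) identifies π_j with long-run visit frequencies:

```latex
\pi_j = \lim_{n \to \infty} \frac{v_{ij}(n)}{n},
```

where v_ij(n) is the expected number of visits to state j within the first n transitions, starting from i. Similarly, π_j p_jk is the long-run expected frequency of transitions from j to k.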
Birth-Death Processes
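In a birth-death chain the state moves only to neighboring states (up with probability b_i, down with d_i, otherwise staying put), and cutting the chain between states i and i+1 yields the local balance equations, which determine π recursively:

```latex
\pi_i \, b_i = \pi_{i+1} \, d_{i+1}
\quad\Longrightarrow\quad
\pi_{i+1} = \pi_i \, \frac{b_i}{d_{i+1}}, \qquad i = 0, 1, \ldots, m - 1 .
```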
7.4. Absorption Probabilities and Expected Time to Absorption
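Fixing an absorbing state s, the probabilities a_i of eventually reaching s satisfy a_s = 1, a_i = 0 for every other absorbing state, and a_i = sum_j p_ij a_j for all transient i. A fixed-point-iteration sketch on the fly chain with m = 4, taking s = position 1:

```python
P = [[1.0, 0.0, 0.0, 0.0],
     [0.3, 0.4, 0.3, 0.0],
     [0.0, 0.3, 0.4, 0.3],
     [0.0, 0.0, 0.0, 1.0]]

a = [1.0, 0.0, 0.0, 0.0]  # boundary: a_1 = 1, a_4 = 0; interior starts at 0
for _ in range(500):      # iterate a_i <- sum_j p_ij a_j on transient states
    a = [1.0,
         sum(P[1][j] * a[j] for j in range(4)),
         sum(P[2][j] * a[j] for j in range(4)),
         0.0]

# Matches the limiting matrix seen earlier: 2/3 and 1/3 from the interior.
print([round(x, 4) for x in a])  # [1.0, 0.6667, 0.3333, 0.0]
```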
Example 7.10
Expected Time to Absorption
Example 7.12. Spiders and Fly. Consider the spiders-and-fly model of Example
7.2. This corresponds to the Markov chain shown in Fig. 7.19. The states
correspond to possible fly positions, and the absorbing states 1 and m
correspond to capture by a spider.
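The expected times to capture μ_i satisfy μ_i = 0 at the spider positions and μ_i = 1 + sum_j p_ij μ_j in between; solving these for m = 4 by fixed-point iteration:

```python
P = [[1.0, 0.0, 0.0, 0.0],
     [0.3, 0.4, 0.3, 0.0],
     [0.0, 0.3, 0.4, 0.3],
     [0.0, 0.0, 0.0, 1.0]]

mu = [0.0, 0.0, 0.0, 0.0]  # mu = 0 at the absorbing spider states
for _ in range(500):
    mu = [0.0,
          1.0 + sum(P[1][j] * mu[j] for j in range(4)),
          1.0 + sum(P[2][j] * mu[j] for j in range(4)),
          0.0]

# By symmetry both interior states take 10/3 steps to capture on average.
print([round(x, 4) for x in mu])  # [0.0, 3.3333, 3.3333, 0.0]
```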
Mean First Passage and Recurrence Times
We consider a Markov chain with a single recurrent class.
Mean recurrence time
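For a fixed recurrent state s, the mean first passage times t_i (expected number of transitions to reach s from i) and the mean recurrence time t_s* of s are the unique solutions of:

```latex
t_i = 1 + \sum_{j=1}^{m} p_{ij} \, t_j \quad (i \neq s), \qquad t_s = 0,
\qquad
t_s^{*} = 1 + \sum_{j=1}^{m} p_{sj} \, t_j .
```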