CHAPMAN & HALL/CRC
Texts in Statistical Science Series
Series Editors
Bradley P. Carlin, University of Minnesota, USA
Julian J. Faraway, University of Bath, UK
Martin Tanner, Northwestern University, USA
Jim Zidek, University of British Columbia, Canada
Modeling and
Analysis of
Stochastic
Systems
Second Edition
Vidyadhar G. Kulkarni
Department of Statistics and Operations Research
University of North Carolina at Chapel Hill
U.S.A.
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2009 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
To
my wife
Radhika
Jack and Harry were lost over a vast farmland while on their balloon ride.
When they spotted a bicyclist on a trail going through the farmland below, they
lowered their balloon and yelled, “Good day, sir! Could you tell us where we
are?”
The bicyclist looked up and said, “Sure! You are in a balloon!”
Jack turned to Harry and said, “This guy must be a mathematician!”
“What makes you think so?” asked Harry.
“Well, his answer is correct, but totally useless!”
The author sincerely hopes that a student mastering this book will be able to
use stochastic models to obtain correct as well as useful answers.
Contents
Preface xix
1 Introduction 1
3.1 Definitions 55
3.2 Cumulative Distribution Function of T 56
3.3 Absorption Probabilities 60
3.4 Expectation of T 69
3.5 Generating Function and Higher Moments of T 74
3.6 Computational Exercises 76
3.7 Conceptual Exercises 81
Epilogue 489
References 533
Index 539
Preface
Of course, if the results of Step 2 show that the model does not “fit” the real-life situation, then one needs to modify the model and repeat Steps 1 and 2 until a satisfactory solution emerges. Then one proceeds to Step 3. As the title of the book suggests, we emphasize the first two steps. The selection, the organization, and the treatment of topics in this book are dictated by the emphasis on modeling and analysis.
Based on my teaching experience of over 25 years, I have come to the conclusion
that it is better (from the students’ point of view) to introduce Markov chains before renewal theory. This enables the students to start building interesting stochastic models right away in diverse areas such as manufacturing, supply chains, genetics, communications, biology, queueing, and inventory systems. This gives them
a feel for the modeling aspect of the subject early in the course. Furthermore, the
analysis of Markov chain models uses tools from matrix algebra. The students feel
comfortable with these tools since they can use the matrix-oriented packages, such
as Matlab, to do numerical experimentation. Nothing gives them better confidence
in the subject than seeing the analysis produce actual numbers that quantify their in-
tuition. We have also developed a collection of Matlab-based programs that can be
downloaded from:
1. www.unc.edu/∼vkulkarn/Maxim/maxim.zip
2. www.unc.edu/∼vkulkarn/Maxim/maximgui.zip
The instructions for using them are included in the readme files in these two zip files.
After students have developed familiarity with Markov chains, they are ready for renewal theory. They can now appreciate it because they have built many models of renewal, renewal-reward, and regenerative processes. They are also better prepared to use the tools of Laplace transforms.
I am aware that this sequence is contrary to the more prevalent approach that starts with renewal theory. Although it is intellectually appealing to start with renewal theory, I found that it confuses and frustrates students, and it does not give them a feel for the modeling aspect of the subject early on. In this new edition, I have also changed the sequence of topics within Markov chains; I now cover the first passage times before the limiting behavior. This seems more natural since the concepts of transience and recurrence depend upon the first passage times.
The emphasis on the analysis of the stochastic models requires careful development of the major useful classes of stochastic processes: discrete and continuous time Markov chains, renewal processes, regenerative processes, and Markov regenerative processes. In the new edition, I have included a chapter on diffusion processes. In order to keep the length of the book under control, some topics from the earlier edition have been deleted: discussion of numerical methods, stochastic ordering, and some details from the Markov renewal theory. We follow a common plan of study for each class: characterization, transient analysis, first passage times, limiting behavior, and cost/reward models. The main aim of the theory is to enable the students to “solve” or “analyze” the stochastic models and to give them general tools to do this, rather than show special tricks that work in specific problems.
The third aspect, the implementation, involves actually using the results of Steps
1 and 2 to manage the “real-life” situation that we are interested in managing. This
requires the knowledge of statistics (for estimating the parameters of the model) and
organizational science (how to persuade the members of an organization to follow
the new solution, and how to set up an organizational structure to facilitate it), and
hence is beyond the scope of this book, although, admittedly, it is a very important
part of the process.
The book is designed for a two-course sequence in stochastic models. The first
six chapters can form the first course, and the last four chapters, the second course.
The book assumes that the students have had a course in probability theory (measure
theoretic probability is not needed), advanced calculus (familiarity with differential
and difference equations, transforms, etc.), and matrix algebra, and a general level
of mathematical maturity. The appendix contains a brief review of relevant topics. In
the second edition, I have removed the appendix devoted to stochastic ordering, since
the corresponding material is deleted from the chapters on discrete and continuous
time Markov chains. I have added two appendices: one collects relevant results from
analysis, and the other from differential and difference equations. I find that these
results are used often in the text, and hence it is useful to have them readily accessible.
The book uses a large number of examples to illustrate the concepts as well as
computational tools and typical applications. Each chapter also has a large number
of exercises collected at the end. The best way to learn the material of this course
is by doing the exercises. Where applicable, the exercises have been separated into
three classes: modeling, computational, and conceptual. Modeling exercises do not involve analysis, but may involve computations to derive the parameters of the problem. A computational exercise may ask for a numerical or algebraic answer. Some computational exercises may involve model building as well as analysis. A conceptual exercise generally involves proving some theorem, or fine-tuning the understanding of some concepts introduced in the chapter, or it may introduce new concepts.
Computational exercises are not necessarily easy, and conceptual exercises are not
necessarily hard. I have deleted many exercises from the earlier edition, especially
those that I found I never assigned in my classes. Many new exercises have been
added. I found it useful to assign a model building exercise and then the correspond-
ing analysis exercise. The students should be encouraged to use computers to obtain
the solutions numerically.
It is my belief that a student, after mastering the material in this book, will be well
equipped to build and analyze useful stochastic models of situations that he or she
will face in his or her area of interest. It is my fond hope that the students will see
a stochastic model lurking in every corner of their world as a result of studying this
book.
Vidyadhar Kulkarni
Department of Statistics and Operations Research
University of North Carolina
Chapel Hill, NC
CHAPTER 1
Introduction
The discipline of operations research was born out of the need to solve military
problems during World War II. In one story, the air force was using the bullet holes
on the airplanes used in combat duty to decide where to put extra armor plating. They
thought they were approaching the problem in a scientific way until someone pointed
out that they were collecting the bullet hole data from the planes that returned safely
from their sorties.
Consider a system that evolves randomly in time, for example, the stock market index, the inventory in a warehouse, the queue of customers at a service station, the water level in a reservoir, or the state of the machines in a factory.
Suppose we observe this system at discrete time points n = 0, 1, 2, · · ·, say, every
hour, every day, every week, etc. Let Xn be the state of the system at time n. For
example, Xn can be the Dow-Jones index at the end of the n-th working day; the
number of unsold cars on a dealer’s lot at the beginning of day n; the intensity of the
n-th earthquake (measured on the Richter scale) to hit the continental United States
in this century; or the number of robberies in a city on day n, to name a few. We say
that {Xn , n ≥ 0} is a discrete-time stochastic process describing the system.
If the system is observed continuously in time, with X(t) being its state at time
t, then it is described by a continuous-time stochastic process {X(t), t ≥ 0}. For
example, X(t) may represent the number of failed machines in a machine-shop at
time t, the position of a hurricane at time t, or the amount of money in a bank account
at time t, etc.
More formally, a stochastic process is a collection of random variables
{X(τ ), τ ∈ T }, indexed by the parameter τ taking values in the parameter set T .
The random variables take values in a set S, called the state-space of the stochastic process. In many applications the parameter τ represents time. Throughout this book
we shall encounter two cases:
1. T = {0, 1, 2, · · ·}. In this case we write {Xn , n ≥ 0} instead of {X(τ ), τ ∈ T }.
2. T = [0, ∞). In this case we write {X(t), t ≥ 0} instead of {X(τ ), τ ∈ T }.
[Figure 1.1: typical sample paths of stochastic processes: (a) continuous-time, discrete state space; (b) continuous-time, continuous state space; (c) discrete-time, discrete state space.]
In this book we develop the tools for the analysis of the stochastic processes and study
several special classes of stochastic processes in great detail. To do this we need
a mathematically precise method to describe a stochastic process unambiguously.
Since a stochastic process is a collection of random variables, it makes sense to start
by reviewing how one “describes” a single random variable.
From elementary probability theory (see Appendices A and B) we see that a single
random variable X is completely described by its cumulative distribution function
(cdf)
F (x) = P(X ≤ x), −∞ < x < ∞. (1.1)
A multivariate random variable (X1 , X2 , · · · , Xn ) is completely described by its
joint cdf
F (x1 , x2 , · · · , xn ) = P(X1 ≤ x1 , X2 ≤ x2 , · · · , Xn ≤ xn ), (1.2)
for all −∞ < xi < ∞ and i = 1, 2, · · · , n. Thus if the parameter set T is finite, the
stochastic process {X(τ ), τ ∈ T } is a multi-variate random variable, and hence is
completely described by the joint cdf. But what about the case when T is not finite?
Consider the case T = {0, 1, 2, · · ·} first. One could naively look for a direct extension of the finite dimensional joint cdf to an infinite dimensional case as follows:
F (x0 , x1 , · · ·) = P(X0 ≤ x0 , X1 ≤ x1 , · · ·), −∞ < xi < ∞, i = 0, 1, · · · . (1.3)
However, the probability on the right hand side of the above equation is likely to be zero or one in most cases. Thus such a function is not likely to give much information regarding the stochastic process {Xn , n ≥ 0}. Hence we need to look for an alternative. Suppose we are given a family of finite dimensional joint cdfs {Fn , n ≥ 0} such that
Fn (x0 , x1 , · · · , xn ) = P(X0 ≤ x0 , X1 ≤ x1 , · · · , Xn ≤ xn ), (1.4)
for all −∞ < xi < ∞, and i = 0, 1, · · · , n. Such a family is called consistent if it
satisfies
Fn (x0 , x1 , · · · , xn ) = Fn+1 (x0 , x1 , · · · , xn , ∞), (1.5)
for all −∞ < xi < ∞, and i = 0, 1, · · · , n, n ≥ 0. A discrete-time stochastic
process is completely described by a consistent family of finite dimensional joint
cdfs, that is, any probabilistic question about {Xn , n ≥ 0} can be answered in terms
of {Fn , n ≥ 0}. Technically speaking, what one means by “completely describe” a
stochastic process is to construct a probability space (Ω, F , P) on which the process
is defined. The more curious reader is referred to more advanced texts on stochastic
processes to answer further questions.
Next we turn to the case of T = [0, ∞). Unfortunately the matter of completely
describing a continuous time stochastic process {X(t), t ≥ 0} is not so simple, since
this case deals with an uncountable number of random variables. The situation can
be simplified if we can make certain assumptions about the continuity of the sample
paths of the process. We shall not deal with the details here, but shall give the main
result:
Suppose the sample paths of {X(t), t ≥ 0} are, with probability 1, right continuous with left limits, i.e.,
lim_{s↓t} X(s) = X(t), (1.6)
and lim_{s↑t} X(s) exists for each t. Furthermore, suppose the sample paths have a
finite number of discontinuities in a finite interval of time. Then {X(t), t ≥ 0} is
completely described by a consistent family of finite dimensional joint cdfs
Ft1 ,t2 ,···,tn (x1 , x2 , · · · , xn ) = P (X(t1 ) ≤ x1 , X(t2 ) ≤ x2 , · · · , X(tn ) ≤ xn ),
(1.7)
for all −∞ < xi < ∞, i = 1, · · · , n, n ≥ 1 and all 0 ≤ t1 < t2 < · · · < tn .
In a sense a sequence of iid random variables is the simplest kind of stochastic pro-
cess, but it does not have any interesting structure. However, one can construct more
complex and interesting stochastic processes from it as shown in the next example.
Now that we know what a stochastic process is and how to describe one precisely, the next question is: what do we do with it?
In Section 1.1 we have seen several situations where stochastic processes appear.
For each situation we have a specific goal for studying it. The study of stochastic
processes will be useful if it somehow helps us achieve that goal. In this book we shall
develop a set of tools to help us achieve those goals. This will lead us to the study
of special classes of stochastic processes: Discrete-time Markov chains (Chapters
2, 3, 4), Poisson processes (Chapter 5), continuous-time Markov chains (Chapter
6), renewal and regenerative processes (Chapter 8), Markov-regenerative processes
(Chapter 9) and Brownian motion (Chapter 10).
For each of these classes, we follow a more or less standard format of study, as
described below.
1.3.1 Characterization
The first step is to define the class of stochastic processes under consideration. Then
we look for ways of uniquely characterizing the stochastic process. In many cases
we find that the consistent family of finite dimensional joint probability distributions
can be described by a rather compact set of parameters. Identifying these parameters
is part of this step.
1.3.2 Transient Behavior
The second step in the study of the stochastic process is to study its transient behavior. We concentrate on two aspects of transient behavior. First, we study the marginal distribution, that is, the distribution of Xn or X(t) for a fixed n or t. We develop
methods of computing this distribution. In many cases, the distribution is too hard
to compute, in which case we satisfy ourselves with the moments or transforms of
the random variable. (See Appendices B through F.) Mostly we shall find that the
computation of transient behavior is quite difficult. It may involve computing matrix
powers, or solving sets of simultaneous differential equations, or inverting trans-
forms. Very few processes have closed form expressions for transient distributions,
for example, the Poisson process. Second, we study the occupancy times, that is, the
expected total time the process spends in different states up to time n or t.
1.3.3 Limiting Behavior
Since computing the transient distribution is intractable in most cases, we next turn our attention to the limiting behavior, viz., studying the convergence of Xn or X(t)
as n or t tends to infinity. Now, there are many different modes of convergence of a
sequence of random variables: convergence in distribution, convergence in moments,
convergence in probability, and convergence with probability one (or almost sure
convergence). These are described in Appendix G. Different stochastic processes ex-
hibit different types of convergence. For example, we generally study convergence
in distribution in Markov chains. For renewal processes, we obtain almost sure con-
vergence results.
The first question is whether the convergence occurs at all, and whether the limit is unique when it does occur. This is generally a theoretical exploration, and proceeds in a theorem-proof fashion. The second part is how to compute the limit if the convergence does occur and the limit is unique. Here we may need mathematical tools like matrix algebra, systems of difference and differential equations, Laplace and Laplace-Stieltjes transforms, generating functions, and, of course, numerical methods. The study of
the limiting distributions forms a major part of the study of the stochastic processes.
1.3.5 Cost/Reward Models
Stochastic processes in this book are intended to be used to model systems evolving in time. These systems typically incur costs or earn rewards depending on their
evolution. In practice, the system designer has a specific set of operating policies in
mind. Under each policy the system evolution is modeled by a specific stochastic
process which generates specific costs or rewards. Thus the analysis of these costs
and rewards (we shall describe the details in later chapters) is critical in evaluating
comparative worth of the policies. Thus we develop methods of computing different
cost and reward criteria for the stochastic processes.
If the reader keeps in mind these five main aspects that we study for each class
of stochastic processes, the organization of the rest of the book will be relatively
transparent. The main aim of the book is always to describe general methods of
studying the above aspects, and not to go into special methods, or “tricks,” that work
(elegantly) for specialized problems, but fail in general. For this reason, some of our analysis may seem a bit long-winded to those who are already familiar with the tricks. However, it is the general philosophy of this book that the knowledge of general methods is superior to that of the tricks. Finally, the general methods can be adapted for implementation on computers, while the tricks cannot. This is important, since computers are a great tool for solving practical problems.
CHAPTER 2
Discrete-Time Markov Chains: Transient Behavior
Definition 2.1 Discrete Time Markov Chain. A stochastic process {Xn , n ≥
0} with countable state-space S is called a DTMC if
(i) for all n ≥ 0, Xn ∈ S,
(ii) for all n ≥ 0, and i, j ∈ S,
P(Xn+1 = j|Xn = i, Xn−1 , Xn−2 , · · · , X0 ) = P(Xn+1 = j|Xn = i). (2.1)
Equation 2.1 is a formal way of stating the Markov property for discrete-time
stochastic processes with countable state spaces. We next define an important sub-
class of DTMCs called the time-homogeneous DTMCs.
In short, the elements on each row of a stochastic matrix are non-negative and add
up to one. The relevance of this definition is seen from the next theorem.
∑_{j∈S} pij = ∑_{j∈S} P(Xn+1 = j|Xn = i) = P(Xn+1 ∈ S|Xn = i) = 1,
since, according to the definition of a DTMC, Xn+1 ∈ S with probability 1.
Next, following the general road map laid out in Chapter 1, we turn our attention
to the question of characterization of a DTMC. Clearly, any stochastic matrix can
be thought of as a transition matrix of a DTMC. This generates a natural question:
is a DTMC completely characterized by its transition probability matrix? In other
words, are the finite dimensional distributions of a DTMC completely specified by
its transition probability matrix? The answer is no, since we cannot derive the distribution of X0 from the transition probability matrix, since its elements are conditional probabilities. So suppose we specify the distribution of X0 externally. Let
ai = P(X0 = i), i ∈ S, (2.5)
and
a = [ai ]i∈S (2.6)
be a row vector representing the probability mass function (pmf) of X0 . We say that
a is the initial distribution of the DTMC.
Next we ask: is a DTMC completely described by its transition probability matrix and its initial distribution? The following theorem answers this question in the affirmative. The reader is urged to read the proof, since it clarifies the role played by the Markov property and the time-homogeneity of the DTMC.
Proof. We shall prove the theorem by showing how we can compute the finite dimensional joint probability mass function P(X0 = i0 , X1 = i1 , · · · , Xn = in ) in terms of a and P . Using Equation 2.5 we get
ai0 = P(X0 = i0 ), i0 ∈ S.
Next we have
P(X0 = i0 , X1 = i1 ) = P(X1 = i1 |X0 = i0 )P(X0 = i0 )
= ai0 pi0 ,i1
by using the definition of P . Now, as an induction hypothesis, assume that
P(X0 = i0 , X1 = i1 , · · · , Xk = ik ) = ai0 pi0 ,i1 pi1 ,i2 · · · pik−1 ,ik (2.7)
12 DISCRETE-TIME MARKOV CHAINS: TRANSIENT BEHAVIOR
for k = 1, 2, · · · , n − 1. We shall show that it is true for k = n. We have
P(X0 = i0 , X1 = i1 , · · · , Xn = in )
= P(Xn = in |Xn−1 = in−1 , · · · , X1 = i1 , X0 = i0 ) ·
P(X0 = i0 , X1 = i1 , · · · , Xn−1 = in−1 )
= P(Xn = in |Xn−1 = in−1 )P(X0 = i0 , X1 = i1 , · · · , Xn−1 = in−1 )
(by Markov Property)
= pin−1 ,in P(X0 = i0 , X1 = i1 , · · · , Xn−1 = in−1 )
(by time homogeneity)
= pin−1 ,in ai0 pi0 ,i1 pi1 ,i2 · · · pin−2 ,in−1
(by induction hypothesis),
which can be rearranged to show that the induction hypothesis holds for k = n.
Hence the result follows.
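The computation in this proof is easy to mechanize. Below is a minimal sketch in Python (an assumed choice; the companion programs mentioned in the preface are Matlab-based) of the product formula P(X0 = i0 , · · · , Xn = in ) = ai0 pi0,i1 · · · pin−1,in; the two-state matrix and the initial distribution used to exercise it are illustrative, not from the text.

```python
def path_probability(a, P, path):
    """P(X_0 = i_0, ..., X_n = i_n) = a_{i_0} p_{i_0,i_1} ... p_{i_{n-1},i_n},
    as derived in the proof above.  Here a is the initial distribution,
    P the one-step transition matrix, and path the sequence (i_0, ..., i_n)."""
    prob = a[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

# Illustrative two-state DTMC (states 0 and 1):
a = [0.5, 0.5]
P = [[0.8, 0.2],
     [0.7, 0.3]]
print(path_probability(a, P, [0, 0, 1]))   # 0.5 * 0.8 * 0.2 = 0.08
```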
The transition probability matrix of a DTMC can be represented graphically by its
transition diagram, which is a directed graph with as many nodes as there are states
in the state-space, and a directed arc from node i to node j if pij > 0. In particular, if
pii > 0, there is a self loop from node i to itself. The dynamic behavior of a DTMC
is best visualized via its transition diagram as follows: imagine a particle that moves
from node to node by choosing the outgoing arcs from the current node with the
corresponding probabilities. In many cases it is easier to describe the DTMC by its
transition diagram, rather than displaying the transition matrix.
[Figure 2.1: transition diagram of a DTMC with states 1, 2, and 3; the arcs are labeled with the corresponding transition probabilities.]
2.2 Examples
Example 2.3 Two-State DTMC. One of the simplest DTMCs is one with two
states, labeled 1 and 2. Thus S = {1, 2}. Such a DTMC has a transition matrix
as follows:
[ α      1 − α
  1 − β  β ],
where 0 ≤ α, β ≤ 1. The transition diagram is shown in Figure 2.2. Thus if the
DTMC is in state 1, it jumps to state 2 with probability 1 − α; if it is in state 2, it
jumps to state 1 with probability 1 − β, independent of everything else.
[Figure 2.2: transition diagram of the two-state DTMC, with self-loop probabilities α at state 1 and β at state 2, and arcs 1 − α from state 1 to state 2 and 1 − β from state 2 to state 1.]
Example 2.4 Weather Model. Suppose the weather in a city is classified each day as either sunny or rainy. Based on previous data we have determined that if it is sunny today, there is an 80% chance
that it will be sunny tomorrow regardless of the past weather; whereas, if it is rainy
today, there is a 30% chance that it will be rainy tomorrow, regardless of the past.
Let Xn be the weather on day n. We shall label sunny as state 1, and rainy as state 2.
Then {Xn , n ≥ 0} is a DTMC on S = {1, 2} with transition probability matrix
[ .8  .2
  .7  .3 ].
Clearly, the DTMC has a higher tendency to move to state 1, thus implying that this
is a model of the weather at a sunny place!
Example 2.5 Clinical Trials. Suppose two drugs are available to treat a particular
disease, and we need to determine which of the two drugs is more effective. This is
generally accomplished by conducting clinical trials of the two drugs on actual patients. Here we describe a clinical trial setup that is useful if the response of a patient
to the administered drug is sufficiently quick, and can be classified as “effective” or
“ineffective.” Suppose drug i is effective with probability pi , (i = 1, 2). In practice
the values of p1 and p2 are unknown, and the aim is to determine if p1 ≥ p2 or
p2 ≥ p1 . Ethical reasons compel us to use the better drug on more patients. This is
achieved by using the play the winner rule as follows.
The first patient is given either drug 1 or 2 at random. If the nth patient is given
drug i (i = 1, 2) and it is observed to be effective for that patient, then the same drug
is given to the (n + 1)-st patient; if it is observed to be ineffective then the (n + 1)-st patient is given the other drug. Thus we stick with a drug as long as its results
are good; when we get a bad result, we switch to the other drug – hence the name
“play the winner.” Let Xn be the drug (1 or 2) administered to the n-th patient. If the
successive patients are chosen from a completely randomized pool, then we see that
P(Xn+1 = 1|Xn = 1, Xn−1 , · · · , X1 )
= P(drug 1 is effective on the n-th patient) = p1 .
We can similarly derive P(Xn+1 = j|Xn = i; history) for all other (i, j) combinations, thus showing that {Xn , n ≥ 0} is a DTMC. Its transition probability matrix is given by

[ p1      1 − p1
  1 − p2  p2 ].
If p1 > p2 , the DTMC has a higher tendency to move to state 1, and thus drug 1 (the better drug) is used more often. Thus the ethical purpose is served by the play-the-winner rule.
Example 2.7 Two-Machine Workshop. Suppose a work shop has two identical
machines as described in Example 2.6. The two machines behave independently of
each other. Let Xn be the number of working machines on day n. Is {Xn , n ≥ 0} a
DTMC?
The state-space is S = {0, 1, 2}. Next we verify the Markov property given by
Equation 2.1. For example, we have
P(Xn+1 = 0|Xn = 0, Xn−1 , · · · , X0 )
= P(Xn+1 = 0|Both machines are down on day n, Xn−1 , · · · , X0 )
= P(Both machines are down on day n + 1| Both machines are down on day n)
= pd pd .
Similarly we can verify that P(Xn+1 = j|Xn = i, Xn−1 , · · · , X0 ) depends only on
i and j for all i, j ∈ S. Thus {Xn , n ≥ 0} is a DTMC on state-space S = {0, 1, 2}
with transition probability matrix given by

P = [ pd²           2 pd (1 − pd)                (1 − pd)²
      pd (1 − pu)   pd pu + (1 − pd)(1 − pu)     pu (1 − pd)
      (1 − pu)²     2 pu (1 − pu)                pu² ].
This example can be extended to r ≥ 2 machines in a similar fashion.
Example 2.9 Random Walk. Let {Zn , n ≥ 1} be a sequence of iid random variables with common pmf
αk = P(Zn = k), k = 0, ±1, ±2, · · · .
Define
X0 = 0, Xn = ∑_{k=1}^{n} Zk , n ≥ 1.
The process {Xn , n ≥ 0} has state-space S = {0, ±1, ±2, · · ·}. To verify that it is a DTMC, we see that, for all i, j ∈ S,
P(Xn+1 = j|Xn = i, Xn−1 , · · · , X0 )
= P(∑_{k=1}^{n+1} Zk = j | ∑_{k=1}^{n} Zk = i, Xn−1 , · · · , X0 )
= P(Zn+1 = j − i) = αj−i .
Thus {Xn , n ≥ 0} is a DTMC with transition probabilities
pi,j = αj−i , i, j ∈ S.
This random walk is called space-homogeneous, or state-independent, since Zn , the size of the n-th step, does not depend on the position of the random walk at time n.
Example 2.11 Gambler’s Ruin. Consider two gamblers, A and B, who have a
combined fortune of N dollars. They bet one dollar each on the toss of a coin. If the
coin turns up heads, A wins a dollar from B, and if the coin turns up tails, B wins
a dollar from A. Suppose the successive coin tosses are independent, and the coin
turns up heads with probability p and tails with probability q = 1 − p. The game
ends when either A or B is broke (or ruined).
Let Xn denote the fortune of gambler A after the n-th toss. We shall assume the
coin tossing continues after the game ends, but no money changes hands. With this
convention we can analyze the stochastic process {Xn , n ≥ 0}. Obviously, if Xn =
0 (A is ruined) or Xn = N (B is ruined), Xn+1 = Xn . If 0 < Xn < N , we have
Xn+1 = { Xn + 1 with probability p,
         Xn − 1 with probability q.
This shows that {Xn , n ≥ 0} is a DTMC on state-space S = {0, 1, 2, · · · , N }. Its transition diagram is shown below.
[Figure 2.3: transition diagram of the gambler's ruin chain: self-loops with probability 1 at the absorbing states 0 and N; from each interior state, an arc to the right with probability p and an arc to the left with probability q.]
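A short simulation makes the model concrete. The following Monte Carlo sketch (Python is an assumed language choice, and the parameters are illustrative) estimates the probability that gambler A is eventually ruined; by symmetry it should return a value near 0.5 for p = 0.5 and i = N/2.

```python
import random

def ruin_probability(N=10, i=5, p=0.5, trials=100_000, seed=7):
    """Estimate the probability that gambler A goes broke, starting with
    i dollars out of a combined fortune of N, winning each bet w.p. p."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = i
        while 0 < x < N:                       # play until someone is broke
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials

print(ruin_probability())
```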
Example 2.13 Urn Model. Consider two urns labeled A and B, containing a total
of N white balls and N red balls among them. An experiment consists of picking one
ball at random from each urn and interchanging them. This experiment is repeated
in an independent fashion. Let Xn be the number of white balls in urn A after n
repetitions of the experiment. Assume that initially urn A contains all the white balls,
and urn B contains all the red balls. Thus X0 = N . Note that Xn tells us precisely
the contents of the two urns after n experiments. That {Xn , n ≥ 0} is a DTMC
on state-space S = {0, 1, · · · , N } can be seen from the following calculation. For 0 < i < N ,
P(Xn+1 = i + 1|Xn = i, Xn−1 , · · · , X0 )
= P(a red ball from A and a white ball from B is picked in the (n + 1)-st experiment)
= ((N − i)/N ) · ((N − i)/N ) = pi,i+1 .
The other transition probabilities can be computed similarly to see that {Xn , n ≥
0} is a random walk on S = {0, 1, · · · , N } with the following parameters:
r0 = 0, p0 = 1,
qi = (i/N )², ri = 2 (i/N ) ((N − i)/N ), pi = ((N − i)/N )², 0 < i < N,
rN = 0, qN = 1.
Thus the random walk has reflecting barriers at 0 and N . This urn model was used
initially by Ehrenfest to model diffusion of molecules across a permeable membrane.
One can think of the white balls and red balls as the molecules of two different gases
and the switching mechanism as the model for the diffusion across the membrane. It
also appears as the Moran model in genetics.
Example 2.14 Brand Switching. A customer chooses among three brands of beer,
say A, B, and C, every week when he buys a six-pack. Let Xn be the brand he
purchases in week n. From his buying record so far, it has been determined that
{Xn , n ≥ 0} is a DTMC with state-space S = {A, B, C} and transition probability
matrix given below:
0.1 0.2 0.7
Example 2.15 Success Runs. Consider a game where a coin is tossed repeatedly
in an independent fashion. Whenever the coin turns up heads, which happens with
probability p, the player wins a dollar. Whenever the coin turns up tails, which hap-
pens with probability q = 1 − p, the player loses all his winnings so far. Let Xn denote the player's fortune after the n-th toss. We have
Xn+1 = { 0       with probability q,
         Xn + 1  with probability p.
This shows that {Xn , n ≥ 0} is a DTMC on state-space S = {0, 1, 2, · · ·} with
transition probabilities
pi,0 = q, pi,i+1 = p, i ∈ S.
A slightly more general version of this DTMC can be considered with transition probabilities
pi,0 = qi , pi,i+1 = pi , i ∈ S.
Such a DTMC is called a success runs Markov chain. Its transition diagram is shown in Figure 2.4.
[Figure 2.4: transition diagram of the success runs chain: from each state i there is an arc to state i + 1 with probability pi and an arc back to state 0 with probability qi .]
All other transition probabilities are zero. Thus {Xn , n ≥ 0} is a DTMC with tran-
sition probability matrix given by
P = [ β0  α0  0   0   0   · · ·
      β1  α1  α0  0   0   · · ·
      β2  α2  α1  α0  0   · · ·
      β3  α3  α2  α1  α0  · · ·
      ·   ·   ·   ·   ·   · · · ].    (2.12)
Note that {Xn , n ≥ 0} can increase by at most one. Matrices of the form above are
known as lower Hessenberg matrices. Markov chains with this type of transition
probability matrix also arise in many applications, again in queueing theory.
The last two examples illustrate a general class of DTMCs {Xn , n ≥ 0} that are
generated by the following recursion
Xn+1 = f (Xn , Yn ), n ≥ 0,
where {Yn , n ≥ 0} is a sequence of iid random variables. The reader is urged to
construct more DTMCs of this structure for further understanding of the DTMCs.
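As an illustration of this recursive structure, here is a minimal simulation sketch in Python (an assumed choice; the book's own programs are Matlab-based). It realizes the success runs chain of Example 2.15 in the form Xn+1 = f (Xn , Yn ) with Yn iid Bernoulli(p).

```python
import random

def simulate_success_runs(p, n_steps, seed=42):
    """Simulate the success runs DTMC via X_{n+1} = f(X_n, Y_n), where
    Y_n ~ Bernoulli(p) iid: a head increases the fortune by one dollar,
    a tail resets it to zero."""
    rng = random.Random(seed)
    x, path = 0, [0]                   # X_0 = 0
    for _ in range(n_steps):
        y = rng.random() < p           # Y_n, the n-th coin toss
        x = x + 1 if y else 0          # f(x, y)
        path.append(x)
    return path

print(simulate_success_runs(p=0.6, n_steps=20))
```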
2.3 DTMCs in Other Fields
In this section we present several examples where DTMCs have been used to model real-life situations.
2.3.1 Genomics
In 1953 Francis Crick and James Watson, building on the experimental research of Rosalind Franklin, first proposed the double helix model of DNA (deoxyribonucleic acid), the key molecule that contains the genetic instructions for the making of a living organism. It can be thought of as a long sequence of four basic nucleotides (or bases): adenine, cytosine, guanine and thymine, abbreviated as A, C, G, and T , respectively. The human DNA consists of roughly 3 billion of these bases. The sequence of these bases is very important. A typical sequence may read as: CTTCTCAAATAACTGTGCCTC · · ·.
Let Xn be the n-th base in the sequence. It is clear that {Xn , n ≥ 1} is a discrete-
time stochastic process with state-space S = {A, C, G, T }. Clearly, whether it is a
DTMC or not needs to be established by statistical analysis. We will not get into the
mechanics of doing so in this book. By studying a section of the DNA molecule one
might conclude that {Xn , n ≥ 1} is a DTMC with transition probability matrix given below (rows and columns are ordered as A, C, G, T ):

P(1) = [ 0.180  0.274  0.426  0.120
         0.170  0.368  0.274  0.188
         0.161  0.339  0.375  0.135 ].
2.3.2 Genetics
2.3.3 Genealogy
Genealogy is the study of family trees. In 1874 Francis Galton and Henry Watson wrote a paper on extinction probabilities of family names, a question that was originally posed by Galton about the likelihood of famous family names disappearing due to lack of male heirs. It is observed that ethnic populations such as the Koreans and Chinese have been using family names that are passed on via male children for several thousand years, and hence are left with very few family names. In contrast, societies where people assume new family names more easily, or where the tradition of family names is more recent, have many family names.
In its simplest form, we consider a patriarchal society where the family name
is carried on by the male heirs only. Consider the Smith family tree as shown in
Figure 2.5.
The initiator is Steve Smith, who we shall say constitutes the zeroth generation.
Steve Smith has two sons: Peter and Eric Smith, who constitute the first generation.
[Figure 2.5: the Smith family tree, with Steve in generation zero, Peter and Eric in generation one, Robert and Edward in generation two, and Bruce in generation three.]
Peter does not have any male offspring, whereas Eric has two: Robert and Edward
Smith. Thus the second generation has two males. The third generation has only one
male: Bruce Smith, who dies without a male heir, and hence the family name Smith
initiated by Steve dies out. We say that the family name of Steve Smith became
extinct in the third generation.
To model this situation let Xn be the number of individuals in the n-th generation,
starting with X0 = 1. We index the individuals of the n-th generation by integers
1, 2, · · · , Xn . Let Yr,n be the number of male offspring to the r-th individual of the
n-th generation. Then we have
Xn+1 = ∑_{r=1}^{Xn} Yr,n . (2.14)
Now suppose {Yr,n } are iid random variables. Then {Xn , n ≥ 0} is a DTMC on
state-space S = {0, 1, 2, · · ·} with transition probabilities
pi,j = P(Xn+1 = j|Xn = i, Xn−1 , · · · , X0 )
= P(∑_{r=1}^{Xn} Yr,n = j | Xn = i, Xn−1 , · · · , X0 )
= P(∑_{r=1}^{i} Yr,n = j),
where the last probability can be (in theory) computed as a function of i and j. The
{Xn , n ≥ 0} process is called a branching process.
Typical questions of interest are: What is the probability that the family name
eventually becomes extinct? How many generations does it take before it becomes
extinct, given that it does become extinct? How many total males are produced in the
family? What is the size of the n-th generation?
Although we have introduced the branching process as a model of propagation of family names, the same model arises in other areas, such as nuclear physics and the spread of diseases, rumors, chain letters, or internet jokes in very large populations. We give an example of a nuclear reaction: A neutron (zeroth generation) is introduced
from outside in a fissionable material. The neutron may pass through the material
without hitting any nucleus. If it does hit a nucleus, it will cause a fission resulting in
a random number of new neutrons (first generation). These new neutrons themselves
behave like the original neutron, and each will produce its own random number of
new neutrons through a possible collision. Thus Xn , the number of neutrons after
the n-th generation, can be modeled as a branching process. In nuclear reactors the
evolution of this branching process is controlled by inserting moderator rods in the
fissionable material to absorb some of the neutrons.
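The eventual-extinction question lends itself to quick Monte Carlo experiments. The sketch below (Python and numpy are assumed tools, and the Poisson offspring distribution with mean 1.2 is an illustrative assumption, not from the text) estimates the probability that the process of Equation 2.14 dies out within a given number of generations.

```python
import numpy as np

def extinction_estimate(offspring_mean=1.2, n_gen=50, trials=10_000, seed=1):
    """Monte Carlo estimate of P(extinction within n_gen generations) for
    a branching process with X_0 = 1 and Poisson(offspring_mean) offspring."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(trials):
        x = 1
        for _ in range(n_gen):
            # A sum of x iid Poisson(mu) variables is Poisson(mu * x).
            x = rng.poisson(offspring_mean * x)
            if x == 0:
                break
        extinct += (x == 0)
    return extinct / trials

print(extinction_estimate())
```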
2.3.4 Finance
DTMCs have become an important tool in mathematical finance. This is a very rich and broad subject, hence we concentrate on a relatively narrow and basic model of stock fluctuations. Let Xn be the value of a stock at time n (this could be a day, or a minute). We assume that {Xn , n ≥ 0} is a stochastic process with state-space (0, ∞), not necessarily discrete, although stock values are reported in integer cents.
Define the return in period n as
Rn = (Xn − Xn−1)/Xn−1 , n ≥ 1.
Thus R3 = .1 implies that the stock value increased by 10% from period two to three, and R2 = −.05 is equivalent to saying that the stock value decreased by 5% from period one to two. From this definition it follows that
Xn = X0 ∏_{i=1}^{n} (1 + Ri ), n ≥ 1. (2.15)
In the simplest version, the returns Rn are iid, taking the value u (an up move) with probability p and −d (a down move) with probability 1 − p. If Zn denotes the number of up moves among the first n periods, then
Xn = X0 (1 + u)^{Zn} (1 − d)^{n−Zn} .
Note that for each value of X0 , Xn can take n + 1 discrete values. This model of stock fluctuations is called the binomial model.
Since stock values can go up or down, investors take a risk when they invest in
stocks, rather than putting their money in a risk-less money market account that gives
a fixed positive rate of return. Hence the financial industry has created several financial instruments that mitigate or bound such risks. A simple instrument of this type is the European call option. It gives the holder of the option a right, but not an obligation, to buy the stock on a specified day, say T , at a specified price, say K. Clearly, if
XT , the value of the stock on day T , is less than K, the holder will not exercise the
option, and the option will expire with no profit and no loss to the holder. If XT > K,
the holder will exercise the option and buy the stock at K, and immediately sell it at
XT and make a tidy profit of XT − K. In general the holder gets max{0, XT − K} at time T , which is never negative. Thus there is no risk of loss in this financial instrument. Clearly, the buyer of this option must be willing to pay a price to buy this
risk-less instrument. How much should she pay? What is the fair value of such an
option? There are many other such options. For example, a put option gives a right
to sell. American versions of these options can be exercised anytime until T , and not
just at time T , as is the case in the European options. Valuation of these options is a
highly technical area, and DTMCs play a major role in the discrete time versions of
these problems. See Options, Futures, and Other Derivatives by J. C. Hull for more
details at an elementary level.
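As a small worked illustration of the binomial model, the sketch below computes the expected payoff E[max{0, XT − K}] by summing over the binomial distribution of ZT . All numerical parameters are assumptions for illustration, and the plain expectation shown here is not the fair option price, which requires the risk-neutral valuation alluded to above.

```python
from math import comb

def expected_call_payoff(x0=100.0, u=0.02, d=0.01, p=0.55, T=30, K=105.0):
    """E[max{0, X_T - K}] under the binomial model
    X_T = X_0 (1+u)^{Z_T} (1-d)^{T-Z_T}, with Z_T ~ Binomial(T, p)."""
    total = 0.0
    for z in range(T + 1):
        xT = x0 * (1 + u) ** z * (1 - d) ** (T - z)
        prob = comb(T, z) * p ** z * (1 - p) ** (T - z)
        total += prob * max(0.0, xT - K)
    return total

print(f"expected payoff: {expected_call_payoff():.4f}")
```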
2.3.6 Telecommunications
Wireless communication via cell phones has become commonly available. In such
a system a wireless device such as a cell phone communicates with a cell tower in a
bi-directional fashion, that is, it receives data from the cell tower and sends data to it.
The rate at which data can be transmitted changes randomly with time due to many
factors, for example, changing position of the user, weather, topology of the terrain,
to name a few. The cell tower knows at all times how many users are registered with
it, and what data-rate is available to each user. We consider a technology in which
the time is slotted into short intervals, say a millisecond long, and the cell tower can
communicate with exactly one user during each time slot.
Let Rn (u) be the data rate (in kilo-bits per second) available to user u in the n-th
slot. It is commonly assumed {Rn (u), n ≥ 0} is a DTMC with a finite state-space,
for example, {38.4, 76.8, 102.6, 153.6, 204.8, 307.2, 614.4, 921.6, 1228.8, 1843.2,
2457.6}, and that the data-rates available to different users are independent. Now
let Xn (u) be the amount of data (in kilobits) waiting for transmission at user u at the
beginning of the n-th time slot, and An (u) be the new data that arrives for the user
in the n-th slot. Thus if user u is served during the n-th time slot, Xn (u) + An (u)
amount of data is available for transmission, out of which Rn (u) is actually trans-
mitted. Using v(n) to denote the user that is served in the n-th slot, we see that the
following recursion holds:
Xn+1 (u) = { max{Xn (u) + An (u) − Rn (u), 0}   if u = v(n),
             Xn (u) + An (u)                    if u ≠ v(n).
Now suppose there are a fixed number N of users in the reception area of the cell
tower. Let Rn = [Rn (1), Rn (2), · · · , Rn (N )], Xn = [Xn (1), Xn (2), · · · , Xn (N )],
and An = [An (1), An (2), · · · , An (N )]. Suppose the cell tower knows the state of
the system (Rn , Xn , An ) at the beginning of the n-th time slot. It decides which
user to serve next based solely on this information. If we assume that {An (u), n ≥
0} are independent (for different u’s) sequences of iid random variables, it follows
that {(Rn , Xn ), n ≥ 0} is a DTMC with a rather large state-space and complicated
transition probabilities.
The cell tower has to decide which user to serve in each time slot so that the data is
transferred at the highest possible rate (maximize throughput) and at the same time
no user is starved for too long (ensure fairness). This is the main scheduling problem
in wireless communications. For example, the cell tower may decide to serve that
user in the n-th time slot who has the highest data rate available to it. This may
maximize the throughput, but may be unfair for those users who are stuck with low
data rate environment. On the other hand, the cell tower may decide to serve the user u with the highest backlog Xn (u) + An (u). This may also be unfair, since the user with the largest data requirement will be served most of the time. One rule that attempts
to strike a balance between these two conflicting objectives serves the user u that has
the largest value of min(Rn (u), Xn (u) + An (u)) · (Xn (u) + An (u)).
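The balancing rule is a one-line computation per slot. Here is a sketch in Python (an assumed choice, with illustrative numbers; the units are also an assumption) of how the cell tower might pick the user to serve.

```python
def choose_user(R, X, A):
    """Serve the user u maximizing min(R(u), X(u)+A(u)) * (X(u)+A(u)),
    the balancing rule described above.  R, X, A are per-user lists of
    available rate, backlog, and new arrivals for the current slot."""
    def score(u):
        backlog = X[u] + A[u]
        return min(R[u], backlog) * backlog
    return max(range(len(R)), key=score)

# Illustrative slot: user 1 has a great rate but little to send, user 2 a
# huge backlog but a poor rate; the rule weighs both considerations.
R = [153.6, 614.4, 38.4]
X = [500.0, 20.0, 900.0]
A = [10.0, 5.0, 50.0]
print("serve user", choose_user(R, X, A))
```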
DTMCs have been used in a variety of problems arising in telecommunications.
The above model is but one example. DTMCs have been used in the analysis of radio
communication networks using protocols like ALOHA, performance of the ethernet
protocol and the TCP protocol in the Internet, to name a few other applications. The
reader is referred to the classic book Data Networks by Bertsekas and Gallager for a
simple introduction to this area.
2.4 Marginal Distributions
The examples in Sections 2.2 and 2.3 show that many real-world systems can be modeled by DTMCs and provide the motivation to study them. Section 2.1 tells us
how to characterize a DTMC. Now we follow the road map laid out in Chapter 1 and
study the transient behavior of the DTMCs. In this section we first study the marginal
distributions of DTMCs.
Let {Xn , n ≥ 0} be a DTMC on state-space S = {0, 1, 2, · · ·} with transition
probability matrix P and initial distribution a. In this section we shall study the
distribution of Xn . Let the pmf of Xn be denoted by
a_j^(n) = P(Xn = j), j ∈ S, n ≥ 0. (2.16)
Clearly a_j^(0) = aj is the initial distribution. By using the law of total probability we get
P(Xn = j) = ∑_{i∈S} P(Xn = j|X0 = i) P(X0 = i)
= ∑_{i∈S} P(Xn = j|X0 = i) ai
= ∑_{i∈S} ai p_ij^(n) , (2.17)
where
p_ij^(n) = P(Xn = j|X0 = i), i, j ∈ S, n ≥ 0 (2.18)
is called the n-step transition probability, since it is the probability of going from state i to state j in n transitions. We have
p_ij^(0) = P(X0 = j|X0 = i) = δij , i, j ∈ S, (2.19)
where δij is one if i = j and zero otherwise, and
p_ij^(1) = P(X1 = j|X0 = i) = pij , i, j ∈ S. (2.20)
If we can compute the n-step transition probabilities p_ij^(n), we can compute the marginal distribution of Xn . Intuitively, the event of going from state i to state j involves going from state i to some intermediate state r at time k ≤ n, followed by a trajectory from state r to state j in the remaining n − k steps. This intuition is used to derive a method of computing p_ij^(n) in the next theorem. The proof of the theorem is enlightening in itself since it shows the critical role played by the assumptions of Markov property and time homogeneity.
which can be rearranged to get Equation 2.21. This proves the theorem.
Equations 2.21 are called the Chapman-Kolmogorov equations and can be written more succinctly in matrix notation. Let
P^(n) = [p_ij^(n)].
It is called the n-step transition probability matrix. The Chapman-Kolmogorov equations can be written in matrix form as
P^(n) = P^(k) P^(n−k) , 0 ≤ k ≤ n. (2.22)
The next theorem gives an important implication of the Chapman-Kolmogorov equa-
tions.
Example 2.18 Two-State DTMC. Let P be the transition probability matrix of the
two-state DTMC of Example 2.3 on page 13. If α + β = 2, we must have α = β = 1
and hence P = I. In that case P^n = I for all n ≥ 0. If α + β < 2, it can be shown by induction that, for n ≥ 0,

P^n = (1/(2 − α − β)) [ 1 − β   1 − α
                        1 − β   1 − α ]
    + ((α + β − 1)^n/(2 − α − β)) [ 1 − α   α − 1
                                    β − 1   1 − β ].
In general such a closed form expression for P n is not available for DTMCs with
larger state spaces.
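Even when no closed form is available, P^n is easy to compute numerically as a matrix power, which also lets one check the Chapman-Kolmogorov equations directly. A minimal numpy sketch (the language choice is an assumption), using the weather matrix of Example 2.4:

```python
import numpy as np

# Transition matrix of the weather DTMC (Example 2.4).
P = np.array([[0.8, 0.2],
              [0.7, 0.3]])

n, k = 5, 2
Pn = np.linalg.matrix_power(P, n)
# Numerical check of Chapman-Kolmogorov: P^(n) = P^(k) P^(n-k).
Pk = np.linalg.matrix_power(P, k)
Pnk = np.linalg.matrix_power(P, n - k)
assert np.allclose(Pn, Pk @ Pnk)
print(Pn)
```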
Example 2.19 Simple Random Walk. Consider a simple random walk on all inte-
gers with the following transition probabilities
pi,i+1 = p, pi,i−1 = q = 1 − p, −∞ < i < ∞,
where 0 < p < 1. Compute
p_00^(n) = P(Xn = 0|X0 = 0), n ≥ 0.
Starting from state 0, the random walk can return to state 0 only in even number of
steps. Hence we must have
p_00^(n) = 0, for all odd n.
Now let n = 2k be an even integer. To return to state 0 in 2k steps starting from state
0, the random walk must take a total of k steps to the right, and k steps to the left, in
any order. There are (2k)!/(k!k!) distinct sequences of length 2k made up of k right
and k left steps, and the probability of each sequence is pk q k . Hence we get
p_00^(2k) = ((2k)!/(k! k!)) p^k q^k , k = 0, 1, 2, · · · . (2.24)
In a similar manner one can show that
p_ij^(n) = (n!/(a! b!)) p^a q^b , if n + j − i is even, (2.25)
where a = (n + j − i)/2 and b = (n + i − j)/2. If n + j − i is odd the above probability is zero.
It is not always possible to get a closed form expression for the n-step transition
probabilities, and one must do so numerically. We will study this in more detail in
Section 2.6.
Now let a^(n) = [a_j^(n)] be the probability mass function (pmf) of Xn . The next theorem gives a simple expression for a^(n) :
a^(n) = a P^n , n ≥ 0. (2.26)
Example 2.21 Urn Model Continued. Let {Xn , n ≥ 0} be the stochastic process
of the urn model of Example 2.13 on page 18 with N = 10. Compute E(Xn ) for
n = 0, 5, 10, 15, and 20, starting with X0 = 10. Using the transition matrix P from
Example 2.13 we get
en = E(Xn |X0 = 10) = ∑_{i=1}^{10} i P(Xn = i|X0 = 10) = a P^n b,
where a = (0, 0, · · · , 0, 1) and b = (0, 1, · · · , 9, 10)′ . Numerical computations yield:
e0 = 10, e5 = 6.6384, e10 = 5.5369, e15 = 5.1759, and e20 = 5.0576. We can
see numerically that as n → ∞, en converges to 5.
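These numbers are easy to reproduce. The sketch below (Python/numpy is an assumed substitute for the Matlab-based programs mentioned in the preface) builds the Ehrenfest transition matrix from the parameters of Example 2.13 and evaluates en = a P^n b; it should print e0 = 10, e5 = 6.6384, and so on.

```python
import numpy as np

N = 10
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    # Transition probabilities from Example 2.13:
    # q_i = (i/N)^2, r_i = 2 i (N-i)/N^2, p_i = ((N-i)/N)^2.
    if i > 0:
        P[i, i - 1] = (i / N) ** 2
    P[i, i] = 2 * i * (N - i) / N ** 2
    if i < N:
        P[i, i + 1] = ((N - i) / N) ** 2

a = np.zeros(N + 1); a[N] = 1.0      # X_0 = 10, i.e., a = (0, ..., 0, 1)
b = np.arange(N + 1.0)               # b = (0, 1, ..., 10)'
for n in (0, 5, 10, 15, 20):
    print(f"e_{n} = {a @ np.linalg.matrix_power(P, n) @ b:.4f}")
```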
Example 2.22 The Branching Process. Consider the branching process {Xn , n ≥ 0} introduced in Section 2.3.3. Here we compute µn = E(Xn ) and σn² = Var(Xn ) as a function of n. Since X0 = 1, clearly
µ0 = 1, σ0² = 0. (2.27)
Let µ and σ² be the mean and variance of the number of offspring to a single individual. Since X1 is the number of offspring to the single individual in generation zero, we have
µ1 = µ, σ1² = σ². (2.28)
Furthermore, one can show that
E(Xn ) = E(∑_{r=1}^{X_{n−1}} Yr,n−1 ) = µ E(Xn−1 ),
so that µn = µ µn−1 , and hence µn = µ^n for all n ≥ 0.
Then the pmf of the grade of this employee in year 8 is given by
[0.25, 0.25, 0.25, 0.25] P^8 = [0.4958, 0.2388, 0.1140, 0.1514].
We can interpret this to mean that 49.58% of the employees are in grade 1
in year 8, etc. Thus the expected number of employees in the four grades are
[49.58, 23.88, 11.40, 15.14]. One can numerically see that after several years (24
in this example), the expected number of employees in the four grades stabilize at
[50, 25, 12.5, 12.5].
where the last equality follows from Theorem 2.4. Writing the above equation in matrix form yields Equation 2.30. This proves the theorem.
Example 2.24 Two-State DTMC. Consider the two-state DTMC of Example 2.3.
The n-step transition probability matrix of the DTMC was given in Example 2.18 on
page 33. Using that, and a bit of algebra, we see that the occupancy matrix for the two-state DTMC is given by

M^(n) = ((n + 1)/(2 − α − β)) [ 1 − β   1 − α
                                1 − β   1 − α ]
      + ((1 − (α + β − 1)^{n+1})/(2 − α − β)²) [ 1 − α   α − 1
                                                 β − 1   1 − β ].
Thus, if the DTMC starts in state 1, the expected number of times it visits state 2 up to time n is given by M_12^(n) .
Example 2.25 Brand Switching Model Continued. Consider the brand switching
model of Example 2.14 on page 18. Compute the expected number of each brand
sold in the first ten weeks.
Let P be the transition matrix given by Equation 2.8. Since we are interested in purchases over the weeks 0 through 9, we need to compute M^(9). Using Theorem 2.6 we get

M^(9) = ∑_{n=0}^{9} P^n = [ 2.1423  2.7412  5.1165
                            1.2631  3.9500  4.7869
                            1.1532  2.8511  5.9957 ].
Thus if the customer chooses brand A in the initial week, his expected number of purchases of brand A over the weeks 0 through 9 is 2.1423, of brand B is 2.7412, and of brand C is 5.1165.
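Occupancy matrices are equally mechanical to compute. A short sketch (numpy assumed; the two-state weather chain of Example 2.4 stands in for the brand-switching matrix of Equation 2.8, which is not reproduced in full here):

```python
import numpy as np

def occupancy_matrix(P, n):
    """M^(n) = sum_{k=0}^{n} P^k: entry (i, j) is the expected number of
    visits to state j during times 0, 1, ..., n, starting in state i."""
    M = np.eye(len(P))
    Pk = np.eye(len(P))
    for _ in range(n):
        Pk = Pk @ P
        M += Pk
    return M

P = np.array([[0.8, 0.2], [0.7, 0.3]])
print(occupancy_matrix(P, 9))   # each row sums to 10, the number of epochs 0..9
```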
2.6 Computation of Matrix Powers
We start with some preliminaries from matrix algebra. The reader is referred to texts by Fuller (1962) and Gantmacher (1960) for proofs and other details. An m × m
square matrix A is called diagonalizable if there exist an invertible matrix X and a diagonal matrix
D = diag[λ1 , λ2 , · · · , λm ],
such that
D = X^{−1} A X.
Elements of X and D may be complex even if A is real. It is known that the λ’s are the eigenvalues of A, the j-th column xj of X is the right eigenvector of λj , and the j-th row yj of X^{−1} is the left eigenvector of λj . That is, the λ’s are the m roots of
det(λI − A) = 0,
where det is short for determinant, xj satisfies
Axj = λj xj , (2.31)
and yj satisfies
yj A = λj yj .
If all the eigenvalues are distinct, then A is diagonalizable. This is a sufficient condi-
tion, but not necessary. With this notation we get the following theorem:
Proof: The first result follows from the fact that P is stochastic and hence the column
vector e with all coordinates equal to one satisfies
P e = e,
thus proving that 1 is an eigenvalue of P .
To derive the second result, define the norm of an m-vector x as
‖x‖ = max{|xi | : 1 ≤ i ≤ m},
and the norm of P as
‖P‖ = sup{‖P x‖ : ‖x‖ = 1}.
Then one can show that
‖P‖ = max_i ∑_{j=1}^{m} pij = 1,
Example 2.26 Three-state DTMC. Consider a three state DTMC with the follow-
ing transition probability matrix:
P = [ 0  1  0
      q  0  p
      0  1  0 ],
where 0 < p < 1 and q = 1 − p. Simple matrix multiplications show that

P^{2n} = [ q  0  p
           0  1  0
           q  0  p ],  n ≥ 1,   (2.34)

P^{2n+1} = [ 0  1  0
             q  0  p
             0  1  0 ],  n ≥ 0.   (2.35)
Derive these formulas using the method of diagonalization.
Simple calculations show that P has three eigenvalues 1, 0, and −1, consistent with Theorem 2.8. Thus P is diagonalizable with

D = [ 1  0  0
      0  0  0
      0  0  −1 ],

X = [ 1  p   1
      1  0  −1
      1  −q  1 ],

and

X^{−1} = (1/2) [ q  1   p
                 2  0  −2
                 q  −1  p ].
Thus

P^n = X D^n X^{−1} = (1/2) [ q(1 + (−1)^n)  1 − (−1)^n  p(1 + (−1)^n)
                             q(1 − (−1)^n)  1 + (−1)^n  p(1 − (−1)^n)
                             q(1 + (−1)^n)  1 − (−1)^n  p(1 + (−1)^n) ],  n ≥ 1.
The above equation reduces to Equation 2.34 when n is even, and Equation 2.35
when n is odd. Thus the powers of P show an oscillatory behavior as a function
of n.
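The diagonalization recipe can be checked numerically. A small numpy sketch (an assumed tool) diagonalizes the three-state P above, reconstructs P^n = X D^n X^{−1}, and compares it with the direct matrix power:

```python
import numpy as np

p = 0.3; q = 1 - p
P = np.array([[0, 1, 0],
              [q, 0, p],
              [0, 1, 0]])

lam, X = np.linalg.eig(P)       # eigenvalues (1, 0, -1) and right eigenvectors
Xinv = np.linalg.inv(X)
for n in (4, 5):
    Pn = X @ np.diag(lam ** n) @ Xinv      # P^n = X D^n X^{-1}
    assert np.allclose(Pn, np.linalg.matrix_power(P, n))
print("diagonalization reproduces P^n, including the oscillation in n")
```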
Example 2.27 Genotype Evolution. Consider the six-state DTMC of the Genotype Evolution Model described on page 24, with the transition matrix P given by Equation 2.13 on page 24. Compute P^n.
A tedious calculation shows that the six eigenvalues in decreasing order are λ1 = 1, λ2 = 1, λ3 = (1 + √5)/4 = 0.8090, λ4 = 0.5, λ5 = 0.25, λ6 = (1 − √5)/4 = −0.3090. The matrix of right eigenvectors is given by

X = [ 4  0  0    0   0   0
      3  1  0    0   0   0
      2  2  λ3²  −1  −1  λ6²
      2  2  1    0   4   1
      1  3  λ3   0   1   λ6
      0  4  λ3²  1   −1  λ6² ].
Note that the eigenvalues are not distinct, since the eigenvalue 1 is repeated twice.
However, the matrix X is invertible. Hence P is diagonalizable, and the representa-
tion in Equation 2.33 holds with
D = diag[1, 1, 0.809, 0.5, 0.25, −0.3090].
Thus we get P^n = X D^n X^{−1}.
We assume that the reader is familiar with generating functions; see Appendix D for relevant details. Let P be an m × m matrix of transition probabilities and define
P(z) = ∑_{n=0}^{∞} z^n P^n . (2.36)
3. P^n = [a_ij^(n)] is the required power of P .
[P^n]_{1,1} = (3/(2√31)) ((5 − 4z2)/z2^{n+1} − (5 − 4z1)/z1^{n+1}).
One can compute the expressions for other elements of P n in a similar fashion.
2.1 We have an infinite supply of light bulbs, and Zi is the lifetime of the i-th light
bulb. {Zi , i ≥ 1} is a sequence of iid discrete random variables with common pmf
P(Zi = k) = pk , k = 1, 2, 3, · · · ,
with ∑_{k=1}^{∞} pk = 1. At time zero, the first light bulb is turned on. It fails at time Z1 ,
when it is replaced by the second light bulb, which fails at time Z1 + Z2 , and so on.
Let Xn be the age of the light bulb that is on at time n. Note that Xn = 0, if a new
light bulb was installed at time n. Show that {Xn , n ≥ 0} is a DTMC and compute
its transition probability matrix.
2.2 In the above exercise, let Yn be the remaining life of the bulb that is in place at
time n. For example, Y0 = Z1 . Show that {Yn , n ≥ 0} is a DTMC and compute its
transition probability matrix.
2.3 An urn contains w white balls and b black balls initially. At each stage a ball is
picked from the urn at random and is replaced by k balls of similar color (k ≥ 1). Let
Xn be the number of black balls in the urn after n stages. Is {Xn , n ≥ 0} a DTMC?
If yes, give its transition probability matrix.
2.4 Consider a completely connected network of N nodes. At time 0 a cat resides
on node N and a mouse on node 1. During one time unit, the cat chooses a random
node uniformly from the remaining N − 1 nodes and moves to it. The mouse moves
in a similar way, independently of the cat. If the cat and the mouse occupy the same
node, the cat promptly eats the mouse. Model this as a Markov chain.
2.5 Consider the following modification of the two-state weather model of Exam-
ple 2.4: given the weather conditions on days n − 1 and n, the weather condition on day n + 1 is independent of the weather on earlier days. Historical data suggests that if it rained yesterday and today, it will rain tomorrow with probability 0.6; if it was sunny yesterday and today, it will rain tomorrow with probability 0.2; if it was sunny yesterday but rained today, it will rain tomorrow with probability 0.5; and if it rained yesterday but is sunny today, it will rain tomorrow with probability 0.25. Model this
as a four state DTMC.
2.6 A machine consists of K components in series, i.e., all the components must be
in working condition for the machine to be functional. When the machine is func-
tional at the beginning of the nth day, each component has a probability p of failing
at the beginning of the next day, independent of other components. (More than one
component can fail at the same time.) When the machine fails, a single repair person
repairs the failed components one by one. It takes exactly one day to repair one failed
component. When all the failed components are repaired the machine is functional
again, and behaves as before. When the machine is down, the working components
do not fail. Let Xn be the number of failed components at the beginning of the nth
day, after all the failure and repair events at that time are accounted for. Show that
{Xn , n ≥ 0} is a DTMC. Display the transition matrix or the transition diagram.
2.7 Two coins are tossed simultaneously and repeatedly in an independent fashion.
Coin i (i = 1, 2) shows heads with probability pi . Let Yn (i) be the number of heads
observed during the first n tosses of the i-th coin. Let Xn = Yn (1) − Yn (2). Show
that {Xn , n ≥ 0} is a DTMC. Compute its transition probabilities.
2.8 Consider the following weather forecasting model: if today is sunny (rainy) and
it is the k-th day of the current sunny (rainy) spell, then it will be sunny (rainy)
tomorrow with probability pk (qk ) regardless of what happened before the current
sunny (rainy) spell started (k ≥ 1). Model this as a DTMC. What is the state-space?
What are the transition probabilities?
2.9 Let Yn ∈ {1, 2, 3, 4, 5, 6} be the outcome of the n-th toss of a fair six-sided die.
Let Sn = Y1 + · · · + Yn , and Xn = Sn (mod 7), the remainder when Sn is divided
by 7. Assume that the successive tosses are independent. Show that {Xn , n ≥ 1} is
a DTMC, and display its transition probability matrix.
2.13 Consider the following extension of Example 2.5. Suppose we follow the play
the winner rule with k drugs (k ≥ 2) as follows. The first patient is given drug 1. If
the drug is effective with the current patient, we give it to the next patient. If the result
is negative, we switch to drug 2. We continue this way until we reach drug k. When
we observe a failure of drug k, we switch back to drug 1 and continue. Suppose the
successive patients are independent, and that drug i is effective with probability pi .
Let Xn be i if the n-th patient is given drug i. Show that {Xn , n ≥ 1} is a DTMC.
Derive its transition probability matrix.
2.14 A machine with two components is subject to a series of shocks that occur
deterministically one per day. When the machine is working a shock can cause failure
of component 1 alone with probability α1 , or of component 2 alone with probability
α2 or of both components with probability α12 or no failures with probability α0 .
(Obviously α0 + α1 + α2 + α12 = 1.) When a failure occurs, the machine is shut
down and no more failures occur until the machine is repaired. The repair time (in
days) of component i (i = 1, 2) is a geometric random variable with parameter ri ,
0 < ri < 1. Assume that there is a single repair person and all repair times are
independent. (Thus if both components fail they are repaired sequentially, and the
machine is turned on once both components are fixed.) Give the state-space and the
transition probability matrix of an appropriate DTMC that can be used to model the
state-evolution of the machine.
2.15 Suppose three players, 1, 2, and 3, play an infinite tournament as follows: Initially
player 1 plays against player 2. The winner of the n-th game plays against the player
who was not involved in the n-th game. Suppose bij is the probability that in a game
between players i and j, player i will win. Obviously, bij + bji = 1. Suppose the
outcomes of the successive games are independent. Let Xn be the pair that played
the n-th game. Show that {Xn , n ≥ 0} is a DTMC. Display its transition probability
matrix or the transition diagram.
2.16 Mr. Al Anon drinks one six-pack of beer every evening! Let Yn be the price
of the six-pack on day n. Assume that the price is either L or H > L, and
that {Yn , n ≥ 0} is a DTMC on state-space {H, L} with transition probability
matrix
[ α      1 − α ]
[ 1 − β  β     ] .
Mr. Al Anon visits the beer store each day in the afternoon. If the price is high
and he has no beer at home, he buys one six pack, which he consumes in the
evening. If the price is high and he has at least one six pack at home, he does
not buy any beer. If the price is low, he buys enough six packs so that he will
have a total of five six packs in the house when he reaches home. Model this sys-
tem by a DTMC. Describe its state-space and compute the transition probability
matrix.
2.17 Let {Yn , n ≥ 1} be a sequence of iid random variables with common pmf
P(Yn = k) = αk , k = 0, 1, 2, 3, ..., M.
Define X0 = 0 and
Xn = max{Y1 , Y2 , ..., Yn }, n = 1, 2, 3....
Show that {Xn , n ≥ 0} is a DTMC. Display the transition probability matrix.
2.18 Consider a machine that alternates between two states: up and down. The suc-
cessive up and down times are independent of each other. The successive up times
are iid positive integer valued random variables with common pmf
P(up time = i) = ui , i = 1, 2, 3, . . . ,
and the successive down times are iid positive integer valued random variables with
common pmf
P(down time = i) = di , i = 1, 2, 3, . . . .
Assume that

Σ_{i=1}^∞ ui = 1 and Σ_{i=1}^∞ di = 1.
Model this system by a DTMC. Describe its state-space and the transition probability
matrix.
2.19 Ms. Friendly keeps in touch with her friends via email. Every day she checks
her email inbox at 8:00am. She processes each message in the inbox at 8:00am inde-
pendently in the following fashion: she answers it with probability p > 0 and deletes
it, or she leaves it in the inbox, to be visited again the next day. Let Yn be the number
of messages that arrive during 24 hours on day n. Assume that {Yn , n ≥ 0} is a
sequence of iid random variables with common pmf
αk = P(Yn = k), k = 0, 1, 2, · · · .
Let Xn be the number of messages in the inbox at 8:00am on day n. Assume that no
new messages arrive while she is deleting the messages. Show that {Xn , n ≥ 0} is a
DTMC.
2.20 A buffer of size B bytes is used to store and play a streaming audio file. Sup-
pose the time is slotted so that one byte is played (and hence removed from the buffer)
at the end of each time slot. Let An be the number of bytes streaming into the buffer
from the internet during the nth time slot. Suppose that {An , n ≥ 1} is a sequence
of iid random variables with common pmf
αk = P(An = k), k = 0, 1, 2, · · · .
Let Xn be the number of bytes in the buffer at the end of the nth time slot, after the
input during that slot followed by the output during that slot. If the buffer is empty no
sound is played, and if the buffer becomes full, some of the incoming bytes may be
lost if there is no space for them. Both create a loss of quality. Model {Xn , n ≥ 0}
as a DTMC. What is its state-space and transition probability matrix?
2.21 A shuttle bus with finite capacity B stops at bus stops numbered 0, 1, 2, · · · on
an infinite route. Let Yn be the number of riders waiting to ride the bus at stop n.
Assume that {Yn , n ≥ 0} is a sequence of iid random variables with common pmf
αk = P(Yn = k), k = 0, 1, 2, · · · .
Every passenger who is on the bus alights at a given bus stop with probability p. The passengers behave independently of each other. After the passengers alight, as many of the waiting passengers board the bus as there is room on the bus. Let Xn be the number of riders on the bus as it leaves stop n. Show that {Xn , n ≥ 0} is a DTMC. Compute the transition probabilities.
2.22 A production facility produces one item per hour. Each item is defective with
probability p, the quality of successive items being independent. Consider the fol-
lowing quality control policy parameterized by two positive integers k and r. In the
beginning the policy calls for 100% inspection, the expensive mode of operation.
As soon as k consecutive non-defective items are encountered, it switches to econ-
omy mode and calls for inspecting each item with probability 1/r. It reverts to the expensive mode as soon as an inspected item is found to be defective. The
process alternates this way forever. Model the inspection policy as a DTMC. Define
the state-space, and show the transition probability matrix or the transition diagram.
2.23 Let Dn be the demand for an item at a store on day n. Suppose {Dn , n ≥ 0}
is a sequence of iid random variables with common pmf
αk = P(Dn = k), k = 0, 1, 2, · · · .
Suppose the store follows the following inventory management policy, called the
(s, S) policy: If the inventory at the end of the n-th day (after satisfying the demands for that day) is s or more (here s ≥ 0 is a fixed integer), the store manager does nothing. If it is less than s, the manager orders enough to bring the inventory at the
beginning of the next day up to S. Here S ≥ s is another fixed integer. Assume the
delivery to the store is instantaneous. Let Xn be the number of items in the inventory
in the store at the beginning of the n-th day, before satisfying that day’s demand, but
after the inventory is replenished. Show that {Xn , n ≥ 0} is a DTMC, and compute
its transition probabilities.
2.25 Consider the following simple model of the software development process: the
software is tested at times n = 0, 1, 2 · · · . If the software has k bugs, the test at time
n will reveal a bug with probability βk independent of the history. (β0 = 0.) If a bug
is revealed, the software is updated so that the bug is fixed. However, in the process
of fixing the bug, additional i bugs are introduced with probability αi (i = 0, 1, 2)
independent of the history. Let Xn be the number of bugs in the software just before
it is tested at time n. Show that {Xn , n ≥ 0} is a DTMC and display its transition
diagram.
2.26 Consider a closed society of N individuals. At time 0 one of these N indi-
viduals hears a rumor (from outside the society). At time 1 he tells it to one of the
remaining N − 1 individuals, chosen uniformly at random. At each time n, every
individual who has heard the rumor (and has not stopped spreading it) picks a person
from the remaining N − 1 individuals at random and spreads the rumor. If he picks
a person who has already heard the rumor, he stops spreading the rumor; else he continues. If k ≥ 2 persons tell the rumor to the same person who has not
heard the rumor before, all k + 1 will continue to spread the rumor in the next period.
Model this rumor spreading phenomenon as a DTMC.
2.28 Redo the above problem by assuming that we begin with an rr individual but
always cross with a dd individual.
2.29 An electronic chain letter scheme works as follows. The initiator emails K per-
sons exhorting each to email it to K of their own friends. It mentions that complying
with the request will bring mighty good fortunes, while ignoring the request would
bring dire supernatural consequences. Suppose a recipient complies with the request
with probability α and ignores it with probability 1−α, independently of other recip-
ients. Assume the population is large enough so that this process continues forever.
Show that we can model this situation by a branching process with X0 = 20.
2.32 A manufacturing setup consists of two distinct machines, each producing one
component per hour. Each component is tested instantly and is identified as defective
or non-defective. Let αi be the probability that a component produced by machine
i is non-defective, i = 1, 2. The defective components are discarded and the non-
defective components are stored in two separate bins, one for each machine. When
a component is present in each bin, the two are instantly assembled together and
shipped out. Bin i can hold at most Bi components, i = 1, 2. (Here B1 and B2 are
fixed positive integers.) When a bin is full the corresponding machine is turned off.
It is turned on again when the bin has space for at least one component. Assume that
successive components are independent. Model this system by a DTMC.
2.33 Consider the following variation of Modeling Exercise 2.1: We replace the
light bulb upon failure, or upon reaching age K, where K > 0 is a fixed integer.
Assume that replacement occurs before failure if there is a tie. Let Xn be as in Mod-
eling Exercise 2.1. Show that {Xn , n ≥ 0} is a DTMC and compute its transition
probability matrix.
2.2 Let {Xn , n ≥ 0} be a DTMC with state-space {1, 2, 3, 4, 5} and the following
transition probability matrix:
[ 0.1 0.0 0.2 0.3 0.4 ]
[ 0.0 0.6 0.0 0.4 0.0 ]
[ 0.2 0.0 0.0 0.4 0.4 ]
[ 0.0 0.4 0.0 0.5 0.1 ]
[ 0.6 0.0 0.3 0.1 0.0 ] .
Suppose the initial distribution is a = [0.5, 0, 0, 0, 0.5]. Compute the following:
(a) The pmf of X2 ,
(b) P(X2 = 2, X4 = 5),
(c) P(X7 = 3|X3 = 4),
(d) P(X1 ∈ {1, 2, 3}, X2 ∈ {4, 5}).
2.3 Prove the result in Example 2.18 on page 33 by using (a) induction, and (b) the
method of diagonalization of Theorem 2.7 on page 38.
2.4 Compute the expected fraction of the patients who get drug 1 among the first
n patients in the clinical trial of Example 2.5 on page 14. Hint: Use the results of
Example 2.24 on page 37.
2.5 Suppose a market consists of k independent customers who switch between the
three brands A, B, and C according to the DTMC of Example 2.14. Suppose in week
0 brand A is chosen with probability 0.3 and brand B is chosen with probability 0.3.
Compute the probability distribution of the number of customers who choose brand
B in week 3.
2.6 Consider the two machine workshop of Example 2.7 on page 15. Suppose each
machine produces a revenue of $r per day when it is up, and no revenues when it
is down. Compute the total expected revenue over the first n days assuming both
machines are up initially. Hint: Use independence and the results of Example 2.24
on page 37.
2.7 Consider the binomial model of stock fluctuation as described in Section 2.3.4
on page 27. Suppose X0 = 1. Compute E(Xn ) and Var(Xn) for n ≥ 0.
2.9 Consider the Modeling Exercise 2.23 with the following parameters: s =
10, S = 20, α0 = 0.1, α1 = 0.2, α2 = 0.3, and α3 = 0.4. Suppose
X0 = 20 with probability 1. Compute E(Xn ) for n = 1, 2, · · · , 10.
2.10 Consider the discrete time queue of Example 2.12 on page 17. Compute
P(X2 = 0|X0 = 0).
2.11 Derive the n-step transition probabilities in Equation 2.25 on page 33.
2.12 Consider the weather forecasting model of Modeling Exercise 2.5. What is the
probability distribution of the length of the rainy spell predicted by this model? Do
the same for the length of the sunny spell.
2.13 Consider the Modeling Exercise 2.21 with a bus of capacity 20, and Yn ∼
P (10), Poisson with mean 10, and p = 0.4. Compute E(Xn |X0 = 0) for n =
0, 1, · · · , 20.
xjk = exp( (ι2π/m) kj ), 1 ≤ j, k ≤ m,

yjk = (1/m) exp( −(ι2π/m) kj ), 1 ≤ j, k ≤ m.
Show that λk is the k-th eigenvalue of P , with right eigenvector [x1k x2k · · · xmk ]′
and the left eigenvector [yk1 yk2 · · · ykm ]. Hence, using D = diag(λ1 λ2 · · · λm ),
X = [xjk ] and Y = [ykj ], show that
P = XDY.
Thus the powers of a circulant transition probability matrix can be written down
analytically.
2.16 Four points are arranged in a circle in a clockwise order. A particle moves on these points by taking a clockwise step with probability p and a counterclockwise
step with probability q = 1 − p, at time n = 0, 1, 2, · · · . Let Xn be the position
of the particle at time n. Thus {Xn , n ≥ 0} is a DTMC on {1, 2, 3, 4}. Display its
transition probability matrix P . Compute P n by using the diagonalization method of
Section 2.6. Hint: Use the results of Computational Exercise 2.14 above.
2.17 Consider the urn model of Example 2.13. Show that the transition probability
matrix has the following eigenvalues: λk = k(k + 1)/N² − 1/N, 0 ≤ k ≤ N .
Show that the transition probability matrix is diagonalizable. In general, finding the
eigenvalues is the hard part. Finding the corresponding eigenvectors is the easy part.
2.18 Compute pij^(n) for the success runs Markov chain of Computational Exercise 2.15.
2.23 Compute the mean and variance of the number of individuals in the n-th gen-
eration in the branching process of Section 2.3.3 on page 25, assuming the initial
generation consists of i individuals. Hence compute the mean and variance of the
number of letters in the n-th generation in Modeling Exercise 2.29. Hint: Imagine i
independent branching processes, each initiated by a single individual, and use the
results of Example 2.22.
2.26 Consider the Moran model described on page 25. Let i ∈ {0, 1, · · · , N } be a
given integer and suppose X0 = i with probability 1. Compute E(Xn ) and Var(Xn )
for n ≥ 0.
2.9 Conceptual Exercises
2.2 Suppose {Xn , n ≥ 0} and {Yn , n ≥ 0} are two independent DTMCs with
state-space S = {0, 1, 2, · · ·}. Prove or give a counterexample to the following state-
ments:
(a) {Xn + Yn , n ≥ 0} is a DTMC.
(b) {(Xn , Yn ), n ≥ 0} is a DTMC.
2.5 Suppose {Xn , n ≥ 0} and {Yn , n ≥ 0} are two independent DTMCs with
state-space S = {0, 1, 2, · · ·}. Let {Zn , n ≥ 0} be a sequence of iid Ber(p) random variables. Define

Wn = Xn if Zn = 0, and Wn = Yn if Zn = 1.
Is {Wn , n ≥ 0} a DTMC (not necessarily time homogeneous)?
2.6 Let {Yn , n ≥ 0} be a sequence of iid random variables with common pmf
αk = P(Yn = k), k = 0, 1, 2, · · · .
We say that Yn is a record if Yn > Yr , 0 ≤ r ≤ n − 1. Let X0 = Y0 , and Xn be the
value of the n-th record, n ≥ 1. Show that {Xn , n ≥ 0} is a DTMC and compute its
transition probability matrix.
2.9 Suppose {Xn , n ≥ 0} is a time homogeneous DTMC with the following prop-
erty: there is a j ∈ S such that pij = p for all i ∈ S. Show that P(Xn = j) = p for
all n ≥ 1, no matter what the initial distribution is.
2.11 Let {Xn , n ≥ 0} be a simple random walk of Example 2.19. Show that
{|Xn |, n ≥ 0} is a DTMC. Compute its transition probability matrix.
A frequent flyer business traveler is concerned about the risk of encountering a ter-
rorist bomb on one of his flights. He consults his statistician friend to get an estimate
of the risk. After studying the data the statistician estimates that the probability of
finding a bomb on a random flight is one in a thousand. Alarmed by such a large risk,
the businessman asks his friend if there is any way to reduce the risk. The statistician
offers, “Carry a bomb with you on the plane, since the probability of two bombs on
the same flight is one in a million.”
3.1 Definitions
There are two reasons for studying the first passage times. First, they appear natu-
rally in applications when we are interested in the time until a given event occurs in a
stochastic system modeled by a DTMC. Second, we shall see in the next chapter that
the quantities u (probability of eventually visiting a state) and m(1) (the expected
time to visit a state) play an important part in the study of the limiting behavior of
the DTMC. Thus the study of the first passage time has practical as well as theoretical
motivation.
Next we introduce the conditional quantities for i ∈ S:
vi (n) = P(T > n|X0 = i),
ui = P(T < ∞|X0 = i),
mi (k) = E(T^k |X0 = i),
φi (z) = E(z^T |X0 = i).
Thus we can compute the unconditional quantities from the conditional ones. We
develop a method called the first-step analysis to compute the conditional quantities.
The method involves computing the conditional quantities by further conditioning
on the value of X1 (i.e., the first step), and then using time-homogeneity and Markov
property to derive a set of linear equations for the conditional quantities. These equa-
tions can then be solved numerically or algebraically depending on the problem at
hand.
We shall see later that sometimes we need to study an alternate first passage time
as defined below:
T̃ = min{n > 0 : Xn = 0}. (3.2)
The following theorem illustrates how the first-step analysis produces a recursive
method for computing the cumulative distribution of T . We first introduce the fol-
lowing matrix notation:
v(n) = [v1 (n), v2 (n), · · ·]′ , n ≥ 0
B = [pij : i, j ≥ 1]. (3.3)
Thus B is a submatrix of P obtained by deleting the row and column corresponding
to the state 0.
Theorem 3.1
v(n) = B^n e, n ≥ 0, (3.4)

where e is a column vector of all ones.
Proof: We prove the result by using the first-step analysis. For n ≥ 1 and i ≥ 1 we
have
vi (n) = P(T > n|X0 = i)
       = Σ_{j=0}^∞ P(T > n|X1 = j, X0 = i) P(X1 = j|X0 = i)
       = Σ_{j=0}^∞ pij P(T > n|X1 = j, X0 = i)
       = pi0 P(T > n|X1 = 0, X0 = i) + Σ_{j=1}^∞ pij P(T > n|X1 = j, X0 = i)
       = Σ_{j=1}^∞ pij P(T > n|X1 = j)
       = Σ_{j=1}^∞ pij P(T > n − 1|X0 = j)
       = Σ_{j=1}^∞ pij vj (n − 1).
Here we have used the fact that X1 = 0 implies that T = 1 and hence P(T >
n|X1 = 0, X0 = i) = 0, and the Markov property and time homogeneity imply
that the probability of T > n given X1 = j, X0 = i is the same as the probability
that T > n − 1 given X0 = j. Writing the final equation in matrix form yields
v(n) = Bv(n − 1), n ≥ 1. (3.5)
Solving this equation recursively yields
v(n) = B^n v(0).
Finally, X0 = i ≥ 1 implies that T ≥ 1. Hence vi (0) = 1. Thus
v(0) = e. (3.6)
This yields Equation 3.4. Note that it is valid for n = 0 as well, since B^0 = I, the identity matrix, by definition.
Since we have studied the computation of matrix powers in Section 2.6, we have
an easy way of computing the complementary cdf of T . We illustrate with several
examples.
Example 3.1 Two State DTMC. Consider the two state DTMC of Example 2.3 on
page 13. The state space is {1, 2}. Let T be the first passage time to state 1. One can
use Theorem 3.1 with B = [β], or direct probabilistic reasoning, to see that
v2 (n) = P(T > n|X0 = 2) = β^n , n ≥ 0.

Hence we get

P(T = n|X0 = 2) = v2 (n − 1) − v2 (n) = β^{n−1} (1 − β), n ≥ 1,

which shows that T is a geometric random variable with parameter 1 − β.
Example 3.2 Genotype Evolution. Consider the six-state DTMC of the genotype evolution model of Example 2.27, with the transition probability matrix P of Equation 2.13 on page 24 (reproduced in Equation 3.7).
Let
T = min{n ≥ 0 : Xn = 1}
be the first passage time to state 1. Let B be the submatrix of P obtained by deleting
rows and columns corresponding to state 1. Then, using v(n) = [v2 (n), · · · , v6 (n)]′ ,
Theorem 3.1 yields
v(n) = B^n e, n ≥ 0.

Direct numerical calculation yields, for example,

v(5) = B^5 e = [0.4133, 0.7373, 0.6921, 0.8977, 1]′ ,

and

lim_{n→∞} v(n) = [0.25, 0.50, 0.50, 0.75, 1]′ .
Thus T is a defective random variable, and the probability that the DTMC will never visit state 1 starting from state 2 is 0.25, which is the same as saying that the probability of eventually visiting state 1 starting from state 2 is 0.75.
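These numbers are easy to reproduce numerically. The sketch below, a minimal illustration assuming numpy and the submatrix B displayed later in Equation 3.9, iterates v(n) = Bv(n − 1) of Equation 3.5:

```python
import numpy as np

# Submatrix B of P for states {2,...,6} (see Equation 3.9).
B = np.array([
    [1/2, 0,   1/4, 0,   0   ],
    [0,   0,   1,   0,   0   ],
    [1/4, 1/8, 1/4, 1/4, 1/16],
    [0,   0,   1/4, 1/2, 1/4 ],
    [0,   0,   0,   0,   1   ],
])
e = np.ones(5)

v = e.copy()
for n in range(1, 6):
    v = B @ v                  # v(n) = B v(n-1), Equation 3.5
print(np.round(v, 4))          # v(5) = [0.4133 0.7373 0.6921 0.8977 1.]

# The limit as n grows:
print(np.round(np.linalg.matrix_power(B, 200) @ e, 4))   # [0.25 0.5 0.5 0.75 1.]
```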
Example 3.3 Success Runs. Consider the success runs DTMC of Example 2.15 on
page 19 with transition probabilities
pi,0 = qi , pi,i+1 = pi , i = 0, 1, 2, · · · .
Let T be the first passage time to state 0. Compute the complementary cdf of T
starting from state 1.
In this case it is easier to use the special structure of the DTMC. We have v1 (0) = 1
and, for n ≥ 1,
v1 (n) = P(T > n|X0 = 1)
= P(X1 = 2, X2 = 3, · · · , Xn = n + 1|X0 = 1)
= p1 p2 · · · pn ,
since the only way T > n starting from X0 = 1 is if the DTMC increases by one for
each of the next n steps. Since the pi ’s can take any value in [0, 1], it follows that we
can construct a success runs DTMC such that T has any pre-specified distribution on
{1, 2, 3, · · ·}.
Example 3.4 Coin Tosses. Suppose a coin is tossed repeatedly and independently.
The probability of head is p and the probability of tail is q = 1 − p on any given toss.
Compute the distribution of the number of tosses needed to get two heads in a row.
Theorem 3.2 The vector v = limn→∞ v(n) is given by the largest solution to
v = Bv (3.8)
such that v ≤ e, where B is as defined in Equation 3.3, and e is a vector of all ones.
Proof: Equation 3.8 follows by letting n → ∞ on both sides of Equation 3.5 since
we know the limit exists. To show that it is the largest solution bounded above by
e, suppose w ≤ e is another solution to Equation 3.8. Then, from Equation 3.6, we
have
v(0) = e ≥ w.
As an induction hypothesis assume
v(k) ≥ w
for k = 0, 1, 2, · · · , n. Using the fact that w = Bw, and that B is a non-negative
matrix, we can use Equation 3.5 to get
v(n + 1) = Bv(n) ≥ Bw = w.
Thus
v(n) ≥ w
for all n ≥ 0. Thus, letting n → ∞,

v = lim_{n→∞} v(n) ≥ w.
Example 3.5 Genotype Evolution. Suppose that the population in the genotype
evolution model of Example 3.2 initially consists of one dominant and one hybrid
individual. What is the probability that eventually the population will contain only
dominant individuals?
Let T be the first passage time as defined in Example 3.2. We are asked to compute
u2 = P(T < ∞|X0 = 2) = 1 − v2 . From Theorem 3.2 we see that
v = Bv
where v = [v2 , v3 , · · · , v6 ]′ and
B = [ 1/2  0    1/4  0    0    ]
    [ 0    0    1    0    0    ]
    [ 1/4  1/8  1/4  1/4  1/16 ]    (3.9)
    [ 0    0    1/4  1/2  1/4  ]
    [ 0    0    0    0    1    ] .
The above equation implies that v6 = v6 , thus there are an infinite number of solu-
tions to v = Bv. Since we are looking for the largest solution bounded above by 1,
we must choose v6 = 1. Notice that we could have concluded this by observing that
X0 = 6 implies that the DTMC can never visit state 1 and hence T = ∞. Thus we
must have u6 = 0 or v6 = 1. Once v6 is chosen to be 1, we see that v = Bv has a
unique solution given by
v = [0.25, 0.50, 0.50, 0.75, 1]′ .
Thus the required answer is given by u2 = 1 − v2 = 0.75.
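Computationally, once v6 = 1 is fixed, the remaining components of v solve a small nonsingular linear system. A sketch under the same assumptions (numpy, the B of Equation 3.9):

```python
import numpy as np

B = np.array([
    [1/2, 0,   1/4, 0,   0   ],
    [0,   0,   1,   0,   0   ],
    [1/4, 1/8, 1/4, 1/4, 1/16],
    [0,   0,   1/4, 1/2, 1/4 ],
    [0,   0,   0,   0,   1   ],
])

# Fix v6 = 1 (last coordinate) and solve (I - B4) w = c for (v2,...,v5),
# where B4 is the top-left 4x4 block and c is the column of B multiplying v6.
B4 = B[:4, :4]
c = B[:4, 4]
w = np.linalg.solve(np.eye(4) - B4, c)
print(np.round(w, 4))   # [0.25 0.5  0.5  0.75], so u2 = 1 - 0.25 = 0.75
```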
Example 3.6 Gambler’s Ruin. Consider the Gambler’s ruin model of Exam-
ple 2.11 on page 17. The DTMC in that example is a simple random walk on
{0, 1, 2, · · · , N } with transition probabilities given by
pi,i+1 = p for 1 ≤ i ≤ N − 1,
pi,i−1 = q = 1 − p for 1 ≤ i ≤ N − 1,
p00 = pN N = 1.
Compute the probability that the DTMC eventually visits state 0 starting from state
i.
Let T = min{n ≥ 0 : Xn = 0}. We can write Equation 3.8 as follows:
vi = pvi+1 + qvi−1 , 1 ≤ i ≤ N − 1. (3.10)
The above equation can also be derived from first step analysis, or argued as follows:
If the DTMC starts in state i (1 ≤ i ≤ N − 1), it will be in state i + 1 with probability
p, and the probability of never visiting state 0 from then on is vi+1 ; and it will visit
state i − 1 with probability q and the probability of never visiting state 0 from then
on will be vi−1 . Hence we get Equation 3.10. To complete the argument we must
decide appropriate values for v0 and vN . Clearly, we must have v0 = 0, since once
the DTMC visits state 0, the probability of never visiting state 0 will be 0. We can
also argue that vN = 1, since once the DTMC visits state N it can never visit state
0. We also see that Equation 3.8 implies that vN = vN , and hence we set vN = 1 in
order to get the largest solution bounded above by 1.
Equation 3.10 is a difference equation with constant coefficients, and hence using
the standard methods of solving such equations (See Appendix I) we get
vi = (1 − (q/p)^i) / (1 − (q/p)^N) if q ≠ p, and vi = i/N if q = p. (3.11)
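Equation 3.11 is immediate to code; a small sketch follows, in which the parameter values N = 10, p = 0.6, and i = 5 are hypothetical:

```python
def never_visit_zero(i, N, p):
    """Probability v_i of never reaching 0 from state i (Equation 3.11)."""
    q = 1.0 - p
    if p == q:                       # symmetric case
        return i / N
    r = q / p
    return (1.0 - r**i) / (1.0 - r**N)

# For instance, with N = 10 and p = 0.6, starting from i = 5:
print(never_visit_zero(5, 10, 0.6))  # about 0.8836
```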
Example 3.7 Success Runs. Consider the success runs DTMC of Example 3.3.
Compute the probability that the DTMC never visits state 0 starting from state i.
Using the derivation in Example 3.3, we get
vi (n) = pi pi+1 · · · pi+n−1 , i ≥ 1, n ≥ 1.
Hence

vi = lim_{n→∞} pi pi+1 · · · pi+n−1 , i ≥ 1.
The above limit is zero if pn = 0 for some n ≥ i. To avoid this triviality, assume that
pi > 0 for all i ≥ 1. Now it is known that

lim_{n→∞} p1 p2 · · · pn = 0 ⇔ Σ_{n=1}^∞ qn = ∞,

and

lim_{n→∞} p1 p2 · · · pn > 0 ⇔ Σ_{n=1}^∞ qn < ∞.

Hence we see that

vi = 0, i.e., ui = 1 for all i ⇔ Σ_{n=1}^∞ qn = ∞,

and

vi > 0, i.e., ui < 1 for all i ⇔ Σ_{n=1}^∞ qn < ∞.
The reader should try to obtain the same results by using Theorem 3.2.
If q ≥ p, the random walk has a drift towards zero (since it goes towards zero with
higher probability than away from it), and it makes intuitive sense that the random
walk will hit zero with probability 1, and hence vi = 0. On the other hand, if q < p,
there is a drift away from zero, and there is a positive probability that the random
walk will never visit zero. Note that without Theorem 3.2 we would not be able to
choose any particular value of α in Equation 3.13.
Example 3.9 General Simple Random Walk. Now consider the space-
nonhomogeneous simple random walk with the following transition probabilities:
pi,i+1 = pi for i ≥ 1,
pi,i−1 = qi = 1 − pi for i ≥ 1,
p00 = 1.
Let T be the first passage time into state 0. Equation 3.8 for this system, written in
scalar form, is as follows:
vi = qi vi−1 + pi vi+1 , i ≥ 1, (3.14)
with a single boundary condition v0 = 0. Now let
xi = vi − vi−1 , i ≥ 1.
Using pi + qi = 1, we can write Equation 3.14 as
(pi + qi )vi = qi vi−1 + pi vi+1
which yields
qi xi = pi xi+1 ,
or, assuming pi > 0 for all i ≥ 1,
xi+1 = (qi /pi ) xi , i ≥ 1.
Solving recursively, we get
xi+1 = αi x1 , i ≥ 0, (3.15)
where α0 = 1 and
αi = (q1 q2 · · · qi)/(p1 p2 · · · pi) , i ≥ 1. (3.16)
Summing Equation 3.15 for i = 0 to j, and rearranging, we get
vj+1 = ( Σ_{i=0}^j αi ) v1 .
Now, Theorem 3.2 says that v1 must be the largest value such that 0 ≤ vj ≤ 1 for all
j ≥ 1. Hence we must choose
v1 = 1/( Σ_{i=0}^∞ αi ) if Σ_{i=0}^∞ αi < ∞, and v1 = 0 if Σ_{i=0}^∞ αi = ∞.
We can compute ui = 1 − vi from the above. Notice that visiting state 0 is certain from any state i (that is, ui = 1) if the sum Σ αj diverges. The reader is urged to verify that this is consistent with the results of Example 3.8.
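Numerically one approximates v1 by truncating the series Σ αi at a large index; the truncation level and the constant transition probabilities pi ≡ 0.6 in the sketch below are illustrative assumptions (for this choice αi = (2/3)^i, Σ αi = 3, and v1 = 1/3):

```python
def v1_general_random_walk(p, n_terms=10_000):
    """Approximate v1 = 1 / sum(alpha_i) by truncating the series.
    p is a function giving p_i for i >= 1; alpha_0 = 1.
    (If the series diverges, the truncated value tends to 0.)"""
    total, alpha = 1.0, 1.0          # alpha_0 = 1
    for i in range(1, n_terms):
        q = 1.0 - p(i)
        alpha *= q / p(i)            # alpha_i = alpha_{i-1} * q_i / p_i
        total += alpha
    return 1.0 / total

print(v1_general_random_walk(lambda i: 0.6))   # approximately 1/3
```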
Now let
ψ(u) = Σ_{j=0}^∞ αj u^j (3.18)
be the generating function of the offspring size. The absorption probability u satisfies
u = ψ(u). (3.19)
Since ψ′′(u) ≥ 0 for all u ≥ 0, ψ is a convex function of u ∈ [0, 1]. Also, ψ(0) = α0
and ψ(1) = 1. Clearly, if α0 = 0, each individual produces at least one offspring,
and hence the branching process never goes extinct, i.e., u = 0. Hence assume that
α0 > 0. Figures 3.1 and 3.2 show two possible cases that can arise.
The situation in Figure 3.1 arises if

ψ′(1) = Σ_{i=0}^∞ i αi = µ > 1,

while that in Figure 3.2 arises if µ ≤ 1. (Figures 3.1 and 3.2 plot ψ(u) against u over [0, 1]; the curve is convex with intercept ψ(0) = α0 and ψ(1) = 1.) In the first case there is a unique value of u ∈ (0, 1) satisfying ψ(u) = u; in the second case the only value of u ∈ [0, 1] that satisfies u = ψ(u) is u = 1. Thus we conclude that extinction is certain if µ ≤ 1,
while there is a positive probability of the branching process growing without bounds
if µ > 1. This is intuitive, except for the critical case of µ = 1. In this case our previous analysis shows that E(Xn ) = 1 for all n ≥ 0; however, the branching process becomes extinct with probability one. This seemingly inconsistent result is a manifestation of the fact that
convergence in distribution or with probability one does not imply convergence of
the means.
We will encounter Equation 3.19 multiple times during the study of DTMCs. Hence it
is important to know how to solve it. Except in very special cases (See Computational
Exercise 3.34) we need to resort to numerical methods to solve this equation. We
describe one such method here.
Let ψ be as defined in Equation 3.18, and define
ρ0 = 0, ρn+1 = ψ(ρn ), n ≥ 0.
We shall show that if µ > 1, ρn → u, the desired solution in (0,1) to Equation 3.19.
We have

ρ1 = ψ(0) = α0 > 0 = ρ0 .
Now, since ψ is an increasing function, it is clear that
ρn+1 ≥ ρn , n ≥ 0.
Also, ρn ≤ 1 for all n ≥ 0. Hence {ρn , n ≥ 0} is a bounded monotone increasing
sequence. Hence it has a limit u, and this u satisfies Equation 3.19. The sequence
{ρn } is geometrically illustrated in Figure 3.3.
Similarly, if µ > 1, one can use the bisection method to find a ρ∗0 < 1 such that
ψ(ρ∗0 ) < ρ∗0 . Then recursively define
ρ∗n+1 = ψ(ρ∗n ), n ≥ 0.
(Figure 3.3: the iterates ρ0 ≤ ρ1 ≤ ρ2 ≤ · · · produced by ρn+1 = ψ(ρn) climbing along the curve ψ(u) to its smallest fixed point.)
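The successive substitution scheme ρn+1 = ψ(ρn) takes only a few lines of code. In the sketch below the offspring pmf (α0, α1, α2) = (0.25, 0.25, 0.5) is a hypothetical example with µ = 1.25 > 1, for which the smallest fixed point is u = 0.5:

```python
def extinction_probability(alpha, tol=1e-12, max_iter=100_000):
    """Iterate rho_{n+1} = psi(rho_n) from rho_0 = 0, where
    psi(u) = sum_j alpha[j] * u**j is the offspring pgf."""
    psi = lambda u: sum(a * u**j for j, a in enumerate(alpha))
    rho = 0.0
    for _ in range(max_iter):
        rho_next = psi(rho)
        if abs(rho_next - rho) < tol:
            break
        rho = rho_next
    return rho

# Offspring pmf (0.25, 0.25, 0.5): mu = 0.25 + 2 * 0.5 = 1.25 > 1.
print(extinction_probability([0.25, 0.25, 0.5]))   # converges to u = 0.5
```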
with the boundary condition u0 = 1. Since the above equation is a difference equation with constant coefficients, we try a geometric solution ui = u^i , i ≥ 0. Substituting in the above equation and canceling u^{i−1} from both sides we get

u = Σ_{j=0}^∞ αj u^j .
3.4 Expectation of T
Let {Xn , n ≥ 0} and T be as defined in Section 3.1. In this section we assume that
ui = P(T < ∞|X0 = i) = 1 for all i ≥ 1 and compute
mi = E(T |X0 = i), i ≥ 1.
Notice that ui < 1 would imply that mi = ∞. Let
m = [m1 , m2 , m3 , · · ·]′ ,
and B be as defined in Equation 3.3. The main result is given by the following theorem.
Theorem 3.3 Suppose ui = 1 for all i ≥ 1. Then m is given by the smallest non-
negative solution to
m = e + Bm, (3.22)
where e is a column vector of all ones.
Now, using
E(T |X0 = i, X1 = 0) = 1,
and
E(T |X0 = i, X1 = j) = 1 + E(T |X0 = j) = 1 + mj , j ≥ 1
and simplifying, we get
mi = 1 + Σ_{j=1}^∞ pij mj , i ≥ 1. (3.23)
This yields Equation 3.22 in matrix form. The proof that it is the smallest non-
negative solution follows along the same lines as the proof of Theorem 3.2.
Notice that Equation 3.23 can be intuitively understood as follows: If the DTMC
starts in state i ≥ 1, it has to take at least one step before hitting state 0. If after the
first step it is in state 0, it does not need any more steps to reach state zero. On the
other hand, if it is in state j ≥ 1, it will need an additional mj steps on average to
reach state zero. Weighing all these possibilities we get Equation 3.23. We illustrate
with several examples below.
Example 3.13 Genotype Evolution. Consider the six-state DTMC in the Geno-
type Evolution model of Example 3.2 with transition probability matrix given in
Equation 3.7. Suppose initially the population consists of one dominant and one hy-
brid individual. Compute the expected time until the population becomes entirely
dominant or entirely recessive.
Let
T = min{n ≥ 0 : Xn = 1 or 6}.
We are asked to compute m2 = E(T |X0 = 2). Since we know that eventually the
entire population will be either dominant or recessive, ui = 1 for all i. Now let
m = [m2 , m3 , m4 , m5 ]′ , and B be the 4 by 4 submatrix of P obtained by deleting
rows and columns corresponding to states 1 and 6. Then, from Theorem 3.3 we get
m = e + Bm.
Solving numerically, we get

m = [4 5/6, 6 2/3, 5 2/3, 4 5/6]′ .
Thus, on the average, the population genotype gets fixed in 4 5/6 steps if it starts with one dominant and one hybrid individual.
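For a finite state-space with ui = 1 for all i, Equation 3.22 is simply the linear system (I − B)m = e. A numerical sketch, assuming numpy and the 4 × 4 submatrix for states {2, 3, 4, 5}:

```python
import numpy as np

# 4x4 submatrix of P for states {2,3,4,5} (rows and columns for states
# 1 and 6 deleted); these entries are the top-left block of Equation 3.9.
B = np.array([
    [1/2, 0,   1/4, 0  ],
    [0,   0,   1,   0  ],
    [1/4, 1/8, 1/4, 1/4],
    [0,   0,   1/4, 1/2],
])
e = np.ones(4)
m = np.linalg.solve(np.eye(4) - B, e)   # solves m = e + B m
print(m)   # [4.8333 6.6667 5.6667 4.8333] = [4 5/6, 6 2/3, 5 2/3, 4 5/6]
```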
The next example shows that some questions can be answered by using the first
passage times even if the problem does not explicitly involve a DTMC!
Example 3.15 General Simple Random Walk. Compute the expected time to hit
state 0 starting from state i ≥ 1 in the general simple random walk of Example 3.9.
Let αi be as given in Equation 3.16. We shall assume that

Σ_{r=1}^∞ αr = ∞,
so that, from Example 3.9, ui = 1 for all i. The first-step analysis yields
mi = 1 + qi mi−1 + pi mi+1 , i ≥ 1 (3.24)
with boundary condition m0 = 0. Now let
xi = mi − mi−1 , i ≥ 1.
Then Equation 3.24 can be written as
qi xi = 1 + pi xi+1 , i ≥ 1.
Solving recursively, we get
xi+1 = −αi bi + αi m1 , i ≥ 0 (3.25)
where

bi = Σ_{j=1}^i 1/(pj αj) ,
with b0 = 0. Summing Equation 3.25 we get
mi+1 = xi+1 + xi + · · · + x1 = − Σ_{j=1}^i αj bj + m1 Σ_{j=0}^i αj , i ≥ 0.
Note that m1 may be finite or infinite. Using the above expression for m1 we get

mi = Σ_{k=0}^{i−1} αk Σ_{j=k+1}^∞ 1/(pj αj) .
with the boundary condition m0 = 0. It can be shown that the solution to this set
of equations is given by mi = im for some positive m. Substituting in the above
equation we get
i m = 1 + Σ_{j=0}^∞ αj (i + j − 1) m, i ≥ 1.

Simplifying, we get

(1 − µ) m = 1.

Since mi is the smallest non-negative solution, we must have

m = 1/(1 − µ) if µ < 1, and m = ∞ if µ ≥ 1.

Hence

mi = i/(1 − µ) if µ < 1, and mi = ∞ if µ ≥ 1.
z/(1 − z) = α0 m1 − (ψ(z)/z − 1) φ(z).

This yields

φ(z) = ( α0 m1 z(1 − z) − z² ) / ( (1 − z)(ψ(z) − z) ) .
One can show that φ(z) < ∞ for |z| < 1. Hence if there is a z with |z| < 1 for which the denominator in the above equation becomes zero, the numerator must also become zero. From the results of Example 3.10, we see that ψ(z) − z has a solution z = α with 0 < α < 1 if and only if µ = Σ k αk > 1. Thus, in this case we must have

α0 m1 α(1 − α) = α² ,

or

m1 = α / (α0 (1 − α)) .

Substituting in φ(z) we see that

φ(z) = ( (α/(1 − α)) z(1 − z) − z² ) / ( (1 − z)(ψ(z) − z) ) ,

if α0 > 0 and µ > 1. Clearly, if α0 = 0 or if µ ≤ 1 we get m1 = ∞.
3.5 Generating Function and Higher Moments of T
Let {Xn , n ≥ 0} and T be as defined in Section 3.1. We begin this section with the
study of the generating function of T defined as
φi (z) = E(z^T |X0 = i) = Σ_{n=0}^∞ z^n P(T = n|X0 = i), i ≥ 1.
The above generating function is well defined for all complex z with |z| ≤ 1. Let B
be as defined in Equation 3.3 and let
b = [p10 , p20 , · · ·]′ .
The next theorem gives the main result concerning
φ(z) = [φ1 (z), φ2 (z), φ3 (z), · · ·]′ .
Theorem 3.4 The vector φ(z) is the smallest solution (for z ∈ [0, 1]) to
φ(z) = zb + zBφ(z). (3.28)
Now, X0 = i, X1 = 0 ⇒ T = 1. Hence

E(z^T |X0 = i, X1 = 0) = z.
Also, for j ≥ 1, X0 = i, X1 = j implies that T has the same distribution as 1 + T
starting from X0 = j. Hence
E(z^T |X0 = i, X1 = j) = E(z^{1+T} |X0 = j) = z φj (z).
Using these two observations we get
φi (z) = z pi0 + z Σ_{j=1}^∞ pij φj (z).
The above equation in matrix form yields Equation 3.28. The proof that it is the
smallest solution (for z ∈ [0, 1]) follows along the same lines as that of the proof of
Theorem 3.2.
Note that
φi (1) = Σ_{n=0}^∞ P(T = n|X0 = i) = P(T < ∞|X0 = i) = ui = 1 − vi . (3.29)
Example 3.18 Genotype Evolution. Consider the six-state DTMC in the Geno-
type Evolution model of Example 3.2 with transition probability matrix given in
Equation 3.7. Suppose initially the population consists of one dominant and one hy-
brid individual. Compute the variance of time until the population becomes entirely
dominant or entirely recessive.
Let
T = min{n ≥ 0 : Xn = 1 or 6}.
We will first compute mi (2) = E(T (T − 1)|X0 = i), i = 2, 3, 4, 5. Since we know
that eventually the entire population will be either dominant or recessive, ui = 1
for all i. Now let m(2) = [m2 (2), m3 (2), m4 (2), m5 (2)]′ , and B be the 4 by 4
submatrix of P obtained by deleting rows and columns corresponding to states 1 and
6. From Example 3.13, we have
m = m(1) = [4 5/6, 6 2/3, 5 2/3, 4 5/6]′ .
Theorem 3.5 implies that
m(2) = 2Bm(1) + Bm(2).
Numerical calculations yield

m(2) = [39 8/9, 60 4/9, 49 1/9, 39 8/9]′ .
Hence the required answer is

Var(T |X0 = 2) = m2 (2) + m2 (1) − m2 (1)² = 769/36 = 21 13/36.
3.1 Let {Xn , n ≥ 0} be a DTMC on state space {0, 1, 2, 3} with the following
transition probability matrix:
[ 0.2 0.1 0.0 0.7 ]
[ 0.1 0.3 0.6 0.0 ]
[ 0.0 0.4 0.2 0.4 ]
[ 0.7 0.0 0.1 0.2 ] .
Let T = min{n ≥ 0 : Xn = 0}. Compute
3.2 Do the above problem with the following transition probability matrix:
0.0 1.0 0.0 0.0
0.2 0.0 0.8 0.0
0.0 0.8 0.0 0.2 .
3.3 Consider a 4 by 4 chutes and ladders game as shown in Figure 3.4. A player
starts on square one. He tosses a six sided fair die and moves 1 square if the die
shows 1 or 2, 2 squares if it shows 3 or 4, and 3 squares if it shows 5 or 6. Toward
the end of the play, when the player is near square 16, he has to land on square 16
(Figure 3.4: a 4 × 4 chutes and ladders board with rows, from bottom to top, 1 2 3 4 / 8 7 6 5 / 9 10 11 12 / 16 15 14 13; the board includes a ladder from square 3 to square 5.)
exactly in order to finish the game. If he overshoots 16, he has to toss again. (In a
two person game he loses a turn.) Compute the expected number of tosses needed to
finish the game. (You may need to use a computer.)
3.4 If two players are playing the game compute the probability distribution of the
time until the game ends (that is, when one of the two players lands on 16.) As-
sume that there is no interaction among the two players, except that they take turns.
Compute the mean time until the game terminates.
3.5 Compute the expected number of times the ladder from 3 to 5 is used by a player
during one game.
3.6 Develop a computer program that analyzes a general chutes and ladders game.
The program accepts the following input: n, size of the board (n by n); k, the number
of players; the placements of the chutes and the ladders, and the distribution of the
number of squares moved in one turn. The program produces the following output:
1. Distribution of the number of tosses needed for one player to finish the game.
2. Distribution of the time to complete the game played by k players.
3. Expected length of the game played by k players.
3.7 Consider a maze of nine cells as shown in Figure 3.5. At time 0 a rat is placed
in cell one, and food in cell 9. The rat stays in the current cell for one unit of time
and then chooses one of the doors in the cell at random and moves to an adjacent
cell. Its successive moves are independent and completely uninfluenced by the food.
Compute the expected time required for the rat to reach the food.
(Figure 3.5: a 3 × 3 maze; cells 1, 2, 3 in the top row, 4, 5, 6 in the middle row, and 7, 8, 9 in the bottom row, with doors between adjacent cells.)
3.8 Suppose the food in the above problem is replaced by a cat that moves like the
rat, but in an independent fashion. Of course, when the cat and the rat occupy the
same cell, the cat promptly eats the rat. Compute the expected time when the cat gets
its meal.
3.10 Consider the Moran model on page 25 with N genes in the population, i of
which are initially dominant (1 ≤ i ≤ N ). Compute the expected time until all the
genes are dominant or all the genes are recessive.
3.11 Suppose a coin is tossed repeatedly and independently, the probability of ob-
serving a head on any toss being p. Compute the probability that a string of r con-
secutive heads is observed before a string of m consecutive tails.
3.12 In the Computational Exercise 3.10 compute the probability that eventually all
the genes become dominant.
3.13 Let {Zn , n ≥ 1} be a sequence of iid random variables with common pmf
pZ (1) = pZ (2) = pZ (3) = pZ (4) = 0.25. Let X0 = 0 and Xn = Z1 +Z2 +· · ·+Zn ,
n ≥ 1. Define T = min{n > 0 : Xn is divisible by 7}. Note the strict inequality in
n > 0. Compute E(T |X0 = 0).
3.14 Consider the binomial model of the stock fluctuation as described on page 28.
Suppose X0 = 1, and an individual owns one stock at time 0. Suppose the individual
has decided to sell the stock as soon as it reaches a value of 2 or more. Compute
the expected time when such a sale would occur, assuming that p = 0.5, u = 0.2 and d = 0.1. Hint: It might be easier to deal with the log of the stock price.
3.15 In the coin tossing experiment of Computational Exercise 3.11 compute the
expected number of tosses needed to observe the sequence HHT T for the first time.
3.16 Solve Computational Exercise 3.14 if the individual decides to sell the stock
when it goes above 2 or below 0.7.
3.19 Let {Xn , n ≥ 0} be a simple random walk on {0, 1, 2, · · ·} with p00 = 1 and
pi,i+1 = p, pi,i−1 = q for i ≥ 1. Let T = min{n ≥ 0 : Xn = 0}. Show that, for
|z| ≤ 1,
E(z T |X0 = i) = φ(z)i
where

φ(z) = (1 − √(1 − 4pqz²)) / (2pz).
Give a probabilistic interpretation of φ(z)i in terms of the generating functions of the
convolutions of iid random variables.
3.20 Compute the expected number of bases between two consecutive appearances
of the triplet CAG in the DNA sequence in Computational Exercise 3.18.
3.22 In the Computational Exercise 3.18 compute the probability that we see the
triplet ACT before the triplet GCT .
3.23 In the Gambler’s Ruin model of Example 2.11 on page 17 compute the ex-
pected number of bets won by the player A until the game terminates, assuming that
the player A starts the game with i dollars, 1 ≤ i ≤ N − 1.
3.24 Consider the discrete time queue with Bernoulli arrivals and departures as de-
scribed in Example 2.12 on page 17. Suppose the queue has i customers in it initially.
Compute the probability that the server will eventually become idle.
3.25 Consider the clinical trials of Example 2.5 on page 14 using play the winner
rule. Suppose the trial stops as soon as either drug produces r successes, with the
drug producing the r successes first being declared the superior drug. Compute the
probability that the better of the two drugs gets declared as the best.
3.26 Derive a recursive method to compute the expected number of patients needed
to successfully conclude the clinical trial of the Computational Exercise 3.25.
3.30 A clinical trial is designed to identify the better of two experimental treatments.
The trial consists of several stages. At each stage two new patients are selected ran-
domly from a common pool of patients and one is given treatment 1, and the other is
given treatment 2. The stage is complete when the result of each treatment is known
as a success or a failure. At the end of the nth stage we record Xn = the number of
successes under treatment 1 minus those on treatment 2 observed on all the stages
so far. The trial stops as soon as Xn reaches +k or −k, where k is a given positive
integer. If the trial stops with Xn = k, treatment 1 is declared to be better than 2, else
treatment 2 is declared to be better than 1. Now suppose the probability of success
of the ith treatment is pi , i = 1, 2. Suppose p1 > p2 . Assume that the results of
the successive stages are independent. Compute the probability that the clinical trial
reaches the correct decision, as a function of p1 , p2 , and k.
3.31 In the Computational Problem 3.30 compute the expected number of patients
subjected to treatments during the entire clinical trial.
3.32 Consider the manufacturing set up described in Modeling Exercise 2.32. Sup-
pose initially both bins are empty. Compute the expected time (in hours) until one of
the machines is turned off.
3.33 Consider the two bar town of Modeling Exercise 2.11. Assume that the two
transition probability matrices are identical. Compute the expected time when boy
and girl meet.
3.34 Solve Equation 3.19 for the special case when αi = 0 for i ≥ 4.
Hint: Use first-step analysis and follow the proof of Theorem 3.2.
3.2 For the Conceptual Exercise 3.1 derive an analog of Theorem 3.3 for
mi (A) = E(T (A)|X0 = i).
and ui,n = P(T ≤ n|X0 = i). Using the first step analysis show that
ui,n+1 = pi,0 + Σ_{j=1}^∞ pij uj,n , i > 0, n ≥ 0,

and

mi,n+1 = ui,n+1 + Σ_{j=1}^∞ pij mj,n , i > 0, n ≥ 0.
3.7 Let A be a fixed subset of the state-space S. Derive a set of equations to compute
the probability that the DTMC visits every state in A before visiting state 0.
3.8 Let {Xn , n ≥ 0} be a DTMC with state-space {0, 1, · · ·}, and transition proba-
bility matrix P . Let wij be the expected number of visits to state j starting from state
i before hitting state 0, counting the visit at time 0 if i = j. Using first-step analysis
show that
wij = δij + Σ_{k=1}^∞ pik wkj , i ≥ 1,
where δij = 1 if i = j, and zero otherwise.
3.9 Let {Xn , n ≥ 0} be a DTMC with state-space {0, 1, · · ·}, and transition proba-
bility matrix P . Let T = min{n ≥ 0 : Xn = 0}. A state j is called a gateway state
to state 0 if X_{T −1} = j. Derive equations to compute the probability wij that state j
is a gateway state to state 0 if the DTMC starts in state i.
3.10 Let {Xn , n ≥ 0} be a DTMC with state-space {0, 1, · · ·}, and transition proba-
bility matrix P . Let (i0 , i1 , · · · , ik ) be a sequence of states in the state-space. Define
T = min{n ≥ 0 : Xn = i0 , Xn+1 = i1 , · · · , Xn+k = ik }. Derive a method to
compute E(T |X0 = i).
3.11 Let {Xn , n ≥ 0} be a DTMC with state-space {0, 1, · · ·}, and transition prob-
ability matrix P . Let T = min{n ≥ 0 : Xn = 0}. Let M be the largest state visited by the DTMC until it hits state 0. Derive a method to compute the distribution of M .
3.12 Show by induction that the sequence {vi , i ≥ 0} satisfying Equation 3.21 is
an increasing sequence.
CHAPTER 4

Discrete-Time Markov Chains: Limiting Behavior
“God not only plays dice, he also sometimes throws the dice where they cannot be
seen”
– Stephen Hawking
In this chapter we study the limiting behavior of P (n) as n → ∞. Since the row
sums of M (n) are n + 1, we study the limiting behavior of M (n) /(n + 1) as n → ∞.
Note that [M (n) ]ij /(n + 1) can be interpreted as the fraction of the time spent by the
DTMC in state j starting from state i during {0, 1, · · · , n}. Hence studying this limit
makes practical sense. We begin with some examples illustrating the types of limiting
behavior that can arise.
Example 4.1 Two-State DTMC. Let

P = [ α      1 − α ]
    [ 1 − β  β     ]

be the transition probability matrix of the two-state DTMC of Example 2.3 on page 13 with α + β < 2. From Example 2.18 on page 33 we get

P^n = (1/(2 − α − β)) [1−β 1−α; 1−β 1−α] + ((α + β − 1)^n/(2 − α − β)) [1−α α−1; β−1 1−β], n ≥ 0,

where [a b; c d] denotes the matrix with rows (a, b) and (c, d).
Hence we get

lim_{n→∞} P^n = (1/(2 − α − β)) [1−β 1−α; 1−β 1−α].
Thus the limit of P (n) exists and its row sums are one. It has the interesting feature that both rows of the limiting matrix are the same. This implies that the limiting
distribution of Xn does not depend upon the initial distribution of the DTMC. We
shall see that a large class of DTMCs share this feature. From Example 2.24 on
page 37 we get, for n ≥ 0,
M (n) = ((n + 1)/(2 − α − β)) [1−β 1−α; 1−β 1−α] + ((1 − (α + β − 1)^{n+1})/(2 − α − β)²) [1−α α−1; β−1 1−β].
Hence, we get
lim_{n→∞} M (n)/(n + 1) = (1/(2 − α − β)) [1−β 1−α; 1−β 1−α].
Thus, curiously, the limit of M (n) /(n + 1) in this example is the same as that of
P (n) . This feature is also shared by a large class of DTMCs. We will identify this
class in this chapter.
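Both limits are easy to verify numerically. A small sketch, assuming numpy; the values α = 0.6 and β = 0.7 are hypothetical:

```python
import numpy as np

alpha, beta = 0.6, 0.7
P = np.array([[alpha, 1 - alpha],
              [1 - beta, beta]])

print(np.linalg.matrix_power(P, 100))   # both rows approach the same limit

# Closed form: each row of lim P^n is [1-beta, 1-alpha] / (2-alpha-beta).
print(np.array([1 - beta, 1 - alpha]) / (2 - alpha - beta))  # [3/7, 4/7]
```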
Example 4.3 Genotype Evolution Model. Consider the six-state DTMC of the
genotype evolution model with the transition probability matrix as given in Equa-
tion 2.13 on page 24. Direct numerical calculations show that
lim_{n→∞} P (n) = lim_{n→∞} M (n)/(n + 1) =

[ 1    0  0  0  0  0   ]
[ 3/4  0  0  0  0  1/4 ]
[ 1/2  0  0  0  0  1/2 ]    (4.7)
[ 1/2  0  0  0  0  1/2 ]
[ 1/4  0  0  0  0  3/4 ]
[ 0    0  0  0  0  1   ] .
Thus this example also shows that the limit of P (n) exists and is the same as the
limit of M (n) /(n + 1), and the row sums of the limiting matrix are one. However, in this example not all the rows of the limiting matrix are identical. Thus the limiting
distribution of Xn exists, but depends upon the initial distribution of the DTMC. We
will identify the class of DTMCs with this feature later on in this chapter.
Example 4.4 Simple Random Walk. A simple random walk on all integers has the
following transition probabilities
pi,i+1 = p, pi,i−1 = q = 1 − p, −∞ < i < ∞,
where 0 < p < 1. We have seen in Example 2.19 on page 33 that
p00^(2n) = ((2n)!/(n! n!)) p^n q^n , n = 0, 1, 2, · · · . (4.8)
We show that the above quantity converges to zero as n → ∞. We need to use the
following asymptotic expression called Stirling’s formula:
n! ∼ √(2π) n^{n+1/2} e^{−n}
where ∼ indicates that the ratio of the two sides goes to one as n → ∞. We have
p00^(2n) = ((2n)!/(n! n!)) p^n q^n
        ∼ ( √(2π) e^{−2n} (2n)^{2n+1/2} / (√(2π) e^{−n} n^{n+1/2})² ) p^n q^n
        = (4pq)^n / √(πn)
        ≤ 1/√(πn),
where the last inequality follows because 4pq = 4p(1 − p) ≤ 1 for 0 ≤ p ≤ 1. Thus p00^(2n) approaches zero asymptotically. We also know that p00^(2n+1) is always zero. Thus we see that

lim_{n→∞} p00^(n) = 0.
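The √(πn) decay is easy to observe numerically. A sketch for the symmetric case p = q = 1/2, comparing the exact probability with the Stirling asymptote (the chosen values of n are arbitrary):

```python
import math

p = q = 0.5
for n in (1, 10, 100, 500):
    exact = math.comb(2 * n, n) * p**n * q**n          # C(2n, n) p^n q^n
    approx = (4 * p * q)**n / math.sqrt(math.pi * n)   # (4pq)^n / sqrt(pi n)
    print(n, exact, approx)   # the ratio tends to 1 as n grows
```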
A similar calculation shows that

lim_{n→∞} [P (n) ]ij = 0

for all i and j. It is even more tedious, but possible, to show that

lim_{n→∞} [M (n) ]ij /(n + 1) = 0.
Thus in this example the two limits again coincide, but the row sums of the limiting
matrix are not one, but zero! Since P (n) is a stochastic matrix, we have
Σ_{j=−∞}^∞ pij^(n) = 1, n ≥ 0.
This implies that the interchange of limit and the sum in this infinite state-space
DTMC is not allowed. We shall identify the class of DTMCs with this feature later
in this chapter.
The examples above illustrate three distinct cases:

Case 1: Limit of P (n) exists, has identical rows, and each row sums to one.
Case 2: Limit of P (n) exists, does not have identical rows, each row sums to one.
Case 3: Limit of P (n) exists, but the rows may not sum to one.
Note that j is accessible from i if and only if there is a directed path from i to j in the digraph representation of the DTMC. This is so, since pij^(n) > 0 implies that there is a sequence of states i = i0 , i1 , · · · , in = j such that pik ,ik+1 > 0 for k = 0, 1, · · · , n − 1. This in turn is equivalent to the existence of a directed path i = i0 , i1 , · · · , in = j in the digraph.
We write i → j if state j is accessible from i. Since pii^(0) = 1, it trivially follows that i → i.
We write i ↔ j, and say that states i and j communicate, if i → j and j → i. The following theorem shows that communication is an equivalence relation:

(i) i ↔ i (reflexivity),
(ii) i ↔ j ⇔ j ↔ i (symmetry),
(iii) i ↔ j, j ↔ k ⇒ i ↔ k (transitivity).
Proof: (i) and (ii) are obvious from the definition. To prove (iii) note that i → j and
j → k imply that there are integers n ≥ 0 and m ≥ 0 such that
pij^(n) > 0, pjk^(m) > 0.

Hence

pik^(n+m) = Σ_{r∈S} pir^(n) prk^(m)   (Theorem 2.3)
          ≥ pij^(n) pjk^(m) > 0.
Hence i → k. Similarly k → j and j → i imply that k → i. Thus
i ↔ j, j ↔ k ⇒ i → j, j → k and k → j, j → i ⇒ i → k, k → i ⇒ i ↔ k.
It is possible to write a computer program that can check the accessibility of one
state from another in O(q) time where q is the number of non-zero entries in P .
The same algorithm can be used to check if two states communicate. See Conceptual
Exercise 4.3.
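For concreteness, here is one way such a check might be coded: a breadth-first search over the digraph inspects each non-zero entry of P at most once, which gives the O(q) bound. The adjacency-list representation used below is an assumption of this sketch, not a prescription from the text.

```python
from collections import deque

def accessible(P, i):
    """Return the set of states accessible from i, where P is given
    as an adjacency structure: P[k] lists the states j with p_kj > 0."""
    seen = {i}                    # i -> i always (path of length 0)
    queue = deque([i])
    while queue:
        k = queue.popleft()
        for j in P[k]:            # scan the non-zero entries of row k once
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return seen

def communicate(P, i, j):
    """Two states communicate iff each is accessible from the other."""
    return j in accessible(P, i) and i in accessible(P, j)

# Example: a reducible two-state chain where state 1 leads to state 0
# but not conversely (hypothetical adjacency lists).
P = {0: [0], 1: [0, 1]}
print(accessible(P, 0), communicate(P, 0, 1))   # {0} False
```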
From the above theorem it is clear that “communication” is a reflexive, symmetric,
and transitive binary relation. (See Conceptual Exercise 4.1 for more examples of
binary relations.) Hence we can use it to partition the state-space S into subsets
known as communicating classes.
Definition (Communicating Class). A set of states C ⊆ S is called a communicating class if

(i) i ∈ C, j ∈ C ⇒ i ↔ j,
(ii) i ∈ C, i ↔ j ⇒ j ∈ C.
Property (i) assures that any two states in a communicating class communicate
with each other (hence the name). Property (ii) forces C to be a maximal set, i.e.,
no strict superset of C can be a communicating class. Note that it is possible to
have a state j outside C that is accessible from a state i inside C, but in this case, i
cannot be accessible from j. Similarly, it is possible to have a state i inside C that is
accessible from a state j outside C, but in this case, j cannot be accessible from i.
This motivates the following definition.
Definition (Closed Communicating Class). A communicating class C is said to be closed if i ∈ C and i → j imply that j ∈ C.

Note that once a DTMC visits a state in a closed communicating class C, it cannot leave it, i.e.,

Xn ∈ C ⇒ Xm ∈ C for all m ≥ n.
Two distinct communicating classes must be disjoint, see Conceptual Exercise 4.2.
Thus we can uniquely partition the state-space S as follows:
S = C1 ∪ C2 ∪ · · · ∪ Ck ∪ T, (4.9)
where C1 , C2 , · · · Ck are k disjoint closed communicating classes, and T is the union
of all the other communicating classes. We do not distinguish between the communi-
cating classes if they are not closed, and simply lump them together in T . Although
it is possible to have an infinite number of closed communicating classes in a DTMC
with a discrete state-space, in practice we shall always encounter DTMCs with a
finite number of closed communicating classes.
It is possible to develop an algorithm to derive the above partition in time O(q),
see Conceptual Exercise 4.4. Observe that the classification depends only on which
elements of P are positive and which are zero. It does not depend upon the actual
values of the positive elements.
Definition 4.5 Irreducibility. A DTMC with state-space S is said to be irreducible
if S is a closed communicating class, else it is called reducible.
It is clear that all states in an irreducible DTMC communicate with each other. We
shall illustrate the above concepts by means of several examples below. In each case
we show the digraph representation of the DTMC. In our experience the digraph representation is the best visual tool to identify the partition of Equation 4.9.
Example 4.5 Two-State DTMC. Consider the two state DTMC of Example 2.3 with 0 < α, β < 1. The digraph is shown in Figure 4.1. (Figure 4.1: states 1 and 2, with self-loops of probabilities α and β, an arc from 1 to 2 with probability 1 − α, and an arc from 2 to 1 with probability 1 − β.) In this DTMC 1 ↔ 2, and
hence {1, 2} is a closed communicating class, and the DTMC is irreducible. Next
suppose α = 1 and 0 < β < 1. The digraph representation is shown in Figure 4.2. (Figure 4.2: state 1 has a self-loop of probability 1; state 2 has a self-loop of probability β and an arc to state 1 with probability 1 − β.) Here we have 1 ↔ 1, 2 ↔ 2, 2 → 1, but 2 is not accessible from 1. Thus C1 = {1} is the only closed communicating class, T = {2}, and the DTMC is reducible.
Example 4.6 Genotype Evolution. Consider the six state DTMC arising in the
genotype evolution model of Example 4.3. Since p11 = p66 = 1, it is clear that C1 =
{1} and C2 = {6} are two closed communicating classes. Also, T = {2, 3, 4, 5} is
a communicating class that is not closed, since states 1 and 6 are accessible from all
the four states in T . The DTMC is reducible.
Example 4.7 Simple Random Walk. Consider the simple random walk on
{0, 1, · · · , N } with p00 = pN N = 1, pi,i+1 = p, pi,i−1 = q, 1 ≤ i ≤ N − 1.
Assume p > 0, q > 0, p + q = 1. The digraph representation is shown in Figure 4.3. (Figure 4.3: states 0 and N carry self-loops of probability 1; each state 1 ≤ i ≤ N − 1 has an arc to i + 1 with probability p and an arc to i − 1 with probability q.) In this DTMC C1 = {0} and C2 = {N } are two closed communicating classes. Also, T = {1, 2, · · · , N − 1} is a communicating class that is not closed, since states 0 and N are accessible from all the states in T . The DTMC is reducible.
What happens if p or q is 1?
Note that a state i is aperiodic if pii > 0. The term periodic is self explanatory: the
DTMC, starting in a state i with period d, can return to state i only at times that
are integer multiples of d. An alternate definition of periodicity involves the strictly
positive first passage time defined in Conceptual Exercise 3.3, reiterated here with a
slight variation:
T̃i = min{n > 0 : Xn = i}, i ∈ S. (4.10)
We leave it to the reader (see Conceptual Exercise 4.5) to show that the two defi-
nitions are equivalent. If the DTMC never returns to state i once it leaves state i, the
sets in the two definitions above are empty. In this case we define the period to be ∞.
It will be seen later that for such states the concept of periodicity is irrelevant. The
next theorem gives a very useful result about periodicity.
Proof: Let d be the period of state i. Now, i ↔ j implies that there are two integers
n and m such that
(n) (n)
pij > 0, and pji > 0.
We have
(n+r+m) (n) (r) (m) (n) (r) (m)
X
pii = pik pkj pji ≥ pij pjj pji .
k,j
For r = 0, we get
(n+m) (n) (m)
pii ≥ pij pji > 0.
Hence n + m must be an integer multiple of d. Now suppose r is not divisible by
(n+r+m)
d. Then n + m + r is not divisible by d. Hence pii = 0 since i has period d.
(n) (r) (m) (n) (m)
Thus pij pjj pji = 0. However, pij and pji are not zero. Hence we must have
RECURRENCE AND TRANSIENCE 93
(r) (r)
pjj = 0 if r is not divisible by d. Thus the gcd of {r ≥ 1 : pjj> 0} must be
an integer multiple of d. Suppose it is kd for some integer k ≥ 1. However, by the
same argument, if the period of j is kd, that of i must be an integer multiple of kd.
However, we have assumed that the period of i is d. Hence we must have k = 1. This
proves the theorem.
The above theorem implies that all states in a communicating class have the same
period. We say that periodicity is a class property. This enables us to talk about the
period of a communicating class. A communicating class or an irreducible DTMC
is said to be periodic with period d if any (and hence every) state in it has period
d. If the period is 1, it is called aperiodic. It should be noted that periodicity, like
communication, depends only upon which elements of P are positive, and which are
zero; and not on the actual magnitudes of the positive elements.
Example 4.8 Two-State DTMC. Consider the two state DTMC of Example 4.1.
In each case considered there pii > 0 for i = 1, 2, hence the both the states are
aperiodic. Now consider the case with α = β = 0. The transition probability matrix
in this case is
0 1
P .
1 0
Thus we see that if X0 = 1 we have
X2n = 1, X2n−1 = 2, n ≥ 1.
Hence the DTMC returns to state 1 only at even times, and hence it has period 2.
Since this is an irreducible DTMC, all states must have period 2. Thus state 2 has
period 2.
Example 4.9 Simple Random Walk. Consider the simple random walk of Exam-
ple 4.4. Since p00 = pN N = 1, states 0 and N are aperiodic. Now suppose the
DTMC starts in state 1. Clearly, DTMC cannot return to state 1 at odd times, and can
return at every even time. Hence state 1 has period 2. Since states 1, 2, · · · , N − 1
communicate with each other, all of them must have period 2.
In this section we introduce the concepts of recurrence and transience of states. They
play an important role in the study of the limiting behavior of the DTMCs. Let T̃i be
as defined in Equation 4.10. Define
ũi = P(T̃i < ∞|X0 = i) (4.11)
94 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
and
m̃i = E(T̃i |X0 = i). (4.12)
When ũi < 1, m̃i = ∞. However, as the following example suggests, m̃i can be
infinite even if ũi = 1.
Example 4.10 Success Runs. Consider the success runs Markov chain on
{0, 1, 2, · · ·} with
i+1 1
pi,i+1 = , pi,0 = , i ≥ 0.
i+2 i+2
Now, direct calculations as in Example 3.3 on page 58 show that
P(T̃0 = n|X0 = 0) = P(Xi = i, i = 1, 2, · · · , n − 1, Xn = 0|X0 = 0)
1
= , n ≥ 1.
(n + 1)(n + 2)
Hence,
ũ0 = P(T̃0 < ∞|X0 = 0)
X∞
= P(T̃0 = n|X0 = 0)
n=1
∞
X 1
= = 1.
n=1
(n + 1)(n + 2)
The last equality above is seen by writing
1 1 1
= −
(n + 1)(n + 2) n+1 n+2
and using telescoping sums. However, we get
∞
X
1 + m̃0 = (n + 1)P(T̃0 = n|X0 = 0)
n=1
∞
X n+1
=
n=1
(n + 1)(n + 2)
∞
X 1
= = ∞.
n=1
n + 2
Here, the last inequality follows because the last sum is a harmonic series, which is
known to diverge.
Now, in Section 2.5 we studied Vi (n), the number of visits to state i by a DTMC
over the finite time period {0, 1, 2, · · · , n}. Here we study Vi , the number of visits
by the DTMC to state i over the infinite time period {0, 1, 2, · · ·}. The next theorem
gives the main result.
Proof: Follows from Markov property and time homogeneity. See Conceptual Exer-
cise 4.6.
The next theorem yields the necessary and sufficient condition for recurrence and
transience.
Proof: Let Vi (n) be the number of visits to state i over 0 through n. Let Vi be the
number of visits to state i over 0 to ∞. We see that ũi = 1 implies that the DTMC,
starting from state i, returns to it with probability 1, and hence, due to the Markov
property and time homogeneity, returns infinitely often. Hence, in this case,
E(Vi |X0 = i) = ∞.
On the other hand, if ũi < 1, the DTMC, starting from state i, returns to state i
exactly k times with probability ũk−1
i (1 − ũi ). Hence
1
E(Vi |X0 = i) = < ∞.
1 − ũi
Now, from the results in Section 2.5 we have
n
(m)
X
E(Vi (n)|X0 = i) = Mii (n) = pii .
m=0
96 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
Hence
∞
(n)
X
E(Vi |X0 = i) = pii .
n=0
Proof: This theorem is a special case of a general theorem called elementary renewal
theorem, which will be proved in Chapter 8. Hence we do not include a formal proof
here.
Unlike periodicity, transience and recurrence are dependent upon the actual mag-
nitudes of the transition probabilities pij . Like periodicity, they are class properties.
This is shown in the next two theorems.
(n)
Proof: (i) Suppose i ↔ j. Then there are integers n and m such that pij > 0 and
(m)
pji > 0. Now,
∞ ∞
(r) (r+n+m)
X X
pjj ≥ pjj
r=0 r=0
∞ X
(m) (r) (n)
X
≥ pjk pkk pkj
r=0 k∈S
RECURRENCE AND TRANSIENCE 97
∞
(m) (r) (n)
X
≥ pji pii pij
r=0
∞
(m) (n) (r)
X
= pji pij pii = ∞
r=0
Now,
t t
1 X (r) 1 X (r+n+m)
lim pjj = lim pjj
t→∞ t + 1 t→∞ t + 1
r=0 r=0
n
!
(m) 1 X (r) (n)
≥ pji lim pii pij > 0.
n→∞ n + 1
r=0
The last inequality follows because the positive recurrence of i implies that the last
(n) (m)
limit is positive (from Theorem 4.5), and we also have pij > 0 and pji > 0. Hence
state j is positive recurrent.
(ii) Follows along the same lines as the proof of part (ii) in Theorem 4.6.
Theorems 4.6 and 4.7 greatly simplify the task of identifying the transient and
recurrent states. If state i is recurrent then all states that belong to the same commu-
nicating class as i must be recurrent. The same conclusion holds for the transient or
positive or null recurrent states. Hence we can make the following definitions.
Proof: Let C be a finite closed communicating class. Then for all i ∈ C, we have
X (k)
1 = P(Xk ∈ C|X0 = i) = pij , k ≥ 0.
j∈C
Hence
n
1 X X (k)
pij = 1.
n+1
k=0 j∈C
Taking limits, we get
n n
1 X X (k) X 1 X (k)
lim pij = lim pij = 1,
n→∞ n + 1 n→∞ n + 1
k=0 j∈C j∈C k=0
where the interchange of the limit and the sum is allowed since C is finite. Thus there
must be at least one j for which
n
1 X (k)
lim pij > 0. (4.13)
n→∞ n + 1
k=0
Now suppose the states in C are not positive recurrent. Then they must be all null
recurrent or all transient. In either case we must have
n
1 X (k)
lim pjj = 0, j ∈ C. (4.14)
n→∞ n + 1
k=0
This follows from Theorem 4.7(ii) for the null recurrent case, and Theorem 4.6(ii)
(r)
for the transient case. Since C is finite, there is an r = r(i, j) such that pji > 0.
Now,
(k+r) (r) (k)
pjj ≥ pji pij . (4.15)
Combining Equations 4.14 and 4.15, we get
n n
1 X (k) 1 X (k+r)
0 = lim pjj = lim pjj
n→∞ n + 1 n→∞ n + 1
k=0 k=0
n
(r) 1 X (k)
≥ pji lim pij .
n→∞ n+1
k=0
Example 4.11 Genotype Evolution. Consider the six-state DTMC of Example 4.3.
We saw in Example 4.6 that it has two closed communicating classes C1 = {1} and
C2 = {6}, and a non-closed class T = {2, 3, 4, 5}. Thus states 1 and 6 are positive
recurrent and states 2,3,4,5 are transient.
Example 4.12 Simple Random Walk. Consider the simple random walk of Ex-
ample 4.4. We saw in Example 4.9 that it has two closed communicating classes
C1 = {1} and C2 = {N }, and a non-closed class T = {2, 3, · · · , N − 1}. Thus
states 1 and N are positive recurrent and state 2, 3, · · · , N − 1 are transient.
Example 4.13 General Success Runs. Consider the general success runs DTMC of
Example 3.3 on page 58. This DTMC has state-space S = {0, 1, 2, · · ·} and transition
probabilities
pi,0 = qi , pi,i+1 = pi , i ∈ S.
Assume that pi > 0, qi > 0 for all i ≥ 0. Then the DTMC is irreducible. Thus
if we determine the transience or recurrence of the state 0, that will automatically
100 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
determine the transience and recurrence of all the states in S. From the first step
analysis we get
ũ0 = 1 − p0 v1 ,
where v1 is the probability that the DTMC never visits state 0 starting from state 1.
From the analysis in Example 3.7 on page 62 we have
∞
X
v1 = 0 ⇔ qn = ∞,
n=1
and
∞
X
v1 > 0 ⇔ qn < ∞.
n=1
It follows that the general success runs DTMC is
Thus the recurrent DTMC is positive recurrent if the last sum converges, otherwise
it is null recurrent.
Example 4.14 General Simple Random Walk. Consider the random walk on
{0, 1, 2, · · ·} with the following transition probabilities:
pi,i+1 = pi for i ≥ 0,
pi,i−1 = qi = 1 − pi for i ≥ 1.
where α0 = 1
q1 q2 · · · qi
αi = , i ≥ 1.
p1 p2 · · · pi
P∞
Thus state 0 (and hence the entire
P∞ DTMC) is recurrent if and only if i=0 αi = ∞,
and transient if and only if i=0 α i < ∞. Next, assuming the DTMC is recurrent,
we can use the first step analysis to obtain
m̃0 = 1 + p0 m1 ,
where m1 is the expected first passage time into state 0 starting from state 1. From
Example 3.15 we have
∞
X 1
m1 = .
p α
j=1 j j
Now let ρ0 = 1 and
p0 p1 · · · pi−1
ρi = , i ≥ 1.
q1 q2 · · · qi
Thus
∞
X
m̃0 = ρi .
i=0
P
Thus state 0 (and hence the whole DTMC) is positive recurrent if the series ρi
converges, and null recurrent if it diverges. Combining all these results, we get the
following complete classification: the state 0 (and hence the entire DTMC) is
P∞
(i) positive recurrent if and only if ρi < ∞,
P∞ i=0 P∞
(ii) null recurrent if and only if i=0 ρi = ∞, and i=0 αi = ∞,
P∞
(iii) transient if and only if i=0 αi < ∞.
This completes the classification of the DTMC. This makes intuitive sense since this
DTMC models the situation where exactly one item is removed from the inventory
every time period, while µ items are added to the system per period on the average.
In this section we shall derive a sufficient condition for the positive recurrence of an
irreducible DTMC {Xn , n ≥ 0} with state-space S and transition probability matrix
P . Let ν : S → [0, ∞) and, for i ∈ S, define
d(i) = E(ν(Xn+1 ) − ν(Xn )|Xn = i)
X
= pij ν(j) − ν(i).
j∈S
The function ν is called a potential function, and the quantity d(i) is called the gen-
eralized drift in state i. It is called drift when ν(i) = i for all i ∈ S. The main result,
called Foster’s criterion, is given below.
Before we prove the above result, we give the intuition behind it. Suppose the hypoth-
esis of the above theorem holds. Since ν is a non-negative function, E(ν(Xn )) ≥ 0
for all n ≥ 0. However, whenever Xn ∈ / H, we have E(ν(Xn+1 )) < E(ν(Xn )) − ǫ.
Hence the DTMC cannot stay outside of H for too long, else E(ν(Xn )) will become
negative. Hence the DTMC must enter the finite set H often enough. The theorem
says that the visits to H are sufficiently frequent to make the states in H (and hence,
due to irreducibility, the whole chain) positive recurrent. A formal proof follows.
Proof: Define X
w(k) = pkj ν(j), k ∈ H.
j∈S
Then, from Equation 4.16, |w(k) − ν(k)| < ∞ for all k ∈ H. Hence w(k) < ∞ for
all k ∈ H. Now let
(0)
yi = ν(i),
and
(n)
X (n)
yi = pij ν(j).
j∈S
Then
(r+1)
X
yi (r + 1) = pij ν(j)
j∈S
(r)
XX
= pik pkj ν(j)
j∈S k∈S
(r) (r)
X X X X
= pik pkj ν(j) + pik pkj ν(j)
k∈H j∈S k/
∈H j∈S
104 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
X (r) X (r)
≤ pik w(k) + pik (ν(k) − ǫ)
k∈H k/
∈H
(r) (r)
X X
≤ pik (w(k) + ǫ) + pik ν(k) − ǫ (since ν(k) ≥ 0 for all k)
k∈H k∈S
(r) (r)
X
≤ pik (w(k) + ǫ) + yi − ǫ.
k∈H
Since 0 ≤ w(k) < ∞ for all k ∈ H, the above inequality implies that there exists a
k ∈ H for which
n
1 X (r)
lim pik > 0. (4.18)
n→∞ n + 1
r=0
Using the fact that k → i, and arguments similar to those in Theorem 4.7, we can
show that
n
1 X (r)
lim pkk > 0. (4.19)
n→∞ n + 1
r=0
(See conceptual Exercise 4.12.) Hence state k is recurrent, as a consequence of part
(i) of Theorem 4.5. Since the DTMC is assumed irreducible, all the states are positive
recurrent.
Foster’s criterion is especially easy to apply since checking Equations 4.16 and
4.17 is a simple task. In many applications S = {0, 1, 2, · · ·} and usually ν(i) = i
suffices to derive useful sufficient conditions for positive recurrence. In this case we
have the following result, called Pakes’ lemma, whose proof we leave to the reader,
see Conceptual Exercise 4.13.
Theorem 4.11 is so intuitive that it is tempting to assume that we will get a suf-
ficient condition for transience if we reverse the inequality in condition (ii) of that
Theorem. Unfortunately, this turns out not to be true. However, we do get the follow-
ing more restricted result, which we state without proof:
We end this section with the remark that the recurrence and transience properties
of an infinite state DTMC are dependent on the actual magnitudes on the elements
of P , and not just whether they are zero or one. Using the concepts developed in this
section we shall study the limiting behavior of DTMCs in the next section.
In this section we derive the main results regarding the limiting distribution of an
irreducible DTMC {Xn , n ≥ 0} on state-space S = {0, 1, 2 · · ·} and transition
probability matrix P . We treat the four cases separately: the DTMC is transient, null
recurrent, aperiodic positive recurrent, and periodic positive recurrent.
We begin the study of the convergence results for recurrent DTMC with the discrete
renewal theorem. This is the discrete analog of a general theorem called the Key
Renewal Theorem, which is presented in Chapter 8. We first give the main result.
LIMITING BEHAVIOR OF IRREDUCIBLE DTMCS 107
Theorem 4.14 Discrete Renewal Theorem. Let {un , n ≥ 1} be a sequence of real
numbers with
X∞
un ≥ 0, un = 1,
n=1
P∞
and let µ = n=1 nun . Let d, called the period, be the largest integer such that
∞
X
und = 1.
n=1
Proof: (i). The hard part is to prove that the limit exists. We refer the reader to
Karlin and Taylor (1975) or Kohlas (1982) for the details. Here we assume that the
limit exists and show that it is given as stated in Equation 4.25.
First, it is possible to show by induction that
n
X
|gn | ≤ |νk |, n ≥ 0.
k=0
Since {u′r , r ≥ 1} has period one, we can use Equation 4.25 to get
∞
1 X ′
lim gn′ = ν .
n→∞ µ′ n=0 n
(n)
Theorem 4.15 Discrete Renewal Equation for pjj . Fix a j ∈ S. Let
Proof: We have
(0)
g0 = pjj = 1 = ν0 .
For n ≥ 1, conditioning on T̃j ,
(n)
gn = pjj = P(Xn = j|X0 = j)
n
X
= P(Xn = j|X0 = j, T̃j = m)P(T̃j = m)
m=1
Xn
= P(Xn = j|X0 = j, Xr 6= j, 1 ≤ r ≤ m − 1, Xm = j)ũj (m)
m=1
Xn
= um P(Xn = j|Xm = j)
m=1
Xn
= um gn−m ,
m=1
where we have used time homogeneity to get the last equality and Markov property
to get the one before that. This proves the theorem.
Using the above theorem we get the next important result.
and
∞
X
|νn | = ν0 = 1.
n=0
We also have
∞
X
µ= νn = E(T̃j |X0 = j) = m̃j .
n=1
Now suppose state j is aperiodic. Hence d = 1, and we can apply Theorem 4.14 part
(i). We get
∞
(n) 1X
lim pjj = νn = 1/m̃j .
n→∞ µ n=0
This yields part (i). Part (ii) follows similarly from part (ii) of Theorem 4.14.
Theorem 4.17 The Null Recurrent DTMC. For an irreducible null recurrent
DTMC
(n)
lim p = 0.
n→∞ ij
Following the proof of Theorem 4.13 and using the assumption of irreducibility we
can show that
(n)
lim pij = 0, i, j ∈ S.
n→∞
Since the DTMC is recurrent each state is visited infinitely often. Hence, in contrast
to the transient case,
P(Xn ∈ A infinitely often) = 1.
Thus a null DTMC will visit every finite set infinitely often over the infinite horizon,
even though the limiting probability that the DTMC is in the set A is zero. This non-
intuitive behavior is the result of the fact that although each state is visited infinitely
often, the expected time between two consecutive visits to the state is infinite.
Theorem 4.18 The Positive Recurrent Aperiodic DTMC. For an irreducible pos-
itive recurrent DTMC
(n)
lim p = πj > 0, i, j ∈ S, (4.27)
n→∞ ij
where {πj , j ∈ S} are given by the unique solution to
X
πj = πi pij , j ∈ S, (4.28)
i∈S
X
πj = 1. (4.29)
j∈S
Proof: We have already seen that Equation 4.27 holds when i = j with πj =
1/m̃j > 0. Hence assume i 6= j. Following the proof of Theorem 4.15 we get
n
(n) (n−m)
X
pij = um pjj , n ≥ 0,
m=1
where
um = P(T̃j = m|X0 = i).
112 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
Since i ↔ j, it follows that
∞
X
um = P(Xn = j for some n ≥ 0 |X0 = i) = 1.
m=1
Now let 0 < ǫ < 1 be given. Thus it is possible to pick an N such that
∞
X
um ≤ ǫ/2,
m=N +1
and
(n)
|pjj − πj | ≤ ǫ/2, for all n ≥ N.
Then, for n ≥ 2N , we get
n
(n) (n−m)
X
|pij − πj | = | um pjj − πj |
m=1
n−N n ∞
(n−m) (n−m)
X X X
= | um (pjj − πj ) + um (pjj − πj ) − um πj |
m=1 m=n−N +1 m=n+1
n−N n ∞
(n−m) (n−m)
X X X
≤ um |pjj − πj | + um pjj + um πj
m=1 m=n−N +1 m=n−N +1
n−N
X n
X ∞
X
≤ um ǫ/2 + um + um
m=1 m=n−N +1 m=n+1
≤ ǫ/2 + ǫ/2 ≤ ǫ.
(n)
This proves Equation 4.27. Next we derive Equations 4.28 and 4.29. Now let aj =
P(Xn = j). Then Equation 4.27 implies
(n)
lim a = πj , j ∈ S.
n→∞ j
Let m → ∞ on both sides. The interchange of the limit and the sum on the right
hand side is justified due to bounded convergence theorem. Hence we get
(n)
X
πj = πi pij .
i∈S
Equation 4.28 results from the above by setting n = 1. Now let n → ∞. Again,
bounded convergence theorem can be used to interchange the sum and the limit on
the right hand side to get !
X
πj = πi πj ,
i∈S
LIMITING BEHAVIOR OF IRREDUCIBLE DTMCS 113
P
but πj > 0. Hence we must have πi = 1, yielding Equation 4.29.
Now suppose {πi′ , i ∈ S} is another solution to Equations 4.28 and 4.29. Using the
same steps as before we get
(n)
X
πj′ = πi′ pij , n ≥ 0.
i∈S
Letting n → ∞ we get !
X
πj′ = πi′ πj = πj .
i∈S
Equation 4.28 is called the balance equation, since it balances the probability of
entering a state with the probability of exiting a state. Equation 4.29 is called the
normalizing equation, for obvious reasons. The solution {πj , j ∈ S} satisfying bal-
ance and the normalizing equations is called the limiting distribution, since it is the
limit of the distribution of Xn as n → ∞. It is also called the steady state distribu-
tion. It should be noted that it is the state-distribution that is steady, not the state itself.
(nd)
lim p = d/m̃j = dπj > 0, j ∈ S. (4.30)
n→∞ jj
Now let
αij (r) = P(T̃j (mod d) = r|X0 = i), 0 ≤ r ≤ d − 1
X∞
= P(T̃j = nd + r|X0 = i).
n=0
(n)
The next theorem shows that pij does not have a limit, but does have d convergent
subsequences.
Proof: Follows along the same lines as that of Theorem 4.18 by writing
n
(nd+r) (n−k)d
X
pij = P(T̃j = kd + r|X0 = i)pjj
k=0
(This just says that the Cesaro limit agrees with the usual limit when the latter exists.
See Marsden (1974).) Using this we get, for 0 ≤ m ≤ d − 1,
nd+m d−1 n−1
1 X (k) 1X d X (kd+r)
lim pij = lim pij
n→∞ nd + m d r=0 n→∞ nd + m
k=0 k=0
LIMITING BEHAVIOR OF IRREDUCIBLE DTMCS 115
m
1 X (nd+r)
+ lim pij
n→∞ nd + m r=0
d−1
1 X
= dπj αij (r) = πj .
d r=0
The Equation 4.32 follows from this.
(ii) We have
X Mij(n) X n
1 X (k)
pjm = pij pjm
n+1 n+1
j∈S j∈S k=0
n
1 X (k+1)
= pim
n+1
k=0
n+1
n + 2 1 X (k) 1 (0)
= pim − p
n+1n+2 n + 1 im
k=0
(n+1)
n+2 Mij 1 (0)
= − p .
n+1 n+2 n + 1 im
Letting n → ∞ on both sides of the above equation we get
X
πj pjm = πm ,
j∈S
Since πm > 0 this implies Equation 4.34. Uniqueness can be proved in a manner
similar to the proof of Theorem 4.18.
The next theorem provides a necessary and sufficient condition for positive recur-
rence. It also provides a sort of converse to Theorems 4.18 and 4.20.
116 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
Theorem 4.21 Positive Recurrence. Let {Xn , n ≥ 0} be an irreducible DTMC. It
is positive recurrent if and only if there is a non-negative solution to
X
πj = πi pij , j ∈ S, (4.35)
i∈S
X
πj = 1. (4.36)
j∈S
Proof: Theorems 4.18 and 4.20 give the “if” part. Here we prove the “only if” part.
Suppose {Xn , n ≥ 0} is an irreducible DTMC that is either null recurrent or tran-
sient. Suppose there is a non-negative solution to Equation 4.35. Then following the
same argument as before, we have
(n)
X
πj = πi pij , n ≥ 1.
i∈S
Here the interchange of the limits and the sum is justified by dominated convergence
theorem. Thus the solution cannot satisfy Equation 4.36. This proves the Theorem.
Uniqueness follows from the uniqueness of the limiting distribution of the positive
recurrent DTMCs.
The above theorem is very useful. For irreducible DTMCs it allows us to directly
solve Equations 4.35 and 4.36 without first checking for positive recurrence. If we
can solve these equations, the DTMC is positive recurrent. Note that we do not insist
on aperiodicity in this result. However, the interpretation of the solution {πj , j ∈ S}
depends upon whether the DTMC is aperiodic or periodic. In an aperiodic DTMC
there are three possible interpretations:
When the DTMC is periodic, the second and the third interpretations continue to
hold, where as the first one fails, since a periodic DTMC does not have a limiting
distribution.
4.5.8 Examples
Example 4.19 Three-State DTMC. Consider the three state DTMC of Exam-
ple 4.2. The DTMC is periodic with period 2. Equations 4.35 and 4.36 yield:
π1 = qπ2
π2 = π1 + π3
π3 = pπ2
π1 + π2 + π3 = 1.
These have a unique solution given by
π1 = q/2, π2 = 1/2, π3 = p/2.
The above represents the stationary distribution, and the limiting occupancy distri-
bution of the DTMC. Since the DTMC is periodic, it has no limiting distribution.
These results are consistent with the results of Example 4.2. Using the matrix nota-
tion α(r) = [αij (r)], we see that
1 0 1 0 1 0
α(0) = 0 1 0 , α(1) = 1 0 1 .
1 0 1 0 1 0
Using this we can verify that Theorem 4.20 produces the results that are consistent
with Equations 4.2 and 4.3.
Example 4.20 Genotype Evolution. Consider the six state DTMC of Example 4.3.
This a reducible DTMC, and one can see that Equations 4.35 and 4.36 have an infinite
number of solutions. We shall deal with the reducible DTMCs in Section 4.7.
It should be clear by now that the study of the limiting behavior of irreducible
positive recurrent DTMCs involves solving the balance equations and the normal-
izing equation. If the state-space is finite and the transition probabilities are given
118 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
numerically, one can use standard numerical procedures to solve them. When the
DTMC has infinite state-space or the transition probabilities are given algebraically,
numerical methods cannot be used. In such cases one obtains the solution by analyt-
ical methods. The analytical methods work only if the transition probabilities have a
special structure. We shall describe various examples of analytical examples in the
next section.
Example 4.21 Success Runs. Consider the success runs DTMC of Example 2.15
on page 19 with transition probabilities
pi,0 = qi , pi,i+1 = pi , i = 0, 1, 2, · · · .
We assume that pi > 0 and qi > 0 for all i ≥ 0, making the DTMC positive recurrent
and aperiodic. The balance equations yield
πi+1 = pi πi , i ≥ 0.
Solving the above equation recursively yields
πi = ρi π0 , i ≥ 0,
where
ρ0 = 1, ρi = p0 p1 · · · pi−1 , i ≥ 1.
The normalizing equation yields
∞ ∞
!
X X
1= πi = π0 ρi .
i=0 i=0
P
Thus, if ρi converges, we have
∞
!−1
X
π0 = ρi .
i=0
This is also the stationary distribution and the limiting occupancy distribution of the
DTMC.
Example 4.22 General Simple Random Walk. Now consider the general simple
random walk with the following transition probabilities:
pi,i+1 = pi for i ≥ 0,
pi,i−1 = qi = 1 − pi for i ≥ 1,
p00 = 1 − p0 .
Assume that 0 < pi < 1 for all i ≥ 0, so that the DTMC is recurrent and aperiodic.
The balance equations for this DTMC are:
π0 = (1 − p0 )π0 + q1 π1 ,
πj = pj−1 πj−1 + qj+1 πj+1 , j ≥ 1.
It is relatively straightforward to prove by induction that the solution is given by (see
Conceptual Exercise 4.16).
πi = ρi π0 , i ≥ 0, (4.40)
where
p0 p1 · · · pi−1
ρ0 = 1, ρi = , i ≥ 1. (4.41)
q1 q2 · · · qi
The normalizing equation yields
∞ ∞
!
X X
1= πi = π0 ρi .
i=0 i=0
P
Thus, if ρi converges, we have
∞
!−1
X
π0 = ρi .
i=0
120 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
Thus, from Theorem 4.20, P we see that the general simple random walk is positive
recurrent if and only if ρi converges. This is consistent with the results of Ex-
ample 4.14. When the DTMC is positive recurrent, the limiting distribution is given
by
ρi
πi = P∞ , i ≥ 0. (4.42)
j=0 ρj
This is also the stationary distribution and the limiting occupancy distribution of the
DTMC. Note that the DTMC is periodic with period 2 if p0 = 1. In this case the
expressions for πi remain valid, but now {πi , i ≥ 0} is not a limiting distribution.
Special case 2. Suppose pi = p for all i ≥ 0, and 0 < p < 1. In this case the
DTMC is positive recurrent and aperiodic, and we have
ρi = ρi , i ≥ 0,
P
where ρ = p/q. Hence ρi converges if p < q and diverges if p ≥ q. Combining
this with results from Example 4.14, we see that the space homogeneous simple
random walk is
(i) positive recurrent if p < q,
(ii) null recurrent if p = q,
(iii) transient p > q.
In case p < q, the limiting distribution is given by
πi = ρi (1 − ρ), i ≥ 0.
Thus, in the limit, Xn is a modified geometric random variable with parameter
1 − ρ.
These equations look the same as Equation 3.20 on page 67. Using the results of
Example 3.11, we see that the solution to the balance equations is given by
πi = cρi , i ≥ 0,
where c > 0 is a constant, and ρ satisfies
∞
X
ρ = ψ(ρ) = αk ρk . (4.45)
k=0
From Example 3.10, we know that there is a solution to the above equation in (0,1)
if an only if µ > 1. This is then the necessary and sufficient condition for positive
recurrence. This is consistent with the results of Example 4.16. When the DTMC is
positive recurrent (i.e., there is a unique ρ < 1 satisfying the above equation), the
normalizing equation yields
∞ ∞
X X c
1= πi = c ρi = .
i=0 i=0
1−ρ
This yields the limiting distribution as
πi = (1 − ρ)ρi , i ≥ 0. (4.46)
Thus, in the limit, Xn is a modified geometric random variable with parameter
1 − ρ.
Let T (r) be the first passage time to visit the set Cr , i.e.,
T (r) = min{n ≥ 0 : Xn ∈ Cr }, 1 ≤ r ≤ k.
Let
ui (r) = P(T (r) < ∞|X0 = i), 1 ≤ r ≤ k, i ∈ T. (4.48)
The next theorem gives a method of computing the above probabilities.
Proof: This proof is similar to the proof of Theorem 3.2 on page 60. See also Con-
ceptual Exercise 3.1.
Using the quantities {ui (r), i ∈ T, 1 ≤ r ≤ k} we can describe the limiting
distribution of Dn as n → ∞. This is done in the theorem below.
(iii) If Cr is positive recurrent and periodic, Dn (i, j) does not have a limit. However,
n
1 X
lim Dm (i, j) = ui (r)πj ,
n→∞ n + 1
m=0
Proof: (i) If ui (r) = 0, then Dn (i, j) = 0 for all n ≥ 1, and hence Equation 4.50
follows. If ui (r) > 0, then it is possible to go from i to any state j ∈ Cr , i.e., there is
(m)
an m ≥ 1 such that pij = Dm (i, j) > 0. Since state j is null recurrent or transient,
(n)
pjj → 0 as n → ∞. The proof follows along the same lines as that of Theorem 4.13.
(ii) Let i ∈ T and j ∈ Cr be fixed. Following Theorem 4.18 let
T̃j = min{n > 0 : Xn = j},
and
um = P(T̃j = m|X0 = i).
Then, since Cr is a closed recurrent class T̃j < ∞ if and only if T (r) < ∞. Hence
∞
X
um = αi (r).
m=1
and
(m)
|pjj − πj | ≤ ǫ/2
for all m ≥ N . The rest of the proof is similar to that of Theorem 4.18.
(iii) Similar to the proof of Theorem 4.20.
We illustrate with the help of several examples.
LIMITING BEHAVIOR OF REDUCIBLE DTMCS 125
Example 4.25 Genotype Evolution Model. Consider the six-state DTMC of the
genotype evolution model with the transition probability matrix as given in Equa-
tion 2.13 on page 24. From Example 4.6 we see that this is a reducible DTMC
with two closed communicating classes C1 = {1}, C2 = {6}, and the open class
T = {2, 3, 4, 5}. We also have
P (1) = [1], P (2) = [1],
1/2 0 1/4 0
0 0 1 0
Q=
,
1/4 1/8 1/4 1/4
0 0 1/4 1/2
and
1/4 0
0 0
1/16 1/16 .
D=
0 1/4
Remember that the rows of D are indexed {2, 3, 4, 5} and the columns are indexed
{1, 6}.
We have
P (1)n → [1], P (2)n → [1],
0 0 0 0
n
0 0 0 0
Q → .
0 0 0 0
0 0 0 0
Equations for {ui (1), i ∈ T } are given by
u2 (1) = .25 + .5u2 (1) + .25u4 (1)
u3 (1) = u4 (1)
u4 (1) = .25u2 (1) + .125u3 (1) + .25u4 (1) + .25u5(1)
u5 (1) = .25u4 (1) + .5u5 (1).
The solution is given by
[u2 (1), u3 (1), u4 (1), u5 (1)]′ = [.75, .5, .5, .25]′ .
Similar calculations yield
[u2 (2), u3 (2), u4 (2), u5 (2)]′ = [.25, .5, .5, .75]′ .
Hence we get
3/4 1/4
1/2 1/2
Dn →
1/2
,
1/2
1/4 3/4
126 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
and
1 0 0 0 0 0
3/4 0 0 0 0 1/4
1/2 0 0 0 0 1/2
lim P (n)
= .
n→∞
1/2 0 0 0 0 1/2
1/4 0 0 0 0 3/4
0 0 0 0 0 1
This matches the result in Example 4.6.
Suppose the costs are discounted at rate α, where 0 ≤ α < 1 is a fixed discount
factor. Thus if the system incurs a cost of c at time n, its present value at time 0 is
αn c, i.e., it is equivalent to incurring a cost of αn c at time zero. Let C be the total
discounted cost over the infinite horizon, i.e.,
∞
X
C= αn c(Xn ).
n=0
Let φ(i) be the expected total discounted cost (ETDC) incurred over the infinite
horizon starting with X0 = i. That is,
φ(i) = E(C|X0 = i).
The next theorem gives the main result regarding the ETDC. We introduce the fol-
lowing column vectors
c = [c(i)]i∈S , φ = [φ(i)]i∈S .
Proof: Let C1 be the total discounted cost incurred over {1, 2, · · ·} discounted
back to time 0, i.e.,
X∞
C1 = αn c(Xn ).
n=1
DTMCS WITH COSTS AND REWARDS 127
From time-homogeneity it is clear that
E(C1 |X1 = j) = αφ(j), j ∈ S.
Using the first step analysis we get
X
E(C1 |X0 = i) = pij E(C1 |X0 = i, X1 = j)
j∈S
X
= α pij φ(j).
j∈S
Hence,
φ(i) = E(C|X0 = i)
= E(c(X0 ) + C1 |X0 = i)
X
= c(i) + α pij φ(j).
j∈S
The discounted costs have the disadvantage that they depend upon the discount
factor and the initial state, thus making decision making more complicated. These
issues are addressed by considering the long run cost per period, called the aver-
agePcost. The expected total cost up to time N , starting from state i, is given by
E( N n=0 c(Xn )|X0 = i). Dividing it by N + 1 gives the cost per period. Hence the
long run expected cost per period is given by:
N
!
1 X
g(i) = lim E c(Xn )|X0 = i ,
N →∞ N + 1
n=0
assuming that the above limit exists. To keep the analysis simple, we will assume that
the DTMC is irreducible and positive recurrent with limiting occupancy distribution
given by {πj , j ∈ S}.
P Intuitively, it makes sense that the long run cost per period
should be given by πj c(j), independent of the initial state i. This intuition is
formally proved in the next theorem:
Then X
g(i) = g = πj c(j).
j∈S
(N )
Proof: Let Mij be the expected number of visits to state j over {0, 1, 2, · · · , N }
starting from state i. See Section 2.5. Then, we see that
1 X (N )
g(i) = lim Mij c(j)
N →∞ N + 1
j∈S
X Mij(N )
= lim c(j)
N →∞ N +1
j∈S
REVERSIBILITY 129
(N )
X Mij
= lim c(j)
N →∞ N +1
j∈S
X
= πj c(j).
j∈S
Here the last interchange of sum and the limit is allowed because the DTMC is posi-
tive recurrent. The last equality follows from Theorem 4.20.
We illustrate with an example.
Example 4.27 Brand Switching. Consider the model of brand switching as de-
scribed in Example 2.14, where a customer chooses among three brands of beer, say
A, B, and C, every week when he buys a six-pack. Let Xn be the brand he purchases
in week n. We assume that {Xn , n ≥ 0} is a DTMC with state-space S = {A, B, C}
and transition probability matrix given below:
0.1 0.2 0.7
P = 0.2 0.4 0.4 . (4.53)
0.1 0.3 0.6
Now suppose a six pack costs $6.00 for brand A, $5.00 for brand B, and $4 for brand
C. What is the weekly expenditure on beer by the customer in the long run?
We have
c(A) = 6, c(B) = 5, c(C) = 4.
Also solving the balance and normalizing equations we get
πA = 0.132, π2 = 0.319, πC = 0.549.
Hence the long run cost per week is
g = 6πA + 5πB + 4πC = 4.583.
Thus the customer spends $4.58 per week on beer.
It is possible to use the results in Section 4.7 to extend this analysis to reducible
DTMCs. However, the long run cost rate may depend upon the initial state in that
case.
4.9 Reversibility
In this section we study a special class of DMTCs called the reversible DTMCs.
Intuitively, if we watch a movie of a reversible DTMC we will not be able to tell
whether the time is running forward or backward. Thus, the probability of traversing
a cycle of r + 1 states i0 → i1 → i2 → · · · → ir−1 → ir → i0 is the same as
traversing it in reverse order i0 → ir → ir−1 → · · · → i2 → i1 → i0 . We make this
more precise in the definition below.
130 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
We begin with the definition:
The next theorem gives a simple necessary and sufficient condition for reversibil-
ity.
Proof: Suppose the DTMC is irreducible and positive recurrent, and Equation 4.55
holds. Consider a cycle of states i0 , i1 , · · · , ir , i0 . Then we get
πi0 pi0 ,i1 pi1 ,i2 · · · pir−1 ,ir pir ,i0 = pi1 ,i0 πi1 pi1 ,i2 · · · pir−1 ,ir pir ,i0
..
.
= pi1 ,i0 pi2 ,i1 · · · pir ,ir−1 pi0 ,ir πi0 .
Since the DTMC is positive recurrent, πi0 > 0. Hence the above equation implies
Equation 4.54.
To prove necessity, suppose Equation 4.54 holds. Summing over paths of length r
from i1 to i0 we get
(r) (r)
pi0 ,i1 pi1 ,i0 = pi0 ,i1 pi1 ,i0 .
Now sum over r = 0 to n, and divide by n + 1 to get
n n
! !
1 X (r) 1 X (r)
pi0 ,i1 p = p pi1 ,i0 ,
n + 1 r=0 i1 ,i0 n + 1 r=0 i0 ,i1
or
(n) (n)
Mi1 ,i0 Mi0 ,i1
pi0 ,i1
= pi ,i .
n+1 n+1 1 0
Now let n → ∞. Since the DTMC is irreducible and positive recurrent, we can use
Theorem 4.20 to get
pi0 ,i1 πi0 = pi1 ,i0 πi1 ,
which is Equation 4.55.
The Equations 4.55 are called the local balance or detailed balance equations, as
opposed to Equations 4.35, which are called global balance equations. Intuitively, the
local balance equations say that, in steady state, the expected number of transitions
from state i to j per period is the same as the expected number of transitions per
period from j to i. This is in contrast to stationary DTMCs that are not reversible:
REVERSIBILITY 131
for such DTMCs the global balance equations imply that the expected number of
transitions out of a state over any time period is the same as the expected number
of transitions into that state over the same time period. It can be shown that the
local balance equations imply global balance equations, but not the other way. See
Conceptual Exercise 4.21.
Example 4.29 General Simple Random Walk. Consider a positive recurrent gen-
eral simple random walk as described in Example 4.22. Show that it is a reversible
DTMC.
From Example 4.22 we see that the stationary distribution is given by
πi = ρi π0 , i ≥ 0,
where
p0 p1 · · · pi−1
ρ0 = 1, ρi = , i ≥ 1.
q1 q2 · · · qi
Hence we have
p0 p1 · · · pi−1 p0 p1 · · · pi−1 pi
πi pi,i+1 = π0 pi = π0 qi+1 = πi+1 pi+1,i .
q1 q2 · · · qi q1 q2 · · · qi+1
Since the only transitions in a simple random walk are from i to i + 1, and i to i − 1,
we see that Equations 4.55 are satisfied. Hence the DTMC is reversible.
Reversible DTMCs are nice because it is particularly easy to compute their station-
ary distribution. A large class of reversible DTMCs are the tree DTMCs, of which
the general simple random walk is a special case. See Conceptual Exercise 4.23 for
details.
We end this section with an interesting result about the eigenvalues of a reversible
DTMC with finite state space.
4.1 Numerically study the limiting behavior of the marginal distribution and the
occupancy distribution in the brand switching example of Example 2.14 on page 18,
by studying P n and M (n) /(n + 1) for n = 1 to 20.
4.2 Study the limiting behavior of the two-state DTMC of Example 2.3 if
(i) α + β = 0,
(ii) α + β = 2.
4.3 Numerically study the limiting behavior of the random walk on {0, 1, · · · , N }
with
p0,1 = p0,0 = pN,N = pN,N −1 = 1/2, pi,i+1 = pi,i−1 = 1/2, 1 ≤ i ≤ N − 1,
by studying P n and M (n) /(n + 1) for n = 1 to 20, and N = 1 to 5. What is your
guess for the limiting values a general N ?
4.4 Establish the conditions of transience, null recurrence, and positive recurrence
for the space-nonhomogeneous simple random walk on {0, 1, 2, · · ·} with the follow-
ing transition probabilities:
pi,i+1 = pi for i ≥ 0,
pi,i−1 = qi for i ≥ 1,
pi,i = ri , for i ≥ 0.
Assume that 0 < pi < 1 − ri for all i ≥ 0.
COMPUTATIONAL EXERCISES 133
4.5 Establish the conditions of transience, null recurrence, and positive recurrence
for the simple random walk on {0, 1, 2, · · ·} with reflecting boundary at zero, i.e.,
pi,i+1 = p for i ≥ 1,
pi,i−1 = q for i ≥ 1,
p0,1 = 1.
Assume that 0 < p < p + q < 1.
4.6 Consider the DTMC of example 4.15. Suppose it is positive recurrent. Show
that the limiting distribution {πj , j ≥ 0} can be recursively calculated by
π0 = 1 − µ,
1 − α0
π1 = π0 ,
α0
1 − α0 − α1
π2 = (π0 + π1 ),
α0
j
!
1 − ji=0 αi X
P
πj+1 = πi
α0 i=0
j j
X X αk
+ πi , j ≥ 2.
i=2
α0
k=j−i+2
4.7 Study the limiting behavior of the random walk in Computational Exercise 4.5,
assuming it is positive recurrent.
4.8 Study the limiting behavior of the random walk in Computational Exercise 4.4,
assuming it is positive recurrent.
4.9 Classify the states of the brand switching model of Example 4.27. That is, iden-
tify the communicating classes, and state whether they are transient, null recurrent,
or positive recurrent, and whether they are periodic or aperiodic.
0 0 .5 .5 0 0 1 0
4.11 Compute the limits of M (n) /(n + 1) as n → ∞ for the P matrices in Compu-
tational Exercise 4.10, using the method of diagonalization to compute the P n .
134 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
4.12 Do a complete classification of the states of the DTMC in
4.13 Do a complete classification of the states of the DTMCs with the following
transition probability matrices:
.5 .5 0 0 .2 .8 0 0
0 .5 .5 0 .6 .4 0 0
(a) 0 0 .5
, (b) .
.5 .2 .3 .5 0
0 0 0 1 0 0 0 1
4.15 Classify the following success runs DTMCs as transient, null recurrent, or pos-
itive recurrent
4.16 State whether the following simple random walks are transient, null recurrent,
or positive recurrent.
4.18 Use Pakes’ lemma to derive a sufficient condition for the positive recurrence
of the DTMC in Example 2.17 on page 20.
COMPUTATIONAL EXERCISES 135
4.19 Show that the discrete time queue of Example 2.12 is positive recurrent if
p < q, null recurrent if p = q, and transient if p > q.
4.21 Compute the limiting distributions and the limiting occupancy distributions of
the DTMCs in Computational Exercise 4.10.
4.22 Is the DTMC in Computational Exercise 3.9 positive recurrent? If so, compute
its limiting distribution.
where
∞
X ∞
X
A(z) = z j αj , B(z) = z j βj ,
j=0 j=0
and
1−µ
π0 = .
1−µ+ν
4.25 Compute the long run fraction of patients who receive drug i if we follow the
play the winner policy of Modeling Exercise 2.13.
4.28 Let Xn be the sum of the first n outcomes of tossing a six-sided fair die re-
peatedly and independently. Compute
lim P(Xn is dividible by 7).
n→∞
4.30 Consider the discrete time queue of Example 2.12 on page 17. Assume that the
queue is positive recurrent and compute its limiting distribution.
4.31 Compute the limiting distribution of the number of white balls in urn A in the
urn model described in Example 2.13 on page 18.
4.32 Consider the Modeling Exercise 2.18 on page 45. Assume that u, the expected
up time and d, the expected down time are finite.
4.33 Consider the Modeling Exercise 2.1 on page 42. Assume that τ , the expected
lifetime of each light bulb, is finite. Show that the DTMC is positive recurrent and
compute its limiting distribution.
4.34 Consider the Modeling Exercise 2.19 on page 46. Let µ = E(Yn ), and
v =Var(Yn ). Suppose X0 = 0 with probability 1. Compute µn = E(Xn ) as a func-
tion of n.
4.35 Show that the DTMC in Modeling Exercise 2.19 on page 46 is positive recur-
rent if µ < ∞. Let ψ(z) be the g.f. of Yn and φn (z) be the g.f. of Xn . Compute φ(z),
the generating function of the limiting distribution of Xn .
4.37 Consider the Modeling Exercise 2.20 on page 46. Compute the long run frac-
tion of the time the audio quality is impaired.
(i) the long run fraction of the time both machine are working,
(ii) the average number of assemblies shipped per period,
(iii) the fraction of the time the i-th machine is off.
4.39 Consider the DTMC in the Modeling Exercise 2.24 on page 47. Let τi be the
mean lifetime of the machine from vendor i, i = 1, 2. Assume τi < ∞. Show that the
DTMC is positive recurrent and compute the long run fraction of the time machine
from vendor 1 is in use.
4.40 Compute the limiting distribution of the DTMC in the Modeling Exercise 2.27
on page 48.
4.41 Compute the limiting distribution of the DTMC in the Modeling Exercise 2.28
on page 48.
4.42 Compute the limiting distribution of the DTMC in the Modeling Exercise 2.6
on page 43.
4.44 Let P be the transition probability matrix of the gambler’s ruin DTMC in the
Example 2.11 on page 17. Compute limn→∞ P n .
4.45 Let {Xn , n ≥ 0} be the DTMC of Modeling Exercise 2.33. Suppose it costs
C1 dollars to replace a failed light bulb, and C2 dollars to replace a working light
bulb. Compute the long run replacement cost of this replacement policy.
4.46 A machine produces two items per day. Each item is non-defective with prob-
ability p, the quality of the successive items being independent. Defective items are
thrown away immediately, and the non-defective items are stored to satisfy the de-
mand of one item per day. Any demand that cannot be satisfied immediately is lost.
138 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
Let Xn be the number of items in storage at the beginning of each day (before the
production and demand for that day is taken into account). Show that {Xn , n ≥ 0} is
a DTMC. When is it positive recurrent? Compute its limiting distribution when it
exists.
4.47 Suppose it costs $c to store an item for one day and $d for every demand that
is lost. Compute the long run cost per day of operating the facility in Computational
Exercise 4.46 assuming it is stable.
4.48 Consider the production facility of Computational Exercise 4.46. Suppose the
following operating procedure is followed. The machine is turned off as soon the
inventory (the items in storage) reaches K, a fixed positive integer. It then remains
off until the inventory reduces to k, another fixed positive integer less than K, at
which point it is turned on again. Model this by an appropriate DTMC and compute
4.52 Price of an item fluctuates between two levels: high ($H) and low ($L). Let
Yn be the price of the item at the beginning of day n. Suppose {Yn , n ≥ 0} is a
DTMC with state space {0, 1}, where state 0 is low, and state 1 is high. The transition
probability matrix is
1−α α
.
β 1−β
A production facility consumes one such item at the beginning of each day and has
a storage capacity of K. The production manager uses the following procurement
CONCEPTUAL EXERCISES 139
policy: If the price of the item at the beginning of day n is low, then procure enough
items instantly to bring the number of items in stock to K. If the price level is high,
and there is at least one item in stock, do nothing; else procure one item instantly. Let
Xn be the number of items in stock at the beginning of day n. Suppose the inventory
costs are charged at the rate of $h per item in stock at the beginning of a day per
day. (Assume the following sequence of events: at the beginning of the nth day, we
first consume an item, then observe the new price level Yn , then procure the quantity
determined by the above rule, then observe the new stock level Xn .)
1. Compute the long run procurement plus holding cost per day of following this
policy.
2. Suppose the price is known to be $25 on 20% of the days, and $15 for 80% of
the days on the average. The price cycle lasts on the average 25 days. The holding
cost is $.50 per item per day. Plot or tabulate the long run average cost per day as
a function of K. What is value of K that minimizes the cost rate?
(i) At the beginning, when the play program is initiated, the system waits until the
buffer has a fixed number b of bytes, and then starts playing. Let T be the first
time either the buffer becomes empty, or some bytes are lost. A song of K bytes
plays flawlessly if T > K. Show how to compute the probability that the song
of K bytes plays flawlessly.
(ii) Consider the following parameters:
B = 100, α0 = .2, α1 = .5, K = 512.
What value of b should be used to maximize the probability that this song is
played flawlessly? What is this maximum probability?
(i) S = set of all humans, R(x, y) = 1 if and only if x and y are blood relatives,
(ii) S = set of all humans, R(x, y) = 1 if and only if x is a brother of y,
(iii) S = set of all students at a university, R(x, y) = 1 if x and y are attending at
east one class together,
(iv) S = set of all polynomials, R(x, y) = 1 if the degree of the product of x and y
is 10.
4.2 Let C1 and C2 be two communicating classes of a DTMC. Show that either
C1 = C2 or C1 ∩ C2 = φ.
4.4 For the DTMC in Conceptual Exercise 4.3 develop an algorithm to identify the
partition in Equation 4.9 in O(q) steps.
4.5 Show that the two definitions of periodicity given in Definitions 4.6 and 4.7 are
equivalent.
4.7 Show that in an irreducible DTMC with N states, it is possible to go from any
state to any other state in N steps or less.
4.8 Show that the period of an irreducible DTMC with N states is N or less.
4.9 Show that there are no null recurrent states in a finite state DTMC.
4.10 Show by example that it is possible for an irreducible DTMC with N states to
have any period d ∈ {1, 2, · · · , N }.
4.11 Show that not all states in a finite state DTMC can be transient.
4.12 Deduce Equation 4.19 from Equation 4.18 by following an argument similar
to the one in the proof of Theorem 4.7.
4.13 Prove Theorem 4.11 (Pakes’ lemma) using Theorem 4.10 (Foster’s criterion).
4.14 Show that a transient DTMC eventually permanently exits any finite set with
probability 1. (Use Borel-Cantelli lemma of Appendix A.)
CONCEPTUAL EXERCISES 141
4.15 Let {πj , j ∈ S} be a solution to the balance and normalizing equations. Now
suppose the DTMC starts with initial distribution
P(X0 = j) = πj , j ∈ S.
Show that
P(Xn = j) = πj , j ∈ S, for all n ≥ 1.
Let
∞
X
φ(i) = E( αn c(Xn , Xn+1 |X0 = i), φ = [φ(i)]i∈S .
n=0
Show that φ satisfies Equation 4.52.
4.18 Consider the cost model of Conceptual Exercise 4.17, and assume that the
DTMC is irreducible and positive recurrent with limiting occupancy distribution
{πi , i ∈ S}. Show that g, the long run expected cost per unit time, is given by
XX
g= πi c(i, j)pi,j .
i∈S j∈S
4.19 Consider the cost set up of Section 4.8. Let φ(N, i) be the expected discounted
cost incurred over time {0, 1, 2, · · · , N } starting from state i, i.e.,
N
!
X
n
φ(N, i) = E α c(Xn )|X0 = i .
n=0
4.20 Consider the cost set up of Section 4.8. Let g(N, i) be the expected cost per
period incurred over time {0, 1, 2, · · · , N } starting from state i, i.e.,
N
!
1 X
g(N, i) = E c(Xn )|X0 = i .
N +1 n=0
142 DISCRETE-TIME MARKOV CHAINS: LIMITING BEHAVIOR
Show that g(N, i) can be computed recursively as follows:
g(0, i) = c(i),
1 X
g(N, i) = c(i) + N pij g(N − 1, j) , N ≥ 1.
N +1
j∈S
4.21 Show that if {πj , j ∈ S} satisfy the local balance Equations 4.55, they satisfy
the global balance Equations 4.35. Show by counter example that the reverse is not
true.
4.23 A DTMC is said to be tree if between any two distinct states i and j there is
exactly one sequence of distinct states i1 , i2 , · · · ir such that
pi,i1 pi1 ,i2 · · · pir ,j > 0.
Show that a positive recurrent tree DTMC is reversible.
4.25 Let {Xn , n ≥ 0} be an irreducible and positive DTMC on state space S. Let
P be its transition probability matrix and π be its stationary distribution. Suppose the
Markov chain earns a reward of ri everytime it visits state i. Let Ri (n), n ≥ 1, be
the total expected reward earned by the DTMC over time periods {0, 1, ..., n − 1}
starting from state i. Let R(n) = [Ri (n), i ∈ S] be the column vector of these
expected rewards.
Poisson Processes
The cumulative distribution function (cdf) FX is plotted in Figure 5.1. The proba-
bility density function (pdf) fX of the exp(λ) random variable is given by
d 0 if x < 0
fX (x) = FX (x) =
dx λe−λx if x ≥ 0.
The density function is plotted in Figure 5.2. The Laplace Stieltjest transform (LST)
of X ∼ exp(λ) is given by
Z ∞
−sX λ
F̃X (s) = E(e )= e−sx fX (x)dx = , Re(s) > −λ, (5.2)
0 λ + s
where the Re(s) denotes the real part of the complex number s. Taking the deriva-
tives of F̃X (s) we can compute the rth moments of X for all positive integer values
145
146 POISSON PROCESSES
F(x)
f(x)
of r as follows:
dr r!
E(X r ) = (−1)r r
F̃X (s)|s=0 = r .
ds λ
In particular we have
1 1
E(X) = , Var(X) = 2 .
λ λ
Thus the coefficient of variation of X, Var(X)/(E(X)2), is 1. We now study many
special and interesting properties of the exponential random variable.
Thus if X represents the lifetime of a component (say a computer hard drive), the
memoryless property says that the probability that an s year old hard drive will last
EXPONENTIAL DISTRIBUTIONS 147
another t years is the same as the probability that a new hard drive will last t years. It
is as if the hard drive has no memory that it has already been functioning for s years!
The next theorem gives an important characterization of an exponential random vari-
able.
Proof: We first show the “if” part. So, suppose X ∼ exp(λ) for some λ > 0. Then,
for s, t ≥ 0,
P(X > s + t, X > s)
P(X > s + t|X > s) =
P(X > s)
P(X > s + t) e−λ(s+t)
= =
P(X > s) e−λs
= e−λt = P(X > t).
Hence, by definition, X has memoryless property. Next we show the “only if” part.
So, let X be a non-negative random variable with complementary cdf
F c (x) = P(X > x), x ≥ 0.
Then, from Equation 5.3, we must have
F c (s + t) = F c (s)F c (t), s, t ≥ 0.
This implies
F c (2) = F c (1)F c (1) = (F c (1))2 ,
and
F c (1/2) = (F c (1))1/2 .
In general, for all positive rational a we get
F c (a) = (F c (1))a .
The only continuous function that will satisfy the above equation for all rational
numbers is
c
F c (x) = (F c (1))x = elnF (1)x = e−λx x ≥ 0,
where λ = −ln(F c (1)). Since F c (x) is a probability we must have λ > 0, and
hence FX (x) = 1 − F c (x) satisfies Equation 5.1. Thus X is an exp(λ) random
variable.
Theorem 5.2 A continuous non-negative random variable has constant hazard rate
r(x) = λ > 0, x ≥ 0, if and only if it is an exp(λ) random variable.
Proof: We first show the “if” part. So, suppose X ∼ exp(λ) for some λ > 0. Then
its hazard rate is given by
f (x)
r(x) =
F c (x)
λe−λx
= = λ.
e−λx
To show the “only if” part, note that the hazard rate completely determines the com-
plementary cdf by the following formula (see Conceptual Exercise 5.1)
Z x
F c (x) = exp − r(u)du , x ≥ 0. (5.4)
0
Hence, if the random variable X has hazard rate r(u) = λ for all u ≥ 0, we must
have
F c (x) = e−λx , x ≥ 0.
Thus X ∼ exp(λ).
Example 5.1 A running track is 1 km long. Two runners start on it at the same time.
The speed of runner i is Xi , i = 1, 2. Suppose Xi ∼ exp(λi ), i = 1, 2, and X1 and
X2 are independent. The mean speed of runner 1 is 20 km per hour and that of runner
2 is 22 km per hour. What is the probability that runner 1 wins the race?
The required probability is given by
1 1 λ2
P < = P(X2 < X1 ) =
X1 X2 λ1 + λ2
1/22 20
= = = 0.476.
1/22 + 1/20 20 + 22
In this section we generalize the memoryless property of Equation 5.3 from a fixed
t to a random t. We call this the strong memoryless property. The precise result is
given in the next Theorem.
Proof: We have
Z ∞
P(X > s + T, X > T ) = P(X > s + T, X > T |T = t)dFT (t)
Z0 ∞
= P(X > s + t, X > t)dFT (t)
Z0 ∞
= e−λ(s+t) dFT (t)
0
Z ∞
−λs
= e e−λt dFT (t)
0
−λs
= e P(X > T ).
Then
P(X > s + T, X > T )
P(X > s + T |X > T ) = = e−λs ,
P(X > T )
as desired.
EXPONENTIAL DISTRIBUTIONS 151
Another way of interpreting the strong memoryless property is that, given X > T ,
X − T is an exp(λ) random variable. Indeed it is possible to prove a multivariate
extension of this property as follows. Let Xi ∼ exp(λi ), 1 ≤ i ≤ n, be independent
and let T be a non-negative random variable that is independent of them. Then
n
Y
P(Xi > si + T, 1 ≤ i ≤ n|Xi > T, 1 ≤ i ≤ n) = e−λi si , si ≥ 0. (5.7)
i=1
Example 5.3 System A has two components in parallel, with iid exp(λ) lifetimes.
System B has a single component with exp(µ) lifetime, independent of system A.
What is the probability that system A fails before system B?
Let Zi be time of ith failure in system A. System A fails at time Z2 . From Exam-
ple 5.2, we have Z1 ∼ exp(2λ) and Z2 − Z1 ∼ exp(λ). Let X ∼ exp(µ) be the
lifetime of the component in system B. Then
P(System A fails before System B) =
= P(Z1 < X)P(Z2 − Z1 < X − Z1 |X > Z1 )
= P(exp(2λ) < exp(µ))P(exp(λ) < exp(µ))
2λ λ
= · .
2λ + µ λ + µ
Theorem 5.5 Suppose {Xi , 1 ≤ i ≤ n} are iid exp(λ) random variables. Then Z
is an Erlang (or Gamma) (n, λ) random variable (denoted as Erl(n, λ)), with density
0 if z < 0
fZ (z) = −λz (λz)
n−1
λe n! if z ≥ 0,
and cdf
0 if z < 0
FZ (z) = (λz)r
1 − e−λz n−1
P
r=0 r! if z ≥ 0.
EXPONENTIAL DISTRIBUTIONS 153
Proof: We compute the LST of Z as follows:
E(e−sZ ) = E(e−s(X1 +X+2+···+Xn ) )
= E(e−sX1 )E(e−sX2 ) · · · E(e−sXn )
n
λ
= .
s+λ
The result follows by taking the inverse LST from the table in Appendix F.
Example 5.4 Suppose the times between consecutive births at a maternity ward
in a hospital are iid exponential random variables with mean 1 day. What is the
probability that the 10th birth in a calendar year takes place after Jan 15?
Note that it does not matter when the last birth in the previous year took place,
since, due to strong memoryless property, the time until the first birth into the new
year is exponentially distributed. Thus Z, the time of the tenth birth is a sum of 10
iid exp(1) random variables. Therefore the required probability is given by
9
X 15r
P(Z > 15) = e−15 = 0.1185.
r=0
r!
We shall see later that the stochastic process of births is a Poisson process.
In this section we study the distribution of a random sum of iid exp(λ) random
variables. The main result is given in the next theorem.
Theorem 5.7 Let {Xi , i ≥ 1} be a sequence of iid exp(λ) random variables and
N be a G(p) random variable (i.e., geometric with parameter p), independent of the
X’s. Then the random sum
XN
Z= Xi
i=1
is an exp(λp) random variable.
Proof: We have n
λ
E(e−sZ |N = n) = .
s+λ
Hence we get
∞
X
E(e−sZ ) = E(e−sZ |N = n)P(N = n)
n=1
∞ n
X λ
= (1 − p)n−1 p
n=1
s + λ
∞
λp X λ(1 − p) n
=
s + λ n=0 s+λ
λp 1
= ·
s + λ 1 − λ(1 − p)/(s + λ)
λp
= .
s + λp
Hence, from Equation 5.2, Z must be an exp(λp) random variable.
Hence from Theorem 5.7, we see that Z ∼ exp(.03). Thus the lifetime is exponen-
tially distributed with mean 1/.03 = 33.33 hours.
A Poisson process is frequently used as a model for counting events occurring one at
a time, such as the number of births in a hospital, the number of arrivals at a service
system, the number of calls made, the number of accidents on a given section of a
road, etc. In this section we give three equivalent definitions of a Poisson process and
study its basic properties.
Let {Xn , n ≥ 1} be a sequence of non-negative random variables representing
inter-event times. Define
S0 = 0, Sn = X1 + X2 + · · · + Xn , n ≥ 1.
Then Sn is the time of occurrence of the nth event, n ≥ 1. Now, for t ≥ 0, define
N (t) = max{n ≥ 0 : Sn ≤ t}.
Thus N (t) is the number of events that take place over the time interval (0, t], and
{N (t), t ≥ 0} is called a counting process generated by {Xn , n ≥ 1}. Poisson
process is a special case of a counting process as defined below.
Thus the number of births in the maternity ward of Example 5.4 is a Poisson
process. We denote a Poisson process with parameter (or rate) λ by the shorthand
notation PP(λ). A typical sample path of a PP(λ) is shown in Figure 5.3. Note that
N (0) = 0 and the process jumps up by one at t = Sn , n ≥ 1. Thus it has piece-
wise constant, right-continuous sample paths. The next theorem gives the transient
distribution of a Poisson process.
t
S0 S1 S2 S3 S4
Example 5.6 Arrivals at a Post Office. Suppose customers arrive at a post office
according to a PP with rate 10 per hour.
(i) Compute the distribution of the number of customers who use the post office
during an 8-hour day.
Let N (t) be the number of arrivals over (0, t]. We see that the arrival process is
PP(λ) with λ = 10 per hour. Hence
N (8) ∼ P(λ · 8) = P(80).
Thus
(80)k
P(N (8) = k) = e−80 , k = 0, 1, 2, · · · .
k!
(ii) Compute the expected number of customers who use the post office during an
8-hour day. Since N (8) ∼ P(80), the desired answer is given by E(N (8)) = 80.
Next we compute the finite dimensional joint probability distributions of a Poisson
process. The above theorem does not help us there. We develop a crucial property of
a Poisson process that will help us do this. We start with a definition.
Definition 5.4 Shifted Poisson Process. Let {N (t), t ≥ 0} be a PP(λ), and define,
for a fixed s ≥ 0,
Ns (t) = N (t + s) − N (s), t ≥ 0.
The process {Ns (t), t ≥ 0} is called a shifted Poisson process.
Theorem 5.9 Shifted Poisson Process. A shifted Poisson process {Ns (t), t ≥ 0}
is a PP(λ), and is independent of {N (u), 0 ≤ u ≤ s}.
Proof: It is clear from the definition of Ns(t) that it equals the number of events in (s, s + t]. From Figure 5.4 we see that the first event after s occurs at time SN(s)+1.
Figure 5.4 The shifted process {Ns(t), t ≥ 0} relative to {N(t), t ≥ 0}.
Proof: Let {N (t), t ≥ 0} be a PP(λ). From Theorems 5.8 and 5.9 it follows that
P(N(t + s) − N(s) = k) = e^{−λt} (λt)^k/k!, k = 0, 1, 2, · · · , (5.8)
which is independent of s. Thus {N (t), t ≥ 0} has stationary increments.
Now suppose 0 ≤ t1 ≤ t2 ≤ t3 ≤ t4 are fixed. Then N (t2 ) − N (t1 ) and
N (t4 ) − N (t3 ) are increments over non-overlapping intervals (t1 , t2 ] and (t3 , t4 ].
From Theorem 5.9 Nt3 (t4 − t3 ) = N (t4 ) − N (t3 ) is independent of {N (u), 0 ≤
u ≤ t3 }, and hence independent of N (t2 ) − N (t1 ). This proves the independence of
increments over non-overlapping intervals, and thus proves the theorem.
The above theorem helps us compute the finite dimensional joint probability dis-
tributions of a Poisson process, as shown in the next theorem.
Proof:
P(N(t1) = k1, N(t2) = k2, · · · , N(tn) = kn)
= P(N(t1) = k1, N(t2) − N(t1) = k2 − k1, · · · , N(tn) − N(tn−1) = kn − kn−1)
= P(N(t1) = k1) P(N(t2) − N(t1) = k2 − k1) · · · P(N(tn) − N(tn−1) = kn − kn−1)
(by the independent increments property)
= e^{−λt1} (λt1)^{k1}/k1! · e^{−λ(t2−t1)} (λ(t2 − t1))^{k2−k1}/(k2 − k1)! · · · e^{−λ(tn−tn−1)} (λ(tn − tn−1))^{kn−kn−1}/(kn − kn−1)!,
where the last equation follows from Equation 5.8. Further simplification yields the
desired result.
The independent increments property is very useful in computing probabilistic
quantities associated with a Poisson process as shown in the next two examples.
Example 5.8 Consider the post office of Example 5.6. What is the probability that
one customer arrives between 1:00pm and 1:06pm, and two customers arrive between
1:03pm and 1:12pm?
Using time homogeneity and hours as units of time, we write the required proba-
bility as P(N (0.1) = 1; N (0.2) − N (0.05) = 2). Using independence of increments
over non-overlapping intervals (0, 0.05], (0.05, 0.1], and (0.1, 0.2] we get
P(N(0.1) = 1; N(0.2) − N(0.05) = 2)
= Σ_{k=0}^{1} P(N(0.05) = k, N(0.1) − N(0.05) = 1 − k, N(0.2) − N(0.1) = 1 + k)
= Σ_{k=0}^{1} P(N(0.05) = k) P(N(0.1) − N(0.05) = 1 − k) P(N(0.2) − N(0.1) = 1 + k)
= Σ_{k=0}^{1} e^{−0.5} (0.5)^k/k! · e^{−0.5} (0.5)^{1−k}/(1 − k)! · e^{−1} (1)^{1+k}/(1 + k)!.
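The final sum is routine to evaluate numerically; here is a small Python sketch (ours) using scipy's Poisson pmf:

```python
from scipy.stats import poisson

# Increments over (0, .05], (.05, .1], (.1, .2] have Poisson parameters
# 0.5, 0.5, and 1 (rate 10 per hour), and are independent.
prob = sum(
    poisson.pmf(k, 0.5) * poisson.pmf(1 - k, 0.5) * poisson.pmf(1 + k, 1.0)
    for k in (0, 1)
)
print(prob)  # ≈ 0.1015
```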
The next theorem states that the properties stated in Theorems 5.8 and 5.10 in fact
characterize a Poisson process.
Theorem 5.12 Alternate Characterization 1. A stochastic process {N(t), t ≥ 0} is a PP(λ) if and only if
(i) it has stationary and independent increments,
(ii) N(t) ∼ P(λt) for all t ≥ 0.
Proof: The “only if” part is contained in Theorems 5.8 and 5.10. Here we prove
the “if ” part. From (ii) it is clear that N (0) = 0 with probability 1. Also, since
N (s + t) − N (s) is a P (λt) random variable, it is clear that almost all sample paths
of the process are piecewise constant with jumps of size 1. Let
S1 = inf{t ≥ 0 : N (t) = 1}.
Now, by an argument similar to that in Theorem 5.8,
P(S1 > t) = P(N (t) = 0) = e−λt .
Hence S1 ∼ exp(λ). Similarly, we can define
Sk = inf{t ≥ 0 : N (t) = k},
and, using stationary and independent increments, show that {Xk = Sk −Sk−1 , k ≥
1} (with S0 = 0) is a sequence of iid exp(λ) random variables. Hence {N (t), t ≥ 0}
is a PP(λ). This completes the proof.
Since the conditions given in the above theorem are necessary and sufficient, they
can be taken as an alternate definition of a Poisson process. Finally, we give yet
another characterization of a Poisson process. We need the following definition.
Example 5.9
We use this definition to give the second alternate characterization of a Poisson pro-
cess in the next theorem.
Theorem 5.13 Alternate Characterization 2. A counting process {N (t), t ≥ 0}
is a PP(λ) if and only if
(i) it has stationary and independent increments,
(ii)
P(N (0) = 0) = 1,
P(N (h) = 0) = 1 − λh + o(h),
P(N (h) = 1) = λh + o(h),
P(N (h) = j) = o(h), j ≥ 2.
Proof: Suppose {N (t), t ≥ 0} is a PP(λ). Thus it satisfies conditions (i) and (ii)
of Theorem 5.12. Condition (i) above is the same as condition (i) of Theorem 5.12.
We shall show that condition (ii) of Theorem 5.12 implies condition (ii) above. We
have N (0) ∼ P(0) = 0 with probability 1. Furthermore,
lim_{h→0} (1/h) (P(N(h) = 0) − 1 + λh) = lim_{h→0} (1/h) (e^{−λh} − 1 + λh) = 0.
Conversely, suppose conditions (i) and (ii) hold. Define pk(t) = P(N(t) = k) and expand pk(t + h) over the increment (t, t + h], as is done in detail in the proof of Theorem 5.20. Letting h → 0, we get
p′k(t) = dpk(t)/dt = −λpk(t) + λpk−1(t). (5.9)
Proceeding in a similar fashion for the case k = 0 we get
p′0 (t) = −λp0 (t). (5.10)
Using the initial condition p0 (0) = 1, the above equation admits the following solu-
tion:
p0 (t) = e−λt , t ≥ 0.
Using the initial condition pk (0) = 0 for k ≥ 1, we can solve Equation 5.9 recur-
sively to get
pk(t) = e^{−λt} (λt)^k/k!, t ≥ 0, (5.11)
which implies that N (t) ∼ P (λt). Thus conditions (i) and (ii) of Theorem 5.13
imply conditions (i) and (ii) of Theorem 5.12. This proves the result.
Since the conditions of the above theorem are necessary and sufficient, they can
be taken as yet another definition of a Poisson process.
In this section we study the joint distribution of the event times S1 , S2 , · · · , Sn , given
that N (t) = n. We begin with some preliminaries about order statistics of uniformly
distributed random variables. (See Appendix C for more details.)
Theorem 5.14 Campbell’s Theorem. Let Sn be the nth event time in a PP(λ) {N(t), t ≥ 0}. Given N(t) = n,
(S1, S2, · · · , Sn) ∼ (Ũ1, Ũ2, · · · , Ũn),
where Ũ1 ≤ Ũ2 ≤ · · · ≤ Ũn are the order statistics of n iid random variables uniformly distributed over [0, t].
Example 5.10
(i) Let {N (t), t ≥ 0} be a PP(λ), and let Sn be the time of the nth event. Compute
P(S1 > s|N (t) = n). Assume n ≥ 1 is a given integer.
Theorem 5.14 implies that, given N (t) = n, S1 ∼ min{U1 , U2 , · · · , Un }. From
Equation 5.13 we get
f1(u) = (n/t) (1 − u/t)^{n−1}, 0 ≤ u ≤ t.
Hence
P(S1 > s | N(t) = n) = ∫_s^t f1(u) du = (1 − s/t)^n, 0 ≤ s ≤ t.
(ii) Compute E(Sk |N (t) = n).
From Theorem 5.14 and Equation 5.14, for 1 ≤ k ≤ n, we get
E(Sk | N(t) = n) = E(Ũk) = kt/(n + 1).
For k > n, using memoryless property of the exponentials, we get
E(Sk | N(t) = n) = t + (k − n)/λ.
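The conditional mean kt/(n + 1) can be checked by conditioning a simulated PP(λ) path on N(t) = n. The sketch below (ours; all parameters illustrative) does this by rejection:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, n, k = 2.0, 10.0, 15, 4  # illustrative values
s_k = []

# Keep only paths with exactly n events in (0, t], then record S_k
while len(s_k) < 5_000:
    times = np.cumsum(rng.exponential(1 / lam, size=100))
    if np.searchsorted(times, t, side="right") == n:
        s_k.append(times[k - 1])

print(np.mean(s_k), k * t / (n + 1))  # both ≈ 2.5
```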
Example 5.11 Suppose passengers arrive at a bus depot according to a PP(λ). Buses
leave the depot every T time units. Assume that the bus capacity is sufficiently large
so that when a bus leaves there are no more passengers left at the depot. What is the
average waiting time of the passengers?
Let {N (t), t ≥ 0} be a PP(λ). Suppose a bus has just left the depot at time 0,
so the bus depot is empty at time 0. Consider the time interval (0, T]. The number of passengers waiting for the bus at time T is N(T).
Now suppose N (T ) = n > 0. Let S1 , S2 , · · · , Sn be the arrival times of these n
passengers. The waiting time of the ith passenger is Wi = T − Si. Hence the average waiting time is
W̄ = (1/n) Σ_{i=1}^{n} Wi = (1/n) Σ_{i=1}^{n} (T − Si) = T − (1/n) Σ_{i=1}^{n} Si.
Now, using Theorem 5.14, we get
Σ_{i=1}^{n} Si ∼ Σ_{i=1}^{n} Ũi = Σ_{i=1}^{n} Ui,
where the Ui’s are iid uniformly distributed over [0, T]. Hence
E(W̄ | N(T) = n) = T − (1/n) E(Σ_{i=1}^{n} Ui) = T − T/2 = T/2.
Hence
E(W̄) = T/2.
Thus the average waiting time is T /2, which makes intuitive sense. This is one more
manifestation of the fact that the events in a Poisson process occur in a uniform,
time-independent fashion.
The superposition of two counting processes is a counting process that counts events
in both the counting processes. Splitting of a counting process is the operation of
generating two counting processes by classifying the events in the original counting
process as belonging to one or the other counting process. We first study the super-
position and then the splitting of Poisson processes.
5.4.1 Superposition
Superposition occurs naturally when two Poisson processes merge to generate a com-
bined process. For example, a telephone exchange may get domestic calls and inter-
national calls, each forming a Poisson process. Thus the process that counts both
calls is the superposition of the two processes. Figure 5.5 illustrates such a superpo-
sition. In this section we study the superposition of two or more independent Poisson
processes.
Let {Ni (t), t ≥ 0}, (i = 1, 2, · · · r), be independent Poisson processes. Define
N (t) = N1 (t) + N2 (t) + · · · + Nr (t), t ≥ 0.
The process {N (t), t ≥ 0} is called the superposition of the r processes {Ni (t), t ≥
0}, (i = 1, 2, · · · , r). The next theorem describes the superposed process.
Theorem 5.15 Superposition. The superposition {N(t), t ≥ 0} of the independent processes {Ni(t), t ≥ 0} ∼ PP(λi), (i = 1, 2, · · · , r), is a PP(λ) with λ = λ1 + λ2 + · · · + λr.
Proof: Since the r independent processes {Ni (t), t ≥ 0}, (i = 1, 2, · · · , r), have
stationary and independent increments, it follows that {N (t), t ≥ 0} inherits this
property. Thus in order to show that {N (t), t ≥ 0} is a PP(λ) it suffices to show that
N (t) is a P (λt) random variable. (See Theorem 5.12.) For a fixed t, we know that
{Ni (t) ∼ P (λi t), 1 ≤ i ≤ r} are independent Poisson random variables. Hence
their sum, N (t), is a Poisson random variable with parameter λt = (λ1 + λ2 + · · · +
λr)t. This proves the theorem.
Figure 5.5 Superposition of two Poisson processes.
Example 5.13 Jobs submitted for execution on a central computer are divided into
four priority classes, indexed 1,2,3, and 4. The inter-arrival times for the jobs of class
i are exponential random variables with mean mi minutes, with m1 = 10, m2 = 15,
m3 = 30, and m4 = 60. Assume all arrival streams behave independently of each
other. Let N (t) be the total number of jobs of all classes that arrive during (0, t].
Characterize the stochastic process {N (t), t ≥ 0}.
Let Ni(t) be the number of jobs of class i that arrive during (0, t]. Due to the
iid exponential inter-arrival times, we know that {Ni (t), t ≥ 0} is a PP(λi ) where
λi = 1/mi , and the four arrival processes are independent. Now, we have
N (t) = N1 (t) + N2 (t) + N3 (t) + N4 (t).
Hence, from Theorem 5.15, it follows that {N (t), t ≥ 0} is a PP(λ) where, using the
time units of hours,
λ = λ1 + λ2 + λ3 + λ4
= 60/m1 + 60/m2 + 60/m3 + 60/m4
= 6 + 4 + 2 + 1 = 13 per hour.
We now study how events from the individual processes {Ni (t), t ≥ 0}, (i =
1, 2, · · · , r), are interleaved in the superposed process {N (t), t ≥ 0}. Let Zn = i
if the nth event in the superposed process belongs to the ith process. Thus, for
the sample path shown in Figure 5.5, we have Z1 = 1, Z2 = 2, Z3 = 2, Z4 =
1, Z5 = 2, Z6 = 1, · · ·. The next theorem gives an interesting property of the se-
quence {Zn , n ≥ 1}.
Proof: Let Si be the time of occurrence of the first event in {Ni (t), t ≥ 0}. Then
Si ∼ exp(λi ). Also, since the r processes are independent, {Si , 1 ≤ i ≤ r} are
independent. Hence
P(Z1 = i) = P(Si = min{Sj, 1 ≤ j ≤ r}) = λi/λ,
where we have used the marginal distribution of N in Theorem 5.3. Now suppose
the first event in the {N (t), t ≥ 0} process takes place at time s. Then, from Theo-
rem 5.9 it follows that the shifted processes {Ni (t + s), t ≥ 0}, (i = 1, 2, · · · r), are
independent Poisson processes with parameters λi , respectively. Hence Z2 , the type
of the next event in {N (t), t ≥ 0}, has the same distribution as Z1 , and is indepen-
dent of it. Proceeding in this fashion we see that Zn has the same distribution as Z1
and is independent of {Zk , 1 ≤ k ≤ n − 1}. This proves the theorem.
This is yet another indication that the events in a PP take place uniformly in time.
That is why the probability of an event being from the ith process is proportional to
the rate of the ith process, and is independent of when the event occurs.
Example 5.14 Customers arriving at a bank can be classified into three categories.
Customers of category 1 deposit money, those of category 2 withdraw money, and
those of category 3 do both. The deposit transaction takes 3 minutes, the withdrawals
take 4 minutes, and the combined transaction takes 6 minutes. Customers of category
i arrive according to PP(λi ) with λ1 = 20, λ2 = 15, and λ3 = 10 per hour. What
is the average transaction time of a typical customer in the bank? Are the successive
transaction times iid?
Let Z be the category of a typical customer. From Theorem 5.16 we get
P(Z = 1) = 20/45, P(Z = 2) = 15/45, P(Z = 3) = 10/45.
Hence the average transaction time of a typical customer is
3 · (20/45) + 4 · (15/45) + 6 · (10/45) = 4 minutes.
The successive transaction times are iid since the categories of the successive cus-
tomers are iid random variables, from Theorem 5.16.
5.4.2 Splitting
Splitting is the opposite of superposition: we start with a single Poisson process and
“split” it to create two or more counting processes. For example, the original process
may count the number of arrivals at a store, while the split processes might count
the male and female arrivals separately. The nature of these counting processes will
depend on the rule used to split the original process. We use a special rule called the
Bernoulli splitting, described below.
Let {N (t), t ≥ 0} be a Poisson process. Suppose each event is classified as a type
i event (1 ≤ i ≤ r) with probability pi , where {pi , 1 ≤ i ≤ r} are given numbers
such that
pi > 0, (1 ≤ i ≤ r), Σ_{i=1}^{r} pi = 1.
The successive classifications are independent. This classification mechanism is
called the Bernoulli splitting mechanism. Now let Ni (t) be the number of events
during (0, t] that get classified as type i events. We say that the original process
{N (t), t ≥ 0} is “split” into r processes {Ni (t), t ≥ 0}, (1 ≤ i ≤ r). Clearly,
N (t) = N1 (t) + N2 (t) + · · · + Nr (t), t ≥ 0,
so {N (t), t ≥ 0} is a superposition of {Ni (t), t ≥ 0}, (1 ≤ i ≤ r).
The next theorem gives the probabilistic structure of the split processes.
Theorem 5.17 Bernoulli Splitting. The split processes {Ni(t), t ≥ 0}, (1 ≤ i ≤ r), are independent Poisson processes, with {Ni(t), t ≥ 0} ∼ PP(λpi).
Proof: We shall first show that {Ni (t), t ≥ 0} is a PP(λpi ) for a given i ∈
{1, 2, · · · ,r}. There are many ways of showing this. We shall show it by using Theo-
rem 5.7. Let {Xn , n ≥ 1} be the iid exp(λ) inter-event times in {N (t), t ≥ 0}, and
{Yn, n ≥ 1} be the inter-event times in the {Ni(t), t ≥ 0} process. Let {Rn, n ≥ 1} be a sequence of iid geometric random variables with parameter pi, i.e.,
P(Rn = k) = (1 − pi)^{k−1} pi, k ≥ 1.
Let T0 = 0 and Tn = R1 + R2 + · · · + Rn, n ≥ 1. The Bernoulli splitting mechanism implies that
Yn = Σ_{m=Tn−1+1}^{Tn} Xm, n ≥ 1.
Hence, by Theorem 5.7, {Yn, n ≥ 1} is a sequence of iid exp(λpi) random variables, and {Ni(t), t ≥ 0} is a PP(λpi).
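A simulation check of this result is straightforward. The Python sketch below (ours; parameters illustrative) splits one long PP(λ) path and compares the mean inter-event gap of the type i stream with 1/(λpi):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, p_i, t_end = 5.0, 0.3, 1000.0  # illustrative values

# One long PP(lam) path; each event is classified type i with probability p_i
arrivals = np.cumsum(rng.exponential(1 / lam, size=int(2 * lam * t_end)))
arrivals = arrivals[arrivals <= t_end]
kept = arrivals[rng.random(arrivals.size) < p_i]

# Inter-event times in the split process should be iid exp(lam * p_i)
print(np.diff(kept).mean(), 1 / (lam * p_i))
```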
Example 5.15 Geiger Counter. A Geiger counter is a device to count the radioac-
tive particles emitted by a source. Suppose the particles arrive at the counter accord-
ing to a PP(λ) with λ = 1000 per second. The counter fails to count a particle with
probability .1, independent of everything else. Suppose the counter registers four par-
ticles in .01 seconds. What is the probability that at least six particles have actually
arrived at the counter during this time period?
Let N (t) be the number of particles that arrive at the counter during (0, t], N1 (t)
the number of particles that are registered by the counter during (0, t], and N2 (t) the
number of particles that go unregistered by the counter during (0, t]. Then {N (t), t ≥
0} is a PP(1000), and Theorem 5.17 implies that {N1 (t), t ≥ 0} is a PP(900), and
it is independent of {N2 (t), t ≥ 0}, which is a PP(100). We are asked to compute
P(N (.01) ≥ 6|N1 (.01) = 4). We have
P(N (.01) ≥ 6|N1 (.01) = 4) = P(N2 (.01) ≥ 2|N1 (.01) = 4)
= P(N2 (.01) ≥ 2) = 0.264.
Here we have used independence to get the second equality, and used N2 (.01) ∼
P(1) to compute the numerical answer.
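The numerical answer is a one-liner (a sketch, ours):

```python
from scipy.stats import poisson

# N2(.01) ~ P(100 * .01) = P(1); required probability is P(N2(.01) >= 2)
print(1 - poisson.cdf(1, mu=1.0))  # ≈ 0.2642
```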
Example 5.16 Turnpike Traffic. Consider the toll highway from Orlando to Miami, with n interchanges where cars can enter and exit; interchange 1 is at the start in Orlando, and interchange n is at the end in Miami. Suppose the cars going to Miami enter at interchange i according to a PP(λi) and travel at the same speed,
(1 ≤ i ≤ n − 1). A car entering at interchange i will exit at interchange j with
probability pij , 1 ≤ i < j ≤ n, independent of each other. Let Ni (t) be the number
of cars that cross a traffic counter between interchange i and i + 1 during (0, t]. Show
that {Ni (t), t ≥ 0} is a PP and compute its rate.
Let Nki (t) be the number of cars that enter at interchange k ≤ i and cross the
traffic counter between interchange i and i + 1 during (0, t]. Assuming Bernoulli
splitting, we see that
{Nki(t), t ≥ 0} ∼ PP(λk Σ_{j>i} pkj).
Now,
Ni(t) = Σ_{k=1}^{i} Nki(t).
k=1
Theorem 5.17 implies that {Nki (t), t ≥ 0}, 1 ≤ k ≤ i, are independent Poisson
processes. Hence, from Theorem 5.15, we have
{Ni(t), t ≥ 0} ∼ PP(Σ_{k=1}^{i} λk Σ_{j>i} pkj).
Theorem 5.18 Let p be an integrable function over [0, t]. Then R(t) is a Poisson random variable with parameter
λ ∫_0^t p(s) ds.
Let N (t) be the number of users who enter the library during (0, t]. If a user i
enters the library at time s ≤ t, then he or she will be in the library at time t with
probability
p(s) = P(Ti > t − s) = 1 − G(t − s).
We can imagine that the user entering at time s is registered with probability p(s).
Thus R(t), the number of users who are registered during (0, t], is the same as the
number of users in the library at time t. Since p(·) is a monotone bounded function,
it is integrable, and we can apply Theorem 5.18 to see that R(t) is a Poisson random
variable with parameter
λ ∫_0^t p(s) ds = λ ∫_0^t (1 − G(t − s)) ds = λ ∫_0^t (1 − G(s)) ds.
Thus the expected number of users in the library at time t is
E(R(t)) = λ ∫_0^t (1 − G(s)) ds.
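For a concrete check, the sketch below (ours) assumes, purely for illustration, that the stay times are exp(µ), so that E(R(t)) = (λ/µ)(1 − e^{−µt}); it places the N(t) arrival epochs uniformly over (0, t], as justified by Theorem 5.14:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, mu, t = 10.0, 2.0, 3.0  # illustrative values; stays assumed exp(mu)
counts = []

for _ in range(10_000):
    n = rng.poisson(lam * t)                # N(t) ~ P(lam * t)
    s = rng.uniform(0.0, t, size=n)         # arrival epochs given N(t) = n
    stay = rng.exponential(1 / mu, size=n)  # iid stay times with cdf G
    counts.append(np.sum(s + stay > t))     # users still present at time t

# E(R(t)) = lam * integral_0^t (1 - G(s)) ds = (lam/mu) * (1 - exp(-mu*t))
print(np.mean(counts), lam / mu * (1 - np.exp(-mu * t)))
```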
In this section we study a generalization of the Poisson process by relaxing the re-
quirement that the increments be stationary.
We begin with some preliminaries. Let λ : [0, ∞) → [0, ∞) be a given function.
Assume that it is integrable over finite intervals and define
Λ(t) = ∫_0^t λ(s) ds.
(i) N (0) = 0,
(ii) N (s + t) − N (s) ∼ P (Λ(s + t) − Λ(s)).
(See Conceptual Exercise 5.7.) Using the independence of increments one can derive
the finite dimensional joint probability distributions of an NPP as in Theorem 5.11.
We state the result in the next theorem and omit the proof.
Example 5.18 A fast food restaurant is open from 6am to midnight. During this
time period customers arrive according to an NPP(λ(·)), where the rate function
λ(·) (in customers/hr) is as shown in Figure 5.6. The corresponding Λ(·) is shown in
Figure 5.7.
Figure 5.6 Arrival rate function for the fast food restaurant.
(i) Compute the mean and variance of the number of arrivals in one day (6am to midnight).
Let N(t) be the number of arrivals in the first t hours after 6am. The number of arrivals in one day is given by N(18), which is a P(Λ(18)) random variable by definition. Hence the mean and the variance are both given by Λ(18) = 275.
Figure 5.7 Cumulative arrivals function for the fast food restaurant.
(ii) Compute the mean and variance of the number of arrivals from 6pm to 10pm.
The number of arrivals from 6pm to 10pm is given by N(16) − N(12), which is a P(Λ(16) − Λ(12)) random variable by definition. Hence the mean and the variance are both given by Λ(16) − Λ(12) = 255 − 180 = 75.
(iii) What is the probability that exactly one person arrives from 8am to 8:06am?
The number of arrivals from 8am to 8:06am is given by N (2.1) − N (2) which is
a P(Λ(2.1) − Λ(2)) = P(1.95) random variable by definition. Hence the required
probability is given by e−1.95 (1.95)1 /1! = 0.2774.
The next theorem gives necessary and sufficient condition for a counting process
to be an NPP.
Proof: The “if” part is straightforward and we leave the details to the reader, see
Conceptual Exercise 5.8. To establish the “only if” part it suffices to show that the
conditions (i) and (ii) in this theorem imply that N (t) ∼ P (Λ(t)). To do this we
define
pk (t) = P(N (t) = k), k ≥ 0.
We follow the proof of Theorem 5.13. For k ≥ 1, we get
pk(t + h) = P(N(t + h) = k)
= Σ_{j=0}^{k} P(N(t + h) = k | N(t) = j) P(N(t) = j)
= Σ_{j=0}^{k} P(N(t + h) − N(t) = k − j | N(t) = j) pj(t)
= Σ_{j=0}^{k} P(N(t + h) − N(t) = k − j) pj(t)
(by independence of increments)
= P(N(t + h) − N(t) = 0) pk(t) + P(N(t + h) − N(t) = 1) pk−1(t)
+ Σ_{j=2}^{k} P(N(t + h) − N(t) = j) pk−j(t)
= (1 − λ(t)h + o(h)) pk(t) + (λ(t)h + o(h)) pk−1(t) + Σ_{j=2}^{k} o(h) pk−j(t)
= (1 − λ(t)h) pk(t) + λ(t)h pk−1(t) + o(h).
Letting h → 0, we get
p′k(t) = dpk(t)/dt = −λ(t) pk(t) + λ(t) pk−1(t). (5.15)
Proceeding in a similar fashion for the case k = 0 we get
p′0 (t) = −λ(t)p0 (t).
Using the initial condition p0 (0) = 1, the above equation admits the following solu-
tion:
p0 (t) = e−Λ(t) , t ≥ 0.
Using the initial condition pk (0) = 0 for k ≥ 1, we can solve Equation 5.15 recur-
sively to get
pk(t) = e^{−Λ(t)} Λ(t)^k/k!, t ≥ 0,
which implies that N(t) ∼ P(Λ(t)). This proves the theorem.
Since the conditions in the above theorem are necessary and sufficient, we can use
them as an alternate definition of an NPP.
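A standard way to simulate an NPP(λ(·)) with a bounded rate function is thinning: generate a PP(λmax) with λmax ≥ λ(t), and accept an event occurring at time t with probability λ(t)/λmax. The sketch below is ours, not from the text; the rate function is illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def npp_thinning(rate, lam_max, t_end):
    """Event times of an NPP on (0, t_end], by thinning a PP(lam_max)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1 / lam_max)     # candidate event of PP(lam_max)
        if t > t_end:
            return np.array(times)
        if rng.random() < rate(t) / lam_max:  # accept with prob lambda(t)/lam_max
            times.append(t)

# Illustrative linear rate: Lambda(10) = 2*10 + 10**2/2 = 70 events on average
events = npp_thinning(lambda u: 2.0 + u, lam_max=12.0, t_end=10.0)
print(events.size)
```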
5.5.1 Event Times in an NPP
Suppose the nth event in an NPP occurs at time Sn (n ≥ 1). Define Xn = Sn − Sn−1 (with S0 = 0) to be the nth inter-event time. In general the inter-event times are neither
independent, nor identically distributed. We can compute the marginal distribution
of Xn as given in the next theorem.
Theorem 5.21
P(Xn+1 > t) = e^{−Λ(t)} if n = 0, and
P(Xn+1 > t) = ∫_0^∞ λ(s) e^{−Λ(t+s)} Λ(s)^{n−1}/(n − 1)! ds if n ≥ 1.
Theorem 5.22 Campbell’s Theorem for NPPs. Let Sn be the nth event time in an NPP(λ(·)) {N(t), t ≥ 0}. Given N(t) = n,
(S1, S2, · · · , Sn) ∼ (Ũ1, Ũ2, · · · , Ũn),
where Ũ1 ≤ Ũ2 ≤ · · · ≤ Ũn are the order statistics of n iid random variables with common pdf λ(u)/Λ(t), 0 ≤ u ≤ t.
In this section we shall study another generalization of a Poisson process, this time
relaxing the assumption that the events occur one at a time. Such a generalization is
useful in practice where multiple events can occur at the same time. For example,
customers arrive in batches to a restaurant, multiple cars are involved in an accident,
demands occur in batches, etc. To model such situations we introduce the compound Poisson process (CPP), defined below.
Figure 5.8 A typical sample path of a compound Poisson process {Z(t), t ≥ 0}.
Since {Zn , n ≥ 1} is a sequence of iid random variables, and the PP has inde-
pendent increments, it follows that {Z(t), t ≥ 0} also has independent increments.
Thus if we know the marginal distribution of Z(t), then all finite dimensional dis-
tributions of {Z(t), t ≥ 0} are known. Specifically, consider the case where Zn is a
non-negative integer valued random variable. Then the state-space of {Z(t), t ≥ 0}
is {0, 1, 2, · · ·}. Let
pk (t) = P(Z(t) = k), k = 0, 1, 2, · · · .
The next theorem gives the finite dimensional distribution.
Theorem 5.23 Let {Z(t), t ≥ 0} be a CPP with state-space {0, 1, 2, · · ·}. Let 0 ≤
t1 ≤ · · · ≤ tn be given real numbers and 0 ≤ k1 ≤ · · · ≤ kn be given integers. Then
P(Z(t1 ) = k1 , Z(t2 ) = k2 , · · · , Z(tn ) = kn )
= pk1 (t1 )pk2 −k1 (t2 − t1 ) · · · pkn −kn−1 (tn − tn−1 ).
Proof: We have
E(e^{−sZ(t)} | N(t) = n) = E(e^{−s(Z1+Z2+···+Zn)}) = A(s)^n,
since {Zn , n ≥ 1} are iid with common LST A(s). Thus
φ(s) = E(e−sZ(t) ) = E(E(e−sZ(t) |N (t))) = E((A(s))N (t) ),
which is the generating function of N (t) evaluated at A(s). Since N (t) ∼ P(λt), we
know that
E(z N (t) ) = e−λt(1−z) .
Hence
φ(s) = e−λt(1−A(s)) .
This proves the theorem.
In general it is difficult to invert φ(s) analytically to obtain the distribution of Z(t).
However, the moments of Z(t) are easy to obtain. They are given in the following
theorem.
Theorem 5.25 Let {Z(t), t ≥ 0} be a CPP as defined by Equation 5.16. Let τ and
s2 be the mean and the second moment of Zn , n ≥ 1. Then
E(Z(t)) = λτ t,
Var(Z(t)) = λs2 t.
Example 5.19 Suppose customers arrive at a restaurant in batches that are iid with
the following common distribution
pk = P(Batch Size = k), 1 ≤ k ≤ 6,
where
[p1 , p2 , p3 , p4 , p5 , p6 ] = [.1, .25, .1, .25, .15, .15].
The batches themselves arrive according to a PP(λ). Compute the mean and variance
of Z(t), the number of arrivals during (0, t].
{Z(t), t ≥ 0} is a CPP with batch arrival rate λ and the mean and second moment
of batch sizes given by
τ = Σ_{k=1}^{6} k pk = 3.55,
s2 = Σ_{k=1}^{6} k² pk = 15.15.
Hence, using Theorem 5.25, we get
E(Z(t)) = 3.55λt, Var(Z(t)) = 15.15λt.
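These moments are easy to corroborate by simulating the CPP directly (a sketch, ours; the values of λ and t are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, t = 4.0, 2.0  # illustrative values
sizes, probs = np.arange(1, 7), np.array([.1, .25, .1, .25, .15, .15])

n_batches = rng.poisson(lam * t, size=20_000)
z = np.array([rng.choice(sizes, size=n, p=probs).sum() for n in n_batches])

print(z.mean(), 3.55 * lam * t)   # E(Z(t)) = lam * tau * t
print(z.var(), 15.15 * lam * t)   # Var(Z(t)) = lam * s2 * t
```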
5.1 Consider a network with three arcs as shown in Figure 5.9. Let Xi ∼ exp(λi) be the length of arc i, (1 ≤ i ≤ 3). Suppose the arc lengths are independent. Compute the distribution of the length of the shortest path from node a to node c.
Figure 5.9 A network with three arcs: X1 from a to b, X2 from b to c, and X3 from a to c.
5.2 Compute the probability that a − b − c is the shortest path from node a to node
c in the network in Figure 5.9.
5.3 Compute the distribution of the length of the longest path from node a to node
c in the network in Figure 5.9.
5.4 Compute the probability that a − b − c is the longest path from node a to node
c in the network in Figure 5.9.
5.5 Consider a system consisting of n components in parallel, that is, the system
fails when all components fail. The lifetimes of the components are iid exp(λ) ran-
dom variables. Compute the cdf of the time when the system fails, assuming that all
components are functioning at time 0.
5.9 A machine has two parts: A and B. There are two spares available for A and one
for B. The machine needs both parts to function. As soon as a part fails, it is replaced
instantaneously by its spare. The lifetimes of all parts are independent. The part A
and its spares have iid exp(λ) lifetimes and B and its spare have iid exp(µ) lifetimes.
Spares fail only while in service. What is the expected lifetime of the machine?
5.10 A rope consists of n strands. When a strand is carrying a load of x tons, its
failure rate is λx. At time zero all strands are working and equally share a combined
load of L tons. When a strand fails, the remaining strands share the load equally. This
process continues until all strands break, at which point the rope breaks. Compute the
distribution of the lifetime of the rope.
5.11 Two jobs are waiting to be processed by a single machine. The time required to
complete the ith job is an exp(λi ) random variable (i = 1, 2). The processing times
are independent of each other, and of the sequence in which the jobs are processed. To
keep the ith job in the machine shop costs Ci dollars per unit time. In what sequence
should the jobs be processed so that the expected total cost is minimized?
5.13 Suppose there are n ≥ 3 sticks whose lengths are iid exp(λ) random variables.
Show that the probability that an n-sided polygon can be formed from these sticks is
n(1/2)n−1 . Hint: A polygon can be formed if and only if the longest stick is shorter
than the sum of the rest.
5.14 Let A, B, and C be iid exp(λ) random variables. Show that the probability
that both the roots of Ax2 + Bx + C = 0 are real is 1/3.
5.15 Suppose U is uniformly distributed over (0, 1). Show that −ln(U ) is an exp(1)
random variable.
5.16 Mary arrives at a single server service facility to find a random number, N, of customers already in the system. The customers are served in first come first served order. The service times of all the customers are iid exp(λ) random variables. The pmf of N is
given by P(N = n) = (1 − ρ)ρn , n = 0, 1, 2, · · · , where ρ < 1 is a given constant.
Let W be the time when Mary completes her service. Compute the LST and the pdf
of W .
5.17 A machine needs a single critical component to operate properly. When that
component fails it is instantaneously replaced by a spare. There is a supply of k
spares. The lifetime of the original component and the k spares are iid exp(λ) random
variables. What is the smallest number of spares that must be provided to guarantee
that the system will last for at least T hours with probability α, a given constant?
5.18 A system consists of two components in series. The lifetime of the first com-
ponent is exp(λ) and that of the second component is Erl(µ, n). The system fails as
soon as any one of the two components fails. Assuming the components behave inde-
pendently of each other, compute the expected lifetime of the system.
5.21 Let Xi be a P(λi) random variable, and assume that {Xi, 1 ≤ i ≤ r} are independent. Show that X = X1 + X2 + · · · + Xr is a P(λ) random variable, where λ = λ1 + λ2 + · · · + λr.
5.23 Let Si be the ith event time in a PP(λ) {N (t), t ≥ 0}. Show that
E(SN(t)) = t − (1 − e^{−λt})/λ.
5.24 Let {N (t), t ≥ 0} be a PP(λ). Suppose a Bernoulli splitting mechanism tags
each event as type one with probability p, and type two with probability 1 − p. Let
Ni (t) be the number of type i events over (0, t]. Let Ti be the time until the first event
in the type i process. Compute the joint pdf of (T1 , T2 ).
5.25 Let {N (t), t ≥ 0} be a PP(λ). Compute the probability that N (t) is odd.
(i) The mean and variance of the number of customers who enter the bank during
an 8-hour day.
(ii) Probability that more than four customers enter the bank during an hour long
lunch break.
(iii) Probability that no customers arrive during the last 15 minutes of the day.
(iv) Correlation between the number of customers who enter the bank between 9 am
and 11 am, and those who enter between 10 am and noon.
5.29 Consider a one-way road where the cars form a PP(λ) with rate λ cars/sec.
The road is x feet wide. A pedestrian, who walks at a speed of u feet/sec, will cross
the road if and only if she is certain that no cars will cross the pedestrian crossing
while she is on it. Show that the expected time until she completes the crossing is (e^{λx/u} − 1)/λ.
5.30 A machine is up at time zero. It then alternates between two states: up or down.
(When an up machine fails it goes to the down state, and when a down machine is
repaired it moves to the up state.) Let Un be the nth up-duration, followed by the
nth down-duration, Dn . Suppose {Un , n ≥ 0} is a sequence of iid exp(λ) random
variables. The nth down-duration is proportional to the nth up-duration, i.e., there is
a constant c > 0 such that Dn = cUn .
5.31 Suppose customers arrive at a system according to PP(λ). Every customer stays
in the system for an exp(µ) amount of time and then leaves. Customers behave in-
dependently of each other. Show that the expected number of customers in the system at time t is (λ/µ)(1 − e^{−µt}).
5.32 Two individuals, 1 and 2, need kidney transplants. Without a transplant the
remaining lifetime of person i is an exp(µi ) random variable, the two lifetimes being
independent. Kidneys become available according to a Poisson process with rate λ.
The first available kidney is supposed to go to person 1 if she is still alive when the
kidney becomes available; else it will go to person 2. The next kidney will go to person 2, if he is still alive and has not already received a kidney. Compute the probability
that person i receives a new kidney (i = 1, 2).
5.33 It has been estimated that meteors entering the earth’s atmosphere over a cer-
tain region form a Poisson process with rate λ = 100 per hour. About 1 percent of
those are visible to the naked eye.
(i) What is the probability that a person is unlucky enough not to see any shooting
stars in a one hour period?
(ii) What is the probability that a person will see two shooting stars in one minute?
More than two in one minute?
5.34 A machine is subject to shocks that arrive according to a PP(λ). The strength of
each shock is a non-negative random variable with cdf G(·). If the shock has strength
x it causes the machine to fail with probability p(x). Assuming the successive shock
strengths are independent, what is the distribution of the lifetime of the machine?
5.35 Customers arrive at a bank according to a PP with rate of 10/hour. Forty percent
of them are men, and the rest are women. Given that 10 men have arrived during the first hour, what is the expected number of women who arrived in the first hour?
5.37 For the NPP of Computational Exercise 5.36, compute the distribution of N (t).
5.38 For the NPP of Computational Exercise 5.36, compute E(S1 |N (t) = n), for
0 ≤ t ≤ 2.
5.39 Redo Example 5.17 under the assumption that the arrivals to the library follow
an NPP(λ(·)).
5.40 A factory produces items one at a time according to a PP(λ). These items are
loaded onto trucks that leave the factory according to a PP(µ) and transport the items
to a warehouse. The truck capacity is large enough so that all the items produced after
the departure of a truck can be loaded onto the next truck. The travel time between the
factory and the warehouse is a constant and can be ignored. Let Z(t) be the number
of items received at the warehouse during (0, t]. Is {Z(t), t ≥ 0} a PP, an NPP, a CPP, or none of the above? Show that E(Z(t)) = λt − λ(1 − e^{−µt})/µ.
5.41 A customer makes deposits in a bank according to a PP with rate λd per week.
The sizes of successive deposits are iid random variables with mean τd and variance σd². Compute the mean and variance of the total amount deposited over [0, t].
Unknown to the customer, the customer’s spouse makes withdrawals from the same
account according to a PP with rate λw. The sizes of successive withdrawals are iid random variables with mean τw and variance σw². Assume that the deposit and withdrawal processes are independent of each other. Let Z(t) be the account balance at time t. Show that {Z(t), t ≥ 0} is a CPP. Compute the mean and variance of Z(t).
5.43 The lifetime of an item is a nonnegative continuous random variable with cdf
F (·) and pdf f (·). Assume that F (0) = 0 and F (t) < 1 for all t < ∞, with F (∞) =
1.
(i) When the item fails at age t, we perform a minimal repair, i.e., we instanta-
neously restore it to functional state, but its failure rate remains unchanged. Thus
after minimal repair of a failed item of age t, the item behaves like a functioning
item of age t. Suppose a new item is put in use at time 0. Let N (t) be the number
of failures up to time t. Is {N(t), t ≥ 0} a non-homogeneous Poisson process?
Why or why not? If it is, what is its rate function?
(ii) Suppose the expected cost of performing minimal repair on a failed item of age
t is c(t) dollars. Compute the expected total cost of repairs for this item up to a
given time T > 0.
5.44 Let N (t) be the total number of cameras sold by a store over (0, t] (years).
Suppose {N (t), t ≥ 0} is an NPP with rate function
λ(t) = 200(1 + e−t ), t ≥ 0.
Suppose the price of the camera is $350(1.08)^n in year n = 0, 1, 2, . . . . The interval from t = n to t = n + 1 is called year n. Thus the sales rate changes continuously with
time, but the price changes from year to year. Compute the mean and variance of the
sales revenue in the nth year.
5.5 Using Laplace transforms (LT) solve Equations 5.10 and 5.9 as follows: denote
the LT of pk (t) by p∗k (s). Using appropriate initial conditions show that
(λ + s)p∗0(s) = 1,
(λ + s)p∗k(s) = λp∗k−1(s), k ≥ 1.
Solve these to obtain
p∗k(s) = λ^k/(λ + s)^{k+1}, k ≥ 0.
Invert this to obtain Equation 5.11.
5.6 Let {Ni (t), t ≥ 0} (i = 1, 2) be two independent Poisson processes with rates
λ1 and λ2 , respectively. At time 0 a coin is flipped that turns up heads with probability
p. Define
N(t) = N1(t) if the coin turns up heads, and N(t) = N2(t) if the coin turns up tails.
(i) N (0) = 0,
(ii) N (s + t) − N (s) ∼ P (Λ(s + t) − Λ(s)).
5.8 Prove the “if” part of Theorem 5.20 by using the independence of increments of
{N (t), t ≥ 0} and that N (t + h) − N (t) ∼ P (Λ(t + h) − Λ(t)).
5.9 Let {Ni (t), t ≥ 0} (i = 1, 2) be two independent Poisson processes with rates
λ1 and λ2 , respectively. Let Ai be the number of events in the ith process before the
first event in the jth process (i ≠ j).
5.11 Prove Theorem 5.22 by following the proof of Theorem 5.14 on page 163.
5.12 Let {Ni (t), t ≥ 0} (i = 1, 2) be two independent Poisson processes with rates
λ1 and λ2 , respectively. Let T be a non-negative random variable that is independent
of both these processes. Let F be its cdf and φ be its LST. Let Bi = Ni (T ).
5.15 Let {Ni (t), t ≥ 0} be an NPP(λi (·)), (1 ≤ i ≤ r). Suppose they are indepen-
dent and define
N(t) = N1(t) + N2(t) + · · · + Nr(t), t ≥ 0.
Show that {N (t), t ≥ 0} is an N P P (λ(·)), where
λ(t) = λ1 (t) + λ2 (t) + · · · + λr (t), t ≥ 0.
5.16 Show that the process {R(t), t ≥ 0} of Theorem 5.18 is an NPP with rate
function λ(t) = λp(t), t ≥ 0.
5.21 Prove or disprove that the process {R(t), t ≥ 0} defined in Example 5.17 is a non-homogeneous Poisson process.
Proof: Follows from the general theorem, “there is no such thing as a free lunch.”
In this chapter we study a system with a countable state-space that can change its state at any point in time. Let Sn, n ≥ 1, be the time of the nth change of state or transition, Yn = Sn − Sn−1 (with S0 = 0) be the nth sojourn time, and Xn be the
state of the system after the nth transition. Define
N (t) = sup{n ≥ 0 : Sn ≤ t}, t ≥ 0.
Thus N (t) is the number of transitions the system undergoes over (0, t], and
{N (t), t ≥ 0} is a counting process generated by {Yn , n ≥ 1}. It has piecewise con-
stant sample paths that start with N (0) = 0 and jump up by +1 at times Sn , n ≥ 1.
This is called the regularity condition; throughout this chapter we always assume that it holds.
Now let X(0) = X0 be the initial state of the system, and X(t) be the state of
the system at time t. Under the regularity assumption of Equation 6.1, N (t) is well
defined for each t ≥ 0, and hence we can write
X(t) = XN (t) , t ≥ 0. (6.3)
The continuous time stochastic system {X(t), t ≥ 0} has piece-wise constant right-
continuous sample paths. A typical sample path of such a system is shown in Fig-
ure 6.1. We see that the {X(t), t ≥ 0} process is in the initial state X0 at time t = 0.
Figure 6.1 A typical sample path of the process {X(t), t ≥ 0}.
It stays there for a sojourn time Y1 and then jumps to state X1 . In general it stays
in state Xn for a duration given by Yn+1 and then jumps to state Xn+1 , n ≥ 0.
Note that if Xn = Xn+1 , there is no jump in the sample path of {X(t), t ≥ 0} at
time Sn+1. Thus, without loss of generality, we can assume that Xn+1 ≠ Xn for all
n ≥ 0.
In this chapter we study the case where {X(t), t ≥ 0} belongs to a particular class
of stochastic processes called the continuous-time Markov chains (CTMC), defined
below.
Example 6.3 Compound Poisson Process. Let {X(t), t ≥ 0} be a CPP with batch
arrival rate λ and iid integer valued batch sizes with common pmf
αk = P(Batch Size = k), k = 1, 2, 3, · · · .
Is {X(t), t ≥ 0} a CTMC?
See Section 5.6 for the definition of the CPP. The state-space of {X(t), t ≥ 0} is
{0, 1, 2, · · ·}. Suppose the process enters state i at time t, i.e., a batch arrives at time
t and brings the total number of arrivals over (0, t] to i. Then the process stays there
for an exp(λ) amount of time (time until the next batch arrives), independent of the
history, and then jumps to state j > i if the new batch is of size j − i. This, along
with the properties of the CPP, implies that Equation 6.4 is satisfied with qi = λ,
pi,j = αj−i , j > i. Hence {X(t), t ≥ 0} is a CTMC.
We shall describe many more examples of CTMCs in the next section. In the remainder of this section we study the general properties of a CTMC. The next theorem shows that a CTMC has the Markov property at all times.
Proof: Let X(t) be as defined in Equation 6.3. Suppose SN (s) = ν, i.e., the last tran-
sition at or before s takes place at ν ≤ s. Then X(s) = i implies that X(ν) = i, and
Y , the sojourn time in state i that started at ν, ends after s. Thus Y > s − ν, which
implies that the remaining sojourn time in state i at time s, given by Y − (s − ν),
is exp(qi). Also, Y depends on {X(u) : 0 ≤ u ≤ s} only via X(ν), which equals X(s). Also, the next state XN(s)+1 depends on the history only via X(ν) = X(s).
Thus all future evolution of the process after time s depends on the history only via
X(s) = i. This gives Equation 6.5. The same argument yields Equation 6.6 since the
qi and pij do not depend on when the process enters state i.
Proof: We shall prove the theorem by showing that all finite dimensional dis-
tributions of the CTMC are determined by a and {P (t), t ≥ 0}. Let n ≥ 1,
0 ≤ t1 ≤ t2 ≤ · · · ≤ tn and i1 , i2 , · · · in ∈ S be given. We have
P(X(t1) = i1, X(t2) = i2, · · · , X(tn) = in)
= Σ_{i0∈S} P(X(t1) = i1, X(t2) = i2, · · · , X(tn) = in | X(0) = i0) P(X(0) = i0)
= Σ_{i0∈S} ai0 P(X(t2) = i2, · · · , X(tn) = in | X(0) = i0, X(t1) = i1) · P(X(t1) = i1 | X(0) = i0)
= Σ_{i0∈S} ai0 pi0,i1(t1) P(X(t2) = i2, · · · , X(tn) = in | X(t1) = i1)
(from the Markov property of the CTMC)
= Σ_{i0∈S} ai0 pi0,i1(t1) P(X(t2 − t1) = i2, · · · , X(tn − t1) = in | X(0) = i1)
(from the time homogeneity of the CTMC).
Continuing in this fashion we get
P(X(t1) = i1, X(t2) = i2, · · · , X(tn) = in) = Σ_{i0∈S} ai0 pi0,i1(t1) pi1,i2(t2 − t1) · · · pin−1,in(tn − tn−1).
Now let X(t) be the state of the system at time t, and Sn be the time of the nth
transition. Let Xn = X(Sn +) and Yn = Sn − Sn−1 (with S0 = 0), as before. From
the distributional and independence assumptions about {Tik, k ≠ i}, we see that Ti of Equation 6.7 is an exp(qi) random variable, where
qi = Σ_{k≠i} qik, i ∈ S. (6.8)
If qi = 0 there are no events that will take the system out of state i, i.e., state i is an
absorbing state. In this case we define pii = 1, since this makes the state i absorbing
in the embedded DTMC as well. If qi > 0, we have
P(Xn+1 = j, Yn+1 > y | Xn = i, Yn, Xn−1, Yn−1, · · · , X1, Y1, X0)
= P(Tij = Ti, Ti > y)
= (qij/qi) e^{−qi y}, i, j ∈ S, i ≠ j, y ≥ 0.
Now define
pij = qij/qi, i, j ∈ S, j ≠ i. (6.9)
From the above derivation it is clear that {X(t), t ≥ 0} is a CTMC with param-
eters {qi } and {pij } as defined in Equations 6.8 and 6.9. Thus we can describe a
CTMC by specifying qij , called the transition rate from state i to j, for all the pairs
EXAMPLES 195
(i, j), with i ≠ j. Note that the quantity qii is as yet undefined. For strictly technical reasons, we define
qii = −Σ_{k≠i} qik = −qi, i ∈ S.
It is convenient to put all the qij ’s in a matrix form:
Q = [qij ]i,j∈S .
This is called the infinitesimal generator or simply the generator matrix of the CTMC.
It will become clear later that the seemingly arbitrary definition of qii makes it easy
to write many equations of interest in matrix form. An important property of the
generator matrix is that its row sums are zero, i.e.,
Σ_{j∈S} qij = 0, i ∈ S.
The generator matrix of a CTMC plays the same role as the one-step transition prob-
ability matrix of a DTMC.
In the examples that follow we do not explicitly verify the independence assump-
tions. However, we urge the students to do so for themselves.
Example 6.4 Two State Machine. Let X(t) be the state of the machine of Exam-
ple 6.1. The state-space of {X(t), t ≥ 0} is {0, 1}. In state 0 we have E01 = machine
repair completes, and T01 ∼ exp(λ). Hence q01 = λ. Similarly, in state 1 we have
E10 = machine fails, and T10 ∼ exp(µ). Hence q10 = µ. Thus {X(t), t ≥ 0} is a
CTMC with generator matrix
Q = [ −λ    λ ]
    [  µ   −µ ],   (6.10)
Figure 6.2 The rate diagram for the two-state machine.
The state-space of {X(t), t ≥ 0} is {0, 1, 2}. We analyze the states one by one.
State 0. X(t) = 0 implies that both the machines are down at time t. When either
one of them gets repaired the state changes to 1. Hence we have E01 = one of the two
failed machines completes repair. Since the remaining repair times are exponential
due to memoryless property, T01 is the minimum of two independent exp(λ) random
variables. Hence T01 ∼ exp(2λ). Hence
q0 = 2λ, q01 = 2λ.
State 1. X(t) = 1 implies that one machine is up and the other is down at time
t. Now there are two triggering events: E10 = the working machine fails, and E12
= the failed machine completes repair. If E10 occurs before E12 , the process moves
to state 0, else it moves to state 2. Since the remaining repair time and life time
are exponential due to memoryless property, we see that T10 ∼ exp(µ) and T12 ∼
exp(λ). Hence
q1 = λ + µ, q12 = λ, q10 = µ.
State 2. X(t) = 2 implies that both the machines are up at time t. When either one
of them fails the state changes to 1. Hence we have E21 = one of the two working
machines fails. Since the remaining life times are exponential due to memoryless
property, T21 is the minimum of two independent exp(µ) random variables. Hence
T21 ∼ exp(2µ). Hence
q2 = 2µ, q21 = 2µ.
EXAMPLES 197
Thus {X(t), t ≥ 0} is a CTMC with generator matrix
Q = [ −2λ       2λ        0  ]
    [   µ   −(λ + µ)      λ  ]
    [   0       2µ      −2µ  ],
and the rate diagram as shown in Figure 6.3.
Figure 6.3 The rate diagram for the two-machine, two-repairpersons workshop.
Figure 6.4 The rate diagram for the two-machine, one-repairperson workshop.
Example 6.8 Pure Birth Process. An immediate extension of the PP(λ) is the pure
birth process, which is a CTMC on S = {0, 1, 2, · · ·} with the following generator
matrix:
Q = [ −λ0   λ0                  ]
    [      −λ1   λ1             ]
    [           −λ2   λ2        ]
    [                 ·  ·  ·   ].
The rate diagram is shown in Figure 6.6. Such a process spends an exp(λi ) amount
of time in state i and then jumps to state i + 1. In biological systems, X(t) represents
the number of organisms in a colony at time t, and the transition from i to i + 1
represents birth. Hence the name “pure birth process.” The parameter λi is called
the birth rate in state i. The PP(λ) is a special case of a pure birth process with all
birth rates given by λi = λ. We illustrate with several situations where pure birth
processes can arise.
Yule Process. Let X(t) be the number of amoebae at time t in a colony of amoebae.
Suppose each amoeba lives for an exp(λ) amount of time and then splits into two.
All amoebae in the colony behave independently of each other. Suppose X(t) = i.
When one of the i amoebae splits, the process moves to state i + 1. Hence Ei,i+1 =
one of the i living amoebae splits. This will happen after an exp(iλ) amount of time,
i.e., the minimum of i iid exp(λ) random variables. Thus Ti,i+1 ∼ exp(iλ). Thus
EXAMPLES 199
we get qi,i+1 = iλ, and qii = −iλ. Hence {X(t), t ≥ 0} is a pure birth process with
birth rates λi = iλ, i ≥ 0. Such a process is called the Yule process.
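Simulating a Yule process is a good exercise in the mechanics just described: the sojourn time in state i is exp(iλ), after which the state jumps to i + 1. The Python sketch below (ours; parameters illustrative) also checks the known fact that E(X(t)) = e^{λt} when X(0) = 1:

```python
import numpy as np

rng = np.random.default_rng(8)

def yule_final_state(lam, x0, t_end):
    """State of a Yule process at t_end: sojourn time in state i is exp(i*lam)."""
    t, i = 0.0, x0
    while True:
        t += rng.exponential(1 / (i * lam))  # time until one of i amoebae splits
        if t > t_end:
            return i
        i += 1

# With X(0) = 1, E(X(t)) = exp(lam * t) for a Yule process
finals = [yule_final_state(0.5, 1, 4.0) for _ in range(20_000)]
print(np.mean(finals), np.exp(0.5 * 4.0))
```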
Yule Process with Immigration. Consider the above colony of amoebae. Suppose
amoebae arrive at this colony from outside according to a PP(θ). All amoebae in
the colony, whether native or immigrants, behave independently and identically. Let
X(t) be the number of amoebae in the colony at time t. As before, the state-space of
{X(t), t ≥ 0} is S = {0, 1, 2, · · ·}. Suppose X(t) = i. The system moves to state
i + 1 if one of the existing amoebae splits, which happens after an exp(iλ) amount of time, or if an amoeba arrives from outside, which happens after an exp(θ) amount of time. Thus Ti,i+1 ∼ exp(iλ + θ). Hence qi,i+1 = iλ + θ and qii = −(iλ + θ). Hence
{X(t), t ≥ 0} is a pure birth process with birth rates λi = iλ + θ, i ≥ 0.
Maintenance. Suppose a machine is brand new at time zero. It fails after an
exp(θ0 ) amount of time, and is repaired instantaneously. The lifetime of a machine
that has undergone n ≥ 1 repairs is exp(θn ) random variable, and is independent
of how old the machine is. Let X(t) be the number of repairs the machine has un-
dergone over time (0, t]. Then {X(t), t ≥ 0} is a pure birth process with birth rates
λi = θi , i ≥ 0.
Figure 6.7 The rate diagram for a pure death process.
Such a process spends an exp(µi) amount of time in state i ≥ 1 and then jumps to state i − 1. In biological systems, X(t) represents the number of organisms in a colony at time t, and the transition from i to i − 1
represents death. Hence the name “pure death process.” The parameter µi is called
the death rate in state i. Note that state 0 is absorbing, with q00 = 0. Thus once the
colony has no members left in it, it stays extinct forever. We discuss a special case
below.
Figure 6.8 The rate diagram for a birth and death process.
In biological systems, X(t) represents the number of organisms in a colony at time t, and the transition from i to i − 1 represents
death and that from i to i + 1 represents birth. Hence the name “birth and death
process.” The parameter µi is called the death rate in state i and λi is called the
birth rate in state i. We define µ0 = 0, since death cannot occur when there are no
organisms to die. A birth and death process spends exp(λi + µi ) amount of time in
state i, and then jumps to state i − 1 with probability µi /(λi + µi ), or to state i + 1
with probability λi /(λi + µi ). A very large number of queuing models give rise to
birth and death processes and we shall study them in detail in Chapter 7. Here we
give a few examples.
The state-space is S = {0, 1, 2, · · ·}. Suppose X(t) = 0, that is, there are no
customers in the system at time t. The triggering event is E01 = arrival of a new
customer. From the properties of the PP we have T01 ∼ exp(λ). Thus q01 = λ. Now
suppose X(t) = i > 0. Then one customer is in service and i − 1 are waiting at
time t. Now there are two triggering events: Ei,i+1 = arrival of a new customer,
and Ei,i−1 = departure of the customer in service. From the memoryless property of
exponentials, we see that Ti,i+1 ∼ exp(λ) and Ti,i−1 ∼ exp(µ). Hence qi,i+1 = λ
and qi,i−1 = µ for i ≥ 1. Hence {X(t), t ≥ 0} is a birth and death process with birth
parameters λi = λ for i ≥ 0, and death parameters µi = µ for i ≥ 1.
The state-space is S = {0, 1, 2, · · ·}. Suppose X(t) = 0, that is, there are no
organisms in the colony at time t. Then X(u) = 0 for all u ≥ t. Thus state 0
is absorbing. Now suppose X(t) = i > 0. Now there are two triggering events:
Ei,i+1 = one of the i organisms gives birth to an individual, and Ei,i−1 = one of
the i organisms dies. From the results about the superposition of Poisson processes,
we see that Ti,i+1 ∼ exp(iλ) and from the memoryless properties of exponential
random variables we get Ti,i−1 ∼ exp(iµ). Hence qi,i+1 = iλ and qi,i−1 = iµ.
Hence {X(t), t ≥ 0} is a birth and death process with birth parameters λi = iλ and
death parameters µi = iµ for i ≥ 0.
Example 6.15 Retrial Queue. We describe a simple retrial queue here. Consider
a single-server system where the customers arrive according to a PP(λ) and have iid
exp(µ) service times. Suppose the system capacity is 1. Thus if an arriving customer
finds the server idle, he immediately starts getting served. On the other hand, if an
arriving customer finds the server busy, he goes away (we say he joins an orbit) and
tries his luck again after an exp(θ) amount of time independent of everything else.
He persists this way until he is served. All customers behave in this fashion. We
model this as a CTMC.
Suppose (X(t), R(t)) = (1, k), k ≥ 0, i.e., the server is busy and there are k cus-
tomers in the orbit at time t. There are two possible events that can lead to a state
change: E(1,k),(1,k+1) = an external arrival occurs, and E(1,k),(0,k) = the customer
in service departs. Notice that if a customer in orbit conducts a retrial, he simply re-
joins the orbit, and hence there is no change of state. We have T(1,k),(1,k+1) ∼ exp(λ)
and T(1,k),(0,k) ∼ exp(µ). Thus q(1,k),(1,k+1) = λ and q(1,k),(0,k) = µ.
Next suppose (X(t), R(t)) = (0, k), k ≥ 1, i.e., the server is idle and there are
k customers in the orbit at time t. There are two possible events that can lead to a
state change: E(0,k),(1,k) = an external arrival occurs, and E(0,k),(1,k−1) = one of
the k customers in the orbit conducts a retrial, and finding the server idle, joins ser-
vice. We have T(0,k),(1,k) ∼ exp(λ) and T(0,k),(1,k−1) , being the minimum of k iid
exp(θ) random variables, is an exp(kθ) random variable. Thus q(0,k),(1,k) = λ and
q(0,k),(1,k−1) = kθ.
Finally, suppose (X(t), R(t)) = (0, 0). There is only one triggering event in this
state, namely, arrival of an external customer, leading to the new state (1, 0). Hence
T(0,0),(1,0) ∼ exp(λ). Thus {(X(t), R(t)), t ≥ 0} is a CTMC. The rate diagram is
shown in Figure 6.9.
Figure 6.9 The rate diagram for the retrial queue.
Proof: Part (i) is obvious since pij (t)’s are conditional probabilities. To show Equa-
tion 6.11, we assume the regularity condition in 6.1 and use the representation in
Equation 6.3 to get
Σ_{j∈S} pij(t) = Σ_{j∈S} P(XN(t) = j | X0 = i)
= Σ_{j∈S} Σ_{n=0}^{∞} P(XN(t) = j | X0 = i, N(t) = n) P(N(t) = n | X0 = i)
= Σ_{j∈S} Σ_{n=0}^{∞} P(Xn = j | X0 = i, N(t) = n) P(N(t) = n | X0 = i)
= Σ_{n=0}^{∞} Σ_{j∈S} P(Xn = j | X0 = i, N(t) = n) P(N(t) = n | X0 = i)
= Σ_{n=0}^{∞} P(N(t) = n | X0 = i)
= P(N(t) < ∞ | X0 = i) = 1.
Here the second to last equality is a result of the fact that Xn ∈ S for all n ≥ 0, and
the last equality is the result of Equation 6.2.
To show Equation 6.12, we condition on X(s) to get
pij(s + t) = P(X(s + t) = j | X(0) = i)
= Σ_{k∈S} P(X(s + t) = j | X(0) = i, X(s) = k) P(X(s) = k | X(0) = i)
= Σ_{k∈S} P(X(s + t) = j | X(s) = k) pik(s)
(from the Markov property, Equation 6.5)
= Σ_{k∈S} pik(s) P(X(t) = j | X(0) = k)
(from time-homogeneity, Equation 6.6)
= Σ_{k∈S} pik(s) pkj(t).
Theorem 6.4 Forward and Backward Equations. Let P (t) be the transition prob-
ability matrix of a CTMC with state-space S = {0, 1, 2, · · ·} and generator matrix
Q. Then P(t) is differentiable with respect to t and satisfies
d
P (t) = P ′ (t) = QP (t), (Backward Equations) (6.13)
dt
and
d
P (t) = P ′ (t) = P (t)Q, (Forward Equations) (6.14)
dt
with initial condition
P (0) = I,
where I is an identity matrix of appropriate size.
= Σ_{j∈S, j≠i} qi pij e^{−qj h} (1 − e^{−(qi−qj)h})/(qi − qj) (assuming qi ≠ qj)
= qi h + o(h),
where the last equality follows after a bit of algebra. Similar analysis yields
P(N (h) ≥ 2|X0 = i) = o(h).
Using these we get
pii (h) = P(X(h) = i|X(0) = i)
= P(XN (h) = i|X0 = i)
= P(XN (h) = i|X0 = i, N (h) = 0)P(N (h) = 0|X0 = i)
+P(XN (h) = i|X0 = i, N (h) = 1)P(N (h) = 1|X0 = i)
+P(XN (h) = i|X0 = i, N (h) ≥ 2)P(N (h) ≥ 2|X0 = i)
= 1 · (1 + qii h + o(h)) + 0 · (qi h + o(h)) + o(h)
= 1 + qii h + o(h).
Also, for j ≠ i, we get
pij (h) = P(X(h) = j|X(0) = i)
= P(XN (h) = j|X0 = i)
= P(XN (h) = j|X0 = i, N (h) = 0)P(N (h) = 0|X0 = i)
+P(XN (h) = j|X0 = i, N (h) = 1)P(N (h) = 1|X0 = i)
+P(XN (h) = j|X0 = i, N (h) ≥ 2)P(N (h) ≥ 2|X0 = i)
= 0 · (1 + qii h + o(h)) + P(X1 = j|X0 = i)(qi h + o(h)) + o(h)
= pij (qi h + o(h)) + o(h) = qij h + o(h).
This proves Equation 6.15. Using this we get
pij(t + h) = Σ_{k∈S} pik(h) pkj(t)
(Chapman-Kolmogorov Equations 6.12)
= Σ_{k∈S} (δik + qik h + o(h)) pkj(t) (Equation 6.15).
Hence
pij(t + h) − pij(t) = Σ_{k∈S} qik h pkj(t) + o(h).
Dividing by h yields
(pij(t + h) − pij(t))/h = Σ_{k∈S} qik pkj(t) + o(h)/h.
Letting h → 0 we see that the right hand side has a limit, and hence pij(t) is differentiable with respect to t and satisfies
p′ij(t) = Σ_{k∈S} qik pkj(t).
Writing this in matrix form we get the backward equations 6.13. The forward
equations follow similarly by interchanging t and h in applying the Chapman-
Kolmogorov equations.
Thus one may solve forward or backward equations to obtain P (t). Once we have
the matrix P (t), we have the distribution of X(t). We illustrate by means of an
example.
Example 6.16 Two-State Machine. Consider the CTMC of Example 6.4 with the
rate matrix given in Equation 6.10. The forward equations are
p′00 (t) = −λp00 (t) + µp01 (t), p00 (0) = 1,
TRANSIENT BEHAVIOR: OCCUPANCY TIMES 207
p′01 (t) = λp00 (t) − µp01 (t), p01 (0) = 0,
p′10 (t) = −λp10 (t) + µp11 (t), p10 (0) = 0,
p′11 (t) = λp10 (t) − µp11 (t), p11 (0) = 1,
and the backward equations are
p′00 (t) = −λp00 (t) + λp10 (t), p00 (0) = 1,
p′10 (t) = µp00 (t) − µp10 (t), p10 (0) = 0,
p′01 (t) = −λp01 (t) + λp11 (t), p01 (0) = 0,
p′11 (t) = µp10 (t) − µp11 (t), p11 (0) = 1.
Note that we do not need to solve four equations in four unknowns simultaneously, but only two equations in two unknowns at a time. We solve the first two forward
equations here. We have
p00 (t) + p01 (t) = P(X(t) = 0 or 1|X(0) = 0) = 1. (6.16)
Substituting for p01 (t) in the first forward equation we get
p′00 (t) = −λp00 (t) + µ(1 − p00 (t)) = −(λ + µ)p00 (t) + µ.
This is a non-homogeneous first order differential equation with constant coeffi-
cients, and can be solved by standard methods (see Appendix I) to get
p_{00}(t) = (λ/(λ+µ)) e^{−(λ+µ)t} + µ/(λ+µ).
Using Equation 6.16 we get
p_{01}(t) = (λ/(λ+µ)) (1 − e^{−(λ+µ)t}).
Similarly we can solve the last two forward equations to get p_{10}(t) and p_{11}(t). Combining all these solutions we get

P(t) = [ (λ/(λ+µ)) e^{−(λ+µ)t} + µ/(λ+µ)    (λ/(λ+µ)) (1 − e^{−(λ+µ)t}) ]
       [ (µ/(λ+µ)) (1 − e^{−(λ+µ)t})        (µ/(λ+µ)) e^{−(λ+µ)t} + λ/(λ+µ) ].   (6.17)
It is easy to check that P (t) satisfies the Chapman-Kolmogorov Equations 6.12 and
that P (t) and P (s) commute. Figure 6.10 shows graphically various pij (t) functions.
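The forward equations above are easy to check numerically. The following sketch (with hypothetical rates λ = 1, µ = 2; any positive values work) integrates the forward equations P′(t) = P(t)Q with scipy and compares the result against the closed-form matrix in Equation 6.17:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 1.0, 2.0                       # assumed rates for illustration
Q = np.array([[-lam, lam], [mu, -mu]])   # generator from Equation 6.10

def forward(t, p_flat):
    # forward equations (6.14): P'(t) = P(t) Q, flattened for the ODE solver
    return (p_flat.reshape(2, 2) @ Q).ravel()

t_end = 1.5
sol = solve_ivp(forward, (0.0, t_end), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
P_num = sol.y[:, -1].reshape(2, 2)

s, e = lam + mu, np.exp(-(lam + mu) * t_end)
P_exact = np.array([[lam/s*e + mu/s, lam/s*(1 - e)],
                    [mu/s*(1 - e),  mu/s*e + lam/s]])   # Equation 6.17
print(np.allclose(P_num, P_exact, atol=1e-8))           # True
```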
Following the steps in the study of DTMCs, we now study the occupancy times in
the CTMCs. Let {X(t), t ≥ 0} be a CTMC on state-space S with generator matrix
Q. Let Vj (t) be the amount of time the CTMC spends in state j over (0, t]. Thus
Vj (0) = 0 for all j ∈ S. Define
Mij (t) = E(Vj (t)|X(0) = i), i, j ∈ S, t ≥ 0. (6.18)
Figure 6.10 The transition probability functions for the two-state CTMC: p_{00}(t) and p_{10}(t) approach µ/(λ+µ), while p_{01}(t) and p_{11}(t) approach λ/(λ+µ).
Mij (t) is called the occupancy time of state j up to time t starting from state i. Define
the occupancy times matrix as
M (t) = [Mij (t)].
The next theorem shows how to compute the occupancy times matrix M (t).
In this section we present several methods of computing the transition matrix P (t) of
a CTMC with finite state-space S = {1, 2, · · · , N } and generator matrix Q. We shall
illustrate the methods with two examples: the two-state machine of Example 6.4
which we analyze algebraically, and the two-machine workshop of Example 6.6
which we analyze numerically.
Note that e^A is an N × N square matrix. One can show that the series on the right hand side converges absolutely, and hence e^A is well defined. Note that if A is a diagonal matrix

A = diag[a_1, a_2, · · ·, a_N],

then

e^A = diag[e^{a_1}, e^{a_2}, · · ·, e^{a_N}].

In particular e^0 = I, where 0 is a square matrix with all elements equal to zero. The
next theorem gives the main result. Following the discussion in Section 2.4, we say
that Q is diagonalizable if there exist a diagonal matrix D and an invertible matrix
X such that
Q = XDX −1 . (6.21)
We saw in that section how to obtain the D and X if Q is diagonalizable.
Proof: We have

e^{Qt} = I + Σ_{n=1}^∞ (Qt)^n/n!,   t ≥ 0. (6.24)

Since the infinite series converges uniformly, we can take derivatives term by term to get

(d/dt) e^{Qt} = Q e^{Qt} = e^{Qt} Q.

Thus e^{Qt} satisfies the forward and backward equations. Since there is a unique solution to those equations in the finite state-space case, we get Equation 6.22. Now, from Theorem 2.7 we get

Q^n = X D^n X^{−1},   n ≥ 0.

Substituting in Equation 6.24, and factoring out X on the left and X^{−1} on the right, we get

e^{Qt} = X ( I + Σ_{n=1}^∞ (Dt)^n/n! ) X^{−1},   t ≥ 0,

which yields Equation 6.23. The last representation follows from a similar representation in Equation 2.32 on page 38.
The next theorem is analogous to Theorem 2.8 on page 39.
Proof: Define

P = I + Q/q.

(We shall see this matrix again in the sub-section on uniformization.) It is easy to verify that P is a stochastic matrix. Now it can be seen that the N eigenvalues of P are given by 1 + λ_i/q, (1 ≤ i ≤ N). From Theorem 2.8 we see that P has at least one eigenvalue equal to one, thus at least one 1 + λ_i/q equals 1. Thus at least one λ_i is zero. Also, all eigenvalues of P lie in the complex plane within the unit circle centered at zero. Thus we must have

|1 + λ_i/q| ≤ 1.

Thus each λ_i must lie in the complex plane within a circle of radius q with center at −q, which is the second part of the theorem.
Many matrix-oriented software packages provide built-in ways to compute the matrix exponential function. For example, Matlab provides the function expm: we can compute P(t) by the Matlab statement P(t) = expm(Q ∗ t). Although simple to use for reasonably sized matrices with numerical entries, this approach does not provide any insight into the behavior of P(t) as a function of t. We get this insight from the above theorem, which gives P(t) as an explicit function of t. We illustrate with examples.
Example 6.18 Two-State Machine. Consider the two-state CTMC with generator
matrix given in Equation 6.10. We have Q = XDX^{−1}, where

X = [ 1   λ/(λ+µ)  ]     D = [ 0   0       ]     X^{−1} = [ µ/(λ+µ)   λ/(λ+µ) ]
    [ 1  −µ/(λ+µ)  ],        [ 0  −(λ+µ)  ],              [ 1         −1       ].

Substituting in Equation 6.23 we get

P(t) = X [ 1   0            ] X^{−1}.
         [ 0   e^{−(λ+µ)t}  ]
Straightforward calculations show that this reduces to the transition probability ma-
trix given in Equation 6.17.
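As a numerical companion to Example 6.18, the sketch below (again with hypothetical rates λ = 1, µ = 2) computes P(t) both by scipy's built-in matrix exponential and by the spectral representation of Equation 6.23; the two agree to machine precision.

```python
import numpy as np
from scipy.linalg import expm

lam, mu, t = 1.0, 2.0, 1.5
Q = np.array([[-lam, lam], [mu, -mu]])

P_expm = expm(Q * t)                     # direct matrix exponential, Equation 6.22

d, X = np.linalg.eig(Q)                  # numerical diagonalization Q = X D X^{-1}
P_spec = X @ np.diag(np.exp(d * t)) @ np.linalg.inv(X)   # Equation 6.23

print(np.allclose(P_expm, np.real(P_spec)))              # True
```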
Define the Laplace transform (LT) (see Appendix F) of pij (t) as follows:
p*_{ij}(s) = ∫_0^∞ e^{−st} p_{ij}(t) dt,   Re(s) > 0,
where Re(s) is the real part of the complex number s. Defining the LT of a matrix as
the matrix of the LTs of its elements, we write
P*(s) = [p*_{ij}(s)]_{i,j∈S}.
The next theorem gives the main result.
In matrix form, this yields that the LT of the derivative matrix P′(t) is given by sP*(s) − I. Now taking the LT on both sides of Equation 6.13 or 6.14, and using the initial condition P(0) = I, we get

sP*(s) − I = QP*(s) = P*(s)Q. (6.25)

This can be rearranged to get

(sI − Q)P*(s) = P*(s)(sI − Q) = I.

Since sI − Q is invertible for Re(s) > 0, the theorem follows.
Example 6.20 Two-State Machine. Consider the two-state CTMC with generator
matrix given in Equation 6.10. From Theorem 6.8 we have

P*(s) = 1/(s(s+λ+µ)) [ s+µ   λ   ]
                      [ µ     s+λ ].

Using partial fraction expansion, we get, for example,

p*_{00}(s) = (s+µ)/(s(s+λ+µ)) = (µ/(λ+µ))·(1/s) + (λ/(λ+µ))·(1/(s+λ+µ)).

Using the table of LTs (see Appendix F), we can invert the above transform to get

p_{00}(t) = (λ/(λ+µ)) e^{−(λ+µ)t} + µ/(λ+µ).
We can compute other transition probabilities in a similar fashion to obtain the tran-
sition probability matrix given in Equation 6.17.
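The transform result can also be sanity-checked numerically: for a fixed s > 0, the matrix (sI − Q)^{−1} of Theorem 6.8 should match a numerically integrated Laplace transform of P(t). A rough sketch with hypothetical rates (the integration grid and truncation point are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

lam, mu, s = 1.0, 2.0, 0.7
Q = np.array([[-lam, lam], [mu, -mu]])

P_star = np.linalg.inv(s * np.eye(2) - Q)          # Theorem 6.8

ts = np.linspace(0.0, 60.0, 40001)                 # truncated integration grid
vals = np.array([np.exp(-s * t) * expm(Q * t) for t in ts])
P_star_num = np.trapz(vals, ts, axis=0)            # crude numerical LT of P(t)

print(np.allclose(P_star, P_star_num, atol=1e-3))  # True up to truncation error
```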
6.5.3 Uniformization
and define

P = I + Q/λ. (6.28)

Then P(t), the transition probability matrix of {X(t), t ≥ 0}, is given by

P(t) = Σ_{n=0}^∞ e^{−λt} ((λt)^n/n!) P^n,   t ≥ 0. (6.29)
Proof: First, Equation 6.28 implies that P is a stochastic matrix. Now let {X_n, n ≥ 0} be a DTMC with one-step transition matrix P and let {N(t), t ≥ 0} be a PP(λ) that is independent of the DTMC. From Theorem 6.10 it follows that {X_{N(t)}, t ≥ 0} is a CTMC with generator matrix Q. Hence, for i, j ∈ S and t ≥ 0, we get

p_{ij}(t) = P(X(t) = j | X(0) = i)
= P(X_{N(t)} = j | X_0 = i)
= Σ_{n=0}^∞ P(X_{N(t)} = j | N(t) = n, X_0 = i) P(N(t) = n | X_0 = i)
= Σ_{n=0}^∞ P(X_n = j | N(t) = n, X_0 = i) e^{−λt} (λt)^n/n!
= Σ_{n=0}^∞ e^{−λt} ((λt)^n/n!) [P^n]_{ij}.
Thus P_M(t) is an ε lower bound on P(t) for all 0 ≤ t ≤ τ. In our experience uniformization has proved to be an extremely stable and efficient numerical procedure for computing P(t). A subtle numerical difficulty arises when λt is so large that e^{−λt} is numerically computed to be zero. In this case we need more ingenious ways of computing the series. We shall not go into the details here.
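To make the procedure concrete, here is a minimal sketch of the uniformization algorithm for a finite generator matrix; the truncation point is chosen so that the neglected Poisson tail mass is below a tolerance eps, and all rates in the example are hypothetical:

```python
import numpy as np

def uniformization(Q, t, eps=1e-12):
    """Approximate P(t) by the truncated uniformization series (6.29).

    lam is taken as max_i q_i and P = I + Q/lam (Equation 6.28). The series
    is truncated once the accumulated Poisson weights exceed 1 - eps, which
    bounds the truncation error in each entry by eps. Note the caveat from
    the text: for very large lam*t the weight e^{-lam*t} underflows and a
    more careful implementation is needed.
    """
    n = Q.shape[0]
    lam = np.max(-np.diag(Q))         # uniformization rate
    P = np.eye(n) + Q / lam           # Equation 6.28
    w = np.exp(-lam * t)              # Poisson(lam*t) weight for n = 0
    term = np.eye(n)                  # P^0
    result = w * term
    total, k = w, 0
    while total < 1.0 - eps:
        k += 1
        term = term @ P               # P^k
        w *= lam * t / k              # next Poisson weight
        result += w * term
        total += w
    return result

# three-state check with hypothetical rates
Q = np.array([[-1.0, 1.0, 0.0], [2.0, -3.0, 1.0], [0.0, 2.0, -2.0]])
print(uniformization(Q, 0.8).sum(axis=1))   # rows sum to ~1
```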
Example 6.22 Two-State Machine. Consider the two-state CTMC with generator
matrix given in Equation 6.10. To avoid confusing notation, we use q instead of λ in
Equation 6.28 to get
q = max(−q00 , −q11 ) = max(λ, µ).
Note that any q larger than the right hand side would do. Thus instead of choosing
q = max(λ, µ), we use q = λ + µ. Then we get
P = I + Q/q = 1/(λ+µ) [ µ   λ ]
                       [ µ   λ ].

Clearly, we have P^0 = I and P^n = P for n ≥ 1. Thus we have

P(t) = e^{−(λ+µ)t} ( I + P Σ_{n=1}^∞ ((λ+µ)t)^n/n! )
     = e^{−(λ+µ)t} I + (1 − e^{−(λ+µ)t}) P,

which agrees with Equation 6.17.
It should be noted that the method of uniformization works even if the CTMC has infinite state-space, as long as the λ used in Equation 6.28 is finite. Such CTMCs are called uniformizable. It is easy to see that uniformizable CTMCs are automatically regular, since the number of transitions they make over (0, t] is bounded above by a Poisson random variable with mean λt, which is finite with probability one.
Finally, one can always solve the backward or forward equations by standard nu-
merical methods of solving differential equations. We refer the readers to any book
on differential equations for more details.
In this section we discuss the computation of the transition matrix P (t) for a CTMC
{X(t), t ≥ 0} on infinite state-space S = {0, 1, 2, · · ·}. This can be done only when
the CTMC has highly specialized structure. In our experience the transform methods
are most useful in such cases. We illustrate with several examples.
Example 6.25 Pure Birth Process. Let {X(t), t ≥ 0} be a pure birth process as
described in Example 6.8 with birth parameters λi in state i ≥ 0, and assume that
X(0) = 0. As before we write
pi (t) = p0i (t) = P(X(t) = i|X(0) = 0), i ≥ 0, t ≥ 0.
The forward equations are
p′0 (t) = −λ0 p0 (t),
p′i (t) = −λi pi (t) + λi−1 pi−1 (t), i ≥ 1
with initial conditions
p0 (0) = 1, pi (0) = 0, i ≥ 1.
Taking Laplace transforms of the above differential equations, we get
sp∗0 (s) − 1 = −λ0 p∗0 (s),
sp∗i (s) = −λi p∗i (s) + λi−1 p∗i−1 (s), i ≥ 1.
These can be solved recursively to obtain
p*_i(s) = (1/λ_i) Π_{k=0}^{i} λ_k/(s+λ_k),   i ≥ 0.
Consider the case when all the λi ’s are distinct. Then we can use partial fractions
expansion easily to get
p_i(t) = Σ_{k=0}^{i} A_{ki} λ_k e^{−λ_k t},   i ≥ 0,

where

A_{ki} = (1/λ_i) Π_{r=0, r≠k}^{i} λ_r/(λ_r − λ_k).
As a special case, suppose λi = iλ, (i ≥ 0), but with X(0) = 1. It can be shown that
(see Computational Exercise 6.4) in this case we get
P(X(t) = i|X(0) = 1) = e−λt (1 − e−λt )i−1 .
Thus given X(0) = 1, X(t) is geometrically distributed with parameter e−λt .
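This geometric form is easy to corroborate by simulation. A quick sketch (the rate λ = 1, time t = 1, and replication count are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, reps = 1.0, 1.0, 100_000

def yule_at(t):
    """Simulate X(t) for the pure birth process with lam_i = i*lam, X(0) = 1."""
    x, clock = 1, 0.0
    while True:
        clock += rng.exponential(1.0 / (x * lam))   # sojourn ~ exp(i*lam) in state i
        if clock > t:
            return x
        x += 1

samples = np.array([yule_at(t) for _ in range(reps)])
p = np.exp(-lam * t)                  # geometric parameter e^{-lam*t}
print(samples.mean(), 1.0 / p)        # mean of geometric(p) is 1/p ~ 2.718
print((samples == 1).mean(), p)       # P(X(t) = 1) = e^{-lam*t} ~ 0.368
```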
Example 6.26 Pure Death Process. Let {X(t), t ≥ 0} be a pure death process as
described in Example 6.9 with death parameters µi in state i ≥ 1, and assume that
X(0) = N . We want to compute
pi (t) = pN i (t) = P(X(t) = i|X(0) = N ), 0 ≤ i ≤ N, t ≥ 0.
Using the fact that pN +1 (t) = pN,N +1 (t) = 0, we can write the forward equations
as
p′N (t) = −µN pN (t),
p′i (t) = −µi pi (t) + µi+1 pi+1 (t), 0 ≤ i ≤ N − 1,
with initial conditions
pN (0) = 1, pi (0) = 0, 0 ≤ i ≤ N − 1.
Taking Laplace transforms on both sides of the above differential equations, we get
sp∗N (s) − 1 = −µN p∗N (s),
sp∗i (s) = −µi p∗i (s) + µi+1 p∗i+1 (s), 0 ≤ i < N.
These can be solved recursively to obtain
p*_i(s) = (1/µ_i) Π_{k=i}^{N} µ_k/(s+µ_k),   0 ≤ i ≤ N.
When all the µ_i's are distinct we can use partial fractions expansion easily to get

p_i(t) = Σ_{k=i}^{N} B_{ki} µ_k e^{−µ_k t},   0 ≤ i ≤ N,

where

B_{ki} = (1/µ_i) Π_{r=i, r≠k}^{N} µ_r/(µ_r − µ_k).
As a special case, suppose µi = iµ, (i ≥ 0), with X(0) = N . It can be shown that
(see Computational Exercise 6.5) given X(0) = N , X(t) is binomially distributed
with parameters N and e−µt .
Example 6.27 Linear Growth Model. Let X(t) be the birth and death process of
Example 6.13 with birth parameters λi = iλ and death parameters µi = iµ, i ≥ 0.
Suppose X(0) = 1. We shall compute
pi (t) = p1i (t) = P(X(t) = i|X(0) = 1), i ≥ 0, t ≥ 0.
The forward equations are
p′0 (t) = µp1 (t),
p′i (t) = (i − 1)λpi−1 (t) − i(λ + µ)pi (t) + (i + 1)µpi+1 (t), i ≥ 1,
with initial conditions
p1 (0) = 1, pi (0) = 0, i 6= 1.
Now define the generating function
p(z, t) = Σ_{i=0}^∞ z^i p_i(t).
Multiplying the differential equation for pi (t) by z i and summing over all i from 0
to ∞ we get
Σ_{i=0}^∞ z^i p′_i(t) = Σ_{i=1}^∞ (i−1)λ z^i p_{i−1}(t) − Σ_{i=1}^∞ i(λ+µ) z^i p_i(t) + Σ_{i=0}^∞ (i+1)µ z^i p_{i+1}(t),
which reduces to
∂p(z,t)/∂t = (λz² − (λ+µ)z + µ) ∂p(z,t)/∂z.
This equation can be written in the canonical form as
∂p(z,t)/∂t − a(z) ∂p(z,t)/∂z = 0,
where
a(z) = λz² − (λ+µ)z + µ = (z−1)(λz−µ).
Thus we have a linear first order partial differential equation, which can be solved
by the method of characteristic functions (see Chaudhry and Templeton (1983)). We
first form the total differential equations
dt/1 = dz/(−a(z)) = dp/0.
The first equation can be integrated to yield

∫ dt = ∫ dz/(−a(z)),

or

t − (1/(λ−µ)) ln((λz−µ)/(z−1)) = c,

where c is an arbitrary constant. The last equation gives p = constant.
Hence the general solution is

p(z, t) = f̂( t − (1/(λ−µ)) ln((λz−µ)/(z−1)) )
        = f( (λ−µ)t − ln((λz−µ)/(z−1)) ), (6.30)
where f is a function to be determined by the boundary condition
p(z, 0) = Σ_{i=0}^∞ z^i p_i(0) = z.
Hence

f( ln((z−1)/(λz−µ)) ) = z. (6.31)
Now write

u = ln((z−1)/(λz−µ))

and invert it to get

z = (µe^u − 1)/(λe^u − 1).
Substituting in Equation 6.31 we get

f(u) = (µe^u − 1)/(λe^u − 1).
Substituting in Equation 6.30 we get

p(z, t) = [ µ exp{(λ−µ)t − ln((λz−µ)/(z−1))} − 1 ] / [ λ exp{(λ−µ)t − ln((λz−µ)/(z−1))} − 1 ],

which can be simplified to obtain

p(z, t) = [ µ(1 − e^{(λ−µ)t}) − (λ − µe^{(λ−µ)t})z ] / [ (µ − λe^{(λ−µ)t}) − λ(1 − e^{(λ−µ)t})z ].
The above expression can be expanded in a power series in z, and then the coefficient of z^i will give us p_i(t). Doing this, and using ρ = λ/µ, we get

p_0(t) = (1 − e^{(λ−µ)t}) / (1 − ρe^{(λ−µ)t}),

p_i(t) = ρ^{i−1} (1−ρ) e^{(λ−µ)t} (1 − e^{(λ−µ)t})^{i−1} / (1 − ρe^{(λ−µ)t})^{i+1},   i ≥ 1.
This completes the solution for the case X(0) = 1. The case X(0) = i ≥ 2 can
be analyzed by treating the linear growth process as the sum of i independent linear
growth processes, each starting in state 1. This example shows that even for highly
structured CTMCs computation of P (t) is a formidable task.
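Once derived, though, the coefficients are easy to evaluate. The sketch below (hypothetical λ = 1, µ = 2) checks that the p_i(t) sum to one and that the mean Σ i p_i(t) matches the formula e^{(λ−µ)t} derived in the following paragraph:

```python
import numpy as np

def linear_growth_pi(i, t, lam, mu):
    """p_{1i}(t) for the linear growth model started at X(0) = 1."""
    e = np.exp((lam - mu) * t)
    rho = lam / mu
    if i == 0:
        return (1 - e) / (1 - rho * e)
    return rho**(i - 1) * (1 - rho) * e * (1 - e)**(i - 1) / (1 - rho * e)**(i + 1)

lam, mu, t = 1.0, 2.0, 0.7
probs = [linear_growth_pi(i, t, lam, mu) for i in range(200)]
print(abs(sum(probs) - 1.0) < 1e-12)            # True: the p_i(t) sum to one
print(sum(i * p for i, p in enumerate(probs)),
      np.exp((lam - mu) * t))                   # both ~ 0.4966
```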
Let m_i(t) = E(X(t) | X(0) = i) = Σ_{k=0}^∞ k p_{ik}(t) be the mean population size at time t. We have

m′_i(t) = Σ_{k=0}^∞ k p′_{ik}(t).
By using the forward differential equations for pik (t) and carrying out the above sum
we get
m′i (t) = (λ − µ)mi (t).
Using the initial condition, mi (0) = i, we can solve the above equation to get
mi (t) = ie(λ−µ)t .
Thus if λ, the birth rate per individual, is greater than µ, the death rate per individual,
then the mean population size explodes as t → ∞. If it is less, the mean population
size exponentially reduces to zero. If the two rates are equal, the mean population
size is constant. All these conclusions confirm our intuition.
We follow the developments of DTMCs and first study the first passage times in the
CTMCs before we study their limiting behavior. Specifically, let {X(t), t ≥ 0} be a
CTMC on S = {0, 1, 2, · · ·} with generator matrix Q = [qij ], and define
T = min{t ≥ 0 : X(t) = 0}. (6.32)
We study the random variable T in this section. Since the first passage times in a CTMC have a lot of similarity to the first passage times in the embedded DTMC, many of the results follow from similar results in Chapter 3. Hence we do not spend as much time on this topic here.
Define
rij (t) = P(T > t, X(t) = j|X(0) = i), i, j ≥ 1, t ≥ 0.
Then the complementary cdf of T , conditioned on X(0) = i, is given by
r_i(t) = P(T > t | X(0) = i) = Σ_{j=1}^∞ r_{ij}(t).
Theorem 6.11 The matrix function R(t) satisfies the following set of differential
equations:
R′ (t) = M R(t) = R(t)M, (6.33)
with the initial condition
R(0) = I.
Proof: Follows along the same lines as the proof of the backward and forward equa-
tions in Theorem 6.4.
Thus we can use the methods described in the previous two sections to compute
R(t), and hence the complementary cdf of T .
The next theorem gives a more explicit expression for the cdf of T when the CTMC
has a finite state-space.
Theorem 6.12 Suppose the CTMC has state-space {0, 1, · · ·, K} and initial distribution

α = [α_1, α_2, · · ·, α_K],

with Σ_{i=1}^K α_i = 1. Then

P(T ≤ t) = 1 − α e^{Mt} 1, (6.34)

where 1 is a column vector of ones.
Proof: Since M is finite and R(0) = I, we can use Theorem 6.6 to obtain

R(t) = e^{Mt}.

Now,

P(T ≤ t) = 1 − P(T > t)
= 1 − Σ_{i=1}^K α_i P(T > t | X(0) = i)
= 1 − Σ_{i=1}^K Σ_{j=1}^K α_i P(T > t, X(t) = j | X(0) = i)
= 1 − Σ_{i=1}^K Σ_{j=1}^K α_i r_{ij}(t)
= 1 − αR(t)1
= 1 − α e^{Mt} 1,

as desired.
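For a finite state-space this formula is directly computable. A small sketch with a hypothetical three-state M (rows sum to less than zero, the deficit being the rate into the absorbing state 0) and initial distribution α:

```python
import numpy as np
from scipy.linalg import expm

M = np.array([[-3.0,  1.0,  1.0],
              [ 2.0, -4.0,  1.0],
              [ 0.0,  3.0, -5.0]])   # hypothetical sub-generator over states {1,2,3}
alpha = np.array([0.5, 0.3, 0.2])    # hypothetical initial distribution
one = np.ones(3)

def cdf_T(t):
    return 1.0 - alpha @ expm(M * t) @ one   # Equation 6.34

for t in (0.5, 1.0, 5.0):
    print(t, cdf_T(t))                       # increases to 1 as t grows
```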
Define
vi = P(T = ∞|X(0) = i), i ≥ 1. (6.35)
Thus vi is the probability that the CTMC never visits state zero starting from state i.
The next theorem gives the main result.
Theorem 6.13 Let v = [v1 , v2 , · · ·]′ . Then v is given by the largest solution bounded
above by 1 to
M v = 0. (6.36)
Proof: Let {Xn , n ≥ 0} be the embedded DTMC in the CTMC {X(t), t ≥ 0}. The
transition probability matrix P of the embedded DTMC is related to the generator
matrix Q of the CTMC by Equation 6.9. We have
vi = P({X(t), t ≥ 0} never visits state 0|X(0) = i)
= P({Xn , n ≥ 0} never visits state 0|X0 = i).
Stating Equation 3.8 on page 60 in scalar form we get

v_i = Σ_{j=1}^∞ p_{ij} v_j,   i ≥ 1,

where p_{ij}'s are the transition probabilities in the embedded DTMC. Substituting from Equation 6.9 we get

v_i = Σ_{j=1, j≠i}^∞ (q_{ij}/q_i) v_j.

Multiplying on both sides by q_i and using q_i = −q_{ii} we get

Σ_{j=1}^∞ q_{ij} v_j = 0,

which, in matrix form, yields Equation 6.36. The maximality of the solution follows from Theorem 3.2.
In this section we develop the methods of computing the moments of T . The next
theorem gives a method of computing
mi = E(T |X(0) = i), i ≥ 1. (6.37)
Obviously, mi = ∞ if vi > 0. Hence we consider the case vi = 0 in the next
theorem.
Proof: We use the first step analysis for CTMCs, which is analogous to the first step
analysis in the DTMCs as explained in Section 3.1. Let X(0) = i > 0 and Y1 be the
first sojourn time in state i. Then Y1 ∼ exp(qi ). Now condition on X1 = X(Y1 ) = j.
Clearly, if j = 0, T = Y1 . If j > 0, then T = Y1 + T ′ , where T ′ has the same
distribution as T conditioned on X(0) = j. Hence we get

m_i = E(T | X(0) = i)
= Σ_{j=0}^∞ E(T | X(0) = i, X(Y_1) = j) P(X(Y_1) = j | X(0) = i)
= E(Y_1 | X(0) = i, X(Y_1) = 0) P(X(Y_1) = 0 | X(0) = i)
  + Σ_{j=1}^∞ E(Y_1 | X(0) = i, X(Y_1) = j) P(X(Y_1) = j | X(0) = i)
  + Σ_{j=1}^∞ E(T′ | X(0) = i, X(Y_1) = j) P(X(Y_1) = j | X(0) = i)
= E(Y_1 | X(0) = i) + Σ_{j=1}^∞ E(T′ | X(0) = j) P(X(Y_1) = j | X(0) = i)
= 1/q_i + Σ_{j=1, j≠i}^∞ (q_{ij}/q_i) m_j,

which, in matrix form, yields Equation 6.38. The minimality of the solution follows from Theorem 3.3.
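In the finite state-space case the scalar equations above are, in matrix form, M m + 1 = 0 (this is Equation 6.38 with k = 1 in the notation of Theorem 6.16 below), so the m_i come from a single linear solve. A sketch with the same hypothetical M as before:

```python
import numpy as np

# Solve M m = -1 for a hypothetical three-state M, i.e., Q with the row
# and column of the target state 0 deleted.
M = np.array([[-3.0,  1.0,  1.0],
              [ 2.0, -4.0,  1.0],
              [ 0.0,  3.0, -5.0]])
m = np.linalg.solve(M, -np.ones(3))   # m[i-1] = E(T | X(0) = i)
print(m)                              # all entries positive and finite
```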
We next derive the Laplace Stieltjes transform (LST) of T . For s ≥ 0, define
φi (s) = E(e−sT |X(0) = i), i ≥ 1,
and
φ(s) = [φ1 (s), φ2 (s), · · ·]′ .
The main result is given in the next theorem.
Proof: Use the terminology and the first step analysis as described in the proof of Theorem 6.14. Conditioning on (X_1, Y_1) we get

φ_i(s) = E(e^{−sT} | X(0) = i)
= Σ_{j=0}^∞ p_{ij} E(e^{−sY_1} | X(0) = i) E(e^{−sT′} | X(0) = j)
= (q_i/(s+q_i)) [ Σ_{j=1}^∞ p_{ij} φ_j(s) + p_{i0} ].
Theorem 6.16 Suppose v = 0. Then m(k) = [m1 (k), m2 (k), · · ·]′ is given by the
smallest non-negative solution to
M m(k) + km(k − 1) = 0, k ≥ 1, (6.40)
with m(0) = 1.
Proof: Follows from taking successive derivatives of Equation 6.39 and using

m_i(k) = (−1)^k (d^k/ds^k) φ_i(s) |_{s=0}.
We end this section with an example.
Example 6.28 Birth and Death Processes. Let {X(t), t ≥ 0} be the birth and
death process of Example 6.10 with birth parameters λi , i ≥ 0, and death parameters
µi , i ≥ 1. Let T be as defined in Equation 6.32.
Let us first compute the quantities {vi } as defined in Equation 6.35. The Equa-
tion 6.36 can be written in scalar form as
µi vi−1 − (λi + µi )vi + λi vi+1 = 0, i ≥ 1,
with boundary condition v_0 = 0. The above equation can be written as

v_i = (µ_i/(λ_i+µ_i)) v_{i−1} + (λ_i/(λ_i+µ_i)) v_{i+1},   i ≥ 1.
But these equations are identical to Equations 3.14 on page 63 with

p_i = λ_i/(λ_i+µ_i),   q_i = µ_i/(λ_i+µ_i). (6.41)
Using the results of Example 3.9 on page 63 we get

v_i = (Σ_{j=0}^{i−1} α_j) / (Σ_{j=0}^∞ α_j)   if Σ_{j=0}^∞ α_j < ∞,
v_i = 0                                        if Σ_{j=0}^∞ α_j = ∞,     (6.42)

where α_0 = 1 and

α_i = (µ_1 µ_2 ··· µ_i)/(λ_1 λ_2 ··· λ_i),   i ≥ 1. (6.43)
Next we compute m_i, as defined in Equation 6.37, under the assumption that Σ_{j=0}^∞ α_j = ∞, so that v_i = 0 for all i ≥ 0. The Equation 6.38 can be written in scalar form as

µ_i m_{i−1} − (λ_i + µ_i) m_i + λ_i m_{i+1} + 1 = 0,   i ≥ 1,

with boundary condition m_0 = 0. The above equation can be written as

m_i = 1/(λ_i+µ_i) + (µ_i/(λ_i+µ_i)) m_{i−1} + (λ_i/(λ_i+µ_i)) m_{i+1},   i ≥ 1.

But these equations are similar to Equations 3.24 on page 71 with p_i and q_i as defined in Equation 6.41. The only difference is that we have 1/(λ_i+µ_i) on the right hand side instead of 1. However, we can solve these equations using the same procedure as in Example 3.15 to get

m_i = Σ_{k=0}^{i−1} α_k Σ_{j=k+1}^∞ 1/(λ_j α_j).
In the next several sections we study the limiting behavior of P(t) as t → ∞. Since the row sums of P(t) are 1, it follows that the row sums of M(t) are t. Hence we study the limiting behavior of M(t)/t as t → ∞. Note that [M(t)]_{ij}/t can be interpreted as the fraction of the time spent by the CTMC in state j during (0, t] starting from state i. Hence studying this limit makes practical sense. We begin with some examples illustrating the types of limiting behavior that can arise.
Example 6.29 Two-State Example. Consider the CTMC of Example 6.16 with the P(t) matrix as given in Equation 6.17. Hence we get

lim_{t→∞} P(t) = [ µ/(λ+µ)   λ/(λ+µ) ]
                 [ µ/(λ+µ)   λ/(λ+µ) ].

Thus the limit of P(t) exists and its row sums are one. As we had observed in the DTMCs, the rows of the limiting matrix are the same, implying that the limiting distribution of X(t) does not depend upon the initial distribution of the CTMC.

Next, the occupancy matrix M(t) for this CTMC is given in Equation 6.20. Hence, we get

lim_{t→∞} M(t)/t = [ µ/(λ+µ)   λ/(λ+µ) ]
                   [ µ/(λ+µ)   λ/(λ+µ) ].

Thus the limit of M(t)/t in this example is the same as that of P(t).
Example 6.30 Three-state CTMC. Let {X(t), t ≥ 0} be a CTMC on state-space {0, 1, 2} with generator matrix

Q = [ 0   0        0 ]
    [ µ  −(λ+µ)   λ ]
    [ 0   0        0 ].

Direct calculations show that

P(t) = [ 1                              0               0                            ]
       [ (µ/(λ+µ))(1 − e^{−(λ+µ)t})    e^{−(λ+µ)t}    (λ/(λ+µ))(1 − e^{−(λ+µ)t}) ]
       [ 0                              0               1                            ],

and

M(t) = [ t                                  0                               0                                 ]
       [ (µ/(λ+µ))t − p_{10}(t)/(λ+µ)     (1/(λ+µ))(1 − e^{−(λ+µ)t})     (λ/(λ+µ))t − p_{12}(t)/(λ+µ)    ]
       [ 0                                  0                               t                                 ].

Hence we get

lim_{t→∞} P(t) = lim_{t→∞} M(t)/t = [ 1          0   0         ]
                                     [ µ/(λ+µ)   0   λ/(λ+µ)  ]
                                     [ 0          0   1         ].

Thus the limiting matrix has distinct rows, implying that the limiting behavior depends on the initial state. Furthermore, the row sums are one.
Example 6.31 Linear Growth Model. Consider the CTMC of Example 6.27. Consider the case λ > µ. In this case we can use the transition probabilities derived there to obtain

lim_{t→∞} p_{ij}(t) = (µ/λ)^i if j = 0,  and 0 if j > 0.

Thus in this case the limits of p_{ij}(t) exist but depend on the initial state i. It is more tedious to show that M_{ij}(t)/t has the same limit as p_{ij}(t). Furthermore, the row sums of the limiting matrix are less than 1.

These examples illustrate three cases:
Case 1: Limit of P (t) exists, has identical rows, and each row sums to one.
Case 2: Limit of P (t) exists, does not have identical rows, each row sums to one.
Case 3: Limit of P (t) exists, but the rows may not sum to one.
We have also observed that the limit of M(t)/t always exists, and it equals the limit of P(t). We have not seen any examples where the limit of P(t) displays oscillatory behavior as in the case of periodic DTMCs. In the next two sections we develop the necessary theory to help us classify the CTMCs so we can understand their limiting behavior better.
Clearly, there is a close connection between the limiting behavior of a CTMC with
state-space S and generator matrix Q and that of the corresponding embedded
DTMC on the same state-space S and one-step transition matrix P that is related
to Q as in Equation 6.9. Thus it seems reasonable that we will need to follow the
same path as we did when we studied the limiting behavior of the DTMCs in Chap-
ter 4.
6.9.1 Irreducibility
Example 6.32 The CTMC of Example 6.1 is irreducible if λ > 0 and µ > 0. The
CTMC of Example 6.30 is reducible, with {0} and {2} as two closed communicat-
ing classes, and {1} as a communicating class that is not closed. The linear growth
process of Example 6.13, with λ > 0 and µ > 0, has one closed communicating
class, namely {0}. The set {1, 2, · · ·} is a communicating class that is not closed.
The CTMC is reducible.
We see that the issue of periodicity does not arise in the CTMCs, since if a CTMC can go from state i to j at all, it can do so at any time. Thus, for the two-state CTMC of Example 6.1, the embedded DTMC is periodic. However, the P(t) matrix for the two-state CTMC does not show a periodic behavior.
Following the developments in the DTMCs, we now introduce the concepts of re-
currence and transience for the CTMCs. Let Y1 be the first sojourn time in a CTMC
{X(t), t ≥ 0} and define the first passage time (in a slightly modified form)
T̃i = inf{t ≥ Y1 : X(t) = i}, i ∈ S. (6.45)
This is a well defined random variable if Y1 < ∞ with probability 1, i.e., if qi > 0.
Note that if qi = 0, i is an absorbing state in the CTMC. Let
ũi = P(T̃i < ∞|X(0) = i), (6.46)
and
m̃i = E(T̃i |X(0) = i). (6.47)
When ũi < 1, m̃i = ∞. However, as in the DTMCs, m̃i can be infinite even if
ũi = 1. With this in mind, we make the following definition, which is analogous to
the corresponding definitions in the case of the DTMCs.
It follows from the above theorem that recurrence and transience of states in the
CTMCs are class properties, just as they are in the DTMCs. This enables us to call
an irreducible CTMC transient or recurrent if all its states are transient or recurrent,
respectively. Since an irreducible CTMC is recurrent (transient) if and only if the
corresponding embedded DTMC is recurrent (transient), we can use the criteria de-
veloped in Chapter 4 to establish the transience or recurrence of the CTMCs.
Next we define positive and null recurrence.
Definition 6.5 Null and Positive Recurrence. A recurrent state i with qi > 0 is
said to be
Proof: Since the CTMC is irreducible and has at least two states, we see that none
of the states can be absorbing. Thus qi > 0 for all i ∈ S. We use the first step analysis
as in the proof of Theorem 6.14 to derive a set of equations for
µij = E(T̃j |X(0) = i), i, j ∈ S.
Let X(0) = i and Y1 be the first sojourn time in state i. Then Y1 ∼ exp(qi ). Now
condition on X1 = X(Y1 ) = k. Clearly, if k = j, T̃j = Y1 . If k 6= j, then T̃j =
Y1 + T ′ , where T ′ has the same distribution as T̃j conditioned on X(0) = k. Hence
we get

µ_{ij} = E(T̃_j | X(0) = i)
= Σ_{k∈S} E(T̃_j | X(0) = i, X(Y_1) = k) P(X(Y_1) = k | X(0) = i)
= E(Y_1 | X(0) = i, X(Y_1) = j) P(X(Y_1) = j | X(0) = i)
  + Σ_{k∈S−{j}} E(Y_1 | X(0) = i, X(Y_1) = k) P(X(Y_1) = k | X(0) = i)
  + Σ_{k∈S−{j}} E(T′ | X(0) = i, X(Y_1) = k) P(X(Y_1) = k | X(0) = i)
= E(Y_1 | X(0) = i) + Σ_{k∈S−{j}} p_{ik} E(T′ | X(0) = k)
= 1/q_i + Σ_{k∈S−{j}} p_{ik} µ_{kj}. (6.49)
since

π_k = Σ_{i∈S} π_i p_{ik}.

Hence, subtracting Σ_{k∈S−{j}} π_k µ_{kj} from both sides, we get

π_j µ_{jj} = Σ_{i∈S} π_i/q_i,

which yields

µ_{jj} = (1/π_j) Σ_{i∈S} (π_i/q_i). (6.50)
Since πj > 0, we see that m̃j = µjj < ∞ if and only if Equation 6.48 is satisfied.
The theorem then follows from the definition of positive recurrence.
Theorem 6.18 also implies that null and positive recurrence in the CTMCs are class properties, just as in the DTMCs. We shall use this theorem to construct examples of null recurrent CTMCs with positive recurrent embedded DTMCs, and vice versa. However, note that if the CTMC is positive recurrent but the embedded DTMC is null recurrent, the CTMC cannot be regular, since this situation would imply that the CTMC makes an infinite number of transitions in a finite amount of time. Thus in our applications a positive recurrent CTMC will have a positive recurrent embedded DTMC.
In this section we derive the main results regarding the limiting distribution of an
irreducible CTMC {X(t), t ≥ 0} on state-space S = {0, 1, 2 · · ·} and generator
matrix Q. We treat the three types of irreducible CTMCs: transient, null recurrent,
and positive recurrent. We treat the case of reducible CTMCs in the next section.
Following the development in Section 4.5, we start with the statement of the con-
tinuous renewal theorem, which is the continuous analogue of its discrete version in
Theorem 4.14.
Proof: As in the proof of Theorem 4.14, the hard part is to prove that g(t) has a limit
as t → ∞. We refer the reader to Karlin and Taylor (1975) or Kohlas (1982) for the
details. Here we assume that the limit exists and show that it is given as stated in
Equation 6.54.
Define the Laplace transforms (LTs) as follows (see Appendix F). Here s is a complex number with Re(s) ≥ 0:

f*(s) = ∫_0^∞ e^{−st} f(t) dt,
g*(s) = ∫_0^∞ e^{−st} g(t) dt,
h*(s) = ∫_0^∞ e^{−st} h(t) dt.

Multiplying Equation 6.53 by e^{−st} and integrating from 0 to ∞, and using the properties of LTs, we get

g*(s) = h*(s) + g*(s) f*(s),

which yields

g*(s) = h*(s)/(1 − f*(s)). (6.55)
If lim_{t→∞} g(t) exists, we know that it is given by

lim_{t→∞} g(t) = lim_{s→0} s g*(s).
Theorem 6.21 Renewal Equation for p_{jj}(t). Suppose q_j > 0 and let f_j be the density of T̃_j, as defined in Equation 6.45. Then p_{jj}(t) satisfies the continuous renewal equation

p_{jj}(t) = e^{−q_j t} + ∫_0^t f_j(u) p_{jj}(t−u) du,   t ≥ 0. (6.56)
Proof: Suppose X(0) = j and T̃_j = u. If u ≤ t, then the CTMC starts all over again in state j at time u, due to the Markov property. If u > t, then X(t) = j if and only if Y_1 > t. Hence we have

P(X(t) = j | X(0) = j, T̃_j = u) = P(Y_1 > t | T̃_j = u, X(0) = j)   if u > t,
                                 = P(X(t−u) = j | X(0) = j)        if u ≤ t.

Thus we get

p_{jj}(t) = P(X(t) = j | X(0) = j)
= ∫_0^∞ P(X(t) = j | X(0) = j, T̃_j = u) f_j(u) du
= ∫_0^t P(X(t−u) = j | X(0) = j) f_j(u) du + ∫_t^∞ P(Y_1 > t | T̃_j = u, X(0) = j) f_j(u) du
= ∫_0^∞ P(Y_1 > t | T̃_j = u, X(0) = j) f_j(u) du + ∫_0^t p_{jj}(t−u) f_j(u) du
  (since P(Y_1 > t | T̃_j = u, X(0) = j) = 0 if u < t)
= P(Y_1 > t | X(0) = j) + ∫_0^t p_{jj}(t−u) f_j(u) du
= e^{−q_j t} + ∫_0^t f_j(u) p_{jj}(t−u) du.
Theorem 6.22 If q_j = 0, then

lim_{t→∞} p_{jj}(t) = 1.

If state j is recurrent with q_j > 0, then

lim_{t→∞} p_{jj}(t) = 1/(q_j m̃_j), (6.57)

where m̃_j is as defined in Equation 6.47.
Proof: If q_j = 0, we have p_{jj}(t) = 1 for all t ≥ 0. Hence the first equation in the theorem follows. If q_j > 0, we see from Theorem 6.21 that p_{jj}(t) satisfies the continuous renewal equation. Since j is recurrent, we have

∫_0^∞ f_j(t) dt = 1,

and

∫_0^∞ |h(t)| dt = ∫_0^∞ e^{−q_j t} dt = 1/q_j.

We also have

µ = ∫_0^∞ t f_j(t) dt = E(T̃_j | X_0 = j) = m̃_j.

Hence we can apply Theorem 6.20 to get

lim_{t→∞} p_{jj}(t) = (1/m̃_j) ∫_0^∞ e^{−q_j t} dt = 1/(q_j m̃_j).

This proves the theorem.
6.10.3 The Null Recurrent Case
Now we study the limiting behavior of an irreducible null recurrent CTMC. The main
result is given by
Theorem 6.23 The Null Recurrent CTMC. For an irreducible null recurrent
CTMC
lim_{t→∞} p_{ij}(t) = 0.
Now let i ≠ j, and let f_{ij}(·) be the conditional probability density of T̃_j given X(0) = i. Using the argument in the proof of Theorem 6.21 we see that

p_{ij}(t) = ∫_0^t f_{ij}(u) p_{jj}(t−u) du.

Since the CTMC is irreducible and recurrent it follows that

P(T̃_j < ∞ | X(0) = i) = 1.

Hence

lim_{t→∞} p_{ij}(t) = 0,   i, j ∈ S.
Now we study the limiting behavior of an irreducible positive recurrent CTMC. Such CTMCs are also called ergodic. If the state-space is a singleton, say S = {1}, then q_1 = 0, and p_{11}(t) = 1 for all t ≥ 0. Hence the limiting behavior is trivial in this case. So suppose that S has at least two elements. Then q_j must be strictly positive and m̃_j < ∞ for all j ∈ S. Hence, from Theorem 6.22 we get

lim_{t→∞} p_{jj}(t) = 1/(q_j m̃_j) > 0,   j ∈ S.
The next theorem yields the limiting behavior of p_{ij}(t) as t → ∞.

Theorem 6.24 The Positive Recurrent CTMC. For an irreducible positive recurrent CTMC

lim_{t→∞} p_{ij}(t) = p_j > 0,   i, j ∈ S, (6.58)

where p = [p_j, j ∈ S] is given by the unique solution to

pQ = 0, (6.59)
Σ_{j∈S} p_j = 1. (6.60)
Proof: The theorem is true if the CTMC has a single state. Hence assume that the CTMC has at least two states. Then q_j > 0 for all j ∈ S. Equation 6.57 implies that Equation 6.58 holds when i = j with p_j = 1/(q_j m̃_j) > 0. Hence assume i ≠ j. Following the proof of Theorem 6.21 we get

p_{ij}(t) = ∫_0^t f_{ij}(u) p_{jj}(t−u) du,   t ≥ 0,

where f_{ij}(·) is the density of T̃_j conditioned on X(0) = i. Since i ↔ j and the CTMC is positive recurrent, it follows that

∫_0^∞ f_{ij}(u) du = P(X(t) = j for some t ≥ 0 | X(0) = i) = 1.

Now let 0 < ε < 1 be given. Thus it is possible to pick an N such that

∫_N^∞ f_{ij}(u) du ≤ ε/2,

and

|p_{jj}(t) − p_j| ≤ ε/2,   for all t ≥ N.

Then, for t ≥ 2N, we get

|p_{ij}(t) − p_j| = | ∫_0^t f_{ij}(u) p_{jj}(t−u) du − p_j |
= | ∫_0^{t−N} f_{ij}(u)(p_{jj}(t−u) − p_j) du + ∫_{t−N}^t f_{ij}(u)(p_{jj}(t−u) − p_j) du − ∫_t^∞ f_{ij}(u) p_j du |
≤ ∫_0^{t−N} f_{ij}(u) |p_{jj}(t−u) − p_j| du + ∫_{t−N}^t f_{ij}(u) |p_{jj}(t−u) − p_j| du + p_j ∫_t^∞ f_{ij}(u) du
≤ (ε/2) ∫_0^{t−N} f_{ij}(u) du + ∫_{t−N}^∞ f_{ij}(u) du
≤ ε/2 + ε/2 = ε.
This proves Equation 6.58. Next we derive Equations 6.59 and 6.60. For any finite set A ⊂ S, we have

Σ_{j∈A} p_{ij}(t) ≤ Σ_{j∈S} p_{ij}(t) = 1.

Letting t → ∞ on the left hand side, we get

Σ_{j∈A} p_j ≤ 1.

Since the above equation holds for any finite A, we must have

Σ_{j∈S} p_j ≤ 1. (6.61)
Now let s → ∞ on both sides of the Chapman-Kolmogorov equations 6.12, namely p_{ij}(s+t) = Σ_{k∈S} p_{ik}(s) p_{kj}(t). The interchange of the limit and the sum on the right hand side is justified due to the bounded convergence theorem. Hence we get

p_j = Σ_{i∈S} p_i p_{ij}(t). (6.62)
Replacing t by t + h yields

p_j = Σ_{i∈S} p_i p_{ij}(t+h). (6.63)

Subtracting Equation 6.62 from Equation 6.63, dividing by h, and letting h → 0, we get, in matrix form,

pP′(t) = 0.

Using Equation 6.13, the above equation reduces to

pQP(t) = 0.

Substituting t = 0 in the above equation and using P(0) = I, we get Equation 6.59.
Again, letting t → ∞ in Equation 6.62 and using the bounded convergence theorem to interchange the sum and the limit on the right hand side we get

p_j = ( Σ_{i∈S} p_i ) p_j.

But p_j > 0. Hence we must have Σ_{i∈S} p_i = 1, yielding Equation 6.60.
Now suppose {p′_i, i ∈ S} is another solution to Equations 6.59 and 6.60. Using the same steps as before we get

p′_j = Σ_{i∈S} p′_i p_{ij}(t),   t ≥ 0.

Letting t → ∞ we get

p′_j = ( Σ_{i∈S} p′_i ) p_j = p_j.
Proof: Let p be a positive solution to the Equations 6.64 and 6.65. Then it is straightforward to verify that π_j = p_j q_j solves the balance equations for the embedded DTMC. Substituting in Equation 6.50 we get

µ_{jj} = (1/π_j) Σ_{i∈S} (π_i/q_i) = (1/(p_j q_j)) Σ_{i∈S} p_i = 1/(p_j q_j) < ∞.

Hence state j is positive recurrent, and hence the entire CTMC is positive recurrent. The uniqueness was already proved in Theorem 6.24.
The vector p = [pj ] is called the limiting distribution or the steady state distribu-
tion of the CTMC. It is also the stationary distribution or the invariant distribution of
the CTMC, since if p is the pmf of X(0) then it is also the pmf of X(t) for all t ≥ 0.
See Conceptual Exercise 6.5.
The next theorem shows the relationship between the stationary distribution of the
CTMC and that of the corresponding embedded DTMC.
Theorem 6.26 Let {X(t), t ≥ 0} be an irreducible positive recurrent CTMC on state-space S with generator Q and stationary distribution {p_j, j ∈ S}. Let {X_n, n ≥ 0} be the corresponding embedded DTMC with transition probability matrix P and stationary distribution {π_j, j ∈ S}. Then

p_j = (π_j/q_j) / ( Σ_{i∈S} π_i/q_i ),   j ∈ S. (6.66)
Proof: Follows by substituting pij = qij /qi in π = πP and verifying that it reduces
to pQ = 0. We leave the details to the reader.
The last theorem of this section shows that the limiting occupancy distribution of
a CTMC is always the same as its limiting distribution.
Proof: Follows from Equation 6.19 and the fact that pij (t) ≥ 0 and has a limit as
t → ∞.
Thus, if Equations 6.64 and 6.65 have a solution p, it represents the limiting dis-
tribution, the stationary distribution as well as the limiting occupancy distribution
of the CTMC. Equations 6.64 are called the balance equations. Sometimes they are
called global balance equations to distinguish them from the local balance equations
to be developed in Section 6.14. It is more instructive to write them in a scalar form:
Σ_{j∈S, j≠i} p_i q_{ij} = Σ_{j∈S, j≠i} p_j q_{ji}.
The left hand side equals pi qi , the rate at which transitions take the system out of
state i, and the right hand side equals the rate at which transitions take the system
into state i. In steady state the two rates must be equal. In practice it is easier to write
these “rate out = rate in” equations by looking at the rate diagram, rather than by
using the matrix equation 6.64. We illustrate the theory developed in this section by
several examples below.
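For a finite irreducible CTMC, the balance and normalizing equations form a linear system that any numerical package solves directly; since the balance equations are linearly dependent, one of them is replaced by the normalizing equation. A sketch:

```python
import numpy as np

def limiting_distribution(Q):
    """Solve pQ = 0, sum(p) = 1 for a finite irreducible generator matrix Q.

    The balance equations are linearly dependent, so one of them is
    replaced by the normalizing equation before solving.
    """
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0                 # replace last balance equation by sum(p) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

lam, mu = 1.0, 2.0                 # hypothetical rates
Q = np.array([[-lam, lam], [mu, -mu]])
print(limiting_distribution(Q))    # [2/3, 1/3] = [mu/(lam+mu), lam/(lam+mu)]
```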
Example 6.34 Two-State Machine. Consider the two-state CTMC of Example 6.29. The balance equations are
λp0 = µp1 ,
µp1 = λp0 .
Thus there is only one independent balance equation. The normalizing equation is
p0 + p1 = 1.
Solving these we get

p_0 = µ/(λ+µ),   p_1 = λ/(λ+µ).
This agrees with the result in Example 6.29.
Example 6.35 Birth and Death Processes. Let {X(t), t ≥ 0} be the birth and
death process of Example 6.10 with birth parameters λi > 0 for i ≥ 0, and µi > 0
for i ≥ 1. This CTMC is irreducible. The balance equations are
λ0 p0 = µ1 p1 ,
(λi + µi )pi = λi−1 pi−1 + µi+1 pi+1 , i ≥ 1.
Summing the first i equations we get

λ_i p_i = µ_{i+1} p_{i+1},   i ≥ 0. (6.67)

These can be solved recursively to get

p_i = ρ_i p_0,   i ≥ 0,

where ρ_0 = 1, and

ρ_i = (λ_0 λ_1 ··· λ_{i−1})/(µ_1 µ_2 ··· µ_i),   i ≥ 1. (6.68)

Now, substituting in the normalizing equation we get

Σ_{i=0}^∞ p_i = ( Σ_{i=0}^∞ ρ_i ) p_0 = 1.

If the infinite sum diverges, there is no solution. Thus the CTMC is positive recurrent if and only if

Σ_{i=0}^∞ ρ_i < ∞,

and, when it is positive recurrent, has the limiting distribution given by

p_j = ρ_j / Σ_{i=0}^∞ ρ_i,   j ≥ 0. (6.69)
If the CTMC is transient or null recurrent, pj = 0 for all j ≥ 0. We analyze two
special birth and death processes in the next two examples.
Example 6.36 Single-Server Queue. Let X(t) be the number of customers at time
t in the single-server service system of Example 6.11. There we saw that {X(t), t ≥
0} is a birth and death process with birth parameters λi = λ for i ≥ 0, and death
parameters µ_i = µ for i ≥ 1. Thus we can use the results of Example 6.35 to study the limiting behavior of this system. Substituting in Equation 6.68 we get

ρ_i = (λ_0 λ_1 ··· λ_{i−1})/(µ_1 µ_2 ··· µ_i) = ρ^i,   i ≥ 0,

where ρ = λ/µ. Now,

Σ_{i=0}^∞ ρ^i = 1/(1−ρ) if ρ < 1,  and = ∞ if ρ ≥ 1.
Thus the queue is stable (i.e., the CTMC is positive recurrent) if ρ < 1. In that case
the limiting distribution can be obtained from Equation 6.69 as
pj = ρj (1 − ρ), j ≥ 0. (6.70)
Thus, in steady state, the number of customers in a stable single server queue is a
modified Geometric random variable with parameter 1 − ρ.
Example 6.38 Retrial Queue. Consider the bivariate CTMC {(X(t), R(t)), t ≥
0} of the retrial queue of Example 6.15. From the rate diagram in Figure 6.9 we get the following balance equations:

(λ + nθ) p_{0n} = µ p_{1n},   n ≥ 0,
(λ + µ) p_{10} = λ p_{00} + θ p_{01},
(λ + µ) p_{1n} = λ p_{0n} + (n+1)θ p_{0,n+1} + λ p_{1,n−1},   n ≥ 1.
Now using

Σ_{n=0}^∞ p_{0n} + Σ_{n=0}^∞ p_{1n} = 1

we get

[ 1 + Σ_{n=1}^∞ (1/n!) (λ/µ)^n Π_{k=0}^{n−1} (λ/θ + k) + Σ_{n=0}^∞ (1/n!) (θ/λ) (λ/µ)^{n+1} Π_{k=0}^{n} (λ/θ + k) ] p_{00} = 1.
Example 6.39 Batch Arrival Queue. Consider a single server queue where customers arrive in batches. The successive batch sizes are iid with common pmf

P(Batch Size = k) = α_k,   k ≥ 1.
The batches themselves arrive according to a PP(λ). Thus the customer arrival pro-
cess is a CPP. Customers are served one at a time, service times being iid exp(µ)
random variables. Assume that there is an infinite waiting room. Let X(t) be the
number of customers in the system at time t.
Using the methodology developed in Section 6.2 we can show that {X(t), t ≥ 0}
is a CTMC with state space {0, 1, 2, · · ·} and the following transition rates:
q_{i,i−1} = µ,   i ≥ 1,
q_{i,i+k} = λα_k,   k ≥ 1, i ≥ 0.
Thus we have q00 = −λ and qii = −(λ + µ) for i ≥ 1. Hence the balance equations
for this system are
λ p_0 = µ p_1,
(λ + µ) p_i = λ Σ_{r=0}^{i−1} α_{i−r} p_r + µ p_{i+1},   i ≥ 1.
Multiplying the balance equation for state i by z^i and summing over all i, we get

λ p_0 + (λ+µ) Σ_{i=1}^∞ p_i z^i = µ p_1 + λ Σ_{i=1}^∞ z^i Σ_{r=0}^{i−1} α_{i−r} p_r + µ Σ_{i=1}^∞ z^i p_{i+1}.

Adding µp_0 on both sides, interchanging the i and r sums on the right hand side, and regrouping terms, we get

(λ+µ) Σ_{i=0}^∞ p_i z^i = µ p_0 + (µ/z) Σ_{i=1}^∞ p_i z^i + λ Σ_{r=0}^∞ Σ_{i=r+1}^∞ z^i α_{i−r} p_r
= µ(1 − 1/z) p_0 + (µ/z) Σ_{i=0}^∞ p_i z^i + λ ( Σ_{r=0}^∞ p_r z^r ) ( Σ_{i=1}^∞ α_i z^i ).
In this section we derive the main results regarding the limiting distribution of a re-
ducible CTMC {X(t), t ≥ 0} on state-space S = {0, 1, 2 · · ·} and generator matrix
Q. Assume that there are k closed communicating classes Ci , 1 ≤ i ≤ k, and T is
the set of states that do not belong to any closed communicating class. Now, relabel
the states in S by non-negative integers such that i ∈ Cr and j ∈ Cs with r < s
implies that i < j, and i ∈ Cr and j ∈ T implies that i < j. With this relabeling, the
generator matrix Q has the following canonical block structure:
Q = [ Q(1)   0      ···   0      0     ]
    [ 0      Q(2)   ···   0      0     ]
    [ ⋮      ⋮             ⋮      ⋮     ]
    [ 0      0      ···   Q(k)   0     ]
    [ R                          Q(T)  ].    (6.75)

Here Q(i) is a generator matrix (1 ≤ i ≤ k) of an irreducible CTMC with state-space C_i, Q(T) is a |T| × |T| sub-stochastic generator matrix (i.e., all row sums of Q(T) are less than or equal to zero, with at least one being strictly less than zero), and R is a |T| × |S − T| matrix. Then the transition probability matrix P(t) has the same block structure:

P(t) = [ P(1)(t)   0         ···   0         0        ]
       [ 0         P(2)(t)   ···   0         0        ]
       [ ⋮         ⋮                ⋮         ⋮        ]
       [ 0         0         ···   P(k)(t)   0        ]
       [ P_R(t)                              P(T)(t)  ].
Since P (r)(t) (1 ≤ r ≤ k) is a transition probability matrix of an irreducible CTMC
with state-space Cr , we already know how P (r)(t) behaves as t → ∞. Similarly,
since all states in T are transient, we know that P (T )(t) → 0 as t → ∞. Thus the
study of the limiting behavior of P (t) reduces to the study of the limiting behavior
of PR (t) as t → ∞. This is what we proceed to do.
Let T (r) be the first passage time to visit the set Cr , i.e.,
T (r) = min{t ≥ 0 : X(t) ∈ Cr }, 1 ≤ r ≤ k.
Let
ui (r) = P(T (r) < ∞|X(0) = i), 1 ≤ r ≤ k, i ∈ T. (6.76)
The next theorem gives a method of computing the above probabilities.
Proof: Equation 6.77 can be derived as in the proof of Theorem 6.13. The rest of the
proof is similar to the proof of Theorem 3.2 on page 60.
Using the quantities {ui (r), i ∈ T, 1 ≤ r ≤ k} we can describe the limit of PR (t)
as t → ∞. This is done in the theorem below.
Proof: Follows along the same lines as the proof of Theorem 4.23.
We discuss two examples of reducible CTMCs below.
Example 6.40 Let {X(t), t ≥ 0} be the CTMC of Example 6.30. This is a re-
ducible CTMC with two closed communicating classes C_1 = {0} and C_2 = {2}. The set T = {1} is not closed. We do not do any relabeling. Equations 6.77 yield:

u_1(1) = µ/(λ+µ),   u_1(2) = λ/(λ+µ).

We also have

p_0 = 1,   p_2 = 1,

since these are absorbing states. Then Theorem 6.29 yields

lim_{t→∞} P(t) = [ 1          0   0         ]
                 [ µ/(λ+µ)   0   λ/(λ+µ)  ]
                 [ 0          0   1         ].
This matches the result in Example 6.30.
Example 6.41 Linear Growth Model. Consider the CTMC of Example 6.27. This
is a reducible CTMC with one closed class C1 = {0}. Hence p0 = 1. The rest of the
states form the set T . Thus we need to compute ui (1) for i ≥ 1. From the results of
Example 6.28 we get

u_i(1) = 1 if λ ≤ µ,  and (µ/λ)^i if λ > µ.

Hence, for i ≥ 1,

lim_{t→∞} p_{i0}(t) = 1 if λ ≤ µ,  and (µ/λ)^i if λ > µ, (6.79)

which agrees with Example 6.31.
Recall that the linear growth model represents a colony of organisms where each
organism produces new ones at rate λ, and each organism dies at rate µ. State 0 is
absorbing: once all organisms die, the colony is permanently extinct. Equation 6.79
says that extinction is certain if the birth rate is no greater than the death rate. On
the other hand, even if the birth rate is greater than the death rate, there is a positive
probability of extinction in the long run, no matter how large the colony is to begin
with. Of course, in this case, there is also a positive probability, 1 − (µ/λ)i , that the
size of the colony of initial size i will become infinitely large in the long run.
As in the case of DTMCs, now we study CTMCs with costs and rewards. Let X(t) be
the state of a system at time t. Suppose {X(t), t ≥ 0} is a CTMC with state-space S
and generator matrix Q. Furthermore, the system incurs cost at a rate of c(i) per unit
time it spends in state i. For other cost models, see Conceptual Exercises 6.6 and 6.7.
Rewards can be thought of as negative costs. We consider costs incurred over infinite
horizon. For the analysis of costs over finite horizon, see Conceptual Exercise 6.8.
Suppose the costs are discounted continuously at rate α, where α > 0 is a fixed
(continuous) discount factor. Thus if the system incurs a cost of d at time t, its present
value at time 0 is e−αt d, i.e., it is equivalent to incurring a cost of e−αt d at time zero.
Let C be the total discounted cost over the infinite horizon, i.e.,

C = ∫_0^∞ e^{−αt} c(X(t)) dt.
Let φ(i) be the expected total discounted cost (ETDC) incurred over the infinite
horizon starting with X(0) = i. That is,
φ(i) = E(C|X(0) = i).
The next theorem gives the main result regarding the ETDC. We introduce the fol-
lowing column vectors
c = [c(i)]i∈S , φ = [φ(i)]i∈S .
Theorem 6.30 ETDC. Suppose α > 0, and c is a bounded vector. Then φ is given by

φ = (αI − Q)^{−1} c. (6.80)
Proof: We have

φ(i) = E(C | X(0) = i)
= E( ∫_0^∞ e^{−αt} c(X(t)) dt | X(0) = i )
= ∫_0^∞ e^{−αt} E( c(X(t)) | X(0) = i ) dt
= ∫_0^∞ e^{−αt} Σ_{j∈S} p_{ij}(t) c(j) dt.

The right hand side is the Laplace transform of P(t), evaluated at α, applied to the vector c. From Theorem 6.8, we see that it is given by (αI − Q)^{−1} c. Hence we get Equation 6.80.
Note that there is no assumption of irreducibility or transience or recurrence be-
hind the above theorem. Equation 6.80 is valid for any generator matrix Q. Note that
the matrix αI − Q is invertible for α > 0. Also, the theorem remains valid if the c
vector is bounded from below or above. It does not need to be bounded from both
sides.
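Equation 6.80 is a single linear solve. A sketch with a hypothetical generator, cost vector, and discount rate:

```python
import numpy as np

alpha = 0.1                                       # continuous discount rate (assumed)
lam, mu = 1.0, 2.0
Q = np.array([[-lam, lam], [mu, -mu]])
c = np.array([5.0, -8.0])                         # hypothetical cost rates c(0), c(1)

phi = np.linalg.solve(alpha * np.eye(2) - Q, c)   # Equation 6.80
print(phi)                                        # expected total discounted costs
```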
Let c(i) be the expected cost incurred per unit time spent in state i. We have
c = [c(0) c(1)]′ = [d  −r]′.

Then, using Theorem 6.30 we get

φ = [φ(0) φ(1)]′ = (αI − Q)^{−1} c.

Direct calculations yield

φ = (1/(α(α+λ+µ))) [ d(α+µ) − rλ  ]
                    [ dµ − r(α+λ)  ].
Thus it is profitable to buy a new machine if the expected total discounted net revenue from a new machine over the infinite horizon is greater than the initial purchase price of m, i.e., if

m ≤ (r(α+λ) − dµ)/(α(α+λ+µ)).
How much should you be willing to pay for a machine in down state?
6.12.2 Average Costs
The discounted costs have the disadvantage that they depend upon the discount factor and the initial state, thus making decision making more complicated. These issues are addressed by considering the long run cost per unit time, called the average cost. The expected total cost up to time t, starting from state i, is given by E(∫_0^t c(X(u)) du | X(0) = i). Dividing it by t gives the cost per unit time. Hence the long run expected cost per unit time is given by:

g(i) = lim_{t→∞} (1/t) E( ∫_0^t c(X(u)) du | X(0) = i ),
assuming that the above limit exists. To keep the analysis simple, we will assume that the CTMC is irreducible and positive recurrent with limiting distribution given by {p_j, j ∈ S}, which is also the limiting occupancy distribution. Intuitively, it makes sense that the long run cost per unit time should be given by Σ p_j c(j), independent of the initial state i. This intuition is formally proved in the next theorem:
Then

g(i) = g = Σ_{j∈S} p_j c(j).
Proof: Let M_{ij}(t) be the expected time spent in state j over (0, t] starting from state i. See Section 6.4. Then, we see that

g(i) = lim_{t→∞} (1/t) Σ_{j∈S} M_{ij}(t) c(j)
= Σ_{j∈S} ( lim_{t→∞} M_{ij}(t)/t ) c(j)
= Σ_{j∈S} p_j c(j).

Here the interchange of the sum and the limit is allowed because the CTMC is positive recurrent. The last equality follows from Theorem 6.27.
We illustrate with an example.
Example 6.43 Two-State Machine. Consider the cost structure regarding the two-
state machine of Example 6.42. Compute the long run cost per unit time.
The steady state distribution of the two-state machine is given by (see Example 6.34)

p_0 = µ/(λ+µ),   p_1 = λ/(λ+µ).

From Theorem 6.31 we see that the expected cost per unit time in the long run is given by

g = c(0)p_0 + c(1)p_1 = (cµ − rλ)/(λ+µ).

Thus it is profitable to operate this machine if rλ > cµ.
Example 6.44 Single-Server Queue. Let X(t) be the number of customers in the
single server queue with arrival rate λ and service rate µ as described in Exam-
ple 6.11. Now suppose the cost of keeping customers in the system is $c per customer
per hour. The entry fee is $d per customer, paid upon entry. Compute the long run
net revenue per unit time.
Let N(t) be the number of arrivals over (0, t]. Since the arrival process is PP(λ), we get r, the long run fees collected per unit time, as

r = d E(N(t))/t = d λt/t = λd.

Thus the net expected revenue per unit time is given by

λd − cρ/(1−ρ).

A bit of algebra shows that the entry fee must be at least c/(µ−λ) in order to break even.
It is possible to use the results in Section 6.11 to extend this analysis to reducible
CTMCs. However, the long run cost rate may depend upon the initial state in that
case.
The distribution given in Equation 6.34 is called a phase type distribution with pa-
rameters (α, M ). Any random variable whose distribution can be represented as in
Equation 6.34 for a valid α and M is called a phase type random variable. (By
“valid” we mean that α is a pmf, and M is obtained from a generator matrix of
an irreducible CTMC by deleting rows and columns corresponding to a non-empty
subset of states.) It is denoted by P H(α, M ), and the size of M is called the num-
ber of phases in the random variable. Many well-known distributions are phase type
distributions, as is demonstrated in the next example.
Examples 2 and 3 are special cases of the general theorem given below.
Theorem 6.32 Sums and mixtures of a finite number of independent phase type
random variables are phase type random variables.
6.14 Reversibility
In Section 4.9 we studied reversible DTMCs. In this section we study the reversible
CTMCs. Intuitively, if we watch a movie of a reversible CTMC we will not be able to
tell whether the time is running forward or backward. We begin with the definition.
Proof: Suppose the CTMC is irreducible, positive recurrent, and reversible. Since
we implicitly assume that the CTMC is regular, the embedded DTMC is irreducible,
positive recurrent, and reversible. Let {πi , i ∈ S} be the stationary distribution of the
DTMC. Since it is reversible, Theorem 4.26 yields

π_i p_{ij} = π_j p_{ji},   i ≠ j ∈ S,

which implies that

π_i (q_{ij}/q_i) = π_j (q_{ji}/q_j),   i ≠ j ∈ S.

From Theorem 6.26, we know that p_j = Cπ_j/q_j for some constant C. Hence the above equation reduces to Equation 6.89.
Now suppose the CTMC is regular, irreducible, positive recurrent, and Equa-
tion 6.89 holds. This implies that
pi qi pij = pj qj pji .
Then, using Theorem 6.26, we see that the above equation reduces to
πi pij = πj pji .
Thus the embedded DTMC is reversible. Hence the CTMC is reversible, and the the-
orem follows.
The Equations 6.89 are called the local balance or detailed balance equations, as
opposed to Equations 6.64, which are called global balance equations. Intuitively,
the local balance equations say that, in steady state, the rate of transitions from state
i to j is the same as the rate of transitions from j to i. This is in contrast to stationary
CTMCs that are not reversible: for such CTMCs the global balance equations imply
that the rate of transitions out of a state is the same as the rate of transitions into that
state. It can be shown that the local balance equations imply global balance equations,
but not the other way. See Conceptual Exercise 6.16.
Example 6.46 Birth and Death Processes. Consider a positive recurrent birth and death process as described in Example 6.35. Show that it is a reversible CTMC.

From Equation 6.67 we see that the stationary distribution satisfies the local balance equations

p_i q_{i,i+1} = p_{i+1} q_{i+1,i}.

Since the only transitions in a birth and death process are from i to i + 1, and i to i − 1, we see that Equations 6.89 are satisfied. Hence the CTMC is reversible.
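Detailed balance is easy to verify numerically for a truncated birth and death chain: compute p from the global balance equations and check Equation 6.89 pairwise (all rates below are hypothetical):

```python
import numpy as np

lams = [1.0, 2.0, 1.5]          # lambda_0..lambda_2 (hypothetical birth rates)
mus = [2.0, 3.0, 1.0]           # mu_1..mu_3 (hypothetical death rates)
n = 4
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = lams[i]
    Q[i + 1, i] = mus[i]
np.fill_diagonal(Q, -Q.sum(axis=1))

A = Q.T.copy(); A[-1, :] = 1.0
p = np.linalg.solve(A, np.eye(n)[-1])   # pQ = 0, sum(p) = 1

ok = all(np.isclose(p[i] * Q[i, j], p[j] * Q[j, i])
         for i in range(n) for j in range(n) if i != j)
print(ok)                               # True: p_i q_ij = p_j q_ji
```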
(In practice there are two classes of users: readers (who read the records) or writers
(who alter the records), and correspondingly, the locks are of two types: exclusive and
shared. When a writer wants to access records in the set A, he gets the access if none
of the records in A have any kind of lock from other users, in which case he puts an
exclusive lock on each of them. When a reader wants to access records in the set A,
he gets it if none of the records in A have an exclusive lock on them, and then puts
a shared lock on each of them. This ensures that many readers can simultaneously
read a record, but if a record is being modified by a writer no one else can access a
record. We do not consider this further complication here.)
Now suppose users of type A arrive according to a PP(λ(A)), and the arrival processes are independent of each other. A user of type A needs an exp(µ(A)) amount
of time to process these records. The processing times are independent of each other
and the arrival processes. We say that the database is in state (A1 , A2 , · · · , Ak ) if
there is exactly one user of type Ai (1 ≤ i ≤ k) in the system at time t. This implies
that the sets A1 , A2 , · · · , Ak must be mutually disjoint. When there are no users in
the system, we denote that state by φ.
Let X(t) be the state of the database at time t. The process {X(t), t ≥ 0} is a
CTMC with state-space

S = {φ} ∪ {(A_1, A_2, · · ·, A_k) : k ≥ 1, A_i ≠ φ, A_i ⊆ {1, 2, · · ·, N}, A_i ∩ A_j = φ if i ≠ j, 1 ≤ i, j ≤ k},
and transition rates given below (we write q(i → j) instead of qij for the ease of
reading):
q(φ → (A)) = λ(A),
q((A1 , A2 , · · · , Ak ) → (A1 , A2 , · · · , Ak , Ak+1 )) = λ(Ak+1 ),
q((A1 , A2 , · · · , Ak ) → (A1 , · · · , Ai−1 , Ai+1 , · · · , Ak )) = µ(Ai ),
q((A) → φ) = µ(A).
The above rates are defined whenever (A1 , A2 , · · · , Ak ) ∈ S and (A1 , A2 , · · · ,
Ak , Ak+1 ) ∈ S. Thus the state-space is finite and the CTMC is irreducible and
positive recurrent. Let
p(φ) = lim P(X(t) = φ),
t→∞
p(A1 , A2 , · · · , Ak ) = lim P(X(t) = (A1 , A2 , · · · , Ak )),
t→∞
We say that the restaurant is in state (i1 , i2 , · · · , ik ) if there are k parties in the
restaurant and the size of the rth party is ir (1 ≤ r ≤ k). The empty restaurant is
said to be in state φ. Let X(t) be the state of the restaurant at time t. The process
{X(t), t ≥ 0} is a CTMC with state-space
S = {φ} ∪ {(i_1, i_2, · · ·, i_k) : k ≥ 1, i_r > 0, 1 ≤ r ≤ k, Σ_{r=1}^k i_r ≤ N},
and transition rates given below (we write q(i → j) instead of qij for the ease of
reading):
q(φ → (i)) = λi , 1 ≤ i ≤ N,
q((i1 , i2 , · · · , ik ) → (i1 , i2 , · · · , ik , ik+1 )) = λik+1 , (i1 , i2 , · · · , ik+1 ) ∈ S,
q((i1 , i2 , · · · , ik ) → (i1 , · · · , ir−1 , ir+1 , · · · , ik )) = µir , (i1 , i2 , · · · , ik ) ∈ S,
q((i) → φ) = µ_i,   1 ≤ i ≤ N.
Thus the state-space is finite and the CTMC is irreducible and positive recurrent. Let
p(φ) = lim P(X(t) = φ),
t→∞
p(i1 , i2 , · · · , ik ) = lim P(X(t) = (i1 , i2 , · · · , ik )), (i1 , i2 , · · · , ik ) ∈ S.
t→∞
6.1 Consider a workshop with k machines, each with its own repairperson. The
k machines are identical and behave independently of each other as described in
Examples 6.1 and 6.4. Let X(t) be the number of working machines at time t. Show
that {X(t), t ≥ 0} is a CTMC. Display its generator matrix and the rate diagram.
6.2 Do Modeling Exercise 6.1 assuming that there are r repairpersons (1 ≤ r < k).
The machines are repaired in the order of failure.
6.3 Consider the two-machine one repairperson workshop of Example 6.6. Suppose
the repair time of the ith machine is exp(λi ) and the lifetime is exp(µi ), i = 1, 2.
The repairs take place in the order of failure. Construct an appropriate CTMC to
model this system. Assume that all the life times and repair times are independent.
6.4 Do Modeling Exercise 6.3 with three distinct machines. Care is needed to handle
the “order of failure” service.
6.5 A metal wire, when subject to a load of L kilograms, breaks after an exp(µL)
amount of time. Suppose k such wires are used in parallel to hold a load of L kilo-
grams. The wires share the load equally, and their failure times are independent.
Let X(t) be the number of unbroken wires at time t, with X(0) = k. Formulate
{X(t), t ≥ 0} as a CTMC.
6.6 A bank has five teller windows, and customers wait in a single line and are
served by the first available teller. The bank uses the following operating policy: if
there are k customers in the bank, it keeps one teller window open for 0 ≤ k ≤ 5,
two windows for 6 ≤ k ≤ 8, three for 9 ≤ k ≤ 12, four for 13 ≤ k ≤ 15, and all five
are kept open for k ≥ 16. The service times are iid exp(µ) at any of the tellers, and
the arrival process is PP(λ). Let X(t) be the number of customers in the system at
time t. Show that {X(t), t ≥ 0} is a birth and death process and find its parameters.
6.8 Consider a two-server queue that operates as follows: two different queues are
formed in front of the two servers. The arriving customer joins the shorter of the two
queues (the customer in service is counted as part of that queue). If the two queues
are equal, he joins either one with probability .5. Queue jumping is not allowed. Let
Xi (t) be the number of customers in the ith queue, including any in service with
server i, i = 1, 2. Assume that the arrival process is PP(λ) and the service times are
iid exp(µ) at either server. Model {(X1 (t), X2 (t)), t ≥ 0} as a CTMC and specify
its transition rates and draw the rate diagram.
6.9 Consider a system consisting of n components in series, i.e., all components
must be in working order for the system to function properly. The lifetime of the ith
component is an exp(λi) random variable. At time 0 all the components are up. As soon as
any of the components fails, the system fails, and the repair starts immediately. The
repair time of the ith component is an exp(µi ) random variable. When the system is
down, no more failures occur. Let X(t) = 0 if the system is functioning at time t,
and X(t) = i if component i is down (and hence under repair) at time t. Show that
{X(t), t ≥ 0} is a CTMC and display its rate diagram.
6.10 Consider an infinite server queue of Example 6.12. Suppose the customers
arrive according to a CPP with batch arrival rate λ and the successive batch sizes are
iid geometric with parameter 1 − p. Each customer is served by a different server
in an independent fashion, and the service times are iid exp(µ) random variables.
Model the number of customers in the system by a CTMC. Compute the transition
rates.
6.11 Consider a queueing system with s servers. The incoming customers belong
to two classes: 1 and 2. Class 2 customers are allowed to enter the system if and
only if there is a free server immediately available upon their arrival, otherwise they
are turned away. Class 1 customers always enter the system and wait in an infinite
capacity waiting room for service if necessary. Assume that class i customers arrive
according to independent PP(λi), i = 1, 2. The service times are iid exp(µ) for
both classes. Let X(t) be the number of customers in the system at time t. Show that
{X(t), t ≥ 0} is a birth and death process and find its parameters.
6.12 Consider a bus depot where customers arrive according to a PP(λ), and the
buses arrive according to a PP(µ). Each bus has capacity K > 0. The bus de-
parts with min(x, K) passengers, where x is the number of customers waiting when
the bus arrives. Loading time is insignificant. Let X(t) be the number of customers
waiting at the depot at time t. Show that {X(t), t ≥ 0} is a CTMC and compute its
generator matrix.
6.15 Genetic engineers have come up with a super-amoeba with exp(µ) lifetimes
and the following property: at the end of its lifetime it ceases to exist with probability
0.3, splits into two clones of itself with probability 0.4, or splits into three clones of
itself with probability 0.3. Let X(t) be the number of super-amoebae in a colony at time
t. Suppose all the existing amoebae behave independently of each other. Show that
{X(t), t ≥ 0} is a CTMC and compute its generator matrix.
6.16 Consider a single-server queue with the following operating policy: Once the
server becomes idle, it stays idle as long as there are fewer than N (a given positive
integer) customers in the system. As soon as there are N customers in the system,
the server starts serving them one by one and continues until there is no one left in
the system, at which time the server becomes idle. The arrival process is PP(λ) and
the service times are iid exp(µ). Model this as a CTMC by describing its state-space
and the generator matrix. Draw its rate diagram.
6.17 Consider a computer system with five independent and identical central pro-
cessing units (CPUs). The lifetime of each CPU is exp(µ). When a CPU fails, an
automatic mechanism instantaneously isolates the failed CPU, and the system con-
tinues in a reduced capacity with the remaining working CPUs. If this automatic
mechanism fails to isolate the failed CPU, which can happen with probability 1 − c,
the system crashes. The system also crashes if no working CPUs are left. Since
the system is aboard a spaceship, once the system crashes it cannot be repaired.
Model the system by a CTMC by giving its state-space, generator matrix, and the
rate diagram.
6.18 Consider the system in Modeling Exercise 6.17. Now suppose there is a robot
aboard the spaceship that can repair the failed CPUs one at a time, the repair time
for each CPU being exp(λ). However, the robot uses the computer system for the
repair operation, and hence requires a non-crashed system. That is, the robot works
as long as the system is working. Model this modified system by a CTMC by giving
its state-space, generator matrix, and the rate diagram.
6.19 Consider the s-server model of Modeling Exercise 6.11. Suppose the service
times of the customers of class i are iid exp(µi ), i = 1, 2. Model this modified
system by a CTMC by giving its state-space and the generator matrix.
6.20 Unslotted Aloha. Consider a communications system where the messages ar-
rive according to a PP(λ). As soon as a message arrives it attempts transmission. The
message transmission times are iid exp(µ). If no other message tries to transmit dur-
ing this time, the message transmission is successful. Otherwise a collision results,
and both the messages involved in the collision are terminated instantaneously. All
messages involved in a collision are called backlogged, and are forced to retransmit.
All backlogged messages wait for an exp(θ) amount of time (independently of each
other) before starting a retransmission attempt. Let X(t) be the number of backlogged
messages at time t, Y (t) = 1 if a message is under transmission at time t and zero
otherwise. Model {(X(t), Y (t)), t ≥ 0} as a CTMC and specify its transition rates
and draw the rate diagram.
6.21 Leaky Bucket. Packets arrive for transmission at a node according to a PP(λ)
and join the packet queue. Tokens (permission slips to transmit) arrive at that node
according to a P P (µ) and join the token pool. The node uses a “leaky bucket” trans-
mission protocol to control the entry of the packets into the network, and it operates
as follows: if the token pool is empty, the arriving packets wait in the packet queue.
Otherwise, an incoming packet removes a token from the token pool and is instanta-
neously transmitted. Thus the packet queue and the token pool cannot simultaneously
be non-empty. The token pool size is M , and any tokens generated when the token
pool is full are discarded. Model this system by a CTMC by giving its
state-space, generator matrix, and the rate diagram.
6.25 Customers arrive according to PP(λ) to a queueing system with two servers.
The ith server (i = 1, 2) needs exp(µi ) amount of time to serve one customer. Each
incoming customer is routed either to server 1 with probability p1 or to server 2 with
probability p2 = 1 − p1 , independently. Queue jumping is not allowed, thus there is a
separate queue in front of each server. Let Xi (t) be the number of customers in the i-
th queue at time t (including any in service with the ith server). Show that {Xi (t), t ≥
0} is a birth and death process, i = 1, 2. Are the two processes independent?
6.26 Jobs arrive at a central computer according to a PP(λ). The job processing
times are iid exp(µ). The computer processes them one at a time in the order of
arrival. The computer is subject to failures and repairs as follows: It stays functional
for an exp(α) amount of time and then fails. It takes an exp(β) amount of time to
repair it back to a fully functional state. Successive up and down times are iid and
independent of the number of jobs in the system. When the computer fails, all the
jobs in the system are lost. All jobs arriving at the computer while it is down are also
lost. Model this system as a CTMC by giving its state-space and the rate diagram.
6.27 A single server queue serves K types of customers. Customers of type k arrive
according to a PP(λk ), and have iid exp(µk ) service times. An arriving customer
enters service if the server is idle, and leaves immediately if the server is busy at the
time of arrival. Let X(t) be 0 if the server is idle at time t, and X(t) = k if the
server is serving a customer of type k at time t. Model X(t) as a CTMC by giving
its state-space and the rate diagram.
6.29 Consider the following modification of the machine shop of Example 6.5.
When both machines are down each machine is repaired by one repairperson. How-
ever, if only one machine is down, both repairpersons work on it together so that
the repair occurs at twice the speed. Model this system as a CTMC by giving its
state-space and the rate diagram.
6.30 A machine produces items according to a PP(λ). The produced items are stored
in a buffer. Demands for these items occur according to a PP(µ). If an item is avail-
able in the buffer when a demand occurs, the demand is satisfied immediately. If the
buffer is empty when a demand occurs, the demand is lost. The machine is turned
off when the number of items in the buffer reaches K, a fixed positive number, and
is turned on when the number of items in the buffer falls to a pre-specified number
k < K. Once it is turned on it stays on until the number of items in the buffer reaches
K. Model this system as a CTMC. State the state-space and show the rate diagram.
6.31 Customers of two classes arrive at a single server service station with infinite
waiting room. Class i customers arrive according to PP(λi ), i = 1, 2. Customers
of class 1 are always allowed to enter the system, while those of class 2 can enter
the system if and only if the total number of customers in the system (before this
customer joins) is less than K, where K > 0 is a fixed integer. The service times
are iid exp(µ) for both the classes. Let X(t) be the total number of customers in the
system at time t. Show that {X(t), t ≥ 0} is a birth and death process and state its
parameters.
6.33 Consider a gas station with two pumps and three car-spaces as shown in Fig-
ure 6.11. Potential customers arrive at the gas station according to a PP(λ). An in-
coming customer goes to pump 1 if both pumps are idle. If pump 1 is busy but pump
2 is idle, she goes to pump 2. If pump 2 is occupied she waits in space 3 (regardless
of whether pump 1 is occupied or not). If space 3 is occupied she goes away. It
takes an exp(µ) amount of time to fill gas. After finishing filling gas, the customer at
pump 1 leaves, the customer at pump 2 leaves if space 1 is vacant, else she has to wait
until the customer in space 1 is done (due to one way rules and space restrictions). In
that case both customers leave simultaneously when the customer in space 1 is done.
Model this system as a CTMC. State the state-space and show the rate diagram.
6.34 A machine shop has three identical machines, at most two of which can be in
use at any time. Initially all three machines are in working condition. The policy of
the shop is to keep a machine in standby mode if and only if there are two working
machines in use. Each machine in use fails after an exp(λ) amount of time, whereas
the standby machine (if in working condition) fails after an exp(θ) amount of time.
The failure of a standby machine can be detected only after it is put in use. There is
a single repair person who repairs the machines in exp(µ) amount of time. Model
this system as a CTMC. Describe the state-space and show the rate diagram. Assume
independence as needed.
6.2 Let mi (t) = E(X(t)|X(0) = i), where X(t) is the number of customers at
time t in an infinite-server queue of Example 6.12. Derive the differential equations
for mi (t) (i ≥ 0) by using the forward equations for pij (t). Solve for m0 (t).
6.3 Let {X(t), t ≥ 0} be the CTMC of Example 6.5. Compute its transition proba-
bility matrix P (t) by
6.5 For the pure death process {X(t), t ≥ 0} with µn = nµ for n ≥ 0, compute
P(X(t) = j|X(0) = i).
6.6 Consider the CTMC {X(t), t ≥ 0} of Modeling Exercise 6.15. Let mi (t) =
E(X(t)|X(0) = i). Show that
m′i(t) = 0.7µ mi(t), mi(0) = i.
Solve it.
6.7 Let {X(t), t ≥ 0} be the pure birth process of Computational Exercise 6.1.
Compute E(X(t)|X(0) = 0).
6.8 Customers arrive at a service center according to a PP(λ), and demand iid exp(µ)
service. There is a single service person and waiting room for only one customer.
Thus if an incoming customer finds the server idle he immediately enters service, or
else he leaves without service. A special customer, who knows how the system works,
wants to use it in a different way. He inspects the server at times 0, T, 2T, 3T, . . .
(here T > 0 is a fixed real number) and enters the system as soon as he finds the
server idle upon inspection. Compute the distribution of the number of visits the
special customer makes until he enters, assuming that he finds the server busy at
time 0. Also compute the expected time that the special customer has to wait until he
enters.
6.9 A machine is either up or down. The up machine stays up for an exp(µ) amount
of time and then fails. The repairperson takes an exp(λ) time to fix the machine to a
good-as-new state. A repairperson is on duty from 8:00am to 4:00pm every day, and
works on the machine if it fails. A working machine can fail at any time, whether
the repair person is on duty or not. If the machine is under repair at 4:00pm on a
given day, or it fails while the repairperson is off-duty, it stays down until 8:00am
the next day, when the repairperson resumes (or starts) the repair work. Compute the
steady-state probability that the machine is working when the repairperson reports to
work.
6.10 Compute the steady state distribution of the CTMC of Modeling Exercise 6.1.
6.11 Compute the steady state distribution of the CTMC of Modeling Exercise 6.6.
What is the condition of stability?
6.12 Compute the steady state distribution of the CTMC of Modeling Exercise 6.7.
What is the condition of stability?
6.13 Compute the steady state distribution of the CTMC of Modeling Exercise 6.9.
6.14 Compute the steady state distribution of the CTMC of Modeling Exercise 6.4.
6.15 Consider Modeling Exercise 6.10. Let pk be the limiting probability that
there are k customers in the system. Define
G(z) = Σ_{k=0}^∞ pk z^k.
Using the balance equations derive the following differential equation for G(z):
G′(z) = (λ/µ) · (1/(1 − pz)) · G(z).
Solve for G(z).
6.16 Compute the steady state distribution of the CTMC of Modeling Exercise 6.11.
What is the condition of stability?
6.17 Consider Modeling Exercise 6.13. Let
pik = lim_{t→∞} P(X(t) = i, Y(t) = k), i ≥ 0, k = 1, 2,
and define
φk(z) = Σ_{i=0}^∞ pik z^i, k = 1, 2.
Show that
φ1(z) = αλ(λ(1 − z) + µ2) p00 / [µ1µ2/z − λµ1(1 − α/z) − λµ2(1 − β/z) − λ^2(1 − z)].
(The expression for φ2 is obtained by interchanging α with β and µ1 with µ2.) Compute
p00 and show that the condition of stability is
λ(α/µ1 + β/µ2) < 1.
6.18 Compute the steady state distribution of the CTMC of Modeling Exercise 6.14.
What is the condition of stability?
6.19 Consider the parking lot of Modeling Exercise 6.24. Compute the long run
probability that the i-th space is occupied. Also compute the long run fraction of the
customers that are lost.
6.20 Consider the queueing system of Modeling Exercise 6.25. When is {Xi (t), t ≥
0} stable, i = 1, 2? Assuming it is stable, what are the mean and the variance of the
number of customers in the entire system in steady state?
6.21 Consider the computer system of Modeling Exercise 6.26. When is this system
stable? What is its limiting distribution, assuming stability? What fraction of the
incoming jobs are completed successfully in steady state?
6.22 Compute the limiting distribution of the number of items in the inventory sys-
tem described in Modeling Exercise 6.22. In steady state, what fraction of the de-
mands are lost?
6.23 Consider the system described in Modeling Exercise 6.27. What is the limiting
probability that the server is idle? What is the limiting probability that it is serving a
customer of type k, 1 ≤ k ≤ K?
6.24 Compute the steady state distribution of the system described in Modeling
Exercise 6.28.
6.25 Consider the system of Modeling Exercise 6.31. What is the condition of sta-
bility for this process? Compute its limiting distribution assuming it is stable.
268 CONTINUOUS-TIME MARKOV CHAINS
6.26 Consider the single server queue of Example 6.11. Now suppose the system
capacity is K, i.e., an arriving customer who finds K customers already in the system
is turned away. Compute the limiting distribution of the number of customers in such
a system.
6.27 Consider the system described in Modeling Exercise 6.32. State the condition
of stability and compute the expected number of customers in steady state assuming
that the system is stable.
6.28 Consider the gas station of Modeling Exercise 6.33. Compute the limiting dis-
tribution of the state of the system. In steady state, what fraction of the potential
customers actually enter the gas station?
6.29 Consider the three machine workshop of Modeling Exercise 6.34. Compute
the limiting distribution of the state of the system. In steady state, what fraction of
the time is the repair person idle?
6.31 A finite state CTMC with generator matrix Q is called doubly stochastic if
the row as well as the column sums of Q are zero. Find the limiting distribution of an
irreducible doubly stochastic CTMC with state-space {1, 2, · · · , N }.
6.33 Consider a birth and death process on N nodes arranged in a circle. The pro-
cess takes a clockwise step with rate λ and a counter-clockwise step with rate µ.
Display the rate diagram of the CTMC. Compute its limiting distribution.
6.34 Consider the inventory system described in Modeling Exercise 6.22. Compute
the expected time between two consecutive orders placed with the supplier.
6.35 Consider the machine shop of Modeling Exercise 6.29. Suppose machine 1
fails at time zero, and machine 2 is working, so that both repairpersons are avail-
able to repair machine 1 at that time. Let T be the time when machine 1 becomes
operational again. Compute E(T ).
6.36 Consider the computer system described in Modeling Exercise 6.17. Suppose
at time 0 all CPUs are functioning. Compute the expected time until the system
crashes.
6.37 Consider the machine shop described in Modeling Exercise 6.1. Suppose one
machine is functioning at time 0. Show that the expected time until all machines are
down is given by
(1/(kλ)) (((λ + µ)/µ)^k − 1).
6.38 Do Computational Exercise 6.36 for the computer system in Modeling
Exercise 6.18.
6.39 Consider the finite capacity single server queue of Computational Exer-
cise 6.26. Suppose at time zero there is a single customer in the system. Let T be
the arrival time of the first customer who sees an empty system. Compute E(T ).
6.40 Consider the computer system of Modeling Exercise 6.26. Suppose initially
the computer is up and there are i jobs in the system. Compute the expected time
until there are no jobs in the system.
6.41 Consider the parking lot of Modeling Exercise 6.24. Suppose initially there is
one car in the lot. What is the expected time until the lot becomes empty?
6.42 Consider the queueing system of Modeling Exercise 6.25. Suppose keeping a
customer for one unit of time in the i-th queue costs hi dollars (i = 1, 2), including
the time spent in service. Find the optimum routing probabilities that will minimize
the expected total holding cost per unit time in the two queues in steady state.
6.43 Consider the system described in Modeling Exercise 6.28. Suppose the ma-
chine generates revenue at a rate of $30 per hour while it is operating. Each visit of
the repairperson costs $100, and the repair itself costs $20 per hour. Compute the
long run net income (revenue - cost) per hour if the mean machine lifetime is 80 hrs
and mean repair time is 3 hrs. Find the optimal rate at which the repairperson should
visit this machine so as to maximize the net income rate in steady state.
6.44 Consider the inventory system described in Modeling Exercise 6.22. Suppose
each item sold produces a profit of $r, while it costs $h to hold one item in the
inventory for one unit of time. Compute the long run net profit per unit time.
6.45 Consider the k machine workshop of Modeling Exercise 6.1. Suppose each
working machine produces revenues at rate $r per unit time. Compute the expected
total discounted revenue over infinite horizon starting with k operating machines.
Hint: Use independence of the machines.
6.46 A machine shop consists of K independent and identical machines. Each ma-
chine works for an exp(µ) amount of time before it fails. During its lifetime each
machine produces revenue at rate $R per unit time. When the machine fails it needs
to be replaced by a new and identical machine at a cost of $Cm per machine. Any
number of machines can be replaced simultaneously. The replacement operation is
instantaneous. The repairperson charges $Cv per visit, regardless of how many machines
are replaced on a given visit. Consider the following policy: Wait until the
number of working machines falls below k (where 0 < k ≤ K is a fixed integer)
and then replace all the failed machines simultaneously. Compute g(k), the long run
cost rate of following this policy. Compute the optimal value of the parameter k that
minimizes this cost rate for the following data: Number of machines in the machine
shop is 10, mean lifetime of the machine is 10 days, the revenue rate is $100.00 per
day, the replacement cost is $750 per machine, and the visit charge is $250.00.
6.47 Consider the computer system described in Modeling Exercise 6.17. Suppose
at time 0 all CPUs are functioning. Suppose each functioning CPU executes r in-
structions per unit time. Compute the expected total number of instructions executed
before the system crashes.
6.48 Consider the machine in Modeling Exercise 6.30. Suppose it costs $h to
keep an item in the buffer for one unit of time. Each item sells for $r.
Compute the long run net income (revenue-holding cost) per unit time, as a function
of k and K.
6.49 Consider the system in Computational Exercise 6.8. Suppose it costs the spe-
cial customer $c per unit time to wait for service, and $d to inspect the server. Com-
pute the total expected cost to the special customer until he enters as a function of T .
Suppose λ = 1/hr, µ = 2/hr, c = $10/hr and d = $20. Compute the value of T
that minimizes the expected total cost.
6.50 A small local car rental company has a fleet of K cars. We are interested in
deciding how many of them should be large cars and how many should be mid-size.
The demands for large cars occur according to a PP with rate λl per day, while those
for the mid-size cars occur according to a PP with rate λm. Any demand that cannot
be met immediately is lost. The rental durations (in days) for the large (mid-size)
cars are iid exponential random variables with parameter µl (µm ). The rental net
revenue for the large (mid-size) cars is $rl ($rm ) per day. Compute g(k), the long
run average net rental revenue per day, if the fleet has k large cars and K − k mid-
size cars. Compute the optimal fleet mix if the following data is known: the fleet size
is fixed at 10, demand is 2 cars per day for large cars, 3 per day for mid-size cars,
the mean rental duration for large cars is 2 days, while that for the mid-size cars is 3
days. The large cars generate a net revenue of $60 per day, while that for the mid-size
cars is $40 per day.
6.51 Customers arrive at a single server service facility according to PP(λ) and re-
quest iid exp(µ) service times. Let Vi be the value of the service (in dollars) to the
i-th customer. Suppose {Vi , i ≥ 1} are iid U (0, 1) random variables. The customer
has to pay a price p to enter service. An arriving customer enters the system if and
only if the server is idle when he arrives and the service is worth more to him than the
service charge p. Otherwise, the customer leaves without service and is permanently
lost. Find the optimal service charge p that will maximize the long run expected rev-
enue per unit time to the system. What fraction of the arriving customers join the
system when the optimal charge is used?
6.52 Consider the single server system of Computational Exercise 6.51. An arriving
customer enters the system if and only if the service is worth more to him than the
service charge p, even if the server is busy. The service provider incurs a holding cost
of $h per customer per hour of staying in the system. Find the optimal service charge
p that will maximize the long run expected revenue per unit time to the system.
6.53 Consider the system of Modeling Exercise 6.31. Class i customers pay $ci to
enter the system, c1 > c2 . Assuming the process is stable, compute the long run rate
at which the system collects revenue.
6.55 Consider the following special case of the database locking model described
in Example 6.47: N = 3 items in the database, λ(1) = λ(2) = λ(3) = λ > 0,
λ(123) = θ > 0. All other λ(A)’s are zero. Assume µ(A) = µ for all A’s with
λ(A) > 0. Compute the steady state distribution of the state of the database.
6.56 Consider a warehouse that gets shipments from k different sources. Source i
sends items to the warehouse according to a PP(λi ). The items from source i need
Mi (a fixed positive integer) amount of space. The capacity of the warehouse is
B > max(M1 , M2 ,· · · , Mk ). If there is not enough space in the warehouse for an
incoming item, it is sent somewhere else. An item from source i stays in the ware-
house for an exp(µi ) amount of time before it is shipped off. The sojourn times are
independent. Model this system by a CTMC and show that it is reversible. What is
the probability that a warehouse has to decline an item from source i, in steady state?
6.59 A system consists of N urns containing k balls. Each ball moves indepen-
dently among these urns according to a reversible CTMC with rate matrix Q.
Let Xi (t) be the number of balls in the ith urn at time t. Show that {X(t) =
(X1 (t), X2 (t), · · · , XN (t)), t ≥ 0} is a reversible CTMC.
6.17 Conceptual Exercises
6.4 Let {X(t), t ≥ 0} be a CTMC on state-space {0, 1, 2, ...}. Suppose the system
earns reward at rate ri per unit time when the system is in state i. Let Z be the total
reward earned by the system until it reaches state 0, and let g(i) = E(Z|X(0) = i).
Using first step analysis derive a set of equations satisfied by {g(i), i = 1, 2, 3, ...}.
6.5 Let {X(t), t ≥ 0} be an irreducible and positive recurrent CTMC with state-
space S and limiting distribution p = [pj , j ∈ S]. Suppose the initial distribution is
given by
P(X(0) = j) = pj , j ∈ S.
Show that
P(X(t) = j) = pj , j ∈ S, t ≥ 0.
6.6 Consider a CTMC with state-space S and generator matrix Q. Suppose the
CTMC earns reward at rate ri per unit time it spends in state i (i ∈ S). Further-
more it earns a lump sum reward rij whenever it undergoes a transition from state i
to state j. Let gα (i) be the total expected discounted (with continuous discount factor
α > 0) reward earned over an infinite horizon if the initial state is i. Show that the
vector gα = [gα (i), i ∈ S] satisfies
[αI − Q]gα = r,
where r = [r(i), i ∈ S] is given by
r(i) = ri + Σ_{j≠i} qij rij.
6.7 Consider Conceptual Exercise 6.6 and assume that the CTMC is irreducible and
positive recurrent. Let g be the long run average reward per unit time. Show that
g = Σ_{i∈S} r(i) pi,
where pi is the limiting probability that the CTMC is in state i, and r(i) is as given
in Conceptual Exercise 6.6.
6.8 Consider the cost structure of Section 6.12. Let gi (t) be the expected total cost
incurred over (0, t] by the CTMC starting from state i at time 0. Let g(t) = [gi (t), i ∈
S]. Show that
g(t) = M (t)c,
where M (t) is the occupancy times matrix, and c is the cost rate vector.
6.9 In Conceptual Exercise 6.4, suppose r(i) > 0 for all 1 ≤ i < N . Construct a
new CTMC {Y (t), t ≥ 0} on {0, 1, 2, · · · , N } with the same initial distribution as
that of X(0), such that the reward Z equals the first passage time until the {Y (t), t ≥
0} process visits state N .
6.12 Consider the computer system described in Conceptual Exercise 6.11, with the
following modification: whenever the system changes state, all the work done on the
job is lost, and the job starts from scratch in the new state. Now show that
φi(s, x) = e^{−(s+qi)x/ri} + Σ_{j=1, j≠i}^{N} (qij/(s + qi)) (1 − e^{−(s+qi)x/ri}) φj(s, x).
6.16 Suppose a vector of probabilities p satisfies the local balance Equation 6.89.
Show that it satisfies the global balance Equation 6.64.
Queueing Models
An operations research professor gets an offer from another university that gives him
better research opportunities, better students, and better pay, but it means uprooting
his family and moving to a new place. When he describes this quandary to his friend,
the friend says, “Well, why don’t you use the methodology of operations research?
Set up an optimization model with an objective function and constraints to maximize
your expected utility.” The professor objects, saying, “No, no! This is serious!”
7.1 Introduction
Queues are an unavoidable aspect of modern life. We do not like queues because
of the waiting involved. However, we like the fair service policies that a queueing
system imposes. Imagine what would happen if an amusement park did not enforce
a first-come first-served queueing discipline at its attractions!
Finally, waiting in queues is not a uniquely human fate. All kinds of systems en-
force queueing for all kinds of non-human commodities. For example, a modern
computer system manages queues of computer programs at its central processing
unit, its input/output devices, etc. A telephone system maintains a queue of calls
and serves them by assigning circuits to them. A digital communication network
transmits packets of data in a store-and-forward fashion, i.e., it maintains a queue of
packets at each node before transmitting them further towards their destinations. In
a manufacturing setting, queues are called inventories. Here the items are produced at
a factory and stored in a warehouse, i.e., they form a queue in the warehouse. The
items are removed from the warehouse whenever a demand occurs.
Another aspect of queues mentioned earlier is the “fair service discipline.” In hu-
man queues this generally means first-come first-served (or first-in, first-out, or head
of the line). In non-human queues many other disciplines can be used. For example,
blood banks may manage their blood inventory by a last-in first-out policy. This en-
sures better quality blood for most clients. The bank may discard blood after it stays
in the bank for longer than a certain period. Generally, queues (or inventories) of per-
ishable commodities use last-in first-out systems. Computers use many specialized
service disciplines. A common service discipline is called time sharing, under which
each job in the system gets ∆ milliseconds from the CPU in a round-robin fashion.
Thus all jobs get some service in reasonable intervals of time. In the limit, as ∆ → 0,
all jobs get served continuously in parallel, each job getting a fraction of the CPU
processing capacity. This limiting discipline is called processor sharing. As a last
example, the service discipline may be random: the server picks one of the waiting
customers at random for the next service. Such a discipline is common in statistical
experiments to avoid bias. It is also common to have priorities in service, although
we do not consider such systems in this chapter.
A block diagram of a simple queueing system is shown in Figure 7.1. There are
several key aspects to describing a queueing system. We discuss them below.
[Figure 7.1: Block diagram of a simple queueing system: arrivals join the queue and eventually depart.]
1. The Arrival Process. The simplest model is when the customers arrive one at a
time and the times between two consecutive arrivals are iid non-negative random
variables. We use special symbols to denote these inter-arrival times as follows:
M for exponential (stands for memoryless or Markovian), Ek for Erlang with k
phases, D for deterministic, PH for phase type, G for general (sometimes we
use GI to emphasize independence). This list is by no means exhaustive, and
new notation keeps getting introduced as newer applications demand newer arrival
characteristics. For example, the applications in telecommunication systems use
what are known as the Markovian arrival processes, or Markov modulated Poisson
processes, denoted by MAP or MMPP.
2. The Service Times. The simplest model assumes that the service times are iid non-
negative random variables. We use the notation of the inter-arrival times for the
service times as well.
3. The Number of Servers. It is typically denoted by s (for servers) or c (for chan-
nels in telecommunication systems). It is typically assumed that all servers are
identical.
4. The Holding Capacity. This is the maximum number of customers that can be in
the system at any time. We also call it the system capacity, or just capacity. If
the capacity is K, an arriving customer who sees K customers in the system is
permanently lost. If no capacity is mentioned, it is assumed to be infinity.
5. The Service Discipline. This describes the sequence in which the waiting cus-
tomers are serviced. As described before, the possible disciplines are FCFS (first-
come first-served), LCFS (last-come first-served), random, PS (processor shar-
ing), etc.
We shall also find it useful to study the customers in the queue (i.e., those who are
in the system but not in service). We define
X^q(t) = the number of customers in the queue at time t,
X^q_n = the number of customers in the queue just after the n-th customer departs,
X^{∗q}_n = the number of customers in the queue just before the n-th customer enters,
X̂^q_n = the number of customers in the queue just before the n-th customer arrives,
W^q_n = the time spent in the queue by the n-th customer,
and
p^q_j = lim_{t→∞} P(X^q(t) = j),
π^q_j = lim_{n→∞} P(X^q_n = j),
π^{∗q}_j = lim_{n→∞} P(X^{∗q}_n = j).
With this introduction we are ready to apply the theory of DTMCs and CTMCs to
queueing systems. In particular we shall study queueing systems in which {X(t), t ≥
0} or {Xn , n ≥ 0} or {Xn∗ , n ≥ 0} is a Markov chain. We shall also study queueing
systems modeled by multi-dimensional Markov chains. There is an extremely large
and growing literature on queueing theory and this chapter is by no means an exhaus-
tive treatment of even the Markovian queueing systems. Readers are encouraged to
refer to one of the several excellent books that are devoted entirely to queueing the-
ory.
It is obvious that {X(t), t ≥ 0}, {Xn , n ≥ 0}, {Xn∗ , n ≥ 0}, and {X̂n , n ≥ 0}
are related processes. The exact relationship between these processes is studied in
the next section.
7.2 Properties of General Queueing Systems
In this section we shall study several important properties of general queueing sys-
tems. They are discussed in Theorems 7.1, 7.2, 7.3, and 7.5.
In Theorem 7.1 we state the connection between the state of the system as seen
by a potential arrival and an entering customer. In Theorem 7.2 we show that under
mild conditions on the sample paths of {X(t), t ≥ 0}, the limiting distributions of
Xn and Xn∗ , if they exist, are identical. Thus, in steady state, the state of the system
as seen by an entering customer is identical to the one seen by a departing customer.
In Theorems 7.3 and 7.4, we prove that, if the arrival process is Poisson, and certain
other mild conditions are satisfied, the limiting distributions of X(t) and X̂n (if they
exist) are identical. Thus in steady state the state of the system as seen by an arriving
customer is the same as the state of the system at an arbitrary point of time. This
property is popularly known as PASTA – Poisson Arrivals See Time Averages. The
last result (Theorem 7.5) is called Little’s Law, and it relates limiting averages L and
W defined in the last section. All these results are very useful in practice, but their
proofs are rather technical. We suggest that the reader should first read the statements
of the theorems, and understand their implications, before reading the proofs.
As pointed out in the last section, we make a distinction between the arriving cus-
tomers and the entering customers. One can think of the arriving customers as the
potential customers and the entering customers as the actual customers. A potential
customer becomes an actual customer when he decides to enter. The relationship
between the state of the system as seen by an arrival (potential customer) and as seen
by an entering customer depends on the decision rule used by the arriving customer
to actually enter the system. To make this more precise, let In = 1 if the n-th arriving
customer enters, and 0 otherwise. Suppose the following limits exist:
lim_{n→∞} P(In = 1) = α, (7.1)
and
lim_{n→∞} P(In = 1 | X̂n = j) = αj, j ≥ 0. (7.2)
Note that α is the long run fraction of the arriving customers that enter the system.
The next theorem gives a relationship between the state of the system as seen by an
arrival and by an entering customer.
Theorem 7.1 Arriving and Entering Customers. Suppose α > 0, and one of the
two limiting distributions {πj∗, j ≥ 0} and {π̂j, j ≥ 0} exists. Then the other limiting
distribution exists, and the two are related to each other by
πj∗ = (αj/α) π̂j, j ≥ 0. (7.3)
Proof: Define
N(n) = Σ_{i=1}^{n} Ii, n ≥ 1.
Thus N(n) is the number of customers who join the system from among the first n
arrivals. This implies that
P(X̂n = j | In = 1) = P(X*_{N(n)} = j).
Letting n → ∞ on both sides, and using Equations 7.1 and 7.2, we get
αj π̂j = α πj∗
if either P(X̂n = j) or P(Xn∗ = j) has a limit as n → ∞. The theorem follows from
this.
Note that if we further know that
Σ_{j=0}^∞ πj∗ = 1,
we can evaluate α as
α = Σ_{i=0}^∞ αi π̂i.
Hence Equation 7.3 can be written as
πj∗ = αj π̂j / (Σ_{i=0}^∞ αi π̂i), j ≥ 0.
Finally, if every arriving customer enters, we have αj = 1 for all j ≥ 0 and α = 1.
Hence the above equation reduces to πj∗ = π̂j for all j ≥ 0, as expected.
Hence we get
πj∗ = π̂j / (Σ_{i=0}^{K−1} π̂i) = π̂j / (1 − π̂K), 0 ≤ j ≤ K − 1.
The next theorem gives the relationship between the state of the system as seen by
an entering customer and a departing customer.
Theorem 7.2 Entering and Departing Customers. Suppose the customers enter
and depart a queueing system one at a time, and one of the two limiting distributions
{πj∗ , j ≥ 0} and {πj , j ≥ 0} exists. Then the other limiting distribution exists, and
πj = πj∗ , j ≥ 0. (7.4)
Proof: We follow the proof as given in Cooper [1981]. Suppose that there are i
customers in the system at time 0. We shall show that
{X_{n+i} ≤ j} ⇔ {X*_{n+j+1} ≤ j}, j ≥ 0. (7.5)
First suppose X_{n+i} = k ≤ j. This implies that there are exactly k + n entries before
the (n + i)th departure. Thus there can be only departures between the (n + i)th
departure and the (n + k + 1)st entry. Hence
X*_{n+k+1} ≤ k.
Using k = j we see that
{X_{n+i} ≤ j} ⇒ {X*_{n+j+1} ≤ j}, j ≥ 0.
To go the other way, suppose X*_{n+j+1} = k ≤ j. This implies that there are exactly
n + i + j − k departures before the (n + j + 1)st entry. Thus there are no entries between
the (n + i + j − k)th departure and the (n + j + 1)st entry. Hence
X_{n+i+j−k} ≤ X*_{n+j+1} = k ≤ j, j ≥ 0.
Setting k = j we get X_{n+i} ≤ j. Hence
{X*_{n+j+1} ≤ j} ⇒ {X_{n+i} ≤ j}, j ≥ 0.
This proves the equivalence in Equation 7.5. Hence we have
P(X_{n+i} ≤ j) = P(X*_{n+j+1} ≤ j), j ≥ 0.
Letting n → ∞, and assuming one of the two limits exist, the theorem follows.
Two comments are in order at this point. First, the above theorem is a sample path
result and does not require any probabilistic assumptions. As long as the limits exist,
they are equal. Of course, we need probabilistic assumptions to assert that the limits
exist. Second, the above theorem can be applied even in the presence of batch arrivals
and departures, as long as we sequence the entries (or departures) in the batch and
observe the system after every entry and every departure in the batch. Thus, suppose
n customers have entered so far, and a new batch of size 2 enters when there are i
customers in the system. Then we treat this as two single entries occurring one after
another, thus yielding X*_{n+1} = i + 1 and X*_{n+2} = i + 2.
Now we discuss the relationship between the limiting distribution of the state of the
system as seen by an arrival and that of the state of the system at any time point. This
will lead us to an important property called PASTA – Poisson Arrivals See Time
Averages. Roughly speaking, we shall show that the limiting probability that an ar-
riving customer sees the system in state j is the same as the limiting probability that
the system is in state j, if the arrival process is Poisson. One can think of pj as the
time average (that is, the occupancy distribution): the long run fraction of the time
that the system spends in state j. Similarly, we can interpret π̂j as the long run frac-
tion of the arriving customers that see the system in state j. PASTA says that, if the
arrival process is Poisson, these two averages are identical.
We shall give a proof of this under the restrictive setting when the stochastic pro-
cess {X(t), t ≥ 0} describing the system state is a CTMC on {0, 1, 2, · · ·}. However,
PASTA is a very general result that applies even when the process is not Markovian.
For example, it applies to the queue length process in an M/G/1 queue, even if that
process is not a CTMC. However, its general proof is rather technical and will not be
presented here. We refer the reader to Wolff [1989] or Heyman and Sobel [1982] for
the general proof.
Let X(t) be the state of queueing system at time t. Let N (t) be the number of
customers who arrive at this system up to time t, and assume that {N (t), t ≥ 0} is a
PP(λ), and Sn is the time of arrival of the nth customer. Assume that {X(t), Sn ≤
t < Sn+1 } is a CTMC with state-space S and rate matrix G = [gij ], n ≥ 0. When the
nth arrival occurs at time Sn , it causes an instantaneous transition in the X process
from i to j with probability rij, regardless of the past history up to time Sn. That is,
P(X(Sn+) = j | X(Sn−) = i, (X(u), N(u)), 0 ≤ u < Sn) = rij, i, j ≥ 0.
Now the nth arrival sees the system in state X̂n = X(Sn −). Hence we have, assum-
ing the limits exist,
π̂j = lim_{n→∞} P(X̂n = j), j ≥ 0, (7.6)
pj = lim_{t→∞} P(X(t) = j), j ≥ 0. (7.7)
Theorem 7.3 PASTA. If the limits in Equations 7.6 and 7.7 exist,
π̂j = pj , j ≥ 0.
The proof proceeds via several lemmas, and is completely algebraic. We provide the
following intuition to strengthen the feel for the result. Figure 7.2(a) shows a sample
path {X(t), t ≥ 0} of a three-state system interacting with a PP(λ) {N (t), t ≥ 0}.
In the figure, the Ti s are the intervals of time (open on the left and closed on the
right) when the system is in state 3. The events in the PP {N (t), t ≥ 0} that see the
system in state 3 are numbered consecutively from 1 to 12. Figure 7.2(b) shows the
Ti s spliced together, essentially deleting all the intervals of time when the system is
not in state 3. Thus we count the Poisson events that trigger the transitions out of
Figure 7.2 (a) Sample path of a CTMC interacting with a PP. (b) Reduced sample path of a
CTMC interacting with a PP.
state 3 to states 1 and 2, but not those that trigger a transition into state 3 from states
1 and 2. We also count all the Poisson events that do not generate any transitions.
Now, the times between consecutive events in Figure 7.2(a) are iid exp(λ), due
to the model of the interaction that we have postulated. Hence the process of events
in Figure 7.2(b) is a PP(λ). Following the notation of Section 6.4, let Vj (t) be the
amount of time the system spends in state j over (0, t]. Hence the expected number
of Poisson events up to time t that see the system in state 3 is λE(V3 (t)). By the same
argument, the expected number of Poisson events up to time t that see the system in
state j is λE(Vj (t)). The expected number of events up to time t in a PP(λ) is λt.
Hence the fraction of the Poisson events that see the system in state j is
λE(Vj(t))/(λt) = E(Vj(t))/t.
We now continue with the proof of Theorem 7.3. First we study the structure of
the process {X(t), t ≥ 0}.
The next lemma describes the probabilistic structure of the {X̂n , n ≥ 0} process.
Example 7.3 M/M/1/1 Queue. Verify Theorem 7.3 directly for the M/M/1/1
queue.
Using Lemma 7.2 we see that {X̂n, n ≥ 0} is a DTMC on {0, 1} with transition
probability matrix
P = RB = [ µ/(λ + µ)   λ/(λ + µ) ]
          [ µ/(λ + µ)   λ/(λ + µ) ].
Hence its limiting distribution is given by
π̂0 = µ/(λ + µ), π̂1 = λ/(λ + µ).
Using the results of Example 6.34 on page 241 we see that the limiting distribution
of the CTMC {X(t), t ≥ 0} is given by
p0 = µ/(λ + µ), p1 = λ/(λ + µ).
Thus Theorem 7.3 is verified.
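The verification is also easy to do numerically. The following Python sketch (an added illustration; λ and µ are assumed values) computes the stationary distribution of P = RB and compares it with (p0, p1) = (µ/(λ + µ), λ/(λ + µ)):

import numpy as np

lam, mu = 1.0, 2.0
P = np.array([[mu, lam],
              [mu, lam]]) / (lam + mu)       # P = RB from the example
w, v = np.linalg.eig(P.T)                    # left eigenvector for eigenvalue 1
pi_hat = np.real(v[:, np.argmax(np.real(w))])
pi_hat /= pi_hat.sum()
print(pi_hat, [mu / (lam + mu), lam / (lam + mu)])   # both equal (2/3, 1/3)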
Example 7.4 PASTA for M/M/1 and M/M/s Queues. Let X(t) be the number
of customers at time t in an M/M/1 queue with arrival rate λ and service rate µ > λ.
Let N (t) be the number of arrivals over (0, t]. Then {N (t), t ≥ 0} is a PP(λ). We
see that X and N processes satisfy the assumptions described in this section. Hence
we can apply PASTA (Theorem 7.3) to get
π̂j = pj , j ≥ 0.
Furthermore, the X process satisfies the conditions for Theorem 7.1. Thus
πj = πj∗ , j ≥ 0.
Finally, every arriving customer enters the system, hence
π̂j = πj∗ , j ≥ 0.
From Example 6.36 we see that this queue is stable, and using Equation 6.70 and the
above equations, we get
π̂j = πj∗ = πj = pj = (1 − ρ)ρ^j, j ≥ 0, (7.10)
where ρ = λ/µ < 1.
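The same conclusion can be observed in simulation. The sketch below (an added illustration; the rates and run length are assumed) simulates the M/M/1 queue and compares, for each state j, the long run fraction of time spent in j with the fraction of Poisson arrivals that find the system in j; both should approach (1 − ρ)ρ^j:

import random

random.seed(0)
lam, mu, T = 1.0, 2.0, 200_000.0
t, x, arrivals = 0.0, 0, 0
time_in, seen_by = {}, {}
while t < T:
    rate = lam + (mu if x > 0 else 0.0)   # total transition rate in state x
    dt = random.expovariate(rate)
    time_in[x] = time_in.get(x, 0.0) + dt
    t += dt
    if random.random() < lam / rate:      # the next event is an arrival
        seen_by[x] = seen_by.get(x, 0) + 1
        arrivals += 1
        x += 1
    else:                                 # the next event is a departure
        x -= 1
for j in range(4):
    print(j, time_in.get(j, 0.0) / t, seen_by.get(j, 0) / arrivals)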
Example 7.5 M/M/1/K System. Let X(t) be the number of customers at time
t in an M/M/1/K queue with arrival rate λ and service rate µ. Let N (t) be the
number of arrivals over (0, t]. An arriving customer enters if and only if the number
in the system is less than K when he arrives at the system. Then {N (t), t ≥ 0} is
a PP(λ). We see that X and N processes satisfy the assumptions described in this
section. Hence we can apply PASTA (Theorem 7.3) to get
π̂j = pj , 0 ≤ j ≤ K.
Furthermore, the X process satisfies the conditions for Theorem 7.1. Thus
πj = πj∗ , 0 ≤ j ≤ K − 1.
Finally, from Example 7.1 we get
πj∗ = π̂j / (1 − π̂K), 0 ≤ j ≤ K − 1.
Thus the arriving customers see the system in steady state, but not the entering cus-
tomers. This is because the arriving customers form a PP, but not the entering cus-
tomers. We shall compute the limiting distribution {pj , 0 ≤ j ≤ K} for this system
in Section 7.3.2.
We have proved PASTA in Theorem 7.3 for a {X(t), t ≥ 0} process that interacts
with a PP in a specific way, and behaves like a CTMC between the events of the PP.
However, PASTA is a very general property. We state the general result here, but omit
the proof. In almost all applications the version of PASTA given here is sufficient,
since almost all the processes we study in this book can be turned into CTMCs by
using phase-type distributions as we shall see in Section 7.6.
Let X(t) be the state of a system at time t, and let {N (t), t ≥ 0} be a PP that may
depend on the {X(t), t ≥ 0} process. However, we assume the following:
Now let B be a given set of states and let VB (t) be the time spent by the system
in the set B over (0, t], and AB (t) be the number of arrivals over (0, t] that see the
system in the set B,
AB(t) = Σ_{n=1}^{N(t)} 1{X(Sn−) ∈ B}, t ≥ 0.
Now suppose the sample paths of the {X(t), t ≥ 0} process are almost surely piecewise
continuous and have a finite number of jumps in finite intervals of time. Then the
following theorem gives the “almost sure” version of PASTA.
Theorem 7.4 PASTA. VB(t)/t converges almost surely if and only if AB(t)/N(t)
converges almost surely, and both have the same limit.
Note that PASTA, which equates the limit of VB(t)/t to that of AB(t)/N(t), holds
whenever the convergence holds. We use the tools developed in this book to show that
lim_{t→∞} VB(t)/t = lim_{t→∞} P(X(t) ∈ B),
and
lim_{t→∞} AB(t)/N(t) = lim_{n→∞} P(X(Sn−) ∈ B).
Thus according to PASTA, whenever the limits exist,
lim_{t→∞} P(X(t) ∈ B) = lim_{n→∞} P(X(Sn−) ∈ B).
Example 7.6 PASTA and the M/G/1 and G/M/1 Queues. Theorems 7.2 and
7.4 imply that for the M/G/1 queue
π̂j = πj∗ = πj = pj , j ≥ 0.
On the other hand, for the G/M/1 queue, we have
π̂j = πj∗ = πj , j ≥ 0.
However, PASTA does not apply unless the inter-arrival times are exponential. Thus
in general, for a G/M/1 queue, we do not have π̂j = pj.
In the rest of the section we shall make the statement of Little’s law more precise.
Consider a general queueing system where the customers arrive randomly, get served,
and then depart. Let X(t) be the number of customers in the system at time t, A(t) be
the number of arrivals over (0, t], and Wn be the time spent in the system by the
nth customer. We also refer to it as the waiting time or sojourn time. Now define the
following limits, whenever they exist. Note that these are defined for every sample
path of the queueing system.
L = lim_{t→∞} (1/t) ∫_0^t X(u) du, (7.12)
λ = lim_{t→∞} A(t)/t, (7.13)
W = lim_{n→∞} (1/n) Σ_{k=1}^{n} Wk. (7.14)
The next theorem states the relationship that binds the three limits defined above.
Theorem 7.5 Little’s Law. Suppose that for a fixed sample path the limits in Equa-
tions 7.13 and 7.14 exist and are finite. Then the limit in Equation 7.12 exists, and is
given by
L = λW.
There are many results that prove Little’s Law under less restrictive conditions.
We refer the readers to Wolff [1989], Heyman and Sobel [1982], and El-Taha
and Stidham [1998]. The condition in Equation 7.15 usually holds since we concentrate
on stable queueing systems, where the queue length has a non-defective limiting
distribution.
Note that as long as the service discipline is independent of the service times,
the {X(t), t ≥ 0} process does not depend on the service discipline. Hence L is
also independent of the service discipline. On the other hand, the {Wn , n ≥ 0}
process does depend upon the service discipline. However, Little’s Law implies that
the average wait is independent of the service discipline, since the quantities L and
λ are independent of the service discipline.
Example 7.7 The M/G/∞ Queue. Consider the infinite server queue of Exam-
ple 5.17 on page 173. In the queueing nomenclature introduced in Section 7.1 this is
an M/G/∞ queue. Verify Little’s law for this system.
From Example 5.17, we see that in steady state the number of customers in this
system is a P(λτ ) random variable, where λ is the arrival rate of customers, and τ is
the mean service time. Thus
L = λτ.
Since the system has an infinite number of servers, Wn, the time spent by the nth customer
in the system equals his service time. Hence
W = E(Wn ) = τ.
Thus Equation 7.11 is satisfied.
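The law can also be checked by simulation. The following sketch (an added illustration; λ = 2 and U(0, 1) service times are assumed, so τ = 0.5 and L = λW = 1) generates an M/G/∞ sample path and compares the time average of X(t) with λ times the average sojourn time:

import random

random.seed(1)
lam, T = 2.0, 100_000.0
t, events, waits = 0.0, [], []
while t < T:
    t += random.expovariate(lam)          # PP(lam) arrival epochs
    s = random.random()                   # service time ~ U(0,1), mean 0.5
    events += [(t, +1), (t + s, -1)]      # arrival and departure epochs
    waits.append(s)                       # sojourn time = service time here
events.sort()
area, x, last = 0.0, 0, 0.0
for time, delta in events:
    area += x * (time - last)             # accumulate the integral of X(u)
    x, last = x + delta, time
print(area / last, lam * sum(waits) / len(waits))   # L and lam * W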
7.3 Birth and Death Queues
Many queueing systems where customers arrive one at a time, form a single queue,
and get served one at a time, can be modeled by birth and death processes. See
Example 6.10 on page 200 for the definition, and Example 6.35 on page 242 for the
limiting behavior. We shall use these results in the rest of this section.
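Since each model below reduces to computing the quantities ρn of Example 6.35 and normalizing, we record a small Python helper (an added sketch; the function name is ours, not the book’s) that returns the limiting distribution of a finite birth and death process with birth rates lambdas[n] (out of state n) and death rates mus[n] (out of state n + 1):

def bd_limiting_distribution(lambdas, mus):
    # rho_0 = 1 and rho_n = prod_{i=0}^{n-1} lambdas[i] / mus[i];
    # then p_n = rho_n / sum_m rho_m (cf. Equations 6.68 and 6.69).
    rho, rhos = 1.0, [1.0]
    for l, m in zip(lambdas, mus):
        rho *= l / m
        rhos.append(rho)
    total = sum(rhos)
    return [r / total for r in rhos]

An infinite birth and death queue can be handled approximately by truncating the state-space at a level beyond which the probabilities are negligible.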
7.3.1 M/M/1 Queue
Consider an M/M/1 queue. Such a queue has a single server, infinite capacity wait-
ing room, and the customers arrive according to a PP(λ) and request iid exp(µ)
service times. Let X(t) be the number of customers in the system at time t. We have
seen in Example 6.11 on page 200 that {X(t), t ≥ 0} is a birth and death process
with birth rates
λn = λ, n ≥ 0
and death rates
µn = µ, n ≥ 1.
We saw in Example 6.36 on page 242 that this queue is stable if
ρ = λ/µ < 1.
The parameter ρ is called the traffic intensity of the queue, and it can be interpreted
as the expected number of arrivals during one service time. The system serves one
customer during one service time, and gets ρ new customers during this time on the
average. Thus if ρ < 1, the system can serve more customers than it gets, so it should
be stable. The stability condition can also be written as λ < µ. In this form it says that
the rate at which customers arrive is less than the rate at which they can be served, and
hence the system should be stable. Note that the system is perfectly balanced when
λ = µ, but it is unstable, since it has no spare service capacity to handle random
variation in arrivals and services. Example 6.36 shows that the limiting distribution
of X(t) in a stable M/M/1 queue is given by
pj = (1 − ρ)ρ^j, j ≥ 0. (7.17)
From Example 7.4 we get
π̂j = πj∗ = πj = pj = (1 − ρ)ρ^j, j ≥ 0.
We have
L = Σ_{j=0}^∞ j pj = ρ/(1 − ρ). (7.18)
Thus as ρ → 1, L → ∞. This is a manifestation of increasing congestion as ρ → 1.
Next we shall compute F (·), the limiting distribution of Wn , assuming that the
service discipline is FCFS. We have
F(x) = lim_{n→∞} P(Wn ≤ x)
= lim_{n→∞} Σ_{j=0}^∞ P(Wn ≤ x | Xn∗ = j) P(Xn∗ = j)
= lim_{n→∞} Σ_{j=0}^∞ πj∗ P(Wn ≤ x | Xn∗ = j).
This is always finite, hence the queue is always stable. Substituting in Equation 6.69
we get
pj = ((1 − ρ)/(1 − ρ^{K+1})) ρ^j if ρ ≠ 1, and pj = 1/(K + 1) if ρ = 1, 0 ≤ j ≤ K. (7.19)
Finally, from Example 7.5 we have
π̂j = pj , 0 ≤ j ≤ K
and
πj = πj∗ = pj/(1 − pK), 0 ≤ j ≤ K − 1.
The mean number of customers in the system in steady state is given by
L = Σ_{j=0}^{K} j pj = (ρ/(1 − ρ)) · ((1 − ρ^K)/(1 − ρ^{K+1})) − K pK. (7.20)
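Using the bd_limiting_distribution sketch above, Equations 7.19 and 7.20 can be checked numerically (λ, µ, and K are assumed values):

lam, mu, K = 1.0, 2.0, 5
rho = lam / mu
p = bd_limiting_distribution([lam] * K, [mu] * K)        # states 0, ..., K
formula = [(1 - rho) * rho**j / (1 - rho**(K + 1)) for j in range(K + 1)]
print(max(abs(a - b) for a, b in zip(p, formula)))       # ~ 0 (Equation 7.19)
L = sum(j * pj for j, pj in enumerate(p))
print(L, rho / (1 - rho) * (1 - rho**K) / (1 - rho**(K + 1)) - K * p[K])  # (7.20)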
Consider an M/M/s queue. Such a queue has s identical servers, infinite capacity
waiting room, and the customers arrive according to a PP(λ) and request iid exp(µ)
service times. The customers form a single line and the customer at the head of the
line is served by the first available server. If more than one server is idle when a
customer arrives, he goes to any one of the available servers. Let X(t) be the number
of customers in the system at time t. One can show that {X(t), t ≥ 0} is a birth and
death process with birth rates
λn = λ, n ≥ 0
and death rates
µn = min(n, s)µ, n ≥ 0.
We can use the results of Example 6.35 on page 242 to compute the limiting distri-
bution of the X(t). Using the traffic intensity parameter
ρ = λ/(sµ)
in Equation 6.68 we get
ρn = (s^{min(n,s)} / min(n, s)!) ρ^n, n ≥ 0.
We have
Σ_{n=0}^∞ ρn = Σ_{n=0}^{s−1} ρn + (s^s ρ^s)/(s! (1 − ρ)) if ρ < 1,
and Σ_{n=0}^∞ ρn = ∞ if ρ ≥ 1.
Hence the stability condition for the M/M/s queue is
ρ < 1.
This condition says that the queue is stable if the arrival rate λ is less than the maxi-
mum service rate sµ, which makes intuitive sense. From now on we assume that the
queue is stable. Using Equation 6.69 we get
p0 = (Σ_{n=0}^∞ ρn)^{−1} = [Σ_{n=0}^{s−1} ρn + (s^s ρ^s)/(s! (1 − ρ))]^{−1}
and
pn = ρn p0 = (s^{min(n,s)} / min(n, s)!) ρ^n p0, n ≥ 1.
The limiting probability that all servers are busy is given by
Σ_{n=s}^∞ pn = (s^s ρ^s)/(s! (1 − ρ)) p0 = ps/(1 − ρ).
The mean number of customers in the system can be shown to be
L = λ/µ + (ρ/(1 − ρ)^2) ps. (7.21)
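Equation 7.21 is easy to check numerically (an added sketch; λ, µ, s, and the truncation level 200 are assumed values):

from math import factorial

lam, mu, s = 3.0, 1.0, 5
rho = lam / (s * mu)
rho_n = lambda n: s**min(n, s) * rho**n / factorial(min(n, s))
p0 = 1.0 / (sum(rho_n(n) for n in range(s))
            + s**s * rho**s / (factorial(s) * (1 - rho)))
ps = rho_n(s) * p0
print(lam / mu + rho / (1 - rho)**2 * ps,                # Equation 7.21
      sum(n * rho_n(n) * p0 for n in range(200)))        # direct truncated sum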
In Example 6.6 on page 197 we modeled a workshop with two machines and one
repairperson. Here we consider a more general workshop with N machines and s
repairpersons. The lifetimes of the machines are iid exp(µ) random variables, and
the repair times are iid exp(λ) random variables, and are independent of the life
times. The machines are as good as new after repairs. The machines are repaired in
the order in which they fail. Let X(t) be the number of working machines at time
t. One can show that {X(t), t ≥ 0} is a birth and death process on {0, 1, 2, · · · , N }
with birth parameters
λn = min(N − n, s)λ, 0 ≤ n ≤ N − 1,
and death parameters
µn = nµ, 0 ≤ n ≤ N.
Since this is a finite state queue, it is always stable. One can compute the limiting
distribution using the results of Example 6.35 on page 242.
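With the bd_limiting_distribution sketch above the computation is immediate (N, s, λ, and µ are assumed values):

N, s, lam, mu = 10, 3, 1.0, 0.2          # repair rate lam, failure rate mu
birth = [min(N - n, s) * lam for n in range(N)]   # repairs completing
death = [n * mu for n in range(1, N + 1)]         # working machines failing
p = bd_limiting_distribution(birth, death)
print(sum(n * pn for n, pn in enumerate(p)))      # mean number of working machines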
Consider an M/M/1 queue of Section 7.3.1. Now suppose an arriving customer who
sees n customers in the system ahead of him joins the system with probability αn .
This is called balking behavior. Once he joins the system, he displays reneging
behavior as follows: He has a patience time that is an exp(θ) random variable. If his
service does not begin before his patience time expires, he leaves the system without
getting served (i.e., he reneges); else he completes his service and then departs. All
customer patience times are independent of each other.
Let X(t) be the number of customers in the system at time t. We can show that
{X(t), t ≥ 0} is a birth and death process with birth rates
λn = αn λ, n≥0
and death rates
µn = µ + (n − 1)θ, n ≥ 1.
(Why do we get (n − 1)θ and not nθ in the death parameters?) If θ > 0, such a queue
is always stable and its steady state distribution can be computed by using the results
of Example 6.35 on page 242.
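Numerically, the limiting distribution can be obtained from the product formula of Example 6.35 by truncating the state-space at a large level. A sketch (ours, not from the text), using the illustrative balking probabilities αn = 1/(n + 1):

    # Limiting distribution of the M/M/1 queue with balking and reneging,
    # computed by truncating the birth and death product formula.
    def balking_reneging_distribution(lam, mu, theta, n_max=500):
        rho_n, terms = 1.0, [1.0]
        for n in range(1, n_max + 1):
            birth = lam / n                # lambda_{n-1} = alpha_{n-1}*lam, alpha_n = 1/(n+1)
            death = mu + (n - 1) * theta   # death rate mu_n
            rho_n *= birth / death
            terms.append(rho_n)
        total = sum(terms)                 # truncation error negligible for large n_max
        return [t / total for t in terms]

    p = balking_reneging_distribution(lam=5.0, mu=1.0, theta=0.5)
    print(sum(p), p[:5])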
The above examples illustrate the usefulness of the birth and death processes in
modeling queues. Further examples are given in Modeling Exercises 7.1 - 7.5.
The queueing models in Section 7.3 are for single-station queues, i.e., there is a sin-
gle place where the service is rendered. Customers arrive at this facility, which may
have more than one server, get served once, and then depart.
In this and the next section we consider more complicated queueing systems called
queueing networks. A typical queueing network consists of several service stations,
or nodes. Customers form queues at each of these queueing stations. After complet-
ing service at a station the customer may depart from the system or join a queue at
some other service station. In open queueing networks, customers arrive at the nodes
from outside the system, visit the nodes in some order, and then depart. Thus the
total number of customers in an open queueing network varies over time. In closed
queueing networks there are no external arrivals or departures, thus the total number
of customers in a closed queueing network remains constant. We shall study open
queueing networks in this section and closed queueing networks in the next.
This simple patient flow model can be set up as a queueing network with five
nodes as shown in Figure 7.3. The arrows interconnecting the nodes show the patient
routing pattern. In this figure the customers can arrive at the system at two nodes: ad-
mitting and emergency. Customers can depart the system from three nodes: clinics,
emergency, and intensive care. Note that a customer can visit a node a random number
of times before departing the system, i.e., a queueing network can have cycles.
(Figure 7.3 shows the network; the labels appearing in it include Home, Returning patients, Reappointments, Intensive Care, and Discharged patients.)
All the above assumptions are crucial to the analysis of Jackson networks. Some as-
sumptions can be relaxed, but not the others. For example, in practice we would like
to consider finite capacity waiting rooms (assumption 3) or state-dependent routing
(assumption 5). However such networks are very difficult to analyze. On the other
hand certain types of state-dependent service and arrival rates can be handled fairly
easily. See Subsections 7.4.1 and 7.4.2.
Now let us study a Jackson network described above. Let Xi (t) be the number of
customers at node i at time t, (1 ≤ i ≤ N, t ≥ 0), and let
X(t) = [X1 (t), X2 (t), · · · , XN (t)]
be the state of the queueing network at time t. The state-space of the {X(t), t ≥ 0}
process is S = {0, 1, 2, · · ·}N . To understand the transitions in this process we need
some notation.
Let ei be an N -vector with a 1 in the ith coordinate and 0 in all others. Suppose
the system is in state x = [x1 , x2 , · · · , xN ] ∈ S. If an external arrival takes place at
node i, the system state changes to x + ei . If a customer completes service at node
i (this can happen only if xi > 0) and departs the system, the system state changes
from x to x − ei , and if the customer moves to node j, the system state changes from
x to x − ei + ej . It can be seen that {X(t), t ≥ 0} is a multi-dimensional CTMC
with the following transition rates:
q(x, x + ei ) = λi ,
q(x, x − ei ) = min(xi , si )µi ri ,
q(x, x − ei + ej ) = min(xi , si )µi rij , i ≠ j.
Hence we get
q(x, x) = −q(x) = − \sum_{i=1}^{N} λ_i − \sum_{i=1}^{N} \min(x_i, s_i)\,µ_i (1 − r_{ii}).
We now study node j in isolation. The input to node j consists of two parts: the
external input that occurs at rate λj , and the internal input originating from other
nodes (including node j) in the network. Let aj be the total arrival rate (external +
internal) to node j. In steady state (assuming it exists) the departure rate from node i
must equal the total input rate ai to node i. A fraction rij of the departing customers
goes to node j. Thus the internal input from node i to node j is ai rij . Thus in steady
state we must have
a_j = λ_j + \sum_{i=1}^{N} a_i r_{ij} , 1 ≤ j ≤ N,   (7.23)
which can be written in matrix form as
a(I − R) = λ.
If I − R is invertible, this has the unique solution
a = λ(I − R)^{-1}.   (7.24)
Note that the invertibility of I − R implies that no customer stays in the network
indefinitely.
Now consider an M/M/si queue with arrival rate ai and service rate µi . From the
results of Section 7.3.3 we see that such a queue is stable if ρi = ai /si µi < 1, and that
φi (n), the steady state probability that there are n customers in the system, is given by
φ_i(n) = \frac{s_i^{\min(n,s_i)}}{\min(n, s_i)!}\,\rho_i^n\, φ_i(0), n ≥ 0,
where
φ_i(0) = \left[ \sum_{n=0}^{s_i-1} \frac{s_i^n \rho_i^n}{n!} + \frac{s_i^{s_i}}{s_i!}\,\frac{\rho_i^{s_i}}{1-\rho_i} \right]^{-1}.
With these preliminaries we are ready to state the main result about the Jackson
networks below.
Theorem 7.6 Open Jackson Networks. The CTMC {X(t), t ≥ 0} is positive recurrent
if and only if
a_i < s_i µ_i , 1 ≤ i ≤ N,
in which case its limiting distribution is given by the product form
p(x) = \prod_{i=1}^{N} φ_i(x_i), x ∈ S.   (7.25)
However, the above equation holds in steady state since the left hand side is the
rate at which the customers enter the network, and the right hand side is the rate at
which they depart the network. We can also derive this equation from Equations 7.23
and 7.22. Thus the solution in Equation 7.25 satisfies the balance equation. Since
φi (0) > 0 if and only if ai < si µi , the condition of positive recurrence follows.
Also, the CTMC is irreducible. Hence there is a unique limiting distribution. Hence
the theorem follows.
The form of the distribution in Equation 7.25 is called the product form, for the
obvious reason. Theorem 7.6 says that, in steady state, the queue lengths at the N
nodes are independent random variables. Furthermore, node i behaves as if it is an
M/M/si queue with arrival rate ai and service rate µi . The phrase “behaves as if”
is important, since the process {Xi (t), t ≥ 0} is not a birth and death process of an
M/M/si queue. For example, in general, the total arrival process to node i is not a
PP(ai ). It just so happens that the steady-state distribution of {Xi (t), t ≥ 0} is the same
as that of an M/M/si queue. We illustrate the result of Theorem 7.6 with a few
examples.
Example 7.8 Single Queue with Feedback. The simplest queueing network is a
single station queue with feedback as shown in Figure 7.4. Customers arrive from
outside at this service station according to a PP(λ) and request iid exp(µ) services.
The service station has s identical servers. When a customer completes service, he
leaves the system with probability α. With probability 1 − α he rejoins the queue and
behaves as a new arrival.
service times at the ith node are iid exp(µi ) random variables. Customers complet-
ing service at node i join the queue at node i + 1, 1 ≤ i ≤ N − 1. Customers leave
the system after completing service at node N .
Example 7.10 Patient Flow. Consider the queueing network model of patient flow
as shown in Figure 7.3. Suppose external patients arrive at the admitting ward at a
rate of 4 per hour and at the emergency ward at a rate of 1/hr. The admissions desk
is managed by one secretary who processes an admission in five minutes on the av-
erage. The clinic is served by k doctors, (here k is to be decided on), and the average
consultation with a doctor takes 15 minutes. Generally, one out of every four patients
going through the clinic is asked to return for another check up in two weeks (336
hours). One out of every ten is sent to the intensive care unit from the clinic. The rest
are dismissed after consultations. Patients arriving at the emergency room require
on the average one hour of consultation time with a doctor. The emergency room is
staffed by m doctors, where m is to be decided on. One out of two patients in the
emergency ward goes home after treatment, whereas the other is admitted to the in-
tensive care unit. The average stay in the intensive care unit is four days, and there
are n beds available, where n is to be decided on. From the intensive care unit, 20% of
the customers go home, and the other 80% are given reappointments for follow up in
two weeks. Analyze this system assuming that the assumptions of Jackson networks
are satisfied.
The parameters of the N = 5 node Jackson network (using time units of hours)
are
λ1 = 4, λ2 = 1, λ3 = 0, λ4 = 0, λ5 = 0,
s1 = 1, s2 = m, s3 = k, s4 = n, s5 = ∞,
µ1 = 12, µ2 = 1, µ3 = 4, µ4 = 1/96, µ5 = 1/336,
r1 = 0, r2 = 0.5, r3 = 0.65, r4 = 0.2, r5 = 0.
The routing matrix is given by
R = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.1 & 0.25 \\ 0 & 0 & 0 & 0 & 0.80 \\ 1 & 0 & 0 & 0 & 0 \end{bmatrix}.
The traffic equations (Equation 7.23) are
a1 = 4 + a5
a2 = 1
a3 = a1
a4 = .5a2 + .1a3
a5 = .25a3 + .8a4 .
Solving the above equations we get
a1 = 6.567, a2 = 1, a3 = 6.567, a4 = 1.157, a5 = 2.567.
We use Theorem 7.6 to establish the stability of the network. Note that a1 < s1 µ1
and a5 < s5 µ5 . We must also have
1 = a2 < s2 µ2 = m,
6.567 = a3 < s3 µ3 = 4k,
1.157 = a4 < s4 µ4 = n/96.
These are satisfied if we have
m > 1, k > 1.642, n > 111.043.
Thus the hospital must have at least two doctors in the emergency room, at least two
in the clinics, and have at least 112 beds in the intensive care unit. So let us assume
the hospital uses two doctors each in the emergency room and the clinics, and has
120 beds. With these parameters, the steady state analysis of the queueing network
can be done by treating (1) the admissions queue as an M/M/1 with arrival rate
6.567 per hour, and service rate of 12 per hour; (2) the emergency ward queue as an
M/M/2 with arrival rate 1 per hour, and service rate of 1 per hour; (3) the clinic
queue as an M/M/2 queue with arrival rate 6.567 per hour and service rate of 4 per
hour per server; (4) the intensive care queue as an M/M/120 with arrival rate 1.157
per hour, and service rate of 1/96 per hour; and (5) the home queue as an M/M/∞
with arrival rate 2.567 per hour, and service rate of 1/336 per hour. Furthermore these
five queues are independent of each other in steady state.
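The arithmetic of this example is easy to reproduce; the sketch below (ours, not from the text) solves the traffic equations as a = λ(I − R)^{−1} and recovers the thresholds for k and n:

    import numpy as np

    # Traffic equations for the patient flow network of Example 7.10.
    R = np.array([[0, 0, 1, 0,   0   ],
                  [0, 0, 0, 0.5, 0   ],
                  [0, 0, 0, 0.1, 0.25],
                  [0, 0, 0, 0,   0.80],
                  [1, 0, 0, 0,   0   ]])
    lam = np.array([4.0, 1.0, 0.0, 0.0, 0.0])
    a = lam @ np.linalg.inv(np.eye(5) - R)
    print(np.round(a, 3))        # [6.567 1.    6.567 1.157 2.567]
    print(a[2] / 4, a[3] * 96)   # k > 1.642 clinic doctors, n > 111.04 beds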
In the Jackson network model we had assumed that the service rate at node i when
there are n customers at that node is given by min(si , n)µi . We define Jackson net-
works with state-dependent service by replacing assumption A2 by A2’ as follows:
A2’. The service rate at node i when there are n customers at that node is given by
µi (n), with µi (0) = 0 and µi (n) > 0 for n ≥ 1, 1 ≤ i ≤ N .
Note that the service rate at node i is not allowed to depend on the state of node
j ≠ i. Now define
φ_i(0) = 1, \qquad φ_i(n) = \prod_{j=1}^{n} \frac{a_i}{µ_i(j)} , n ≥ 1, 1 ≤ i ≤ N,   (7.26)
where ai is the total arrival rate to node i as given by Equation 7.24. Jackson networks
with state-dependent service also admit a product form solution as shown in the next
theorem.
Proof: Follows along the same lines as the proof of Theorem 7.6.
Thus, in steady-state, the queues at various nodes in a Jackson network with state-
dependent service are independent.
It is also possible to further generalize the model of Subsection 7.4.1 by allowing the
instantaneous arrival rate to the network to depend on the total number of customers
in the network. Specifically, we replace assumption A4 by A4’ as follows:
A4’. External customers arrive at the network at rate λ(n) when the total number
of customers in the network is n. An arriving customer joins node i with probability
ui , where
\sum_{i=1}^{N} u_i = 1.
The above assumption implies that the instantaneous external arrival rate to node
i is ui λ(n) if the total number of customers in the network is n. To keep the
{X(t), t ≥ 0} process irreducible, we assume that there is a K ≤ ∞ such that
λ(n) > 0 for 0 ≤ n < K, and λ(n) = 0 for n ≥ K. We call the Jackson networks
with assumptions A2 and A4 replaced by A2’ and A4’ “Jackson networks with state-
dependent arrivals and service.”
We shall see that such Jackson networks with state-dependent arrival and service
rates continue to have a kind of product form limiting distribution. However, the
queues at the various nodes are no longer independent. The results are given in the
next theorem. First, we need the following notation.
a_j = u_j + \sum_{i=1}^{N} a_i r_{ij} , 1 ≤ j ≤ N.
Let φi (n) be as defined in Equation 7.26 using the above {ai , 1 ≤ i ≤ N }. Also, for
x = [x1 , x2 , · · · , xN ] ∈ S, let
|x| = \sum_{i=1}^{N} x_i .
Thus if the state of the network is X(t), the total number of customers in it at time t
is |X(t)|.
Computation of the constant c is the hard part. There is a large literature on “product
form” queueing networks, and it is now more or less completely understood what
enables a network to have a “product form” solution. See Kelly (1979) and Walrand
(1988).
7.5 Closed Queueing Networks
In this section we consider the closed queueing networks. In these networks there
are no external arrivals to the network, and there are no departures from the network.
Thus the total number of customers in the network is constant. Closed queueing net-
works have been used to study population dynamics, multi-programmed computer
systems, telecommunication networks with window flow control, etc. We start with
the definition.
Now let us study a closed Jackson network described above. Let Xi (t) be the
number of customers at node i at time t, (1 ≤ i ≤ N, t ≥ 0), and let
X(t) = [X1 (t), X2 (t), · · · , XN (t)]
be the state of the queueing network at time t. As in the case of open Jackson net-
works, we see that {X(t), t ≥ 0} is a CTMC on state-space
S = \{ x = [x_1, x_2, · · · , x_N ] : x_i ≥ 0,\ \sum_{i=1}^{N} x_i = K \}.
Since the CTMC has finite state-space and is irreducible, it is positive recurrent. Let
p(x) = lim P(X(t) = x)
t→∞
= lim P(X1 (t) = x1 , X2 (t) = x2 , · · · , XN (t) = xN )
t→∞
be the limiting distribution. We need the following notation before we give the result
about p(x). Let π = [π1 , π2 , · · · , πN ] be the limiting distribution of the DTMC with
transition matrix R. Since R is assumed to be irreducible, π is the unique solution to
π = πR, \qquad \sum_{i=1}^{N} π_i = 1.   (7.27)
Next, define
φ_i(0) = 1, \qquad φ_i(n) = \prod_{j=1}^{n} \frac{π_i}{µ_i(j)} , 1 ≤ n ≤ K, 1 ≤ i ≤ N.   (7.28)
Theorem 7.9 Closed Jackson Networks. The limiting distribution of the CTMC
{X(t), t ≥ 0} is given by
p(x) = G_N(K) \prod_{i=1}^{N} φ_i(x_i), x ∈ S,   (7.29)
Proof: Follows by verifying that the solution in Equation 7.29 satisfies the balance
equation
q(x)p(x) = \sum_{y ∈ S : y ≠ x} p(y)q(y, x).
The verification proceeds along the same lines as that in the proof of Theorem 7.6.
Thus the closed Jackson network has a “product form” limiting distribution. The
hard part is the evaluation of G_N(K), the normalizing constant. The computation is
difficult since the size of the state-space grows exponentially in N and K: it has
\binom{N+K-1}{K} elements. A recursive method of computing G_N(K) for closed Jackson
networks of single-server queues is described in the next example.
(Figure: a closed network of single-server stations with service rates µ1 , µ2 , . . . , µN .)
We leave it to the readers to verify that the generating function of G_N(K)^{-1} is given
by
\tilde G_N(z) = \sum_{K=0}^{\infty} G_N(K)^{-1} z^K = \prod_{i=1}^{N} \frac{1}{1 - ρ_i z} .
This system can be modeled by a closed Jackson network as shown in Figure 7.7.
The parameters of this network are
(Figure 7.7: a central-server model in which jobs circulate among the CPU, the disc drive, and the printer; from the CPU a job moves to one peripheral with probability α and to the other with probability 1 − α.)
µ_i(n) = µ_i , i = 1, 2, 3; n ≥ 1,
R = \begin{bmatrix} 0 & α & 1-α \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}.
Thus the solution to Equation 7.27 is given by
π1 = 0.5, π2 = 0.5α, π3 = 0.5(1 − α).
Using
ρ_1 = \frac{1}{2µ_1} , \qquad ρ_2 = \frac{α}{2µ_2} , \qquad ρ_3 = \frac{1-α}{2µ_3} ,
we get
φ_i(n) = ρ_i^n .
Thus
p(x_1, x_2, x_3) = G_3(K)\, ρ_1^{x_1} ρ_2^{x_2} ρ_3^{x_3} , x_1 + x_2 + x_3 = K.
The constant G3 (K) can be computed by using the method of Example 7.11.
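Since Theorem 7.9 writes p(x) = G_3(K) ρ_1^{x_1} ρ_2^{x_2} ρ_3^{x_3}, the constant G_3(K) is the reciprocal of C(K) = Σ_{|x|=K} ρ_1^{x_1} ρ_2^{x_2} ρ_3^{x_3}, and C(K) can be accumulated one node at a time by a convolution, in the spirit of Buzen's algorithm. A sketch (ours, not from the text), with illustrative parameter values:

    # Normalizing constant G_N(K) = 1 / C(K) for a closed network of
    # single-server nodes, where C(K) = sum over |x| = K of prod rho_i^{x_i}.
    def normalizing_constant(rhos, K):
        C = [1.0] + [0.0] * K            # network with no nodes
        for rho in rhos:                 # add one node at a time (convolution)
            for k in range(1, K + 1):
                C[k] += rho * C[k - 1]
        return 1.0 / C[K]

    alpha, mu1, mu2, mu3 = 0.5, 1.0, 2.0, 3.0    # illustrative parameters
    rhos = [0.5 / mu1, 0.5 * alpha / mu2, 0.5 * (1 - alpha) / mu3]
    print(normalizing_constant(rhos, K=5))       # G_3(5)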
The throughput of the system is defined as the rate at which jobs get completed in
steady state. In our system, if the system is in state (x1 , x2 , x3 ) with x2 > 0, jobs get
completed at rate µ2 β. Hence we have
throughput = µ_2 β \sum_{x ∈ S : x_2 > 0} p(x_1, x_2, x_3).
Closed queueing systems have been found to be highly useful models of computing
systems, and there is a large literature in this area. See Gelenbe and Pujolle
(1987) and Sauer and Chandy (1981).
7.6 Single Server Queues
So far we have studied queueing systems that are described by CTMCs. In this sec-
tion we study single server queues where either the service times or the interarrival
times are non-exponential, making the queue length process non-Markovian.
We study an M/G/1 queue where customers arrive according to a PP(λ) and form
a single queue in an infinite waiting room in front of a single server and demand iid
service times with common cdf G(·), with mean τ and variance σ 2 . Let X(t) be the
number of customers in the system at time t. The stochastic process {X(t), t ≥ 0}
is a CTMC if and only if the service times are exponential random variables. Thus in
general we cannot use the theory of CTMCs to study
pj = lim P(X(t) = j), j ≥ 0,
t→∞
in an M/G/1 queue.
Recall the definitions of Xn , Xn∗ , X̂n , πj , πj∗ , and π̂j from Section 7.1. Since
every arriving customer joins the system, the {X(t), t ≥ 0} process jumps up and
down by one at a time, and the arrival process is Poisson, we can use Theorems 7.3,
7.2 and 7.4 to get
π̂j = πj = πj∗ = pj , j ≥ 0.
Thus we can compute the limiting distribution of {X(t), t ≥ 0} by studying the
limiting distribution of {Xn , n ≥ 0}. This is possible to do, since the next theorem
shows that {Xn , n ≥ 0} is a DTMC.
Proof: Let An be the number of arrivals to the queueing system during the nth ser-
vice time. Since the service times are iid random variables with common distribution
G(·), and the arrival process is PP(λ), we see that {An , n ≥ 1} is a sequence of iid
random variables with common pmf
P(A_n = i) = P(i \text{ arrivals during a service time})
           = \int_0^{\infty} P(i \text{ arrivals during a service time of duration } t)\, dG(t)
           = \int_0^{\infty} e^{-λt}\, \frac{(λt)^i}{i!}\, dG(t)
           = α_i .
Now, if Xn > 0, the (n + 1)st service time starts immediately after the nth departure,
and during that service time An+1 customers join the system. Hence after the (n + 1)st
departure there are Xn + An+1 − 1 customers left in the system. On the other
hand, if Xn = 0, the (n + 1)st service time starts immediately after the (n + 1)st
arrival, and during that service time An+1 customers join the system. Hence after the
(n + 1)st departure there are An+1 customers left in the system. Combining these
two observations, we get
X_{n+1} = \begin{cases} A_{n+1} & \text{if } X_n = 0, \\ X_n - 1 + A_{n+1} & \text{if } X_n > 0. \end{cases}   (7.32)
This is identical to Equation 2.9 derived in Example 2.16 on page 19 if we define
Yn = An+1 . The result then follows from the results in Example 2.16. The DTMC
is irreducible and aperiodic since αi > 0 for all i ≥ 0.
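The recursion in Equation 7.32 is also easy to simulate; the sketch below (ours, not from the text, with exp(µ) service so the answer is known in advance) estimates the fraction of departures that leave an empty system:

    import numpy as np

    # Simulate the embedded chain of Equation 7.32 for an M/G/1 queue.
    rng = np.random.default_rng(0)

    def simulate_embedded_mg1(lam, mu, steps=200_000):
        x, zeros = 0, 0
        for _ in range(steps):
            a = rng.poisson(lam * rng.exponential(1 / mu))  # arrivals in one service
            x = a if x == 0 else x - 1 + a                  # Equation 7.32
            zeros += (x == 0)
        return zeros / steps

    print(simulate_embedded_mg1(0.5, 1.0), "vs", 1 - 0.5)  # pi_0 = 1 - rho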
The next theorem gives the result about the limiting distribution of {Xn , n ≥ 0}.
Proof: Since {Xn , n ≥ 0} is the DTMC studied in Example 2.16 on page 19, we
can use the results about its limiting distribution from Example 4.23 on page 120.
From there we see that the DTMC is positive recurrent if and only if
\sum_{k=0}^{\infty} k α_k < 1.
Substituting from Equation 7.31 we get
\sum_{k=0}^{\infty} k α_k = \sum_{k=0}^{\infty} k \int_0^{\infty} e^{-λt}\,\frac{(λt)^k}{k!}\, dG(t)
                = \int_0^{\infty} e^{-λt} \left( \sum_{k=0}^{\infty} k\,\frac{(λt)^k}{k!} \right) dG(t)
                = \int_0^{\infty} λt\, dG(t) = λτ = ρ.
Thus the DTMC is positive recurrent if and only if ρ < 1. From Equation 4.44 (using
ρ in place of µ) we get
φ(z) = (1 - ρ)\, \frac{ψ(z)(1 - z)}{ψ(z) - z} ,   (7.34)
where
ψ(z) = \sum_{k=0}^{\infty} α_k z^k .
Substituting for α_k from Equation 7.31 in the above equation,
ψ(z) = \sum_{k=0}^{\infty} z^k \int_0^{\infty} e^{-λt}\,\frac{(λt)^k}{k!}\, dG(t)
     = \int_0^{\infty} e^{-λt} \left( \sum_{k=0}^{\infty} \frac{(zλt)^k}{k!} \right) dG(t)
     = \int_0^{\infty} e^{-λt} e^{λzt}\, dG(t)
     = \int_0^{\infty} e^{-λ(1-z)t}\, dG(t) = \tilde G(λ - λz).
Substituting in Equation 7.34 we get Equation 7.33. This proves the theorem.
One immediate consequence of Equation 7.33 is that the probability that the server
is idle in steady state can be computed as
p0 = π0 = φ(0) = 1 − ρ. (7.35)
Also, since pj = πj for all j ≥ 0, φ(z) in Equation 7.33 is also the generating func-
tion of the limiting distribution of the {X(t), t ≥ 0} process. Using Equation 7.33
we can compute the expected number of customers in the system in steady state as
given in the following theorem.
L = ρ + \frac{1}{2} \cdot \frac{ρ^2}{1-ρ} \left( 1 + \frac{σ^2}{τ^2} \right),   (7.36)
where τ and σ^2 are the mean and variance of the service time.
Proof: We have
L = \lim_{t\to\infty} E(X(t)) = \sum_{j=0}^{\infty} j p_j = \sum_{j=0}^{\infty} j π_j = \frac{dφ(z)}{dz}\bigg|_{z=1} .
The theorem follows after evaluating the last expression in a straightforward fashion.
This involves using L’Hopital’s rule twice.
Hence we get
p_j = (1 − ρ)ρ^j , j ≥ 0.
This matches with the result in Example 6.36 on page 242, as expected.
Example 7.14 The M/Ek /1 Queue. Suppose the service times are iid Erl(k, µ).
Then the M/G/1 queue reduces to the M/Ek /1 queue. In this case we get
τ = \frac{k}{µ} , \qquad σ^2 = \frac{k}{µ^2} .
The queue is stable if
ρ = λτ = \frac{kλ}{µ} < 1.
Assuming the queue is stable, the expected number in steady state can be computed
by using Equation 7.36 as
L = ρ + \frac{1}{2} \cdot \frac{ρ^2}{1-ρ} \cdot \frac{k+1}{k} .
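In code, Equation 7.36 and the M/Ek /1 expression above agree, as the following sketch (ours, not from the text) verifies:

    # Pollaczek-Khinchine mean formula, Equation 7.36.
    def pk_mean_number(lam, tau, sigma2):
        rho = lam * tau
        assert rho < 1, "unstable queue"
        return rho + 0.5 * (rho ** 2 / (1 - rho)) * (1 + sigma2 / tau ** 2)

    lam, mu, k = 0.5, 4.0, 3
    tau, sigma2 = k / mu, k / mu ** 2      # Erl(k, mu) service times
    rho = lam * tau
    print(pk_mean_number(lam, tau, sigma2))
    print(rho + 0.5 * (rho ** 2 / (1 - rho)) * (k + 1) / k)   # same value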
A large number of variations of the M/G/1 queue have been studied in the literature.
See Modeling Exercises 7.11 and 7.13.
Next we study the waiting times (this includes time in service) in an M/G/1 queue
assuming FCFS service discipline. Let Fn (·) be the cdf of Wn , the waiting time of
the nth customer. Let
F̃n (s) = E(e−sWn ).
The next theorem gives the Laplace Stieltjes transform (LST) of the waiting time in
steady state, defined as
F̃ (s) = lim F̃n (s).
n→∞
Theorem 7.13 Waiting Times in an M/G/1 Queue. The LST of the waiting time
in steady state in a stable M/G/1 queue with FCFS service discipline is given by
\tilde F(s) = (1 - ρ)\, \frac{s \tilde G(s)}{s - λ(1 - \tilde G(s))} .   (7.37)
Proof: Let An be the number of arrivals during the nth customer’s waiting time in the
system. Since Xn is the number of customers left in the system after the nth depar-
ture, the assumption of FCFS service discipline implies that Xn = An . The Poisson
assumption implies that (see the derivation of ψ(z) in the proof of Theorem 7.11)
E(z^{A_n}) = \tilde F_n(λ - λz).
Hence
φ(z) = \lim_{n\to\infty} E(z^{X_n}) = \lim_{n\to\infty} E(z^{A_n}) = \lim_{n\to\infty} \tilde F_n(λ - λz) = \tilde F(λ - λz).
Setting s = λ − λz and substituting Equation 7.33 for φ now yields Equation 7.37.
Now we study a G/M/1 queue where customers arrive one at a time and the inter-
arrival times are iid random variables with common cdf G(·), with G(0) = 0 and
mean 1/λ. The arriving customers form a single queue in an infinite waiting room in
front of a single server and demand iid exp(µ) service times. Let X(t) be the num-
ber of customers in the system at time t. The stochastic process {X(t), t ≥ 0} is a
CTMC if and only if the interarrival times are exponential random variables. Thus
in general we cannot use the theory of CTMCs to study the limiting behavior of X(t).
Recall the definitions of Xn , Xn∗ , X̂n , πj , πj∗ , and π̂j from Section 7.1. Since
every arriving customer joins the system, and the {X(t), t ≥ 0} process jumps up
and down by one at a time, we can use Theorems 7.3 and 7.2 to get
π̂j = πj = πj∗ , j ≥ 0.
However, unless the interarrival times are exponential, the arrival process is not a PP,
and hence π̂j 6= pj . The next theorem shows that {Xn∗ , n ≥ 0} is a DTMC.
Proof: Let Dn be the number of departures that can occur (assuming there are
enough customers in the system) during the nth interarrival time. Since the interar-
rival times are iid random variables with common distribution G(·), and the service
times are iid exp(µ), we see that {Dn , n ≥ 1} is a sequence of iid random variables
with common pmf
P(D_n = i) = P(i \text{ possible departures during an interarrival time})
           = \int_0^{\infty} e^{-µt}\, \frac{(µt)^i}{i!}\, dG(t)
           = α_i .
Now, the nth arrival sees Xn∗ customers in the system. Hence there are Xn∗ + 1
customers in the system after the nth customer enters. If Dn+1 < Xn∗ + 1, the
(n + 1)st arrival will see Xn∗ + 1 − Dn+1 customers in the system, else there will be
no customers in the system when the next arrival occurs. Hence we get
X_{n+1}^* = \max\{X_n^* + 1 - D_{n+1},\, 0\}.
This is identical to Equation 2.11 derived in Example 2.17 on page 20 if we define
Yn = Dn+1 . The result then follows from the results in Example 2.17. The DTMC
is irreducible and aperiodic since αi > 0 for all i ≥ 0.
The next theorem gives the result about the limiting distribution of {Xn∗ , n ≥ 0}.
Proof: Since {Xn∗ , n ≥ 0} is the DTMC studied in Example 2.17 on page 20, we
can use the results about its limiting distribution from Example 4.24 on page 121.
From there we see that the DTMC is positive recurrent if and only if
\sum_{k=0}^{\infty} k α_k > 1.
Substituting from Equation 7.39 we get
\sum_{k=0}^{\infty} k α_k = \frac{µ}{λ} .
Thus the DTMC is positive recurrent if and only if µ/λ > 1, i.e., ρ = λ/µ < 1. Let
ψ(z) = \sum_{i=0}^{\infty} α_i z^i .
In the next theorem we study the limiting distribution of waiting times (this in-
cludes time in service) in a G/M/1 queue assuming FCFS service discipline.
Theorem 7.17 Waiting Times in a G/M/1 Queue. The limiting distribution of the
waiting time in a stable G/M/1 queue with FCFS service discipline is given by
F(x) = \lim_{n\to\infty} P(W_n ≤ x) = 1 - e^{-µ(1-α)x} , x ≥ 0.   (7.44)
Proof: The waiting time of a customer who sees j customers ahead of him is an
Erlang(j + 1, µ) random variable. Using this we get
F(x) = \lim_{n\to\infty} \sum_{j=0}^{\infty} P(W_n ≤ x \mid X_n^* = j)\, P(X_n^* = j)
     = \sum_{j=0}^{\infty} π_j^*\, P(\mathrm{Erl}(j + 1, µ) ≤ x).
The rest of the proof follows along the same lines as in the case of the M/M/1
queue, after substituting π_j^* = (1 − α)α^j .
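The constant α appearing here is, by the standard limiting-distribution result for {Xn∗ , n ≥ 0} (whose statement is omitted above), the unique root in (0, 1) of α = G̃(µ − µα), where G̃ is the LST of the interarrival time cdf. The root is easily found by fixed-point iteration; a sketch (ours, not from the text), applied for illustration to the D/M/1 queue:

    import math

    # Solve alpha = G~(mu - mu*alpha) by fixed-point iteration.
    def gm1_alpha(lst, mu, alpha=0.5, tol=1e-12, max_iter=100_000):
        for _ in range(max_iter):
            new = lst(mu * (1 - alpha))
            if abs(new - alpha) < tol:
                return new
            alpha = new
        raise RuntimeError("no convergence")

    lam, mu = 0.8, 1.0                    # D/M/1: interarrival times fixed at 1/lam
    alpha = gm1_alpha(lambda s: math.exp(-s / lam), mu)
    print(alpha)
    print(1 / (mu * (1 - alpha)))         # mean wait in system, from Equation 7.44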
We have already seen the M/M/1/1 retrial queue in Example 6.15 on page 202. In
this section we generalize it to the M/G/1/1 retrial queue. We describe the model below.
Customers arrive from outside to a single server according to a PP(λ) and require
iid service times with common distribution G(·) and mean τ . There is room only for
the customer in service. Thus the capacity is 1, hence the M/G/1/1 nomenclature. If
an arriving customer finds the server idle, he immediately enters service. Otherwise
he joins the “orbit,” where he stays for an exp(θ) amount of time (called the retrial
time) independent of his past and the other customers. At the end of the retrial time
he returns to the server, and behaves like a new customer. He persists in conducting
retrials until he is served, after which he exits the system. A block diagram of this
queueing system is shown in Figure 7.8.
(Figure 7.8: an arriving customer who finds the single server busy joins the orbit and returns to the server later.)
Let X(t) be the number of customers in the system (those in service + those in
orbit) at time t. Note that {X(t), t ≥ 0} is not a CTMC. It has jumps of size +1
when a new customer arrives, and of size -1 when a customer completes service.
Since every arriving customer enters the system (either service or the orbit), and the
arrival process is Poisson, we have
π̂j = πj∗ = πj = pj , j ≥ 0.
Thus we can study the limiting behavior of the {X(t), t ≥ 0} by studying the
{Xn , n ≥ 0} process at departure points, since, as the next theorem shows, it is a
DTMC.
Proof: Let An be the number of arrivals to the queueing system during the nth ser-
vice time. Since the service times are iid random variables with common distribution
G(·), and the arrival process is PP(λ), we see that {An , n ≥ 1} is a sequence of iid
random variables. Now, immediately after a service completion, the server is idle.
Hence Xn represents the number of customers in the orbit when the nth service
completion occurs. Each of these customers will conduct a retrial after iid exp(θ)
times. Also, a new arrival will occur after an exp(λ) amount of time. Hence the next
service request will occur after an exp(λ + θXn ) amount of time. With probability
θXn /(λ + θXn ) this request is from a customer from the orbit, and with probability
λ/(λ + θXn ), it is from a new customer. The (n + 1)st service starts when this re-
quest arrives, during which An+1 new customers arrive and join the orbit. Hence the
system dynamics is given by
X_{n+1} = \begin{cases} X_n + A_{n+1} & \text{with probability } λ/(λ + θX_n), \\ X_n + A_{n+1} - 1 & \text{with probability } θX_n/(λ + θX_n). \end{cases}   (7.45)
Since An+1 is independent of history, the above recursion implies that {Xn , n ≥ 0}
is a DTMC. Irreducibility and aperiodicity are obvious.
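The recursion in Equation 7.45 can be simulated directly; a sketch (ours, not from the text), with exp(µ) service times for concreteness:

    import numpy as np

    # Simulate the embedded retrial-queue chain of Equation 7.45.
    rng = np.random.default_rng(1)

    def simulate_retrial(lam, mu, theta, steps=100_000):
        x, total = 0, 0
        for _ in range(steps):
            a = rng.poisson(lam * rng.exponential(1 / mu))  # arrivals in one service
            if x == 0 or rng.random() < lam / (lam + theta * x):
                x += a          # the next service request is a new customer
            else:
                x += a - 1      # the next request is a retrial from the orbit
            total += x
        return total / steps    # mean number in system at departure points

    print(simulate_retrial(lam=0.5, mu=1.0, theta=0.7))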
The next theorem gives the generating function of the limiting distribution of
{Xn , n ≥ 0}.
One immediate consequence of Equation 7.46 is that the probability that the sys-
tem is empty in steady state can be computed as
p_0 = π_0 = φ(0) = (1 - ρ)\, \exp\!\left( -\frac{λ}{θ} \int_0^1 \frac{1 - \tilde G(λ - λu)}{\tilde G(λ - λu) - u}\, du \right).
However, this is not the same as the server being idle, since the server can be idle
even if the system is not empty. We can use a Little’s law type argument to show that
the server is idle in steady state with probability 1 − ρ, independent of θ! But this
simple fact cannot be deduced by using the embedded DTMC.
Now, since pj = πj for all j ≥ 0, φ(z) in Equation 7.46 is also the generating func-
tion of the limiting distribution of the {X(t), t ≥ 0} process. Using Equation 7.46
we can compute the expected number of customers in the system in steady state as
given in the following theorem.
Proof: We have
L = \lim_{t\to\infty} E(X(t)) = \sum_{j=0}^{\infty} j p_j = \sum_{j=0}^{\infty} j π_j = \frac{dφ(z)}{dz}\bigg|_{z=1} .
The theorem follows after evaluating the last expression in a straightforward fashion.
This involves using L’Hopital’s rule twice.
We have seen the M/M/s queue in Subsection 7.3.3 and the M/M/∞ queue in
Subsection 7.3.4. Unfortunately, the M/G/s queue proves to be intractable. Surpris-
ingly, the M/G/∞ queue can be analyzed very easily. We present that analysis here.
Consider an infinite server queue where customers arrive according to a PP(λ), and
request iid service times with common cdf G(·) and mean τ . Let X(t) be the number
of customers in such a queue at time t. Suppose X(0) = 0. We have analyzed this
process in Example 5.17 on page 173. Using the analysis there we see that X(t) is a
Poisson random variable with mean λm(t), where
m(t) = \int_0^t (1 - G(u))\, du.
We have
\lim_{t\to\infty} m(t) = τ.
Hence the limiting distribution of X(t) is P(λτ ). Note that this limiting distribution
holds even if X(0) > 0, since all the initial customers will eventually leave, and do
not affect the newly arriving customers.
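The convergence m(t) → τ is easy to check numerically; a sketch (ours, not from the text) for Erl(2, µ) service times, whose survival function is (1 + µu)e^{−µu}:

    import math

    # m(t) = integral_0^t (1 - G(u)) du for Erl(2, mu) service (tau = 2/mu).
    def m(t, mu=1.0, n=100_000):
        h = t / n
        total = sum((1 + mu * (i + 0.5) * h) * math.exp(-mu * (i + 0.5) * h)
                    for i in range(n))              # midpoint rule
        return h * total

    for t in [1, 5, 20]:
        print(t, m(t))   # approaches tau = 2.0; hence X(t) -> P(2*lambda)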
We conclude this chapter with the remark that it is possible to analyze a G/M/s
queue with an embedded DTMC. The G/M/∞ queue can be analyzed by the
methods of renewal processes, to be developed in the next chapter. Note that the
{X(t), t ≥ 0} processes studied in the last three sections are not CTMCs. What kind
of processes are these? The search for the answer to this question will lead us into
renewal theory, regenerative processes, and Markov regenerative processes. These
topics will be covered in the next two chapters.
7.8 Modeling Exercises
7.1 Customers arrive at a taxi stand according to a PP(λ). If a taxi is waiting at the
taxi stand, the customer immediately hires it and leaves the taxi stand in the taxi. If
there are no taxis available, the customer waits. There is essentially infinite waiting
room for the customers. Independently of the customers, taxis arrive at the taxi stand
according to a PP(µ). If a taxi arriving at the taxi stand finds that no customer is
waiting, it leaves immediately. Model this system as an M/M/1 queue, and specify
its parameters.
7.2 A machine produces items one at a time according to a PP(λ). These items are
stored in a warehouse of infinite capacity. Demands arise according to a PP(µ). If
there is an item in the warehouse when a demand arises, an item is immediately
removed to satisfy the demand. Any demand that occurs when the warehouse is
empty is lost. Let X(t) be the number of items in the warehouse at time t. Model the
{X(t), t ≥ 0} process as a birth and death process.
7.3 Customers arrive at a bank according to a PP(λ). The service times are iid
exp(µ). The bank follows the following policy: when there are fewer than four cus-
tomers in the bank, only one teller is active; for four to nine customers, the bank uses
two tellers; and beyond nine customers there are three tellers. Model the number of
customers in the bank as a birth and death process.
7.4 Customers arrive according to a PP(λ) at a single-server station and demand iid
exp(µ) service times. When a customer completes his service, he departs with proba-
bility α, or rejoins the queue instantaneously with probability 1 − α, and behaves like
a new customer. Model the number of customers in
the system as a birth and death process.
7.5 Consider a grocery store checkout queue. When the number of customers in the
line is three or less, the checkout person does the pricing as well as bagging, taking
exp(µ1 ) time. When there are more than three customers in the line, a bagger comes
to help, and the service rate increases to µ2 > µ1 , i.e., the reduced service times are
now iid exp(µ2 ). Assume that customers join the checkout line according to a PP(λ).
Model the number of customers in the checkout line as a birth and death process.
7.6 Consider a single server queue subject to breakdowns and repairs as follows: the
server stays functional for an exp(θ) amount of time and then fails. The repair time
is exp(α). The successive up and down times are iid. However, the server is subject
to failures only when it is serving a customer. The service times are iid exp(µ).
Assume that the failure does not cause any loss of work. Thus if a customer service
is interrupted by failure, the service simply resumes after the server is repaired. Let
X(t) be the number of customers in this system at time t. Model {X(t), t ≥ 0} as
an M/G/1 queue by identifying the correct service time distribution G.
7.7 Consider a single server queue that serves customers from k independent
sources. Customers from source i arrive according to a PP(λi ) and need iid exp(µi )
service times. They form a single queue and are served in an FCFS fashion. Let X(t)
be the number of customers in the system at time t. Show that {X(t), t ≥ 0} is the
queue-length process in an M/G/1 queue. Identify the service distribution G.
7.8 Redo the problem in Modeling Exercise 7.6 assuming that the server can fail
even when it is not serving any customers. Is {X(t), t ≥ 0} the queue length process
of an M/G/1 queue? Explain. Let Xn be the number of customers in the system
after the nth departure. Show that {Xn , n ≥ 0} is a DTMC and display its transition
probability matrix.
7.9 Consider the {X(t), t ≥ 0} process described in Modeling Exercise 7.2 with
the following modification: the machine produces items in a deterministic fashion at
a rate of one item per unit time. Model {X(t), t ≥ 0} as the queue length process in
a G/M/1 queue.
7.11 Consider an M/G/1 queue where the server goes on vacation if the system is
empty upon service completion. If the system is empty upon return from the vacation,
the server goes on another vacation; else he starts serving the customers in the system
one by one. Successive vacation times are iid. Let Xn be the number of customers in
the system after the nth customer departs. Show that {Xn , n ≥ 0} is a DTMC.
7.12 A service station is staffed with two identical servers. Customers arrive ac-
cording to a PP(λ). The service times are iid with common distribution exp(µ) at
either server. Consider the following two routing policies
1. Each customer is randomly assigned to one of the two servers with equal proba-
bility.
2. Customers are alternately assigned to the two servers.
Once a customer is assigned to a server he stays in that line until served. Let Xi (t) be
the number of customers in line for the ith server. Is {Xi (t), t ≥ 0} the queue-length
process of an M/M/1 queue or a G/M/1 queue under the two routing schemes?
Identify the parameters of the queues.
7.13 Consider the following variation of an M/G/1 queue: All customers have iid
service times with common cdf G, with mean τ_G and variance σ_G^2 . However the
customers who enter an empty system have a different service time cdf H with mean
τ_H and variance σ_H^2 . Let X(t) be the number of customers at time t. Is {X(t), t ≥ 0}
a CTMC? If yes, give its generator matrix. Let Xn be the number of customers in the
system after the nth departure. Is {Xn , n ≥ 0} a DTMC? If yes, give its transition
probability matrix.
7.15 Suppose the customers that cannot enter an M/M/1/1 queue (with arrival
rate λ and service rate µ) enter service at another single server queue with infinite
waiting room. This second queue is called an overflow queue. The service times
at the overflow queue are iid exp(θ) random variables. Let X(t) be the number of
customers at the overflow queue at time t. Model the overflow queue as a G/M/1
queue. What is the LST of the interarrival time distribution to the overflow queue?
7.9 Computational Exercises
7.1 Show that the variance of the number of customers in steady state in a stable
M/M/1 system with arrival rate λ and service rate µ is given by
σ^2 = \frac{ρ}{(1-ρ)^2} ,
where ρ = λ/µ.
7.2 Let X q (t) be the number of customers in the queue (not including any in
service) at time t in an M/M/1 queue with arrival rate λ and service rate µ. Is
{X q (t), t ≥ 0} a CTMC? Compute the limiting distribution of X q (t) assuming
λ < µ. Show that the expected number of customers in the queue (not including the
customer in service) is given by
L^q = \frac{ρ^2}{1-ρ} .
7.3 Let Wnq be the time spent in the queue (not including time in service) by the
nth arriving customer in an M/M/1 queue with arrival rate λ and service rate µ.
Compute the limiting distribution of Wnq assuming λ < µ. Compute W q , the limiting
expected value of Wnq as n → ∞. Using the results of Computational Exercise 7.2
show that Lq = λW q . Thus Little’s law holds when applied to the customers in the
queue.
7.4 Let X(t) be the number of customers in the system at time t in an M/M/1
queue with arrival rate λ and service rate µ > λ. Let
T = inf{t ≥ 0 : X(t) = 0}.
T is called the busy period. Compute E(T |X(0) = i).
7.5 Let T be as in Computational Exercise 7.4. Let N be the total number of cus-
tomers served during (0, T ]. Compute E(N |X(0) = i).
7.6 Customers arrive according to PP(λ) to a queueing system with two servers.
The ith server (i = 1, 2) needs exp(µi ) amount of time to serve one customer. Each
incoming customer is routed to server 1 with probability p1 or to server 2 with prob-
ability p2 = 1 − p1 , independently. Queue jumping is not allowed. Find the optimum
routing probabilities that will minimize the expected total number of customers in
the system in steady state.
7.7 Consider a stable M/M/1 queue with the following cost structure. A customer
who sees i customers ahead of him when he joins the system costs $ci to the system.
The system charges every customer a fee of $f upon entry. Show that the long run
net revenue is given by
λ \left( f - \sum_{i=0}^{\infty} c_i ρ^i (1 - ρ) \right).
1. What are the feasible values of αk ’s so that the resulting system is stable?
2. Compute the expected holding cost per unit time as a function of the routing
probabilities αk (1 ≤ k ≤ K) in the stable region.
3. Compute the optimal routing probabilities αk that minimize the holding cost per
unit time for the entire system.
7.9 Compute the long run fraction of customers who cannot enter the M/M/1/K
system described in Subsection 7.3.2.
7.10 Compute W , the expected time spent in the system by an arriving customers
in steady state in an M/M/1/K system, by using Little’s Law and Equation 7.20.
(If an arriving customer does not enter, his time in the system is zero.) What is the
correct value of λ in L = λW as applied to this example?
7.11 Compute W , the expected waiting time of entering customers in steady state in
an M/M/1/K system, by using Little’s Law and Equation 7.20. What is the correct
value of λ in L = λW as applied to this example?
7.12 Suppose there are 0 < i < K customers in an M/M/1/K queue at time 0.
Compute the expected time when the queue either becomes empty or full.
7.13 Consider the M/M/1/K system of Subsection 7.3.2 with the following cost
structure. Each customer waiting in the system costs $c per unit time. Each customer
entering the system pays $a as an entry fee to the system. Compute the long run rate
of net revenue for this system.
7.14 Consider the system of Modeling Exercise 7.2 with production rate of 10 per
hour and demand rate of 8 per hour. Suppose the machine is turned off when the
number of items in the warehouse reaches K, and is turned on again when it falls
to K − 1. Any demand that occurs when the warehouse is empty is lost. It costs 5
dollars to produce an item, and 1 dollar to keep an item in the warehouse for one
hour. Each item sells for ten dollars.
7.15 Consider the M/M/1 queue with balking (but no reneging) as described in
Subsection 7.3.6. Suppose the limiting distribution of the number of customers in this
queue is {pj , j ≥ 0}. Using PASTA show that in steady state an arriving customer
enters the system with probability \sum_{j=0}^{\infty} α_j p_j .
7.16 Consider the M/M/1 queue with balking (but no reneging) as described in
Subsection 7.3.6. Suppose the limiting distribution of the number of customers in
this queue is P(ρ), where ρ = λ/µ. What balking probabilities will produce this
limiting distribution?
7.17 Show that the expected number of busy servers in a stable M/M/s queue is
λ/µ.
7.18 Derive Equation 7.21. Hence or otherwise compute the expected waiting time
of a customer in the M/M/s system in steady state.
7.20 Compute the limiting distribution of the time spent in the queue by a customer
in an M/M/s queue. Hence or otherwise compute the limiting distribution of the
time spent in the system by a customer in an M/M/s queue.
7.21 Consider two queueing systems. System 1 has s servers, each serving at rate
µ. System 2 has a single server, serving at rate sµ. Both systems are subject to PP(λ)
arrivals. Show that in steady state, the expected number of customers in the queue
(not including those in service) in System 2 is less than that in System 1. This shows that it
is better to have a single efficient server than many inefficient ones.
7.22 Consider the finite population queue of Subsection 7.3.5 with two machines
and one repairperson. Suppose every working machine produces revenue at a rate of
$r per unit time. It costs $C to repair a machine. Compute the long run rate at which
the system earns profits (revenue - cost).
7.23 When is the system in Modeling Exercise 7.2 stable? Assuming stability, com-
pute the limiting distribution of the number of items in the warehouse. What fraction
of the incoming demands are satisfied in steady state?
7.25 The quantity ps in the Computational Exercise 7.24 is called the blocking prob-
ability, and is denoted by B(s, ρ) where ρ = λ/µ. Show that the long run rate at
which the customers enter the system is given by λ(1 − B(s, ρ)). Also, show that
B(s, ρ) satisfies the recursion
B(s, ρ) = \frac{ρ B(s-1, ρ)}{s + ρ B(s-1, ρ)} ,
with initial condition B(0, ρ) = 1.
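The recursion is numerically stable and is the standard way to evaluate the blocking probability; a sketch (ours, not from the text):

    # Erlang blocking probability B(s, rho) via the recursion of Exercise 7.25.
    def erlang_b(s, rho):
        B = 1.0                          # B(0, rho) = 1
        for k in range(1, s + 1):
            B = rho * B / (k + rho * B)
        return B

    print(erlang_b(s=5, rho=3.0))        # blocking probability of M/M/5/5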
7.26 When is the system in Modeling Exercise 7.3 stable? Assuming stability, com-
pute the limiting distribution of the number of customers in the bank. What is the
steady state probability that three tellers are active?
7.27 When is the system in Modeling Exercise 7.4 stable? Assuming stability, com-
pute the limiting distribution of the number of customers in the system.
7.28 When is the system in Modeling Exercise 7.5 stable? Assuming stability, com-
pute the expected number of customers in the system in steady state.
7.29 Consider the single server queue with N -type control described in Modeling
Exercise 6.16. Let X(t) be the number of customers in the system at time t, and Y (t)
be 1 if the server busy and 0 if it is idle at time t. Show that {(X(t), Y (t)), t ≥ 0} is
a CTMC and that it is stable if ρ = λ/µ < 1. Assuming it is stable, show that
p_{i,j} = \lim_{t\to\infty} P(X(t) = i, Y(t) = j), i ≥ 0, j = 0, 1,
is given by
p_{i,0} = \frac{1-ρ}{N} , 0 ≤ i < N,
p_{i,1} = \frac{ρ}{N}\,(1 - ρ^i), 1 ≤ i < N,
p_{N+n,1} = \frac{ρ}{N}\,(1 - ρ^N)\,ρ^n , n ≥ 0.
7.30 Consider the queueing system of Computational Exercise 7.29. Suppose it
costs $f to turn the server on from the off position, while turning the server off
is free of cost. It costs $c to keep one customer in the system for one unit of time.
Compute the long run operating cost per unit time of the N -type policy. Show how one
can optimally choose N to minimize this cost rate.
7.31 Consider the system of Modeling Exercise 6.31. What is the limiting distribu-
tion of the number of customers in the system as seen by an arriving customer of
type i? By an entering customer of type i? (i = 1, 2)
7.32 Compute the limiting distribution of the CTMC in Modeling Exercise 7.10 for
the case of s = 3. What fraction of the customers are turned away in steady state?
7.33 Consider the Jackson network of single server queues as shown in Figure 7.9.
Derive the stability condition. Assuming stability compute
(Figure 7.9: a network of single-server nodes with external arrival rate λ, service rates µ1 , . . . , µN , µN+1 , and routing probabilities p1 , . . . , pN and 1 − p.)
7.36 North Carolina State Fair has 35 rides, and it expects to get about 60,000 vis-
itors per day (12 hours) on the average. Each visitor is expected to take 5 rides on
the average during his/her visit. Each ride lasts approximately 1 minute and serves
an average of 30 riders per batch. Construct an approximate Jackson network model
of the rides in the state fair that incorporates all the above data in a judicious fashion.
State your assumptions. Is this network stable? Show how to compute the average
queue length at a typical ride.
7.37 Consider a network of two nodes in series that operates as follows: customers
arrive at the first node from outside according to a PP(λ), and after completing service
at node 1 move to node 2, and exit the system after completing service at node 2. The
service times at each node are iid exp(µ). Node 1 has one server active as long
as there are five or fewer customers present at that node, and two servers active
otherwise. Node 2 has one server active for up to two customers, two servers for
three through ten customers, and three servers for any higher number. If an arriving
customer sees a total of i customers at the two nodes, he joins the first node with
probability 1/(i + 1) and leaves the system without any service with probability
i/(i + 1). Compute
7.40 Show that the probability that a customer in an open Jackson network of Sec-
tion 7.4 stays in the network forever is zero if I − R is invertible.
7.41 For a closed Jackson network of single server queues, show that
7.42 Generalize the method of computing GN (K) derived in Example 7.11 to gen-
eral closed Jackson networks of single-server queues with N nodes and K customers.
7.43 A simple communications network consists of two nodes labeled A and B
connected by two one-way communication links: line AB from A to B, and line BA
from B to A. There are N users at each node. The ith user (1 ≤ i ≤ N ) at
node A (B) is denoted by Ai (Bi ). User Ai has an interactive session set up with user
Bi and it operates as follows: User Ai sends a message to user Bi . All the messages
generated at node A wait in a buffer at node A for transmission to the appropriate
user at node B on line AB in an FCFS fashion. When user Bi receives the message
from user Ai , she spends a random amount of time, called think time, to generate a
response to it. All the messages generated at node B wait in a buffer at node B for
transmission to the appropriate user at node A on line BA in an FCFS fashion. When
user Ai receives the message from user Bi , she spends a random amount of time to
generate a response to it. This process of messages going back and forth between
the pairs of users Ai and Bi continues forever. Suppose all the think times are iid
exp(θ) random variables, and the message transmission times are iid exp(µ) random
variables. Model this as a closed Jackson network. What is the expected number of
messages in the buffers at nodes A and B in steady state?
7.44 For the closed Jackson network of Section 7.5, define the throughput TH(i)
of node i as the rate at which customers leave node i in steady state, i.e.,
TH(i) = \sum_{n=0}^{K} µ_i(n) \lim_{t\to\infty} P(X_i(t) = n).
Show that
TH(i) = π_i\, \frac{G_N(K)}{G_N(K-1)} .
7.45 When is the system in Modeling Exercise 7.7 stable? Assuming stability, com-
pute the expected number of customers in the system in steady state.
7.46 When is the system in Modeling Exercise 7.6 stable? Assuming stability, com-
pute the generating function of the limiting distribution of the number of customers
in the system.
7.47 Compute the expected number of customers in steady state in an M/G/1 sys-
tem where the arrival rate is one customer per hour and the service time distribution
is PH(α, M ) where
α = [0.5 \;\; 0.5 \;\; 0]
and
M = \begin{bmatrix} -3 & 1 & 1 \\ 0 & -3 & 2 \\ 0 & 0 & -3 \end{bmatrix}.
7.48 Compute the expected queue length in an M/G/1 queue with the following
service time distributions (all with mean 1/µ):
Which distribution produces the largest congestion? Which produces the smallest?
7.49 Consider the {X(t), t ≥ 0} and the {Xn , n ≥ 0} processes defined in Mod-
eling Exercise 7.8. Show that the limiting distribution of the two (if they exist) are
identical. Let pn (qn ) be the limiting probability that there are n customers in the
system and the server is up (down). Let p(z) and q(z) be the generating functions of
{pn , n ≥ 0} and {qn , n ≥ 0}. Show that this system is stable if
\frac{λ}{µ} < \frac{α}{α+θ} .
Assuming that the system is stable, show that
q(z) = \frac{µ\left( \frac{α}{α+θ} - \frac{λ}{µ} \right)}{\left( \frac{µ}{z} - λ \right)\left( \frac{α}{θ} + \frac{λ}{θ}(1-z) \right) z - λz} ,
and
p(z) = \left( \frac{α}{θ} + \frac{λ}{θ}(1 - z) \right) q(z).
7.50 Show that the DTMC {Xn , n ≥ 0} in the Modeling Exercise 7.11 is positive
recurrent if ρ = λτ < 1, where λ is the arrival rate and τ is the mean service time.
Assuming the DTMC is stable, show that the generating function of the limiting
distribution of Xn is given by
φ(z) = \frac{1-ρ}{m} \cdot \frac{\tilde G(λ - λz)}{z - \tilde G(λ - λz)} \cdot (ψ(z) - 1),
where G̃ is the LST of the service time, m is the expected number of arrivals during a
single vacation, and ψ(z) is the generating function of the number of arrivals during
a single vacation.
7.51 Let X(t) be the number of customers at time t in the system described in Mod-
eling Exercise 7.11. Show that {Xn , n ≥ 0} and {X(t), t ≥ 0} have the same limit-
ing distribution, assuming it exists. Using the results of Computational Exercise 7.50
show that the expected number of customers in steady state is given by
L = ρ + \frac{1}{2} \cdot \frac{ρ^2}{1-ρ} \left( 1 + \frac{σ^2}{τ^2} \right) + \frac{m^{(2)}}{2m} ,
where σ^2 is the variance of the service time and m^{(2)} is the second factorial moment
of the number of arrivals during a single vacation.
7.52 Let X(t) be the number of customers at time t in an M/G/1 queue under N -
type control as explained in Modeling Exercise 6.16 for an M/M/1 queue. Using the
results of Computational Exercises 7.50 and 7.51 establish the condition of stability
for this system and compute the generating function of the limiting distribution of
X(t) as t → ∞.
7.53 When is the queueing system described in Modeling Exercise 7.12 stable? As-
suming stability, compute the expected number of customers in the system in steady
state under the two policies. Which policy is better at minimizing the expected num-
ber in the system in steady state?
7.54 Analyze the stability of the {X(t), t ≥ 0} process in Modeling Exercise 7.9.
Assuming stability, compute the limiting distribution of the number of items in the
warehouse. What fraction of the demands are lost in steady state?
7.55 Show that the DTMC {Xn , n ≥ 0} in the Modeling Exercise 7.13 is positive
recurrent if ρ = λτG < 1. Assuming the DTMC is stable, show that the generating
function of the limiting distribution of Xn is given by
7.56 Let X(t) be the number of customers at time t in the system described in Mod-
eling Exercise 7.13. Show that {Xn , n ≥ 0} and {X(t), t ≥ 0} have the same limit-
ing distribution, assuming it exists. Using the results of Computational Exercise 7.55
show that the expected number of customers in steady state is given by
L = \frac{λ τ_H}{1 - λτ_G + λτ_H} + \frac{λ^2}{2} \cdot \frac{σ_H^2 + τ_H^2 - σ_G^2 - τ_G^2}{1 - λτ_G + λτ_H} + \frac{λ^2}{2} \cdot \frac{σ_G^2 + τ_G^2}{1 - λτ_G} .
7.57 Show that the DTMC {Xn , n ≥ 0} in the Modeling Exercise 7.14 is positive
recurrent if λ < 1. Assuming the DTMC is stable, compute φ(z), the generating
function of the limiting distribution of Xn as n → ∞.
7.58 Show that the DTMC {X̄n , n ≥ 0} in the Modeling Exercise 7.14 is positive
recurrent if λ < 1. Assuming the DTMC is stable, compute φ̄(z), the generating
function of the limiting distribution of X̄n as n → ∞.
7.60 Consider an M/G/1 queue where the customers arrive according to a PP(λ)
and request iid service times with common mean τ , and variance σ 2 . After service
completion, a customer leaves with probability p, or returns to the end of the queue
with probability 1 − p, and behaves like a new customer.
1. Compute the mean and variance of the amount of time a customer spends in ser-
vice during the sojourn time in the system.
2. Compute the condition of stability.
3. Assuming stability, compute the expected number of customers in the system as
seen by a departure (from the system) in steady state.
4. Assuming stability, compute the expected number of customers in the system at a
service completion (customer may or may not depart at each service completion)
in steady state.
where 0 < r < 1, λ1 > 0, λ2 > 0. The service times are iid exp(µ).
7.62 Let X(t) be the number of customers in a G/M/2 queue at time t. Let Xn∗
be the number of customers as seen by the nth arrival. Show that {Xn∗ , n ≥ 0} is a
DTMC, and compute its one-step transition probability matrix. Derive the condition
of stability and the limiting distribution of Xn∗ as n → ∞.
7.64 Consider the following modification to the M/G/1/1 retrial queue of Sec-
tion 7.7. A new customer joins the service immediately if he finds the server free
upon his arrival. If the server is busy, the arriving customer leaves immediately with
probability c, or joins the orbit with probability 1 − c, and conducts retrials until he
is served. Let Xn and X(t) be as in Section 7.7. Derive the condition of stability and
compute the generating function of the limiting distribution of Xn and X(t). Are
they the same?
7.65 Consider the retrial queue of Section 7.7 with exp(µ) service times. Show that
the results of Section 7.7 are consistent with those of Example 6.38.
7.66 A warehouse stocks Q items. Orders for these items arrive according to a
PP(µ). The warehouse follows a (Q, Q − 1) replenishment policy with back orders
as follows: If the warehouse is not empty, the incoming demand is satisfied from the
existing stock and an order is placed with the supplier for replenishment. If the ware-
house is empty, the incoming demand is back-logged and an order is placed with the
supplier for replenishment. The lead time, i.e., the amount of time it takes for the
order to reach the warehouse from the supplier, is a random variable with distribu-
tion G(·). The lead times are iid, and orders may cross, i.e., the orders placed at the
supplier may be received out of order. Let X(t) be the number of outstanding orders
at time t.
Renewal Processes
“Research is seeing what everyone else has seen and thinking what no one else has
thought.”
—Anonymous
8.1 Introduction
This chapter is devoted to the study of a class of stochastic processes called renewal
processes (RPs), and their applications. The RPs play several important roles in the
grand scheme of stochastic processes. First, they help remove the stringent distribu-
tional assumptions that were needed to build Markov models, namely, the geometric
distributions for the DTMCs, and the exponential distributions for the CTMCs. RPs
provide us with important tools such as the key renewal theorem (Section 8.5) to deal
with general distributions.
Second, RPs provide a unifying theoretical framework for studying the limiting
behavior of specialized stochastic processes such as the DTMCs and CTMCs. Recall
that we have seen the discrete renewal theorem in Subsection 4.5.2 on page 106 and
continuous renewal theorem in Subsection 6.10.2 on page 234. The discrete renewal
theorem was used in the study of the limiting behavior of the DTMCs, and the contin-
uous renewal theorem was used in the study of the limiting behavior of the CTMCs.
We shall see that RPs appear as embedded processes in the DTMCs and the CTMCs,
and the key renewal theorem provides the unifying tool to obtain convergence results.
We now begin the study of RPs.
Consider a process of events occurring over time. Let S0 = 0, and Sn be the time
of occurrence of the nth event, n ≥ 1. Assume that
0 ≤ S1 ≤ S2 ≤ S3 ≤ · · · ,
and define
Xn = Sn − Sn−1 , n ≥ 1.
Thus {Xn , n ≥ 1} is a sequence of inter-event times. Clearly, Xn ≥ 0. Since we
allow Xn = 0, multiple events can occur simultaneously. Next define
N (t) = sup{n ≥ 0 : Sn ≤ t}, t ≥ 0. (8.1)
Thus N (t) is simply the number of events up to time t. The process {N (t), t ≥ 0}
is called the counting process generated by {Xn , n ≥ 1}. A typical sample path of
{N (t), t ≥ 0} is shown in Figure 8.1. With this notation we are ready to define a
Renewal Process.
Figure 8.1 A typical sample path of {N(t), t ≥ 0}.
Definition 8.1 Renewal Sequence and Renewal Process. The sequence {Sn , n ≥
1} is called the renewal sequence and the process {N (t), t ≥ 0} is called the renewal
process generated by {Xn, n ≥ 1} if {Xn, n ≥ 1} is a sequence of non-negative iid
random variables.
Example 8.1 Poisson Process. From the definition of a Poisson process, we see
that the RP generated by a sequence of iid exp(λ) random variables is a Poisson
process with parameter λ. Thus an RP is a direct generalization of a Poisson process
when the inter-event times are iid non-negative random variables that have a general
distribution.
Example 8.2 Arrival Process in a G/M/1 Queue. Let N (t) be the number of
arrivals in a G/M/1 queue up to time t. By definition (See Section 7.6.2) the inter-
arrival times {Xn , n ≥ 1} in a G/M/1 queue are iid random variables. Hence
{N (t), t ≥ 0}, the counting process generated by {Xn , n ≥ 1}, is an RP.
Example 8.5 RPs in an M/G/1 Queue. Let X(t) be the number of customers at
time t in an M/G/1 system. Suppose the system is initially empty, i.e., X(0) = 0.
Let Sn be the time of departure of the nth customer who leaves behind an empty
system, i.e., it is the completion time of the nth busy cycle. Since the Poisson process
has independent increments, and the service times are iid, we see that successive busy
cycles are iid, i.e., {Xn = Sn − Sn−1 , n ≥ 1} is a sequence of iid random variables.
It generates a counting process {N (t), t ≥ 0} that counts the number of busy cycles
completed by time t. It is thus an RP.
If we change the initial state of the system so that an arrival has occurred to an empty
system at time 0-, we can come up with a different RP by defining Sn as the arrival
time of the nth customer who enters an empty system. In this case we get an RP
where N (t) is the number of busy cycles that start by time t. (Note that we do not
count the initial busy cycle.) In general, we can come up with many RPs embedded
in a given stochastic process. What about a G/G/1 queue?
Example 8.6 RPs in a DTMC. Following Example 8.3 we can identify RPs in a
DTMC {Xn , n ≥ 0} on {0, 1, 2, · · ·} with P(X0 = 0) = 1. Let Sn be the time of
the nth visit to state 0, n ≥ 1, and let N (t) be the number of entries into state 0 up to
t. (Note that we do not count the visit at time 0, since the process was already in state
0 at time 0.) Then, by the same argument as in Example 8.3, {N (t), t ≥ 0} is an RP.
Note that in this case renewals can occur only at integer times n = 0, 1, 2, · · · .
The above examples show how pervasive RPs are. Next we characterize an RP.
The next theorem shows that the RP “renews” at time X1 = S1 , i.e., it essentially
starts anew at time S1.

Theorem 8.2 Let {N(t), t ≥ 0} be an RP generated by {Xn, n ≥ 1}. Then {N(t + X1) − 1, t ≥ 0} is stochastically identical to {N(t), t ≥ 0}.

Proof: To show that they are stochastically identical, we have to show that their finite dimensional distributions are identical. Now, for any integer n ≥ 1, and 0 < t1 < · · · < tn and integers 0 ≤ k1 ≤ k2 ≤ · · · ≤ kn, we have
P(N(t1 + X1) − 1 = k1, N(t2 + X1) − 1 = k2, · · · , N(tn + X1) − 1 = kn)
= P(N(t1 + X1) = k1 + 1, N(t2 + X1) = k2 + 1, · · · , N(tn + X1) = kn + 1)
= P(Sk1+1 ≤ t1 + X1, Sk1+2 > t1 + X1, · · · , Skn+1 ≤ tn + X1, Skn+2 > tn + X1)
= P(Sk1+1 − X1 ≤ t1, Sk1+2 − X1 > t1, · · · , Skn+1 − X1 ≤ tn, Skn+2 − X1 > tn)
= P(Sk1 ≤ t1, Sk1+1 > t1, · · · , Skn ≤ tn, Skn+1 > tn)
= P(N(t1) = k1, N(t2) = k2, · · · , N(tn) = kn).
Here the second to last equality follows because (S2 − X1, S3 − X1, · · · , Sn+1 − X1) has the same distribution as (S1, S2, · · · , Sn). The last equality follows from the proof of Theorem 8.1.
The above theorem provides a justification for calling S1 the first renewal epoch,
since the process essentially restarts at that point of time. For the same reason, we call Sn the nth renewal epoch.
Now that we know how to characterize an RP, we will study it further: its transient
behavior, limiting behavior, etc. We shall use the following notation throughout this
chapter:
G(t) = P(Xn ≤ t), τ = E(Xn), s² = E(Xn²), σ² = Var(Xn).
We shall assume that
G(0−) = 0, G(0+) = G(0) < 1.
Thus Xn ’s are non-negative, but not identically zero with probability 1. This assump-
tion implies that
τ > 0,
and is necessary to avoid trivialities.
Theorem 8.3
P(N (t) < ∞) = 1, t ≥ 0.
The next theorem gives the exact distribution of N(t) for a given t.

Theorem 8.4 Let Gk(t) = P(Sk ≤ t) be the k-fold convolution of G (Equation 8.2), with G0(t) = 1 for t ≥ 0. Then
pk(t) = P(N(t) = k) = Gk(t) − Gk+1(t), k ≥ 0. (8.3)
Example 8.7 Poisson Process. Let G(t) = 1 − e−λt , t ≥ 0. Then Gk (t) is the cdf
of an Erl(k, λ) random variable. Hence
Gk(t) = 1 − Σ_{r=0}^{k−1} e^{−λt} (λt)^r/r!, t ≥ 0.
Hence, from Theorem 8.4, we get
pk(t) = Gk(t) − Gk+1(t) = e^{−λt} (λt)^k/k!, t ≥ 0.
Thus N (t) ∼ P(λt). This is to be expected, since G(t) = 1 − e−λt implies the RP
{N (t), t ≥ 0} is a PP(λ).
The function Gk (·) of Equation 8.2 is called the k-fold convolution of G with
itself. It can be computed recursively as
Gk(t) = ∫_0^t Gk−1(t − u) dG(u) = ∫_0^t G(t − u) dGk−1(u), t ≥ 0,
with initial condition G0 (t) = 1 for t ≥ 0. Let G̃(s) be the Laplace-Stieltjes trans-
form (LST) of G, defined as
G̃(s) = ∫_0^∞ e^{−st} dG(t).
Since the LST of a convolution is the product of the LSTs (see Appendix E), it
follows that
G̃k(s) = (G̃(s))^k.
Hence
p̃k(s) = ∫_0^∞ e^{−st} dpk(t) = G̃k(s) − G̃k+1(s) = (G̃(s))^k (1 − G̃(s)). (8.4)
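These convolutions are easy to evaluate numerically. The following minimal sketch (our own construction, not from the text; the Erl(2,1) inter-event distribution is just an illustrative choice) computes Gk on a grid by repeated density convolutions and then uses pk(t) = Gk(t) − Gk+1(t):

    # Numerical sketch: G_k(t) = P(S_k <= t) by convolving densities,
    # then p_k(t) = G_k(t) - G_{k+1}(t).  Illustrative inter-event
    # distribution: Erl(2,1), with density g(t) = t*exp(-t).
    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 30.0, dt)
    g = t * np.exp(-t)                          # density of one inter-event time

    def cdfs_of_Sk(g, kmax, dt):
        # Return [G_1, ..., G_kmax] evaluated on the grid.
        fk, out = g.copy(), []
        for _ in range(kmax):
            out.append(np.cumsum(fk) * dt)          # G_k(t) = P(S_k <= t)
            fk = np.convolve(fk, g)[: len(g)] * dt  # density of S_{k+1}
        return out

    G = cdfs_of_Sk(g, 5, dt)
    p3 = G[2] - G[3]                            # p_3(t) = G_3(t) - G_4(t)
    print(p3[int(5.0 / dt)])                    # P(N(5) = 3), approximately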
One of the most important tools of renewal theory is the renewal argument. It is a
method of deriving an integral equation for a probabilistic quantity by conditioning
on S1 , the time of the first renewal. We explain it by deriving an equation for pk (t) =
P(N (t) = k). Now fix a k > 0 and t ≥ 0. Suppose S1 = u. If u > t, we must
have N (t) = 0. If u ≤ t, we already have one renewal at time u, and the process
renews at time u. Hence we can get k renewals by time t if and only if we get k − 1
additional renewals in this new renewal process starting at time u up to time t, which
is P(N (t − u) = k − 1). Combining these observations we get
P(N(t) = k | S1 = u) = 0,                    if u > t,
                     = P(N(t − u) = k − 1),  if u ≤ t.
Hence,
pk(t) = ∫_0^∞ P(N(t) = k | S1 = u) dG(u) = ∫_0^t P(N(t − u) = k − 1) dG(u) = ∫_0^t pk−1(t − u) dG(u). (8.5)
We also have p0(t) = P(S1 > t) = 1 − G(t). The above equation can be used to compute pk(n) for increasing values of k, starting with p0(n) of Equation 8.7.
This completes our study of the transient behavior of the RP. Next we study its
limiting behavior. Unlike in the case of DTMCs and CTMCs, where most of limiting
results were about convergence in distribution, the results in renewal theory are about
convergence with probability one (w.p. 1), or almost sure (a.s.) convergence. See
Appendix G for relevant definitions. Let N (∞) be the almost sure limit of N (t) as
t → ∞. That is,
N(∞) = lim_{t→∞} N(t), with probability 1.
In other words, N(∞) is the sample-path-wise limit of N(t). It exists since the sample paths of an RP are non-decreasing functions of time.
Theorem 8.5 Almost Sure Limit of N(t). Let {N(t), t ≥ 0} be an RP with common inter-event time cdf G(·), with almost sure limit N(∞). Then P(N(∞) = ∞) = 1 if G(∞) = 1, while P(N(∞) = k) = G(∞)^k (1 − G(∞)), k ≥ 0, if G(∞) < 1.

Proof: We have
{N(∞) = k < ∞} ⇔ {Xn < ∞, 1 ≤ n ≤ k, Xk+1 = ∞}.
The probability of the event on the right is 0 if G(∞) = P(Xn < ∞) = 1, and
G(∞)k (1 − G(∞)) if G(∞) < 1. The theorem follows from this.
Thus if G(∞) = 1 the renewals recur infinitely often, while if G(∞) < 1, the
renewals stop occurring after a while. We use this behavior to make the following
definition:

Definition 8.2 Recurrent and Transient RP. An RP is called recurrent if G(∞) = 1, and transient if G(∞) < 1.

From now on we shall concentrate on recurrent RPs, i.e., we shall assume that
G(∞) = 1. In this case N (t) → ∞ with probability 1 as t → ∞. The next theorem
gives us the rate at which it approaches infinity.

Theorem 8.6 Let {N(t), t ≥ 0} be a recurrent RP with mean inter-event time τ. Then
lim_{t→∞} N(t)/t = 1/τ, w.p. 1, (8.9)
where 1/τ is to be interpreted as 0 if τ = ∞.
Proof: From the definition of the RP and Figure 8.2, we see that SN (t) ≤ t <
SN (t)+1 . Hence, when N (t) > 0,
Figure 8.2 Location of t relative to SN(t) and SN(t)+1.

SN(t)/N(t) ≤ t/N(t) < SN(t)+1/N(t).
Since the RP is recurrent, N(t) → ∞ as t → ∞. Thus, using the strong law of large numbers, we get
lim_{t→∞} SN(t)/N(t) = lim_{n→∞} Sn/n = τ, w.p. 1,
and
lim_{t→∞} SN(t)+1/N(t) = lim_{n→∞} Sn+1/n = lim_{n→∞} (Sn+1/(n + 1))((n + 1)/n) = τ, w.p. 1.
Hence
lim sup_{t→∞} t/N(t) ≤ lim_{t→∞} SN(t)+1/N(t) = τ,
and
lim inf_{t→∞} t/N(t) ≥ lim_{t→∞} SN(t)/N(t) = τ.
Hence
τ ≥ lim sup_{t→∞} t/N(t) ≥ lim inf_{t→∞} t/N(t) ≥ τ.
Thus we have
lim sup_{t→∞} t/N(t) = lim inf_{t→∞} t/N(t) = τ.
Thus N(t)/t has a limit given by
lim_{t→∞} t/N(t) = τ.
Now, if τ < ∞, we can use the continuity of the function f (x) = 1/x for x > 0 to
get Equation 8.9. If τ = ∞, we first construct a new renewal process with inter-event
times given by min(Xn , T ), for a fixed T , and then let T → ∞. We leave the details
to the reader.
Theorem 8.6 makes intuitive sense because the limiting value of N (t)/t is the long
run number of renewals per unit time. Thus if the mean inter-event time is 10 min-
utes, it makes sense that in the long run we should see one renewal every 10 minutes,
or 1/10 renewals every minute.
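Theorem 8.6 is also easy to check numerically. A minimal simulation sketch (with illustrative U(0,2) inter-event times, so that τ = 1):

    # N(t)/t should be close to 1/tau = 1 for large t.
    import random

    random.seed(0)
    t_max, s, n = 1_000_000.0, 0.0, 0
    while True:
        s += random.uniform(0.0, 2.0)   # next renewal epoch
        if s > t_max:
            break
        n += 1                          # N(t_max) counts epochs <= t_max
    print(n / t_max)                    # close to 1.0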
The following theorem gives more detailed distributional information about how the RP approaches infinity.
Theorem 8.7 Central Limit Theorem for N (t). Let {N (t), t ≥ 0} be a recurrent
RP with mean inter-event time 0 < τ < ∞ and variance σ 2 < ∞. Then
lim_{t→∞} P( (N(t) − t/τ)/√(σ²t/τ³) ≤ x ) = Φ(x) = ∫_{−∞}^x (e^{−u²/2}/√(2π)) du. (8.10)
Proof: We have
P(N (t) ≥ k) = P(Sk ≤ t)
and hence
P( (N(t) − t/τ)/√(σ²t/τ³) ≥ (k − t/τ)/√(σ²t/τ³) ) = P( (Sk − kτ)/(σ√k) ≤ (t − kτ)/(σ√k) ). (8.11)
Now let t and k both grow to ∞ so that
(t − kτ)/(σ√k) → x,
where x is a fixed real number. The above equation implies that
(k − t/τ)/√(σ²t/τ³) → −x.
Letting k, t → ∞ appropriately in Equation 8.11, and using the central limit theorem, we get
lim_{t→∞} P( (N(t) − t/τ)/√(σ²t/τ³) ≥ −x ) = lim_{k→∞} P( (Sk − kτ)/(σ√k) ≤ x ) = Φ(x). (8.12)
Hence
lim_{t→∞} P( (N(t) − t/τ)/√(σ²t/τ³) ≤ x ) = 1 − Φ(−x) = Φ(x).
This completes the proof.
The above theorem says that for large t, N(t) approaches a normal random variable with mean t/τ and variance σ²t/τ³.
Example 8.8 Machine Failures. Let Xn be the time (in days) between the (n − 1)st and nth failure of a machine. {Xn, n ≥ 1} are iid random variables with common distribution given by
Xn ∼ 1 + exp(1/8)   with probability 0.3,
   ∼ 0.5 + exp(1/5) with probability 0.7.
Let N (t) be the number of failures up to time t. Then {N (t), t ≥ 0} is an RP
generated by {Xn , n ≥ 1}. Straightforward calculations yield
τ = E(Xn) = 6.55, σ² = Var(Xn) = 39.2725.
Hence, from Theorem 8.7, N(t) is approximately normally distributed with mean t/6.55 and variance (39.2725/6.55³)t. Thus, in the first year (t = 365), the number of failures is approximately normal with mean 55.725 and variance 51.01.
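The following simulation sketch (our own check, using the mixture distribution of the example) confirms these numbers:

    # Simulate N(365) many times; its mean and variance should be close
    # to 365/6.55 = 55.7 and (39.2725/6.55**3)*365 = 51.01.
    import random

    random.seed(1)

    def inter_failure():
        if random.random() < 0.3:
            return 1.0 + random.expovariate(1 / 8)   # 1 + exp(1/8)
        return 0.5 + random.expovariate(1 / 5)       # 0.5 + exp(1/5)

    def failures_by(t):
        s, n = 0.0, 0
        while True:
            s += inter_failure()
            if s > t:
                return n
            n += 1

    sample = [failures_by(365.0) for _ in range(5000)]
    mean = sum(sample) / len(sample)
    var = sum((x - mean) ** 2 for x in sample) / (len(sample) - 1)
    print(mean, var)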
Definition 8.3 Let {N (t), t ≥ 0} be an RP. The renewal function M (·) is defined
as
M (t) = E(N (t)), t ≥ 0. (8.13)
Theorem 8.8 The renewal function is given by
M(t) = Σ_{k=1}^∞ Gk(t). (8.14)

Proof: We have
M(t) = Σ_{k=0}^∞ k P(N(t) = k) = Σ_{k=0}^∞ k (Gk(t) − Gk+1(t)) = Σ_{k=1}^∞ Gk(t),
where the middle equality uses Equation 8.3.
This proves the theorem.
The next theorem introduces an important equation known as the renewal equation,
and gives a simple expression for the LST of the renewal function defined by
M̃(s) = ∫_0^∞ e^{−st} dM(t), Re(s) > 0.

Theorem 8.9 The renewal function satisfies the renewal equation
M(t) = G(t) + ∫_0^t M(t − u) dG(u), t ≥ 0, (8.15)
and its LST is given by
M̃(s) = G̃(s)/(1 − G̃(s)). (8.16)
Proof: We use the renewal argument. Fix a t, and suppose S1 = u. If u > t, the very first renewal is after t, and hence E(N(t)|S1 = u) = 0. On the other hand, if
u ≤ t, then we get one renewal at u, and a new renewal process starts at u, which
produces additional M (t − u) expected number of events up to t. Hence we have
E(N(t) | S1 = u) = 0,             if u > t,
                 = 1 + M(t − u),  if u ≤ t.
Hence we get
M(t) = ∫_0^∞ E(N(t) | S1 = u) dG(u) = ∫_0^t (1 + M(t − u)) dG(u) = G(t) + ∫_0^t M(t − u) dG(u).
This gives the renewal equation. Taking LSTs on both sides of Equation 8.15 we get,
M̃ (s) = G̃(s) + M̃ (s)G̃(s),
which yields Equation 8.16.
As in the case of pk (t), when the inter-event times are integer valued random
variables, the renewal argument provides a simple recursive method of computing
M (t). To derive this, let
αi = P(Xn = i), i = 0, 1, 2, · · · ,
and
βi = P(Xn ≤ i) = Σ_{k=0}^i αk.
Since all renewals take place at integer time points, it suffices to study M (n) for
n = 0, 1, 2, · · · . We leave it to the reader to show that Equation 8.15 reduces to
M(n) = βn + Σ_{k=0}^n αk M(n − k), n = 0, 1, 2, · · · . (8.17)
The above equation can be used to compute M (n) recursively for increasing values
of n.
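For instance, the following sketch (an illustration of ours, not from the text) implements Equation 8.17. When α0 > 0 the k = 0 term puts M(n) on both sides of the equation, so the code solves for it explicitly:

    def renewal_function(alpha, nmax):
        # alpha[i] = P(X = i); returns [M(0), ..., M(nmax)] via Eq. 8.17.
        beta = [sum(alpha[: i + 1]) for i in range(nmax + 1)]  # beta_i = P(X <= i)
        M = []
        for n in range(nmax + 1):
            rhs = beta[n] + sum(alpha[k] * M[n - k] for k in range(1, n + 1))
            M.append(rhs / (1.0 - alpha[0]))   # isolate the k = 0 term
        return M

    # Illustrative: X = 1 w.p. 0.4 and X = 2 w.p. 0.6, so tau = 1.6.
    alpha = [0.0, 0.4, 0.6] + [0.0] * 198
    print(renewal_function(alpha, 200)[-1] / 200)  # about 0.62, near 1/tau = 0.625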
The renewal function plays a very important role in the study of renewal processes.
The next theorem shows why.

Theorem 8.10 The renewal function M(·) uniquely determines the inter-event time cdf G(·): from Equation 8.16, G̃(s) = M̃(s)/(1 + M̃(s)).
Example 8.11 Renewal Function for a PP. Consider a renewal process with inter-
event time distribution G(x) = 1 − e−λx for x ≥ 0. We have G̃(s) = λ/(s + λ).
From Equation 8.16 we get
M̃(s) = (λ/(s + λ))/(1 − λ/(s + λ)) = λ/s.
Inverting this we get
M(t) = λt, t ≥ 0.
This is as expected, since the renewal process in this case is a PP(λ). Theorem 8.10
implies that if a renewal process has a linear renewal function, it must be a Poisson
process!
Theorem 8.11 Let M (t) be the renewal function of an RP with mean inter-event
time τ > 0. Then
M (t) < ∞ for all t ≥ 0.
Proof: Let G be the cdf of the inter-event times. Since τ > 0, G(0) < 1. Thus there
is a δ > 0 such that G(δ) < 1. Define
Xn* = 0,  if Xn < δ,
    = δ,  if Xn ≥ δ.
Let {Sn∗ , n ≥ 1} and {N ∗ (t), t ≥ 0} be the renewal sequence and the renewal
process generated by {Xn∗ , n ≥ 1}. Since Xn∗ ≤ Xn for all n ≥ 1, we see that
Sn∗ ≤ Sn for all n ≥ 1, which implies that N (t) ≤ N ∗ (t) for all t ≥ 0. Hence
M (t) = E(N (t)) ≤ E(N ∗ (t)) = M ∗ (t).
We can modify Example 8.12 slightly to get
M*(t) = ([t/δ] + G(δ))/(1 − G(δ)) < ∞.
Hence
M (t) ≤ M ∗ (t) < ∞,
and the result follows.
Theorem 8.12 Elementary Renewal Theorem. Let M(t) be the renewal function of an RP with mean inter-event time τ. Then
lim_{t→∞} M(t)/t = 1/τ. (8.19)

Before we prove the theorem, it is worth noting that this theorem does not follow from Theorem 8.6, since almost sure convergence does not imply convergence of the expected values. Hence we need to establish this result independently. We need the following result first.
Theorem 8.13
E(SN (t)+1 ) = τ (M (t) + 1). (8.20)
A quick exercise in renewal argument shows that E(SN(t)) ≠ τM(t). Thus Equation 8.20 is unusual indeed! With this result we are now ready to prove the elementary renewal theorem.
renewal theorem.
Proof of Theorem 8.12. Assume 0 < τ < ∞. By definition, we have
SN(t)+1 > t.
Hence
E(SN(t)+1) > t.
Using Equation 8.20, the above inequality yields
M(t)/t > 1/τ − 1/t.
Hence
lim inf_{t→∞} M(t)/t ≥ 1/τ. (8.22)
Now, fix a 0 < T < ∞ and define
Xn′ = min(Xn, T), n ≥ 1.
Let {N′(t), t ≥ 0} be an RP generated by {Xn′, n ≥ 1}, and M′(t) be the corresponding renewal function. Now,
SN′(t)+1 ≤ t + T.
Hence, taking expected values on both sides, and using Equation 8.20, we get
τ′(M′(t) + 1) ≤ t + T,
where τ′ = E(Xn′). Hence we get
M′(t)/t ≤ 1/τ′ + (T − τ′)/(τ′t).
Since Xn′ ≤ Xn, we see that M(t) ≤ M′(t). Hence we get
M(t)/t ≤ 1/τ′ + (T − τ′)/(τ′t).
This implies
lim sup_{t→∞} M(t)/t ≤ 1/τ′.
Now as T → ∞, τ′ → τ. Hence letting T → ∞ on both sides of the above inequality, we get
lim sup_{t→∞} M(t)/t ≤ 1/τ. (8.23)
Combining Equations 8.22 and 8.23 we get Equation 8.19. We leave the case of τ = ∞ to the reader.
This theorem has the same intuitive explanation as Theorem 8.6. We end this sec-
tion with results about the higher moments of N (t).
8.4 Renewal-Type Equation

In this section we study the type of integral equations that arise from the use of the
renewal argument. We have already seen two instances of such equations: Equa-
tion 8.15 for M (t), and Equation 8.21 for E(SN (t)+1 ). All these equations have the
following form:
H(t) = D(t) + ∫_0^t H(t − u) dG(u), (8.25)
where G(·) is a cdf of a random variable with G(0−) = 0 and G(∞) = 1, D(·) is a
given function, and H is to be determined. When D(t) = G(t), the above equation is
the same as Equation 8.15, and is called the renewal equation. When D(·) is a func-
tion other than G(·), Equation 8.25 is called the renewal-type equation. Thus, when
D(t) = τ , we get the renewal-type equation for E(SN (t)+1 ), namely, Equation 8.21.
We will need to solve the renewal-type equations arising in applications. The fol-
lowing theorem gives the conditions for the existence and uniqueness of the solution
to a renewal-type equation.

Theorem 8.15 Suppose D(·) is bounded on finite intervals. Then the renewal-type equation 8.25 has a unique solution H(·) that is bounded on finite intervals, and it is given by
H(t) = D(t) + ∫_0^t D(t − u) dM(u), (8.26)
where M(·) is the renewal function associated with G(·).
Proof: Since M (t) < ∞ and Gk (t) is a decreasing function of k, Equation 8.14
implies that
lim_{n→∞} Gn(t) = 0.
We introduce the following convenient notation for convolution:
A ∗ B(t) = ∫_0^t A(t − u) dB(u).
By recursive use of the renewal type equation, and writing H for H(t) for compact-
ness, we get
H = D + H ∗ G = D + (D + H ∗ G) ∗ G = D + D ∗ G + H ∗ G2 = · · ·
  = D + D ∗ Σ_{k=1}^{n−1} Gk + H ∗ Gn. (8.27)
Letting n → ∞, the last term vanishes since H is bounded on [0, t] and Gn(t) → 0, while Σ_{k=1}^∞ Gk = M by Equation 8.14. Hence
H(t) = D(t) + ∫_0^t D(t − u) dM(u),
which is Equation 8.26. Thus we have shown that H as given in Equation 8.26 is a solution to Equation 8.25.
solution to Equation 8.25.
Since D is assumed to be bounded we get
c = sup_{0≤x≤t} |D(x)| < ∞.
Example 8.13 Two-State Machine. We revisit our two state machine with iid
exp(µ) up-times. However, when the machine fails after an up-time U , the ensu-
ing down time is cU , for a fixed c > 0. Suppose the machine is up at time 0. Let
X(t) = 1 if the machine is up at time t, and 0 otherwise. Compute
H(t) = P(X(t) = 1).
The first cycle, of length U1 + cU1 = (1 + c)U1, ends with the machine as good as new, so conditioning on its completion time yields
H(t) = e^{−µt} + ∫_0^t H(t − u) dG(u),
which is a renewal-type equation with D(t) = P(U1 > t) = e^{−µt}, and G being the cdf of an
exp(µ/(1 + c)) random variable. The RP corresponding to this G is a PP(µ/(1 + c)).
Hence the renewal function is given by (see Example 8.11)
M(t) = µt/(1 + c).
Hence the solution given by Equation 8.26 reduces to
H(t) = e^{−µt} + ∫_0^t e^{−µ(t−u)} (µ/(1 + c)) du = (1 + c e^{−µt})/(1 + c).
Note that we could not have derived the above expression by the method of CTMCs
since {X(t), t ≥ 0} is not a CTMC due to the dependence of the up and down times.
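When M(t) is not available explicitly, Equation 8.25 can instead be discretized directly. A minimal sketch (our own; µ = 1 and c = 0.5 are illustrative) marches forward in time and checks the answer against the closed form just derived:

    import numpy as np

    mu, c = 1.0, 0.5
    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    D = np.exp(-mu * t)                      # D(t) = exp(-mu*t)
    rate = mu / (1 + c)
    g = rate * np.exp(-rate * t)             # density of G ~ exp(mu/(1+c))

    H = np.zeros_like(t)
    for i in range(len(t)):                  # H = D + H * G, marched in t
        conv = np.dot(H[:i], g[i:0:-1]) * dt if i else 0.0
        H[i] = D[i] + conv

    closed = (1 + c * np.exp(-mu * t)) / (1 + c)
    print(np.abs(H - closed).max())          # O(dt) discretization error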
The solution in Equation 8.26 is not easy to use in practice since, unlike in the
previous example, M (t) is generally not explicitly known. The next theorem gives a
transform solution that can be more useful in practice.
358 RENEWAL PROCESSES
Theorem 8.16 If D(·) has an LST D̃(s), then H(·) has an LST H̃(s) given by
H̃(s) = D̃(s)/(1 − G̃(s)). (8.29)
Example 8.14 Busy Period in an M/G/∞ Queue. Let X(t) be the number of customers at time t in an M/G/∞ queue with PP(λ) arrivals and iid service times with common cdf B(·) and mean τ. Suppose a customer arrives at an empty system at time 0, and let T, with cdf F, be the first time the system becomes empty again. We compute F̃(s) and E(T).
Define Sn to be the nth time when a customer enters an empty system. Since the arrival process is Poisson, and the service times are iid, we see that {Sn, n ≥ 1} is a renewal sequence. S1 is called the busy cycle. Let H(t) = P(X(t) = 0). Using the renewal argument, we get
renewal argument, we get
P(X(t) = 0 | S1 = u) = P(T ≤ t | S1 = u),  if u > t,
                     = H(t − u),           if u ≤ t.
Hence, using G as the cdf of S1, we get
H(t) = ∫_0^∞ P(X(t) = 0 | S1 = u) dG(u) = ∫_0^t H(t − u) dG(u) + ∫_t^∞ P(T ≤ t | S1 = u) dG(u). (8.30)
Since P(T ≤ t | S1 = u) = 1 if u ≤ t, we get
∫_t^∞ P(T ≤ t | S1 = u) dG(u) = ∫_0^∞ P(T ≤ t | S1 = u) dG(u) − ∫_0^t dG(u) = P(T ≤ t) − G(t).
Substituting in Equation 8.30, we get
H(t) = P(T ≤ t) − G(t) + ∫_0^t H(t − u) dG(u). (8.31)
We see that S1 = T + M where M ∼ exp(λ) is the time until a customer arrives to the system once it becomes empty at time T, and is independent of T. Since F is the cdf of T and G is the cdf of S1, we see that
G̃(s) = F̃(s) λ/(s + λ).
Also, from the results in Section 7.8, and using the fact that the M/G/∞ queue starts
with an arrival to an empty queue at time 0, we see that
H(t) = exp(−λ ∫_0^t (1 − B(u)) du) B(t).
Let H̃(s) be the LST of H(·). Taking the LST of Equation 8.31 we get
H̃(s) = F̃(s) − F̃(s) λ/(s + λ) + H̃(s) F̃(s) λ/(s + λ).
Solving for F̃(s) we get
F̃(s) = (s + λ)H̃(s)/(s + λH̃(s)),
which is the result we desired. Differentiating F̃(s) with respect to s and using
lim_{s→0} H̃(s) = H(∞) = e^{−λτ},
we get
E(T) = (e^{λτ} − 1)/λ.
8.5 Key Renewal Theorem

In the previous section we saw several examples of the renewal-type equation, and
methods of solving it. In general explicit solutions to renewal type equations are hard
to come by. Hence in this section we study the asymptotic properties of the solution
H(t) as t → ∞. We have done such asymptotic analysis for a discrete renewal type
equation in our study of DTMCs (Equation 4.24 on page 107), and a continuous re-
newal type equation in the study of CTMCs (Equation 6.53 on page 234). In this
section we shall deal with the general case. We begin with a definition.
Definition 8.4 Periodic Random Variable. A non-negative random variable X is said to be periodic (or arithmetic, or lattice) if there is a d > 0 such that P(X ∈ {0, d, 2d, 3d, · · ·}) = 1. The largest such d is called the period (or span) of X. If there is no such d, X is called aperiodic (or non-arithmetic, or non-lattice).
Theorem 8.17 Key Renewal Theorem. Let H be a solution to the following re-
newal type equation
H(t) = D(t) + ∫_0^t H(t − u) dG(u). (8.32)
Suppose D is a difference of two non-negative bounded monotone functions and
∫_0^∞ |D(u)| du < ∞. (8.33)
Then, if G is aperiodic,
lim_{t→∞} H(t) = (1/τ) ∫_0^∞ D(u) du. (8.34)
If G is periodic with period d, then for any x ≥ 0,
lim_{n→∞} H(x + nd) = (d/τ) Σ_{k=0}^∞ D(x + kd). (8.35)
The proof of this theorem is beyond the scope of this book, and we omit it. The
hard part is proving that the limit exists. If we assume the limit exists, evaluating
the limits is relatively easy, as we have done in the proofs of the discrete renewal
theorem (Theorem 4.14 on page 107) and the continuous renewal theorem (Theo-
rem 6.20 on page 234). Feller proves the result under the assumption that D is a
“directly Riemann integrable” function. The condition on D assumed above is a suf-
ficient condition for direct Riemann integrability, and is adequate for our purposes.
We refer the readers to Feller (1971), Kohlas (1982), or Heyman and Sobel (1982) for proofs. We shall refer to the key renewal theorem as KRT from now on.
In the remainder of this section we shall illustrate the usefulness of KRT by means
of several examples. As a first application, we shall prove the following theorem.
Theorem 8.18 Blackwell’s Renewal Theorem. Let M (·) be the renewal function
of an RP with mean inter-renewal time τ > 0.
1. If the RP is aperiodic,
lim_{t→∞} [M(t + h) − M(t)] = h/τ, h ≥ 0. (8.36)
2. If the RP is periodic with period d,
lim_{t→∞} [M(t + kd) − M(t)] = kd/τ, k = 0, 1, 2, · · · . (8.37)
Proof: We consider the aperiodic case. For a given h ≥ 0, consider the renewal type
equation 8.25 with the following D function:
D(t) = 1,  if 0 ≤ t ≤ h,
     = 0,  if t > h.
The solution is given by Equation 8.26. For t > h this reduces to
H(t) = ∫_{t−h}^t dM(u) = M(t) − M(t − h).
The KRT now yields
lim_{t→∞} [M(t) − M(t − h)] = (1/τ) ∫_0^∞ D(u) du = h/τ,
which is Equation 8.36.
It is interesting to note that one can prove that the above theorem is in fact equiv-
alent to the KRT, although the proof is not simple!
Example 8.15 We verify Blackwell’s renewal theorem for the two renewal func-
tions we derived in Examples 8.11 and 8.12. In Example 8.11 we have a Poisson
process, which is an aperiodic RP with M (t) = λt. In this case the mean inter-
renewal time is τ = 1/λ. We get
lim_{t→∞} [M(t + h) − M(t)] = λh = h/τ,
thus verifying Equation 8.36. In Example 8.12 we have a periodic RP with period 1,
with M (t) = ([t] + 1 − α)/α. In this case the mean inter-renewal time is τ = α.
Thus, if h is a nonnegative integer,
lim_{t→∞} [M(t + h) − M(t)] = lim_{t→∞} ([t + h] − [t])/α = h/τ.
If h is not an integer, the above limit does not exist. This verifies Equation 8.37.
Example 8.16 Asymptotic Behavior of M (t). Let M (t) be the renewal function
of an aperiodic RP with inter-event time with finite variance (σ 2 < ∞). Show that
M(t) = t/τ + (σ² − τ²)/(2τ²) + o(1), (8.38)
where o(1) is a function that approaches 0 as t → ∞.
8.6 Recurrence Times

Let {N(t), t ≥ 0} be an RP generated by {Xn, n ≥ 1}. For t ≥ 0 define the age A(t) = t − SN(t), the remaining life (or excess life) B(t) = SN(t)+1 − t, and the total life C(t) = A(t) + B(t) = XN(t)+1 (Figure 8.3). The processes {A(t), t ≥ 0}, {B(t), t ≥ 0}, and {C(t), t ≥ 0} are called the age process, the remaining life (or excess life) process, and the total life process, respectively. Figures 8.4, 8.5, and 8.6 show the sample paths of the three processes. The age process increases linearly with rate 1, and jumps down to zero at every renewal epoch Sn. The remaining life process starts at X1, and decreases linearly at rate 1. When it reaches zero at time Sn, it jumps up to Xn+1. The total life process has piecewise constant sample paths with upward or downward jumps at Sn.

Figure 8.3 The age A(t), remaining life B(t), and total life C(t).
Figure 8.4 A sample path of the age process.
Figure 8.5 A sample path of the remaining life process.
Figure 8.6 A sample path of the total life process.

We shall use the renewal argument and the KRT to study these three stochastic processes.
Theorem 8.19 Remaining Life Process. Let B(t) be the excess life at time t in an
aperiodic RP with inter-event time distribution G. Then, for a given x > 0,
H(t) = P(B(t) > x), t ≥ 0,
satisfies the renewal-type equation
H(t) = 1 − G(x + t) + ∫_0^t H(t − u) dG(u), (8.39)
and
lim_{t→∞} P(B(t) > x) = (1/τ) ∫_x^∞ (1 − G(u)) du. (8.40)
Proof: Condition on S1 = u. If u ≤ t, the process renews at time u, and hence P(B(t) > x | S1 = u) = H(t − u). If u > t, then B(t) = u − t, which exceeds x if and only if u > t + x. Hence we get
H(t) = ∫_0^∞ P(B(t) > x | S1 = u) dG(u) = 1 − G(x + t) + ∫_0^t H(t − u) dG(u),
which is Equation 8.39. Now D(t) = 1 − G(x + t) is monotone, and satisfies the
condition in Equation 8.33. Since G is assumed to be aperiodic, we can use KRT to
get
lim_{t→∞} H(t) = (1/τ) ∫_0^∞ (1 − G(x + u)) du = (1/τ) ∫_x^∞ (1 − G(u)) du,
which gives Equation 8.40.
Theorem 8.20 Age Process. Let A(t) be the age at time t in an aperiodic RP with
inter-event time distribution G. Its limiting distribution is given by
lim_{t→∞} P(A(t) ≥ x) = (1/τ) ∫_x^∞ (1 − G(u)) du. (8.41)
Proof: It is possible to prove this theorem by first deriving a renewal type equation
for H(t) = P(A(t) ≥ x) and then using the KRT. We leave that to the reader and
show an alternate method here. For t > x, we have
{A(t) ≥ x} ⇔ {No renewals in (t − x, t]} ⇔ {B(t − x) > x}.
Hence
lim_{t→∞} P(A(t) ≥ x) = lim_{t→∞} P(B(t − x) > x) = lim_{t→∞} P(B(t) > x) = (1/τ) ∫_x^∞ (1 − G(u)) du,
which is Equation 8.41.
Note that in the limit, A(t) and B(t) are continuous random variables with density (1 − G(u))/τ, even if the inter-event times are discrete (but aperiodic). Thus the presence of strict or weak inequalities in Equations 8.40 and 8.41 does not make any difference. In the limit the age and excess life are identically distributed!
However, they are not independent, as shown in the next theorem.
Theorem 8.21 Joint Distribution of Age and Excess Life. Suppose the hypotheses
of Theorems 8.19 and 8.20 hold. Then
lim_{t→∞} P(A(t) ≥ y, B(t) > x) = (1/τ) ∫_{x+y}^∞ (1 − G(u)) du. (8.42)
The complementary cdf given in Equation 8.40 occurs frequently in the study of
renewal processes. Hence we introduce the following notation:
Ge(x) = (1/τ) ∫_0^x (1 − G(u)) du. (8.43)
Ge is a proper distribution of a continuous random variable on [0, ∞), since ∫_0^∞ (1 − G(u)) du = τ. It is called the equilibrium distribution corresponding to G. Let Xe be a random variable with cdf Ge. One can show that
E(Xe) = ∫_0^∞ (1 − Ge(u)) du = (σ² + τ²)/(2τ). (8.44)
Unfortunately, since convergence in distribution does not imply convergence of the means, we cannot conclude
lim_{t→∞} E(A(t)) = lim_{t→∞} E(B(t)) = (σ² + τ²)/(2τ)
based on Theorems 8.19 and 8.20. We can, however, show the validity of the above limit directly by using the KRT, as shown in the next theorem.
Theorem 8.22 Limiting Mean Excess Life. Suppose the hypothesis of Theorem 8.19 holds, and σ² < ∞. Then
lim_{t→∞} E(B(t)) = (σ² + τ²)/(2τ). (8.45)
Proof: Let H(t) = E(B(t)). Conditioning on S1 we get
E(B(t) | S1 = u) = H(t − u),  if u ≤ t,
                 = u − t,     if u > t.
Hence we get
H(t) = ∫_0^∞ E(B(t) | S1 = u) dG(u) = D(t) + ∫_0^t H(t − u) dG(u),
where
D(t) = ∫_t^∞ (u − t) dG(u).
Now D is monotone since
(d/dt) D(t) = −(1 − G(t)) ≤ 0.
Also, we can show that
∫_0^∞ D(u) du = (σ² + τ²)/2 < ∞.
Hence the KRT can be applied to get
lim_{t→∞} E(B(t)) = (1/τ) ∫_0^∞ D(u) du = (σ² + τ²)/(2τ).
This proves the theorem.
Theorem 8.23 Limiting Mean Age. Suppose the hypothesis of Theorem 8.19 holds, and σ² < ∞. Then
lim_{t→∞} E(A(t)) = (σ² + τ²)/(2τ). (8.46)

Theorem 8.24 Limiting Mean Total Life. Suppose the hypothesis of Theorem 8.19 holds, and σ² < ∞. Then
lim_{t→∞} E(C(t)) = (σ² + τ²)/τ. (8.47)
Equation 8.47 requires some fine tuning of our intuition. Since C(t) = XN (t)+1 ,
the above theorem implies that
lim_{t→∞} E(XN(t)+1) = (σ² + τ²)/τ ≥ τ = E(Xn).
Thus, for large t, the inter-renewal time covering t, namely, XN (t)+1 , is longer in
mean than a generic inter-renewal time, say Xn . This counter-intuitive fact is called
the inspection paradox. One way to rationalize this “paradox” is to think of picking
a t uniformly over a very long interval [0, T ] over which we have observed an RP.
Then it seems plausible the probability that our randomly picked t will lie in a given
inter-renewal interval is directly proportional to the length of that interval. Hence
such a random t is more likely to fall in a longer interval. This fact is quantified by
Equation 8.47.
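A small simulation sketch (with illustrative U(0,2) inter-event times, for which τ = 1 and (σ² + τ²)/τ = 4/3) makes the paradox concrete:

    # Average length of the inter-renewal interval covering t0 = 10000.
    import random

    random.seed(2)
    t0, reps, total = 10_000.0, 2000, 0.0
    for _ in range(reps):
        s = 0.0
        while True:
            x = random.uniform(0.0, 2.0)
            if s + x > t0:              # this interval covers t0
                total += x              # X_{N(t0)+1}
                break
            s += x
    print(total / reps)                 # near 4/3, not 1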
8.7 Delayed Renewal Processes

A delayed RP {N^D(t), t ≥ 0} is the counting process generated by a sequence {Xn, n ≥ 1} of independent non-negative random variables in which X1 has cdf F, and {Xn, n ≥ 2} are iid with common cdf G. Thus a delayed RP behaves like a standard RP from the second renewal onward.

Example 8.17 Let {N(t), t ≥ 0} be a standard RP. For a fixed s > 0 define Ns(t) = N(s + t) − N(s). Then {Ns(t), t ≥ 0} is a delayed RP. Here F is the distribution of B(s), the excess life at time s in the original RP.
Example 8.19 Delayed RPs in an M/G/1 Queue. Let X(t) be the number of
customers at time t in an M/G/1 system. Let N (t) be the number of busy cycles
completed by time t. We saw in Example 8.5 that {N (t), t ≥ 0} is an RP if the
system starts empty, that is, if X(0) = 0. For any other initial distribution, {N (t), t ≥
0} is a delayed RP.
Theorem 8.25 The Renewal Function for the Delayed RP. The renewal function
{M^D(t), t ≥ 0} of a delayed renewal process {N^D(t), t ≥ 0} satisfies the integral equation
M^D(t) = F(t) + ∫_0^t M^D(t − u) dG(u), t ≥ 0. (8.49)
Its LST is given by
M̃^D(s) = F̃(s)/(1 − G̃(s)). (8.50)
The renewal function {M^D(t), t ≥ 0} also inherits almost all the properties of the renewal function of a standard RP, except that it does not uniquely characterize the delayed RP.
In the study of delayed RPs, conditioning on the first renewal epoch typically produces an equation of the form
H^D(t) = C(t) + ∫_0^t H(t − u) dF(u). (8.51)
This is not a renewal-type equation, since H^D appears on the left, but not on the right. The next theorem gives a result about the limiting behavior of the function H^D satisfying the above equation.

Theorem 8.26 Let C(t) and H(t) be bounded functions with finite limits as t → ∞. Let H^D be given as in Equation 8.51, where F is a cdf of a non-negative non-defective random variable. Then
lim_{t→∞} H^D(t) = lim_{t→∞} C(t) + lim_{t→∞} H(t). (8.52)
Proof: Since H(·) is a bounded function with a finite limit h (say), we see that the functions Ht(·) defined by
Ht(x) = H(t − x),  if 0 ≤ x ≤ t,
      = 0,         if x > t,
form a sequence of bounded functions with
lim_{t→∞} Ht(x) = h, x ≥ 0.
The bounded convergence theorem then yields ∫_0^t H(t − u) dF(u) = ∫_0^∞ Ht(u) dF(u) → h as t → ∞, and Equation 8.52 follows.
Example 8.20 Asymptotic Behavior of M D (t). Let M D (t) be the renewal func-
tion of a delayed RP. Let
τF = E(X1), σF² = Var(X1) < ∞,
τ = E(Xn), σ² = Var(Xn) < ∞, n ≥ 2.
Show that
M^D(t) = t/τ + (σ² + τ² − 2ττF)/(2τ²) + o(1), (8.53)
where o(1) is a function that approaches 0 as t → ∞.
Next we study a special case of a delayed RP called the equilibrium renewal process, as defined below.

Definition 8.6 Equilibrium RP. A delayed RP {N^e(t), t ≥ 0} in which the first inter-event time has cdf F = Ge, the equilibrium distribution of Equation 8.43, is called an equilibrium renewal process. From Equation 8.50, M̃^e(s) = 1/(sτ), so that M^e(t) = t/τ.
Note that M e (t) is a linear function of t, but {N e (t), t ≥ 0} is not a PP. Does this
contradict our statement in Example 8.11 that linear renewal function implies that
the RP is a PP? Not at all, since {N e (t), t ≥ 0} is a delayed RP, and not an RP. If it
was an RP, i.e., if Ge = G, then it would be a PP. This indeed is the case, since it is
possible to show that G = Ge if and only if G is exponential!
Theorem 8.28 Excess Life in an Equilibrium RP. Let B e (t) be the excess life at
time t in an equilibrium RP as in Definition 8.6. Then,
P(B e (t) ≤ x) = Ge (x), t ≥ 0, x ≥ 0.
Proof: Let H e (t) = P(B e (t) > x) and H(t) = P(B(t) > x) where B(t) is the
excess life in an RP with common inter-event cdf G. Using Ge as the cdf of S1 and
following the proof of Theorem 8.19 we get
H^e(t) = 1 − Ge(x + t) + ∫_0^t H(t − u) dGe(u).
Taking the LST on both sides, we get
H̃^e(s) = 1 − c(s) + H̃(s)G̃e(s), (8.55)
where
c(s) = ∫_0^∞ e^{−st} dGe(x + t).
On the other hand, from Equation 8.39, we have
H(t) = 1 − G(x + t) + ∫_0^t H(t − u) dG(u).
Taking LST of the above equation we get
H̃(s) = (1 − b(s))/(1 − G̃(s)), (8.56)
where
b(s) = ∫_0^∞ e^{−st} dG(x + t).
Taking into account the jump of size Ge (x) at t = 0 in the function Ge (x + t), we
can show that
c(s) = Ge(x) + (1 − b(s))/(sτ). (8.57)
Substituting Equations 8.57 and 8.56 in 8.55, and simplifying, we get
H̃^e(s) = 1 − Ge(x).
Hence we must have
H^e(t) = 1 − Ge(x),
for all t ≥ 0. This proves the theorem.
Thus the distribution of the excess life in an equilibrium RP does not change with
time. This is another manifestation of the “equilibrium”! We leave it to the reader to
derive the next result from this.
8.8 Alternating Renewal Processes

Consider a stochastic process {X(t), t ≥ 0} with state-space {0, 1}. Suppose the
process starts in state 1 (also called the “up” state). It stays in that state for U1 amount
of time and then jumps to state 0 (also called the “down” state). It stays in state 0 for
D1 amount of time and then goes back to state 1. This process repeats forever, with
Un being the nth up-time, and Dn the nth down-time. The nth up time followed by
the nth down time is called the nth cycle. A sample path of such a process is shown
in Figure 8.7.
Figure 8.7 A typical sample path of an alternating renewal process.
We abbreviate “alternating renewal process” as ARP. Note that an ARP does not
count any events even though it has the term "renewal process" in its name. The next theorem gives the main result about the ARPs.

Theorem 8.30 Limiting Behavior of an ARP. Suppose {(Un, Dn), n ≥ 1} is a sequence of iid non-negative bivariate random variables with E(U1 + D1) < ∞. Then H(t) = P(X(t) = 1) satisfies the renewal-type equation
H(t) = P(U1 > t) + ∫_0^t H(t − u) dG(u), (8.58)
where G is the cdf of U1 + D1. If U1 + D1 is aperiodic,
lim_{t→∞} P(X(t) = 1) = E(U1)/(E(U1) + E(D1)). (8.59)
Proof: Let Sn be the nth time the ARP enters state 1, i.e.,
Sn = Σ_{i=1}^n (Ui + Di), n ≥ 1.
Then {Sn, n ≥ 1} is a renewal sequence. Since the ARP renews at time S1 = U1 + D1, we get
P(X(t) = 1 | S1 = u) = H(t − u),            if u ≤ t,
                     = P(U1 > t | S1 = u),  if u > t.
Using G(u) = P(S1 ≤ u), we get
H(t) = ∫_0^∞ P(X(t) = 1 | S1 = u) dG(u) = ∫_0^t H(t − u) dG(u) + ∫_t^∞ P(U1 > t | S1 = u) dG(u). (8.60)
We use the same trick as in Example 8.13: since P(U1 > t | S1 = u) = 0 if u ≤ t, we get
∫_t^∞ P(U1 > t | S1 = u) dG(u) = ∫_0^∞ P(U1 > t | S1 = u) dG(u) = P(U1 > t).
Substituting in Equation 8.60 we get Equation 8.58. Now, P(U1 > t) is bounded and monotone, and
∫_0^∞ P(U1 > u) du = E(U1) < ∞.
Hence we can use the KRT to get, assuming S1 is aperiodic,
lim_{t→∞} H(t) = lim_{t→∞} P(X(t) = 1) = E(U1)/E(S1) = E(U1)/(E(U1) + E(D1)).
This proves the theorem. The periodic case follows similarly.
This proves the theorem. The periodic case follows similarly.
The above theorem is intuitively obvious if one interprets the limiting probability
that X(t) = 1 as the fraction of the time the ARP is up. Thus, if the successive up-
times are 30 minutes long on the average, and the down-times are 10 minutes long
on the average, then it is reasonable to expect that the ARP will be up 75% of the
time in the long run. What is non-intuitive about the theorem is that it is valid even
if the Un and Dn are dependent random variables. All we need to assume is that
successive cycles of the ARP are independent. This fact makes the ARP a powerful
tool. We illustrate its power with several examples.
Example 8.22 Excess Life. In this example we use ARPs to derive the limiting distribution of the excess life in a standard RP given in Equation 8.40. Fix an x > 0, and define X(t) = 1 if B(t) > x, and X(t) = 0 otherwise.

Figure 8.8 The excess life process and the induced ARP.

During the nth renewal interval, the process {X(t), t ≥ 0} first spends Un time units in state 1 and then Dn time units in state 0, where
Un = max(Xn − x, 0),
Dn = min(Xn , x),
Un + Dn = Xn .
Note that Un and Dn are dependent, but {(Un, Dn), n ≥ 1} is a sequence of iid bivariate random variables. Hence {X(t), t ≥ 0} is an ARP. Then, assuming Xn is aperiodic, we can use Theorem 8.30 to get
lim_{t→∞} P(B(t) > x) = lim_{t→∞} P(X(t) = 1)
  = E(U1)/(E(U1) + E(D1))
  = E(max(X1 − x, 0))/E(X1)
  = (1/τ) ∫_x^∞ (1 − G(u)) du.
Example 8.23 M/G/1/1 Queue. Consider an M/G/1/1 queue with arrival rate
λ and iid service times with mean τ . Let X(t) be the number of customers in the
system at time t. Compute the limiting distribution of X(t) as t → ∞.
The stochastic process {X(t), t ≥ 0} has state-space {0, 1}, where the down-
times {Dn , n ≥ 1} are iid exp(λ) and the up-times {Un , n ≥ 1} are iid service
times. Hence {X(t), t ≥ 0} is an aperiodic ARP (i.e., S1 = U1 + D1 is aperiodic).
Hence
p1 = lim_{t→∞} P(X(t) = 1) = E(U1)/(E(U1) + E(D1)) = τ/(τ + 1/λ) = ρ/(1 + ρ),
where ρ = λτ. Hence
p0 = lim_{t→∞} P(X(t) = 0) = 1/(1 + ρ).
Example 8.24 G/M/1/1 Queue. Consider a G/M/1/1 queue with iid inter-arrival
times with common cdf G and mean 1/λ, and iid exp(µ) service times. Let X(t) be
the number of customers in the system at time t. Compute the limiting distribution
of X(t) as t → ∞.
The stochastic process {X(t), t ≥ 0} has state-space {0, 1}. The up-times
{Un , n ≥ 1} are iid exp(µ) and the cycle lengths {Un +Dn , n ≥ 1} form a sequence
of iid random variables, although Un and Dn are dependent. Hence {X(t), t ≥ 0} is
an ARP. Now, let N (t) be the number of arrivals (who may or may not enter) up to
time t and An be the time of the nth arrival. Then
U1 + D1 = A_{N(U1)+1}.
Hence, from Theorem 8.13, we get
E(U1 + D1 | U1 = t) = (1/λ)(1 + M(t)),
where 1/λ is the mean inter-arrival time, and M(t) = E(N(t)). Hence
E(U1 + D1) = ∫_0^∞ (1/λ)(1 + M(t)) µe^{−µt} dt
  = (1/λ)(1 + ∫_0^∞ M(t) µe^{−µt} dt)
  = (1/λ)(1 + ∫_0^∞ e^{−µt} dM(t))
  = (1/λ)(1 + M̃(µ))
  = (1/λ)(1 + G̃(µ)/(1 − G̃(µ)))
  = 1/(λ(1 − G̃(µ))).
Assuming G is aperiodic, we get
p1 = lim_{t→∞} P(X(t) = 1) = E(U1)/E(U1 + D1) = ρ(1 − G̃(µ)),
where ρ = λ/µ. Hence
p0 = lim_{t→∞} P(X(t) = 0) = 1 − ρ(1 − G̃(µ)).
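The following simulation sketch (with hypothetical parameters: U(0,2) inter-arrival times, so λ = 1, and µ = 1.5) estimates the long run fraction of time the server is busy and compares it with ρ(1 − G̃(µ)):

    import math, random

    def sim_busy_fraction(mu, n_arrivals=200_000, seed=1):
        random.seed(seed)
        busy_until, busy_time, t = 0.0, 0.0, 0.0
        for _ in range(n_arrivals):
            t += random.uniform(0.0, 2.0)    # inter-arrival ~ U(0,2)
            if t >= busy_until:              # arrival finds the server free
                s = random.expovariate(mu)   # exp(mu) service time
                busy_time += s
                busy_until = t + s
        return busy_time / t

    mu = 1.5                                     # lam = 1, rho = 2/3
    G_lst = (1 - math.exp(-2 * mu)) / (2 * mu)   # G~(mu) for U(0,2)
    print(sim_busy_fraction(mu), (1 / mu) * (1 - G_lst))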
Next we define a delayed ARP, along the same lines as in the definition of the
delayed RP.
Thus a delayed ARP behaves like a standard ARP from the second cycle onward.
The main result about the delayed ARP is given in the next theorem.
Theorem 8.31 Limiting Behavior of a Delayed ARP. Let {X(t), t ≥ 0} be a delayed ARP whose first cycle ends at a finite time with probability 1, and for which {(Un, Dn), n ≥ 2} are iid with E(U2 + D2) < ∞ and U2 + D2 aperiodic. Then
lim_{t→∞} P(X(t) = 1) = E(U2)/(E(U2) + E(D2)).

This theorem says that the limiting behavior of a delayed ARP is not affected by its behavior over the first cycle, as long as it terminates with probability 1. Finally, one can prove the validity of Theorems 8.30 and 8.31 even if the sojourns in states 1 and 0 during one cycle are not contiguous intervals.
8.9 Semi-Markov Processes

In this section we study a class of stochastic processes obtained by relaxing the expo-
nential sojourn time assumption in the CTMCs. Specifically, we consider a stochastic
process {X(t), t ≥ 0} of Section 6.1 with a countable state-space S. It starts in the
initial state X0 at time t = 0. It stays there for a sojourn time Y1 and then jumps to
state X1 . In general it stays in state Xn for a duration given by Yn+1 and then jumps
to state Xn+1 , n ≥ 0. Let N (t) be the number of jumps up to time t. We assume
that the condition in Equation 6.1 on page 189 holds. Then {N (t), t ≥ 0} is well
defined, and the sequence {X0 , (Xn , Yn ), n ≥ 1} can be used to define the process
{X(t), t ≥ 0} by
X(t) = XN (t) , t ≥ 0. (8.62)
In this chapter we study the case where {X(t), t ≥ 0} belongs to a particular class
of stochastic processes called the semi-Markov processes (SMP), as defined below.

Definition 8.9 Semi-Markov Process. The process {X(t), t ≥ 0} of Equation 8.62 is called an SMP with state-space S and kernel G(y) = [Gij(y)] if, for all n ≥ 0,
P(Xn+1 = j, Yn+1 ≤ y | Xn = i, Yn, Xn−1, Yn−1, · · · , X0) = Gij(y), i, j ∈ S, y ≥ 0.

Thus the semi-Markov process has Markov property at every jump epoch, hence
the name “semi”-Markov. In comparison, a CTMC has Markov property at every
time t. Note that unlike in the CTMCs, we allow pii > 0. Also, unlike in the CTMCs,
the variables Xn+1 and Yn+1 in an SMP are allowed to depend on each other.
Clearly {Xn , n ≥ 0} is a DTMC (called the embedded DTMC in the SMP) with
transition probabilities
pij = Gij (∞) = P(Xn+1 = j|Xn = i), i, j ∈ S.
If there is a state i ∈ S for which Gij (y) = 0 for all j ∈ S and all y ≥ 0, then state
i must be absorbing. Hence we set pii = 1 in such a case. With this convention, we see that
Σ_{j∈S} pij = 1, i ∈ S.
Next, let
Gi(y) = Σ_{j∈S} Gij(y) = P(S1 ≤ y | X0 = i), i ∈ S, y ≥ 0.
Thus Gi is the cdf of the sojourn time in state i. We illustrate with several examples.
Example 8.27 Series System. Consider a system of N independent components in series: the system is up if and only if all components are up. Component i has an exp(λi) lifetime, failed components are repaired one at a time with repair time cdf Hi, and the remaining components do not fail while the system is down. Let X(t) = 0 if the system is up at time t, and X(t) = i if component i is under repair at time t. When the system is up, its sojourn time in state 0 is exp(λ), where λ = λ1 + λ2 + · · · + λN, and the next state is i with probability λi/λ. Once the system enters state i ∈ {1, 2, · · · , N}, repair starts on component i. Thus the sojourn time in this state has cdf Hi, at the end of which the system jumps to state 0. Combining all these observations we see that {X(t), t ≥ 0} is an SMP with kernel

G(y) =
  [ 0       (λ1/λ)(1 − e^{−λy})   (λ2/λ)(1 − e^{−λy})   · · ·   (λN/λ)(1 − e^{−λy}) ]
  [ H1(y)   0                     0                     · · ·   0                   ]
  [ H2(y)   0                     0                     · · ·   0                   ]
  [ · · ·                                                                           ]
  [ HN(y)   0                     0                     · · ·   0                   ]
We had seen this system in Modeling Exercise 6.9 on page 260 where the repair-
times were assumed to be exponential random variables. Thus the CTMC there is a
special case of the SMP developed here.
Armed with these examples of SMPs we now study the limiting behavior of the
SMPs. We need the following preliminaries. Define
Tj = min{t ≥ Y1 : X(t) = j}, j ∈ S,
where Y1 is the first sojourn time. Thus if the SMP starts in state j, Tj is the first time
it returns to state j (after leaving it at time S1 .) If it starts in a state i 6= j, then Tj is
the first time it enters state j. Now let
τi = E(Y1 |X(0) = i), i ∈ S,
and
τij = E(Tj |X(0) = i), i, j ∈ S.
The next theorem shows how to compute the τij ’s. It also extends the concept of first
step analysis to SMPs.
Theorem 8.32 First Passage Times in SMPs. The mean first passage times {τij }
satisfy the following equations:
τij = τi + Σ_{k≠j} pik τkj. (8.64)
Proof: Follows along the same line as the derivation of Equation 6.49 on page 232,
and using τi in place of 1/qi as E(Y1 |X(0) = i).
Theorem 8.33 Mean Return Times. Let {X(t), t ≥ 0} be an irreducible recurrent SMP with kernel G, and let π = [πi] be a positive solution to π = πG(∞). Then
τjj = Σ_{i∈S} πi τi / πj, j ∈ S. (8.65)

Proof: Follows along the same line as the derivation of Equation 6.50 on page 232, and using τi in place of 1/qi as E(Y1 |X(0) = i).
It follows from Theorem 8.33 that if the mean return time to any state j in an irre-
ducible recurrent SMP is finite, it is finite for all states in the SMP. Hence we can
make the following definition.
It follows from Theorem 8.33 that a necessary and sufficient condition for positive
recurrence of an irreducible recurrent SMP is
Σ_{i∈S} πi τi < ∞,
where πi and τi are as in Theorem 8.33. Note that if the first return time is aperiodic
(periodic with period d) for any one state, it is aperiodic (periodic with period d) for
all states in an irreducible and recurrent SMP. This motivates the next definition.
Definition 8.12 An irreducible and recurrent SMP is called aperiodic if the first
passage time Ti , starting with X0 = i, is an aperiodic random variable for any state
i. If it is periodic with period d, the SMP is said to be periodic with period d.
With these preliminaries we can now state the main result about the limiting be-
havior of SMPs in the next theorem. Intuitively, we consider a (possibly delayed)
ARP associated with the SMP such that the ARP is in state 1 whenever the SMP is
in state j, and zero otherwise. The length of a cycle in this ARP is the time between
two consecutive visits by the SMP to state j, hence the expected length of the cycle is
τjj . During this cycle the SMP spends an expected time τj in state j. Hence the long
run fraction of the time spent in state j is given by τj /τjj , which gives the limiting
distribution of the SMP. Since the behavior in the first cycle does not matter (as long
as it is finite with probability 1), the same result holds no matter what state the SMP
starts in, as long as the second and the subsequent cycles are defined to start with an
entry into state j.
Theorem 8.34 Limiting Behavior of SMPs. Let {X(t), t ≥ 0} be an irreducible,
positive recurrent and aperiodic SMP with kernel G. Let π be a positive solution to
π = πG(∞).
Then {X(t), t ≥ 0} has a limiting distribution [pj, j ∈ S], and it is given by
pj = lim_{t→∞} P(X(t) = j | X(0) = i) = πj τj / Σ_{k∈S} πk τk, j ∈ S. (8.66)
If the SMP is periodic with period d, then the above limit holds if t = nd and n → ∞.
Proof: Consider the (possibly delayed) ARP described above that is in state 1 whenever the SMP is in state j and in state 0 otherwise. Theorems 8.30 and 8.31 yield
pj = τj/τjj = πjτj / Σ_{k∈S} πkτk,
where the last equality follows from Equation 8.65. This proves the theorem.
Equation 8.66 implies that the limiting distribution of the SMP depends on the
sojourn time distributions only through their means! This insensitivity of the lim-
iting distribution is very interesting and useful. It has generated a lot of literature
investigating similar insensitivity results in other contexts.
We illustrate with examples.
Example 8.30 Series System. Compute the limiting distribution of the series sys-
tem of Example 8.27.
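Theorem 8.34 applies directly. The embedded DTMC has a positive solution π0 = 1, πi = λi/λ of π = πG(∞), and the mean sojourn times are τ0 = 1/λ and τi = mi, the mean of Hi. Equation 8.66 then gives p0 = 1/(1 + Σ λimi) and pi = λimi/(1 + Σj λjmj). A small numerical sketch (the rates and mean repair times below are hypothetical):

    # Limiting distribution of the series-system SMP via Equation 8.66.
    lam = [0.2, 0.5, 0.1]          # hypothetical failure rates
    m = [1.0, 0.4, 2.5]            # hypothetical mean repair times
    L = sum(lam)

    # pi_0 = 1, pi_i = lam[i]/L; tau_0 = 1/L, tau_i = m[i].
    weights = [1.0 / L] + [(li / L) * mi for li, mi in zip(lam, m)]
    total = sum(weights)
    print([w / total for w in weights])   # [p0, p1, p2, p3]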
Unlike in the study of CTMCs, the study of the limiting distribution of the SMPs
cannot stop at the limiting distribution of X(t) as t → ∞, since knowing the value
of X(t) at time t is, in general, not enough to determine the future of an SMP. We
also need to know the distribution of the remaining sojourn time at that time. We will
have to wait for the theory of Markov regenerative processes in the next chapter to
settle this question completely.
8.10 Renewal Processes with Costs/Rewards

We studied cost/reward models for the DTMCs in Section 4.8 and for the CTMCs in Section 6.12. Following that tradition we now study cost/reward models associated with RPs. Let {N(t), t ≥ 0} be an RP generated by {Xn, n ≥ 1}, and suppose a reward Rn is earned at the nth renewal epoch Sn. Let
Z(t) = Σ_{n=1}^{N(t)} Rn, t ≥ 0,
be the total reward earned up to time t. Note that Z(t) = 0 if N(t) = 0. With this notation we are ready to define a renewal reward process.
Figure 8.9 A typical sample path of a renewal reward process.

The process {Z(t), t ≥ 0} is called a renewal reward process generated by {(Xn, Rn), n ≥ 1} if {(Xn, Rn), n ≥ 1} is a sequence of iid bivariate random variables; note that Rn is allowed to depend on Xn.
Example 8.33 Machine Maintenance. Consider the age replacement policy de-
scribed in Example 8.4, where we replace a machine upon failure or upon reaching
age T . Recall that Li is the lifetime of the ith machine and {Li , i ≥ 1} is a sequence
of iid non-negative random variables. Replacing a machine by a new one costs $Cr ,
and failure costs $Cf . Let Z(t) be the total cost incurred up to time t. Show that
{Z(t), t ≥ 0} is a renewal reward process.
Let Sn be the time when nth replacement occurs (S0 = 0), and N (t) be the
number of replacements up to time t. Then
Sn − Sn−1 = Xn = min(Ln , T ).
Thus {Xn , n ≥ 1} is a sequence of iid random variables, and {N (t), t ≥ 0} is an
RP generated by it. The cost Rn , incurred at time Sn , is given by
Rn = Cr,       if Ln > T,
   = Cf + Cr,  if Ln ≤ T.
Here we have implicitly assumed that if Ln = T , then we pay for the failure and
then replace the machine. With the above expression for the cost Rn , we see that
{(Xn , Rn ), n ≥ 1} is a sequence of iid bivariate random variables. It is clear that
{Z(t), t ≥ 0} is generated by {(Xn , Rn ), n ≥ 1}, and hence it is a renewal reward
process.
Computing the distribution of Z(t) is rather difficult. Hence we study its asymp-
totic properties as t → ∞.
Theorem 8.35 Almost-Sure ERT for Renewal Reward Processes. Let {Z(t), t ≥
0} be a renewal reward process generated by {(Xn , Rn ), n ≥ 1}, and suppose
r = E(Rn) < ∞, τ = E(Xn) < ∞.
Then
lim_{t→∞} Z(t)/t = r/τ, w.p. 1. (8.68)
Next we derive the expected value version of the above result, i.e., lim E(Z(t))/t =
r/τ. As in the case of the elementary renewal theorem, this conclusion does not fol-
low from Theorem 8.35, and has to be established independently. We need two results
before we can prove this.
Theorem 8.36 Suppose E(|R1|) < ∞. Then
E(Σ_{n=1}^{N(t)+1} Rn) = r(M(t) + 1). (8.69)

Proof: This is a consequence of Wald's equation, since N(t) + 1 is a stopping time for {(Xn, Rn), n ≥ 1} with E(N(t) + 1) = M(t) + 1.

Theorem 8.37 Suppose E(|R1|) < ∞ and 0 < τ < ∞. Then
lim_{t→∞} E(R_{N(t)+1})/t = 0. (8.70)

Proof: Let H(t) = E(R_{N(t)+1}). A renewal argument shows that
H(t) = D(t) + ∫_0^t D(t − u) dM(u), where D(t) = E(R1 1{X1 > t}), (8.71)
so that |D(t)| ≤ E(|R1|) < ∞ for all t.
Also, Equation 8.71 implies that |D(t)| → 0 as t → ∞. Hence, for a given ε > 0, there exists a T < ∞ such that |D(t)| < ε for t ≥ T. Then, for all t ≥ T, we have
H(t)/t = D(t)/t + ∫_0^{t−T} (D(t − u)/t) dM(u) + ∫_{t−T}^t (D(t − u)/t) dM(u)
  ≤ ε/t + ε M(t − T)/t + E(|R1|)(M(t) − M(t − T))/t.
Now, as t → ∞, M(t) − M(t − T) → T/τ. Hence we get
lim_{t→∞} H(t)/t ≤ lim_{t→∞} ε/t + ε lim_{t→∞} M(t − T)/t + lim_{t→∞} E(|R1|)(M(t) − M(t − T))/t = ε/τ,
since the first and the third limits on the right hand side are zero, and the second limit is 1/τ from the elementary renewal theorem. Since ε > 0 was arbitrary, we get Equation 8.70.
Theorem 8.38 ERT for Renewal Reward Processes. Suppose the hypothesis of
Theorem 8.35 holds. Then
lim_{t→∞} E(Z(t))/t = r/τ. (8.72)
Proof: Write
Z(t) = Σ_{n=1}^{N(t)+1} Rn − R_{N(t)+1}.
Hence
E(Z(t))/t = E(Σ_{n=1}^{N(t)+1} Rn)/t − E(R_{N(t)+1})/t = r(1 + M(t))/t − E(R_{N(t)+1})/t,
where we have used Theorem 8.36. Now let t → ∞. Theorem 8.37 and the elemen-
tary renewal theorem yield
lim_{t→∞} E(Z(t))/t = r/τ,
as desired.
The above theorem is very intuitive and useful: it says that the long run expected
rate of reward is simply the ratio of the expected reward in one cycle and the expected
length of that cycle. What is surprising is that the reward does not have to be inde-
pendent of the cycle length. This is what makes the theorem so useful in applications.
We end this section with two examples.
Example 8.34 Machine Maintenance. Compute the long run expected cost per
unit time of the age replacement policy described in the machine maintenance model
of Example 8.33.
Suppose the lifetimes {Li, i ≥ 1} are iid random variables with common cdf F(·). Then we have
τ = E(Xn) = E(min(Ln, T)) = ∫_0^T (1 − F(u)) du,
and
r = E(Rn) = Cr + Cf F(T).
Hence the long run cost rate is given by
lim_{t→∞} E(Z(t))/t = r/τ = (Cr + Cf F(T)) / ∫_0^T (1 − F(u)) du.
Clearly, as T increases, the cost rate of the planned replacements decreases, but the
cost rate of the failures increases. Hence one would expect that there is an optimal
T which minimizes the total cost rate. The actual optimal value of T depends upon
Cr, Cf and F. For example, if Ln ∼ exp(λ), we get
lim_{t→∞} E(Z(t))/t = λCf + λCr/(1 − e^{−λT}).
This is a monotonically decreasing function of T , implying that the optimal T is
infinity, i.e., the machine should be replaced only upon failure. In retrospect, this is
to be expected, since a machine with exp(λ) life time is always as good as new!
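For other lifetime distributions the optimal T is easily found numerically. A short sketch (with hypothetical costs Cr = 10 and Cf = 30, and U(2,5) lifetimes as in Computational Exercise 8.11) evaluates the cost rate on a grid of T values:

    import numpy as np

    Cr, Cf = 10.0, 30.0                      # hypothetical costs
    du = 0.001
    u = np.arange(du, 5.0, du)               # candidate replacement ages T
    F = np.clip((u - 2.0) / 3.0, 0.0, 1.0)   # U(2,5) lifetime cdf
    denom = np.cumsum(1.0 - F) * du          # int_0^T (1-F(v)) dv
    g = (Cr + Cf * F) / denom                # long run cost rate at T
    i = np.argmin(g)
    print(u[i], g[i])                        # optimal T and its cost rate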
In our study of renewal reward processes, we have assumed that the reward Rn is
earned at the end of the nth cycle. This is not necessary. The results of this section
remain valid no matter how the reward is earned over the cycle as long as the total
reward earned over the nth cycle is Rn and {(Xn , Rn ), n ≥ 1} is a sequence of iid
bivariate random variables. We use this fact in the next example.
Example 8.35 Total Up-Time. Let {X(t), t ≥ 0} be the ARP as defined in Sec-
tion 8.8. Let W (t) be the total time spent in state 1 by the ARP up to time t. Show
that
lim_{t→∞} W(t)/t = E(U1)/(E(U1) + E(D1)).
{W (t), t ≥ 0} can be seen to be a renewal reward process with Un +Dn as the nth
cycle duration and Un as the reward earned over the nth cycle. Note that the reward
is earned continuously at rate X(t) during the cycle. Thus,
τ = E(U1 + D1 ), r = E(U1 ).
Hence the result follows from Theorem 8.38.
The results of this section remain valid for “delayed” renewal reward processes,
i.e., when {(Xn , Rn ), n ≥ 2} is a sequence of iid bivariate random variables, and is
independent of (X1 , R1 ). We shall omit the proofs.
Theorems 8.35 and 8.38 deal with what we had earlier called the “average cost
case.” It is possible to study the “discounted cost” case as well. The results are left as
Computational Exercise 8.49.
8.11 Regenerative Processes
A stochastic process {X(t), t ≥ 0} is called a regenerative process (RGP) if there is a random time S1 > 0 such that {X(S1 + t), t ≥ 0} is stochastically identical to {X(t), t ≥ 0} and is independent of the history of the process over [0, S1); S1 is called a regeneration epoch. The process is called a delayed RGP if this regenerative structure holds from S1 onward, but the behavior over the first cycle may be different.

Example 8.40 GI/GI/1 Queue. Let X(t) be the number of customers in a GI/GI
/1 queue. Let Sn be the nth time when a customer enters an empty system. From
the independence of the inter-arrival times and the service times, it follows that the
system loses dependence on the history at times S1 , S2 , S3 , · · ·. A sufficient condition
for P(S1 < ∞) = 1 is that the mean service time be less than the mean inter-
arrival time. If the process starts with a customer entering an empty system at time
0, {X(t), t ≥ 0} is an RGP, otherwise it is a delayed RGP.
Next we study the limiting behavior of the RGPs. The main result is given in the
next theorem.
Theorem 8.39 Limiting Distribution for RGP. Let {X(t), t ≥ 0} be an RGP with
state-space (−∞, ∞) with right continuous sample paths with left limits. Let S1 be
the first regeneration epoch and U1 (x) be the time that the process spends in the
interval (−∞, x] during [0, S1). If S1 is aperiodic with E(S1) < ∞, then
F(x) = lim_{t→∞} P(X(t) ≤ x) = E(U1(x))/E(S1). (8.73)
If S1 is periodic with period d, the above limit holds if t = nd and n → ∞.
Proof: Fix an x ∈ (−∞, ∞), and define H(t) = P(X(t) ≤ x). Since the process regenerates at time S1, we have
P(X(t) ≤ x | S1 = u) = H(t − u),              if u ≤ t,
                     = P(X(t) ≤ x | S1 = u),  if u > t.
Using G(u) = P(S1 ≤ u), we get
H(t) = ∫_0^∞ P(X(t) ≤ x | S1 = u) dG(u) = D(t) + ∫_0^t H(t − u) dG(u), (8.74)
where
D(t) = P(X(t) ≤ x, S1 > t).
It can be shown that the assumptions about the sample paths ensure that D(·) satisfies the conditions in the KRT (Theorem 8.17). Now define
Z(t) = 1,  if X(t) ≤ x,
     = 0,  if X(t) > x.
Then we have
E(U1(x)) = E(∫_0^{S1} Z(t) dt)
  = ∫_0^∞ E(∫_0^u Z(t) dt | S1 = u) dG(u)
  = ∫_0^∞ ∫_0^u E(Z(t) | S1 = u) dt dG(u)
  = ∫_0^∞ ∫_0^u P(X(t) ≤ x | S1 = u) dt dG(u)
  = ∫_0^∞ ∫_t^∞ P(X(t) ≤ x | S1 = u) dG(u) dt
  = ∫_0^∞ P(X(t) ≤ x, S1 > t) dt.
Using the KRT we get
lim_{t→∞} P(X(t) ≤ x) = lim_{t→∞} H(t) = (1/E(S1)) ∫_0^∞ D(t) dt = (1/E(S1)) ∫_0^∞ P(X(t) ≤ x, S1 > t) dt = E(U1(x))/E(S1).
This proves the theorem.
This proves the theorem.
Several observations about the above theorem are in order. Let U1 (∞) =
limx→∞ U1 (x). Since the state-space of the RGP is (−∞, ∞) and S1 < ∞ with
probability 1, we have U1(∞) = S1 and E(U1(∞)) = E(S1). This implies
F(∞) = lim_{t→∞} P(X(t) < ∞) = E(U1(∞))/E(S1) = 1.
Thus the RGP satisfying the hypothesis of the above theorem has a proper limiting
distribution.
If the state-space of the RGP is discrete, say {0, 1, 2, · · ·}, we can define U1,j as the time the RGP spends in state j over the first regenerative cycle. In that case the above theorem implies that the RGP has a proper limiting pmf given by
pj = lim_{t→∞} P(X(t) = j) = E(U1,j)/E(S1). (8.75)
Example 8.41 Pattern Occurrence. Consider a sequence of independent tosses of a coin that shows heads (H) with probability p and tails (T) with probability q = 1 − p. We compute the expected number of tosses needed to observe the pattern HHTT. Let Yn be the outcome of the nth toss. Let X0 = 1 and for n ≥ 1 define
Xn = 1,  if Yn = H, Yn+1 = H, Yn+2 = T, Yn+3 = T,
   = 0,  otherwise.
Let S0 = 0 and define Sk+1 = min{n > Sk : Xn = 1}, for k ≥ 0. Note that since
the coin tosses are iid, {Sk+1 − Sk , k ≥ 1} are iid and have the same distribution
as S1 + 3. Let X(t) = X[t] , where [t] is the largest integer less than or equal to t.
Thus {X(t), t ≥ 0} is a periodic delayed RGP with period 1, and {Sk , k ≥ 1} are
the regeneration epochs. During each regenerative cycle it spends 1 unit of time in
state 1. Hence
lim_{n→∞} P(X(n) = 1) = lim_{n→∞} P(Xn = 1) = 1/E(S2 − S1).
However we know that
P(Xn = 1) = P(Yn = H, Yn+1 = H, Yn+2 = T, Yn+3 = T ) = p2 q 2 .
Hence
E(S2 − S1) = E(S1 + 3) = 1/(p²q²).
Since the number of tosses needed to observe HHTT is S1 + 3, we see that the
expected number of tosses needed to observe HHTT is 1/p2 q 2 . Will this method
work for the sequence HTTH?
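The answer is easy to confirm by simulation (a sketch of ours; for p = 1/2 the expected number of tosses is 1/(p²q²) = 16):

    import random

    def tosses_until_hhtt(p, rng):
        history = ""
        while not history.endswith("HHTT"):
            history += "H" if rng.random() < p else "T"
        return len(history)

    p, rng, n = 0.5, random.Random(42), 20_000
    avg = sum(tosses_until_hhtt(p, rng) for _ in range(n)) / n
    print(avg, 1 / (p**2 * (1 - p)**2))      # both near 16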
We next consider the cost/reward models in RGPs. Let X(t) be the state of a system
at time t, and assume that {X(t), t ≥ 0} is a regenerative process. Suppose the
system earns rewards at rate r(x) whenever it is in state x. Then the total reward obtained by the system up to time t is given by
∫_0^t r(X(u)) du,
and the reward rate up to time t is given by
(1/t) ∫_0^t r(X(u)) du.
The following result gives the limiting behavior of the reward rate as t → ∞: under the hypothesis of Theorem 8.39, the long run reward rate equals the expected reward rate under the limiting distribution; in particular, for an RGP with a discrete state-space,
lim_{t→∞} (1/t) ∫_0^t r(X(u)) du = Σ_j r(j) pj, (8.77)
with pj as given in Equation 8.75.
Example 8.42 A Warehouse Clearing System. Items arrive at a warehouse one at a time, the successive inter-arrival times being iid with common mean τ. As soon as k items accumulate, the warehouse is emptied instantaneously. It costs $h per item per unit time to hold the items, and $c to clear the warehouse. We compute the long run cost per unit time of operating this system.
Let X(t) be the number of items in the warehouse at time t. Suppose X(0) = 0 and Sn is the time of the nth clearing of the warehouse. It is obvious that {X(t), t ≥ 0} is a regenerative process on state-space {0, 1, · · · , k − 1} with regeneration epochs {Sn, n ≥ 1}. A typical sample path of the {X(t), t ≥ 0} process is shown in Figure 8.10. We see that
P(S1 < ∞) = 1, E(S1) = kτ.
Figure 8.10 A typical sample path of the warehouse process.
We compute Ch, the long run expected holding cost per unit time, first. Over one regeneration cycle the process spends one inter-arrival time (mean τ) in each of the states 0, 1, · · · , k − 1, so Equation 8.75 yields pj = τ/(kτ) = 1/k. The system incurs holding costs at rate jh per unit time when there are j items in the warehouse. We can use Equation 8.77 to get
Ch = Σ_{j=0}^{k−1} jh pj = (h/k) Σ_{j=0}^{k−1} j = h(k − 1)/2.
To compute Cc , the long run clearing cost per unit time, we use Theorem 8.38 for
the renewal reward processes to get
Cc = c/(kτ).
Thus C(k), the long run total cost per unit time, is given by
C(k) = h(k − 1)/2 + c/(kτ).
This is a convex function of k and is minimized at
k* = √(2c/(hτ)).
Since k ∗ must be an integer we check the two integers near the above solution to see
which is optimal.
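A tiny numerical sketch (with hypothetical h, c, and τ):

    import math

    h, c, tau = 2.0, 60.0, 0.5
    k_star = math.sqrt(2 * c / (h * tau))            # continuous minimizer
    C = lambda k: h * (k - 1) / 2 + c / (k * tau)
    lo, hi = math.floor(k_star), math.ceil(k_star)   # neighboring integers
    best = min((max(lo, 1), hi), key=C)              # check both candidates
    print(k_star, best, C(best))                     # about 10.95, 11, 20.91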
We gave a sample path proof of Little’s Law as given in Theorem 7.5 on page 291.
Here we give an alternate proof of Little's Law (L = λW) under the assumption that the queueing system is described by an RGP {X(t), t ≥ 0}, where X(t) is the number of customers in the system at time t, a customer arrives at an empty system at time 0, and S1 is the next epoch at which an arriving customer finds the system empty.
Proof: Let N be the number of customers who enter the system over [0, S1 ). Thus
it includes the arrival at time 0, but not the one at time S1 . By the definition of S1 ,
the number of departures up to time S1 is also N . Then using the renewal reward
theorem (Theorem 8.38) we see that the limits L, λ, and W exist, and are given by
L = E(∫_0^{S1} X(u) du)/E(S1), (8.78)
λ = E(N)/E(S1), (8.79)
W = E(Σ_{n=1}^N Wn)/E(N). (8.80)
Now define
In(t) = 1,  if the nth customer is in the system at time t,
      = 0,  otherwise.
Then for 1 ≤ n ≤ N we have
Wn = ∫_0^{S1} In(t) dt,
since these customers depart by time S1. Also, for 0 ≤ t < S1, we have
X(t) = Σ_{n=1}^N In(t),
since the right hand side simply counts the number of customers in the system at time t. Combining these equations we get
Σ_{n=1}^N Wn = Σ_{n=1}^N ∫_0^{S1} In(t) dt = ∫_0^{S1} Σ_{n=1}^N In(t) dt = ∫_0^{S1} X(t) dt.
Taking expectations and using Equations 8.78, 8.79, and 8.80, we get L E(S1) = W E(N), i.e., L = λW.
We end this chapter with the observation that the difficulty in using Theorem 8.39 as a computational tool lies in the computation of E(U1(x)) and E(S1). This is generally because the sample paths of the RGP over [0, S1) can be quite complicated, with multiple visits to the interval (−∞, x]. In the next chapter we
shall study the Markov RGPs, which alleviate this problem by using “smaller” S1 ,
but in the process giving up the assumption that the system loses dependence on the
history completely at time S1 . This leads to a richer structure and a more powerful
computational tool. So we march on to Markov RGPs!
Computational Exercises

8.3 Consider the RP of Computational Exercise 8.2. Is this renewal process transient
or recurrent?
8.4 Let X(t) be the number of customers at time t in a G/G/1 queue with iid inter-
arrival times and iid service times. Suppose at time 0 a customer enters an empty
system and starts service. Find an embedded renewal sequence in {X(t), t ≥ 0}.
8.5 Let {N (t), t ≥ 0} be an RP with iid inter-renewal times with common pdf
g(t) = λ2 te−λt , t ≥ 0.
Compute P(N (t) = k) for k = 0, 1, 2, · · ·.
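One way to attack this exercise: the pdf above is the Erlang(2, λ) density, so each inter-renewal time is a sum of two exp(λ) stages, and a renewal occurs at every second event of a PP(λ). This suggests P(N(t) = k) = P(M(t) ∈ {2k, 2k + 1}), where M(t) ∼ Poisson(λt). The Python sketch below (with hypothetical λ and t) checks this identity by simulation:

    import numpy as np
    from math import exp, factorial

    lam, t, n_paths = 1.5, 4.0, 100_000    # hypothetical values for illustration
    rng = np.random.default_rng(0)

    def formula(k):
        # P(N(t) = k) = P(Poisson(lam t) equals 2k or 2k+1)
        m = lam * t
        return exp(-m) * (m**(2*k) / factorial(2*k) + m**(2*k+1) / factorial(2*k+1))

    # Monte Carlo: count complete Erlang(2, lam) inter-renewal times in [0, t].
    gaps = rng.gamma(shape=2.0, scale=1.0/lam, size=(n_paths, 30))
    counts = (np.cumsum(gaps, axis=1) <= t).sum(axis=1)
    for k in range(5):
        print(k, formula(k), np.mean(counts == k))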
8.6 Let {N (t), t ≥ 0} be an RP with integer valued inter-renewal times with com-
mon pmf
P(Xn = 0) = 1 − α, P(Xn = 1) = α, n ≥ 1,
where 0 < α < 1. Compute P(N (t) = k) for k = 0, 1, 2, · · ·.
8.11 Consider the machine maintenance problem of Example 8.4. Suppose the ma-
chine lifetimes (in years) are iid U (2, 5), and they are replaced upon failure or upon
reaching age 3. Compute
8.13 Let {Yn, n ≥ 0} be a DTMC on {0, 1} with the following transition probability matrix:

\[ \begin{bmatrix} \alpha & 1-\alpha \\ 1-\beta & \beta \end{bmatrix}. \]

Analyze the asymptotic behavior of N(n) = number of visits to state 0 during
{1, 2, · · · , n}, assuming that Y0 = 0.
8.14 Let {Y(t), t ≥ 0} be a CTMC on {0, 1} with the following generator matrix:

\[ \begin{bmatrix} -\lambda & \lambda \\ \mu & -\mu \end{bmatrix}. \]
Analyze the asymptotic behavior of N (t) = number of visits to state 0 during (0, t],
assuming that Y (0) = 0.
8.15 Compute the renewal function for the RP in Computational Exercise 8.10.
8.16 Compute the renewal function for the RP in Computational Exercise 8.8.
8.17 Compute the renewal function for the RP in Computational Exercise 8.14.
8.18 Derive a renewal type equation for E(S_{N(t)+k}), k ≥ 1, and solve it.
8.19 Compute M ∗ (t) = E(N ∗ (t)) in terms of M (t) = E(N (t)), where N ∗ (t) and
N (t) are as defined in Conceptual Exercise 8.2.
8.21 Compute the renewal type equation for P(A(t) ≤ x), where A(t) is the age at
time t in an RP. Show that the KRT is applicable and compute the limiting distribution
of A(t) as t → ∞, assuming that the RP is aperiodic.
8.22 Compute the renewal type equation for E(A(t)B(t)), where A(t) (B(t)) is the
age (excess-life) at time t in an RP. Show that the KRT is applicable and compute the
limiting value of E(A(t)B(t)) as t → ∞, assuming that the RP is aperiodic. Using
this compute the limiting covariance of A(t) and B(t) as t → ∞.
8.23 Derive an integral equation for P(N (t) is odd) for an RP {N (t), t ≥ 0} by
conditioning on the first renewal time. Is this a renewal type equation? Solve it ex-
plicitly when the RP is PP(λ).
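For the PP(λ) case the answer can be verified directly, since then N(t) ∼ Poisson(λt) and P(N(t) is odd) = e^{−λt} sinh(λt) = (1 − e^{−2λt})/2. A short numerical check (with hypothetical λ and t):

    import numpy as np

    lam, t = 1.3, 2.0                       # hypothetical parameters
    rng = np.random.default_rng(2)
    mc = np.mean(rng.poisson(lam * t, size=1_000_000) % 2 == 1)
    print(mc, (1 - np.exp(-2 * lam * t)) / 2)   # simulation vs (1 - e^{-2 lam t})/2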
8.24 Consider the two-state CTMC of Computational Exercise 8.14. Let W (t) be
the time spent by this process in state 0 during (0, t]. Derive a renewal type equation
for E(W (t)), and solve it using the transform method.
8.26 Compute the renewal type equation for E(A(t)k ), where A(t) is the age at
time t in an RP. Assume that the inter-renewal times are aperiodic with finite (k +
1)st moment. Show that the KRT is applicable and compute the limiting value of
E(A(t)k ) as t → ∞.
8.27 Compute the renewal type equation for E(C(t)k ), where C(t) is the total life
at time t in an RP. Assume that the inter-renewal times are aperiodic with finite
(k + 1)st moment. Show that the KRT is applicable and compute the limiting value
of E(C(t)k ) as t → ∞.
8.28 Compute the renewal type equation for E(A(t)k C(t)m ), where A(t) is the age
and C(t) is the total life at time t in an RP. Assume that the inter-renewal times are
aperiodic with finite (k + m + 1)st moment. Show that the KRT is applicable and
compute the limiting value of E(A(t)k C(t)m ) as t → ∞.
8.29 Compute an integral equation for P(B D (t) > x), where B D (t) is the excess
life at time t in a delayed RP. Compute its limiting value.
8.30 Consider a G/G/1/1 queue with inter-arrival time cdf G and service time cdf
F . Using ARPs, compute the limiting probability that the server is busy.
8.31 Compute the expected busy period started by a single customer in a stable
M/G/1 queue of Section 7.6.1 of Chapter 7, by constructing an appropriate ARP.
(Use Equation 7.35 on page 314.)
8.33 A particle moves on n sites arranged in a circle as follows: it stays at the ith site
for a random amount of time with cdf Fi and mean µi and then moves to the adjacent
site in the clockwise direction. Let H be the cdf of the time it takes to complete the
circle, and assume that it is aperiodic with mean µ. Furthermore, assume that the
successive sojourn times are independent. Construct an appropriate ARP to compute
the limiting probability that the particle is on site i.
COMPUTATIONAL EXERCISES 401
8.34 For the series system in Example 8.27 define Y (t) = 0 if the system is up at
time t and 1 if it is down at time t. Assume that the system is up at time 0, and show
that {Y (t), t ≥ 0} is an ARP, and compute the long run probability that the system
is up. Verify the result with the result in Example 8.30.
8.37 In the machine maintenance model of Example 8.34, suppose the machine
lifetimes are iid U (0, a) random variables. Compute the optimal age replacement
parameter T that minimizes the long run expected total cost per unit time.
8.39 Let X(t) be the number of customers in a queueing system. In which of the
following systems is {X(t), t ≥ 0} an SMP? Why or why not?
1. An M/G/1/1 system,
2. A G/M/1/1 system,
3. A G/G/1/1 system.
8.41 A machine is subject to shocks that arrive according to a PP(λ). Each shock
causes a damage that can be represented by an integer-valued random variable with
pmf {αj, j ≥ 1}. The damages are additive, and when the total damage exceeds a
threshold K, the machine breaks down. The repair time has cdf A(·), and successive
repair times are iid. Shocks have no effect during repair, and the machine is as good
as new once the repair completes. Model this system by an appropriate SMP. Show
its kernel.
8.42 Let {W(t), t ≥ 0} be as defined in Example 8.35 and let U and D be the
generic up and down times. Define

\[ H(t) = E(W(t)) - \frac{E(U)}{E(U+D)}\,t. \]

Show that H satisfies the renewal type equation

\[ H(t) = E(\min(U, t)) - \frac{E(U)\,E(\min(U+D, t))}{E(U+D)} + \int_0^t H(t-u)\,dG(u), \]
where G is the cdf of U + D. Assuming that G is aperiodic, show that

\[ \lim_{t\to\infty} H(t) = \frac{1}{2}\cdot\frac{E(U)E((U+D)^2) - E(U+D)E(U^2)}{(E(U+D))^2}. \]
8.44 Use the renewal equation derived in the proof of Theorem 8.37 to compute the
limit of E(R_{N(t)+1}) as t → ∞.
8.45 What is the long run fraction of customers who are turned away in an
M/G/1/1 queue? In a G/M/1/1 queue?
8.46 The patients in a hospital are classified as belonging to the following units:
(1) coronary care unit, (2) intensive care unit, (3) ambulatory unit, (4) extended care
unit, and (5) home or dead. As soon as a patient goes home or dies, a new patient is
admitted to the coronary unit. The successive units the patient visits form a DTMC
with transition probability matrix given below:

\[ \begin{bmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0.1 & 0 & 0 & 0.9 & 0 \\ 0.1 & 0.1 & 0.1 & 0.5 & 0.2 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}. \]
The patient spends on the average 1.7 days in the coronary care unit, 2.2 days in
the intensive care unit, 8 days in the ambulatory unit and 16 days in the extended
care unit. Let X(t) be the state of the patient in the hospital (assume it has exactly
one patient at all times; extension to more than one patient is easy if the patient
movements are iid) at time t. Model it as an SMP with four states and compute the
long run probability that the patient is in the ambulatory unit.
8.47 A company classifies its employees into four grades, labeled 1, 2, 3, and 4. An
employee’s stay in grade i is determined by two random variables: the promotion
time Ai , and the tolerance time Bi . The employee stays in grade i for min(Ai , Bi )
amount of time. If Ai ≤ Bi , he moves to grade i + 1. If Ai > Bi , he quits, and is
instantaneously replaced by a new employee in grade 1. Since there is no promotion
from grade 4, we set A4 = ∞. Let X(t) be the grade of the employee working at
time t.
1. Model {X(t), t ≥ 0} as an SMP. Explicitly state any assumptions needed to do
this, and display the kernel.
2. Assume Ai ∼ exp(λi ) and Bi ∼ Erl(2, µi ), and that they are independent. Com-
pute the limiting distribution of X(t).
8.48 A despot moves among 16 bunkers that are numbered 1 through 16 and arranged
row by row in a 4 × 4 grid. He stays for a
random amount of time in a bunker and then moves to any of the adjacent ones
with equal probability. Suppose the mean time spent in bunker i during one stay is
τi . Compute the long run probability that the despot is in bunker i. (You may use
symmetry to analyze the embedded DTMC.)
8.49 Consider the renewal reward process of Section 8.10. Suppose the rewards are
discounted with continuous discount factor α > 0. Let D(t) be the total discounted
reward earned up to time t, i.e.,
\[ D(t) = \sum_{n=1}^{N(t)} e^{-\alpha S_n} R_n. \]

Show that

\[ \lim_{t\to\infty} E(D(t)) = \frac{E(R_1 e^{-\alpha S_1})}{1 - E(e^{-\alpha S_1})}. \]
8.50 Suppose demands arise according to a PP(λ) at a warehouse that initially has
S items. A demand is satisfied instantly if there are items in the warehouse, else the
demand is lost. When the warehouse becomes empty, it places an order for S items
from the supplier. The order is fulfilled after a random amount of time (called the
lead-time) with mean L days. Suppose it costs $h to store an item in the warehouse
for one day. The warehouse manager pays $c to buy an item and sells it for $p. The
order processing cost is $d, regardless of the size of the order. Using an appropriate
renewal reward process, compute the long run cost rate for this policy. Find the value
of S that will minimize this cost rate for the following parameters: λ = 2 per day,
h = 1, c = 70, p = 80, d = 50, L = 3.
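A minimal numerical sketch under one plausible reading of the model (an assumption, not the unique solution): a cycle consists of the Erlang(S, λ) depletion time, with mean S/λ, followed by the lead-time, with mean L, during which demands are lost; holding cost accrues only while items are in stock, giving expected holding cost hS(S + 1)/(2λ) per cycle, and each cycle also incurs purchase cost cS, order processing cost d, and revenue pS:

    lam, h, c, p, d, L = 2.0, 1.0, 70.0, 80.0, 50.0, 3.0

    def cost_rate(S):
        # Renewal reward: (expected net cost per cycle) / (expected cycle length).
        cycle = S / lam + L                      # mean cycle length in days
        holding = h * S * (S + 1) / (2 * lam)    # expected holding cost per cycle
        return (holding + (c - p) * S + d) / cycle   # negative values = net profit

    S_opt = min(range(1, 200), key=cost_rate)
    print(S_opt, cost_rate(S_opt))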
8.51 Consider the parking lot of Modeling Exercise 6.24, and let N0 (t) be the num-
ber of customers who arrived there up to time t. Suppose {N0 (t), t ≥ 0} is an RP
with common inter-arrival time cdf A(·). Let Nk (t) be the number of customers
who arrived up to time t and found spaces 1 through k occupied (1 ≤ k ≤ K).
{Nk (t), t ≥ 0} is called the overflow process from space k.
1. Show that {Nk (t), t ≥ 0} is an RP and that the LST of its inter-renewal times is
given by
\[ \phi_k(s) = \frac{\phi_{k-1}(s+\mu)}{1 - \phi_{k-1}(s) + \phi_{k-1}(s+\mu)}, \]
where φ0 (s) is the LST of A(·).
2. Let Xk(t) be 1 if space k is occupied, and zero otherwise. Show that
{Xk(t), t ≥ 0} is the queue-length process in a G/M/1/1 queue with arrival
process {Nk−1(t), t ≥ 0} and iid service times with LST φk(s). Compute the limiting
distribution of Xk(t) as t → ∞, and show that the long run fraction of the time
the kth space is occupied (called its utilization) is given by

\[ \frac{1 - \phi_{k-1}(\mu)}{\mu\,\tau_{k-1}}, \]

where τk is the mean inter-renewal time in {Nk(t), t ≥ 0}. Show that

\[ \tau_k = \tau_{k-1}/\phi_{k-1}(\mu), \]
where τ0 is the mean of A(·).
3. Compute the space utilizations for each space if there are six parking spaces and
customers arrive every 2 minutes in a deterministic fashion and stay in the lot for
an average of 15 minutes.
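A numerical sketch of part 3 under the stated data: deterministic 2-minute inter-arrival times give φ0(s) = e^{−2s}, and the exp(µ) parking times with mean 15 minutes give µ = 1/15 per minute; the recursion from parts 1 and 2 then yields the utilizations:

    from functools import lru_cache
    from math import exp

    mu = 1.0 / 15.0                  # parking-time rate (per minute)

    @lru_cache(maxsize=None)
    def phi(k, s):
        # LST of the inter-renewal times of the overflow process from space k.
        if k == 0:
            return exp(-2.0 * s)     # deterministic 2-minute inter-arrival times
        return phi(k-1, s + mu) / (1.0 - phi(k-1, s) + phi(k-1, s + mu))

    tau = 2.0                        # mean inter-renewal time of N_0
    for k in range(1, 7):
        print(k, (1.0 - phi(k-1, mu)) / (mu * tau))   # utilization of space k
        tau = tau / phi(k-1, mu)     # mean inter-renewal time of N_k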
8.52 Consider a G/M/∞ queue with infinite number of servers, common inter-
arrival time cdf G with mean 1/λ and exp(µ) service times. Suppose that at time
0 the system is in steady state. In particular the arrival process is assumed to have
started at −∞ so that at time 0 it is in equilibrium. Let {Tn , n ≥ 0} be the arrival
times of customers who arrived before time 0, indexed in reverse, so that 0 > T1 >
T2 > · · ·. Now, let Xi be 1 if the customer who arrived at time Ti is still in the system
at time 0, and 0 otherwise. Thus X = \sum_{i=1}^{\infty} X_i is the total number of customers in
the system at time 0 (i.e., in steady state.)
1. Show that cdf of −T1 is Ge , the equilibrium cdf associated with G, and {Ti −
Ti+1 , i ≥ 1} is a sequence of iid random variables with common cdf G, and is
independent of −T1 .
2. Show that

\[ E(X_i) = \frac{\lambda}{\mu}\,\tilde G(\mu)^{i-1}(1 - \tilde G(\mu)), \quad i \ge 1. \]
3. Using the above, show that the expected number of customers in steady state in a
G/M/∞ queue is given by

\[ E(X) = \frac{\lambda}{\mu}. \]

Show this directly by using Little's Law.
4. Show that for i > j,

\[ E(X_i X_j) = \frac{\lambda}{2\mu}\,\tilde G(\mu)^{i-j}\,\tilde G(2\mu)^{j-1}(1 - \tilde G(2\mu)). \]
5. Using the above, show that

\[ E(X(X-1)) = \frac{\lambda}{\mu}\cdot\frac{\tilde G(\mu)}{1 - \tilde G(\mu)}. \]
8.53 Functionally identical machines are available from N different vendors. The
lifetimes of the machines from vendor i are iid exp(λi ) random variables. The ma-
chines from different vendors are independent of each other. We use a “cyclic” re-
placement policy parameterized by a fixed positive number T as follows: suppose
we are currently using a machine from vendor i. If it is less than T time units old
upon failure, it is replaced by a machine from vendor i + 1 if i < N and from vendor 1 if
i = N . If the machine is at least T time units old upon failure, the replacement is
from vendor i. Replacements are instantaneous. Let X(t) = i if the machine in use
at time t is from vendor i.
8.54 April One Computers provides the following warranty on all its hard drives for
its customers that sign a long term contract with it: it will replace a malfunctioning
drive with a new one for free any number of times for up to one year after the initial
purchase. If the hard drive fails after one year, the customer must purchase a new one
with a new one year free replacement warranty. Suppose the lifetimes of these drives
are iid exponential variables with common mean 1/λ. The customer pays c for each
new hard drive that is not replaced for free. It costs the company d (d < c) to make the
drive. All failed hard drives are discarded. Suppose a customer has signed a long
term contract with April One Computers for his hard drive. Let Z(t) be the total cost
(excluding the initial purchase cost) to the customer over the interval (0, t], and Y (t)
be the total profits to the April One Computers from this contract over the period
(0, t]. Assume that replacement is instantaneous.
Conceptual Exercises

8.2 Let {N (t), t ≥ 0} be an RP. Suppose each event is registered with probability
p, independent of everything else. Let N ∗ (t) be the number of registered events up
to t. Is {N ∗ (t), t ≥ 0} an RP?
8.4 Complete the proof of Theorem 8.6 when τ = ∞, by showing that the limsup
and liminf of N (t)/t are both zero.
8.6 Complete the proof of Theorem 8.12 when τ = ∞, by showing that the limsup
and liminf of M (t)/t are both zero.
8.7 Show that a random variable with the following pmf is aperiodic:
P(X = e) = .5, P(X = π) = .5.
8.13 Let {X(t), t ≥ 0} be an ARP as given in Definition 8.7. Let Ni (t) be the
number of entries into state i (i = 1, 2) over (0, t]. Is {Ni (t), t ≥ 0} an RP?
8.14 Let Nj (t) be as in Example 8.18. Show that the limiting values of Nj (t)/t and
E(Nj (t))/t are independent of the initial state of the CTMC.
8.15 Let {N^e(t), t ≥ 0} be an equilibrium renewal process. Show that the distribution
of N^e(s + t) − N^e(s) is independent of s, i.e., {N^e(t), t ≥ 0} has stationary
increments. Are the increments independent?
8.19 Let {X(t), t ≥ 0} be a positive recurrent and aperiodic SMP with state-space
S, kernel G, and limiting distribution {pi, i ∈ S}. Suppose we incur cost at rate ci
whenever the SMP is in state i. Show that the long-run rate at which we incur cost is
given by

\[ \sum_{i\in S} p_i c_i. \]
8.20 Consider the system in Conceptual Exercise 8.19. Suppose the costs are dis-
counted continuously with a discount factor α > 0. Let φ(i) be the total expected
discounted cost incurred over the infinite horizon given that the SMP enters state i at
time 0. Let

\[ \gamma_i = \frac{c_i}{\alpha}\,(1 - \tilde G_i(\alpha)), \]

where G̃i(·) is the LST of the sojourn time in state i. Show that the φ(i)'s satisfy the
following equation:

\[ \phi(i) = \gamma_i + \sum_{j\in S} \tilde G_{ij}(\alpha)\,\phi(j). \]
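In matrix form the equation reads φ = γ + G̃(α)φ, so for a finite state-space φ = (I − G̃(α))^{−1}γ. A minimal Python sketch for a hypothetical two-state SMP that alternates between states 0 and 1 with exp(θi) sojourn times, so that G̃01(α) = θ0/(θ0 + α) and G̃10(α) = θ1/(θ1 + α):

    import numpy as np

    alpha = 0.1                          # discount factor
    theta = np.array([2.0, 0.5])         # hypothetical sojourn rates in states 0, 1
    c = np.array([3.0, 1.0])             # cost rates

    # Kernel LST of the alternating two-state SMP.
    G = np.array([[0.0, theta[0] / (theta[0] + alpha)],
                  [theta[1] / (theta[1] + alpha), 0.0]])

    gamma = (c / alpha) * (1.0 - G.sum(axis=1))   # gamma_i = (c_i/alpha)(1 - G~_i(alpha))
    phi = np.linalg.solve(np.eye(2) - G, gamma)   # solves phi = gamma + G phi
    print(phi)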
CHAPTER 9

Markov Regenerative Processes
Statistics can be used to reach any conclusion you want: The odds of getting in an
accident are directly proportional to the time spent on the road. The time spent on
the road is inversely proportional to the speed at which you drive. One-third of traffic
accidents are caused by drunk drivers, while two-thirds are caused by drivers who
are not drunk. Clearly, the odds of getting into a traffic accident are minimized by
driving drunk at a high speed!
Markov renewal theory is a natural generalization of renewal theory, and as the name
suggests, it combines the concepts from Markov chains and renewal processes. We
begin with a definition.
Definition 9.2 Markov Regenerative Process. A stochastic process {Z(t), t ≥ 0} is called a Markov regenerative process (MRGP) if there exists a Markov
renewal sequence {(Xn , Sn ), n ≥ 0} such that {Z(t + Sn ), t ≥ 0} given {Z(u) :
0 ≤ u < Sn , Xn = i} is stochastically identical to {Z(t), t ≥ 0} given X0 = i and
is conditionally independent of {Z(u) : 0 ≤ u < Sn , Xn = i} given Xn = i.
Several comments on the implications of the above definition are in order. In most
applications we find that Xn = Z(Sn +) or Xn = Z(Sn −). One can see that if
{(Xn , Sn ), n ≥ 0} is Markov renewal sequence then {Xn , n ≥ 0} is a DTMC. Thus
in most applications {Z(Sn +), n ≥ 0} or {Z(Sn −), n ≥ 0} is a DTMC. Hence
sometimes an MRGP is also called a process with an embedded Markov chain. We
also see that
P(Z(t + Sn ) ≤ x|Z(u) : 0 ≤ u ≤ Sn , Xn = i) = P(Z(t) ≤ x|X0 = i).
Also, the future of the MRGP from t = Sn depends on its past up to time Sn only
through Xn . This distinguishes an MRGP from a regenerative process, whose future
from t = Sn is completely independent of its past up to time Sn . A typical sample
path of an MRGP is shown in Figure 9.1, where we have assumed that the state-space
of the MRGP is discrete, and Xn = Z(Sn −).
[Figure 9.1: a typical sample path of an MRGP {Z(t), t ≥ 0}, with the embedded values Xn = Z(Sn−) marked at the epochs S0, S1, S2, S3, S4.]
We get the following as a corollary to the above theorem. We leave the proof to
the reader.
Let S0 = 0 and Sn be the nth jump epoch in the CTMC. Define Xn = Z(Sn +).
Then, from Definition 6.1 on page 190, it follows that {(Xn , Sn ), n ≥ 0} is an MRS
with kernel

\[ G_{ij}(y) = \frac{q_{ij}}{q_i}\,(1 - e^{-q_i y}), \quad i, j \in I,\; y \ge 0. \]
If qi = 0, we define Gij (y) = 0 for all j ∈ I. The Markov property of the CTMC
implies that {Z(t + Sn ), t ≥ 0} depends on {Z(t), 0 ≤ t < Sn , Xn } only through
Xn . Thus {Z(t), t ≥ 0} is an MRGP.
We assume that the SMP has entered the initial state at time 0. Let S0 = 0 and Sn
be the nth jump epoch in the SMP. Define Xn = Z(Sn +). Then, from Definition 8.9
on page 378, it follows that {(Xn , Sn ), n ≥ 0} is an MRS with kernel G(y). The
Markov property of the SMP at jump epochs implies that {Z(t+Sn ), t ≥ 0} depends
on {Z(t), 0 ≤ t < Sn , Xn } only through Xn . Thus {Z(t), t ≥ 0} is an MRGP.
Example 9.3 M/G/1 Queue. Let Z(t) be the number of customers in an M/G/1
queue with PP(λ) arrivals and common service time (s.t.) cdf B(·). Show that
{Z(t), t ≥ 0} is an MRGP.
Assume that either the system is empty initially, or a service has just started at
time 0. Let Sn be the time of the nth departure, and define Yn = Sn − Sn−1 , and
Xn = Z(Sn +). Note that if Xn > 0, Yn is the nth service time. If Xn = 0, then
Yn is the sum of an exp(λ) random variable and the nth service time. Then, due to
the memoryless property of the PP and the iid service times, we get, for i > 0 and
j ≥ i − 1,
\begin{align*}
G_{ij}(y) &= P(X_{n+1} = j, Y_{n+1} \le y \mid X_0 = i_0, X_1 = i_1, Y_1 \le y_1, \cdots, X_n = i, Y_n \le y_n) \\
&= P(X_{n+1} = j, Y_{n+1} \le y \mid X_n = i) \\
&= P(j - i + 1 \text{ arrivals during a s.t. and s.t.} \le y) \\
&= \int_0^y e^{-\lambda t}\,\frac{(\lambda t)^{j-i+1}}{(j-i+1)!}\,dB(t).
\end{align*}
For i = 0 and j ≥ 0 we get
\begin{align*}
G_{0j}(y) &= P(X_{n+1} = j, Y_{n+1} \le y \mid X_0 = i_0, X_1 = i_1, Y_1 \le y_1, \cdots, X_n = 0, Y_n \le y_n) \\
&= P(X_{n+1} = j, Y_{n+1} \le y \mid X_n = 0) \\
&= P(j \text{ arrivals during a s.t. and s.t. + idle time} \le y) \\
&= \int_0^y (1 - e^{-\lambda(y-t)})\,e^{-\lambda t}\,\frac{(\lambda t)^{j}}{j!}\,dB(t).
\end{align*}
Thus {(Xn , Sn ), n ≥ 0} is an MRS with kernel G(y). The queue length process
from time Sn onwards depends on the history up to time Sn only via Xn . Hence
{Z(t), t ≥ 0} is an MRGP. Note that we had seen in Section 7.6.1 that {Xn , n ≥ 0}
is an embedded DTMC with transition probability matrix P = G(∞). One can show
that this is consistent with Equation 7.30 on page 312.
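As a quick numerical illustration of the kernel just derived (a sketch: the rates are hypothetical, and we take exp(µ) service times so that dB(t) = µe^{−µt}dt and the integral can be evaluated by quadrature, assuming SciPy is available):

    from math import exp, factorial
    from scipy.integrate import quad

    lam, mu = 1.0, 2.0      # hypothetical arrival and service rates

    def G(i, j, y):
        # G_ij(y) for i > 0, j >= i-1: integral over [0, y] of
        # e^{-lam t} (lam t)^{j-i+1}/(j-i+1)! dB(t), with b(t) = mu e^{-mu t}.
        n = j - i + 1
        f = lambda t: exp(-lam*t) * (lam*t)**n / factorial(n) * mu * exp(-mu*t)
        return quad(f, 0.0, y)[0]

    print(G(2, 2, 5.0))     # P(X_{n+1} = 2, Y_{n+1} <= 5 | X_n = 2)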
Example 9.4 G/M/1 Queue. Let Z(t) be the number of customers in a G/M/1
queue with common inter-arrival time (i.a.t.) cdf A(·), and iid exp(µ) service times.
Show that {Z(t), t ≥ 0} is an MRGP.
Assume that an arrival has occurred at time 0. Let Sn be the time of the nth arrival,
and define Xn = Z(Sn−), and Yn = Sn − Sn−1, the nth inter-arrival time. Due to
the memoryless property of the iid exp(µ) service times, we get, for 0 < j ≤ i + 1,

\begin{align*}
G_{ij}(y) &= P(i + 1 - j \text{ departures during an i.a.t. and i.a.t.} \le y) \\
&= \int_0^y e^{-\mu t}\,\frac{(\mu t)^{i+1-j}}{(i+1-j)!}\,dA(t),
\end{align*}
and

\[ G_{i0}(y) = A(y) - \sum_{j=1}^{i+1} G_{ij}(y). \]
Thus {(Xn , Sn ), n ≥ 0} is an MRS. The queue length process from time Sn onwards
depends on the history up to time Sn only via Xn . Hence {Z(t), t ≥ 0} is an MRGP.
Note that we had seen in Section 7.6.2 that {Xn , n ≥ 0} is an embedded DTMC with
transition probability matrix P = G(∞). One can show that this is consistent with
Equation 7.38 on page 317.
Assume that the server is idle initially. Let Sn be the time of the nth service com-
pletion, and define Xn = X(Sn −), and Yn = Sn − Sn−1 , the nth idle time plus the
following service time. Since the server is idle after a service completion, Xn is also
the number of customers in the orbit at the nth service completion. Thus, if Xn = i,
the (n+1)st idle time is exp(λi ), where λi = λ+iθ. Due to the memoryless property
of the PP and the iid service times, we get, for 0 ≤ i ≤ j − 1,

\begin{align*}
G_{ij}(y) &= \frac{\lambda}{\lambda_i}\,P(j - i \text{ arrivals during a s.t. and s.t. + idle time} \le y) \\
&\quad + \frac{i\theta}{\lambda_i}\,P(j - i - 1 \text{ arrivals during a s.t. and s.t. + idle time} \le y) \\
&= \frac{\lambda}{\lambda_i}\int_0^y (1 - e^{-\lambda_i(y-t)})\,e^{-\lambda t}\,\frac{(\lambda t)^{j-i}}{(j-i)!}\,dB(t) \\
&\quad + \frac{i\theta}{\lambda_i}\int_0^y (1 - e^{-\lambda_i(y-t)})\,e^{-\lambda t}\,\frac{(\lambda t)^{j-i-1}}{(j-i-1)!}\,dB(t).
\end{align*}
Thus {(Xn , Sn ), n ≥ 0} is an MRS with kernel G(y). The {(R(t), I(t)), t ≥ 0} and
{X(t), t ≥ 0} processes from time Sn onwards depend on the history up to time Sn
only via Xn . Hence both are MRGPs with the same embedded MRS. Note that we
had seen in Theorem 7.18 on page 321 that {Xn , n ≥ 0} is an embedded DTMC
in the {X(t), t ≥ 0} process. It is an embedded DTMC in the {(R(t), I(t)), t ≥ 0}
process as well.
Proof: Let {Y_n^*, n ≥ 1} be a sequence of iid random variables with common pmf

\[ P(Y_n^* = 0) = 1 - \epsilon, \quad P(Y_n^* = \delta) = \epsilon. \tag{9.6} \]

Let {S_n^*, n ≥ 1} be the renewal sequence and {N^*(t), t ≥ 0} be the RP generated by
{Y_n^* = S_n^* − S_{n−1}^*, n ≥ 1}. Equations 9.5 and 9.6 imply that
P(Yn ≤ t) ≤ P(Yn∗ ≤ t), t ≥ 0, n ≥ 1.
Hence
P(Sn ≤ t) ≤ P(Sn∗ ≤ t), t ≥ 0, n ≥ 1,
which implies
P(N (t) ≥ k) ≤ P(N ∗ (t) ≥ k), k ≥ 1.
However N ∗ (t) < ∞ with probability 1, from Theorem 8.3 on page 343. Hence
N (t) < ∞ with probability 1.
The condition in Equation 9.5 is not necessary, and can be relaxed. We refer the
reader to Cinlar (1975) for weaker conditions. For a regular process {N (t), t ≥ 0},
define
\[ X(t) = X_{N(t)}. \tag{9.7} \]
It can be seen that {X(t), t ≥ 0} is an SMP with kernel G. We say that {X(t), t ≥ 0}
is the SMP generated by the MRS {(Xn , Sn ), n ≥ 0}. Define Nj (t) to be the number
of entries into state j by the SMP over (0, t], and
Mij (t) = E(Nj (t)|X0 = i), i, j ∈ I, t ≥ 0. (9.8)
Thus Mij (t) is the expected number of transitions into state j over (0, t] starting
from X(0) = i.
Definition 9.3 Markov Renewal Process. The vector valued process {N(t) =
[Nj (t)]j∈I , t ≥ 0} is called the Markov renewal process generated by the Markov
renewal sequence {(Xn , Sn ), n ≥ 0}.
Next, following the development in the renewal theory, we define the Markov re-
newal function.
Definition 9.4 Markov Renewal Function. The matrix M (t) = [Mij (t)]i,j∈I
is called the Markov renewal function generated by the Markov renewal sequence
{(Xn , Sn ), n ≥ 0}.
Proof: Consider the process {N^*(t), t ≥ 0} introduced in the proof of Theorem 9.3.
Since N(t) = \sum_{j\in I} N_j(t), we have

\begin{align*}
\sum_{j\in I} M_{ij}(t) &= \sum_{j\in I} E(N_j(t) \mid X_0 = i) \\
&= E(N(t) \mid X_0 = i) \\
&\le E(N^*(t)) \le \frac{1}{\epsilon}\left(\frac{t}{\delta} + 1\right) < \infty.
\end{align*}

This yields the theorem.
We define the concept of matrix convolution next. We shall find it very useful in
the study of Markov renewal processes. Let A(t) = [Aij (t)] and B(t) = [Bij (t)]
be two matrices of functions for which the product A(t)B(t) is defined. A matrix
C(t) = [Cij (t)] is called the convolution of A and B, written C(t) = A ∗ B(t), if
\[ C_{ij}(t) = \sum_k \int_0^t A_{ik}(t-u)\,dB_{kj}(u) = \sum_k \int_0^t dA_{ik}(u)\,B_{kj}(t-u). \]
The proof of the next theorem introduces the concept of the Markov renewal argument,
and we urge the reader to become adept at using it.
Theorem 9.5 Markov Renewal Equation. Let M (t) be the Markov renewal func-
tion generated by the Markov renewal sequence {(Xn , Sn ), n ≥ 0} with kernel G.
M (t) satisfies the following Markov renewal equation:
M (t) = G(t) + G ∗ M (t). (9.9)
A k-fold convolution of matrix A with itself is denoted by A∗k . Using this notation
we get the following analog of Theorem 8.8 on page 349.
Proof: Iterating Equation 9.9, and writing M for M(t) etc., we get

\begin{align*}
M &= G + G * M \\
&= G + G * (G + G * M) = G + G^{*2} + G^{*2} * M \\
&\;\;\vdots \\
&= \sum_{k=1}^{n} G^{*k} + G^{*n} * M.
\end{align*}
Note that one consequence of the regularity condition of Equation 9.5 is that

\[ \lim_{n\to\infty} G^{*n} e = 0, \tag{9.10} \]
Theorem 9.7 Suppose the condition in Equation 9.5 holds. The LST of the Markov
renewal function is given by

\[ \tilde M(s) = [I - \tilde G(s)]^{-1}\tilde G(s). \tag{9.12} \]
Proof: Taking the LST on both sides of Equation 9.9, we get
M̃ (s) = G̃(s) + G̃(s)M̃ (s).
Thus
(I − G̃(s))M̃ (s) = G̃(s).
The regularity condition is one sufficient condition under which the inverse of (I −
G̃(s)) exists. Hence the theorem follows.
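A quick numerical sketch of Equation 9.12, for a hypothetical two-state kernel that alternates between states 0 and 1 with exp(θi) sojourn times:

    import numpy as np

    def M_tilde(s, theta0=2.0, theta1=0.5):
        # Kernel LST: G~_01(s) = theta0/(theta0+s), G~_10(s) = theta1/(theta1+s).
        G = np.array([[0.0, theta0 / (theta0 + s)],
                      [theta1 / (theta1 + s), 0.0]])
        return np.linalg.solve(np.eye(2) - G, G)   # (I - G~(s))^{-1} G~(s)

    print(M_tilde(0.1))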
Note that if the SMP is recurrent and X0 = j then {Nj (t), t ≥ 0} is a standard
renewal process, and Mjj (t) is its renewal function; while {Ni (t), t ≥ 0} (i 6= j)
is a delayed renewal process, and Mij (t) is the delayed renewal function. Thus the
limiting behavior of the Markov renewal processes and functions can be derived from
the corresponding theorems from Chapter 8.
Equation 9.9 is called the Markov renewal equation and plays the same role as the
renewal equation in renewal theory. The Markov renewal argument, namely the tech-
nique of conditioning on X1 and S1 , generally yields an integral equation of the
following form
H(t) = D(t) + G ∗ H(t), (9.13)
where G is the kernel of the Markov renewal sequence. Such an equation is called a
Markov renewal type equation. We study such an equation in this section. The next
theorem gives the solution to the Markov renewal type equation.
and G satisfies the condition in Theorem 9.3. Then there exists a unique solution to
the Markov renewal type Equation 9.13 such that
\[ \sup_{i\in I} |H_{ij}(t)| \le h_j(t) < \infty, \quad j \in I,\; t \ge 0, \tag{9.15} \]
and is given by
H(t) = D(t) + M ∗ D(t), (9.16)
where M (t) is the Markov renewal function associated with G.
Proof: First we verify that the solution in Equation 9.16 satisfies Equation 9.13.
Dropping (t) for ease of notation we get
H = D + M ∗ D = D + (G + G ∗ M ) ∗ D = D + G ∗ (D + M ∗ D) = D + G ∗ H.
Here the first equality follows from Equation 9.16, the second from Equation 9.9,
the third from associativity of matrix convolutions, and the last from Equation 9.13.
Thus any H satisfying Equation 9.16 satisfies Equation 9.13.
Next we establish Equation 9.15. Suppose Equation 9.14 holds and let

\[ c_j(t) = \sup_{0\le u\le t} d_j(u). \]

Then

\begin{align*}
|H_{ij}(t)| &\le d_j(t) + c_j(t)\sum_{k\in I}\int_0^t dM_{ik}(u) \\
&\le c_j(t)\Big(1 + \sum_{k\in I} M_{ik}(t)\Big) \\
&\le c_j(t)\left(1 + \frac{1}{\epsilon}\left(\frac{t}{\delta} + 1\right)\right),
\end{align*}
where the last inequality follows from Theorem 9.4. Thus Equation 9.15 follows if
we define the right hand side above as hj (t).
To show uniqueness, suppose H1 and H2 are two solutions to Equation 9.13 sat-
isfying Equation 9.15. Then H = H1 − H2 satisfies Equation 9.15 and H = G ∗ H.
Iterating this n times we get
H = G∗n ∗ H.
Using

\[ \hat h_j(t) = \sup_{0\le u\le t} h_j(u) \]

we get

\begin{align*}
|H_{ij}(t)| &\le \Big|\int_0^t \sum_{k\in I} dG^{*n}_{ik}(u)\,H_{kj}(t-u)\Big| \\
&\le \hat h_j(t)\sum_{k\in I} G^{*n}_{ik}(t).
\end{align*}
Letting n → ∞ and using Equation 9.10 we see that the right hand side goes to 0.
Hence H = 0, or, H1 = H2 . This shows uniqueness.
Then

\[ \lim_{t\to\infty} H_{ij}(t) = \sum_{k\in I} \frac{p_k}{\tau_k}\int_0^\infty D_{kj}(u)\,du. \tag{9.18} \]
Proof: Since the condition of Theorem 9.3 is satisfied, and the condition [3a] holds,
we see that the unique solution to Equation 9.17 is given by Equation 9.16. In scalar
form we get
\begin{align*}
H_{ij}(t) &= D_{ij}(t) + \sum_{k\in I}\int_0^t dM_{ik}(u)\,D_{kj}(t-u) \\
&= D_{ij}(t) + \int_0^t dM_{ii}(u)\,D_{ij}(t-u) \\
&\quad + \sum_{k\ne i}\int_0^t dM_{ik}(u)\,D_{kj}(t-u). \tag{9.19}
\end{align*}
Conditions [3a], [3b], and [3c], and the fact that Mii(t) is a standard renewal function
of a renewal process with mean inter-renewal time τii, imply that we can use the
key renewal theorem (Theorem 8.17 on page 360) to show that

\[ \lim_{t\to\infty}\Big[D_{ij}(t) + \int_0^t dM_{ii}(u)\,D_{ij}(t-u)\Big] = \frac{1}{\tau_{ii}}\int_0^\infty D_{ij}(u)\,du. \tag{9.20} \]
Since Mik (t) is a delayed renewal function with common mean inter-renewal time
τkk , we can use an argument similar to that in the proof of Theorem 8.26 on page 369,
to get

\[ \lim_{t\to\infty}\int_0^t dM_{ik}(u)\,D_{kj}(t-u) = \frac{1}{\tau_{kk}}\int_0^\infty D_{kj}(u)\,du. \tag{9.21} \]
Letting t → ∞ in Equation 9.19, and using Equations 9.20 and 9.21, we get
\begin{align*}
\lim_{t\to\infty} H_{ij}(t) &= \lim_{t\to\infty} D_{ij}(t) + \sum_{k\in I}\lim_{t\to\infty}\int_0^t dM_{ik}(u)\,D_{kj}(t-u) \\
&= \sum_{k\in I}\frac{1}{\tau_{kk}}\int_0^\infty D_{kj}(u)\,du \\
&= \sum_{k\in I}\frac{p_k}{\tau_k}\int_0^\infty D_{kj}(u)\,du,
\end{align*}
where the last equality follows from Equation 8.66 on page 382.
This theorem appears in Theorem 4.17 of Chapter 10 of Cinlar (1975), and later
in Kohlas (1982) and Heyman and Sobel (1982). We do not consider the periodic
case here, since it is more involved in terms of notation. We refer the reader to Cinlar
(1975) for further details.
The reader can safely skip over the rest of this section without any loss of continuity.
In this section we remove the implicit assumption

\[ \lim_{t\to\infty} D_{ij}(t) = 0, \quad i, j \in I, \]
made in Theorem 9.9. In some applications this may not be satisfied. We need to
develop a few preliminary results before we give the main result in Theorem 9.13
below.
Theorem 9.10 The second moments {s^2_{ij}, i ∈ I}, for a given j ∈ I, satisfy

\[ s^2_{ij} = s^2_i + 2\sum_{k\ne j} G_{ik}(\infty)\,\mu_{ik}\,\tau_{kj} + \sum_{k\ne j} G_{ik}(\infty)\,s^2_{kj}. \]
Next two theorems describe the asymptotic behavior of the Markov renewal func-
tion.
Theorem 9.11 Let {X(t), t ≥ 0} be an irreducible and recurrent SMP. Then
\[ \lim_{t\to\infty} \frac{M_{ij}(t)}{t} = \frac{1}{\tau_{jj}}. \]
With these preliminary results we are ready to state the extension of Theorem 9.9
below.
Then

\[ \lim_{t\to\infty} H_{ij}(t) = d_{ij} + \sum_{k\in I}\alpha_{ik}\,d_{kj} + \sum_{k\in I}\frac{p_k}{\tau_k}\int_0^\infty D_{kj}(u)\,du, \tag{9.23} \]
where αik are as given in Theorem 9.12.
Before we prove this theorem we make several observations. The limit of Hij (t) has
three terms: the first two depend on i, while the third one does not. If dij = 0 for all
i, j ∈ I, the first two terms disappear, and we are left with a limit that is independent
of i, and we get Theorem 9.9. Clearly we do not need to assume this. All we need to
assume is that

\[ \sum_{k\in I}\frac{p_k}{\tau_k}\,d_{kj} = 0, \quad j \in I, \]
which is implicit in the condition [3e] of the above theorem. As before, we do not
consider the periodic case here, since it is more involved in terms of notation. We
refer the reader to Cinlar (1975) for further details.
Proof: Since the condition of Theorem 9.3 is satisfied, and the condition [3a]
holds, we see that the unique solution to Equation 9.17 is given by Equation 9.16. In
scalar form we get

\begin{align*}
H_{ij}(t) &= D_{ij}(t) + \sum_{k\in I}\int_0^t dM_{ik}(u)\,D_{kj}(t-u) \\
&= D_{ij}(t) - d_{ij} + \int_0^t dM_{ii}(u)\,(D_{ij}(t-u) - d_{ij}) \\
&\quad + \sum_{k\ne i}\int_0^t dM_{ik}(u)\,(D_{kj}(t-u) - d_{kj}) \tag{9.24} \\
&\quad + d_{ij} + \sum_{k\in I} M_{ik}(t)\,d_{kj}. \tag{9.25}
\end{align*}

Hence

\[ \lim_{t\to\infty} \sum_{k} M_{ik}(t)\,d_{kj} = \sum_{k}\alpha_{ik}\,d_{kj}. \]
This yields the theorem.
Example 9.6 Two-State SMP. Consider the two-state SMP of Example 8.25 on
page 379 with kernel given by
\[ G(y) = \begin{bmatrix} 0 & H_0(y) \\ H_1(y) & 0 \end{bmatrix}. \]
Here Hi is the cdf of the sojourn time in state i = 0, 1. Assume that Hi is a non-
defective cdf with mean τi, second moment s^2_i, and variance σ^2_i. We see that

\[ \tau_{00} = \tau_{11} = \tau_0 + \tau_1, \quad \tau_{01} = \tau_0, \quad \tau_{10} = \tau_1, \]
\[ s^2_{00} = s^2_{11} = s^2_0 + 2\tau_0\tau_1 + s^2_1, \quad s^2_{10} = s^2_1, \quad s^2_{01} = s^2_0. \]
Using the above in Theorem 9.12, and simplifying, we get

\begin{align*}
\alpha_{00} = \alpha_{11} &= \frac{s^2_0 + 2\tau_0\tau_1 + s^2_1 - 2(\tau_0 + \tau_1)^2}{2(\tau_0 + \tau_1)^2} \\
&= \frac{\sigma^2_0 + \sigma^2_1 - (\tau_0 + \tau_1)^2}{2(\tau_0 + \tau_1)^2},
\end{align*}

\begin{align*}
\alpha_{ij} &= \frac{s^2_{jj} - 2\tau_{jj}\tau_{ij}}{2\tau_{jj}^2} \\
&= \frac{\sigma^2_0 + \sigma^2_1 - \tau_i^2 + \tau_j^2}{2(\tau_0 + \tau_1)^2}, \quad i \ne j.
\end{align*}
Let

\[ H_{ij}(t) = M_{ij}(t) - \frac{t}{\tau_{jj}}. \]

We saw in Theorem 9.12 that

\[ \lim_{t\to\infty} H_{ij}(t) = \alpha_{ij}, \quad i, j = 0, 1. \]
We shall derive the same limits using the extended key Markov-renewal theorem.
First we ask the reader to derive the following Markov renewal type equation:
\[ H(t) = D(t) + G * H(t) \tag{9.28} \]

with D(t) = [D_{ij}(t)] given by

\[ D_{ij}(t) = (1 - \delta_{ij})H_i(t) - \frac{1}{\tau_0 + \tau_1}\int_0^t (1 - H_i(u))\,du, \tag{9.29} \]

where δij = 1 if i = j and 0 otherwise. Hence we have

\[ d_{ij} = (1 - \delta_{ij}) - \frac{\tau_i}{\tau_0 + \tau_1}. \]
We also have (see Example 8.28 on page 382)
\[ p_i = \frac{\tau_i}{\tau_0 + \tau_1}, \quad i = 0, 1. \]
The reader should verify that
\[ \frac{p_0}{\tau_0}\,d_{0j} + \frac{p_1}{\tau_1}\,d_{1j} = 0, \quad j = 0, 1. \]
Thus the conditions in Theorem 9.13 are satisfied. We leave it to the reader to alge-
braically verify that Equation 9.23 reduces to

\[ \lim_{t\to\infty} H_{ij}(t) = \alpha_{ij}. \tag{9.30} \]
This shows that Theorem 9.13 produces consistent results. Note that we cannot get
this result from Theorem 9.9.
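For a concrete feel for these limits, here is a plug-in sketch with hypothetical exponential sojourn times, for which σi² = τi²:

    tau0, tau1 = 2.0, 0.5             # hypothetical mean sojourn times
    var0, var1 = tau0**2, tau1**2     # exponential sojourns: sigma_i^2 = tau_i^2
    T = tau0 + tau1

    a_diag = (var0 + var1 - T**2) / (2 * T**2)             # alpha_00 = alpha_11
    a01 = (var0 + var1 - tau0**2 + tau1**2) / (2 * T**2)   # alpha_ij with i=0, j=1
    a10 = (var0 + var1 - tau1**2 + tau0**2) / (2 * T**2)   # alpha_ij with i=1, j=0
    print(a_diag, a01, a10)           # limits of M_ij(t) - t/tau_jj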
Let {(Xn , Sn ), n ≥ 0} be a Markov renewal sequence with kernel G that satisfies the
regularity condition of Theorem 9.3. Let {X(t), t ≥ 0} be the semi-Markov process
generated by this Markov renewal sequence. We have studied the limiting behavior
of this SMP in Section 8.9 by means of an alternating renewal process. In this sec-
tion we derive the results of Theorem 8.34 by using the key Markov-renewal theorem
(Theorem 9.9).
Define
pij (t) = P(X(t) = j|X(0) = i), i, j ∈ I, t ≥ 0.
Note that by X(0) = i we implicitly mean that the SMP has just entered state i at time
0. We use the Markov-renewal argument to derive a Markov-renewal type equation
for
P (t) = [pij (t)]
in the theorem below.
Theorem 9.14 Markov-Renewal Type Equation for P (t). The matrix P (t) satis-
fies the following Markov-renewal type equation:
P (t) = D(t) + G ∗ P (t), (9.31)
where D(t) is a diagonal matrix with

\[ D_{ii}(t) = 1 - G_i(t) = 1 - \sum_{j\in I} G_{ij}(t), \quad i \in I. \tag{9.32} \]
Next we use the key Markov-renewal theorem 9.9 to study the limiting behavior
of P(t). This may seem circular, since the key Markov-renewal theorem uses
{pj, j ∈ I}, the limiting distribution of the SMP, in the limit! We assume that this
limiting distribution is obtained by using Theorem 8.34 on page 382. Then we show
that the limiting distribution produced by applying the key Markov-renewal theorem
produces the same limiting distribution.
Proof: Theorem 9.14 shows that P (t) satisfies the Markov-renewal type equa-
tion 9.31 with the D(t) matrix as given in Equation 9.32. It is easy to verify that
all the conditions of Theorem 9.9 are satisfied. The {pj , j ∈ I} in the condition 2 is
given by Equation 8.66 on page 382. Note that

\[ \int_0^\infty D_{ii}(t)\,dt = \int_0^\infty (1 - G_i(t))\,dt = \tau_i. \]

Hence Equation 9.18 reduces to

\begin{align*}
\lim_{t\to\infty} H_{ij}(t) &= \sum_{k\in I}\frac{p_k}{\tau_k}\int_0^\infty D_{kj}(t)\,dt \\
&= \frac{p_j}{\tau_j}\int_0^\infty D_{jj}(u)\,du \\
&= \frac{p_j}{\tau_j}\,\tau_j = p_j.
\end{align*}
This proves the theorem.
We had remarked in Section 8.9 that the study of the limiting distribution of the
426 MARKOV REGENERATIVE PROCESSES
SMPs cannot stop at the limiting distribution of X(t) as t → ∞, since knowing the
value of X(t) at time t is, in general, not enough to determine the future of an SMP.
We also need to know the distribution of the remaining sojourn time at that time. We
proceed to do that here. Towards this end, define
\[ B(t) = S_{N(t)+1} - t, \qquad Z(t) = X_{N(t)+1}. \tag{9.34} \]
Thus B(t) is the time until the next transition after t (or the remaining sojourn time
in the current state at time t), and Z(t) is the state after the next transition after t. To
complete the study of the limiting distribution, we study
\[ H_{ij}(t) = P(X(t) = j, B(t) > x, Z(t) = k \mid X(0) = i), \tag{9.35} \]
where x ≥ 0 and k ∈ I is fixed. The next theorem gives the limit of the above
quantity as t → ∞.
Hence we get

\begin{align*}
H_{ij}(t) &= \int_0^\infty \sum_{r\in I} P(X(t) = j, B(t) > x, Z(t) = k \mid X_0 = i, X_1 = r, S_1 = u)\,dG_{ir}(u) \\
&= \int_0^t \sum_{r\in I} H_{rj}(t-u)\,dG_{ir}(u) + \int_{t+x}^\infty \sum_{r\in I} \delta_{ij}\,\delta_{rk}\,dG_{ir}(u) \\
&= \delta_{ij}\int_{t+x}^\infty dG_{ik}(u) + \int_0^t \sum_{r\in I} H_{rj}(t-u)\,dG_{ir}(u).
\end{align*}
Hence we get

\[ \lim_{t\to\infty} P(B(t) > x \mid X(0) = i, X(t) = j) = \frac{1}{\tau_j}\int_x^\infty (1 - G_j(u))\,du. \]
Since the right hand side is independent of i we get the desired result in the
theorem.
The above theorem has an interesting interpretation: in the limit the state of the
SMP is j with probability pj, and given that the current state is j, the remaining sojourn
time in that state has the same distribution as the equilibrium distribution associated
with Gj. In hindsight, this is to be expected. We close this section with an example.
Example 9.7 Remaining Service Time in an M/G/1 Queue. Let Z(t) be the
number of customers at time t in an M/G/1 queue with PP(λ) arrivals and iid service
times with common cdf F (·) and common mean τ . Define U (t) to be the remaining
service time of the customer in service at time t, if Z(t) > 0. If Z(t) = 0, define
U(t) = 0. Show that

\[ \lim_{t\to\infty} P(U(t) > x \mid Z(t) > 0) = \frac{1}{\tau}\int_x^\infty (1 - F(u))\,du. \]
We began this chapter with the definition of an MRGP in Definition 9.2. Markov
regenerative processes (MRGPs) are to the Markov renewal sequence what regener-
ative processes are to renewal sequences. Let {Z(t), t ≥ 0} be an MRGP with an
embedded MRS {(Xn , Sn ), n ≥ 0}. We begin by assuming that the Z(t) and Xn
both take values in a discrete set I. We have seen several examples of MRGPs in
Section 9.1.
As with the SMPs we study the transient behavior of the MRGPs with countable
state-space I by concentrating on
Hij (t) = P(Z(t) = j|Z(0) = i), i, j ∈ I.
The next theorem gives the Markov renewal type equation satisfied by H(t) =
[Hij (t)].
Note that D(t) contains the information about the behavior of the MRGP over
the first cycle (0, S1 ). Thus the above theorem relates the behavior of the process at
time t to its behavior over the first cycle. In practice computing D(t) is not easy, and
solving the Markov renewal equation to obtain H(t) is even harder. Hence, following
the now well trodden path, we study its limiting behavior in the next theorem.
Proof: Consider the Markov renewal type equation derived in Theorem 9.18. It is
straightforward to verify that the conditions of Theorem 9.9 are satisfied. Thus Equa-
tion 9.18 can be used to compute the limit of Hij(t) = P(Z(t) = j|X0 = i) as
follows:

\[ \lim_{t\to\infty} H_{ij}(t) = \sum_{k\in I}\frac{p_k}{\tau_k}\int_0^\infty D_{kj}(u)\,du. \tag{9.41} \]
Now,

\begin{align*}
\alpha_{kj} &= E(\text{time spent by the MRGP in state } j \text{ during } (0, S_1) \mid X_0 = k) \\
&= E\Big(\int_0^{S_1} 1_{\{Z(t)=j\}}\,dt \;\Big|\; X_0 = k\Big) \\
&= \int_0^\infty E\Big(\int_0^{S_1} 1_{\{Z(t)=j\}}\,dt \;\Big|\; X_0 = k, S_1 = u\Big)\,dG_k(u) \\
&= \int_0^\infty \int_0^u P(Z(t) = j \mid X_0 = k, S_1 = u)\,dt\,dG_k(u) \\
&= \int_0^\infty \int_t^\infty P(Z(t) = j \mid X_0 = k, S_1 = u)\,dG_k(u)\,dt \\
&= \int_0^\infty P(Z(t) = j, S_1 > t \mid X_0 = k)\,dt \\
&= \int_0^\infty D_{kj}(t)\,dt.
\end{align*}
Note that the limiting distribution of the MRGP given in Equation 9.40 is independent
of the initial distribution of the MRS, or the initial value of X0. The distribution
itself can be intuitively explained as follows: since every time the SMP {X(t), t ≥ 0}
visits state k it spends τk amount of time there on average, we can interpret αkj/τk as the
time spent by the MRGP in state j per unit time spent in state k by the SMP. Since
pk is the fraction of the time spent in state k by the SMP in the long run, we can
compute
\begin{align*}
\lim_{t\to\infty} P(Z(t) = j \mid X_0 = i) &= \text{long run fraction of the time spent by the MRGP in state } j \\
&= \sum_{k\in I}\,[\text{long run fraction of the time spent by the SMP in state } k] \\
&\qquad\times [\text{long run time spent by the MRGP in state } j \text{ per unit time spent by the SMP in state } k] \\
&= \sum_{k\in I} p_k\,\frac{\alpha_{kj}}{\tau_k}.
\end{align*}
Now let us relax the assumption that the Z(t) and Xn take values in the same
discrete set I. Suppose Z(t) ∈ S for all t ≥ 0 and Xn ∈ I for all n ≥ 0. We can see
that 9.40 remains valid even in this case if S is also discrete. If S is continuous, say
(−∞, ∞), we can proceed as follows: fix an x ∈ S, and define
Y (t) = 1{Z(t)≤x} , t ≥ 0.
Then it is clear that {Y (t), t ≥ 0} is also an MRGP with the same embedded MRS
{(Xn , Sn ), n ≥ 0}. We do need to assume that the sample paths of {Z(t), t ≥ 0}
are sufficiently nice so that the sample paths of {Y (t), t ≥ 0} are right continuous
with left limits. Next define αk (x) as the expected time spent by the {Z(t), t ≥ 0}
process in the set (−∞, x] during (0, S1 ) starting with X0 = k. Then we can show
that

\[ \lim_{t\to\infty} P(Z(t) \le x \mid X_0 = i) = \sum_{k\in I} p_k\,\frac{\alpha_k(x)}{\tau_k}, \quad i \in I,\; x \in S. \]
We illustrate with two examples.
as expected.
In this section we apply the theory of MRGPs to queueing systems. Specifically
we apply it to the birth and death queues, the M/G/1 queue, the G/M/1 queue, and
the M/G/1/1 retrial queue.
Let Z(t) be the number of customers in a queueing system at time t. Assume that
{Z(t), t ≥ 0} is a birth and death process on {0, 1, 2, · · ·} with birth parameters
λi > 0 for i ≥ 0, and death parameters µi > 0 for i ≥ 1. We studied the limiting behavior of this
process in Example 6.35 on page 242. There we saw that such a queueing system is
stable if

\[ \sum_{i=0}^{\infty} \rho_i < \infty, \]

where ρ0 = 1, and

\[ \rho_i = \frac{\lambda_0\lambda_1\cdots\lambda_{i-1}}{\mu_1\mu_2\cdots\mu_i}, \quad i \ge 1, \]

and its limiting distribution is given by

\[ p_j = \frac{\rho_j}{\sum_{i=0}^\infty \rho_i}, \quad j \ge 0. \]
Thus pj is the steady-state probability that there are j customers in the system. From
Section 7.1 recall that πj∗ is the steady-state probability that an entering customer sees
j customers ahead of him at the time of entry. We saw in Theorem 7.3 that if λi = λ
for all i ≥ 0, we can apply PASTA (Theorem 7.3 on page 283, noting that all
arriving customers enter) to see that πj∗ = pj. We use the theory of MRGPs to derive
a relationship between πj∗ and pj in the general case in the next theorem.
Theorem 9.20 The Birth and Death Queue. For a stable birth and death queue,

\[ \pi_j^* = \frac{\lambda_j p_j}{\sum_{k=0}^\infty \lambda_k p_k}, \quad j = 0, 1, 2, \cdots. \tag{9.42} \]
Proof: Let S0 = 0, and Sn be the time of the nth upward jump in the {Z(t), t ≥ 0}.
Thus the nth entry to the queueing system takes place at time Sn . Let Xn = Z(Sn −),
the number of customers as seen by the nth entering customer. Note that Xn = k
implies that Z(Sn +) = k + 1. Since {Z(t), t ≥ 0} is a CTMC, {(Xn , Sn ), n ≥ 0}
is a Markov renewal sequence. Note that {Z(t), t ≥ 0} decreases over [0, S1 ), and
{Xn, n ≥ 0} is a DTMC with transition probabilities

\[ p_{kj} = \frac{\lambda_j}{\lambda_j + \mu_j}\,m_{kj}, \quad 0 \le j \le k+1, \tag{9.43} \]

where

\[ m_{k,k+1} = 1, \qquad m_{kj} = \prod_{r=j+1}^{k+1}\frac{\mu_r}{\lambda_r + \mu_r}, \quad 0 \le j \le k. \]
Since

\[ \pi_j^* = \lim_{n\to\infty} P(X_n = j), \quad j \ge 0, \tag{9.44} \]

we see that {πj∗, j ≥ 0} satisfy

\[ \pi_j^* = \sum_{k=j-1}^{\infty}\pi_k^*\,p_{kj} = \frac{\lambda_j}{\lambda_j + \mu_j}\sum_{k=j-1}^{\infty}\pi_k^*\,m_{kj}, \quad j \ge 0. \tag{9.45} \]
We can interpret mkj as the probability that the {Z(t), t ≥ 0} process visits state j
over [0, S1 ) starting with X0 = k. We can treat {Z(t), t ≥ 0} as an MRGP with the
embedded MRS {(Xn , Sn ), n ≥ 0}. Let {X(t), t ≥ 0} be the SMP generated by the
MRS {(Xn , Sn ), n ≥ 0}. We change the notation slightly and use
\[ p_j = \lim_{t\to\infty} P(Z(t) = j), \quad j \ge 0, \tag{9.46} \]

and

\[ \bar p_j = \lim_{t\to\infty} P(X(t) = j), \quad j \ge 0. \tag{9.47} \]
We use Theorem 8.34 on page 382 to get
\[ \bar p_j = \frac{\pi_j^*\,\tau_j}{\sum_{k=0}^\infty \pi_k^*\,\tau_k}, \quad j \ge 0. \]
Now, let αkj be as defined in Theorem 9.19. Note that the MRGP can visit state j
at most once during [0, S1 ), mkj is the probability that the MRGP visits state j over
(0, S1 ) starting with X0 = k, and 1/(λj + µj ) is the average time it spends in state
j once its reaches state j. These observations can be combined to yield
\[ \alpha_{kj} = \frac{1}{\lambda_j + \mu_j}\,m_{kj}, \quad 0 \le j \le k+1. \]
Using the above results in Theorem 9.19 we get
\[ p_j = \sum_{k=j-1}^{\infty} \bar p_k\,\frac{\alpha_{kj}}{\tau_k} = \frac{\sum_{k=j-1}^\infty \pi_k^*\,\alpha_{kj}}{\sum_{k=0}^\infty \pi_k^*\,\tau_k}, \quad j \ge 0. \]
Now we have

\begin{align*}
\sum_{k=j-1}^{\infty}\pi_k^*\,\alpha_{kj} &= \sum_{k=j-1}^{\infty}\pi_k^*\,\frac{1}{\lambda_j + \mu_j}\,m_{kj} \\
&= \frac{1}{\lambda_j}\sum_{k=j-1}^{\infty}\pi_k^*\,\frac{\lambda_j}{\lambda_j + \mu_j}\,m_{kj} \\
&= \frac{1}{\lambda_j}\sum_{k=j-1}^{\infty}\pi_k^*\,p_{kj} \quad \text{(from Eq. 9.43)} \\
&= \frac{\pi_j^*}{\lambda_j} \quad \text{(from Eq. 9.45)}.
\end{align*}
Hence pj ∝ πj∗ /λj , or πj∗ ∝ λj pj . Since the πj∗ must add up to 1, we get Equa-
tion 9.42 as desired.
The relation in Equation 9.42 has a simple intuitive explanation: λj pj is the rate at
which state j + 1 is entered from state j in steady state. Hence it must be proportional
to the probability that an entering customer sees j customers ahead of him.
Now let πj be the limiting probability that a departing customer leaves behind j
customers in the system. One can define an appropriate Markov renewal sequence
and follow the steps of the proof of Theorem 9.20 to show that

\[ \pi_j = \frac{\mu_{j+1}\,p_{j+1}}{\sum_{k=1}^\infty \mu_k\,p_k}, \quad j = 0, 1, 2, \cdots. \tag{9.48} \]
However, for a positive recurrent birth and death process we have
λj pj = µj+1 pj+1 , j ≥ 0.
Hence we get

\[ \pi_j = \frac{\lambda_j\,p_j}{\sum_{k=0}^\infty \lambda_k\,p_k} = \pi_j^*, \quad j = 0, 1, 2, \cdots. \tag{9.49} \]
Thus, in steady state, the distribution of the number of customers as seen by an ar-
rival is the same as seen by a departure. This is a probabilistic proof of the general
Theorem 7.2 on page 281.
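As a numerical sanity check of Equation 9.42 (a sketch with hypothetical rates): for an M/M/1 queue λj = λ for all j, so the formula collapses to πj∗ = pj, in agreement with PASTA:

    import numpy as np

    lam, mu, N = 1.0, 2.0, 200            # hypothetical rates; truncate at N states
    rho = lam / mu
    p = (1 - rho) * rho ** np.arange(N)   # p_j = (1 - rho) rho^j

    lam_j = np.full(N, lam)               # constant birth rates
    pi_star = lam_j * p / np.sum(lam_j * p)
    print(np.max(np.abs(pi_star - p)))    # ~0: arrivals see the time averages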
Let Z(t) be the number of customers in a stable M/G/1 queue at time t with arrival rate
λ and service time cdf B(·) with mean τ. We showed in Example 9.3 that {Z(t), t ≥
0} is an MRGP with the embedded Markov renewal sequence {(Xn, Sn), n ≥ 0} as
defined there. Let {X(t), t ≥ 0} be the SMP generated by this MRS. Let pj and p̄j
be as defined in Equations 9.46 and 9.47. Also define

\[ \pi_j = \lim_{n\to\infty} P(X_n = j), \quad j \ge 0. \]
Proof: We have shown in Theorem 7.10 that {Xn , n ≥ 0} is a DTMC with transi-
tion probability matrix P as given there. We need the following quantities to apply
Theorem 8.34 on page 382:

\[ \tau_0 = \frac{1}{\lambda} + \tau, \qquad \tau_j = \tau, \quad j \ge 1. \]
Using Equation 8.66 we get
\[ \bar p_j = \frac{\pi_j\,\tau_j}{\sum_{k=0}^\infty \pi_k\,\tau_k}, \quad j \ge 0. \]
From Equation 7.35 on page 314 we get
π0 = 1 − λτ.
Hence we have

\[ \sum_{k=0}^{\infty}\pi_k\,\tau_k = \tau + \pi_0/\lambda = 1/\lambda. \]
Thus we get
p̄j = λπj τj . (9.50)
In order to use Theorem 9.19 we need to compute αkj , the expected time spent by
the MRGP in state j over [0, S1 ) starting with X0 = k. Note that the sojourn time of
the SMP in state k > 0 is given by Gk (x) = B(x). For j ≥ k > 0 we have
\begin{align*}
\alpha_{kj} &= E\Big(\int_0^{S_1} 1_{\{Z(t)=j\}}\,dt \;\Big|\; X_0 = k\Big) \\
&= \int_0^\infty E\Big(\int_0^{S_1} 1_{\{Z(t)=j\}}\,dt \;\Big|\; X_0 = k, S_1 = u\Big)\,dG_k(u) \\
&= \int_0^\infty\int_0^u P(Z(t) = j \mid X_0 = k, S_1 = u)\,dt\,dB(u) \\
&= \int_0^\infty\int_0^u P(j - k \text{ arrivals in } [0, t))\,dt\,dB(u) \\
&= \int_0^\infty\int_0^u e^{-\lambda t}\,\frac{(\lambda t)^{j-k}}{(j-k)!}\,dt\,dB(u) \\
&= \frac{1}{\lambda}\Big[1 - \sum_{r=0}^{j-k}\alpha_r\Big],
\end{align*}

where

\[ \alpha_i = \int_0^\infty e^{-\lambda t}\,\frac{(\lambda t)^i}{i!}\,dB(t), \quad i \ge 0. \]
In a similar way we can show that
α00 = 1/λ, α0j = α1j , j ≥ 1.
Substituting in Equation 9.41 we get

\begin{align*}
p_j &= \sum_{k=0}^{\infty}\bar p_k\,\frac{\alpha_{kj}}{\tau_k} \\
&= \lambda\sum_{k=0}^{j}\pi_k\,\alpha_{kj} \quad \text{(use Eq. 9.50)} \\
&= \lambda\Big(\pi_0\,\alpha_{0j} + \sum_{k=1}^{j}\pi_k\,\alpha_{kj}\Big) \\
&= \pi_j,
\end{align*}
where the last equality follows by using the balance equation π = πP repeatedly to
simplify the right hand side.
Let Z(t) be the number of customers in a stable G/M/1 queue at time t with common
inter-arrival time cdf A(·) with mean 1/λ and iid exp(µ) service times. We showed
in Example 9.4 that {Z(t), t ≥ 0} is an MRGP with the embedded Markov renewal
sequence {(Xn , Sn ), n ≥ 0} as defined there. Let {X(t), t ≥ 0} be the SMP gener-
ated by this MRS. Let pj , p̄j and πj∗ be as defined in Equations 9.46, 9.47, and 9.44.
Here we use the theory of MRGPs to give a computational proof of Theorem 7.16.
Proof: We have shown in Theorem 7.14 on page 317 that {Xn , n ≥ 0} (it was
denoted by {Xn∗ , n ≥ 0} there) is a DTMC with transition probability matrix P as
given in Equation 7.38. We need the following quantities to apply Theorem 8.34 on
page 382:
τj = 1/λ, j ≥ 0.
Substituting in Equation 8.66 we get
p̄j = πj∗ , j ≥ 0. (9.51)
In order to use Theorem 9.19 we need to compute αkj . Note that the sojourn time
of the SMP in state k ≥ 0 is given by Gk (x) = A(x). Going through the same
calculations as in the proof of Theorem 9.21 we get, for 1 ≤ j ≤ k + 1,

\[ \alpha_{kj} = \frac{1}{\mu}\Big[1 - \sum_{r=0}^{k+1-j}\alpha_r\Big], \]

where

\[ \alpha_i = \int_0^\infty e^{-\mu t}\,\frac{(\mu t)^i}{i!}\,dA(t), \quad i \ge 0. \]
Substituting in Equation 9.41 we get, for j ≥ 1,

\begin{align*}
p_j &= \sum_{k=j-1}^{\infty}\bar p_k\,\frac{\alpha_{kj}}{\tau_k} \\
&= \lambda\sum_{k=j-1}^{\infty}\pi_k^*\,\alpha_{kj} \quad \text{(use Eq. 9.51)} \\
&= \frac{\lambda}{\mu}\sum_{k=j-1}^{\infty}\pi_k^*\Big[1 - \sum_{r=0}^{k+1-j}\alpha_r\Big] \\
&= \rho\,\pi_{j-1}^*,
\end{align*}

where the last equality follows by using the balance equation π∗ = π∗P repeatedly
to simplify the right hand side. Finally, we have

\[ p_0 = 1 - \sum_{j=1}^{\infty} p_j = 1 - \rho\sum_{j=1}^{\infty}\pi_{j-1}^* = 1 - \rho. \]
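As a consistency check (a sketch with hypothetical rates): view an M/M/1 queue as a G/M/1 queue with exponential inter-arrival times. Then πk∗ = (1 − σ)σ^k, where σ solves σ = Ã(µ − µσ), and pj = ρπj−1∗ recovers the familiar (1 − ρ)ρ^j:

    lam, mu = 1.0, 2.0                    # hypothetical rates
    rho = lam / mu
    A_tilde = lambda s: lam / (lam + s)   # LST of exp(lam) inter-arrival times

    sigma = 0.0                           # fixed point sigma = A~(mu - mu*sigma)
    for _ in range(200):
        sigma = A_tilde(mu * (1.0 - sigma))

    for j in range(1, 6):
        p_j = rho * (1.0 - sigma) * sigma ** (j - 1)   # p_j = rho pi*_{j-1}
        print(j, p_j, (1.0 - rho) * rho ** j)          # agrees with the M/M/1 answer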
Consider the M/G/1/1 retrial queue as described in Example 9.5. Using the no-
tation there we see that {Z(t) = (R(t), I(t)), t ≥ 0} is an MRGP with the MRS
{(Xn , Sn ), n ≥ 0} defined there. Let {X(t), t ≥ 0} be the SMP generated by this
MRS. Note that this is different from the X(t) defined in Example 9.5. Let p̄j be as
defined in Equation 9.47. We have derived the limiting distribution of the embedded
DTMC {Xn , n ≥ 0} in Theorem 7.19. We use those results to derive the limiting
distribution of (R(t), I(t)) as t → ∞.
Let

\[ p_{(j,i)} = \lim_{t\to\infty} P((R(t), I(t)) = (j, i)), \quad j \ge 0,\; i = 0, 1, \tag{9.52} \]

and define the generating functions

\[ \phi_i(z) = \sum_{j=0}^{\infty} z^j\,p_{(j,i)}, \quad i = 0, 1. \]
Theorem 9.23 M/G/1/1 Retrial Queue. Suppose ρ = λτ < 1. Then

\[ \phi_0(z) = (1-\rho)\exp\left(-\frac{\lambda}{\theta}\int_z^1 \frac{1 - \tilde G(\lambda - \lambda u)}{\tilde G(\lambda - \lambda u) - u}\,du\right), \tag{9.53} \]

\[ \phi_1(z) = (1-\rho)\cdot\frac{\tilde G(\lambda - \lambda z) - 1}{z - \tilde G(\lambda - \lambda z)}\cdot\exp\left(-\frac{\lambda}{\theta}\int_z^1 \frac{1 - \tilde G(\lambda - \lambda u)}{\tilde G(\lambda - \lambda u) - u}\,du\right). \tag{9.54} \]
Proof: We have shown in Theorem 7.18 on page 321 that {Xn , n ≥ 0} is a DTMC.
Let

\[ \pi_k = \lim_{n\to\infty} P(X_n = k), \quad k \ge 0. \]

In Theorem 7.19 we computed the generating function

\[ \phi(z) = \sum_{k=0}^{\infty} z^k\,\pi_k. \]
Now let φ(z) be the limiting generating function of Xn as derived in Equation 7.46
on page 322. We saw in Section 7.7 that it is also the limiting generating function of
R(t) + I(t) as t → ∞. Hence we get
φ(z) = φ0 (z) + zφ1 (z).
Substituting for φ(z) and φ0 (z) we get the expression for φ1 (z) in Equation 9.54.
Note that the limiting probability that the server is idle is given by
φ0 (1) = 1 − ρ,
as expected. It is a bit surprising that this is independent of the retrial rate θ as long
as it is positive! We could not derive this result in our earlier analysis of the retrial
queue in Section 7.7.
Modeling Exercises

9.1 Let {Z(t), t ≥ 0} be a birth and death process. Let Sn be the nth downward jump
epoch and define Xn = Z(Sn+). Assume that S0 = 0. Show that {Z(t), t ≥ 0} is
an MRGP with {(Xn, Sn), n ≥ 0} as the embedded MRS. Compute its kernel.
9.2 Let Z(t) be the number of customers in an M/G/1/K queue with PP(λ) ar-
rivals and common service time cdf B(·). Assume that either the system is empty
initially, or a service has just started at time 0. Let Sn be the time of the nth de-
parture, and define Xn = Z(Sn +). Show that {Z(t), t ≥ 0} is an MRGP with
{(Xn , Sn ), n ≥ 0} as the embedded MRS. Compute its kernel.
9.3 Let Z(t) be the number of customers in a G/M/1/K queue with common
inter-arrival time cdf A(·), and iid exp(µ) service times. Assume that an arrival
has occurred at time 0. Let Sn be the time of the nth arrival (who may or may
not enter), and define Xn = Z(Sn −). Show that {Z(t), t ≥ 0} is an MRGP with
{(Xn , Sn ), n ≥ 0} as the embedded MRS. Compute its kernel.
9.4 Consider a closed queueing network with two single-server nodes and N customers.
After completing service at node 1 (2) a customer joins node 2 (1). The
service times at node i are iid exp(µi), i = 1, 2. Let Z(t) be the number of customers
at node 1 at time t, and Sn be the time of nth service completion at node 2, which is
also an arrival instant at node 1. Assume S0 = 0, i.e., a service completion at node
2 has occurred at time 0. Let Xn = Z(Sn −). Show that {Z(t), t ≥ 0} is an MRGP
with {(Xn , Sn ), n ≥ 0} as the embedded MRS. Compute its kernel.
9.6 Consider the queueing network of Modeling Exercise 9.4. Suppose one of the
N customers is tagged, and let Sn be the time of his nth return to node 1. Assume
S0 = 0, i.e., the tagged customer joins node 1 at time 0. Let Xn = Z(Sn −). Show
that {Z(t), t ≥ 0} is an MRGP with {(Xn , Sn ), n ≥ 0} as the embedded MRS.
Compute its kernel.
9.7 A machine can exist in one of N + 1 states labeled {0, 1, 2, · · · , N }, with state
0 representing a new machine, and state N representing a failed machine, and the in-
between states indicating increasing levels of deterioration. Left to itself the machine
changes states according to a DTMC with transition probability matrix H = [hij ].
The maintenance policy calls for repairing the machine whenever it reaches a state
k or more, where 0 < k ≤ N is a given integer. Suppose the repair takes one
unit of time and transforms the machine from state i(≥ k) to state j(< k) with
probability aij . Let Sn be the nth repair completion time and Xn the state of the
machine immediately after the nth repair completion. Assume that S0 = 0. Let Z(t)
be the state of the machine at time t. Show that {Z(t), t ≥ 0} is an MRGP with
{(Xn , Sn ), n ≥ 0} as the embedded MRS. Compute its kernel.
9.9 A high-speed network transmits cells (constant length packets of data) over
communication channels. At its input ports it exercises access control by dropping
incoming cells if the input rate gets too high. One such control mechanism is de-
scribed here. The controller generates r tokens at times n = 0, 1, 2, · · · into a token
pool of size M . Tokens that exceed the capacity are lost. The cells arrive according
to a PP(λ). If there is a token available when a cell arrives, it grabs one token and
enters the network immediately, else the cell is prohibited from entering the network
and is permanently lost. Let X(t) be the number of tokens in the token pool at time
t. Model {X(t), t ≥ 0} as an MRGP.
9.11 Customers arrive at a service station according to a Poisson process with rate
λ. Servers arrive at this station according to an independent renewal process with
iid inter-arrival times with mean τ and second moment s². Each incoming server
removes each of the waiting customers with probability α > 0 in an independent
fashion, and departs immediately. Let X(t) be the number of customers in the system
at time t. Model {X(t), t ≥ 0} as an MRGP by identifying an appropriate MRS
embedded in it.
Computational Exercises

9.2 Let {Z(t), t ≥ 0} be a CTMC with generator matrix Q. For the MRS described
in Example 9.1, compute the LST M̃ (s) of the Markov renewal function M (t).
9.3 Compute the limiting probability that 0 or 1 components are functional in the
machine in Example 9.9.
9.4 Consider a two-state CTMC {X(t), t ≥ 0} on {0, 1} with generator matrix

\[ Q = \begin{bmatrix} -\lambda & \lambda \\ \mu & -\mu \end{bmatrix}. \]
Let Hij (t) be the expected time spent in state j by the CTMC up to time t starting
in state i at time 0. Derive a Markov renewal type equation for H(t). Solve it by
inverting its LST H̃(s).
9.6 Let pj be the limiting probability that there are j customers in an M/G/1/K
queue, and πj∗ be the limiting probability that an entering customer sees j customers
ahead of him when he enters. Using the theory of MRGPs, establish the relationship
between the pj ’s and the πj∗ ’s.
9.7 Compute the long run behavior of the access control scheme described in Mod-
eling Exercise 9.9 for the special case when M = r.
9.8 Consider the following variation of the M/G/1/1 retrial queue. Suppose that
after a service completion a customer departs with probability 1 − p, or rejoins the
orbit with probability p and behaves like a new customer. Study the limiting distri-
bution of this system using appropriate MRGPs.
9.9 Consider yet another variation of the M/G/1/1 retrial queue. Suppose that an
arriving customer joins service immediately if he finds the server free upon arrival.
Else he departs with probability 1 − p, or joins the orbit with probability p and con-
ducts retrials until getting served. Study the limiting distribution of this system using
appropriate MRGPs.
9.10 Using the MRGP developed in Modeling Exercise 9.8, compute the long run
fraction of the time that the machine is up.
9.11 Consider the MRGP developed in Modeling Exercise 9.11. Let Xn be the num-
ber of customers left behind after the nth server departs. Compute the limiting value
of E(Xn ) as n → ∞. Compute the limiting value of E(X(t)) as t → ∞.
Conceptual Exercises

9.7 Using the fact that Mjj is a standard renewal function and Mij (i ≠ j) is a
delayed renewal function with common mean inter-renewal time τjj, prove Theorem 9.11.
9.8 Use the results of Examples 8.16 on page 361 and 8.20 on page 370 to prove
Theorem 9.12.
9.9 Derive the Markov-renewal type equation 9.28 with D(t) as in Equation 9.29.
(Use the derivational steps in Example 8.16 on page 361.)
CHAPTER 10

Diffusion Processes
Suppose you’re on a game show, and you’re given the choice of three doors. Behind
one door is a car, behind the others, goats. You pick a door, say #1, and the host, who
knows what’s behind the doors, opens another door, say #3, which has a goat. He
says to you, “Do you want to pick door #2?” Is it to your advantage to switch your
choice of doors?
A question asked of Marilyn vos Savant by a reader. Marilyn said yes, while count-
less others said, “It does not matter.” This created a heated controversy, and pro-
duced several papers, with one reader telling Marilyn: “You are the goat!”
In this chapter we study a class of stochastic processes called the diffusion processes.
Intuitively speaking these processes are continuous-time, continuous state-space pro-
cesses and their sample paths are everywhere continuous but nowhere differentiable.
The history of diffusion processes begins with the botanist Brown, who in 1827 observed
that grains of pollen suspended in a liquid display a kind of erratic motion.
This motion came to be known as the Brownian motion. Einstein later used physical
principles to do a mathematical analysis of this motion. Wiener later provided a rigorous
probabilistic foundation for the Brownian motion, and hence it is sometimes also
called the Wiener process. Diffusion processes are built upon the simpler process of
Brownian motion.
We begin with a formal definition. It uses the concept of stationary and independent
increments introduced in Definition 5.5 on page 157. We also use the notation R
to denote the real line (−∞, ∞), and N(µ, σ²) to denote a normal random variable
(or its distribution) with mean µ and variance σ².

A stochastic process {X(t), t ≥ 0} is called a Brownian motion with parameters µ and σ
(denoted BM(µ, σ)) if
1. for each t ≥ 0, X(t) ∼ N(µt, σ²t),
2. it has stationary and independent increments.
A BM(0, 1) is called a standard Brownian motion (SBM).
We reserve the notation {B(t), t ≥ 0} for the standard BM. Some important prop-
erties of a Brownian motion are given in the following theorem.
Theorem 10.1 Basic Properties of a BM. Let {B(t), t ≥ 0} be an SBM, and define
X(t) = µt + σB(t).
Proof: Parts 1 and 2 follow from part 1 of the definition of a BM. To see part 3, we
have
P(X(t + s) ≤ x | X(s) = y, X(u) : 0 ≤ u ≤ s)
= P(X(t + s) − X(s) ≤ x − y | X(s) = y, X(u) : 0 ≤ u ≤ s)
= P(X(t + s) − X(s) ≤ x − y)   (Independent Increments)
= P(X(t) − X(0) ≤ x − y)   (Stationary Increments)
= Φ((x − y − µt)/(σ√t)),
where Φ(·) is the cdf of a standard normal random variable, and the last equality
follows from part 1 of the definition of a BM. Part 4 follows from the definition of a
BM(µ, σ).
The last property mentioned in the above theorem implies that if {B(t), t ≥ 0} is
an SBM, then so is {−B(t), t ≥ 0}. We say that an SBM is symmetric.
We next compute the joint pdf of an SBM. By definition, B(t) is a N (0, t) random
variable, and has the density φ(t, x) given by
φ(t, x) = (1/√(2πt)) e^{−x²/(2t)}, t > 0, x ∈ R. (10.1)
Since Brownian motion has such a nice structure we can give much more detailed
results about its finite dimensional joint pdfs.
Theorem 10.2 Joint Distributions of an SBM. Let {B(t), t ≥ 0} be an SBM,
and 0 < t1 < t2 < · · · < tn . Then [B(t1 ), B(t2 ), · · · , B(tn )] is a multi-variate
N (µ, Σ) random variable with the mean vector µ = [µi ], 1 ≤ i ≤ n, and variance-
covariance matrix Σ = [σij ], 1 ≤ i, j ≤ n, given by
µi = 0, 1 ≤ i ≤ n,
and
σij = min{ti , tj }, 1 ≤ i, j ≤ n.
Since all the finite dimensional joint pdfs in an SBM are multi-variate normal,
explicit expressions can be computed for many probabilistic quantities. For example,
the joint density f (x, y) of B(s) and B(t) (0 < s < t) is
f(x, y) = φ(s, x) φ(t − s, y − x) = (1/(2π√(s(t − s)))) exp(−x²/(2s) − (y − x)²/(2(t − s))). (10.2)
One can show that, given B(t) = x, B(s) is a normal random variable with mean
xs/t and variance s(t−s)/t. A special case of this arises when we consider the SBM
conditioned on B(1) = 0. Such a process is called the Brownian bridge, since it is
anchored at 0 at times 0 and 1. We can see that B(t) (0 ≤ t ≤ 1) in a Brownian
bridge is a normal random variable with mean 0 and variance t(1 − t).
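As a quick numerical illustration (our sketch, not part of the original text), the following Python snippet checks this variance claim. By Computational Exercise 10.6 later in this chapter, B(t) − tB(1) has the same distribution as B(t) given B(1) = 0, so its sample variance should be close to t(1 − t):

    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_steps = 200000, 100
    dt = 1.0 / n_steps
    # Simulate SBM paths on [0, 1] by summing independent N(0, dt) increments.
    B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

    for j in [24, 49, 74]:               # grid indices for t = 0.25, 0.5, 0.75
        t = (j + 1) * dt
        bridge = B[:, j] - t * B[:, -1]  # B(t) - t B(1): Brownian bridge at t
        print(f"t={t:.2f}  sample var={bridge.var():.4f}  t(1-t)={t*(1-t):.4f}")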
We next state two more basic properties of the BM without proof. First we need
two definitions.
Definition 10.3 Stopping Times. A random variable T defined on the same proba-
bility space as a stochastic process {X(t), t ≥ 0} is called a stopping time for the
{X(t), t ≥ 0} if the event {T ≤ t} is completely determined by {X(u) : 0 ≤ u ≤ t}.
For example,
T = min{t ≥ 0 : X(t) ≥ 2}
is a stopping time, but
T = min{t ≥ 0 : X(t) ≥ X(1)}
is not. In the latter case we cannot determine if T ≤ 1/2 without observing X(1),
which is not part of {X(u) : 0 ≤ u ≤ 1/2}.
The above definition is mathematically imprecise as it stands, but its meaning is clear.
Stopping times matter because of the strong Markov property: the future of the process
from any stopping time onwards depends on the history up to that stopping time only via
the value of the process at that stopping time. Making this precise would require an
enormous mathematical apparatus from measure theory, and we shall not do so here. For
the same reason, it is difficult to come up with an example of a Markov process that is
not strong Markov. The DTMCs and the CTMCs that we have studied so far have the strong
Markov property. We state the following theorem without proof.
In this section we shall study several important properties of the sample paths of a
BM. We state the first property in the next theorem.
Theorem 10.4 The sample paths of {X(t), t ≥ 0} are everywhere continuous and
nowhere differentiable with probability 1.
This is one of the deep results about Brownian motion. It is much stronger than asserting
that the sample paths, as functions of t, are continuous for almost all values of t ≥ 0.
That weaker property holds even for the sample paths of a Poisson process, since they have
only a countable number of jumps with probability one; yet none of the sample paths of a
Poisson process is continuous everywhere. What the above theorem asserts is that, with
probability one, a randomly occurring sample path of a Brownian motion is a continuous
function of t for all t ≥ 0. Even more surprisingly, it asserts
that, with probability one, a randomly occurring sample path of a Brownian motion
is nowhere differentiable. Thus the sample paths of a Brownian motion are indeed
very crazy functions of t.
We will not give formal proof of the above theorem. Instead we shall provide a way
to understand such a bizarre behavior by proposing an explicit method of construct-
ing the sample paths of an SBM as a limit of a sequence of simple symmetric random
walks. Recall from Example 2.10 on page 16 that {Xn , n ≥ 0} is called a simple
symmetric random walk if it is a DTMC on all integers with transition probabilities
pi,i+1 = 1/2, pi,i−1 = 1/2, −∞ < i < ∞.
Using the notation [x] to denote the integer part of x, define a sequence of continuous
time stochastic processes {X^k(t), t ≥ 0}, indexed by k ≥ 1, as follows:
X^k(t) = X_{[kt]}/√k, t ≥ 0. (10.3)
Figure 10.1 shows typical sample paths of the {X^k(t), 0 ≤ t < 1} processes for k = 4 and 9.

[Figure 10.1: Typical sample paths of the {X^k(t), 0 ≤ t < 1} processes for k = 4 and 9.]

Now define
X*(t) = lim_{k→∞} X^k(t), t ≥ 0,
where the limit is in the almost sure sense. (See Appendix H for the relevant defi-
nitions.) Thus a sample path of {Xn , n ≥ 0} produces a unique (in the almost sure
sense) sample path of {X ∗ (t), t ≥ 0}. We have the following result:
Theorem 10.5 Random Walk to Brownian Motion. Suppose |X0| < ∞ with
probability 1. Then the limiting process {X*(t), t ≥ 0} defined above exists,
and is an SBM.
Proof: We will not show the existence. Let {Yn , n ≥ 0} be a sequence of iid random
variables with common pmf
P(Yn = 1) = .5, P(Yn = −1) = .5, n ≥ 1.
We have
E(Yn ) = 0, Var(Yn ) = 1, n ≥ 1.
Since {Xn, n ≥ 0} is a simple random walk, we see that
Xn = X0 + Σ_{i=1}^{n} Yi. (10.5)
Then
X*(t) = lim_{k→∞} (X0 + Σ_{i=1}^{[kt]} Yi)/√k
= lim_{k→∞} (√([kt])/√k) · (X0 + Σ_{i=1}^{[kt]} Yi)/√([kt])
= √t · N(0, 1)   (in distribution, by the central limit theorem)
= N(0, t)   (in distribution).
This shows part 1 of Definition 10.2.
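To make the construction concrete, here is a small Python sketch (our illustration, with arbitrary parameter choices). It simulates X^k(1) = X_{[k]}/√k for simple symmetric random walks started at 0; as k grows, the sample mean and variance approach the N(0, 1) values predicted by the theorem:

    import numpy as np

    rng = np.random.default_rng(42)

    def scaled_walk_endpoint(k, n_paths):
        # Simulate X^k(1) = X_k / sqrt(k) for n_paths independent
        # simple symmetric random walks with X_0 = 0.
        steps = rng.choice([-1, 1], size=(n_paths, k))  # the iid Y_n's
        return steps.sum(axis=1) / np.sqrt(k)

    for k in [4, 100, 10000]:
        x = scaled_walk_endpoint(k, n_paths=100000)
        # For an SBM, X*(1) ~ N(0, 1): mean 0, variance 1.
        print(f"k={k:6d}  mean={x.mean():+.3f}  var={x.var():.3f}")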
Let p(t, x) be the density of B(t). Since the SBM is a Markov process, we can expect
to derive differential equations for p(t, x), much along the lines of the Chapman-
Kolmogorov Equations we derived in Theorem 6.4 on page 205. However, since
the state-space of the SBM is continuous, we need a different machinery to derive
these equations. The formal derivation of these equations is rather involved, and we
refer the reader to advanced books on the subject, such as Chung (1967). Here we
present an “engineering” derivation, which glosses over some of the technicalities.
The following theorem gives the main differential equation, which is equivalent to
the forward equations derived in Theorem 6.4. We shall use the following notation
for the partial derivatives:
p_t(t, x) = ∂p(t, x)/∂t,  p_x(t, x) = ∂p(t, x)/∂x,  p_xx(t, x) = ∂²p(t, x)/∂x².
Theorem 10.6 Kolmogorov Equation for SBM. The density p(t, x) satisfies the
following partial differential equation:
p_t(t, x) = (1/2) p_xx(t, x), x ∈ R, t ≥ 0, (10.6)
with the boundary condition
p(0, x) = f_{B(0)}(x).
Proof: Conditioning on B(t − h) = y and using the stationary and independent increments
property of the SBM, we can write
p(t, x) = ∫_R p(t − h, y) f_{B(h)}(x − y) dy, (10.7)
where f_{B(h)}(·) is the density of B(h). Using the Taylor series expansion of p(t − h, y)
around (t, x), we get
p(t − h, y) = p(t, x) − h p_t(t, x) + (y − x) p_x(t, x) + ((y − x)²/2) p_xx(t, x) + · · · .
Substituting in Equation 10.7 we get
p(t, x) = ∫_R [p(t, x) − h p_t(t, x) + (y − x) p_x(t, x) + ((y − x)²/2) p_xx(t, x) + o((y − x)²)] f_{B(h)}(x − y) dy
= p(t, x) − h p_t(t, x) − E(B(h)) p_x(t, x) + (E(B(h)²)/2) p_xx(t, x) + o(h)
= p(t, x) − h p_t(t, x) + (h/2) p_xx(t, x) + o(h).
Here we have used the fact that all odd moments of B(h) are zero, and
E(B(h)^{2k}) = ((2k)!/(2^k k!)) h^k,
to obtain the o(h) term. Dividing by h we get
p_t(t, x) = (1/2) p_xx(t, x) + o(h)/h.
Letting h → 0 we get Equation 10.6.
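This claim is easy to verify symbolically. The following sketch (ours) uses the sympy library to check that the N(0, t) density φ(t, x) of Equation 10.1 satisfies Equation 10.6:

    import sympy as sp

    t, x = sp.symbols('t x', positive=True)
    # Density of B(t) ~ N(0, t), Equation 10.1.
    phi = sp.exp(-x**2 / (2*t)) / sp.sqrt(2*sp.pi*t)

    # Kolmogorov (heat) equation 10.6: p_t = (1/2) p_xx.
    residual = sp.diff(phi, t) - sp.Rational(1, 2) * sp.diff(phi, x, 2)
    print(sp.simplify(residual))   # prints 0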
Following our plan of studying any class of stochastic processes, we now study the
first passage times in an SBM. Define
Ta = min{t ≥ 0 : B(t) = a} (10.10)
as the first passage time to the state a. Note that this is a well defined random variable
since the sample paths of the SBM are everywhere continuous with probability 1. It
is clear that Ta is a stopping time, since one can tell if the SBM has visited state a by
time t by looking at the sample path of the SBM over [0, t]. The next theorem gives
the pdf of Ta. It uses a clever argument called the "reflection principle," which uses the
symmetry of the SBM and its strong Markov property.
Proof: We shall do an infinitesimal first step analysis and derive a differential equa-
tion for the LST
ψ(s, x) = E(e−sTa |X(0) = x).
Note that, since a > 0 and X(0) = 0, we can redefine the above first passage time as
Ta = min{t ≥ 0 : X(t) ≥ a}.
We know that X(h)−X(0) ∼ N (µh, σ 2 h). Suppose X(0) = x and X(h)−X(0) =
y. If x + y > a, Ta ≈ h. Else, due to the Markov property of the BM, Ta has the
same distribution as h + Ta starting from state x + y. Using this argument and fh (·)
as the pdf of X(h) − X(0) we get
ψ(s, x) = E(e^{−sTa} | X(0) = x)
= ∫_{y∈R} E(e^{−sTa} | X(h) − X(0) = y, X(0) = x) f_h(y) dy
= ∫_{y∈R} E(e^{−s(Ta+h)} | X(0) = x + y) f_h(y) dy
= e^{−sh} ∫_{y∈R} ψ(s, x + y) f_h(y) dy.
The case of a < 0 can be handled by simply studying the −a case in a BM(−µ, σ).
Next let a < 0 and b > 0 and define
Tab = min{t ≥ 0 : X(t) ∈ {a, b}}. (10.17)
Thus Tab is the first time the BM(µ, σ) reaches either a or b. Following Theo-
rem 10.10 we give the main result in the following theorem.
Proof: Let
v(x) = P(X(Tab ) = b|X(0) = x).
Using the notation from the proof of Theorem 10.10, we get, for a < x < b,
v(x) = ∫_{y∈R} v(x + y) f_h(y) dy
= ∫_{y∈R} [v(x) + y v′(x) + (y²/2) v″(x) + · · ·] f_h(y) dy
= v(x) + µh v′(x) + (σ²h/2) v″(x) + o(h).
Dividing by h and letting h → 0, we get
(σ²/2) v″(x) + µ v′(x) = 0.
The solution to the above equation is
v′(x) = C e^{θx},
where
θ = −2µ/σ².
This yields
v(x) = Aeθx + B,
where A and B are arbitrary constants. Using the boundary conditions
v(a) = 0, v(b) = 1,
we get the complete solution as
v(x) = (exp(θa) − exp(θx))/(exp(θa) − exp(θb)). (10.23)
Substituting x = 0 gives Equation 10.19.
We shall present an alternative derivation of the results of the above theorem
using the concept of Martingales in Section 10.7.
The nomenclature makes intuitive sense because one can obtain a sample path of
{Y (t), t ≥ 0} by reflecting the parts of the sample path of an SBM that lie below
zero around the horizontal axis x = 0 in the (t, x) plane, as shown in Figure 10.2.
[Figure 10.2: A sample path of the reflected SBM {Y(t), t ≥ 0}, obtained by reflecting the negative excursions of an SBM about the t-axis.]
We also say that x = 0 is a reflecting boundary. The following theorem lists the
important properties of a reflected SBM.
Proof: Part 1 is obvious from the definition. To show part 2, we exploit the following
consequence of the symmetry of the SBM
P(−y ≤ B(t + s) ≤ y|B(t) = x) = P(−y ≤ B(t + s) ≤ y|B(t) = −x).
Now let 0 ≤ t1 < t2 < · · · < tn < tn + s. We have
P(Y(tn + s) ≤ y | Y(ti) = yi, 1 ≤ i ≤ n)
= P(−y ≤ B(tn + s) ≤ y | B(ti) = ±yi, 1 ≤ i ≤ n)
= P(−y ≤ B(tn + s) ≤ y | B(ti) = yi, 1 ≤ i ≤ n)   (by symmetry)
= P(−y ≤ B(tn + s) ≤ y | B(tn) = yn)   (SBM is Markov)
= P(−y − yn ≤ B(tn + s) − B(tn) ≤ y − yn | B(tn) = yn)
= ∫_{−y}^{y} φ(s, x − yn) dx   (SBM has ind. inc.).
The last equality follows because B(tn + s) − B(tn ) is independent of B(tn ) and is
Normally distributed with mean 0 and variance s. This proves the Markov property.
It also implies that given Y (t) = x, the density of the increment Y (t + s) − Y (t)
is φ(s, y) + φ(s, −y − 2x) = φ(s, y) + φ(s, y + 2x). This shows the stationarity of
increments. Since the density depends on x, the increments are clearly not indepen-
dent. That proves part 3. Part 4 follows by taking derivatives with respect to y on
both sides of
P(Y(t) ≤ y) = P(−y ≤ B(t) < y) = ∫_{−y}^{y} φ(t, u) du,
and using the symmetry of the φ density. Part 5 follows by direct integrations.
The symmetry of the SBM can be used to study its reflection across any horizontal
line x = a, not just across the t−axis x = 0. In case a > 0, we consider the process
that is reflected downward, so that the reflected process {Ya (t), t ≥ 0} has state-
space (−∞, a] and is defined by
Ya (t) = a − |a − B(t)|, t ≥ 0.
Similarly, in case a < 0, we consider the process that reflected upward, so that the
reflected process {Ya (t), t ≥ 0} has state-space [a, ∞) and is defined by
Ya (t) = |B(t) − a| + a, t ≥ 0.
For a = 0, we could use either of the above two definitions. The Definition 10.6
corresponds to using the upward reflection. A typical sample path of such a process
is shown in Figure 10.3.
[Figure 10.3: A typical sample path of the reflected process {Ya(t), t ≥ 0}.]
Taking the derivatives with respect to y we can derive the density given in Equa-
tion 10.25. The case a ≤ 0 is similar.
How does the partial differential equation of Theorem 10.6 account for the reflect-
ing boundaries? We state the result in the following theorem.
Theorem 10.14 Kolmogorov Equation for the Reflected SBM. Let p(t, y) be the
density Ya (t). It satisfies the partial differential equation and the boundary condi-
tions of Theorem 10.6, and it satisfies an additional boundary condition:
py (t, a) = 0, t ≥ 0.
Proof: The proof of the partial differential equation is as in Theorem 10.6. The proof
of the boundary condition is technical, and beyond the scope of this book.
Note that the partial derivative in the boundary condition of the above theorem is
to be interpreted as the right derivative if a ≤ 0 and the left derivative if a > 0. The
reader should verify that the density in Equation 10.25 satisfies the partial differential
equation and the boundary condition.
10.6 Reflected BM and Limiting Distributions
Definition 10.6 Reflected BM. Let {X(t), t ≥ 0} be a BM(µ, σ). The process
{Y(t), t ≥ 0} defined by
Y(t) = X(t) − inf_{0≤u≤t} X(u), t ≥ 0, (10.26)
is called a reflected BM.
Note that |X(t)| is a very different process than the reflected BM defined above.
The two are identical (in distribution) if µ = 0. Since we do not have symmetry,
we derive the marginal distribution of a reflected BM by deriving the Kolmogorov
equation satisfied by its density in the next theorem.
One can similarly study a Brownian motion constrained to lie in the interval [a, b]
with X(0) ∈ [a, b]: it is reflected up at a and down at b. A typical sample path of
such a BM is shown in Figure 10.4. Giving a functional form for such a BM along
the same lines as Equation 10.26 is not possible. However, we give the Kolmogorov
equation satisfied by its density in the next theorem. The proof is omitted.

[Figure 10.4: A typical sample path of a BM reflected up at a and down at b.]
Note that p_y(t, a) is to be interpreted as the right derivative, and p_y(t, b) as the
left derivative. Although one can solve the above equation analytically, the result is
complicated, and hence we do not give it here. Below we study the solution as
t → ∞.
Proof: We shall assume that the limiting distribution [p(y), a ≤ y ≤ b] exists. Hence
we expect
lim pt (t, y) = 0, y ∈ [a, b].
t→∞
Substituting in Equation 10.28 we get
d²p(y)/dy² − θ dp(y)/dy = 0,
with θ = 2µ/σ². The solution to the above equation is given by
p(y) = A exp(θy) + B, y ∈ [a, b].
The boundary conditions in Theorem 10.16 reduce to
θp(a) = p′ (a),
θp(b) = p′ (b).
Both these conditions yield only one equation: B = 0. The remaining constant A
can be evaluated by using
∫_a^b p(y) dy = 1.
This yields the solution given in Equation 10.29.
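With B = 0 and A fixed by the normalization, the limiting density is p(y) = θ e^{θy}/(e^{θb} − e^{θa}), a ≤ y ≤ b. As a sanity check (our sketch, with arbitrary parameters), the following Python snippet simulates a BM(µ, σ) reflected at both a and b with a small-step Euler scheme and compares the long run sample mean with the mean of this density:

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma, a, b = -0.5, 1.0, -1.0, 1.0
    theta = 2 * mu / sigma**2

    def reflect(y):
        # Fold y back into [a, b] by repeated reflection at the two boundaries.
        while y < a or y > b:
            if y < a: y = 2 * a - y
            if y > b: y = 2 * b - y
        return y

    dt, n_steps = 1e-3, 500000
    y, total = 0.0, 0.0
    for _ in range(n_steps):
        y = reflect(y + mu * dt + sigma * np.sqrt(dt) * rng.normal())
        total += y

    # Mean of p(y) = theta e^{theta y}/(e^{theta b} - e^{theta a}) on [a, b].
    eb, ea = np.exp(theta * b), np.exp(theta * a)
    print("empirical mean:", total / n_steps,
          " theoretical:", (b * eb - a * ea) / (eb - ea) - 1 / theta)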
Example 10.2 Linear Martingale. Let {X(t), t ≥ 0} be a BM(µ, σ), and define
Y (t) = X(t) − µt, t ≥ 0.
Show that {Y (t), t ≥ 0} is a Martingale.
Note that conditioning on {Y (u) : 0 ≤ u ≤ s} is equivalent to conditioning on
{X(u) : 0 ≤ u ≤ s}. From the properties of BM we see that X(t + s) − X(s) is
independent of {X(u) : 0 ≤ u ≤ s} and has N (µt, σ 2 t) distribution. Using this we
get
E(Y(t + s) | Y(u) : 0 ≤ u ≤ s)
= E(X(t + s) − µ(t + s) | X(u) : 0 ≤ u ≤ s)
= E(X(t + s) − X(s) − µt + X(s) − µs | X(u) : 0 ≤ u ≤ s)
= E(X(t + s) − X(s) − µt | X(u) : 0 ≤ u ≤ s) + E(X(s) − µs | X(u) : 0 ≤ u ≤ s)
= X(s) − µs = Y(s).
Thus {Y (t), t ≥ 0} is a Martingale. As a special case we see that the SBM {B(t), t ≥
0} is a Martingale.
Example 10.3 Quadratic Martingale. Let {B(t), t ≥ 0} be the SBM, and define
Y (t) = B 2 (t) − t, t ≥ 0.
Show that {Y (t), t ≥ 0} is a Martingale.
Using the same arguments as in Example 10.2, we get
E(Y (t + s)|Y (u) : 0 ≤ u ≤ s)
= E(B 2 (t + s) − (t + s)|B 2 (u) − u : 0 ≤ u ≤ s)
= E((B(t + s) − B(s) + B(s))2 − (t + s)|B(u) : 0 ≤ u ≤ s)
= E((B(t + s) − B(s))2 |B(u) : 0 ≤ u ≤ s)
+E(2(B(t + s) − B(s))B(s)|B(u) : 0 ≤ u ≤ s)
+E(B(s)2 |B(u) : 0 ≤ u ≤ s) − (t + s)
= t + B 2 (s) − (t + s)
= B 2 (s) − s = Y (s).
Thus {B 2 (t) − t, t ≥ 0} is a Martingale.
One of the main results about Martingales is the optional sampling theorem (also
called the stopping theorem), stated below for the continuous time case. We do not
include the proof of this theorem here. A similar theorem holds for discrete time
Martingales as well. We illustrate the theorem by using it to derive the results of
Theorem 10.11.
Example 10.5 Let {X(t), t ≥ 0} be a BM(µ, σ). Let a < 0 and b > 0 be given,
and Tab be the first time the BM visits a or b. From Example 10.4 we conclude that
exp(θX(t)) is a Martingale if we choose θ = −2µ/σ².
Since X(t) ∼ N (µt, σ 2 t), we see that X(t) will eventually go below a or above b
with probability 1. Hence P(Tab < ∞) = 1. Also a ≤ X(min(Tab , t)) ≤ b for all
t ≥ 0. Hence we can apply Theorem 10.18. We have
E(exp(θX(Tab ))) = E(exp(θX(0))) = 1.
Now let α = P(X(Tab ) = b) be the probability that the BM visits b before it visits
a. Then
1 = E(exp(θX(Tab ))) = exp(θb)α + exp(θa)(1 − α).
Solving for α we get the result in Equation 10.19.
To derive E(Tab ) we use the linear Martingale X(t) − µt of Example 10.2. Using
Theorem 10.18 we get
E(X(Tab ) − µTab ) = E(X(0) − µ0) = 0.
Hence
E(Tab) = E(X(Tab))/µ = (bα + a(1 − α))/µ.
Simplifying this we get Equation 10.20.
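Both results are easy to check by simulation. The following Python sketch (ours; the discretization introduces a slight overshoot bias that shrinks with the time step) estimates α = P(X(Tab) = b) and E(Tab) for a BM(µ, σ) and compares them with the formulas just derived:

    import numpy as np

    rng = np.random.default_rng(2)
    mu, sigma, a, b = 0.2, 1.0, -1.0, 2.0
    theta = -2 * mu / sigma**2
    dt, n_paths = 1e-3, 2000

    hits_b, times = 0, []
    for _ in range(n_paths):
        x, t = 0.0, 0.0
        while a < x < b:                      # run until the BM exits (a, b)
            x += mu * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        hits_b += (x >= b)
        times.append(t)

    alpha = (np.exp(theta * a) - 1) / (np.exp(theta * a) - np.exp(theta * b))  # v(0)
    print("P(hit b first):", hits_b / n_paths, "  formula:", alpha)
    print("E(Tab):", np.mean(times), "  formula:", (b * alpha + a * (1 - alpha)) / mu)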
10.8 Cost/Reward Models
Let X(t) be the state of a system at time t, and suppose {X(t), t ≥ 0} is a BM(µ, σ).
Suppose the system incurs costs at rate f (x) whenever it is in state x. Using α ≥ 0 as
a continuous discount factor, we see that the expected total discounted cost (ETDC)
over [0, t] starting in state x is given by
c(t, x) = E( ∫_0^t e^{−αu} f(X(u)) du | X(0) = x ), x ∈ R.
The next theorem gives the partial differential equation satisfied by the function c.
Theorem 10.19 ETDC Over [0, t]. The function c(t, x) satisfies the following partial
differential equation:
c_t(t, x) = f(x) − αc(t, x) + µ c_x(t, x) + (σ²/2) c_xx(t, x). (10.32)
The boundary condition is c(0, x) = 0.
Proof: We shall do an infinitesimal first step analysis and derive a differential equa-
tion for c(t, x). We know that X(h) − X(0) ∼ N (µh, σ 2 h). Using fh (·) as the pdf
of X(h) − X(0) we get
c(t, x) = E( ∫_0^t e^{−αu} f(X(u)) du | X(0) = x )
= ∫_{y∈R} E( ∫_0^t e^{−αu} f(X(u)) du | X(h) − X(0) = y, X(0) = x ) f_h(y) dy
= ∫_{y∈R} [ f(x)h + E( ∫_h^t e^{−αu} f(X(u)) du | X(h) = x + y ) + o(h) ] f_h(y) dy
= ∫_{y∈R} [ f(x)h + e^{−αh} c(t − h, x + y) + o(h) ] f_h(y) dy.
Example 10.6 Quadratic Cost Model. Suppose f (x) = βx2 , where β > 0 is a
fixed constant. Compute c(t, 0).
We have
c(t, 0) = E( ∫_0^t e^{−αu} βX²(u) du | X(0) = 0 )
= ∫_0^t e^{−αu} β E(X²(u) | X(0) = 0) du
= ∫_0^t e^{−αu} β(σ²u + µ²u²) du
= βσ²(1 − e^{−αt}(1 + αt))/α² + 2βµ²(1 − e^{−αt}(1 + αt + α²t²/2))/α³.
If we let t → ∞, we get the ETDC over the infinite horizon as
c(∞, 0) = 2β(µ² + ασ²/2)/α³.
If we let α → 0 in the expression for c(t, 0) we get the expected total cost over [0, t]
as
β(σ²t²/2 + µ²t³/3).
This implies that the average cost per unit time c(t, 0)/t goes to infinity as t → ∞.
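The closed form for c(t, 0) can be checked by numerical integration, since the integral representation in the display above is an ordinary integral. A minimal sketch (ours, with arbitrary parameter values):

    import numpy as np
    from scipy.integrate import quad

    beta, alpha, mu, sigma, t = 2.0, 0.3, 0.5, 1.2, 4.0

    # Numerical value of c(t, 0) = integral of e^{-alpha u} beta (sigma^2 u + mu^2 u^2).
    num, _ = quad(lambda u: np.exp(-alpha*u) * beta * (sigma**2*u + mu**2*u**2), 0, t)

    # Closed form derived in Example 10.6.
    closed = (beta*sigma**2*(1 - np.exp(-alpha*t)*(1 + alpha*t))/alpha**2
              + 2*beta*mu**2*(1 - np.exp(-alpha*t)*(1 + alpha*t + alpha**2*t**2/2))/alpha**3)
    print(num, closed)   # the two values agree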
In many applications the cost rate at time t is given by f (t, X(t)). In this case we
define the ETDC over [0, t] as
c(t, x) = E( ∫_0^t e^{−αu} f(u, X(u)) du | X(0) = x ), x ∈ R.
In this case we cannot derive a differential equation for c(t, x). Instead, we use the
brute force formula
c(t, x) = ∫_0^t e^{−αu} E(f(u, X(u)) | X(0) = x) du, x ∈ R.
Since we know the distribution of X(u), E(f(u, X(u))) is available in closed form if
f is reasonably simple. Hence the integral can be evaluated as a standard Riemann
integral.
In c(t, x) we computed the ETDC over [0, t], where t is a fixed constant. In many
applications we are interested in the ETDC over [0, T ], where T is a random vari-
able, typically a stopping time for the underlying process. Here we consider the case
where T = Tab , where Tab is the first passage time to the set {a, b}, as defined in
Equation 10.17. Let
c(x) = E( ∫_0^{Tab} e^{−αu} f(X(u)) du | X(0) = x ), a ≤ x ≤ b.
The next theorem describes the differential equation satisfied by c(x) and also the
solution.
Theorem 10.20 ETDC Over [0, Tab]. The function c(x) satisfies the following
differential equation:
−αc(x) + µ dc(x)/dx + (σ²/2) d²c(x)/dx² = −f(x). (10.34)
The boundary condition is c(a) = c(b) = 0.
Example 10.7 Control Policy for an SBM. Suppose the state of a system evolves
according to an SBM. When the system is in state x, the system incurs holding cost
at rate cx2 . We have an option of instantaneously changing the state of the system to
zero at any time by paying a fee of K. Once the system state is reset, it evolves as
an SBM until the next intervention. Since the stochastic evolution is symmetric, we
use the following control policy parameterized by a single parameter b > 0: reset the
system state back to state 0 when it reaches state b or −b. Compute the value of b that
minimizes the long run cost per unit time of operating the system.
Let Y (t) be the state of the system at time t, and let
T = min{t ≥ 0 : Y (t) = b or Y (t) = −b}.
Then Y (T +) = Y (0) = 0, and the system regenerates at time T . Let c(0) be the
expected holding cost over (0, T] starting from state 0. From Equation 10.37 we get
c(0) = c[ ∫_0^b (b − u)u² du + ∫_{−b}^{0} (u + b)u² du ] = cb⁴/6. (10.38)
Thus the total cost over a cycle (0, T] is K + cb⁴/6. The expected length of the
cycle is given by Equation 10.22 as E(T) = b². From the results on renewal reward
processes we get the long run cost per unit time as
(K + cb⁴/6)/b².
This is a convex function of b > 0 and is minimized at
b* = (6K/c)^{1/4}.
Note that b∗ increases as c decreases or K increases, as expected.
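A two-line numerical check (ours, with arbitrary K and c) of the optimization: evaluate the long run cost rate on a grid of b values and compare the grid minimizer with b* = (6K/c)^{1/4}:

    import numpy as np

    K, c = 10.0, 3.0
    b = np.linspace(0.1, 5, 1000)
    cost = (K + c*b**4/6) / b**2          # long run cost per unit time
    b_star = (6*K/c)**0.25                # minimizer from Example 10.7
    print("argmin on grid:", b[np.argmin(cost)], "  b* =", b_star)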
Although we have defined the concept of p-variation for an SBM, the same definition
can be used to define the p-variation of any continuous time stochastic process with
piecewise continuous sample paths.
Proof: Let X^k(t) be as defined in Equation 10.3. We begin by computing the total
variation of the {X^k(t), t ≥ 0} process over [0, t]. Since the sample paths of
{X^k(t), t ≥ 0} are piecewise constant functions of time t, we see that the total
variation of X^k over [0, t] is just the sum of the absolute values of all the jumps over
[0, t]. Since the jumps in the sample paths of {X^k(t), t ≥ 0} are of size ±1/√k at
all integer multiples of 1/k, we get
V¹_{X^k}(t) = Σ_{n=1}^{[kt]} |X^k(n/k) − X^k((n−1)/k)| = Σ_{n=1}^{[kt]} 1/√k = [kt]/√k.
Similarly, the quadratic variation of X^k is given by
V²_{X^k}(t) = Σ_{n=1}^{[kt]} |X^k(n/k) − X^k((n−1)/k)|² = Σ_{n=1}^{[kt]} (1/√k)² = [kt]/k.
Now, we know that {X^k(t), t ≥ 0} converges to {B(t), t ≥ 0} as k → ∞. Hence the
total and quadratic variations of {X^k(t), t ≥ 0} converge to those of {B(t), t ≥ 0}
as k → ∞. (This needs to be proved, but we omit this step.) Hence we get
V¹_B(t) = lim_{k→∞} V¹_{X^k}(t) = lim_{k→∞} [kt]/√k = ∞,
and
V²_B(t) = lim_{k→∞} V²_{X^k}(t) = lim_{k→∞} [kt]/k = t.
The result about higher-order variation follows similarly. This "proves" the
theorem.
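The dichotomy between total and quadratic variation is easy to see numerically. The following sketch (ours) evaluates both variations of a simulated SBM over [0, t] on finer and finer grids: the total variation blows up like √k, while the quadratic variation settles near t:

    import numpy as np

    rng = np.random.default_rng(3)
    t = 2.0
    for k in [10, 1000, 100000]:
        dB = rng.normal(0, np.sqrt(t/k), size=k)  # SBM increments on a k-point grid
        total_var = np.abs(dB).sum()              # grows without bound as k -> infinity
        quad_var = (dB**2).sum()                  # converges to t
        print(f"k={k:6d}  total variation={total_var:8.2f}  quadratic variation={quad_var:.3f}")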
Thus we cannot use Stieltjes integrals to define the integral in Equation 10.39.
What we are facing is a new kind of integral, first properly defined by Ito.
We shall use a severely simplified version of that definition here.
Definition 10.9 Ito Integral. Let f(t, x) be a continuous function in t and x. Suppose
∫_0^t E(f²(u, B(u))) du < ∞.
Then the Ito integral of f(t, B(t)) with respect to B(t) is defined as
∫_0^t f(u, B(u)) dB(u) = lim_{k→∞} Σ_{n=1}^{k} f((n−1)t/k, B((n−1)t/k)) [B(nt/k) − B((n−1)t/k)]. (10.41)
Here the limit is defined in the mean-squared sense.
Note that the above definition is very similar to the definition of the Stieltjes integral,
except that we insist on using the value of the function f at the left ends of the
intervals. Thus, for the n-th interval ((n−1)t/k, nt/k], we use the value
f((n−1)t/k, B((n−1)t/k)) at the left end of the interval, multiply it by the increment
in the SBM over the interval, and then sum these products over all the intervals. This
choice is very critical in the definition of the Ito integral and has very important
implications.
Theorem 10.22 Linearity of the Ito Integral. The Ito integral is a linear operator, i.e.,
∫_0^t (a f(u, B(u)) + b g(u, B(u))) dB(u) = a ∫_0^t f(u, B(u)) dB(u) + b ∫_0^t g(u, B(u)) dB(u).
Note that the Ito integral of Equation 10.41 is a random variable, and hence it
makes sense to compute its moments. The next theorem gives the first two moments.
Theorem 10.23 Moments of the Ito Integral.
E( ∫_0^t f(u, B(u)) dB(u) ) = 0, (10.42)
E[ ( ∫_0^t f(u, B(u)) dB(u) )² ] = ∫_0^t E(f²(u, B(u))) du. (10.43)
Now let k → ∞. The sum on the left hand side reduces to the quadratic variation of
the SBM over [0, t], while the sum on the right hand side reduces to the Ito integral
∫ B dB. Hence, using Theorem 10.21 we get
V²_B(t) = t = B²(t) − 2 ∫_0^t B(u) dB(u).
This gives
∫_0^t B(u) dB(u) = (B²(t) − t)/2.
Note that there is an unexpected t/2 term in the integral! This is the contribution of
the non-zero quadratic variation that is absent in the standard calculus, but plays an
important part in the Ito calculus!
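The left-endpoint sums of Equation 10.41 converge to this value, t/2 term and all. The following sketch (ours) demonstrates this for a single simulated path:

    import numpy as np

    rng = np.random.default_rng(4)
    t, k = 1.0, 100000
    dB = rng.normal(0, np.sqrt(t/k), size=k)
    B = np.concatenate(([0.0], np.cumsum(dB)))   # B at grid points 0, t/k, ..., t

    # Left-endpoint (Ito) sum of the integral of B dB, as in Equation 10.41.
    ito_sum = np.sum(B[:-1] * dB)
    print(ito_sum, (B[-1]**2 - t) / 2)           # the two agree for large k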
We have seen the terms B(t) and B²(t) − t as examples of Martingales; see Examples
10.2 and 10.3. Is it just a coincidence that these two Ito integrals turned out to be
Martingales? The next theorem shows that this is a general property of Ito integrals.
We omit the proof, since it is too technical for this book.
One can extend the class of functions f for which the Ito integral can be defined
to the functions of the type f (t, X[0 : t]) where X[0 : t] is short for {X(u) : 0 ≤
u ≤ t}. However, we shall not use this generality in this book. We refer the reader to
many excellent texts on this subject for further information.
We develop three peculiar integrals below that will help us in the development of
stochastic differential calculus in the next section. Following Equation 10.41 we have
∫_0^t dB(u) dB(u) = lim_{k→∞} Σ_{n=1}^{k} (B(nt/k) − B((n−1)t/k))² = V²_B(t) = t,
∫_0^t du dB(u) = lim_{k→∞} Σ_{n=1}^{k} (1/k)(B(nt/k) − B((n−1)t/k)) = lim_{k→∞} B(t)/k = 0,
∫_0^t du du = lim_{k→∞} Σ_{n=1}^{k} (1/k)² = lim_{k→∞} k/k² = 0.
Theorem 10.24 shows that we can think of the Ito integral as a continuous time
stochastic process. In fact it motivates us to define more general stochastic processes
as defined below.
Definition 10.10 Diffusion Process. Let µ(t, x) and σ(t, x) be continuous functions
of t and x. Suppose
∫_0^t E(σ²(u, X(u))) du < ∞.
Define
X(t) = X(0) + ∫_0^t µ(u, X(u)) du + ∫_0^t σ(u, X(u)) dB(u), t ≥ 0. (10.46)
Then {X(t), t ≥ 0} is called a diffusion process, µ is called its drift function, and σ
the diffusion function.
Note that the first integral is a simple Riemann integral, while the second inte-
gral is an Ito integral. Diffusion processes are a special case of Ito processes, which
are defined as in Equation 10.46 with more general functions µ(t, X[0 : t]) and
σ(t, X[0 : t]). We shall restrict ourselves to diffusion processes for simplicity. It is
customary to write the integral equation in Equation 10.46 in a notationally equiva-
lent stochastic differential equation form as follows
dX(t) = µ(t, X(t))dt + σ(t, X(t))dB(t). (10.47)
One can even interpret the above stochastic differential equation as follows: if
X(t) = x, the increment X(t+dt)−X(t) in the X process over the interval (t, t+dt)
is a sum of two components: (1) a deterministic drift component given by µ(t, x)dt,
and (2) a random diffusion component σ(t, x)dB(t) that is a N (0, σ 2 (t, x)dt) ran-
dom variable.
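This interpretation also suggests the standard way to simulate a diffusion process numerically: replace dt by a small step and the dB(t) increment by a N(0, dt) sample. The sketch below (ours; the scheme is known as the Euler-Maruyama method, and the drift/diffusion choices are arbitrary illustrations) implements it:

    import numpy as np

    rng = np.random.default_rng(5)

    def euler_maruyama(mu, sigma, x0, t, n_steps):
        # Generate one approximate sample path of dX = mu(t,X)dt + sigma(t,X)dB.
        dt = t / n_steps
        x = np.empty(n_steps + 1)
        x[0] = x0
        for n in range(n_steps):
            dB = np.sqrt(dt) * rng.normal()   # N(0, dt) increment of the SBM
            s = n * dt
            x[n+1] = x[n] + mu(s, x[n])*dt + sigma(s, x[n])*dB
        return x

    # Illustration with mu(t, x) = -x and sigma(t, x) = 1.
    path = euler_maruyama(lambda s, x: -x, lambda s, x: 1.0, x0=0.0, t=10.0, n_steps=10000)
    print(path[-1])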
Equations 10.48 and 10.49 are known as Ito's formula. The formula remains valid
if we replace the µ(t, X(t)) and σ(t, X(t)) functions by the path dependent functions
µ(t, X[0 : t]) and σ(t, X[0 : t]). However, we do not prove it in that generality here.
If X(t) = B(t), i.e., if µ(t, x) = 0 and σ(t, x) = 1, Ito's formula reduces to
g(t, B(t)) = g(0, 0) + ∫_0^t [g_t(u, B(u)) + (1/2) g_xx(u, B(u))] du + ∫_0^t g_x(u, B(u)) dB(u). (10.50)
We can write the above in an equivalent differential form as follows:
dg(t, B(t)) = [g_t(t, B(t)) + (1/2) g_xx(t, B(t))] dt + g_x(t, B(t)) dB(t). (10.51)
The above formula can be used to compute stochastic integrals defined by Equation
10.41. Let
G(t, x) = ∫_0^x g(t, y) dy.
Then, using g(t, x) in place of g_x(t, x), Equation 10.50 can be written in a more
useful form as
∫_0^t g(u, B(u)) dB(u) = G(t, B(t)) − ∫_0^t [G_t(u, B(u)) + (1/2) g_x(u, B(u))] du. (10.52)
We illustrate the use of the above formula by several examples below.

Example 10.10 Compute ∫_0^t B(u) dB(u) using Equation 10.52. Here g(t, x) = x,
so that G(t, x) = x²/2, G_t = 0, and g_x = 1. Hence Equation 10.52 yields
∫_0^t B(u) dB(u) = B²(t)/2 − t/2, in agreement with the direct computation above.
Let X(t) be the price of a stock at time t. One of the simplest financial derivatives
is the European call option. (American call options are discussed at the end
of this section.) A broker sells this option at a price C to a buyer. It gives the buyer
(called the owner) the right (but not the obligation) to buy one share of this stock at a
pre-specified time T in the future (called the maturity time, or expiry date), at a pre-
specified price K (called the strike price). Clearly, if the stock price X(T ) at time
T is greater than K, it makes sense for the owner to exercise the option, since the
owner can buy the stock at price K and immediately sell it at price X(T ) and realize
a net profit of X(T ) − K. On the other hand, if X(T ) ≤ K, it makes sense to let the
option lapse. Thus, the payout to the owner of this contract is max(X(T ) − K, 0)
at time T . How much should the broker sell this option for, i.e., what should be the
value of C? This is the famous option pricing problem.
Before we can settle the question of evaluating the proper value of C, we need
to know what else we can do with the money. We assume that we can either invest
it in the stock itself, or put it in a risk-free savings account that yields a continuous
fixed rate of return r. Thus one dollar invested in the stock at time 0 will be worth
$X(t)/X(0) at time t, while one dollar invested in the savings account will be worth
ert at time t.
One would naively argue that the value of C should be given by the expected
discounted (discount factor r) value of the option payout at time T , i.e.,
C = e−rT E(max(X(T ) − K, 0)). (10.63)
One can show (see any book on mathematical finance) that if µ ≠ r, the broker can
make a positive profit with probability one by judiciously investing the above proceeds
of $C in the stock and the savings account. Such a possibility is called an arbitrage
opportunity, and it cannot exist in a perfect market. Hence we need to evaluate C
assuming that µ = r. The next theorem gives the expression for C when µ = r. This
is the celebrated Black-Scholes formula.
Proof: Define
X(T)⁺ = X(T) if X(T) > K, and 0 if X(T) ≤ K,
and
K⁺ = K if X(T) > K, and 0 if X(T) ≤ K.
Then
max(X(T) − K, 0) = X(T)⁺ − K⁺.
The value of the option as given in Equation 10.63 reduces to
C = e^{−rT}[E(X(T)⁺) − E(K⁺)], (10.67)
where we use µ = r to compute the expectations. Next we compute the two expectations
above. Using Equation 10.62 we get
E(X(T)⁺) = E( exp(ln X(0) + (r − σ²/2)T + σ√T Z) · 1_{ln X(0) + (r − σ²/2)T + σ√T Z > ln K} )
= exp(ln X(0) + (r − σ²/2)T) E( exp(σ√T Z) 1_{Z > c1} ),
where
c1 = (ln(K/X(0)) − (r − σ²/2)T)/(σ√T).
Now we can show by direct integration that
E(exp(aZ) 1_{Z > b}) = e^{a²/2} Φ(a − b), (10.68)
for positive a. Substituting in the previous equation, we get
E(X(T)⁺) = X(0) exp((r − σ²/2)T) exp(σ²T/2) Φ(d1) = X(0) e^{rT} Φ(d1),
where d1 = σ√T − c1 is as given in Equation 10.65. Next, we have
E(K⁺) = E(K 1_{X(T) > K}) = K P(X(T) > K)
= K P(ln X(0) + (r − σ²/2)T + σ√T Z > ln K)
= K P(Z > c1) = K P(Z ≤ d2),
where d2 = −c1 is as given in Equation 10.66. Substituting in Equation 10.67 we
get Equation 10.64.
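Putting the two expectations together gives C = X(0)Φ(d1) − Ke^{−rT}Φ(d2), with d1 = (ln(X(0)/K) + (r + σ²/2)T)/(σ√T) and d2 = d1 − σ√T, exactly as derived above. The following sketch (ours, with arbitrary parameter values) implements this formula and checks it against a direct Monte Carlo evaluation of Equation 10.63 with µ = r:

    import numpy as np
    from scipy.stats import norm

    def bs_call(x0, K, r, sigma, T):
        # European call price: C = x0 Phi(d1) - K e^{-rT} Phi(d2).
        d1 = (np.log(x0/K) + (r + sigma**2/2)*T) / (sigma*np.sqrt(T))
        d2 = d1 - sigma*np.sqrt(T)
        return x0*norm.cdf(d1) - K*np.exp(-r*T)*norm.cdf(d2)

    # Monte Carlo check of C = e^{-rT} E(max(X(T) - K, 0)) with mu = r.
    rng = np.random.default_rng(6)
    x0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0
    XT = x0*np.exp((r - sigma**2/2)*T + sigma*np.sqrt(T)*rng.normal(size=1000000))
    print(bs_call(x0, K, r, sigma, T), np.exp(-r*T)*np.maximum(XT - K, 0).mean())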
One can also study the European put option in a similar way. It gives the buyer the
right (but not the obligation) to sell one share of the stock at time T at price K. The
payout of this option at time T is max(K − X(T ), 0). We leave it to the reader to
prove the following theorem.
There are options called the American call and put options, with maturity time T
and strike price K. These are the same as the European options, except that they may
be exercised at any time t ∈ [0, T]. If an American call option is exercised at time t,
then its payout is max(X(t) − K, 0). Similarly, the payout from an American put
option exercised at time t is max(K − X(t), 0). One can show that under the
assumption that the stock price satisfies Equation 10.62, it is optimal to exercise an
American call option at maturity. Thus its value is the same as that of the European
option. Pricing the American put option is much harder for a finite maturity date.
However, if the maturity date is infinite (this case is called the perpetual American
put option) one can evaluate it analytically. We state the main result in the following
theorem.
Note that the result of the above theorem does not depend upon µ. As we have seen
before, this is because we need to do the calculations under the assumption µ = r in
order to avoid arbitrage.
10.1 Let {B(t), t ≥ 0} be an SBM. Show that Cov(B(s), B(t)) = min(s, t).
10.3 Let {X(t), t ≥ 0} be a BM(µ, σ). Compute the joint density of [X(t1 ), X(t2 )],
where 0 < t1 < t2 .
10.4 Let {B(t), t ≥ 0} be an SBM. Let 0 < s < t. Show that, given B(t) = y,
B(s) is a normal random variable with mean ys/t and variance s(t − s)/t.
10.5 Let {X(t), t ≥ 0} be a BM(µ, σ). Let 0 < s < t. Compute the conditional
density of X(s) given X(t) = y.
10.6 Let t ∈ (0, 1). Show that B(t) − tB(1) has the same distribution as the
conditional distribution of B(t) given B(1) = 0.
10.7 Verify that the solution given in Equation 10.8 satisfies the partial differential
equation given in Equation 10.6.
10.8 Prove Theorem 10.7. Show that the pdf of a N (µt, σ 2 t) random variable sat-
isfies this equation.
10.10 Let {X(t), t ≥ 0} be a BM(µ, σ). If we stop the process at time t, we earn a
discounted reward equal to e−αt X(t). Suppose we use the following policy: stop the
process as soon as it reaches a state a. Find the optimal value of a that maximizes the
expected discounted reward of this policy.
10.12 Let {X(t), t ≥ 0} be a BM(µ, σ). Find functions a(t) and b(t) such that
X 2 (t) + a(t)X(t) + b(t) is a Martingale. Hint: Use Example 10.3.
10.13 Let {X(t), t ≥ 0} be a BM(µ, σ). Find the functions a(t), b(t), and c(t) such
that X 3 (t) + a(t)X 2 (t) + b(t)X(t) + c(t) is a Martingale. Hint: Use Conceptual
Exercise 10.3.
10.14 Let m₂(x) = E(T²ab | X(0) = x), where Tab is the first passage time as defined
in Equation 10.17 in a BM(µ, σ). Show that m₂(·) satisfies the following differential
equation
(σ²/2) m₂″(x) + µ m₂′(x) = −2m(x), a < x < b,
where m(x) = E(Tab | X(0) = x). The boundary conditions are m₂(a) = m₂(b) = 0.
10.15 Let {X(t), t ≥ 0} be a BM(µ, σ) with X(0) = 0 and µ < 0, and define
M = max_{t≥0} X(t). Show that M is an exp(−2µ/σ²) random variable. Hint: Argue
that
P(M < y) = lim_{a→−∞} P(X(Tay) = a),
and use Theorem 10.11.
10.16 Let {X(t), t ≥ 0} be a BM(µ, σ) with X(0) = 0 and µ > 0, and define
L = min_{t≥0} X(t). Use the hint in Computational Exercise 10.15 to compute the
distribution of L.
10.17 Use the exponential Martingale of Example 10.4 to derive Equation 10.13.
Hint: Set
θµ + θ2 σ 2 /2 = s
and use Theorem 10.18.
10.18 Let X(t) be the price of a stock at time t. Suppose {X(t), t ≥ 0} is the
geometric Brownian motion defined by Equation 10.55. Thus a dollar invested in this
stock at time u will be worth X(t)/X(u) at time t > u. Consider a static investment
strategy by which we invest in this stock fresh money at a rate $d at all times t ≥ 0.
Let Y (t) be total value of the investment at time t, assuming that Y (0) = 0. Compute
E(Y (t)).
10.19 Consider the optimal policy derived in Example 10.7. Using the regenerative
nature of {Y (t), t ≥ 0} defined there, compute the limiting mean and variance of
Y (t) as t → ∞.
10.20 Let Y (t) be the level of inventory at time t. Assume that Y (0) = q > 0.
When the inventory reaches 0, an order of size q is placed from an outside source.
Assume that the order arrives instantaneously, so that the inventory level jumps to
q. Between two orders the Y process behaves like a BM(µ, σ) starting in state q,
with µ < 0. Suppose it costs h dollars to hold one unit of the inventory for one
unit of time, and the restoration operation costs K. Find the optimal value of q that
minimizes the long run expected cost per unit time.
10.21 Using Definition 10.9 show that
∫_0^t B²(u) dB(u) = (1/3)B³(t) − ∫_0^t B(u) du.
10.1 Let T be a stopping time for the stochastic process {X(t), t ≥ 0}. Prove or
disprove the following statements:
10.2 Let {X(t), t ≥ 0} be a BM(µ, σ). Are the following random variables stopping
times for the BM?
T1 = min{t ≥ 0 : ∫_0^t X(u) du ≥ 1},
10.11 Proof of Theorem 10.28. Consider the perpetual American put option with
strike price K for the stock price process given in Theorem 10.28. Suppose X(0) >
K, for otherwise the buyer would exercise the option right away with a payout of
K − X(0). Let Π(a) be a policy that exercises the option as soon as the stock price
falls below a given constant 0 < a < K. Thus the payout under policy Π(a) is
C(a, x) = E(e−rTa (K − a)|X(0) = x), x>K
where
Ta = min{t ≥ 0 : X(t) = a}
and the expected value is computed under the assumption that µ = r. Show that
C(a, x) = (K − a)(a/x)^{2r/σ²}, x > a.
Hint: First show that Ta is the same as the first time a BM(r − σ²/2, σ), starting from
ln(X(0)), reaches ln(a). Then use Theorem 10.10 to compute the LST of Ta, and
then use Computational Exercise 10.10 to compute the expectations.
10.13 Proof of Theorem 10.28, continued. Starting with the result in the Conceptual
Exercise 10.11 show that the value of a that maximizes C(a, x) is given by the a∗ of
Theorem 10.28. Also, C(a∗ , x) is as given in the theorem.
Epilogue
Here ends our journey. Congratulations! Now is the time to look back to see what we
have learned and to look ahead to see what uncharted territory lies ahead.
What lies ahead? There is far more to diffusion processes than what we have covered
here. Also, we have not covered the class of stationary processes: these processes
look the same from any point in time. We saw them when studying Markov chains
starting in their stationary distributions. Stationary processes play an important role
in statistics and forecasting.
We have also ignored the topic of controlling a stochastic system. The models
studied here are descriptive – they describe the behavior of a system. They do not
tell us how to control it. Of course, a given control scheme can be analyzed using
the descriptive models. However, these models will not show how to find the optimal
control scheme. This direction of inquiry will lead us to Markov decision processes.
Each of these topics merits a whole new book by itself. We stop here by wishing
our readers well in the future as they explore these new and exciting areas.
APPENDIX A
Probability of Events
This appendix contains a brief review of the probability and analysis topics that we
use in the book. Its main aim is to act as a reference for the main results. It is not
meant to be a source from which beginners can learn these topics.
Probability Model
One can deduce the following important properties of the probability function:
P(∅) = 0, P(E ∪ F) = P(E) + P(F) − P(EF).
Conditional Probability
Bayes' Rule. One consequence of the law of total probability is Bayes' rule:
P(Ei | F) = P(F | Ei)P(Ei) / Σ_{j=1}^{∞} P(F | Ej)P(Ej).
Limits of Sets
Let E1 , E2 , · · · ∈ F. We define
lim sup En = ∩m≥1 ∪n≥m En , lim inf En = ∪m≥1 ∩n≥m En .
In words, lim sup En is the event that the events En occur infinitely often, and
lim inf En is the event that all but finitely many of the events En occur. We have the
following inequalities, Fatou’s lemma for sets:
P(lim sup En ) ≥ lim sup P(En ),
P(lim inf En ) ≤ lim inf P(En ).
The following result is called the Borel-Cantelli lemma:
Σ_{n=1}^{∞} P(En) < ∞ ⇒ P(lim sup En) = 0.
A pdf satisfies
f_X(x) ≥ 0, ∫_{−∞}^{∞} f_X(u) du = 1.
It is possible for a random variable to have a discrete part and a continuous part.
There is a third possibility: FX (x) can be continuous, but not absolutely continuous.
Such random variables are called singular, and we will not encounter them in this
book.
Then
E(g(X)) = ∫_0^∞ g′(x)(1 − F_X(x)) dx.
Expectations of special functions of X have special names. The facts about the
commonly occurring integer valued random variables are given in the tables below;
for example, the row for the Poisson distribution reads:

Distribution   Mean   Variance   GF
P(λ)           λ      λ          e^{−λ(1−z)}
Table of Common Continuous Random Variables
The facts about the commonly occurring real valued continuous random variables
are given in the tables below.
It is possible that each Xi is a continuous random variable but X is not a jointly
continuous random variable. The marginal pdf f_{Xi}(xi) of Xi from a jointly
continuous X is given by
f_{Xi}(xi) = ∫_{uj ∈ R : j ≠ i} f_X(u1, · · ·, u_{i−1}, xi, u_{i+1}, · · ·, un) du1 · · · du_{i−1} du_{i+1} · · · dun.
Independent Random Variables
Let (X1, X2) be a jointly continuous bivariate random variable. Then the pdf of
Z = X1 + X2 is given by
f_Z(z) = ∫_{−∞}^{∞} f_X(x, z − x) dx.
If X1 and X2 are independent, this reduces to
f_Z(z) = ∫_{−∞}^{∞} f_{X1}(x) f_{X2}(z − x) dx,
which is called the convolution of f_{X1} and f_{X2}. As a special case, if X1 and X2
are independent non-negative real valued random variables, we have
f_Z(z) = ∫_0^z f_{X1}(x) f_{X2}(z − x) dx, z ≥ 0.
If X1 and X2 are independent non-negative real valued random variables (discrete,
continuous, or mixed),
F_Z(z) = ∫_0^z F_{X2}(z − x) dF_{X1}(x), z ≥ 0.
The following facts about the sums of independent random variables are useful. In
particular we have
E( Σ_{i=1}^{n} Xi ) = Σ_{i=1}^{n} E(Xi),
which holds even if the Xi's are dependent. If they are independent, we also have
E( Π_{i=1}^{n} Xi ) = Π_{i=1}^{n} E(Xi).
Let (X1, X2) be a discrete bivariate random variable. Then the conditional pmf of
X1 given X2 = x2 is given by
p_{X1|X2}(x1|x2) = p_X(x1, x2)/p_{X2}(x2),
and the conditional expected value of X1 given X2 = x2 is given by
E(X1|X2 = x2) = Σ_{x1} x1 p_{X1|X2}(x1|x2).
Let (X1, X2) be a jointly continuous bivariate random variable. Then the conditional
pdf of X1 given X2 = x2 is given by
f_{X1|X2}(x1|x2) = f_X(x1, x2)/f_{X2}(x2),
and the conditional expected value of X1 given X2 = x2 is given by
E(X1|X2 = x2) = ∫ x1 f_{X1|X2}(x1|x2) dx1.
In general we can define E(X1|X2) to be a random variable that takes the value
E(X1|X2 = x2) with "probability dF_{X2}(x2)." With this interpretation we get
E(X1) = E(E(X1|X2)) = ∫ E(X1|X2 = x2) dF_{X2}(x2).
Order Statistics
In particular,
F_{Yn}(t) = F(t)^n, F_{Y1}(t) = 1 − (1 − F(t))^n.
In addition, when (X1 , X2 , · · · , Xn ) are jointly continuous with common pdf f (·),
the (Y1 , Y2 , · · · , Yn ) are also jointly continuous with joint pdf given by
fY (y1 , y2 , · · · , yn ) = n!f (y1 )f (y2 ) · · · f (yn ), y1 ≤ y2 ≤ · · · ≤ yn .
The density is zero outside the above region.
Generating Functions
Let X be a non-negative integer valued random variable with pmf {pk, k ≥ 0}. The
generating function (GF) of X (or its pmf) is defined as
g_X(z) = E(z^X) = Σ_{k=0}^{∞} z^k pk, |z| ≤ 1.
The GFs for common random variables are given in Appendix B. Important and
useful properties of the GF are enumerated below:
Then
lim_{n→∞} g_{Xn}(z) = g_X(z), |z| ≤ 1.
The converse also holds.
Let {pk, k ≥ 0} be a sequence of real numbers, not necessarily a pmf. Define its GF
as
p(z) = Σ_{k=0}^{∞} z^k pk.
Let R be its radius of convergence. We list some important and useful properties of
the GF:
1. Let q(z) be a GF of a sequence {qk, k ≥ 0}. If p(z) = q(z) for |z| < r ≤ R, then
pk = qk for all k ≥ 0.
2. Let rk = apk + bqk, k ≥ 0, where a and b are constants. Then
r(z) = ap(z) + bq(z).
3. The generating function of {kpk} is zp′(z).
4. Let q0 = p0, qk = pk − pk−1 for k ≥ 1. Then
q(z) = (1 − z)p(z).
5. Let qk = Σ_{r=0}^{k} pr for k ≥ 0. Then
q(z) = p(z)/(1 − z).
6. lim_{k→∞} (1/(k+1)) Σ_{r=0}^{k} pr = lim_{z→1} (1 − z)p(z) if the limit on either side exists.
7. lim_{k→∞} pk = lim_{z→1} (1 − z)p(z) if the limit on the left hand side exists.
where
ci = −αi P(1/αi)/Q′(1/αi), 1 ≤ i ≤ s.
APPENDIX E
Laplace-Stieltjes Transforms
Let X be a non-negative real valued random variable with cdf F_X(·). The Laplace-
Stieltjes transform (LST) of X (or its cdf) is defined as
φ_X(s) = E(e^{−sX}) = ∫_{x=0}^{∞} e^{−sx} dF_X(x), Re(s) ≥ 0.
The LSTs for common random variables are given in Appendix B. Important and
useful properties of the LST are enumerated below:
Let F : [0, ∞) → (−∞, ∞) be a function, not necessarily a cdf. Define its LST as
LST(F) = F̃(s) = ∫_{x=0}^{∞} e^{−sx} dF(x),
if the integral exists for some complex s with Re(s) > 0. It is assumed that
F(0−) = 0, so that there is a jump of size F(0+) at x = 0. We list some important
and useful properties of the LST:
1. Let a and b be given constants. Then
LST(aF + bG) = aF̃(s) + bG̃(s).
2. H(t) = ∫_0^t F(t − u) dG(u), t ≥ 0 ⇔ H̃(s) = F̃(s)G̃(s).
3. Assuming the limits exist,
lim_{t→∞} F(t) = lim_{s→0} F̃(s),
Laplace Transforms

f(t)                            f*(s)
1                               1/s
t                               1/s²
t^n                             n!/s^{n+1}
e^{−at}                         1/(s + a)
e^{−at} t^{n−1}/(n − 1)!        1/(s + a)^n
(e^{−at} − e^{−bt})/(b − a)     1/((s + a)(s + b))
APPENDIX G
Modes of Convergence
The various modes of convergence are related as follows: Almost sure convergence
implies convergence in probability which implies convergence in distribution. Mean
square convergence implies convergence in probability. Convergence in distribution
or with probability one does not imply convergence in mean. We need an additional
condition, called uniform integrability, defined as follows: A sequence of random variables
{Xn, n ≥ 0} is called uniformly integrable if for a given ε > 0, there exists a
K < ∞ such that
E(|Xn|; |Xn| > K) = ∫_{{x : |x| > K}} |x| dF_{Xn}(x) < ε, n ≥ 1.
We frequently need to interchange the operations of sums, integrals, limits, etc. Such
interchanges are in general not valid unless certain conditions are satisfied. In this
section we collect some useful sufficient conditions which enable us to do such in-
terchanges.
1. Monotone Convergence Theorem for Sums. Let π(i) ≥ 0 for all i ≥ 0 and
{gn(i), n ≥ 1} be a non-decreasing non-negative sequence for each i ≥ 0. Then
lim_{n→∞} Σ_{i=0}^{∞} gn(i)π(i) = Σ_{i=0}^{∞} (lim_{n→∞} gn(i)) π(i).
3. Fatou's Lemma for Sums. Suppose {gn(i), n ≥ 1} is a non-negative sequence for
each i ≥ 0 and π(i) ≥ 0 for all i ≥ 0. Then
Σ_{i=0}^{∞} (lim inf_{n≥1} gn(i)) π(i) ≤ lim inf_{n≥1} Σ_{i=0}^{∞} gn(i)π(i).
4. Bounded Convergence Theorem for Sums. Suppose π(i) ≥ 0 for all i ≥ 0, and
there exists a {g(i), i ≥ 0} such that Σ g(i)π(i) < ∞ and |gn(i)| ≤ g(i) for all
n ≥ 1 and i ≥ 0. Then
lim_{n→∞} Σ_{i=0}^{∞} gn(i)π(i) = Σ_{i=0}^{∞} (lim_{n→∞} gn(i)) π(i).
Suppose {g(i), i ≥ 0} is a non-negative bounded function with 0 ≤ g(i) ≤ c for
all i ≥ 0 for some c < ∞. Then
lim_{n→∞} Σ_{i=0}^{∞} g(i)πn(i) = Σ_{i=0}^{∞} g(i) (lim_{n→∞} πn(i)).
6. Monotone Convergence Theorem for Integrals. Let π(x) ≥ 0 for all x ∈ R and
{gn(x), n ≥ 1} be a non-decreasing non-negative sequence for each x ∈ R. Then
lim_{n→∞} ∫_{x∈R} gn(x)π(x) dx = ∫_{x∈R} (lim_{n→∞} gn(x)) π(x) dx.
9. Bounded Convergence Theorem for Integrals. Let π(x) ≥ 0 for all x ∈ R and
suppose there exists a function g : R → R such that ∫ g(x)π(x) dx < ∞ and
|gn(x)| ≤ g(x) for each x ∈ R and all n ≥ 1. Then
lim_{n→∞} ∫_{x∈R} gn(x)π(x) dx = ∫_{x∈R} (lim_{n→∞} gn(x)) π(x) dx.
10. Let {πn(x), n ≥ 1} be a non-negative sequence for each x ∈ R and let
π(x) = lim_{n→∞} πn(x), x ∈ R.
Suppose
lim_{n→∞} ∫_{x∈R} πn(x) dx = ∫_{x∈R} π(x) dx < ∞.
Suppose g : R → [0, ∞) is a non-negative bounded function with 0 ≤ g(x) ≤ c
for all x ∈ R for some c < ∞. Then
lim_{n→∞} ∫_{x∈R} g(x)πn(x) dx = ∫_{x∈R} g(x) (lim_{n→∞} πn(x)) dx.
APPENDIX I
where {x^p_n, n ≥ 0} is any one solution (called the particular solution) of the non-
homogeneous equation. The constants cij are to be determined by using the initial
conditions.
If the right hand side r(t) is zero for all t ≥ 0, the equation is called homogeneous;
else it is called non-homogeneous. The polynomial equation
α^n + Σ_{i=0}^{n−1} ai α^i = 0,
Chapter 2
MODELING EXERCISES
2.7 {Xn , n ≥ 0} is a space homogeneous random walk on S = {..., −2, −1, 0, 1, 2, ...}
with
pi = p1 (1 − p2 ), qi = p2 (1 − p1 ), ri = 1 − pi − qi .
2.11 Bn (Gn ) = the bar the boy (girl) is in on the nth night. {(Bn , Gn ), n ≥ 0} is a
DTMC on S = {(1, 1), (1, 2), (2, 1), (2, 2)} with the following transition probability
matrix:
P =
[ 1           0               0               0
  a(1 − d)    ad              (1 − a)(1 − d)  (1 − a)d
  (1 − b)c    (1 − b)(1 − c)  bc              b(1 − c)
  0           0               0               1 ].
The story ends in bar k if the DTMC gets absorbed in state (k, k), for k = 1, 2.
2.15 The state space is S = {(1, 2), (2, 3), (3, 1)}. The transition probability matrix
is
P =
[ 0    b21  b12
  b23  0    b32
  b13  b31  0 ].
and
p_{i,B} = 1 − Σ_{j=0}^{B−1} p_{ij},
where we use the convention that αk = 0 if k < 0.
2.25 {Xn, n ≥ 0} is a DTMC (a general simple random walk) on {0, 1, 2, ...} with
transition probabilities
p0,0 = 1, p_{i,i+1} = βi α2, p_{i,i−1} = βi α0, p_{ii} = βi α1 + 1 − βi.
2.27 {Xn, n ≥ 0} is a DTMC with state space S = {rr, dr, dd} and transition
probability matrix
P =
[ 0  1   0
  0  .5  .5
  0  0   1 ].
2.31 Xn+1 = max(Xn −1+Yn , Yn ). This is the same as the DTMC in Example 2.16.
COMPUTATIONAL EXERCISES
2.21
P =
[ 0  1   0
  0  .5  .5
  0  0   1 ],
P^n = XD^nX^{−1} =
[ 0  2^{1−n}  1 − 2^{1−n}
  0  2^{−n}   1 − 2^{−n}
  0  0        1 ], n ≥ 1.
2.23
E(Xn | X0 = i) = iµ^n,
Var(Xn | X0 = i) = inσ² if µ = 1, and iσ²µ^{n−1}(µ^n − 1)/(µ − 1) if µ ≠ 1.
CONCEPTUAL EXERCISES
2.5 No.
2.11 {|Xn|, n ≥ 0} is a random walk on {0, 1, 2, ...} with p0,1 = 1 and, for i ≥ 1,
p_{i,i+1} = (p^{i+1} + q^{i+1})/(p^i + q^i) = 1 − p_{i,i−1}.
Chapter 3
COMPUTATIONAL EXERCISES
3.3 14.5771.
3.5 2.6241.
3.7 18.
3.15 1/(p²q²).
3.27 .2.
3.29 M + 1.
CONCEPTUAL PROBLEMS
3.3 ṽi = vi, i > 0, ṽ0 = Σ_{j=1}^{∞} pij vj.
3.7 For B ⊂ A, and i ≥ 1, let u(i, B) be the conditional probability that the process
visits all states in A before visiting state 0, given that currently it is in state i and it
has visited the states in set B so far. Then
u(i, B) = Σ_{j∈A−B} p_{i,j} u(j, B ∪ {j}) + Σ_{j≠0, j∉A−B} p_{i,j} u(j, B).
3.9
wi = δ_{i,j} p_{i,0} + Σ_{k=1}^{∞} p_{i,k} wk.
Chapter 4
COMPUTATIONAL EXERCISES
4.1 All rows of P^n and M(n)/(n + 1) converge to [.132 .319 .549].
4.3 All rows of P^n and M(n)/(n + 1) converge to [1/(N + 1), 1/(N + 1), · · ·, 1/(N + 1)].
4.9 Communicating class: {A, B, C} All states are aperiodic, positive recurrent.
4.13 (a)
(b)
4.15 (i) Positive recurrent, (ii) Null recurrent, (iii) Positive recurrent.
4.21 (a) Limiting distribution: [.25 .25 .25 .25]. (b) Limiting occupancy distribution:
[1/6 1/3 1/3 1/6].
4.27 (i) πj = ((2α − 1)/α)((1 − α)/α)^j, j ≥ 0.
4.29 1/r.
4.31 πn = C(N, n)² / Σ_{j=0}^{N} C(N, j)², 0 ≤ n ≤ N, where C(N, k) denotes the
binomial coefficient.
4.33 πn = Σ_{i=n+1}^{∞} αi/τ, n ≥ 0.
4.35 φ(z) = Π_{n=0}^{∞} ψ((1 − p)^n z + 1 − (1 − p)^n).
4.37 π0 α0 + πB−1 α2 , where πj is the long run probability that the buffer has j bytes
in it.
4.39 τ1 /(τ1 + τ2 ).
4.43
(a)
[ 4/11  7/11  0  0
  4/11  7/11  0  0
  4/11  7/11  0  0
  4/11  7/11  0  0 ],
(b)
[ 90/172     27/172    55/172    0      0  0
  90/172     27/172    55/172    0      0  0
  90/172     27/172    55/172    0      0  0
  0          0         0         1      0  0
  1170/3956  351/3956  715/3956  10/23  0  0
  540/3956   162/3956  330/3956  17/23  0  0 ].
4.45 (C1 r + C2(1 − r))/T, where r = Σ_{i=1}^{K−1} pi and T = Σ_{i=1}^{K−1} i pi + K(1 − r).
CONCEPTUAL EXERCISES
4.17 The results follows from the fact that c(i) is the expected cost incurred at time
n if Xn = i.
4.21 Global balance equations are obtained by summing the local equation over all
j.
4.29 Suppose the DTMC earns one dollar every time it undergoes a transition from
state i to j, and 0 otherwise. Then use Conceptual Exercise 4.18.
4.31 g(i) = ri + Σ_{j=1}^{∞} pij g(j), i > 0.
Chapter 5
COMPUTATIONAL EXERCISES
5.1 P(Length of the shortest path > x) = exp(−λ3 x)[(λ2/(λ2 − λ1)) e^{−λ1 x} + (λ1/(λ1 − λ2)) e^{−λ2 x}].
5.3 P(Length of the longest path ≤ x) = [1 − (λ2/(λ2 − λ1)) e^{−λ1 x} − (λ1/(λ1 − λ2)) e^{−λ2 x}](1 − e^{−λ3 x}).
5.5 (1 − e^{−λx})^n.
5.7 T/(1 − e^{−λT}) − 1/λ.
5.9 2p(1 + 3q²)/(λ + µ), where p = µ/(λ + µ) and q = 1 − p.
5.25 (1/2)(1 − e^{−2λt}).
5.27 e^{−λτ}.
5.35 6.
5.37 N(t) ∼ P(Λ(t)), where Λ(t) = ∫_0^t λ(u) du = c(t − k) if 2k ≤ t < 2k + 1, and c(k + 1) if 2k + 1 ≤ t < 2k + 2.
5.39 R(t) ∼ P(∫_0^t λ(u)(1 − G(t − u)) du).
CONCEPTUAL EXERCISES
5.1 Let H(x) = P(X > x). Definition of hazard rate implies H ′ (x) = −r(x)H(x).
The solution is Equation 5.4.
5.21 It is not an NPP since it does not have independent increments property.
5.23 e^{−λπx²}.
Chapter 6
MODELING EXERCISES
6.1 State space = {0, 1, ..., k}.
qi,i+1 = (k − i)λ, 0 ≤ i ≤ k − 1,
qi,i−1 = iµ, 1 ≤ i ≤ k.
6.3 State space = {0, 1, 2, 12, 21}. The state represents the queue of failed machines.
Q =
[ −(µ1 + µ2)  µ1           µ2           0    0
  λ1          −(λ1 + µ2)   0            µ2   0
  λ2          0            −(λ2 + µ1)   0    µ1
  0           0            λ1           −λ1  0
  0           λ2           0            0    −λ2 ].
6.7 The state space = {0, 1A, 1B, 2, 3, 4, ...}. State 1A (1B) = one customer in the
system and he is being served by server A (B). State i = i customers in the system.
q0,1A = λα, q0,1B = λ(1 − α),
q1A,0 = µ1 , q1A,2 = λ,
q1B,0 = µ2 , q1B,2 = λ,
q2,1A = µ2 , q2,1B = µ1 , q2,3 = λ,
qi,i+1 = λ, qi,i−1 = µ1 + µ2 , i ≥ 3.
6.19 Let Xi (t) be the number of customers of type i in the system at time t.
{(X1 (t), X2 (t)), t ≥ 0} is a CTMC on S = {(i, j) : i ≥ 0, 0 ≤ j ≤ s} with
transition rates
q(i,j),(i+1,j) = λ1 , (i, j) ∈ S, q(i,j),(i,j+1) = λ2 , (i, j) ∈ S, j < s,
q(i,j),(i−1,j) = min(i, s − j)µ1 , q(i,j),(i,j−1) = jµ2 , (i, j) ∈ S.
6.21 Y (t) = the number of packets in the buffer, Z(t) = the number of tokens in
the token pool at time t. X(t) = M − Z(t) + Y (t). {X(t), t ≥ 0} is a birth and
death process with birth rates λi = λ, i ≥ 0 and death rates µ0 = 0, µi = µ, i ≥ 1.
COMPUTATIONAL EXERCISES
6.1 (α/(α + β))(1 − e^{−(α+β)t}).
6.3 Let αi(t) = λi/(λi + µi) + (µi/(λi + µi)) e^{−(λi+µi)t}. Then
P(X(t) = 0|X(0) = 2) = (1 − α1 (t))(1 − α2 (t)),
P(X(t) = 1|X(0) = 2) = α1 (t)(1 − α2 (t)) + α2 (t)(1 − α1 (t)),
P(X(t) = 2|X(0) = 2) = α1 (t)α2 (t).
6.5 P(X(t) = j | X(0) = i) = C(i, j) e^{−jµt}(1 − e^{−µt})^{i−j}.
6.7 (2αβ/(α + β)) t + (α(α − β)/(α + β)²)(1 − e^{−(α+β)t}).
6.9 λ(e^{−16µ} − e^{−(8λ+24µ)}) / ((λ + µ)(1 − e^{−(8λ+24µ)})).
pi = ρ^i p0, i ≥ 0, where ρ = λ/(µp).
6.15 G(z) = (1 − p)/(1 − pz).
6.19 Long run probability that the kth space is occupied = ρ(Σ_{i=0}^{k−1} ρ^i/i!)/(Σ_{i=0}^{k} ρ^i/i!) − ρ(Σ_{i=0}^{k−2} ρ^i/i!)/(Σ_{i=0}^{k−1} ρ^i/i!).
Long run fraction of the customers lost = (ρ^K/K!)/(Σ_{i=0}^{K} ρ^i/i!).
6.21 The system is stable if α > 0. State R = computer under repair. State j =
computer is functioning and there are j jobs in the system. The limiting distribution:
pR = θ/(θ + α),
pj = (α/(θ + α))(1 − b)b^j, j ≥ 0,
where
b = [1 + λ/µ + θ/µ − √((1 + λ/µ + θ/µ)² − 4λ/µ)]/2.
Long run fraction of jobs that are completed successfully = (µ/λ)(1 − pR − p0).
6.23 p0 = (1 + Σ_{k=1}^{K} λk/µk)^{−1}, pk = (λk/µk) p0, 1 ≤ k ≤ K.
6.29 Idle time fraction = 2λµ²(µ + θ + 2λ)/(8λ⁴ + 4λ³θ + 8λ³µ + 6λ²µθ + 4λµ²θ + 2λµ³ + µ³θ + 4λ²µ²).
6.31 pi = 1/N, 1 ≤ i ≤ N.
6.35 (2λ + µ)/(λ(4λ + µ)).
6.39 (1/λ)(1 − (λ/µ)^{K+1})/(1 − λ/µ).
6.47 (r/µ)(1 − c⁵)/(1 − c).
CONCEPTUAL EXERCISES
6.1 Let B be the submatrix of Q obtained by deleting row and column for the state
N . Then the matrix M = [Mi,j ] satisfies BM = −I.
6.9 Let {Y(t), t ≥ 0} have transition rates q′_{i,j} = q_{i,j}/r(i).
6.11
E(e^{−sT} | X(0) = i, S1 = y, X(y) = j) = e^{−sx/ri} if y > x/ri, and
e^{−sy} φj(s, x − y ri) if y ≤ x/ri.
The result follows by unconditioning.
Chapter 7
MODELING EXERCISES
7.1 X(t) = number of waiting customers at time t. {X(t), t ≥ 0} is the queue length
process in an M/M/1 queue with arrival rate λ and service rate µ.
7.5 $\lambda_i = \lambda$, $i \ge 0$; $\mu_i = \mu_1$ for $1 \le i \le 3$, and $\mu_i = \mu_2$ for $i \ge 4$.
7.7 The arrival process is P$(\lambda)$ with $\lambda = \sum_{i=1}^{k}\lambda_i$. Service times are iid with common cdf $\sum_{i=1}^{k}(\lambda_i/\lambda)\left(1 - e^{-\mu_i x}\right)$.
7.9 Interarrival times are deterministic (equal to 1), and the service times are iid
exp(µ).
COMPUTATIONAL EXERCISES
7.3 $W_q = \frac{1}{\mu}\cdot\frac{\rho}{1-\rho}$.
7.5 $\frac{i}{1-\rho}$.
7.23 $1 - \rho$.
7.47 0.9562.
7.53 Stability condition: $\lambda < 2\mu$. The second system has the smaller $L$.
7.59 X(t) and Xn have the same limiting distribution, which differs from that of
X̄n .
7.63 Stability condition: $\frac{\rho}{1+\rho} < \theta$. The solution to Equation 7.41:
$$\alpha = \frac{1}{2}\left((2\lambda+\mu+\theta) - \sqrt{(2\lambda+\mu+\theta)^2 - 4(\theta+\lambda)}\right).$$
Use Theorem 7.16.
Chapter 8
COMPUTATIONAL EXERCISES
8.3 Recurrent.
8.5 $P\{N(t) = k\} = e^{-\lambda t}\frac{(\lambda t)^{2k}}{(2k)!} + e^{-\lambda t}\frac{(\lambda t)^{2k+1}}{(2k+1)!}$.
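In other words, $N(t) = k$ exactly when a Poisson$(\lambda t)$ count lands in $\{2k, 2k+1\}$ — consistent with interrenewal times built from two independent exp$(\lambda)$ phases — so the probabilities automatically sum to one. A quick check (the value of $\lambda t$ is arbitrary):

```python
from math import exp, factorial

lt = 3.7   # arbitrary value of lambda * t

def p(k, lt):
    # P{N(t) = k} = P{Poisson(lt) = 2k} + P{Poisson(lt) = 2k + 1}
    return (exp(-lt)*lt**(2*k)/factorial(2*k)
            + exp(-lt)*lt**(2*k + 1)/factorial(2*k + 1))

print(sum(p(k, lt) for k in range(60)))   # prints 1.0 up to rounding
```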
8.7 $P(N(n) = k) = \binom{n}{k}(1-\alpha)^k\alpha^{n-k}$.
8.13 $\tau = m_0 = 1 + \frac{1-\alpha}{1-\beta}$, $s^2 = 2 + \alpha - \beta + 2/(1-\beta)$, $N(t) \sim N\!\left(t/\tau,\ (s^2-\tau^2)t/\tau^3\right)$.
8.15 $M(t) = \frac{\lambda_1\lambda_2}{(1-r)\lambda_1 + r\lambda_2}\,t + \frac{r(1-r)(\lambda_1-\lambda_2)^2}{\left((1-r)\lambda_1 + r\lambda_2\right)^2}\left(1 - e^{-((1-r)\lambda_1 + r\lambda_2)t}\right)$.
8.17 $M(t) = \frac{\lambda\mu}{\lambda+\mu}\,t - \frac{\lambda\mu}{(\lambda+\mu)^2}\left(1 - e^{-(\lambda+\mu)t}\right)$.
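Since $M(t)/t \to \lambda\mu/(\lambda+\mu)$, the mean interrenewal time is $1/\lambda + 1/\mu$, as the elementary renewal theorem requires. A sketch evaluating the formula (rates arbitrary):

```python
import numpy as np

lam, mu = 1.0, 2.0   # arbitrary rates

def M(t):
    # the renewal function given above
    return lam*mu/(lam + mu)*t - lam*mu/(lam + mu)**2 * (1 - np.exp(-(lam + mu)*t))

for t in (1.0, 10.0, 100.0):
    print(t, M(t), M(t)/t)   # M(t)/t approaches lam*mu/(lam+mu) = 2/3
```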
8.33 $\mu_i/\mu$.
8.37 $a\,\min\!\left(1,\ \frac{C_2}{C_1}\left[\sqrt{1 + \frac{2C_1}{C_2}} - 1\right]\right)$.
8.41 Let $X(t) = -1$ if the machine is under repair at time $t$. If the machine is up at time $t$, let $X(t)$ be the cumulative damage at time $t$ (since the last repair). Then $\{X(t), t \ge 0\}$ is an SMP with state space $\{-1, 0, 1, 2, \ldots, K\}$. The non-zero entries of the kernel $G(x) = [G_{i,j}(x)]$ are given by
$$G_{-1,0}(x) = A(x),$$
$$G_{i,j}(x) = \left(1 - e^{-\lambda x}\right)\alpha_{j-i}, \quad 0 \le i < j \le K,$$
$$G_{i,-1}(x) = \left(1 - e^{-\lambda x}\right)\sum_{j=K+1-i}^{\infty}\alpha_j, \quad 0 \le i \le K.$$
8.45 M/G/1/1: $\frac{\tau}{1+\lambda\tau}$; G/M/1/1: $\tilde{G}(\mu)$.
8.47 Let $\alpha_i = (\lambda_i^2 + 2\lambda_i\mu_i)/(\lambda_i+\mu_i)^2$, $\rho_j = \prod_{i=1}^{j-1}\alpha_i$, $\pi_j = \rho_j/\sum_{i=1}^{4}\rho_i$, and $\tau_j = (\lambda_j + 2\mu_j)/(\lambda_j+\mu_j)^2$. Then
$$p_j = \rho_j\tau_j \Big/ \sum_{i=1}^{4}\rho_i\tau_i.$$
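These quantities chain together mechanically once the $\lambda_i, \mu_i$ are given; note that $\rho_1$ is an empty product, hence 1. A minimal evaluation with arbitrary rates:

```python
import numpy as np

lam = np.array([1.0, 2.0, 3.0, 4.0])   # arbitrary rates, indices 1..4
mu  = np.array([2.0, 1.0, 4.0, 3.0])

alpha = (lam**2 + 2*lam*mu) / (lam + mu)**2
rho = np.array([np.prod(alpha[:j]) for j in range(4)])   # rho_j = prod_{i<j} alpha_i
pi  = rho / rho.sum()
tau = (lam + 2*mu) / (lam + mu)**2
p   = rho*tau / (rho*tau).sum()
print(pi.round(4), p.round(4))
```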
CONCEPTUAL EXERCISES
Chapter 9
MODELING EXERCISES
COMPUTATIONAL EXERCISES
9.1
$$F_{i,j}(t) = \begin{cases} H_i * H_{i+1} * \cdots * H_{j-2} * H_{j-1}(t) & 1 \le i < j \le N, \\ H_i * H_{i+1} * \cdots * H_N * H_1 * \cdots * H_{j-2} * H_{j-1}(t) & 1 \le j \le i \le N, \end{cases}$$
$$M_{i,j}(s) = \frac{\tilde{F}_{i,j}(s)}{1 - \tilde{F}_{i,i}(s)}.$$
9.3 $p_1 = \frac{2(1-\tilde{A}(\mu))}{2\mu\tau + \tilde{A}(\mu)}$.
9.5 (1) $\frac{\rho^{j+1}e^{-\rho}}{(j+1)!\,(1-e^{-\rho})}$; (2) $(1-e^{-\rho})/\rho$.
9.7 Long run fraction of the time the MRGP spends in state $j$ $= \frac{1}{\lambda}\left[1 - e^{-\lambda}\sum_{k=0}^{r-j}(\lambda^k/k!)\right]$.
9.11 $\lim_{n\to\infty} E(X_n) = \frac{\lambda\tau}{\alpha}$, $\lim_{t\to\infty} E(X(t)) = \frac{\lambda\tau}{\alpha} + \frac{\lambda s^2}{2\tau}$.
CONCEPTUAL EXERCISES
9.7 Follows from the elementary renewal theorem for renewal processes (Theorem 8.12) and for delayed renewal processes (Conceptual Exercise 8.12).
Chapter 10
COMPUTATIONAL EXERCISES
10.3
$$f(x_1, x_2) = \frac{1}{2\pi\sqrt{t_1(t_2-t_1)}}\,\exp\left\{-\frac{1}{2t_1(t_2-t_1)}\left[(x_1-\mu_1 t_1)^2 t_2 - 2t_1(x_1-\mu_1 t_1)(x_2-\mu_2 t_2) + (x_2-\mu_2 t_2)^2 t_1\right]\right\}.$$
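This is just the bivariate normal density with mean vector $(\mu_1 t_1, \mu_2 t_2)$ and the Brownian covariance matrix $\begin{pmatrix} t_1 & t_1 \\ t_1 & t_2 \end{pmatrix}$, whose determinant is $t_1(t_2-t_1)$. If SciPy is available, the algebra can be confirmed directly (all values arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

m1, m2, t1, t2 = 0.5, 0.8, 1.0, 3.0   # arbitrary drifts and times, t2 > t1
x1, x2 = 0.3, 1.1

a, b = x1 - m1*t1, x2 - m2*t2
f = (1/(2*np.pi*np.sqrt(t1*(t2 - t1)))) * np.exp(
    -(a*a*t2 - 2*t1*a*b + b*b*t1) / (2*t1*(t2 - t1)))

g = multivariate_normal(mean=[m1*t1, m2*t2],
                        cov=[[t1, t1], [t1, t2]]).pdf([x1, x2])
print(f, g)   # the two values coincide
```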
CONCEPTUAL EXERCISES
References

Beard, R. E., T. Pentikäinen, and E. Pesonen (1984). Risk Theory: The Stochastic Basis of Insurance, Chapman & Hall, London.
Bertsekas, D., and R. Gallager (1987). Data Networks, Prentice Hall, Englewood
Cliffs, NJ.
Cox, D. R. and V. Isham (1980). Point Processes, Chapman & Hall, London.
Cox, D. R., and H. D. Miller (1965). The Theory of Stochastic Processes, Chapman
& Hall, London.
Durbin, R., S. Eddy, A. Krogh, and G. Mitchison (2001). Biological Sequence Analy-
sis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge University Press,
Cambridge, UK.
Fuller, L. E. (1962). Basic Matrix Theory, Prentice Hall, Englewood Cliffs, NJ.
Gantmacher, F. R. (1960). Matrix Theory, Vols. I and II, Chelsea Publishing, NY.
Golub, G. H. and C. F. Van Loan (1983). Matrix Computations, Johns Hopkins University Press, Baltimore, MD.
Hull, J. C. (1997). Options, Futures, and Other Derivatives, Prentice Hall, Engle-
wood Cliffs, NJ.
Kemeny, J. G. and J. L. Snell (1960). Finite Markov Chains, Van Nostrand, Princeton, NJ.
Kleinrock, L. (1976). Queueing Systems, Vol. II: Computer Applications, Wiley, NY.
Neuts, M. F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Appli-
cations, Marcel Dekker, NY.
Rolski, T., H. Schmidli, V. Schmidt, and J. Teugels (1999). Stochastic Processes for
Insurance and Finance, Wiley, NY.
Shreve, S. E. (2000). Stochastic Calculus for Finance I: The Binomial Asset Pricing
Model, Springer-Verlag, NY.
Stoyan, D. (1983). Comparison Methods for Queues and Other Stochastic Models, Wiley, NY.
Syski, R. (1992). Passage Times for Markov Chains, IOS Press, The Netherlands.
Trivedi, K. S. (1982). Probability and Statistics with Reliability, Queueing, and Com-
puter Science Applications, Prentice Hall, Englewood Cliffs, NJ.
Varga, R. S. (1962). Matrix Iterative Analysis, Prentice Hall, Englewood Cliffs, NJ.
Wolff, R. W. (1989). Stochastic Modeling and the Theory of Queues, Prentice Hall,
Englewood Cliffs, NJ.
Index

Discrete-Time Markov Chain
    Characterization, 11
    Communicating Class, 90
    Communication, 89
    Definition, 10
    First Passage Times, 55
        CDF, 56
        Expectation, 69
        Generating Functions, 74
        Moments, 74
    Higher Order, 23
    in Finance, 27
    in Genealogy, 25
    in Genetics, 23
    in Genomics, 22
    in Manpower Planning, 28
    in Telecommunications, 30
    Initial Distribution, 11
    Irreducibility, 89, 91
    Limiting Behavior, 85
    Limiting Distribution, 113
    Marginal Distributions, 31
    Occupancy Distribution, 116
    Occupancy Times, 36
    Periodicity, 92
    Recurrence, 94
    Stationary Distribution, 113
    Steady State Distribution, 113
    Time Homogeneous, 10
    Transience, 94
    Transient Behavior, 9
    Transition Diagram, 12
Discrete-Time Queue, 17
    Average Costs, 128
    Batch Arrivals, 19, 67, 72, 101, 120
    Batch Service, 20, 68, 72, 102, 121
    Discounted Costs, 126
Drunkard's Walk, 17
Ehrenfest Model, 18
Excess Life Process, 363
Exponential Random Variable, 145
    Distribution, 145
    Hazard Rate, 147
    Memoryless Property, 146
    Minimum, 149
    Sums, 152, 153
Fatou's Lemma, 511
First Passage Times, 8
Foster's Criterion, 103
G/M/1 Queue, 437
Gambler's Ruin, 17, 61
Generating Functions, 503
Hessenberg Matrix
    Lower, 21
    Upper, 20
Hidden Markov Models, 22
Infinite Server Queue, 173, 201, 243, 324
Inspection Paradox, 367
Ito Integral, 472
Ito Process, 476
Ito's Formula, 478
Laplace Stieltjes Transform, 505
Laplace Transform, 507
Law of Large Numbers
    Strong, 510
    Weak, 510
Limiting Distribution, 7
Linear Growth Model, 201, 219, 228, 248
Little's Law, 396
M/G/∞ Queue
    Busy Period, 358
M/G/1 Queue, 435
M/M/∞ Queue, 296
Markov Property, 10
Markov Regenerative Process, 409
    Limiting Distribution, 429
    Transient Distribution, 428
Markov Renewal Equation, 415
Markov Renewal Function, 415
Markov Renewal Process
    Definition, 414
Markov Renewal Sequence
    Characterization, 410
    Definition, 409
    Regularity, 414
Markov Renewal Theorem
    Extended, 421
    Key, 419
Markov Renewal Type Equation, 417
    Solution, 417
Martingale, 463
Matrix Exponential, 209
Monotone Convergence Theorem, 511
Moran Model, 25
o(h) Functions, 160
Option
    American Call, 483
    American Perpetual Put, 483
    American Put, 483
    European Call, 481
    European Put, 483
Optional Sampling Theorem, 465
Order Statistics, 500
Ornstein-Uhlenbeck Process, 481
Pakes' Lemma, 104
Parallel System, 151
Phase Type Distributions, 252
Poisson Process, 145, 191, 197, 217
    Characterization, 160, 161
    Compound, 177, 191
    Covariance, 159
    Definition, 155
    Distribution, 155
    Event Times, 162
    Non-Homogeneous, 173
        Event Times, 177
    Shifted, 157
    Splitting, 169
        Bernoulli, 169
        Non-Homogeneous, 171
    Superposition, 166
Probability
    Axioms, 491
    Conditional, 492
    Model, 491
Queueing Networks
    Closed, 308
    Open, 298
    State-Dependent Arrivals, 306
    State-Dependent Service, 306
Queues
    Arrivals, 280
    Birth and Death, 293, 433
    Departures, 281
    Entries, 280
    Little's Law, 291
    Nomenclature, 277
    PASTA, 283, 289
Random Variable
    CDF, 493
    Conditional Distributions, 500
    Conditional Expectations, 500
    Continuous, 493
    Discrete, 493
    Expectation, 494
    Independence, 498
    Multi-Variate, 497
        Normal, 501
    Sums, 498
    Univariate, 493
Random Walk, 6, 16
    General, 63, 71, 100, 119, 131
    Simple, 16, 33, 62, 87, 91, 93, 99
    State-Dependent, 16
Recurrence Time
    Backward, 363
    Forward, 363
Regenerative Process, 390
    Costs and Rewards, 394
    Delayed, 390
    Limiting Distribution, 391
Renewal Argument, 344
Renewal Equation, 350
Renewal Function, 349
    Asymptotic Behavior, 361
    Delayed, 368
        Asymptotic Behavior, 370
Renewal Process, 339
    Age, 365, 367
    Alternating, 373
        Delayed, 377
        Asymptotic Behavior, 385, 388
    Central Limit Theorem, 347
    Characterization, 342
    Definition, 340
    Delayed, 368
        Limiting Behavior, 369
    Equilibrium, 371
    Limiting Distribution, 345
    Marginal Distribution, 343
    Moments, 354
    Periodicity, 359
    Recurrence and Transience, 346
    Remaining Life, 364, 366
    Total Life, 367
Renewal Reward Process, 384
Renewal Sequence, 340
Renewal Theorem
    Almost-Sure, 346
    Blackwell's, 360
    Continuous, 234
    Discrete, 106
    Elementary, 352
    Key, 360
Renewal-Type Equation, 355
    Solution, 355
Restaurant Process, 258
Retrial Queue, 202, 243, 320, 438
Reversibility
    CTMC, 254
    DTMC, 129
Rumor Model, 48
Sample Path, 2
Semi-Markov Process, 378, 424
    First Passage Times, 380
    Limiting Behavior, 426
    Limiting Distribution, 382, 425
Single Server Queue, 200, 242, 252
    Batch Arrivals, 244
Slotted ALOHA, 48
Stationary and Independent Increments, 157
Stochastic Differential Equation, 476
Stochastic Integral, 472
    Martingale, 475
    Moments, 473
Stochastic Matrix, 10
Stochastic Process, 1
    Continuous Time, 1
    Discrete Time, 1
    Parameter Set, 1
    State-Space, 1
Stopping Time, 447
Strong Markov Property, 448
Success Runs, 19, 58, 62, 94, 99, 118, 232
Transient Distribution, 7
Transition Probability Matrix
    n-Step, 32
    Eigenvalues, 39
    Generating Function, 41
    One Step, 10
Weather Model, 13
Wright-Fisher Model, 24
Yule Process, 198
    with Immigration, 199