Question:
(a) Show that simple random walk on the infinite (rooted, regular) binary tree
is transient by finding the probability that the walk ever returns to the root
as a function of its starting depth.
(b) Call one subtree of the root the right subtree, and the other the left, and
say that the level of a node is the number of steps needed to reach the root,
multiplied by −1 if the node is in the left subtree. Show that the probability
that the walk escapes to infinity out the right subtree is given by r(k), where
r(k) = (1/2)(1 + (1 − 2^{−k}))   for k ≥ 0
r(k) = (1/2)(1 − (1 − 2^{−|k|}))  for k < 0
Solution: (a) Since we are only concerned with the depth of the current posi-
tion, we may think of the simple random walk on the binary tree as a random
walk (Xn )n≥0 on {0, 1, 2, . . .} where Xn = i means that we are at depth i at
time n. The transition probabilities are given by p0,1 = 1 and
pi,i+1 = p and pi,i−1 = q = 1 − p
for i ≥ 1, where p = 2/3: every node other than the root has three neighbours, the
two children below and the parent above, each chosen with probability 1/3.
Fix some positive integer M . Let T0,M be the first time that the walk (Xn )n≥0
hits 0 or M . For 0 ≤ i ≤ M let
fM (i) = P(XT0,M = 0|X0 = i).
That is, fM(i) is the probability that the walk, started at i, hits 0 before M.
We know that fM is harmonic on {1, 2, . . . , M − 1}, namely (by conditioning on
the first step) we have for 1 ≤ i ≤ M − 1 that
fM (i) = pi,i−1 fM (i − 1) + pi,i+1 fM (i + 1) = qfM (i − 1) + pfM (i + 1) (1)
along with boundary conditions
fM (0) = 1 and fM (M ) = 0 (2)
Note: We could omit passing to the limit with M , by looking for a function f
satisfying equation (1) for all i > 0 and having f (0) = 1 and limi→∞ f (i) = 0,
but it pays to be careful. For instance, if we made M into a reflecting boundary
(i.e. setting pM,M −1 = 1), then the random walk on {0, 1, . . . , M } would be
recurrent, since any irreducible Markov chain on a finite state space is recurrent,
and one might wonder if the function f we get is the correct one. In this case it
works out, but by taking the limit explicitly it is clearer what is going on.
The characteristic equation for the recurrence relation (1) is λ = q + pλ^2, which
has solutions λ = 1 and λ = q/p. The general solution to (1) is therefore given by
fM(i) = AM + BM (q/p)^i

The boundary conditions in (2) imply that AM + BM = 1 and AM + BM (q/p)^M = 0.
Thus
AM = −q^M/(p^M − q^M)   and   BM = p^M/(p^M − q^M)
Let us now take the limit as M → ∞. Suppose, as is true in this case, that
p > q. Then we have limM →∞ AM = 0 and limM →∞ BM = 1. Thus
P(X hits zero eventually | X0 = i) = lim_{M→∞} fM(i) = (q/p)^i = 2^{−i}   (3)
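Equation (3) is easy to probe by simulation. The following sketch (not part of the original solution) runs the depth chain from a few starting depths; the escape_depth cutoff is a simulation artifact standing in for "never returns", and introduces error of order 2^{−40}.

```python
import random

# Estimate P(hit depth 0 | start at depth i) for the depth chain with
# p = 2/3 (up) and q = 1/3 (down), and compare with 2^{-i}.
random.seed(0)

def hits_zero(i, escape_depth=40):
    """Run the chain from depth i until it hits 0 or gets so deep
    (escape_depth) that a return is essentially impossible."""
    while 0 < i < escape_depth:
        i += 1 if random.random() < 2 / 3 else -1
    return i == 0

n = 10_000
est = {i: sum(hits_zero(i) for _ in range(n)) / n for i in (1, 2, 3)}
for i in (1, 2, 3):
    print(i, est[i], 2.0 ** -i)   # estimate vs. the claimed 2^{-i}
```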
Now, if the probability that we ever return to zero, when starting at zero, is less
than one, then the point zero will be transient, and hence the whole chain will
be transient since it is irreducible. But since we always move to depth one when
starting at zero, the probability that we ever return to zero, when starting at
zero, is the same as the probability that starting at depth one we ever hit zero.
By (3), this probability is 1/2. Hence the chain is transient.
If we define transience of zero as the number of returns to zero being almost
surely finite, then we can give the following argument. The number of returns
to zero, when starting at zero, is clearly equal to the number of returns to zero
starting at depth 1, since our first move will always take us to depth 1. The
probability that we ever hit zero starting at depth 1 is 1/2 by (3). If we hit zero
we will return to depth 1, and once again we will make an eventual return to
zero with probability 1/2. Thus the number of returns to zero, when starting at
zero, is a geometric random variable with success parameter 1/2. This number is
almost surely finite, and hence zero is a transient state for the chain (Xn )n≥0 .
Since the chain (Xn )n≥0 is irreducible, every state is transient.
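The geometric-number-of-returns picture can likewise be checked by simulation; in this sketch (not in the original), depth 60 is a hypothetical stand-in for "escaped for good", since the return probability from there is 2^{−60}.

```python
import random

# Count returns to depth 0 for the depth chain started at 0.  The count
# should be geometric with success parameter 1/2: mean 1, and no returns
# at all with probability 1/2.
random.seed(0)

def num_returns(escape_depth=60):
    depth, count = 0, 0
    while depth < escape_depth:
        if depth == 0:
            depth = 1                  # forced first move away from the root
        else:
            depth += 1 if random.random() < 2 / 3 else -1
            if depth == 0:
                count += 1
    return count

n = 10_000
samples = [num_returns() for _ in range(n)]
mean = sum(samples) / n
frac_zero = sum(s == 0 for s in samples) / n
print(mean, frac_zero)                 # should be near 1 and near 1/2
```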
(b) If we start at level zero, then by symmetry the probability that we escape to
+∞ is the same as the probability that we escape to −∞; since the walk is
transient, these two probabilities sum to one, so the probability of escaping to
+∞ beginning at zero is 1/2.
Then we know that if fM(i) is the probability that the walk hits +M before
hitting −M, then fM is harmonic on {−M + 1, . . . , M − 1}, namely

fM(i) = q fM(i − 1) + p fM(i + 1)   for 1 ≤ i ≤ M − 1
fM(i) = p fM(i − 1) + q fM(i + 1)   for −(M − 1) ≤ i ≤ −1
fM(0) = (1/2)(fM(−1) + fM(1))

(note that on the negative side the step away from the root, towards −M, is the
one with probability p), and that fM has boundary conditions fM(−M) = 0 and
fM(M) = 1. This can
be solved; the task is made easier by the observation above that fM(0) = 1/2,
which we can add to our boundary conditions, together with the general form of
the solution to equation (1) from part (a).
Alternatively, we can argue directly using part (a) as follows. If we start on the
right side at depth k, then there are two ways we can escape to +∞. Either we
hit zero at some point, which happens with probability 2^{−k}, and then escape to
+∞ with probability 1/2, or we never hit zero, which happens with probability
1 − 2^{−k}. Thus
r(k) = 2^{−k} · (1/2) + (1 − 2^{−k}) = (1/2)(1 + (1 − 2^{−k}))
If we start on the left side at depth k, then there is only one way we can escape
to +∞. First we must hit zero, which happens with probability 2^{−k}, and then
escape to +∞ with probability 1/2. Thus
r(−k) = 2^{−k} · (1/2) = (1/2)(1 − (1 − 2^{−k}))
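The formula for r(k) can be checked the same way. This sketch (not part of the solution) runs the signed-level chain until it hits level ±M; for large M this approximates escape to ±∞, with error of order 2^{−M}.

```python
import random

# Estimate the probability of escaping out the right subtree from a few
# starting levels, and compare with the claimed r(k).
random.seed(1)

def r(k):
    d = 2.0 ** -abs(k)                    # chance of ever hitting level 0
    return 0.5 * (1 + (1 - d)) if k >= 0 else 0.5 * d

def escapes_right(k, M=30):
    x = k
    while abs(x) < M:
        if x == 0:
            x += random.choice((1, -1))   # at the root: either subtree
        else:
            # step away from the root with probability 2/3
            step = 1 if random.random() < 2 / 3 else -1
            x += step if x > 0 else -step
    return x > 0

n = 10_000
est = {k: sum(escapes_right(k) for _ in range(n)) / n for k in (-2, 0, 2)}
for k in (-2, 0, 2):
    print(k, est[k], r(k))                # estimate vs. formula
```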
Question: Let X be a Markov chain on {0, 1, 2, ...} with transition probabilities
given by
P (k, k − 1) = 1 for k ≥ 1 and P (0, k) = P(Y = k) for k ≥ 1,
where Y is a random variable taking values in {1, 2, 3, . . .} with
E[Y ] < ∞. Find the stationary distribution of X.
Solution:
The chain is clearly recurrent; it is positive recurrent if and only if E[Y ] < ∞.
The stationary distribution is
πk = P{Y ≥ k}/(1 + E[Y]),
as long as E[Y] is finite. Here is an easy way to see this. Let T1 < T2 < · · · be
the times of a renewal process whose inter-renewal intervals are i.i.d. with the
distribution of Y + 1 (each excursion of X consists of one step at 0 followed by
Y steps counting down, so successive visits to 0 are Y + 1 apart). If
τ(k) = min{Ti : Ti ≥ k} is the time of the next renewal at or after time k, then
Zk = τ(k) − k, the time until the next renewal, is a Markov chain with the same
transition probabilities as X. Making the renewal process stationary therefore
gives a stationary version of Z (and hence of X), and the stationary
residual-time distribution is known: P(Z = k) = P{Y + 1 > k}/E[Y + 1] =
P{Y ≥ k}/(1 + E[Y]).
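The stationary distribution claimed above can also be probed empirically: run the chain X for a long time and compare occupation frequencies with the formula. A minimal sketch, assuming the hypothetical concrete choice Y uniform on {1, 2, 3} (so E[Y] = 2 and the chain lives on {0, 1, 2, 3}):

```python
import random

# Long-run occupation frequencies of X should approach
# P{Y >= k}/(1 + E[Y]) by the ergodic theorem.
random.seed(0)
steps = 200_000
counts = [0, 0, 0, 0]
x = 0
for _ in range(steps):
    counts[x] += 1
    # transition rule: count down from k >= 1, jump to a fresh Y from 0
    x = x - 1 if x >= 1 else random.choice((1, 2, 3))

target = [1 / 3, 1 / 3, 2 / 9, 1 / 9]   # P{Y >= k}/3 for k = 0, 1, 2, 3
for k in range(4):
    print(k, counts[k] / steps, target[k])
```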
Here is another way to find this. Recall that if µx is the expected return time
to x, beginning at x, and ρ_y^(x) is the expected number of visits to y before
returning to x after starting at x, then

πy = ρ_y^(x) / µx .
Take x = 0. It is easy to see that µ0 = E[Y] + 1, since if the chain jumps to k,
it takes k + 1 steps to return. Furthermore, the chain visits each state at most
once on each excursion from 0, and the probability it visits state k is P{Y ≥ k};
so ρ_y^(0) = P{Y ≥ y}, and hence

πy = P{Y ≥ y}/(E[Y] + 1).
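This formula can be verified exactly for a concrete case. A minimal sketch, again assuming the hypothetical choice Y uniform on {1, 2, 3} (not from the problem): with exact rational arithmetic, πy = P{Y ≥ y}/(E[Y] + 1) satisfies πP = π.

```python
from fractions import Fraction as F

# Concrete illustrative case: Y uniform on {1, 2, 3}, so E[Y] = 2 and
# the chain lives on states {0, 1, 2, 3}.
pY = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}
EY = sum(k * w for k, w in pY.items())
states = range(4)

def P(x, y):
    """Transition matrix: count down from x >= 1, jump to Y from 0."""
    if x >= 1:
        return F(int(y == x - 1))
    return pY.get(y, F(0))

def tail(k):
    """P(Y >= k), with tail(0) = 1."""
    return sum(w for j, w in pY.items() if j >= k) if k >= 1 else F(1)

# claimed stationary distribution: pi_k = P(Y >= k) / (1 + E[Y])
pi = [tail(k) / (1 + EY) for k in states]

assert sum(pi) == 1
for y in states:                       # check pi P = pi, entry by entry
    assert sum(pi[x] * P(x, y) for x in states) == pi[y]
print(pi)
```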
A third approach, which is widely applicable, is to find π0 and then use the
relation πP = π to find the rest of π. In this case, it is obvious that
µ0 = 1 + E[Y], and hence π0 = 1/µ0 = 1/(1 + E[Y]).