Recursion and Random Walks
Arpit Gupta
October 3, 2007
Abstract
This paper examines conditions for recurrence and transience for random walks on discrete surfaces, such as $\mathbb{Z}^d$, trees, and random environments.
1 Random Walks on Non-Random Environments
1.1 Definitions
Definition 1. A class of subsets $\mathcal{F}$ of a set $\Omega$ is an algebra if the following hold:

1. $\emptyset \in \mathcal{F}$ and $\Omega \in \mathcal{F}$.

2. $A_1, A_2, \ldots, A_n \in \mathcal{F} \implies \bigcup_{i=1}^{n} A_i \in \mathcal{F}$.

3. $A \in \mathcal{F} \implies A^c \in \mathcal{F}$.

If additionally $A_1, A_2, \ldots \in \mathcal{F} \implies \bigcup_{i=1}^{\infty} A_i \in \mathcal{F}$, then $\mathcal{F}$ is called a $\sigma$-algebra.
Definition 2. A Probability Measure is a function $P : \mathcal{F} \to [0, 1]$, where $\mathcal{F}$ is a $\sigma$-algebra, satisfying:

1. $P(\emptyset) = 0$.

2. If $A_1, A_2, \ldots$ is a collection of disjoint members of $\mathcal{F}$, so that $A_i \cap A_j = \emptyset$ for $i \neq j$, then
$$P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i).$$
Definition 3. A Probability Space is represented by the triple $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a set, called a Sample Space, containing all possible outcomes, $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$ containing events, and $P$ is a probability measure that assigns a measure of 1 to the whole space. $\mathcal{F}$ will be a $\sigma$-algebra on $\Omega$ throughout the paper.
Definition 4. A Random Variable is a function $X : \Omega \to \mathbb{R}$ with the property that
$$\{\omega : X(\omega) \le x\} \in \mathcal{F} \quad \forall x \in \mathbb{R}.$$
Two events $A = \{X_i \le x_i\}$, $B = \{X_j \le x_j\}$ are independent when $P(A \cap B) = P(A)P(B)$. More generally, random variables $X_1, X_2, \ldots$ are independent if $\forall k \in \mathbb{N}$ and $\forall x_1, x_2, \ldots \in \mathbb{R}$,
$$P(X_1 \le x_1, X_2 \le x_2, \ldots, X_k \le x_k) = \prod_{i=1}^{k} P(X_i \le x_i).$$
If $X$ is a random variable, then for every Borel subset $B$ of $\mathbb{R}$, $X^{-1}(B) \in \mathcal{F}$. We define a measure $\mu_X$, called the distribution of the random variable, on Borel sets by
$$\mu_X(B) := P(X \in B) = P(X^{-1}(B)).$$
If $\mu_X$ is concentrated on a countable subset of the real numbers, $X$ is a discrete random variable; these are the only random variables we consider in this paper. If random variables $X_1, X_2, \ldots$ are all independent and have the same distribution, we say they are independent and identically distributed (i.i.d.).
Definition 5. Intuitively, a discrete stochastic process is characterized by a state space $V$, and transitions between states which occur randomly according to some probability distribution. The process is memoryless if the probability of a transition $i \to j$ does not depend on the history of the process, i.e. for all $i, j, u_0, \ldots, u_{t-1} \in V$,
$$P(X_{t+1} = j \mid X_t = i, X_{t-1} = u_{t-1}, \ldots, X_0 = u_0) = P(X_{t+1} = j \mid X_t = i).$$
If additionally $p_{ij} = P(X_{t+1} = j \mid X_t = i) = P(X_t = j \mid X_{t-1} = i)$, so that the transition probability does not depend on time, then the process is homogeneous.
Definition 6. A Markov chain is a memoryless, homogeneous discrete stochastic process.

Roughly, this means that, conditional upon knowing the state of the process up to the $n$th step, the values after $n$ steps do not depend on the values before the $n$th step.
1.2 Simple Random Walks on Integer Lattices
First we consider a simple random walk on $\mathbb{Z}$, where a walker starts at some $z \in \mathbb{Z}$ and always moves to either adjacent point with probability 1/2. Let $S_n$ denote the position of the walker after $n$ steps. The model immediately implies that the random walk is a Markov chain, i.e.
$$P(S_{n+1} = i_{n+1} \mid S_n = i_n, S_{n-1} = i_{n-1}, \ldots, S_1 = i_1, S_0 = i_0 = z) = P(S_{n+1} = i_{n+1} \mid S_n = i_n) = \frac{1}{2}, \tag{1}$$
where $i_0 = z, i_1, i_2, \ldots, i_{n+1}$ is a sequence of integers with $|i_1 - i_0| = |i_2 - i_1| = \cdots = |i_{n+1} - i_n| = 1$.
Alternatively, work on the probability space $[0, 1]$ with Lebesgue measure. Let $X_1, X_2, \ldots$ be a sequence of i.i.d. random variables with $P(X_1 = 1) = P(X_1 = -1) = 1/2$, where $X_i$ corresponds to the value of the $i$th coin flip (that $X_1, X_2, \ldots$ exist with these properties is beyond the scope of this paper). Then the position of the random walk after $n$ steps is given by:
$$S_n = z + X_1 + X_2 + \cdots + X_n, \quad n = 1, 2, \ldots.$$
Lemma 7. Let $S_n$ denote the position of a random walker at time $n$, and let $p_n = P(S_n = S_0)$. Then
$$p_{2n} \sim \frac{1}{\sqrt{\pi n}}.$$

Proof. When $n$ is odd, $p_n = 0$, so we only consider the case where the walker takes an even number of steps $2n$. To return to the starting point in that number of steps, the walk must consist of exactly $n$ steps to the right and $n$ to the left, and there are $\binom{2n}{n}$ ways to choose such a walk. Each particular choice of $2n$ steps occurs with probability $2^{-2n}$. Using Stirling's formula, given by
$$n! \sim \sqrt{2\pi n}\, e^{-n} n^n,$$
we have
$$p_{2n} = \binom{2n}{n} \frac{1}{2^{2n}} = \frac{(2n)!}{n!\,n!} \cdot \frac{1}{2^{2n}} \sim \frac{\sqrt{4\pi n}\, e^{-2n} (2n)^{2n}}{\left(\sqrt{2\pi n}\, e^{-n} n^n\right)^2 2^{2n}} = \frac{1}{\sqrt{\pi n}}.$$
We can form an analogous construction for simple random walks on $\mathbb{Z}^d$. The random walk $S_n$ now has $2d$ choices of moves at each step. The probability $\frac{1}{2d}$ of moving in any one direction at time $n$ is equal to the probability of moving in any other direction, and is independent of the particular path the walker followed up to time $n$.

Another way to think of the simple random walk in $\mathbb{Z}^d$ is to think of $d$ different simple random walks, one in each of the $d$ directions. To choose each step, one of the one-dimensional random walks is picked at random to make a move, and the walker moves in the direction indicated by that move, keeping the position in the other directions constant.
Lemma 8. [6] Let $S_n$ denote the position of a two-dimensional walker after $n$ steps, and $p_n = P(S_n = S_0)$. Then
$$p_{2n} \sim \frac{1}{\pi n}.$$
Proof. To return to the starting point in 2 dimensions, the walk must again comprise an even number of steps $2n$, and each particular walk of length $2n$ occurs with probability $4^{-2n}$. All the walks returning to the starting point consist of $k$ steps to the north and $k$ to the south, and $n - k$ to the east and $n - k$ to the west, for some $0 \le k \le n$. So we have
$$p_{2n} = 4^{-2n} \sum_{k=0}^{n} \binom{2n}{k\ k\ (n-k)\ (n-k)} = 4^{-2n} \sum_{k=0}^{n} \frac{(2n)!}{k!\,k!\,(n-k)!\,(n-k)!} = 4^{-2n} \sum_{k=0}^{n} \frac{(2n)!}{n!\,n!} \cdot \frac{n!\,n!}{k!\,k!\,(n-k)!\,(n-k)!} = 4^{-2n} \binom{2n}{n} \sum_{k=0}^{n} \binom{n}{k}^2.$$
However, $\sum_{k=0}^{n} \binom{n}{k}^2 = \binom{2n}{n}$, so
$$p_{2n} = \left(\binom{2n}{n} \frac{1}{2^{2n}}\right)^2,$$
which is the square of the one-dimensional result. Therefore we have
$$p_{2n} \sim \left(\frac{1}{\sqrt{\pi n}}\right)^2 = \frac{1}{\pi n}.$$
Lemma 9. [6] Let $S_n$ denote the position of a three-dimensional random walk, and $p_n$ as before. Then there is a constant $C$ such that
$$p_{2n} \le \frac{C}{n^{3/2}}.$$
Proof. As before, the three-dimensional random walk must, along each axis, make an equal number of steps in each of the two directions in order to return to its starting point, and each particular walk of length $2n$ occurs with probability $6^{-2n}$. If we let $n_1$ denote the number of steps in the positive $x$ direction, $n_2$ the steps in the positive $y$ direction, and $n_3$ the steps in the positive $z$ direction, we have
$$p_{2n} = 6^{-2n} \sum_{n_1+n_2+n_3=n} \binom{2n}{n_1\ n_1\ n_2\ n_2\ n_3\ n_3} = 6^{-2n} \sum_{n_1+n_2+n_3=n} \frac{(2n)!}{(n_1!\,n_2!\,n_3!)^2} = \frac{1}{2^{2n}} \binom{2n}{n} \sum_{n_1+n_2+n_3=n} \left(\frac{1}{3^n} \frac{n!}{n_1!\,n_2!\,n_3!}\right)^2.$$
The quantity $\frac{1}{3^n}\binom{n}{n_1\ n_2\ n_3}$ represents the probability that when $n$ balls fall at random into 3 separate bins, $n_1$ of them fall into the first, $n_2$ into the second, and $n_3$ into the third. Informally, we see that if the balls fall randomly, this probability is maximized by letting $n_1, n_2, n_3$ be as close as possible to $n/3$. Replacing one factor in each summand by this maximal value, we have
$$p_{2n} \le \frac{1}{2^{2n}} \binom{2n}{n} \cdot \frac{1}{3^n} \frac{n!}{\lfloor n/3 \rfloor!\, \lfloor n/3 \rfloor!\, \lfloor n/3 \rfloor!} \sum_{n_1+n_2+n_3=n} \frac{1}{3^n} \frac{n!}{n_1!\,n_2!\,n_3!}.$$
Here $\lfloor n/3 \rfloor$ denotes the largest integer less than or equal to $n/3$. The last sum is the sum of all probabilities for the outcomes when $n$ balls fall into three bins, and so is just one. Therefore, we have
$$p_{2n} \le \frac{1}{2^{2n}} \binom{2n}{n} \cdot \frac{1}{3^n} \frac{n!}{\left(\lfloor n/3 \rfloor!\right)^3}.$$
Applying Stirling's formula, we have
$$p_{2n} \le \frac{C}{n^{3/2}}.$$
In addition to the probability of the random walk returning to its starting point, we are also interested in the frequency of return.

Definition 10. Let $A_n$ be the event that a random walk returns to its starting point on the $n$th step, i.e. $S_n = S_0$. Then the event that the random walk returns to its starting point infinitely often is given by:
$$\{A_n \text{ infinitely often (i.o.)}\} := \limsup_{n\to\infty} A_n = \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m.$$
If $P(A_n \text{ i.o.}) = 1$, the walk is called recurrent. If $P(A_n \text{ i.o.}) = 0$, the walk is called transient.
Theorem 11 (Recurrence Theorem). [3] A random walk $S_n$ on $\mathbb{Z}^d$ is recurrent if $d \le 2$. If $d \ge 3$, then the walk is transient, and
$$P(S_n \neq S_0 \ \forall\, n > 0) > 0.$$

Proof. Let $J_n = 1_{A_n}$ be an indicator variable for the event that the random walk returns to its starting point. The total number of visits to the origin is given by:
$$V = \sum_{n=0}^{\infty} J_{2n}.$$
Using the linearity of the expected value,
$$E[V] = \sum_{n=0}^{\infty} E[J_{2n}] = \sum_{n=0}^{\infty} P(S_{2n} = S_0).$$
We know $\sum n^{-a}$ converges only when $a > 1$. So when $d = 1, 2$, by Lemmas 7 and 8, the sum is divergent. When $d = 3$, by Lemma 9, the sum is convergent. For $d > 3$, we see that to return, the random walk must in particular return in its first three coordinates; since the sum of return probabilities converges in three dimensions, it must also converge in any greater number of dimensions. Therefore, we have
$$E[V] = \begin{cases} \infty & d = 1, 2, \\ < \infty & d \ge 3. \end{cases}$$
Let $q$ be the probability that the random walker ever returns to its starting point. Assuming $q < 1$, the distribution of $V$ is given by:
$$P(V = k) = q^{k-1}(1 - q), \quad k = 1, 2, \ldots$$
Again assuming $q < 1$, we find:
$$E[V] = \sum_{k=1}^{\infty} k\, P(V = k) = \sum_{k=1}^{\infty} k\, q^{k-1}(1 - q) = \frac{1}{1 - q} < \infty.$$
For $d = 1, 2$ we know $E[V] = \infty$, so by contradiction $q = 1$, and the walk returns to the origin for some $n$ with probability 1. When $d \ge 3$, $\frac{1}{1-q} < \infty$, so $q < 1$, and with non-zero probability the random walk may not return to the starting point at all.
Now, from the distribution of $V$,
$$P(V \le k) = 1 - q^k.$$
So if $d = 1, 2$, then $q = 1$ and $P(V \le k) = 0$ for every $k$; hence $V = \infty$ with probability one and $P(A_n \text{ i.o.}) = 1$. When $d \ge 3$, $q < 1$, so $P(V \le k) \to 1$ as $k \to \infty$; hence $V < \infty$ with probability one and $P(A_n \text{ i.o.}) = 0$.
1.3 The Zero-One Law
The proof of the Recurrence Theorem is based on the fact that
$$P(S_n = S_0 \text{ for at least one } n) = 1 \implies P(S_n = S_0 \text{ i.o.}) = 1$$
and
$$P(S_n = S_0 \text{ for at least one } n) < 1 \implies P(S_n = S_0 \text{ i.o.}) = 0.$$
That is, if the random walk returns once to the origin with probability 1, it will return again and again with probability 1. Similarly, if the random walk stays away from the origin forever with non-zero probability, then it cannot possibly return infinitely often with probability one. With infinite sequences of independent random variables, the probabilities of certain events can only be 0 or 1. This observation is formalized in Kolmogorov's Zero-One Law, which can be used to provide another proof of the Recurrence Theorem.
Definition 12. Let $(\Omega, \mathcal{F}, P)$ be a probability space. Two $\sigma$-algebras $\mathcal{A}, \mathcal{B} \subseteq \mathcal{F}$ are independent if for all $A \in \mathcal{A}$, $B \in \mathcal{B}$,
$$P(A \cap B) = P(A)P(B).$$

Definition 13. Assume $X_1, X_2, \ldots$ are independent random variables on $(\Omega, \mathcal{F})$. Then the $\sigma$-algebra generated by $X_1, X_2, \ldots$, written $\sigma(X_1, X_2, \ldots)$, is the smallest $\sigma$-algebra on which all the $X_i$ are measurable.
Definition 14. Let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $X_1, \ldots, X_n$, and let $\mathcal{G}_n$ be the $\sigma$-algebra generated by $X_{n+1}, X_{n+2}, \ldots$, so that $\mathcal{F}_1 \subseteq \mathcal{F}_2 \subseteq \cdots$ and $\mathcal{G}_1 \supseteq \mathcal{G}_2 \supseteq \cdots$. Let $\mathcal{F}$ be the $\sigma$-algebra generated by $X_1, X_2, \ldots$, the smallest $\sigma$-algebra containing the algebra $\mathcal{F}_0 = \bigcup_{n=1}^{\infty} \mathcal{F}_n$, and let $\mathcal{G}$ be defined similarly. The tail $\sigma$-algebra $\mathcal{T}$ is defined by
$$\mathcal{T} = \bigcap_{n=1}^{\infty} \mathcal{G}_n.$$
Lemma 15. [3] Suppose $\mathcal{F}$ and $\mathcal{G}$ are two algebras of events that are independent. Then $\mathcal{F}_1 = \sigma(\mathcal{F})$ and $\mathcal{G}_1 = \sigma(\mathcal{G})$ are independent $\sigma$-algebras.

Proof. Let $B \in \mathcal{G}$ and define the measures
$$\mu_B(A) := P(A \cap B), \qquad \nu_B(A) := P(A)P(B).$$
Since $\mu_B, \nu_B$ are finite measures, they agree on $\mathcal{F}$. Using the Carathéodory extension theorem (which states that a measure on an algebra $\mathcal{F}$ extends uniquely to the generated $\sigma$-algebra $\sigma(\mathcal{F}) \supseteq \mathcal{F}$), they must agree on $\sigma(\mathcal{F})$. So $P(A \cap B) = P(A)P(B)$ for all $A \in \mathcal{F}_1$ and $B \in \mathcal{G}$. Supposing now $A \in \mathcal{F}_1$ and $B \in \mathcal{G}_1$, we repeat the argument with the measures
$$\mu_A(B) := P(A \cap B), \qquad \nu_A(B) := P(A)P(B),$$
completing the proof.
Theorem 16 (Zero-One Law). ([1], p. 290) Let $X_1, X_2, \ldots$ be independent random variables on $(\Omega, \mathcal{F}, P)$. If $A \in \mathcal{T}$, then
$$P(A) = 0 \text{ or } P(A) = 1.$$

Proof. By Lemma 15, any event in $\sigma(X_{n+1}, X_{n+2}, \ldots)$ is independent of any event in $\sigma(X_1, \ldots, X_n)$. Defining $\mathcal{F}_0$ and $\mathcal{T}$ as above, any event in $\mathcal{F}_0$ is independent of any event in $\mathcal{T} = \bigcap_{n=1}^{\infty} \mathcal{G}_n$. So any event in $\mathcal{T}$ is independent of any event in $\sigma(\mathcal{F}_0)$. However, $\mathcal{T} \subseteq \sigma(X_1, X_2, \ldots) = \sigma(\mathcal{F}_0)$, so any tail event is independent of itself. That is, $P(A) = P(A \cap A) = P(A)P(A)$, so $P(A) \in \{0, 1\}$.
Lemma 17 (Borel-Cantelli 1). ([2], p. 27) Let $A_1, A_2, \ldots$ be a sequence of events for which $\sum_{n=1}^{\infty} P(A_n) < \infty$. Then
$$P\left(\limsup_{n\to\infty} A_n\right) = P(A_n \text{ i.o.}) = 0.$$
So with probability 1, only a finite number of the events $A_n$ occur.
Proof. Using Definition 10, one has
$$P\left(\limsup_{n\to\infty} A_n\right) = P\left(\bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m\right) = \lim_{n\to\infty} P\left(\bigcup_{m=n}^{\infty} A_m\right) \le \lim_{n\to\infty} \sum_{m=n}^{\infty} P(A_m) = 0,$$
where the second equality follows from the fact that if $\mu$ is a measure, $A_1, A_2, \ldots$ are measurable sets with $A_{n+1} \subseteq A_n$ for $n = 1, 2, \ldots$, and $\mu(A_1) < \infty$, then $\mu\left(\bigcap_{n=1}^{\infty} A_n\right) = \lim_{n\to\infty} \mu(A_n)$. The inequality comes from the fact that the $A_m$ are not necessarily disjoint, and the final limit is zero because it is the tail of a convergent series.
Lemma 18 (Borel-Cantelli 2). ([4], p. 319) Let $A_1, A_2, \ldots$ be a sequence of events for which $\sum_{n=1}^{\infty} P(A_n) = \infty$ and
$$\liminf_{n\to\infty} \frac{\sum_{k=1}^{n}\sum_{i=1}^{n} P(A_k \cap A_i)}{\left(\sum_{k=1}^{n} P(A_k)\right)^2} \le C, \qquad C \ge 1.$$
Then
$$P\left(\limsup_{n\to\infty} A_n\right) \ge C^{-1}.$$
Proof. Let $J_n = 1_{A_n}$ and $N_k = \sum_{n=1}^{k} J_n$. For each $\epsilon > 0$, let $B_{n,\epsilon}$ denote the measurable set defined by
$$B_{n,\epsilon} = \{N_k \ge \epsilon\, E[N_k] \text{ for some } k \ge n\}.$$
We pick the measurable functions $f = 1_{B_{n,\epsilon}}$, the characteristic (indicator) function of the set $B_{n,\epsilon}$, and $g = N_n$. By the Cauchy-Schwarz inequality, given by $E[fg]^2 \le E[f^2]E[g^2]$, we have
$$P(B_{n,\epsilon}) = E[f^2] \ge \frac{E[fg]^2}{E[g^2]} = \frac{E[fN_n]^2}{E[N_n^2]} = \frac{\left(E[N_n] - E[N_n(1 - f)]\right)^2}{E[N_n^2]}.$$
But $E[N_n(1 - f)] \le \epsilon\, E[N_n]$, since $N_n < \epsilon\, E[N_n]$ on the complement of $B_{n,\epsilon}$, so
$$P(B_{n,\epsilon}) \ge (1 - \epsilon)^2\, \frac{E[N_n]^2}{E[N_n^2]}, \qquad n = 1, 2, \ldots.$$
Since $E[N_n] \to \infty$ as $n \to \infty$,
$$P\left(\bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m\right) \ge \lim_{n\to\infty} P(B_{n,\epsilon}).$$
Since this is true for every $\epsilon > 0$,
$$P\left(\bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m\right) \ge \limsup_{n\to\infty} \frac{E[N_n]^2}{E[N_n^2]} = \limsup_{n\to\infty} \frac{\left(\sum_{k=1}^{n} P(A_k)\right)^2}{\sum_{k=1}^{n}\sum_{i=1}^{n} P(A_k \cap A_i)} \ge C^{-1}.$$
Lemma 19. ([2], p. 25) For any $-\infty < a \le b < +\infty$ we have
$$P\left(\liminf_{n\to\infty} S_n = a\right) = P\left(\limsup_{n\to\infty} S_n = b\right) = 0.$$

Proof. Let $L = \limsup_{n\to\infty} S_n$ and $L' = \limsup_{n\to\infty}(S_n - X_1)$, so that $L = X_1 + L'$, where $L'$ is independent of $X_1$ and has the same distribution as $L$. Setting $p_b = P(L = b)$ for $b \in \mathbb{Z}$ and conditioning on $X_1$ gives
$$p_b = \frac{1}{2}\, p_{b-1} + \frac{1}{2}\, p_{b+1},$$
so $b \mapsto p_b$ is a bounded harmonic sequence on $\mathbb{Z}$, hence constant; since $\sum_b p_b \le 1$, the constant must be zero. Therefore $\limsup_{n\to\infty} S_n \in \{-\infty, +\infty\}$ almost surely, and the statement for the $\liminf$ follows by symmetry.
Lemma 20. ([2], p. 196) When $d = 2$,
$$\lim_{n\to\infty} \frac{\sum_{j=1}^{n}\sum_{k=1}^{n} P(S_{2j} = S_0, S_{2k} = S_0)}{\left(\sum_{k=1}^{n} P(S_{2k} = S_0)\right)^2} = 2.$$

Proof.
$$\sum_{j=1}^{n}\sum_{k=1}^{n} P(S_{2j} = S_0, S_{2k} = S_0) = 2\sum_{j=1}^{n}\sum_{k=1}^{n-j} P(S_{2j} = S_0, S_{2j+2k} = S_0) + \sum_{j=1}^{n} P(S_{2j} = S_0)$$
$$= 2\sum_{j=1}^{n}\sum_{k=1}^{n-j} P(S_{2j} = S_0)\, P(S_{2k} = S_0) + \sum_{j=1}^{n} P(S_{2j} = S_0)$$
$$\sim 2\sum_{j=1}^{n}\sum_{k=1}^{n-j} \frac{1}{\pi j}\cdot\frac{1}{\pi k} \sim \frac{2}{\pi^2}\left(\log n\right)^2 \sim 2\left(\sum_{k=1}^{n} P(S_{2k} = S_0)\right)^2,$$
where the second equality uses the Markov property, the single sum is of lower order, and $\sum_{k=1}^{n} P(S_{2k} = S_0) \sim \frac{1}{\pi}\log n$ by Lemma 8.
We use the above facts to provide another proof of Theorem 11.

Proof of the Recurrence Theorem for $d = 1$. Contained in the tail $\sigma$-algebra generated by the random variables $X_1, X_2, \ldots$ are events such as $\{\limsup_{n\to\infty} S_n = +\infty\}$, which do not refer to finite subcollections such as $X_1, \ldots, X_n$. Our random walk $S_n$, by the Markov property, has a value after $n$ steps which depends on the $n$th step, but not on previous steps. The symmetric nature of the random walk implies that $P(\liminf_{n\to\infty} S_n = -\infty) = P(\limsup_{n\to\infty} S_n = +\infty) = C$, while Lemma 19 guarantees that $C > 0$. Since the Zero-One Law guarantees that $C$ can only be zero or one, $C = 1$; the walk therefore oscillates between $-\infty$ and $+\infty$, and so returns to its starting point infinitely often.
Proof of the Recurrence Theorem for $d = 2$. ([2], p. 197) Lemma 20 shows that the random walk in two dimensions satisfies the hypotheses of the second Borel-Cantelli lemma with $C = 2$, which in turn asserts that
$$P(S_n = S_0 \text{ i.o.}) \ge 1/2.$$
By the Zero-One Law, since this probability is not zero, it must be one.

Proof of the Recurrence Theorem for $d \ge 3$. ([2], p. 197) Lemma 9 asserts that the return probabilities when $d \ge 3$ have a convergent sum. This satisfies the hypothesis of the first Borel-Cantelli lemma, which immediately shows that the probability of returning infinitely often is zero.
1.4 Time
Let
$$\rho_1(k) = \min\{n : S_n = k\}, \qquad k = 1, 2, \ldots.$$
Lemma 21. ([2], p. 97) Let $\rho_0 = 0$ and $\rho_k = \min\{j : j > \rho_{k-1}, S_j = 0\}$ for $k = 1, 2, \ldots$. Then
$$P(\rho_1 = 2k) = \frac{2^{-2k+1}}{k}\binom{2k-2}{k-1}, \qquad k = 1, 2, \ldots,$$
and
$$P(\rho_1 > 2n) = 2^{-2n}\binom{2n}{n} = P(S_{2n} = 0).$$
Proof.
$$P(\rho_1 = 2k) = \frac{1}{2}P(\rho_1 = 2k \mid X_1 = +1) + \frac{1}{2}P(\rho_1 = 2k \mid X_1 = -1).$$
Clearly, by symmetry,
$$P(\rho_1 = 2k \mid X_1 = +1) = P(\rho_1 = 2k \mid X_1 = -1) = P(\rho_1(1) = 2k - 1).$$
Remark.
$$P(\rho_1 < \infty) = \sum_{k=1}^{\infty} P(\rho_1 = 2k) = \sum_{k=1}^{\infty} \frac{2^{-2k+1}}{k}\binom{2k-2}{k-1} = 1.$$
So the particle returns to the origin with probability one, i.e. we have a new proof of the Recurrence Theorem for $d = 1$.

While it is guaranteed that a random walker will return to the origin, the mean waiting time of the recurrence is infinite.
Theorem 22. The expected time to return to the origin is infinite.

Proof.
$$E[\rho_1] = \sum_{k=1}^{\infty} 2k \cdot \frac{2^{-2k+1}}{k}\binom{2k-2}{k-1} = \sum_{k=1}^{\infty} 2^{-2k+2}\binom{2k-2}{k-1} = \infty,$$
since by Stirling's formula the terms behave like $1/\sqrt{\pi k}$, which is not summable.
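Both facts about $\rho_1$, that its distribution sums to one while its mean diverges, can be checked with exact rational arithmetic; a sketch assuming Python's `fractions` module (the helper name is illustrative):

```python
import math
from fractions import Fraction

def first_return_prob(k):
    """P(rho_1 = 2k) = 2^(-2k+1) C(2k-2, k-1) / k, as an exact fraction."""
    return Fraction(2 * math.comb(2 * k - 2, k - 1), k * 4 ** k)

# the probabilities accumulate toward total mass one ...
mass = sum(first_return_prob(k) for k in range(1, 200))
print(float(mass))

# ... while the partial sums of 2k * P(rho_1 = 2k) grow without bound
mean_part = sum(2 * k * first_return_prob(k) for k in range(1, 200))
print(float(mean_part))
```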
1.5 Biased Random Walk
A biased random walk is a random walk which tends to move in some directions
with greater probability than others. More formally, let S
n
denote the position
after n steps of a a biased one-dimensional random walker. Let p > 1/2 and
S
n
= X
1
+. . . X
n
, where X
1
, . . . , X
n
are independent with
Px
j
= 1 Px
j
= 1 = p
Theorem 23. [3] Let $S_n$ denote the position of a biased one-dimensional random walker. Then there is a $\rho < 1$ such that as $n \to \infty$,
$$P(S_{2n} = 0) \sim \frac{\rho^n}{\sqrt{\pi n}},$$
and the random walk is transient.

Proof. The number of paths available for a random walk to return to its starting point remains the same as in the case of the simple random walk. The probability of each such path, however, is now given by $p^n(1-p)^n$, since out of $2n$ steps, $n$ must be taken in each direction, and the probability of a move is $p$ in one direction and $1-p$ in the other. So
$$P(S_{2n} = 0 \mid S_0 = 0) = \binom{2n}{n} p^n (1-p)^n = \frac{(2n)!}{n!\,n!}\, p^n (1-p)^n.$$
Using Stirling's formula, we find that
$$\frac{(2n)!}{n!\,n!}\, p^n (1-p)^n \sim \frac{\sqrt{4\pi n}\, e^{-2n} (2n)^{2n}}{\left(\sqrt{2\pi n}\, e^{-n} n^n\right)^2}\, p^n (1-p)^n = \frac{2^{2n}}{\sqrt{\pi n}}\left(p(1-p)\right)^n = \frac{\rho^n}{\sqrt{\pi n}},$$
where $\rho = 4p(1-p) < 1$ since $p \neq 1/2$. The sum $\sum_n P(S_{2n} = 0)$ therefore converges, so by the first Borel-Cantelli lemma we have transience.
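Numerically, the ratio of $\binom{2n}{n}p^n(1-p)^n$ to $\rho^n/\sqrt{\pi n}$ with $\rho = 4p(1-p)$ tends to one, and the expected total number of returns $\sum_{n\ge 1} \binom{2n}{n}(p(1-p))^n = 1/\sqrt{1-\rho} - 1$ is finite (by the generating function of the central binomial coefficients); a sketch taking $p = 0.6$ as an illustrative value:

```python
import math

def biased_return_prob(n, p):
    """P(S_2n = 0) for the biased walk: C(2n, n) p^n (1-p)^n."""
    return math.comb(2 * n, n) * (p * (1 - p)) ** n

p = 0.6
rho = 4 * p * (1 - p)                    # rho = 0.96 here
for n in (10, 100, 200):
    ratio = biased_return_prob(n, p) / (rho ** n / math.sqrt(math.pi * n))
    print(n, round(ratio, 4))            # ratios tend to 1

# expected total number of returns: 1/sqrt(1 - rho) - 1 = 4 for p = 0.6
total = sum(biased_return_prob(n, p) for n in range(1, 400))
print(round(total, 3))
```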
1.6 Random Walks on Graphs
We construct a tree $T_1$ as follows. The vertices of $T_1$ are the empty word, denoted by $o$, and all finite sequences of the letters $a, b$, i.e. words $x_1 \ldots x_n$ where $x_1, x_2, \ldots, x_n \in \{a, b\}$. Both words of one letter are adjacent to $o$. We say that a word of length $n - 1$ and a word of length $n$ are adjacent if they have the exact same letters, in order, in the first $n - 1$ positions. Note that each word of positive length is adjacent to three words and the root is adjacent to only two words. We construct another tree $T_2$ similarly, calling the root $o'$ and using the letters $a', b'$. Finally we make a tree $T$ by taking the union of $T_1$ and $T_2$ and adding one more connection: we say that $o$ and $o'$ are adjacent.
[Figure: the trees $T_1$ and $T_2$, with roots $o$ and $o'$ joined by an edge; $o$ has children $a, b$ with children $aa, ab, ba, bb$, and similarly for $T_2$.]
Lemma 24. [3] $T$ is a connected tree.

Proof. Suppose $x, y \in T_1$. We construct a unique path between them as follows: if $x$ is adjacent to $y$ we are done. If not, consider the common word consisting of the first $k$ letters in which $x$ and $y$ agree (this may be the empty word). Any word of positive length in this tree is adjacent to exactly one shorter word; call the edge between them a reduction. Using reductions, we can find a unique path from $x$ to the common word, and similarly for $y$. Uniting the path from $x$ to the common word with the path from $y$ to the common word provides the path from $x$ to $y$. If $x \in T_1$ and $y \in T_2$, then we use reductions to find a unique path from $x$ to $o$ and from $y$ to $o'$. Then we use the fact that $o$ is adjacent to $o'$ to find a unique path between $x$ and $y$.
Theorem 25. [3] Let $S_n$ denote the simple random walk on the tree $T$, where the walker goes to one of the three neighbors each with probability 1/3, and each choice is independent of the previous choices. Then $S_n$ is transient.
Proof. Let $p_n = P(S_n = S_0)$. For the random walk to return to the starting point after $2n$ steps, the walk must make $n$ steps toward the root and $n$ steps away from it. At each vertex, the single step toward the root is made with probability $\frac{1}{3}$, while a step away from the root (to one of two children) occurs with probability $\frac{2}{3}$. The total probability is therefore bounded by
$$p_{2n} \le \binom{2n}{n}\left(\frac{1}{3}\right)^n\left(\frac{2}{3}\right)^n = \frac{(2n)!}{n!\,n!}\left(\frac{2}{9}\right)^n.$$
Using Stirling's formula, we arrive at:
$$\frac{(2n)!}{n!\,n!}\left(\frac{2}{9}\right)^n \sim \frac{\sqrt{4\pi n}\, e^{-2n}(2n)^{2n}}{\left(\sqrt{2\pi n}\, e^{-n} n^n\right)^2}\left(\frac{2}{9}\right)^n = \frac{2^{2n}}{\sqrt{\pi n}}\left(\frac{2}{9}\right)^n = \frac{1}{\sqrt{\pi n}}\left(\frac{8}{9}\right)^n.$$
Since the sum
$$\sum_{n=1}^{\infty} \frac{1}{\sqrt{\pi n}}\left(\frac{8}{9}\right)^n$$
converges, the first Borel-Cantelli lemma guarantees that the walk is transient.
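The transience is easy to observe by simulating the word representation from the construction above; a sketch (Python; `tree_walk` is an illustrative name, and the word is stored as a list of letters):

```python
import random

def tree_walk(steps, seed=0):
    """Simple random walk on the tree T.  The state is a tree label
    (1 or 2) and a word over {'a','b'}; the empty word is the root o
    (or o').  Each of the three neighbors -- one reduction (or the
    bridge edge, at a root) and two extensions -- is chosen with
    probability 1/3.  Returns (visits to o, final distance from o)."""
    rng = random.Random(seed)
    tree, word = 1, []
    visits = 0
    for _ in range(steps):
        r = rng.randrange(3)
        if word:
            if r == 0:
                word.pop()            # reduction: drop the last letter
            else:
                word.append('a' if r == 1 else 'b')
        else:
            if r == 0:
                tree = 3 - tree       # cross the bridge edge o ~ o'
            else:
                word.append('a' if r == 1 else 'b')
        if tree == 1 and not word:
            visits += 1
    return visits, len(word)

visits, depth = tree_walk(100000, seed=42)
print(visits, depth)  # few returns to o; the walker drifts far away
```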
Theorem 26. [3] With probability 1 the random walk does one of two things: either it visits $T_1$ only finitely often, or it visits $T_2$ only finitely often.

Proof. In order for a random walk on $T$ to visit both $T_1$ and $T_2$ infinitely often, it must visit the points $o, o'$ (the bridge points) infinitely often, since every path from one tree to the other passes through the edge between them. Then we have:
$$P(\text{the random walk visits both } T_1 \text{ and } T_2 \text{ infinitely often}) \le P(\text{the random walk visits the bridge points infinitely often}).$$
However, transience establishes that with probability one the walk visits any given point only finitely often, so the random walk must visit either $T_1$ or $T_2$ only finitely often.
2 Random Walks on Random Environments
2.1 The Creation of the Universe
The random walk considered in the first section is a mathematical model of linear Brownian motion in a homogeneous, or non-random, environment. We now consider a random environment (in one dimension) encountered, for instance, in a magnetic field. Instead of always moving to the right or left with equal probability (as in the simple random walk), or always moving to the right with one probability and to the left with another (as in the biased random walk), the chances of moving to the right or left now vary randomly depending on where the walker is on the one-dimensional integer lattice. Our model is given in two steps:

God creates the Universe. With a sequence of i.i.d. random variables $\mathcal{E} = \{\ldots, E_{-2}, E_{-1}, E_0, E_1, E_2, \ldots\}$ with distribution $P_1(E_0 < x) = F_E(x)$, $F_E(0) = 0$, $F_E(1) = 1$, God visits the integers and to each $i \in \mathbb{Z}$ randomly assigns a value $E_i$ (this is a random number between zero and one).

Life in the Universe. Into a random environment $\mathcal{E}$, a particle is born at the origin and begins a random walk. The particle moves a step to the right with probability $E_0$, and left with probability $1 - E_0$. If after $n$ steps the particle is at point $i$, then the probability of a step to the right is $E_i$, while the probability of a step to the left is $1 - E_i$. We have defined a random walk $R_n$ with $R_0 = 0$ and
$$P_{\mathcal{E}}(R_{n+1} = i + 1 \mid R_n = i, R_{n-1}, R_{n-2}, \ldots, R_1) = 1 - P_{\mathcal{E}}(R_{n+1} = i - 1 \mid R_n = i, R_{n-1}, R_{n-2}, \ldots, R_1) = E_i. \tag{2}$$
Note that when $P_1(E_0 = 1/2) = F_E(1/2 + 0) - F_E(1/2) = 1$, we have our usual simple random walk.
Formally, let $(\Omega_1, \mathcal{F}_1, P_1)$ be a probability space and let $\mathcal{E}(\omega_1)$, $\omega_1 \in \Omega_1$, be a sequence of i.i.d. random variables with $P_1(E_1 < x) = F(x)$, $F(0) = 1 - F(1) = 0$, and
$$\mathcal{E} = \mathcal{E}(\omega_1) = \{\ldots, E_{-1} = E_{-1}(\omega_1),\ E_0 = E_0(\omega_1),\ E_1 = E_1(\omega_1), \ldots\}.$$
Let $(\Omega_2, \mathcal{F}_2)$ be the measurable space of the sequences $\omega_2 = \{\delta_1, \delta_2, \ldots\}$, where $\delta_i = 1$ or $-1$ for $i = 1, 2, \ldots$, and $\mathcal{F}_2$ is the natural $\sigma$-algebra (the $\sigma$-algebra generated by the collection of all product sets). Define the random variables $Y_1, Y_2, \ldots$ on $\Omega_2$ by $Y_i(\omega_2) = \delta_i$ for $i = 1, 2, \ldots$, and let $R_0 = 0$, $R_n = Y_1 + Y_2 + \cdots + Y_n$, $n = 1, 2, \ldots$. Then we construct a probability measure $P$ on the measurable space $\Omega = \Omega_1 \times \Omega_2$, $\mathcal{F} = \mathcal{F}_1 \times \mathcal{F}_2$, as follows: for any given $\omega_1 \in \Omega_1$ we define a measure $P_{\omega_1} = P_{\mathcal{E}(\omega_1)} = P_{\mathcal{E}}$ on $\mathcal{F}_2$ satisfying (2).
Note the difference between $P_1$ and $P_{\mathcal{E}}$: the former is the probability measure used when determining the probability that any $i \in \mathbb{Z}$ is assigned some value $E_i$. Once the random environment has been created, the probability measure $P_{\mathcal{E}}$ is the one applicable to the random walk itself, giving the probability of moving to the right or left.
Let $U_k = (1 - E_k)/E_k$, $V_j = \log U_j$, and $F(x)$ as above. We assume the following conditions:
$$\exists\, 0 < \delta < 1/2 \text{ such that } P_1(\delta < E_0 < 1 - \delta) = 1, \tag{3}$$
$$E_1[V_0] = \int x\, dP_1(V_0 < x) = \int_0^1 \log\frac{1 - x}{x}\, dF(x) = 0, \tag{4}$$
$$0 < \sigma^2 = E_1[V_0^2] = \int_0^1 \left(\log\frac{1 - x}{x}\right)^2 dF(x) < \infty. \tag{5}$$
Note that for the simple random walk, $P_1(U_0 = 1) = P_1(V_0 = 0) = 1$, so (3) and (4) are satisfied. However, (5) is not satisfied, since $E_1[V_0^2] = \sigma^2 = 0$ (the environment has no randomness).
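The two-step model is straightforward to simulate. The sketch below (Python; illustrative only) draws each $E_i$ uniformly from $\{0.3, 0.7\}$, an environment distribution that satisfies conditions (3), (4), (5), since then $V_0 = \pm\log\frac{7}{3}$ with equal probability:

```python
import random

def rwre_path(steps, seed=0):
    """Random walk in a random environment on Z.  Each site i is
    lazily assigned E_i, drawn uniformly from {0.3, 0.7}; from site i
    the walker steps right with probability E_i, left otherwise."""
    rng = random.Random(seed)
    env = {}                      # the environment, created as needed
    pos, path = 0, [0]
    for _ in range(steps):
        if pos not in env:
            env[pos] = rng.choice((0.3, 0.7))
        pos += 1 if rng.random() < env[pos] else -1
        path.append(pos)
    return path

path = rwre_path(10000, seed=7)
print(min(path), max(path))  # extent of the region the walker explored
```

Here a single run mixes both sources of randomness: the environment measure $P_1$ (the draws of $E_i$) and the walk measure $P_{\mathcal{E}}$ (the steps given the environment).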
2.2 Recursion in a Random Environment

We can state a theorem of recurrence in random environments analogous to the result in non-random environments: with probability one, God creates an environment in which the random walk is recurrent. First, two lemmas:
Lemma 27. ([2], p. 311) Let
$$p(a, b, c) = P_{\mathcal{E}}\left(\min\{j : j > m, R_j = a\} < \min\{j : j > m, R_j = c\} \mid R_m = b\right), \qquad a \le b \le c,$$
where $p(a, b, c) = p(a, b, c; \mathcal{E})$ is the probability that a particle starting from $b$ hits $a$ before $c$, given the environment $\mathcal{E}$. Then
$$p(a, b, c) = 1 - \frac{D(a, b)}{D(a, c)},$$
where
$$D(a, b) = \begin{cases} 0 & b = a, \\ 1 & b = a + 1, \\ 1 + \sum_{j=1}^{b-a-1} \prod_{i=1}^{j} U_{a+i} & b \ge a + 2, \end{cases}$$
and in particular,
$$p(0, 1, n) = 1 - \frac{1}{D(n)},$$
where $D(n) = D(0, n) = 1 + U_1 + U_1 U_2 + \cdots + U_1 U_2 \cdots U_{n-1}$.
Proof. Obviously $p(a, a, c) = 1$, $p(a, c, c) = 0$, and $p(a, b, c) = E_b\, p(a, b+1, c) + (1 - E_b)\, p(a, b-1, c)$. Therefore,
$$p(a, b+1, c) - p(a, b, c) = \frac{1 - E_b}{E_b}\left(p(a, b, c) - p(a, b-1, c)\right) = U_b\left(p(a, b, c) - p(a, b-1, c)\right).$$
By iteration, we have
$$p(a, b+1, c) - p(a, b, c) = U_b U_{b-1} \cdots U_{a+1}\left(p(a, a+1, c) - p(a, a, c)\right) = U_b U_{b-1} \cdots U_{a+1}\left(p(a, a+1, c) - 1\right). \tag{6}$$
Adding the above equations for $b = a, a+1, \ldots, c-1$, we have
$$-1 = p(a, c, c) - p(a, a, c) = D(a, c)\left(p(a, a+1, c) - 1\right),$$
or
$$p(a, a+1, c) = 1 - \frac{1}{D(a, c)}.$$
The two equations above imply
$$p(a, b+1, c) - p(a, b, c) = -\frac{1}{D(a, c)}\, U_b U_{b-1} \cdots U_{a+1}.$$
Adding these equations gives
$$p(a, b+1, c) - 1 = p(a, b+1, c) - p(a, a, c) = -\frac{1}{D(a, c)}\left(1 + U_{a+1} + U_{a+1}U_{a+2} + \cdots + U_{a+1}U_{a+2}\cdots U_b\right) = -\frac{D(a, b+1)}{D(a, c)}.$$
So we have the lemma.
Consequence.
$$P\left(\lim_{n\to\infty} p(0, 1, n; \mathcal{E}) = \lim_{n\to\infty}\left(1 - \frac{1}{D(n)}\right) = 1\right) = 1. \tag{7}$$
The consequence follows from the previous lemma and a claim, left unproven, that $\lim_{n\to\infty} D(n) = \infty$ in probability (with respect to $P_1$).
Lemma 28. For any $-\infty < a \le b < \infty$ we have
$$P\left(\liminf_{n\to\infty} R_n = a\right) = P\left(\limsup_{n\to\infty} R_n = b\right) = 0.$$

Proof. Analogue of Lemma 19.
Theorem 29. ([2], p. 311) Assuming conditions (3), (4), (5), we have
$$P(R_n = 0 \text{ i.o.}) = P_1\left(P_{\mathcal{E}}(R_n = 0 \text{ i.o.}) = 1\right) = 1.$$
Assuming (3), (5), and $E_1[V_0] \neq 0$, we have
$$P_1\left(P_{\mathcal{E}}(R_n = 0 \text{ i.o.}) > 0\right) = 0.$$

Proof. Assume that the walk begins with a move to the right. Then Lemma 28 implies that the walker either returns to 0 or else goes to $+\infty$ before it returns. But by (7), for any $\epsilon > 0$ there is an $n_0 = n_0(\epsilon, \mathcal{E})$ such that $p(0, 1, n) = 1 - 1/D(n) \ge 1 - \epsilon$ when $n \ge n_0$. Therefore the probability that the walker returns to zero is greater than $1 - \epsilon$ for every $\epsilon > 0$, and so is one.
References

[1] Grimmett, G. and Stirzaker, D. Probability and Random Processes. Oxford: Clarendon Press, 1992.

[2] Révész, Pál. Random Walk in Random and Non-Random Environments. New Jersey: World Scientific, 2005.

[3] Lawler, Gregory. Lecture notes on probability. http://www.math.uchicago.edu/%7Elawler/probnotes.pdf

[4] Spitzer, Frank. Principles of Random Walk. Princeton: Springer, 1964.

[5] Kalikow, S. A. Generalized Random Walk in a Random Environment. The Annals of Probability, 9, 753-768.

[6] Doyle, Peter and Snell, J. Laurie. Random Walks and Electric Networks. http://arxiv.org/abs/math/0001057