Convergence in Mean https://www.probabilitycourse.com/chapter7/7_2_6_convergence_in_m...
7.2.6 Convergence in Mean
One way of interpreting the convergence of a sequence $X_n$ to $X$ is to say that the "distance" between $X$ and $X_n$ is getting smaller and smaller. For example, if we define the distance between $X_n$ and $X$ as $P(|X_n-X| \geq \epsilon)$, we have convergence in probability. Another way to define the distance between $X_n$ and $X$ is
$$E\left(|X_n-X|^r\right),$$
where $r \geq 1$ is a fixed number. This leads to convergence in mean. (Note: for convergence in mean, it is usually required that $E|X_n^r|<\infty$.) The most common choice is $r=2$, in which case it is called mean-square convergence. (Note: some authors refer to the case $r=1$ simply as convergence in mean.)
Convergence in Mean

Let $r \geq 1$ be a fixed number. A sequence of random variables $X_1, X_2, X_3, \cdots$ converges in the $r$th mean or in the $L^r$ norm to a random variable $X$, shown by $X_n \ \xrightarrow{L^r}\ X$, if
$$\lim_{n\rightarrow \infty} E\left(|X_n-X|^r\right)=0.$$
If $r=2$, it is called mean-square convergence, and it is shown by $X_n \ \xrightarrow{m.s.}\ X$.
Example 7.10

Let $X_n \sim Uniform\left(0, \frac{1}{n}\right)$. Show that $X_n \ \xrightarrow{L^r}\ 0$, for any $r \geq 1$.

• Solution
  ◦ The PDF of $X_n$ is given by
$$f_{X_n}(x)=\begin{cases} n & \quad 0 \leq x \leq \frac{1}{n} \\ 0 & \quad \text{otherwise} \end{cases}$$
We have
$$E\left(|X_n-0|^r\right)=\int_{0}^{\frac{1}{n}} x^r n \, dx = \frac{1}{(r+1)n^r} \rightarrow 0, \qquad \text{for all } r \geq 1.$$
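As a quick sanity check (not part of the original text), the closed form $E(|X_n|^r)=\frac{1}{(r+1)n^r}$ can be compared against a Monte Carlo estimate; the function name and parameters below are illustrative:

```python
import random

random.seed(0)

def mean_rth_power(n, r, trials=200_000):
    """Monte Carlo estimate of E(|X_n|^r) for X_n ~ Uniform(0, 1/n)."""
    return sum(random.uniform(0, 1 / n) ** r for _ in range(trials)) / trials

r = 2
for n in (1, 10, 100):
    est = mean_rth_power(n, r)
    exact = 1 / ((r + 1) * n ** r)  # closed form from the integral above
    print(f"n={n:3d}  estimate={est:.6f}  exact={exact:.6f}")
```

The estimates shrink toward $0$ as $n$ grows, consistent with $X_n \ \xrightarrow{L^r}\ 0$.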
Theorem 7.3

Let $1 \leq r \leq s$. If $X_n \ \xrightarrow{L^s}\ X$, then $X_n \ \xrightarrow{L^r}\ X$.

• Proof
  ◦ We can use Hölder's inequality, which was proved earlier. Hölder's inequality states that
$$E|XY| \leq \left(E|X|^p\right)^{\frac{1}{p}} \left(E|Y|^q\right)^{\frac{1}{q}},$$
where $1 < p,q < \infty$ and $\frac{1}{p}+\frac{1}{q}=1$. In Hölder's inequality, choose
$$X=|X_n-X|^r, \qquad Y=1, \qquad p=\frac{s}{r}>1.$$
We obtain
$$E|X_n-X|^r \leq \left(E|X_n-X|^s\right)^{\frac{1}{p}}.$$
Now, by assumption $X_n \ \xrightarrow{L^s}\ X$, which means
$$\lim_{n\rightarrow \infty} E\left(|X_n-X|^s\right)=0.$$
We conclude
$$\lim_{n\rightarrow \infty} E\left(|X_n-X|^r\right) \leq \lim_{n\rightarrow \infty} \left(E|X_n-X|^s\right)^{\frac{1}{p}}=0.$$
Therefore, $X_n \ \xrightarrow{L^r}\ X$.
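The key inequality in the proof, $E|X_n-X|^r \leq \left(E|X_n-X|^s\right)^{r/s}$ for $r \leq s$, can be checked numerically on any distribution. Below is a minimal sketch on a small discrete distribution whose values and probabilities are arbitrary, chosen purely for illustration:

```python
# Check E|X|^r <= (E|X|^s)^(r/s) for r <= s on a small discrete distribution.
# The values and probabilities below are hypothetical.
values = [0.5, 1.0, 3.0]
probs = [0.2, 0.5, 0.3]

def moment(r):
    """E(|X|^r) for the discrete distribution above."""
    return sum(p * abs(v) ** r for v, p in zip(values, probs))

r, s = 1.5, 4.0
lhs = moment(r)
rhs = moment(s) ** (r / s)
print(f"E|X|^r = {lhs:.4f}  <=  (E|X|^s)^(r/s) = {rhs:.4f}")
assert lhs <= rhs
```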
As we mentioned before, convergence in mean is stronger than convergence in probability. We can prove this
using Markov's inequality.
Theorem 7.4
If $X_n \ \xrightarrow{L^r}\ X$ for some $r \geq 1$, then $X_n \ \xrightarrow{p}\ X$.

• Proof
  ◦ For any $\epsilon > 0$, we have
$$\begin{align} P\big(|X_n-X| \geq \epsilon \big) &= P\big(|X_n-X|^r \geq \epsilon^r \big) \qquad (\text{since } r \geq 1)\\ &\leq \frac{E|X_n-X|^r}{\epsilon^r} \qquad (\text{by Markov's inequality}). \end{align}$$
Since by assumption $\lim_{n\rightarrow \infty} E\left(|X_n-X|^r\right)=0$, we conclude
$$\lim_{n\rightarrow \infty} P\big(|X_n-X| \geq \epsilon \big)=0, \qquad \text{for all } \epsilon>0.$$
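The Markov step in the proof can be illustrated numerically. The sketch below reuses $X_n \sim Uniform(0,\frac{1}{n})$ from Example 7.10 and compares the empirical $P(|X_n| \geq \epsilon)$ with the bound $E(|X_n|^r)/\epsilon^r$; the function name and parameter defaults are my own choices:

```python
import random

random.seed(1)

def markov_check(n, r=2, eps=0.05, trials=100_000):
    """Empirical P(|X_n| >= eps) vs. the Markov bound E(|X_n|^r)/eps^r
    for X_n ~ Uniform(0, 1/n)."""
    samples = [random.uniform(0, 1 / n) for _ in range(trials)]
    prob = sum(x >= eps for x in samples) / trials
    bound = sum(x ** r for x in samples) / trials / eps ** r
    return prob, bound

for n in (5, 20, 100):
    prob, bound = markov_check(n)
    print(f"n={n:3d}  P(|Xn| >= eps) = {prob:.4f}  bound = {bound:.4f}")
```

Both columns go to $0$ as $n$ grows, consistent with convergence in probability.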
The converse of Theorem 7.4 is not true in general. That is, there are sequences that converge in probability
but not in mean. Let us look at an example.
Example 7.11
Consider a sequence $\{X_n, \ n=1,2,3, \cdots\}$ such that
$$X_n = \begin{cases} n^2 & \quad \text{with probability } \frac{1}{n} \\ 0 & \quad \text{with probability } 1-\frac{1}{n} \end{cases}$$
Show that

a. $X_n \ \xrightarrow{p}\ 0$.
b. $X_n$ does not converge in the $r$th mean for any $r \geq 1$.
• Solution
  ◦ a. To show $X_n \ \xrightarrow{p}\ 0$, we can write, for any $\epsilon>0$,
$$\begin{align} \lim_{n\rightarrow \infty} P\big(|X_n| \geq \epsilon \big) &= \lim_{n\rightarrow \infty} P\big(X_n=n^2\big)\\ &= \lim_{n\rightarrow \infty} \frac{1}{n}\\ &= 0. \end{align}$$
We conclude that $X_n \ \xrightarrow{p}\ 0$.
  ◦ b. For any $r \geq 1$, we can write
$$\begin{align} \lim_{n\rightarrow \infty} E\left(|X_n|^r\right) &= \lim_{n\rightarrow \infty} \left(n^{2r} \cdot \frac{1}{n}+0 \cdot \left(1-\frac{1}{n}\right)\right)\\ &= \lim_{n\rightarrow \infty} n^{2r-1}\\ &= \infty \qquad (\text{since } r \geq 1). \end{align}$$
Therefore, $X_n$ does not converge in the $r$th mean for any $r \geq 1$. In particular, it is interesting to note that, although $X_n \ \xrightarrow{p}\ 0$, the expected value of $X_n$ does not converge to $0$.
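The contrast in Example 7.11 is easy to tabulate exactly, since both $P(X_n=n^2)=\frac{1}{n}$ and $E(X_n)=n$ have closed forms; the helper names below are illustrative:

```python
# X_n from Example 7.11: X_n = n^2 with probability 1/n, and 0 otherwise.
def prob_nonzero(n):
    """P(X_n = n^2) = 1/n; for 0 < eps <= n^2 this equals P(|X_n| >= eps)."""
    return 1 / n

def expected_value(n):
    """E(X_n) = n^2 * (1/n) + 0 * (1 - 1/n) = n."""
    return n ** 2 * (1 / n)

for n in (10, 100, 1000):
    print(f"n={n:5d}  P(Xn = n^2) = {prob_nonzero(n):.4f}  E(Xn) = {expected_value(n):.0f}")
```

The probability of being away from $0$ vanishes while the mean blows up: convergence in probability without convergence in mean.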