
ADVANCED PROBABILITY HOMEWORK 2

GROUP 4
(1120240065)
(2120240153)
(2120240161)
(1120240072)
(2120240160)

Date: 2024.10.18.

2.2.2 The L² weak law generalizes immediately to certain dependent sequences. Suppose EXn = 0 and E(Xn Xm) ≤ r(n − m) for m ≤ n (no absolute value on the left-hand side!) with r(k) → 0 as k → ∞. Show that (X1 + · · · + Xn)/n → 0 in probability.

Proof. We assume that r(0) is finite; note that E[Xn²] ≤ r(0) by the hypothesis with m = n. Let Sn = X1 + · · · + Xn; since E[Xn] = 0 for every n, we have E[Sn] = 0. We first compute E[(Sn/n)²]:

E[(Sn/n)²] = Var(Sn/n) = (1/n²) Var(Sn) = (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} E[Xi Xj].

Grouping the double sum by diagonals j − i = 0, 1, . . . , n − 1, we get

(1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} E[Xi Xj] = (1/n²) ( Σ_{i=1}^{n} E[Xi²] + 2 Σ_{i=1}^{n−1} E[Xi Xi+1] + 2 Σ_{i=1}^{n−2} E[Xi Xi+2] + · · · + 2 E[X1 Xn] ).

Now, since r(k) → 0 as k → ∞, for a given ε > 0 there exists N such that |r(k)| ≤ ε for all k ≥ N. Moreover, since E[Xi²] ≤ r(0) < ∞, the Cauchy-Schwarz inequality gives E[|Xi Xj|] ≤ (E[Xi²])^{1/2} (E[Xj²])^{1/2} ≤ r(0) < ∞. Then, for n ≥ N + 1, bounding the first N diagonals by Cauchy-Schwarz and the remaining diagonals by the hypothesis E[Xi Xj] ≤ r(j − i), we have

E[(Sn/n)²] ≤ (1/n²) ( Σ_{i=1}^{n} E[Xi²] + 2 Σ_{i=1}^{n−1} E[|Xi Xi+1|] + · · · + 2 Σ_{i=1}^{n−N+1} E[|Xi Xi+N−1|] + 2 Σ_{i=1}^{n−N} E[Xi Xi+N] + · · · + 2 E[X1 Xn] )

≤ (1/n²) ( n r(0) + 2(n − 1) r(0) + · · · + 2(n − N + 1) r(0) + 2 Σ_{i=1}^{n−N} r(N) + 2 Σ_{i=1}^{n−N−1} r(N + 1) + · · · + 2 r(n − 1) )

≤ (1/n²) ( 2N n r(0) + 2 Σ_{i=1}^{n−N} |r(N)| + 2 Σ_{i=1}^{n−N−1} |r(N + 1)| + · · · + 2 |r(n − 1)| )

≤ (1/n²) ( 2N n r(0) + 2 Σ_{i=1}^{n−N} iε )

≤ (1/n²) ( 2N n r(0) + 2 Σ_{i=1}^{n} iε )

= 2N r(0)/n + (n(n + 1)/n²) ε.

Letting n → ∞ gives lim sup_{n→∞} E[(Sn/n)²] ≤ ε. Since ε > 0 is arbitrary, we conclude that E[(Sn/n)²] → 0, i.e., Sn/n → 0 in L². By Chebyshev's inequality, P(|Sn/n| > δ) ≤ E[(Sn/n)²]/δ² → 0 for every δ > 0, so Sn/n → 0 in probability. □
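As a numerical sanity check (not part of the solution), the following Python sketch simulates one sequence satisfying the hypotheses, namely the moving average Xn = (Zn + Zn+1)/2 with Zi i.i.d. N(0, 1), for which E[Xn Xm] = 0 whenever n − m ≥ 2, and estimates E|Sn/n| and P(|Sn/n| > 0.1):

    import numpy as np

    rng = np.random.default_rng(0)

    def abs_mean(n, reps=500):
        # Each row holds one realization of Z_1, ..., Z_{n+1}.
        z = rng.standard_normal((reps, n + 1))
        x = 0.5 * (z[:, :-1] + z[:, 1:])      # X_i = (Z_i + Z_{i+1})/2, so r(k) = 0 for k >= 2
        return np.abs(x.mean(axis=1))         # |S_n/n| for each realization

    for n in (10, 100, 1000, 10000):
        d = abs_mean(n)
        print(n, d.mean(), (d > 0.1).mean())  # both columns shrink toward 0

Both printed columns decrease roughly like n^{−1/2}, matching the order-1/n bound on E[(Sn/n)²] obtained above.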

" #
2.2.4 Let X1, X2, . . . be i.i.d. with P(Xi = (−1)^k k) = C/(k² log k) for k ≥ 2, where C is chosen to make the sum of the probabilities equal to 1. Show that E|Xi| = ∞, but there is a finite constant µ so that Sn/n → µ in probability.

Proof. First,

E|Xi| = Σ_{k=2}^∞ k · P(Xi = (−1)^k k) = Σ_{k=2}^∞ k · C/(k² log k) = C Σ_{k=2}^∞ 1/(k log k).

Let f(x) = 1/(x log x) for x ∈ [2, ∞); f is strictly decreasing and non-negative. Since

∫_2^∞ f(x) dx = ∫_2^∞ 1/(x log x) dx = [log log x]_2^∞ = ∞,

the integral test gives Σ_{k=2}^∞ 1/(k log k) = ∞. Thus

E|Xi| = C Σ_{k=2}^∞ 1/(k log k) = ∞.

On the other hand,

nP(|Xi| > n) = n Σ_{k=n+1}^∞ P(|Xi| = k) = n Σ_{k=n+1}^∞ C/(k² log k) ≤ (Cn/log n) Σ_{k=n+1}^∞ 1/(k(k − 1)) = (Cn/log n) Σ_{k=n+1}^∞ (1/(k − 1) − 1/k) = C/log n → 0 as n → ∞,

and

µn = E[Xi 1{|Xi| ≤ n}] = Σ_{k=2}^{n} (−1)^k k · C/(k² log k) = C Σ_{k=2}^{n} (−1)^k/(k log k).

Since Σ_{k=2}^∞ (−1)^k/(k log k) is a Leibniz (alternating) series with terms decreasing to 0 in absolute value, it converges. Denote µ = C Σ_{k=2}^∞ (−1)^k/(k log k); then µn → µ as n → ∞. The two displays above are exactly the hypotheses of the weak law for triangular arrays (Theorem 2.2.12), so Sn/n − µn → 0 in probability. For ϵ > 0, P(|Sn/n − µn| < ϵ/2) → 1 as n → ∞, and |µn − µ| < ϵ/2 when n is sufficiently large, so

P(|Sn/n − µ| < ϵ) ≥ P(|Sn/n − µn| < ϵ/2, |µn − µ| < ϵ/2) → 1 as n → ∞.

Hence Sn/n → µ in probability. □
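A rough Python simulation (illustration only; to sample the law we truncate the support at some K of our choosing, which incidentally makes E|X| finite) shows the sample mean sitting near µ, though the heavy tails make the fluctuations large:

    import numpy as np

    K = 10**6                                  # truncation point (assumption of this sketch)
    k = np.arange(2, K + 1)
    w = 1.0 / (k**2 * np.log(k))
    C = 1.0 / w.sum()                          # normalizing constant of the truncated law
    mu = C * np.sum((-1.0)**k / (k * np.log(k)))

    rng = np.random.default_rng(1)
    ks = rng.choice(k, size=10**6, p=C * w)    # sample the support points k
    x = (-1.0)**ks * ks                        # X = (-1)^k k
    print(x.mean(), mu)                        # S_n/n lands near mu, with heavy-tailed noise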

2.2.5 Let X1, X2, . . . be i.i.d. with P(Xi > x) = e/(x log x) for x ≥ e. Show that E|Xi| = ∞, but there is a sequence of constants µn → ∞ so that Sn/n − µn → 0 in probability.

Proof. We check the two conditions of Theorem 2.2.12.

Condition (1): we need to verify that xP(|Xi| > x) → 0 as x → ∞. Note that P(Xi > e) = e/(e log e) = 1, so Xi ≥ e > 0 a.s., and by the given tail,

xP(|Xi| > x) = xP(Xi > x) = e/log x → 0 as x → ∞.

Condition (2): we need to verify that µn = E(X1 1(|X1| ≤ n)) exists and compute it. Since X1 ≥ 0,

µn = E(X1 1(X1 ≤ n)) = ∫_0^n P(X1 > x) dx − nP(X1 > n) = e + ∫_e^n e/(x log x) dx − e/log n = e + e log(log n) − e/log n.

Therefore, by Theorem 2.2.12,

Sn/n − µn → 0 in probability.

Moreover, we can show that E|Xi| = ∞ by the following inequality: for every n,

E|Xi| = EXi = ∫_0^∞ P(Xi > x) dx ≥ ∫_e^n e/(x log x) dx = e log(log n).

Since e log(log n) → ∞ as n → ∞, we obtain E|Xi| = ∞. □
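This law is easy to simulate, since the tail inverts in closed form via the Lambert W function: x log x = c gives x = exp(W(c)). The Python sketch below (which assumes SciPy is available) compares Sn/n with the µn computed above; the gap shrinks only at log-log speed, so it is visible but modest:

    import numpy as np
    from scipy.special import lambertw

    rng = np.random.default_rng(2)

    def sample(n):
        u = rng.random(n)
        # P(X > x) = e/(x log x): solve 1 - e/(x log x) = u for x.
        return np.exp(lambertw(np.e / (1.0 - u)).real)

    for n in (10**4, 10**5, 10**6):
        x = sample(n)
        mu_n = np.e * (1.0 + np.log(np.log(n)) - 1.0 / np.log(n))  # E[X 1{X <= n}]
        print(n, x.mean() - mu_n)   # small relative to mu_n, shrinking slowly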



2.3.5 Dominated convergence. Suppose Xn → X in probability and (a) |Xn| ≤ Y with EY < ∞, or (b) there is a continuous function g with g(x) > 0 for large x and |x|/g(x) → 0 as |x| → ∞, so that Eg(Xn) ≤ C < ∞ for all n. Show that EXn → EX.

Proof. (a) By Theorem 2.3.2 in the text, for every subsequence Xn(m) there is a further subsequence Xn(mk) that converges almost surely to X. Since |Xn(mk)| ≤ Y for all k and EY < ∞, Y is integrable, so by the dominated convergence theorem EXn(mk) → EX. Thus every subsequence (EXn(m)) of (EXn) has a sub-subsequence EXn(mk) converging to EX, and therefore EXn → EX.

(b) Again by Theorem 2.3.2, every subsequence Xn(m) has a further subsequence Xn(mk) that converges almost surely to X. Take h(x) = x; since g is continuous with g(x) > 0 for large x, |h(x)|/g(x) = |x|/g(x) → 0 as |x| → ∞, and Eg(Xn) ≤ C < ∞ for all n, Theorem 1.6.8 in the text gives EXn(mk) → EX. As in (a), every subsequence of (EXn) has a sub-subsequence converging to EX, so EXn → EX. □
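A small numerical illustration of part (b) with g(x) = x² (a toy example of our own, not from the text): Xn = X plus a rare spike of size √n keeps Eg(Xn) ≤ 2 and forces EXn → EX, while a spike of size n breaks the g-moment bound and shifts the limit of EXn by 1:

    import numpy as np

    rng = np.random.default_rng(3)
    reps = 10**6
    x = rng.standard_normal(reps)          # X ~ N(0,1), EX = 0

    for n in (10, 100, 1000):
        b = rng.random(reps) < 1.0 / n     # spike indicator, P(spike) = 1/n
        xn_good = x + b * np.sqrt(n)       # E[X_n^2] <= 2: g-condition holds
        xn_bad = x + b * n                 # E[X_n^2] ~ n: g-condition fails
        print(n, xn_good.mean(), xn_bad.mean())  # first -> 0 = EX, second stays near 1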

2.3.8 Let An be a sequence of independent events with P(An) < 1 for all n. Show that P(∪An) = 1 implies Σn P(An) = ∞ and hence P(An i.o.) = 1.

Proof. Since P(∪n An) = 1, De Morgan's laws give P(∩n Acn) = P((∪n An)^c) = 1 − P(∪n An) = 0.

Since the An are independent, the Acn are also independent, so

P(∩_{n=1}^∞ Acn) = ∏_{n=1}^∞ P(Acn) = ∏_{n=1}^∞ (1 − P(An)) = 0.

Because P(An) < 1 for all n, the partial product ∏_{n=1}^{m−1} (1 − P(An)) is strictly positive for every m ≥ 2. Hence, for all m ∈ N,

P(∩_{n=m}^∞ Acn) = ∏_{n=m}^∞ (1 − P(An)) = 0.

Note that {An i.o.}^c = lim inf_{n→∞} Acn = ∪_{m=1}^∞ ∩_{n=m}^∞ Acn, so

P({An i.o.}^c) = P(∪_{m=1}^∞ ∩_{n=m}^∞ Acn) ≤ Σ_{m=1}^∞ P(∩_{n=m}^∞ Acn) = 0.

Then P(An i.o.) = 1. Finally, by Theorem 2.3.1 (the first Borel-Cantelli lemma), Σ_{n=1}^∞ P(An) < ∞ would force P(An i.o.) = 0, so we must have Σ_{n=1}^∞ P(An) = ∞. □
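The dichotomy driving the proof is that ∏(1 − pn) = 0 exactly when Σ pn = ∞ (given pn < 1). A quick Python check on pn = 1/(n + 1) (divergent sum, vanishing product) versus pn = 1/(n + 1)² (convergent sum, positive product):

    import numpy as np

    n = np.arange(1, 10**6)
    for p in (1.0 / (n + 1), 1.0 / (n + 1) ** 2):
        # Divergent sum goes with a product collapsing to 0;
        # convergent sum goes with a product staying positive (~0.5 here).
        print(p.sum(), np.prod(1.0 - p))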

2.3.11 Let X1, X2, . . . be independent with P(Xn = 1) = pn and P(Xn = 0) = 1 − pn. Show that (i) Xn → 0 in probability if and only if pn → 0, and (ii) Xn → 0 a.s. if and only if Σ pn < ∞.

Proof. (i) Necessity: Assume Xn → 0 in probability. Then for any 0 < ϵ < 1, we have

P(|Xn − 0| ≥ ϵ) → 0.

Since Xn takes values only in {0, 1}, we get

P(|Xn − 0| ≥ ϵ) = P(Xn = 1) = pn.

Thus, if Xn → 0 in probability, then pn → 0.

Sufficiency: If pn → 0, then for any 0 < ϵ < 1,

P(|Xn − 0| ≥ ϵ) = pn → 0.

This implies that Xn → 0 in probability. Thus, we conclude that:

Xn → 0 in probability if and only if pn → 0.

(ii) Sufficiency: If Σ_{n=1}^∞ pn < ∞, then by the first Borel-Cantelli lemma applied to the events {Xn = 1}, we have

Σ_{n=1}^∞ P(Xn = 1) < ∞ =⇒ P(Xn = 1 i.o.) = 0.

Since Xn ∈ {0, 1}, Xn(ω) → 0 exactly when Xn(ω) = 1 for only finitely many n, so Xn → 0 a.s.

Necessity: If Xn → 0 a.s., then P(Xn = 1 i.o.) = 0. By the second Borel-Cantelli lemma, if the Xn are independent and Σ_{n=1}^∞ P(Xn = 1) = Σ_{n=1}^∞ pn = ∞, then P(Xn = 1 i.o.) = 1. Therefore, if Xn → 0 a.s., it must be that Σ_{n=1}^∞ pn < ∞.

Thus, we conclude that:

Xn → 0 a.s. if and only if Σ_{n=1}^∞ pn < ∞. □
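A simulated contrast between the two parts (illustration only): with pn = 1/n the sum diverges and ones keep recurring along a path, while with pn = 1/n² only a handful of ones appear, all of them early:

    import numpy as np

    rng = np.random.default_rng(4)
    N = 10**6
    n = np.arange(1, N + 1)
    u = rng.random(N)

    for label, p in (("1/n", 1.0 / n), ("1/n^2", 1.0 / n**2)):
        ones = np.nonzero(u < p)[0] + 1     # indices n with X_n = 1
        print(label, len(ones), ones[-5:])  # 1/n: late ones persist; 1/n^2: last one is early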

2.3.20 Show that if Xn is the outcome of the nth play of the St. Petersburg game (Example 2.2.16), then lim sup_{n→∞} Xn/(n log2 n) = ∞ a.s., and hence the same result holds for Sn. This shows that the convergence Sn/(n log2 n) → 1 in probability proved in Section 2.2 does not occur a.s.

Proof. Recall that P(X1 = 2^j) = 2^{−j} for j ≥ 1. For fixed M > 0, let Tn = ⌊log2(M · n log2 n)⌋ + 1. Then, for n ≥ 2,

P(X1 ≥ M · n log2 n) ≥ Σ_{j=Tn}^∞ 1/2^j = 1/2^{Tn − 1} = 1/2^{⌊log2(M · n log2 n)⌋} ≥ 1/(M · n log2 n).

Then, since Σ_{n=2}^∞ 1/(n log2 n) = ∞ by the same integral-test argument as in Exercise 2.2.4, we have

Σ_{n=2}^∞ P(Xn/(n log2 n) ≥ M) ≥ Σ_{n=2}^∞ 1/(M · n log2 n) = ∞.

Note that the Xn are independent, so by the second Borel-Cantelli lemma, P(Xn/(n log2 n) ≥ M i.o.) = 1. As M > 0 is arbitrary, lim sup_{n→∞} Xn/(n log2 n) = ∞ a.s. Since Sn ≥ Xn ≥ 0, the same holds for Sn, so the convergence Sn/(n log2 n) → 1 in probability cannot occur almost surely. □
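A single simulated path (illustration only; payoffs are drawn as X = 2^J with J geometric of parameter 1/2) shows the running record of Xn/(n log2 n) climbing to ever-larger values, in line with the lim sup being infinite:

    import numpy as np

    rng = np.random.default_rng(5)
    N = 10**6
    j = rng.geometric(0.5, size=N)             # P(J = j) = 2^{-j}, j = 1, 2, ...
    x = np.exp2(j.astype(np.float64))          # X_n = 2^J
    n = np.arange(2, N + 2)                    # index from 2 so that log2(n) > 0
    record = np.maximum.accumulate(x / (n * np.log2(n)))
    print(record[[10**2, 10**4, 10**6 - 1]])   # the running record keeps growing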

2.4.2 Let X0 = (1, 0) and define Xn ∈ R² inductively by declaring that Xn+1 is chosen at random from the ball of radius |Xn| centered at the origin, i.e., Xn+1/|Xn| is uniformly distributed on the ball of radius 1 and independent of X1, . . . , Xn. Prove that n^{−1} log |Xn| → c a.s. and compute c.

Proof. Let Un = Xn/|Xn−1| for n = 1, 2, . . .. Then U1, U2, . . . are i.i.d., each uniformly distributed on the unit ball. For 0 ≤ x ≤ 1, using polar coordinates,

P(|Un| ≤ x) = ∫_0^{2π} ∫_0^x (1/π) r dr dθ = x².

Thus |Un| has density 2x on [0, 1], and

E[log |Un|] = ∫_0^1 2x log x dx = −1/2.

Clearly E[|log |Un||] = 1/2 < ∞. Since |X0| = 1, we have log |Xn| = Σ_{k=1}^{n} log |Uk|, so by the SLLN

n^{−1} log |Xn| = n^{−1} Σ_{k=1}^{n} log |Uk| → E[log |U1|] = −1/2 a.s.

Hence c = −1/2. □
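Since P(|Uk| ≤ x) = x², the radii can be sampled directly as the square root of a uniform variable; the running average of log |Uk| is exactly n^{−1} log |Xn| and settles at c = −1/2 (Python sketch, illustration only):

    import numpy as np

    rng = np.random.default_rng(6)
    r = np.sqrt(rng.random(10**6))            # |U_k| with P(|U_k| <= x) = x^2
    avg = np.cumsum(np.log(r)) / np.arange(1, 10**6 + 1)   # n^{-1} log|X_n|
    print(avg[[99, 9999, 999999]])            # -> -0.5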

2.5.2 The converse of Theorem 2.5.12 is much easier. Let p > 0. If Sn/n^{1/p} → 0 a.s., then E|X1|^p < ∞.

Proof. Assume that E|X1|^p = ∞. Then

E|X1|^p = ∫_0^∞ P(|X1|^p > y) dy ≤ Σ_{n=0}^∞ P(|X1|^p > n) = Σ_{n=0}^∞ P(|X1| > n^{1/p}),

so Σ_{n=0}^∞ P(|X1| > n^{1/p}) = ∞. Since X1, X2, . . . are i.i.d., the events {|Xn| > n^{1/p}} are independent, and the second Borel-Cantelli lemma gives P(|Xn| > n^{1/p} i.o.) = 1. But if Sn/n^{1/p} → 0 a.s., then

Xn/n^{1/p} = Sn/n^{1/p} − ((n − 1)/n)^{1/p} · Sn−1/(n − 1)^{1/p} → 0 a.s.,

so |Xn| > n^{1/p} can occur only finitely often a.s., a contradiction. Hence E|X1|^p < ∞. □
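For p = 1 the statement says: if E|X1| = ∞, then Sn/n cannot go to 0 a.s. A standard Cauchy sample path (our own choice of example) illustrates both the non-convergence and the recurring events |Xn| > n that drive the Borel-Cantelli argument:

    import numpy as np

    rng = np.random.default_rng(7)
    N = 10**6
    x = rng.standard_cauchy(N)                 # E|X_1| = infinity
    s = np.cumsum(x) / np.arange(1, N + 1)     # S_n / n
    print(s[[10**3, 10**5, N - 1]])            # does not settle at 0
    print(np.count_nonzero(np.abs(x) > np.arange(1, N + 1)))  # |X_n| > n still occurs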



2.5.3 Let X1, X2, . . . be i.i.d. standard normals. Show that for any t,

Σ_{n=1}^∞ Xn · sin(nπt)/n converges a.s.

We will see this series again at the end of Section 8.1.

Proof. Let Yn = Xn · sin(nπt)/n. Then E[Yn] = (sin(nπt)/n) · E[Xn] = 0, so Σ_{n=1}^∞ E[Yn] = 0 < ∞. Also

Var(Yn) = ((sin(nπt))²/n²) · Var(Xn) ≤ 1/n²,

so Σ_{n=1}^∞ Var(Yn) ≤ Σ_{n=1}^∞ 1/n² = π²/6 < ∞. Hence, by Kolmogorov's two-series theorem, Σ_{n=1}^∞ Yn converges a.s., i.e., Σ_{n=1}^∞ Xn · sin(nπt)/n converges a.s. □
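A simulated path of the partial sums at the (arbitrarily chosen) value t = 1/3 illustrates the almost sure convergence (sketch only):

    import numpy as np

    rng = np.random.default_rng(8)
    N = 10**6
    n = np.arange(1, N + 1)
    y = rng.standard_normal(N) * np.sin(n * np.pi / 3.0) / n   # Y_n at t = 1/3
    partial = np.cumsum(y)
    print(partial[[10**2, 10**4, N - 1]])      # successive partial sums stabilize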
