
Mathematical Expectation

• For a r.v. X or a function g(X)
  • Mean or Expected Value
  • Variance
• For two r.v.'s X and Y or a function g(X, Y)
  • Expected Value
  • Covariance and Correlation
• Linear Combinations of r.v.'s
• Chebyshev's Inequality
Expected Value
• For a discrete r.v. X with p.m.f. fX(x):
  μX = E[X] = Σx x fX(x)
• For a continuous r.v. X with p.d.f. fX(x):
  μX = E[X] = ∫−∞^∞ x fX(x) dx
• Some basic rules: a, b are nonrandom
  • E[a] = a
  • E[aX] = a E[X]
  • E[aX + b] = a E[X] + b
Expected Value
• Discrete Example: Flip 2 coins; X = number of heads
  fX(0) = 1/4, fX(1) = 1/2, fX(2) = 1/4
  E[X] = (0)(1/4) + (1)(1/2) + (2)(1/4) = 1
  On average we expect to see one head.
  E[4X + 3] = 4 E[X] + 3 = 4(1) + 3 = 7
• Continuous Example:
  fX(x) = 2(x − 1) for 1 < x < 2; o/w fX(x) = 0
  E[X] = ∫1^2 x · 2(x − 1) dx = 2 ∫1^2 (x² − x) dx = 5/3
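The two-coin computation above can be reproduced in a few lines of Python (a sketch; the `expect` helper name is made up, not from the slides):

```python
from fractions import Fraction as F

# p.m.f. of X = number of heads in two fair coin flips
pmf = {0: F(1, 4), 1: F(1, 2), 2: F(1, 4)}

def expect(g, pmf):
    """E[g(X)] = sum of g(x) * f_X(x) over the support."""
    return sum(g(x) * p for x, p in pmf.items())

ex = expect(lambda x: x, pmf)             # E[X] = 1
e_lin = expect(lambda x: 4 * x + 3, pmf)  # E[4X + 3] = 7
```

Using `Fraction` keeps the arithmetic exact, so the results match the hand calculation without rounding error.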
Law of the Unconscious Statistician
• For a discrete r.v. X with p.m.f. fX(x), the expected value of a function g(X) is
  μg(X) = E[g(X)] = Σx g(x) fX(x)
• For a continuous r.v. X with p.d.f. fX(x), the expected value of a function g(X) is
  μg(X) = E[g(X)] = ∫−∞^∞ g(x) fX(x) dx
• The distribution of g(X) itself is not needed.
Law of the Unconscious Statistician
• Discrete Example: Flip 2 coins
  fX(0) = 1/4, fX(1) = 1/2, fX(2) = 1/4
  E[3/(X + 1)] = (3/(0+1))(1/4) + (3/(1+1))(1/2) + (3/(2+1))(1/4) = 7/4
• Continuous Example:
  fX(x) = 2(x − 1) for 1 < x < 2; o/w fX(x) = 0
  E[X²] = ∫1^2 x² · 2(x − 1) dx = 2 ∫1^2 (x³ − x²) dx = 17/6
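As a numerical check of the continuous case, E[g(X)] = ∫ g(x) fX(x) dx can be approximated with a midpoint Riemann sum (a sketch; `lotus_continuous` is a hypothetical helper name):

```python
def lotus_continuous(g, f, a, b, n=100_000):
    """Approximate E[g(X)] = integral of g(x) * f(x) dx by a midpoint rule."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) * f(a + (i + 0.5) * h)
               for i in range(n)) * h

f = lambda x: 2 * (x - 1)                          # p.d.f. on (1, 2)
e_x = lotus_continuous(lambda x: x, f, 1, 2)       # ≈ 5/3
e_x2 = lotus_continuous(lambda x: x * x, f, 1, 2)  # ≈ 17/6
```

Note that only the density of X is needed, never the distribution of g(X), which is exactly the point of the law.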
Variance
• For any r.v. X, the variance is the expected squared deviation about its mean:
  σX² = V(X) = E[(X − μX)²] = E[X²] − μX²
  • Note that variance cannot be negative.
• Law of the Unconscious Statistician:
  σg(X)² = V(g(X)) = E[(g(X) − μg(X))²]
• Some basic rules: a, b are nonrandom
  • V(a) = 0
  • V(aX) = a² V(X)
  • V(aX + b) = a² V(X)
Variance
• Discrete Example: Flip 2 coins
  fX(0) = 1/4, fX(1) = 1/2, fX(2) = 1/4; E[X] = 1
  E[X²] = (0²)(1/4) + (1²)(1/2) + (2²)(1/4) = 3/2
  V(X) = E[X²] − (E[X])² = 3/2 − (1)² = 1/2
  V(4X + 3) = (4)² V(X) = (16)(1/2) = 8
• Continuous Example: E[X] = 5/3; E[X²] = 17/6
  V(X) = E[X²] − (E[X])² = 17/6 − (5/3)² = 1/18
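The shortcut V(X) = E[X²] − (E[X])² is easy to verify on the coin example with exact arithmetic (a Python sketch):

```python
from fractions import Fraction as F

# two-coin p.m.f.: X = number of heads
pmf = {0: F(1, 4), 1: F(1, 2), 2: F(1, 4)}

mean = sum(x * p for x, p in pmf.items())      # E[X] = 1
e_x2 = sum(x * x * p for x, p in pmf.items())  # E[X^2] = 3/2
var = e_x2 - mean ** 2                         # V(X) = 1/2
var_lin = 16 * var                             # V(4X + 3) = 16 V(X) = 8
```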
Bivariate Distributions
• For discrete r.v.'s X and Y with joint p.m.f. fX,Y(x, y), the expected value of a function g(X, Y) is
  μg(X,Y) = E[g(X, Y)] = Σx Σy g(x, y) fX,Y(x, y)
• For continuous r.v.'s X and Y with joint p.d.f. fX,Y(x, y), the expected value of a function g(X, Y) is
  μg(X,Y) = E[g(X, Y)] = ∫−∞^∞ ∫−∞^∞ g(x, y) fX,Y(x, y) dx dy
Discrete Bivariate Distribution
• Example:
             X
  Y        0     1     2
  5       0.2   0.1   0.1
  10      0.1   0.2   0.3

  E[XY] = (0)(5)(0.2) + (1)(5)(0.1) + (2)(5)(0.1)
        + (0)(10)(0.1) + (1)(10)(0.2) + (2)(10)(0.3) = 9.5
  E[X/Y] = (0/5)(0.2) + (1/5)(0.1) + (2/5)(0.1)
         + (0/10)(0.1) + (1/10)(0.2) + (2/10)(0.3) = 0.14
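Both expectations above amount to one double sum over the table, which a short Python sketch makes explicit (the dict encoding of the joint p.m.f. is an assumption, not from the slides):

```python
# joint p.m.f. stored as {(x, y): probability}
joint = {(0, 5): 0.2, (1, 5): 0.1, (2, 5): 0.1,
         (0, 10): 0.1, (1, 10): 0.2, (2, 10): 0.3}

def e_joint(g, joint):
    """E[g(X, Y)] = double sum of g(x, y) * f_{X,Y}(x, y)."""
    return sum(g(x, y) * p for (x, y), p in joint.items())

e_xy = e_joint(lambda x, y: x * y, joint)     # E[XY] = 9.5
e_ratio = e_joint(lambda x, y: x / y, joint)  # E[X/Y] = 0.14
```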
Continuous Bivariate Distribution
• Example: for 0 < x < 1, 0 < y < 1
  fX,Y(x, y) = (12/5) x(2 − x − y)

  E[X²Y] = ∫−∞^∞ ∫−∞^∞ (x²y) fX,Y(x, y) dx dy
         = (12/5) ∫0^1 ∫0^1 (x²y) x(2 − x − y) dx dy
         = (12/5) ∫0^1 ∫0^1 [x³(2y − y²) − x⁴y] dx dy
Continuous Bivariate Distribution
• Example (cont'd):
  E[X²Y] = (12/5) ∫0^1 [ (x⁴/4)(2y − y²) − (x⁵/5) y ] (x from 0 to 1) dy
         = (12/5) ∫0^1 [ (3/10) y − (1/4) y² ] dy
         = [ (9/25) y² − (1/5) y³ ] (y from 0 to 1)
         = 9/25 − 1/5 = 4/25
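The value 4/25 can be confirmed numerically with a two-dimensional midpoint rule over the unit square (a sketch; the grid size `n = 400` is an arbitrary choice):

```python
def e_g(g, f, n=400):
    """Approximate E[g(X, Y)] = double integral of g * f over (0,1) x (0,1)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            total += g(x, y) * f(x, y)
    return total * h * h

f = lambda x, y: (12 / 5) * x * (2 - x - y)  # joint p.d.f. from the slide
val = e_g(lambda x, y: x * x * y, f)         # ≈ 4/25 = 0.16
```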
Covariance and Correlation
• Covariance measures the association between two r.v.'s X and Y:
  σXY = Cov(X, Y) = E[(X − μX)(Y − μY)]
      = E[XY] − μX μY
• Correlation is the scale-free version of covariance:
  ρXY = σXY / (σX σY) = Cov(X, Y) / √(V(X) V(Y))
  −1 ≤ ρXY ≤ 1
Covariance and Correlation
• Covariance between two r.v.'s X and Y:
  • Cov(X, Y) > 0 ⇒ positive relationship.
  • Cov(X, Y) < 0 ⇒ negative relationship.
  • Cov(X, Y) = 0 ⇒ no linear relationship.
• Correlation between two r.v.'s X and Y:
  • ρXY close to 1 ⇒ strong positive correlation.
  • ρXY close to −1 ⇒ strong negative correlation.
  • ρXY = 0 ⇒ uncorrelated (no linear relationship).
Covariance and Correlation
• Discrete Example: E[XY] = 9.5
             X
  Y        0     1     2    fY(y)
  5       0.2   0.1   0.1    0.4
  10      0.1   0.2   0.3    0.6
  fX(x)   0.3   0.3   0.4    1.0

  E[X] = (0)(0.3) + (1)(0.3) + (2)(0.4) = 1.1
  E[X²] = (0²)(0.3) + (1²)(0.3) + (2²)(0.4) = 1.9
  E[Y] = (5)(0.4) + (10)(0.6) = 8.0
  E[Y²] = (5²)(0.4) + (10²)(0.6) = 70.0
Covariance and Correlation
• Discrete Example (cont'd):
  V(X) = E[X²] − (E[X])² = 1.9 − (1.1)² = 0.69
  V(Y) = E[Y²] − (E[Y])² = 70 − (8)² = 6.0
  Cov(X, Y) = E[XY] − (E[X])(E[Y])
            = 9.5 − (1.1)(8.0) = 0.7
  ρXY = Cov(X, Y) / √(V(X) V(Y)) = 0.7 / √((0.69)(6.0)) ≈ 0.344
• X and Y are somewhat positively correlated.
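The whole chain of calculations, from the joint p.m.f. in the table to ρXY, fits in a short Python sketch (helper names are made up):

```python
from math import sqrt

joint = {(0, 5): 0.2, (1, 5): 0.1, (2, 5): 0.1,
         (0, 10): 0.1, (1, 10): 0.2, (2, 10): 0.3}

def e(g):
    """E[g(X, Y)] over the joint p.m.f."""
    return sum(g(x, y) * p for (x, y), p in joint.items())

ex, ey = e(lambda x, y: x), e(lambda x, y: y)
vx = e(lambda x, y: x * x) - ex ** 2   # V(X) = 0.69
vy = e(lambda x, y: y * y) - ey ** 2   # V(Y) = 6.0
cov = e(lambda x, y: x * y) - ex * ey  # Cov(X, Y) = 0.7
rho = cov / sqrt(vx * vy)              # rho ≈ 0.344
```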
Linear Combinations
• X and Y are r.v.'s:
  • E[aX + bY + c] = a E[X] + b E[Y] + c
  • E[a g(X) + b h(Y) + c] = a E[g(X)] + b E[h(Y)] + c
  • V(aX + bY) = a² V(X) + b² V(Y) + 2ab Cov(X, Y)
• Discrete Example (cont'd): Cov(X, Y) = 0.7
  E[X] = 1.1; V(X) = 0.69; E[Y] = 8.0; V(Y) = 6.0
  • E[4X − 3Y + 2] = (4)(1.1) − (3)(8) + 2 = −17.6
  • V(4X − 3Y) = (4)²(0.69) + (−3)²(6) + 2(4)(−3)(0.7)
               = (16)(0.69) + (9)(6) − (24)(0.7) = 48.24
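As a sanity check, V(4X − 3Y) can also be computed directly as E[(W − E[W])²] with W = 4X − 3Y, without using the linear-combination formula (a sketch over the same joint p.m.f.):

```python
joint = {(0, 5): 0.2, (1, 5): 0.1, (2, 5): 0.1,
         (0, 10): 0.1, (1, 10): 0.2, (2, 10): 0.3}

def e(g):
    """E[g(X, Y)] over the joint p.m.f."""
    return sum(g(x, y) * p for (x, y), p in joint.items())

a, b = 4, -3
w_mean = e(lambda x, y: a * x + b * y)                 # E[4X - 3Y] = -19.6
w_var = e(lambda x, y: (a * x + b * y - w_mean) ** 2)  # V(4X - 3Y) = 48.24
```

Both routes give 48.24, which is a useful check that the cross term 2ab Cov(X, Y) was handled correctly.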
Linear Combinations
• X and Y are independent r.v.'s:
  • E[XY] = E[X] E[Y]
  • Cov(X, Y) = E[XY] − (E[X])(E[Y]) = 0
  • V(aX + bY) = a² V(X) + b² V(Y)
• X1, X2, …, Xn are mutually independent r.v.'s:
  V(a1X1 + a2X2 + ⋯ + anXn) = a1² V(X1) + a2² V(X2) + ⋯ + an² V(Xn)
                            = Σi ai² V(Xi)
Chebyshev’s Inequality
• For any distribution, the probability that the r.v. X takes a value within μ ± kσ satisfies
  P[μ − kσ ≤ X ≤ μ + kσ] ≥ 1 − 1/k²
Bell-shaped Distributions
• For bell-shaped symmetric distributions
  • Approximately 68% of values lie within μ ± σ
  • Approximately 95% of values lie within μ ± 2σ
  • Nearly 100% of values lie within μ ± 3σ
Comparison
  k       Chebyshev (≥)   Bell-shaped (≈)
  1σ         0%              68%
  1.5σ      55.56%            —
  2σ        75%              95%
  2.5σ      84%               —
  3σ        88.89%          100%
  3.5σ      91.84%            —
Chebyshev’s Inequality Examples
• A r.v. X has mean 8 and variance 16, but its distribution is unknown.
  P[1 ≤ X ≤ 15] = P[8 − k(4) ≤ X ≤ 8 + k(4)], where k = 1.75
                = P[μX − 1.75 σX ≤ X ≤ μX + 1.75 σX]
                ≥ 1 − 1/(1.75)² ≈ 0.673
• At least 67.3% of the possible values of X lie between 1 and 15.
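The bound in this example is just 1 − 1/k² with k chosen from the interval endpoints; a tiny helper makes the pattern explicit (a sketch; the function name is made up):

```python
def chebyshev_lower_bound(k):
    """Chebyshev: P[mu - k*sigma <= X <= mu + k*sigma] >= 1 - 1/k^2, for k > 1."""
    return 1 - 1 / k ** 2

# mean 8, sd 4: the interval [1, 15] corresponds to k = (15 - 8) / 4 = 1.75
bound = chebyshev_lower_bound(1.75)  # ≈ 0.673
```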
Chebyshev’s Inequality Examples
• A r.v. X has mean 8 and variance 16 (cont'd).
  P[|X − 8| ≥ 10] = 1 − P[|X − 8| < 10]
  P[|X − 8| < 10] = P[−10 < X − 8 < 10]
                  = P[8 − 10 < X < 8 + 10]
                  = P[8 − k(4) < X < 8 + k(4)], where k = 2.5
                  ≥ 1 − 1/(2.5)² = 0.84
  ⇒ P[|X − 8| ≥ 10] ≤ 1 − 0.84 = 0.16
Chebyshev’s Inequality Examples
• 1000 applicants apply for 70 job positions.
  • Each applicant is tested: mean 60, variance 36.
  • Can a person scoring 84 get a position?
  P[36 < X < 84] = P[60 − k(6) < X < 60 + k(6)], where k = 4
                 ≥ 1 − 1/(4)² = 0.9375
  ⇒ P[X ≥ 84] ≤ 1 − 0.9375 = 0.0625
  • 1000 (0.0625) = 62.5 ⇒ at most 62 applicants can score 84 or higher. Since there are 70 positions, a person scoring at least 84 will get one.
