Moments of Random Variables: Let X be a random variable
with space R_X and probability density function f.
The nth moment about the origin of a random variable X,
denoted by E(X^n), is defined to be

E(X^n) = \begin{cases} \sum_{x \in R_X} x^n f(x) & \text{if } X \text{ is discrete} \\[4pt] \int_{-\infty}^{\infty} x^n f(x)\, dx & \text{if } X \text{ is continuous} \end{cases}

for n = 0, 1, 2, 3, ..., provided the right side converges absolutely.
If n = 1, then E(X) is called the first moment about the
origin. If n = 2, then E(X^2) is called the second moment of
X about the origin.
In general, these moments may or may not exist for a given
random variable. If a particular moment of a random variable
does not exist, then we say that the random variable does not
have that moment.
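As a small illustration of the discrete case (a fair six-sided die, a hypothetical example not taken from the text), the first two moments follow directly from the defining sum; a minimal sketch in Python:

```python
from fractions import Fraction

# Fair six-sided die: R_X = {1, ..., 6}, f(x) = 1/6 for each face.
f = Fraction(1, 6)
faces = range(1, 7)

m1 = sum(x * f for x in faces)     # first moment E(X) = sum x f(x)
m2 = sum(x**2 * f for x in faces)  # second moment E(X^2) = sum x^2 f(x)

print(m1, m2)  # 7/2 91/6
```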
MOMENT GENERATING FUNCTIONS
Moment Generating Function: Let X be a random variable
with probability density function f. A real-valued function
M : \mathbb{R} \to \mathbb{R} defined by

M(t) = E(e^{tX})

is called the moment generating function of X if this expected
value exists for all t in the interval -h < t < h for some
h > 0.
Using the definition of expected value of a random variable, we
obtain

M(t) = \begin{cases} \sum_{x \in R_X} e^{tx} f(x) & \text{if } X \text{ is discrete} \\[4pt] \int_{-\infty}^{\infty} e^{tx} f(x)\, dx & \text{if } X \text{ is continuous.} \end{cases}
Example: Let X have the PDF

f(x) = \begin{cases} \frac{1}{2} e^{-x/2} & \text{if } x > 0 \\ 0 & \text{otherwise.} \end{cases}
Then

M(t) = \int_0^{\infty} e^{tx} \, \frac{1}{2} e^{-x/2} \, dx
     = \frac{1}{2} \int_0^{\infty} e^{(t - \frac{1}{2})x} \, dx
     = \frac{1}{1 - 2t}, \qquad t < \frac{1}{2}.

[Use: \int_0^{\infty} e^{-ax} \, dx = \frac{1}{a}, \ a > 0.]
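The closed form 1/(1 − 2t) can be sanity-checked numerically. The sketch below (Python, midpoint Riemann sum; the truncation point 200 and the step count are arbitrary choices) approximates the defining integral at one value of t < 1/2:

```python
import math

def mgf_numeric(t, upper=200.0, n=200_000):
    """Midpoint-rule approximation of the integral of e^{tx} (1/2)e^{-x/2} over (0, inf)."""
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += math.exp(t * x) * 0.5 * math.exp(-x / 2)
    return total * h

t = 0.1  # any t < 1/2 works
print(mgf_numeric(t), 1 / (1 - 2 * t))  # both approximately 1.25
```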
To explain why we refer to this function as a "moment-generating"
function, let us substitute for e^{tx} its Maclaurin series expansion,

e^{tx} = 1 + tx + \frac{t^2 x^2}{2!} + \frac{t^3 x^3}{3!} + \dots + \frac{t^r x^r}{r!} + \dots

Thus, in the discrete case, we get

M(t) = \sum_x \Big[ 1 + tx + \frac{t^2 x^2}{2!} + \frac{t^3 x^3}{3!} + \dots + \frac{t^r x^r}{r!} + \dots \Big] f(x)
     = \sum_x f(x) + t \sum_x x f(x) + \frac{t^2}{2!} \sum_x x^2 f(x) + \dots + \frac{t^r}{r!} \sum_x x^r f(x) + \dots
     = 1 + E(X)\, t + E(X^2)\, \frac{t^2}{2!} + \dots + E(X^r)\, \frac{t^r}{r!} + \dots
Theorem: \frac{d^r M(t)}{dt^r} \Big|_{t=0} = E(X^r).
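The theorem can be illustrated with the MGF just derived, M(t) = 1/(1 − 2t): central finite differences at t = 0 (the step size h is an ad-hoc choice) recover the first two moments numerically:

```python
def M(t):
    # MGF of the density f(x) = (1/2)e^{-x/2}, valid for t < 1/2
    return 1.0 / (1.0 - 2.0 * t)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)           # central difference: M'(0) = E(X)
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2   # second difference: M''(0) = E(X^2)
print(m1, m2)  # approximately 2 and 8
```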
Example: Let X have the PDF

f(x) = \begin{cases} \frac{1}{2} e^{-x/2} & \text{if } x > 0 \\ 0 & \text{otherwise.} \end{cases}

Recall

M(t) = \frac{1}{1 - 2t}, \qquad t < \frac{1}{2}.
Then

M'(t) = \frac{2}{(1 - 2t)^2}, \qquad M''(t) = \frac{8}{(1 - 2t)^3}, \qquad t < \frac{1}{2},

and hence

E(X) = 2, \quad E(X^2) = 8, \quad \text{and} \quad Var(X) = 4.
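These values can also be checked by simulation: the density f(x) = (1/2)e^{-x/2} is exponential with mean 2, which Python's standard library samples directly (the seed and sample size are arbitrary choices):

```python
import random
import statistics

random.seed(0)
# random.expovariate takes the rate parameter; rate 1/2 gives the density (1/2)e^{-x/2}
sample = [random.expovariate(0.5) for _ in range(100_000)]

print(statistics.fmean(sample))      # approximately E(X) = 2
print(statistics.pvariance(sample))  # approximately Var(X) = 4
```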
SOME SPECIAL CONTINUOUS DISTRIBUTIONS
1. Uniform Distribution
A random variable X is said to be uniform on the interval
[a, b] if its probability density function is of the form

f(x) = \frac{1}{b - a}, \qquad a \le x \le b,

where a and b are constants.
We denote a random variable X with the uniform distribution on
the interval [a, b] as X \sim UNIF(a, b).
APPLICATION: Random number generation.
THEOREM: If X is a uniform random variable on the interval
[a, b], then the mean, variance and moment generating function
are respectively given by

E(X) = \mu_X = \frac{a + b}{2}

Var(X) = \sigma_X^2 = \frac{(b - a)^2}{12}

M_X(t) = \begin{cases} 1 & \text{if } t = 0 \\[4pt] \dfrac{e^{tb} - e^{ta}}{t(b - a)} & \text{otherwise.} \end{cases}
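A quick empirical check of the mean and variance formulas (the interval [2, 5], seed, and sample size are arbitrary choices):

```python
import random
import statistics

random.seed(1)
a, b = 2.0, 5.0
xs = [random.uniform(a, b) for _ in range(100_000)]

print(statistics.fmean(xs))      # approximately (a + b)/2 = 3.5
print(statistics.pvariance(xs))  # approximately (b - a)^2 / 12 = 0.75
```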
EXERCISE: Suppose Y \sim UNIF(0, 1) and Y = \frac{1}{4} X^2. What is the
probability density function of X?
2. Exponential Distribution: A continuous random variable
is said to be an exponential random variable with parameter
\theta if its probability density function is of the form

f(x; \theta) = \begin{cases} \frac{1}{\theta} e^{-x/\theta} & \text{if } x > 0 \\ 0 & \text{otherwise,} \end{cases}

where \theta > 0.
If a random variable X has an exponential density function with
parameter \theta, then we denote it by writing X \sim EXP(\theta).
APPLICATION: To model the lifetime of electronic components.
Alternate Definition of Exponential Distribution: A continuous
random variable is said to be an exponential random variable with
parameter \lambda = \frac{1}{\theta} if its probability density function is of the form

f(x; \lambda) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x > 0 \\ 0 & \text{otherwise,} \end{cases}

where \lambda > 0.
Theorem: If X \sim EXP(\lambda), then E(X) = \frac{1}{\lambda} and Var(X) = \frac{1}{\lambda^2}.
EXERCISE: What is the cumulative distribution function of a random
variable which has an exponential distribution with variance 25?
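For this exercise, note that in the \theta-parametrization Var(X) = \theta^2, so variance 25 corresponds to \theta = 5, and integrating the density gives F(x) = 1 - e^{-x/\theta} for x > 0. A minimal sketch of this CDF:

```python
import math

theta = 5.0  # Var(X) = theta^2 = 25

def F(x):
    """CDF of EXP(theta): F(x) = 1 - e^{-x/theta} for x > 0, else 0."""
    return 1.0 - math.exp(-x / theta) if x > 0 else 0.0

print(F(0.0), F(5.0))  # 0.0 and 1 - e^{-1}, about 0.632
```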
The normal distribution plays a central role in probability and
statistics and was discovered by the French mathematician Abraham
de Moivre (1667-1754). This distribution is also called the
Gaussian distribution after Carl Friedrich Gauss, who proposed it as
a model for measurement errors.
3. The Normal (or Gaussian) Distribution:
A random variable X is said to have a normal distribution if
its probability density function is given by

f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2}, \qquad -\infty < x < \infty,

with parameters \mu and \sigma, where -\infty < \mu < \infty and \sigma > 0.
If X has a normal distribution with parameters \mu and \sigma^2, then we
write X \sim N(\mu, \sigma^2).
The graph of a normal probability density function is shaped like the
cross section of a bell.
• From the form of the probability density function, we see that the
density is symmetric about \mu, i.e., f(\mu - x) = f(\mu + x), where it has its
maximum, and that the rate at which it falls off is determined by \sigma.
Theorem: If X \sim N(\mu, \sigma^2), then

E(X) = \mu_X = \mu

Var(X) = \sigma_X^2 = \sigma^2

M_X(t) = e^{\mu t + \frac{1}{2} \sigma^2 t^2}.
Proof.

M_X(t) = \int_{-\infty}^{\infty} e^{tx} \, \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2} dx
       = \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} e^{tx} \, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2} dx
       = \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2\sigma^2} \left( -2\sigma^2 x t + (x - \mu)^2 \right)} dx

Note that

-2\sigma^2 x t + (x - \mu)^2 = [x - (\mu + \sigma^2 t)]^2 - 2\mu t \sigma^2 - t^2 \sigma^4,

and hence

M_X(t) = e^{\mu t + \frac{1}{2} \sigma^2 t^2} \left\{ \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2} \left( \frac{x - (\mu + \sigma^2 t)}{\sigma} \right)^2} dx \right\}

Since the quantity inside the brackets is the integral from -\infty to
\infty of a normal probability density function with the parameters
\mu + \sigma^2 t and \sigma, and hence is equal to 1, it follows that

M_X(t) = e^{\mu t + \frac{1}{2} \sigma^2 t^2}.

Further,

M_X'(t) = (\mu + \sigma^2 t) \, e^{\mu t + \frac{1}{2} \sigma^2 t^2}

M_X''(t) = \sigma^2 e^{\mu t + \frac{1}{2} \sigma^2 t^2} + (\mu + \sigma^2 t)^2 \, e^{\mu t + \frac{1}{2} \sigma^2 t^2}

\implies E(X) = M_X'(0) = \mu \quad \text{and} \quad Var(X) = M_X''(0) - [M_X'(0)]^2 = \sigma^2.

Hence the proof.
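The closed form e^{\mu t + \sigma^2 t^2 / 2} can be spot-checked by Monte Carlo estimation of E(e^{tX}) (the values \mu = 1, \sigma = 2, t = 0.3, the seed, and the sample size are arbitrary choices):

```python
import math
import random

random.seed(2)
mu, sigma, t = 1.0, 2.0, 0.3
n = 200_000

# Monte Carlo estimate of E(e^{tX}) for X ~ N(mu, sigma^2)
estimate = sum(math.exp(t * random.gauss(mu, sigma)) for _ in range(n)) / n
exact = math.exp(mu * t + sigma**2 * t**2 / 2)
print(estimate, exact)  # both approximately 1.616
```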
Standard Normal Random Variable: A normal random variable
is said to be standard normal if its mean is zero and its
variance is one. We denote a standard normal random variable
X by X \sim N(0, 1), and its probability density function
is given by

f(x) = \frac{1}{\sqrt{2\pi}} \, e^{-\frac{1}{2} x^2}, \qquad -\infty < x < \infty.
Example: If X \sim N(0, 1), what is the probability that the random
variable X is less than or equal to -1.72?
Solution: Using a Standard Normal Table of type I,

P(X \le -1.72) = 1 - P(X \le 1.72)
              = 1 - 0.9573
              = 0.0427.
Using a Standard Normal Table of type II,

P(X \le -1.72) = 0.0427.
• Probabilities that are not of the form P(Z \le z) are found by
using the basic rules of probability and the symmetry of the normal
distribution.
Examples:
1. P(Z > 1.26)
2. P(Z > -1.37)
3. P(Z < -0.86)
4. P(-1.25 < Z < 0.37)
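Probabilities of these forms can also be computed without a table using the error function from Python's math module, via \Phi(z) = \frac{1}{2}(1 + \mathrm{erf}(z/\sqrt{2})); for instance:

```python
import math

def Phi(z):
    """Standard normal CDF, expressed through the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(1 - Phi(1.26))           # P(Z > 1.26), about 0.1038
print(Phi(0.37) - Phi(-1.25))  # P(-1.25 < Z < 0.37), about 0.5387
```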
Theorem: If X \sim N(\mu, \sigma^2), then the random variable
Z = \frac{X - \mu}{\sigma} \sim N(0, 1).
Proof: We will show that Z is standard normal by finding the
probability density function of Z. We compute the probability
density of Z by first computing its cumulative distribution
function.

F(z) = P(Z \le z)
     = P\left( \frac{X - \mu}{\sigma} \le z \right)
     = P(X \le \sigma z + \mu)
     = \int_{-\infty}^{\sigma z + \mu} \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2} dx
     = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} \, e^{-\frac{1}{2} w^2} dw, \qquad \left( \text{where } w = \frac{x - \mu}{\sigma} \right)

Hence

f(z) = F'(z) = \frac{1}{\sqrt{2\pi}} \, e^{-\frac{1}{2} z^2}.

Hence the proof.
Example: If X \sim N(3, 16), then what is P(4 \le X \le 8)?

P(4 \le X \le 8) = P\left( \frac{4 - 3}{4} \le \frac{X - 3}{4} \le \frac{8 - 3}{4} \right)
               = P\left( \frac{1}{4} \le Z \le \frac{5}{4} \right)
               = P(Z \le 1.25) - P(Z \le 0.25)
               = 0.8944 - 0.5987
               = 0.2957.
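The same answer follows from the error function, avoiding the table (a minimal check, not part of the original solution):

```python
import math

def Phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 3.0, 4.0
p = Phi((8 - mu) / sigma) - Phi((4 - mu) / sigma)
print(p)  # about 0.2956, matching the table answer 0.2957 to rounding
```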