CE 6013:
Statistical Methods in Civil Engineering
Lecture# 3
Statistical distributions of discrete and
continuous variables
Dr. Sheikh Mokhlesur Rahman
Associate Professor, Dept. of CE
Contact:
[email protected]

Distribution of Discrete Variables
Example: Wafers
➢ In a semiconductor manufacturing process, 2 wafers from a lot
are sampled. Each wafer is classified as pass or fail. Assume
that the probability that a wafer passes is 0.8, and that wafers are
independent.
➢ The probability that the 1st wafer passes and the 2nd fails,
denoted as pf is P(pf) = 0.8 * 0.2 = 0.16.
➢ The random variable X is defined as the number of wafers that
pass.
Outcome (Wafer 1, Wafer 2)    Probability    x
Pass, Pass                    0.64           2
Fail, Pass                    0.16           1
Pass, Fail                    0.16           1
Fail, Fail                    0.04           0
Total                         1.00
Probability Distributions
➢ A random variable X associates the outcomes of a
random experiment to a number on the number line.
➢ The probability distribution of the random variable X is
a description of the probabilities associated with the
possible numerical values of X.
➢ A probability distribution of a discrete random variable
can be:
1. A list of the possible values along with their
probabilities.
2. A formula that is used to calculate the probability in
response to an input of the random variable’s value.
Example 3-3: Digital Channel
➢ There is a chance that a bit transmitted through a
digital transmission channel is received in error.
➢ Let X equal the number of bits received in error of the
next 4 transmitted.
➢ The associated probability distribution of X is shown as
a graph and as a table.
Probability distribution for bits in error.
P(X=0) = 0.6561
P(X=1) = 0.2916
P(X=2) = 0.0486
P(X=3) = 0.0036
P(X=4) = 0.0001
Total = 1.0000
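As a quick numerical check (not part of the original slide), these tabulated values match a binomial model with n = 4 transmitted bits and an assumed error probability of 0.1 (the value used later in Example 3-13). A minimal Python sketch:

```python
# Reproduce the tabulated P(X = x) values, assuming a binomial(n = 4, p = 0.1) model.
from math import comb

n, p = 4, 0.1
for x in range(n + 1):
    prob = comb(n, x) * p**x * (1 - p)**(n - x)
    print(f"P(X={x}) = {prob:.4f}")
# Prints 0.6561, 0.2916, 0.0486, 0.0036, 0.0001 -> sums to 1.0000
```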
Probability Mass Function
Suppose a loading on a long, thin beam places mass only at
discrete points. This represents a probability distribution
where the beam is the number line over the range of x and
the probabilities represent the mass. That’s why it is called
a probability mass function.
Loading at discrete points on a long, thin beam.
Probability Mass Function Properties
➢ For a discrete random variable 𝑋 with possible values
𝑥1, 𝑥2, ... 𝑥n, a probability mass function is a function
such that:
(1) $f(x_i) \ge 0$
(2) $\sum_{i=1}^{n} f(x_i) = 1$
(3) $f(x_i) = P(X = x_i)$
Example 3-4: Wafer Contamination
➢ Let the random variable X denote the number of wafers that need to be
analyzed to detect a large particle. Assume that the probability that a wafer
contains a large particle is 0.1, and that the wafers are independent.
Determine the probability distribution of X.
➢ Let p denote a wafer in which a large particle is present and let a denote a
wafer in which it is absent.
➢ The sample space is: S = {p, ap, aap, aaap, …}
➢ The range of the values of X is: x = 1, 2, 3, 4, …
Probability distribution:
P(X=1) = 0.1
P(X=2) = (0.9)(0.1) = 0.09
P(X=3) = (0.9)^2 (0.1) = 0.081
P(X=4) = (0.9)^3 (0.1) = 0.0729
(These first four probabilities sum to 0.3439; the distribution continues for all larger x.)
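A minimal sketch (assuming the model above, P(X = x) = (0.9)^(x-1)(0.1)) that prints the first few probabilities:

```python
# Wafer-contamination PMF: probability the first large particle appears on wafer x.
p = 0.1
for x in range(1, 5):
    print(f"P(X={x}) = {(1 - p)**(x - 1) * p:.4f}")
# 0.1000, 0.0900, 0.0810, 0.0729 -> these four alone sum to 0.3439
```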
Cumulative Distribution Function
Properties
➢ The cumulative distribution function (𝑃 𝑋 ≤ 𝑥 ) is built
from the probability mass function and vice versa.
➢ The cumulative distribution function of a discrete
random variable 𝑋 is denoted as 𝐹(𝑥).
➢ For a discrete random variable 𝑋, 𝐹 𝑥 satisfies the
following properties:
(1) $F(x) = P(X \le x) = \sum_{x_i \le x} f(x_i)$
(2) $0 \le F(x) \le 1$
(3) If $x \le y$, then $F(x) \le F(y)$
Example 3.5: Cumulative Distribution
Functions
There is a chance that a bit transmitted through a digital
transmission channel is received in error. Let X equal the
number of bits in error in the next four bits transmitted.
The possible values for X are {0, 1, 2, 3, 4}. Find the
probability of three or fewer bits being in error P(X ≤ 3).
➢ The event (X ≤ 3) is the union of the mutually exclusive events
(X=0), (X=1), (X=2), (X=3).

x     P(X=x)    P(X ≤ x)
0     0.6561    0.6561
1     0.2916    0.9477
2     0.0486    0.9963
3     0.0036    0.9999
4     0.0001    1.0000
Total 1.0000

➢ From the table:
P(X ≤ 3) = P(X=0) + P(X=1) + P(X=2) + P(X=3) = 0.9999
P(X = 3) = P(X ≤ 3) - P(X ≤ 2) = 0.0036
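A short sketch of the same bookkeeping: the CDF is just a running sum of the PMF values from the table above.

```python
# Build the CDF F(x) by accumulating the PMF, then read off the two answers.
pmf = {0: 0.6561, 1: 0.2916, 2: 0.0486, 3: 0.0036, 4: 0.0001}

cdf, running = {}, 0.0
for x in sorted(pmf):
    running += pmf[x]
    cdf[x] = running

print(f"{cdf[3]:.4f}")            # P(X <= 3) = 0.9999
print(f"{cdf[3] - cdf[2]:.4f}")   # P(X = 3)  = 0.0036
```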
Example 3-6: Cumulative Distribution
Function
➢ Determine the probability mass function of X from this
cumulative distribution function:
F(x) = 0.0   for x < -2
       0.2   for -2 ≤ x < 0
       0.7   for 0 ≤ x < 2
       1.0   for 2 ≤ x

PMF: f(-2) = 0.2, f(0) = 0.5, f(2) = 0.3
Graph of the CDF
Summary Numbers of a Probability
Distribution
➢ The mean is a measure of the center of a probability
distribution.
➢ The variance is a measure of the dispersion or
variability of a probability distribution.
➢ The standard deviation is another measure of the
dispersion. It is the square root of the variance.
Mean
➢ The mean or expected value of the discrete random
variable, X denoted as 𝜇 or 𝐸 𝑋 , is
$\mu = E(X) = \sum_x x \cdot f(x)$
➢ The mean is the weighted average of the possible
values of X, the weights being the probabilities of
occurrence. It represents the center of the distribution.
It is also called the arithmetic mean.
➢ If f(x) is the probability mass function representing the
loading on a long, thin beam, then E(X) is the fulcrum
or point of balance for the beam.
➢ The mean value may, or may not, be a given value of
x.
Variance
➢ The variance of the discrete random variable X, denoted as $\sigma^2$ or $V(X)$, is

$\sigma^2 = V(X) = E[(X - \mu)^2] = \sum_x (x - \mu)^2 f(x) = \sum_x x^2 f(x) - \mu^2$
➢ The variance is the measure of dispersion or scatter in the
possible values for X.
➢ It is the average of the squared deviations from the
distribution mean.
The mean is the balance point. Distributions (a) & (b) have
equal mean, but (a) has a larger variance.
Variance Formula Derivations
$V(X) = \sum_x (x - \mu)^2 f(x)$   (the definitional formula)
$\;\;= \sum_x (x^2 - 2\mu x + \mu^2) f(x)$
$\;\;= \sum_x x^2 f(x) - 2\mu \sum_x x f(x) + \mu^2 \sum_x f(x)$
$\;\;= \sum_x x^2 f(x) - 2\mu^2 + \mu^2$
$\;\;= \sum_x x^2 f(x) - \mu^2$   (the computational formula)
The computational formula is easier to calculate manually.
Exercise 3-7: Digital Channel
There is a chance that a bit transmitted through a digital
transmission channel is received in error. X is the
number of bits received in error of the next 4 transmitted.
The possible values for X are {0, 1, 2, 3, 4}. Calculate the
mean & variance.
x     f(x)      x·f(x)    (x − 0.4)²   (x − 0.4)²·f(x)   x²·f(x)
0     0.6561    0.0000     0.160        0.1050            0.0000
1     0.2916    0.2916     0.360        0.1050            0.2916
2     0.0486    0.0972     2.560        0.1244            0.1944
3     0.0036    0.0108     6.760        0.0243            0.0324
4     0.0001    0.0004    12.960        0.0013            0.0016
Totals          0.4000                  0.3600            0.5200
                = mean μ                = variance σ²     = E(X²)
                                        (definitional formula)

Computational formula: σ² = E(X²) − μ² = 0.5200 − (0.4000)² = 0.3600
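A small sketch that reproduces both routes to the variance from the PMF above:

```python
# Mean and variance of Exercise 3-7, by the definitional and computational formulas.
pmf = {0: 0.6561, 1: 0.2916, 2: 0.0486, 3: 0.0036, 4: 0.0001}

mu = sum(x * f for x, f in pmf.items())                     # mean = 0.4
var_def = sum((x - mu)**2 * f for x, f in pmf.items())      # definitional formula
var_comp = sum(x**2 * f for x, f in pmf.items()) - mu**2    # computational formula
print(round(mu, 4), round(var_def, 4), round(var_comp, 4))  # 0.4, 0.36, 0.36
```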
A Function of a Random Variable
➢ If X is a discrete random variable with probability mass
function f(x), the expected value of a function h(X) is

$E[h(X)] = \sum_x h(x)\, f(x)$

➢ If $h(X) = (X - \mu)^2$, then its expectation is the variance of X.
Discrete Uniform Distribution
➢ Simplest discrete distribution.
➢ The random variable X assumes only a finite number
of values, each with equal probability.
➢ A random variable X has a discrete uniform distribution
if each of the n values in its range, say x1, x2, …, xn,
has equal probability.
$f(x_i) = \dfrac{1}{n}$
Example 3-10: Discrete Uniform
Random Variable
The first digit of a part’s serial number is equally likely to
be the digits 0 through 9. If one part is selected from a
large batch & X is the 1st digit of the serial number,
then X has a discrete uniform distribution as shown.
Probability mass function, f(x) = 1/10 for x = 0, 1, 2, …, 9
General Discrete Uniform Distribution
Let X be a discrete uniform random variable of consecutive integers from a to
b for a < b. There are b – a + 1 values in the inclusive interval. Therefore:
$f(x) = \dfrac{1}{b - a + 1}$

Its measures are:

$\mu = E(X) = \sum_{x=a}^{b} x f(x) = \sum_{x=a}^{b} \dfrac{x}{b - a + 1}$

$\sum_{x=a}^{b} x = \dfrac{b(b+1)}{2} - \dfrac{(a-1)a}{2} = \dfrac{b^2 + b - a^2 + a}{2} = \dfrac{(b - a)(b + a) + (b + a)}{2} = \dfrac{(b + a)(b - a + 1)}{2}$

$\mu = \dfrac{(b + a)(b - a + 1)}{2} \cdot \dfrac{1}{b - a + 1} = \dfrac{a + b}{2}$

$\sigma^2 = \dfrac{(b - a + 1)^2 - 1}{12}$
Example 3-11: Number of Voice Lines
Let the random variable X denote the number of the 48
voice lines that are in use at a particular time. Assume
that X is a discrete uniform random variable with a range
of 0 to 48. Find mean and standard deviation.
Answer:

$\mu = \dfrac{a + b}{2} = \dfrac{0 + 48}{2} = 24$

$\sigma = \sqrt{\dfrac{(b - a + 1)^2 - 1}{12}} = \sqrt{\dfrac{(48 - 0 + 1)^2 - 1}{12}} = \sqrt{\dfrac{2400}{12}} = 14.142$
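A one-line check of these two numbers (a sketch, using the formulas above with a = 0 and b = 48):

```python
# Mean and standard deviation of a discrete uniform variable on the integers a..b.
from math import sqrt

a, b = 0, 48
mu = (a + b) / 2                          # 24.0
sigma = sqrt(((b - a + 1)**2 - 1) / 12)   # sqrt(200) ≈ 14.142
print(mu, round(sigma, 3))
```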
Binomial Random Variables: Example
1. Flip a coin 10 times. X = # heads obtained.
2. A worn tool produces 1% defective parts. X = # defective parts
in the next 25 parts produced.
3. A multiple-choice test contains 10 questions, each with 4
choices, and you guess. X = # of correct answers.
4. Of the next 20 births, let X = # females.
These are binomial experiments having the following
characteristics:
1. Fixed number of trials (Bernoulli trials – only two outcomes).
2. Each trial is termed a success or failure. X is the # of
successes.
3. The probability of success in each trial is constant (p).
4. The outcomes of successive trials are independent.
Binomial Distribution Definition
➢ The random variable X that equals the number of trials
that result in a success is a binomial random variable
with parameters 0 < p < 1 and n = 1, 2, ....
➢ The probability mass function is:
$f(x) = C^n_x\, p^x (1 - p)^{n - x}$ for $x = 0, 1, \ldots, n$
➢ Based on the binomial expansion:
$(a + b)^n = \sum_{k=0}^{n} C^n_k\, a^k b^{n - k}$
➢ Note that the number of outcomes per trial is only two
("bi"nomial). For more than two outcomes, there is the
multinomial distribution.
Binomial Distribution Shapes
Binomial distributions for selected values of n and p. Distribution
(a) is symmetric, while the distributions in (b) are skewed; the skew
is to the right when p is small.
Example 3-13: Digital Channel
The chance that a bit transmitted through a digital
transmission channel is received in error is 0.1. Assume
that the transmission trials are independent. Let X = the
number of bits in error in the next 4 bits transmitted. Find
P(X=2).
Answer:
Let E denote a bit in error and O denote a bit received OK. The sample
space and the value of x for each outcome are listed in the table below.
There are 6 outcomes with x = 2, and the probability of each is
(0.1)^2 (0.9)^2 = 0.0081, so P(X=2) = 6 × 0.0081 = 0.0486.

Outcome   x     Outcome   x
OOOO      0     EOOO      1
OOOE      1     EOOE      2
OOEO      1     EOEO      2
OOEE      2     EOEE      3
OEOO      1     EEOO      2
OEOE      2     EEOE      3
OEEO      2     EEEO      3
OEEE      3     EEEE      4

$P(X = 2) = C^4_2\, (0.1)^2 (0.9)^2 = 0.0486$
Exercise 3-15: Organic Pollution-1
Each sample of water has a 10% chance of containing a
particular organic pollutant. Assume that the samples are
independent with regard to the presence of the pollutant. Find
the probability that, in the next 18 samples, exactly 2 contain
the pollutant.
Answer: Let X denote the number of samples that contain the
pollutant in the next 18 samples analyzed. Then X is a
binomial random variable with p = 0.1 and n = 18
$P(X = 2) = C^{18}_2\, (0.1)^2 (0.9)^{16} = 153\, (0.1)^2 (0.9)^{16} = 0.2835$
Exercise 3-15: Organic Pollution-2
Determine the probability that at least 4 samples contain
the pollutant.
Answer:
$P(X \ge 4) = \sum_{x=4}^{18} C^{18}_x\, (0.1)^x (0.9)^{18 - x}$
$= 1 - P(X < 4)$
$= 1 - \sum_{x=0}^{3} C^{18}_x\, (0.1)^x (0.9)^{18 - x}$
$= 1 - (0.150 + 0.300 + 0.284 + 0.168)$
$= 0.098$
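A short sketch that evaluates both parts of this exercise directly from the binomial PMF (n = 18, p = 0.1):

```python
# Exercise 3-15: P(X = 2) and P(X >= 4) for a binomial(18, 0.1) random variable.
from math import comb

n, p = 18, 0.1
def binom_pmf(x):
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(round(binom_pmf(2), 4))                              # P(X = 2)  ≈ 0.2835
print(round(1 - sum(binom_pmf(x) for x in range(4)), 3))   # P(X >= 4) ≈ 0.098
```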
Exercise 3-15: Organic Pollution-3
Now determine the probability that 3 ≤ X < 7.
Answer:
$P(3 \le X < 7) = \sum_{x=3}^{6} C^{18}_x\, (0.1)^x (0.9)^{18 - x}$
$= 0.168 + 0.070 + 0.022 + 0.005$
$= 0.265$
In terms of the cumulative distribution, this can be rewritten as
$P(X \le 6) - P(X \le 2)$.
Appendix A, Table II is a cumulative binomial table for
selected values of p and n.
Binomial Mean and Variance
If X is a binomial random variable with parameters
p and n,
Mean, μ = E(X) = np and
Variance, σ2 = V(X) = np(1-p)
Example 3-16: Mean and Variance
For the number of transmitted bits received in error in
Example 3-13, n = 4 and p = 0.1. Find the mean
and variance of the binomial random variable.
Answer:
μ = E(X) = np = 4 × 0.1 = 0.4
σ² = V(X) = np(1 − p) = 4 × 0.1 × 0.9 = 0.36
σ = SD(X) = 0.6
Example 3-17: New Idea
The probability that a bit, sent through a digital
transmission channel, is received in error is 0.1.
Assume that the transmissions are independent. Let X
denote the number of bits transmitted until the 1st error.
P(X=5) is the probability that the 1st four bits are
transmitted correctly and the 5th bit is in error.
P(X=5) = P(OOOOE) = (0.9)4 (0.1) = 0.0656.
x is the total number of bits sent until the first error.
This illustrates the geometric distribution.
Geometric Distribution
➢ Similar to the binomial distribution – a series of
Bernoulli trials with fixed parameter p.
➢ Binomial distribution has:
• Fixed number of trials. (n)
• Random number of successes. (X)
➢ Geometric distribution has reversed roles:
• Random number of trials. (X)
• Fixed number of successes, in this case 1.
$f(x) = p(1 - p)^{x - 1}$ where:
x = 1, 2, …, the number of trials until the 1st success.
0 < p < 1, the probability of success.
Geometric Graphs
Geometric distributions for parameter p values of 0.1 and
0.9. The graphs coincide at x = 2.
Example 3.18: Geometric Problem
The probability that a wafer contains a large particle of
contamination is 0.01. Assume that the wafers are
independent. What is the probability that exactly 125
wafers need to be analyzed before a particle is detected?
Answer:
Let X denote the number of samples analyzed until a
large particle is detected. Then X is a geometric random
variable with parameter p = 0.01.
P(X=125) = (0.99)124(0.01) = 0.00288.
$f(x) = p(1 - p)^{x - 1}$
Geometric Mean & Variance
➢ If X is a geometric random variable with parameter p,
Mean, $\mu = E(X) = \dfrac{1}{p}$

Variance, $\sigma^2 = V(X) = \dfrac{1 - p}{p^2}$
Exercise 3-19: Geometric Problem
Consider the transmission of bits in Exercise 3-17. Here,
p = 0.1. Find the mean and standard deviation.
Answer:
Mean, $\mu = E(X) = \dfrac{1}{p} = \dfrac{1}{0.1} = 10$
Variance, $\sigma^2 = V(X) = \dfrac{1 - p}{p^2} = \dfrac{1 - 0.1}{0.1^2} = 90$
Standard deviation, $\sigma = \sqrt{90} = 9.487$
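A minimal sketch of the geometric model with p = 0.1, tying together the PMF from Example 3-17 and the summary numbers above:

```python
# Geometric distribution (p = 0.1): one PMF value, plus mean and standard deviation.
from math import sqrt

p = 0.1
pmf = lambda x: (1 - p)**(x - 1) * p

print(round(pmf(5), 4))                # P(X = 5) = 0.9**4 * 0.1 ≈ 0.0656
print(1 / p)                           # mean = 10
print(round(sqrt((1 - p) / p**2), 3))  # standard deviation ≈ 9.487
```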
Example 3-21: New Idea
➢ The probability that a bit, sent through a digital
transmission channel, is received in error is 0.1.
Assume that the transmissions are independent. Let X
denote the number of bits transmitted until the 4th error.
➢ P(X=10) is the probability that 3 errors occur over the
first 9 trials and the 4th error occurs on the 10th trial.

P(3 errors in the first 9 trials) = $C^9_3\, p^3 (1 - p)^6$
P(4th error on the 10th trial) = $C^9_3\, p^3 (1 - p)^6 \times p = C^9_3\, p^4 (1 - p)^6$
Negative Binomial Definition
➢ In a series of independent trials with constant
probability of success, let the random variable X
denote the number of trials until r successes occur.
Then X is a negative binomial random variable with
parameters 0 < p < 1 and r = 1, 2, 3, ....
➢ The probability mass function is:
$f(x) = C^{x-1}_{r-1}\, p^r (1 - p)^{x - r}$ for $x = r, r + 1, r + 2, \ldots$
➢ From the prior example for f(X=10|r=4):
• x-1 = 9
• r-1 = 3
Negative Binomial Graphs
Negative binomial distributions for 3 different parameter
combinations.
Negative Binomial Mean & Variance
➢ IfX is a negative binomial random variable with
parameters p and r,
$\mu = E(X) = \dfrac{r}{p}$  and  $\sigma^2 = V(X) = \dfrac{r(1 - p)}{p^2}$
Binomial vs Negative Binomial
➢ Binomial distribution:
• Fixed number of trials (n).
• Random number of successes (x).
➢ Negative binomial distribution:
• Random number of trials (x).
• Fixed number of successes (r).
➢ Because of the reversed roles, a negative binomial can
be considered the opposite or negative of the binomial.
Example 3-25: Web Servers-1
A Web site contains 3 identical computer servers. Only
one is used to operate the site, and the other 2 are
spares that can be activated in case the primary system
fails. The probability of a failure in the primary computer
(or any activated spare) from a request for service is
0.0005. Assume that each request represents an
independent trial. What is the mean number of requests
until failure of all 3 servers?
Answer:
• Let X denote the number of requests until all three
servers fail.
• Let r = 3 and p=0.0005 = 1/2000
• Then μ = 3 / 0.0005 = 6,000 requests
Example 3-25: Web Servers-2
What is the probability that all 3 servers fail within 5
requests, i.e., P(X ≤ 5)?
Answer:
$P(X \le 5) = P(X = 3) + P(X = 4) + P(X = 5)$
$= (0.0005)^3 + C^3_2\, (0.0005)^3 (0.9995) + C^4_2\, (0.0005)^3 (0.9995)^2$
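A sketch that sums the negative binomial PMF directly for r = 3 and p = 0.0005 (the result is on the order of 1.25 × 10⁻⁹):

```python
# Example 3-25: P(X <= 5) for a negative binomial with r = 3 successes, p = 0.0005.
from math import comb

r, p = 3, 0.0005
def nbinom_pmf(x):
    # f(x) = C(x-1, r-1) * p**r * (1-p)**(x-r)
    return comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

print(sum(nbinom_pmf(x) for x in range(3, 6)))  # ≈ 1.25e-09
```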
Poisson Distribution
As the number of trials (n) in a binomial experiment
increases to infinity while the binomial mean (np)
remains constant, the binomial distribution approaches the
Poisson distribution. In this limit, p becomes very small.
Example: Wire Flaws
➢ Flaws occur at random along the length of a thin copper
wire. Let X denote the random variable that counts the
number of flaws in a length of L mm of wire. Suppose the
average number of flaws in L is λ.
➢ Partition L into n subintervals, which are small enough so
that the probability of more than one flaw is negligible.
➢ Assume that the:
• Flaws occur at random, implying that each subinterval
has the same probability of containing a flaw.
• Probability that a subinterval contains a flaw is
independent of other subintervals.
➢ X is now binomial. E(X) = np = λ and p = λ/n
➢ As n becomes large, p becomes small and a Poisson
process is created.
Poisson Distribution Definition
➢ The random variable X that equals the number of
events in a Poisson process is a Poisson random
variable with parameter λ > 0, and the probability mass
function is:
$f(x) = \dfrac{e^{-\lambda T} (\lambda T)^x}{x!}$ for $x = 0, 1, 2, 3, \ldots$

$\lambda T = np$

Keep the units of λ and T consistent.
Poisson Graphs
Poisson distributions for λ = 0.1, 2, 5.
Example 3-27: Calculations for Wire
Flaws-1
For the case of the thin copper wire, suppose that the
number of flaws follows a Poisson distribution of 2.3
flaws per mm. Let X denote the number of flaws in 1 mm
of wire. Find the probability of exactly 2 flaws in 1 mm of
wire.
Answer:
$\lambda T = (2.3 \text{ flaws/mm})(1 \text{ mm}) = 2.3 \text{ flaws}$

$P(X = 2) = \dfrac{e^{-2.3}\, 2.3^2}{2!} = 0.265$
Example 3-27: Calculations for Wire
Flaws-2
Determine the probability of 10 flaws in 5 mm of wire.
Now let X denote the number of flaws in 5 mm of
wire.
Answer:
𝜆𝑇 = 5 mm ⋅ 2.3 flaws/mm =11.5 flaws
$P(X = 10) = \dfrac{e^{-11.5}\, 11.5^{10}}{10!} = 0.113$
Example 3-27: Calculations for Wire
Flaws-3
Determine the probability of at least 1 flaw in 2 mm of
wire.
Answer:
Now let X denote the number of flaws in 2 mm of wire.
So we want P(X ≥ 1).

$\lambda T = (2.3 \text{ flaws/mm})(2 \text{ mm}) = 4.6 \text{ flaws}$

$P(X \ge 1) = 1 - P(X = 0) = 1 - \dfrac{e^{-4.6}\, 4.6^0}{0!} = 1 - e^{-4.6} = 0.9899$
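A short sketch that reproduces the three wire-flaw answers from the Poisson PMF, using the appropriate λT for each length of wire:

```python
# Example 3-27: Poisson probabilities for 2.3 flaws per mm of wire.
from math import exp, factorial

def poisson_pmf(x, lam_t):
    return exp(-lam_t) * lam_t**x / factorial(x)

print(round(poisson_pmf(2, 2.3), 3))      # P(X = 2)  in 1 mm (lambda*T = 2.3)  ≈ 0.265
print(round(poisson_pmf(10, 11.5), 3))    # P(X = 10) in 5 mm (lambda*T = 11.5) ≈ 0.113
print(round(1 - poisson_pmf(0, 4.6), 4))  # P(X >= 1) in 2 mm (lambda*T = 4.6)  ≈ 0.9899
```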
Poisson Mean & Variance
If X is a Poisson random variable over an interval of
length T with parameter λ, then:
μ = E(X) = λ and σ2=V(X) = λ
The mean and variance of the Poisson model are the
same. If the mean and variance of a data set are not
about the same, then the Poisson model would not be
a good representation of that set.
Distribution of Continuous
Variables
Continuous Density Functions
Density functions, in contrast to mass functions,
distribute probability continuously along an interval.
The loading on the beam between points a & b is the
integral of the function between points a & b.
Density function as a loading on a long, thin beam.
Most of the load occurs at the larger values of x.
Probability Density Functions
A probability density function f(x) describes the
probability distribution of a continuous random
variable. It is analogous to the beam loading.
Probability is determined from the area under f(x) from a to b.
Probability Density Function
➢ For a continuous random variable X, a probability
density function is a function such that
(1) $f(x) \ge 0$ (the function is always non-negative)
(2) $\int_{-\infty}^{\infty} f(x)\, dx = 1$
(3) $P(a \le X \le b) = \int_a^b f(x)\, dx$ = area under $f(x)$ from $a$ to $b$
Histogram and Probability Distribution
➢ The base and height of each rectangle create an area that
represents the relative frequency associated with the values
included in the base.
➢ A continuous probability distribution f(x) is a model
approximating a histogram: each bar has approximately the same
area as the integral of f(x) over the limits of its base.
Histogram approximates a probability density function
Area of a Point
➢ If 𝑋 is a continuous random variable, for any 𝑥1 and 𝑥2 ,
$P(x_1 \le X \le x_2) = P(x_1 < X \le x_2) = P(x_1 \le X < x_2) = P(x_1 < X < x_2)$
which implies that
$P(X = x) = 0$
From another perspective:
As 𝑥1 approaches 𝑥2 , the area or probability
becomes smaller and smaller. As 𝑥1 becomes 𝑥2 the
area or probability becomes zero.
Example: Electric Current
Let the continuous random variable X denote the current
measured in a thin copper wire in milliamperes (mA). Assume
that the range of X is 0 ≤ x ≤ 20 and f(x) = 0.05. What is the
probability that a current is less than 10mA? What is the
probability that a current is 5 mA?
Answer:
$P(X < 10) = \int_0^{10} 0.05\, dx = 0.5$   (P(X < 10) illustrated in the figure)

$P(X = 5)$, interpreted as a measurement between 4.95 and 5.05 mA, is
$P(4.95 < X < 5.05) = \int_{4.95}^{5.05} 0.05\, dx = 0.005$

Another example:
$P(5 < X < 20) = \int_5^{20} 0.05\, dx = 0.75$
Cumulative Distribution Functions
➢ The cumulative distribution function of a continuous
random variable 𝑋 is:
$F(x) = P(X \le x) = \int_{-\infty}^{x} f(u)\, du$ for $-\infty < x < \infty$
Example 4-3: Electric Current
For the copper wire current measurement in Exercise 4-
1, the cumulative distribution function (CDF) consists of
three expressions to cover the entire real number line.

$F(x) = P(X \le x) = \int_{-\infty}^{x} f(u)\, du$

F(x) = 0       for x < 0
       0.05x   for 0 ≤ x ≤ 20
       1       for 20 < x
This graph shows the CDF as a
continuous function.
Density vs. Cumulative Functions
➢ The probability density function (PDF) is the derivative
of the cumulative distribution function (CDF).
➢ The cumulative distribution function (CDF) is the
integral of the probability density function (PDF).
Given $F(x)$, $f(x) = \dfrac{dF(x)}{dx}$ as long as the derivative exists.
Exercise 4-5: Reaction Time
➢ The time until a chemical reaction is complete (in
milliseconds, ms) is approximated by this CDF:
F(x) = 0                for x < 0
       1 − e^{−0.01x}   for 0 ≤ x

➢ What is the PDF?

f(x) = dF(x)/dx = 0                   for x < 0
                  0.01 e^{−0.01x}     for 0 ≤ x

➢ What proportion of reactions is complete within 200 ms?

$P(X < 200) = F(200) = 1 - e^{-2} = 0.8647$
Mean and Variance
➢ Suppose X is a continuous random variable with
probability density function f(x). The mean or
expected value of X, denoted as μ or E(X), is

$\mu = E(X) = \int_{-\infty}^{\infty} x f(x)\, dx$

➢ The variance of X, denoted as V(X) or σ², is

$\sigma^2 = V(X) = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx = \int_{-\infty}^{\infty} x^2 f(x)\, dx - \mu^2$

➢ The standard deviation of X is $\sigma = \sqrt{\sigma^2}$
Example: Electric Current
The PDF of current through a copper wire is, f(x) = 0.05
for 0 ≤ x ≤ 20. Find the mean and variance.
$E(X) = \int_0^{20} x f(x)\, dx = \int_0^{20} 0.05x\, dx = \left.\dfrac{0.05 x^2}{2}\right|_0^{20} = 10$

$V(X) = \int_0^{20} (x - 10)^2 f(x)\, dx = \left.\dfrac{0.05 (x - 10)^3}{3}\right|_0^{20} = 33.33$
Mean of a Function of a Random
Variable
If X is a continuous random variable with a probability density function f(x),

$E[h(X)] = \int_{-\infty}^{\infty} h(x) f(x)\, dx$   (4-5)

Example: X is the current measured in mA and f(x) = 0.05 for
0 ≤ x ≤ 20. What is the expected value of the squared current?

$E[h(X)] = E(X^2) = \int_0^{20} x^2 f(x)\, dx = \int_0^{20} 0.05 x^2\, dx = \left.\dfrac{0.05 x^3}{3}\right|_0^{20} = 133.33 \text{ mA}^2$
Continuous Uniform Distribution
➢ This is the simplest continuous distribution and
analogous to its discrete counterpart.
➢ A continuous random variable X with probability density
function
$f(x) = \dfrac{1}{b - a}$ for $a \le x \le b$

Mean:
$\mu = E(X) = \dfrac{a + b}{2}$

Variance:
$\sigma^2 = V(X) = \dfrac{(b - a)^2}{12}$

(Figure: continuous uniform PDF)
Example: Uniform Current
Let the continuous random variable X denote the
current measured in a thin copper wire in mA. The
PDF is f(x) = 0.05 for 0 ≤ x ≤ 20.
What is the probability that the current measurement
is between 5 & 10 mA? Find the mean and variance.

$P(5 < X < 10) = \int_5^{10} 0.05\, dx = 5(0.05) = 0.25$

$\mu = \dfrac{a + b}{2} = \dfrac{0 + 20}{2} = 10 \text{ mA}$ and $\sigma^2 = \dfrac{(b - a)^2}{12} = \dfrac{20^2}{12} = 33.33 \text{ mA}^2$
Continuous Uniform CDF
$F(x) = \int_a^x \dfrac{1}{b - a}\, du = \dfrac{x - a}{b - a}$

The CDF is completely described as

F(x) = 0                   for x < a
       (x − a)/(b − a)     for a ≤ x < b
       1                   for b ≤ x

Graph of the continuous uniform CDF
Normal Distribution
➢ The most widely used distribution is the normal
distribution, also known as the Gaussian distribution.
➢ Random variations in many physical measurements are
normally distributed.
➢ The location and spread of the normal are
independently determined by mean (μ) and standard
deviation (σ).
Normal probability density functions
Normal Probability Density Function
➢ Probability density function of a normal random
variable, X, is
$f(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}, \quad -\infty < x < \infty$
➢ with parameters 𝜇 and 𝜎 where −∞ < 𝑥 < ∞ and 𝜎 > 0
➢ Also, 𝐸 𝑋 = 𝜇 and 𝑉 𝑋 = 𝜎 2
➢ and the notation 𝑁(𝜇, 𝜎 2 ) is used to denote the
distribution.
Example 4-10: Normal Application
Assume that the current measurements in a strip of wire
follows a normal distribution with a mean of 10 mA and
a variance of 4 mA2. Let X denote the current in mA.
What is the probability that a measurement exceeds 13
mA?
There is no closed form for the integral of the normal PDF,
so such probabilities are evaluated numerically or from tables.
Graphical probability that X > 13 for a normal random variable
with μ = 10 and σ2 = 4.
Empirical Rule
P(μ – σ < X < μ + σ) = 0.6827
P(μ – 2σ < X < μ + 2σ) = 0.9545
P(μ – 3σ < X < μ + 3σ) = 0.9973
6σ is often called the width
of a normal distribution
Probabilities associated with a normal distribution
Standard Normal Distribution
➢ A normal random variable with
μ = 0 and σ2 = 1
is called a standard normal random variable and
is denoted as Z.
➢ The cumulative distribution function of a standard
normal random variable is denoted as:
Φ(z) = P(Z ≤ z) = F(z)
Values are found in Appendix Table III
Standard Normal Distribution Table
Example 4-9: Standard Normal
Distribution
Assume Z is a standard normal random variable.
Find P(Z ≤ 1.50). Answer: 0.93319
Standard normal PDF
Find P(Z ≤ 1.53). Answer: 0.93699
Find P(Z ≤ 0.02). Answer: 0.50798
Example 4-10: Standard Normal
Exercises
1. P(Z > 1.26) = 0.1038
2. P(Z < -0.86) = 0.195
3. P(Z > -1.37) = 0.915
4. P(-1.25 < Z < 0.37) = 0.5387
5. P(Z ≤ -4.6) ≈ 0
6. Find z for P(Z ≤ z) = 0.05, z = -1.65
7. Find z for P(-z < Z < z) = 0.99, z = 2.58
Standardizing
➢ Suppose X is a normal random variable with mean 𝜇
and variance 𝜎 2 , then
$P(X \le x) = P\left(\dfrac{X - \mu}{\sigma} \le \dfrac{x - \mu}{\sigma}\right) = P(Z \le z)$

where Z is a standard normal random variable, and

$z = \dfrac{x - \mu}{\sigma}$

is the z-value obtained by standardizing X.
Example 4-12: Normally Distributed
Current-1
Suppose that the current measurements in a strip of wire are
assumed to follow a normal distribution with a mean of 10 mA
and a variance of 4 mA2. What is the probability that the
current is between 9 and 11 mA?
Answer:
$P(9 < X < 11) = P\left(\dfrac{9 - 10}{2} < \dfrac{X - 10}{2} < \dfrac{11 - 10}{2}\right)$
$= P(-0.5 < Z < 0.5)$
$= P(Z < 0.5) - P(Z < -0.5)$
$= 0.69146 - 0.30854 = 0.38292$
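A minimal sketch of the same calculation, writing the standard normal CDF Φ with the error function from the standard library:

```python
# Example 4-12: standardize X ~ N(10, 4) and evaluate P(9 < X < 11).
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 10, 2
print(round(phi((11 - mu) / sigma) - phi((9 - mu) / sigma), 5))  # ≈ 0.38292
```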
Standardizing a normal random variable
Example 4-12: Normally Distributed
Current-2
Determine the value for which the probability of a current
measurement below this value is 0.98.
Answer:
$P(X < x) = P\left(\dfrac{X - 10}{2} < \dfrac{x - 10}{2}\right) = P\left(Z < \dfrac{x - 10}{2}\right) = 0.98$

The closest tabled value is z = 2.05, so

$\dfrac{x - 10}{2} = 2.05$ and $x = 14.1$ mA

Determining the value of x to meet a specified probability.
Normal Approximations
➢ The binomial and Poisson distributions become more
bell-shaped and symmetric as their means increase.
➢ For manual calculations, the normal approximation is
practical – exact probabilities of the binomial and
Poisson, with large means, require technology
➢ The normal is a good approximation for the:
• Binomial if np > 5 and n(1-p) > 5.
• Poisson if λ > 5.
Normal Approximation to the Binomial
Suppose we have a binomial
distribution with n = 10 and p =
0.5. Its mean and standard
deviation are 5.0 and 1.58
respectively.
Draw the normal distribution
over the binomial distribution.
The areas under the normal curve approximate the areas of the bars of
the binomial distribution when a continuity correction is applied.
(Figure: overlaying the normal distribution upon a binomial with
matched parameters.)
Normal Approximation Method
If 𝑋 is a binomial random variable with parameters 𝑛 and 𝑝,
$Z = \dfrac{X - np}{\sqrt{np(1 - p)}}$

is approximately a standard normal random variable. To approximate a binomial
probability with a normal distribution, a continuity correction is applied as
follows:

$P(X \le x) = P(X \le x + 0.5) \approx P\left(Z \le \dfrac{x + 0.5 - np}{\sqrt{np(1 - p)}}\right)$

and

$P(X \ge x) = P(X \ge x - 0.5) \approx P\left(Z \ge \dfrac{x - 0.5 - np}{\sqrt{np(1 - p)}}\right)$
The approximation is good for 𝑛𝑝 > 5 and 𝑛 1 − 𝑝 > 5.
Example 4-13: Binomial Distribution
In a digital comm channel, assume that the number of bits
received in error can be modeled by a binomial random
variable. The probability that a bit is received in error is 10-5.
If 16 million bits are transmitted, what is the probability that
150 or fewer errors occur? Let X denote the number of
errors.
Answer:
$P(X \le 150) = \sum_{x=0}^{150} C^{16{,}000{,}000}_x\, (10^{-5})^x (1 - 10^{-5})^{16{,}000{,}000 - x}$
Using Excel
0.2280 = BINOMDIST(150,16000000,0.00001,TRUE)
Can only be evaluated with technology. Manually, we
must use the normal approximation to the binomial.
Example 4-14: Applying the
Approximation
The digital comm problem in the previous example is solved
using the normal approximation to the binomial as follows:
$P(X \le 150) = P(X \le 150.5)$
$\approx P\left(Z \le \dfrac{150.5 - 160}{\sqrt{160\,(1 - 10^{-5})}}\right)$
$= P\left(Z \le \dfrac{-9.5}{12.6491}\right) = P(Z \le -0.75104) = 0.2263$
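A short sketch of this approximation (mean np = 160, variance np(1 − p) ≈ 160), using the same Φ helper as in the earlier normal example:

```python
# Example 4-14: normal approximation to the binomial with continuity correction.
from math import erf, sqrt

def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p = 16_000_000, 1e-5
mu, sigma = n * p, sqrt(n * p * (1 - p))
print(round(phi((150.5 - mu) / sigma), 4))   # P(X <= 150) ≈ 0.2263
```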
Normal Approximation to the Poisson
➢ If X is a Poisson random variable with $E(X) = \lambda$ and $V(X) = \lambda$,

$Z = \dfrac{X - \lambda}{\sqrt{\lambda}}$

➢ is approximately a standard normal random variable.
The same continuity correction used for the binomial
distribution can also be applied.
➢ The approximation is good for 𝜆 ≥ 5
Example 4-16: Normal Approximation to
Poisson
Assume that the number of asbestos particles in a square
meter of dust on a surface follows a Poisson distribution
with a mean of 1000. If a square meter of dust is analyzed,
what is the probability that 950 or fewer particles are found?
$P(X \le 950) = \sum_{x=0}^{950} \dfrac{e^{-1000}\, 1000^x}{x!}$ … too hard manually!

$P(X \le 950.5) \approx P\left(Z \le \dfrac{950.5 - 1000}{\sqrt{1000}}\right) = P(Z \le -1.57) = 0.058$
Using Excel
0.0578 = POISSON(950,1000,TRUE)
0.0588 = NORMDIST(950.5, 1000, SQRT(1000), TRUE)
1.6% = (0.0588 - 0.0578) / 0.0578 = percent error
Exponential Distribution
➢ The Poisson distribution defined a random variable as the
number of flaws along a length of wire (flaws per mm).
➢ The exponential distribution defines a random variable as
the interval between flaws (mm’s between flaws – the
inverse).
➢ Let N denote the number of flaws in x mm of wire and let X denote
the length of wire until the first flaw. If the mean number of flaws
is λ per mm, N has a Poisson distribution with mean λx.

$P(X > x) = P(N = 0) = \dfrac{e^{-\lambda x} (\lambda x)^0}{0!} = e^{-\lambda x}$

$F(x) = P(X \le x) = 1 - e^{-\lambda x}$, $x \ge 0$ — the CDF.

Now differentiating:

$f(x) = \lambda e^{-\lambda x}$, $x \ge 0$ — the PDF.
Exponential Distribution Definition
➢ The random variable X that equals the distance
between successive events of a Poisson process with
mean number of events λ > 0 per unit interval is an
exponential random variable with parameter λ. The
probability density function of X is:
$f(x) = \lambda e^{-\lambda x}$, $0 \le x < \infty$
Exponential Distribution Graphs
The y-intercept of the
exponential probability
density function is λ.
The random variable is
non-negative and extends
to infinity.
$F(x) = 1 - e^{-\lambda x}$
PDF of exponential random variables
of selected values of λ.
Exponential Mean & Variance
If the random variable X has an exponential distribution with parameter λ,

$\mu = E(X) = \dfrac{1}{\lambda}$ and $\sigma^2 = V(X) = \dfrac{1}{\lambda^2}$   (4-15)
Note that, for the:
• Poisson distribution, the mean and variance are
the same.
• Exponential distribution, the mean and standard
deviation are the same.
Example 4-17: Computer Usage-1
In a large corporate computer network, user log-ons to the
system can be modeled as a Poisson process with a mean of
25 log-ons per hour. What is the probability that there are no
log-ons in the next 6 minutes (0.1 hour)? Let X denote the
time in hours from the start of the interval until the first log-on.
$P(X > 0.1) = \int_{0.1}^{\infty} 25 e^{-25x}\, dx = e^{-25(0.1)} = 1 - F(0.1) = 0.082$
Desired probability.
Example 4-17: Computer Usage-2
Continuing, what is the probability that the time until the
next log-on is between 2 and 3 minutes (0.033 & 0.05
hours)?
$P(0.033 < X < 0.05) = \int_{0.033}^{0.05} 25 e^{-25x}\, dx = \left. -e^{-25x} \right|_{0.033}^{0.05} = 0.152$
$= F(0.05) - F(0.033) = 0.152$
Example 4-17: Computer Usage-3
➢ Continuing, what is the interval of time such that the
probability that no log-on occurs during the interval is 0.90?
$P(X > x) = e^{-25x} = 0.90$, so $-25x = \ln(0.90)$
$x = \dfrac{-0.10536}{-25} = 0.00421 \text{ hour} = 0.253 \text{ minute}$

➢ What is the mean and standard deviation of the time until
the next log-on?

$\mu = \dfrac{1}{\lambda} = \dfrac{1}{25} = 0.04 \text{ hour} = 2.4 \text{ minutes}$

$\sigma = \dfrac{1}{\lambda} = \dfrac{1}{25} = 0.04 \text{ hour} = 2.4 \text{ minutes}$
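A minimal sketch of Example 4-17, using the exponential CDF F(x) = 1 − e^(−λx) with λ = 25 log-ons per hour (the slide treats 2 minutes as 0.033 hour):

```python
# Example 4-17: exponential time-to-first-log-on with lambda = 25 per hour.
from math import exp, log

lam = 25
F = lambda x: 1 - exp(-lam * x)            # CDF of the exponential distribution

print(round(1 - F(0.1), 3))                # P(X > 0.1 hour)        ≈ 0.082
print(round(F(0.05) - F(0.033), 3))        # P(2 min < X < 3 min)   ≈ 0.152
print(round(log(0.90) / -lam, 5))          # x with P(X > x) = 0.90 ≈ 0.00421 hour
```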
Exponential Application in Reliability
➢ The reliability of electronic components is often
modeled by the exponential distribution. A chip might
have mean time to failure of 40,000 operating hours.
➢ The memoryless property implies that the component
does not wear out – the probability of failure in the next
hour is constant, regardless of the component age.
➢ The reliability of mechanical components, in contrast, does have a
memory – the probability of failure in the next hour
increases as the component ages. The Weibull
distribution is used to model this situation.
Lognormal Distribution
➢ Let W denote a normal random variable with mean of θ and
variance of ω2, i.e., E(W) = θ and V(W) = ω2
➢ As a change of variable, let X = e^W = exp(W) and W = ln(X)
➢ Now X is a lognormal random variable.
$F(x) = P(X \le x) = P(\exp(W) \le x) = P(W \le \ln(x))$
$= P\left(Z \le \dfrac{\ln(x) - \theta}{\omega}\right) = \Phi\left(\dfrac{\ln(x) - \theta}{\omega}\right)$ for $x > 0$
$= 0$ for $x \le 0$

$f(x) = \dfrac{1}{x\,\omega\sqrt{2\pi}}\, e^{-\frac{(\ln(x) - \theta)^2}{2\omega^2}}$ for $0 < x < \infty$

$E(X) = e^{\theta + \omega^2/2}$ and $V(X) = e^{2\theta + \omega^2}\left(e^{\omega^2} - 1\right)$   (4-22)
Lognormal Graphs
X is a lognormal random variable if ln(X) is normally distributed.
Lognormal probability density functions with θ = 0 for selected
values of ω2.
Example 4-27: Semiconductor Laser-1
The lifetime of a semiconductor laser has a lognormal
distribution with θ = 10 and ω = 1.5 hours. What is the
probability that the lifetime exceeds 10,000 hours?
$P(X > 10{,}000) = 1 - P(\exp(W) \le 10{,}000)$
$= 1 - P(W \le \ln(10{,}000))$
$= 1 - \Phi\left(\dfrac{\ln(10{,}000) - 10}{1.5}\right)$
$= 1 - \Phi(-0.5264) = 0.701$
Example 4-27: Semiconductor Laser-2
➢ What lifetime is exceeded by 99% of lasers?
$P(X > x) = P(\exp(W) > x) = P(W > \ln(x))$
$= 1 - \Phi\left(\dfrac{\ln(x) - 10}{1.5}\right) = 0.99$
$1 - \Phi(z) = 0.99$, therefore $z = -2.33$
$\dfrac{\ln(x) - 10}{1.5} = -2.33$ and $x = \exp(6.505) = 668.48$ hours

➢ What is the mean and variance of the lifetime?

$E(X) = e^{\theta + \omega^2/2} = e^{10 + 1.5^2/2} = \exp(11.125) = 67{,}846.29$

$V(X) = e^{2\theta + \omega^2}\left(e^{\omega^2} - 1\right) = e^{2(10) + 1.5^2}\left(e^{1.5^2} - 1\right) = \exp(22.25)\left[\exp(2.25) - 1\right] = 39{,}070{,}059{,}886.6$

$SD(X) = 197{,}661.5$
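A short sketch reproducing the three results of Example 4-27 for θ = 10 and ω = 1.5 (the z = −2.33 cutoff from the table is used directly):

```python
# Example 4-27: lognormal laser lifetime with theta = 10, omega = 1.5.
from math import erf, exp, log, sqrt

def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

theta, omega = 10, 1.5
print(round(1 - phi((log(10_000) - theta) / omega), 3))  # P(X > 10,000)            ≈ 0.701
print(round(exp(theta + omega * (-2.33)), 1))            # lifetime exceeded by 99% ≈ 668.5 hours
mean = exp(theta + omega**2 / 2)                         # ≈ 67,846
var = exp(2 * theta + omega**2) * (exp(omega**2) - 1)
print(round(mean), round(sqrt(var)))                     # mean and SD ≈ 197,661
```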
Weibull PDF
The random variable X with probability density function

$f(x) = \dfrac{\beta}{\delta}\left(\dfrac{x}{\delta}\right)^{\beta - 1} e^{-(x/\delta)^{\beta}}$ for $x > 0$   (4-20)

is a Weibull random variable with scale parameter $\delta > 0$ and shape parameter $\beta > 0$.

The cumulative distribution function is:

$F(x) = 1 - e^{-(x/\delta)^{\beta}}$   (4-21)

$\mu = E(X) = \delta\, \Gamma\!\left(1 + \dfrac{1}{\beta}\right)$

$\sigma^2 = V(X) = \delta^2\, \Gamma\!\left(1 + \dfrac{2}{\beta}\right) - \delta^2\left[\Gamma\!\left(1 + \dfrac{1}{\beta}\right)\right]^2$   (4-21a)
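A minimal sketch of equations (4-21a) using the gamma function; the parameter values here are illustrative assumptions, not taken from the slides:

```python
# Weibull mean and variance from the gamma function, for hypothetical parameters.
from math import gamma, sqrt

delta, beta = 1000, 2   # assumed scale and shape values, for illustration only
mean = delta * gamma(1 + 1 / beta)
var = delta**2 * (gamma(1 + 2 / beta) - gamma(1 + 1 / beta)**2)
print(round(mean, 1), round(sqrt(var), 1))   # ≈ 886.2 and 463.3
```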
Weibull Distribution Graphs
Weibull probability density functions for selected values of δ and β.
Thank you
Acknowledgement: Most of the slides of this lecture are based on “Applied Statistics
and Probability for Engineers”. Seventh Edition. D. C. Montgomery, and G. C. Runger,
Wiley, 2018.