Chapter 4
Introduction to Probability
Experiments, Counting Rules, and
Assigning Probabilities
Events and Their Probability
Some Basic Relationships of Probability
Conditional Probability
Bayes’ Theorem
Probability
Probability is a numerical measure of the
likelihood that an event will occur.
Probability values are always assigned on a
scale from 0 to 1.
A probability near 0 indicates an event is very
unlikely to occur.
A probability near 1 indicates an event is
almost certain to occur.
A probability of 0.5 indicates the occurrence of
the event is just as likely as it is unlikely.
Probability as a Numerical Measure
of the Likelihood of Occurrence
[Figure: probability scale from 0 to 1 showing increasing likelihood of occurrence; at .5 the occurrence of the event is just as likely as it is unlikely.]
An Experiment and Its Sample Space
An experiment is any process that generates
well-defined outcomes.
The sample space for an experiment is the set
of all experimental outcomes.
A sample point is an element of the sample
space, any one particular experimental
outcome.
Example:
Consider the random experiment of tossing a
coin. If we let S denote the sample space, we
can use the following notation to describe the
sample space.
S = {Head, Tail}
Consider the process of rolling a die. The
possible experimental outcomes, defined as
the number of dots appearing on the face of
the die, are the six sample points in the
sample space for this random experiment,
S = {1, 2, 3, 4, 5, 6}
A Counting Rule for
Multiple-Step Experiments
If an experiment consists of a sequence of k
steps in which there are n1 possible results for
the first step, n2 possible results for the second
step, and so on, then the total number of
experimental outcomes is given by (n1)(n2) . . .
(nk).
A helpful graphical representation of a
multiple-step experiment is a tree diagram.
Example:
A Counting Rule for Multiple-Step Experiments
Consider the experiment of tossing two coins. Let
the experimental outcomes be defined in terms
of the pattern of heads and tails appearing on the
upward faces of the two coins. How many
experimental outcomes are possible for this
experiment?
Viewing the experiment of tossing two coins as a
sequence of first tossing one coin (n1 = 2) and
then tossing the other coin (n2 = 2), we can see
from the counting rule that (2)(2) = 4 distinct
experimental outcomes are possible. They are:
(H, H), (H, T), (T, H), and (T, T)
Example: Tree Diagram
[Figure: tree diagram for the two-coin experiment; two branches (H, T) at step 1, each splitting into two branches at step 2, giving the four outcomes.]
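A minimal sketch in Python: itertools.product enumerates the outcomes of a multiple-step experiment, confirming the counting rule for the two-coin toss.

    import itertools

    coin = ["H", "T"]  # possible results of one toss

    # The Cartesian product lists every experimental outcome,
    # so the count equals (n1)(n2) = (2)(2) = 4.
    outcomes = list(itertools.product(coin, coin))
    print(outcomes)       # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]
    print(len(outcomes))  # 4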
Counting Rule for Combinations
Another useful counting rule enables us to count the
number of experimental outcomes when n objects are
to be selected from a set of N objects.
Number of combinations of N objects taken n at a time:
C_n^N = N! / (n!(N - n)!)
where N! = N(N - 1)(N - 2) . . . (2)(1)
n! = n(n - 1)( n - 2) . . . (2)(1)
0! = 1
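A quick check in Python; math.comb computes N!/(n!(N - n)!) directly. The values N = 5 and n = 2 are chosen only for illustration.

    import math

    N, n = 5, 2
    # Number of combinations of N objects taken n at a time.
    print(math.comb(N, n))  # 10
    # Same value from the factorial definition above.
    print(math.factorial(N) // (math.factorial(n) * math.factorial(N - n)))  # 10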
Counting Rule for Permutations
A third useful counting rule enables us to count the
number of experimental outcomes when n objects are
to be selected from a set of N objects where the order
of selection is important.
Number of permutations of N objects taken n at a time:
P_n^N = n! C_n^N = N! / (N - n)!
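The same check for permutations; math.perm computes N!/(N - n)!, and each combination is counted n! times when order matters.

    import math

    N, n = 5, 2
    # Number of permutations of N objects taken n at a time.
    print(math.perm(N, n))  # 20
    # Each combination appears n! times once order is taken into account.
    print(math.comb(N, n) * math.factorial(n))  # 20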
Assigning Probabilities
Classical Method
Assigning probabilities based on the
assumption of equally likely outcomes.
Relative Frequency Method
Assigning probabilities based on
experimentation or historical data.
Subjective Method
Assigning probabilities based on the
assignor’s judgment.
Assigning Probabilities
Classical Method
If an experiment has n possible outcomes, this method
would assign a probability of 1/n to each outcome.
Example
Experiment: Rolling a die
Sample Space: S = {1, 2, 3, 4, 5, 6}
Probabilities: Each sample point has a 1/6 chance
of occurring.
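A one-line sketch of the classical method in Python: each of the n equally likely outcomes is assigned probability 1/n.

    sample_space = [1, 2, 3, 4, 5, 6]
    n = len(sample_space)

    # Classical method: equally likely outcomes each get probability 1/n.
    probabilities = {outcome: 1 / n for outcome in sample_space}
    print(probabilities[3])  # 0.1666... = 1/6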
Relative Frequency Method
The relative frequency method of assigning
probabilities is appropriate when data are
available to estimate the proportion of the time
the experimental outcome will occur if the
experiment is repeated a large number of times.
Number Waiting    Number of Days Outcome Occurred    Assigned Probability
0                 2                                  2/20 = 0.10
1                 5                                  5/20 = 0.25
2                 6                                  6/20 = 0.30
3                 4                                  4/20 = 0.20
4                 3                                  3/20 = 0.15
Total             20                                 1.00
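A short sketch computing the assigned probabilities from the frequency data in the table above.

    # Outcome (number waiting) -> number of days the outcome occurred.
    days_waiting = {0: 2, 1: 5, 2: 6, 3: 4, 4: 3}
    total = sum(days_waiting.values())  # 20 days observed

    # Relative frequency method: proportion of days each outcome occurred.
    probabilities = {x: count / total for x, count in days_waiting.items()}
    print(probabilities)                # {0: 0.1, 1: 0.25, 2: 0.3, 3: 0.2, 4: 0.15}
    print(sum(probabilities.values())) # 1.0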
Subjective Method
When economic conditions and a company’s
circumstances change rapidly, it might be
inappropriate to assign probabilities based
solely on historical data.
We can use any data available as well as our
experience and intuition, but ultimately a
probability value should express our degree of
belief that the experimental outcome will
occur.
The best probability estimates often are
obtained by combining the estimates from the
classical or relative frequency approach with
the subjective estimates.
Example: Subjective Method
Consider the case in which Tom and Judy make
an offer to purchase a house. Two outcomes are
possible:
E1 = their offer is accepted
E2 = their offer is rejected
Using the subjective method, Judy sets
P(E1) = .8 and
P(E2) = .2.
Tom believes the probability that their offer will be
accepted is .6.
Hence, Tom would set P(E1) = .6 and
P(E2) = .4.
Events and Their Probability
An event is a collection of sample points.
The probability of any event is equal to the
sum of the probabilities of the sample points in
the event.
If we can identify all the sample points of an
experiment and assign a probability to each,
we can compute the probability of an event.
Example: Project Completion Time
Events and Their Probabilities
Let C denote the event that the project is completed
in 10 months or less; we write
C = {(2, 6), (2, 7), (2, 8), (3, 6), (3, 7), (4, 6)}
The probability of event C, denoted P(C), is given by
P(C ) = P(2, 6) + P(2, 7) + P(2, 8) + P(3, 6) + P(3, 7)
+ P(4, 6)
Refer to the sample point probabilities in Table 4.3;
we have
P(C ) = .15 + .15 + .05 + .10 + .20 + .05
= .70
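The same computation in Python; the sample point probabilities below are the Table 4.3 values quoted above.

    # P(sample point) for each (stage 1 time, stage 2 time) in event C.
    event_C = {(2, 6): .15, (2, 7): .15, (2, 8): .05,
               (3, 6): .10, (3, 7): .20, (4, 6): .05}

    # The probability of an event is the sum of its sample point probabilities.
    p_C = sum(event_C.values())
    print(round(p_C, 2))  # 0.7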
Some Basic Relationships of Probability
There are some basic probability relationships
that can be used to compute the probability of
an event without knowledge of all the sample
point probabilities.
• Complement of an Event
• Union of Two Events
• Intersection of Two Events
• Mutually Exclusive Events
Complement of an Event
The complement of event A is defined to be
the event consisting of all sample points that
are not in A.
The complement of A is denoted by Ac.
The Venn diagram below illustrates the concept of
a complement.
[Figure: Venn diagram; sample space S containing event A and its complement Ac.]
Complement of an Event
In any probability application, either event A or its
complement Ac must occur. Therefore, we have
P(A) + P(Ac ) = 1
Computing Probability Using the Complement
P(A) = 1 - P(Ac )
Example: consider the case of a sales manager who,
after reviewing sales reports, states that 80% of new
customer contacts result in no sale. Letting A denote
the event of a sale and Ac the event of no sale,
the manager is stating that P(Ac ) = .80.
P(A) = 1 - P(Ac ) = 1 - .80 = .20
Union of Two Events
The union of events A and B is the event
containing all sample points that are in A or B
or both.
The union is denoted by A ∪ B.
The union of A and B is illustrated below.
[Figure: Venn diagram of A ∪ B; sample space S with overlapping events A and B, the union shaded.]
Example: Bradley Investments
Union of Two Events
Event M = Markley Oil Profitable
Event C = Collins Mining Profitable
M ∪ C = Markley Oil Profitable
or Collins Mining Profitable (or both)
Summing the probabilities of the sample points in
M ∪ C gives P(M ∪ C) = .82.
Intersection of Two Events
The intersection of events A and B is the set of
all sample points that are in both A and B.
The intersection is denoted by A ∩ B.
The intersection of A and B is the area of
overlap in the illustration below.
[Figure: Venn diagram of A ∩ B; sample space S with overlapping events A and B, the intersection shaded.]
Example: Bradley Investments
Intersection of Two Events
Event M = Markley Oil Profitable
Event C = Collins Mining Profitable
M ∩ C = Markley Oil Profitable
and Collins Mining Profitable
M ∩ C = {(10, 8), (5, 8)}
P(M ∩ C) = P(10, 8) + P(5, 8)
= .20 + .16
= .36
Addition Law
The addition law provides a way to compute
the probability of event A, or B, or both A and
B occurring.
The law is written as:
P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
Example: Bradley Investments
Addition Law
Markley Oil or Collins Mining Profitable
We know: P(M) = .70, P(C) = .48, P(M ∩ C) = .36
Thus: P(M ∪ C) = P(M) + P(C) - P(M ∩ C)
= .70 + .48 - .36
= .82
This result is the same as that obtained
earlier using
the definition of the probability of an event.
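A quick check of the addition law in Python, using the Bradley Investments values above.

    p_M, p_C, p_M_and_C = .70, .48, .36

    # Addition law: P(M or C) = P(M) + P(C) - P(M and C).
    p_M_or_C = p_M + p_C - p_M_and_C
    print(round(p_M_or_C, 2))  # 0.82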
Mutually Exclusive Events
Two events are said to be mutually exclusive if
the events have no sample points in common.
That is, two events are mutually exclusive if,
when one event occurs, the other cannot occur.
[Figure: Venn diagram; sample space S with non-overlapping events A and B.]
Mutually Exclusive Events
Addition Law for Mutually Exclusive Events
P(A ∪ B) = P(A) + P(B)
Conditional Probability
The probability of an event given that another
event has occurred is called a conditional
probability.
The conditional probability of A given B is
denoted by P(A|B).
A conditional probability is computed as follows:
P(A|B) = P(A ∩ B) / P(B)
Example: Conditional Probability
Promotion Status of Police Officers
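The promotion table itself is not reproduced in these slides, so the counts below are assumed for illustration; the sketch shows how the conditional probability formula is applied to a table of joint counts.

    # Assumed joint counts (hypothetical numbers, for illustration only).
    counts = {("man", "promoted"): 288, ("man", "not promoted"): 672,
              ("woman", "promoted"): 36, ("woman", "not promoted"): 204}
    total = sum(counts.values())  # 1200 officers

    p_man_and_promoted = counts[("man", "promoted")] / total
    p_man = (counts[("man", "promoted")] + counts[("man", "not promoted")]) / total

    # Conditional probability: P(promoted | man) = P(promoted and man) / P(man).
    print(round(p_man_and_promoted / p_man, 2))  # 0.3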
Independent Events
Events A and B are independent if P(A|B) = P(A).
Independent Events
Multiplication Law for Independent Events
P(A ∩ B) = P(A)P(B)
The multiplication law also can be used as a
test to see if two events are independent.
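A sketch of the independence test with illustrative values: compare P(A ∩ B) against P(A)P(B).

    p_A, p_B, p_A_and_B = .5, .4, .2  # illustrative values

    # A and B are independent exactly when P(A and B) = P(A)P(B).
    if abs(p_A_and_B - p_A * p_B) < 1e-9:
        print("independent")  # printed here, since .2 = (.5)(.4)
    else:
        print("dependent")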
Multiplication Law
The multiplication law provides a way to
compute the probability of an intersection of
two events.
The law is written as:
P(A ∩ B) = P(B)P(A|B)
Bayes’ Theorem
Often we begin probability analysis with initial
or prior probabilities.
Then, from a sample, special report, or a
product test we obtain some additional
information.
Given this information, we calculate revised or
posterior probabilities.
Bayes’ theorem provides the means for
revising the prior probabilities.
Prior Probabilities → New Information → Application of Bayes’ Theorem → Posterior Probabilities
Example:
Consider a manufacturing firm that receives
shipments of parts from two different
suppliers.
Let:
A1 = the event that a part is from supplier 1
A2 = the event that a part is from supplier 2
Prior Probabilities
Currently, 65% of the parts purchased by the
company are from supplier 1 and the remaining
35% are from supplier 2. Hence, we assign the prior
probabilities P(A1) = .65 and P(A2) = .35.
Example:
New Information
Let,
G = the event that a part is good
B = the event that a part is bad
Conditional Probabilities
Example: Tree Diagram
[Figure: probability tree; branches for A1 and A2 at step 1, each splitting into G and B at step 2.]
Each of the experimental outcomes is the
intersection of two events, so we can use the
multiplication rule to compute the probabilities.
For instance, P(A1 ∩ G) = P(A1)P(G|A1).
Bayes’ Theorem
Let,
B = the event that the part is bad
From the law of conditional probability, we know
P(A1|B) = P(A1 ∩ B) / P(B)
Referring to the probability tree,
P(A1 ∩ B) = P(A1)P(B|A1)
and P(B) = P(A1)P(B|A1) + P(A2)P(B|A2)
Hence,
P(A1|B) = P(A1)P(B|A1) / [P(A1)P(B|A1) + P(A2)P(B|A2)]
Bayes’ Theorem
Bayes’ theorem for the case of two events:
P(A1|B) = P(A1)P(B|A1) / [P(A1)P(B|A1) + P(A2)P(B|A2)]
Bayes’ Theorem
To find the posterior probability that event Ai
will occur given that event B has occurred, we
apply Bayes’ theorem:
P(Ai|B) = P(Ai)P(B|Ai) / [P(A1)P(B|A1) + P(A2)P(B|A2) + ... + P(An)P(B|An)]
Bayes’ theorem is applicable when the events
for which we want to compute posterior
probabilities are mutually exclusive and their
union is the entire sample space.
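A sketch of Bayes’ theorem for the supplier example; the priors (.65 and .35) come from the slides, while the conditional bad-part rates are assumed for illustration only.

    priors = {"A1": .65, "A2": .35}       # prior probabilities from the slides
    p_bad_given = {"A1": .02, "A2": .05}  # assumed P(B | Ai), illustrative only

    # Denominator: total probability of a bad part, P(B).
    p_B = sum(priors[a] * p_bad_given[a] for a in priors)

    # Posterior P(Ai | B) by Bayes' theorem.
    posteriors = {a: priors[a] * p_bad_given[a] / p_B for a in priors}
    print(round(p_B, 4))  # 0.0305
    print({a: round(p, 4) for a, p in posteriors.items()})
    # {'A1': 0.4262, 'A2': 0.5738}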
Tabular Approach
Step 1 Prepare the following three columns:
Column 1 - The mutually exclusive events for
which posterior probabilities are desired.
Column 2 - The prior probabilities for the
events.
Column 3 - The conditional probabilities of
the new information given each event.
Tabular Approach
Step 2 In column 4, compute the joint
probabilities for each event and the new
information B by using the multiplication law.
Multiply the prior probabilities in
column 2 by the corresponding conditional
probabilities in column 3. That is,
P(Ai ∩ B) = P(Ai)P(B|Ai).
Tabular Approach
Step 3 Sum the joint probabilities in column
4. The sum is the probability of the new
information P(B).
Tabular Approach
Step 4 In column 5, compute the posterior
probabilities using the basic relationship of
conditional probability.
P(Ai|B) = P(Ai ∩ B) / P(B)
Note that the joint probabilities P(Ai ∩ B)
are in column 4 and the probability P(B) is the sum
of column 4.
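The four steps translate directly into code; a sketch reusing the supplier priors from the slides, with the same assumed conditional probabilities as above.

    events = ["A1", "A2"]
    prior = {"A1": .65, "A2": .35}  # column 2: prior probabilities
    cond = {"A1": .02, "A2": .05}   # column 3: assumed P(B | Ai), illustrative

    # Step 2: joint probabilities P(Ai and B) = P(Ai)P(B | Ai)  (column 4).
    joint = {a: prior[a] * cond[a] for a in events}

    # Step 3: P(B) is the sum of column 4.
    p_B = sum(joint.values())

    # Step 4: posterior probabilities P(Ai | B) = P(Ai and B) / P(B)  (column 5).
    posterior = {a: joint[a] / p_B for a in events}

    for a in events:
        print(a, prior[a], cond[a], round(joint[a], 4), round(posterior[a], 4))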
End of Chapter 4