Decision Making with Probabilities
The decision-making criteria just presented were based on the assumption that no
information regarding the likelihood of the states of nature was available. Thus, no
probabilities of occurrence were assigned to the states of nature, except in the case of the
equal likelihood criterion. In that case, by assuming that each state of nature was equally
likely and assigning a weight of .50 to each state of nature in our example, we were
implicitly assigning a probability of .50 to the occurrence of each state of nature. It is often
possible for the decision maker to know enough about the future states of nature to assign
probabilities to their occurrence. Given that probabilities can be assigned, several decision
criteria are available to aid the decision maker. We will consider two of these criteria:
expected value and expected opportunity loss (although several others, including the
maximum likelihood criterion, are available).
Expected Value
Expected value is computed by multiplying each decision outcome under each state of
nature by the probability of its occurrence.
To apply the concept of expected value as a decision-making criterion, the decision
maker must first estimate the probability of occurrence of each state of nature. Once these
estimates have been made, the expected value for each decision alternative can be
computed. The expected value is computed by multiplying each outcome (of a decision) by
the probability of its occurrence and then summing these products. The expected value of a
random variable x, written symbolically as EV(x), is computed as follows:

EV(x) = Σ x_i P(x_i)   (summed over i = 1 to n)

where n = number of values of the random variable x
Using our real estate investment example, let us suppose that, based on several
economic forecasts, the investor is able to estimate a .60 probability that good economic
conditions will prevail and a .40 probability that poor economic conditions will prevail.
This new information is shown in the table below:
Payoff table with probabilities for states of nature
The best decision is the one with the greatest expected value. Because the greatest
expected value is $44,000, the best decision is to purchase the office building. This does not
mean that $44,000 will result if the investor purchases the office building; rather, it is
assumed that one of the payoff values will result (either $100,000 or -$40,000). The
expected value means that if this decision situation occurred a large number of times, an
average payoff of $44,000 would result. Alternatively, if the payoffs were in terms of costs,
the best decision would be the one with the lowest expected value.
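As a sketch, the expected value computation for the office building can be written in Python. The payoffs ($100,000 and -$40,000) and the probabilities (.60 and .40) come from the example; the `expected_value` helper is simply an illustration of the formula.

```python
# Expected value: each payoff times the probability of its state of nature, summed.
def expected_value(payoffs, probabilities):
    return sum(p * x for x, p in zip(payoffs, probabilities))

probabilities = [0.60, 0.40]  # P(good conditions), P(poor conditions)

# Office-building payoffs from the example: $100,000 (good), -$40,000 (poor)
ev_office = expected_value([100_000, -40_000], probabilities)
print(ev_office)  # 44000.0
```

The same helper, applied to each decision alternative's payoffs, reproduces the comparison in the table; the office building's $44,000 is the greatest expected value.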
Expected Opportunity Loss
A decision criterion closely related to expected value is expected opportunity loss.
To use this criterion, we multiply the probabilities by the regret (i.e., opportunity loss) for
each decision outcome rather than multiplying the decision outcomes by the probabilities
of their occurrence, as we did for expected value.
The concept of regret was introduced in our discussion of the minimax regret
criterion. The regret values for each decision outcome in our example were shown. These
values are repeated below, with the addition of the probabilities of occurrence for each
state of nature.
Regret (opportunity loss) table with probabilities for states of nature
As with the minimax regret criterion, the best decision results from minimizing the
regret, or, in this case, minimizing the expected regret or opportunity loss. Because $28,000
is the minimum expected regret, the decision is to purchase the office building.
Notice that the decisions recommended by the expected value and expected
opportunity loss criteria were the same—to purchase the office building. This is not a
coincidence: the two methods always result in the same decision. Thus, it is
redundant to apply both methods to a decision situation when either one will suffice.
In addition, note that the decisions from the expected value and expected
opportunity loss criteria are totally dependent on the probability estimates determined by
the decision maker. Thus, if inaccurate probabilities are used, erroneous decisions will
result. It is therefore important that the decision maker be as accurate as possible in
determining the probability of each state of nature.
Notes:
Expected opportunity loss is the expected value of the regret for each decision.
The expected value and expected opportunity loss criteria result in the same decision.
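The expected opportunity loss computation can be sketched the same way, substituting regrets for payoffs. The regret values below are not listed explicitly in this section; they are consistent with the $28,000 minimum expected regret reported in the text (zero regret under good conditions, where the office building is best, and a $70,000 regret under poor conditions) and are shown here for illustration.

```python
# Expected opportunity loss: weight each regret by its state-of-nature probability.
def expected_opportunity_loss(regrets, probabilities):
    return sum(p * r for r, p in zip(regrets, probabilities))

probabilities = [0.60, 0.40]  # P(good conditions), P(poor conditions)

# Office-building regrets consistent with the example: $0 under good
# conditions, $70,000 under poor conditions.
eol_office = expected_opportunity_loss([0, 70_000], probabilities)
print(eol_office)  # 28000.0
```

Because $28,000 is the minimum expected regret among the three alternatives, the decision is again to purchase the office building.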
Expected Value of Perfect Information
It is often possible to purchase additional information regarding future events and
thus make a better decision. For example, a real estate investor could hire an economic
forecaster to perform an analysis of the economy to more accurately determine which
economic condition will occur in the future. However, the investor (or any decision maker)
would be foolish to pay more for this information than he or she stands to gain in extra
profit from having the information.
That is, the information has some maximum value that represents the limit of what
the decision maker would be willing to spend. This value of information can be computed
as an expected value—hence its name, the expected value of perfect information (also
referred to as EVPI). To compute the expected value of perfect information, we first look at
the decisions under each state of nature. If we could obtain information that assured us
which state of nature was going to occur (i.e., perfect information), we could select the best
decision for that state of nature. For example, in our real estate investment example, if we
know for sure that good economic conditions will prevail, then we will decide to purchase
the office building. Similarly, if we know for sure that poor economic conditions will occur,
then we will decide to purchase the apartment building. These hypothetical “perfect”
decisions are summarized in the table below:
Payoff table with decisions, given perfect information
Note: The expected value of perfect information (EVPI) is the maximum amount a decision
maker would pay for additional information.
The probabilities of each state of nature (i.e., .60 and .40) tell us that good economic
conditions will prevail 60% of the time and poor economic conditions will prevail 40% of
the time (if this decision situation is repeated many times). In other words, even though
perfect information enables the investor to make the right decision, each state of nature
will occur only a certain portion of the time. Thus, each of the decision outcomes obtained
using perfect information must be weighted by its respective probability:

EV(given perfect information) = $100,000(.60) + $30,000(.40) = $72,000
The amount $72,000 is the expected value of the decision, given perfect information,
not the expected value of perfect information. The expected value of perfect information is
the maximum amount that would be paid to gain information that would result in a
decision better than the one made without perfect information. Recall that the expected
value decision without perfect information was to purchase an office building, and the
expected value was computed as:

EV(office building) = $100,000(.60) - $40,000(.40) = $44,000
The expected value of perfect information is computed by subtracting the expected
value without perfect information ($44,000) from the expected value given perfect
information ($72,000):

EVPI = $72,000 - $44,000 = $28,000
The expected value of perfect information, $28,000, is the maximum amount that
the investor would pay to purchase perfect information from some other source, such as an
economic forecaster. Of course, perfect information is rare and usually unobtainable.
Typically, the decision maker would be willing to pay some amount less than $28,000,
depending on how accurate (i.e., close to perfection) the decision maker believed the
information was. It is interesting to note that the expected value of perfect information,
$28,000 for our example, is the same as the expected opportunity loss (EOL) for the
decision selected, using this latter criterion:

EVPI = EOL(office building) = $28,000
This will always be the case, and logically so, because regret reflects the difference
between the payoff of the best decision under a state of nature and the payoff of the
decision actually made, which is exactly what the expected value of perfect information
measures.
Notes:
EVPI equals the expected value, given perfect information, minus the expected value without
perfect information.
The expected value of perfect information equals the expected opportunity loss for the best
decision.
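The EVPI arithmetic above can be sketched directly from the figures in the text. The $30,000 payoff used for poor conditions is the apartment-building payoff implied by the $72,000 expected value given perfect information.

```python
probabilities = [0.60, 0.40]  # P(good conditions), P(poor conditions)

# Best payoff under each state of nature: the office building's $100,000 when
# conditions are good; $30,000 (apartment building, implied by the $72,000
# figure in the text) when conditions are poor.
best_payoffs = [100_000, 30_000]

ev_given_perfect_info = sum(p * x for x, p in zip(best_payoffs, probabilities))
ev_without_info = 44_000  # expected value of the best decision (office building)

evpi = ev_given_perfect_info - ev_without_info
print(evpi)  # 28000.0
```

This reproduces both identities in the notes: EVPI is the difference between the two expected values, and it equals the expected opportunity loss ($28,000) of the best decision.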
Decision Trees
Another useful technique for analyzing a decision situation is the use of a decision tree.
A decision tree is a graphical diagram consisting of nodes and branches. In a decision tree,
the user computes the expected value of each outcome and makes a decision based on
these expected values.
Note:
A decision tree is a diagram consisting of square decision nodes, circle probability nodes, and
branches representing decision alternatives.
The primary benefit of a decision tree is that it provides an illustration (or picture)
of the decision-making process. This makes it easier to correctly compute the necessary
expected values and to understand the process of making the decision.
Decision tree for real estate investment example
Notes:
The expected value is computed at each probability node.
Branches with the greatest expected value are selected.
The circles (●) and the square (■) in the picture are referred to as nodes. The square
is a decision node, and the branches emanating from a decision node reflect the alternative
decisions possible at that point. For example, node 1 signifies a decision to purchase an
apartment building, an office building, or a warehouse. The circles are probability, or event,
nodes, and the branches emanating from them indicate the states of nature that can occur:
good economic conditions or poor economic conditions.
The decision tree represents the sequence of events in a decision situation. First,
one of the three decision choices is selected at node 1. Depending on the branch selected,
the decision maker arrives at probability node 2, 3, or 4, where one of the states of nature
will prevail, resulting in one of six possible payoffs.
Determining the best decision by using a decision tree involves computing the
expected value at each probability node. This is accomplished by starting with the final
outcomes (payoffs) and working backward through the decision tree toward node 1. First,
the expected value of the payoffs is computed at each probability node. For example, at
node 3 (the office building):

EV(node 3) = $100,000(.60) - $40,000(.40) = $44,000
These values are now shown as the expected payoffs from each of the three
branches emanating from node 1. Each of these three expected values at nodes 2, 3, and 4 is
the outcome of a possible decision that can occur at node 1. Moving toward node 1, we
select the branch that comes from the probability node with the highest expected payoff. In
the picture below, the branch corresponding to the highest payoff, $44,000, is from node 1
to node 3. This branch represents the decision to purchase the office building. The decision
to purchase the office building, with an expected payoff of $44,000, is the same result we
achieved earlier by using the expected value criterion. In fact, when only one decision is to
be made (i.e., there is not a series of decisions), the decision tree will always yield the same
decision and expected payoff as the expected value criterion. As a result, in these decision
situations a decision tree is not very useful. However, when a sequence or series of
decisions is required, a decision tree can be very useful.
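The backward pass described above can be sketched as a small roll-back routine. Only the office-building payoffs are stated in this section; the apartment-building and warehouse payoffs below are assumed for illustration.

```python
# Roll back a one-stage decision tree: compute the expected value at each
# probability node, then pick the branch with the greatest expected value
# at the decision node.
probabilities = [0.60, 0.40]  # P(good conditions), P(poor conditions)

# Payoffs per decision branch under (good, poor) conditions. The office-building
# values come from the example; the others are assumed for illustration.
payoffs = {
    "apartment building": [50_000, 30_000],
    "office building": [100_000, -40_000],
    "warehouse": [30_000, 10_000],
}

# Expected value at each probability node (nodes 2, 3, and 4).
node_values = {
    decision: sum(p * x for x, p in zip(outcomes, probabilities))
    for decision, outcomes in payoffs.items()
}

# At decision node 1, select the branch with the highest expected payoff.
best = max(node_values, key=node_values.get)
print(best, node_values[best])  # office building 44000.0
```

With these payoffs, the roll-back selects the office building at $44,000, matching the expected value criterion; a multistage tree would simply repeat the same roll-back at each earlier decision node.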