Linear Programming Lecture Notes
Contents
1 Introduction to Linear Programming 3
1.1 Operations Research . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 What is a Linear Programming (LP) Problem? . . . . . . . . . . 3
1.3 Modeling LP Problems . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Geometric Preliminaries and Solutions . . . . . . . . . . . . . . 9
1.4.1 Half−Spaces, Hyperplanes, and Convex Sets . . . . . . . 9
1.4.2 The Graphical Solution of Two−Variable LP Problems . 14
1.5 The Corner Point Theorem and its Proof . . . . . . . . . . . . . 25
3.5 Shadow Prices . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.6 Duality and Sensitivity Analysis . . . . . . . . . . . . . . . . . . 86
3.7 Complementary Slackness . . . . . . . . . . . . . . . . . . . . . 89
3.8 The Dual−Simplex Method . . . . . . . . . . . . . . . . . . . . 92
Answers 100
1 Introduction to Linear Programming
1.1 Operations Research
The first formal activities of operations research (OR) were initiated in England
during World War II, when a team of British scientists set out to make decisions
regarding the best utilization of war materials. After the war, the idea was
adopted and improved in the civilian sector.
In OR, we do not have a single general technique that solves all mathematical
models. The most common OR techniques are Linear Programming, Non-Linear
Programming, Integer Programming, Dynamic Programming, Network Program-
ming, and many more. All of these techniques are defined by algorithms, not
by closed-form formulas. The algorithms here are for deterministic models (not
for probabilistic or stochastic models). Deterministic means that once we know
the values of the variables, we know the value of the model with certainty.
3. A sign restriction is associated with each variable. For
any variable xi , either xi ≥ 0 or xi is unrestricted in
sign (urs).
Example 1.3. Furnco manufactures desks and chairs. Each desk uses 4 units
of wood, and each chair uses 3. A desk contributes $40 to profit, and a chair
contributes $25. Marketing restrictions require that the number of chairs pro-
duced be at least twice the number of desks produced. If 20 units of wood are
available, formulate an LP to maximize Furnco’s profit.
Solution: Let x1 be the number of desks produced, and x2 be the number
of chairs produced. Then, the formulation of the problem is
max z = 40x1 + 25x2
s.t. 4x1 + 3x2 ≤ 20
x2 ≥ 2x1
x1 ≥ 0, x2 ≥ 0
The optimal solution of this problem is x1 = 2, x2 = 4, and z = 180. (We will
see how we find these values later on.)
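The claimed optimum can be checked by hand or in code. A minimal sketch in plain Python (the helper name is illustrative) that verifies (x1, x2) = (2, 4) satisfies every constraint of Example 1.3 and evaluates z:

```python
# Feasibility and objective check for the Furnco LP (Example 1.3).
def is_feasible(x1, x2):
    return (4 * x1 + 3 * x2 <= 20   # wood availability
            and x2 >= 2 * x1        # marketing restriction
            and x1 >= 0 and x2 >= 0)

x1, x2 = 2, 4
assert is_feasible(x1, x2)
z = 40 * x1 + 25 * x2
print(z)  # 180
```

Note that this only certifies feasibility and the objective value; optimality is established graphically in Section 1.4.2.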
Note 1. [LP Assumptions]
1. The Proportionality and Additivity Assumptions.
The fact that the objective function for an LP must be a linear function
of the decision variables has two implications.
(a) The contribution to the objective function from each decision vari-
able is proportional to the value of that decision variable. For exam-
ple, the contribution to the objective function in example (1.3) from
making 5 desks is exactly five times the contribution to the objective
function from making one desk.
(b) The contribution to the objective function for any variable is inde-
pendent of the values of the other decision variables. For example,
no matter what the value of x2 , the manufacture of x1 desks will
always contribute 40x1 dollars to the objective function.
Analogously, the fact that each LP constraint must be a linear inequality
or linear equation has two implications.
(a) The contribution of each variable to the left-hand side of each con-
straint is proportional to the value of the variable.
(b) The contribution of a variable to the left-hand side of each constraint
is independent of the values of the other variables.
2. The Divisibility Assumption.
The divisibility assumption requires that each decision variable be allowed
to assume fractional values. For instance, in example (1.3), the Divisibility
Assumption implies that it is acceptable to produce 1.5 desks or 1.63
chairs. Because Furnco cannot actually produce a fractional number of
desks or chairs, the Divisibility Assumption is not satisfied in the Furnco
problem. A linear programming problem in which some or all of the
variables must be nonnegative integers is called an integer programming
problem.
4 hours of labor per week. All wheat can be sold at $4 a bushel, and all corn
can be sold at $3 a bushel. Seven acres of land and 40 hours per week of labor
are available. Government regulations require that at least 30 bushels of corn
be produced during the current year. Formulate an LP whose solution will tell
Farmer Jones how to maximize the total revenue from wheat and corn.
Solution: We can formulate this problem in two ways, depending on the
choice of decision variables.
             Steel 1              Steel 2
Mill     Cost    Time         Cost    Time
                (Minutes)            (Minutes)
1        $10      20          $11      22
2        $12      24          $9       18
3        $14      28          $10      30
Solution: Let xij = number of tons of Steel j produced each month at Mill
i. Then a correct formulation is
min w = 10x11 + 12x21 + 14x31 + 11x12 + 9x22 + 10x32
s.t. 20x11 + 22x12 ≤ 12000
24x21 + 18x22 ≤ 12000
28x31 + 30x32 ≤ 12000
x11 + x21 + x31 ≥ 500
x12 + x22 + x32 ≥ 600
x11 , x21 , x31 , x12 , x22 , x32 ≥ 0
Exercise 1.1.
play and how many to work in order to maximize your fun. Formulate
this problem.
6. A banquet hall offers two types of tables for rent: 6-person rectangular
tables at a cost of $28 each and 10-person round tables at a cost of $52
each. Kathleen would like to rent the hall for a wedding banquet and
needs tables for 250 people. The room can hold a maximum of 35 tables,
and the hall only has 15 rectangular tables available. Formulate an LP to
determine how many tables of each type should be rented to minimize the cost.
costs $1.30 per pound and food B costs $0.80 per pound, formulate an
LP to determine how many pounds of each food Katy should buy each
month to minimize the cost.
Rn = {(x1 , x2 , · · · , xn ) | xi ∈ R for i = 1, 2, · · · , n}
For example, R2 = {(x1 , x2 ) | x1 and x2 are reals}. Geometrically, we
represent R2 as in Figure 1.
Figure 1
Figure 2
The graph in R2 of the inequality a1 x1 + a2 x2 ≤ c or a1 x1 + a2 x2 ≥ c is
the set of all points in R2 lying on the line a1 x1 + a2 x2 = c together with all
points lying to one side of this line. For example, the shaded region in Figure
3 is the graph of the inequality 2x1 − 3x2 ≤ 6.
Figure 3
To determine on which side of the line the region of the inequality 2x1 − 3x2 ≤ 6
lies, consider a point, say (0, 0), not lying on the line but satisfying the inequality;
the side of the line containing this point is the one corresponding to the inequality.
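The test-point method is easy to automate. A minimal sketch in plain Python (the function and argument names are illustrative):

```python
# Automating the test-point method for the line 2*x1 - 3*x2 = 6.
def side_of(point, a1=2, a2=-3, c=6):
    """Classify a point relative to the line a1*x1 + a2*x2 = c."""
    value = a1 * point[0] + a2 * point[1]
    if value < c:
        return '<='   # same side as the graph of a1*x1 + a2*x2 <= c
    if value > c:
        return '>='
    return 'on'

# (0, 0) gives 2*0 - 3*0 = 0 <= 6, so the origin lies in the shaded
# region of Figure 3.
print(side_of((0, 0)))  # <=
```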
a1 x1 + a2 x2 + · · · + an xn ≤ c
a1 x1 + a2 x2 + · · · + an xn ≥ c
a1 x1 + a2 x2 + · · · + an xn = c
is a hyperplane in R5 , and the set of points in R5 satisfying
3x1 + (1/2)x2 − x3 + (2/3)x4 + x5 ≥ −9
is a half-space in R5 .
Figure 4
Figure 5
(1 − t)p + tq ; 0≤t≤1
where
Example 1.10. The line segment in R2 joining the points p = (3, 6) and
q = (−4, 5) is the set of points
{(1 − t)(3, 6) + t(−4, 5) | 0 ≤ t ≤ 1} = {(3 − 7t, 6 − t) | 0 ≤ t ≤ 1}.
Theorem 1.1. The half-space consisting of all points in Rn
satisfying the inequality
a1 x1 + a2 x2 + · · · + an xn ≤ c
or the inequality
a1 x1 + a2 x2 + · · · + an xn ≥ c
is convex.
Proof. We establish this result for the half-space H defined by the inequality
a1 x1 + a2 x2 + · · · + an xn ≤ c (1.1)
Let p = (p1 , p2 , · · · , pn ) and q = (q1 , q2 , · · · , qn ) be any two points of H, so that
a1 p1 + a2 p2 + · · · + an pn ≤ c
and
a1 q1 + a2 q2 + · · · + an qn ≤ c
Theorem 1.2. If K1 , K2 , · · · , Kr are convex subsets of Rn ,
then the intersection of these sets, K = K1 ∩ K2 ∩ · · · ∩ Kr
is also convex.
A hyperplane
a1 x1 + a2 x2 + · · · + an xn = c
is convex, since it is the intersection of the two half-spaces
a1 x1 + a2 x2 + · · · + an xn ≤ c
and
a1 x1 + a2 x2 + · · · + an xn ≥ c,
each of which is convex by Theorem (1.1). By Theorem (1.2), this intersection is convex.
Figure 6
Exercise 1.2.
1. Draw the graph in R2 of the following half-spaces.
Figure 7
Definition 1.10. The feasible region for an LP is the set
of all points that satisfy all the LP’s constraints and sign
restrictions. Any point that is not in the LP’s feasible region
is said to be an infeasible point.
The shaded area in Figure 8 indicates the feasible region of the LP in example
(1.3). Note that each of the constraints in the LP defines a half-space. The
feasible set consists of all points in the intersection of these half-spaces. Observe
that the feasible region in Figure 8 is convex.
4x1 + 3x2 ≤ 20
x2 ≥ 2x1
x1 ≥ 0, x2 ≥ 0
Figure 8
Note that the points (0, 0), (1, 3), and (2, 4) are all in the feasible region,
while (2, 1) is infeasible, because it does not satisfy the second constraint.
Theorem 1.4. The feasible region of an LP, defined by constraints
of the form
a1 x1 + a2 x2 + · · · + an xn ≤ b
a1 x1 + a2 x2 + · · · + an xn = b
a1 x1 + a2 x2 + · · · + an xn ≥ b
x1 , x2 , · · · , xn ≥ 0
is convex.
Proof. The inequality constraints define half-spaces, and the equality constraints
define hyperplanes. By Theorems 1.1 and 1.3 these half-spaces and hyperplanes
are convex sets. Since the feasible region is the intersection of these convex sets,
it follows from Theorem 1.2 that the feasible region is convex.
Definition 1.11. For a maximization (minimization) problem,
an optimal solution to an LP is a point in the feasible region
with the largest (smallest) objective function value.
The goal of any LP problem is to find the optimum, the best feasible solution
that maximizes the total profit or minimizes the cost. Having identified the
feasible region for the Furnco problem in example (1.3) as shown in Figure 8,
we now search for the optimal solution, which will be the point in the feasible
region with the largest value of z = 40x1 + 25x2 .
To find the optimal solution, we need to graph a line on which all points
have the same z−value. In a max problem, such a line is called an isoprofit
line.
To draw an isoprofit line, choose any point in the feasible region and
calculate its z−value. Let us choose (1, 3). For (1, 3), z = 40(1) +
25(3) = 115. Thus, (1, 3) lies on the isoprofit line z = 40x1 +25x2 = 115.
Because all isoprofit lines are of the form 40x1 + 25x2 = constant, all
isoprofit lines have the same slope. This means that once we have drawn
one isoprofit line, we can find all other isoprofit lines by moving parallel
to the isoprofit line we have drawn in a direction that increases z.
After a point, the isoprofit lines will no longer intersect the feasible region.
The last isoprofit line intersecting (touching) the feasible region defines
the largest z−value of any point in the feasible region and indicates the
optimal solution to the LP.
In our problem, the objective function z = 40x1 + 25x2 will increase if we
move in a direction for which both x1 and x2 increase. Thus, we construct
additional isoprofit lines by moving parallel to 40x1 + 25x2 = 115 in a
northeast direction (upward and to the right), as shown in Figure 9.
Figure 9
From Figure 9, we see that the isoprofit line passing through point (2, 4)
is the last isoprofit line to intersect the feasible region. Thus, (2, 4) is the
point in the feasible region with the largest z−value and is therefore the
optimal solution to the Furnco problem. Thus, the optimal value of z is
z = 40(2) + 25(4) = 180.
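The same answer can be obtained by brute force: intersect the boundary lines of each pair of constraints, discard the infeasible intersection points, and evaluate z at the survivors. A minimal sketch in plain Python (helper names are illustrative):

```python
from itertools import combinations

# Enumerate candidate corner points of the Furnco feasible region and
# evaluate z = 40*x1 + 25*x2 at each feasible one.
# All constraints written in the form a1*x1 + a2*x2 <= c:
constraints = [
    (4, 3, 20),    # 4*x1 + 3*x2 <= 20   (wood)
    (2, -1, 0),    # x2 >= 2*x1  ->  2*x1 - x2 <= 0
    (-1, 0, 0),    # x1 >= 0     ->  -x1 <= 0
    (0, -1, 0),    # x2 >= 0     ->  -x2 <= 0
]

def intersect(c1, c2):
    """Intersection of the two boundary lines, by Cramer's rule."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if det == 0:
        return None  # parallel boundary lines
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(p, tol=1e-9):
    return all(a * p[0] + b * p[1] <= c + tol for a, b, c in constraints)

corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(corners, key=lambda p: 40 * p[0] + 25 * p[1])
print(best, 40 * best[0] + 25 * best[1])  # (2.0, 4.0) 180.0
```

This enumeration previews the Corner Point Theorem of Section 1.5: the optimum is found among the corner points of the feasible region.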
Example 1.12. Graphically solve the following LP problem.
max z = 3x1 + 2x2
s.t. 2x1 + x2 ≤ 100
x1 + x2 ≤ 80
x1 ≤ 40
x1 ≥ 0, x2 ≥ 0
solution is the intersection of the two lines 2x1 + x2 = 8 and x2 = 0, which
yields x1 = 4 and x2 = 0. The minimum value of w is w = −4(4)+7(0) = −16.
Figure 11
For instance, in example (1.12), the first two constraints are binding, while
the third one is nonbinding. In example (1.13), the third constraint is
binding and the other two constraints are nonbinding.
For example, the feasible region of the following constraints has a redundant
constraint as shown in Figure 12
Constraint [1]: 2x1 + x2 ≤ 6
Constraint [2]: x1 + 3x2 ≤ 9
Constraint [3]: x1 + x2 ≤ 5
Sign Restriction: x1 , x2 ≥ 0
Note that the third constraint is redundant, since its removal leaves the
feasible region unchanged.
Figure 12
Figure 13
Figure 14
A minimization problem is unbounded if we never leave the feasible region
when moving in the direction of decreasing z.
Figure 16
Example 1.18. Graphically solve the following LP problem.
Figure 17
Solution: The two variables x1 and x2 being unrestricted in sign means that both
can be positive, negative, or zero. The feasible region is the shaded region
in Figure 17. To find the optimal solution, we draw the isoprofit line passing
through (−3, −3). This isoprofit line has z = 5(−3) + 6(−3) = −33. The
direction of increasing z is to the northeast. Moving parallel to z = 5x1 + 6x2
in a northeast direction, we see that the last isoprofit line we draw will touch
the feasible region at the point (−1, −1). Thus, the optimal z−value is z =
5(−1) + 6(−1) = −11.
Exercise 1.3.
1. Match the solution region of each system of linear inequalities with one
of the four regions shown in Figure 18.
(a) x + 2y ≤ 8
3x − 2y ≥ 0
(b) x + 2y ≥ 8
3x − 2y ≤ 0
(c) x + 2y ≥ 8
3x − 2y ≥ 0
(d) x + 2y ≤ 8
3x − 2y ≤ 0
Figure 18
3. Find the maximum value of each objective function over the feasible region
shown in Figure 19.
(a) z = x + y
(b) z = 4x + y
(c) z = 3x + 7y
(d) z = 9x + 3y
Figure 19
4. Find the minimum value of each objective function over the feasible region
shown in Figure 20.
(a) w = 7x + 4y
(b) w = 7x + 9y
(c) w = 3x + 8y
(d) w = 5x + 4y
Figure 20
5. The corner points for the bounded feasible region determined by the sys-
tem of linear inequalities
x + 2y ≤ 10
3x + y ≤ 15
x, y ≥ 0
are O = (0, 0), A = (0, 5), B = (4, 3), and C = (5, 0) as shown in Figure
21. If P = ax + by and a, b > 0, determine conditions on a and b that
will ensure that the maximum value of P occurs
(a) only at A
(b) only at B
(c) only at C
6. Identify the direction of increase in z in each of the following cases:
(a) Maximize z = x1 − x2 .
(b) Maximize z = −8x1 − 3x2 .
(c) Maximize z = −x1 + 3x2 .
7. Identify the direction of decrease in w in each of the following cases:
(a) Minimize w = 4x1 − 2x2 .
(b) Minimize w = −6x1 + 2x2 .
8. Determine the solution space graphically for the following inequalities.
Which constraints are redundant? Reduce the system to the smallest
number of constraints that will define the same solution space.
x+y ≤4
4x + 3y ≤ 12
−x + y ≥ 1
x+y ≤6
x, y ≥ 0
9. Write the constraints associated with the solution space shown in Figure
22 and identify the redundant constraints.
Figure 22
Show graphically that at the optimal solution, the variables x1 and x2 can
be increased indefinitely while the value of the objective function remains
constant.
11. Consider the following problem:
12. Solve the following problem by inspection without graphing the feasible
region.
Theorem 1.5. [Corner Point Theorem]
Consider the LP problem
Maximize (or Minimize) z = c1 x1 + c2 x2 + · · · + cn xn
subject to constraints of the form
a1 x1 + a2 x2 + · · · + an xn ≤ b
a1 x1 + a2 x2 + · · · + an xn = b
a1 x1 + a2 x2 + · · · + an xn ≥ b
x1 , x2 , · · · , xn ≥ 0
If the feasible region is nonempty and the objective function attains an
optimal value on it, then that optimal value is attained at a corner point
of the feasible region.
Proof. We defer the proof to the end of this section, where we prove only the
case of bounded regions, assuming Theorems (1.7) and (1.8), which will not be
proven here. The proofs of those theorems and the unbounded case of Theorem
(1.5) can be found in: Jan Van Tiel, Convex Analysis, New York: Wiley, 1984.
Note 6. Theorem (1.5) suggests that an optimal solution of an LP problem
may not exist. There can be two reasons for this:
1. The feasible region is empty; that is, there are no feasible solutions as in
example (1.15).
2. The feasible region is unbounded as in example (1.16). However, an LP
problem with an unbounded feasible region can have an optimal solution
as shown in example (1.17).
To prove Theorem (1.5) we need some additional notation and some pre-
liminary results. First, we adopt functional notation to describe the objective
function. We denote
z = c1 x1 + c2 x2 + · · · + cn xn
by the function f : Rn → R defined by
f (x1 , x2 , · · · , xn ) = c1 x1 + c2 x2 + · · · + cn xn .
Note that if p = (p1 , p2 , · · · , pn ) is a point in Rn , then
f (p) = f (p1 , p2 , · · · , pn ) = c1 p1 + c2 p2 + · · · + cn pn .
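The arguments below repeatedly use the fact that such a function f is linear along line segments: f ((1 − t)p + tq) = (1 − t)f (p) + tf (q). A quick numerical check of this identity (the coefficients and points are arbitrary illustrative choices, not data from the notes):

```python
# Numerical check that f(x) = c1*x1 + ... + cn*xn is linear along a
# segment: for r = (1 - t)*p + t*q,  f(r) = (1 - t)*f(p) + t*f(q).
c = [3, -1, 2]

def f(x):
    return sum(ci * xi for ci, xi in zip(c, x))

p, q = [1, 4, 0], [2, -3, 5]
for t in (0.0, 0.25, 0.5, 1.0):
    r = [(1 - t) * pi + t * qi for pi, qi in zip(p, q)]
    assert abs(f(r) - ((1 - t) * f(p) + t * f(q))) < 1e-12
print("linearity holds along the segment")
```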
Definition 1.14. A polyhedron is the intersection of a finite
number of half-spaces and/or hyperplanes. Points that lie in
the polyhedron and on one or more of the half-spaces or hy-
perplanes defining the polyhedron are called boundary points.
Points that lie in the polyhedron but are not boundary points
are called interior points.
f (x1 , x2 , · · · , xn ) = c1 x1 + c2 x2 + · · · + cn xn .
Proof. Suppose that f (p) ≤ f (q). (You can prove the case f (q) ≤ f (p)
by interchanging p and q in the following argument.) We first observe that
f (r) = (1 − t)f (p) + tf (q). To see this, note that
and
Definition 1.15. Let K1 , K2 , · · · , Km be points in Rn . A
convex combination of K1 , K2 , · · · , Km is any point p that
can be written as
p = a1 K1 + a2 K2 + · · · + am Km
where a1 , a2 , · · · , am are nonnegative numbers with
a1 + a2 + · · · + am = 1.
f (x1 , x2 , · · · , xn ) = c1 x1 + c2 x2 + · · · + cn xn
f (b1 K1 + b2 K2 + · · · + bm Km )
= b1 f (K1 ) + b2 f (K2 ) + · · · + bm f (Km ) .
for each i. Let p be any point in K. Then by Theorem (1.7) there are nonneg-
ative constants a1 , a2 , · · · , am whose sum is 1, and so that
p = a1 K1 + a2 K2 + · · · + am Km
By Theorem (1.8),
f (p) = f (a1 K1 + a2 K2 + · · · + am Km )
= a1 f (K1 ) + a2 f (K2 ) + · · · + am f (Km )
and
2 The Simplex Method
Thus far we have used a geometric approach to solve certain LP problems. We
have observed, in Chapter 1, that this procedure is limited to problems of two
or three variables. The simplex algorithm is essentially algebraic in nature and
is more efficient than its geometric counterpart.
2.2 Converting an LP to Standard Form
We have seen that an LP can have both equality and inequality constraints.
It also can have variables that are required to be nonnegative as well as those
allowed to be unrestricted in sign (urs). The development of the simplex method
computations is facilitated by imposing two requirements on the LP model:
3x1 + 2x2 + s = 12 ; s ≥ 0.
The substitution must be effected throughout all the constraints and the
objective function. In the optimal LP solution only one of the two variables
yi1 and yi2 can assume a positive value, but never both. Thus, when
yi1 > 0, yi2 = 0 and vice versa. For example, if xi = 4 then yi1 = 4 and
yi2 = 0, and if xi = −4 then yi1 = 0 and yi2 = 4.
Solution: The following changes must be effected.
22x1 − 4x2 ≥ −7
Show that multiplying both sides of the inequality by −1 and then con-
verting the resulting inequality into an equation is the same as converting
it first to an equation and then multiplying both sides by −1.
max (or min) z = c1 x1 + c2 x2 + · · · + cn xn
s.t. a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
..
.
am1 x1 + am2 x2 + · · · + amn xn = bm
x1 , x2 , · · · , xn ≥ 0
If we define

         a11  a12  · · ·  a1n              x1               b1
         a21  a22  · · ·  a2n              x2               b2
    A =    .    .           .    ,   x =    .    and  b =    .
           .    .           .               .                .
         am1  am2  · · ·  amn              xn               bm
Note 7. The maximum number of corner points is C(n, m) = n! / (m! (n − m)!).
Of course, the different choices of nonbasic variables will lead to different
basic solutions. To illustrate, we find all the basic solutions to the following
system of two equations (m = 2) in three variables (n = 3):
x1 + x2 = 3
− x2 + x3 = −1
x1 + x2 = 3
− x2 = −1
If NBV= {x2 }, then BV= {x1 , x3 }. We obtain the values of the basic
variables by setting x2 = 0 and we find that x1 = 3, x3 = −1. Thus,
(x1 , x2 , x3 ) = (3, 0, −1) is a basic solution to the system.
If NBV= {x1 }, then BV= {x2 , x3 }. We obtain the values of the basic
variables by setting x1 = 0 and solving
x2 = 3
− x2 + x3 = −1
The following table provides all the basic and nonbasic solutions of the above
linear system.
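These basic solutions can also be enumerated in code. A minimal sketch in plain Python (Cramer's rule on each choice of two basic variables; helper names are illustrative):

```python
from itertools import combinations

# Enumerate all basic solutions of the system (m = 2 equations,
# n = 3 variables):
#   x1 + x2      = 3
#      - x2 + x3 = -1
# Choose 2 of the 3 variables as basic, set the remaining one to 0,
# and solve the resulting 2x2 system by Cramer's rule.
A = [[1, 1, 0],
     [0, -1, 1]]
b = [3, -1]

def solve2(cols):
    i, j = cols
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if det == 0:
        return None  # this choice of basic variables yields no basic solution
    xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det
    xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
    x = [0.0, 0.0, 0.0]
    x[i], x[j] = xi, xj
    return x

for basic in combinations(range(3), 2):
    print(basic, solve2(basic))
# The three basic solutions found are (2, 1, 0), (3, 0, -1), and
# (0, 3, 2), matching the cases worked out above.
```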
Note 8. Some sets of m variables do not yield a basic solution. For example,
consider the following linear system:
x1 + 2x2 + x3 = 1
2x1 + 4x2 + x3 = 3
If we choose NBV= {x3 } and BV= {x1 , x2 }, the corresponding basic solution
would be obtained by solving
x1 + 2x2 = 1
2x1 + 4x2 = 3
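This failure can be seen numerically: the coefficient matrix of the 2x2 system is singular, so Cramer's rule (or Gaussian elimination) cannot produce a basic solution for this choice of basic variables.

```python
# Note 8's choice BV = {x1, x2} (i.e. x3 = 0) gives the 2x2 system
#   x1 + 2*x2 = 1
#   2*x1 + 4*x2 = 3
# Its coefficient matrix [[1, 2], [2, 4]] has determinant 0: the
# left-hand side of the second row is twice the first, but 2*1 != 3
# on the right, so the system has no solution at all.
a11, a12, a21, a22 = 1, 2, 2, 4
det = a11 * a22 - a12 * a21
assert det == 0
print("singular: no basic solution for BV = {x1, x2}")
```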
Theorem 2.1. A point in the feasible region of an LP is an
extreme point if and only if it is a basic feasible solution to
the LP.
Figure 23
s1 = 4 and s2 = 5.
This solution corresponds to point A in Figure 23. Another point can be de-
termined by setting s1 = 0 and s2 = 0 and then solving the resulting two
equations
2x1 + x2 = 4
x1 + 2x2 = 5
The associated basic solution is x1 = 1, x2 = 2, or point C in Figure 23. In the
present example, the maximum number of corner points is C(4, 2) = 6. Looking at
Figure 23, we can spot the four corner points A, B, C, and D. So, where are
the remaining two? In fact, points E and F also are corner points. But, they
are infeasible, and, hence, are not candidates for the optimum. The following
table provides all the basic and nonbasic solutions of the current example.
                          Basic       Corner
NBVs        BVs           Solution    Point    Feasible?   z−Value
x1 , x2     s1 , s2       4, 5        A        Yes         0
x1 , s1     x2 , s2       4, −3       F        No          —
x1 , s2     x2 , s1       2.5, 1.5    B        Yes         7.5
x2 , s1     x1 , s2       2, 3        D        Yes         4
x2 , s2     x1 , s1       5, −6       E        No          —
s1 , s2     x1 , x2       1, 2        C        Yes         8 (Optimum)
Exercise 2.2.
2. Determine the optimum solution for each of the following LPs by enumer-
ating all the basic solutions.
(a) max z = 2x1 − 4x2 + 5x3 − 6x4
s.t. x1 + 5x2 − 2x3 + 8x4 ≤ 2
−x1 + 2x2 + 3x3 + 4x4 ≤ 1
x1 , x2 , x3 , x4 ≥ 0
(b) min w = x1 + 2x2 − 3x3 − 2x4
s.t. x1 + 2x2 − 3x3 + x4 = 4
x1 + 2x2 + x3 + 2x4 = 4
x1 , x2 , x3 , x4 ≥ 0
3. Show algebraically that all the basic solutions of the following LP are
infeasible.
max z = x1 + x2
s.t. x1 + 2x2 ≤ 3
2x1 + x2 ≥ 8
x1 , x2 ≥ 0
max z = x1 + 3x2
s.t. x1 + x2 ≤ 2
−2x1 + x2 ≤ 4
x1 urs, x2 ≥ 0
For example, in Figure 23, two basic feasible solutions will be adjacent if they
have 2 − 1 = 1 basic variable in common. Thus, the bfs corresponding to
point B in Figure 23 is adjacent to the bfs corresponding to point C but is not
adjacent to bfs D. Intuitively, two basic feasible solutions are adjacent if they
both lie on the same edge of the boundary of the feasible region.
in the form
z − c1 x1 − c2 x2 − · · · − cn xn = 0.
We call this format the row 0 version of the objective function (row 0 for
short).
Step 2: Obtain a bfs (if possible) from the standard form. This is easy if all
the constraints are ≤ with nonnegative right-hand sides. Then the slack
variable si may be used as the basic variable for row i. If no bfs is readily
apparent, then use the technique discussed in Section 2.6 to find a bfs.
Step 3: Determine whether the current bfs is optimal. If all nonbasic variables
have nonnegative coefficients in row 0, then the current bfs is optimal. If
any variables in row 0 have negative coefficients, then choose the variable
with the most negative coefficient in row 0 to enter the basis. We call
this variable the entering variable.
Step 4: If the current bfs is not optimal, then determine which nonbasic variable
should become a basic variable and which basic variable should become a
nonbasic variable to find a new bfs with a better objective function value.
When entering a variable into the basis, compute the ratio
(Right-hand side of constraint) / (Coefficient of entering variable in constraint)
for every constraint in which the entering variable has a positive coeffi-
cient. The constraint with the smallest ratio is called the winner of the
ratio test. The smallest ratio is the largest value of the entering variable
that will keep all the current basic variables nonnegative.
Step 5: Use elementary row operations (EROs) to find the new bfs with the
better objective function value by making the entering variable a basic
variable (has coefficient 1 in pivot row, and 0 in other rows) in the con-
straint that wins the ratio test. Go back to step 3.
This saves writing the symbols for the variables in each of the equations, but
what is even more important is the fact that it permits highlighting the numbers
involved in arithmetic calculations and recording the computations compactly.
For example, the form
z − 2x1 − 3x2 =0
2x1 + x2 + s1 =4
x1 + 2x2 + s2 = 5
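Steps 1-5 can be turned into code. Below is a minimal tableau-simplex sketch (plain Python, for teaching only: it assumes all constraints are ≤ with nonnegative right-hand sides and uses no anticycling rule), applied to the LP just written, max z = 2x1 + 3x2 subject to 2x1 + x2 ≤ 4 and x1 + 2x2 ≤ 5:

```python
# A minimal tableau simplex for  max c.x  s.t.  A x <= b, x >= 0,
# with b >= 0, following steps 1-5 above.
def simplex(c, A, b):
    m, n = len(A), len(c)
    # Step 1-2: build the tableau.  Row 0 is  z - c.x = 0; each
    # constraint row gets an identity column for its slack variable.
    T = [[-cj for cj in c] + [0.0] * m + [0.0]]
    for i in range(m):
        T.append([float(v) for v in A[i]]
                 + [1.0 if j == i else 0.0 for j in range(m)]
                 + [float(b[i])])
    basis = [n + i for i in range(m)]          # slacks start basic
    while True:
        # Step 3: entering variable = most negative row-0 coefficient.
        col = min(range(n + m), key=lambda j: T[0][j])
        if T[0][col] >= -1e-9:
            break                              # current bfs is optimal
        # Step 4: ratio test over rows with a positive pivot column.
        rows = [i for i in range(1, m + 1) if T[i][col] > 1e-9]
        if not rows:
            raise ValueError("unbounded LP")
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        # Step 5: pivot with elementary row operations.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                factor = T[i][col]
                T[i] = [vi - factor * vr for vi, vr in zip(T[i], T[row])]
        basis[row - 1] = col
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i + 1][-1]
    return x, T[0][-1]

x, z = simplex([2, 3], [[2, 1], [1, 2]], [4, 5])
print(x, z)  # [1.0, 2.0] 8.0
```

The reported optimum, x = (1, 2) with z = 8, agrees with point C found by enumerating the basic solutions of this LP earlier.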
↓
Iteration [1] Basic x1 x2 s1 s2 RHS
z −1/2 0 0 3/2 15/2
← s1 3/2 0 1 −1/2 3/2 Ratio= 3/2 ÷ 3/2 = 1
x2 1/2 1 0 1/2 5/2 Ratio= 5/2 ÷ 1/2 = 5
Example 2.4. Solve the following LP problem using the simplex method.
max z = 4x1 + 4x2
s.t. 6x1 + 4x2 ≤ 24
x1 + 2x2 ≤ 6
−x1 + x2 ≤ 1
x2 ≤ 2
x1 , x2 ≥ 0
Solution: By adding slack variables s1 , s2 , s3 and s4 , respectively, we obtain
the LP in standard form:
max z − 4x1 − 4x2 = 0
s.t. 6x1 + 4x2 + s1 = 24
x1 + 2x2 + s2 = 6
−x1 + x2 + s3 = 1
x2 + s 4 = 2
x1 , x2 , s1 , s2 , s3 , s4 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below. Note that we can choose to enter either x1 or x2 into the
basis. We arbitrarily choose to enter x1 into basis.
↓
Iteration [0] Basic x1 x2 s1 s2 s3 s4 RHS
z −4 −4 0 0 0 0 0
← s1 6 4 1 0 0 0 24 Ratio= 24/6 = 4
s2 1 2 0 1 0 0 6 Ratio= 6/1 = 6
s3 −1 1 0 0 1 0 1 — (no ratio: pivot column coefficient ≤ 0)
s4 0 1 0 0 0 1 2 — (no ratio: pivot column coefficient ≤ 0)
↓
Iteration [1] Basic x1 x2 s1 s2 s3 s4 RHS
z 0 −4/3 2/3 0 0 0 16
x1 1 2/3 1/6 0 0 0 4 Ratio= 4 ÷ 2/3 = 6
← s2 0 4/3 −1/6 1 0 0 2 Ratio= 2 ÷ 4/3 = 3/2
s3 0 5/3 1/6 0 1 0 5 Ratio= 5 ÷ 5/3 = 3
s4 0 1 0 0 0 1 2 Ratio= 2/1 = 2
Example 2.5. Solve the following LP problem using the simplex method.
max z = x1 + 3x2
s.t. x1 + x2 ≤ 2
−x1 + x2 ≤ 4
x1 ≥ 0, x2 urs
Solution: By substituting x2 = y1 − y2 and then adding slack variables s1 and
s2 , respectively, we obtain the LP in standard form:
max z − x1 − 3y1 + 3y2 = 0
s.t. x1 + y1 − y2 + s1 = 2
−x1 + y1 − y2 + s2 = 4
x1 , y1 , y2 , s1 , s2 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below.
↓
Iteration [0] Basic x1 y1 y2 s1 s2 RHS
z −1 −3 3 0 0 0
← s1 1 1 −1 1 0 2 Ratio= 2/1 = 2
s2 −1 1 −1 0 1 4 Ratio= 4/1 = 4
Exercise 2.3.
1. Use the simplex algorithm to solve the following problems.
2. Solve the following problem by inspection, and justify the method of so-
lution in terms of the basic solutions of the simplex method.
Method (1) Multiply the objective function for the min problem by −1 and
solve the problem as a maximization problem with objective function
(−w). The optimal solution to the max problem will give you the op-
timal solution to the min problem where
(optimal objective function value for min problem) = −(optimal objective function value for max problem)
Example 2.6. Solve the following LP problem using the simplex method.
max z = −2x1 + 3x2
s.t. x1 + x2 ≤ 4
x1 − x2 ≤ 6
x1 , x2 ≥ 0
The initial tableau and all following tableaus until the optimal solution is
reached are shown below.
↓
Iteration [0] Basic x1 x2 s1 s2 RHS
z 2 −3 0 0 0
← s1 1 1 1 0 4 Ratio= 4/1 = 4
s2 1 −1 0 1 6 — (no ratio: pivot column coefficient ≤ 0)
Iteration [1] Basic x1 x2 s1 s2 RHS Optimal Tableau
z 5 0 3 0 12 w = −z = −12
x2 1 1 1 0 4 x1 = 0, x2 = 4
s2 2 0 1 1 10 s1 = 0, s2 = 10
The initial tableau and all following tableaus until the optimal solution is
reached are shown below. Note that, because x2 has the most positive
coefficient in row 0, we enter x2 into the basis.
↓
Iteration [0] Basic x1 x2 s1 s2 RHS
w −2 3 0 0 0
← s1 1 1 1 0 4 Ratio= 4/1 = 4
s2 1 −1 0 1 6 — (no ratio: pivot column coefficient ≤ 0)
Iteration [1] Basic x1 x2 s1 s2 RHS Optimal Tableau
w −5 0 −3 0 −12 w = −12
x2 1 1 1 0 4 x1 = 0, x2 = 4
s2 2 0 1 1 10 s1 = 0, s2 = 10
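Method (1) can be checked numerically on this example. The sketch below (not a solver) evaluates both objectives, w = 2x1 − 3x2 and z = −w, at the corner points (0, 0), (4, 0), and (0, 4) of the feasible region, relying on the Corner Point Theorem to restrict attention to corners:

```python
# Minimizing w over the region  x1 + x2 <= 4,  x1 - x2 <= 6,
# x1, x2 >= 0  is the same as maximizing z = -w and negating.
def w(p):          # the min-problem objective
    return 2 * p[0] - 3 * p[1]

def z(p):          # the equivalent max-problem objective
    return -w(p)

corners = [(0, 0), (4, 0), (0, 4)]
min_w = min(w(p) for p in corners)
max_z = max(z(p) for p in corners)
assert min_w == -max_z
print(min_w, max_z)  # -12 12
```

This reproduces the tableau result above: w = −z = −12 at (x1, x2) = (0, 4).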
Exercise 2.4. Use the simplex algorithm to solve the following problems.
the optimum iteration is reached (assuming the problem has a feasible solution).
The desired goal is achieved by assigning each artificial variable a penalty in
the objective function:
Artificial variable objective function coefficient = −M in max problems, +M in min problems.
3x1 + x2 =3
4x1 + 3x2 − e2 =6
x1 + 2x2 + s3 = 4
The third equation has its slack variable, s3 , but the first and second equations
do not. Thus, we add the artificial variables a1 and a2 in the first two equations
and penalize them in the objective function with M a1 + M a2 (because we are
minimizing). The resulting LP becomes
min w = 4x1 + x2 + M a1 + M a2
s.t. 3x1 + x2 + a1 = 3
4x1 + 3x2 − e2 + a2 = 6
x1 + 2x2 + s3 = 4
x1 , x2 , s3 , e2 , a1 , a2 ≥ 0
After writing the objective function as w − 4x1 − x2 − M a1 − M a2 = 0, the
initial tableau will be
Iteration [0] Basic x1 x2 s3 e2 a1 a2 RHS
w −4 −1 0 0 −M −M 0
a1 3 1 0 0 1 0 3
a2 4 3 0 −1 0 1 6
s3 1 2 1 0 0 0 4
Before proceeding with the simplex method computations, row 0 must be made
consistent with the rest of the tableau. The right−hand side of row 0 in the
tableau currently shows w = 0. However, given the nonbasic solution x1 =
x2 = e2 = 0, the current basic solution a1 = 3, a2 = 6, and s3 = 4 yields
w = (4 × 0) + (1 × 0) + (3 × M ) + (6 × M ) = 9M ≠ 0.
The inconsistency stems from the fact that a1 and a2 have nonzero coefficients
in row 0. To eliminate the inconsistency, we use EROs. The modified tableau
thus becomes (verify!):
Iteration [0] Basic x1 x2 s3 e2 a1 a2 RHS
w 7M − 4 4M − 1 0 −M 0 0 9M
a1 3 1 0 0 1 0 3
a2 4 3 0 −1 0 1 6
s3 1 2 1 0 0 0 4
The last tableau is ready for the application of the simplex optimality and the
feasibility conditions. Because the objective function is minimized, the variable
x1 having the most positive coefficient in the row 0 enters the solution. The
minimum ratio of the feasibility condition specifies a1 as the leaving variable.
All tableaus until the optimal solution is reached are shown below.
↓
Iteration [0] Basic x1 x2 s3 e2 a1 a2 RHS
w 7M − 4 4M − 1 0 −M 0 0 9M
← a1 3 1 0 0 1 0 3 Ratio= 3/3 = 1
a2 4 3 0 −1 0 1 6 Ratio= 6/4 = 3/2
s3 1 2 1 0 0 0 4 Ratio= 4/2 = 2
↓
Iteration [1] Basic x1 x2 s3 e2 a1 a2 RHS
w 0 (1+5M )/3 0 −M (4−7M )/3 0 4 + 2M
x1 1 1/3 0 0 1/3 0 1 Ratio= 1 ÷ 1/3 = 3
← a2 0 5/3 0 −1 −4/3 1 2 Ratio= 2 ÷ 5/3 = 6/5
s3 0 5/3 1 0 −1/3 0 3 Ratio= 3 ÷ 5/3 = 9/5
↓
Iteration [2] Basic x1 x2 s3 e2 a1 a2 RHS
w 0 0 0 1/5 8/5 − M −1/5 − M 18/5
x1 1 0 0 1/5 3/5 −1/5 3/5 Ratio= 3/5 ÷ 1/5 = 3
x2 0 1 0 −3/5 −4/5 3/5 6/5 — (no ratio: pivot column coefficient ≤ 0)
← s3 0 0 1 1 1 −1 1 Ratio= 1/1 = 1
use the computer). We break away from the long tradition of manipulating M
algebraically and use a numerical substitution instead. The intent is to simplify
the presentation without losing substance. What value of M should we use?
The answer depends on the data of the original LP. Recall that the penalty
M must be sufficiently large relative to the original objective coefficients to
force the artificial variables to be zero (which happens only if a feasible solution
exists). At the same time, since computers are the main tool for solving LPs,
M should not be unnecessarily large, as this may lead to serious round-off
error. In the present example, the objective coefficients of x1 and x2 are 2 and
1, respectively, and it appears reasonable to set M = 100.
Example 2.8. Solve the following LP problem using the simplex method.
max z = 2x1 + x2
s.t. x1 + x2 ≤ 10
−x1 + x2 ≥ 2
x1 , x2 ≥ 0
Solution: To convert the constraint to equations, use s1 as a slack in the first
constraint and e2 as a surplus in the second constraint.
x1 + x2 + s1 = 10
−x1 + x2 − e2 = 2
We add the artificial variables a2 in the second equation and penalize it in the
objective function with −M a2 = −100a2 (because we are maximizing). The
resulting LP becomes
max z = 2x1 + x2 − 100a2
s.t. x1 + x2 + s1 = 10
−x1 + x2 − e2 + a2 = 2
x1 , x2 , s1 , e2 , a2 ≥ 0
After writing the objective function as z − 2x1 − x2 + 100a2 = 0, the initial
tableau will be
Iteration [0] Basic x1 x2 s1 e2 a2 RHS
z −2 −1 0 0 100 0
s1 1 1 1 0 0 10
a2 −1 1 0 −1 1 2
Before proceeding with the simplex method computations, row 0 must be made
consistent with the rest of the tableau. The inconsistency stems from the fact
that a2 has nonzero coefficients in row 0. To eliminate the inconsistency, we use
EROs. The modified tableau and all other tableaus until the optimal solution
is reached are:
↓
Iteration [0] Basic x1 x2 s1 e2 a2 RHS
z 98 −101 0 100 0 −200
s1 1 1 1 0 0 10 Ratio= 10/1 = 10
← a2 −1 1 0 −1 1 2 Ratio= 2/1 = 2
↓
Iteration [1] Basic x1 x2 s1 e2 a2 RHS
z −3 0 0 −1 101 2
← s1 2 0 1 1 −1 8 Ratio= 8/2 = 4
x2 −1 1 0 −1 1 2 — (no ratio: pivot column coefficient ≤ 0)
Solution: The main difference here from the usual simplex is that x3 and x4
have nonzero objective coefficients in row 0: z − 2x1 − 4x2 − 4x3 + 3x4 = 0. To
eliminate their coefficients, we use EROs. The initial tableaus and all following
tableaus until the optimal solution is reached are shown below.
Exercise 2.5.
1. Use the Big M -method to solve the following LPs:
2.7.1 Degeneracy
In the application of the feasibility condition of the simplex method, a tie for the
minimum ratio may occur and can be broken arbitrarily. When this happens, at
least one basic variable will be zero in the next iteration, and the new solution
is said to be degenerate. This situation may reveal that the model has at least
one redundant constraint.
Definition 2.4. An LP is degenerate if it has at least one bfs
in which a basic variable is equal to zero.
If one of these degenerate basic variables retains its value of zero until it is
chosen at a subsequent iteration to be a leaving basic variable, the corresponding
entering basic variable also must remain zero, so the value of the objective
function must remain unchanged. However, if the objective value remains the
same at each iteration rather than improving, the simplex method may go
around in a loop, repeating the same sequence of solutions periodically rather
than eventually improving the objective toward an optimal solution.
This occurrence is called cycling.
Example 2.10. Solve the following LP problem.
max z = 3x1 + 9x2
s.t. x1 + 4x2 ≤ 8
x1 + 2x2 ≤ 4
x1 , x2 ≥ 0
Solution: By adding slack variables s1 and s2 , we obtain the LP in standard
form
max z − 3x1 − 9x2 = 0
s.t. x1 + 4x2 + s1 = 8
x1 + 2x2 + s2 = 4
x1 , x2 , s1 , s2 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below.
↓
Iteration [0] Basic x1 x2 s1 s2 RHS
z −3 −9 0 0 0
← s1 1 4 1 0 8 Ratio= 8/4 = 2
s2 1 2 0 1 4 Ratio= 4/2 = 2
In iteration 0, s1 and s2 tie for the leaving variable, leading to degeneracy in
iteration 1 because the basic variable s2 assumes a zero value.
↓
Iteration [1] Basic x1 x2 s1 s2 RHS
z −3/4 0 9/4 0 18
x2 1/4 1 1/4 0 2 Ratio= 2 ÷ 1/4 = 8
← s2 1/2 0 −1/2 1 0 Ratio= 0 ÷ 1/2 = 0
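The tie in iteration [0] and the resulting degenerate solution can be checked numerically. A small sketch (illustrative helper code, not from the notes) reproduces the ratio computation:

```python
from fractions import Fraction as F

# Iteration 0 of Example 2.10: x2 enters; its column is (4, 2), the RHS is (8, 4).
col, rhs = [F(4), F(2)], [F(8), F(4)]
ratios = [b / a for a, b in zip(col, rhs)]          # both ratios equal 2: a tie
tie = ratios[0] == ratios[1]

# Break the tie in favour of s1. After the pivot, the second row's RHS becomes
# 4 - 2 * (8/4) = 0, so the basic variable s2 is zero: a degenerate bfs.
s2_value = rhs[1] - col[1] * (rhs[0] / col[0])
print(tie, s2_value)    # prints: True 0
```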
The following example illustrates the occurrence of cycling in the simplex
iterations and the possibility that the algorithm may never converge to the
optimum solution.
Example 2.11. This example was authored by E.M. Beale1 . Consider the
following LP:
max C = (3/4)x1 − 150x2 + (1/50)x3 − 6x4
s.t. (1/4)x1 − 60x2 − (1/25)x3 + 9x4 ≤ 0
(1/2)x1 − 90x2 − (1/50)x3 + 3x4 ≤ 0
x3 ≤ 1
x1 , x2 , x3 , x4 ≥ 0
Actually, the optimal solution of this example is C = 1/20 when x1 = 1/25,
x3 = 1, and x2 = x4 = 0. However, in order to solve this LP using the Simplex
algorithm, we write it in standard form as follows.
max C − (3/4)x1 + 150x2 − (1/50)x3 + 6x4 = 0
s.t. (1/4)x1 − 60x2 − (1/25)x3 + 9x4 + s1 = 0
(1/2)x1 − 90x2 − (1/50)x3 + 3x4 + s2 = 0
x3 + s3 = 1
x1 , x2 , x3 , x4 , s1 , s2 , s3 ≥ 0
Let us start applying the Simplex algorithm, and see what will happen through
the iterations.
↓
Iteration [0] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −3/4 150 −1/50 6 0 0 0 0
← s1 1/4 −60 −1/25 9 1 0 0 0 Ratio= 0
s2 1/2 −90 −1/50 3 0 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 7
↓
Iteration [1] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 −30 −7/50 33 3 0 0 0
x1 1 −240 −4/25 36 4 0 0 0 7
← s2 0 30 3/50 −15 −2 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 7
1 Saul I. Gass, Sasirekha Vinjamuri. Cycling in linear programming problems. Computers & Operations Research 31 (2004).
↓
Iteration [2] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 0 −2/25 18 1 1 0 0
← x1 1 0 8/25 −84 −12 8 0 0 Ratio= 0
x2 0 1 1/500 −1/2 −1/15 1/30 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 Ratio= 1
↓
Iteration [3] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 1/4 0 0 −3 −2 3 0 0
x3 25/8 0 1 −525/2 −75/2 25 0 0 7
← x2 −1/160 1 0 1/40 1/120 −1/60 0 0 Ratio= 0
s3 −25/8 0 0 525/2 75/2 −25 1 1 Ratio= 2/525
↓
Iteration [4] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −1/2 120 0 0 −1 1 0 0
← x3 −125/2 10500 1 0 50 −150 0 0 Ratio= 0
x4 −1/4 40 0 1 1/3 −2/3 0 0 Ratio= 0
s3 125/2 −10500 0 0 −50 150 1 1 7
↓
Iteration [5] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −7/4 330 1/50 0 0 −2 0 0
s1 −5/4 210 1/50 0 1 −3 0 0 7
← x4 1/6 −30 −1/150 1 0 1/3 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 7
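That the tableaus return to the starting basis can be verified computationally. The sketch below (illustrative code, exact `Fraction` arithmetic) applies the most-negative-coefficient entering rule with smallest-index tie-breaking to Beale's example and detects that the starting basis {s1 , s2 , s3 } reappears after six pivots, with the objective value still 0:

```python
from fractions import Fraction as F

def pivot_step(row0, rows, basis):
    """One Dantzig-rule pivot (most negative reduced cost; ties broken by
    smallest index) on a max tableau. Returns False when optimal."""
    c = min(range(len(row0) - 1), key=lambda j: (row0[j], j))
    if row0[c] >= 0:
        return False
    cand = [(rows[i][-1] / rows[i][c], i) for i in range(len(rows)) if rows[i][c] > 0]
    _, r = min(cand)                         # min ratio; ties -> smallest row index
    p = rows[r][c]
    rows[r] = [x / p for x in rows[r]]
    for i in range(len(rows)):
        if i != r and rows[i][c] != 0:
            f = rows[i][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
    f = row0[c]
    row0[:] = [a - f * b for a, b in zip(row0, rows[r])]
    basis[r] = c
    return True

# Beale's example; columns x1..x4, s1, s2, s3.
row0 = [F(-3, 4), F(150), F(-1, 50), F(6), F(0), F(0), F(0), F(0)]
rows = [[F(1, 4), F(-60), F(-1, 25), F(9), F(1), F(0), F(0), F(0)],
        [F(1, 2), F(-90), F(-1, 50), F(3), F(0), F(1), F(0), F(0)],
        [F(0), F(0), F(1), F(0), F(0), F(0), F(1), F(1)]]
basis = [4, 5, 6]                            # start with {s1, s2, s3}
seen, pivots = {tuple(basis)}, 0
while pivot_step(row0, rows, basis):
    pivots += 1
    if tuple(basis) in seen:                 # a basis repeats: the method cycles
        break
    seen.add(tuple(basis))
print(pivots, row0[-1])                      # prints: 6 0
```

Every pivot in the cycle is degenerate (all minimum ratios are 0), which is why the objective never moves.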
Note 10. There are several ways to solve the LP problem in example (2.11).
We review these methods as follows.
max C − (3/4)x1 + 150x2 − (1/50)x3 + 6x4 = 0
s.t. 25x1 − 6000x2 − 4x3 + 900x4 + s1 = 0
25x1 − 4500x2 − x3 + 150x4 + s2 = 0
x3 + s3 = 1
x1 , x2 , x3 , x4 , s1 , s2 , s3 ≥ 0
↓
Iteration [0] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −3/4 150 −1/50 6 0 0 0 0
← s1 25 −6000 −4 900 1 0 0 0 Ratio= 0
s2 25 −4500 −1 150 0 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 7
↓
Iteration [1] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 −30 −7/50 33 3/100 0 0 0
x1 1 −240 −4/25 36 1/25 0 0 0 7
← s2 0 1500 3 −750 −1 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 7
↓
Iteration [2] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 0 −2/25 18 1/100 1/50 0 0
x1 1 0 8/25 −84 −3/25 4/25 0 0 Ratio= 0
← x2 0 1 1/500 −1/2 −1/1500 1/1500 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 Ratio= 1
↓
Iteration [3] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 40 0 −2 −1/60 7/150 0 0
x1 1 −160 0 −4 −1/75 4/75 0 0 7
x3 0 500 1 −250 −1/3 1/3 0 0 7
← s3 0 −500 0 250 1/3 −1/3 1 1 Ratio= 1/250
↓
Iteration [4] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 36 0 0 −7/500 11/250 1/125 1/125
x1 1 −168 0 0 −1/125 6/125 2/125 2/125 7
x3 0 0 1 0 0 0 1 1 7
← x4 0 −2 0 1 1/750 −1/750 1/250 1/250 Ratio= 3
Iteration [5] Basic x1 x2 x3 x4 s1 s2 s3 RHS Optimal
C 0 15 0 21/2 0 3/100 1/20 1/20 Tableau
x1 1 −180 0 6 0 1/25 1/25 1/25
x3 0 0 1 0 0 0 1 1
s1 0 −1500 0 750 1 −1 3 3
(a) For the entering basic variable: Of all negative coefficients in the
objective row (Row 0), choose the one with smallest subscript.
(b) For the departing basic variable: When there is a tie between one
or more ratios computed, choose the candidate for departing basic
variable that has the smallest subscript.
↓
Iteration [0] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C −3/4 150 −1/50 6 0 0 0 0
← x5 1/4 −60 −1/25 9 1 0 0 0 Ratio= 0
x6 1/2 −90 −1/50 3 0 1 0 0 Ratio= 0
x7 0 0 1 0 0 0 1 1 7
↓
Iteration [1] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C 0 −30 −7/50 33 3 0 0 0
x1 1 −240 −4/25 36 4 0 0 0 7
← x6 0 30 3/50 −15 −2 1 0 0 Ratio= 0
x7 0 0 1 0 0 0 1 1 7
↓
Iteration [2] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C 0 0 −2/25 18 1 1 0 0
← x1 1 0 8/25 −84 −12 8 0 0 Ratio= 0
x2 0 1 1/500 −1/2 −1/15 1/30 0 0 Ratio= 0
x7 0 0 1 0 0 0 1 1 Ratio= 1
3 James Calvert and William Voxman, Linear Programming, 1st Edition, Harcourt Brace Jovanovich Publishers, 1989.
↓
Iteration [3] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C 1/4 0 0 −3 −2 3 0 0
x3 25/8 0 1 −525/2 −75/2 25 0 0 7
← x2 −1/160 1 0 1/40 1/120 −1/60 0 0 Ratio= 0
x7 −25/8 0 0 525/2 75/2 −25 1 1 Ratio= 2/525
↓
Iteration [4] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C −1/2 120 0 0 −1 1 0 0
x3 −125/2 10500 1 0 50 −150 0 0 7
x4 −1/4 40 0 1 1/3 −2/3 0 0 7
← x7 125/2 −10500 0 0 −50 150 1 1 Ratio= 2/125
↓
Iteration [5] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C 0 36 0 0 −7/5 11/5 1/125 1/125
x3 0 0 1 0 0 0 1 1 7
← x4 0 −2 0 1 2/15 −1/15 1/250 1/250 Ratio= 100/3
x1 1 −168 0 0 −4/5 12/5 2/125 2/125 7
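The smallest-subscript (Bland) rule is guaranteed not to cycle. As a sketch (illustrative code, exact arithmetic), solving the same Beale tableau with Bland's rule terminates at the optimum C = 1/20:

```python
from fractions import Fraction as F

def blands_simplex(row0, rows, basis, max_pivots=100):
    """Tableau simplex for a max LP using Bland's smallest-subscript rule:
    entering = smallest index with negative reduced cost; leaving = among
    minimum-ratio rows, the one with the smallest basic-variable subscript."""
    for _ in range(max_pivots):
        enter = next((j for j in range(len(row0) - 1) if row0[j] < 0), None)
        if enter is None:
            return row0[-1]                       # optimal objective value
        cand = [(rows[i][-1] / rows[i][enter], basis[i], i)
                for i in range(len(rows)) if rows[i][enter] > 0]
        if not cand:
            raise ValueError("unbounded LP")
        _, _, r = min(cand)
        p = rows[r][enter]
        rows[r] = [x / p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][enter] != 0:
                f = rows[i][enter]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        f = row0[enter]
        row0[:] = [a - f * b for a, b in zip(row0, rows[r])]
        basis[r] = enter
    raise RuntimeError("pivot limit reached")

# Beale's example again; columns x1..x4, s1, s2, s3.
row0 = [F(-3, 4), F(150), F(-1, 50), F(6), F(0), F(0), F(0), F(0)]
rows = [[F(1, 4), F(-60), F(-1, 25), F(9), F(1), F(0), F(0), F(0)],
        [F(1, 2), F(-90), F(-1, 50), F(3), F(0), F(1), F(0), F(0)],
        [F(0), F(0), F(1), F(0), F(0), F(0), F(1), F(1)]]
z = blands_simplex(row0, rows, [4, 5, 6])
print(z)    # prints: 1/20
```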
I2 , where, in general, Ij is formed from Ij−1 as follows:
Ij = { r ∈ Ij−1 : yrj /yrk = min over i ∈ Ij−1 of yij /yik }
↓
Iteration [0] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −3/4 150 −1/50 6 0 0 0 0
s1 1/4 −60 −1/25 9 1 0 0 0 Ratio= 0
← s2 1/2 −90 −1/50 3 0 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 7
Here, I0 = {1, 2}, and then I1 = {2}, and therefore xB2 = s2 leaves the
basis.
↓
Iteration [1] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 15 −1/20 21/2 0 3/2 0 0
s1 0 −15 −3/100 15/2 1 −1/2 0 0 7
x1 1 −180 −1/25 6 0 2 0 0 7
← s3 0 0 1 0 0 0 1 1 Ratio= 1
nonbasic variables. The zero coefficient of nonbasic xj indicates that xj can
be made basic, altering the values of the basic variables without changing the
value of z.
In practice, alternative optima are useful because we can choose from many
solutions without experiencing deterioration in the objective value. If the exam-
ple represents a product-mix situation, it may be advantageous to market two
products instead of one.
Mathematically, we can determine all the points (x1 , x2 ) on the line segment
joining the optimal solutions (0, 5/2) and (3, 1) as follows:
x1 = t(0) + (1 − t)(3) = 3 − 3t
x2 = t(5/2) + (1 − t)(1) = 1 + (3/2)t
with 0 ≤ t ≤ 1.
2.7.3 Unbounded Solutions
In some LP models, as in example (1.16) of Section 1.4, the solution space is
unbounded in at least one variable, meaning that variables may be increased
indefinitely without violating any of the constraints. The associated objective
value may also be unbounded in this case. An unbounded LP for a max problem
occurs when a variable with a negative coefficient (positive for min LP) in row
0 has a nonpositive coefficient in each constraint.
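This row-0 test takes only a few lines of code. The sketch below (illustrative helper with a made-up tableau, not from the notes) returns the index of a column certifying unboundedness, or None if no such column exists:

```python
from fractions import Fraction as F

def unbounded_column(row0, rows):
    """Return the index of a column proving a max tableau unbounded:
    a negative row-0 coefficient with no positive constraint entry."""
    for j in range(len(row0) - 1):
        if row0[j] < 0 and all(r[j] <= 0 for r in rows):
            return j
    return None

# Hypothetical tableau: x2 has row-0 coefficient -2 but entries -1 and 0
# in the constraints, so x2 can be increased indefinitely.
row0 = [F(3), F(-2), F(0), F(0), F(0)]
rows = [[F(1), F(-1), F(1), F(0), F(4)],
        [F(2), F(0), F(0), F(1), F(6)]]
print(unbounded_column(row0, rows))   # prints: 1
```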
An unbounded solution space may signal that the model is poorly con-
structed. The most likely irregularity in such models is that some key con-
straints have not been accounted for. Another possibility is that estimates of
the constraint coefficients may not be accurate.
of the type ≤ with nonnegative right-hand sides because the slacks provide
an obvious feasible solution. For other types of constraints, penalized artificial
variables are used to start the solution. If at least one artificial variable is
positive in the optimum iteration, then the LP has no feasible solution. From
the practical standpoint, an infeasible space points to the possibility that the
model is not formulated correctly.
2x1 + x2 + s1 = 2
3x1 + 4x2 − e2 = 12
We add the artificial variable a2 to the second equation and penalize it in the
objective function with −M a2 = −100a2 (because we are maximizing). The
resulting LP becomes
max z = 3x1 + 2x2 − 100a2
s.t. 2x1 + x2 + s1 = 2
3x1 + 4x2 − e2 + a2 = 12
x1 , x2 , s1 , e2 , a2 ≥ 0
After writing the objective function as z − 3x1 − 2x2 + 100a2 = 0, the initial
tableau and all following tableaus until the optimal solution is reached are
shown below.
Iteration [0] Basic x1 x2 s1 e2 a2 RHS
z −3 −2 0 0 100 0
s1 2 1 1 0 0 2
a2 3 4 0 −1 1 12
↓
Iteration [0] Basic x1 x2 s1 e2 a2 RHS
z −303 −402 0 100 0 −1200
← s1 2 1 1 0 0 2 Ratio= 2/1 = 2
a2 3 4 0 −1 1 12 Ratio= 12/4 = 3
Optimum iteration 1 shows that the artificial variable a2 is positive (= 4),
meaning that the LP is infeasible. The result is what we may call a pseudo-
optimal solution.
Exercise 2.6.
1. Consider the following LP:
(a) Show that the associated simplex iterations are temporarily degen-
erate. How many iterations are needed to reach the optimum?
(b) Verify the result by solving the problem graphically.
(c) Interchange constraints (1) and (3) and resolve the problem. How
many iterations are needed to solve the problem?
2. Solve the following problem, using the lexicographic rule for noncycling.
Repeat using Bland’s Rule:
max z = x1 + 2x2 + x3
s.t. x1 + 4x2 + 3x3 ≤ 4
−x1 + x2 + 4x3 ≤ 1
x1 + 3x2 + x3 ≤ 6
x1 , x2 , x3 ≥ 0
3. For the following LP, identify three alternative optimal basic solutions.
From the optimal tableau, show that all the alternative optima are not
corner points (i.e., nonbasic).
5. For the following LP, show that the optimal solution is degenerate and
that none of the alternative solutions are corner points.
max z = 3x1 + x2
s.t. x1 + 2x2 ≤ 5
x1 + x2 − x3 ≤ 2
7x1 + 3x2 − 5x3 ≤ 20
x1 , x2 , x3 ≥ 0
Use hand computations to show that the optimal solution can include an
artificial basic variable at zero level. Does the problem have a feasible
optimal solution?
Basic x1 x2 x3 x4 x5 x6 x7 x8 RHS
z 0 −5 0 4 −1 −10 0 0 620
x8 0 3 0 −2 −3 −1 5 1 12
x3 0 1 1 3 1 0 3 0 6
x1 1 −1 0 0 6 −4 0 0 0
(a) Categorize the variables as basic and nonbasic, and provide the cur-
rent values of all the variables.
(b) Assuming that the problem is of the maximization type, identify the
nonbasic variables that have the potential to improve the value of
z. If each such variable enters the basic solution, determine the
associated leaving variable, if any, and the associated change in z.
(c) Repeat part (b) assuming that the problem is of the minimization
type.
(d) Which nonbasic variable(s) will not cause a change in the value of z
when selected to enter the solution?
9. You are given the tableau shown below for a maximization problem.
Basic x1 x2 x3 x4 x5 RHS
z −c 2 0 0 0 10
x3 −1 a1 1 0 0 4
x4 a2 −4 0 1 0 1
x5 a3 3 0 0 1 b
10. Suppose we have obtained the tableau shown below for a maximization
problem.
Basic x1 x2 x3 x4 x5 x6 RHS
z c1 c2 0 0 0 0 10
x3 4 a1 1 0 a2 0 b
x4 −1 −5 0 1 −1 0 2
x6 a3 −3 0 0 −4 1 3
(a) The current solution is optimal, and there are alternative optimal
solutions.
(b) The current basic solution is not a basic feasible solution.
(c) The current basic solution is a degenerate bfs.
(d) The current basic solution is feasible, but the LP is unbounded.
(e) The current basic solution is feasible, but the objective function value
can be improved by replacing x6 as a basic variable with x1 .
11. The starting and current tableaux of a given problem are shown below.
Find the values of the unknowns a through n.
3 Sensitivity Analysis and Duality
Two of the most important topics in linear programming are sensitivity analysis
and duality. After studying these important topics, the reader will have an
appreciation of the beauty and logic of linear programming.
max z = c1 x1 + c2 x2 + · · · + cn xn
s.t. a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
.. (3.1)
.
am1 x1 + am2 x2 + · · · + amn xn = bm
x1 , x2 , · · · , xn ≥ 0
Definition 3.1.
8. aj is the column (in the constraints) of the variable xj
in the initial tableau.
Formulas Derivation:
1. To express the constraints in any tableau in terms of B−1 and the
original LP, we observe that
BXBV + NXN BV = b
Example 3.1. For the following LP, the optimal basis is BV = {x2 , s2 }. Com-
pute the optimal tableau.
max z = x1 + 4x2
s.t. x1 + 2x2 ≤ 6
2x1 + x2 ≤ 8
x1 , x2 ≥ 0
Solution: After adding slack variables s1 and s2 , the LP in standard form
max z = x1 + 4x2
s.t. x1 + 2x2 + s1 = 6
2x1 + x2 + s2 = 8
x1 , x2 , s1 , s2 ≥ 0
Since BV = {x2 , s2 } and N BV = {x1 , s1 }, then
CBV = [4 0] , CN BV = [1 0] , b = [6; 8]
B = [2 0; 1 1] , B−1 = [1/2 0; −1/2 1] , N = [1 1; 2 0]
So, the optimal tableau entries are
b̄ = B−1 b = [1/2 0; −1/2 1][6; 8] = [3; 5]
N̄ = B−1 N = [1/2 0; −1/2 1][1 1; 2 0] = [1/2 1/2; 3/2 −1/2]
C̄N BV = CBV N̄ − CN BV = [4 0][1/2 1/2; 3/2 −1/2] − [1 0] = [1 2]
z = CBV b̄ = [4 0][3; 5] = 12.
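The matrix computations of Example 3.1 are easy to script. The sketch below (illustrative helper functions, exact arithmetic) recomputes B−1 b, B−1 N, the row-0 coefficients, and z:

```python
from fractions import Fraction as F

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Exact inverse of a 2x2 matrix."""
    a, b = M[0]; c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Example 3.1 data: BV = {x2, s2}, NBV = {x1, s1}.
B = [[F(2), F(0)], [F(1), F(1)]]
N = [[F(1), F(1)], [F(2), F(0)]]
b = [[F(6)], [F(8)]]
cBV, cNBV = [[F(4), F(0)]], [[F(1), F(0)]]

Binv = inv2(B)
b_bar = mat_mul(Binv, b)                    # B^-1 b = [3; 5]
N_bar = mat_mul(Binv, N)                    # B^-1 N = [1/2 1/2; 3/2 -1/2]
red = [u - v for u, v in zip(mat_mul(cBV, N_bar)[0], cNBV[0])]   # row-0 coefficients [1, 2]
z = mat_mul(cBV, b_bar)[0][0]
print(z)    # prints: 12
```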
Example 3.2. For the following LP, the optimal basis is BV = {x2 , x4 }. Com-
pute the optimal tableau.
max z = x1 + 4x2 + 7x3 + 5x4
s.t. 2x1 + x2 + 2x3 + 4x4 = 10
3x1 − x2 − 2x3 + 6x4 = 5
x1 , x2 , x3 , x4 ≥ 0
Solution: Note that the constraints are in equation form, so there is no need
to add artificial variables here (we do not solve by simplex). Since BV =
{x2 , x4 } and N BV = {x1 , x3 }, then
CBV = [4 5] , CN BV = [1 7] , b = [10; 5]
B = [1 4; −1 6] , B−1 = [3/5 −2/5; 1/10 1/10] , N = [2 2; 3 −2]
b̄ = B−1 b = [3/5 −2/5; 1/10 1/10][10; 5] = [4; 3/2]
N̄ = B−1 N = [3/5 −2/5; 1/10 1/10][2 2; 3 −2] = [0 2; 1/2 0]
C̄N BV = CBV N̄ − CN BV = [4 5][0 2; 1/2 0] − [1 7] = [3/2 1]
z = CBV b̄ = [4 5][4; 3/2] = 47/2.
The optimal tableau is therefore
Basic x1 x2 x3 x4 RHS
z 3/2 0 1 0 47/2
x2 0 1 2 0 4
x4 1/2 0 0 1 3/2
Note 11. We have used the formulas of this section to create an LP’s optimal
tableau, but they can also be used to create the tableau for any set of basic
variables.
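As a quick consistency check (a sketch of ours, not part of the notes), the basic solution read off the Example 3.2 tableau can be substituted back into the original equality constraints and objective:

```python
from fractions import Fraction as F

# Basic solution from the tableau: x2 = 4, x4 = 3/2, x1 = x3 = 0.
x = {"x1": F(0), "x2": F(4), "x3": F(0), "x4": F(3, 2)}
c1 = 2*x["x1"] + x["x2"] + 2*x["x3"] + 4*x["x4"]      # first constraint: must equal 10
c2 = 3*x["x1"] - x["x2"] - 2*x["x3"] + 6*x["x4"]      # second constraint: must equal 5
z  =   x["x1"] + 4*x["x2"] + 7*x["x3"] + 5*x["x4"]    # objective value
print(c1, c2, z)    # prints: 10 5 47/2
```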
Exercise 3.1.
1. For the following LP, x1 and x2 are basic variables in the optimal tableau.
Use the formulas of matrices to determine the optimal tableau.
max z = 3x1 + x2
s.t. 2x1 − x2 ≤ 2
−x1 + x2 ≤ 4
x1 , x2 ≥ 0
2. For the following LP, x2 and s1 are basic variables in the optimal tableau.
Use the formulas of matrices to determine the optimal tableau.
max z = −x1 + x2
s.t. 2x1 + x2 ≤ 4
x1 + x2 ≤ 2
x1 , x2 ≥ 0
Basic x1 x2 x3 s1 s2 RHS
z 0 a 7 d e 150
x1 1 b 2 1 0 30
s2 0 c −8 −1 1 10
4. For the following LP, x1 and x2 are basic variables in the optimal tableau.
Determine the optimal tableau using the laws of matrices.
Step 1: Using the formulas of Section 3.1, determine how changes in the LP’s
parameters change the right-hand side and row 0 of the optimal tableau
(the tableau having BV as the set of basic variables).
Step 2: If each variable in row 0 has a non-negative coefficient and each con-
straint has a nonnegative right-hand side, then BV is still optimal. Oth-
erwise, BV is no longer optimal.
2. Changing the objective function coefficient of a basic variable:
If the objective function coefficient of a basic variable xj is changed, then
the current basis remains optimal if the coefficient of every variable in row
0 of the BV tableau remains nonnegative. If any variable in row 0 has a
negative coefficient, then the current basis is no longer optimal.
mization problem, just remember that a tableau is optimal if and only if each
variable has a nonpositive coefficient in row 0 and the right-hand side of each
constraint is nonnegative.
CBV = [0 20 60] , CN BV = [30 0 0] , b = [48; 20; 8]
B = [1 1 8; 0 3/2 4; 0 1/2 2] , B−1 = [1 2 −8; 0 2 −4; 0 −1/2 3/2] ,
N = [6 0 0; 2 1 0; 3/2 0 1]
C̄N BV = CBV B−1 N − CN BV
= [0 20 60][1 2 −8; 0 2 −4; 0 −1/2 3/2][6 0 0; 2 1 0; 3/2 0 1] − [30 + ∆ 0 0]
= [5 − ∆ 10 10] ≥ 0
∴ ∆ ≤ 5
2. Suppose we change the objective function coefficient of x1 from 60 to
60 + ∆. For what values of ∆ will the current set of basic variables
remain optimal ?
Solution: The BV will remain optimal if
C̄N BV = CBV B−1 N − CN BV
= [0 20 60 + ∆][1 2 −8; 0 2 −4; 0 −1/2 3/2][6 0 0; 2 1 0; 3/2 0 1] − [30 0 0]
= [5 + (5/4)∆ 10 − (1/2)∆ 10 + (3/2)∆] ≥ 0
∴ ∆ ∈ [−4, 20]
b̄ = B−1 b = [1 2 −8; 0 2 −4; 0 −1/2 3/2][48; 20 + ∆; 8]
= [24 + 2∆; 8 + 2∆; 2 − (1/2)∆] ≥ 0
∴ ∆ ∈ [−4, 4]
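These interval computations can be automated. As an illustrative sketch (a helper of our own, exact arithmetic), encode each row-0 coefficient or RHS entry as a + s∆ and intersect the admissible ∆-intervals:

```python
from fractions import Fraction as F

def delta_range(terms):
    """Each term (a, s) encodes a quantity a + s*Delta that must stay >= 0.
    Returns the tightest (lo, hi) interval for Delta (None = unbounded side)."""
    lo, hi = None, None
    for a, s in terms:
        if s > 0:                       # requires Delta >= -a/s
            bound = -a / s
            lo = bound if lo is None or bound > lo else lo
        elif s < 0:                     # requires Delta <= -a/s
            bound = -a / s
            hi = bound if hi is None or bound < hi else hi
    return lo, hi

# Changing the basic cost 60 -> 60 + Delta: row-0 coefficients
# 5 + (5/4)D, 10 - (1/2)D, 10 + (3/2)D must stay nonnegative.
print(delta_range([(F(5), F(5, 4)), (F(10), F(-1, 2)), (F(10), F(3, 2))]))   # Delta in [-4, 20]

# Changing b2 -> 20 + Delta: RHS entries 24 + 2D, 8 + 2D, 2 - (1/2)D >= 0.
print(delta_range([(F(24), F(2)), (F(8), F(2)), (F(2), F(-1, 2))]))          # Delta in [-4, 4]
```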
4. Suppose we change the column of the nonbasic variable x2 (its objective
coefficient and constraint coefficients) from (30; 6; 2; 3/2) to (43; 5; 2; 2).
Would this change the optimal solution to the problem?
Solution: The BV will remain optimal if C̄N BV ≥ 0. But
C̄N BV = CBV B−1 N − CN BV
= [0 20 60][1 2 −8; 0 2 −4; 0 −1/2 3/2][5 0 0; 2 1 0; 2 0 1] − [43 0 0]
= [−3 10 10] ≱ 0
∴ The current basis is no longer optimal.
5. Suppose we add a new activity x4 to the problem, with the column
(15; 1; 1; 1) (objective coefficient 15). How will the addition of the new
activity change the optimal tableau?
Solution: Because the row-0 coefficient of x4 is CBV B−1 a4 − c4 =
[0 10 10](1; 1; 1) − 15 = 20 − 15 = 5 ≥ 0, the new activity is not
attractive: the current basis remains optimal, and the optimal tableau is
unchanged apart from the new x4 column.
Exercise 3.2. Dorian Auto manufactures luxury cars and trucks. The company
believes that its most likely customers are high-income women and men. To
reach these groups, Dorian Auto has embarked on an ambitious TV advertising
campaign and has decided to purchase 1−minute commercial spots on two types
of programs: comedy shows and football games. Each comedy commercial is
seen by 7 million high-income women and 2 million high-income men. Each
football commercial is seen by 2 million high-income women and 12 million
high-income men. A 1−minute comedy ad costs $50, 000, and a 1−minute
football ad costs $100, 000. Dorian would like the commercials to be seen by at
least 28 million high-income women and 24 million high-income men. Suppose
1. Find the range of values of the cost of a comedy ad (currently $50,000)
for which the current basis remains optimal.
2. Find the range of values of the number of required HIW exposures (cur-
rently 28 million) for which the current basis remains optimal. If 40 million
HIW exposures were required, what would be the new optimal solution?
max z = c1 x1 + c2 x2 + · · · + cn xn
s.t. a11 x1 + a12 x2 + · · · + a1n xn ≤ b1
a21 x1 + a22 x2 + · · · + a2n xn ≤ b2
.. (3.5)
.
am1 x1 + am2 x2 + · · · + amn xn ≤ bm
x1 , x2 , · · · , xn ≥ 0
min w = b1 y1 + b2 y2 + · · · + bm ym
s.t. a11 y1 + a21 y2 + · · · + am1 ym ≥ c1
a12 y1 + a22 y2 + · · · + am2 ym ≥ c2
.. (3.6)
.
a1n y1 + a2n y2 + · · · + amn ym ≥ cn
y1 , y2 , · · · , ym ≥ 0
A min problem such as (3.6) that has all (≥) constraints and all variables
nonnegative is called a normal min problem. If the primal is a normal min
problem such as (3.6), then we define the dual of (3.6) to be (3.5).
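The passage from (3.5) to (3.6) is mechanical: transpose the constraint matrix, swap the roles of b and c, flip max to min, and flip the inequality senses. A minimal sketch (the function name is our own):

```python
def dual_of_normal_max(c, A, b):
    """Dual of the normal max LP {max cx : Ax <= b, x >= 0}: the normal min
    LP {min by : A^T y >= c, y >= 0}. A is given as a list of rows."""
    At = [list(col) for col in zip(*A)]     # transpose of A
    return b, At, c                          # dual objective, matrix, RHS

# The dual of the max problem of Example 3.5 (max 3y1 + 5y2,
# s.t. y1 + 2y2 <= 5, -y1 + 3y2 <= 2) is min 5x1 + 2x2
# s.t. x1 - x2 >= 3, 2x1 + 3x2 >= 5 -- recovering that example's min problem.
obj, rows, rhs = dual_of_normal_max([3, 5], [[1, 2], [-1, 3]], [5, 2])
print(obj, rows, rhs)   # prints: [5, 2] [[1, -1], [2, 3]] [3, 5]
```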
Finding the Dual:
4. The variables and constraints in the primal and dual problems are related
as follows.
(max) ⇐⇒ (min)
Constraint Sign Variable Sign
≥ ⇐⇒ ≤0
≤ ⇐⇒ ≥0
= ⇐⇒ u.r.s
Variable Sign Constraint Sign
≤0 ⇐⇒ ≤
≥0 ⇐⇒ ≥
u.r.s ⇐⇒ =
Solution:
Proof. Consider the primal LP
min w = Yb
s.t. YA ≥ CN BV
(3.8)
YI ≥ CBV
Y urs
YAXN BV + YIXBV = Yb = w.
Also, multiply the first constraint in (3.8) by XN BV from the right, and the
second constraint by XBV from the right, to obtain:
YAXN BV ≥ CN BV XN BV
YIXBV ≥ CBV XBV
Example 3.5. Consider the following pair of primal and dual problems.
Primal Dual
min w = 5x1 + 2x2 max z = 3y1 + 5y2
s.t. x1 − x2 ≥3 s.t. y1 + 2y2 ≤5
2x1 + 3x2 ≥ 5 − y1 + 3y2 ≤ 2
x1 , x2 ≥ 0 y1 , y 2 ≥ 0
Feasible Solution: Feasible Solution (Optimal):
x1 = 4, x2 = 1 y1 = 5, y2 = 0
Objective Function: Objective Function:
w = 22 z = 15
Theorem 3.2. The Dual Theorem.a
Suppose BV is an optimal basis for the primal. Then y =
CBV B−1 is an optimal solution to the dual. Also, z = w.
a
For proof see: Wayne L. Winston, Munirpallam Venkataramanan.
Introduction to Mathematical Programming. Thomson Learning; 4th
edition (2002)
max z = 3x1 + x2
s.t. 2x1 + x2 ≤ 8
4x1 + x2 ≤ 10
x1 , x2 ≥ 0
How to Read the Optimal Dual Solution from Row 0 of the Optimal
Tableau ?
Constraint i Sign Optimal yi Value Problem Type
≤ Coefficient of si Max or Min
≥ −1× Coefficient of ei Max or Min
= Coefficient of ai − M Max
= Coefficient of ai + M Min
In general,
(optimal value of the dual variable yi ) = (optimal primal z−coefficient of the
starting variable di ) + (original objective function coefficient of di ).
Example 3.7. Consider the following LP.
max z = −2x1 − x2 + x3
s.t. x1 + x2 + x3 ≤ 3
x2 + x3 ≥ 2
x1 + x3 = 1
x1 , x2 , x3 ≥ 0
1. Find the dual of this LP.
Solution: The dual LP is
Corollary 3.3. The primal problem is infeasible if and only if
the normal form of the dual problem is unbounded (and vice
versa).
Note 12. With regard to the primal and dual linear programming problems,
exactly one of the following statements is true:
From this note we see that duality is not completely symmetric. The best we
can say is that (here optimal means having a finite optimum, and unbounded
means having an unbounded optimal objective value):
Note 13. The relationship between degeneracy and multiplicity of the primal
and the dual optimal solutions is formulated in Theorem 3.4. Recall that de-
generacy and multiplicity always refer to LP models with inequality constraints,
and that degeneracy is defined for basic feasible solutions. In this theorem,
the term nondegenerate in the expression “multiple and nondegenerate” means
that there are multiple optimal solutions, and that there exists an optimal basic
feasible solution that is nondegenerate.
Exercise 3.4.
1. Find the optimal value of the objective function for the following LP using
its dual. (Do NOT solve the dual using the simplex algorithm)
min w = 10y1 + 4y2 + 5y3
s.t. 5y1 − 7y2 + 3y3 ≥ 50
y1 , y2 , y3 ≥ 0
5. Solve the dual of the following problem, and then find its optimal solu-
tion from the solution of the dual. Does the solution of the dual offer
computational advantages over solving the primal directly?
Given that the artificial variable a1 and the slack variable s2 form the
starting basic variables and that M was set equal to 100 when solving the
problem, the optimal tableau is given as:
Basic x1 x2 x3 a1 s2 RHS
z 0 23 7 105 0 75
x1 1 5 2 1 0 15
s2 0 −10 −8 −1 1 5
Write the associated dual problem, and determine its optimal solution in
two ways.
3.5 Shadow Prices
It is often important for managers to determine how a change in a constraint’s
right-hand side changes the LP’s optimal z−value.
Note 14.
1. The previous definition assumes that after the RHS of constraint i has
been changed to bi + 1, the current basis remains optimal.
2. The shadow price of the ith constraint of a max LP is the optimal value
of the ith dual variable y i . Also, the shadow price of the ith constraint
of a min LP is −1× the optimal value of the ith dual variable y i
z new = Yb = [y1 · · · yi · · · ym ][b1 ; · · · ; bi + 1 ; · · · ; bm ]
= y1 b1 + · · · + yi (bi + 1) + · · · + ym bm
= (y1 b1 + · · · + yi bi + · · · + ym bm ) + yi
= z old + yi
∴ z new − z old = yi
3. In max LP, the shadow price for a (≤) constraint is nonnegative, for a
(≥) is nonpositive, and for (=) is urs. Also, in min LP, the shadow price
for a (≤) constraint is nonpositive, for a (≥) is nonnegative, and for (=)
is urs.
Example 3.8. Consider the following LP.
max z = 15x1 + 25x2
s.t. 3x1 + 4x2 ≤ 100
2x1 + 3x2 ≤ 70
x1 + 2x2 ≤ 30
x2 ≥ 3
x1 , x 2 ≥ 0
The optimal solution of the problem is z = 435, when x1 = 24 and x2 = 3,
where the Row 0 in the optimal tableau (after adding slack variables s1 , s2 , s3 to
the first three constraints respectively and subtracting excess variable e4 from
the last constraint then adding to it an artificial variable a4 ) is
z + 15s3 + 5e4 + (M − 5)a4 = 435.
1. Find the shadow price of each constraint.
Solution: The shadow price of each constraint is the optimal value of the
corresponding dual variable of each constraint. So,
y1 = 0 , y2 = 0 , y3 = 15 , y4 = −5
2. Assuming the current basis remains optimal, what would the change on
the z−value be if the RHS of the
(a) 3rd constraint were changed from 30 to 35 ?
Solution: z new = yb = [0 0 15 −5][100; 70; 35; 3] = 510.
(b) 4th constraint were changed from 3 to 2 ?
Solution: z new = yb = [0 0 15 −5][100; 70; 30; 2] = 440.
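Both computations are simple dot products; as a small sketch (our own code):

```python
# Optimal dual values read from row 0 of Example 3.8: y = (0, 0, 15, -5).
y = [0, 0, 15, -5]

def z_new(b):
    """New z-value after an RHS change, assuming the current basis stays optimal."""
    return sum(yi * bi for yi, bi in zip(y, b))

print(z_new([100, 70, 35, 3]))   # prints: 510
print(z_new([100, 70, 30, 2]))   # prints: 440
```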
Exercise 3.5. Sugarco can manufacture three types of candy bar. Each candy
bar consists totally of sugar and chocolate. The compositions of each type of
candy bar and the profit earned from each candy bar are shown in the table
below.
Bar Amount of Sugar (Ounces) Amount of Chocolate (Ounces) Profit (Cents)
1 1 2 3
2 1 3 7
3 1 1 5
Fifty oz of sugar and 100 oz of chocolate are available. After defining xi to
be the number of Type i candy bars manufactured, Sugarco should solve the
following LP:
After adding slack variables s1 and s2 , the optimal tableau is as shown in the
table below.
Basic x1 x2 x3 s1 s2 RHS
z 3 0 0 4 1 300
x3 1/2 0 1 3/2 −1/2 25
x2 1/2 1 0 −1/2 1/2 25
Using this optimal tableau, answer the following questions:
This result can be used for an alternative way of doing the following types of
sensitivity analysis:
Since primal optimality and dual feasibility are equivalent, the above changes will
leave the current basic optimal if and only if the current dual solution CBV B−1
remains dual feasible.
4x1 + 2x2 + (3/2)x3 ≤ 20
2x1 + (3/2)x2 + (1/2)x3 ≤ 8
x1 , x2 , x3 ≥ 0
2. Suppose we change the column of x2 (its objective coefficient and con-
straint coefficients) from (30; 6; 2; 3/2) to (43; 5; 2; 2); does the current
basis remain optimal?
Solution: Changing the column for the nonbasic variable leaves the first
and third dual constraints unchanged but changes the second to
5y1 + 2y2 + 2y3 ≥ 43.
Because y1 = 0, y2 = 10, y3 = 10 does not satisfy the new second dual
constraint (0 + 20 + 20 = 40 < 43), dual feasibility is not maintained,
and the current basis is no longer optimal.
3. Suppose we add a new activity x4 to the problem, with the column
(15; 1; 1; 1) (objective coefficient 15). Does the current basis remain
optimal?
Solution: Introducing the new activity leaves the three dual constraints
unchanged, but the new variable x4 adds a new dual constraint. The new
dual constraint will be y1 + y2 + y3 ≥ 15. Because 0 + 10 + 10 ≥ 15, the
current basis remains optimal.
Exercise 3.6.
1. The following questions refer to the Sugarco problem (Exercise 3.5):
(a) For what values of profit on a Type 1 candy bar does the current
basis remain optimal?
(b) If a Type 1 candy bar used 0.5 oz of sugar and 0.75 oz of chocolate,
would the current basis remain optimal?
(c) A Type 4 candy bar is under consideration. A Type 4 candy bar
yields a 10-cent profit and uses 2 oz of sugar and 1 oz of chocolate.
Does the current basis remain optimal?
2. Consider the following LP and its optimal tableau:
max z = 5x1 + x2 + 2x3
s.t. x1 + x2 + x3 ≤ 6
6x1 + x3 ≤ 8
x2 + x3 ≤ 2
x1 , x 2 , x 3 ≥ 0
Basic x1 x2 x3 s1 s2 s3 RHS
z 0 1/6 0 0 5/6 1/6 9
s1 0 1/6 0 1 −1/6 −5/6 3
x1 1 −1/6 0 0 1/6 −1/6 1
x3 0 1 1 0 0 1 2
3.7 Complementary Slackness
The Theorem of Complementary Slackness is an important result that relates
the optimal primal and dual solutions. To state this theorem, we assume that
the primal is a normal max problem with variables x1 , x2 , · · · , xn and m of (≤)
constraints. Then the dual is a normal min problem with variables y1 , y2 , · · · , ym
and n of (≥) constraints.
max z = c1 x1 + · · · + cn xn min w = b1 y1 + · · · + bm ym
s.t. a11 x1 + · · · + a1n xn ≤ b1 s.t. a11 y1 + · · · + am1 ym ≥ c1
a21 x1 + · · · + a2n xn ≤ b2 a12 y1 + · · · + am2 ym ≥ c2
.. .. .. .. .. ..
. . . . . .
am1 x1 + · · · + amn xn ≤ bm a1n y1 + · · · + amn ym ≥ cn
xi ≥ 0, ∀i = 1, 2, · · · , n yj ≥ 0, ∀j = 1, 2, · · · , m
max z = c1 x1 + · · · + cn xn min w = b1 y1 + · · · + bm ym
s.t. a11 x1 + · · · + a1n xn + s1 = b1 s.t. a11 y1 + · · · + am1 ym − e1 = c1
a21 x1 + · · · + a2n xn + s2 = b2 a12 y1 + · · · + am2 ym − e2 = c2
.. .. .. .. .. ..
. . . . . .
am1 x1 + · · · + amn xn + sm = bm a1n y1 + · · · + amn ym − en = cn
xi ≥ 0, ∀i = 1, 2, · · · , n yj ≥ 0, ∀j = 1, 2, · · · , m
sj ≥ 0, ∀j = 1, 2, · · · , m ei ≥ 0, ∀i = 1, 2, · · · , n
Theorem 3.5. Let x = (x1 , · · · , xn )T be a feasible primal
solution and y = (y1 , · · · , ym ) be a feasible dual solution.
Then x is primal optimal and y is dual optimal, (z = w), if
and only if
xi ei = 0 , ∀i = 1, 2, · · · , n
yj sj = 0 , ∀j = 1, 2, · · · , m
In other words, if a constraint in either the primal or dual is
non binding (sj > 0 or ei > 0), then the corresponding (com-
plementary) variable in the other problem must equal 0.
Proof. Multiply each constraint in the primal in standard form by its corre-
sponding (complementary) dual variable:
a11 x1 y1 + · · · + a1n xn y1 + s1 y1 = b1 y1
a21 x1 y2 + · · · + a2n xn y2 + s2 y2 = b2 y2
.. .. ..
. . .
am1 x1 ym + · · · + amn xn ym + sm ym = bm ym
(a11 x1 y1 + · · · + a1n xn y1 + · · ·
+am1 x1 ym + · · · + amn xn ym )
+ (s1 y1 + · · · + sm ym ) (3.9)
= b1 y1 + · · · + bm ym
=w
Now, multiply each constraint in the dual in standard form by its corresponding
(complementary) primal variable:
a11 y1 x1 + · · · + am1 ym x1 − e1 x1 = c1 x1
a12 y1 x2 + · · · + am2 ym x2 − e2 x2 = c2 x2
.. .. ..
. . .
a1n y1 xn + · · · + amn ym xn − en xn = cn xn
(a11 y1 x1 + · · · + am1 ym x1 + · · ·
+a1n y1 xn + · · · + amn ym xn )
− (e1 x1 + · · · + en xn ) (3.10)
= c1 x1 + · · · + cn xn
=z
Subtracting (3.10) from (3.9) gives
s1 y1 + s2 y2 + · · · + sm ym + e1 x1 + e2 x2 + · · · + en xn = w − z (3.11)
If x and y are optimal, then z = w and (3.11) becomes
s1 y1 + s2 y2 + · · · + sm ym + e1 x1 + e2 x2 + · · · + en xn = 0
Since all x’s, y’s, s’s, and e’s are nonnegative, every term in the sum must
vanish, so
xi ei = 0 , ∀i = 1, 2, · · · , n
yj sj = 0 , ∀j = 1, 2, · · · , m
Conversely, if xi ei = 0 for all i and yj sj = 0 for all j, then (3.11) gives
w − z = 0; hence z = w and, by weak duality, x and y are optimal.
The optimal solution to the problem is z = 49/3, x1 = 5/3, x2 = 8/3, and
x3 = s1 = s2 = 0. Use the complementary slackness theorem to find the
optimal dual solution of the problem.
Solution: The dual LP is
Because x1 > 0 and x2 > 0 then the optimal dual solution must have e1 = 0
and e2 = 0. This means that for the optimal dual solution, the first and second
constraints must be binding. So we know that the optimal values of y1 and
y2 may be found by solving the first and second dual constraints as equalities.
Thus, the optimal values of y1 and y2 must satisfy
2y1 + y2 = 5
y1 + 2y2 = 3
Solving these equations simultaneously shows that the optimal dual solution
must have y1 = 7/3 and y2 = 1/3, with w = z = 49/3.
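The same 2×2 system can be solved exactly by Cramer's rule; the following sketch (our own code) recovers y1 = 7/3 and y2 = 1/3:

```python
from fractions import Fraction as F

# Binding dual constraints forced by complementary slackness (x1 > 0, x2 > 0):
#   2*y1 +   y2 = 5
#     y1 + 2*y2 = 3
a11, a12, b1 = F(2), F(1), F(5)
a21, a22, b2 = F(1), F(2), F(3)
det = a11 * a22 - a12 * a21                 # determinant = 3
y1 = (b1 * a22 - a12 * b2) / det            # Cramer's rule
y2 = (a11 * b2 - b1 * a21) / det
print(y1, y2)    # prints: 7/3 1/3
```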
Exercise 3.7. Consider the following LP.
max z = 3x1 + 4x2 + x3 + 5x4
s.t. x1 + 2x2 + x3 + 2x4 ≤ 5
2x1 + 3x2 + x3 + 3x4 ≤ 8
x1 , x 2 , x 3 , x 4 ≥ 0
The optimal solution to the problem is z = 13, x1 = 1, x2 = x3 = 0, and
x4 = 2. Use the complementary slackness theorem to find the optimal dual solution
of the problem.
Dual Simplex Algorithm: The crux of the dual simplex method is to start
with a basic solution that is better than optimal but infeasible. The optimality
and feasibility conditions are designed to preserve the optimality of the basic
solutions while moving the solution iterations toward feasibility.
To start the LP optimal and infeasible, two requirements must be met:
– The objective function must satisfy the optimality condition of the
regular simplex method.
– All the constraints must be of the type (≤), regardless of whether
the problem is a max or a min. This condition requires converting
any (≥) constraint to (≤) simply by multiplying both sides of the
inequality by −1. If the LP includes (=) constraints, each equation
can be replaced by two inequalities. For example, x1 + x2 = 1 is
equivalent to x1 + x2 ≤ 1 and x1 + x2 ≥ 1, i.e., to
x1 + x2 ≤ 1 and −x1 − x2 ≤ −1.
After converting all the constraints to (≤), the starting solution is infea-
sible if at least one of the right-hand sides of the inequalities is strictly
negative.
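The conversion rules above can be sketched in code. The tuple representation of a constraint below is an assumption made for illustration, not something from the notes:

```python
def to_leq_form(constraints):
    """Convert a list of (coeffs, sense, rhs) constraints to all-(<=) form:
    '>=' rows are negated, and '=' rows become a '<=' plus a negated '<=' pair."""
    result = []
    for coeffs, sense, rhs in constraints:
        if sense == "<=":
            result.append((coeffs, rhs))
        elif sense == ">=":
            result.append(([-a for a in coeffs], -rhs))
        elif sense == "=":
            result.append((coeffs, rhs))
            result.append(([-a for a in coeffs], -rhs))
    return result

# Example from the notes: x1 + x2 = 1 becomes x1 + x2 <= 1 and -x1 - x2 <= -1.
print(to_leq_form([([1, 1], "=", 1)]))  # [([1, 1], 1), ([-1, -1], -1)]
```

The starting solution is then infeasible exactly when at least one converted right-hand side is strictly negative.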
Example 3.11. Use the dual-simplex algorithm to solve the following LP
problem.
Solution: After converting all the constraints to (≤), then adding slack variables
s1 and s2 to the constraints, the LP in standard form is
The initial tableau and all following tableaus, using the dual-simplex algorithm,
are shown below.
↓
Iteration [0] Basic x1 x2 s1 s2 RHS Optimal
w −5 −6 0 0 0 but not
s1 −1 −1 1 0 −2 feasible
← s2 −4 1 0 1 −4
↓
Iteration [1] Basic x1 x2 s1 s2 RHS Optimal
w 0 −29/4 0 −5/4 5 but not
← s1 0 −5/4 1 −1/4 −1 feasible
x1 1 −1/4 0 −1/4 1
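The pivot from Iteration [0] to Iteration [1] can be reproduced with a short routine. The following sketch (using NumPy; the tableau layout with the objective in row 0 and the RHS in the last column matches the tableaus above, but the code itself is not from the notes) applies the dual-simplex ratio test and one Gauss-Jordan pivot:

```python
import numpy as np

def dual_simplex_pivot(T):
    """One dual-simplex pivot on tableau T (objective in row 0, RHS in the
    last column). The leaving row has the most negative RHS; the entering
    column minimizes |obj_j / a_rj| over entries a_rj < 0 in that row."""
    r = 1 + np.argmin(T[1:, -1])        # leaving row (most negative RHS)
    cols = np.where(T[r, :-1] < 0)[0]   # eligible entering columns
    c = cols[np.argmin(np.abs(T[0, cols] / T[r, cols]))]
    T = T.copy()
    T[r] /= T[r, c]                     # normalize the pivot row
    for i in range(len(T)):
        if i != r:
            T[i] -= T[i, c] * T[r]      # eliminate column c elsewhere
    return T

# Iteration [0]: rows w, s1, s2; columns x1, x2, s1, s2, RHS.
T0 = np.array([[-5.0, -6.0, 0.0, 0.0,  0.0],
               [-1.0, -1.0, 1.0, 0.0, -2.0],
               [-4.0,  1.0, 0.0, 1.0, -4.0]])
T1 = dual_simplex_pivot(T0)
print(T1)  # matches Iteration [1]: w-row [0, -29/4, 0, -5/4, 5], etc.
```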
Solution: After converting all the constraints to (≤), then adding slack variables
s1 , s2 and s3 to the constraints, the LP in standard form is
The initial tableau and all following tableaus, using the dual-simplex algorithm,
are shown below.
↓
Iteration [0] Basic x1 x2 s1 s2 s3 RHS Optimal
z 4 2 0 0 0 0 but not
s1 1 1 1 0 0 1 feasible
s2 −1 −1 0 1 0 −1
← s3 3 −1 0 0 1 −2
Note 15. The dual simplex method is often used to find the new optimal
solution to an LP after a constraint is added. When a constraint is added, one
of the following three cases will occur:
1. The current optimal solution satisfies the new constraint, and it remains
optimal.
2. The current optimal solution does not satisfy the new constraint, but the
LP still has a feasible solution.
3. The new constraint causes the LP to have no feasible solution.
1. 3x1 + x2 ≤ 10.
Solution: After converting the constraint to (=) by adding s3 to the
constraint, we obtain:
Basic x1 x2 s1 s2 s3 RHS
z 0 2 0 3 0 18
s1 0 1/2 1 −1/2 0 2
x1 1 1/2 0 1/2 0 3
s3 3 1 0 0 1 10
2. x1 − 2x2 ≥ 6.
Solution: After converting the constraint to (≤) then adding s3 to the
constraint, we obtain:
Basic x1 x2 s1 s2 s3 RHS
z 0 2 0 3 0 18
s1 0 1/2 1 −1/2 0 2
x1 1 1/2 0 1/2 0 3
s3 −1 2 0 0 1 −6
Basic x1 x2 s1 s2 s3 RHS Since s3 leaves
z 0 2 0 3 0 18 with no entering
s1 0 1/2 1 −1/2 0 2 variable, then the
x1 1 1/2 0 1/2 0 3 solution is infeasible
s3 0 5/2 0 1/2 1 −3
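The stopping condition illustrated here can be stated in code: if the leaving row has a negative RHS but no strictly negative constraint coefficient, no entering variable exists and the LP is infeasible. A minimal sketch (the row values are taken from the s3-row of the tableau above):

```python
def dual_simplex_infeasible(row, rhs):
    """No entering variable exists when the leaving row has a negative RHS
    and all its constraint coefficients are nonnegative; the dual simplex
    then declares the LP infeasible."""
    return rhs < 0 and all(a >= 0 for a in row)

# s3-row above: coefficients of [x1, x2, s1, s2, s3], with RHS = -3.
print(dual_simplex_infeasible([0, 5/2, 0, 1/2, 1], -3))  # True -> infeasible
```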
3. 8x1 + x2 ≤ 12.
Solution: After converting the constraint to (=) by adding s3 to the
constraint, we obtain:
max z = x1 − 3x2
s.t. x1 − x2 ≤ 2
x1 + x2 ≥ 4
2x1 + 2x2 ≥ 3
x1 , x2 ≥ 0
The model can be put in the following tableau form in which the starting basic
solution (s1 , s2 , s3 ) is both non-optimal (because x1 has a negative reduced
cost) and infeasible (because s2 = −4, s3 = −3).
↓
Iteration [0] Basic x1 x2 s1 s2 s3 RHS The solution
z −1 3 0 0 0 0 is not optimal
s1 1 −1 1 0 0 2 and infeasible
← s2 −1 −1 0 1 0 −4
s3 −2 −2 0 0 1 −3
Remove the infeasibility first by applying a version of the dual simplex feasibility
condition that selects s2 as the leaving variable. To determine the entering
variable, all we need is a nonbasic variable whose constraint coefficient in the
s2-row is strictly negative. The selection can be done without regard to optimality,
because optimality does not hold at this point anyway. In the present example,
x1 and x2 have negative coefficients in the s2-row, and x1 is selected as the
entering variable. The result is the following tableau:
↓
Iteration [1] Basic x1 x2 s1 s2 s3 RHS The solution
z 0 4 0 −1 0 4 is not optimal
← s1 0 −2 1 1 0 −2 and infeasible
x1 1 1 0 −1 0 4
s3 0 0 0 −2 1 5
At this point, s1 leaves the solution; x2 has a negative coefficient in the
s1-row and is selected as the entering variable. The result is the following
tableau:
Dual Simplex with Artificial Constraints: In example (3.14), the dual sim-
plex is not applicable directly, because x1 does not satisfy the maximization
optimality condition. Show that by adding the artificial constraint x1 ≤ M
(where M is sufficiently large not to eliminate any feasible points in the original
solution space), and then using the new constraint as a pivot row, the selection
of x1 as the entering variable (because it has the most negative objective coeffi-
cient) will render an all-optimal objective row. Next, carry out the regular dual
simplex method on the modified problem. All the iterations are shown below.
↓
Iteration [0] Basic x1 x2 s1 s2 s3 s4 RHS
z −1 3 0 0 0 0 0
s1 1 −1 1 0 0 0 2
s2 −1 −1 0 1 0 0 −4
s3 −2 −2 0 0 1 0 −3
← s4 1 0 0 0 0 1 M
↓
Iteration [1] Basic x1 x2 s1 s2 s3 s4 RHS
z 0 3 0 0 0 1 M
← s1 0 −1 1 0 0 −1 −M + 2
s2 0 −1 0 1 0 1 M −4
s3 0 −2 0 0 1 2 2M − 3
x1 1 0 0 0 0 1 M
↓
Iteration [2] Basic x1 x2 s1 s2 s3 s4 RHS
z 0 2 1 0 0 0 2
s4 0 1 −1 0 0 1 M −2
← s2 0 −2 1 1 0 0 −2
s3 0 −4 2 0 1 0 1
x1 1 −1 1 0 0 0 2
Exercise 3.8.
max z = 2x1 − x2 + x3
s.t. 2x1 + 4x2 − x3 ≥ 4
−x1 + x2 − x3 ≥ 2
x1 + 2x2 + 2x3 ≤ 8
x1 , x2 , x3 ≥ 0
by using
2. Use the dual simplex method to solve the following LP:
max z = −2x1 − x3
s.t. x1 + x2 − x3 ≥ 5
x1 − 2x2 + 4x3 ≥ 8
x1 , x2 , x3 ≥ 0
Answers
max z = x1 + 5x2
s.t. x1 − 3x2 ≥ 0
x1 + x2 ≤ 8
x1 ≥ 0, x2 ≥ 0
min w = 4x1 + x2
s.t. 3x1 + x2 ≥ 10
x1 + x2 ≥ 5
x1 ≥ 3
x1 ≥ 0, x2 ≥ 0
5. Let x1 = number of units of A, x2 = number of units of B. Then, the
formulation of the problem is
Chapter 1 Exercise 1.2
1.
2. If there are > and/or < constraints, then a problem may have no optimal
solution. Consider the problem max z = x subject to x < 1. Clearly, this
problem has no optimal solution (there is no largest number smaller than
1!).
5. (a) 2a < b (b) a/3 < b < 2a (c) b < a/3 (d) b = 2a (e) b = a/3
6. (a) south-east (down-right) (c) north-west (up-left)
(b) south-west (down-left)
−x + y ≤ 1
x+y ≤5
x − 2y ≤ 2
y≤3
x≥1
x, y ≥ 0
10.
11. No feasible region
3. x y1 y2
−6 0 6
10 10 0
0 0 6
(b) NBVs BVs values feasible? z−value
x1 , x2 s1 , s2 (12, 12) yes 0
x2 , s2 s1 , x1 (8, 4) yes 8
x1 , s2 s1 , x2 (−6, 6) no
s1 , x2 s2 , x1 (−24, 12) no
x1 , s1 s2 , x2 (4, 4) yes 12
s1 , s2 x1 , x2 (12/7, 24/7) yes 96/7
(c) from the table above, the optimum solution is z = 96/7, x1 = 12/7,
x2 = 24/7.
(d) the solution is left to the student
(e) the solution is left to the student
(d) z = 12, x1 = 4, x2 = 4, x3 = 4
1. w = −5, x1 = 0, x2 = 5
2. w = −2, x1 = 0, x2 = 2
3. w = −7.5, x1 = 0, x2 = 1.5
4. w = −9, x1 = 3, x2 = 0
5. w = −48, x1 = 0, x2 = 4, x3 = 0, x4 = 8
1. (a) w = 1, x1 = 0, x2 = 0, x3 = 1
(b) w = 2, x1 = 2, x2 = 0, x3 = 0
(c) w = 4, x1 = 2, x2 = 0
(d) z = 5, x1 = 1, x2 = 2
3. z = 10, x1 = 4, x2 = 0, x3 = 2
2. z = 4, x1 = 4, x2 = 0, x3 = 0
3. z = 10 when: x1 = 0, x2 = 0, x3 = 10/3
x1 = 0, x2 = 5, x3 = 0
x1 = 1, x2 = 4, x3 = 1/3
4. x3 and s1 can yield alternative optima, but because all their constraint
coefficients are non-positive, none can yield an alternative basic solution.
Basic x1 x2 x3 s1 s2 RHS
z 0 0 0 0 1 20
x1 1 0 −2 −1 0 15
s2 0 1 −7 −2 1 10
9. (a) c ≤ 0, b ≥ 0
(b) c = 0, b ≥ 0, a2 > 0 and/or a3 > 0. If only a3 > 0 then b > 0
(c) c > 0, a2 ≤ 0, a3 ≤ 0
10. (a) b ≥ 0 is necessary.
If c1 = 0 and c2 ≥ 0 we can pivot in x1 to obtain an alternative
optimum.
If c1 ≥ 0, c2 ≥ 0 and a2 > 0 we can pivot in x5 and obtain an
alternative optimum.
If c2 = 0, a1 > 0 and c1 ≥ 0 we can pivot in x2 and obtain an
alternative optimum.
(b) b < 0
(c) b = 0
(d) b ≥ 0 makes the solution feasible. If c2 < 0 and a1 ≤ 0 we can
make x2 as large as desired and obtain an unbounded solution.
(e) b ≥ 0 makes the current basic solution feasible. For x6 to replace x1
we need c1 < 0 (this ensures that increasing x1 will increase z) and
we need Row 3 to win the ratio test for x1 . This requires 3/a3 ≤ b/4.
11. the solution is left to the student
1. Basic x1 x2 s1 s2 RHS
z 0 0 4 5 28
x1 1 0 1 1 6
x2 0 1 1 2 10
2. Basic x1 x2 s1 s2 RHS
z 2 0 0 1 2
x2 1 1 0 1 2
s1 1 0 1 −1 2
3. the solution is left to the student
4. Basic x1 x2 e1 e2 RHS
z 0 0 −5 −15/2 3800
x1 1 0 −3/20 1/40 18/5
x2 0 1 1/40 −7/80 7/5
Chapter 3 Exercise 3.3
1. w = 250/3.
5. x1 = 0, x2 = 20, x3 = 0, z = 1200
1. y1 = 4, y2 = 1
(c) The dual constraint for a Type 4 Candy Bar is 2y1 + y2 ≥ 10. Since
y1 = 4 and y2 = 1 do not satisfy this constraint, the current basis
is no longer optimal. The new optimal solution would make Type 4
Candy Bars.
y1 = 1, y2 = 1, w = 13
2. x1 = 0, x2 = 14, x3 = 9, z = −9
3. x1 = 0, x2 = 10, x3 = 0, x4 = 0, w = 70