
Linear Programming

Lecture Notes for Math 373


Feras Awad

June 21, 2019

Contents
1 Introduction to Linear Programming
  1.1 Operations Research
  1.2 What is a Linear Programming (LP) Problem?
  1.3 Modeling LP Problems
  1.4 Geometric Preliminaries and Solutions
      1.4.1 Half-Spaces, Hyperplanes, and Convex Sets
      1.4.2 The Graphical Solution of Two-Variable LP Problems
  1.5 The Corner Point Theorem and its Proof

2 The Simplex Method
  2.1 The Idea of the Simplex Method
  2.2 Converting an LP to Standard Form
  2.3 Basic Feasible Solutions
  2.4 The Simplex Algorithm
      2.4.1 Iterative Nature of the Simplex Method
      2.4.2 Computational Details of the Simplex Algorithm
      2.4.3 Representing the Simplex Tableau
  2.5 Solving Minimization Problems
  2.6 Artificial Starting Solution and the Big M-Method
  2.7 Special Cases in the Simplex Method
      2.7.1 Degeneracy
      2.7.2 Alternative Optima
      2.7.3 Unbounded Solutions
      2.7.4 Nonexisting (or Infeasible) Solutions

3 Sensitivity Analysis and Duality
  3.1 Some Important Formulas
  3.2 Sensitivity Analysis
  3.3 Finding the Dual of an LP
  3.4 The Dual Theorem and its Consequences
  3.5 Shadow Prices
  3.6 Duality and Sensitivity Analysis
  3.7 Complementary Slackness
  3.8 The Dual-Simplex Method

Answers
1 Introduction to Linear Programming
1.1 Operations Research
The first formal activities of operations research (OR) were initiated in England
during World War II, when a team of British scientists set out to make decisions
regarding the best utilization of war materials. After the war, the ideas were adopted
and improved in the civilian sector.
In OR, we do not have a single general technique that solves all mathematical
models. The most common OR techniques are linear programming, nonlinear
programming, integer programming, dynamic programming, network programming,
and many more. All of these techniques are specified by algorithms rather than
closed-form formulas. The algorithms here are for deterministic models (not for
probabilistic or stochastic models). Deterministic means that once we know the
values of the variables, we know the value of the model with certainty.

1.2 What is a Linear Programming (LP) Problem?

Definition 1.1. A function f (x1 , x2 , · · · , xn ) of the variables
x1 , x2 , · · · , xn is a linear function if f (x1 , x2 , · · · , xn ) =
c1 x1 + c2 x2 + · · · + cn xn where c1 , c2 , · · · , cn are constants.

Example 1.1. f (x1 , x2 ) = 2x1 + x2 is a linear function, while f (x1 , x2 ) = x1² x2
is not linear.

Definition 1.2. For any linear function f (x1 , x2 , · · · , xn ) and


any real number b, the inequalities f (x1 , x2 , · · · , xn ) ≤ b and
f (x1 , x2 , · · · , xn ) ≥ b are linear inequalities.

Example 1.2. 2x1 + 3x2 ≤ 3 is a linear inequality, while x1 x2 + x2 ≥ 3 is not
a linear inequality.

Definition 1.3. A linear programming (LP) problem is an
optimization problem for which we do the following:

1. We attempt to maximize (profit) or minimize (cost)
a linear function (called the objective function) of the
decision variables.

2. The values of the decision variables must satisfy a set of
constraints, and each constraint must be a linear equation
or a linear inequality.

3. A sign restriction is associated with each variable. For
any variable xi , either xi ≥ 0 or xi is unrestricted in
sign (urs).

max (or min) z = f (x1 , x2 , · · · , xn )


Subject to Constraints
Sign Restriction.

Example 1.3. Furnco manufactures desks and chairs. Each desk uses 4 units
of wood, and each chair uses 3. A desk contributes $40 to profit, and a chair
contributes $25. Marketing restrictions require that the number of chairs pro-
duced be at least twice the number of desks produced. If 20 units of wood are
available, formulate an LP to maximize Furnco’s profit.
Solution: Let x1 be the number of desks produced, and x2 be the number
of chairs produced. Then, the formulation of the problem is
max z = 40x1 + 25x2
s.t. 4x1 + 3x2 ≤ 20
x2 ≥ 2x1
x1 ≥ 0, x2 ≥ 0
The optimal solution of this problem is x1 = 2, x2 = 4, and z = 180. (We will
see how we find these values later on.)
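The quoted optimum can already be checked numerically. The following pure-Python sketch (an addition to the notes, anticipating the corner point theorem of Section 1.5) enumerates the pairwise intersections of the constraint boundary lines, discards the infeasible ones, and evaluates z at the rest; the helper names `intersect` and `feasible` are ours.

```python
from itertools import combinations

# Furnco LP as a list of constraints a1*x1 + a2*x2 <= b.
# The ">=" constraints are rewritten: x2 >= 2*x1 becomes 2*x1 - x2 <= 0,
# and x1 >= 0, x2 >= 0 become -x1 <= 0, -x2 <= 0.
constraints = [(4, 3, 20), (2, -1, 0), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the boundary lines a1*x1 + a2*x2 = b, if unique."""
    (a1, a2, b1), (a3, a4, b2) = c1, c2
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-12:
        return None                      # parallel boundary lines
    x1 = (b1 * a4 - b2 * a2) / det
    x2 = (a1 * b2 - a3 * b1) / det
    return (x1, x2)

def feasible(p, eps=1e-9):
    return all(a1 * p[0] + a2 * p[1] <= b + eps for a1, a2, b in constraints)

corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(corners, key=lambda p: 40 * p[0] + 25 * p[1])
print(best, 40 * best[0] + 25 * best[1])   # -> (2.0, 4.0) 180.0
```

The best corner found this way is exactly the optimum (2, 4) with z = 180 quoted above.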
Note 1. [LP Assumptions]
1. The Proportionality and Additivity Assumptions.
The fact that the objective function for an LP must be a linear function
of the decision variables has two implications.
(a) The contribution of the objective function from each decision vari-
able is proportional to the value of the decision variable. For exam-
ple, the contribution to the objective function in example (1.3) from
making 5 desks is exactly five times the contribution to the objective
function from making one desk.
(b) The contribution to the objective function for any variable is inde-
pendent of the values of the other decision variables. For example,
no matter what the value of x2 , the manufacture of x1 desks will
always contribute 40x1 dollars to the objective function.
Analogously, the fact that each LP constraint must be a linear inequality
or linear equation has two implications.
(a) The contribution of each variable to the lefthand side of each con-
straint is proportional to the value of the variable.
(b) The contribution of a variable to the lefthand side of each constraint
is independent of the values of the variable.

2. The Divisibility Assumption.
The divisibility assumption requires that each decision variable be allowed
to assume fractional values. For instance, in example (1.3), the Divisibility
Assumption implies that it is acceptable to produce 1.5 desks or 1.63
chairs. Because Furnco cannot actually produce a fractional number of
desks or chairs, the Divisibility Assumption is not satisfied in the Furnco
problem. A linear programming problem in which some or all of the
variables must be nonnegative integers is called an integer programming
problem.

3. The Certainty Assumption.


The certainty assumption is that each parameter (objective function coef-
ficient, righthand side, and constraint coefficient) is known with certainty.
If we were unsure of the exact amount of wood used by desks and chairs,
the Certainty Assumption would be violated.

1.3 Modeling LP Problems


This section presents LP models in which the definition of the variables and the
construction of the objective function and constraints are not straightforward.
Example 1.4. Giapetto’s Woodcarving, Inc., manufactures two types of wooden
toys: soldiers and trains. A soldier sells for $27 and uses $10 worth of raw
materials. Each soldier that is manufactured increases Giapetto’s variable labor
and overhead costs by $14. A train sells for $21 and uses $9 worth of raw
materials. Each train built increases Giapetto’s variable labor and overhead
costs by $10. The manufacture of wooden soldiers and trains requires two types
of skilled labor: carpentry and finishing. A soldier requires 2 hours of finishing
labor and 1 hour of carpentry labor. A train requires 1 hour of finishing and
1 hour of carpentry labor. Each week, Giapetto can obtain all the needed raw
material but only 100 finishing hours and 80 carpentry hours. Demand for trains
is unlimited, but at most 40 soldiers are bought each week. Giapetto wants to
maximize weekly profit (revenues - costs). Formulate a mathematical model of
Giapetto’s situation that can be used to maximize Giapetto’s weekly profit.
Solution: Let x1 = number of soldiers produced each week, x2 = number
of trains produced each week. Then, the formulation of the problem is
max z = 3x1 + 2x2
s.t. 2x1 + x2 ≤ 100
x1 + x2 ≤ 80
x1 ≤ 40
x1 ≥ 0, x2 ≥ 0
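As a quick sanity check (an added sketch, not part of the original notes; the full graphical solution of this LP appears later in Example 1.12), we can evaluate the profit z = 3x1 + 2x2 at the corner points of the feasible region, read off the intersections of the constraint boundaries:

```python
# Candidate corner points of the Giapetto feasible region (intersections of
# the boundaries 2*x1 + x2 = 100, x1 + x2 = 80, x1 = 40, x1 = 0, x2 = 0).
corners = [(0, 0), (40, 0), (40, 20), (20, 60), (0, 80)]

def feasible(x1, x2):
    return (2*x1 + x2 <= 100 and x1 + x2 <= 80
            and x1 <= 40 and x1 >= 0 and x2 >= 0)

assert all(feasible(x1, x2) for x1, x2 in corners)
profits = {p: 3*p[0] + 2*p[1] for p in corners}
best = max(profits, key=profits.get)
print(best, profits[best])   # -> (20, 60) 180
```

The largest weekly profit among the corners is $180, attained at 20 soldiers and 60 trains.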
Example 1.5. Farmer Jones must determine how many acres of corn and wheat
to plant this year. An acre of wheat yields 25 bushels of wheat and requires 10
hours of labor per week. An acre of corn yields 10 bushels of corn and requires

4 hours of labor per week. All wheat can be sold at $4 a bushel, and all corn
can be sold at $3 a bushel. Seven acres of land and 40 hours per week of labor
are available. Government regulations require that at least 30 bushels of corn
be produced during the current year. Formulate an LP whose solution will tell
Farmer Jones how to maximize the total revenue from wheat and corn.
Solution: We can formulate this problem in two ways, depending on the
choice of decision variables.

Method [1]: Let x1 = number of acres of wheat planted and x2 = number
of acres of corn planted. An acre of wheat brings in 25 × $4 = $100 and an
acre of corn brings in 10 × $3 = $30, so

max z = 100x1 + 30x2
s.t. x1 + x2 ≤ 7
10x2 ≥ 30
10x1 + 4x2 ≤ 40
x1 ≥ 0, x2 ≥ 0

Method [2]: Let x1 = number of bushels of wheat produced and x2 =
number of bushels of corn produced. Since an acre yields 25 bushels of wheat
or 10 bushels of corn, we get

max z = 4x1 + 3x2
s.t. (1/25)x1 + (1/10)x2 ≤ 7
x2 ≥ 30
(10/25)x1 + (4/10)x2 ≤ 40
x1 ≥ 0, x2 ≥ 0
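The two formulations must report the same optimal revenue. A short cross-check (an added sketch; the corner candidates below are the pairwise intersections of the Method [1] boundaries, and the conversion to bushels uses the yields 25 and 10 bushels per acre):

```python
# Method [1] corner candidates: (acres of wheat, acres of corn).
acres = [(0, 3), (0, 7), (2.8, 3), (2, 5)]

def feas1(x1, x2):
    return (x1 + x2 <= 7 + 1e-9 and 10*x2 >= 30 - 1e-9
            and 10*x1 + 4*x2 <= 40 + 1e-9 and x1 >= 0 and x2 >= 0)

z1 = max(100*x1 + 30*x2 for x1, x2 in acres if feas1(x1, x2))

# Method [2]: convert the best acreage (2.8 acres wheat, 3 acres corn)
# to bushels and price the bushels directly.
w_bushels, c_bushels = 25 * 2.8, 10 * 3          # 70 and 30 bushels
z2 = 4 * w_bushels + 3 * c_bushels
print(z1, z2)   # -> 370.0 370.0
```

Both formulations yield a maximum revenue of $370, attained at 2.8 acres of wheat and 3 acres of corn (equivalently 70 bushels of wheat and 30 bushels of corn).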
Example 1.6. The Village Butcher Shop traditionally makes its meat loaf from
a combination of lean ground beef and ground pork. The ground beef contains
80 percent meat and 20 percent fat, and costs the shop $80 per pound; the
ground pork contains 68 percent meat and 32 percent fat, and costs $60 per
pound. Formulate an LP to determine how much of each kind of meat the shop
should use in each pound of meat loaf so as to minimize cost while keeping the
fat content of the meat loaf to no more than 25 percent.
Solution: Let x1 = poundage of ground beef used in each pound of meat
loaf, x2 = poundage of ground pork used in each pound of meat loaf. Then,
the formulation of the problem is
min w = 80x1 + 60x2
s.t. 0.20x1 + 0.32x2 ≤ 0.25
x1 + x2 = 1
x1 ≥ 0, x2 ≥ 0
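Because the second constraint forces x1 + x2 = 1, this LP reduces to one variable: substituting x2 = 1 − x1, the cost becomes 80x1 + 60(1 − x1) = 60 + 20x1, which grows with x1, while the fat constraint 0.20x1 + 0.32(1 − x1) ≤ 0.25 forces x1 ≥ 7/12. A quick numerical check of this reduction (an added sketch, not part of the original notes):

```python
# Smallest admissible beef fraction from the fat constraint: x1 >= 7/12.
x1 = 7 / 12
x2 = 1 - x1
fat = 0.20 * x1 + 0.32 * x2          # should be exactly at the 25% limit
cost = 80 * x1 + 60 * x2             # = 60 + 20*x1, minimized at x1 = 7/12
assert fat <= 0.25 + 1e-12
print(round(cost, 2))                # -> 71.67
```

So the cheapest admissible loaf uses 7/12 pound of beef and 5/12 pound of pork per pound of meat loaf, at a cost of about $71.67.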
Example 1.7. Steelco manufactures two types of steel at three different steel
mills. During a given month, each steel mill has 200 hours of blast furnace time
available. Because of differences in the furnaces at each mill, the time and cost
to produce a ton of steel differs for each mill. The time and cost for each mill
are shown in the table. Each month, Steelco must manufacture at least 500
tons of steel 1 and 600 tons of steel 2. Formulate an LP to minimize the cost
of manufacturing the desired steel.

              Steel 1             Steel 2
Mill     Cost   Time (min)   Cost   Time (min)
 1       $10    20           $11    22
 2       $12    24           $9     18
 3       $14    28           $10    30

Solution: Let xij = number of tons of Steel j produced each month at Mill
i. Then a correct formulation is
min w = 10x11 + 12x21 + 14x31 + 11x12 + 9x22 + 10x32
s.t. 20x11 + 22x12 ≤ 12000
24x21 + 18x22 ≤ 12000
28x31 + 30x32 ≤ 12000
x11 + x21 + x31 ≥ 500
x12 + x22 + x32 ≥ 600
x11 , x21 , x31 , x12 , x22 , x32 ≥ 0
Exercise 1.1.

1. A furniture company manufactures desks and chairs. The sawing depart-


ment cuts the lumber for both products, which is then sent to separate
assembly departments. Assembled items are sent for finishing to the paint-
ing department. The daily capacity of the sawing department is 200 chairs
or 80 desks. The chair assembly department can produce 120 chairs daily
and the desk assembly department 60 desks daily. The paint department
has a daily capacity of either 150 chairs or 110 desks. Given that the
profit per chair is $50 and that of a desk is $100, formulate an LP to
maximize the company’s profit.

2. Truckco manufactures two types of trucks: 1 and 2. Each truck must go


through the painting shop and assembly shop. If the painting shop were
completely devoted to painting Type 1 trucks, then 800 per day could be
painted; if the painting shop were completely devoted to painting Type
2 trucks, then 700 per day could be painted. If the assembly shop were
completely devoted to assembling truck 1 engines, then 1,500 per day
could be assembled; if the assembly shop were completely devoted to
assembling truck 2 engines, then 1,200 per day could be assembled. Each
Type 1 truck contributes $300 to profit; each Type 2 truck contributes
$500. Formulate an LP to maximize Truckco’s profit.

3. Assume you want to decide between alternate ways of spending an eight-


hour day, that is, you want to allocate your resource time. Assume you
find it five times more fun to play ping-pong in the lounge than to work,
but you also feel that you should work at least three times as many hours
as you play ping-pong. Now the decision problem is how many hours to

play and how many to work in order to maximize your fun. Formulate
this problem.

4. Leary Chemical manufactures three chemicals: A, B, and C. These chemi-


cals are produced via two production processes: 1 and 2. Running process
1 for an hour costs $4 and yields 3 units of A, 1 of B, and 1 of C. Running
process 2 for an hour costs $1 and produces 1 unit of A and 1 of B. To
meet customer demands, at least 10 units of A, 5 of B, and 3 of C must
be produced daily. Formulate an LP to minimize Leary Chemical’s cost
of daily demands.

5. A company produces two products, A and B. The sales volume for A is


at least 80% of the total sales of both A and B. However, the company
cannot sell more than 100 units of A per day. Both products use one raw
material, of which the maximum daily availability is 240 lb. The usage
rates of the raw material are 2 lb per unit of A and 4 lb per unit of B.
The profit units for A and B are $20 and $50, respectively. Formulate an
LP to maximize the company’s profit.

6. A banquet hall offers two types of tables for rent: 6−person rectangular
tables at a cost of $28 each and 10−person round tables at a cost of $52
each. Kathleen would like to rent the hall for a wedding banquet and
needs tables for 250 people. The room can have a maximum of 35 tables
and the hall only has 15 rectangular tables available. Formulate an LP to
determine how many tables of each type should be rented so as to minimize
the total cost.

7. A diet is to contain at least 400 units of vitamins, 500 units of minerals,


and 1400 calories. Two foods are available: F1, which costs $0.05 per
unit, and F2, which costs $0.03 per unit. A unit of food F1 contains 2
units of vitamins, 1 unit of minerals, and 4 calories; a unit of food F2
contains 1 unit of vitamins, 2 units of minerals, and 4 calories. Formulate
an LP to minimize the cost for a diet that consists of a mixture of these
two foods and also meets the minimal nutrition requirements.

8. An appliance company has a warehouse and two terminals. To minimize


shipping costs, the manager must decide how many appliances should be
shipped to each terminal. There is a total supply of 1200 units in the
warehouse and a demand for 400 units in terminal A and 500 units in
terminal B. It costs $12 to ship each unit to terminal A and $16 to ship
to terminal B. Formulate an LP to determine how many units to ship to each
terminal so as to minimize the total shipping cost.

9. Katy needs at least 60 units of carbohydrates, 45 units of protein, and 30


units of fat each month. From each pound of food A, she receives 5 units
of carbohydrates, 3 of protein, and 4 of fat. Food B contains 2 units of
carbohydrates, 2 units of protein, and 1 unit of fat per pound. If food A

costs $1.30 per pound and food B costs $0.80 per pound, formulate an
LP to determine how many pounds of each food Katy should buy each
month so as to minimize the cost.

1.4 Geometric Preliminaries and Solutions


Any LP with only two variables can be solved graphically. We always label the
variables x1 and x2 and the coordinate axes the x1 and x2 axes.

1.4.1 Half−Spaces, Hyperplanes, and Convex Sets

Definition 1.4. We define Euclidean n−space Rn to be the
set of all n−tuples of real numbers; that is,

Rn = {(x1 , x2 , · · · , xn ) : xi ∈ R for i = 1, 2, · · · , n}

For example, R2 = {(x1 , x2 ) : x1 and x2 are reals}. Geometrically, we
represent R2 as in Figure 1.

Figure 1

The graph in R2 of an equation of the form a1 x1 +a2 x2 = c (where a1 , a2 , c


are constants) is a straight line. For example, the graph in R2 of the equation
2x1 − 3x2 = 6 is the line indicated in Figure 2.

Figure 2

The graph in R2 of the inequality a1 x1 + a2 x2 ≤ c or a1 x1 + a2 x2 ≥ c is
the set of all points in R2 lying on the line a1 x1 + a2 x2 = c together with all
points lying to one side of this line. For example, the shaded region in Figure
3 is the graph of the inequality 2x1 − 3x2 ≤ 6. To determine on which side of

Figure 3

the line the region of the inequality 2x1 − 3x2 ≤ 6 lies, consider a point not
lying on the line, say (0, 0); if this point satisfies the inequality, then the side
of the line containing it is the side corresponding to the inequality; otherwise,
it is the opposite side.

Definition 1.5. A half−space in Rn is the set of all points in


Rn satisfying an inequality of the form

a1 x1 + a2 x2 + · · · + an xn ≤ c

or an inequality of the form

a1 x1 + a2 x2 + · · · + an xn ≥ c

where at least one of the constants a1 , a2 , · · · , an is nonzero.

Definition 1.6. A hyperplane in Rn is the set of all points in


Rn satisfying an equality of the form

a1 x1 + a2 x2 + · · · + an xn = c

where at least one of the constants a1 , a2 , · · · , an is nonzero.

For example, the set of points in R5 satisfying

3x1 + (1/2)x2 − x3 + (2/3)x4 + x5 = −9

is a hyperplane in R5 , and the set of points in R5 satisfying

3x1 + (1/2)x2 − x3 + (2/3)x4 + x5 ≥ −9

is a half-space in R5 .

Definition 1.7. A subset K of Rn is convex if K is empty,


or K is a single point, or if for each two distinct points p and
q in K, the line segment connecting p and q lies entirely in
K.

Example 1.8. The sets in Figure 4 are convex.

Figure 4

Example 1.9. The sets in Figure 5 are not convex.

Figure 5

Definition 1.8. If p = (p1 , p2 , · · · , pn ) and q = (q1 , q2 , · · · , qn )
are points in Rn , then the line segment joining p and q con-
sists of all points of the form

(1 − t)p + tq ; 0 ≤ t ≤ 1

where

(1 − t)p + tq = (1 − t) (p1 , p2 , · · · , pn ) + t (q1 , q2 , · · · , qn )
             = ((1 − t)p1 + tq1 , · · · , (1 − t)pn + tqn )

Observe that if t = 0, then (1 − t)p + tq = p, and if t = 1,
then (1 − t)p + tq = q.

Example 1.10. The line segment in R2 joining the points p = (3, 6) and
q = (−4, 5) is the set of points

(1 − t)p + tq = (1 − t) (3, 6) + t (−4, 5)
             = (3(1 − t) − 4t, 6(1 − t) + 5t)
             = (3 − 7t, 6 − t) ; 0 ≤ t ≤ 1
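The computation above is easy to mechanize. A small helper (an added sketch; the function name `segment_point` is ours, not from the notes) verifies the parametrization of this segment:

```python
def segment_point(p, q, t):
    """Point (1 - t)*p + t*q on the segment joining p and q, 0 <= t <= 1."""
    return tuple((1 - t) * pi + t * qi for pi, qi in zip(p, q))

p, q = (3, 6), (-4, 5)
assert segment_point(p, q, 0) == (3, 6)       # t = 0 gives p
assert segment_point(p, q, 1) == (-4, 5)      # t = 1 gives q
t = 0.5
assert segment_point(p, q, t) == (3 - 7*t, 6 - t)   # matches (3 - 7t, 6 - t)
```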

Theorem 1.1. A half-space H in Rn that is defined either by


the inequality

a1 x1 + a2 x2 + · · · + an xn ≤ c

or the inequality

a1 x1 + a2 x2 + · · · + an xn ≥ c

is convex.

Proof. We establish this result for the half-space H defined by the inequality

a1 x1 + a2 x2 + · · · + an xn ≤ c (1.1)

A similar argument holds for half-spaces defined by a1 x1 +a2 x2 +· · ·+an xn ≥ c.
Suppose the points p = (p1 , · · · , pn ) and q = (q1 , · · · , qn ) lie in H; that is,
these points satisfy inequality (1.1), so we have

a1 p1 + a2 p2 + · · · + an pn ≤ c
a1 q1 + a2 q2 + · · · + an qn ≤ c

To show that the line segment connecting these two points lies entirely in H,
it suffices to show that for each t ∈ [0, 1], the point

(1 − t)p + tq = ((1 − t)p1 + tq1 , · · · , (1 − t)pn + tqn )

also satisfies inequality (1.1). To show this, we compute

a1 [(1 − t)p1 + tq1 ] + a2 [(1 − t)p2 + tq2 ] + · · · + an [(1 − t)pn + tqn ]
= (1 − t) (a1 p1 + a2 p2 + · · · + an pn ) + t (a1 q1 + a2 q2 + · · · + an qn )
≤ (1 − t)c + tc = c

where the last step uses 1 − t ≥ 0 and t ≥ 0, and this concludes the proof. □
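The proof can be illustrated numerically (an added sketch, using the half-space 2x1 − 3x2 ≤ 6 from Figure 3): convex combinations of two points of the half-space stay in the half-space.

```python
def in_half_space(p):
    return 2 * p[0] - 3 * p[1] <= 6 + 1e-12

p, q = (0, 0), (3, 0)            # both satisfy 2*x1 - 3*x2 <= 6
assert in_half_space(p) and in_half_space(q)
for k in range(11):
    t = k / 10
    point = tuple((1 - t) * a + t * b for a, b in zip(p, q))
    assert in_half_space(point)   # every point of the segment is in H
```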

Theorem 1.2. If K1 , K2 , · · · , Kr are convex subsets of Rn ,
then the intersection of these sets, K = K1 ∩ K2 ∩ · · · ∩ Kr
is also convex.

Proof. If K is empty or consists of a single point, then K is convex by
Definition (1.7). Suppose then that K consists of more than one point, and let p and
q be any two distinct points in K. Since p and q are in each convex set Ki ;
1 ≤ i ≤ r, the line segment L connecting p and q also lies entirely in each Ki .
Therefore, L lies in the intersection K of these sets, and we conclude that K
is convex. □

Theorem 1.3. A hyperplane M in Rn defined by

a1 x1 + a2 x2 + · · · + an xn = c

is convex.

Proof. M is the intersection of the convex half-spaces

a1 x1 + a2 x2 + · · · + an xn ≤ c

and
a1 x1 + a2 x2 + · · · + an xn ≥ c
By Theorem (1.2), this intersection is convex. □

Definition 1.9. A point q is a corner point (or an extreme


point) of a convex set K if q is not an interior point of any
line segment contained in K.

Example 1.11. The points q1 , q2 , q3 , q4 , q5 are corner points of the convex


set in Figure 6.

Figure 6

Exercise 1.2.
1. Draw the graph in R2 of the following half-spaces.

(a) −2x1 + 4x2 ≥ 12.
(b) x2 ≤ 2x1 .
(c) x1 ≥ 4.
(d) −3x2 ≤ 9.

2. Which of the following expressions define hyperplanes, half-spaces, or


neither?

(a) 2x1 + 3x2 = x2 − x4 + 3.
(b) x1 − 3x4 ≥ 3x2 + x3 .
(c) x1 x2 ≤ 1.
(d) x1 = 6 + 2/x2 .
(e) 2.5x1 − 3.2x2 = 10.
(f) x1 + x2³ ≥ 9.

3. Which of the following sets in Figure 7 are convex?

Figure 7

4. Let p = (1, 3, 2) and q = (2, 4, −1) be two points in R3 .


(a) Find the set of points that lie on the line segment joining the points
p and q.
(b) Show that the point (1.5, 3.5, 0.5) lies on the line segment joining
the points p and q.

1.4.2 The Graphical Solution of Two−Variable LP Problems


Two of the most basic concepts associated with a linear programming problem
are feasible region and optimal solution. For defining these concepts, we use
the term point to mean a specification of the value for each decision variable.

Definition 1.10. The feasible region for an LP is the set
of all points that satisfy all the LP’s constraints and sign
restrictions. Any point that is not in the LP’s feasible region
is said to be an infeasible point.

The shaded area in Figure 8 indicates the feasible region of the LP in example
(1.3). Note that each of the constraints in the LP defines a half-space. The
feasible set consists of all points in the intersection of these half-spaces. Observe
that the feasible region in Figure 8 is convex. Note that the points (0, 0), (1, 3),

4x1 + 3x2 ≤ 20
x2 ≥ 2x1
x1 ≥ 0, x2 ≥ 0

Figure 8

and (2, 4) are all in the feasible region, while (2, 1) is infeasible, because it does
not satisfy the second constraint.
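The membership test just described is mechanical. A short added sketch encoding the Furnco constraints of Figure 8 (the function name `is_feasible` is ours):

```python
def is_feasible(x1, x2):
    return (4*x1 + 3*x2 <= 20      # wood constraint
            and x2 >= 2*x1         # marketing constraint
            and x1 >= 0 and x2 >= 0)

assert is_feasible(0, 0) and is_feasible(1, 3) and is_feasible(2, 4)
assert not is_feasible(2, 1)       # (2, 1) violates x2 >= 2*x1
```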

Theorem 1.4. The feasible region in Rn corresponding to any


number of constraints of the types

a1 x1 + a2 x2 + · · · + an xn ≤ b
a1 x1 + a2 x2 + · · · + an xn = b
a1 x1 + a2 x2 + · · · + an xn ≥ b
x1 , x2 , · · · , xn ≥ 0

is convex.

Proof. The inequality constraints define half-spaces, and the equality constraints
define hyperplanes. By Theorems 1.1 and 1.3 these half-spaces and hyperplanes
are convex sets. Since the feasible region is the intersection of these convex sets,
it follows from Theorem 1.2 that the feasible region is convex. 

Definition 1.11. For a maximization (minimization) problem,
an optimal solution to an LP is a point in the feasible region
with the largest (smallest) objective function value.

The goal of any LP problem is to find the optimum, the best feasible solution
that maximizes the total profit or minimizes the cost. Having identified the
feasible region for the Furnco problem in example (1.3) as shown in Figure 8,
we now search for the optimal solution, which will be the point in the feasible
region with the largest value of z = 40x1 + 25x2 .
• To find the optimal solution, we need to graph a line on which all points
have the same z−value. In a max problem, such a line is called an isoprofit
line.
• To draw an isoprofit line, choose any point in the feasible region and
calculate its z−value. Let us choose (1, 3). For (1, 3), z = 40(1) +
25(3) = 115. Thus, (1, 3) lies on the isoprofit line z = 40x1 +25x2 = 115.
• Because all isoprofit lines are of the form 40x1 + 25x2 = constant, all
isoprofit lines have the same slope. This means that once we have drawn
one isoprofit line, we can find all other isoprofit lines by moving parallel
to the isoprofit line we have drawn in a direction that increases z.
• After a point, the isoprofit lines will no longer intersect the feasible region.
The last isoprofit line intersecting (touching) the feasible region defines
the largest z−value of any point in the feasible region and indicates the
optimal solution to the LP.
• In our problem, the objective function z = 40x1 + 25x2 will increase if we
move in a direction for which both x1 and x2 increase. Thus, we construct
additional isoprofit lines by moving parallel to 40x1 + 25x2 = 115 in a
northeast direction (upward and to the right), as shown in Figure 9.

Figure 9

• From Figure 9, we see that the isoprofit line passing through point (2, 4)
is the last isoprofit line to intersect the feasible region. Thus, (2, 4) is the
point in the feasible region with the largest z−value and is therefore the
optimal solution to the Furnco problem. The optimal value of z is
z = 40(2) + 25(4) = 180.
Example 1.12. Graphically solve the following LP problem.
max z = 3x1 + 2x2
s.t. 2x1 + x2 ≤ 100
x1 + x2 ≤ 80
x1 ≤ 40
x1 ≥ 0, x2 ≥ 0

Solution: From Figure 10, we maximize the value of the objective function
by moving the isoprofit lines in the northeast direction (up and to the right).
The optimal solution is the intersection of the two lines 2x1 + x2 = 100 and
x1 + x2 = 80, which yields x1 = 20 and x2 = 60. The maximum value of z is
z = 3(20) + 2(60) = 180.

Figure 10
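The intersection of the two binding lines can be computed by hand or, as in this added sketch, by Cramer's rule for a 2 × 2 system:

```python
# Solve the binding system 2*x1 + x2 = 100 and x1 + x2 = 80.
a1, b1, c1 = 2, 1, 100
a2, b2, c2 = 1, 1, 80
det = a1 * b2 - b1 * a2            # = 1, so the lines meet in one point
x1 = (c1 * b2 - b1 * c2) / det
x2 = (a1 * c2 - c1 * a2) / det
z = 3 * x1 + 2 * x2
print(x1, x2, z)                   # -> 20.0 60.0 180.0
```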

Note 2. In a minimization problem, to find the optimal solution, we need to
graph a line on which all points have the same w−value; such a line is called an
isocost line. Once we have drawn one isocost line, we can find all other isocost
lines by moving parallel to it in a direction
that decreases w. After a point, the isocost lines will no longer intersect the
feasible region. The last isocost line intersecting the feasible region defines the
smallest w−value of any point in the feasible region and indicates the optimal
solution to the LP.
Example 1.13. Graphically solve the following LP problem.
min w = −4x1 + 7x2
s.t. x1 + x2 ≥ 3
−x1 + x2 ≤ 3
2x1 + x2 ≤ 8
x1 ≥ 0, x2 ≥ 0
Solution: From Figure 11, we minimize the value of the objective function
by moving the isocost lines in the southeast direction (down and to the right).
The optimal solution is the intersection of the two lines 2x1 + x2 = 8 and
x2 = 0, which yields x1 = 4 and x2 = 0. The minimum value of w is
w = −4(4) + 7(0) = −16.

Figure 11
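The same answer follows from evaluating w at the corner points of the feasible region (an added check; the corners below are the pairwise intersections of the binding boundaries):

```python
# Corner points of the region of Example 1.13.
corners = [(3, 0), (4, 0), (0, 3), (5/3, 14/3)]

def feasible(x1, x2):
    return (x1 + x2 >= 3 - 1e-9 and -x1 + x2 <= 3 + 1e-9
            and 2*x1 + x2 <= 8 + 1e-9 and x1 >= 0 and x2 >= 0)

assert all(feasible(*p) for p in corners)
w = {p: -4*p[0] + 7*p[1] for p in corners}
best = min(w, key=w.get)           # minimization: take the smallest w
print(best, w[best])               # -> (4, 0) -16
```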

Once the optimal solution to an LP has been found, it is useful to classify


each constraint as being a binding constraint or a nonbinding constraint.

Definition 1.12. A constraint is binding if the left-hand side


and the right-hand side of the constraint are equal when the
optimal values of the decision variables are substituted into
the constraint. Otherwise, it is called nonbinding.

For instance, in example (1.12), the first two constraints are binding, while
the third one is nonbinding. In example (1.13), the third constraint is
binding and the other two constraints are nonbinding.

Definition 1.13. A constraint is said to be redundant if its


removal from the model leaves the feasible solution space un-
changed.

For example, the feasible region of the following constraints has a redundant
constraint as shown in Figure 12
Constraint [1]: 2x1 + x2 ≤ 6
Constraint [2]: x1 + 3x2 ≤ 9
Constraint [3]: x1 + x2 ≤ 5
Sign Restriction: x1 , x2 ≥ 0
Note that the third constraint is the redundant constraint since its removal from
the region will leave the feasible region unchanged.

Figure 12

Note 3. [Alternative Optima] An LP problem may have an infinite number
of alternative optima when the objective function is parallel to a nonredundant
binding constraint. This is indicated by the fact that as an isoprofit (or isocost)
line leaves the feasible region, it will intersect an entire line segment corre-
sponding to the binding constraint. It seems reasonable that if two points are
optimal, then any point on the line segment joining these two points will also
be optimal. If an alternative optimum occurs, then the decision maker can use
a secondary criterion to choose between optimal solutions. The technique of
goal programming is often used to choose among alternative optimal solutions.
The next example demonstrates the practical significance of such solutions.

Example 1.14. Graphically solve the following LP problem.


max z = 4x1 + x2
s.t. 8x1 + 2x2 ≤ 16
5x1 + 2x2 ≤ 12
x1 , x2 ≥ 0
Solution: The feasible region for this LP is the shaded region in Figure 13.
For our isoprofit line, we choose the line passing through the point (0.4, 1).
Because (0.4, 1) has a z−value of 4(0.4) + (1) = 2.6, this yields the isoprofit
line z = 4x1 + x2 = 2.6. Examining lines parallel to this isoprofit line in the
direction of increasing z (northeast), we find that the last “point” in the feasible
region to intersect an isoprofit line is the entire line segment joining the corner
points (4/3, 8/3) and (2, 0). This means that any point on this line segment is
optimal. These points are given by

t (4/3, 8/3) + (1 − t)(2, 0) = (2 − 2t/3, 8t/3) ; 0 ≤ t ≤ 1

Figure 13
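That every point of the optimal segment has the same objective value can be confirmed directly (an added check): substituting the parametrization into z = 4x1 + x2 gives 4(2 − 2t/3) + 8t/3 = 8 for every t.

```python
def z(x1, x2):
    return 4 * x1 + x2

assert abs(z(4/3, 8/3) - 8) < 1e-9     # corner (4/3, 8/3)
assert z(2, 0) == 8                    # corner (2, 0)
for k in range(11):
    t = k / 10
    x1, x2 = 2 - 2*t/3, 8*t/3          # point on the optimal segment
    assert abs(z(x1, x2) - 8) < 1e-9   # same z all along the segment
```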

Note 4. [Infeasible LP] It is possible for an LP’s feasible region to be empty
(contain no points), resulting in an infeasible LP. Because the optimal solution
to an LP is the best point in the feasible region, an infeasible LP has no optimal
solution.
Example 1.15. The following LP problem has no feasible solution as shown in
Figure 14.

max z = 3x1 + 2x2


s.t. 3x1 + 2x2 ≤ 120
x1 + x2 ≤ 50
x1 ≥ 30
x2 ≥ 20
x1 , x2 ≥ 0

Figure 14

Note 5. [Unbounded LP] For a max problem, an unbounded LP occurs if it is


possible to find points in the feasible region with arbitrarily large z−values, which
corresponds to a decision maker earning arbitrarily large revenues or profits.
Because no real problem allows unlimited profit, an unbounded LP almost always
signals a formulation error. Thus, if the reader ever solves an LP on the computer
and finds that the LP is unbounded, then an error has probably been made in
formulating the LP or in entering it into the computer. For a minimization
problem, an LP is unbounded if there are points in the feasible region with
arbitrarily small z−values. When graphically solving an LP, we can spot an
unbounded LP as follows:
• A max problem is unbounded if, when we move parallel to our original
isoprofit line in the direction of increasing z, we never entirely leave the
feasible region.

• A minimization problem is unbounded if we never leave the feasible region
when moving in the direction of decreasing z.

Example 1.16. Graphically solve the following LP problem.


max z = 2x1 − x2
s.t. x1 − x2 ≤ 1
2x1 + x2 ≥ 6
x1 , x2 ≥ 0

Solution: The feasible region is the


(shaded) unbounded region in Figure 15.
To find the optimal solution, we draw the
isoprofit line passing through (2, 5). This
isoprofit line has z = 2(2) − (5) = −1.
The direction of increasing z is to the
southeast (this makes x1 larger and x2
smaller). Moving parallel to z = 2x1 −x2
in a southeast direction, we see that
any isoprofit line we draw will intersect
the feasible region. (This is because
any isoprofit line is steeper than the line
x1 − x2 = 1.) Thus, there are points in
the feasible region that have arbitrarily Figure 15
large z-values.

Example 1.17. Graphically solve the following LP problem.

min w = 0.3x1 + 0.9x2


s.t. x1 + x2 ≥ 800
7x1 − 10x2 ≤ 0
3x1 − x2 ≥ 0
x1 , x2 ≥ 0

Figure 16

Solution: Although the feasible region of the LP is unbounded, it has an
optimal solution, as shown in Figure 16.

Example 1.18. Graphically solve the following LP problem.

max z = 5x1 + 6x2


s.t. x1 − 2x2 ≥ 1
−2x1 + x2 ≥ 1
x1 , x2 urs

Figure 17

Solution: The two variables x1 and x2 are unrestricted in sign means that both
can be positive, negative, or zero. The feasible region is the shaded region
in Figure 17. To find the optimal solution, we draw the isoprofit line passing
through (−3, −3). This isoprofit line has z = 5(−3) + 6(−3) = −33. The
direction of increasing z is to the northeast. Moving parallel to z = 5x1 + 6x2
in a northeast direction, we see that the last isoprofit line we draw will touch
the feasible region at the point (−1, −1). Thus, the optimal z−value is z =
5(−1) + 6(−1) = −11.
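A quick arithmetic check of this optimum (a sketch, not from the notes): the point (−1, −1) satisfies both constraints with equality and attains z = −11.

```python
x1, x2 = -1, -1                  # reported optimal point of Example 1.18
assert x1 - 2 * x2 >= 1          # first constraint (binding: equals 1)
assert -2 * x1 + x2 >= 1         # second constraint (binding: equals 1)
assert 5 * x1 + 6 * x2 == -11    # optimal z-value
print("optimal point verified: z =", 5 * x1 + 6 * x2)
```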

Exercise 1.3.

1. Match the solution region of each system of linear inequalities with one
of the four regions shown in Figure 18.

(a) x + 2y ≤ 8
3x − 2y ≥ 0

(b) x + 2y ≥ 8
3x − 2y ≤ 0

(c) x + 2y ≥ 8
3x − 2y ≥ 0

(d) x + 2y ≤ 8
3x − 2y ≤ 0
Figure 18

2. Why don’t we allow an LP to have < or > constraints?

3. Find the maximum value of each objective function over the feasible region
shown in Figure 19.

(a) z = x + y

(b) z = 4x + y

(c) z = 3x + 7y

(d) z = 9x + 3y

Figure 19

4. Find the minimum value of each objective function over the feasible region
shown in Figure 20.

(a) w = 7x + 4y

(b) w = 7x + 9y

(c) w = 3x + 8y

(d) w = 5x + 4y
Figure 20

5. The corner points for the bounded feasible region determined by the sys-
tem of linear inequalities

x + 2y ≤ 10
3x + y ≤ 15
x, y ≥ 0

are O = (0, 0), A = (0, 5), B = (4, 3), and C = (5, 0) as shown in Figure
21. If P = ax + by and a, b > 0, determine conditions on a and b that
will ensure that the maximum value of P occurs

(a) only at A

(b) only at B

(c) only at C

(d) at both A and B

(e) at both B and C

Figure 21

6. Identify the direction of increase in z in each of the following cases:
(a) Maximize z = x1 − x2 .
(b) Maximize z = −8x1 − 3x2 .
(c) Maximize z = −x1 + 3x2 .
7. Identify the direction of decrease in w in each of the following cases:
(a) Minimize w = 4x1 − 2x2 .
(b) Minimize w = −6x1 + 2x2 .
8. Determine the solution space graphically for the following inequalities.
Which constraints are redundant? Reduce the system to the smallest
number of constraints that will define the same solution space.
x+y ≤4
4x + 3y ≤ 12
−x + y ≥ 1
x+y ≤6
x, y ≥ 0

9. Write the constraints associated with the solution space shown in Figure
22 and identify the redundant constraints.

Figure 22

10. Consider the following problem:

max z = 6x1 − 2x2


s.t. 3x1 − x2 ≤ 6
x1 − x2 ≤ 1
x1 , x2 ≥ 0

Show graphically that at the optimal solution, the variables x1 and x2 can
be increased indefinitely while the value of the objective function remains
constant.

11. Consider the following problem:

max z = 3x1 + 2x2


s.t. 2x1 + x2 ≤ 2
3x1 + 4x2 ≥ 12
x1 , x2 ≥ 0

Show graphically that the problem has no feasible optimal solution.

12. Solve the following problem by inspection without graphing the feasible
region.

min w = 5x1 + 2x2


s.t. x1 + x2 = 5
7x1 − 5x2 = −1
x1 , x2 ≥ 0

13. Solve the following problems graphically.

(a) min w = 4x1 + x2
    s.t. 3x1 + x2 ≥ 6
         4x1 + x2 ≥ 12
         x1 ≥ 2
         x1 , x2 ≥ 0

(b) max z = 5x1 − x2
    s.t. 2x1 + 3x2 ≥ 12
         x1 − 3x2 ≥ 0
         x1 , x2 ≥ 0

(c) min w = 5x1 + x2
    s.t. 2x1 + x2 ≥ 6
         x1 + x2 ≥ 4
         x1 + 5x2 ≥ 10
         x1 , x2 ≥ 0

(d) max z = −4x1 + x2
    s.t. 3x1 + 2x2 ≤ 8
         x1 + x2 ≤ 12
         x1 ≥ 0 and x2 urs

(e) max z = x1 + 5x2
    s.t. x1 − 3x2 ≥ 0
         x1 + x2 ≤ 8
         x1 , x2 ≥ 0

(f) min w = 4x1 + x2
    s.t. 3x1 + x2 ≥ 10
         x1 + x2 ≥ 5
         x1 ≥ 3
         x1 , x2 ≥ 0

1.5 The Corner Point Theorem and its Proof


In practice, a typical LP may include hundreds or even thousands of variables
and constraints. Of what good then is the study of a two-variable LP? The
answer is that the graphical solution provides one of the most important results
in linear programming: “The optimum solution of an LP, when it exists,
is always associated with a corner point of the solution space”, thus limiting
the search for the optimum from an infinite number of feasible points to a finite
number of corner points. This powerful result is the basis for the development
of the general algebraic simplex method presented in Chapter 2.

Theorem 1.5. [Corner Point Theorem]
Consider the LP problem

Maximize ( or Minimize) z = c1 x1 + c2 x2 + · · · + cn xn

subject to a system of inequalities of the types

a1 x1 + a2 x2 + · · · + an xn ≤ b
a1 x1 + a2 x2 + · · · + an xn = b
a1 x1 + a2 x2 + · · · + an xn ≥ b
x1 , x2 , · · · , xn ≥ 0

1. If the feasible region is bounded, then the optimal solu-


tion is attained at a corner point of this feasible region.

2. If the feasible region is unbounded, then an optimal


solution may not exist; however, if an optimal solution
exists, it is attained at a corner point of this feasible
region.

Proof. We defer the proof to the end of the section, where we give a proof only for
bounded regions by assuming Theorems (1.7) and (1.8), which will not be proven.
The proof of these theorems and the unbounded case of Theorem (1.5) can be
found in: Jan Van Tiel, Convex Analysis, New York: Wiley, 1984. 
Note 6. Theorem (1.5) suggests that an optimal solution of an LP problem
may not exist. There can be two reasons for this:
1. The feasible region is empty; that is, there are no feasible solutions as in
example (1.15).
2. The feasible region is unbounded as in example (1.16). However, an LP
problem with an unbounded feasible region can have an optimal solution
as shown in example (1.17).
To prove Theorem (1.5) we need some additional notations and some pre-
liminary results. First, we adopt functional notation to describe the objective
function. We denote
z = c1 x1 + c2 x2 + · · · + cn xn
by the function f : Rn → R defined by
f (x1 , x2 , · · · , xn ) = c1 x1 + c2 x2 + · · · + cn xn .
Note that if p = (p1 , p2 , · · · , pn ) is a point in Rn , then
f (p) = f (p1 , p2 , · · · , pn ) = c1 p1 + c2 p2 + · · · + cn pn .

Definition 1.14. A polyhedron is the intersection of a finite
number of half-spaces and/or hyperplanes. Points that lie in
the polyhedron and on one or more of the half-spaces or hy-
perplanes defining the polyhedron are called boundary points.
Points that lie in the polyhedron but are not boundary points
are called interior points.

Theorem 1.6. Suppose that f : Rn → R is defined by

f (x1 , x2 , · · · , xn ) = c1 x1 + c2 x2 + · · · + cn xn .

If p = (p1 , p2 , · · · , pn ) and q = (q1 , q2 , · · · , qn ) are two points


in Rn , and if r = (1−t)p+tq is any point on the line segment
joining p and q, then f (r) is between f (p) and f (q). That is, if
f (p) ≤ f (q) then f (p) ≤ f (r) ≤ f (q); or if f (q) ≤ f (p)
then f (q) ≤ f (r) ≤ f (p).

Proof. Suppose that f (p) ≤ f (q). (You can prove the case f (q) ≤ f (p)
by interchanging p and q in the following argument.) We first observe that
f (r) = (1 − t)f (p) + tf (q). To see this, note that

f (r) = f ((1 − t)p + tq)


= f ((1 − t)p1 + tq1 , (1 − t)p2 + tq2 , · · · , (1 − t)pn + tqn )
= c1 ((1 − t)p1 + tq1 ) + c2 ((1 − t)p2 + tq2 ) + · · · + cn ((1 − t)pn + tqn )
= (1 − t) (c1 p1 + c2 p2 + · · · + cn pn ) + t (c1 q1 + c2 q2 + · · · + cn qn )
= (1 − t)f (p) + tf (q).

Since f (p) ≤ f (q) we have

f (p) = (1 − t)f (p) + tf (p)


≤ (1 − t)f (p) + tf (q) = f (r)

and

f (r) = (1 − t)f (p) + tf (q)


≤ (1 − t)f (q) + tf (q) = f (q)

from which it follows that f (p) ≤ f (r) ≤ f (q). 
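The identity f (r) = (1 − t)f (p) + tf (q) at the heart of this proof is easy to check numerically. The sketch below (my own illustration; the sample c, p, and q are arbitrary) verifies it with exact rational arithmetic:

```python
from fractions import Fraction as F

c = (F(2), F(-1), F(3))                       # coefficients of f

def f(x):                                     # f(x) = c1*x1 + c2*x2 + c3*x3
    return sum(ci * xi for ci, xi in zip(c, x))

p, q = (F(1), F(0), F(2)), (F(-3), F(4), F(1))

for t in [F(0), F(1, 3), F(1, 2), F(2, 3), F(1)]:
    r = tuple((1 - t) * pi + t * qi for pi, qi in zip(p, q))
    # linearity along the segment joining p and q
    assert f(r) == (1 - t) * f(p) + t * f(q)
    # hence f(r) lies between f(p) and f(q)
    assert min(f(p), f(q)) <= f(r) <= max(f(p), f(q))
print("f(r) lies between f(p) and f(q) for all tested t")
```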

Definition 1.15. Let K1 , K2 , · · · , Km be points in Rn . A
convex combination of K1 , K2 , · · · , Km is any point p that
can be written as

p = a1 K1 + a2 K2 + · · · + am Km

where a1 , a2 , · · · , am are nonnegative numbers such that

a1 + a2 + · · · + am = 1.

For instance, the point (21/8, 7/4) is a convex combination of the points (1, 2),
(0, 1), (3, 1), and (4, 2) because

(21/8, 7/4) = (1/4)(1, 2) + (1/8)(0, 1) + (1/8)(3, 1) + (1/2)(4, 2)

and 1/4 + 1/8 + 1/8 + 1/2 = 1.
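The arithmetic behind this example can be verified mechanically (an illustrative sketch, not from the notes):

```python
from fractions import Fraction as F

points  = [(1, 2), (0, 1), (3, 1), (4, 2)]
weights = [F(1, 4), F(1, 8), F(1, 8), F(1, 2)]

# the weights are nonnegative and sum to 1 ...
assert all(a >= 0 for a in weights) and sum(weights) == 1

# ... so the weighted sum is a convex combination of the four points
combo = tuple(sum(a * p[i] for a, p in zip(weights, points)) for i in range(2))
assert combo == (F(21, 8), F(7, 4))
print("convex combination:", combo)
```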

Theorem 1.7. Suppose that K is a bounded polyhedron with


corner points K1 , K2 , · · · , Km . Then any point p in K is a
convex combination of K1 , K2 , · · · , Km .

Theorem 1.8. Suppose that f : Rn → R is defined by

f (x1 , x2 , · · · , xn ) = c1 x1 + c2 x2 + · · · + cn xn

and let K1 , K2 , · · · , Km be any m points in Rn . Then for


any constants b1 , b2 , · · · , bm ,

f (b1 K1 + b2 K2 + · · · + bm Km )
= b1 f (K1 ) + b2 f (K2 ) + · · · + bm f (Km ) .

Proof of Theorem (1.5) for a Bounded Feasible Region K in Rn . Suppose


that K1 , K2 , · · · , Km are the corner points of the feasible region K in Theorem
(1.5), and assume that these points have been labeled so that

f (K1 ) ≤ f (Ki ) ≤ f (Km )

for each i. Let p be any point in K. Then by Theorem (1.7) there are nonneg-
ative constants a1 , a2 , · · · , am whose sum is 1, and so that

p = a1 K1 + a2 K2 + · · · + am Km

By Theorem (1.8),

f (p) = f (a1 K1 + a2 K2 + · · · + am Km )
= a1 f (K1 ) + a2 f (K2 ) + · · · + am f (Km )

Since a1 + a2 + · · · + am = 1, it follows that

f (K1 ) = (a1 + a2 + · · · + am ) f (K1 )


= a1 f (K1 ) + a2 f (K1 ) + · · · + am f (K1 )
≤ a1 f (K1 ) + a2 f (K2 ) + · · · + am f (Km ) = f (p)

and

f (p) = a1 f (K1 ) + a2 f (K2 ) + · · · + am f (Km ) ≤ a1 f (Km ) + a2 f (Km ) + · · · + am f (Km )


= (a1 + a2 + · · · + am ) f (Km ) = f (Km )

from which it follows that

f (K1 ) ≤ f (p) ≤ f (Km )

for any point p in K. We conclude that the objective function f (x1 , x2 , · · · , xn )


takes on a maximum value at the corner point Km and a minimum value at the
corner point K1 .

2 The Simplex Method
Thus far we have used a geometric approach to solve certain LP problems. We
have observed, in Chapter 1, that this procedure is limited to problems of two
or three variables. The simplex algorithm is essentially algebraic in nature and
is more efficient than its geometric counterpart.

2.1 The Idea of the Simplex Method


The graphical method presented in Chapter 1 demonstrates that the optimum
of an LP is always associated with a corner point of the solution space. What the
simplex method does is to translate the geometric definition of the extreme
point into an algebraic definition.
As an initial step, the simplex method requires that each of the constraints
be put in a special standard form in which all the constraints are expressed as
equations. This conversion results in a set of equations in which the number
of variables exceeds the number of equations, which means that the system
yields an infinite number of solution points.
The extreme points of this space can be identified algebraically by setting a
certain number of variables to zero and then solving for the remaining variables
(called the basic solutions), provided that this choice results in a unique
solution.
Corner Points ⇐⇒ Basic Solutions
What the simplex method does is identify a starting basic solution and then
move systematically to other basic solutions that have the potential to improve
the value of the objective function. Eventually, the basic solution corresponding
to the optimum will be identified and the computational process will end.

2.2 Converting an LP to Standard Form
We have seen that an LP can have both equality and inequality constraints.
It also can have variables that are required to be nonnegative as well as those
allowed to be unrestricted in sign (urs). The development of the simplex method
computations is facilitated by imposing two requirements on the LP model:

1. All the constraints are equations with nonnegative right-hand side.

2. All the variables are nonnegative.

An LP in this form is said to be in standard form. To convert an LP into


standard form, we do the following steps.

1. If the right-hand side of a constraint is negative, multiply both sides of the

constraint by −1. This multiplication will convert a ≤ sign to ≥ and vice
versa.

2. To convert a ≤ inequality to an equation, a nonnegative slack variable


(unused amount) is added to the left-hand side of the constraint. For
example, the constraint 3x1 + 2x2 ≤ 12 is converted into an equation as

3x1 + 2x2 + s = 12 ; s ≥ 0.

3. Conversion from ≥ to = is achieved by subtracting a nonnegative surplus


(excess amount) variable from the left-hand side of the inequality. For
example, the surplus variable e converts the constraint 5x1 + 3x2 ≥ 4 to
the equation
5x1 + 3x2 − e = 4 ; e ≥ 0.

4. An unrestricted variable xi can be presented in terms of two nonnegative


variables by using the substitution

xi = yi1 − yi2 ; yi1 , yi2 ≥ 0

The substitution must be effected throughout all the constraints and the
objective function. In the optimal LP solution only one of the two variables
yi1 and yi2 can assume a positive value, but never both. Thus, when
yi1 > 0, yi2 = 0 and vice versa. For example, if xi = 4 then yi1 = 4 and
yi2 = 0, and if xi = −4 then yi1 = 0 and yi2 = 4.
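The substitution in step 4 can be sketched as a small helper function (the name split_urs is mine, not from the notes): split an unrestricted value into its positive and negative parts.

```python
def split_urs(x):
    """Split an unrestricted value x into (y1, y2) with
    x = y1 - y2, y1 >= 0, y2 >= 0, and at most one of them positive."""
    return (max(x, 0), max(-x, 0))

assert split_urs(4)  == (4, 0)    # x =  4  ->  y1 = 4, y2 = 0
assert split_urs(-4) == (0, 4)    # x = -4  ->  y1 = 0, y2 = 4
assert split_urs(0)  == (0, 0)    # x =  0  ->  both parts are zero
print("all splits verified")
```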

Example 2.1. Write the following LP problem in standard form.


min w = 2x1 + 3x2
s.t. x1 + x2 = 10
−2x1 + 3x2 ≤ −5
7x1 − 4x2 ≤ 6
x1 urs, x2 ≥ 0

Solution: The following changes must be effected.

1. Multiply both sides of the second constraint by −1 to get 2x1 − 3x2 ≥


5, then subtract excess variable e2 ≥ 0 from the left-hand side of the
constraint.

2. Add a slack variable s3 ≥ 0 to the left-hand side of the third constraint.

3. Substitute x1 = y11 − y12 , where y11 , y12 ≥ 0, in the objective function


and all the other constraints.

Thus we get the standard form as


min w = 2y11 − 2y12 + 3x2
s.t. y11 − y12 + x2 = 10
2y11 − 2y12 − 3x2 − e2 = 5
7y11 − 7y12 − 4x2 + s3 = 6
y11 , y12 , x2 , e2 , s3 ≥ 0
Exercise 2.1.

1. Convert the following LP to the standard form.

max z = 2x1 + 3x2 + 5x3


s.t. x1 + x2 − x3 ≥ −5
−6x1 + 7x2 − 9x3 ≤ 4
x1 + x2 + 4x3 = 10
x1 , x2 ≥ 0, x3 urs

2. Consider the inequality

22x1 − 4x2 ≥ −7

Show that multiplying both sides of the inequality by −1 and then con-
verting the resulting inequality into an equation is the same as converting
it first to an equation and then multiplying both sides by −1.

3. The substitution x = y1 − y2 is used in an LP to replace unrestricted x


by the two nonnegative variables y1 and y2 . If x assumes the respective
values −6, 10, and 0, determine the associated optimal values of y1 and
y2 in each case.

2.3 Basic Feasible Solutions


Suppose we have converted an LP with m constraints into standard form. As-
suming that the standard form contains n variables (labeled for convenience
x1 , x2 , · · · , xn ), where n ≥ m, the standard form for such an LP is

max (or min) z = c1 x1 + c2 x2 + · · · + cn xn
s.t. a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
..
.
am1 x1 + am2 x2 + · · · + amn xn = bm
x1 , x2 , · · · , xn ≥ 0
If we define
     
a11 a12 ··· a1n x1 b1
 a21 a22 ··· a2n   x2   b2 
A= . ..  , x =  .  and b =  . 
     
.. ..
 .. . . .  .
 .   .. 
am1 am2 · · · amn xn bm

the constraints for the LP may be written as the system of equations Ax = b.

Definition 2.1. A basic solution to Ax = b is obtained by


setting n−m variables (the nonbasic variables, or NBV) equal
to 0 and solving for the values of the remaining m variables
(the basic variables, or BV), provided the resulting solution is
unique.

Note 7. The maximum number of corner points is C(n, m) = n! / (m! (n − m)!).
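This bound is just a binomial coefficient, available in Python's standard library; a quick check (an illustration, not from the notes) for the sizes appearing in this chapter:

```python
from math import comb

# upper bound on the number of basic solutions:
# choose which m of the n variables are basic
assert comb(3, 2) == 3    # the 2-equation, 3-variable system below
assert comb(4, 2) == 6    # Example 2.2: n = 4 variables, m = 2 equations
print(comb(3, 2), comb(4, 2))
```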
Of course, the different choices of nonbasic variables will lead to different
basic solutions. To illustrate, we find all the basic solutions to the following
system of two equations (m = 2) in three variables (n = 3):

x1 + x2 = 3
− x2 + x3 = −1

We begin by choosing a set of n − m = 3 − 2 = 1 nonbasic variable, and note


that there are C23 = 3 choices of this nonbasic variable.
• If NBV= {x3 }, then BV= {x1 , x2 }. We obtain the values of the basic
variables by setting x3 = 0 and solving

x1 + x2 = 3
− x2 = −1

We find that x1 = 2, x2 = 1. Thus, (x1 , x2 , x3 ) = (2, 1, 0) is a basic


solution to the system.

• If NBV= {x2 }, then BV= {x1 , x3 }. We obtain the values of the basic
variables by setting x2 = 0 and we find that x1 = 3, x3 = −1. Thus,
(x1 , x2 , x3 ) = (3, 0, −1) is a basic solution to the system.

• If NBV= {x1 }, then BV= {x2 , x3 }. We obtain the values of the basic
variables by setting x1 = 0 and solving

x2 = 3
− x2 + x3 = −1

We find that x2 = 3, x3 = 2. Thus, (x1 , x2 , x3 ) = (0, 3, 2) is a basic


solution to the system.

The following table provides all the basic and nonbasic solutions of the above
linear system.

NBVs BVs Basic Solution


x1 x2 , x3 3, 2
x2 x1 , x3 3, −1
x3 x1 , x2 2, 1

Note 8. Some sets of m variables do not yield a basic solution. For example,
consider the following linear system:

x1 + 2x2 + x3 = 1
2x1 + 4x2 + x3 = 3

If we choose NBV= {x3 } and BV= {x1 , x2 }, the corresponding basic solution
would be obtained by solving

x1 + 2x2 = 1
2x1 + 4x2 = 3

Because this system has no solution, there is no basic solution corresponding to


BV= {x1 , x2 }.

Definition 2.2. Any basic solution to the constraints Ax = b


of an LP in which all variables are nonnegative is a basic
feasible solution (or bfs).

For example, for an LP with the constraints given by

x1 + x2 = 3
− x2 + x3 = −1

the basic solutions x1 = 2, x2 = 1, x3 = 0, and x1 = 0, x2 = 3, x3 = 2 are


basic feasible solutions, but the basic solution x1 = 3, x2 = 0, x3 = −1 fails to
be a feasible solution (because x3 < 0).

Theorem 2.1. A point in the feasible region of an LP is an
extreme point if and only if it is a basic feasible solution to
the LP.

Example 2.2. Consider the following LP with two variables.


max z = 2x1 + 3x2
s.t. 2x1 + x2 ≤ 4
x1 + 2x2 ≤ 5
x1 , x2 ≥ 0
Solution: By adding slack variables s1 and s2 , respectively, we obtain the LP
in standard form:
max z = 2x1 + 3x2
s.t. 2x1 + x2 + s1 = 4
x1 + 2x2 + s2 = 5
x1 , x2 , s1 , s2 ≥ 0
Figure 23 provides the graphical solution space for the problem. Algebraically,

Figure 23

the solution space of the LP is represented by the following m = 2 equations


and n = 4 variables:
2x1 + x2 + s1 =4
x1 + 2x2 + s2 = 5
The basic solutions are determined by setting n − m = 2 variables equal to zero
and solving for the remaining m = 2 variables. For example, if we set x1 = 0
and x2 = 0, the equations provide the unique basic solution

s1 = 4 and s2 = 5.

This solution corresponds to point A in Figure 23. Another point can be de-
termined by setting s1 = 0 and s2 = 0 and then solving the resulting two
equations
2x1 + x2 = 4
x1 + 2x2 = 5
The associated basic solution is x1 = 1, x2 = 2, or point C in Figure 23. In the
present example, the maximum number of corner points is C24 = 6. Looking at
Figure 23, we can spot the four corner points A, B, C, and D. So, where are
the remaining two? In fact, points E and F also are corner points. But, they
are infeasible, and, hence, are not candidates for the optimum. The following
table provides all the basic and nonbasic solutions of the current example.
NBVs        BVs         Basic Solution   Corner Point   Feasible?   z−Value
x1 , x2     s1 , s2     4, 5             A              ✓           0
x1 , s1     x2 , s2     4, −3            F              ✗           —
x1 , s2     x2 , s1     2.5, 1.5         B              ✓           7.5
x2 , s1     x1 , s2     2, 3             D              ✓           4
x2 , s2     x1 , s1     5, −6            E              ✗           —
s1 , s2     x1 , x2     1, 2             C              ✓           8 (Optimum)
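The enumeration in this table can be reproduced mechanically: for each choice of two basic variables, solve the resulting 2 × 2 system (here by Cramer's rule) and test feasibility. The following sketch (my own, using exact rational arithmetic) recovers the optimum z = 8 at (x1, x2) = (1, 2):

```python
from fractions import Fraction as F
from itertools import combinations

# standard form of Example 2.2 -- columns: x1, x2, s1, s2
A = [[F(2), F(1), F(1), F(0)],
     [F(1), F(2), F(0), F(1)]]
b = [F(4), F(5)]
c = [F(2), F(3), F(0), F(0)]              # objective: max z = 2x1 + 3x2
names = ["x1", "x2", "s1", "s2"]

best = None
for i, j in combinations(range(4), 2):    # choose the two basic variables
    # solve the 2x2 system in columns i, j by Cramer's rule
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if det == 0:
        continue                          # no unique solution -> not a basic solution
    xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det
    xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
    x = [F(0)] * 4
    x[i], x[j] = xi, xj
    z = sum(ck * xk for ck, xk in zip(c, x))
    feasible = xi >= 0 and xj >= 0
    print(names[i], names[j], (xi, xj),
          "feasible" if feasible else "infeasible", z)
    if feasible and (best is None or z > best[0]):
        best = (z, x)

print("optimum: z =", best[0], "at x =", best[1])
```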
Exercise 2.2.

1. Consider the following LP:

max z = 2x1 + 3x2


s.t. x1 + 3x2 ≤ 12
3x1 + 2x2 ≤ 12
x1 , x2 ≥ 0

(a) Express the problem in equation form.


(b) Determine all the basic solutions of the problem, and classify them
as feasible and infeasible.
(c) Use direct substitution in the objective function to determine the
optimum basic feasible solution.
(d) Verify graphically that the solution obtained in (c) is the optimum LP
solution, hence, conclude that the optimum solution can be deter-
mined algebraically by considering the basic feasible solutions only.
(e) Show how the infeasible basic solutions are represented on the graph-
ical solution space.

2. Determine the optimum solution for each of the following LPs by enumer-
ating all the basic solutions.

(a) max z = 2x1 − 4x2 + 5x3 − 6x4
s.t. x1 + 5x2 − 2x3 + 8x4 ≤ 2
−x1 + 2x2 + 3x3 + 4x4 ≤ 1
x1 , x2 , x3 , x4 ≥ 0
(b) min w = x1 + 2x2 − 3x3 − 2x4
s.t. x1 + 2x2 − 3x3 + x4 = 4
x1 + 2x2 + x3 + 2x4 = 4
x1 , x2 , x3 , x4 ≥ 0
3. Show algebraically that all the basic solutions of the following LP are
infeasible.

max z = x1 + x2
s.t. x1 + 2x2 ≤ 3
2x1 + x2 ≥ 8
x1 , x2 ≥ 0

4. Consider the following LP:

max z = x1 + 3x2
s.t. x1 + x2 ≤ 2
−2x1 + x2 ≤ 4
x1 urs, x2 ≥ 0

(a) Determine all the basic feasible solutions of the problem.


(b) Use direct substitution in the objective function to determine the
best basic solution.
(c) Solve the problem graphically, and verify that the solution obtained
in (b) is the optimum.

2.4 The Simplex Algorithm


Rather than enumerating all the basic solutions (corner points) of the LP prob-
lem, as we did in example (2.2), the simplex method investigates only a “se-
lect few” of these solutions. This section describes the iterative nature of the
method, and provides the computational details of the simplex algorithm.
Before describing the simplex algorithm in general terms, we need to define
the concept of an adjacent basic feasible solution.

Definition 2.3. For any LP with m constraints, two basic


feasible solutions are said to be adjacent if their sets of basic
variables have m − 1 basic variables in common.

For example, in Figure 23, two basic feasible solutions will be adjacent if they
have 2 − 1 = 1 basic variable in common. Thus, the bfs corresponding to
point B in Figure 23 is adjacent to the bfs corresponding to point C but is not
adjacent to bfs D. Intuitively, two basic feasible solutions are adjacent if they
both lie on the same edge of the boundary of the feasible region.
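Adjacency is a one-line set computation; the sketch below (variable names mine, not from the notes) checks the B, C, D example:

```python
# basic-variable sets of the corner points in Figure 23 (m = 2 constraints)
B, C, D = {"x2", "s1"}, {"x1", "x2"}, {"x1", "s2"}

def adjacent(p, q):
    # adjacent <=> the two bases share m - 1 = 1 basic variable
    return len(p & q) == 2 - 1

assert adjacent(B, C)         # B and C share x2: adjacent
assert not adjacent(B, D)     # B and D share nothing: not adjacent
print("adjacency checks pass")
```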

2.4.1 Iterative Nature of the Simplex Method


Figure 23 provides the solution space of the LP of example (2.2). For the sake
of standardizing the algorithm, the simplex method always starts at the origin
where all the decision variables xi are zero. In Figure 23, point A is the origin
(x1 = x2 = 0) and the associated objective value, z, is zero. The logical
question now is whether an increase in the values of nonbasic x1 and x2 above
their current zero values can improve (increase) the value of z. We can answer
this question by investigating the objective function:
max z = 2x1 + 3x2
An increase in x1 or x2 (or both) above their current zero values will improve
the value of z. The design of the simplex method does not allow simultaneous
increases in variables. Instead, it targets the variables one at a time. The
variable slated for increase is the one with the largest rate of improvement in
z. In the present example, the rate of improvement in the value of z is 2 for x1
and 3 for x2 . We thus elect to increase x2 (the variable with the largest rate of
improvement among all nonbasic variables). Figure 23 shows that the value of
x2 must be increased until corner point B is reached (recall that stopping short
of corner point B is not an option because a candidate for the optimum must
be a corner point). At point B, the simplex method, as will be explained later,
will then increase the value of x1 to reach the improved corner point C, which
is the optimum.
The path of the simplex algorithm always connects adjacent corner points.
In the present example the path to the optimum is A → B → C. Each corner
point along the path is associated with an iteration. It is important to note that
the simplex method always moves alongside the edges of the solution space,
which means that the method does not cut across the solution space. For
example, the simplex algorithm cannot go from A to C directly since they are
not adjacent.

2.4.2 Computational Details of the Simplex Algorithm


We now describe how the simplex algorithm can be used to solve LPs in which
the goal is to maximize the objective function. The solution of minimization
problems is discussed in Section 2.5. The simplex algorithm proceeds as follows:
Step 1: Convert the LP to standard form. Then, write the objective function
z = c1 x1 + c2 x2 + · · · + cn xn

in the form
z − c1 x1 − c2 x2 − · · · − cn xn = 0.
We call this format the row 0 version of the objective function (row 0 for
short).

Step 2: Obtain a bfs (if possible) from the standard form. This is easy if all
the constraints are ≤ with nonnegative right-hand sides. Then the slack
variable si may be used as the basic variable for row i. If no bfs is readily
apparent, then use the technique discussed in Section 2.6 to find a bfs.

Step 3: Determine whether the current bfs is optimal. If all nonbasic variables
have nonnegative coefficients in row 0, then the current bfs is optimal. If
any variables in row 0 have negative coefficients, then choose the variable
with the most negative coefficient in row 0 to enter the basis. We call
this variable the entering variable.

Step 4: If the current bfs is not optimal, then determine which nonbasic variable
should become a basic variable and which basic variable should become a
nonbasic variable to find a new bfs with a better objective function value.
When entering a variable into the basis, compute the ratio
(Right-hand side of constraint) ÷ (Coefficient of entering variable in constraint)
for every constraint in which the entering variable has a positive coeffi-
cient. The constraint with the smallest ratio is called the winner of the
ratio test. The smallest ratio is the largest value of the entering variable
that will keep all the current basic variables nonnegative.

Step 5: Use elementary row operations (EROs) to find the new bfs with the
better objective function value by making the entering variable a basic
variable (has coefficient 1 in pivot row, and 0 in other rows) in the con-
straint that wins the ratio test. Go back to step 3.
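The five steps above can be sketched as a compact tableau implementation for maximization LPs whose constraints are all ≤ with nonnegative right-hand sides (so the slack variables supply the starting bfs). This is an illustrative sketch, not code from the notes; names and structure are my own.

```python
from fractions import Fraction as F

def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, with all b[i] >= 0,
    so that the slack variables give the starting basic feasible solution."""
    m, n = len(A), len(c)
    # Step 1: tableau -- row 0 holds z - c.x = 0; columns x1..xn, s1..sm, RHS
    T = [[-F(cj) for cj in c] + [F(0)] * (m + 1)]
    for i in range(m):
        T.append([F(a) for a in A[i]]
                 + [F(1) if k == i else F(0) for k in range(m)]
                 + [F(b[i])])
    basis = [n + i for i in range(m)]          # Step 2: slacks start as basic
    while True:
        # Step 3: entering variable = most negative coefficient in row 0
        col = min(range(n + m), key=lambda j: T[0][j])
        if T[0][col] >= 0:
            break                              # all coefficients nonnegative: optimal
        # Step 4: ratio test over rows where the entering column is positive
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(1, m + 1) if T[i][col] > 0]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)
        # Step 5: pivot -- turn the entering column into a unit column
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0:
                f = T[i][col]
                T[i] = [vi - f * vr for vi, vr in zip(T[i], T[row])]
        basis[row - 1] = col
    x = [F(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i + 1][-1]                # read the decision variables' values
    return T[0][-1], x

# the LP of Example 2.2: max z = 2x1 + 3x2, 2x1 + x2 <= 4, x1 + 2x2 <= 5
z, x = simplex_max([2, 3], [[2, 1], [1, 2]], [4, 5])
print(z, x)                                    # optimum z = 8 with x1 = 1, x2 = 2
```

Because the entering rule (most negative coefficient, leftmost on ties) and the ratio test are deterministic, this sketch visits the same sequence of tableaus computed by hand in the worked examples that follow.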

2.4.3 Representing the Simplex Tableau


The tabular form of the simplex method records only the essential information:

• the coefficients of the variables,

• the constants on the right-hand sides of the equations,

• the basic variable appearing in each equation.

This saves writing the symbols for the variables in each of the equations, but
what is even more important is the fact that it permits highlighting the numbers

involved in arithmetic calculations and recording the computations compactly.
For example, the form

z − 2x1 − 3x2 =0
2x1 + x2 + s1 =4
x1 + 2x2 + s2 = 5

would be written in abbreviated form as shown in the following table.


Basic x1 x2 s1 s2 RHS
z −2 −3 0 0 0
s1 2 1 1 0 4
s2 1 2 0 1 5
The layout of the simplex tableau automatically provides the solution at the
starting iteration. The solution starts at the origin (x1 , x2 ) = (0, 0), thus
defining (x1 , x2 ) as the nonbasic variables and (s1 , s2 ) as the basic variables.
The associated objective z and the basic variables (s1 , s2 ) are listed in the
leftmost Basic-column. Their values, z = 0, s1 = 4, s2 = 5, appearing in
the rightmost RHS-column, are given directly by the right-hand sides of the
model’s equations (a convenient consequence of starting at the origin). The
result can be seen by setting the nonbasic variables (x1 , x2 ) equal to zero in all
the equations, and also by noting the special identity-matrix arrangement of the
constraint coefficients of the basic variables (all diagonal elements are 1, and
all off-diagonal elements are 0).
Example 2.3. Solve the following LP problem using the simplex method.
max z = 2x1 + 3x2
s.t. 2x1 + x2 ≤ 4
x1 + 2x2 ≤ 5
x1 , x2 ≥ 0
Solution: By adding slack variables s1 and s2 , respectively, we obtain the LP
in standard form:
max z − 2x1 − 3x2 = 0
s.t. 2x1 + x2 + s1 = 4
x1 + 2x2 + s2 = 5
x1 , x2 , s1 , s2 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below.

Iteration [0] Basic x1 x2 s1 s2 RHS
z −2 −3 0 0 0
s1 2 1 1 0 4 Ratio= 4/1 = 4
← s2 1 2 0 1 5 Ratio= 5/2 = 2.5


Iteration [1] Basic x1 x2 s1 s2 RHS
z −1/2 0 0 3/2 15/2
← s1 3/2 0 1 −1/2 3/2 Ratio= 3/2 ÷ 3/2 = 1
x2 1/2 1 0 1/2 5/2 Ratio= 5/2 ÷ 1/2 = 5

Iteration [2] Basic x1 x2 s1 s2 RHS Optimal Tableau


z 0 0 1/3 4/3 8 z=8
x1 1 0 2/3 −1/3 1 x1 = 1, x2 = 2
x2 0 1 −1/3 2/3 2 s1 = 0, s2 = 0

Example 2.4. Solve the following LP problem using the simplex method.
max z = 4x1 + 4x2
s.t. 6x1 + 4x2 ≤ 24
x1 + 2x2 ≤ 6
−x1 + x2 ≤ 1
x2 ≤ 2
x1 , x2 ≥ 0
Solution: By adding slack variables s1 , s2 , s3 and s4 , respectively, we obtain
the LP in standard form:
max z − 4x1 − 4x2 = 0
s.t. 6x1 + 4x2 + s1 = 24
x1 + 2x2 + s2 = 6
−x1 + x2 + s3 = 1
x2 + s4 = 2
x1 , x2 , s1 , s2 , s3 , s4 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below. Note that we can choose to enter either x1 or x2 into the
basis. We arbitrarily choose to enter x1 into basis.


Iteration [0] Basic x1 x2 s1 s2 s3 s4 RHS
z −4 −4 0 0 0 0 0
← s1 6 4 1 0 0 0 24 Ratio= 24/6 = 4
s2 1 2 0 1 0 0 6 Ratio= 6/1 = 6
s3 −1 1 0 0 1 0 1 — (no ratio)
s4 0 1 0 0 0 1 2 — (no ratio)


Iteration [1] Basic x1 x2 s1 s2 s3 s4 RHS
z 0 −4/3 2/3 0 0 0 16
x1 1 2/3 1/6 0 0 0 4 Ratio= 4 ÷ 2/3 = 6
← s2 0 4/3 −1/6 1 0 0 2 Ratio= 2 ÷ 4/3 = 3/2
s3 0 5/3 1/6 0 1 0 5 Ratio= 5 ÷ 5/3 = 3
s4 0 1 0 0 0 1 2 Ratio= 2/1 = 2

Iteration [2] Basic x1 x2 s1 s2 s3 s4 RHS Optimal Tableau


z 0 0 1/2 1 0 0 18 z = 18
x1 1 0 1/4 −1/2 0 0 3 x1 = 3, x2 = 3/2
x2 0 1 −1/8 3/4 0 0 3/2 s1 = 0, s2 = 0
s3 0 0 3/8 −5/4 1 0 3/2 s3 = 3/2, s4 = 1/2
s4 0 0 1/8 −3/4 0 1 1/2

Example 2.5. Solve the following LP problem using the simplex method.
max z = x1 + 3x2
s.t. x1 + x2 ≤ 2
−x1 + x2 ≤ 4
x1 ≥ 0, x2 urs
Solution: By assuming x2 = y1 − y2 and then adding slack variables s1 and
s2 , respectively, we obtain the LP in standard form:
max z − x1 − 3y1 + 3y2 = 0
s.t. x1 + y1 − y2 + s1 = 2
−x1 + y1 − y2 + s2 = 4
x1 , y1 , y2 , s1 , s2 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below.

Iteration [0] Basic x1 y1 y2 s1 s2 RHS
z −1 −3 3 0 0 0
← s1 1 1 −1 1 0 2 Ratio= 2/1 = 2
s2 −1 1 −1 0 1 4 Ratio= 4/1 = 4

Iteration [1] Basic x1 y1 y2 s1 s2 RHS Optimal Tableau


z 2 0 0 3 0 6 z=6
y1 1 1 −1 1 0 2 x1 = 0, x2 = 2
s2 −2 0 0 −1 1 2 s1 = 0, s2 = 2
Note that from the optimal tableau we have y1 = 2 and y2 = 0, so that
x2 = y1 − y2 = 2 − 0 = 2.

Exercise 2.3.
1. Use the simplex algorithm to solve the following problems.

(a) max z = 2x1 + 3x2 (c) max z = x1 − x2


s.t. x1 + 2x2 ≤ 6 s.t. 4x1 + x2 ≤ 100
2x1 + x2 ≤ 8 x1 + x2 ≤ 80
x1 , x2 ≥ 0 x1 ≤ 40
x1 , x2 ≥ 0
(b) max z = 2x1 − x2 + x3 (d) max z = x1 + x2 + x3
s.t. 3x1 + x2 + x3 ≤ 60 s.t. x1 + 2x2 + 2x3 ≤ 20
x1 − x2 + 2x3 ≤ 10 2x1 + x2 + 2x3 ≤ 20
x1 + x2 − x3 ≤ 20 2x1 + 2x2 + x3 ≤ 20
x1 , x2 , x3 ≥ 0 x1 , x2 , x3 ≥ 0

2. Solve the following problem by inspection, and justify the method of so-
lution in terms of the basic solutions of the simplex method.

max z = 5x1 − 6x2 + 3x3 − 5x4 + 12x5


s.t. x1 + 3x2 + 5x3 + 6x4 + 3x5 ≤ 30
x1 , x2 , x3 , x4 , x5 ≥ 0

2.5 Solving Minimization Problem


There are two different ways that the simplex algorithm can be used to solve
minimization problems.

Method (1) Multiply the objective function for the min problem by −1 and
solve the problem as a maximization problem with objective function
(−w). The optimal solution to the max problem will give you the optimal
solution to the min problem, where

    (optimal objective function value for min problem)
        = − (optimal objective function value for max problem)
Example 2.6. Solve the following LP problem using the simplex method.

min w = 2x1 − 3x2


s.t. x1 + x2 ≤ 4
x1 − x2 ≤ 6
x1 , x2 ≥ 0

Solution: The optimal solution to the LP is the point (x1 , x2 ) in the


feasible region for the LP that makes w = 2x1 − 3x2 the smallest. Equiv-
alently, we may say that the optimal solution to the LP is the point in
the feasible region that makes z = −w = −2x1 + 3x2 the largest. This
means that we can find the optimal solution to the LP by solving:

max z = −2x1 + 3x2
s.t. x1 + x2 ≤ 4
x1 − x2 ≤ 6
x1 , x2 ≥ 0

By adding slack variables s1 and s2 , respectively, we obtain the LP in


standard form:

max z + 2x1 − 3x2 = 0


s.t. x1 + x2 + s1 = 4
x1 − x2 + s2 = 6
x1 , x2 , s1 , s2 ≥ 0

The initial tableau and all following tableaus until the optimal solution is
reached are shown below.


Iteration [0] Basic x1 x2 s1 s2 RHS
z 2 −3 0 0 0
← s1 1 1 1 0 4 Ratio= 4/1 = 4
s2 1 −1 0 1 6 —
Iteration [1] Basic x1 x2 s1 s2 RHS Optimal Tableau
z 5 0 3 0 12 w = −z = −12
x2 1 1 1 0 4 x1 = 0, x2 = 4
s2 2 0 1 1 10 s1 = 0, s2 = 10
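Method (1) is easy to verify numerically. A sketch assuming SciPy is available (`linprog` already minimizes, which makes the w = −z identity explicit):

```python
from scipy.optimize import linprog

# Example 2.6: min w = 2x1 - 3x2, solved as max z = -2x1 + 3x2.
# linprog itself minimizes, so its objective vector holds the
# coefficients of -z, which are exactly the coefficients of w.
c = [2, -3]
A = [[1, 1],     # x1 + x2 <= 4
     [1, -1]]    # x1 - x2 <= 6
b = [4, 6]
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
z = -res.fun     # optimal z of the max problem
w = -z           # optimal w of the min problem (= res.fun)
print(w, res.x)  # expect w = -12 at (0, 4)
```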

Method (2) A simple modification of the simplex algorithm can be used to


solve min problems directly. Modify Step 3 of the simplex as follows:
If all nonbasic variables in row 0 have nonpositive coefficients, then the
current bfs is optimal. If any nonbasic variable in row 0 has a positive
coefficient, choose the variable with the “most positive” coefficient in row
0 to enter the basis. This modification of the simplex algorithm works
because increasing a nonbasic variable with a positive coefficient in row 0
will decrease w. If we use this method to solve the LP in example (2.6),
then after adding slack variables s1 and s2 , respectively, we obtain the LP
in standard form:

min w − 2x1 + 3x2 = 0


s.t. x1 + x2 + s 1 = 4
x1 − x2 + s 2 = 6
x1 , x2 , s1 , s2 ≥ 0

The initial tableau and all following tableaus until the optimal solution is
reached are shown below. Note that, because x2 has the most positive
coefficient in row 0, we enter x2 into the basis.


Iteration [0] Basic x1 x2 s1 s2 RHS
w −2 3 0 0 0
← s1 1 1 1 0 4 Ratio= 4/1 = 4
s2 1 −1 0 1 6 —
Iteration [1] Basic x1 x2 s1 s2 RHS Optimal Tableau
w −5 0 −3 0 −12 w = −12
x2 1 1 1 0 4 x1 = 0, x2 = 4
s2 2 0 1 1 10 s1 = 0, s2 = 10

Exercise 2.4. Use the simplex algorithm to solve the following problems.

1. min w = 4x1 − x2 4. min w = −3x1 + 8x2


s.t. 2x1 + x2 ≤ 8 s.t. 4x1 + 2x2 ≤ 12
x2 ≤ 5 2x1 + 3x2 ≤ 6
x1 − x2 ≤ 4 x1 , x2 ≥ 0
x1 , x2 ≥ 0
5. min w = 5x1 + 4x2 + 6x3 − 8x4
2. min w = x1 − x2 s.t. x1 + 2x2 + 2x3 + 4x4 ≤ 40
s.t. x1 − x2 ≤ 1 2x1 − 2x2 + x3 + 2x4 ≤ 8
x1 + x2 ≤ 2 4x1 − 2x2 + x3 − x4 ≤ 10
x1 , x2 ≥ 0 x1 , x2 , x3 , x4 ≥ 0
3. min w = 2x1 − 5x2
s.t. 3x1 + 8x2 ≤ 12
2x1 + 3x2 ≤ 6
x1 , x2 ≥ 0

2.6 Artificial Starting Solution and the Big M-Method


Recall that the simplex algorithm requires a starting bfs. In all the problems we
have solved so far, we found a starting bfs by using the slack variables as our
basic variables. If an LP has any (≥) or (=) constraints, a starting bfs may not
be readily apparent. When a bfs is not readily apparent, the Big M −method
may be used to solve the problem. The Big M −method is a version of the
simplex algorithm that first finds a bfs by adding “artificial” variables to the
problem. The objective function of the original LP must be modified to ensure
that the artificial variables are all equal to 0 at the conclusion of the simplex
algorithm.
The big M −method starts with the LP in equation form. If equation i does
not have a slack (or a variable that can play the role of a slack), an artificial
variable, ai , is added to form a starting solution similar to the all-slack basic
solution. However, because the artificial variables are not part of the original
problem, a modeling “trick” is needed to force them to zero value by the time

the optimum iteration is reached (assuming the problem has a feasible solution).
The desired goal is achieved by assigning a penalty defined as:

    Artificial variable objective function coefficient = −M in max problems,
                                                         +M in min problems,

where M is a sufficiently large positive value (mathematically, M → ∞).


Example 2.7. Solve the following LP problem using the simplex method.
min w = 4x1 + x2
s.t. 3x1 + x2 = 3
4x1 + 3x2 ≥ 6
x1 + 2x2 ≤ 4
x1 , x2 ≥ 0
Solution: To convert the constraint to equations, use e2 as a surplus in the
second constraint and s3 as a slack in the third constraint.

3x1 + x2 =3
4x1 + 3x2 − e2 =6
x1 + 2x2 + s3 = 4

The third equation has its slack variable, s3 , but the first and second equations
do not. Thus, we add the artificial variables a1 and a2 in the first two equations
and penalize them in the objective function with M a1 + M a2 (because we are
minimizing). The resulting LP becomes
min w = 4x1 + x2 + M a1 + M a2
s.t. 3x1 + x2 + a1 = 3
4x1 + 3x2 − e2 + a2 = 6
x1 + 2x2 + s3 = 4
x1 , x2 , s3 , e2 , a1 , a2 ≥ 0
After writing the objective function as w − 4x1 − x2 − M a1 − M a2 = 0, the
initial tableau will be
Iteration [0] Basic x1 x2 s3 e2 a1 a2 RHS
w −4 −1 0 0 −M −M 0
a1 3 1 0 0 1 0 3
a2 4 3 0 −1 0 1 6
s3 1 2 1 0 0 0 4
Before proceeding with the simplex method computations, row 0 must be made
consistent with the rest of the tableau. The right-hand side of row 0 in the
tableau currently shows w = 0. However, given the nonbasic solution x1 =
x2 = e2 = 0, the current basic solution a1 = 3, a2 = 6, and s3 = 4 yields

w = (4 × 0) + (1 × 0) + (3 × M ) + (6 × M ) = 9M ≠ 0.

The inconsistency stems from the fact that a1 and a2 have nonzero coefficients
in row 0. To eliminate the inconsistency, we use EROs. The modified tableau
thus becomes (verify!):
Iteration [0] Basic x1 x2 s3 e2 a1 a2 RHS
w 7M − 4 4M − 1 0 −M 0 0 9M
a1 3 1 0 0 1 0 3
a2 4 3 0 −1 0 1 6
s3 1 2 1 0 0 0 4
The last tableau is ready for the application of the simplex optimality and the
feasibility conditions. Because the objective function is minimized, the variable
x1 having the most positive coefficient in the row 0 enters the solution. The
minimum ratio of the feasibility condition specifies a1 as the leaving variable.
All tableaus until the optimal solution is reached are shown below.


Iteration [0] Basic x1 x2 s3 e2 a1 a2 RHS
w 7M − 4 4M − 1 0 −M 0 0 9M
← a1 3 1 0 0 1 0 3 Ratio= 3/3 = 1
a2 4 3 0 −1 0 1 6 Ratio= 6/4 = 3/2
s3 1 2 1 0 0 0 4 Ratio= 4/2 = 2

Iteration [1] Basic x1 x2 s3 e2 a1 a2 RHS
w 0 (1+5M )/3 0 −M (4−7M )/3 0 4 + 2M
x1 1 1/3 0 0 1/3 0 1 Ratio= 1 ÷ 1/3 = 3
← a2 0 5/3 0 −1 −4/3 1 2 Ratio= 2 ÷ 5/3 = 6/5
s3 0 5/3 1 0 −1/3 0 3 Ratio= 3 ÷ 5/3 = 9/5

Iteration [2] Basic x1 x2 s3 e2 a1 a2 RHS
w 0 0 0 1/5 8/5 − M −1/5 − M 18/5
x1 1 0 0 1/5 3/5 −1/5 3/5 Ratio= 3/5 ÷ 1/5 = 3
x2 0 1 0 −3/5 −4/5 3/5 6/5 —
← s3 0 0 1 1 1 −1 1 Ratio= 1/1 = 1

Iteration [3] Basic x1 x2 s3 e2 a1 a2 RHS Optimal Tableau


w 0 0 −1/5 0 7/5 − M −M 17/5 w = 17/5
x1 1 0 −1/5 0 2/5 0 2/5 x1 = 2/5, x2 = 9/5
x2 0 1 3/5 0 −1/5 0 9/5 e2 = 1
e2 0 0 1 1 1 −1 1 a1 = 0, a2 = 0
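The Big M answer can be cross-checked with a solver that handles = and ≥ constraints natively, so no artificial variables are needed. A sketch assuming SciPy is available (the ≥ row is flipped to ≤ for `linprog`'s `A_ub` convention):

```python
from scipy.optimize import linprog

# Example 2.7: min w = 4x1 + x2
# s.t. 3x1 + x2 = 3, 4x1 + 3x2 >= 6, x1 + 2x2 <= 4
c = [4, 1]
A_ub = [[-4, -3],    # -(4x1 + 3x2) <= -6
        [1, 2]]      #    x1 + 2x2  <=  4
b_ub = [-6, 4]
A_eq = [[3, 1]]      #   3x1 +  x2  =   3
b_eq = [3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 2)
print(res.fun, res.x)   # expect w = 17/5 = 3.4 at (2/5, 9/5)
```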
Note 9. From a computational standpoint, solving the problem on a computer
requires replacing M with a sufficiently large numeric value. Carrying M along
algebraically adds an unnecessary layer of computational difficulty, so we break
away from the long tradition of manipulating M symbolically and substitute an
appropriate numeric value instead (which is what we would do anyway if we
used the computer). The intent is to simplify the presentation without losing
substance. What value of M should we use? The answer depends on the data of
the original LP. Recall that the penalty M must be sufficiently large relative
to the original objective coefficients to force the artificial variables to be
zero (which happens only if a feasible solution exists). At the same time, since
computers are the main tool for solving LPs, M should not be unnecessarily
large, as this may lead to serious round-off error. In the present example, the
objective coefficients of x1 and x2 are 4 and 1, respectively, and it appears
reasonable to set M = 100.
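The numeric substitution suggested in Note 9 can be tried directly: build the penalized LP of Example 2.7 with M = 100 and hand it to any LP solver. A sketch assuming SciPy is available; the artificial variables indeed come out zero:

```python
from scipy.optimize import linprog

# Numeric Big-M for Example 2.7 with M = 100.
# Variables: x1, x2, e2 (surplus), s3 (slack), a1, a2 (artificials), all >= 0.
M = 100
c = [4, 1, 0, 0, M, M]              # w = 4x1 + x2 + M*a1 + M*a2
A_eq = [[3, 1,  0, 0, 1, 0],        # 3x1 +  x2            + a1      = 3
        [4, 3, -1, 0, 0, 1],        # 4x1 + 3x2 - e2            + a2 = 6
        [1, 2,  0, 1, 0, 0]]        #  x1 + 2x2       + s3           = 4
b_eq = [3, 6, 4]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
x1, x2, e2, s3, a1, a2 = res.x
print(res.fun, (x1, x2), (a1, a2))  # expect w = 3.4, artificials ~ 0
```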

Example 2.8. Solve the following LP problem using the simplex method.
max z = 2x1 + x2
s.t. x1 + x2 ≤ 10
−x1 + x2 ≥ 2
x1 , x2 ≥ 0
Solution: To convert the constraint to equations, use s1 as a slack in the first
constraint and e2 as a surplus in the second constraint.

x1 + x2 + s1 = 10
−x1 + x2 − e2 = 2

We add the artificial variable a2 in the second equation and penalize it in the
objective function with −M a2 = −100a2 (because we are maximizing). The
resulting LP becomes
max z = 2x1 + x2 − 100a2
s.t. x1 + x2 + s1 = 10
−x1 + x2 − e2 + a2 = 2
x1 , x2 , s1 , e2 , a2 ≥ 0
After writing the objective function as z − 2x1 − x2 + 100a2 = 0, the initial
tableau will be
Iteration [0] Basic x1 x2 s1 e2 a2 RHS
z −2 −1 0 0 100 0
s1 1 1 1 0 0 10
a2 −1 1 0 −1 1 2
Before proceeding with the simplex method computations, row 0 must be made
consistent with the rest of the tableau. The inconsistency stems from the fact
that a2 has nonzero coefficients in row 0. To eliminate the inconsistency, we use
EROs. The modified tableau and all other tableaus until the optimal solution
is reached are:


Iteration [0] Basic x1 x2 s1 e2 a2 RHS
z 98 −101 0 100 0 −200
s1 1 1 1 0 0 10 Ratio= 10/1 = 10
← a2 −1 1 0 −1 1 2 Ratio= 2/1 = 2

Iteration [1] Basic x1 x2 s1 e2 a2 RHS
z −3 0 0 −1 101 2
← s1 2 0 1 1 −1 8 Ratio= 8/2 = 4
x2 −1 1 0 −1 1 2 —

Iteration [2] Basic x1 x2 s1 e2 a2 RHS Optimal Tableau


z 0 0 3/2 1/2 199/2 14 z = 14
x1 1 0 1/2 1/2 −1/2 4 x1 = 4,x2 = 6
x2 0 1 1/2 −1/2 1/2 6 s1 = e2 = a2 = 0
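A quick arithmetic check of the reported optimum, with no solver involved: substituting (x1 , x2 ) = (4, 6) back into the original constraints of Example 2.8 shows both are satisfied (in fact binding) and z = 14:

```python
from fractions import Fraction as F

# Claimed optimum of Example 2.8: z = 14 at (x1, x2) = (4, 6).
x1, x2 = F(4), F(6)
z = 2 * x1 + x2
s1 = 10 - (x1 + x2)        # slack of  x1 + x2 <= 10
e2 = (-x1 + x2) - 2        # surplus of -x1 + x2 >= 2
print(z, s1, e2)           # z = 14, and both constraints are binding
```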
Example 2.9. Consider the problem.
max z = 2x1 + 4x2 + 4x3 − 3x4
s.t. x1 + x2 + x3 =4
x1 + 4x2 + x4 = 8
x1 , x2 , x3 , x4 ≥ 0
The variables x3 and x4 play the role of slack variables. So, without using any
artificial variables, solve the problem with x3 and x4 as the starting basic variables.

Solution: The main difference here from the usual simplex is that x3 and x4
have nonzero objective coefficients in row 0: z − 2x1 − 4x2 − 4x3 + 3x4 = 0. To
eliminate their coefficients, we use EROs. The initial tableau and all following
tableaus until the optimal solution is reached are shown below.

Iteration [0] Basic x1 x2 x3 x4 RHS


z −2 −4 −4 3 0
x3 1 1 1 0 4
x4 1 4 0 1 8

Iteration [0] Basic x1 x2 x3 x4 RHS
z −1 −12 0 0 −8
x3 1 1 1 0 4 Ratio= 4/1 = 4
← x4 1 4 0 1 8 Ratio= 8/4 = 2

Iteration [1] Basic x1 x2 x3 x4 RHS Optimal Tableau


z 2 0 0 3 16 z = 16
x3 3/4 0 1 −1/4 2 x1 = 0, x2 = 2
x2 1/4 1 0 1/4 2 x3 = 2, x4 = 0

Exercise 2.5.
1. Use the Big M -method to solve the following LPs:

(a) min w = 4x1 + 4x2 + x3 (c) min w = 2x1 + 3x2


s.t. x1 + x2 + x3 ≤ 2 s.t. 2x1 + x2 ≥ 4
2x1 + x2 ≤ 3 x1 − x2 ≥ −1
2x1 + x2 + 3x3 ≥ 3 x1 , x2 ≥ 0
x1 , x2 , x3 ≥ 0
(b) min w = x1 + x2 (d) max z = 3x1 + x2
s.t. 2x1 + x2 + x3 = 4 s.t. 2x1 + x2 ≤ 4
x1 + x2 + 2x3 ≤ 2 x1 + x2 = 3
x1 , x2 , x3 ≥ 0 x1 , x2 ≥ 0

2. Solve the following problem using x3 and x4 as starting basic feasible


variables. As in example (2.9), do not use any artificial variables.

min z = 3x1 + 2x2 + 3x3 + 2x4


s.t. x1 + 4x2 + x3 ≥ 14
2x1 + x2 + x4 ≥ 20
x1 , x2 , x3 , x4 ≥ 0

3. Consider the problem

max z = x1 + 5x2 + 3x3


s.t. x1 + 2x2 + x3 = 6
2x1 − x2 =8
x1 , x2 , x3 ≥ 0

The variable x3 plays the role of a slack. Thus, no artificial variable


is needed in the first constraint. In the second constraint, an artificial
variable, a2 , is needed. Solve the problem using x3 and a2 as the starting
variables.

2.7 Special Cases in the Simplex Method


This section considers four special cases that arise in the use of the simplex
method.

2.7.1 Degeneracy
In the application of the feasibility condition of the simplex method, a tie for the
minimum ratio may occur and can be broken arbitrarily. When this happens, at
least one basic variable will be zero in the next iteration, and the new solution
is said to be degenerate. This situation may reveal that the model has at least
one redundant constraint.

Definition 2.4. An LP is degenerate if it has at least one bfs
in which a basic variable is equal to zero.

If one of these degenerate basic variables retains its value of zero until it is
chosen at a subsequent iteration to be a leaving basic variable, the corresponding
entering basic variable also must remain zero, so the value of the objective
function must remain unchanged. Because the objective function may remain
the same rather than improve at each iteration, the simplex method may then go
around in a loop, repeating the same sequence of solutions periodically rather
than eventually improving the objective function toward an optimal solution.
This occurrence is called cycling.
Example 2.10. Solve the following LP problem.
max z = 3x1 + 9x2
s.t. x1 + 4x2 ≤ 8
x1 + 2x2 ≤ 4
x1 , x2 ≥ 0
Solution: By adding slack variables s1 and s2 , we obtain the LP in standard
form
max z − 3x1 − 9x2 = 0
s.t. x1 + 4x2 + s1 = 8
x1 + 2x2 + s2 = 4
x1 , x2 , s1 , s2 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below.

Iteration [0] Basic x1 x2 s1 s2 RHS
z −3 −9 0 0 0
← s1 1 4 1 0 8 Ratio= 8/4 = 2
s2 1 2 0 1 4 Ratio= 4/2 = 2
In iteration 0, s1 and s2 tie for the leaving variable, leading to degeneracy in
iteration 1 because the basic variable s2 assumes a zero value.

Iteration [1] Basic x1 x2 s1 s2 RHS
z −3/4 0 9/4 0 18
x2 1/4 1 1/4 0 2 Ratio= 2 ÷ 1/4 = 8
← s2 1/2 0 −1/2 1 0 Ratio= 0 ÷ 1/2 = 0

Iteration [2] Basic x1 x2 s1 s2 RHS Optimal Tableau


z 0 0 3/2 3/2 18 z = 18
x2 0 1 1/2 −1/2 2 x1 = 0, x2 = 2
x1 1 0 −1 2 0 s1 = 0, s2 = 0
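The degenerate pivot above can be replayed mechanically. A small sketch (plain Python with exact Fractions; not part of the notes) performs iteration 0's ratio test for entering x2 and the pivot on the s1 row, confirming that the tie forces the next basic solution to be degenerate:

```python
from fractions import Fraction as F

# Rows of Example 2.10's initial tableau: [x1, x2, s1, s2 | RHS]
s1_row = [F(1), F(4), F(1), F(0), F(8)]
s2_row = [F(1), F(2), F(0), F(1), F(4)]

# Ratio test for entering variable x2 (column 1): both ratios equal 2, a tie.
ratios = [s1_row[4] / s1_row[1], s2_row[4] / s2_row[1]]
print(ratios)

# Break the tie in favour of s1: pivot on the s1 row, column x2.
piv = s1_row[1]
x2_row = [v / piv for v in s1_row]
s2_new = [a - s2_row[1] * p for a, p in zip(s2_row, x2_row)]
print(s2_new[4])          # RHS of basic s2 is now 0: a degenerate bfs
```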

The following example illustrates the occurrence of cycling in the simplex
iterations and the possibility that the algorithm may never converge to the
optimum solution.

Example 2.11. This example was authored by E. M. Beale.1 Consider the
following LP:
max C = (3/4)x1 − 150x2 + (1/50)x3 − 6x4
s.t. (1/4)x1 − 60x2 − (1/25)x3 + 9x4 ≤ 0
(1/2)x1 − 90x2 − (1/50)x3 + 3x4 ≤ 0
x3 ≤ 1
x1 , x2 , x3 , x4 ≥ 0
Actually, the optimal solution of this example is C = 1/20 when x1 = 1/25,
x3 = 1, and x2 = x4 = 0. However, in order to solve this LP using the Simplex
algorithm, we write it in standard form as follows.
max C − (3/4)x1 + 150x2 − (1/50)x3 + 6x4 = 0
s.t. (1/4)x1 − 60x2 − (1/25)x3 + 9x4 + s1 = 0
(1/2)x1 − 90x2 − (1/50)x3 + 3x4 + s2 = 0
x3 + s3 = 1
x1 , x2 , x3 , x4 , s1 , s2 , s3 ≥ 0
Let us start applying the Simplex algorithm, and see what will happen through
the iterations.


Iteration [0] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −3/4 150 −1/50 6 0 0 0 0
← s1 1/4 −60 −1/25 9 1 0 0 0 Ratio= 0
s2 1/2 −90 −1/50 3 0 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 —

Iteration [1] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 −30 −7/50 33 3 0 0 0
x1 1 −240 −4/25 36 4 0 0 0 —
← s2 0 30 3/50 −15 −2 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 —
1. Saul I. Gass, Sasirekha Vinjamuri. Cycling in linear programming problems. Computers
& Operations Research 31 (2004).


Iteration [2] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 0 −2/25 18 1 1 0 0
← x1 1 0 8/25 −84 −12 8 0 0 Ratio= 0
x2 0 1 1/500 −1/2 −1/15 1/30 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 Ratio= 1

Iteration [3] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 1/4 0 0 −3 −2 3 0 0
x3 25/8 0 1 −525/2 −75/2 25 0 0 —
← x2 −1/160 1 0 1/40 1/120 −1/60 0 0 Ratio= 0
s3 −25/8 0 0 525/2 75/2 −25 1 1 Ratio= 2/525

Iteration [4] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −1/2 120 0 0 −1 1 0 0
← x3 −125/2 10500 1 0 50 −150 0 0 Ratio= 0
x4 −1/4 40 0 1 1/3 −2/3 0 0 Ratio= 0
s3 125/2 −10500 0 0 −50 150 1 1 —

Iteration [5] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −7/4 330 1/50 0 0 −2 0 0
s1 −5/4 210 1/50 0 1 −3 0 0 —
← x4 1/6 −30 −1/150 1 0 1/3 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 —

Iteration [6] Basic x1 x2 x3 x4 s1 s2 s3 RHS


C −3/4 150 −1/50 6 0 0 0 0 We are back at
s1 1/4 −60 −1/25 9 1 0 0 0 iteration [0]:
s2 1/2 −90 −1/50 3 0 1 0 0 we are stuck
s3 0 0 1 0 0 0 1 1 in a cycle

Note 10. There are several ways to solve the LP problem in example (2.11).
We review these methods as follows.

1. Computer Systems: like Excel Solver, LINDO and Mathematica.

2. Convert all the coefficients in the constraints to integer values by


using proper multiples:2 this can be done by multiplying the first constraint
in the original LP by lcm(4, 25) = 100 and the second constraint
by lcm(2, 50) = 50. Then we write the LP in standard form.
2. Hamdy A. Taha, Operations Research: An Introduction, 9th Edition, Prentice Hall,
2011. Call number in PU library: 658.4034 TAH.

max C − (3/4)x1 + 150x2 − (1/50)x3 + 6x4 = 0
s.t. 25x1 − 6000x2 − 4x3 + 900x4 + s1 = 0
25x1 − 4500x2 − x3 + 150x4 + s2 = 0
x3 + s 3 = 1
x1 , x2 , x3 , x4 , s1 , s2 , s3 ≥ 0

Now we apply the Simplex algorithm.


Iteration [0] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −3/4 150 −1/50 6 0 0 0 0
← s1 25 −6000 −4 900 1 0 0 0 Ratio= 0
s2 25 −4500 −1 150 0 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 —

Iteration [1] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 −30 −7/50 33 3/100 0 0 0
x1 1 −240 −4/25 36 1/25 0 0 0 —
← s2 0 1500 3 −750 −1 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 —

Iteration [2] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 0 −2/25 18 1/100 1/50 0 0
x1 1 0 8/25 −84 −3/25 4/25 0 0 Ratio= 0
← x2 0 1 1/500 −1/2 −1/1500 1/1500 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 Ratio= 1

Iteration [3] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 40 0 −2 −1/60 7/150 0 0
x1 1 −160 0 −4 −1/75 4/75 0 0 —
x3 0 500 1 −250 −1/3 1/3 0 0 —
← s3 0 −500 0 250 1/3 −1/3 1 1 Ratio= 1/250

Iteration [4] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 36 0 0 −7/500 11/250 1/125 1/125
x1 1 −168 0 0 −1/125 6/125 2/125 2/125 —
x3 0 0 1 0 0 0 1 1 —
← x4 0 −2 0 1 1/750 −1/750 1/250 1/250 Ratio= 3

Iteration [5] Basic x1 x2 x3 x4 s1 s2 s3 RHS Optimal
C 0 15 0 21/2 0 3/100 1/20 1/20 Tableau
x1 1 −180 0 6 0 1/25 1/25 1/25
x3 0 0 1 0 0 0 1 1
s1 0 −1500 0 750 1 −1 3 3

3. Bland’s Rule for selecting entering and leaving variables3 .

(a) For the entering basic variable: Of all negative coefficients in the
objective row (Row 0), choose the one with smallest subscript.
(b) For the departing basic variable: When there is a tie between one
or more ratios computed, choose the candidate for departing basic
variable that has the smallest subscript.

When we use Bland’s rule to solve the LP in example (2.11), we name


the slack variables s1 , s2 , s3 as x5 , x6 , x7 respectively. The iterations will
be as follows.


Iteration [0] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C −3/4 150 −1/50 6 0 0 0 0
← x5 1/4 −60 −1/25 9 1 0 0 0 Ratio= 0
x6 1/2 −90 −1/50 3 0 1 0 0 Ratio= 0
x7 0 0 1 0 0 0 1 1 —

Iteration [1] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C 0 −30 −7/50 33 3 0 0 0
x1 1 −240 −4/25 36 4 0 0 0 —
← x6 0 30 3/50 −15 −2 1 0 0 Ratio= 0
x7 0 0 1 0 0 0 1 1 —

Iteration [2] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C 0 0 −2/25 18 1 1 0 0
← x1 1 0 8/25 −84 −12 8 0 0 Ratio= 0
x2 0 1 1/500 −1/2 −1/15 1/30 0 0 Ratio= 0
x7 0 0 1 0 0 0 1 1 Ratio= 1
3. James Calvert and William Voxman, Linear Programming, 1st Edition, Harcourt Brace
Jovanovich Publishers, 1989.


Iteration [3] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C 1/4 0 0 −3 −2 3 0 0
x3 25/8 0 1 −525/2 −75/2 25 0 0 —
← x2 −1/160 1 0 1/40 1/120 −1/60 0 0 Ratio= 0
x7 −25/8 0 0 525/2 75/2 −25 1 1 Ratio= 2/525

Iteration [4] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C −1/2 120 0 0 −1 1 0 0
x3 −125/2 10500 1 0 50 −150 0 0 —
x4 −1/4 40 0 1 1/3 −2/3 0 0 —
← x7 125/2 −10500 0 0 −50 150 1 1 Ratio= 2/125

Iteration [5] Basic x1 x2 x3 x4 x5 x6 x7 RHS
C 0 36 0 0 −7/5 11/5 1/125 1/125
x3 0 0 1 0 0 0 1 1 —
← x4 0 −2 0 1 2/15 −1/15 1/250 1/250 Ratio= 1/250 ÷ 2/15 = 3/100
x1 1 −168 0 0 −4/5 12/5 2/125 2/125 —

Iteration [6] Basic x1 x2 x3 x4 x5 x6 x7 RHS Optimal


C 0 15 0 21/2 0 3/2 1/20 1/20 Tableau
x3 0 0 1 0 0 0 1 1
x5 0 −15 0 15/2 1 −1/2 3/100 3/100
x1 1 −180 0 6 0 2 1/25 1/25
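Bland's rule is straightforward to implement. The following is a minimal tableau-simplex sketch in exact rational arithmetic (plain Python; the function name and layout are our own, not from any library) that applies the smallest-index rule to both the entering and leaving choices; run on Beale's LP it terminates at C = 1/20 instead of cycling:

```python
from fractions import Fraction as F

def simplex_bland(c, A, b):
    """Maximize c.x subject to A.x <= b, x >= 0, by the tableau method
    with Bland's smallest-index rule for entering and leaving variables."""
    m, n = len(A), len(c)
    # Constraint rows augmented with slack columns and the RHS.
    T = [[F(v) for v in A[i]] + [F(1) if k == i else F(0) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    z = [-F(v) for v in c] + [F(0)] * (m + 1)     # row 0 of  z - c.x = 0
    basis = list(range(n, n + m))                 # slacks start basic
    while True:
        # Entering: smallest index with a negative row-0 coefficient.
        enter = next((j for j in range(n + m) if z[j] < 0), None)
        if enter is None:                         # none negative: optimal
            return z[-1], T, basis
        # Ratio test; ties broken by the smallest basic-variable index.
        cand = [(T[i][-1] / T[i][enter], basis[i], i)
                for i in range(m) if T[i][enter] > 0]
        if not cand:
            raise ValueError("unbounded")
        _, _, r = min(cand)
        piv = T[r][enter]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][enter] != 0:
                f = T[i][enter]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        f = z[enter]
        z = [a - f * p for a, p in zip(z, T[r])]
        basis[r] = enter

# Beale's cycling example: optimum C = 1/20 at x1 = 1/25, x3 = 1.
c = [F(3, 4), F(-150), F(1, 50), F(-6)]
A = [[F(1, 4), F(-60), F(-1, 25), F(9)],
     [F(1, 2), F(-90), F(-1, 50), F(3)],
     [F(0), F(0), F(1), F(0)]]
b = [0, 0, 1]
obj, T, basis = simplex_bland(c, A, b)
print(obj)    # Fraction(1, 20)
```

With the usual "most negative coefficient" rule the same data cycles forever, as the tableaus above show; Bland's rule gives up a little speed in exchange for guaranteed termination.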

4. Lexicographic Rule for selecting an exiting variable4 .


Given a basic feasible solution with basis B, suppose that the nonbasic
variable xk is chosen to enter the basis (the most negative value in Row
0 for a maximization LP). The index r of the variable xBr leaving the basis
is determined as follows. Let

    I0 = { r : br /yrk = min_{1 ≤ i ≤ m} { bi /yik : yik > 0 } }

If I0 is a singleton, namely I0 = {r}, then xBr leaves the basis. Otherwise,


form I1 as follows:

    I1 = { r : yr1 /yrk = min_{i ∈ I0} { yi1 /yik } }

where y∗1 is the first column of the m × m identity matrix. If I1 is


singleton, namely, I1 = {r}, then xBr leaves the basis. Otherwise, form
4. Mokhtar S. Bazaraa, John J. Jarvis, Hanif D. Sherali, Linear Programming and Network
Flows, 4th Edition, John Wiley & Sons, Inc., 2010. Call number in PU library: 519.72 BAZ.

I2 , where, in general, Ij is formed from Ij−1 as follows:

    Ij = { r : yrj /yrk = min_{i ∈ Ij−1} { yij /yik } }

where y∗j is the jth column of the m × m identity matrix. Eventually,


for some j ≤ m, Ij will be a singleton. If Ij = {r}, then xBr leaves the
basis.


Iteration [0] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C −3/4 150 −1/50 6 0 0 0 0
s1 1/4 −60 −1/25 9 1 0 0 0 Ratio= 0
← s2 1/2 −90 −1/50 3 0 1 0 0 Ratio= 0
s3 0 0 1 0 0 0 1 1 —

Here, I0 = {1, 2}, and then I1 = {2}, and therefore xB2 = s2 leaves the
basis.


Iteration [1] Basic x1 x2 x3 x4 s1 s2 s3 RHS
C 0 15 −1/20 21/2 0 3/2 0 0
s1 0 −15 0 15/2 1 −1/2 0 0 —
x1 1 −180 0 6 0 2 0 0 —
← s3 0 0 1 0 0 0 1 1 Ratio= 1

Here, I0 = {3}. Therefore, xB3 = s3 leaves.

Iteration [2] Basic x1 x2 x3 x4 s1 s2 s3 RHS Optimal


C 0 15 0 21/2 0 3/2 1/20 1/20 Tableau
s1 0 −15 0 15/2 1 −1/2 3/100 3/100
x1 1 −180 0 6 0 2 1/25 1/25
x3 0 0 1 0 0 0 1 1

2.7.2 Alternative Optima


Recall from example (1.14) of Section 1.4 that for some LPs, more than one
extreme point is optimal. If an LP has more than one optimal solution, then
we say that it has multiple or alternative optimal solutions. An LP problem
may have an infinite number of alternative optima when the objective function
is parallel to a nonredundant binding constraint. The existence of alternative
optima can be detected in the optimal tableau by examining the row 0 coefficients
of the nonbasic variables. The zero coefficient of nonbasic xj indicates that xj can
be made basic, altering the values of the basic variables without changing the
value of z.
In practice, alternative optima are useful because we can choose from many
solutions without experiencing deterioration in the objective value. If the
example represents a product-mix situation, it may be advantageous to market two
products instead of one.

Example 2.12. Solve the following LP problem.


max z = 2x1 + 4x2
s.t. x1 + 2x2 ≤ 5
x1 + x2 ≤ 4
x1 , x2 ≥ 0
Solution: By adding slack variables s1 and s2 , we obtain the LP in standard
form
max z − 2x1 − 4x2 = 0
s.t. x1 + 2x2 + s1 = 5
x1 + x2 + s 2 = 4
x1 , x2 , s1 , s2 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below.

Iteration [0] Basic x1 x2 s1 s2 RHS
z −2 −4 0 0 0
← s1 1 2 1 0 5 Ratio= 5/2
s2 1 1 0 1 4 Ratio= 4/1 = 4

Iteration [1] Basic x1 x2 s1 s2 RHS Optimal
z 0 0 2 0 10 Tableau
x2 1/2 1 1/2 0 5/2 Ratio= 5/2 ÷ 1/2 = 5
← s2 1/2 0 −1/2 1 3/2 Ratio= 3/2 ÷ 1/2 = 3

Iteration [2] Basic x1 x2 s1 s2 RHS Alternative


z 0 0 2 0 10 Optima
x2 0 1 1 −1 1
x1 1 0 −1 2 3

Mathematically, we can determine all the points (x1 , x2 ) on the line segment
joining the optimal solutions (0, 5/2) and (3, 1) as follows:

    x1 = t(0) + (1 − t)(3) = 3 − 3t
    x2 = t(5/2) + (1 − t)(1) = 1 + (3/2)t,    0 ≤ t ≤ 1
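A quick numeric check of this parameterization (plain Python): every point on the segment remains feasible and keeps z = 10.

```python
from fractions import Fraction as F

# Sample points on the segment between the optima (0, 5/2) and (3, 1).
for t in [F(0), F(1, 4), F(1, 2), F(3, 4), F(1)]:
    x1 = 3 - 3 * t
    x2 = 1 + F(3, 2) * t
    assert x1 >= 0 and x2 >= 0
    assert x1 + 2 * x2 <= 5 and x1 + x2 <= 4     # feasibility
    assert 2 * x1 + 4 * x2 == 10                 # objective unchanged
print("every sampled point on the segment is optimal with z = 10")
```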

2.7.3 Unbounded Solutions
In some LP models, as in example (1.16) of Section 1.4, the solution space is
unbounded in at least one variable, meaning that variables may be increased
indefinitely without violating any of the constraints. The associated objective
value may also be unbounded in this case. An unbounded LP for a max problem
occurs when a variable with a negative coefficient (positive for min LP) in row
0 has a nonpositive coefficient in each constraint.
An unbounded solution space may signal that the model is poorly constructed.
The most likely irregularity in such models is that some key constraints have
not been accounted for. Another possibility is that estimates of the constraint
coefficients may not be accurate.

Example 2.13. Solve the following LP problem.


max z = 2x1 + x2
s.t. x1 − x2 ≤ 10
2x1 ≤ 40
x1 , x2 ≥ 0
Solution: By adding slack variables s1 and s2 , we obtain the LP in standard
form
max z − 2x1 − x2 = 0
s.t. x1 − x2 + s1 = 10
2x1 + s2 = 40
x1 , x2 , s1 , s2 ≥ 0
The initial tableau and all following tableaus until the optimal solution is reached
are shown below.

Iteration [0] Basic x1 x2 s1 s2 RHS


z −2 −1 0 0 0
s1 1 −1 1 0 10
s2 2 0 0 1 40
In the starting tableau, both x1 and x2 have negative z-equation coefficients,
meaning that an increase in their values will increase the objective value.
Although x1 should be the entering variable (it has the most negative z-coefficient),
we note that all the constraint coefficients under x2 are ≤ 0, meaning that x2
can be increased indefinitely without violating any of the constraints. The result
is that z can be increased indefinitely.
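The detection rule just used is mechanical enough to code. A small sketch (plain Python; the helper name is ours) scans for a column with a negative row-0 coefficient and no positive constraint coefficient:

```python
# Starting tableau of Example 2.13, stored by column.
row0 = [-2, -1]                  # z-equation coefficients of x1, x2
cols = {0: [1, 2],               # constraint column under x1
        1: [-1, 0]}              # constraint column under x2

def unbounded_direction(row0, cols):
    """Return the index of a variable proving unboundedness (max LP),
    or None: it must have a negative row-0 coefficient and no positive
    coefficient anywhere in its constraint column."""
    for j, c in enumerate(row0):
        if c < 0 and all(a <= 0 for a in cols[j]):
            return j
    return None

print(unbounded_direction(row0, cols))   # 1: column x2 proves z is unbounded
```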

2.7.4 Nonexisting (or Infeasible) Solutions


LP models with inconsistent constraints have no feasible solution, see example
(1.15) of Section 1.4. This situation does not occur if all the constraints are

of the type ≤ with nonnegative right-hand sides because the slacks provide
an obvious feasible solution. For other types of constraints, penalized artificial
variables are used to start the solution. If at least one artificial variable is
positive in the optimum iteration, then the LP has no feasible solution. From
the practical standpoint, an infeasible space points to the possibility that the
model is not formulated correctly.

Example 2.14. Solve the following LP problem.


max z = 3x1 + 2x2
s.t. 2x1 + x2 ≤ 2
3x1 + 4x2 ≥ 12
x1 , x2 ≥ 0
Solution: To convert the constraint to equations, use s1 as a slack in the first
constraint and e2 as a surplus in the second constraint.

2x1 + x2 + s1 = 2
3x1 + 4x2 − e2 = 12

We add the artificial variable a2 in the second equation and penalize it in the
objective function with −M a2 = −100a2 (because we are maximizing). The
resulting LP becomes
max z = 3x1 + 2x2 − 100a2
s.t. 2x1 + x2 + s1 = 2
3x1 + 4x2 − e2 + a2 = 12
x1 , x2 , s1 , e2 , a2 ≥ 0
After writing the objective function as z − 3x1 − 2x2 + 100a2 = 0, the initial
tableau and all following tableaus until the optimal solution is reached
are shown below.
Iteration [0] Basic x1 x2 s1 e2 a2 RHS
z −3 −2 0 0 100 0
s1 2 1 1 0 0 2
a2 3 4 0 −1 1 12

Iteration [0] Basic x1 x2 s1 e2 a2 RHS
z −303 −402 0 100 0 −1200
← s1 2 1 1 0 0 2 Ratio= 2/1 = 2
a2 3 4 0 −1 1 12 Ratio= 12/4 = 3

Iteration [1] Basic x1 x2 s1 e2 a2 RHS Optimal


z 501 0 402 100 0 −396 Tableau
x2 2 1 1 0 0 2
a2 −5 0 −4 −1 1 4

Optimum iteration 1 shows that the artificial variable a2 is positive (= 4),
meaning that the LP is infeasible. The result is what we may call a
pseudo-optimal solution.
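Modern LP solvers report infeasibility directly, with no Big-M pass. A sketch assuming SciPy is available (`linprog` returns status code 2 for an infeasible problem in its documented convention):

```python
from scipy.optimize import linprog

# Example 2.14: max z = 3x1 + 2x2 s.t. 2x1 + x2 <= 2, 3x1 + 4x2 >= 12.
c = [-3, -2]                     # negate for minimization
A_ub = [[2, 1],                  #   2x1 +  x2  <=  2
        [-3, -4]]                # -(3x1 + 4x2) <= -12
b_ub = [2, -12]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.status)                # 2: the constraints are inconsistent
```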
Exercise 2.6.
1. Consider the following LP:

max z = 3x1 + 2x2


s.t. 4x1 − x2 ≤ 4
4x1 + 3x2 ≤ 6
4x1 + x2 ≤ 4
x1 , x2 ≥ 0

(a) Show that the associated simplex iterations are temporarily degenerate.
How many iterations are needed to reach the optimum?
(b) Verify the result by solving the problem graphically.
(c) Interchange constraints (1) and (3) and resolve the problem. How
many iterations are needed to solve the problem?

2. Solve the following problem, using the lexicographic rule for noncycling.
Repeat using Bland’s Rule:

max z = x1 + 2x2 + x3
s.t. x1 + 4x2 + 3x3 ≤ 4
−x1 + x2 + 4x3 ≤ 1
x1 + 3x2 + x3 ≤ 6
x1 , x2 , x3 ≥ 0

3. For the following LP, identify three alternative optimal basic solutions.

max z = x1 + 2x2 + 3x3


s.t. x1 + 2x2 + 3x3 ≤ 10
x1 + x2 ≤ 5
x1 ≤ 1
x1 , x2 , x3 ≥ 0

4. Solve the following LP:

max z = 2x1 − x2 + 3x3


s.t. x1 − x2 + 5x3 ≤ 5
2x1 − x2 + 3x3 ≤ 20
x1 , x2 , x3 ≥ 0

From the optimal tableau, show that all the alternative optima are not
corner points (i.e., nonbasic).

5. For the following LP, show that the optimal solution is degenerate and
that none of the alternative solutions are corner points.

max z = 3x1 + x2
s.t. x1 + 2x2 ≤ 5
x1 + x2 − x3 ≤ 2
7x1 + 3x2 − 5x3 ≤ 20
x1 , x2 , x3 ≥ 0

6. Consider the LP:

max z = 20x1 + 5x2 + x3


s.t. 3x1 + 5x2 − 5x3 ≤ 50
x1 ≤ 10
x1 + 3x2 − 4x3 ≤ 20
x1 , x2 , x3 ≥ 0

(a) By inspecting the constraints, determine the direction (x1 , x2 , x3 ) in


which the solution space is unbounded.
(b) Without further computations, what can you conclude regarding the
optimum objective value?

7. Consider the LP model

max z = 3x1 + 2x2 + 3x3


s.t. 2x1 + x2 + x3 ≤ 4
3x1 + 4x2 + 2x3 ≥ 16
x1 , x2 , x3 ≥ 0

Use hand computations to show that the optimal solution can include an
artificial basic variable at zero level. Does the problem have a feasible
optimal solution?

8. The following tableau represents a specific simplex iteration. All variables


are nonnegative. The tableau is not optimal for either maximization or
minimization. Thus, when a nonbasic variable enters the solution, it can
either increase or decrease z or leave it unchanged, depending on the
parameters of the entering nonbasic variable.

Basic x1 x2 x3 x4 x5 x6 x7 x8 RHS
z 0 −5 0 4 −1 −10 0 0 620
x8 0 3 0 −2 −3 −1 5 1 12
x3 0 1 1 3 1 0 3 0 6
x1 1 −1 0 0 6 −4 0 0 0

(a) Categorize the variables as basic and nonbasic, and provide the current
values of all the variables.
(b) Assuming that the problem is of the maximization type, identify the
nonbasic variables that have the potential to improve the value of
z. If each such variable enters the basic solution, determine the
associated leaving variable, if any, and the associated change in z.
(c) Repeat part (b) assuming that the problem is of the minimization
type.
(d) Which nonbasic variable(s) will not cause a change in the value of z
when selected to enter the solution?

9. You are given the tableau shown below for a maximization problem.

Basic x1 x2 x3 x4 x5 RHS
z −c 2 0 0 0 10
x3 −1 a1 1 0 0 4
x4 a2 −4 0 1 0 1
x5 a3 3 0 0 1 b

Give conditions on the unknowns a1 , a2 , a3 , b, and c that make the


following statements true:

(a) The current solution is optimal.


(b) The current solution is optimal, and there are alternative optimal
solutions.
(c) The LP is unbounded (in this part, assume that b ≥ 0).

10. Suppose we have obtained the tableau shown below for a maximization
problem.

Basic x1 x2 x3 x4 x5 x6 RHS
z c1 c2 0 0 0 0 10
x3 4 a1 1 0 a2 0 b
x4 −1 −5 0 1 −1 0 2
x6 a3 −3 0 0 −4 1 3

State conditions on a1 , a2 , a3 , b, c1 , and c2 that are required to make


the following statements true:

(a) The current solution is optimal, and there are alternative optimal
solutions.
(b) The current basic solution is not a basic feasible solution.
(c) The current basic solution is a degenerate bfs.

(d) The current basic solution is feasible, but the LP is unbounded.
(e) The current basic solution is feasible, but the objective function value
can be improved by replacing x6 as a basic variable with x1 .

11. The starting and current tableaux of a given problem are shown below.
Find the values of the unknowns a through n.

Starting Basic x1 x2 x3 x4 x5 RHS


Tableau z a 1 −3 0 0 0
x4 b c d 1 0 6
x5 −1 2 e 0 1 1
Current Basic x1 x2 x3 x4 x5 RHS
Tableau z 0 −1/3 j k ℓ n
x1 g 2/3 2/3 2/3 0 f
x5 h i −1/3 2/3 1 m

3 Sensitivity Analysis and Duality
Two of the most important topics in linear programming are sensitivity analysis
and duality. After studying these important topics, the reader will have an
appreciation of the beauty and logic of linear programming.

3.1 Some Important Formulas


In this section, we use our knowledge of matrices to show how an LP’s opti-
mal tableau can be expressed in terms of the LP’s parameters. The formulas
developed in this section are used in our study of sensitivity analysis and duality.
Assume that we are solving a max problem that has m constraints and n
variables. Although some of these variables may be slack, excess, or artificial,
we choose to label them x1 , x2 , · · · , xn . Then the LP may be written as

max z = c1 x1 + c2 x2 + · · · + cn xn
s.t. a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
.. (3.1)
.
am1 x1 + am2 x2 + · · · + amn xn = bm
x1 , x2 , · · · , xn ≥ 0

Suppose we have found the optimal solution to (3.1). We define:

Definition 3.1.

1. BV is the set of basic variables in the optimal tableau.

2. XBV is the m × 1 vector of basic variables in the optimal tableau.

3. N BV is the set of nonbasic variables in the optimal tableau.

4. XN BV is the (n − m) × 1 vector of nonbasic variables in the optimal tableau.

5. CBV is the 1 × m row vector that contains the coefficients of the basic variables in the initial tableau.

6. CN BV is the 1 × (n − m) row vector that contains the coefficients of the nonbasic variables in the initial tableau.

7. The m × m matrix B is the matrix whose jth column is the column of BVj in the initial tableau.

8. aj is the column (in the constraints) of the variable xj in the initial tableau.

9. N is the m × (n − m) matrix whose columns are the coefficients of the nonbasic variables in the initial tableau.

10. b is the m × 1 column vector that contains the right-hand sides of the constraints in the initial tableau.

So, the LP in (3.1) can be written as

max z = CBV XBV + CN BV XN BV


s.t. BXBV + NXN BV = b (3.2)
XBV , XN BV ≥ 0

and the initial tableau has the form


Basic   BV      N BV        RHS
z       −CBV    −CN BV      0
XBV     Bm×m    Nm×(n−m)    b

Formulas for Computing the Optimal Tableau


1. Right-hand side of the optimal tableau’s constraints: b̄ = B−1 b.

2. Right-hand side of the optimal row 0: z̄ = CBV B−1 b = CBV b̄.

3. xj column in the optimal tableau’s constraints: āj = B−1 aj . In general, the XN BV columns in the optimal tableau’s constraints are N̄ = B−1 N, and the XBV columns in the optimal tableau’s constraints form Im .

4. (a) c̄j = CBV B−1 aj − cj = CBV āj − cj .
   (b) c̄si = ith element of CBV B−1 .
   (c) c̄ei = −(ith element of CBV B−1 ).
   (d) In a maximization LP, c̄ai = (ith element of CBV B−1 ) + M .
   (e) In a minimization LP, c̄ai = (ith element of CBV B−1 ) − M .
   (f) In general, C̄N BV = CBV B−1 N − CN BV = CBV N̄ − CN BV .

The optimal tableau then has the form:

Basic   BV      N BV                  RHS
z       0       CBV B−1 N − CN BV     CBV B−1 b
XBV     Im×m    B−1 Nm×(n−m)          B−1 b

Formulas Derivation:
1. To express the constraints in any tableau in terms of B−1 and the
original LP, we observe that

BXBV + NXN BV = b

Multiplying both sides from the left by B−1 , we obtain

B−1 BXBV + B−1 NXN BV = B−1 b

which implies that

XBV + B−1 NXN BV = B−1 b

2. To determine the optimal tableau’s row 0 in terms of the initial LP, we rewrite the original objective function z = CBV XBV + CN BV XN BV as

z − CBV XBV − CN BV XN BV = 0 (3.3)

Also, we multiply the constraints expressed in the form BXBV +NXN BV =


b through by the vector CBV B−1 to obtain

CBV XBV + CBV B−1 NXN BV = CBV B−1 b (3.4)

By adding equation (3.3) to equation (3.4), we obtain

z + (CBV B−1 N − CN BV )XN BV = CBV B−1 b




Example 3.1. For the following LP, the optimal basis is BV = {x2 , s2 }. Com-
pute the optimal tableau.
max z = x1 + 4x2
s.t. x1 + 2x2 ≤ 6
2x1 + x2 ≤ 8
x1 , x2 ≥ 0
Solution: After adding slack variables s1 and s2 , the LP in standard form
max z = x1 + 4x2
s.t. x1 + 2x2 + s1 = 6
2x1 + x2 + s2 = 8
x1 , x2 , s1 , s2 ≥ 0
Since BV = {x2 , s2 } and N BV = {x1 , s1 }, we have

CBV = [4 0],   CN BV = [1 0],   b = [6]
                                    [8]

B = [2  0],   B−1 = [ 1/2  0],   N = [1  1]
    [1  1]          [−1/2  1]        [2  0]

So, the optimal tableau entries are

b̄ = B−1 b = [ 1/2  0][6] = [3]
            [−1/2  1][8]   [5]

N̄ = B−1 N = [ 1/2  0][1  1] = [1/2   1/2]
            [−1/2  1][2  0]   [3/2  −1/2]

C̄N BV = CBV N̄ − CN BV = [4 0][1/2   1/2] − [1 0] = [1  2]
                              [3/2  −1/2]

z̄ = CBV b̄ = [4 0][3] = 12.
                  [5]

and the optimal tableau is


Basic x1 x2 s1 s2 RHS
z 1 0 2 0 12
x2 1/2 1 1/2 0 3
s2 3/2 0 −1/2 1 5
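The matrix computations in Example 3.1 are easy to replay mechanically. Here is a minimal sketch in Python with exact rational arithmetic; the helper functions `matmul` and `inv2` are ours, not part of the notes:

```python
from fractions import Fraction as F

def matmul(A, B):
    # exact matrix product over rationals
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(B):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = B
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

# Data of Example 3.1: BV = {x2, s2}, NBV = {x1, s1}
B     = [[F(2), F(0)], [F(1), F(1)]]   # columns of x2 and s2
N     = [[F(1), F(1)], [F(2), F(0)]]   # columns of x1 and s1
b     = [[F(6)], [F(8)]]
c_BV  = [[F(4), F(0)]]                 # objective coefficients of x2, s2
c_NBV = [F(1), F(0)]                   # objective coefficients of x1, s1

Binv  = inv2(B)
b_bar = matmul(Binv, b)                # optimal RHS: B^-1 b
N_bar = matmul(Binv, N)                # nonbasic columns: B^-1 N
row0  = [v - c for v, c in zip(matmul(c_BV, N_bar)[0], c_NBV)]  # C_BV B^-1 N - C_NBV
z_bar = matmul(c_BV, b_bar)[0][0]      # optimal z-value: C_BV B^-1 b
```

Running this reproduces the tableau entries above: the RHS column (3, 5), the row-0 coefficients (1, 2) of the nonbasic variables, and z = 12.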

Example 3.2. For the following LP, the optimal basis is BV = {x2 , x4 }. Com-
pute the optimal tableau.
max z = x1 + 4x2 + 7x3 + 5x4
s.t. 2x1 + x2 + 2x3 + 4x4 = 10
3x1 − x2 − 2x3 + 6x4 = 5
x1 , x2 , x3 , x4 ≥ 0
Solution: Note that the constraints are in equation form, and no need to add
artificial variables here (we do not solve by simplex). Since BV = {x2 , x4 } and
N BV = {x1 , x3 }, we have

CBV = [4 5],   CN BV = [1 7],   b = [10]
                                    [ 5]

B = [ 1  4],   B−1 = [3/5   −2/5],   N = [2   2]
    [−1  6]          [1/10  1/10]        [3  −2]

So, the optimal tableau entries are

b̄ = B−1 b = [3/5   −2/5][10] = [ 4 ]
            [1/10  1/10][ 5]   [3/2]

N̄ = B−1 N = [3/5   −2/5][2   2] = [ 0   2]
            [1/10  1/10][3  −2]   [1/2  0]

C̄N BV = CBV N̄ − CN BV = [4 5][ 0   2] − [1 7] = [3/2  1]
                              [1/2  0]

z̄ = CBV b̄ = (4)(4) + (5)(3/2) = 47/2.

and the optimal tableau is


Basic x1 x2 x3 x4 RHS
z 3/2 0 1 0 47/2

x2 0 1 2 0 4
x4 1/2 0 0 1 3/2

Note 11. We have used the formulas of this section to create an LP’s optimal
tableau, but they can also be used to create the tableau for any set of basic
variables.

Exercise 3.1.

1. For the following LP, x1 and x2 are basic variables in the optimal tableau.
Use the formulas of matrices to determine the optimal tableau.

max z = 3x1 + x2
s.t. 2x1 − x2 ≤ 2
−x1 + x2 ≤ 4
x1 , x2 ≥ 0

2. For the following LP, x2 and s1 are basic variables in the optimal tableau.
Use the formulas of matrices to determine the optimal tableau.

max z = −x1 + x2
s.t. 2x1 + x2 ≤ 4
x1 + x2 ≤ 2
x1 , x2 ≥ 0

3. Consider the following LP model:

max z = 5x1 + 2x2 + 3x3


s.t. x1 + 5x2 + 2x3 ≤ b1
x1 − 5x2 − 6x3 ≤ b2
x1 , x2 , x3 ≥ 0

The following optimal tableau corresponds to specific values of b1 and b2 :

Basic x1 x2 x3 s1 s2 RHS
z 0 a 7 d e 150
x1 1 b 2 1 0 30
s2 0 c −8 −1 1 10

Determine the elements a, b, c, d, e, b1 and b2 .

4. For the following LP, x1 and x2 are basic variables in the optimal tableau.
Determine the optimal tableau using the laws of matrices.

min w = 50x1 + 100x2


s.t. 7x1 + 2x2 ≥ 28
2x1 + 12x2 ≥ 24
x1 , x2 ≥ 0

3.2 Sensitivity Analysis


We now explore how changes in an LP’s parameters (objective function co-
efficients, right-hand sides, and technological coefficients) change the optimal
solution. The study of how an LP’s optimal solution depends on its parameters
is called sensitivity analysis. Our discussion focuses on maximization problems
and relies heavily on the formulas of Section 3.1. (The modifications for min
problems are straightforward; see Exercise 3.2 at the end of this section.)
As in Section 3.1, we let BV be the set of basic variables in the optimal
tableau. Given a change (or changes) in an LP, we want to determine whether
BV remains optimal. The mechanics of sensitivity analysis hinge on the follow-
ing important observation. From Chapter 2, we know that a simplex tableau
(for a max problem) for a set of basic variables BV is optimal if and only if
each constraint has a nonnegative right-hand side and each variable has a non-
negative coefficient in row 0. This implies that whether a tableau is feasible
and optimal depends only on the right-hand sides of the constraints and on the
coefficients of each variable in row 0.
Suppose we have solved an LP and have found that BV is an optimal basis.
We can use the following procedure to determine if any change in the LP will
cause BV to be no longer optimal.

Step 1: Using the formulas of Section 3.1, determine how changes in the LP’s
parameters change the right-hand side and row 0 of the optimal tableau
(the tableau having BV as the set of basic variables).

Step 2: If each variable in row 0 has a non-negative coefficient and each con-
straint has a nonnegative right-hand side, then BV is still optimal. Oth-
erwise, BV is no longer optimal.

We will discuss how six types of changes in an LP’s parameters affect the optimal solution:

1. Changing the objective function coefficient of a nonbasic variable:


If the objective function coefficient for a nonbasic variable xj is changed,
the current basis remains optimal if c̄j ≥ 0. If c̄j < 0, then the current
basis is no longer optimal, and xj will be a basic variable in the new
optimal solution.

2. Changing the objective function coefficient of a basic variable:
If the objective function coefficient of a basic variable xj is changed, then
the current basis remains optimal if the coefficient of every variable in row
0 of the BV tableau remains nonnegative. If any variable in row 0 has a
negative coefficient, then the current basis is no longer optimal.

3. Changing the right-hand side of a constraint:


If the right-hand side of a constraint is changed, then the current basis
remains optimal if the right-hand side of each constraint in the tableau
remains nonnegative. If the right-hand side of any constraint is negative,
then the current basis is infeasible, and a new optimal solution must be
found.

4. Changing a column of a nonbasic variable:


If the column of a nonbasic variable xj is changed, then the current basis
remains optimal if c̄j ≥ 0. If c̄j < 0, then the current basis is no longer
optimal and xj will be a basic variable in the new optimal solution. If
the column of a basic variable is changed, then it is usually difficult to
determine whether the current basis remains optimal. This is because
the change may affect both B and CBV and thus the entire row 0 and
the entire right-hand side of the optimal tableau. As always, the current
basis would remain optimal if and only if each variable has a nonnegative
coefficient in row 0 and each constraint has a nonnegative right-hand side.

5. Adding a new variable: If a new column (corresponding to a variable


xj ) is added to an LP, then the current basis remains optimal if c̄j ≥ 0. If
c̄j < 0, then the current basis is no longer optimal and xj will be a basic
variable in the new optimal solution.

6. Adding a new constraint. (see Section 3.8)

The following figure presents a summary of sensitivity analyses for a maximization problem. When applying the techniques of this section to a minimization problem, just remember that a tableau is optimal if and only if each variable has a nonpositive coefficient in row 0 and the right-hand side of each constraint is nonnegative.

Example 3.3. Consider the following LP:


max z = 60x1 + 30x2 + 20x3
s.t. 8x1 + 6x2 + x3 ≤ 48
     4x1 + 2x2 + (3/2)x3 ≤ 20
     2x1 + (3/2)x2 + (1/2)x3 ≤ 8
     x1 , x2 , x3 ≥ 0
After adding slack variables s1 , s2 , and s3 , the optimal tableau is:
Basic x1 x2 x3 s1 s2 s3 RHS
z 0 5 0 0 10 10 280
s1 0 −2 0 1 2 −8 24
x3 0 −2 1 0 2 −4 8
x1 1 5/4 0 0 −1/2 3/2 2

1. Suppose we change the objective function coefficient of x2 from 30 to


30 + ∆. For what values of ∆ will the current set of basic variables
remain optimal ?
Solution: From the optimal tableau we know that BV = {s1 , x3 , x1 }
and N BV = {x2 , s2 , s3 }, we have

CBV = [0 20 60],   CN BV = [30 0 0],   b = [48]
                                           [20]
                                           [ 8]

B = [1   1   8],   B−1 = [1    2   −8],   N = [ 6   0  0]
    [0  3/2  4]          [0    2   −4]        [ 2   1  0]
    [0  1/2  2]          [0  −1/2  3/2]       [3/2  0  1]

Because x2 is a nonbasic variable, CBV has not changed. Thus, BV will remain optimal if

C̄N BV = CBV B−1 N − CN BV
       = [0 10 10] N − [30 + ∆  0  0]        (since CBV B−1 = [0 10 10])
       = [35  10  10] − [30 + ∆  0  0]
       = [5 − ∆  10  10] ≥ 0

∴ ∆ ≤ 5

2. Suppose we change the objective function coefficient of x1 from 60 to
60 + ∆. For what values of ∆ will the current set of basic variables
remain optimal ?
Solution: The BV will remain optimal if

C̄N BV = CBV B−1 N − CN BV
       = [0  20  60 + ∆] B−1 N − [30  0  0]
       = [5 + (5/4)∆   10 − (1/2)∆   10 + (3/2)∆] ≥ 0

∴ ∆ ∈ [−4, 20]

3. Suppose we change the right-hand-side of the second constraint from 20


to 20 + ∆. For what values of ∆ will the current set of basic variables
remain optimal ?
Solution: The BV will remain optimal if

b̄ = B−1 b = [1    2   −8][  48  ]   [ 24 + 2∆  ]
            [0    2   −4][20 + ∆] = [  8 + 2∆  ] ≥ 0
            [0  −1/2  3/2][  8  ]   [2 − (1/2)∆]

∴ ∆ ∈ [−4, 4]

 
4. Suppose we change the column for x2 : its objective coefficient changes from 30 to 43, and its constraint column changes from (6, 2, 3/2) to (5, 2, 2). Would this change the optimal solution to the problem?
Solution: BV will remain optimal if C̄N BV ≥ 0. With the new x2 column, CBV B−1 N = [40 10 10], so

C̄N BV = CBV B−1 N − CN BV = [40  10  10] − [43  0  0] = [−3  10  10] ≱ 0

∴ The current basis is no longer optimal.

5. Suppose we add a new activity x4 to the problem, and we add the column (15, 1, 1, 1) for x4 (objective coefficient 15, constraint column (1, 1, 1)). How will the addition of the new activity change the optimal tableau?
Solution: Because

c̄x4 = CBV B−1 a4 − cx4 = (0)(1) + (10)(1) + (10)(1) − 15 = 5 ≥ 0

∴ The current basis is still optimal.
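Several of the checks in Example 3.3 boil down to multiplying by the vector CBV B−1 . A small sketch with exact rationals (the variable names are ours; B−1 is the one given in the example):

```python
from fractions import Fraction as F

# Data from Example 3.3: BV = {s1, x3, x1}; B^-1 and C_BV as stated above
Binv = [[F(1), F(2),     F(-8)],
        [F(0), F(2),     F(-4)],
        [F(0), F(-1, 2), F(3, 2)]]
c_BV = [F(0), F(20), F(60)]

# the vector C_BV B^-1 (the simplex multipliers)
y = [sum(c_BV[i] * Binv[i][j] for i in range(3)) for j in range(3)]

# part 5: price out the proposed activity x4 (column (1, 1, 1), coefficient 15)
a4, c4 = [F(1), F(1), F(1)], F(15)
reduced_cost = sum(yi * ai for yi, ai in zip(y, a4)) - c4

# part 3: RHS after changing b2 from 20 to 24 (delta = 4, the edge of the range)
b_new = [F(48), F(24), F(8)]
rhs = [sum(Binv[i][j] * b_new[j] for j in range(3)) for i in range(3)]
```

This reproduces CBV B−1 = (0, 10, 10), the reduced cost 5 of the new activity, and the still-nonnegative (though degenerate) RHS (32, 16, 0) at ∆ = 4.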

Exercise 3.2. Dorian Auto manufactures luxury cars and trucks. The company
believes that its most likely customers are high-income women and men. To
reach these groups, Dorian Auto has embarked on an ambitious TV advertising
campaign and has decided to purchase 1-minute commercial spots on two types
of programs: comedy shows and football games. Each comedy commercial is
seen by 7 million high-income women and 2 million high-income men. Each
football commercial is seen by 2 million high-income women and 12 million
high-income men. A 1-minute comedy ad costs $50,000, and a 1-minute
football ad costs $100,000. Dorian would like the commercials to be seen by at
least 28 million high-income women and 24 million high-income men. Suppose

x1 = number of 1-minute comedy ads purchased,


x2 = number of 1-minute football ads purchased.

Then the formulation of this model is


min w = 50x1 + 100x2
s.t. 7x1 + 2x2 ≥ 28 (HIW)
2x1 + 12x2 ≥ 24 (HIM)
x1 , x2 ≥ 0
The optimal tableau is given in the table below. Remember that for a min
problem, a tableau is optimal if and only if each variable has a nonpositive
coefficient in row 0 and the right-hand side of each constraint is nonnegative.
Basic x1 x2 e1 e2 a1 a2 RHS
z 0 0 −5 −15/2 5 − M 15/2 − M 320
x1 1 0 −3/20 1/40 3/20 −1/40 18/5
x2 0 1 1/40 −7/80 −1/40 7/80 7/5

1. Find the range of values of the cost of a comedy ad (currently $50,000)
for which the current basis remains optimal.

2. Find the range of values of the number of required HIW exposures (cur-
rently 28 million) for which the current basis remains optimal. If 40 million
HIW exposures were required, what would be the new optimal solution?

3. Suppose an ad on a news program costs $110,000 and reaches 12 million


HIW and 7 million HIM. Should Dorian advertise on the news program?

3.3 Finding the Dual of an LP


Associated with any LP is another LP, called the dual. Knowing the relation
between an LP and its dual is vital to understanding advanced topics in linear
and nonlinear programming. This relation is important because it gives us
interesting economic insights. Knowledge of duality will also provide additional
insights into sensitivity analysis.
When taking the dual of a given LP, we refer to the given LP as the primal.
The two problems are closely related, in the sense that the optimal solution of
one problem automatically provides the optimal solution to the other. If the
primal is a max problem, then the dual will be a min problem, and vice versa.
We begin by explaining how to find the dual of a max problem in which all
variables are required to be nonnegative and all constraints are (≤) constraints
(called a normal max problem). A normal max problem may be written as

max z = c1 x1 + c2 x2 + · · · + cn xn
s.t. a11 x1 + a12 x2 + · · · + a1n xn ≤ b1
a21 x1 + a22 x2 + · · · + a2n xn ≤ b2
.. (3.5)
.
am1 x1 + am2 x2 + · · · + amn xn ≤ bm
x1 , x2 , · · · , xn ≥ 0

The dual of a normal max problem such as (3.5) is defined to be

min w = b1 y1 + b2 y2 + · · · + bm ym
s.t. a11 y1 + a21 y2 + · · · + am1 ym ≥ c1
a12 y1 + a22 y2 + · · · + am2 ym ≥ c2
.. (3.6)
.
a1n y1 + a2n y2 + · · · + amn ym ≥ cn
y1 , y2 , · · · , ym ≥ 0

A min problem such as (3.6) that has all (≥) constraints and all variables
nonnegative is called a normal min problem. If the primal is a normal min
problem such as (3.6), then we define the dual of (3.6) to be (3.5).

Finding the Dual:

1. If the primal is a maximization problem, the dual will be a minimization


problem, and vice versa. The dual of the dual problem yields the original
problem.

2. A dual variable is defined for each primal constraint equation. Also, a


dual constraint is defined for each primal variable.

3. The constraint-column coefficients of a primal variable define the left-hand-side coefficients of the corresponding dual constraint, and its objective coefficient defines the right-hand side of that constraint. The objective coefficients of the dual equal the right-hand sides of the primal constraints.

4. The variables and constraints in the primal and dual problems are related
as follows.

(max) ⇐⇒ (min)
Constraint Sign Variable Sign
≥ ⇐⇒ ≤0
≤ ⇐⇒ ≥0
= ⇐⇒ u.r.s
Variable Sign Constraint Sign
≤0 ⇐⇒ ≤
≥0 ⇐⇒ ≥
u.r.s ⇐⇒ =
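For the special case of a normal max problem, the recipe above amounts to transposing the constraint matrix and swapping the objective with the right-hand side. A minimal sketch (the function and the small LP below are ours, for illustration):

```python
def dual_of_normal_max(c, A, b):
    """Dual of: max c.x  s.t.  A x <= b, x >= 0.
    Returns the data of: min b.y  s.t.  A^T y >= c, y >= 0."""
    At = [list(col) for col in zip(*A)]   # transpose the constraint matrix
    return b, At, c                        # dual objective, dual matrix, dual RHS

# hypothetical primal: max 2x1 + x2
# s.t. x1 + 2x2 <= 4,  3x1 <= 6,  x2 <= 3,  x1, x2 >= 0
obj, At, rhs = dual_of_normal_max([2, 1], [[1, 2], [3, 0], [0, 1]], [4, 6, 3])
# dual: min 4y1 + 6y2 + 3y3  s.t.  y1 + 3y2 >= 2,  2y1 + y3 >= 1,  y >= 0
```

For constraints or variables of other types, the sign table above still has to be applied by hand; this sketch only covers the normal-max case.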

Example 3.4. Find the dual of the following LPs:

1. min w = x1 + 2x2
   s.t. 2x1 + x2 ≥ 4
        x1 − 2x2 ≥ 8
        x1 , x2 ≥ 0

2. max z = 5x1 + 12x2 + 4x3
   s.t. x1 + 2x2 + x3 ≤ 10
        2x1 − x2 + x3 = 8
        x1 , x2 , x3 ≥ 0

3. min w = 15x1 + 12x2
   s.t. x1 + 2x2 ≥ 3
        x1 − 4x2 ≤ 5
        x1 , x2 ≥ 0

4. max z = 5x1 + 6x2
   s.t. x1 + 2x2 = 5
        −x1 + 5x2 ≥ 3
        4x1 + 7x2 ≤ 8
        x1 urs, x2 ≥ 0

Solution:

1. max z = 4y1 + 8y2
   s.t. 2y1 + y2 ≤ 1
        y1 − 2y2 ≤ 2
        y1 , y2 ≥ 0

2. min w = 10y1 + 8y2
   s.t. y1 + 2y2 ≥ 5
        2y1 − y2 ≥ 12
        y1 + y2 ≥ 4
        y1 ≥ 0, y2 urs

3. max z = 3y1 + 5y2
   s.t. y1 + y2 ≤ 15
        2y1 − 4y2 ≤ 12
        y1 ≥ 0, y2 ≤ 0

4. min w = 5y1 + 3y2 + 8y3
   s.t. y1 − y2 + 4y3 = 5
        2y1 + 5y2 + 7y3 ≥ 6
        y1 urs, y2 ≤ 0, y3 ≥ 0

Exercise 3.3. Find the dual of the following LPs:

1. max z = 2x1 + x2
   s.t. −x1 + x2 ≤ 1
        x1 + x2 ≤ 3
        x1 − 2x2 ≤ 4
        x1 , x2 ≥ 0

2. min w = y1 − y2
   s.t. 2y1 + y2 ≥ 4
        y1 + y2 ≥ 1
        y1 + 2y2 ≥ 3
        y1 , y2 ≥ 0

3. max z = 4x1 − x2 + 2x3
   s.t. x1 + x2 ≤ 5
        2x1 + x2 ≤ 7
        2x2 + x3 ≥ 6
        x1 + x3 = 4
        x1 ≥ 0, x2 , x3 urs

4. min w = 4y1 + 2y2 − y3
   s.t. y1 + 2y2 ≤ 6
        y1 − y2 + 2y3 = 8
        y1 , y2 ≥ 0, y3 urs

3.4 The Dual Theorem and its Consequences


In this section, we discuss one of the most important results in linear program-
ming: the Dual Theorem. In essence, the Dual Theorem states that the primal
and dual have equal optimal objective function values (if the problems have
optimal solutions).
If we choose any feasible solution to the max LP and any feasible solution
to the min LP (one is primal and the other is dual), the value for the min LP
feasible solution will be at least as large as the value for the max LP feasible
solution. This result is formally stated in Lemma 3.1. Observe that the following two results say nothing about which problem is the primal and which is the dual; it is the objective function type, max or min, that matters here.

Lemma 3.1. The objective values in a pair of primal-dual problems must satisfy the following relationships:

1. For any pair of feasible primal and dual solutions,

   (objective value in MAX LP) ≤ (objective value in MIN LP)

2. At the optimal solution of both problems,

   (objective value in MAX LP) = (objective value in MIN LP)

Proof. Consider the primal LP

max z = CBV XBV + CN BV XN BV


s.t. AXN BV + IXBV = b (3.7)
XBV , XN BV ≥ 0

Then, the dual LP will be

min w = Yb
s.t. YA ≥ CN BV
(3.8)
YI ≥ CBV
Y urs

Multiply the constraint in (3.7) by Y from the left to obtain:

YAXN BV + YIXBV = Yb = w.

Also, multiply the first constraint in (3.8) by XN BV from the right, and the second constraint by XBV from the right, to obtain:

YAXN BV ≥ CN BV XN BV
YIXBV ≥ CBV XBV

By adding the two inequalities above, we have

YAXN BV + YIXBV ≥ CBV XBV + CN BV XN BV

that is, w ≥ z (min ≥ max).

Example 3.5. Consider the following pair of primal and dual problems.

Primal:                           Dual:
min w = 5x1 + 2x2                 max z = 3y1 + 5y2
s.t. x1 − x2 ≥ 3                  s.t. y1 + 2y2 ≤ 5
     2x1 + 3x2 ≥ 5                     −y1 + 3y2 ≤ 2
     x1 , x2 ≥ 0                       y1 , y2 ≥ 0
Feasible Solution:                Feasible Solution (Optimal):
x1 = 4, x2 = 1                    y1 = 5, y2 = 0
Objective Function: w = 22        Objective Function: z = 15
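The numbers in Example 3.5 illustrate Lemma 3.1 directly; a quick sanity check (the variable names are ours):

```python
# Weak duality check for Example 3.5: a feasible min-LP value always bounds
# a feasible max-LP value from above.
x1, x2 = 4, 1                  # feasible for the min primal
y1, y2 = 5, 0                  # feasible (indeed optimal) for the max dual

assert x1 - x2 >= 3 and 2 * x1 + 3 * x2 >= 5    # primal feasibility
assert y1 + 2 * y2 <= 5 and -y1 + 3 * y2 <= 2   # dual feasibility

w = 5 * x1 + 2 * x2            # min-LP objective: 22
z = 3 * y1 + 5 * y2            # max-LP objective: 15
```

Since w = 22 ≥ 15 = z, the pair is consistent with part 1 of the lemma; equality would hold only if both solutions were optimal.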

Theorem 3.2 (The Dual Theorem). Suppose BV is an optimal basis for the primal. Then y = CBV B−1 is an optimal solution to the dual. Also, z = w.
(For a proof, see: Wayne L. Winston and Munirpallam Venkataramanan, Introduction to Mathematical Programming, 4th ed., Thomson Learning, 2002.)

Example 3.6. The optimal solution of the following LP is z = 9 when x1 = 1 and x2 = 6. Find its dual problem, then find the solution for the dual problem.

max z = 3x1 + x2
s.t. 2x1 + x2 ≤ 8
     4x1 + x2 ≤ 10
     x1 , x2 ≥ 0

Solution: Since, in the optimal solution, BV = {x1 , x2 }, we have

CBV = [3 1],   B = [2  1],   B−1 = [−1/2  1/2]
                   [4  1]          [  2   −1 ]

Hence, y = [y1 y2 ] = CBV B−1 = [1/2 1/2] and w = z = 9. Note that the dual LP is

min w = 8y1 + 10y2


s.t. 2y1 + 4y2 ≥ 3
y1 + y2 ≥ 1
y1 , y2 ≥ 0
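The Dual Theorem computation y = CBV B−1 in Example 3.6 can be replayed numerically; a sketch with exact rationals (B−1 is taken from the example):

```python
from fractions import Fraction as F

# Example 3.6: y = C_BV B^-1 solves the dual (Dual Theorem)
c_BV = [F(3), F(1)]                # objective coefficients of x1, x2
Binv = [[F(-1, 2), F(1, 2)],       # inverse of B = [[2, 1], [4, 1]]
        [F(2),     F(-1)]]

y = [sum(c_BV[i] * Binv[i][j] for i in range(2)) for j in range(2)]
w = 8 * y[0] + 10 * y[1]           # dual objective b . y, equals the primal z
```

This gives y = (1/2, 1/2) and w = 9, matching the primal optimum as the theorem requires.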

How to Read the Optimal Dual Solution from Row 0 of the Optimal
Tableau ?
Constraint i Sign Optimal yi Value Problem Type
≤ Coefficient of si Max or Min
≥ −1× Coefficient of ei Max or Min
= Coefficient of ai − M Max
= Coefficient of ai + M Min

In general,

(optimal value of the dual variable yi ) = (optimal primal z-coefficient of the starting variable di ) + (original objective function coefficient of di )
Example 3.7. Consider the following LP.
max z = −2x1 − x2 + x3
s.t. x1 + x2 + x3 ≤ 3
x2 + x3 ≥ 2
x1 + x3 = 1
x1 , x2 , x3 ≥ 0
1. Find the dual of this LP.
Solution: The dual LP is

min w = 3y1 + 2y2 + y3


s.t. y1 + y3 ≥ −2
y1 + y2 ≥ −1
y1 + y2 + y3 ≥ 1
y1 ≥ 0, y2 ≤ 0, y3 urs

2. After adding slack variable s1 , subtracting excess variable e2 , and adding


artificial variables a2 and a3 , the Row 0 of the LP’s optimal tableau is
found to be
z + 4x1 + e2 + (M − 1)a2 + (M + 2)a3 = 0.
Find the optimal solution of the dual problem.
Solution: The starting primal variables s1 , a2 and a3 uniquely correspond
to the dual variables y1 , y2 and y3 , respectively. Thus, the optimum dual
solution is w = z = 0, and

Using the Table Using the General Formula


y1 = 0 y1 = 0 + 0 = 0
y2 = −1 y2 = −M + (M − 1) = −1
y3 = (M + 2) − M = 2 y3 = −M + (M + 2) = 2

3. Suppose we change the right-hand side of the third constraint from 1 to 2. What will the change in the z-value be? Assume the current basis remains optimal.
Solution: znew = CBV B−1 bnew = y bnew = (0)(3) + (−1)(2) + (2)(2) = 2.

4. Repeat part (3) if we change the right-hand side of the second constraint from 2 to 5.
Solution: znew = CBV B−1 bnew = y bnew = (0)(3) + (−1)(5) + (2)(1) = −3.
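The row-0 reading rules can be checked with concrete numbers. Here we fix M at a concrete value (our choice, purely for checkability) and apply the table to Example 3.7's row 0:

```python
# Reading Example 3.7's dual solution off row 0:
#   z + 4x1 + e2 + (M - 1)a2 + (M + 2)a3 = 0
M = 100
row0 = {"s1": 0, "e2": 1, "a2": M - 1, "a3": M + 2}   # z-row coefficients

y1 = row0["s1"]         # <= constraint: coefficient of the slack s1
y2 = -row0["e2"]        # >= constraint: -1 times the coefficient of e2
y3 = row0["a3"] - M     # = constraint in a max LP: coefficient of a3 minus M
```

Any sufficiently large M gives the same answer, since M cancels in the artificial-variable rule: the result is (y1, y2, y3) = (0, −1, 2), as computed in the example.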

Corollary 3.3. The primal problem is infeasible if and only if
the normal form of the dual problem is unbounded (and vice
versa).

Note 12. With regard to the primal and dual linear programming problems,
exactly one of the following statements is true:

1. Both possess optimal solutions.

2. One problem has an unbounded optimal objective value, in which case


the other problem must be infeasible.

3. Both problems are infeasible.

From this note we see that duality is not completely symmetric. The best we
can say is that (here optimal means having a finite optimum, and unbounded
means having an unbounded optimal objective value):

Primal Optimal ⇔ Dual Optimal


Primal (Dual) Unbounded ⇒ Dual (Primal) Infeasible
Primal (Dual) Infeasible ⇒ Dual (Primal) Unbounded or Infeasible
Primal (Dual) Infeasible ⇔ Dual (Primal) Unbounded in normal form

Note 13. The relationship between degeneracy and multiplicity of the primal
and the dual optimal solutions is formulated in Theorem 3.4. Recall that de-
generacy and multiplicity always refer to LP models with inequality constraints,
and that degeneracy is defined for basic feasible solutions. In this theorem,
the term nondegenerate in the expression “multiple and nondegenerate” means
that there are multiple optimal solutions, and that there exists an optimal basic
feasible solution that is nondegenerate.

Theorem 3.4. Duality relationships between degeneracy


and multiplicity.
For any pair of primal and dual standard LP-models where
both have optimal solutions, the following implications hold:
Primal optimal solution Dual optimal solution
Multiple ⇒ Degenerate
Unique and nondeg. ⇒ Unique and nondeg.
Multiple and nondeg. ⇒ Unique and degenerate
Unique and degenerate ⇒ Multiple

Exercise 3.4.
1. Find the optimal value of the objective function for the following LP using
its dual. (Do NOT solve the dual using the simplex algorithm)
min w = 10y1 + 4y2 + 5y3
s.t. 5y1 − 7y2 + 3y3 ≥ 50
y1 , y2 , y3 ≥ 0

2. Consider the following LP.


max z = 2x1 + 4x2 + 4x3 − 3x4
s.t. x1 + x2 + x3 = 4
     x1 + 4x2 + x4 = 8
x1 , x 2 , x 3 , x 4 ≥ 0
(a) Write the associated dual problem.
(b) Show that the basic solution x1 and x2 is not optimal.
(c) Using x3 and x4 as starting variables, the optimal tableau is given
below. Determine the dual optimal solution in TWO ways, using
the tableau.
Basic x1 x2 x3 x4 RHS
z 2 0 0 3 16
x3 3/4 0 1 −1/4 2
x2 1/4 1 0 1/4 2
3. For the following LP,
max z = −x1 + 5x2
s.t. x1 + 2x2 ≤ 0.5
−x1 + 3x2 ≤ 0.5
x1 , x 2 ≥ 0

row 0 of the optimal tableau is z + 0.4s1 + 1.4s2 = ? . Determine the optimal z-value for the given LP.
4. Consider the following linear programming problem:
max z = 4x1 + x2
s.t. 3x1 + 2x2 ≤ 6
6x1 + 3x2 ≤ 10
x1 , x 2 ≥ 0
Suppose that in solving this problem, row 0 of the optimal tableau is found to be z + 2x2 + s2 = 20/3. Use the Dual Theorem to prove that the computations must be incorrect.

5. Solve the dual of the following problem, and then find its optimal solu-
tion from the solution of the dual. Does the solution of the dual offer
computational advantages over solving the primal directly?

min w = 50x1 + 60x2 + 30x3


s.t. 5 x1 + 5x2 + 3 x3 ≥ 50
x1 + x2 − x3 ≥ 20
7 x1 + 6x2 − 9 x3 ≥ 30
5 x1 + 5x2 + 5 x3 ≥ 35
2 x1 + 4x2 − 15x3 ≥ 10
12x1 + 10x2 ≥ 90
x2 − 10x3 ≥ 20
x1 , x 2 , x 3 ≥ 0

6. Consider the following LP:

max z = 5x1 + 2x2 + 3x3


s.t. x1 + 5x2 + 2x3 = 15
x1 − 5x2 − 6x3 ≤ 20
x1 , x 2 , x 3 ≥ 0

Given that the artificial variable a1 and the slack variable s2 form the
starting basic variables and that M was set equal to 100 when solving the
problem, the optimal tableau is given as:

Basic x1 x2 x3 a1 s2 RHS
z 0 23 7 105 0 75
x1 1 5 2 1 0 15
s2 0 −10 −8 −1 1 5

Write the associated dual problem, and determine its optimal solution in
two ways.

3.5 Shadow Prices
It is often important for managers to determine how a change in a constraint’s
right-hand side changes the LP’s optimal z−value.

Definition 3.2. The shadow price of the ith constraint is the


amount by which the optimal value of the objective function is
improved (improved means increased in max LP and decreased
in min LP) if we increase bi (the RHS of that constraint) by
1 (from bi to bi + 1).

Note 14.

1. The previous definition assumes that after the RHS of constraint i has
been changed to bi + 1, the current basis remains optimal.

2. The shadow price of the ith constraint of a max LP is the optimal value
of the ith dual variable y i . Also, the shadow price of the ith constraint
of a min LP is −1× the optimal value of the ith dual variable y i . To see this,

z new = Yb = y 1 b1 + · · · + y i (bi + 1) + · · · + y m bm
      = (y 1 b1 + · · · + y i bi + · · · + y m bm ) + y i
      = z old + y i

∴ z new − z old = y i

3. In max LP, the shadow price for a (≤) constraint is nonnegative, for a
(≥) is nonpositive, and for (=) is urs. Also, in min LP, the shadow price
for a (≤) constraint is nonpositive, for a (≥) is nonnegative, and for (=)
is urs.

4. In general, if the RHS of the ith constraint is increased by an amount


∆bi , then

z new = z old + (∆bi ) y i if the LP is max


z new = z old − (∆bi ) y i if the LP is min

Example 3.8. Consider the following LP.
max z = 15x1 + 25x2
s.t. 3x1 + 4x2 ≤ 100
2x1 + 3x2 ≤ 70
x1 + 2x2 ≤ 30
x2 ≥ 3
x1 , x 2 ≥ 0
The optimal solution of the problem is z = 435, when x1 = 24 and x2 = 3,
where the Row 0 in the optimal tableau (after adding slack variables s1 , s2 , s3 to
the first three constraints respectively and subtracting excess variable e4 from
the last constraint then adding to it an artificial variable a4 ) is
z + 15s3 + 5e4 + (M − 5)a4 = 435.
1. Find the shadow price of each constraint.
Solution: The shadow price of each constraint is the optimal value of the
corresponding dual variable of each constraint. So,
y1 = 0 , y2 = 0 , y3 = 15 , y4 = −5

2. Assuming the current basis remains optimal, what would the change in the z-value be if the RHS of the

(a) 3rd constraint were changed from 30 to 35?
Solution: z new = yb = (0)(100) + (0)(70) + (15)(35) + (−5)(3) = 510.

(b) 4th constraint were changed from 3 to 2?
Solution: z new = yb = (0)(100) + (0)(70) + (15)(30) + (−5)(2) = 440.
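The shadow-price computations in Example 3.8 reduce to one dot product, znew = y · bnew ; a minimal sketch:

```python
# Example 3.8: shadow prices y and the effect of RHS changes (z_new = y . b_new),
# valid as long as the current basis stays optimal
y = [0, 0, 15, -5]          # shadow prices of the four constraints

def z_new(b):
    # dot product of the shadow-price vector with the new RHS vector
    return sum(yi * bi for yi, bi in zip(y, b))

za = z_new([100, 70, 35, 3])   # part (a): b3 changed from 30 to 35
zb = z_new([100, 70, 30, 2])   # part (b): b4 changed from 3 to 2
```

This reproduces the two answers above, 510 and 440. Note the caveat in the example: the formula is only valid while the basis remains optimal, which must be checked separately (e.g. via the RHS ranging of Section 3.2).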
Exercise 3.5. Sugarco can manufacture three types of candy bar. Each candy
bar consists totally of sugar and chocolate. The compositions of each type of
candy bar and the profit earned from each candy bar are shown in the table
below.
Amount of Amount of Profit
Bar Sugar (Ounces) Chocolate (Ounces) (Cents)
1 1 2 3
2 1 3 7
3 1 1 5

Fifty oz of sugar and 100 oz of chocolate are available. After defining xi to
be the number of Type i candy bars manufactured, Sugarco should solve the
following LP:

max z = 3x1 + 7x2 + 5x3


s.t. x1 + x2 + x3 ≤ 50
2x1 + 3x2 + x3 ≤ 100
x1 , x 2 , x 3 ≥ 0

After adding slack variables s1 and s2 , the optimal tableau is as shown in the
table below.
Basic x1 x2 x3 s1 s2 RHS
z 3 0 0 4 1 300
x3 1/2 0 1 3/2 −1/2 25
x2 1/2 1 0 −1/2 1/2 25
Using this optimal tableau, answer the following questions:

1. Find the shadow prices for the Sugarco problem.

2. If 60 oz of sugar were available, what would be Sugarco’s profit?

3.6 Duality and Sensitivity Analysis


From the Dual Theorem, we can demonstrate the following: assuming that a set of basic variables BV is feasible, BV is optimal if and only if the associated dual solution CBV B−1 is dual feasible. (Recall that "unbounded" implies the other problem is infeasible, so "feasible" implies "bounded".)

This result can be used for an alternative way of doing the following types of
sensitivity analysis:

1. Changing the objective function coefficient of a nonbasic variable.

2. Changing a column of a nonbasic variable.

3. Adding a new variable.

Since primal optimality and dual feasibility are equivalent, the above changes will
leave the current basic optimal if and only if the current dual solution CBV B−1
remains dual feasible.

Example 3.9. Consider the following LP.

max z = 60x1 + 30x2 + 20x3
s.t. 8x1 + 6x2 + x3 ≤ 48
     4x1 + 2x2 + (3/2)x3 ≤ 20
     2x1 + (3/2)x2 + (1/2)x3 ≤ 8
     x1 , x2 , x3 ≥ 0

The dual of the problem is:

min w = 48y1 + 20y2 + 8y3
s.t. 8y1 + 4y2 + 2y3 ≥ 60
     6y1 + 2y2 + (3/2)y3 ≥ 30
     y1 + (3/2)y2 + (1/2)y3 ≥ 20
     y1 , y2 , y3 ≥ 0

The optimal solution for the primal was z = 280, with basic variables s1 = 24, x3 = 8, x1 = 2 and nonbasic variables x2 = s2 = s3 = 0. Also, the optimal dual solution (the constraint shadow prices) is y1 = 0, y2 = 10, y3 = 10.

1. Let c2 be the coefficient of x2 in the objective function. For what values of c2 will the current basis remain optimal?
Solution: If y1 = 0, y2 = 10, y3 = 10 remains dual feasible, then the current basis and the values of all the variables are unchanged. Note that if the objective function coefficient for x2 is changed, then the first and third dual constraints remain unchanged, but the second dual constraint is changed to 6y1 + 2y2 + (3/2)y3 ≥ c2 . Thus, the current basis remains optimal if c2 satisfies 6(0) + 2(10) + (3/2)(10) ≥ c2 , or c2 ≤ 35.
 
2. Suppose we change the column for x2 from (30; 6, 2, 3/2) to (43; 5, 2, 2)
   (objective coefficient first, then the three constraint-column entries).
   Does the current basis remain optimal?
Solution: Changing the column for the nonbasic variable leaves the first
and third dual constraints unchanged but changes the second to

5y1 + 2y2 + 2y3 ≥ 43

Because y1 = 0, y2 = 10, y3 = 10 does not satisfy the new second dual
constraint, dual feasibility is not maintained, and the current basis is no
longer optimal.
3. Supposewe add new activity x4 to the problem, and we add of the x4
15
1
column 
 1  to the problem. Does the current basis remain optimal?

1
Solution: Introducing the new activity leaves the three dual constraints
unchanged, but the new variable x4 adds a new dual constraint. The new
dual constraint will be y1 + y2 + y3 ≥ 15. Because 0 + 10 + 10 ≥ 15, the
current basis remains optimal.
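Each of the three checks above amounts to pricing a column with the optimal dual solution; a minimal NumPy sketch, using the dual prices y = (0, 10, 10) found above:

```python
import numpy as np

y = np.array([0.0, 10.0, 10.0])   # optimal dual prices from Example 3.9

# 1. The basis stays optimal while 6y1 + 2y2 + (3/2)y3 >= c2:
print(y @ [6.0, 2.0, 1.5])        # 35.0 -> basis remains optimal for c2 <= 35

# 2. New x2 column (5, 2, 2) with objective coefficient 43:
print(y @ [5.0, 2.0, 2.0] >= 43)  # False -> basis no longer optimal

# 3. New activity x4 with column (1, 1, 1) and objective coefficient 15:
print(y @ [1.0, 1.0, 1.0] >= 15)  # True -> basis remains optimal
```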
Exercise 3.6.
1. The following questions refer to the Sugarco problem (Exercise 3.5):
(a) For what values of profit on a Type 1 candy bar does the current
basis remain optimal?
(b) If a Type 1 candy bar used 0.5 oz of sugar and 0.75 oz of chocolate,
would the current basis remain optimal?
(c) A Type 4 candy bar is under consideration. A Type 4 candy bar
yields a 10¢ profit and uses 2 oz of sugar and 1 oz of chocolate.
Does the current basis remain optimal?
2. Consider the following LP and its optimal tableau:
max z = 5x1 + x2 + 2x3
s.t. x1 + x2 + x3 ≤ 6
     6x1 + x3 ≤ 8
     x2 + x3 ≤ 2
     x1 , x2 , x3 ≥ 0

Basic x1 x2 x3 s1 s2 s3 RHS
z 0 1/6 0 0 5/6 1/6 9
s1 0 1/6 0 1 −1/6 −5/6 3
x1 1 −1/6 0 0 1/6 −1/6 1
x3 0 1 1 0 0 1 2

(a) Find the dual to this LP and its optimal solution.


(b) Find the range of values of the objective function coefficient of x2
for which the current basis remains optimal.
(c) Find the range of values of the objective function coefficient of x1
for which the current basis remains optimal.

3.7 Complementary Slackness
The Theorem of Complementary Slackness is an important result that relates
the optimal primal and dual solutions. To state this theorem, we assume that
the primal is a normal max problem with variables x1 , x2 , · · · , xn and m
(≤) constraints. Then the dual is a normal min problem with variables
y1 , y2 , · · · , ym and n (≥) constraints.

max z = c1 x1 + · · · + cn xn min w = b1 y1 + · · · + bm ym
s.t. a11 x1 + · · · + a1n xn ≤ b1 s.t. a11 y1 + · · · + am1 ym ≥ c1
a21 x1 + · · · + a2n xn ≤ b2 a12 y1 + · · · + am2 ym ≥ c2
.. .. .. .. .. ..
. . . . . .
am1 x1 + · · · + amn xn ≤ bm a1n y1 + · · · + amn ym ≥ cn
xi ≥ 0, ∀i = 1, 2, · · · , n yj ≥ 0, ∀j = 1, 2, · · · , m

Let s1 , s2 , · · · , sm be the slack variables for the primal, and e1 , e2 , · · · , en


be the excess variables for the dual.

max z = c1 x1 + · · · + cn xn min w = b1 y1 + · · · + bm ym
s.t. a11 x1 + · · · + a1n xn + s1 = b1 s.t. a11 y1 + · · · + am1 ym − e1 = c1
a21 x1 + · · · + a2n xn + s2 = b2 a12 y1 + · · · + am2 ym − e2 = c2
.. .. .. .. .. ..
. . . . . .
am1 x1 + · · · + amn xn + sm = bm a1n y1 + · · · + amn ym − en = cn
xi ≥ 0, ∀i = 1, 2, · · · , n yj ≥ 0, ∀j = 1, 2, · · · , m
sj ≥ 0, ∀j = 1, 2, · · · , m ei ≥ 0, ∀i = 1, 2, · · · , n

 
Theorem 3.5. Let x = (x1 , . . . , xn )T be a feasible primal solution
and y = (y1 , . . . , ym ) be a feasible dual solution. Then x is
primal optimal and y is dual optimal (z = w) if and only if

xi ei = 0 , ∀i = 1, 2, · · · , n
yj sj = 0 , ∀j = 1, 2, · · · , m

In other words, if a constraint in either the primal or dual is
nonbinding (sj > 0 or ei > 0), then the corresponding (comple-
mentary) variable in the other problem must equal 0.

Proof. Multiply each constraint in the primal in standard form by its corre-
sponding (complementary) dual variable:

a11 x1 y1 + · · · + a1n xn y1 + s1 y1 = b1 y1
a21 x1 y2 + · · · + a2n xn y2 + s2 y2 = b2 y2
.. .. ..
. . .
am1 x1 ym + · · · + amn xn ym + sm ym = bm ym

Add all the constraints above together:

(a11 x1 y1 + · · · + a1n xn y1 + · · ·
+am1 x1 ym + · · · + amn xn ym )
+ (s1 y1 + · · · + sm ym ) (3.9)
= b1 y1 + · · · + bm ym
=w

Now, multiply each constraint in the dual in standard form by its corresponding
(complementary) primal variable:

a11 y1 x1 + · · · + am1 ym x1 − e1 x1 = c1 x1
a12 y1 x2 + · · · + am2 ym x2 − e2 x2 = c2 x2
.. .. ..
. . .
a1n y1 xn + · · · + amn ym xn − en xn = cn xn

Add all the constraints above together:

(a11 y1 x1 + · · · + am1 ym x1 + · · ·
+a1n y1 xn + · · · + amn ym xn )
− (e1 x1 + · · · + en xn ) (3.10)
= c1 x1 + · · · + cn xn
=z

Subtract equation (3.10) from equation (3.9) to obtain:

s1 y1 + s2 y2 + · · · + sm ym + e1 x1 + e2 x2 + · · · + en xn = w − z (3.11)

If x is primal optimal and y is dual optimal, then z = w, and then

s1 y1 + s2 y2 + · · · + sm ym + e1 x1 + e2 x2 + · · · + en xn = 0

Since all the x’s, y’s, s’s, and e’s are nonnegative, then

xi ei = 0 , ∀i = 1, 2, · · · , n
yj sj = 0 , ∀j = 1, 2, · · · , m

Also, if

xi ei = 0 , ∀i = 1, 2, · · · , n
yj sj = 0 , ∀j = 1, 2, · · · , m

then w − z = 0, and hence z = w. So x is primal optimal and y is dual
optimal. □

Example 3.10. Consider the following LP.

max z = 5x1 + 3x2 + x3


s.t. 2x1 + x2 + x3 ≤ 6
x1 + 2x2 + x3 ≤ 7
x1 , x 2 , x 3 ≥ 0

The optimal solution to the problem is z = 49/3, x1 = 5/3, x2 = 8/3, and
x3 = s1 = s2 = 0. Use the complementary slackness theorem to find the
optimal dual solution of the problem.
Solution: The dual LP is

min w = 6y1 + 7y2


s.t. 2y1 + y2 ≥ 5
y1 + 2y2 ≥ 3
y1 + y2 ≥ 1
y1 , y 2 ≥ 0

Because x1 > 0 and x2 > 0 then the optimal dual solution must have e1 = 0
and e2 = 0. This means that for the optimal dual solution, the first and second
constraints must be binding. So we know that the optimal values of y1 and
y2 may be found by solving the first and second dual constraints as equalities.
Thus, the optimal values of y1 and y2 must satisfy

2y1 + y2 = 5
y1 + 2y2 = 3

Solving these equations simultaneously shows that the optimal dual solution
must have y1 = 7/3 and y2 = 1/3, with w = z = 49/3.

Exercise 3.7. Consider the following LP.
max z = 3x1 + 4x2 + x3 + 5x4
s.t. x1 + 2x2 + x3 + 2x4 ≤ 5
2x1 + 3x2 + x3 + 3x4 ≤ 8
x1 , x 2 , x 3 , x 4 ≥ 0
The optimal solution to the problem is z = 13, x1 = 1, x2 = x3 = 0, and
x4 = 2. Use the complementary slackness theorem to find the optimal dual
solution of the problem.

3.8 The Dual-Simplex Method


In the simplex algorithm presented in Chapter 2 the problem starts at a (basic)
feasible solution. Successive iterations continue to be feasible until the optimal
is reached at the last iteration. The algorithm is sometimes referred to as the
primal simplex method.
This section presents two additional algorithms: The dual simplex and the
generalized simplex.
• In the dual simplex, the LP starts at a better-than-optimal infeasible
(basic) solution. Successive iterations remain infeasible and (better than)
optimal until feasibility is restored at the last iteration.
• The generalized simplex combines both the primal and dual simplex meth-
ods in one algorithm. It deals with problems that start both non-optimal
and infeasible. In this algorithm, successive iterations are associated with
feasible or infeasible basic solutions. At the final iteration, the solution
becomes optimal and feasible (assuming that one exists).

Dual Simplex Algorithm: The crux of the dual simplex method is to start
with a better than optimal and infeasible basic solution. The optimality and
feasibility conditions are designed to preserve the optimality of the basic solutions
while moving the solution iterations toward feasibility.
• To start the LP optimal and infeasible, two requirements must be met:
  – The objective function must satisfy the optimality condition of the
    regular simplex method.
  – All the constraints must be of type (≤), regardless of whether the
    problem is a max or a min. This condition requires converting any
    (≥) constraint to (≤) simply by multiplying both sides of the
    inequality by −1. If the LP includes (=) constraints, each equation
    can be replaced by two inequalities. For example, x1 + x2 = 1 is
    equivalent to x1 + x2 ≤ 1 and x1 + x2 ≥ 1, i.e., to x1 + x2 ≤ 1
    and −x1 − x2 ≤ −1.

• After converting all the constraints to (≤), the starting solution is infea-
sible if at least one of the right-hand sides of the inequalities is strictly
negative.

• Dual feasibility condition. The leaving variable xr is the basic variable
having the most negative value (ties are broken arbitrarily). If all the basic
variables are nonnegative, the algorithm ends.

• Dual optimality condition. Given that xr is the leaving variable, let
cj be the reduced cost of nonbasic variable xj and arj the constraint
coefficient in the xr -row and xj -column of the tableau. The entering
variable is the nonbasic variable with arj < 0 that attains

    min over nonbasic xj of { cj /arj : arj < 0 }

Note that

1. Ties are broken arbitrarily.


2. If arj ≥ 0 for all nonbasic xj , the problem has no feasible solution.

Example 3.11. Use the dual-simplex algorithm to solve the following LP
problem.

min w = 5x1 + 6x2


s.t. x1 + x2 ≥ 2
4x1 − x2 ≥ 4
x1 , x 2 ≥ 0

Solution: After converting all the constraints to (≤), then adding slack variables
s1 and s2 to the constraints, the LP in standard form is

min w − 5x1 − 6x2 = 0


s.t. − x1 − x2 + s1 = −2
−4x1 + x2 + s2 = −4
x1 , x 2 , s 1 , s 2 ≥ 0

The initial tableau and all following tableaus, using the dual-simplex algorithm,
are shown below.

Iteration [0] Basic x1 x2 s1 s2 RHS Optimal
w −5 −6 0 0 0 but not
s1 −1 −1 1 0 −2 feasible
← s2 −4 1 0 1 −4


Iteration [1] Basic x1 x2 s1 s2 RHS Optimal
w 0 −29/4 0 −5/4 5 but not
← s1 0 −5/4 1 −1/4 −1 feasible
x1 1 −1/4 0 −1/4 1

Iteration [2] Basic x1 x2 s1 s2 RHS Optimal


w 0 −1 −5 0 10 and
s2 0 5 −4 1 4 feasible
x1 1 1 −1 0 2
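The pivoting rules used above are mechanical enough to sketch in code. The following minimal tableau-based routine (a sketch only: no anti-cycling safeguards or input validation; the ratio test uses |cj /arj |, which agrees with the signed ratio here) reproduces the iterations of Example 3.11:

```python
import numpy as np

def dual_simplex(T, basis):
    """Dual simplex on a tableau T: row 0 is the objective row, the last
    column is the RHS, and `basis` holds the column index of the basic
    variable of each constraint row.  Assumes the objective row already
    satisfies the usual optimality condition."""
    while True:
        rhs = T[1:, -1]
        if np.all(rhs >= -1e-9):
            return T, basis                    # feasible, hence optimal
        r = 1 + int(np.argmin(rhs))            # leaving row: most negative RHS
        neg = np.where(T[r, :-1] < -1e-9)[0]
        if neg.size == 0:
            raise ValueError("no entering variable: the LP is infeasible")
        # entering column: minimum ratio |c_j / a_rj| over a_rj < 0
        j = neg[int(np.argmin(np.abs(T[0, neg] / T[r, neg])))]
        T[r] /= T[r, j]                        # pivot on (r, j)
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, j] * T[r]
        basis[r - 1] = j

# Standard form of Example 3.11 (columns x1, x2, s1, s2, RHS):
T = np.array([[-5.0, -6.0, 0.0, 0.0,  0.0],   # w - 5x1 - 6x2 = 0
              [-1.0, -1.0, 1.0, 0.0, -2.0],
              [-4.0,  1.0, 0.0, 1.0, -4.0]])
T_opt, basis = dual_simplex(T, [2, 3])
print(T_opt[0, -1])                            # 10.0 -> optimal w = 10
```

The routine reaches the same final basis (s2 and x1) and the same optimal value w = 10 as the hand iterations above.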
Example 3.12. Use the dual-simplex algorithm to solve the following LP
problem.

max z = −4x1 − 2x2


s.t. x1 + x2 = 1
− 3x1 + x2 ≥ 2
x1 , x 2 ≥ 0

Solution: After converting all the constraints to (≤), then adding slack variables
s1 , s2 and s3 to the constraints, the LP in standard form is

max z + 4x1 + 2x2 = 0


s.t. x1 + x2 + s1 = 1
− x1 − x2 + s2 = −1
3x1 − x2 + s3 = −2
x1 , x2 , s1 , s2 , s3 ≥ 0

The initial tableau and all following tableaus, using the dual-simplex algorithm,
are shown below.

Iteration [0] Basic x1 x2 s1 s2 s3 RHS Optimal
z 4 2 0 0 0 0 but not
s1 1 1 1 0 0 1 feasible
s2 −1 −1 0 1 0 −1
← s3 3 −1 0 0 1 −2

Iteration [1] Basic x1 x2 s1 s2 s3 RHS Since s1 leaves
z 10 0 0 0 2 −4 with no entering
← s1 4 0 1 0 1 −1 variable, the
s2 −4 0 0 1 −1 1 LP is infeasible
x2 −3 1 0 0 −1 2

Note 15. The dual simplex method is often used to find the new optimal
solution to an LP after a constraint is added. When a constraint is added, one
of the following three cases will occur:

1. The current optimal solution satisfies the new constraint.

2. The current optimal solution does not satisfy the new constraint, but the
LP still has a feasible solution.

3. The additional constraint causes the LP to have no feasible solutions.

Example 3.13. Consider the following LP and its optimal tableau.

max z = 6x1 + x2 Basic x1 x2 s1 s2 RHS


s.t. x1 + x2 ≤5 z 0 2 0 3 18
2 x1 + x2 ≤6 s1 0 1/2 1 −1/2 2
x1 , x 2 ≥ 0 x1 1 1/2 0 1/2 3
Find the optimal solution to this LP if we add the constraint

1. 3x1 + x2 ≤ 10.
Solution: After converting the constraint to (=) by adding s3 to the
constraint, we obtain:

Basic x1 x2 s1 s2 s3 RHS
z 0 2 0 3 0 18
s1 0 1/2 1 −1/2 0 2
x1 1 1/2 0 1/2 0 3
s3 3 1 0 0 1 10

Basic x1 x2 s1 s2 s3 RHS The solution


z 0 2 0 3 0 18 is optimal
s1 0 1/2 1 −1/2 0 2 and feasible
x1 1 1/2 0 1/2 0 3
s3 0 −1/2 0 −3/2 1 1

2. x1 − 2x2 ≥ 6.
Solution: After converting the constraint to (≤) then adding s3 to the
constraint, we obtain:

Basic x1 x2 s1 s2 s3 RHS
z 0 2 0 3 0 18
s1 0 1/2 1 −1/2 0 2
x1 1 1/2 0 1/2 0 3
s3 −1 2 0 0 1 −6

Basic x1 x2 s1 s2 s3 RHS Since s3 leaves
z 0 2 0 3 0 18 with no entering
s1 0 1/2 1 −1/2 0 2 variable, the
x1 1 1/2 0 1/2 0 3 LP is infeasible
s3 0 5/2 0 1/2 1 −3

3. 8x1 + x2 ≤ 12.
Solution: After converting the constraint to (=) by adding s3 to the
constraint, we obtain:

Iteration [0] Basic x1 x2 s1 s2 s3 RHS


z 0 2 0 3 0 18
s1 0 1/2 1 −1/2 0 2
x1 1 1/2 0 1/2 0 3
s3 8 1 0 0 1 12

Iteration [0] Basic x1 x2 s1 s2 s3 RHS The solution
z 0 2 0 3 0 18 is optimal
s1 0 1/2 1 −1/2 0 2 but infeasible
x1 1 1/2 0 1/2 0 3
← s3 0 −3 0 −4 1 −12

Iteration [1] Basic x1 x2 s1 s2 s3 RHS The solution


z 0 0 0 1/3 2/3 10 is optimal
s1 0 0 1 −7/3 1/6 0 and feasible
x1 1 0 0 −1/6 1/6 1
x2 0 1 0 4/3 −1/3 4
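The reoptimized solution in case 3 can be cross-checked with an off-the-shelf solver; a sketch using scipy.optimize.linprog (which minimizes, so the max objective is negated):

```python
from scipy.optimize import linprog

# Example 3.13 with the extra constraint 8x1 + x2 <= 12.
res = linprog(c=[-6, -1],
              A_ub=[[1, 1], [2, 1], [8, 1]],
              b_ub=[5, 6, 12],
              bounds=[(0, None), (0, None)])
print(round(-res.fun, 6), res.x.round(6))   # 10.0 [1. 4.]
```

The solver agrees with the dual-simplex reoptimization: z = 10 at x1 = 1, x2 = 4.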

Generalized Simplex Algorithm: The (primal) simplex algorithm starts fea-


sible but not optimal. The dual simplex starts (better than) optimal but infeasi-
ble. What if an LP model starts both not optimal and infeasible? The following
example illustrates what we call the generalized simplex algorithm for solving
LP problems with this situation.

Example 3.14. Consider the following LP.

max z = x1 − 3x2
s.t. x1 − x2 ≤ 2
x1 + x2 ≥ 4
2x1 + 2x2 ≥ 3
x1 , x 2 ≥ 0

The model can be put in the following tableau form in which the starting basic
solution (s1 , s2 , s3 ) is both non-optimal (because x1 has a negative reduced
cost) and infeasible (because s2 = −4, s3 = −3).

Iteration [0] Basic x1 x2 s1 s2 s3 RHS The solution
z −1 3 0 0 0 0 is not optimal
s1 1 −1 1 0 0 2 and infeasible
← s2 −1 −1 0 1 0 −4
s3 −2 −2 0 0 1 −3
Remove infeasibility first by applying a version of the dual simplex feasibility
condition that selects s2 as the leaving variable. To determine the entering
variable, all we need is a nonbasic variable whose constraint coefficient in the
s2 -row is strictly negative. The selection can be done without regard to opti-
mality, because optimality is nonexistent at this point anyway. In the present
example, x1 and x2 have negative coefficients in the s2 -row, and x1 is selected
as the entering variable. The result is the following tableau:

Iteration [1] Basic x1 x2 s1 s2 s3 RHS The solution
z 0 4 0 −1 0 4 is not optimal
← s1 0 −2 1 1 0 −2 and infeasible
x1 1 1 0 −1 0 4
s3 0 0 0 −2 1 5
At this point, s1 leaves the basis; x2 has a negative coefficient in the
s1 -row and is selected as the entering variable. The result is the following
tableau:

Iteration [2] Basic x1 x2 s1 s2 s3 RHS The solution


z 0 0 2 1 0 0 is optimal
x2 0 1 −1/2 −1/2 0 1 and feasible
x1 1 0 1/2 −1/2 0 3
s3 0 0 0 −2 1 5
The solution in the preceding tableau is now feasible and fortunately optimal.
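As a cross-check of Example 3.14, the same LP can be handed to scipy.optimize.linprog (which minimizes, so the objective is negated and each (≥) constraint is multiplied by −1 to become (≤)):

```python
from scipy.optimize import linprog

# Example 3.14: max z = x1 - 3x2, s.t. x1 - x2 <= 2,
# x1 + x2 >= 4, 2x1 + 2x2 >= 3, x1, x2 >= 0.
res = linprog(c=[-1, 3],
              A_ub=[[1, -1], [-1, -1], [-2, -2]],
              b_ub=[2, -4, -3],
              bounds=[(0, None), (0, None)])
print(res.x.round(6))   # [3. 1.], matching the final tableau (z = 0)
```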

Dual Simplex with Artificial Constraints: In Example 3.14, the dual sim-
plex is not applicable directly, because x1 does not satisfy the maximization
optimality condition. By adding the artificial constraint x1 ≤ M (where M is
sufficiently large so as not to eliminate any feasible points of the original solu-
tion space) and then using the new constraint as a pivot row, the selection of
x1 as the entering variable (because it has the most negative objective coeffi-
cient) renders an all-optimal objective row. Next, carry out the regular dual
simplex method on the modified problem. All the iterations are shown below.


Iteration [0] Basic x1 x2 s1 s2 s3 s4 RHS
z −1 3 0 0 0 0 0
s1 1 −1 1 0 0 0 2
s2 −1 −1 0 1 0 0 −4
s3 −2 −2 0 0 1 0 −3
← s4 1 0 0 0 0 1 M

Iteration [1] Basic x1 x2 s1 s2 s3 s4 RHS
z 0 3 0 0 0 1 M
← s1 0 −1 1 0 0 −1 −M + 2
s2 0 −1 0 1 0 1 M −4
s3 0 −2 0 0 1 2 2M − 3
x1 1 0 0 0 0 1 M

Iteration [2] Basic x1 x2 s1 s2 s3 s4 RHS
z 0 2 1 0 0 0 2
s4 0 1 −1 0 0 1 M −2
← s2 0 −2 1 1 0 0 −2
s3 0 −4 2 0 1 0 −1
x1 1 −1 1 0 0 0 2

Iteration [3] Basic x1 x2 s1 s2 s3 s4 RHS


z 0 0 2 1 0 0 0 The solution
s4 0 0 −1/2 1/2 0 1 M −3 is optimal
x2 0 1 −1/2 −1/2 0 0 1 and feasible
s3 0 0 0 −2 1 0 3
x1 1 0 1/2 −1/2 0 0 3

Exercise 3.8.

1. Solve the following LP

max z = 2x1 − x2 + x3
s.t. 2 x1 + 4x2 − x3 ≥ 4
−x1 + x2 − x3 ≥ 2
x1 + 2x2 + 2x3 ≤ 8
x1 , x 2 , x 3 ≥ 0

by using

(a) the generalized simplex algorithm,


(b) the dual simplex algorithm after adding the constraint x1 + x3 ≤ M .

2. Use the dual simplex method to solve the following LP:

max z = −2x1 − x3
s.t. x1 + x2 − x3 ≥ 5
x1 − 2x2 + 4x3 ≥ 8
x1 , x 2 , x 3 ≥ 0

3. Solve the following LP in three different ways. Which method appears to


be the most efficient computationally?

min w = 6x1 + 7x2 + 3x3 + 5x4


s.t. 5x1 + 6x2 − 3x3 + 4x4 ≥ 12
x2 − 5x3 − 6x4 ≥ 10
2x1 + 5x2 + x3 + x4 ≥ 8
x1 , x 2 , x 3 , x 4 ≥ 0

4. Instead of solving a problem using all of its constraints, we can start by


identifying the so called secondary constraints. These are the constraints
that we suspect are least restrictive in terms of the optimum solution.
The model is solved using the remaining (primary) constraints. We may
then add the secondary constraints one at a time. A secondary constraint
is discarded if it is satisfied by the available optimum. The process is repeated
until all the secondary constraints are accounted for. Apply the proposed
procedure to the following LP:

max z = 5x1 + 6x2 + 3x3


s.t. 5 x1 + 5x2 + 3x3 ≤ 50
x1 + x2 − x3 ≤ 20
7 x1 + 6x2 − 9x3 ≤ 30
5 x1 + 5x2 + 5x3 ≤ 35
12x1 + 6x2 ≤ 90
x2 − 9x3 ≤ 20
x1 , x 2 , x 3 ≥ 0
[Hint: Start with the first, third, and fourth constraints. The associated
solution is x1 = 0, x2 = 6.2, x3 = 0.8. This solution automatically
satisfies the second, fifth, and sixth constraints. Hence, these constraints
are discarded as redundant, and the optimal solution for the problem is
x1 = 0, x2 = 6.2, x3 = 0.8, z = 39.6.]

Answers

DON’T EVEN DARE PEEK AT THE SOLUTIONS TO AN EXERCISE UNTIL


YOU’VE GENUINELY TRIED TO SOLVE THE EXERCISE !!

Chapter 1 Exercise 1.1

1. Let x1 = number of desks produced per day, x2 = number of chairs


produced per day. Then, the formulation of the problem is

max z = 50x1 + 100x2

s.t. (1/200)x1 + (1/80)x2 ≤ 1
     (1/150)x1 + (1/110)x2 ≤ 1
     x1 ≤ 120
     x2 ≤ 60
     x1 ≥ 0, x2 ≥ 0

2. Let x1 = number of Type 1 Trucks produced daily, x2 = number of Type


2 Trucks produced daily. Then, the formulation of the problem is

max z = 300x1 + 500x2

s.t. (1/800)x1 + (1/700)x2 ≤ 1
     (1/500)x1 + (1/200)x2 ≤ 1
     x1 ≥ 0, x2 ≥ 0

3. Let x1 = number of hours spent working, x2 = number of hours spent


playing. Then, the formulation of the problem is

max z = x1 + 5x2
s.t. x1 − 3x2 ≥ 0
x1 + x2 ≤ 8
x1 ≥ 0, x2 ≥ 0

4. Let x1 = number of hours of Process 1, x2 = number of hours of Process


2. Then, the formulation of the problem is

min w = 4x1 + x2
s.t. 3x1 + x2 ≥ 10
x1 + x2 ≥ 5
x1 ≥ 3
x1 ≥ 0, x2 ≥ 0

5. Let x1 = number of units of A, x2 = number of units of B. Then, the
formulation of the problem is

max z = 20x1 + 50x2

s.t. x1 /(x1 + x2 ) ≥ 0.80
     x1 ≤ 100
     2x1 + 4x2 ≤ 240
     x1 ≥ 0, x2 ≥ 0

6. Let x1 = number of rectangular tables, x2 = number of round tables.


Then, the formulation of the problem is

min w = 28x1 + 52x2


s.t. 6x1 + 10x2 ≥ 250
x1 + x2 ≤ 35
x1 ≤ 15
x1 ≥ 0, x2 ≥ 0

7. Let x1 = number of units of food F1 to be eaten, x2 = number of units


of food F2 to be eaten. Then, the formulation of the problem is

min w = 0.05x1 + 0.03x2


s.t. 2x1 + x2 ≥ 400
x1 + 2x2 ≥ 500
4x1 + 4x2 ≥ 1400
x1 ≥ 0, x2 ≥ 0

8. Let x1 = number of appliances shipped to terminal A, x2 = number of
   appliances shipped to terminal B. Then, the formulation of the problem
   is
is

min w = 12x1 + 16x2


s.t. x1 + x2 ≤ 1200
x1 ≥ 400
x2 ≥ 500
x1 ≥ 0, x2 ≥ 0

9. Let x1 = pounds of food A purchased, x2 = pounds of food B purchased.


Then, the formulation of the problem is

min w = 1.3x1 + 0.8x2


s.t. 5x1 + 2x2 ≥ 60
3x1 + 2x2 ≥ 45
4x1 + x2 ≥ 30
x1 ≥ 0, x2 ≥ 0

Chapter 1 Exercise 1.2

1.

2. Hyperplane: (a), (e). Half-Space: (b). Neither: (c), (d), (f)

3. Convex: (A), (B), (F), (H)



4. (a) (1 + t, 3 + t, 2 − 3t), 0 ≤ t ≤ 1
   (b) (1 + t, 3 + t, 2 − 3t) = (1.5, 3.5, 0.5) for t = 0.5 ∈ [0, 1]

Chapter 1 Exercise 1.3

1. (a) IV (b) II (c) I (d) III

2. If there are > and/or < constraints, then a problem may have no optimal
   solution. Consider the problem max z = x subject to x < 1. Clearly, this
   problem has no optimal solution (there is no largest number smaller than
   1!!).

3. (a) 16 (b) 55 (c) 84 (d) 90

4. (a) 32 (b) 55 (c) 36 (d) 32

5. (a) 2a < b    (b) a/3 < b < 2a    (c) b < a/3    (d) b = 2a    (e) b = a/3
6. (a) south-east (down-right) (c) north-west (up-left)
(b) south-west (down-left)

7. (a) north-west (up-left) (b) south-east (down-right)

8. The constraints x + y ≤ 6 and x + y ≤ 4 are redundant.

9. The redundant constraint is y ≤ 3

−x + y ≤ 1
x+y ≤5
x − 2y ≤ 2
y≤3
x≥1
x, y ≥ 0

10.

11. No feasible region

12. x1 = 2, x2 = 3 by solving the two constraint-equations.

13. (a) w = 12, x1 = 3, x2 = 0 (d) z = 4, x1 = 0, x2 = 6


(b) unbounded optimal solution (e) z = 16, x1 = 6, x2 = 2
(c) w = 6, x1 = 0, x2 = 6 (f) w = 14, x1 = 3, x2 = 2

Chapter 2 Exercise 2.1

1. max z = 2x1 + 3x2 + 5y3 + 5y4


s.t. −x1 − x2 + y3 + y4 + s1 = 5
−6x1 + 7x2 − 9y3 − 9y4 + s2 = 4
x1 + x2 + 4y3 + 4y4 = 10
x1 , x2 , y3 , y4 , s1 , s2 ≥ 0

2. 22x1 − 4x2 ≥ −7 ⇒ 22x1 − 4x2 − e1 = −7 ⇒ −22x1 + 4x2 + e1 = 7


22x1 − 4x2 ≥ −7 ⇒ −22x1 + 4x2 ≤ 7 ⇒ −22x1 + 4x2 + s1 = 7

3. x y1 y2
−6 0 6
10 10 0
0 0 6

Chapter 2 Exercise 2.2

1. (a) max z = 2x1 + 3x2


s.t. x1 + 3x2 + s1 = 12
3x1 + 2x2 + s2 = 12
x1 , x2 , s1 , s2 ≥ 0

(b) NBVs BVs values feasible? z−value
x1 , x2 s1 , s2 (12, 12) yes 0
x2 , s2 s1 , x1 (8, 4) yes 8
x1 , s2 s1 , x2 (−6, 6) no
s1 , x2 s2 , x1 (−24, 12) no
x1 , s1 s2 , x2 (4, 4) yes 12
s1 , s2 x1 , x2 (12/7, 24/7) yes 96/7

(c) from the table above, the optimum solution is z = 96/7, x1 = 12/7,
x2 = 24/7.
(d) the solution is left to the student
(e) the solution is left to the student

2. (a) the solution is left to the student


(b) BVs values feasible? w−value
x1 , x3 (4, 0) yes 4
x1 , x4 (4, 0) yes 4
x2 , x3 (2, 0) yes 4
x2 , x4 (2, 0) yes 4
x3 , x4 (−4/7, 16/7) no

3. BVs values feasible? z−value


s1 , e2 (3/2, −8) no
s1 , x1 (−1, 4) no
s1 , x2 (−13, 8) no
e2 , x1 (−5, 3) no
e2 , x2 (−13/2, 3/2) no
x1 , x2 (13/3, −2/3) no

4. (a) BVs values feasible? z−value


s1 , s2 (2, 4) yes 0
s1 , x1 (4, −2) yes −2
s1 , x2 (−2, 4) no
s2 , x1 (8, 2) yes 2
s2 , x2 (2, 2) yes 6
x1 , x2 (−2/3, 8/3) yes 22/3

(b) see the table above


(c) the solution is left to the student

Chapter 2 Exercise 2.3

1. (a) z = 32/3, x1 = 10/3, x2 = 4/3


(b) z = 25, x1 = 15, x2 = 5, x3 = 0
(c) z = 25, x1 = 25, x2 = 0

(d) z = 12, x1 = 4, x2 = 4, x3 = 4

2. BVs values feasible? z−value


x1 30 yes 150
x2 10 yes −60
x3 6 yes 18
x4 5 yes −25
x5 10 yes 120
s1 30 yes 0

Chapter 2 Exercise 2.4

1. w = −5, x1 = 0, x2 = 5

2. w = −2, x1 = 0, x2 = 2

3. w = −7.5, x1 = 0, x2 = 1.5

4. w = −9, x1 = 3, x2 = 0

5. w = −48, x1 = 0, x2 = 4, x3 = 0, x4 = 8

Chapter 2 Exercise 2.5

1. (a) w = 1, x1 = 0, x2 = 0, x3 = 1
(b) w = 2, x1 = 2, x2 = 0, x3 = 0
(c) w = 4, x1 = 2, x2 = 0
(d) z = 5, x1 = 1, x2 = 2

2. w = 214/7, x1 = 66/7, x2 = 8/7, x3 = 0, x4 = 0

3. z = 10, x1 = 4, x2 = 0, x3 = 2

Chapter 2 Exercise 2.6

1. (a) iterations 2 and 3 are degenerate, and degeneracy is removed in


iteration 4.
(b) the solution is left to the student
(c) 3 iterations

2. z = 4, x1 = 4, x2 = 0, x3 = 0

3. z = 10 when: x1 = 0, x2 = 0, x3 = 10/3
x1 = 0, x2 = 5, x3 = 0
x1 = 1, x2 = 4, x3 = 1/3

4. x3 and s1 can yield alternative optima, but because all their constraint
   coefficients are non-positive, none can yield an alternative basic solution.
Basic x1 x2 x3 s1 s2 RHS
z 0 0 0 0 1 20
x1 1 0 −2 −1 0 15
s2 0 1 −7 −2 1 10

5. the optimal solution is degenerate because s3 is basic and equals 0; it also
   has alternative (nonbasic) optima because s2 has a zero coefficient in the
   z-row and all its constraint coefficients are ≤ 0.
Basic x1 x2 x3 s1 s2 s3 RHS
z 0 5 0 3 0 0 15
x3 0 1 1 1 −1 0 3
x1 1 2 0 1 0 0 5
s3 0 −6 0 −2 −5 1 0

6. (a) solution space unbounded in the direction of x3


(b) objective value is unbounded because each unit increase in x3
    increases z by 1.

7. because a2 = 0 in the optimal tableau, the problem has a feasible optimal
   solution: x1 = 0, x2 = 4, z = 8.
Basic x1 x2 x3 s1 e2 a2 RHS
z 5M − 1 0 2M − 1 M 4M + 2 0 8
x2 2 1 1 0 1 0 4
a2 −5 0 −2 −1 −4 1 0

8. (a) BVs: (x8 , x3 , x1 ) = (12, 6, 0); z = 620
   NBVs: (x2 , x4 , x5 , x6 , x7 ) = (0, 0, 0, 0, 0)
   (b) x2 enters: x2 = min(12/3, 6/1, −) = 4, so x8 leaves with ∆z = 5 × 4 = 20.
   x5 enters: x5 = min(−, 6/1, 0/6) = 0, so x1 leaves with ∆z = 1 × 0 = 0.
   x6 enters: x6 = min(−, −, −) = ∞, so no variable leaves and x6 can be
   increased to ∞ with ∆z = ∞.
   (c) the solution is left to the student
   (d) x5 and x7

9. (a) c ≤ 0, b ≥ 0
(b) c = 0, b ≥ 0, a2 > 0 and/or a3 > 0. If only a3 > 0 then b > 0
(c) c > 0, a2 ≤ 0, a3 ≤ 0

10. (a) b ≥ 0 is necessary.
    • If c1 = 0 and c2 ≥ 0 we can pivot in x1 to obtain an alternative
      optimum.
    • If c1 ≥ 0, c2 ≥ 0 and a2 > 0 we can pivot in x5 and obtain an
      alternative optimum.
    • If c2 = 0, a1 > 0 and c1 ≥ 0 we can pivot in x2 and obtain an
      alternative optimum.
    (b) b < 0
    (c) b = 0
    (d) b ≥ 0 makes the solution feasible. If c2 < 0 and a1 ≤ 0 we can
        make x2 as large as desired and obtain an unbounded solution.
    (e) b ≥ 0 makes the current basic solution feasible. For x6 to replace x1
        we need c1 < 0 (this ensures that increasing x1 will increase z) and
        we need Row 3 to win the ratio test for x1 . This requires 3/a3 ≤ b/4.
11. the solution is left to the student

Chapter 3 Exercise 3.1

1. Basic x1 x2 s1 s2 RHS
z 0 0 4 5 28
x1 1 0 1 1 6
x2 0 1 1 2 10

2. Basic x1 x2 s1 s2 RHS
z 2 0 0 1 2
x2 1 1 0 1 2
s1 1 0 1 −1 2
3. the solution is left to the student
4. Basic x1 x2 e1 e2 RHS
z 0 0 −5 −15/2 3800
x1 1 0 −3/20 1/40 18/5
x2 0 1 1/40 −7/80 7/5

Chapter 3 Exercise 3.2


1. −100/3 ≤ ∆ ≤ 300 million
2. −24 ≤ ∆ ≤ 56 million, and if ∆ = 40 million then znew = 520 million
 
3. by adding the activity x3 = (110, 12, 7)T , the current solution is not
   optimal

Chapter 3 Exercise 3.3

1. min w = y1 + 3y2 + 4y3
   s.t. −y1 + y2 + y3 ≥ 2
        y1 + y2 − 2y3 ≥ 1
        y1 , y2 , y3 ≥ 0

2. max z = 4x1 + x2 + 3x3
   s.t. 2x1 + x2 + x3 ≤ 1
        x1 + x2 + x3 ≤ −1
        x1 , x2 , x3 ≥ 0

3. min w = 5y1 + 7y2 + 6y3 + 4y4
   s.t. y1 + 2y2 + y4 ≥ 4
        y1 + y2 + 2y3 = −1
        y3 + y4 = 2
        y1 , y2 ≥ 0, y3 ≤ 0, y4 urs

4. max z = 6x1 + 8x2
   s.t. x1 + x2 ≤ 4
        2x1 − x2 ≤ 2
        2x2 = −1
        x1 ≤ 0, x2 urs

Chapter 3 Exercise 3.4

1. w = 250/3.

2. (a) the solution is left to the student


(b) C̄NBV = CBV B−1 N − CNBV = (−2, 7/2)

(c) • y = CBV B−1 = (4, 0) = (y1 , y2 )
    • Using the general formula: y1 = 0 + 4 = 4 and y2 = 3 + (−3) = 0

3. z = CBV B−1 b = (0.4, 1.4)(0.5, 0.5)T = 0.9, where (0.4, 1.4) = y

4. since y = (0, 1), then w = 10 ≠ 20/3

5. x1 = 0, x2 = 20, x3 = 0, z = 1200

6. the solution is left to the student

Chapter 3 Exercise 3.5

1. y1 = 4, y2 = 1

2. znew = zold + 10 × 4 = 340

Chapter 3 Exercise 3.6

1. (a) dual constraint for x1 is y1 + 2y2 ≥ c1 . since y1 = 4 and y2 = 1 the


current basis is still optimal if c1 ≤ 6
(b) the dual constraint for a Type 1 Candy Bar is now y1 /2 + 3y2 /4 ≥ 3.
    since y1 = 4 and y2 = 1 does not satisfy this constraint, the current
    basis is no longer optimal

(c) the dual constraint for a Type 4 Candy Bar is 2y1 + y2 ≥ 10. since
y1 = 4 and y2 = 1 does not satisfy this constraint, the current basis
is no longer optimal. the new optimal solution would make Type 4
Candy Bars.

2. (a) the solution is left to the student


(b) c2 ≤ 1/6
(c) c1 ≤ 5/6

Chapter 3 Exercise 3.7

y1 = 1, y2 = 1, w = 13

Chapter 3 Exercise 3.8

1. (a) x1 = 4/3, x2 = 10/3, x3 = 0, z = 2/3


(b) the solution is left to the student

2. x1 = 0, x2 = 14, x3 = 9, z = −9

3. x1 = 0, x2 = 10, x3 = 0, x4 = 0, w = 70

4. the solution is left to the student
