LINEAR PROGRAMMING PROBLEMS
2.1 Introduction
Linear programming is the branch of applied mathematics that deals with solving optimization problems in which a linear function of the decision variables is maximized or minimized. The constraints are usually in the form of linear inequalities in the decision variables used in the objective function. Linear programming is a relatively recent mathematical discipline. The Russian mathematician Leonid V. Kantorovich is credited with formulating the first linear programming problem. The simplex algorithm was developed by the American mathematician George B. Dantzig in 1947, and the theory of duality was developed by John von Neumann in the same year. It was also Dantzig who coined the name "Linear Programming" for this class of problems. In 1972, Klee and Minty showed that the simplex method has exponential worst-case computational complexity. In 1979, Khachiyan showed that linear programming problems can be solved in polynomial time.
What is common among all the algorithms mentioned above is that they are all iterative and therefore sequential; that is, every subsequent solution depends on the one before it. The development in this thesis is a search that is neither sequential nor restricted to the surface of the polytope representing the set of feasible solutions of the linear programming problem. This development is motivated by the need for a speedy algorithm, since each of the classical techniques often requires considerable computation.
2.2 Linear Programming Problems in Canonical Form
Linear programming problems can be expressed in the canonical form. The canon-
ical form of a linear programming problem is
Maximize C′X (2.1)
subject to AX ≤ b, (2.2)
and X ≥ 0, (2.3)

where C and X are n-vectors, b is an m-vector, and A is an m × n matrix. Here (2.1) is called the objective function, (2.2) the constraints and (2.3) the non-negativity conditions.
In this section we state some standard definitions and some of the important
characteristics of a solution to a linear programming problem formulated in the
canonical form.
• Feasible solution set. The set of all points X which satisfy all the constraints and also the non-negativity conditions is called the feasible solution set of the linear programming problem.
• Optimal solution. An optimal solution X∗ is a feasible solution such that

C′X∗ = max {C′X : AX ≤ b, X ≥ 0} . (2.4)
• Convex set. A set S is said to be convex set if, given any two points in
the set, the line joining those two points is completely contained in the set.
Mathematically, S is a convex set if for any two vectors X(1) and X(2) in S, the vector X = λX(1) + (1 − λ)X(2) is also in S for any real number λ ∈ [0, 1] [55].
Figures 1 and 2 represent convex sets, whereas the set in Figure 3 is not convex.
• Extreme point. An extreme point of a convex set is a feasible point that cannot lie on a line segment joining any two distinct feasible points in the set; in fact, extreme points are the same as corner points [61].
• Linearity of the objective function results in convexity of the objective func-
tion.
Linear programming problems can be solved by several methods, such as the graphical method, the systematic trial-and-error method (enumeration method) and the simplex method.
The graphical method is not applicable in more than two dimensions. The enumeration method is a naive approach: to solve a linear programming problem (which has an optimal solution), it generates all possible extreme points and determines which one of them gives the best objective function value. The simplex method, which is a well-known and widely used method for solving linear programming problems, does this in a more efficient manner by examining only a fraction of the extreme points. Its main ideas are:

1. Begin at an extreme point of the feasible solution set.

2. Never examine extreme points of the feasible solution set whose objective function values are worse than the present one. This makes the procedure more efficient than the enumeration method.
3. Continue to find better extreme points of the feasible solution set, improving the objective function value at each step.

4. When a particular extreme point of the feasible solution set cannot be improved upon, it is optimal and the procedure terminates.
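For contrast with these simplex ideas, the enumeration method mentioned above can be sketched in a few lines. The implementation below is an illustration only (not the thesis's code): every extreme point is obtained by intersecting n of the constraint hyperplanes, and the example data are a small assumed problem in canonical form.

```python
# Sketch of the enumeration method: visit every intersection of n constraint
# hyperplanes, keep the feasible ones (the extreme points), and report the
# best objective value. Example data are assumed for illustration.
from itertools import combinations
import numpy as np

def enumerate_optimum(c, A, b):
    n = len(c)
    # Stack the structural constraints Ax <= b with the bounds -x_j <= 0.
    G = np.vstack([A, -np.eye(n)])
    h = np.concatenate([b, np.zeros(n)])
    best_x, best_z = None, -np.inf
    for rows in combinations(range(len(G)), n):
        M, v = G[list(rows)], h[list(rows)]
        if abs(np.linalg.det(M)) < 1e-9:
            continue                          # hyperplanes meet in no single point
        x = np.linalg.solve(M, v)
        if np.all(G @ x <= h + 1e-9):         # keep only feasible intersections
            z = float(c @ x)
            if z > best_z:
                best_x, best_z = x, z
    return best_x, best_z

x, z = enumerate_optimum(np.array([6.0, 8.0]),
                         np.array([[5.0, 10.0], [4.0, 4.0]]),
                         np.array([60.0, 40.0]))
print(x, z)
```

Every candidate basis is examined, so the work grows combinatorially with the number of constraints; this is exactly why the simplex method's restriction to improving neighbours matters.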
The various steps of the simplex method can be carried out in a more compact manner by using a tableau form to represent the constraints and the objective function.
Consider, for example, the LP problem: maximize z = 6x1 + 8x2, subject to 5x1 + 10x2 ≤ 60, 4x1 + 4x2 ≤ 40, x1, x2 ≥ 0. The standard form of this problem is shown below.
maximize z = 6x1 + 8x2 + 0s1 + 0s2
subject to
5x1 + 10x2 + s1 = 60,
4x1 + 4x2 + s2 = 40,
x1 , x2 , s1 , s2 ≥ 0,
where s1 and s2 are slack variables, which are introduced to balance the constraints. Each slack variable has unit coefficient in one of the constraints and zero coefficient in the remaining constraints [53].
If all the constraints are "≤", then the standard form can be treated as the canonical form. The canonical form is generally used to prepare the initial simplex tableau [53]. The initial simplex tableau of the above problem is shown in Table 2.1.
Here, cj is the coefficient of the j th term of the objective function and CBi is the
coefficient of the ith basic variable. The value at the intersection of the key column
and the key row is called key element. The value of zj is computed using the
following formula.
zj = Σ_{i=1}^{2} (CBi)(aij),

where aij is the coefficient in the ith row and jth column of the table, and cj − zj is the criterion row value for column j. For a maximization problem, if all cj − zj are less than or equal to zero, then optimality is reached; otherwise select the variable with the maximum cj − zj value as the entering variable. (For a minimization problem, if all cj − zj are greater than or equal to zero, optimality is reached; otherwise select the variable with the most negative value as the entering variable.)
In Table 2.1, the values of cj − zj are all greater than or equal to zero, and some are strictly positive; hence, the solution can be improved further. cj − zj is maximum for the variable x2, so x2 enters the basis; it is known as the entering variable, and the corresponding column is called the key column. The leaving variable is then determined as follows:
1. In each row, find the ratio between the solution column value and the
value in the key column.
2. Then, select the basic variable whose row attains the minimum ratio (breaking ties arbitrarily). This variable is the leaving variable, and the corresponding row is called the key row. The value at the intersection of the key row and the key column is called the key element or pivot element.
In Table 2.1, the leaving variable is s1 and row 1 is the key row; the key element is 10. The next iteration is shown in Table 2.2, in which the basic variable s1 of the previous table is replaced by x2. The formula to compute the new values of Table 2.2 is shown below:
New value = Old value − (Key column value × Key row value) ÷ Key element value
As a sample calculation, the computation of the new value in row 2 (the s2 row) and column x1 is shown below:
New value = 4 − (4 × 5) ÷ 10 = 4 − 20/10 = 4 − 2 = 2
Computing the cell values of the successive tables using this formula is a tedious process, so a more compact procedure can be used, as explained below.
1. Key row:

New key row value = Current key row value ÷ Key element value

2. Other rows:

New row value = Current row value − (key column value of that row × corresponding new key row value)

These computations are applied to Table 2.1 in the following manner:
The results are shown in Table 2.2. The solution in Table 2.2 is not yet optimal: the criterion row value for the variable x1 is the maximum positive value. Hence, the variable x1 is selected as the entering variable and, after computing the ratios, s2 is selected as the leaving variable.

In Table 2.3, all the values of cj − zj are either 0 (corresponding to basic variables) or negative, so the optimal solution has been reached:

x1 = 8 and x2 = 2, with z = 64.
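The tableau computations above can be collected into a short program. The following is an illustrative sketch (not the thesis's code) of the tableau simplex for maximization problems in canonical form with b ≥ 0, applied to the example just solved; it implements the zj row, the entering and leaving rules, and the new-value formula exactly as described, with no anticycling rule.

```python
# Illustrative tableau simplex for: maximize c'x s.t. Ax <= b (b >= 0), x >= 0.
# Slack variables supply the initial basis.
import numpy as np

def simplex(c, A, b):
    m, n = A.shape
    # Initial tableau [A | I | b]; the slack columns form the starting basis.
    T = np.hstack([A.astype(float), np.eye(m), b.reshape(-1, 1).astype(float)])
    cost = np.concatenate([c.astype(float), np.zeros(m)])
    basis = list(range(n, n + m))
    while True:
        z = cost[basis] @ T[:, :-1]              # z_j = sum_i C_Bi * a_ij
        reduced = cost - z                       # criterion row c_j - z_j
        j = int(np.argmax(reduced))
        if reduced[j] <= 1e-9:                   # all c_j - z_j <= 0: optimal
            break
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))               # minimum-ratio test -> key row
        T[i] /= T[i, j]                          # new key row = key row / key element
        for r in range(m):
            if r != i:                           # new value = old value
                T[r] -= T[r, j] * T[i]           #   - key-column value * new key row
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:, -1]
    return x[:n], float(cost[basis] @ T[:, -1])

x, z = simplex(np.array([6, 8]), np.array([[5, 10], [4, 4]]), np.array([60, 40]))
print(x, z)   # x1 = 8, x2 = 2, z = 64, matching the hand computation
```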
2.2.4 Revised simplex method
The simplex method discussed in section 2.2 performs calculations on the entire tableau during each iteration. However, updating all the elements in the tableau during a basis change is not really necessary. The only information needed in moving from one tableau (basic feasible solution) to another is:

• the criterion row values cj − zj of the nonbasic variables, which determine the entering variable;

• the updated column P̄j = B⁻¹Pj of the entering variable, needed for the minimum-ratio test;

• the current basic feasible solution xB = B⁻¹b.

The information contained in the other columns of the tableau plays no role in these computations, so the revised simplex method, which maintains only the quantities above, is an improvement over the simplex method. The revised simplex method is computationally more efficient and accurate; its benefit is most clearly seen for large LP problems. It uses exactly the same steps as the simplex method, but the entire tableau is never calculated at any iteration: the relevant information needed to move from one basic feasible solution to another is generated directly from the original equations. The steps of the revised simplex method are as follows.
Step 1. Obtain the initial feasible basis B. Determine the corresponding feasible solution xB = B⁻¹b.
Step 2. Obtain the corresponding simplex multipliers π = cB B⁻¹. Check the optimality of the current basic feasible solution (BFS). If the current basis is optimal, then STOP.
Step 3. If the current BFS is not optimal, identify the entering variable xj (that is, a variable with cj − zj = cj − Σ_{i=1}^{m} πi aij = cj − πPj < 0).
Step 4. Obtain the column P̄j = B⁻¹Pj and perform the minimum-ratio test to determine the leaving variable; then update the basis B and return to Step 2.
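The four steps can be sketched as follows. This is an illustration only, written for the maximization form used earlier in this chapter (so a column with positive cj − πPj enters), and B⁻¹ is recomputed from scratch each iteration for clarity, which a practical implementation would replace by an update of the inverse.

```python
# Illustrative revised simplex for: maximize c'x s.t. Ax <= b (b >= 0), x >= 0.
import numpy as np

def revised_simplex(c, A, b):
    m, n = A.shape
    Afull = np.hstack([A.astype(float), np.eye(m)])   # columns P_j, incl. slacks
    cfull = np.concatenate([c.astype(float), np.zeros(m)])
    basis = list(range(n, n + m))                     # Step 1: initial basis B
    while True:
        Binv = np.linalg.inv(Afull[:, basis])         # (recomputed for clarity)
        xB = Binv @ b.astype(float)                   # x_B = B^{-1} b
        pi = cfull[basis] @ Binv                      # Step 2: pi = c_B B^{-1}
        reduced = cfull - pi @ Afull                  # c_j - pi P_j, all columns
        j = int(np.argmax(reduced))
        if reduced[j] <= 1e-9:                        # Step 2: current BFS optimal
            x = np.zeros(n + m)
            x[basis] = xB
            return x[:n], float(cfull[basis] @ xB)
        Pbar = Binv @ Afull[:, j]                     # Step 4: B^{-1} P_j
        ratios = [xB[i] / Pbar[i] if Pbar[i] > 1e-9 else np.inf
                  for i in range(m)]
        basis[int(np.argmin(ratios))] = j             # minimum-ratio test

x, z = revised_simplex(np.array([6, 8]), np.array([[5, 10], [4, 4]]),
                       np.array([60, 40]))
print(x, z)
```

Only the multipliers π, the single updated column P̄j and the current xB are formed, which is precisely the information listed above.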
2.3 Duality Theory and Its Applications
From both the theoretical and practical point of view, the theory of duality is one
of the most important and interesting concepts in linear programming. The basic
idea behind the duality theory is that every linear programming problem has an associated linear programme, called its dual, such that a solution to the original linear programme also gives a solution to its dual. Thus, whenever a linear programme is solved by the simplex method, we actually obtain solutions to two linear programming problems. The Dual can be considered as the "inverse" of the Primal in every respect. The column coefficients in the Primal constraints become the row coefficients in the Dual constraints. The coefficients of the Primal objective function become the right-hand-side constants of the Dual constraints. The column of constants on the right-hand side of the Primal constraints becomes the row of coefficients of the Dual objective function. The direction of the inequalities is reversed. If the primal objective function is a "Maximization" function, then the dual objective function is a "Minimization" function, and vice-versa. The concept of duality is very useful for obtaining additional information about the variation in the optimal solution when certain changes are made in the constraint coefficients, resource availabilities and objective function coefficients. This is termed post-optimality or sensitivity analysis.
The concept of a dual is introduced by the following programme.
Example 2.1
The above linear programme has two constraints and four variables. In the dual problem, y1 and y2 are called the dual variables, and the original problem is called the primal problem. Comparing the primal and the dual problems, we observe the following relationships:
relationships:
1. The objective function coefficients of the primal have become the right-hand-side constants of the dual. Similarly, the right-hand-side constants of the primal have become the objective function coefficients of the dual.
4. Each column in the primal corresponds to a constraint (row) in the dual.
Thus, the number of dual constraints is equal to the number of primal vari-
ables.
In both the primal and the dual problems, the variables are nonnegative and the constraints are inequalities (in a maximization problem the inequalities must be in "less than or equal to" form, and in a minimization problem they must be in "greater than or equal to" form). Such problems are called symmetric dual linear programmes.
In matrix notation the symmetric dual linear programmes are:
Primal:

Maximize Z = C′X
subject to
AX ≤ b,
X ≥ 0.

Dual:

Minimize W = b′Y
subject to
A′Y ≥ C,
Y ≥ 0.
where A is an m × n matrix, b is an m × 1 column vector, C is an n × 1 column vector and Y is an m × 1 vector of dual variables. More general formulations of duality associate the sense of constraint i in the primal with the sign restriction for yi in the dual, and the sign restriction of xj in the primal with the sense of constraint j in the dual. Note that when these alternative definitions are allowed there are many ways to write the primal and dual problems; however, they are all equivalent.
Primal: Consider a manufacturer who produces n products from m resources, where producing one unit of product j consumes aij units of resource i and sells at the market price cj, and only bi units of resource i are available. The primal problem leads the manufacturer to find an optimal production plan that maximizes sales with the available resources.
Dual:
Let us assume the manufacturer obtains the resources from a supplier. The manufacturer wants to negotiate the unit purchase price yi for resource i with the supplier. Therefore, the manufacturer's objective is to minimize the total purchase price b′Y of the resources bi. Since the market price cj and the "product-resource" conversion ratio aij are public information in the market, the manufacturer knows that a "smart" supplier would like to charge him as much as possible, so that
a1j y1 + a2j y2 + · · · + amj ym ≥ cj .
In this way, the dual linear program leads the manufacturer to come up with
a least-cost plan in which the purchasing prices are acceptable to the “smart”
supplier.
An example gives more insight into the above economic interpretation. Let n = 3 (the products are Desk, Table and Chair) and m = 3 (the resources are Lumber, Finishing hours and Carpentry hours). The amount of each resource needed to make one unit of each type of furniture is as follows:
Resource           Desk     Table     Chair
Lumber             8 lft    6 lft     1 lft
Finishing hours    4 hrs    2 hrs     1.5 hrs
Carpentry hours    2 hrs    1.5 hrs   0.5 hrs
48 lft of lumber, 20 finishing hours and 8 carpentry hours are available. A desk sells for $60, a table for $30 and a chair for $20. How many of each type of furniture should be produced? This problem is solved with the following LP:
Maximize Z = 60x1 + 30x2 + 20x3
subject to
8x1 + 6x2 + x3 ≤ 48,
4x1 + 2x2 + 1.5x3 ≤ 20,
2x1 + 1.5x2 + 0.5x3 ≤ 8,
x1 , x2 , x3 ≥ 0.
Suppose that an investor wants to buy the resources. What are the fair prices y1, y2, y3 that the investor should pay for one lft of lumber, one finishing hour and one carpentry hour, respectively?

In exchange for the resources that could make one desk, the investor offers (8y1 + 4y2 + 2y3) dollars. This amount should be at least as large as what the manufacturer could earn from one desk, that is, 8y1 + 4y2 + 2y3 ≥ 60. Similarly, by considering the "fair" amounts that the investor should pay for the combinations of resources required to make one table and one chair, we conclude

6y1 + 2y2 + 1.5y3 ≥ 30,
y1 + 1.5y2 + 0.5y3 ≥ 20.
Consequently, the prices y1, y2, y3 that the investor should pay are given by the solution to the following LP:

Minimize W = 48y1 + 20y2 + 8y3
subject to
8y1 + 4y2 + 2y3 ≥ 60,
6y1 + 2y2 + 1.5y3 ≥ 30,
y1 + 1.5y2 + 0.5y3 ≥ 20,
y1 , y2 , y3 ≥ 0.
The above LP is the dual of the manufacturer's LP. The dual variables are often referred to as shadow prices, fair market prices, or dual prices.

Now, we turn our attention to some of the duality theorems that give important relationships between the primal and the dual solutions.
From the weak duality theorem we can infer the following important results:
Corollary 2.1 The value of the objective function of the maximum (primal)
problem for any (primal) feasible solution is a lower bound to the minimum
value of the dual objective.
Corollary 2.2 Similarly the objective function value of the minimum problem
(dual) for any (dual) feasible solution is an upper bound to the maximum
value of the primal objective.
Corollary 2.3 If the primal problem is feasible and its objective is unbounded (i.e., max Z → +∞), then the dual problem is infeasible.

Corollary 2.4 Similarly, if the dual problem is feasible and its objective is unbounded (i.e., min W → −∞), then the primal problem is infeasible.
If there exist feasible solutions to the primal and the dual programmes such that the corresponding values of their objective functions are equal, then these feasible solutions are in fact optimal solutions to their respective problems.
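These bounds can be checked numerically on the furniture example of the previous section. The sketch below (illustrative code, with assumed sampling boxes) draws random points, keeps the feasible ones on each side, and confirms that every feasible primal value lies below every feasible dual value, so the two running bounds bracket the common optimum Z∗ = W∗ = 280.

```python
# Numerical check of weak duality on the furniture example:
#   primal  max 60x1 + 30x2 + 20x3   s.t. Ax <= b, x >= 0
#   dual    min 48y1 + 20y2 +  8y3   s.t. A'y >= c, y >= 0
import numpy as np

rng = np.random.default_rng(0)
c = np.array([60.0, 30.0, 20.0])
A = np.array([[8.0, 6.0, 1.0], [4.0, 2.0, 1.5], [2.0, 1.5, 0.5]])
b = np.array([48.0, 20.0, 8.0])

best_primal, best_dual = -np.inf, np.inf
for _ in range(50000):
    x = rng.uniform(0.0, 16.0, size=3)      # sampling box is an assumption
    if np.all(A @ x <= b):
        best_primal = max(best_primal, float(c @ x))
    y = rng.uniform(0.0, 60.0, size=3)      # likewise for the dual side
    if np.all(A.T @ y >= c):
        best_dual = min(best_dual, float(b @ y))

# Corollaries 2.1 and 2.2: any feasible primal value is a lower bound on the
# dual optimum, and any feasible dual value an upper bound on the primal one.
print(best_primal, best_dual)
```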
We will use these properties of duality theory in the proposed search algorithm; see section 2.5.
2.4 Computational Problems in Simplex Method
There are a number of computational problems that may arise during the actual
application of the simplex method for solving a linear programming problem. In
this section we discuss some complications that may occur.

1. Tie in selecting the entering variable: At each iteration, the nonbasic variable with the largest positive value in the cj − zj row is chosen. In case there exists more than one variable with the same largest positive value in the cj − zj row, we have a tie for selecting the entering variable; the tie may be broken arbitrarily.
2. Tie in selecting the leaving variable: It is possible for two or more constraints to give the same least ratio value. This results in a tie for selecting which basic variable should leave the basis. This complication causes degeneracy in the basic feasible solution.
A basic feasible solution is said to be degenerate if at least one basic variable in the solution equals zero.
3. Cycling: In the presence of degeneracy, a pivot may produce a new basic feasible solution in the next iteration with no change in the objective function value. It is then theoretically possible to select a sequence of admissible bases repeatedly, without ever satisfying the optimality criteria, so that an optimal solution is never reached. A number of anticycling procedures have been developed [17, 14, 66, 49, 10]. These anticycling methods are not used in practice, partly because they would slow the computer program down and partly because of the difficulty of recognizing a true zero numerically [7].
The worst-case complexity of the simplex algorithm is exponential: the number of arithmetic operations in the algorithm can grow exponentially with the number of variables. The first polynomial-time algorithm for linear programming, the ellipsoid method, was given by Khachiyan (1979); Karmarkar (1984) then suggested the second polynomial-time (interior point) algorithm. Unlike the ellipsoid method, Karmarkar's method solves very large linear programming problems faster than does the simplex method [58]. Both the ellipsoid method and Karmarkar's method are mathematically iterative, and they are certainly not better than the simplex algorithm for small linear programming problems, and possibly not for reasonably large ones, where stable and acceptable (i.e., near-optimal) answers are desired quickly [27].
2.5 Proposed Stochastic Search Algorithm
P:  max z = c1x1 + c2x2 + · · · + cnxn
subject to
a11x1 + a12x2 + · · · + a1nxn ≤ b1,
a21x1 + a22x2 + · · · + a2nxn ≤ b2,
...
am1x1 + am2x2 + · · · + amnxn ≤ bm,
x1 , x2 , . . . , xn ≥ 0.
The dual variables have zero lower bound; in fact, the positive quadrant contains the polytope representing the set of feasible solutions of the dual problem. We call this region DB, and the corresponding bounding region of the primal feasible set B.
Now, generate a random element each from B and DB using a specified dis-
tribution, pr , on B and DB. If this element is primal feasible, then compute the
primal objective function at this solution. Ignore all infeasible elements. Compute
the largest value of the primal objective functions for these primal feasible solu-
tions. The same process is applied to the dual problem by computing the smallest
value of dual objective functions for the generated feasible dual solutions. In this
process, a feasible primal solution is declared inadmissible if the value of the primal
objective function does not exceed the highest value obtained till then. A feasible
dual solution is declared inadmissible if the value of the dual objective function is
not smaller than the smallest value obtained till then. We keep track of the num-
ber of generated elements, the number of feasible solutions among these and the
number of admissible points among the latter, in both problems. The procedure terminates when a preassigned limit is reached on the number of:
1. random elements
2. feasible solutions, or
3. admissible solutions.
The simulation approach obtains a near-optimal solution for every starting value. Near-optimality of the final solution does not depend on the initial value, but the rate of convergence cannot be determined. Therefore, it may be convenient to begin with more than one initial value, generating an independent sequence of solutions for every initial value. This gives several end-points, and the best of these will be at least as good as the solution obtained from any single sequence.
The maximum of the primal objective function and the minimum of the dual
objective function obtained through the above procedure are both near-optimal.
This method is not iterative in the sense that consecutive solutions may not im-
prove the objective function. Also, it is simple from the mathematical point of
view and there are no complicated computations.
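The procedure can be sketched on the small canonical-form example of section 2.2 (maximize 6x1 + 8x2 subject to 5x1 + 10x2 ≤ 60, 4x1 + 4x2 ≤ 40, whose optimum is z = 64). The bounding box B, the sample count and the uniform sampling distribution below are illustrative choices, not the thesis's exact settings.

```python
# Sketch of the primal half of the stochastic search; the dual half is the
# mirror image (sample DB, keep dual-feasible points, track the minimum).
import numpy as np

rng = np.random.default_rng(0)
c = np.array([6.0, 8.0])
A = np.array([[5.0, 10.0], [4.0, 4.0]])
b = np.array([60.0, 40.0])

# Box B: 0 <= x_j <= mu_j with mu_j = min_i b_i / a_ij (valid since a_ij > 0).
mu = np.min(b[:, None] / A, axis=0)

best_z, n_feasible, n_admissible = -np.inf, 0, 0
for n_generated in range(1, 20001):       # stop at a preassigned count
    x = rng.uniform(0.0, mu)              # uniform distribution p_r on B
    if np.all(A @ x <= b):                # ignore infeasible elements
        n_feasible += 1
        z = float(c @ x)
        if z > best_z:                    # admissible: beats the running maximum
            best_z = z
            n_admissible += 1

print(best_z, n_feasible, n_admissible)
```

Note that the loop body involves only a matrix-vector product and comparisons; there are no pivots and no matrix inversions, which is what is meant by the method being free of complicated computations.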
Algorithmically, the method is described as follows.
1. Generation in PP:
2. Computation in PP:
3. Termination in PP:
1. Generation in DP:
2. Computation in DP:
3. Termination in DP:
1. Uniform distribution.
In this case, every element is generated from B and DB with equal probability (uniformly over the respective region).
2. Non-Uniform distributions.
Normal distributions can be used, with means µj, 1 ≤ j ≤ n, for the primal variables in PP, means νi, i = 1, · · · , m, for the dual variables yi in DP, and unit variances. The results of a pilot study show that too many generated points are expected to exceed the mean, which implies a rejection rate exceeding 1/2. This can be overcome by modifying the means to

f · µj , 1 ≤ j ≤ n or f · νi , 1 ≤ i ≤ m,

for a suitable factor f.
The gamma distribution is reasonable for the dual problem because its feasible region lies in the positive quadrant and is unbounded, matching the support of the distribution.
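These sampling choices can be sketched as follows. The generator below is an illustration only; the function name and the factor f are assumptions, with f = 1/2 matching the N(1/2 µj, µj²) rows of the result tables.

```python
# Illustrative generators for candidate points with upper bounds mu_j.
import numpy as np

rng = np.random.default_rng(1)

def generate(dist, mu, size, f=0.5):
    """Draw `size` candidate points for variables bounded by mu."""
    n = len(mu)
    if dist == "uniform":                  # Uniform(0, mu_j) on the box B
        return rng.uniform(0.0, mu, size=(size, n))
    if dist == "normal":                   # N(f*mu_j, mu_j^2); negative draws
        return rng.normal(f * mu, mu, size=(size, n))  # simply fail x >= 0
    if dist == "gamma":                    # Gamma(shape 1, scale mu_j) for DB
        return rng.gamma(1.0, mu, size=(size, n))
    raise ValueError(f"unknown distribution: {dist}")

pts = generate("normal", np.array([10.0, 6.0]), size=5)
print(pts.shape)   # (5, 2)
```

Shrinking the normal mean by f moves more of the probability mass into the bounded feasible region, which is how the rejection rate above 1/2 is reduced.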
2.6 Results of Simulation Studies
Candidate points were generated from several distributions: Uniform, Normal, Beta, Gamma, Weibull, Geometric and Poisson.
The following symbols are used in the tables.

• zn−o and gn−o are the primal and dual near-optimal objective function values, respectively.

• gap is the length of the interval between the primal and dual near-optimal values.
1. Beale’s Example
The following example, known as Beale's example, cycles when the simplex algorithm with the standard pivoting rules is used. The optimal value of the objective function is 0.05.
max z = 0.75x1 − 150x2 + 0.02x3 − 6x4
s.t.
0.25x1 − 60x2 − 0.04x3 + 9x4 ≤ 0,
0.5x1 − 90x2 − 0.02x3 + 3x4 ≤ 0,
x3 ≤ 1,
x1 , x2 , x3 , x4 ≥ 0.
In this example, the following steps of the algorithm are carried out in both problems, Primal and Dual:

(a) The points are generated randomly from different distributions with different parameters.

(b) The feasibility of the generated points is tested.

(c) The primal and dual objective function values, zi and gj, are evaluated for the feasible points.
Table 2.4: Near-optimal values of the primal and dual objective functions for Beale's example and its dual problem.
Distribution       zn−o     gn−o     # infeasible points        gap      CPU time
                                     Primal   Dual    Total              (sec.)
Uniform(0, µj)     0.0486   0.0537      109   4722     4831    0.0051   2.9
N(µj, µj²)         0.049    0.0502     1690   4271     5961    0.0012   0.3
N(1/2 µj, µj²)     0.0496   0.0505     1133   7835     8968    0.0009   0.4
N(0.75µj, µj²)     0.0492   0.051      1414   5583     6997    0.0018   0.32
Beta(0.5, 0.75)    0.0493   0.0513      136    983     1119    0.002    0.85
Beta(0.75, 0.5)    0.05     0.0512      100   5270     5370    0.0012   3
Beta(0.05, 0.05)   0.05     0.05        101    503      604    0        0.84
• As we mentioned before, the performance of non-uniform statistical distributions is better than that of the uniform distribution.
• When the normal distribution with mean at half the upper (lower) bound for the primal (dual) decision variables is considered, the gap is smaller than for the other choices of mean; that is, the primal and dual near-optimal values are nearer to each other.

• The results obtained from the Beta distribution with parameters 0 < α, β < 1 are better than those for other choices of α, β.
2. Kuhn’s Example
s.t.
x1 , x2 , x3 , x4 ≥ 0.
In this example, the following steps of the algorithm are carried out in both problems, Primal and Dual:
(a) The points are generated randomly from different distributions with different parameters.
(b) The feasibility of the generated points is tested.

(c) The primal and dual objective function values, zi and gj, are evaluated for the feasible points.
Table 2.5: Near-optimal values of the primal and dual objective functions for Kuhn's example and its dual problem.
Distribution       zn−o     gn−o     # infeasible points        gap        CPU time
                                     Primal   Dual    Total                (sec.)
Uniform(0, µj)     1.9993   2           290   1491     1781    0.000071   0.29
N(µj, 1)           1.9988   2           413   1595     2008    0.0012     0.124
N(µj, µj²)         1.9989   2           709   1536     2245    0.0011     0.117
N(1/2 µj, µj²)     1.9988   2          1620   1573     3193    0.0012     0.125
Beta(0.5, 0.75)    1.9984   2           354   1510     1861    0.0016     0.238
Beta(0.5, 0.5)     1.9954   2           269   1310     1519    0.0046     0.223
Gamma(1, 1)        1.9947   2           385   1794     2179    0.0053     0.161
Gamma(1, 2)        1.9995   2           426   1426     1852    0.00005    0.1605
Geo(0.5)           2        2           431   1676     2107    0          0.221
• When a geometric distribution on the decision variables in the primal problem was applied, the exact optimal solution was obtained.

3. Klee and Minty's example

When solved by the simplex method, this example visits many vertices before arriving at the optimal vertex. The optimal objective function value is 625.
s.t.
4x1 + x2 ≤ 25,
x1 ≤ 5,
x1 , x2 , x3 , x4 ≥ 0.
In this example, the following steps of the algorithm are carried out in both problems, Primal and Dual:
(a) The points are generated randomly from different distributions with different parameters.
(b) The feasibility of the generated points is tested.

(c) The primal and dual objective function values, zi and gj, are evaluated for the feasible points.
(e) If (gn−o − zn−o) ≤ ε, a small positive value, or the number of restarts reaches its preassigned limit, then the algorithm is stopped and the values of zn−o, xn−o, gn−o, y n−o are output.
Table 2.6: Near-optimal values of the primal and dual objective functions for the third example and its dual problem.
Distribution       zn−o     gn−o     # infeasible points        gap      Time
                                     Primal   Dual    Total              (sec.)
Uniform(0, µj)     588.05   634.05    20254      0    20254    46       14.19
N(µj, µj²)         na       na           na     na       na    na       na
N(1/2 µj, µj²)     570.3    625.1     64880    237    65117    54.8     1.35
Beta(0.5, 0.75)    614.03   627.69     1920      0     1920    13.66    1.64
Beta(0.5, 1)       618.08   626.22     1179      0     1179    8.14     1.04
Beta(0.25, 1)      623.31   625.01      171      0      171    1.7      0.47
Beta(0.25, 0.75)   624.35   625.03      322      0      322    0.68     0.55
Beta(0.05, 0.05)   625      625        1121      0     1121    0        1.33
Geo(0.5)           625      625         833      0      833    0        0.55
• The normal distribution with mean at the upper (lower) bound for the primal (dual) decision variables is not applicable, but the normal distribution with mean at half the upper (lower) bound for the primal (dual) decision variables performs better than the uniform distribution.

• The results obtained from the Beta distribution with parameters 0 < α < 1 and β = 1 are better than those for other choices of α, β.
• The coordinates of bounding points of this example are multiples of
4. Further example
This example has an integer optimal solution in both problems, and the optimal objective function value is 2200.
s.t.
x1 , x2 ≥ 0.
In this example, the following steps of the algorithm are carried out in both problems, Primal and Dual:

(a) The points are generated randomly from different distributions with different parameters.

(b) The feasibility of the generated points is tested.

(c) The primal and dual objective function values, zi and gj, are evaluated for the feasible points.
A summary of the results is shown in the following table.
Table 2.7: Near-optimal values of the primal and dual objective functions for the fourth example and its dual problem.
Distribution        zn−o     gn−o      # infeasible points        gap      CPU time
                                       Primal   Dual    Total              (sec.)
N(µj, µj²)          2189.8   2208.32     1335    350     1685    18.472   0.076
N(1/2 µj, µj²)      2191.6   2207.2      1082    346     1428    15.536   0.071
Weibull(µj, 3.5)    2198.7   2206.9      1454    550     2004    8.25     0.136
Poisson(µj)         2200     2200       50000   2070    52070    0        18.2
Poisson(1/2 µj)     2200     2200          17   1675     1692    0        0.191
The uniform distribution is not efficient for this example, so we applied non-uniform distributions to both problems: normal and Weibull distributions for the primal and the gamma distribution for the dual problem. As can be seen from Table 2.7, the exact optimal value is obtained with the Poisson distribution. Because the problem has an integer optimal solution, discrete distributions are best for this problem.
All this work has been published in the Pakistan Journal of Statistics, Vol. 27,
2.7 Conclusions

1. An early solution in this search is not necessarily bad even if little work has been done. Hence, an early solution can be a near-optimal solution and may not significantly improve with more work.
2. Generating feasible solutions is the major work in this algorithm. The bounding box is either a hyper-cube or a cuboid, in which case there is a high rejection rate, or the complete Euclidean space, with the same consequence. Attempts are being made to improve the performance of the random solution generator.
6. Discrete distributions like Geometric and Poisson are very efficient in certain
special cases.