
GRAPHIC METHOD

The Graphic Method (graphical resolution) is an excellent alternative for representing and solving Linear Programming models that have 2 decision variables. It consists of representing the constraints on an X-Y coordinate plane in order to delimit the region where the feasible solutions are found (solutions that satisfy all the constraints).

The optimal values lie on the boundary of the figure formed by the intersection of the different constraints of the linear problem, within the first quadrant: the optimum of a linear problem is found at one of the vertices of that figure or polygon.

Usually, each problem gives us three types of data:

Decision Variables: What the model seeks to answer.

Objective Function: Criterion to select between feasible alternatives.

Constraints: Conditions that define the resources of the model.

EXAMPLES

1) A pot factory produces two qualities, A and B. It must manufacture at least 40 and at most 100 pots of quality B per month, and at least 90 pots of quality A. The profit per pot of quality A is S/ 15 and per pot of quality B is S/ 20. It can manufacture at most 150 units in total per month. How many units of each quality must it manufacture to obtain the maximum profit?

A = X
B = Y                 -> Variables

F(X;Y) = 15X + 20Y    -> Objective Function

X ≥ 90
40 ≤ Y ≤ 100          -> Restrictions
X + Y ≤ 150

Each constraint has its own graph (plots omitted):

X ≥ 90

40 ≤ Y ≤ 100

X + Y ≤ 150 (the boundary line X + Y = 150 passes through the points (0; 150) and (150; 0))

Superimposing the graphs forms a geometric figure whose vertices are the candidate optimal points. In this case the problem asks for the maximum profit, which is attained at point D.
To confirm that point D gives the maximum profit, we substitute the coordinates of the candidate vertices into the OBJECTIVE FUNCTION:

F(X;Y) = 15X + 20Y

PTO C: F(90;40) = 15(90) + 20(40)

= 2150

PTO D: F(90;60) = 15(90) + 20(60)

= 2550

PTO H: F(110;40) = 15(110) + 20(40)

= 2450

It is clearly seen that point D is the maximum and point C is the minimum; in this case the problem only asks for the maximum. The following table lists the values at all the labeled points; some points have higher objective values, but they do not satisfy the constraints given at the beginning of the problem.
Point   X coordinate   Y coordinate   Objective function value
O       0              0              0
A       90             0              1350
B       90             100            3350
C       90             40             2150
D       90             60             2550
E       0              100            2000
F       50             100            2750
G       0              40             800
H       110            40             2450
I       0              150            3000
J       150            0              2250

NOTE:
Points C, D and H are the vertices of the feasible region (marked in green).
The remaining points do not belong to the feasible region (marked in red).
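As a quick numerical cross-check of this example, the small sketch below solves the same model with an off-the-shelf LP solver. It assumes SciPy is available; since linprog minimizes, the profit function is negated.

```python
# Hedged sketch: verify Example 1 with scipy.optimize.linprog (assumed available).
from scipy.optimize import linprog

c = [-15, -20]            # maximize 15X + 20Y  ->  minimize -15X - 20Y
A_ub = [[1, 1]]           # X + Y <= 150
b_ub = [150]
bounds = [(90, None),     # X >= 90
          (40, 100)]      # 40 <= Y <= 100

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)              # expected: [90. 60.]  -> point D
print(-res.fun)           # expected: 2550.0
```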

2) Sydsaeter/Hammond, Mathematics for Economic Analysis, Page 568 - Problem 1A

F(X1; X2) = 3X1 + 4X2   -> OBJECTIVE FUNCTION

3X1 + 2X2 ≤ 6
X1 + 4X2 ≤ 4            -> RESTRICTIONS
X1, X2 ≥ 0

Point   X coordinate (X1)   Y coordinate (X2)   Objective function value
O       0                   0                   0
A       0                   3                   12
B       2                   0                   6
C       1.6                 0.6                 7.2
D       0                   1                   4
E       4                   0                   12

NOTE:
Points O, B, C and D belong to the feasible region (marked in green).
Points A and E do not belong to the feasible region (marked in red).

THE DUAL MODEL

 DEFINITION OF THE DUAL PROBLEM


The dual problem is defined systematically from the primal (or original) LP model. The two problems are closely related in the sense that the optimal solution to one automatically provides the optimal solution to the other. In most LP treatments, the dual is defined for various forms of the primal depending on the direction of optimization (maximization or minimization), the types of constraints (≤, ≥ or =), and the sign of the variables (non-negative or unrestricted). This chapter offers a single definition that automatically covers all forms of the primal.
Our definition of the dual problem requires expressing the primal problem in the equation form presented in section 3.1 (all constraints are equations with non-negative right-hand sides, and all variables are non-negative). This requirement is consistent with the format of the initial simplex table. Hence any results obtained from the primal optimal solution apply directly to the associated dual problem.
The key ideas for constructing the dual from the primal are summarized as follows:

1. Assign a dual variable for each primal constraint.
2. Construct a dual constraint for each primal variable.
3. The constraint coefficients (column) and the objective coefficient of the j-th primal variable respectively define the left and right sides of the j-th dual constraint.
4. The dual objective coefficients are equal to the right-hand sides of the primal constraint equations.
5. The rules that appear in Table 4.1 govern the direction of optimization, the direction of the inequalities and the signs of the variables in the dual. An easy way to remember the type of constraint in the dual (i.e., ≤ or ≥) is that if the dual objective is minimization (i.e., pointing downward), then all its constraints will be of type ≥ (i.e., pointing upward). The opposite applies when the dual objective is maximization.

All primal constraints are equations with non-negative right-hand sides, and all variables are non-negative.

The following examples demonstrate the use of the rules in Table 4.1; they also show that our definition automatically incorporates all forms of the primal. Other relationships between the primal and dual problems are the following:
Dual problem

Minimize w = 10y1 + 8y2

subject to

y1 + 2y2 ≥ 5
2y1 − y2 ≥ 12
y1 + 3y2 ≥ 4
y1 + 0y2 ≥ 0   (that is, y1 ≥ 0, y2 unrestricted)

y1, y2 unrestricted

Summary of the rules for constructing the dual:

Note that the column headers of the table do not use the names primal and dual. What matters in this case is the sense of optimization: if the primal is maximization, then the dual is minimization, and vice versa. Note also that there are no specific provisions for including artificial variables in the primal, because artificial variables would not change the definition of the dual.

PRIMAL-DUAL RELATIONSHIPS

Let us consider the following linear model in symmetric maximization form, which we will call the primal model:

max Z = cᵀx

subject to

Ax ≤ b
x ≥ 0

The dual model is the following model in symmetric minimization form:

min G = bᵀy

subject to

Aᵀy ≥ c
y ≥ 0
 Example. Given the linear model

max z = 2x1 − x2 + 3x3

subject to

x1 − x2 + x3 ≤ 2
3x1 − x2 + 2x3 ≤ 1
x1, x2, x3 ≥ 0

the associated dual is

min G = 2y1 + y2

subject to

y1 + 3y2 ≥ 2
−y1 − y2 ≥ −1
y1 + 2y2 ≥ 3
y1, y2 ≥ 0
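The symmetric primal-dual rule above translates directly into code. The following sketch builds the dual data for a max{cᵀx : Ax ≤ b, x ≥ 0} problem and applies it to the example just shown; the helper name build_dual is illustrative, not a library routine.

```python
# For a symmetric-form primal  max{c^T x : Ax <= b, x >= 0},
# the dual is  min{b^T y : A^T y >= c, y >= 0}.
import numpy as np

def build_dual(c, A, b):
    """Return (dual objective, dual constraint matrix, dual right-hand side)."""
    A = np.asarray(A, dtype=float)
    return np.asarray(b, dtype=float), A.T, np.asarray(c, dtype=float)

c = [2, -1, 3]                      # max z = 2x1 - x2 + 3x3
A = [[1, -1, 1],                    # x1 - x2 + x3   <= 2
     [3, -1, 2]]                    # 3x1 - x2 + 2x3 <= 1
b = [2, 1]

dual_c, dual_A, dual_rhs = build_dual(c, A, b)
print(dual_c)    # [2. 1.]        -> min G = 2y1 + y2
print(dual_A)    # rows give y1+3y2, -y1-y2, y1+2y2 (left sides of the >= constraints)
print(dual_rhs)  # [ 2. -1.  3.]  -> their right-hand sides
```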

OPTIMAL DUAL SOLUTION

Primal and dual solutions are closely related in the sense that the optimal solution to either problem also yields the optimal solution to the other. Thus, in an LP model in which the number of variables is considerably smaller than the number of constraints, computations can be saved by solving the dual, because the number of simplex computations depends largely (although not entirely) on the number of constraints.
This section provides two methods for determining the dual values.

Method 1 .

Method 2 .

The elements of the row vector must appear in the same order that the basic variables
appear in the Basic column of the simplex table.

ECONOMIC INTERPRETATION OF DUALITY

The LP problem can be considered as a resource allocation model that seeks to maximize revenue with limited resources. Viewed from this perspective, the associated dual problem offers interesting economic interpretations of the resource allocation model.

To formalize the approach, consider the following representation of the primal and dual
problems:
Considered as a resource allocation model, the primal problem consists of n economic
activities and m resources. The coefficient cj in the primal represents the income per
unit of activity j. The resource i with availability bi is consumed at the rate of aij units
per unit of activity j.

ECONOMIC INTERPRETATION OF DUAL VARIABLES

For any pair of feasible primal and dual solutions, the values of the objective functions, when finite, must satisfy the inequality z ≤ w.

At the optimum, the two objective values are equal, that is, z = w.

Based on the resource allocation model, z represents revenue in dollars, and bi represents the available units of resource i. Therefore, dimensionally, z = w implies that the revenue ($) equals Σ bi·yi.

This means that the dual variable yi represents the value per unit of resource i. The standard name dual price (or shadow price) of resource i replaces the (suggestive) name unit value throughout the linear programming literature and in software packages, and hence it is also adopted here. Using the same dimensional analysis, we can interpret the inequality z < w (for any pair of feasible, non-optimal primal and dual solutions) as

(Income) < (Value of resources)

This relationship expresses that as long as the total income from all activities is less
than the value of the resources, the corresponding primal and dual solutions will not be
optimal. Optimality is achieved only when resources have been fully exploited. This can
happen only when the input (value of resources) equals the output (income in dollars).

 EXAMPLE:
The Reddy Mikks model deals with the production of two types of paint (interior and exterior) from two raw materials M1 and M2 (resources 1 and 2), subject to market and demand limits expressed by the third and fourth constraints. The model determines the quantities (in tons per day) of exterior and interior paint that maximize daily income (expressed in thousands of dollars).

The optimal dual solution shows that the dual price (value per unit) of raw material M1 (resource 1) is y1 = 0.75 (or $750 per ton) and that of raw material M2 (resource 2) is y2 = 0.5 (or $500 per ton). These results hold within specific feasibility ranges. For resources 3 and 4, which represent the market and demand limits, both dual prices are zero, indicating that their associated resources are abundant (that is, they are not critical in determining the optimum and, therefore, their value per unit, or dual price, is zero).

ECONOMIC INTERPRETATION OF DUAL CONSTRAINTS

The economic meaning of the dual constraints can be obtained from the formula which states that, at any primal iteration, the reduced cost of activity j equals Σi aij·yi − cj (the sum runs over the m resources, i = 1, ..., m).

Once again we use dimensional analysis to interpret this equation. The income per unit, cj, of activity j is in dollars per unit. Hence, for consistency, the amount Σi aij·yi must also be in dollars per unit. Next, since cj represents income, the amount Σi aij·yi, which appears with the opposite sign, must represent cost. Therefore:

The conclusion is that the dual variable yi represents what is known in the LP literature as the imputed cost per unit of resource i, and we can regard the amount Σi aij·yi as the imputed cost of all the resources needed to produce one unit of activity j. As indicated, the amount Σi aij·yi − cj (= imputed cost of activity j − revenue per unit of activity j) is known as the reduced cost of activity j. The maximization optimality condition of the simplex method states that an increase in the level of an unused (non-basic) activity j can improve income only if its reduced cost is negative. In terms of the preceding interpretation, this condition states that

(Imputed cost of resources per unit of activity j) < (Revenue per unit of activity j)

Thus, the maximization optimality condition says that it is economically advantageous


to increase the level of an activity if its unit revenue exceeds its imputed unit cost.

 EXAMPLE:

The optimal solution calls for producing 100 trucks and 230 cars, but no trains. Suppose TOYCO is also interested in producing trains (x1). How can this be achieved? Examining the reduced cost of x1, a toy train becomes economically attractive only if its imputed unit cost is strictly less than its unit revenue. TOYCO can achieve this by increasing the unit price. It can also reduce the imputed cost of the resources it consumes (= y1 + 3y2 + y3).
A reduction in the imputed unit cost requires reducing the assembly times used by a train in the three operations. Let r1, r2 and r3 be the ratios by which the times of operations 1, 2 and 3 are reduced, respectively. The goal is to determine the values of r1, r2 and r3 such that the new imputed cost per train, y1(1 − r1) + 3y2(1 − r2) + y3(1 − r3), is less than its unit revenue. For the optimal dual values y1 = 1, y2 = 2, y3 = 0, this becomes a condition on r1 and r2 only (the term in r3 vanishes).

All values of r1 and r2 that meet this condition will make the trains profitable. Note, however, that this goal may not be achievable, because it requires large reductions in the times of operations 1 and 2 that do not appear to be practical. For example, even a 50% reduction (i.e., r1 = r2 = .5) does not satisfy the condition. So the logical conclusion is that TOYCO should not produce trains unless the time reductions are accompanied by an increase in unit revenue.
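The reasoning above can be reproduced numerically. The sketch below computes the imputed cost and reduced cost of the train from the optimal dual prices; the unit revenue of 3 used here is an assumed value for illustration, while the consumption vector (1, 3, 1) follows the expression y1 + 3y2 + y3 quoted above.

```python
# Illustrative computation of the reduced cost of activity x1 (the train)
# from dual prices. The revenue figure below is assumed for this sketch.
import numpy as np

y = np.array([1.0, 2.0, 0.0])         # optimal dual prices y1, y2, y3
a_train = np.array([1.0, 3.0, 1.0])   # minutes of operations 1, 2, 3 per train
revenue_train = 3.0                   # assumed unit revenue (illustrative)

imputed_cost = a_train @ y            # y1 + 3*y2 + y3 = 7
reduced_cost = imputed_cost - revenue_train
print(imputed_cost, reduced_cost)     # 7.0 4.0 -> positive, so producing trains does not pay

# Effect of cutting operation times by ratios r1, r2, r3 (here 50% each):
r = np.array([0.5, 0.5, 0.5])
print((a_train * (1 - r)) @ y)        # 3.5, still above the assumed unit revenue
```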
The simplex method
The simplex method was initially devised by Dantzig in 1947 and later refined by other researchers. Its variations and extensions are used to carry out post-optimal analyses, including sensitivity analysis on the model through shadow prices.

The Simplex Method is an analytical method for solving linear programming problems. It can handle models more complex than those solved by the graphical method, with no restriction on the number of variables.

The Simplex method is an iterative procedure that improves the value of the objective function at each step. The process ends when that value can no longer be improved, that is, when the optimal solution has been reached (the highest or lowest possible value, depending on the case, for which all restrictions are satisfied).

Starting from the value of the objective function at some point, the procedure consists of looking for another point that improves the previous value. As was seen in the Graphic Method, these points are the vertices of the polygon that forms the region determined by the restrictions to which the problem is subject.

It must be taken into account that the Simplex method only works with problem restrictions whose inequalities are of the type "≤" (less than or equal to) and whose independent coefficients are greater than or equal to 0. Therefore, the constraints will have to be standardized to meet these requirements before starting the Simplex algorithm. If after this process restrictions of the type "≥" (greater than or equal) or "=" (equality) appear, or they cannot be converted, it will be necessary to use other resolution methods, the most common being the Two-Phase method.

The standard form of the problem model consists of an objective function subject to certain constraints:

Objective Function: c1·x1 + c2·x2 + ... + cn·xn

Subject to: a11·x1 + a12·x2 + ... + a1n·xn = b1

a21·x1 + a22·x2 + ... + a2n·xn = b2

...

am1·x1 + am2·x2 + ... + amn·xn = bm

x1, ..., xn ≥ 0
The model must meet the following conditions:

1. The objective will be to maximize or minimize the value of the objective function
(for example, increase profits or reduce losses, respectively).

2. All constraints must be equations of equality (mathematical identities).

3. All variables (xi) must have a positive or null value (non-negativity condition).

4. The independent terms (bi) of each equation must be non-negative.

The modeled problem must be adapted to the standard form in order to apply the Simplex
algorithm.

Normalization of restrictions
Another condition of the standard model of the problem is that all restrictions are equations of
equality (also called equality restrictions), so inequality restrictions or inequalities must be
converted into said mathematical identities.

The condition of non-negativity of the variables (x1,..., xn ≥ 0) is the only exception and
remains as is.

· Constraint of type "≤"

To normalize a constraint with an inequality of the type "≤", a new variable must be added, called the slack variable xs (with the non-negativity condition xs ≥ 0). This new variable appears with a zero coefficient in the objective function and is added to the corresponding equation (which now becomes a mathematical identity or equality equation).

a11·x1 + a12·x2 ≤ b1   →   a11·x1 + a12·x2 + 1·xs = b1

· "≥" type constraint

In the case of an inequality of the type "≥", we must also add a new variable, called the excess variable xs (with the non-negativity condition xs ≥ 0). This new variable appears with a zero coefficient in the objective function and is subtracted from the corresponding equation.

A problem now arises with the non-negativity condition of this new variable. A constraint containing an inequality of type "≥" would become:

a11·x1 + a12·x2 ≥ b1   →   a11·x1 + a12·x2 - 1·xs = b1

When performing the first iteration of the Simplex method, the decision variables will not be in the base and will take the value zero. In that case the new variable xs, after setting x1 and x2 to zero, would take the value -b1 and would not meet the non-negativity condition. It is therefore necessary to add another new variable xr, called the artificial variable, which also appears with a zero coefficient in the objective function and is added to the corresponding restriction. The constraint then remains as follows:

a11·x1 + a12·x2 ≥ b1   →   a11·x1 + a12·x2 - 1·xs + 1·xr = b1

· "=" type restriction

Contrary to what one might think, for restrictions of type "=" (although they are already identities) it is also necessary to add an artificial variable xr. As in the previous case, its coefficient will be zero in the objective function and it is added to the corresponding restriction.

a11·x1 + a12·x2 = b1   →   a11·x1 + a12·x2 + 1·xr = b1

In this last case it becomes clear that the artificial variables represent a violation of the laws of algebra, so it is necessary to ensure that these artificial variables take the value 0 in the final solution. The Two-Phase method takes care of this, and therefore it must be applied whenever this type of variable appears.

The following table summarizes, according to the type of inequality, the type of variable that appears in the normalized equation, as well as its sign:

Type of inequality   Variables added (with sign)
≥                    - excess + artificial
=                    + artificial
≤                    + slack
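The table above can be encoded as a small helper. The sketch below returns which variables must be appended, and with what sign, when one constraint is normalized; the function name is illustrative, not a library routine.

```python
# Normalization rule per constraint type: which extra variables are appended.
def extra_variables(constraint_type):
    """Return the (kind, sign) pairs added when normalizing one constraint."""
    if constraint_type == "<=":
        return [("slack", +1)]
    if constraint_type == ">=":
        return [("excess", -1), ("artificial", +1)]
    if constraint_type == "=":
        return [("artificial", +1)]
    raise ValueError("unknown constraint type")

# Example: a11*x1 + a12*x2 >= b1  becomes  a11*x1 + a12*x2 - xs + xr = b1
print(extra_variables(">="))   # [('excess', -1), ('artificial', 1)]
```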

Optimization type
As mentioned, the objective of the method is to optimize the value of the objective function. Two options are possible: obtain the highest optimal value (maximize) or obtain the lowest optimal value (minimize).

Furthermore, the algorithm differs between the maximization and minimization objectives in the criterion used as the stopping condition to end the iterations and in the conditions for entering and leaving the base. The procedure is as follows:
● Construction of the first table:

The columns of the table are arranged as follows: the first column contains the variables found in the base (or basic variables), that is, those that take a non-zero value in the current solution; the second column collects the coefficients that these basic variables have in the objective function (this column is called Cb); the third shows the independent term of each restriction (P0); after this there is one column for each of the decision and slack variables present in the objective function (Pj). To give a clearer view of the table, a row containing the titles of each of the columns is included.
Two more rows are added to this table: one of them, which heads the table, contains the coefficients of the variables in the objective function, and a last row collects the value of the objective function and the reduced costs Zj - Cj.
The reduced costs show the possibility of improving the solution Z0. For this reason they are also called indicator values.

The general appearance of the Simplex method table is shown below:

Table

                   C1      C2      ...    Cn
Base   Cb    P0    P1      P2      ...    Pn
P1     Cb1   b1    a11     a12     ...    a1n
P2     Cb2   b2    a21     a22     ...    a2n
...    ...   ...   ...     ...     ...    ...
Pm     Cbm   bm    am1     am2     ...    amn
Z            Z0    Z1-C1   Z2-C2   ...    Zn-Cn

All the values included in the table are given by the problem model except those in row Z (or indicator row). These are obtained as follows: Zj = Σ(Cbi·Pj) for i = 1..m, where for j = 0, Pj = bi and Cj = 0, and otherwise Pj = aij.
It is observed that in this first table all the slack variables occupy the base, and therefore (since all the coefficients of the slack variables are 0 in the objective function) the initial value of Z is zero.
For the same reason, it is not necessary to calculate the reduced costs in the first table: they can be determined directly as the coefficients of each variable in the objective function with the sign changed, that is, -Cj.

● Stop condition:
The stopping condition is met when the indicator row does not contain any negative value among the reduced costs (when the objective is maximization), that is, when there is no further possibility of improvement.
Once the stopping condition is met, the value of each variable in the optimal solution is found in the P0 column, with the Base column indicating which variable each value corresponds to. If a variable does not appear in the base, its value is zero. Likewise, the optimal value of the objective function (Z) is found in column P0, row Z.
If the stopping condition is not met, one more iteration of the algorithm must be performed: determine the variable that becomes basic and the one that stops being basic, find the pivot element, update the values of the table and check whether the stopping condition is met again.
It is also possible to determine that the problem is unbounded, so that its solution can always be improved. In this case it is not necessary to keep iterating indefinitely and the algorithm can be terminated. This situation occurs when all the values in the column of the variable entering the base are negative or zero.

● Choice of the variable that enters the base:

When a variable becomes basic, that is, it enters the base, it begins to form part of the solution. Looking at the reduced costs in row Z, the variable that enters the base is the one in the column with the lowest value (the largest in absolute value) among the negative ones.

● Choice of the variable that leaves the base:

Once the incoming variable is obtained, the variable that leaves the base is the one in the row whose quotient P0/Pj is the smallest among the strictly positive ones (taking into account that this quotient is only computed when Pj is greater than 0).

● Pivot element:
The pivot element of the table is marked by the intersection between the column of
the incoming variable and the row of the outgoing variable.

● Table update:
The rows corresponding to the objective function and the titles will remain unchanged
in the new table. The rest of the values must be calculated as explained below:

■ In the pivot element row each new element is calculated as:


New Pivot Row Element = Previous Pivot Row Element / Pivot.
■ In the rest of the rows each element is calculated:
New Row Element = Previous Row Element - (Previous Row Element in Pivot
Column * New Pivot Row Element).

● In this way, all the elements in the column of the incoming variable are null except for
the one in the row of the outgoing variable, whose value will be 1. (It is analogous to
using the Gauss-Jordan method to solve systems of linear equations.)
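All of the rules just described (initial table with the slack basis, stopping condition, entering and leaving variables, pivoting) can be condensed into a short routine. The sketch below assumes a maximization problem whose constraints are all of type "≤" with non-negative right-hand sides, so that the slack variables form the initial base; it is a didactic sketch, not production code.

```python
import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex sketch for max c.x s.t. Ax <= b, x >= 0, with b >= 0."""
    m, n = len(b), len(c)
    T = np.zeros((m + 1, n + m + 1))         # columns: decision vars, slacks, P0
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -np.asarray(c, dtype=float)  # indicator row starts as -Cj
    basis = list(range(n, n + m))            # slack variables occupy the base

    while True:
        j = int(np.argmin(T[-1, :-1]))       # entering: most negative Zj - Cj
        if T[-1, j] >= 0:
            break                            # stop condition reached
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 0 else np.inf for i in range(m)]
        i = int(np.argmin(ratios))           # leaving: smallest positive ratio
        if ratios[i] == np.inf:
            raise ValueError("unbounded problem")
        T[i, :] /= T[i, j]                   # normalize the pivot row
        for k in range(m + 1):
            if k != i:
                T[k, :] -= T[k, j] * T[i, :] # cancel the rest of the pivot column
        basis[i] = j

    x = np.zeros(n)
    for row, var in enumerate(basis):
        if var < n:
            x[var] = T[row, -1]
    return x, T[-1, -1]                      # optimal point and value of Z

# The worked example that follows: max 3x + 2y s.t. 2x+y<=18, 2x+3y<=42, 3x+y<=24
print(simplex_max([3, 2], [[2, 1], [2, 3], [3, 1]], [18, 42, 24]))  # -> x = [3, 12], Z = 33.0
```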

Example: Simplex Method

Solve the following problem using the simplex method:


Maximize Z = f(x,y) = 3x + 2y

subject to: 2x + y ≤ 18

2x + 3y ≤ 42

3x + y ≤ 24

x≥0,y≥0

The following phases are considered:

1. Perform a change of variables and normalize the sign of the independent terms.
The nomenclature of the variables is changed, establishing the following correspondence:

■ x becomes X1
■ y becomes X2

Since the independent terms of all the constraints are positive, nothing needs to be done. Otherwise, both sides of the inequality would have to be multiplied by "-1" (taking into account that this operation also affects the type of restriction).

2. Normalize restrictions.
The inequalities are converted into equations by adding slack, excess and artificial variables according to the following table:

Type of inequality   Variables added (with sign)
≥                    - excess + artificial
=                    + artificial
≤                    + slack

In this case, a slack variable (X3, X4 and X5) is introduced in each of the restrictions of type ≤ to convert them into equalities, resulting in the system of linear equations:

2·X1 + X2 + X3 = 18

2·X1 + 3·X2 + X4 = 42

3·X1 + X2 + X5 = 24

3. Set the objective function equal to zero.

Z - 3·X1 - 2·X2 - 0·X3 - 0·X4 - 0·X5 = 0

4. Write the initial table of the Simplex method.

The initial table of the Simplex method is composed of all the coefficients of the
decision variables of the original problem and the slack, excess and artificial variables
added in step 2 (in the columns, P0 being the independent term and the rest of the Pi
variables coincide with Xi), and the constraints (on the rows). Column Cb contains the
coefficients of the variables found in the base.
The first row is formed by the coefficients of the objective function, while the last row
contains the value of the objective function and the reduced costs Zj - Cj.
The last row is calculated as follows: Zj = Σ(Cbi·Pj) for i = 1..m, where if j = 0, P0 = bi and
C0 = 0, and otherwise Pj = aij. Although this is the first table of the Simplex method and
all Cb are null, the calculation can be simplified, and this time set Zj = -Cj.

Table I. Iteration #1

3 2 0 0 0

Base Cb P0 P1 P2 P3 P4 P5

P3 0 18 2 1 1 0 0

P4 0 42 2 3 0 1 0

P5 0 24 3 1 0 0 1

Z 0 -3 -2 0 0 0

5. Stop condition.
If the objective is maximization, when in the last row (indicator row) there is no
negative value among the reduced costs (columns P1 onwards) the stopping condition
is reached.
In this case, the end of the algorithm is reached since there is no possibility of
improvement. The value of Z (column P0) is the optimal solution to the problem.
Another possible case is that in the column of the variable entering the base all the
values are negative or null. This indicates that the problem is not limited and its
solution can always be improved. In this situation, it is not necessary to continue
iterating indefinitely and the algorithm can also be terminated.
If not, the following steps are executed iteratively.

6. Choice of the incoming and outgoing variable of the base.


The variable that enters the base is determined first. To do this, choose the column
whose value in row Z is the smallest among all the negative ones. In this case it would
be the variable X1 (P1) with coefficient -3.
If there are two or more equal coefficients that meet the previous condition (case of a
tie), then the variable that is basic will be chosen.
The column of the variable that enters the base is called the pivot column (in green ).
Once the variable that enters the base is obtained, we proceed to determine what the
variable that leaves it will be. The decision is made based on a simple calculation:
divide each independent term (column P0) by the corresponding element of the pivot
column, as long as both elements are strictly positive (greater than zero). The row
whose result was the minimum is chosen.
If an element is less than or equal to zero, that quotient is not computed. If all the elements of the pivot column met this condition, the stopping condition would have been met and the problem would have an unbounded solution.

In this example: 18/2 [=9] , 42/2 [=21] and 24/3 [=8]


The term of the pivot column that in the previous division gave rise to the smallest
positive quotient indicates the row of the slack variable that leaves the base. In this case
it turns out to be X5 (P5), with a coefficient of 3. This row is called the pivot row (in
green ).
If when calculating the quotients, two or more results meet the condition for choosing
the outgoing element of the base (case of a tie), the one that is not a basic variable is
chosen (whenever possible).
The intersection of the pivot row and pivot column marks the pivot element , in this case
3.

7. Update the table.


The new coefficients in the table are calculated as follows:

■ In the pivot element row each new element is calculated as:


New Pivot Row Element = Previous Pivot Row Element / Pivot

■ In the rest of the rows each element is calculated:


New Row Element = Previous Row Element - (Previous Row Element in Pivot
Column * New Pivot Row Element)

With this, the pivot element is normalized and its value becomes 1, while the rest of the
elements of the pivot column are canceled (analogous to the Gauss-Jordan method).
The calculations for row P4 are shown below:
Previous row P4:                              42    2    3     0    1    0
minus
element of previous row P4 in pivot column:    2    2    2     2    2    2
times
New pivot row:                                 8    1    1/3   0    0    1/3
equals
New row P4:                                   26    0    7/3   0    1    -2/3

The table corresponding to this second iteration is:

Table II. Iteration #2

3 2 0 0 0

Base Cb P0 P1 P2 P3 P4 P5

P3 0 2 0 1/3 1 0 -2/3

P4 0 26 0 7/3 0 1 -2/3

P1 3 8 1 1/3 0 0 1/3

Z 24 0 -1 0 0 1

8. When checking the stop condition, it is observed that it is not met since among the
elements in the last row there is a negative one, -1. Continue iterating steps 6 and 7
again .
■ 6.1. The variable that enters the base is X2 (P2), as it is the variable that
corresponds to the column where the coefficient -1 is found.
■ 6.2. To calculate the variable that leaves the base, the terms of the P0 column are divided by the corresponding terms of the new pivot column: 2 / (1/3) [=6], 26 / (7/3) [=78/7] and 8 / (1/3) [=24]. Since the smallest positive quotient is 6, the variable that leaves the base is X3 (P3).
■ 6.3. The pivot element is 1/3.

By updating the table values again we obtain:

Table III. Iteration #3

3 2 0 0 0

Base Cb P0 P1 P2 P3 P4 P5

P2 2 6 0 1 3 0 -2

P4 0 12 0 0 -7 1 4

P1 3 6 1 0 -1 0 1

Z 30 0 0 3 0 -1

9. A new check of the stop condition reveals that among the elements in the indicator
row there is again a negative one, -1. It means that the optimal solution has not yet
been reached and we must continue iterating (steps 6 and 7):

■ 6.1. The variable that enters the base is X5 (P5), as it is the variable that
corresponds to the coefficient -1.
■ 6.2. The variable that leaves the base is chosen by calculating the quotient between the terms of the column of independent terms and the corresponding terms of the new pivot column: 6/(-2) [negative, discarded], 12/4 [=3] and 6/1 [=6]. This time it is X4 (P4).
■ 6.3. The pivot element is 4.

After updating all the rows, you get the following table:
Table IV. Iteration #4

3 2 0 0 0

Base Cb P0 P1 P2 P3 P4 P5

P2 2 12 0 1 -1/2 1/2 0

P5 0 3 0 0 -7/4 1/4 1

P1 3 3 1 0 3/4 -1/4 0

Z 33 0 0 5/4 1/4 0

10. End of the algorithm.

It is observed that in the last row all the reduced costs are non-negative, so the stopping condition is fulfilled.
The optimal solution is given by the value of Z in the column of independent terms (P0), in this example: 33. The point where it is reached can be read from the same column, looking at the rows corresponding to the decision variables that have entered the base: X1 = 3 and X2 = 12.
Undoing the change of variables gives x = 3 and y = 12.
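As a cross-check, the same model can be handed to an off-the-shelf solver (assuming SciPy is installed); linprog minimizes, so the objective is negated.

```python
from scipy.optimize import linprog

res = linprog(c=[-3, -2],                     # maximize 3x + 2y
              A_ub=[[2, 1], [2, 3], [3, 1]],
              b_ub=[18, 42, 24],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)                        # expected: [3. 12.] 33.0
```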
Two-Phase Simplex Method
The Two-Phase Simplex Method allows us to solve those Linear Programming models that, after being taken to their standard form, do not provide an initial basic feasible solution in the model variables. To deal with this situation there are different algorithmic strategies, among which the Two-Phase Simplex Method and the Big M Method stand out, both usually discussed in Operations Research courses. In what follows we focus on the first alternative through a theoretical and practical approach.

Phase I (The first basic feasible solution is sought):

1. We consider a linear programming model in its canonical form; this model must be transformed into its extended form by adding artificial variables in the constraints for which the origin is not a solution.
2. The objective function is then replaced by a minimization function whose decision variables are the artificial variables, while keeping the set of restrictions of the original problem.
3. We solve this auxiliary model until one of the following cases occurs: the artificial variables leave the base, or the objective function reaches the value zero. If neither occurs, the model has no solution.

Phase II (We solve the model with the new solution found):

1. We remove the artificial variables from the constraints, but keep the changes that occurred during Phase 1.
2. We return to the original objective function and solve the model with the changes produced in the constraints during Phase 1.
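A minimal sketch of the two phases just described, assuming SciPy is available and that the model is already in standard form (equality constraints, x ≥ 0). For simplicity, Phase II here re-solves the problem from scratch instead of reusing the Phase I basis, which is enough to illustrate the decomposition.

```python
import numpy as np
from scipy.optimize import linprog

def two_phase(c, A_eq, b_eq):
    """Sketch of the two-phase idea on a standard-form model (min c.x, A_eq x = b_eq, x >= 0)."""
    A_eq = np.asarray(A_eq, dtype=float)
    m, n = A_eq.shape
    # Phase I: append one artificial variable per constraint and minimize their sum.
    A1 = np.hstack([A_eq, np.eye(m)])
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    phase1 = linprog(c1, A_eq=A1, b_eq=b_eq, method="highs")
    if not phase1.success or phase1.fun > 1e-9:
        raise ValueError("the model has no feasible solution")
    # Phase II: drop the artificials and optimize the original objective.
    phase2 = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")
    return phase2.x, phase2.fun
```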

Example: Two-Phase Simplex Method

Solve the following Linear Programming model using the Two-Phase Simplex Method.

Phase I (Two-Phase Simplex Method)


Where X3 is the excess variable and X4 and X5 are artificial variables of restriction 1 and 2
respectively. Then we are in a position to prepare the initial table of Phase I where the
auxiliary variables X4 and X5 have reduced cost equal to one (given their respective
coefficients in the objective function of said phase).

The reduced costs of X4 and X5 are then brought to zero. To do this, row operations are
performed, for example, first multiplying row 1 by -1 and adding it to row 3, and then
multiplying row 2 by -1 and adding it to row 3.

Note now that the basic variables are X4 and X5 and the non-basic variables are X1, X2 and X3.
Among the non-basic variables, the one that has a negative reduced cost is X2, therefore said
variable enters the Base and through the feasibility criterion or minimum quotient, the basic
variable that leaves the base is determined. This is obtained from Min{2/1; 10/1}=2 .
Therefore X4 leaves the base and an iteration is performed.

After concluding the iteration, there are now two non-basic variables with negative reduced cost: X1 and X3. Taking convergence speed into consideration, X1 is chosen to enter the base, as it has the most negative reduced cost. The basic variable that leaves the base is obtained from Min{8/2}=4, which determines that X5 leaves the base. To update the table, row 2 is added to row 3 (so that the reduced cost of X1 becomes zero) and then row 2 is multiplied by 1/2 and added to row 1 (so that X1 becomes the basic variable associated with row 2, taking the place of the outgoing basic variable X5).
It is verified that Phase I of the Two-Phase Simplex Method is concluded. This situation is
detected when there is a basic solution that satisfies the conditions of non-negativity, where
the non-basic variables have reduced costs greater than or equal to zero and the value of the
objective function is equal to zero.

Phase II (Two-Phase Simplex Method)

Next, Phase II of the Two-Phase Simplex Method begins. In this stage, the columns associated with the auxiliary variables used in Phase I (in the example, variables X4 and X5) are eliminated and the vector of reduced costs is updated considering the objective function of the original problem in minimization format, that is, MIN -X1 - 3X2.

It should be remembered that the basic variables at the end of Phase I are X2 and X3; row 1 is then multiplied by 3 and added to row 3, obtaining the following:

From the previous procedure it follows that the non-basic variable X3 has a negative reduced cost and therefore enters the base. The basic variable that leaves the base is obtained from Min{4/(1/2)}=8, and therefore X1 leaves the base. With this, an iteration of the method is carried out, obtaining the following table:
Observe that the non-basic variable X1 now has a reduced cost equal to 2 (which satisfies the non-negativity condition), and the current basic solution in X2 and X3 is feasible.

Therefore, Phase II of the Two-Phase Simplex Method is concluded with optimal solution X1=0, X2=10 and X3=8 and optimal value V(P)=30.

Note:
Big M (or Large M) Method
Theoretically, it is expected that in the application of the Big M Method the auxiliary variables will be non-basic at the optimum. If the Linear Programming model is infeasible (i.e., if the constraints are not consistent), the final Simplex Method iteration will include at least one artificial variable in the base.

Additionally, the Big M technique theoretically requires that M tend to infinity. However, on a computer M must be finite, yet large enough. Specifically, M must be large enough to act as a penalty, while at the same time not being so large as to impair the accuracy of the Simplex Method calculations when handling a mixture of very large and very small numbers.

Simplex Method (Conclusions)


The example that we have developed in this article seeks to present in a simple and didactic
way the main fundamentals associated with the Simplex Method. It should be noted that it has
been necessary for the application of the algorithm to bring the original model to its standard
form, which, as discussed above, may have different representations depending on the
bibliography consulted.

In this context, each Linear Programming problem in its standard form meets the following
properties established in the Fundamental Theorem of Linear Programming :

1. If the problem does not have an optimal solution then it is unbounded or infeasible .
2. If it has a feasible solution, then it has a basic feasible solution .
3. If the problem has an optimal solution, it has an optimal basic feasible solution.

It should be noted that a basic feasible solution is not always available in the original variables of the model (after taking the problem to its standard form). Although there are various algorithmic strategies to face this difficulty, the reader is invited to review the tutorials developed on this topic, in particular those on the Two-Phase Simplex Method, the Big M Method and the Dual Simplex Method.
TRANSPORTATION MODEL

The transportation or distribution problem is a special network problem in linear programming that arises from the need to move units from specific points called sources or origins to other specific points called destinations. The main objectives of a transportation model are to satisfy all the requirements established by the destinations and, of course, to minimize the costs of the plan determined by the chosen routes.

The context in which the transportation model is applied is broad and can generate solutions related to operations, inventory and allocation problems.

A transportation model can be solved with ordinary linear programming; however, its structure allows the use of multiple alternative solution approaches, such as the assignment structure or the most popular heuristic methods: Vogel, Northwest Corner or Minimum Cost.

Transportation or distribution problems are among the most widely applied in the current economy, with many success stories on a global scale that encourage their adoption.
Additionally, there are several assumptions:

1. Requirements assumption: Each origin has a fixed supply of units that must be
completely distributed among the destinations.

2. Cost assumption: the cost of distributing units from a source to any destination is
directly proportional to the number of units distributed.

3. Property of feasible solutions: a transportation problem has feasible solutions if and


only if the sum of resources at the origins is equal to the sum of demands at the
destinations. In cases where the sum of resources and demands are not the same, a
fictitious origin or destination is added with the amount that allows the property of
feasible solutions to be fulfilled.

4. Property of integer solutions: In cases where both resources and demands take an
integer value, all basic variables (assignments) of any of the basic feasible solutions
(including the optimal solution) also assume integer values.

Due to the particular structure of the transport model, the Simplex tabular form acquires a layout that facilitates the assignment process to the basic variables.
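Since the balanced transportation model is just a linear program (one supply equation per origin and one demand equation per destination), it can also be solved directly with a general LP solver. The sketch below assumes SciPy and uses the data of the energy example solved later in this section.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[5, 1, 8],      # plant 1 -> Arequipa, Trujillo, Lima
                 [2, 4, 0],      # plant 2
                 [3, 6, 7]])     # plant 3
supply = [12, 14, 4]
demand = [9, 10, 11]
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):               # each origin ships exactly its supply
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1
    A_eq.append(row)
    b_eq.append(supply[i])
for j in range(n):               # each destination receives exactly its demand
    row = np.zeros(m * n)
    row[j::n] = 1
    A_eq.append(row)
    b_eq.append(demand[j])

res = linprog(cost.flatten(), A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.x.reshape(m, n))       # optimal shipment plan
print(res.fun)                   # expected minimum total cost for this data: 38.0
```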

VOGEL APPROXIMATION METHOD


The Vogel Approximation Method is an improved version of the Minimum Cost Method and
the Northwest Corner Method that generally produces better initial basic feasible solutions ,
meaning basic feasible solutions that report a lower value in the objective function
( minimization) of a balanced Transportation Problem (sum of supply = sum of demand).

STEP 1

For each row and each column, determine a penalty measure by subtracting the lowest cost from the second-lowest cost in that row or column.

STEP 2

Choose the row or column with the greatest penalty, that is, from the subtraction made in
"Step 1" you must choose the largest number. In case of a tie, it must be chosen arbitrarily (at
personal discretion).

STEP 3

From the row or column with the highest penalty determined in the previous step, we must
choose the cell with the lowest cost, and assign the greatest possible number of units to it.
Once this step is carried out, a supply or demand will be satisfied, therefore the row or column
will be crossed out. In the event of a tie, only 1 will be crossed out, the remaining row or
column will be left with supply or demand equal to zero (0).

STEP 4: CYCLE AND EXCEPTIONS

- If exactly one row or column with zero supply or demand remains uncrossed, stop.

- If a row or column with positive supply or demand remains uncrossed, determine the basic
variables in the row or column with the minimum cost method, stop.

- If all rows and columns that were not crossed out have zero supply and demand, determine
the basic variables zero by the least cost method, stop.

- If none of the above cases occur, return to Step 1 and repeat until all supplies and demands have been exhausted.
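The four steps above can be written down compactly. The sketch below implements them for a balanced problem (total supply = total demand); ties are broken by the first index found, which corresponds to the arbitrary choice allowed in Step 2, and when a single row or column remains its only cost is used as the penalty.

```python
import numpy as np

def vogel(cost, supply, demand):
    """Vogel approximation sketch for a balanced transportation problem."""
    cost = np.asarray(cost, dtype=float)
    supply, demand = list(supply), list(demand)
    m, n = cost.shape
    alloc = np.zeros((m, n))
    rows, cols = set(range(m)), set(range(n))

    def penalty(values):                     # difference of the two lowest costs
        v = sorted(values)
        return v[1] - v[0] if len(v) > 1 else v[0]

    while rows and cols:
        # Step 1: penalties per active row and column.
        row_pen = {i: penalty([cost[i, j] for j in cols]) for i in rows}
        col_pen = {j: penalty([cost[i, j] for i in rows]) for j in cols}
        # Step 2: pick the row or column with the greatest penalty.
        br = max(row_pen, key=row_pen.get)
        bc = max(col_pen, key=col_pen.get)
        if row_pen[br] >= col_pen[bc]:
            i = br
            j = min(cols, key=lambda j: cost[i, j])
        else:
            j = bc
            i = min(rows, key=lambda i: cost[i, j])
        # Step 3: assign as much as possible to the lowest-cost cell found.
        q = min(supply[i], demand[j])
        alloc[i, j] += q
        supply[i] -= q
        demand[j] -= q
        # Step 4: cross out only one exhausted row or column per step.
        if supply[i] == 0:
            rows.discard(i)
        elif demand[j] == 0:
            cols.discard(j)
    return alloc

# Energy example below: expected total cost 2*5 + 10*1 + 3*2 + 11*0 + 4*3 = 38
cost = [[5, 1, 8], [2, 4, 0], [3, 6, 7]]
plan = vogel(cost, [12, 14, 4], [9, 10, 11])
print(plan)
print((plan * np.asarray(cost)).sum())
```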

AIM

The aim is to reduce transportation costs to the minimum possible while satisfying the total demand and material requirements.

CHARACTERISTICS
 Unlike other basic feasible solution methods, which may end up sending the largest quantities at a high cost, this method seeks to send the largest quantities at the lowest cost.

 There are several origins and several destinations.

 An origin can supply different destinations.

 At the end of the procedure, supply and demand must be fully satisfied and/or their values must reach zero.

 Vogel's approximation finishes at or near the minimum cost.

 It is more elaborate than the previous methods, more technical and more time-consuming.

 It takes costs, supplies and demands into account when making assignments. It generally leaves us close to the optimum.

ADVANTAGES

 It quickly leads to a better solution through the calculation of the so-called row and column penalties, which represent the possible penalty cost that would be incurred by not assigning units to be transported to a certain position.

 It takes into account in the analysis the difference between the lowest transportation costs, through those same row and column penalties.

DISADVANTAGES

 It does not provide any criterion to determine whether the solution obtained by this method is optimal or not.

 It requires greater calculation effort than the Northwest Corner Method.

APPLICATION

The model is used to support decisions in activities such as inventory control, cash flow management and the scheduling of reserve levels in dams, among others. The method is heuristic and usually produces an initial solution that is optimal or close to optimal.

EXAMPLE 1
A Peruvian energy company has three generation plants to satisfy the daily electricity demand of three cities: Arequipa, Trujillo and Lima. Plants 1, 2 and 3 can supply 12, 14 and 4 million kW per day, respectively. The needs of the cities of Arequipa, Trujillo and Lima are 9, 10 and 11 million kW per day, respectively.

The cost of sending energy, per million kW, between each plant and each city is recorded in the following table:

                 CITIES
           AREQUIPA   TRUJILLO   LIMA   SUPPLY
PLANT 1       5          1         8      12
PLANT 2       2          4         0      14
PLANT 3       3          6         7       4
DEMAND        9         10        11      30

                 CITIES
           AREQUIPA    TRUJILLO    LIMA    SUPPLY   PENALTY
PLANT 1       5           1          8       12     │5-1│ = 4
PLANT 2       2           4          0       14     │2-0│ = 2
PLANT 3       3           6          7        4     │3-6│ = 3
DEMAND        9          10         11       30
PENALTY   │2-3│ = 1   │1-4│ = 3   │0-7│ = 7

The largest penalty (7) corresponds to the Lima column; its lowest cost is 0 (Plant 2), so 11 units are assigned there.

                 CITIES
           AREQUIPA    TRUJILLO    LIMA      SUPPLY   PENALTY
PLANT 1       5           1          8         12     │5-1│ = 4
PLANT 2       2           4          0 [11]     3     │4-2│ = 2
PLANT 3       3           6          7          4     │3-6│ = 3
DEMAND        9          10          0         19
PENALTY   │2-3│ = 1   │1-4│ = 3

(Assigned units are shown in brackets. The Lima column is now satisfied; the largest remaining penalty, 4, is in the Plant 1 row, whose lowest cost is 1 in the Trujillo column, so 10 units are assigned there.)

                 CITIES
           AREQUIPA    TRUJILLO    LIMA      SUPPLY
PLANT 1       5           1 [10]     8          2
PLANT 2       2           4          0 [11]     3
PLANT 3       3           6          7          4
DEMAND        9           0          0          9

                 CITIES
           AREQUIPA    TRUJILLO    LIMA      SUPPLY
PLANT 1       5 [2]       1 [10]     8          0
PLANT 2       2 [3]       4          0 [11]     0
PLANT 3       3 [4]       6          7          0
DEMAND        0           0          0          0

TO KNOW HOW MANY CELLS SHOULD BE OCCUPIED, WE CALCULATE THE REQUIRED NUMBER OF BASIC CELLS:

#ROWS + #COLUMNS – 1

So: 3 + 3 - 1 = 5 occupied cells.

TO CALCULATE THE TOTAL SHIPPING COST, THE FOLLOWING OPERATION IS CARRIED OUT:

Z = Σ (units assigned × unit cost)

Z = 2(5) + 10(1) + 3(2) + 11(0) + 4(3)

Z = 38 is the minimum total shipping cost

(Diagram of the plant-to-city assignments (origin → destination) omitted; the resulting distribution is summarized in the report below.)

REPORT:

To minimize transportation costs, the energy would be distributed to the cities as follows:

Plant 1 supplies the city of Arequipa with 2 million kW at a unit transportation cost of $5.

Plant 1 supplies the city of Trujillo with 10 million kW at a unit transportation cost of $1.

Plant 2 supplies the city of Arequipa with 3 million kW at a unit transportation cost of $2.

Plant 2 supplies the city of Lima with 11 million kW at a unit transportation cost of $0.

Plant 3 supplies the city of Arequipa with 4 million kW at a unit transportation cost of $3. (In this case, plants 1, 2 and 3 all supply the city of Arequipa to cover its demand of 9 million kW.)

PROBLEM SOLUTION THROUGH THE TORA PROGRAM

MINIMUM COST METHOD


The least cost method is a procedure used to obtain the initial feasible solution for a
transportation problem. It is used when the priority is to reduce product distribution costs.

The minimum cost method seeks to achieve the lowest transportation cost between several
demand centers (the destinations) and several supply centers (the sources).

The production capacity or supply of each source, as well as the requirement or demand of
each destination, are known and fixed.

The cost of transporting a unit of the product from each source to each destination is also
known.

The product must be transported from various sources to different destinations in such a way
as to meet the demand of each destination and, at the same time, minimize the total cost of
transportation.

Other methods can be used if time savings rather than cost savings are the priority.

CHARACTERISTICS

The problem of optimally allocating a product from various sources to different destinations is called the transportation problem.

– Transportation models deal with the transportation of a product manufactured in different


plants or factories (supply sources) to several warehouses (demand destinations).

– The objective is to satisfy the requirements of the destinations within the production
capacity limitations of the plants, at the minimum transportation cost.

STEPS OF THE MINIMUM COST METHOD

Step 1

The cell containing the lowest transportation cost in the entire table is selected. That cell is
assigned as many units as possible. This amount may be limited by supply and demand
restrictions.

In case multiple cells have the lowest cost, the cell where the maximum allocation can be
made will be selected.

Then we proceed to adjust the supply and demand that is in the affected row and column. It is
adjusted by subtracting the amount assigned to the cell.

Step 2

The row or column in which the supply or demand has been exhausted (is zero) is deleted.

In case both supply and demand values are equal to zero, any row or column can be
eliminated, arbitrarily.
Step 3

The previous steps are repeated with the next lowest cost and continue until all the available
supply in the different sources or all the demand from the different destinations is satisfied.
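The three steps can be condensed into a short routine. The sketch below assumes a balanced problem (total supply = total demand); when several cells share the lowest cost it simply takes the first one found, which is a simplification of the tie rule in Step 1.

```python
import numpy as np

def minimum_cost(cost, supply, demand):
    """Minimum cost method sketch for a balanced transportation problem."""
    cost = np.asarray(cost, dtype=float)
    supply, demand = list(supply), list(demand)
    m, n = cost.shape
    alloc = np.zeros((m, n))
    remaining = {(i, j) for i in range(m) for j in range(n)}

    while remaining:                       # Step 3: repeat until everything is assigned
        # Step 1: the cheapest remaining cell gets as many units as possible.
        i, j = min(remaining, key=lambda ij: cost[ij])
        q = min(supply[i], demand[j])
        alloc[i, j] += q
        supply[i] -= q
        demand[j] -= q
        # Step 2: delete the row and/or column that has been exhausted.
        if supply[i] == 0:
            remaining -= {(i, k) for k in range(n)}
        if demand[j] == 0:
            remaining -= {(k, j) for k in range(m)}
    return alloc

# Energy example of this section: expected total cost 38
cost = [[5, 1, 8], [2, 4, 0], [3, 6, 7]]
plan = minimum_cost(cost, [12, 14, 4], [9, 10, 11])
print(plan)
print((plan * np.asarray(cost)).sum())
```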

APPLICATIONS

– Minimize transportation costs from factories to warehouses or from warehouses to retail


stores.

– Determine the minimum cost location of a new factory, warehouse or sales office.

– Determine the minimum cost production schedule that satisfies the company's demand with
production limitations.

ADVANTAGES

The least cost method is considered to produce more accurate and optimal results compared
to the northwest corner method.

This is because the northwest corner method only gives importance to the supply and
availability requirement, with the top left corner as the initial allocation, regardless of the
shipping cost.

On the other hand, the least cost method includes transportation costs while assignments are
made.

– Unlike the northwest corner method, this method provides an accurate solution as it
considers the transportation cost when making the allocation.

– The least cost method is a very simple method to use.

– It is very simple and easy to calculate the optimal solution with this method.

– The minimum cost method is very easy to understand.

DISADVANTAGES

– To obtain the optimal solution, certain rules must be followed; however, the least cost method does not follow them step by step.

– The minimum cost method does not follow any systematic rule when there is a tie in the minimum cost.

– The least cost method allows selection through personal observation, which could create misunderstandings in obtaining the optimal solution.

– It does not provide any kind of criterion to determine whether the solution obtained with this method is optimal or not.

– The quantities of supply and demand are taken as fixed, since they are assumed not to vary over time.

– It does not take into account factors other than transportation costs when making assignments.

EXAMPLE 1

A Peruvian energy company has three generation plants to satisfy the daily electricity demand of three cities: Arequipa, Trujillo and Lima. Plants 1, 2 and 3 can supply 12, 14 and 4 million kW per day, respectively. The needs of the cities of Arequipa, Trujillo and Lima are 9, 10 and 11 million kW per day, respectively.

The cost of sending energy, per million kW, between each plant and each city is recorded in the following table:

                 CITIES
           AREQUIPA   TRUJILLO   LIMA   SUPPLY
PLANT 1       5          1         8      12
PLANT 2       2          4         0      14
PLANT 3       3          6         7       4
DEMAND        9         10        11      30
                 CITIES
           AREQUIPA    TRUJILLO    LIMA      SUPPLY
PLANT 1       5           1 [10]     8          2
PLANT 2       2           4          0 [11]     3
PLANT 3       3           6          7          4
DEMAND        9           0          0          9

(Assigned units are shown in brackets: the cheapest cell, cost 0 at Plant 2–Lima, receives 11 units, and the next cheapest, cost 1 at Plant 1–Trujillo, receives 10.)

                 CITIES
           AREQUIPA    TRUJILLO    LIMA      SUPPLY
PLANT 1       5 [2]       1 [10]     8          0
PLANT 2       2 [3]       4          0 [11]     0
PLANT 3       3 [4]       6          7          0
DEMAND        0           0          0          0

CALCULATION OF TOTAL SHIPPING COST

Z = Σ (units assigned × unit cost)

Z = 2(5) + 10(1) + 3(2) + 11(0) + 4(3)

Z = 38 is the minimum total shipping cost

In this case, the minimum cost method yields a total cost equal to the one obtained with the Vogel approximation method, although this is not usually the case. It is also simple to apply and performs better, in terms of results, than the Northwest Corner Method.

SOLUTION BY THE MINIMUM COST METHOD IN TORA
