## Chapter I: Linear Programming


Xiaoxi Li

September 23, 2016

This is a preliminary version, initialized on 09/22/2016 and revised on 09/23/2016. For further versions of this draft, please visit the course's web page. You are welcome to send me any comments, remarks, or corrections of typos/grammar errors.

## 1 Introduction: LP modeling, graph solution and some examples

See the PPT used in the course and the textbook.

## 2 LP: The Simplex Method

Consider the following LP problem, derived from the prototype example in Chapter 3 of Hillier and Lieberman:

$$
\text{Maximize } z = 3x_1 + 5x_2
\quad
\text{s.t.}
\begin{cases}
x_1 \le 4 & (1)\\
2x_2 \le 12 & (2)\\
3x_1 + 2x_2 \le 18 & (3)\\
x_1,\ x_2 \ge 0 & (+)
\end{cases}
\tag{2.1}
$$

### 2.1 Preparation

The LP problem described in Equation (2.1) models the situation of allocating limited resources to activities so as to maximize the total profit. This is a classical form of LP problems. Suppose that (for i = 1, ..., m and j = 1, ..., n)

∗ Department of Mathematical Economics and Mathematical Finance, Economics and Management School & Institute for Advanced Study, Wuhan University. Email: [email protected]. Course's web page: xiaoxili.weebly.com/teaching.

- z = value of overall measure of performance
- xj = level of activity j
- cj = unit contribution to z of activity j
- bi = total capacity of resource i available for allocation
- aij = amount of resource i consumed by each unit of activity j

Let us formulate the mathematical model for the general problem:

$$
\text{Maximize } z = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n
\quad
\text{s.t.}
\begin{cases}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \le b_1 & (1)\\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \le b_2 & (2)\\
\quad\vdots\\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \le b_m & (m)\\
x_1 \ge 0, \ldots, x_n \ge 0 & (+)
\end{cases}
\tag{2.2}
$$

We distinguish two types of constraints: "(1), ..., (m)", the functional constraints, and "(+)", the nonnegativity constraints.

Any point (x1, ..., xn) ∈ R^n satisfying all constraints "(1), ..., (m) and (+)" in (2.2) is a feasible solution. The feasible region is the subset of R^n defined by all constraints (the collection of all feasible solutions). The feasible region is always closed, and could be empty, nonempty and bounded, or nonempty and unbounded. The first case gives no solution for the LP problem, and the second case gives a bounded optimal value for the LP problem, where the optimal solution(s) could be unique or infinitely many.

Question: what about the third case? That is, when the feasible region is unbounded, what would the optimal value and the optimal solution look like?

The above LP problem is said to be in standard form, which is defined to satisfy the following three conditions:

1. The objective z is maximized;
2. The LHS and the RHS entries of the functional constraints are connected with "≤" and the RHS entries are positive constants;
3. x1, ..., xn are all required to be nonnegative (nonnegativity constraints exist for all decision variables).

Other forms include: z being minimized, "bj < 0", the LHS and RHS entries of the constraints being connected by "≥" or "=", or some xi not required to be nonnegative.

We stress that our presentation of the simplex method is essentially based on (starts from) the standard form. LP problems in other forms will also be encountered, and we shall show later on how to solve them by transforming them into the standard form.

Some geometry on the linear system

Denition 2.1 In Rn , a hyperplane is any set of the form: H(α, β) = {x ∈ Rn |αT x = β},

where α ∈ Rn \ {0} and β ∈ R.

Fact: H is a hyperplane in R^n if and only if the set H − x0 := {x − x0 | x ∈ H}, where x0 ∈ H, is a subspace of R^n of dimension (n − 1).

Hyperplanes in R are points, hyperplanes in R^2 are lines, hyperplanes in R^3 are planes, and hyperplanes in R^n are translates of (n − 1)-dimensional subspaces. Every hyperplane divides the space in half. This defines two closed half-spaces, which are H^+(α, β) = {x ∈ R^n | α^T x ≥ β} and H^−(α, β) = {x ∈ R^n | α^T x ≤ β}. H(α, β) is then called the supporting hyperplane of H^+(α, β) and H^−(α, β).

Denition 2.2 Any set in

Rn that can be represented as the intersection of a nite number

of half-spaces is called a convex polyhedron. The supporting hyperplanes of these half-spaces forming the convex polyhedron is also called supporting hyperplanes of the convex polyhedron.

Remark.

A bounded convex polyhedron is called a convex polytope, another denition for which is that it is the convex hull 1 of a nite set of points in Rn . These extreme points are then called the vertices of the convex polytope. The vertices are usually intersections of several (how many?) supporting hyperplanes. We see that any constraint in Equation (2.2) denes a half-space, and a LP (in standard form or not) is simply a problem of maximizing or minimizing a linear objective function over a convex polyhedron.

### 2.2 Some theories on LP

This subsection presents several important theories of LP that motivate the simplex method. We focus on LP with a nonempty and bounded feasible region². In this case, the feasible region is a convex polytope, as the convex hull of a finite number of extreme points (vertices, or corner-point feasible/CPF solutions). A constraint boundary of the LP problem is the feasible part of a supporting hyperplane. Two CPF solutions are adjacent to each other if they share (n − 1) common supporting hyperplanes (constraint boundaries)³. The following three basic properties of LP with a bounded feasible region underlie the simplex method for finding an optimal solution.

¹ Let X = {x1, ..., xN} be a finite set of points in R^n. Its convex hull is defined as conv(X) := {x ∈ R^n | x = Σ_{i=1}^N αi xi for some αi ≥ 0 with Σ_{i=1}^N αi = 1}.

² As will be demonstrated later, the simplex method also produces outcomes for the degenerate cases.

Property 1. (Fundamental Theorem of LP) The LP problem has bounded optimal solutions. If (a) there is only one optimal solution, it should be a CPF solution (extreme point); if (b) there is more than one optimal solution, then there are in fact infinitely many, and at least two of them must be CPF solutions (extreme points).

Proof. When the feasible region is nonempty and bounded, it is a compact set. Since the objective function is linear, thus continuous, a bounded optimal solution exists. (i) We first prove that any optimal solution should be on the boundary of the feasible region (the convex polytope P). Suppose by contradiction that there is one optimal solution, denoted by X^∗, that is in the interior of P. Moving a little bit around X^∗ in any direction, we stay within P. This leads to a contradiction: if we move X^∗ in the direction that increases the objective function, we arrive at a feasible solution X′ that gives a strictly better value for the objective than X^∗. (ii) Next we fix one optimal solution X^∗, and let z^∗ = C^T X^∗ be the optimal value of the LP problem. This defines a hyperplane H^∗(C^T, z^∗) = {X ∈ R^n | C^T X = z^∗} passing through the point X^∗. The intersection of H^∗(C^T, z^∗) with the convex polytope (feasible region) gives us all the optimal solutions. Let B be this intersection set. By our argument in (i), B should be on the boundary of the convex polytope. Moreover, B is a convex set (why?), so it should coincide with a facet⁴ of P (why? a little bit delicate...). Now we know that a facet is spanned by several extreme points of the convex polytope, so: case (a), if the LP problem has a unique optimal solution, the facet B is the unique extreme point; case (b), if the LP problem has more than one optimal solution, then the facet B contains at least two extreme points, say X1^∗ and X2^∗, and any point on the edge connecting X1^∗, X2^∗ is also an optimal solution. Q.E.D.

Property 2. The LP problem has finitely many CPF solutions (extreme points).

Proof. A CPF solution is determined as the intersection of n supporting hyperplanes such that the vectors defining them are linearly independent, in which case we call these supporting hyperplanes independent. There are in total (m + n) supporting hyperplanes (functional + nonnegativity constraints); thus choosing n from (m + n), C_{m+n}^{n} = (m+n)!/(m! n!) gives a (finite!) upper bound for the number of CPF solutions. Q.E.D.

Remark. Property 2 holds true even if the feasible region is unbounded. Property 1 and Property 2 together imply that an optimal solution can be found by enumeration of all CPF solutions. Even for m and n not big, this enumeration might take a huge amount of time: m = n = 50 would imply around 10^29 systems of equations to be solved! By contrast, the simplex method needs a dramatically smaller amount of computation to find one optimal solution. An essential point of the construction is the following Property 3.

³ As a response to a previous question, an extreme point of a convex polytope is an intersection of (at least) n supporting hyperplanes. The intersection of some (n − 1) supporting hyperplanes for two adjacent extreme points defines a line connecting them, and the feasible part of this line is called an edge of the polytope.

⁴ A facet of a polytope is the intersection of several supporting hyperplanes.
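Properties 1 and 2 suggest a naive algorithm: intersect every choice of n independent supporting hyperplanes, keep the feasible intersections, and take the best. The following sketch (not the simplex method itself; all names are illustrative) carries this out for the prototype example (2.1), where n = 2 and there are m + n = 5 supporting hyperplanes:

```python
import itertools
import numpy as np

# Constraint boundaries of example (2.1), written as a_i^T x = b_i:
# x1 = 4, 2x2 = 12, 3x1 + 2x2 = 18, then x1 = 0, x2 = 0 (nonnegativity).
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])
c = np.array([3.0, 5.0])  # objective z = 3 x1 + 5 x2

def feasible(x, tol=1e-9):
    # The three functional "<=" constraints plus x >= 0.
    return (A[:3] @ x <= b[:3] + tol).all() and (x >= -tol).all()

candidates = []
for i, j in itertools.combinations(range(5), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue  # parallel hyperplanes: no unique intersection point
    x = np.linalg.solve(M, b[[i, j]])
    if feasible(x):
        candidates.append(x)  # a CPF solution (corner point)

best = max(candidates, key=lambda x: c @ x)
print(best, c @ best)  # the CPF solution (2, 6) with z = 36
```

Of the C(5, 2) = 10 pairs, two are parallel and three intersect outside the feasible region, leaving the five CPF solutions; the cost of this enumeration is exactly what the simplex method avoids.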

Property 3. (Optimality Test) If a CPF solution has no adjacent CPF solution that is strictly better than itself (as measured by the objective z), then this CPF solution is an optimal solution.

A heuristic proof. Let X^∗ be such a CPF solution, let z^∗ = C^T X^∗ be the corresponding value, and look at the hyperplane H(C^T, z^∗). If X^∗ has only one adjacent CPF solution, then P should be the edge connecting the two points (is it true, and why?) and the proof is trivial. Consider now that X^∗ has several adjacent CPF solutions, say X^1, ..., X^t. By assumption, these adjacent solutions are all contained in H^−(C^T, z^∗). For each X^j ∈ {X^1, ..., X^t}, we pick one supporting hyperplane Hj(Aj^T, bj) of P passing through both X^j and X^∗. Using the convexity of the polytope P, one can prove that ∩_j Hj^−(Aj^T, bj) ⊆ H^−(C^T, z^∗). Q.E.D.

Remark. Property 3 implies that to find an optimal solution, we do not need to look at the extreme points one by one. Rather, we compare each extreme point with only its adjacent extreme points, and local optimality implies global optimality.

### 2.3 The basic simplex method

We are now back to the prototype example to illustrate the basic simplex method for LP in standard form:

$$
\text{Maximize } z = 3x_1 + 5x_2
\quad
\text{s.t.}
\begin{cases}
x_1 \le 4 & (1)\\
2x_2 \le 12 & (2)\\
3x_1 + 2x_2 \le 18 & (3)\\
x_1,\ x_2 \ge 0 & (+)
\end{cases}
\tag{2.3}
$$

#### 2.3.1 Solving the example: the geometric view

In this subsection we utilise the three basic properties of LP (motivating the simplex method) to solve the example. To this aim, we first compute all the CPF solutions and, for each of them, list their adjacent CPF solutions. As was mentioned, a CPF solution can be defined as an intersection of n independent supporting hyperplanes that lies within the feasible region. For each CPF solution, these hyperplanes are referred to as the active ones.

| CPF Solution | Its Adjacent CPF Solutions | Active Supporting Hyperplanes |
|---|---|---|
| (0, 0) | (0, 6) and (4, 0) | x2 = 0 and x1 = 0 |
| (0, 6) | (2, 6) and (0, 0) | x1 = 0 and 2x2 = 12 |
| (2, 6) | (4, 3) and (0, 6) | 2x2 = 12 and 3x1 + 2x2 = 18 |
| (4, 3) | (4, 0) and (2, 6) | 3x1 + 2x2 = 18 and x1 = 4 |
| (4, 0) | (0, 0) and (4, 3) | x1 = 4 and x2 = 0 |

Note that the above table does not exhaust all combinations of two supporting hyperplanes (here n = 2), because either they have no intersection or the intersection is not feasible. In other cases, it is possible that more than n supporting hyperplanes intersect at one point, so that several combinations of n supporting hyperplanes correspond to a single CPF solution. When the problem is in R^2 as above, it is possible to find the extreme points on the graph.

Question. Find all other combinations of 2 supporting hyperplanes whose intersections did not appear in the above table.

Insert here: FIGURE 4.1 in H-L.

Initialization. Choose (0, 0) as the initial CPF solution.

Remark. Using the supporting hyperplanes from the nonnegativity constraints (here x1 = 0, x2 = 0) is convenient for computation. Moreover, when the LP is in standard form, this always produces a feasible solution.

Optimality Test: We compare (0, 0) to its adjacent CPF solutions (0, 6), (4, 0) and find that (0, 0) is not an optimal solution. Indeed, both (0, 6) and (4, 0) produce a higher value (30 and 12, compared to 0) for the objective z.

Iteration 1: Move to a better adjacent CPF solution, (0, 6), by performing the following analysis.

Now we need to fix the next CPF solution at which to check optimality. The simplex method moves to one of the adjacent CPF solutions that performs strictly better than the current one. The problem is to choose which one. One may suggest choosing the one that gives the highest value among all adjacent CPF solutions. Nevertheless, this is not the case in the simplex method. Rather, it chooses the direction along which the objective z increases at the faster rate. In the objective function z = 3x1 + 5x2, the increasing rate for x2 is 5, faster than that for x1, which is 3. Thus we move the current CPF solution (0, 0) along the vertical line x1 = 0 (which solely increases x2), and stop at the next CPF solution (0, 6), which is computed as the intersection of "x1 = 0" and "2x2 = 12".

Optimality Test. We compare (0, 6) to its adjacent CPF solution (2, 6) (note that this time (0, 0) need not be compared) and find that (0, 6) is not an optimal solution.

Iteration 2. Move to a better adjacent CPF solution, (2, 6), by performing the following analysis.

Note that this time, (2, 6) is the only adjacent CPF solution of (0, 6) left to check, so we move along the line "2x2 = 12" until its intersection with the line "3x1 + 2x2 = 18", where the CPF solution (2, 6) lies.

Optimality Test. We compare (2, 6) to its adjacent CPF solutions (0, 6), (4, 3) and find that (2, 6) is an optimal solution. Indeed, both (0, 6) and (4, 3) produce a lower value (30 and 27, compared to 36) for the objective z.

Insert here: FIGURE 4.2 in H-L.
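The geometric iterations above can be replayed mechanically. The sketch below walks the adjacency table from (0, 0), moving to the best strictly better neighbor until the optimality test passes. (This best-neighbor rule is a simplification: the simplex method proper uses the fastest-rate rule, but on this example the path visited is the same.)

```python
# Adjacency of the CPF solutions, as listed in the table above.
adjacent = {
    (0, 0): [(0, 6), (4, 0)],
    (0, 6): [(2, 6), (0, 0)],
    (2, 6): [(4, 3), (0, 6)],
    (4, 3): [(4, 0), (2, 6)],
    (4, 0): [(0, 0), (4, 3)],
}

def z(v):
    x1, x2 = v
    return 3 * x1 + 5 * x2

v = (0, 0)  # initialization
path = [v]
while True:
    better = [w for w in adjacent[v] if z(w) > z(v)]
    if not better:          # optimality test: no strictly better neighbor
        break
    v = max(better, key=z)  # move to the best adjacent CPF solution
    path.append(v)

print(path, z(v))  # (0,0) -> (0,6) -> (2,6), ending with z = 36
```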

#### 2.3.2 The simplex method: the algebraic form

An overview. In the last subsection, we have seen how the "idea behind the simplex method" helps us solve an LP problem with 2 decision variables:

- the CPF solutions matter;
- there is a finite number of CPF solutions;
- a local optimality implies the global optimality.

Yet, the procedure is not exactly the simplex method, since more computations are involved than needed. As will be seen later on, at each iteration, the simplex method does not compute all the adjacent CPF solutions. Instead, it identifies the direction of moving directly, without knowing any adjacent CPF solution. Further, when there are more than 2 variables, the geometric view is missing, and we need a system of algebraic language to deal with the iteration procedure.

As was demonstrated, a CPF solution (extreme point) is the intersection of a group of n independent supporting hyperplanes, and two adjacent CPF solutions (extreme points) share (n − 1) common supporting hyperplanes. To determine the next CPF solution (group), the simplex method selects one supporting hyperplane within the group to be dropped, and identifies a new one outside this group to be added in. We introduce a slack variable for each functional constraint as an indicator variable for whether the constraint binds or not. The original decision variable xi is already an indicator variable for the nonnegativity constraint "xi ≥ 0". In view of this,

- the group of n supporting hyperplanes (a geometric concept, heavy in notation) defining the CPF solution can be replaced by the corresponding group of n indicator variables (an algebraic concept, compact in notation);
- switching from one CPF solution to another adjacent to it can be done by replacing one indicator in the group with another one outside it.

This group of n indicator variables (equal to zero, active) defining the current CPF solution is called the nonbasic variables, and the other m are called the basic variables, forming the basis. In terms of this language, at each iteration, we shall identify one nonbasic variable (called the entering variable) to enter the basis and one basic variable (called the leaving variable) to exit the basis. The general rules for the selection are:

7

• the entering variable is chosen to make the objective z increasing at a faster rate (eciency concern); • the existing variable is chosen to make the next corner-point solution still feasible (the minimum ratio test). After selecting the new group of n nonbasic variables, we set them equal to zero and then solve for the other m basic variables. This can be done by using the elementary algebraic operations on the constraint equations to obtain the proper form from Gaussian elimination. The variables are put together to form the augmented CFP solution (called the basic feasible (BF) solution). Finally, to make the optimality test easy, one can rewrite the objective function to contain only nonbasic variables, again by elementary algebraic operations.

***************************

Starting from an LP problem in standard form, we add one slack variable sj to the LHS of each functional constraint "(j)" to obtain an equality constraint. This gives us the following LP problem in canonical form (also called augmented form):

$$
\text{Maximize } z = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n
\quad
\text{s.t.}
\begin{cases}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n + s_1 = b_1 & (1)\\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n + s_2 = b_2 & (2)\\
\quad\vdots\\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n + s_m = b_m & (m)\\
x_1 \ge 0, \ldots, x_n \ge 0,\ s_1 \ge 0, \ldots, s_m \ge 0 & (+)
\end{cases}
\tag{2.4}
$$

The nonnegative slack variables sj are also decision variables in LP (2.4), yet their coefficients in the objective function are zero and thus omitted.

Question. Show that the LP problem (2.2) and the LP problem (2.4) are equivalent, i.e. a feasible solution for (2.2) is also a feasible solution for (2.4), and vice versa.

In the canonical form, sj = 0 if and only if the constraint "(j)" in the standard form binds, i.e. "aj1 x1 + aj2 x2 + · · · + ajn xn = bj". Viewed in the same way, xi = 0 means the binding of the constraint "xi ≥ 0". Thus we also call them indicator variables. An augmented solution is a solution to the LP in canonical form, which can also be seen as a solution for the original decision variables that has been augmented by the corresponding values of the slack variables. It is an augmented feasible solution if all constraints in canonical form are satisfied. We call an augmented corner-point solution a basic solution, to emphasize its algebraic meaning in a linear system. In the same way, we call an augmented CPF solution a basic feasible (BF) solution.

Let (x1^∗, ..., xn^∗, s1^∗, ..., sm^∗) be an augmented feasible solution. Since each CPF solution is the intersection of n independent supporting hyperplanes, we see that an augmented CPF solution, thus a BF solution, has at least n indicator variables equal to zero. In this way, we distinguish two types of variables in a basic solution:

- the n (the number of original decision variables) nonbasic variables, which are set equal to zero; each one refers to a binding constraint in the LP in standard form;
- the m (the number of functional constraints) others, named basic variables (this set of variables is referred to as the basis).

Basically, the n nonbasic variables are set to zero so as to obtain a system of m equations (from the m functional constraints), and the m basic variables are obtained as a simultaneous solution to this system. The system may have no solution, or it may have multiple ones. Note that the basic variables could also be equal to zero. Suppose that we have a unique solution, so that a basic solution is obtained. Then this basic solution is a BF solution if all the m basic variables are nonnegative (why?).

Question. In an LP problem with n decision variables and m functional constraints, there are n degrees of freedom. One can choose whichever n variables to be the nonbasic variables, and set them equal to zero. However, this does not always produce a basic solution. Prove that: if the n supporting hyperplanes corresponding to a set of n nonbasic variables are independent, i.e. the vectors (coefficients for x1, ..., xn in the standard form, living in R^n) defining them are linearly independent, then by letting these nonbasic variables all equal zero, we obtain a unique basic solution. Under what condition do they produce no solution? And multiple solutions?

There is a one-to-one correspondence between the CPF solutions (resp. corner-point solutions) and the BF solutions (resp. basic solutions), so we can define two BF solutions (resp. basic solutions) to be adjacent if their associated CPF solutions (resp. corner-point solutions) are adjacent.

Question. Prove that two BF solutions are adjacent if and only if all but one of their nonbasic variables are the same.

We illustrate these notions and the simplex method with the following example. By introducing slack variables, we obtain the equivalent canonical form of the LP problem (2.1) as follows:

$$
\text{Maximize } z = 3x_1 + 5x_2
\quad
\text{s.t.}
\begin{cases}
x_1 + s_1 = 4 & (1)\\
2x_2 + s_2 = 12 & (2)\\
3x_1 + 2x_2 + s_3 = 18 & (3)\\
x_1, x_2, s_1, s_2, s_3 \ge 0 & (+)
\end{cases}
\tag{2.5}
$$

One can choose "x1, s2" to be the nonbasic variables: "x1 = 0, s2 = 0". Substituting them back into Equation (2.5) yields the values of the basic variables: "x2 = 6, s1 = 4, s3 = 6". The corner-point solution is then (0, 6) and the basic solution is (0, 6, 4, 0, 6). Since all variables are nonnegative, this is a BF solution.
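This computation can be checked numerically: fix a set of n nonbasic variables at zero and solve the remaining m × m linear system for the basic variables. A sketch (the helper name is illustrative):

```python
import numpy as np

# Canonical-form system (2.5): columns x1, x2, s1, s2, s3.
A = np.array([[1.0, 0, 1, 0, 0],
              [0, 2, 0, 1, 0],
              [3, 2, 0, 0, 1]])
b = np.array([4.0, 12.0, 18.0])

def basic_solution(nonbasic):
    """Set the given n nonbasic variables to 0 and solve for the m basic ones."""
    basic = [j for j in range(5) if j not in nonbasic]
    B = A[:, basic]             # m x m basis matrix
    xB = np.linalg.solve(B, b)  # unique if the basis columns are independent
    x = np.zeros(5)
    x[basic] = xB
    return x

x = basic_solution({0, 3})  # nonbasic variables x1 and s2
print(x)                    # the basic solution (0, 6, 4, 0, 6)
print((x >= 0).all())       # all nonnegative, hence a BF solution
```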

Question. Find all the adjacent BF solutions to (0, 6, 4, 0, 6) by replacing each time one of the two nonbasic variables "x1, s2" with "x2, s1 or s3".

In Section 2.3.1, we solved the example by checking the CPF solutions. In particular, they were not enumerated one by one: at each CPF solution, we computed all its adjacent solutions and compared them to it, and moved to a better one adjacent to it if it was not locally optimal. The geometric procedure of picking all the adjacent CPF solutions is difficult to operate when there are more than 3 decision variables. Moreover, the computation of all the adjacent CPF solutions is still too costly. The simplex method identifies directly the direction of moving and calculates only the adjacent CPF solution that we actually arrive at. The direction is determined by selecting one indicator variable (a nonbasic variable) to enter the basis and another (a basic variable) to leave the basis. The procedure, implemented by algebraic operations, is illustrated below: in each iteration, we calculate one BF solution, check its optimality, and move to a better adjacent BF solution if optimality is not satisfied.

Initialization. We choose (0, 0) as the initial CPF solution, with {x1, x2} the nonbasic variables. The basic variables' values can be read directly from Eq. (2.5): "s1 = 4, s2 = 12, s3 = 18", which are nonnegative, thus the BF solution obtained is feasible.

$$
\begin{cases}
s_1 = 4 & (1)\\
s_2 = 12 & (2)\\
s_3 = 18 & (3)
\end{cases}
$$

The coefficient matrix for these basic variables is in a proper form from Gaussian elimination, that is, each constraint contains only one basic variable, with its coefficient equal to 1, and this basic variable does not appear in any other equation.

Now the question raised is: is it always possible to find a BF solution by assigning all the original decision variables equal to zero? The answer is yes when the LP is in standard form (why?). This also explains the role that the standard form plays in the simplex method.

Optimality Test and Iteration 1. It is possible to compute all the adjacent BF solutions to (0, 0, 4, 12, 18) and compare them to it one by one. However, this is too costly, and a more efficient method is available. Moreover, recall that our criterion for selecting the next BF solution is that, along the edge connecting the current BF solution and the next one, the objective z increases at the fastest rate. These two steps can be combined as follows: the objective function is z = 3x1 + 5x2, thus

- the increasing rate in x1 for z is 3, positive;
- the increasing rate in x2 for z is 5, positive.

Remark. In order to balance the constraint equations, when the nonbasic variable x1 or x2 increases, the basic variables s1, s2, or s3 need to change their values. However, since all the basic variables have zero coefficients in the objective function, the change in s1, s2 or s3 has no effect on the change of z. Thus 3 or 5 is the correct increasing rate in x1 or x2. We stress this point because at the beginning of each iteration (corresponding to a new BF solution), it is vital to have the objective z contain no basic variable.

Step 0 (optimality test). Since increasing x1 or x2 increases z, (0, 0) is not optimal, and we must proceed to Iteration 1.

Step 1 (determining the direction of movement). The direction is to increase x2, because 5 > 3. x2 is then called the entering basic variable, which is to enter the basis to become a basic variable.

Step 2 (where to stop). From Step 1, we want to increase x2 as far as possible until we reach the boundary of the feasible region. To determine the point, we look at the constraint equations determining the boundary. Letting x1 = 0, we make the following minimum ratio test:

- from constraint (1): s1 = 4 ≥ 0 ⟹ no upper bound on x2;
- from constraint (2): s2 = 12 − 2x2 ≥ 0 ⟹ x2 ≤ 12/2 = 6 ← minimum;
- from constraint (3): s3 = 18 − 2x2 ≥ 0 ⟹ x2 ≤ 18/2 = 9.

Thus, x2 can be increased just to 6, at which point s2 drops to 0. The objective of this test is to determine which basic variable drops to 0 first as the entering basic variable x2 increases. At this point, the functional constraint (2) binds: "2x2 = 12" with "s2 = 0". s2 then exits the basis and becomes a nonbasic variable for the next BF solution. We call it the leaving basic variable. Among all constraints, we only need to look at those in which the coefficient of the entering basic variable is strictly positive (here constraints (2) and (3) but not (1)), since the others pose no restriction on it (here constraint (1)).

Step 3 (solving for the new BF solution). Increasing x2 from 0 to 6 moves us from the initial BF solution on the left to the new BF solution on the right:

|  | Initial BF Solution | New BF Solution |
|---|---|---|
| Nonbasic variables | x1 = 0, x2 = 0 | x1 = 0, s2 = 0 |
| Basic variables | s1 = 4, s2 = 12, s3 = 18 | s1 = ?, x2 = 6, s3 = ? |
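The minimum ratio test of Step 2 can be sketched in a few lines; `rhs` and `col` are the current RHS entries and the entering variable's column for this iteration (names are illustrative):

```python
# Minimum ratio test for iteration 1: entering variable x2.
rhs = [4, 12, 18]  # current RHS of constraints (1), (2), (3)
col = [0, 2, 2]    # coefficients of the entering variable x2 in those rows

# Only strictly positive coefficients restrict the increase of x2.
ratios = [(r / a, i) for i, (r, a) in enumerate(zip(rhs, col), start=1) if a > 0]
bound, row = min(ratios)
print(bound, row)  # x2 can be increased to 6; the basic variable of row (2) leaves
```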

We write the objective function as z − 3x1 − 5x2 = 0; it is regarded as a new equation and is put together with the other constraint equations:

$$
\begin{cases}
z - 3x_1 - 5x_2 = 0 & (0)\\
x_1 + s_1 = 4 & (1)\\
2x_2 + s_2 = 12 & (2)\\
3x_1 + 2x_2 + s_3 = 18 & (3)
\end{cases}
$$

The objective is to find the new BF solution, and to modify the objective function so that the optimality test and, if needed, the new iteration can be implemented easily. To do this, we convert the above system into a proper form from Gaussian elimination for the new basic variables s1, x2 and s3. At the same time, the objective function will contain only the nonbasic variables x1 and s2. All this is done by performing elementary algebraic operations on the above linear system, including:

1. Multiply (or divide) an equation by a nonzero constant;
2. Add (or subtract) a multiple of one equation to (or from) another equation.

To do this, we see that the coefficients for the entering basic variable x2 in Eqs. (0)-(3) are (−5, 0, 2, 2), and we need to change them into (0, 0, 1, 0), as those of the leaving basic variable s2. We do the following operations in order:

1. Eq. (2′) = Eq. (2) ÷ 2;
2. Eq. (0′) = Eq. (0) + 5 Eq. (2′);
3. Eq. (3′) = Eq. (3) − 2 Eq. (2′).

The resulting new system becomes:

  (0)    (1)  (2)    (3)

z − 3x1

+ 25 s2

x1 x2 3x1

= 30 + s1 =4 1 + 2 s2 =6 − s 2 + s3 = 6

x1 and s2 are the new nonbasic variables, thus are equal to 0. This gives immediately the new BF solution (x1 , x2 , s1 , s2 , s3 ) = (0, 6, 4, 0, 6), which yields z = 30.

Optimality Test and Iteration 2. From Eq. (0), we obtain the new objective function:

z = 30 + 3x1 − (5/2)s2.

Step 0 (optimality test). Increasing x1 would yield a higher value of z, thus the current BF solution is not optimal.

Step 1 (determining the direction of movement). Increase x1, the entering basic variable, which enters into the basis.

Step 2 (where to stop). From Step 1, we want to increase x1 as far as possible until we reach the boundary of the feasible region. To determine the point, we look at the constraint equations determining the boundary. Letting s2 = 0, we make the following minimum ratio test:

- from constraint (1): s1 = 4 − x1 ≥ 0 ⟹ x1 ≤ 4/1 = 4;
- from constraint (2): x2 = 6 ≥ 0 ⟹ no upper bound on x1;
- from constraint (3): s3 = 6 − 3x1 ≥ 0 ⟹ x1 ≤ 6/3 = 2 ← minimum.

Thus, the entering basic variable x1 can be increased just to 2, at which point s3 drops to 0 (hence s3 is the leaving basic variable).

Step 3 (solving for the new BF solution). Now the coefficients for x1 in the new system should be (0, 0, 0, 1) (the first entry being that in Eq. (0)), as those of s3 in the current system. We perform the following operations in order:

1. Eq. (3′) = Eq. (3) ÷ 3;
2. Eq. (0′) = Eq. (0) + 3 Eq. (3′);
3. Eq. (1′) = Eq. (1) − Eq. (3′).

The resulting new system becomes:

  (0)    (1) (2)    (3)

z s1 x2 x1

+ 32 s2 + s3 + 31 s1 − 13 s3 + 12 s2 − 13 s2 + 31 s3

= 36 =2 =6 =2

The new BF solution reads as (x1 , x2 , s1 , s2 , s3 ) = (2, 6, 2, 0, 0), and z = 36.

Optimality Test. The objective function now reads

z = 36 − (3/2)s2 − s3.

Increasing either s2 or s3 would decrease z, so no adjacent (thus no other) BF solution is strictly better than the current one. Conclusion: the optimal decision variables are (x1, x2) = (2, 6) and the optimal value is z = 36.

#### 2.3.3 The simplex method in tabular form

The tabular form performs the simplex method in algebraic form more compactly.

Insert here: TABLE 4.3 in H-L.

The logic and operations behind the simplex method in tabular form are exactly the same as those in algebraic form. It omits many computation details and presents the direct conditions for each step.

Initialization.

- Introduce the slack variables to transform the LP in standard form into one in canonical form.
- Set the original n decision variables to be the nonbasic variables, and the m slack variables to be the basic variables.
- The nonbasic variables are set equal to zero and the values of the basic variables are read directly from the tableau (in proper form from Gaussian elimination), i.e. each basic variable is equal to the RHS entry of the corresponding constraint equation.
- The initial BF solution is (x1, ..., xn, s1, ..., sm) = (0, ..., 0, b1, ..., bm).
- Write the objective function in the form "z − c1 x1 − c2 x2 − · · · − cn xn = 0", denoted as Eq. (0), and put this equation together with the constraint equations.

Optimality Test. The current BF solution is optimal if and only if the coefficient of each nonbasic variable in Eq. (0) is nonnegative (≥ 0).

∗ Stop if it is;
∗∗ Otherwise, proceed to an iteration to obtain a new BF solution, which includes the following three steps.

For the example: ...

Iteration. Step I. Identify the entering basic variable: among the nonbasic variables with a strictly negative coefficient in Eq. (0), pick the one whose coefficient has the largest absolute value. The column of the entering basic variable in the simplex tableau is referred to as the pivot column.

For the example: ...

Step II. Determine the leaving basic variable by applying the minimum ratio test:

i. Pick out the coefficients in the pivot column that are strictly positive;
ii. Divide the RHS entry of each such row by its coefficient;
iii. Identify the row with the smallest ratio;
iv. The basic variable corresponding to this row is identified as the leaving basic variable. Replace this variable with the entering basic variable in the basic variable column of the next simplex tableau.

The row with the least ratio is the pivot row, and the entry at the intersection of the pivot row and the pivot column is called the pivot number.

For the example: ...

Step III. Solve for the new BF solution. Use the elementary row operations (multiply, add, subtract...) to construct a new simplex tableau in proper form from Gaussian elimination: basically, the pivot column including Eq. (0) has to become (0, ..., 1, ..., 0), where the only 1 appears in the entry of the pivot number.
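Step III amounts to a single pivot: scale the pivot row, then clear the pivot column from every other row. A sketch on the first tableau of the example (the `pivot` helper is illustrative):

```python
import numpy as np

def pivot(T, row, col):
    """Elementary row operations of Step III on tableau T (Eq.(0) is row 0):
    scale the pivot row, then clear the pivot column elsewhere."""
    T = T.astype(float).copy()
    T[row] /= T[row, col]
    for k in range(T.shape[0]):
        if k != row:
            T[k] -= T[k, col] * T[row]
    return T

# Initial tableau of the example: rows (0)-(3), columns x1, x2, s1, s2, s3, RHS.
T = np.array([[-3, -5, 0, 0, 0,  0],
              [ 1,  0, 1, 0, 0,  4],
              [ 0,  2, 0, 1, 0, 12],
              [ 3,  2, 0, 0, 1, 18]])
T2 = pivot(T, row=2, col=1)  # x2 enters; pivot row (2), pivot number 2
print(T2)                    # reproduces the system after Iteration 1
```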

For the example: ...

Read the new BF solution, and re-do the Optimality Test and a new Iteration if needed...

For the example: ...

At some point, optimality is satisfied and we reach the conclusion.