LP (2003) 1

OPERATIONS RESEARCH: 343

1. LINEAR PROGRAMMING 2. INTEGER PROGRAMMING 3. GAMES

Books: (i) Intro. to OR (F. Hillier & J. Lieberman); (ii) OR (H. Taha); (iii) Intro. to Mathematical Prog. (F. Hillier & J. Lieberman); (iv) Intro. to OR (J. Eckert & M. Kupferschmid).


LINEAR PROGRAMMING (LP)

LP is an optimal decision-making tool in which the objective is a linear function and the constraints on the decision problem are linear equalities and inequalities. It is a very popular decision-support tool: in a survey of Fortune 500 firms, 85% of the responding firms said that they had used LP.

Example 1:
Manufacturer produces:                    A (acid) and C (caustic)
Ingredients used in producing A & C:      X and Y
Each ton of A requires:                   2 lb of X; 1 lb of Y
Each ton of C requires:                   1 lb of X; 3 lb of Y
Supply of X is limited to:                11 lb/week
Supply of Y is limited to:                18 lb/week
1 ton of A sells for:                     £1000
1 ton of C sells for:                     £1000

The manufacturer wishes to maximize the weekly value of sales of A & C. Market research indicates that no more than 4 tons of acid can be sold each week. The problem is to decide how much A & C to produce. The answer is a pair of numbers: x1 (weekly production of A) and x2 (weekly production of C). There are many pairs (x1, x2): (0,0), (1,1), (3,5), .... Not all pairs (x1, x2) are possible weekly productions (e.g. x1 = 27, x2 = 2 is not possible: (27, 2) is not a feasible set of production figures). The constraints on x1, x2 ensuring that (x1, x2) represents a possible set of production figures are as follows. The amount of each product produced is non-negative:

x1 ≥ 0,  x2 ≥ 0

The amount of ingredient X required to produce x1 tons of A and x2 tons of C is 2x1 + x2. As X is limited to 11 lb/week:

2x1 + x2 ≤ 11

The amount of ingredient Y required, combined with its supply restriction, gives:

x1 + 3x2 ≤ 18

We cannot sell more than 4 tons of A per week:

x1 ≤ 4

A possible set of production figures satisfies these constraints. Conversely, any (x1, x2) satisfying these constraints is a possible set of production figures: see FIGURE 1.

THE FEASIBLE REGION is the intersection of the shaded regions: see FIGURE 2. The feasible region (OPQRS) represents all pairs (x1, x2) that satisfy the constraints. The corners (vertices) O, P, Q, R, S have a special significance [O = (0,0), P = (0,6), Q = (3,5), R = (4,3), S = (4,0)].
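The claims about this region are easy to check numerically. A minimal sketch (it assumes SciPy is available; `linprog` minimizes, so the sales value x1 + x2 is negated):

```python
# Example 1 as an LP: maximize x1 + x2 over the region OPQRS.
# scipy's linprog minimizes, so we minimize -(x1 + x2) instead.
from scipy.optimize import linprog

c = [-1, -1]
A_ub = [[2, 1],    # 2 x1 +   x2 <= 11   (ingredient X)
        [1, 3],    #   x1 + 3 x2 <= 18   (ingredient Y)
        [1, 0]]    #   x1        <= 4    (market limit on A)
b_ub = [11, 18, 4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)   # the optimal vertex and the sales value (in £1000s)
```

The solver lands on the vertex Q = (3, 5) with value 8, in agreement with the enumeration of the vertices O, P, Q, R, S.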


Associated with each feasible (x1, x2) is a sales value of £1000 × (x1 + x2). Since we wish to maximize this amount, our problem is:

Maximize     x1 +  x2          ← objective function
Subject to  2x1 +  x2 ≤ 11     ← constraint
             x1 + 3x2 ≤ 18     ← constraint
             x1       ≤ 4      ← constraint
             x1, x2   ≥ 0      ← constraints

This is called a LINEAR PROGRAM (LP): a problem of optimizing (maximizing or minimizing) a linear function subject to linear constraints. (Linear: no powers, exponentials or product terms.)

PROPERTY (*): Observe that the set {O, P, Q, R, S} contains an optimal solution to our LP. Evaluate the objective function x1 + x2 at these points: 0, 6, 8, 7, 4 ⇒ Q = (3,5), i.e. x1 = 3, x2 = 5, is the optimal solution: see FIGURE 3. Note: the feasible region (i.e. the area described by the polygon OPQRS) lies entirely within the half-plane x1 + x2 ≤ 8. Since 5 + 3 = 8, no feasible point has a higher objective value than that of Q.

Property (*) holds if we replace x1 + x2 by any linear function c1 x1 + c2 x2, e.g. to minimize 3x1 − x2 over points in the polyhedron OPQRS we take the smallest of 0, −6, 4, 9, 12 and find that P: x1 = 0, x2 = 6 is an optimal solution. The SIMPLEX ALGORITHM, to be described later, is an efficient method for finding an optimal vertex without necessarily examining all of them. Property (*) does not imply that points other than vertices cannot be optimal: e.g. if we want to maximize 2x1 + x2, then any point on the segment QR is optimal.

2. STANDARD LP FORM

Any LP can be transformed into STANDARD FORM:

minimise

x0 = c1 x1 + c2 x2 + ... + cn xn

subject to

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
  ã                              ã
am1 x1 + am2 x2 + ... + amn xn = bm

and

x1 ≥ 0,  x2 ≥ 0,  ...,  xn ≥ 0

bi, cj, aij: fixed real constants; xi, i = 0, ..., n: real numbers, to be determined. We assume that bi ≥ 0 (each equation may be multiplied by −1 to achieve this).

Compact Notation

minimise    x0 = c^T x
subject to  A x = b   and   x ≥ 0,

where

A = [ a11  a12  ...  a1n ]      b = [ b1 ]     x = [ x1 ]     c = [ c1 ]
    [ a21  a22  ...  a2n ]          [ b2 ]         [ x2 ]         [ c2 ]
    [  ã    ã    ä    ã  ]          [  ã ]         [  ã ]         [  ã ]
    [ am1  am2  ...  amn ]          [ bm ]         [ xn ]         [ cn ]

c^T = [c1, ..., cn]

Example 2: Slack variables

min         x0 = c1 x1 + c2 x2 + ... + cn xn
subject to  a11 x1 + a12 x2 + ... + a1n xn ≤ b1
            a21 x1 + a22 x2 + ... + a2n xn ≤ b2
              ã                              ã
            am1 x1 + am2 x2 + ... + amn xn ≤ bm
and         x1 ≥ 0,  x2 ≥ 0,  ...,  xn ≥ 0

becomes

min         x0 = c1 x1 + c2 x2 + ... + cn xn
subject to  a11 x1 + a12 x2 + ... + a1n xn + y1           = b1
            a21 x1 + a22 x2 + ... + a2n xn      + y2      = b2
              ã                                    ä        ã
            am1 x1 + am2 x2 + ... + amn xn           + ym = bm
and         x1 ≥ 0, ..., xn ≥ 0,  y1 ≥ 0, y2 ≥ 0, ..., ym ≥ 0.

Total variables: n + m. Slack variables: y1, y2, ..., ym. The constraint matrix is the m × (m + n) matrix [A | I]:

[A | I] [ x ] = b
        [ y ]

Example 3: Surplus variables

If the inequalities of Example 2 were reversed, so that the typical inequality becomes

ai1 x1 + ai2 x2 + ... + ain xn ≥ bi

⇒   ai1 x1 + ai2 x2 + ... + ain xn − yi = bi ;   yi ≥ 0   (yi: surplus variable)

By suitably multiplying by (−1) and adjoining slack and surplus variables, any set of linear inequalities can be converted to standard form, provided the unknown variables are restricted to be non-negative. For instance,

max  4x1 − 3x2 + 2x3
subject to  3x1 + 2x2 −  x3 = 4
             x1 +  x2 + 2x3 ≤ 5
            −x1 + 2x2 −  x3 ≥ 2
            x1, x2, x3 ≥ 0

becomes

min  −4x1 + 3x2 − 2x3
subject to  3x1 + 2x2 −  x3           = 4
             x1 +  x2 + 2x3 + x4      = 5
            −x1 + 2x2 −  x3      − x5 = 2
            x1, x2, x3, x4, x5 ≥ 0
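The conversion just carried out is mechanical, so it can be scripted. A sketch (NumPy assumed; `to_standard_form` is our own name, not notation from the notes) that negates a max objective and appends one slack or negated-surplus column per inequality row:

```python
import numpy as np

def to_standard_form(c_max, A, b, senses):
    """Convert max c'x s.t. A x (<=, =, >=) b, x >= 0 into
    standard form: min c_std' x_std s.t. A_std x_std = b, x_std >= 0."""
    m, n = A.shape
    extra = [i for i, s in enumerate(senses) if s != "="]
    A_std = np.hstack([A, np.zeros((m, len(extra)))])
    for col, i in enumerate(extra, start=n):
        A_std[i, col] = 1.0 if senses[i] == "<=" else -1.0  # slack / surplus
    c_std = np.concatenate([-np.asarray(c_max, float), np.zeros(len(extra))])
    return c_std, A_std, np.asarray(b, float)

# The max problem of Example 3:
c_std, A_std, b = to_standard_form(
    [4, -3, 2],
    np.array([[3, 2, -1], [1, 1, 2], [-1, 2, -1]], float),
    [4, 5, 2],
    ["=", "<=", ">="])
print(c_std)
print(A_std)
```

Running it on the problem above reproduces the standard-form system shown, with c_std = (−4, 3, −2, 0, 0).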

Example 4: Free variables (I)
Suppose the restriction x1 ≥ 0 is not present: x1 may take positive or negative values. Substitute x1 = v1 − u1 with v1, u1 ≥ 0. The problem now has (n+1) variables: v1, u1, x2, ..., xn.

Example 5: Free variables (II)
Eliminate x1 using one of the constraint equations.

min         x1 + 3x2 + 4x3
subject to   x1 + 2x2 + x3 = 5
            2x1 + 3x2 + x3 = 6
            x2, x3 ≥ 0

As x1 is free, solve for it using the first constraint: x1 = 5 − 2x2 − x3. Substituting this into the objective function and the remaining constraint (and dropping the resulting constant 5 from the objective) gives

min { x2 + 3x3 | x2 + x3 = 4,  x2, x3 ≥ 0 }

3. EXAMPLES OF LP PROBLEMS

Example 6: The diet problem
To determine the most economical diet that satisfies the basic nutritional requirements for good health. There are n different foods: the ith sells at price ci per unit. There are m basic nutritional ingredients: for a healthy diet, the daily intake of an individual must include at least bj units of the jth ingredient. Each unit of food i contains aji units of the jth ingredient. Let xi be the number of units of food i in the diet. Minimise the total cost

x0 = c1 x1 + c2 x2 + ... + cn xn

subject to the nutritional requirements

a11 x1 + a12 x2 + ... + a1n xn ≥ b1
  ã                              ã
am1 x1 + am2 x2 + ... + amn xn ≥ bm

and the non-negativity of the food quantities

x1 ≥ 0,  x2 ≥ 0,  ...,  xn ≥ 0

Example 7: The transportation problem
Quantities a1, a2, ..., am of a product are to be shipped from each of m locations, and are demanded in amounts b1, b2, ..., bn at each of n destinations. cij: unit cost of transporting the product from origin i to destination j. xij: the amount to be shipped from i to j (i = 1, ..., m; j = 1, ..., n). Determine the xij so as to satisfy the shipping requirements and minimise the total cost of transportation:

minimise    Σ(i,j) cij xij

subject to  Σ(j=1..n) xij = ai   (total shipped from ith origin; i = 1, ..., m)
            Σ(i=1..m) xij = bj   (total required by jth destination; j = 1, ..., n)
            xij ≥ 0 ;  i = 1, ..., m ;  j = 1, ..., n

(For consistency we must also have Σ(i=1..m) ai = Σ(j=1..n) bj.)
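Once the double index ij is flattened to a single index, the transportation problem drops straight into a generic LP solver. A hedged sketch (SciPy assumed) on a tiny 2 × 2 instance of our own invention:

```python
import numpy as np
from scipy.optimize import linprog

a = [1, 1]                    # supplies at the m = 2 origins
bdem = [1, 1]                 # demands at the n = 2 destinations (sum a = sum b)
C = np.array([[1.0, 10.0],
              [10.0, 1.0]])   # C[i, j]: unit shipping cost from i to j
m, n = C.shape

# x is the flattened vector (x11, x12, x21, x22); build the equality rows.
A_eq = np.zeros((m + n, m * n))
for i in range(m):                       # sum over j of x[i, j] = a[i]
    A_eq[i, i * n:(i + 1) * n] = 1
for j in range(n):                       # sum over i of x[i, j] = b[j]
    A_eq[m + j, j::n] = 1
b_eq = a + bdem

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)      # ships on the cheap diagonal, cost 2
```

The optimum ships everything along the cheap diagonal for a total cost of 2. Note that one of the m + n equalities is redundant when Σ ai = Σ bj, which the solver tolerates.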

4. BASIC SOLUTIONS

To compute a basic solution, consider the system of equalities

A x = b ;   x ∈ R^n ;   b ∈ R^m ;   A ∈ R^(m×n)

Select from the n columns of A a set of m linearly independent columns (such a set exists if rank(A) = m). For simplicity, assume that we select the first m columns of A, and denote the m × m matrix determined by these columns by B ∈ R^(m×m). B is nonsingular, and we may uniquely solve

B xB = b ;   xB ∈ R^m

Set x = [xB ; 0], i.e. the first m components of x are equal to those of xB and the rest are equal to zero. We thus obtain a solution to A x = b.

Definition: Given A x = b, let B be any nonsingular m × m matrix made up of columns of A. If all n − m components of x not associated with the columns of B are set to zero, the solution to the resulting set of equations is said to be a basic solution (BS) to A x = b, w.r.t. the basis B. The components of x associated with the columns of B are basic variables (BV). B is called a basis since it consists of m linearly independent columns that can be regarded as a basis for R^m.

There may not exist a basic solution to A x = b. To ensure existence we have to assume:

Full rank assumption: the m × n matrix A has m ≤ n and the m rows of A are linearly independent.

Linear dependence among the rows of A leads either to contradictory constraints (there is no solution to A x = b, e.g. x1 + x2 = 1, x1 + x2 = 2) or to a redundancy that can be eliminated (e.g. x1 + x2 = 1, 2x1 + 2x2 = 2). Under the full rank assumption, A x = b always has at least one basic solution.

Basic variables in a basic solution are not necessarily nonzero:

Definition: If one or more BV in a BS has zero value, the solution is a degenerate BS. There is an ambiguity in a degenerate BS, since the zero-valued basic and non-basic variables can be interchanged.

Definition: x satisfying A x = b and x ≥ 0 is said to be feasible. A feasible solution that is also basic is a basic feasible solution (BFS). If this solution is degenerate, it is called a degenerate BFS.

Example 8: After adding slack variables to the problem of Example 1, we obtain the following equations, which `happen' to form an initial basic representation:

x0 −  x1 −  x2                = 0        (x0 = c^T x)
     2x1 +  x2 + x3           = 11
      x1 + 3x2      + x4      = 18       (1)
      x1                 + x5 = 4

In a basic representation the variables (i.e. the elements of the vector x) are divided into basic variables and non-basic variables. In the system of equations (1) above, the basic variables are {x0, x3, x4, x5} and the non-basic variables are {x1, x2}. Each equation in (1) expresses a particular basic variable as a linear expression in the non-basic variables.

The basic solution of this representation is obtained by setting xj = 0 for each non-basic variable and then solving the equations for the remaining BVs: set x1 = x2 = 0 ⇒ x0 = 0, x3 = 11, x4 = 18, x5 = 4 ⇒ BS: (x0, x1, x2, x3, x4, x5) = (0, 0, 0, 11, 18, 4), which is a BFS.

Looking for a better solution than this, we search for a non-basic variable xj such that increasing xj (from 0) improves x0. From

x0 = x1 + x2

⇒ we can increase either x1 or x2 (increasing both is too complicated). Consider the solutions obtained by increasing x1 to λ and leaving x2 = 0. In order to satisfy (1) and stay feasible we must ensure that

x0 = λ
x3 = 11 − 2λ ≥ 0   ⇒   λ ≤ 11/2
x4 = 18 −  λ ≥ 0   ⇒   λ ≤ 18        (2)
x5 =  4 −  λ ≥ 0   ⇒   λ ≤ 4

We want the best (largest) λ satisfying (2). As λ takes values between 0 and 4, the solution defined by (2) has x1 = λ, x2 = 0, which corresponds to a point on OS in FIGURE 2. The solution given by λ = 4 is

(x0, x1, x2, x3, x4, x5) = (4, 4, 0, 3, 14, 0)    (point S in FIGURE 2)

This is also a BFS to (1). BV: {x0, x1, x3, x4} and NBV: {x2, x5}. Note: in a basic solution the non-basic variables are zero. We need the basic representation, i.e. we need to transform (1) so that x0, x1, x3, x4 are expressed in terms of x2, x5. We do this by pivoting (to be discussed later) to get

x0 −  x2           +  x5 = 4
      x2 + x3      − 2x5 = 3
     3x2      + x4 −  x5 = 14        (3)
x1                 +  x5 = 4

A solution to (3) is a solution to (1), and conversely.
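Each of these basic solutions can also be recovered directly from the definition of Section 4: fix the basis columns and solve B xB = b. A sketch (NumPy assumed) recovering the BFS at the point S from the constraint rows of (1):

```python
import numpy as np

# Constraint rows of (1); columns correspond to (x1, x2, x3, x4, x5).
A = np.array([[2.0, 1, 1, 0, 0],
              [1, 3, 0, 1, 0],
              [1, 0, 0, 0, 1]])
b = np.array([11.0, 18, 4])

I = [0, 2, 3]                  # 0-based columns for the basis {x1, x3, x4}
B = A[:, I]
xB = np.linalg.solve(B, b)     # solve B xB = b
x = np.zeros(A.shape[1])
x[I] = xB
print(x)                       # (4, 0, 3, 14, 0): the BFS at the point S
```

Since the result is non-negative it is a BFS: x1 = 4 with slacks x3 = 3, x4 = 14, matching (x0, x1, ..., x5) = (4, 4, 0, 3, 14, 0) above.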

From x0 = 4 + x2 − x5 we see that to increase x0 we should increase x2 (and keep x5 = 0). Set x2 = λ, x5 = 0. To satisfy (3) and stay feasible, the other variables must satisfy

x0 =  4 +  λ
x3 =  3 −  λ ≥ 0   ⇒   λ ≤ 3
x4 = 14 − 3λ ≥ 0   ⇒   λ ≤ 14/3
x1 =  4

The best value is λ = 3. As λ ∈ [0, 3], the solution has x1 = 4, x2 = λ, which corresponds to a point on SR in FIGURE 2. The solution given by λ = 3 is

(x0, x1, x2, x3, x4, x5) = (7, 4, 3, 0, 5, 0)    (point R in FIGURE 2)

This is also a BFS to (1). BV: {x0, x1, x2, x4}; NBV: {x3, x5}. The basic representation:

x0 +  x3           −  x5 = 7
x2 +  x3           − 2x5 = 3
   − 3x3      + x4 + 5x5 = 5         (4)
x1                 +  x5 = 4

A solution to (4) is a solution to (1), and conversely. From x0 = 7 − x3 + x5, to increase x0 we should increase x5. Set x5 = λ, x3 = 0; to satisfy (4) and stay feasible, the other variables must satisfy

x0 = 7 +  λ
x2 = 3 + 2λ ≥ 0   ⇒   λ ≤ ∞
x4 = 5 − 5λ ≥ 0   ⇒   λ ≤ 1
x1 = 4 −  λ ≥ 0   ⇒   λ ≤ 4

⇒ λ = 1. As λ takes values from 0 to 1, the solution has x1 = 4 − λ, x2 = 3 + 2λ, corresponding to a point on RQ in FIGURE 2. The solution given by λ = 1 is

(x0, x1, x2, x3, x4, x5) = (8, 3, 5, 0, 0, 1)    (point Q in FIGURE 2)

This is also a BFS to (1). BV: {x0, x1, x2, x5}; NBV: {x3, x4}. Basic representation:

x0 + (2/5)x3 + (1/5)x4      = 8
x2 − (1/5)x3 + (2/5)x4      = 5
   − (3/5)x3 + (1/5)x4 + x5 = 1      (5)
x1 + (3/5)x3 − (1/5)x4      = 3

A solution of (5) is a solution of (1), and conversely. Thus any solution to (1) satisfies

x0 = 8 − (2/5)x3 − (1/5)x4

Any feasible solution has x3, x4 ≥ 0 and hence, by (5), x0 ≤ 8. Since (8, 3, 5, 0, 0, 1) has x0 = 8, this solution is maximal.

SUMMARY
1. Among the feasible solutions to min { x0 = c^T x | A x = b, x ≥ 0 } there is an important finite subset: the BFSs.
2. Each BFS is associated with a basic representation: a set of equations equivalent to min { x0 = c^T x | A x = b, x ≥ 0 } that expresses each BV in terms of the NBVs.
3. By looking at a basic representation we can see whether increasing any NBV will improve the objective. If there is one, we can increase it until a new, better BFS is reached (the usual case). If there is no such NBV, we have the optimal solution.

BASIC FEASIBLE SOLUTIONS
Let Sn = {1, ..., n} and let I ⊆ Sn have m elements. The set of basic variables is {xi | i ∈ I} ∪ {x0}. Let aj denote the column of A corresponding to xj, j ∈ Sn. Associated with I is an m × m matrix B = B(I) whose columns are {ai | i ∈ I}.

Example 9: If

A = [ 2  4  3  3  1  0 ]
    [ 3  3  4  2  0  1 ]
    [ 1  2  1  2  0  0 ]

then

I = {1, 5, 2}   ⇒   B = [ 2  1  4 ]
                        [ 3  0  3 ]
                        [ 1  0  2 ]

I = {6, 3, 4}   ⇒   B = [ 0  3  3 ]
                        [ 1  4  2 ]
                        [ 0  1  2 ]

The remaining columns aj, j ∉ I, form the matrix N, and so (after shuffling the columns of A) we may assume

A = [B | N]
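Forming B(I) is just column selection (NumPy assumed; mind the 1-based indices of the notes against 0-based arrays):

```python
import numpy as np

A = np.array([[2, 4, 3, 3, 1, 0],
              [3, 3, 4, 2, 0, 1],
              [1, 2, 1, 2, 0, 0]])

for I in ([1, 5, 2], [6, 3, 4]):            # the index sets of Example 9
    B = A[:, [i - 1 for i in I]]            # columns a_i for i in I, in order
    print(I, B.tolist(), np.linalg.det(B))  # both bases are nonsingular here
```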

We then conformably partition c and x into (cB, cN) and (xB, xN) respectively, i.e.

cB = (ci | i ∈ I) ;   cN = (cj | j ∉ I) ;   xB = (xi | i ∈ I) ;   xN = (xj | j ∉ I)

Example 10: n = 6, I = {5, 3, 2}:

cB = [c5, c3, c2] ;   cN = [c1, c4, c6] ;   xB = [x5, x3, x2] ;   xN = [x1, x4, x6]

Given this partition,

min { x0 = cB^T xB + cN^T xN | B xB + N xN = b ;  xB, xN ≥ 0 }       (6,a)

x0 − cB^T xB − cN^T xN = 0                                           (6,b)

B xB + N xN = b                                                      (6,c)

As B is assumed to be nonsingular (i.e. B^-1 exists), a solution to (6,c) satisfies

xB + B^-1 N xN = B^-1 b                                              (7,a)

and conversely. Using (7,a) to eliminate xB from (6,a) yields

x0 = cB^T B^-1 b + (cN^T − cB^T B^-1 N) xN                           (7,b)

Note that (7) expresses the BVs (x0, xB) in terms of the NBVs xN. The vector

r^T = cN^T − cB^T B^-1 N                                             (8)

is the relative (or reduced) cost vector (for the NBVs). It is the components of r that determine which vector can be brought into the basis.

Example 11:

c = (6, 3, 4, 2, −3, 4, 0, 0)^T ;    A = [ 2  −1  3  2  3  2  1  0 ] ;    b = [ 4 ]
                                         [ 3   4  2  2  3  0  0  1 ]          [ 2 ]

I = {4, 3}   ⇒   B = [ 2  3 ] ;   det B = −2 ≠ 0 ;   B^-1 = [ −1  3/2 ]
                     [ 2  2 ]                               [  1   −1 ]

(7,a) ⇒   (5/2)x1 + 7x2 + x4 + (3/2)x5 − 2x6 − x7 + (3/2)x8 = −1
             −x1 − 5x2 + x3            + 2x6 + x7 −      x8 =  2

(7,b) ⇒   x0 = 6 + 5x1 + 9x2 − 6x5 − 2x7 + x8

The Importance of BFSs
It is necessary to consider only BFSs when seeking an optimal solution to an LP, because the optimal value is always achieved at such a solution.
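Formula (8) amounts to two linear solves. A sketch (NumPy assumed) evaluating r for the Example 8 system (minimizing x0 = −x1 − x2 with slacks x3, x4, x5) at the basis I = {1, 2, 5}:

```python
import numpy as np

A = np.array([[2.0, 1, 1, 0, 0],
              [1, 3, 0, 1, 0],
              [1, 0, 0, 0, 1]])
c = np.array([-1.0, -1, 0, 0, 0])     # minimize -x1 - x2

basic, nonbasic = [0, 1, 4], [2, 3]   # I = {1, 2, 5} in the 1-based notation
B, N = A[:, basic], A[:, nonbasic]

y = np.linalg.solve(B.T, c[basic])    # y^T = cB^T B^-1 (simplex multipliers)
r = c[nonbasic] - N.T @ y             # (8): r^T = cN^T - cB^T B^-1 N
print(r)                              # [0.4 0.2] >= 0: this basis is optimal
```

These are exactly the coefficients 2/5 and 1/5 of x3, x4 in representation (5), and r ≥ 0 is the optimality condition formalized later as Theorem 2.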

Definition: Given an LP in standard form, a feasible solution to the constraints {A x = b; x ≥ 0} that achieves the minimum value of the objective function subject to those constraints is said to be an optimal feasible solution. If this solution is basic, then it is an optimal BFS.

Theorem 1 (Fundamental theorem of LP): Given an LP in standard form where A is an m × n matrix of rank m:
(i) if there is a feasible solution, then there is a BFS (see FIGURE 4);
(ii) if there is an optimal solution, then there is an optimal BFS (see FIGURE 5).

Theorem 1 reduces the task of solving an LP to that of searching over BFSs. Since for a problem having n variables and m constraints there are at most

C(n, m) = n! / (m! (n − m)!)

basic solutions (corresponding to the number of ways of selecting m of the n columns), there are only a finite number of possibilities. Thus Theorem 1 yields an obvious but terribly inefficient way of computing the optimum through a finite search technique.

Example 12: C(n, m) for a small problem: m = 30, n = 100 gives

C(100, 30) = 100! / (30! 70!) ≈ 2.9 × 10^25.
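The count, and the futility of brute-force enumeration, are quick to check in Python:

```python
import math

n_basic_sets = math.comb(100, 30)          # ways of choosing 30 of 100 columns
print(f"{float(n_basic_sets):.3e}")        # about 2.937e+25

# Even checking 10**6 index sets I per second:
years = n_basic_sets / 10**6 / (365.25 * 24 * 3600)
print(f"{years:.1e} years")                # on the order of 10**12 years
```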

Enumerating them all would take approximately 9 × 10^11 years, even assuming we could check 10^6 sets I per second.

The set of basic variables is {xi | i ∈ I} ∪ {x0}, where I ⊆ Sn has m elements. The set of non-basic variables is {xj | j ∉ I, j ≠ 0}. The BS corresponding to I is given by:
(i) xj = 0 for j ∉ I, j ≠ 0  ⇒  xN = 0 (in (7,a));
(ii) xB = B^-1 b − B^-1 N xN = B^-1 b.
This is a feasible solution iff B^-1 b ≥ 0, in which case it is a BFS.

Example 13: With A, b, c given as in Example 11 and I = {4, 3}, the BS (x0, x1, ..., x8) = (6, 0, 0, 2, −1, 0, 0, 0, 0) is not feasible.
Exercise: find some BFS for this example.

Note: The number of distinct BFSs to (1) is ≤ C(n, m) = the number of sets I ⊆ Sn with |I| = m ⇒ this number is finite. It is usually < C(n, m) because, for a given I, (i) B(I) may be singular, (ii) the basic solution may not be non-negative. Also, two (or more) distinct index sets I1, I2 can lead to the same BFS:

Example 14:
 x1 + x2 + 2x3 = 1         I1 = {1, 2} ,   I2 = {1, 3}
2x1 + x2 +  x3 = 2

Both I1 and I2 lead to the same BFS: (1, 0, 0).

Example 15: To demonstrate that we cannot simply state "an LP has an optimal BFS":

Infeasible ({FR} = ∅):   min { x0 = 2x1 + x2 | x1 + x2 ≤ 1 ;  x1 + x2 ≥ 2 ;  x1, x2 ≥ 0 }

Unbounded:   max { x0 = x1 | x1 + x2 ≥ 1 ;  x1, x2 ≥ 0 }

We can make x1 arbitrarily large, i.e. there is no maximum value for x0 and so no optimal solution. Note: the problem min x0 above has a solution x1 = 0, x2 = 1, and so "unbounded" refers to the objective value and not to the `size' of {FR} (which is unbounded).

5. THE SIMPLEX ALGORITHM

Convention (indexing the rows of a basic representation): a row is indexed by the basic variable in that row.

The simplex algorithm is based on the fact that if a BFS x is not optimal, then there is some neighbouring basic solution y with a better objective value. If we examine the sequence of BFSs found when solving Example 8, we see that the sequence of index sets I of basic variables (together with x0) was

{0, 3, 4, 5} → {0, 3, 4, 1} → {0, 2, 4, 1} → {0, 2, 5, 1}
     I1            I2             I3             I4

Notice that for t = 1, 2, 3, It+1 is obtained from It by removing one element and replacing it by a new element, i.e. |It+1 \ It| = |It \ It+1| = 1. If I, I' are such that B(I), B(I') are nonsingular, then I and I' are said to be neighbours if |I \ I'| = |I' \ I| = 1.

Pivots
At each stage of the simplex algorithm we have a basic representation and its BFS. We then use the reduced costs to see whether there is some neighbouring representation with a better solution. Should such a neighbour exist, we construct its representation by pivoting. Consider the system A x = b (A ∈ R^(m×n), m ≤ n):

a11 x1 + a12 x2 + ... + a1n xn = b1
  ã                              ã         (9)
am1 x1 + am2 x2 + ... + amn xn = bm

If the equations (9) are linearly independent, we may replace a given equation by a nonzero multiple of itself plus any linear combination of the other equations. This leads to the well-known Gaussian reduction schemes, whereby multiples of equations are systematically subtracted from one another to yield a canonical form. If the first m columns of A are linearly independent, the system (9) can, by a sequence of such multiplications and subtractions, be converted to the following canonical form:

x1           + y1,m+1 xm+1 + y1,m+2 xm+2 + ... + y1,n xn = y10
    x2       + y2,m+1 xm+1 + y2,m+2 xm+2 + ... + y2,n xn = y20
        ä          ã                                 ã      ã        (10)
          xm + ym,m+1 xm+1 + ym,m+2 xm+2 + ... + ym,n xn = ym0

Example 16:

x1 + 3x2 +  x3 = 1      (eq.1 − 3 × eq.2)      x1      − 8x3 = −8
      x2 + 3x3 = 3            →                     x2 + 3x3 =  3

According to this canonical representation: basic variables: x1 = y10, x2 = y20, ..., xm = ym0; non-basic variables: xm+1 = 0, xm+2 = 0, ..., xn = 0. We relax our definition and consider a system to be in canonical form if, among the n variables, there are m basic ones with the property that each appears in exactly one equation, its coefficient in that equation is unity, and no two of these m variables occur in any one equation. This is equivalent to saying that a system is in canonical

form if, by some reordering of the equations and variables, it takes the form (10). (10) is also represented by its corresponding coefficients, or tableau:

1   0   ...   0   y1,m+1   y1,m+2   ...   y1,n   y10
0   1   ...   0   y2,m+1   y2,m+2   ...   y2,n   y20
ã   ã         ã      ã        ã            ã      ã
0   0   ...   1   ym,m+1   ym,m+2   ...   ym,n   ym0

The question solved by pivoting is this: given a system in canonical form, suppose a non-basic variable is to be made basic and a basic variable is to be made non-basic. What is the new canonical form corresponding to the new set of basic variables? The procedure is quite simple. Suppose in (10) we wish to replace the basic variable xp, 1 ≤ p ≤ m, by the non-basic variable xq. This can be done iff ypq ≠ 0 in (10). It is accomplished by dividing row p by ypq to get a unit coefficient for xq in the pth equation, and then subtracting suitable multiples of row p from each of the other rows in order to get a zero coefficient for xq in all the other equations. This transforms the qth column of the tableau so that it is zero except for its pth entry, which is 1, and it does not affect the columns of the other basic variables. Denoting the coefficients of the new canonical form by y'ij:

y'ij = yij − (ypj / ypq) yiq ,   i ≠ p
                                            (11)
y'pj = ypj / ypq ,   j = 0, ..., n

(11) are the pivot equations of LP; ypq is the pivot element.

Example 17:

x1           +  x4 +  x5 −  x6 =  5
      x2     + 2x4 − 3x5 +  x6 =  3
           x3 −  x4 + 2x5 −  x6 = −1

Find the basic solution with basic variables x4, x5, x6.

 x1    x2    x3    x4    x5    x6  |  RHS
------------------------------------------
  1     0     0     1     1    -1  |   5
  0     1     0     2    -3     1  |   3
  0     0     1    -1     2    -1  |  -1
------------------------------------------
  1     0     0     1     1    -1  |   5     (after pivot on (1,4):
 -2     1     0     0    -5     3  |  -7      x4 replaces x1 as a BV)
  1     0     1     0     3    -2  |   4
------------------------------------------
 3/5   1/5    0     1     0   -2/5 | 18/5    (after pivot on (2,5):
 2/5  -1/5    0     0     1   -3/5 |  7/5     x5 replaces x2)
-1/5   3/5    1     0     0   -1/5 | -1/5
------------------------------------------
  1    -1    -2     1     0     0  |   4     (after pivot on (3,6):
  1    -2    -3     0     1     0  |   2      x6 replaces x3; new basic
  1    -3    -5     0     0     1  |   1      solution x4 = 4, x5 = 2, x6 = 1)
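The pivot equations (11) translate directly into a few lines of NumPy; the data below are the rows of Example 17, and the three pivots reproduce its final tableau:

```python
import numpy as np

def pivot(T, p, q):
    """Pivot (11) on element T[p, q]: scale row p so column q has a 1 there,
    then clear column q from every other row."""
    T = T.astype(float).copy()
    T[p] /= T[p, q]                    # y'_pj = y_pj / y_pq
    for i in range(T.shape[0]):
        if i != p:
            T[i] -= T[i, q] * T[p]     # y'_ij = y_ij - (y_pj / y_pq) y_iq
    return T

# Example 17 (columns x1..x6; the last column is the right-hand side):
T = np.array([[1, 0, 0, 1, 1, -1, 5],
              [0, 1, 0, 2, -3, 1, 3],
              [0, 0, 1, -1, 2, -1, -1]])
for p, q in [(0, 3), (1, 4), (2, 5)]:  # bring x4, x5, x6 into the basis
    T = pivot(T, p, q)
print(T[:, -1])                        # basic solution: x4, x5, x6 = 4, 2, 1
```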

Example 18: Using the data of Example 11 with I = {4, 3}, the basic representation is

x0 − 5x1 − 9x2 + 6x5 + 2x7 − x8 = 6
(5/2)x1 + 7x2 + x4 + (3/2)x5 − 2x6 − x7 + (3/2)x8 = −1
−x1 − 5x2 + x3 + 2x6 + x7 − x8 = 2

and pivoting on (4, 6) (the x4-row, the x6-column) yields

x0 − 5x1 − 9x2 + 6x5 + 2x7 − x8 = 6
−(5/4)x1 − (7/2)x2 − (1/2)x4 − (3/4)x5 + x6 + (1/2)x7 − (3/4)x8 = 1/2
(3/2)x1 + 2x2 + x3 + x4 + (3/2)x5 + (1/2)x8 = 1

which is the basic representation for I + 6 − 4 = {6, 3}. (Row 0 is unchanged because the coefficient of x6 there is zero.)

The simplex algorithm starts with a BFS and a basic representation, and proceeds by a sequence of pivots to find a BFS which is also optimal. For most problems, finding an initial BFS is not easy; this is discussed later. However, for those problems in which the constraints are

Σ(j=1..n) aij xj ≤ bi ,  i = 1, ..., m ;    xj ≥ 0 ,  j = 1, ..., n

with bi ≥ 0, i = 1, ..., m, it is straightforward. On adding slack variables xn+1, ..., xn+m, we find that

x0 − Σ(j=1..n) cj xj = 0
xn+i + Σ(j=1..n) aij xj = bi ,   i = 1, ..., m

is itself a basic representation with I = {n+1, ..., n+m}; and xj = 0, j = 1, ..., n (non-basic), xn+i = bi, i = 1, ..., m (basic) is feasible as long as bi ≥ 0.

We can now develop the simplex algorithm. Assume that we have some basic representation (7,a)-(7,b). Notice that, because of equivalence, the original problem (i.e. minimise x0 = c^T x subject to A x = b, x ≥ 0) and (7,a)-(7,b) have the same set of solutions. The goal of the simplex algorithm is to produce a basic representation whose basic solution is optimal. This is done by satisfying the conditions of Theorem 2.

Theorem 2 (optimality): If r = cN − N^T (B^-1)^T cB ≥ 0 (see (7,b)), then the associated basic feasible solution minimizes x0.

Proof: For the given basic solution (assumed feasible), xN = 0 and

x0 = cB^T B^-1 b + (cN^T − cB^T B^-1 N) xN = cB^T B^-1 b

For any other solution to (7,a)-(7,b) we have

x'0 = cB^T B^-1 b + r^T xN ,   with   r^T xN = rm+1 xm+1 + rm+2 xm+2 + ... + rn xn ≥ 0

since r ≥ 0 and xN ≥ 0. It then follows that

x'0 = cB^T B^-1 b + r^T xN ≥ cB^T B^-1 b = x0.   ∎

Suppose now that our basic representation does not satisfy the conditions of Theorem 2. In the simplex algorithm we try to choose a pivot so that the new basic representation is (a) feasible and (b) has x'0 < x0 (unfortunately, because of degeneracy, we can only guarantee x'0 ≤ x0; more on this later). The motivation for the pivot choice is given in two ways. We assume that one of the elements of r is negative. At this stage we introduce an alternative representation: using

x0 − (cN^T − cB^T B^-1 N) xN = cB^T B^-1 b

write

x0 + β^T x = β0 (= cB^T B^-1 b),   i.e.   x0 + Σ(i=1..n) βi xi = β0

where βi = 0 for all i ∈ I, and βi equals the corresponding element of (−r) for all i ∈ Sn − I (the non-basic variables). If the basic variables are x1, x2, ..., xm, then β1 = β2 = ... = βm = 0 and, corresponding to the non-basic variables xm+1, ..., xn, we have βm+1 = −rm+1, βm+2 = −rm+2, ..., βn = −rn. Thus assume that for some k ∈ Sn − I we have βk > 0. We first look at the current BFS and examine how increasing xk leads to a better solution.

Example 19:

x0 + 6x4 − 5x5 −  x6 = 26
x1 + 2x4 + 2x5 +  x6 = 7
x2 − 3x4 + 3x5 + 3x6 = 5
x3 + 3x4 +  x5 +  x6 = 6

Here β4 > 0. We consider increasing x4 while keeping x5 = x6 = 0. The values of the basic variables become

x0 = 26 − 6x4        (the larger x4, the smaller x0)
x1 =  7 − 2x4        (must ensure x1, x2, x3 ≥ 0)
x2 =  5 + 3x4
x3 =  6 − 3x4

The larger x4, the smaller x0 will become. We must, however, ensure that x1, x2, x3 remain non-negative. We can see that x2 actually increases and remains non-negative; however, if x4 > 7/2, x1 becomes negative. Thus the best (feasible) solution is

x4 = min { 7/2, 6/3 } = 2 ,   or   (x0, x1, x2, x3, x4, x5, x6) = (14, 3, 11, 0, 2, 0, 0)

Of key importance is the fact that this latter solution is also a basic solution: it is the one associated with the new basic representation obtained by a pivot on the coefficient 3 of x4 in the x3-row. Returning to the general case with βk > 0: if we put xk = λ > 0 and xj = 0 for j ∉ I ∪ {k}, then the value of each basic variable xi must satisfy xi = yi0 − yik λ, i ∈ I ∪ {0}, in order that (7,a)-(7,b) still hold. Now we have assumed that βk > 0, and so x0 decreases monotonically as λ increases. We thus increase xk as much as possible while ensuring that all variables except x0 remain non-negative in value. The

variables which are non-basic (currently), other than xk, remain at zero, so we have only to consider variables which are currently basic.

Rules (12):

If yik ≤ 0, then λ is unrestricted by that equation, for all λ ≥ 0 (here i ≠ 0): yi0 − yik λ ≥ yi0 ≥ 0 ⇒ xi ≥ 0 no matter how large λ becomes. (Example: x2 = 5 + 3x4 ≥ 0 for all x4 = λ ≥ 0.)

If yik > 0, then yi0 − yik λ ≥ 0 ⇔ λ ≤ yi0/yik ⇒ to ensure that all variables remain non-negative we need only ensure λ ≤ yi0/yik for all i ∈ I such that yik > 0.

As a consequence, we can show:

Theorem 3: If for some basic feasible representation βk > 0 and yik ≤ 0 for all i ∈ I, then the problem is unbounded (below), i.e. there is no minimum value for x0.

Proof: From (12) we see that we can make λ arbitrarily large and still have a feasible solution. The objective value of this solution is x0 = β0 − βk λ → −∞.   ∎

If there exists i ∈ I such that yik > 0, then the best solution is obtained by making λ as large as possible, i.e.

λ = min { yi0/yik | yik > 0, i ∈ I }      (the ratio test)

If θ denotes this value of λ, then the solution obtained is

(i) xk = θ ;   (ii) xi = yi0 − yik θ for all i ∈ I ∪ {0} ;   (iii) xj = 0 for all j ∉ I ∪ {k}      (13)

Suppose that θ = yj0/yjk. Then we see from the pivot formulae (11) that the solution given in (13) is the basic solution obtained after pivoting on (j, k). The pivot choice (j, k) above has the following characteristics:

βk > 0                                           (14,a)
yj0/yjk = min { yi0/yik | yik > 0 }              (14,b)

This choice of pivot can also be justified from the pivot formulae. Assume now that our current BFS is non-degenerate (i.e. yi0 > 0 for i ∈ I). We seek a pivot that produces a new BFS which is FEASIBLE and has

β'0 < β0                                         (15)

Theorem 4: Assuming non-degeneracy, (15) holds iff (14) holds.

Proof: For feasibility we must have
(i) y'j0 = yj0/yjk ≥ 0: since yj0 > 0 (from the BFS), (i) is true iff yjk > 0;
(ii) y'i0 = yi0 − yik (yj0/yjk) ≥ 0: (ii) holds trivially if yik ≤ 0, as then

y'i0 ≥ yi0 > 0. For yik > 0, (ii) holds iff

yi0 − yik (yj0/yjk) ≥ 0   ⇒   yj0/yjk ≤ yi0/yik

which justifies (14,b). To obtain

β'0 = β0 − βk (yj0/yjk) < β0

(this is an application of the pivot equations to the equation x0 + β^T x = β0; specifically, we are evaluating the new value of β0), we must have βk (yj0/yjk) > 0. Since yj0, yjk > 0, this is possible if and only if (14,a) holds.   ∎
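The exit criterion is worth isolating as a function; a sketch (`ratio_test` is our own name) implementing (13) together with the unboundedness check of Theorem 3:

```python
import numpy as np

def ratio_test(y0, yk):
    """theta = min{ y_i0 / y_ik : y_ik > 0 } and the leaving row,
    or (None, None) when every y_ik <= 0 (unbounded below, Theorem 3)."""
    pos = yk > 0
    if not np.any(pos):
        return None, None
    ratios = np.where(pos, y0 / np.where(pos, yk, 1.0), np.inf)
    j = int(np.argmin(ratios))
    return ratios[j], j

# The x4 column of Example 19: y_i0 = (7, 5, 6), y_ik = (2, -3, 3)
theta, j = ratio_test(np.array([7.0, 5.0, 6.0]), np.array([2.0, -3.0, 3.0]))
print(theta, j)   # theta = 2.0, leaving row 2 (the x3-row), as in the example
```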

We have so far considered only minimization problems. A maximization problem can be dealt with by noting that

max x0 = − min (−x0),

or by looking for positive rather than negative reduced costs in the simplex method. In general there will be more than one non-basic variable with βk > 0. One reasonable policy for choosing k is

βk = max over j of βj,

i.e. choose the variable which produces the greatest decrease in x0 per unit increase in the variable.

The Simplex Algorithm (for minimization problems)

Step 0: Find an initial basic feasible solution and construct its basic representation.
Step 1: If βk ≤ 0 for all k ∉ I, stop: the current basis is optimal. Else
Step 2: If there exists k such that βk > 0 and yik ≤ 0 for all i ∈ I, stop: there is no finite minimum. Else
Step 3: Choose xk such that βk > 0 (entry criterion): xk enters the basis.
Step 4: Let yj0/yjk = min { yi0/yik | yik > 0, i ∈ I } (exit criterion): xj leaves the basis.
Step 5: Pivot on yjk and go to Step 1.

Steps 1-5 define what is called a simplex iteration. Iterations can be effectively carried out using an (extended) simplex tableau:

Basic      |          Basic           Non-basic
Variables  |  x1   ...   xi   ...  |  xj   ...   xn  |  R.H.S.
-----------+------------------------+-----------------+--------
x0         |   0   ...    0   ...  |  βj   ...   βn  |   β0
ã          |   ã          ã        |   ã          ã  |    ã
xi         |   0   ...    1   ...  |  yij  ...  yin  |   yi0
ã          |   ã          ã        |   ã          ã  |    ã
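Steps 0-5 can be prototyped in a few dozen lines. A sketch (NumPy assumed; dense tableau, largest-βk entry rule, no anti-cycling safeguard, so it inherits the degeneracy caveat of Section 6). Row 0 carries (β | β0) exactly as in the tableau layout above, and the demonstration call uses the data of Example 20:

```python
import numpy as np

def simplex(c, A, b):
    """min c^T x  s.t.  A x <= b (b >= 0), x >= 0, by the tableau method."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))
    T[0, :n] = -np.asarray(c, float)     # row 0: beta_j = -c_j initially
    T[1:, :n] = A
    T[1:, n:n + m] = np.eye(m)           # slack columns: the all-slack basis
    T[1:, -1] = b
    basis = list(range(n, n + m))
    while True:
        k = int(np.argmax(T[0, :-1]))    # Step 3: most positive beta_k enters
        if T[0, k] <= 1e-9:              # Step 1: all beta <= 0 -> optimal
            x = np.zeros(n + m)
            x[basis] = T[1:, -1]
            return x[:n], T[0, -1]       # beta0 is the optimal x0
        col = T[1:, k]
        if np.all(col <= 1e-9):          # Step 2: no finite minimum
            raise ValueError("unbounded below")
        ratios = [T[1 + i, -1] / col[i] if col[i] > 1e-9 else np.inf
                  for i in range(m)]     # Step 4: the ratio test
        j = int(np.argmin(ratios))
        T[1 + j] /= T[1 + j, k]          # Step 5: pivot on y_jk
        for i in range(m + 1):
            if i != 1 + j:
                T[i] -= T[i, k] * T[1 + j]
        basis[j] = k

x, x0 = simplex([-4, -2, 1],
                np.array([[1.0, 1, 1], [1, -1, -2], [3, 2, 1]]),
                [4, 3, 12])
print(x, x0)   # (3.5, 0.5, 0) with x0 = -15, matching Example 20
```

Run on Example 1 in min form (c = (−1, −1)) it returns (3, 5) with value −8, i.e. the vertex Q again.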

Example 20:

Minimise    x0 = −4x1 − 2x2 + x3
Subject to   x1 +  x2 +  x3 ≤ 4
             x1 −  x2 − 2x3 ≤ 3
            3x1 + 2x2 +  x3 ≤ 12
            x1, x2, x3 ≥ 0

Adding slack variables x4, x5, x6, the constraints become

 x1 +  x2 +  x3 + x4           = 4
 x1 −  x2 − 2x3      + x5      = 3
3x1 + 2x2 +  x3           + x6 = 12
x1, ..., x6 ≥ 0

We thus have an initial (all-slack) basic feasible solution with basic variables x4, x5, x6.

Basic
Variables   x1     x2     x3     x4     x5     x6  |  RHS
----------------------------------------------------------
x0           4      2     -1      0      0      0  |    0
x4           1      1      1      1      0      0  |    4
x5           1*    -1     -2      0      1      0  |    3
x6           3      2      1      0      0      1  |   12
----------------------------------------------------------
x0           0      6      7      0     -4      0  |  -12
x4           0      2      3*     1     -1      0  |    1
x1           1     -1     -2      0      1      0  |    3
x6           0      5      7      0     -3      1  |    3
----------------------------------------------------------
x0           0     4/3     0    -7/3   -5/3     0  | -43/3
x3           0     2/3*    1     1/3   -1/3     0  |   1/3
x1           1     1/3     0     2/3    1/3     0  |  11/3
x6           0     1/3     0    -7/3   -2/3     1  |   2/3
----------------------------------------------------------
x0           0      0     -2     -3     -1      0  |  -15
x2           0      1     3/2    1/2   -1/2     0  |   1/2
x1           1      0    -1/2    1/2    1/2     0  |   7/2
x6           0      0    -1/2   -5/2   -1/2     1  |   1/2

(* marks the pivot element at each iteration.) In the final tableau all βj ≤ 0, so the optimal solution is x1 = 7/2, x2 = 1/2, x3 = 0 with x0 = −15.

Finiteness of the Algorithm
Theorem 5: If all basic solutions are non-degenerate then the simplex algorithm described above must terminate after a finite number of steps, either with an optimal solution or with proof that no finite minimum exists.
Proof: Since no basis is degenerate, yj0 > 0 at each step and hence the new objective value β0' satisfies β0' < β0 at each step (remember Theorem 4); i.e. the sequence of objective values obtained by the algorithm is strictly monotonically decreasing. Therefore no basic solution can be repeated. Since there is only a finite number of basic solutions, the process cannot continue indefinitely and so must terminate at Step 1 or Step 2 after a finite number of iterations. …
Theorem 6: In the absence of degeneracy, a necessary condition for a basis to be minimal is that βj ≤ 0 for all j ∉ I.

Proof (same as Theorem 2): If βk > 0 for some k ∉ I, then either there is no finite minimum or, by pivoting on the yjk defined in Step 4, we can strictly reduce the value of the objective function. …

6. DEGENERACY
We have discussed the simplex algorithm under assumptions of non-degeneracy. We say that a basic solution is degenerate if it has more than n - m zero-valued components.

Lemma: A basic solution x to the LP problem is associated with more than one index set iff it is degenerate (under a 'mild' assumption).
Proof: Suppose first that x can be obtained from I1 and I2, where I1 ≠ I2. Then xj = 0 for j ∈ (Sn - I1) ∪ (Sn - I2) = Sn - (I1 ∩ I2). As I1 ≠ I2, |I1 ∩ I2| < m and so x is degenerate. (Note, for example, this means yi0 = 0 for i ∈ I1 - I2 using the representation defined by I1.)
Suppose now that x is a degenerate basic solution. Let I be an index set which produces x and suppose that yj0 = 0 for some j ∈ I. 'Mild' assumption: for each j ∈ Sn there is a solution to LP with xj ≠ 0. Under this assumption there exists k ∉ I such that yjk ≠ 0, for otherwise the equation xj = 0 would be part of the representation. If we pivot on (j, k), the new basic solution is identical to the current one (substitute yj0 = 0 into (11)). Thus x can be produced from index set I and from I + k - j. …

How does degeneracy affect the simplex algorithm? We have seen that if pivot (j, k) satisfies yj0 = 0 then the new basic solution obtained is identical to the old one. In particular β0' = β0 and the proof of finite termination breaks down. Let us call a pivot (j, k) degenerate if yj0 = 0 and non-degenerate otherwise. A run of the simplex algorithm can now be decomposed into:

non-degenerate pivot → sequence of degenerate pivots → non-degenerate pivot → sequence of degenerate pivots → ...

Note that some or all of these sequences of degenerate pivots may be empty. Geometrically speaking, the current BFS remains unchanged throughout a sequence of degenerate pivots, and then a non-degenerate pivot 'moves us' to a neighbouring BFS. We know that the number of non-degenerate pivots is at most (n choose m), the number of possible bases. However, suppose that I1, I2, ..., Ik, ... denotes the sequence of basic index sets produced during some sequence of degenerate pivots, and suppose that I(k+j) = Ik for some j ≥ 3 (j cannot be 1 or 2). Then, assuming that a given index set determines a unique pivot, we will have Ik = I(k+j) = I(k+2j) = ..., I(k+1) = I(k+j+1) = I(k+2j+1) = ..., I(k+2) = I(k+j+2) = I(k+2j+2) = ..., and so the algorithm will cycle and never terminate.

Example 21:
min x0 = -(3/4)x4 + 20x5 - (1/2)x6 + 6x7
subject to
x1 + (1/4)x4 - 8x5 - x6 + 9x7 = 0
x2 + (1/2)x4 - 12x5 - (1/2)x6 + 3x7 = 0
x3 + x6 = 1
x1, ..., x7 ≥ 0.

We have the following sequence of tableaus, choosing βk = max_j (βj) and, in the exit test, the first row that minimizes the ratio yi0/yik over yik > 0. (Pivot elements are marked *.)

BV      x1      x2     x3     x4      x5     x6      x7     RHS
___________________________________________________________________
x0       0       0      0    3/4    -20     1/2     -6       0
x1       1       0      0    1/4*    -8     -1       9       0
x2       0       1      0    1/2    -12    -1/2      3       0
x3       0       0      1     0       0      1       0       1
___________________________________________________________________
x0      -3       0      0     0       4     7/2    -33       0
x4       4       0      0     1     -32     -4      36       0
x2      -2       1      0     0       4*    3/2    -15       0
x3       0       0      1     0       0      1       0       1
___________________________________________________________________
x0      -1      -1      0     0       0      2     -18       0
x4     -12       8      0     1       0      8*    -84       0
x5     -1/2     1/4     0     0       1     3/8    -15/4     0
x3       0       0      1     0       0      1       0       1
___________________________________________________________________
x0       2      -3      0   -1/4      0      0       3       0
x6     -3/2      1      0    1/8      0      1     -21/2     0
x5      1/16   -1/8     0   -3/64     1      0      3/16*    0
x3      3/2     -1      1   -1/8      0      0      21/2     1
___________________________________________________________________
x0       1      -1      0    1/2    -16      0       0       0
x6       2*     -6      0   -5/2     56      1       0       0
x7      1/3    -2/3     0   -1/4    16/3     0       1       0
x3      -2       6      1    5/2    -56      0       0       1
___________________________________________________________________
x0       0       2      0    7/4    -44    -1/2      0       0
x1       1      -3      0   -5/4     28     1/2      0       0
x7       0      1/3*    0    1/6     -4    -1/6      1       0
x3       0       0      1     0       0      1       0       1
___________________________________________________________________
x0       0       0      0    3/4    -20     1/2     -6       0
x1       1       0      0    1/4     -8     -1       9       0
x2       0       1      0    1/2    -12    -1/2      3       0
x3       0       0      1     0       0      1       0       1
___________________________________________________________________

The seventh tableau is identical to the first: after six pivots (all of them degenerate, every minimum ratio being 0) the basis {x1, x2, x3} recurs, the BFS x = (0, 0, 1, 0, 0, 0, 0) never changes, and the algorithm cycles forever.
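Both behaviours, cycling under the max-βj rule and finite termination under the smallest-index (Bland) rule discussed below, can be checked computationally. The sketch below is our own code, not from the notes; it runs a tableau simplex on Example 21 under either pivot rule and reports a cycle when a basis repeats:

```python
from fractions import Fraction as F

def run_simplex(rows, start_basis, entry_rule, max_iters=50):
    """Tableau simplex (min). Returns ('optimal', beta_0), ('cycled', None),
    or ('unbounded', None).  entry_rule 'dantzig' is the notes' max-beta_j
    policy; 'bland' takes the smallest index k with beta_k > 0.  The exit
    rule is the first row attaining the minimum ratio, as in the notes."""
    T = [[F(v) for v in r] for r in rows]
    basis, seen = list(start_basis), set()
    for _ in range(max_iters):
        key = frozenset(basis)
        if key in seen:
            return 'cycled', None              # a basis has repeated
        seen.add(key)
        cand = [j for j in range(len(T[0]) - 1) if T[0][j] > 0]
        if not cand:
            return 'optimal', T[0][-1]
        k = min(cand) if entry_rule == 'bland' else max(cand, key=lambda j: T[0][j])
        pos = [i for i in range(1, len(T)) if T[i][k] > 0]
        if not pos:
            return 'unbounded', None
        theta = min(T[i][-1] / T[i][k] for i in pos)
        r = min(i for i in pos if T[i][-1] / T[i][k] == theta)
        piv = T[r][k]
        T[r] = [v / piv for v in T[r]]
        for i in range(len(T)):
            if i != r and T[i][k] != 0:
                factor = T[i][k]
                T[i] = [a - factor * b for a, b in zip(T[i], T[r])]
        basis[r - 1] = k
    return 'iteration limit', None

# Example 21; the x0-row stores beta_j = -c_j.
beale = [
    [0, 0, 0, F(3, 4), -20, F(1, 2), -6, 0],
    [1, 0, 0, F(1, 4), -8, -1, 9, 0],
    [0, 1, 0, F(1, 2), -12, F(-1, 2), 3, 0],
    [0, 0, 1, 0, 0, 1, 0, 1],
]
print(run_simplex(beale, [0, 1, 2], 'dantzig'))   # ('cycled', None)
print(run_simplex(beale, [0, 1, 2], 'bland'))     # ('optimal', Fraction(-5, 4))
```

Under the max-βj rule the six degenerate pivots of Example 21 return to the basis {x1, x2, x3}; the smallest-index rule instead terminates at the optimum x0 = -5/4.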

We can avoid the possibility of cycling by tightening the pivot choice rule. There are several possibilities. We give one of the simplest, prove its validity, and then discuss whether in practice any such rule is necessary.

Bland's Rule
(i) Pivot column choice: k = min { j ≠ 0 | βj > 0 }.
(ii) Pivot row choice: let θ = min { yi0/yik | yik > 0 }, and j = min { i | yi0/yik = θ and yik > 0 }.

Theorem 7 (Degeneracy): With Bland's rule the simplex algorithm cannot cycle and hence is finite.

Degeneracy in Practice
Until recently, cycling only occurred in contrived examples (such as the one given above). It has therefore been the practice to ignore it in commercial codes. More recent experience with larger and larger problems indicates that cycling is now considered a real possibility. Rigorous methods such as Bland's rule are not satisfactory in practice, as they increase the number of (or the work per) iterations in the vast majority of problems, which would not cycle anyway. It has also been suggested that it is perfectly satisfactory to replace yi0 = 0 by yi0 = ε > 0 (ε = 10^-2 or 10^-3, say) and then continue.

7. SHADOW PRICES (SP)
Shadow prices are important accounting prices in decision making and in sensitivity analysis. Suppose that we have solved the problem min { x0 = c^T x | Ax = b, x ≥ 0 } and found an optimal basis matrix B:

xB = B^-1 b ≥ 0   (the basis is feasible)                        (16, a)
r = cN - N^T B^-T cB ≥ 0   (all reduced costs are non-negative)  (16, b)

The shadow prices π for this problem are defined by π = B^-T cB (or π^T = cB^T B^-1). (If there is more than one optimal basis there may be more than one set of shadow prices.) These 'prices' give information about the objective value if we alter the RHS of the constraints. Let p ∈ R^m denote a general RHS and define the perturbation function v(p): R^m → R by

v(p) = min { c^T x | Ax = p; x ≥ 0 }                             (17)

Thus, solving min { x0 = c^T x | Ax = b, x ≥ 0 } computes v(b). (To be rigorous, v: R^m → R ∪ {-∞, +∞}, using -∞ for unbounded problems and +∞ for infeasible problems.)

Theorem 8: If B^-1 p ≥ 0 then v(p) = v(b) + π^T (p - b).
Proof: If B^-1 p ≥ 0 then B is an optimal basis for (17), as (16, b) is not affected by changing b to p. Thus

v(p) = cB^T B^-1 p   (since v(p) = x0(p) = cB^T B^-1 p + r^T xN and xN = 0)
     = cB^T B^-1 b + cB^T B^-1 (p - b) = v(b) + π^T (p - b).

…

This is a local result, i.e. p must not differ substantially from b, so that B^-1 p ≥ 0 is maintained. We also have the following global result.
Theorem 9

v(p) ≥ v(b) + π^T (p - b)   for all p ∈ R^m.

Proof:
v(p) = min { c^T x - π^T (Ax - p) | x ≥ 0, Ax = p }
     ≥ min { c^T x - π^T (Ax - p) | x ≥ 0 }
     = min { (c^T - π^T A) x | x ≥ 0 } + π^T p
     ≥ π^T p,
since for any x ≥ 0, splitting x into basic and non-basic parts relative to B,

(c^T - π^T A) x = [cB^T ⋮ cN^T] [xB; xN] - cB^T B^-1 [B ⋮ N] [xB; xN]
                = cB^T xB + cN^T xN - cB^T xB - cB^T B^-1 N xN
                = (cN^T - cB^T B^-1 N) xN = r^T xN ≥ 0   (r ≥ 0, xN ≥ 0).

Finally,
π^T p = π^T b + π^T (p - b) = cB^T B^-1 b + π^T (p - b) = v(b) + π^T (p - b).

…

Why is π called the vector of shadow prices? Suppose b1, ..., bm represent demands for certain products and c1, ..., cn are the costs of certain activities which produce these products. Suppose there is an increase in demand of θ for product t, i.e. bt := bt + θ, and suppose that a small firm offers to produce the extra demand at price μt per unit. Should one accept, or decide to produce more oneself?

(Note: p = b + θ et, where et is the t-th unit vector, with a 1 in position t and 0 elsewhere.)

Accept the offer          ⇒  total production cost = v(b) + μt θ.
Produce the extra oneself ⇒  total production cost
                              = v(b) + πt θ,  if B^-1 (b + θ et) ≥ 0,
                              ≥ v(b) + πt θ,  in general.

Thus, if μt < πt one should definitely accept the offer. If μt > πt and B^-1 (b + θ et) ≥ 0, one should definitely reject the offer. In this case πt is the maximum price one should pay.

Maximization Problems
For maximization problems Theorem 8 is unchanged, and the inequality is reversed in the statement of Theorem 9.

Evaluation of Shadow Prices
In certain circumstances the shadow price for a particular row can be read off from the final tableau. Suppose that row t was initially a ≤ constraint and a slack variable xs was added. The objective-row coefficient βs for this variable in the final tableau is given by

βs = -cs + π^T as = 0 + π^T et = πt.

Therefore πt can be read off. If xs is the slack (excess) variable for a ≥ constraint then we get βs = -πt.

Example 22: Consider Example 20. Reading off the final top-row coefficients of the slack columns x4, x5, x6 we get

π = (-3, -1, 0)^T.

Note that in this case π ≤ 0, which makes sense: increasing the RHS of a ≤ constraint can only enlarge the feasible region of a minimization problem. If the right-hand sides increase to (4 + θ1, 3 + θ2, 12 + θ3), then the minimum obtainable objective value will decrease to

-15 + π1 θ1 + π2 θ2 + π3 θ3 = -15 - 3θ1 - θ2

(for 'small' positive θ1, θ2, θ3).

8. INITIAL BASIC FEASIBLE SOLUTION: The two-phase SIMPLEX
If no BFS is known for the problem, one can create one by adding artificial variables. (Previously, in Section 4, we constructed a feasible all-slack basis. Alternatively, one may know a BFS because a similar problem has been solved previously. We consider below the situation when neither is possible.) Suppose the constraints are

x1 + 2x2 + 3x3 ≥ 6
2x1 + x2 + x3 = 4
x1 + 2x2 + x3 ≤ 3
xi ≥ 0, i = 1, ..., 3.

After adding slack variables x4, x5 we add artificials a1, a2 to give

x1 + 2x2 + 3x3 - x4 + a1 = 6
2x1 + x2 + x3 + a2 = 4
x1 + 2x2 + x3 + x5 = 3
xi ≥ 0, i = 1, ..., 5;  a1, a2 ≥ 0.

In general, we first add slack variables to obtain equations and then ensure that the RHS values bi are non-negative by multiplying through by -1 where necessary. The aim next is to construct an enlarged system of equalities which is itself a basic representation. Some rows will contain slack variables, other rows will contain an artificial, and some rows will contain both an artificial and a slack variable. This produces a basic representation whose BFS consists of artificials and slacks (in rows which do not have an artificial). In general one needs an artificial variable for each equality constraint and one for each inequality with a ≥ sign and bi > 0. The basic solution constructed will be feasible as we have ensured non-negative RHS values. After adding slacks and artificials, we have the augmented system

x0 - c^T x = 0,   Ax + Im a = b                                  (18)

Clearly, a solution (x*, a*) to (18) gives a solution to min { x0 = c^T x | Ax = b, x ≥ 0 } iff a* = 0: any row whose artificial is positive is an 'infeasible' row for the original problem. We note that by construction a BFS to (18) is known. The problem of finding a BFS to min { x0 = c^T x | Ax = b, x ≥ 0 } has now been replaced by that of finding a BFS to (18) with a = 0. To do this we solve the linear programming problem

min w = a1 + a2 + ... (the sum over all artificials)
s.t. x0 - c^T x = 0,  Ax + Im a = b,  x, a ≥ 0.                  (19)

If a feasible solution to min { x0 = c^T x | Ax = b, x ≥ 0 } exists, then the minimum value of w is zero, with every ai = 0. We can apply the simplex method directly to (19) since by construction we have an initial BFS to the problem. If, having solved (19), we find a = 0, then the current values of x0, x1, ..., xn constitute a BFS to min { x0 = c^T x | Ax = b, x ≥ 0 }.

The infeasibility w can be expressed in terms of the initial non-basic variables in the following way. Suppose that the row containing artificial ai is

ai + Σ_{j=1}^{n} aij xj = bi.

Adding the infeasible rows we obtain

w + Σ_{i ∈ P} ( Σ_j aij xj ) = Σ_{i ∈ P} bi                      (20)

where P is the set of indices of infeasible rows (i.e. those with an artificial variable). (20) expresses w in terms of the non-basic xj, the coefficient of xj being given by the sum of the coefficients of xj over the infeasible rows. (Note that the basic xi, the slack variables for the rows that do not need an artificial, do not appear in the rows in which an artificial occurs. This is why (20) expresses w in terms of the non-basic xj only.)

Example 23:
max x0 = 3x1 + x2 + x3
subject to

x"  x#  x$ œ 10 2x"  x#

  2

x"  2x#  x$ Ÿ

6;

xi   0, i = 1, ..., 3.

adding slacks and artificials where necessary, the constraints become x!  3x"  x#  x$ x"  x#  x$ 2x"  x#

œ 0 œ 10

 0"  x%

x"  2x#  x$

 0#

œ 2

 x& œ 6

xi   0, i = 1, ..., 5 ; 01 , 02   0. Note that artificial columns are ignored after the corresponding variables is made non-basic. 0" 0# Basic Variables x" x# x$ x% x& RHS ___________________________________________________________________

LP (2003) 25

' 3 1  1 12  3  1 x! 1 0 0" 1 1 1 1 10 0# 2 1  1 1 2 x& 1  2 1 1 6 ___________________________________________________________________ " ' 1 "# 1 9 # " " x!  2# 1  1# 3 " 0" 1 "# 1 1 9 # " " x" 1  #  # 1 " x&  1 "# 1 1 5 # ___________________________________________________________________

' 0 ) # x!  18 $ $ # " x# 1 6 $ $ " " x" 1  4 $ $ x& 2 1 1 14 ___________________________________________________________________ # )# x! 4 $ $ %" x# 1  "$ $ " #' x" 1 1 $ $ x% 2 1 1 14 Description of the Two-Phase Method Phase 1 Step 1: Step 1´: Step 2: Step 3: Step 4:

Modify the constraints so that the RHS of each constraint is non-negative. This requires that each constraint with negative RHS be multiplied through by (  1). Identify each constraint that is now (after Step 1) an equality or   constraint. In Step 3 we shall add an artificial variable to such constraints. Convert each inequality constraint to standard form. If i is a Ÿ constraint, add a slack variable. If constraint i is a   constraint, subtract an excess variable. If (after Step 1´) constraint i is a   or an equality constraint, add an artificial variable 0i to constraint i. Find the minimum value of ' using the simplex algorithm. Each excess and artificial variable is restricted to be   0.

Phase 1 ends when w has been minimized. This results in one of the following three cases, which are dealt with in Phase 2.

Phase 2
Case 1: w* > 0. The original LP problem has no feasible solution.

Case 2: The optimal value w* = 0 and no artificial variables are in the optimal Phase 1 basis. ⇒ Drop all columns in the optimal Phase 1 tableau that correspond to the artificial variables. We now combine the original objective function with the constraints from the Phase 1 tableau. The final basis of Phase 1 is the initial basis of the Phase 2 LP. The optimal solution to the Phase 2 LP is the optimal solution to the original LP problem.

Case 3: The optimal value w* = 0 and at least one artificial variable (at zero value) is in the optimal Phase 1 basis. (When this occurs, it indicates that the original LP had at least one redundant constraint.) Again we continue by optimizing x0, but we have to ensure that no artificial variable becomes non-zero again. We note first that we will not make an artificial variable non-zero by allowing it to enter the basis once it becomes non-basic. The problem occurs when xk is to enter the basis and yjk < 0, where yjk is the coefficient in column k of some row j with an artificial in it. But as yj0 = 0 (the RHS of row j), we can nevertheless pivot on yjk (an unusual pivot, on a negative element): the basic solution does not change, but the artificial is pivoted out of the basis. One thus applies the normal simplex criteria for choice of pivot, except that the case yjk < 0 above causes a "non-standard" pivot selection.

We can see now how we have sidestepped the earlier assumption that the rows of the A matrix are linearly independent: we have ensured this by adding artificial variables. If the rows of the original A matrix are in fact linearly dependent, then even when a feasible solution is found there will be basic artificial variables (at zero value, of course).

We briefly discuss why w* > 0 implies that the original LP has no feasible solution, and why w* = 0 implies that it has at least one. Suppose the original LP is infeasible. Then the only way to obtain a feasible solution to the Phase 1 LP is to let at least one artificial variable be positive ⇒ w* > 0 ⇒ Case 1. If the original LP has a feasible solution, this feasible solution (with all ai = 0) is feasible in the Phase 1 LP and gives w = 0 ⇒ the optimal Phase 1 solution will have w* = 0.

Example 24: (Case 2)
min x0 =

2x1 + 3x2
subject to
(1/2)x1 + (1/4)x2 ≤ 4
x1 + 3x2 ≥ 20
x1 + x2 = 10
x1, x2 ≥ 0.

Steps 1-3 transform the constraints into
(1/2)x1 + (1/4)x2 + s1 = 4
x1 + 3x2 - e2 + a2 = 20
x1 + x2 + a3 = 10
(s = slack, e = excess, a = artificial).

Step 4 yields the Phase 1 LP: min w = a2 + a3 subject to the same constraints. Initial BFS for Phase 1: s1 = 4, a2 = 20, a3 = 10. The artificials a2 and a3 must be eliminated from the objective function w before solving Phase 1:

  Row 0:       w - a2 - a3             = 0
+ Row 2:       x1 + 3x2 - e2 + a2      = 20
+ Row 3:       x1 + x2 + a3            = 10
= New Row 0:   w + 2x1 + 4x2 - e2      = 30
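The row addition above is just componentwise arithmetic on the tableau rows. A two-line sketch (our own, with the columns ordered x1, x2, s1, e2, a2, a3 and the RHS last):

```python
# Row 0 of the Phase 1 LP plus the two rows containing artificials.
row0 = [0, 0, 0, 0, -1, -1, 0]       # w - a2 - a3 = 0
row2 = [1, 3, 0, -1, 1, 0, 20]       # x1 + 3x2 - e2 + a2 = 20
row3 = [1, 1, 0, 0, 0, 1, 10]        # x1 + x2 + a3 = 10
new_row0 = [a + b + c for a, b, c in zip(row0, row2, row3)]
print(new_row0)                      # [2, 4, 0, -1, 0, 0, 30]
```

The artificial columns cancel, leaving w expressed in the non-basic variables, exactly as in (20).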

Combining the new Row 0 with the Phase 1 constraints yields the initial Phase 1 tableau below. Since Phase 1 is always a minimisation problem (even if the original LP is a maximisation problem), we enter x2 into the basis. The ratio test indicates that x2 enters in row 2, so a2 exits from the basis.

In the second tableau, since 2/3 > 1/3, x1 enters the basis. The ratio test indicates that a3 should leave. Since a2 and a3 are then both non-basic, the third tableau is the optimal Phase 1 tableau.

BV      x1     x2     s1     e2     a2     a3     RHS    ratio
___________________________________________________________________
w        2      4      0     -1      0      0      30
s1      1/2    1/4     1      0      0      0       4      16
a2       1      3*     0     -1      1      0      20      20/3
a3       1      1      0      0      0      1      10      10
___________________________________________________________________
w       2/3     0      0     1/3   -4/3     0     10/3
s1     5/12     0      1    1/12  -1/12     0      7/3     28/5
x2      1/3     1      0    -1/3    1/3     0     20/3     20
a3      2/3*    0      0     1/3   -1/3     1     10/3     5
___________________________________________________________________
w        0      0      0      0     -1     -1       0
s1       0      0      1    -1/8    1/8   -5/8     1/4
x2       0      1      0    -1/2    1/2   -1/2      5
x1       1      0      0     1/2   -1/2    3/2      5
___________________________________________________________________

w* = 0 ⇒ Phase 1 concluded. BFS: s1 = 1/4, x2 = 5, x1 = 5. No artificial variables in the basis: this is an example of Case 2. We now drop the columns of the artificial variables a2, a3 (we no longer need them) and reintroduce the original objective function:

min x0 = 2x1 + 3x2,  or  x0 - 2x1 - 3x2 = 0.

Since x1 and x2 are in the optimal Phase 1 basis, they must be eliminated from the Phase 2 row 0 (i.e. the objective function x0). This is normally done implicitly, or automatically, as Phase 1 progresses, as in Example 23. The purpose of this explicit illustration is to highlight the underlying mechanics of the process.

  Phase 2 Row 0:       x0 - 2x1 - 3x2           = 0
+ 3 × (Row x2):             3x2 - (3/2) e2      = 15
+ 2 × (Row x1):        2x1      +       e2      = 10
= New Phase 2 Row 0:   x0       - (1/2) e2      = 25

We now begin Phase 2 with the following:

min   x0 - (1/2) e2 = 25
      s1 - (1/8) e2 = 1/4
      x2 - (1/2) e2 = 5
      x1 + (1/2) e2 = 5

This is optimal: increasing the non-basic e2 can only increase x0, so in this problem Phase 2 requires no further pivots. If the Phase 2 row 0 does not indicate an optimal tableau, simply continue with the simplex algorithm until an optimal row 0 (i.e. objective function) is obtained.

Example 25: (Case 1)
min x0 = 2x1 + 3x2
subject to
(1/2)x1 + (1/4)x2 ≤ 4
x1 + 3x2 ≥ 36
x1 + x2 = 10
x1, x2 ≥ 0.

After Steps 1-4: min w = a2 + a3 subject to
(1/2)x1 + (1/4)x2 + s1 = 4
x1 + 3x2 - e2 + a2 = 36
x1 + x2 + a3 = 10.

Initial BFS for Phase 1: s1 = 4, a2 = 36, a3 = 10. Again, a2 and a3 must be eliminated from the objective function w before solving Phase 1:

  Row 0:       w - a2 - a3             = 0
+ Row 2:       x1 + 3x2 - e2 + a2      = 36
+ Row 3:       x1 + x2 + a3            = 10
= New Row 0:   w + 2x1 + 4x2 - e2      = 46

Since 4 > 2, x2 enters the basis and (by the ratio test: 16, 12, 10) replaces a3. In the second tableau, no variable in Row 0 has a positive coefficient: this is an optimal Phase 1 tableau with w* = 6 > 0 ⇒ there is no feasible solution to this problem.

BV      x1     x2     s1     e2     a2     a3     RHS    ratio
___________________________________________________________________
w        2      4      0     -1      0      0      46
s1      1/2    1/4     1      0      0      0       4      16
a2       1      3      0     -1      1      0      36      12
a3       1      1*     0      0      0      1      10      10
___________________________________________________________________
w       -2      0      0     -1      0     -4       6
s1      1/4     0      1      0      0    -1/4     3/2
a2      -2      0      0     -1      1     -3       6
x2       1      1      0      0      0      1      10
___________________________________________________________________

9. EXTENSIONS OF LP
Some optimization problems can be converted to an LP, or to a sequence of LPs, or can be solved by modifying the simplex algorithm.

EXTENSION 1: Min-Max with LP
Let c(1), ..., c(p) ∈ R^n and let φ(x) = max_{t = 1,...,p} (c(t))^T x. The min-max problem

min { φ(x) | Ax = b; x ≥ 0 }                                     (21)

can be converted to the LP

min { x0 | x0 - (c(t))^T x ≥ 0, t = 1,...,p; Ax = b; x ≥ 0 }     (22)

Theorem 10: If (x0*, x*) solves (22) then x* solves (21) and x0* = φ(x*).

Proof: If x is a feasible solution of (21) then (φ(x), x) is a feasible solution to (22) (since x satisfies Ax = b, x ≥ 0 and φ(x) - (c(t))^T x ≥ 0, t = 1,...,p). Thus x0* ≤ φ(x), which in particular implies that x0* ≤ φ(x*). But as x0* ≥ (c(t))^T x* for t = 1,...,p in (22), we have x0* ≥ φ(x*) and hence x0* = φ(x*). It then follows that φ(x) ≥ x0* = φ(x*) for any feasible solution of (21).

…

EXTENSION 2: Min-min problems
Let c(1), ..., c(p) be as in Extension 1 and let ψ(x) = min_{t = 1,...,p} (c(t))^T x. We consider the problem

min { ψ(x) | Ax = b; x ≥ 0 }                                     (23)

This can be tackled by solving the p LPs

min { (c(t))^T x | Ax = b; x ≥ 0 },  t = 1,...,p.                (24,t)

Let x(t), t = 1,...,p, denote an optimum solution to (24,t) and let z(t) = (c(t))^T x(t).

Theorem 11: If z(t*) = min_{t = 1,...,p} z(t), then x(t*) is an optimal solution to (23) and z(t*) = ψ(x(t*)).

Proof: If x is a feasible solution to (23) then for some q

ψ(x) = (c(q))^T x ≥ z(q) ≥ z(t*).

Now for t ≠ t* we have

(c(t))^T x(t*) ≥ z(t) ≥ z(t*) = (c(t*))^T x(t*),

and hence ψ(x(t*)) = z(t*).

…
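Theorem 11 can be illustrated on a toy instance. In the sketch below (our own; the polytope and the costs are hypothetical) P is the unit simplex, over which each LP (24,t) is solved exactly by inspecting the vertices; minimising ψ directly gives the same answer, as the theorem predicts:

```python
# Hypothetical instance: P is the unit simplex {x >= 0, x1 + ... + xn = 1},
# over which min c^T x is attained at a vertex, i.e. equals min_j c_j.
costs = [(3, 1, 4), (2, 2, 2), (5, 0, 6)]        # c(1), c(2), c(3)

# Solve the p LPs (24,t) and take the best, as in Theorem 11.
z = [min(c) for c in costs]                      # z(t) = optimum of (24,t)
best = min(z)

# Direct check: psi is a minimum of linear functions, hence concave, so its
# minimum over P is also attained at a vertex e_j, where psi(e_j) = min_t c(t)_j.
n = len(costs[0])
psi_min = min(min(c[j] for c in costs) for j in range(n))
print(best, psi_min)                             # 0 0
```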

EXTENSION 3: Goal Programming and Approximation Problems

minimise Σ_{i=1}^{p} | (c(i))^T x - bi |

A typical function (c(i))^T x - bi may be split into positive and negative parts by writing

(c(i))^T x - bi = xi+ - xi-;   xi+, xi- ≥ 0.

The relationship | (c(i))^T x - bi | ≤ xi+ + xi- leads to the formulation

min { Σ_{i=1}^{p} (xi+ + xi-) | (c(i))^T x - bi = xi+ - xi-, i = 1,...,p; xi+, xi- ≥ 0 }

for x ∈ R^n, x+, x- ∈ R^p.

Example 26:
min { | x1 - 2x2 + x3 |  :  2x1 + 3x2 + 4x3 ≥ 60;  7x1 + 5x2 + 3x3 ≥ 105;  x1, x2, x3 ≥ 0 }

The LP formulation (written as a max problem):

max - (x1+ + x1-)
x1 - 2x2 + x3 - (x1+ - x1-) = 0
2x1 + 3x2 + 4x3 ≥ 60
7x1 + 5x2 + 3x3 ≥ 105
x1, x2, x3, x1+, x1- ≥ 0
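The key point of the formulation is that at an optimum at most one of xi+, xi- is positive (otherwise both could be reduced by a common amount), so xi+ + xi- equals |(c(i))^T x - bi| rather than merely bounding it. A small sketch of the split (our own code):

```python
def split(v):
    """Positive/negative parts: v = vp - vn with vp, vn >= 0 and vp + vn minimal."""
    vp, vn = max(v, 0), max(-v, 0)
    return vp, vn

for v in (-3.5, 0.0, 2.25):
    vp, vn = split(v)
    assert v == vp - vn and vp + vn == abs(v)
    # any other feasible split (vp + t, vn + t), t > 0, has a larger sum
    assert (vp + 1) + (vn + 1) > abs(v)
print("ok")
```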

EXTENSION 4: Fractional LP

min { (α0 + α1x1 + ... + αnxn) / (β0 + β1x1 + ... + βnxn) | Ax = b; x ≥ 0 }     (25)

where the set P = { x | Ax = b, x ≥ 0 } is bounded, i.e. there exists L > 0 such that P ⊆ { x : ‖x‖ ≤ L }. We first make the transformation

xj = yj / y0   for j = 1, ..., n

and assume that y0 > 0. Problem (25) then becomes

min { (α0 y0 + α1 y1 + ... + αn yn) / (β0 y0 + β1 y1 + ... + βn yn) |
      bi y0 - Σ_{j=1}^{n} aij yj = 0, i = 1,...,m;  y0 > 0;  y1, ..., yn ≥ 0 }   (26)

Note next that if (y0, y1, ..., yn) is feasible for (26), then λ(y0, y1, ..., yn) is also feasible for any λ > 0 and has the same objective value (the factor λ cancels). We can thus restrict our attention to (y0, y1, ..., yn) satisfying

β0 y0 + ... + βn yn = 1 or -1,

because given an optimal solution to (26) which does not satisfy one of these equations we can positively scale it to one that does. Thus (26) can be solved by solving the two LPs

min { α0 y0 + α1 y1 + ... + αn yn | bi y0 - Σ_{j=1}^{n} aij yj = 0, i = 1,...,m;
      β0 y0 + β1 y1 + ... + βn yn = δ;  y0, ..., yn ≥ 0 }                        (27)

where in one problem δ = 1 and in the other δ = -1. We then choose the better of the two solutions, (y0*, y1*, ..., yn*), and (y1*/y0*, ..., yn*/y0*) is optimal for (25). This will only be valid if we know that y0* > 0. Suppose that to the contrary y0* = 0. Then setting

d = (y1*, ..., yn*)^T,

we have Ad = 0, d ≥ 0 and d ≠ 0, as β1 d1 + ... + βn dn = 1 or -1. It follows that if x ∈ P then x + λd ∈ P for any λ > 0. This contradicts the fact that P is bounded (why?).
