Linear Programming – The Final Tableau of a Minimization Problem

A Minimization problem

Let me work a minimization problem rather than a maximization. Like the previous problem, this one comes from Loomis & Turban, “Applied Programming for Management” (ISBN 003-078240-6, pp. 97–99).

A customer asks a butcher to grind up several cuts of beef to form a blend of not less than 17.6% protein and 14.8% fat.

What he has available is

So that table gives us the A matrix and the c vector; the protein and fat requirements give us the b vector. Let the variables be x1, x2, x3.

I need to include the non-negativity of the xi; the set of constraints becomes:

Then the problem is to minimize c.x

Both constraints are binding, so I don’t need slack variables – but I want them for their shadow prices. And they are surplus variables – that is, they would be negative if they were not zero.

So add two variables called s1 and s2:

I augment the A matrix with an identity matrix on the right… and I augment the cost vector with two costs of zero:

My b-constraints become a couple of equalities, As.xs = b; and I have sign constraints on the xi (non-negative) and si (non-positive).
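As a concrete sketch of this bookkeeping (the post does it in Mathematica), here is the augmentation in Python. The A matrix and cost vector are hypothetical placeholders, since the book’s table isn’t reproduced above; only b comes from the stated requirements.

```python
# Augmenting the system: As.xs = b with surplus variables s1, s2 <= 0.
# A and c are HYPOTHETICAL placeholders for the book's table;
# b holds the stated protein and fat requirements.
A = [[19.0, 16.0, 18.0],   # hypothetical protein % of each cut
     [16.0, 12.0, 15.0]]   # hypothetical fat % of each cut
c = [120.0, 80.0, 110.0]   # hypothetical costs, cents/lb
b = [17.6, 14.8]

m = len(A)  # number of constraints

# Append an m x m identity on the right of A...
As = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] for i in range(m)]
# ...and append m zero costs to the cost vector.
cs = c + [0.0] * m

print(As[0])  # [19.0, 16.0, 18.0, 1.0, 0.0]
print(cs)     # [120.0, 80.0, 110.0, 0.0, 0.0]
```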

… and I ask Mathematica® to Minimize cs.xs subject to the constraints cons…

Good. The answer is still x1 and x3, so I print A, construct our transition matrix T from columns 1 and 3 of A… get its inverse Ti… print the cost vector, and extract the first and third entries from it:
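The extraction and inversion are easy to sketch in Python too. The A matrix and costs below are hypothetical stand-ins for the book’s table; only the procedure matters.

```python
# Build the transition matrix T from the basic columns (x1 and x3)
# and invert it.  A and c are HYPOTHETICAL placeholder data.
A = [[19.0, 16.0, 18.0],
     [16.0, 12.0, 15.0]]
c = [120.0, 80.0, 110.0]

basis = [0, 2]  # columns 1 and 3, 0-indexed
T = [[row[j] for j in basis] for row in A]

# Inverse of a 2x2 matrix by the cofactor formula.
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
Ti = [[ T[1][1] / det, -T[0][1] / det],
      [-T[1][0] / det,  T[0][0] / det]]

# The cut-down cost vector: entries 1 and 3 of c.
ct = [c[j] for j in basis]

# Sanity check: Ti . T should be the identity.
I2 = [[sum(Ti[i][k] * T[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
```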

I could get Af and bf, entries in the final tableau, directly:

But now let me construct the augmented T so I can get everything all at once. I add a column on the right… and the cut-down costs on the bottom… and get the inverse.

I also need to augment A. I append b as a column on the right… and I append the negative of the full cost vector (i.e. -cs) on the bottom (and a zero in the southeast corner)… So the first matrix I print is \hat{A}\ , and then I compute and print \hat{T}^{-1}\ \hat{A}\ , which is the final tableau.
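Here is the same construction sketched in Python, with hypothetical placeholder data standing in for the book’s table. One convention detail: since \hat{A}\ carries -cs on its bottom row, I put -ct on the bottom of \hat{T}\ so that the basic columns of \hat{T}^{-1}\ \hat{A}\ come out as unit columns; the Mathematica version may arrange the signs slightly differently.

```python
# Hatted matrices and the final tableau.  A, b entries for protein/fat
# are HYPOTHETICAL except b, which is the stated requirement.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A  = [[19.0, 16.0, 18.0], [16.0, 12.0, 15.0]]
b  = [17.6, 14.8]
cs = [120.0, 80.0, 110.0, 0.0, 0.0]
As = [A[0] + [1.0, 0.0], A[1] + [0.0, 1.0]]

# T from the basic columns (x1, x3) and its 2x2 inverse.
T   = [[A[0][0], A[0][2]], [A[1][0], A[1][2]]]
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
Ti  = [[ T[1][1] / det, -T[0][1] / det], [-T[1][0] / det,  T[0][0] / det]]
ct  = [cs[0], cs[2]]  # cut-down costs of the basic variables

# T-hat = [[T, 0], [-ct, 1]].  Its inverse has the block form
# [[Ti, 0], [ct.Ti, 1]], so no 3x3 inversion is needed.
bottom = [ct[0] * Ti[0][j] + ct[1] * Ti[1][j] for j in range(2)]
Ti_hat = [Ti[0] + [0.0], Ti[1] + [0.0], bottom + [1.0]]

# A-hat: b appended on the right, -cs (and a corner zero) on the bottom.
A_hat = [As[0] + [b[0]], As[1] + [b[1]], [-x for x in cs] + [0.0]]

final = matmul(Ti_hat, A_hat)   # the final tableau

# The basic columns (x1, x3) come out as unit columns with 0 in the
# bottom row; the bottom-right entry is the optimal cost.
```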

What is that -48 in the bottom row?

It ought to be how much the cost of x2 needs to change (i.e. to fall) for x2 to enter the optimal solution.
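In standard simplex notation (a sketch – sign conventions differ between texts), the bottom-row entry under a nonbasic variable x_j is the reduced cost

\bar{c}_j = c_B\,T^{-1}a_j - c_j\ ,

where c_B is the cut-down cost vector and a_j is column j of A. With \bar{c}_2 = -48\ , the entry reaches zero exactly when c_2 has fallen by 48 cents – which is why a 49-cent cut should bring x2 into the basis while a 47-cent cut should not.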

Really? Let’s drop the cost of x2 by 49 cents. I reduce the second entry in the cost vector:

(There’s no reason to include the slack/surplus variables for this case; I just want to confirm that x2 is in the new optimal solution.) Just to be sure, let’s drop the cost by only 47 cents instead. We recall the original costs… and lower the middle one, but only by 47 cents:

So that’s what the -48 told us: how far the cost of x2 must fall before x2 enters the optimal solution.

Now, what about the shadow prices on s1 and s2? Let’s change b1 by -1. Will the cost go down 4 cents? That’s my guess.

Start with the original b vector… and reduce the first entry by 1; here’s the original b vector followed by the new one:

Reset the constraints:


Yes. The cost has fallen by 4 cents, to 110.8 cents/lb from 114.8 cents/lb. Similarly, since the shadow price of s2 was 3 cents (the fifth entry of the last row of the final tableau)…

… the customer knows he can save 3 cents a pound by relaxing the second requirement by 1%. More generally, having gotten the butcher’s answer for the original problem, the customer can now decide how to drop the cost to, say, 100 cents per pound.
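The arithmetic behind that decision is linear in the shadow prices, at least while the current basis stays optimal. A quick check in Python, using only the numbers stated above:

```python
# Shadow prices and optimal cost quoted in the text above.
y = (4.0, 3.0)      # cents saved per 1% relaxation of requirements 1, 2
cost = 114.8        # optimal cost, cents/lb

# Relaxing the first requirement by 1% drops the cost to 110.8:
new_cost = cost - y[0] * 1.0

# To reach 100 cents/lb the requirements must be relaxed so that
# 4*d1 + 3*d2 = 14.8; e.g. fix d1 = 2 and solve for d2.
d1 = 2.0
d2 = (cost - 100.0 - y[0] * d1) / y[1]   # about 2.27
```

This extrapolation is only valid within the range over which the basis remains optimal – finding those ranges is part of the fuller sensitivity analysis mentioned below.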

There’s a lot more to sensitivity analysis, but you have what you need for studying it: a way to get the final tableau from Mathematica’s bare-bones solution to the problem.

The Dual Problem

Let me remind you – assuming you’ve ever seen it elsewhere! – that you can construct the dual linear programming problem by doing four things:

  1. transpose the A matrix
  2. interchange the b and c vectors
  3. replace maximize with minimize, or vice versa
  4. reconsider the signs on slack/surplus variables.
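Those four steps are mechanical enough to sketch in Python. The A matrix and costs below are hypothetical placeholders for the book’s table; b is from the stated requirements.

```python
# Constructing the dual.  A and c are HYPOTHETICAL placeholder data;
# b holds the stated protein and fat requirements.
A = [[19.0, 16.0, 18.0], [16.0, 12.0, 15.0]]
b = [17.6, 14.8]
c = [120.0, 80.0, 110.0]

# 1. transpose the A matrix (2x3 becomes 3x2)
At = [list(col) for col in zip(*A)]
# 2. interchange the b and c vectors
b_dual, c_dual = c, b
# 3. the primal minimized, so the dual maximizes c_dual . y
# 4. the dual constraints At . y <= b_dual take ordinary non-negative
#    slack variables rather than surplus variables.

print(At)   # [[19.0, 16.0], [16.0, 12.0], [18.0, 15.0]]
```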

That is, the dual of this problem is set up and solved as follows.

Transpose A… and swap b and c:

I choose to label the dual variables y = (y1, y2) instead of x:


(If you were wondering why I used vp and sp for the original solutions – p was for primal, and now d is for dual.) If you’ve ever seen this before, you were expecting to see that the optimal value is the same as it was, vd = vp = 114.8:

and you see our shadow prices of the primal (original) problem are now the values of the yi in the dual problem.
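That equality is easy to verify from the numbers quoted in the text – the dual variables (4, 3) against b = (17.6, 14.8):

```python
# Strong duality check: the dual optimum b . y equals the primal
# optimal cost.  All numbers are quoted in the text above.
b = [17.6, 14.8]
y = [4.0, 3.0]   # dual variables = shadow prices of the primal

vd = sum(bi * yi for bi, yi in zip(b, y))
print(vd)   # equals the primal optimum, 114.8 (up to float rounding)
```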

Let me get the final tableau.

Since I have three constraints, I add three slack variables. For these I retain the notation si.

I augment the A matrix and the c vector:

Here are the augmented constraints:

The solution variables are still y1 and y2, so the extraction is still columns (1, 2)… but now we see the 48 that was in the primal final tableau as the value of s2 – hence the shadow price of b2. Let me first get and print the transition matrix T, without bothering with its inverse… and get the cost vector and its cut-down version:

Augment and print the transition matrix, now \hat{T}\ … and now get and print its inverse \hat{T}^{-1}\ :

Now augment the A matrix again, to \hat{A}\ … and compute \hat{T}^{-1}\ \hat{A}\ :

For comparison, recall the final tableau of the primal problem…

Every number that is in one of these can be found in the other: the final tableau of either problem with slack or surplus variables comprises all the information available from either the primal or dual problems. In other words, we could have constructed the final tableau of the dual problem from the final tableau of the primal problem, without ever solving the dual.

I worked this minimization problem, as I said in a diary post, because I needed to verify the sensitivity analysis for myself. I worked the dual problem to confirm that there was no additional information in it, not once we have the primal solution with slack/surplus variables.

