Well, I’m on vacation this week and next.
I had thought I would get out a technical post last Monday, but life had other ideas. I never had any hope of putting one out next Monday, and that’s still true.
I had been, however, working on three different possibilities, and I hope that one of them will be ready for Halloween.
For each of the three, I’m in the same position: I could put out a post – but there’s something more I want to figure out, even though I don’t need to understand it for the post itself.
One of the three is to construct the final tableau of a linear programming problem. Oh, yes, Mathematica® will do a fine job of solving the problem – but the final tableau contains additional information. Whether you’re trying to follow along in a text, or whether you want to do sensitivity analysis, the final tableau is a very handy thing to have.
I know how to get it. And I can explain it. What I’m not clear on is whether duality theorems have anything additional to contribute. I know how to do sensitivity analysis – but people talk about duality as though it contributes to sensitivity analysis – and I just don’t see that. It may be that I’m so hung up on getting a nice display of the tableau that I’m failing to see some relationships.
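For what it’s worth, here’s one concrete way to see the link people seem to be claiming, sketched in Python/scipy rather than Mathematica (my illustration, not the workflow for the planned post): the optimal dual variables of an LP are exactly the shadow prices you’d read off the final tableau, and a shadow price is just the sensitivity of the optimal value to the right-hand side. The LP below is a standard textbook example, not one from the post.

```python
# Hedged illustration: duality meets sensitivity analysis.
# LP: maximize 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0.
# The shadow price of constraint i (= optimal dual variable i) is the rate of
# change of the optimal value as b_i is relaxed.
import numpy as np
from scipy.optimize import linprog

c = [-3.0, -5.0]                       # linprog minimizes, so negate
A = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b = np.array([4.0, 12.0, 18.0])

res = linprog(c, A_ub=A, b_ub=b, method="highs")
z_star = -res.fun                      # optimal value: 36 at (x, y) = (2, 6)

# Estimate each shadow price by re-solving with a slightly relaxed b_i.
eps = 1e-3
shadow = []
for i in range(3):
    b2 = b.copy()
    b2[i] += eps
    res2 = linprog(c, A_ub=A, b_ub=b2, method="highs")
    shadow.append((-res2.fun - z_star) / eps)

print(z_star)   # 36.0
print(shadow)   # ~[0.0, 1.5, 1.0]: constraint 1 is slack, so its price is 0
```

The perturbation estimates here reproduce the optimal dual solution, which is one way of phrasing what duality “contributes” to sensitivity analysis.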
The second thing is generalized eigenvectors. It’s pretty straightforward to find them – and they will always give you an N+S decomposition (nilpotent plus semisimple).
Which I think is equivalent to upper triangular, but I’m not sure. More interestingly, any basis of generalized eigenvectors will work for that – and I could write this up as a self-contained post.
But… if you want a basis of generalized eigenvectors which will give you Jordan Canonical Form, you have to make some special choices. Not every basis of generalized eigenvectors will give you JCF. I don’t need to explain how to do this in the first post – but I want to understand it before I put out that first post.
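Here is a tiny numeric sketch of the first-post material (my own toy example, not the matrix or notation from the planned post): build a chain of generalized eigenvectors for a defective 2×2 matrix, and check that the resulting basis delivers A = S + N with S semisimple, N nilpotent, and SN = NS.

```python
# Hedged sketch: generalized eigenvectors and the N+S decomposition.
import numpy as np

A = np.array([[2.0, 1.0],
              [-1.0, 4.0]])           # double eigenvalue 3, only one eigenvector

lam = 3.0
B = A - lam * np.eye(2)               # (A - 3I) has rank 1

v1 = np.array([1.0, 1.0])             # ordinary eigenvector: B @ v1 = 0
v2 = np.linalg.lstsq(B, v1, rcond=None)[0]   # generalized: B @ v2 = v1

P = np.column_stack([v1, v2])         # basis of generalized eigenvectors
J = np.linalg.inv(P) @ A @ P          # the Jordan block [[3, 1], [0, 3]]

S = P @ np.diag([lam, lam]) @ np.linalg.inv(P)   # semisimple part (here 3I)
N = A - S                                        # nilpotent part

print(np.round(J, 6))
print(np.allclose(N @ N, 0), np.allclose(S @ N, N @ S))   # True True
```

In this 2×2 case any chain works, since the chain *is* the special choice; the subtlety about which bases of generalized eigenvectors yield JCF only shows up with bigger blocks.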
Third, I played around with a suggestion I’ve seen a few times: eliminate multicollinearity by orthogonalizing the data – make the columns of the design matrix (including the column of 1s) perpendicular to each other.
Well, if I use Gram-Schmidt to do that with the Hald data, it works out fine. But if I use principal component analysis (PCA) to do that with the Hald data, it doesn’t work out fine. Oh, but I hadn’t included the column of 1s….
More interestingly, the result of the Gram-Schmidt looks like an affine transformation rather than a linear one: what should have been the transition matrix for the change of basis is off by a constant vector… I seem to have moved the origin.
So I could show you what happens to the multicollinearity if I use Gram-Schmidt on the Hald data – but I need to know more about what I really did, and whether PCA could also have been used, even though these are not essential for the first post.
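The orthogonalization idea can be sketched like so, using a synthetic stand-in for the Hald data (the real 13×4 cement data isn’t reproduced here) and QR factorization as the Gram-Schmidt step – the columns of Q are an orthonormalized version of the columns of X, column of 1s included, and the fitted values of the regression are unchanged because Q spans the same column space.

```python
# Hedged sketch: kill multicollinearity by orthogonalizing the design matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 13                                      # same size as the Hald data
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)    # deliberately collinear with x1
X = np.column_stack([np.ones(n), x1, x2])   # design matrix WITH the column of 1s
y = 1.0 + 2.0 * x1 - 1.0 * x2 + 0.05 * rng.normal(size=n)

Q, R = np.linalg.qr(X)      # Gram-Schmidt-style orthogonalization of the columns

# The orthogonalized design is perfectly conditioned.
print(np.linalg.cond(X), np.linalg.cond(Q))

# Fitted values are identical, since col(Q) = col(X).
yhat_X = X @ np.linalg.lstsq(X, y, rcond=None)[0]
yhat_Q = Q @ (Q.T @ y)
print(np.allclose(yhat_X, yhat_Q))   # True
```

Note that QR rescales as well as orthogonalizes, so the new coefficients live in different units – which is related to why interpreting the transformed fit takes some care.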
Thus, I have three main courses cooking… but I want to know what’s for dessert before I put any main course on the table.
As for this post, I could hold it until Saturday – but every once in a while the scheduling doesn’t quite do what I intend… and I may not have internet access Saturday… since it’s ready now, let it go now.
This, then, is the diary post regularly scheduled for 2011 Oct 22.