Here’s where I get to be honest about what I stumbled over. I’ve read the proof in Stewart; this is just an expanded version of it, although I want to change the notation. Some of what I need to do is obvious; some of it didn’t register, and I had to go back and peek at his proof again.
Let X be a real matrix. We wish to show that

$$X = u\,w\,v^T,$$

with u and v orthogonal, and w diagonal, all elements non-negative.
OK, form $A = X^T X$. It is square and symmetric, hence diagonalizable, and all of its eigenvalues are real; in fact they are non-negative. Even more, the positive eigenvalues are the squares of the nonzero diagonal entries of w. We would usually write

$$A\,v = v\,\Lambda,$$

with v orthogonal, but we know that we’re going to get our w’s from the square roots of these eigenvalues. So let’s change our notation and write

$$A\,v = v\,w^2.$$
This was the first thing I forgot when I tried to work this out for myself. It turns out there’s another good reason for using the square, and it will present itself later if we don’t do it now.
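As a sanity check, here’s a quick numerical sketch (numpy; the matrix X is made up for illustration, not from Stewart) of the claim that the eigenvalues of $X^T X$ are the squares of the singular values of X:

```python
import numpy as np

# Hypothetical example: a random rectangular X, just to check that the
# eigenvalues of A = X^T X are the squares of the singular values of X.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))

A = X.T @ X
eigvals = np.sort(np.linalg.eigvalsh(A))[::-1]   # descending order
sing = np.linalg.svd(X, compute_uv=False)        # already descending

print(np.allclose(eigvals, sing**2))             # eigenvalues = w^2
```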
Now, A may have some zero eigenvalues. WLOG all of the eigenvalues may be ordered, getting smaller (more precisely, not getting larger) as the indices increase:

$$w_{11}^2 \ge w_{22}^2 \ge \cdots \ge w_{nn}^2 \ge 0.$$
(And yes, the ordering of the columns of v depends on the order of the eigenvalues.)
Let $w_1^2$ be the diagonal submatrix of $w^2$ consisting of the positive eigenvalues (it may be all of $w^2$). Split $v = (v_1\ v_2)$ so that v1 corresponds to the nonzero eigenvalues, v2 to the zero eigenvalues, if any. Why? Because a diagonal matrix can be inverted, and trivially so, iff all the diagonal entries are nonzero. We’ve just split off the invertible part of $w^2$.
We have split $A\,v = v\,w^2$ into

$$A\,v_1 = v_1\,w_1^2 \qquad\text{and}\qquad A\,v_2 = 0$$

(by definition all the vectors in v2 have eigenvalue 0). (The second equation, hence v2, may be vacuous.)
In fact, what we have is a matrix equation

$$A\,(v_1\ v_2) = (v_1\ v_2)\begin{pmatrix} w_1^2 & 0 \\ 0 & 0 \end{pmatrix},$$

whose southeast zero block zeroes out the 2nd column of $A\,v$; and we do have

$$v_2^T\,v_1 = 0,$$

since all the vectors in v are mutually orthogonal, and this zeroes out the southwest block of $v^T A\,v$.
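To make the split concrete, here’s a numpy sketch (my own rank-deficient example, not Stewart’s) of the two halves of that matrix equation:

```python
import numpy as np

# A deliberately rank-deficient X, so that A = X^T X really has zero
# eigenvalues and the split v = (v1 v2) actually shows up.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # 5x4, rank 2

A = X.T @ X
lam, v = np.linalg.eigh(A)
order = np.argsort(lam)[::-1]          # sort eigenvalues descending
lam, v = lam[order], v[:, order]

r = int(np.sum(lam > 1e-10))           # number of positive eigenvalues
v1, v2 = v[:, :r], v[:, r:]
w1_sq = np.diag(lam[:r])               # the invertible block w1^2

# A v1 = v1 w1^2 and A v2 = 0, the two halves of the matrix equation
print(np.allclose(A @ v1, v1 @ w1_sq), np.allclose(A @ v2, 0))
```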
All of that is wonderful, but it applies to $A = X^T X$. What about X itself? This is the part that didn’t register when I was reading it. Sheesh! Just expand A. (Ouch!) From $A\,v_1 = v_1\,w_1^2$ we get

$$X^T X\,v_1 = v_1\,w_1^2.$$
OK, we’ve gotten v and v1, v2. How do we get u? Well, what do we want?
We might suspect that we want something of the form

$$u = X\,v\,w^{-1},$$

but of course w isn’t invertible. Ah, but $w_1$ is. Let’s try

$$u_1 = X\,v_1\,w_1^{-1}.$$
We know that u is to be orthogonal, so u1 needs to be a set of orthonormal vectors, i.e. $u_1^T u_1 = I$. How do we show that? (I need to re-work this, but here it is.) u1 has exactly the same elements as

$$X\,v_1\,w_1^{-1},$$

so clearly we want to post-multiply $X^T X\,v_1 = v_1\,w_1^2$ by $w_1^{-1}$:

$$X^T X\,v_1\,w_1^{-1} = v_1\,w_1,$$

and then we need to realize that the diagonal matrix $w_1^{-1}$ is its own transpose, so that

$$u_1^T\,u_1 = \left(X\,v_1\,w_1^{-1}\right)^T\left(X\,v_1\,w_1^{-1}\right) = w_1^{-1}\,v_1^T\,X^T X\,v_1\,w_1^{-1}$$

is in fact

$$w_1^{-1}\,v_1^T\,v_1\,w_1^2\,w_1^{-1} = w_1^{-1}\,w_1^2\,w_1^{-1} = I.$$
That looks like u1 is orthogonal. It isn’t, actually, because it isn’t square, so the product $u_1\,u_1^T$ isn’t the identity. That combination would be called an orthonormal matrix instead. Each column of u1 is a unit vector, and all the columns are mutually orthogonal. There just aren’t enough of them.
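Here’s a numpy sketch of that orthonormal-but-not-orthogonal situation (the rank-deficient X is made up for illustration):

```python
import numpy as np

# u1 = X v1 w1^{-1} has orthonormal columns, but since u1 isn't square,
# u1 u1^T is not the identity: "orthonormal", not orthogonal.
rng = np.random.default_rng(2)
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # rank 2

lam, v = np.linalg.eigh(X.T @ X)
order = np.argsort(lam)[::-1]
lam, v = lam[order], v[:, order]
r = int(np.sum(lam > 1e-10))
v1 = v[:, :r]
w1 = np.diag(np.sqrt(lam[:r]))

u1 = X @ v1 @ np.linalg.inv(w1)

print(np.allclose(u1.T @ u1, np.eye(r)))            # True: columns orthonormal
print(np.allclose(u1 @ u1.T, np.eye(X.shape[0])))   # False: not enough columns
```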
(BTW, i found the term “orthonormal matrix” defined in applications of the SVD. i had come across the term before, but never seen it defined in a math book. And no, i didn’t say it wasn’t in any of them, just that i had never seen it defined before.)
So there just aren’t enough of them?
Good golly, Miss Molly! We know what to do when we don’t have enough orthonormal vectors. Just extend the set of column vectors of u1 by Gram-Schmidt, and let u2 be the new vectors. That is, we let
u = (u1 u2).
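In code, a sketch of the extension step (numpy’s QR in ‘complete’ mode plays the role of Gram-Schmidt here; the example X is made up):

```python
import numpy as np

# Extend the orthonormal columns of u1 to a full orthogonal u = (u1 u2).
rng = np.random.default_rng(3)
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # rank 2

lam, v = np.linalg.eigh(X.T @ X)
order = np.argsort(lam)[::-1]
lam, v = lam[order], v[:, order]
r = int(np.sum(lam > 1e-10))
v1 = v[:, :r]
w1 = np.diag(np.sqrt(lam[:r]))
u1 = X @ v1 @ np.linalg.inv(w1)

q, _ = np.linalg.qr(u1, mode='complete')  # full orthonormal basis containing span(u1)
u2 = q[:, r:]                             # the Gram-Schmidt "extra" vectors
u = np.hstack([u1, u2])                   # u = (u1 u2)

print(np.allclose(u.T @ u, np.eye(u.shape[0])))  # u is orthogonal
```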
There’s nothing left to do but to see if it worked. Compute

$$u^T X\,v = \begin{pmatrix} u_1^T \\ u_2^T \end{pmatrix} X\,(v_1\ v_2) = \begin{pmatrix} u_1^T X v_1 & u_1^T X v_2 \\ u_2^T X v_1 & u_2^T X v_2 \end{pmatrix}.$$

Now what? u1 was defined in terms of v1, so use it: from $u_1 = X\,v_1\,w_1^{-1}$ we get $X\,v_1 = u_1\,w_1$, hence

$$u_1^T X v_1 = u_1^T u_1\,w_1 = w_1 \qquad\text{and}\qquad u_2^T X v_1 = u_2^T u_1\,w_1 = 0.$$
We will also need to realize that the eigenvectors v2 in the null space of A are also in the null space of X: $A\,v_2 = 0$ gives $v_2^T X^T X\,v_2 = \|X v_2\|^2 = 0$, i.e.

$$X\,v_2 = 0.$$
Finally, we arrange that into the usual form:

$$u^T X\,v = \begin{pmatrix} w_1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad\text{i.e.}\qquad X = u \begin{pmatrix} w_1 & 0 \\ 0 & 0 \end{pmatrix} v^T.$$
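And a numpy sketch of the final check, end to end (again with a made-up rank-deficient X; the tolerances are loose because the "zero" blocks are only zero up to rounding):

```python
import numpy as np

# End-to-end check: u^T X v should come out as the block matrix (w1 0; 0 0).
rng = np.random.default_rng(4)
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))  # rank 2

lam, v = np.linalg.eigh(X.T @ X)
order = np.argsort(lam)[::-1]
lam, v = lam[order], v[:, order]
r = int(np.sum(lam > 1e-10))
v1, v2 = v[:, :r], v[:, r:]
w1 = np.diag(np.sqrt(lam[:r]))

u1 = X @ v1 @ np.linalg.inv(w1)
q, _ = np.linalg.qr(u1, mode='complete')
u = np.hstack([u1, q[:, r:]])

w = np.zeros(X.shape)          # (w1 0; 0 0), shaped like X
w[:r, :r] = w1

print(np.allclose(u.T @ X @ v, w, atol=1e-6))   # the usual form
print(np.allclose(X @ v2, 0, atol=1e-6))        # v2 is in the null space of X too
```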
(Whew! It takes too long to do all that, and I may have defeated the purpose of explaining it. Dang!)