that was R-mode. Q-mode is similar. in fact, it’s more than similar, but we’ll get to that. the starting point is to form X X^T instead of X^T X:
where X^T X was 3×3 because we had 3 variables, X X^T is 4×4 because we have 4 observations. clearly, if we have 100 observations, X X^T will be 100×100, etc. that could be hard to work with numerically; it would very likely be overwhelming conceptually. not sure about you, but i don’t want to look for structure in a 100×100 matrix!
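here’s the size difference in a quick numpy sketch. the matrix X is made up for illustration (it is not the data above), but it has the same shape: 4 observations by 3 variables, with columns summing to zero.

```python
import numpy as np

# hypothetical 4x3 data matrix: 4 observations (rows), 3 variables (columns);
# not the author's data, but its 3 columns also sum to the zero vector
X = np.array([[1., 0., -1.],
              [0., 1., -1.],
              [-1., -1., 2.],
              [0., 0., 0.]])

R = X.T @ X    # R-mode matrix: 3x3, one row/column per variable
Q = X @ X.T    # Q-mode matrix: 4x4, one row/column per observation

print(R.shape, Q.shape)    # (3, 3) (4, 4)
```

with 100 observations and 3 variables, R would still be 3×3 but Q would balloon to 100×100.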
guess what? a light dawns. i see why one might not want to display all of the V matrix. and for large enough data sets, one might not compute it. orthogonal be damned, if it’s too damned big!
there is a method to their madness, at least computationally. nevertheless, conceptually i will continue to use the SVD with u and v square and orthogonal.
guess what? another light dawns. i wonder if one might sometimes choose to compute only an eigendecomposition of X^T X instead of an SVD of X; we would get only the smaller eigenvector matrix v, without the larger u and w matrices (in X = u w v^T). this could be a very useful dodge.
anyway, having X X^T, we do an eigendecomposition (of course).
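to see that this dodge really recovers the same information, here’s a numpy sketch with a made-up matrix (not the data above); np.linalg.eigh stands in for whatever eigendecomposition routine you have.

```python
import numpy as np

# hypothetical 4x3 matrix (not the author's data)
X = np.array([[1., 0., -1.],
              [0., 1., -1.],
              [-1., -1., 2.],
              [0., 0., 0.]])

# the full SVD X = u w v^T computes u (4x4) as well as v (3x3)
u, s, vt = np.linalg.svd(X)

# the dodge: eigendecompose the small 3x3 matrix X^T X instead, getting
# just the eigenvector matrix (the v of the SVD) and the eigenvalues
# (squared singular values), never forming the larger u at all
evals, V = np.linalg.eigh(X.T @ X)
order = np.argsort(evals)[::-1]        # eigh sorts ascending; flip to descending
evals, V = evals[order], V[:, order]

# singular values are the square roots of these eigenvalues
# (clip guards against tiny negative round-off in the zero eigenvalues)
print(np.allclose(np.sqrt(np.clip(evals, 0, None)), s))   # True
```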
we construct a diagonal matrix of square roots of the eigenvalues:
if we remember that the nonzero eigenvalues of X^T X and of X X^T are the same, we are not surprised by that matrix. except for its size, it’s very like the diagonalization we got from X^T X, which was:
in fact, in their nonzero content, they’re identical. this is probably a good time to remind ourselves that the row rank and column rank of a matrix are the same; and the ranks of X^T X and X X^T are also the same, and are also equal to the rank of X; and the number of nonzero singular values is also equal to the rank of X, and to the number of nonzero eigenvalues. what all that means is that the two diagonal matrices always have the same number of nonzero diagonal elements. (then the SVD shows us that in fact the nonzero eigenvalues are the same.) alternatively, whichever of the diagonal matrices is bigger differs only in having a lot of extra zeroes.
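all of that is easy to check numerically. a sketch, again with a made-up 4×3 matrix whose columns sum to zero (not the data above):

```python
import numpy as np

# hypothetical 4x3 matrix, columns summing to zero (not the author's data)
X = np.array([[1., 0., -1.],
              [0., 1., -1.],
              [-1., -1., 2.],
              [0., 0., 0.]])

eig_R = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]   # 3 eigenvalues of X^T X
eig_Q = np.sort(np.linalg.eigvalsh(X @ X.T))[::-1]   # 4 eigenvalues of X X^T

r = np.linalg.matrix_rank(X)                  # rank of X (here 2)
print(np.allclose(eig_R[:r], eig_Q[:r]))      # True: nonzero eigenvalues agree
print(np.allclose(eig_R[r:], 0))              # True: the rest of eig_R is zero
print(np.allclose(eig_Q[r:], 0))              # True: eig_Q just has an extra zero
```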
in case you hadn’t said it to yourself, our data matrix is of rank 2 instead of rank 3. why? because its 3 columns add up to 0; that’s the very definition of linearly dependent vectors, that a nontrivial linear combination add up to zero.
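the same rank drop shows up in my made-up stand-in matrix, whose 3 columns also sum to zero:

```python
import numpy as np

# hypothetical 4x3 matrix with the same property as the author's data:
# its 3 columns add up to the zero vector
X = np.array([[1., 0., -1.],
              [0., 1., -1.],
              [-1., -1., 2.],
              [0., 0., 0.]])

# the nontrivial linear combination 1*col1 + 1*col2 + 1*col3 is zero...
print(np.allclose(X @ np.ones(3), 0))      # True
# ...so the columns are linearly dependent and the rank drops below 3
print(np.linalg.matrix_rank(X))            # 2
```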
next, we check that our program gave us an orthogonal eigenvector matrix V: we compute V^T V:
whoa! now you see why i check that! i have no idea why Mathematica® didn’t return an orthogonal matrix, especially since it’s so close to being one. anyway, there’s a single command i can use (Orthogonalize) and doing so, i get an orthogonal V… then i confirm that V^T V is the identity (V is orthogonal)…
(this is similar to what you might have to do to extend an orthonormal matrix from your SVD routine to an orthogonal one.)
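one way to do that extension, sketched in numpy (the matrix is made up, and QR factorization here plays the role of Gram–Schmidt / Orthogonalize): start from the orthonormal columns the routine gave you and fill out the basis.

```python
import numpy as np

# hypothetical 4x3 rank-2 matrix (not the author's data)
X = np.array([[1., 0., -1.],
              [0., 1., -1.],
              [-1., -1., 2.],
              [0., 0., 0.]])

# suppose an SVD routine handed back only the first r = rank(X) columns of u
u_full = np.linalg.svd(X)[0]
r = np.linalg.matrix_rank(X)
u_thin = u_full[:, :r]            # 4x2: orthonormal columns, but not square

# complete it to a square orthogonal matrix: QR-factor [u_thin | I];
# the first r columns of the Q factor match u_thin (up to sign),
# and the remaining columns fill out an orthonormal basis
U, _ = np.linalg.qr(np.hstack([u_thin, np.eye(4)]))

# Householder QR may flip the signs of the original columns; flip them back
signs = np.sign(np.sum(U[:, :r] * u_thin, axis=0))
U[:, :r] = U[:, :r] * signs

print(np.allclose(U[:, :r], u_thin))     # True: original columns preserved
print(np.allclose(U.T @ U, np.eye(4)))   # True: U is square and orthogonal
```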
BTW, what changed? we originally had:
and then we got:
only the 4th column is different. Mathematica® did not change the first 3 columns.
back to our computations. we define A as the w-weighted eigenvectors, A = V w:
the definition of the scores S might look slightly different at first glance, but it isn’t. a Q-mode analysis of X is precisely an R-mode analysis of X^T. we compute S = X^T A (transposing only the X, by comparison with the R-mode S = X A):
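a sketch of that duality, once more with a made-up X (not the data above): feeding X^T through the R-mode recipe produces exactly the Q-mode matrix, and the w-weighted eigenvectors A reproduce it.

```python
import numpy as np

# hypothetical 4x3 matrix (not the author's data)
X = np.array([[1., 0., -1.],
              [0., 1., -1.],
              [-1., -1., 2.],
              [0., 0., 0.]])

# R-mode forms (data)^T (data); feed it X^T and you get the Q-mode matrix
print(np.allclose((X.T).T @ (X.T), X @ X.T))   # True: same matrix, same V and w

# the w-weighted eigenvectors A = V w satisfy A A^T = X X^T,
# since A A^T = V w w V^T = V (eigenvalues) V^T
evals, V = np.linalg.eigh(X @ X.T)
order = np.argsort(evals)[::-1]
evals, V = evals[order], V[:, order]
w = np.diag(np.sqrt(np.clip(evals, 0, None)))  # 4x4 diag of sqrt eigenvalues
A = V @ w
print(np.allclose(A @ A.T, X @ X.T))           # True
```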
to summarize, i have 4 matrices: the A and S from R-mode, and the A and S from Q-mode.
all four are different sizes. nevertheless, they are far more similar than they appear.