PCA / FA Example 4: Davis. Q-mode.

that was R-mode. Q-mode is similar. in fact, it’s more than similar, but we’ll get to that. the starting point is to form XX^T instead of X^T\ X:
XX^T = \left(\begin{array}{ccc} -6&3&3\\ 2&1&-3\\ 0&-1&1\\ 4&-3&-1\end{array}\right) \times \left(\begin{array}{cccc} -6&2&0&4\\ 3&1&-1&-3\\ 3&-3&1&-1\end{array}\right)
= \left(\begin{array}{cccc} 54&-18&0&-36\\ -18&14&-4&8\\ 0&-4&2&2\\ -36&8&2&26\end{array}\right).
where X^T\ X was 3×3 because we had 3 variables, XX^T is 4×4 because we have 4 observations. clearly, if we have 100 observations, XX^T will be 100×100, etc. that could be hard to work with numerically; it would very likely be overwhelming conceptually. not sure about you, but i don’t want to look for structure in a 100×100 matrix!
guess what? a light dawns. i see why one might not want to display all of the V matrix. and for large enough data sets, one might not compute it. orthogonal be damned, if it’s too damned big!
there is a method to their madness, at least computationally. nevertheless, conceptually i will continue to use the SVD with u and v square and orthogonal.
guess what? another light dawns. i wonder if one might sometimes choose to compute only an eigendecomposition of X^T\ X instead of an SVD of X; we would get only the smaller eigenvector matrix, without the larger u and w matrices (in X = u\ w\ v^T). this could be a very useful dodge.
anyway, having XX^T, we do an eigendecomposition (of course).
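if you want to follow along, here is a minimal Mathematica sketch of this step (my reconstruction, not the original session; the variable names are mine):

    (* the 4x3 data matrix: 4 observations, 3 variables *)
    X = {{-6, 3, 3}, {2, 1, -3}, {0, -1, 1}, {4, -3, -1}};

    xxt = X . Transpose[X];               (* the 4x4 matrix XX^T shown above *)
    {vals, vecs} = Eigensystem[N[xxt]];   (* eigenvalues largest first, eigenvectors as rows *)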
we construct a diagonal matrix of square roots of the eigenvalues:
\left(\begin{array}{cccc} 9.16515&0.&0.&0.\\ 0.&3.4641&0.&0.\\ 0.&0.&0.&0.\\ 0.&0.&0.&0.\end{array}\right)
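in the sketch above, that matrix is just one more line (Chop removes tiny numerical roundoff so Sqrt isn't handed a slightly negative zero):

    w = DiagonalMatrix[Sqrt[Chop[vals]]];   (* diag(9.16515, 3.4641, 0, 0) *)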
if we remember that the nonzero eigenvalues of X^T\ X and of XX^T are the same, we are not surprised by that matrix. except for its size, it's very like the corresponding matrix of square roots of the eigenvalues of X^T\ X, which was:
\left(\begin{array}{ccc} 9.16515&0.&0.\\ 0.&3.4641&0.\\ 0.&0.&0.\end{array}\right)
in fact, in their nonzero content, they’re identical. this is probably a good time to remind ourselves that the row rank and column rank of a matrix are the same; and the ranks of X^T\ X and XX^T are also the same, and are also equal to the rank of X; and the number of nonzero singular values is also equal to the rank of X, and to the number of nonzero eigenvalues. what all that means is that X^T\ X and XX^T always have the same number of nonzero diagonal elements. (then the SVD shows us that in fact the nonzero eigenvalues are the same.) alternatively, whichever of the diagonal matrices is bigger differs only in having a lot of extra zeroes.
in case you hadn’t said it to yourself, our data matrix is of rank 2 instead of rank 3. why? because its 3 columns add up to the zero vector (equivalently, every row of X sums to 0); that’s the very definition of linearly dependent vectors: some nontrivial linear combination adds up to zero.
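those claims are easy to check numerically; continuing the sketch (these checks are mine, not Davis’s):

    MatrixRank[X]                             (* 2 *)
    Chop[Eigenvalues[N[Transpose[X] . X]]]    (* {84., 12., 0.} *)
    Chop[Eigenvalues[N[X . Transpose[X]]]]    (* {84., 12., 0., 0.}, same nonzero eigenvalues *)
    Total[X, {2}]                             (* {0, 0, 0, 0}: each row sums to 0, so the 3 columns add to the zero vector *)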
next, we check that our program gave us an orthogonal eigenvector matrix V: we compute V^T\ V:
V^T\ V = \left(\begin{array}{cccc} 0.801784&-0.267261&0&-0.534522\\ 0&0.816497&-0.408248&-0.408248\\ -0.597614&-0.358569&0&-0.717137\\ 0.146095&-0.266412&-0.885173&0.352349\end{array}\right)
\times \left(\begin{array}{cccc} 0.801784&0&-0.597614&0.146095\\ -0.267261&0.816497&-0.358569&-0.266412\\ 0&-0.408248&0&-0.885173\\ -0.534522&-0.408248&-0.717137&0.352349\end{array}\right)
= \left(\begin{array}{cccc} 1.&0&0&0\\ 0&1.&0&0\\ 0&0&1.&-0.244464\\ 0&0&-0.244464&1.\end{array}\right)
whoa! now you see why i check that! i’m not sure why Mathematica® didn’t return an orthogonal matrix, especially since it’s so close to being one; my best guess is the repeated zero eigenvalue, because eigenvectors belonging to a repeated eigenvalue need not come out mutually orthogonal. anyway, there’s a single command i can use (Orthogonalize) and, doing so, i get an orthogonal V^T and V… then i confirm that V^T\ V is the identity (V is orthogonal)…
V^T\ V = \left(\begin{array}{cccc} 0.801784&-0.267261&0&-0.534522\\ 0&0.816497&-0.408248&-0.408248\\ -0.597614&-0.358569&0&-0.717137\\ 0&-0.365148&-0.912871&0.182574\end{array}\right)
\times \left(\begin{array}{cccc} 0.801784&0&-0.597614&0\\ -0.267261&0.816497&-0.358569&-0.365148\\ 0&-0.408248&0&-0.912871\\ -0.534522&-0.408248&-0.717137&0.182574\end{array}\right)
= \left(\begin{array}{cccc} 1.&0&0&0\\ 0&1.&0&0\\ 0&0&1.&0\\ 0&0&0&1.\end{array}\right)
(this is similar to what you might have to do to extend an orthonormal matrix from your SVD routine to an orthogonal one.)
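for what it’s worth, here is roughly how that check-and-fix looks in the sketch (Orthogonalize runs Gram-Schmidt on the rows, keeping the earlier ones):

    v = Transpose[vecs];           (* eigenvectors as columns *)
    Chop[Transpose[v] . v]         (* not quite the identity: columns 3 and 4 overlap *)

    vecs = Orthogonalize[vecs];    (* Gram-Schmidt on the eigenvector rows *)
    v = Transpose[vecs];
    Chop[Transpose[v] . v]         (* now the 4x4 identity *)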
BTW, what changed? we originally had:
\left(\begin{array}{cccc} 0.801784&0&-0.597614&0.146095\\ -0.267261&0.816497&-0.358569&-0.266412\\ 0&-0.408248&0&-0.885173\\ -0.534522&-0.408248&-0.717137&0.352349\end{array}\right)
and then we got:
\left(\begin{array}{cccc} 0.801784&0&-0.597614&0\\ -0.267261&0.816497&-0.358569&-0.365148\\ 0&-0.408248&0&-0.912871\\ -0.534522&-0.408248&-0.717137&0.182574\end{array}\right)
only the 4th column is different; Mathematica® did not change the first 3 columns. that makes sense: Orthogonalize does a Gram-Schmidt pass, and the first 3 columns were already orthonormal, so only the 4th needed fixing.
back to our computations. we define A^Q as the \sqrt{\text{eigenvalue}}-weighted eigenvectors:
A^Q = \left(\begin{array}{cccc} 0.801784&0&-0.597614&0\\ -0.267261&0.816497&-0.358569&-0.365148\\ 0&-0.408248&0&-0.912871\\ -0.534522&-0.408248&-0.717137&0.182574\end{array}\right)
\times \left(\begin{array}{cccc} 9.16515&0.&0.&0.\\ 0.&3.4641&0.&0.\\ 0.&0.&0.&0.\\ 0.&0.&0.&0.\end{array}\right)
= \left(\begin{array}{cccc} 7.34847&0.&0.&0.\\ -2.44949&2.82843&0.&0.\\ 0.&-1.41421&0.&0.\\ -4.89898&-1.41421&0.&0.\end{array}\right)
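in the sketch, that’s a single matrix product (up to the usual sign ambiguity in eigenvectors, which may flip some columns relative to what’s printed here):

    aQ = v . w;            (* sqrt-eigenvalue-weighted eigenvectors *)
    MatrixForm[Chop[aQ]]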
the definition of S^Q might look slightly different at first glance, but it isn’t really: a Q-mode analysis of X is precisely an R-mode analysis of X^T. we compute S^Q = X^T\ A^Q (transposing only the X, by comparison with S^R = X\ A^R):
S^Q = \left(\begin{array}{cccc} -6&2&0&4\\ 3&1&-1&-3\\ 3&-3&1&-1\end{array}\right) \times \left(\begin{array}{cccc} 7.34847&0.&0.&0.\\ -2.44949&2.82843&0.&0.\\ 0.&-1.41421&0.&0.\\ -4.89898&-1.41421&0.&0.\end{array}\right)
= \left(\begin{array}{cccc} -68.5857&0&0&0\\ 34.2929&8.48528&0&0\\ 34.2929&-8.48528&0&0\end{array}\right)
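and the corresponding line of the sketch:

    sQ = Transpose[X] . aQ;    (* S^Q = X^T A^Q, an R-mode score computation applied to X^T *)
    MatrixForm[Chop[sQ]]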
to summarize, i have 4 matrices:
A^R = \left(\begin{array}{ccc} 7.48331&0.&0.\\ -3.74166&-2.44949&0.\\ -3.74166&2.44949&0.\end{array}\right)
S^R = \left(\begin{array}{ccc} -67.3498&0&0\\ 22.4499&-9.79796&0\\ 0&4.89898&0\\ 44.8999&4.89898&0\end{array}\right)
A^Q = \left(\begin{array}{cccc} 7.34847&0.&0.&0.\\ -2.44949&2.82843&0.&0.\\ 0.&-1.41421&0.&0.\\ -4.89898&-1.41421&0.&0.\end{array}\right)
S^Q = \left(\begin{array}{cccc} -68.5857&0&0&0\\ 34.2929&8.48528&0&0\\ 34.2929&-8.48528&0&0\end{array}\right)
all four are different sizes. nevertheless, they are far more similar than they appear.