What about redistributing the variance?
We learned from harman that if we use the orthogonal eigenvector matrix v to compute new data, the result has redistributed the original variance according to the eigenvalues; and we also learned that if we use the weighted eigenvector matrix instead, we get new variables of unit variance.
(i think i thought this required us to start with the correlation matrix; it does not.)
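Before rerunning Davis' numbers, here is a quick sanity check of both claims on made-up centered data (a sketch, not Davis' example; the random data and the reading of "using the weighted eigenvector matrix" as dividing the scores by √λ are my assumptions):

```python
import numpy as np

# Made-up centered data standing in for a real example.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
X = X - X.mean(axis=0)            # center the data
N = X.shape[0]

cov = X.T @ X / (N - 1)           # covariance matrix
lam, v = np.linalg.eigh(cov)      # eigenvalues (ascending), v orthogonal

# New data via the orthogonal eigenvector matrix:
# its variances are exactly the eigenvalues.
Y = X @ v
print(np.var(Y, axis=0, ddof=1))  # matches lam
print(lam)

# Dividing each new variable by sqrt(lambda) gives unit variances --
# one reading of "using the weighted eigenvector matrix".
F = Y / np.sqrt(lam)
print(np.var(F, axis=0, ddof=1))  # all ones
```

note that this works for the covariance matrix, not just the correlation matrix, which is the point of the parenthetical above.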
let’s see this again for davis’ centered data X:
We have variances…
and the sum of the variances is…
Now is a good time to recall X^T X:
The sum of the diagonal elements is 96, and if we divide by 3 (= N-1, for the covariance matrix instead of for X^T X), gee, we get 32.
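The bookkeeping behind that is just: trace(X^T X) / (N-1) = sum of the variances. A minimal check on made-up data (Davis' actual matrix, with trace 96, isn't reproduced here):

```python
import numpy as np

# Hypothetical small data set; only the trace relation matters.
X = np.array([[ 2., -1.,  0.],
              [-1.,  3., -2.],
              [ 0., -2.,  1.],
              [-1.,  0.,  1.]])
X = X - X.mean(axis=0)               # center
N = X.shape[0]                       # here N = 4, so N-1 = 3

trace_XtX = np.trace(X.T @ X)        # sum of squared entries of X
sum_vars  = np.var(X, axis=0, ddof=1).sum()

# trace(X^T X) / (N-1) equals the sum of the variances
print(trace_XtX / (N - 1), sum_vars)
```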
stop for a moment. We have two, admittedly proportional, sets of eigenvalues, one set from the covariance matrix, the other from X^T X. this gives us two different possible sets of weights to apply to the eigenvector matrix v. (recall that v is an eigenvector matrix both for X^T X and for the covariance matrix of X, as well as coming out of the SVD of X.)
Let’s stay with what i know: our new data, using the orthogonal eigenvector matrix v, is X v.
The variances are…
and we don’t even need to consciously add those: the sum is 32. We have redistributed the original variance among our two new variables.
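That "the sum is unchanged" is no accident: an orthogonal change of basis preserves the trace of the covariance matrix. A sketch on made-up data (again, not Davis' numbers):

```python
import numpy as np

# An orthogonal change of basis redistributes variance
# but preserves its total.
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 2))
X = X - X.mean(axis=0)

cov = np.cov(X, rowvar=False)
lam, v = np.linalg.eigh(cov)
Y = X @ v                                # new data, orthogonal v

print(np.var(X, axis=0, ddof=1).sum())   # original total variance
print(np.var(Y, axis=0, ddof=1).sum())   # same total, redistributed
```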
Now, everyone seems to say that the √λ-weighted eigenvectors have variances equal to the eigenvalues. What those eigenvectors really have is lengths equal to the square roots of their eigenvalues.
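That distinction — lengths of vectors, not variances of data — is easy to verify directly (made-up covariance matrix; the claim itself is general):

```python
import numpy as np

# The sqrt(lambda)-weighted eigenvector matrix A = v @ diag(sqrt(lam)).
# Its COLUMNS have length sqrt(lambda_i) -- a statement about vectors,
# not about the variance of any data set.
cov = np.array([[4., 2.],
                [2., 3.]])
lam, v = np.linalg.eigh(cov)          # v has unit-length columns
A = v @ np.diag(np.sqrt(lam))

lengths = np.linalg.norm(A, axis=0)   # column lengths
print(lengths)
print(np.sqrt(lam))                   # the same numbers
```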
here indeed be dragons.
as linear algebra is my touchstone in general for all of this, that redistributed variance using the orthogonal eigenvector matrix is my vorpal blade (dragon, jabberwock, no big deal – snicker-snack either way). any different choice of basis must lead to different results.
we looked at that redistribution of variance in harman. We learned then that the correct way to get new data with redistributed variance is to use the orthogonal eigenvector matrix. We just reconfirmed it for davis’ example.
Instead of actually kicking one of the dragons awake, let’s try counting its scales while it sleeps. If the R-mode scores are some form of data (and they are), what are their variances?
the variances of the first two columns are
yikes! if nothing else, that should convince you that the R-mode scores are weird.

Life is so much simpler if we use the orthogonal eigenvector matrix. The new data is trivial to compute (X v), and we can see that it has redistributed the variance of the original data.

Trying to use any weighted eigenvector matrix is dicey if there is a zero eigenvalue (because our transition matrix is not invertible); and confusing if we used X^T X instead of the covariance matrix (because they have different eigenvalues).

i will eventually calculate the data wrt both of the weighted eigenvector matrices. The dragons will be awake and aloft and pretty to look at.
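To see the zero-eigenvalue trouble concretely, here is a sketch (made-up data, not Davis'): give X a linearly dependent column, and the √λ-weighted matrix picks up a zero column, so it cannot be an invertible transition matrix.

```python
import numpy as np

# Data with a linearly dependent third column.
rng = np.random.default_rng(2)
X = rng.standard_normal((6, 2))
X = np.column_stack([X, X[:, 0] + X[:, 1]])
X = X - X.mean(axis=0)

cov = np.cov(X, rowvar=False)
lam, v = np.linalg.eigh(cov)
lam = np.clip(lam, 0.0, None)        # clear tiny negative round-off
A = v @ np.diag(np.sqrt(lam))        # weighted eigenvector matrix

print(lam[0])                        # smallest eigenvalue: numerically zero
# rank 2, not 3 -- A is singular, hence not invertible
print(np.linalg.matrix_rank(A, tol=1e-6))
```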