since it’s been more than a week, we should all probably review davis’ R-mode FA. (i needed to!) the challenge may be that we have kept track of not that much information but in multiple ways. bottom line is that we have
- an orthogonal 3×3 eigenvector matrix P,
- a 3×3 diagonal matrix of eigenvalues,
- and a 3×3 diagonal matrix of the square roots of those eigenvalues.
in order to be very sure that my calculations match davis’, i also have cut-down versions of those three matrices:
- an orthonormal 3×2 matrix U containing the first two eigenvectors,
- a 2×2 diagonal matrix of the non-zero eigenvalues,
- and a 2×2 diagonal matrix of the square roots of the nonzero eigenvalues.
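for anyone who wants to play along without mathematica, here’s a numpy sketch that builds all six of those matrices. the data matrix below is made up (these are not davis’ numbers): i just chose it centered, with linearly dependent columns, so that XᵀX has one zero eigenvalue as in davis’ example.

```python
import numpy as np

# made-up centered data matrix, NOT davis's numbers: each column sums
# to zero (centered), and the columns are linearly dependent, so
# X^T X has one zero eigenvalue, just as in davis's example
X = np.array([[ 1., -1.,  0.],
              [ 2.,  0., -2.],
              [-1.,  2., -1.],
              [-2., -1.,  3.]])
c = X.T @ X                             # 3x3, symmetric

# eigh returns eigenvalues in ascending order; flip to descending
w, v = np.linalg.eigh(c)
eigvals, P = w[::-1], v[:, ::-1]        # P: orthogonal 3x3 eigenvector matrix

L = np.diag(eigvals)                    # 3x3 diagonal matrix of eigenvalues
sqrtL = np.diag(np.sqrt(np.clip(eigvals, 0, None)))  # their square roots

# cut-down versions: first two eigenvectors, nonzero eigenvalues only
U = P[:, :2]                            # orthonormal 3x2
L2 = np.diag(eigvals[:2])               # 2x2
sqrtL2 = np.sqrt(L2)                    # 2x2 square roots
```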
moving on… he computes what’s called the matrix of R-mode loadings: the eigenvector matrix times the diagonal matrix of the square roots of the eigenvalues. he builds it from the cut-down matrices; i would write and compute it from the full 3×3 matrices.
here it is, my way:
i’m carrying around an extra column of zeroes. i then compute it his way:
the nonzero numbers are the same; using the smaller matrices shows no numerical effect. what we’ve seen is that i get the same answer either way. but what about his answer? if i round my answer just a little bit i get…
and then i copy what he shows (on p. 505):
we see that davis and i differ by .0001 or .0005; not at all significant. he hasn’t lost any numbers by throwing away a zero eigenvalue and the corresponding
eigenvector. my guess is that mathematica carried more precision in the computations.
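that check is easy to replay in numpy; again, the data matrix here is a made-up stand-in for davis’ (centered, with a zero eigenvalue), not his actual numbers.

```python
import numpy as np

# made-up centered data, not davis's; columns are linearly dependent,
# so X^T X has one zero eigenvalue
X = np.array([[ 1., -1.,  0.],
              [ 2.,  0., -2.],
              [-1.,  2., -1.],
              [-2., -1.,  3.]])
c = X.T @ X
w, v = np.linalg.eigh(c)
eigvals, P = w[::-1].copy(), v[:, ::-1]   # descending order
eigvals[np.abs(eigvals) < 1e-9] = 0.0     # snap the numerically-zero eigenvalue
sqrtL = np.diag(np.sqrt(eigvals))

# "my way": full 3x3 matrices, carrying a column of zeroes
A_mine = P @ sqrtL
# "his way": the cut-down 3x2 and 2x2 matrices
A_his = P[:, :2] @ sqrtL[:2, :2]

print(np.allclose(A_mine[:, :2], A_his))   # True: same nonzero numbers
print(np.allclose(A_mine[:, 2], 0.0))      # True: the extra column is zero
```

the cut-down product reproduces exactly the nonzero columns of the full product, which is the point: dropping a zero eigenvalue and its eigenvector loses nothing.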
now he computes what are called the R-mode scores: the centered data matrix X times the matrix of loadings.
using my matrices, i get…
and doing it his way i get
(again, that verified only that i get the same answer two ways.) i round it for comparison with him, getting
and to this accuracy, davis and i agree exactly (again p. 505, but i didn’t bother to copy and display his numbers per se).
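the scores computation goes the same way in numpy (made-up data again, not davis’ numbers):

```python
import numpy as np

# made-up centered data, not davis's numbers
X = np.array([[ 1., -1.,  0.],
              [ 2.,  0., -2.],
              [-1.,  2., -1.],
              [-2., -1.,  3.]])
c = X.T @ X
w, v = np.linalg.eigh(c)
eigvals, P = w[::-1].copy(), v[:, ::-1]
eigvals[np.abs(eigvals) < 1e-9] = 0.0      # snap the zero eigenvalue
sqrtL = np.diag(np.sqrt(eigvals))

A_mine = P @ sqrtL                         # full 3x3 loadings
A_his = P[:, :2] @ sqrtL[:2, :2]           # cut-down 3x2 loadings

# R-mode scores: the centered data times the loadings
S_mine = X @ A_mine                        # 4x3, last column all zero
S_his = X @ A_his                          # 4x2
print(np.allclose(S_mine[:, :2], S_his))   # True: same answer two ways
```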
now let’s consider what we did: an eigendecomposition of XᵀX (the matrix i called c), where X was “centered” data, i.e. with mean zero. as we discussed, the covariance matrix of zero-mean data X is just XᵀX / (N-1), where N is the number of observations. right? the covariance matrix of our data is…
then multiply by 3 (=N-1)…
i know, i used c when i started this example, and maybe i shouldn’t have, but i wanted a one-letter symbol for the equations involving this matrix, and i chose “c” because i knew this was effectively, if not exactly, the covariance matrix.
there is a significant similarity: XᵀX and the covariance matrix have the same eigenvectors (as close to “the same” as it gets, for things that really only specify directions in space). if i ask mathematica for an eigenvector matrix of the covariance matrix, i get:
OTOH, the eigenvector matrix we got for XᵀX was:
(ok, i’m dancing on the high wire. unit eigenvectors may differ by a sign; not being normalized, these eigenvectors could have differed by arbitrary scale factors! by finding the smallest possible integer components, mathematica gave me the same answers for the two computations. i wasn’t really lucky, per se: i was sort of expecting that it would work out that way. otherwise, i could have converted both sets to orthonormal vectors in order to compare them.)
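numpy, unlike mathematica, returns orthonormal eigenvectors, so the corresponding check there is “same up to sign”; once more with made-up data, not davis’ numbers:

```python
import numpy as np

# made-up centered data, not davis's numbers
X = np.array([[ 1., -1.,  0.],
              [ 2.,  0., -2.],
              [-1.,  2., -1.],
              [-2., -1.,  3.]])

c = X.T @ X
cov = c / (X.shape[0] - 1)          # covariance of zero-mean data

# numpy returns orthonormal eigenvectors, so between the two
# computations the columns can differ only by sign
_, V_c = np.linalg.eigh(c)
_, V_cov = np.linalg.eigh(cov)
print(np.allclose(np.abs(V_c), np.abs(V_cov)))   # True: same directions
```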
there is also a significant dissimilarity: the eigenvalues of cov(X) and of XᵀX differ by a factor of N-1, where N is the number of observations. (this is the N-1 in the computation of the sample variance.) just as the matrices differ by a factor of N-1,
so do the eigenvalues:
eigenvalues of XᵀX = (N-1) × eigenvalues of cov(X).
for our example, here they are:
the eigenvalues of XᵀX are…
the eigenvalues of cov are…
and 3 times the eigenvalues of cov gives…
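the factor of N-1 is easy to confirm in numpy (with my made-up stand-in data, N = 4, so the factor is 3):

```python
import numpy as np

# made-up centered data, not davis's numbers; N = 4 observations
X = np.array([[ 1., -1.,  0.],
              [ 2.,  0., -2.],
              [-1.,  2., -1.],
              [-2., -1.,  3.]])
N = X.shape[0]

c = X.T @ X
cov = np.cov(X, rowvar=False)       # numpy's sample covariance divides by N-1

w_c = np.linalg.eigvalsh(c)
w_cov = np.linalg.eigvalsh(cov)

# eigenvalues of X^T X = (N-1) * eigenvalues of cov(X)
print(np.allclose(w_c, (N - 1) * w_cov))   # True
```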
so, whether we use XᵀX of zero-mean data X, or whether we use the covariance matrix of the raw data, has no effect on the eigenvectors, and only a common scaling effect on the eigenvalues. recall, by contrast, that whether we use the covariance matrix or the correlation matrix has unpredictable effects on the eigenvectors and eigenvalues.
but what are we doing using the covariance matrix or something like it? jolliffe and harman both think we should be using the correlation matrix. and jolliffe has shown us how significant the difference can be between using the correlation matrix or the covariance matrix. and using XᵀX exacerbates the problem, because it has even larger eigenvalues than the covariance matrix.
this is why i will say, over and over, your choice of preprocessing is more important than the subsequent eigendecomposition or singular value decomposition, or your scaling of the eigenvectors, or whether you throw away eigenvectors associated with zero eigenvalues. preprocessing is far more important than whether we write the eigenvalue matrix as a 2×2 or as a 3×3. (all that is, however, my opinion, and i am an outsider to this field.)
we definitely need to talk about this. for the record, i am certain that davis knows exactly what he’s doing, and that – jolliffe and harman notwithstanding – it may be correct to use centered data (or the covariance matrix) for some analyses.