This is the 4th post in the Bartholomew et al. series on PCA/FA, but it is largely an overview of what we did last time. Before we plunge ahead with another set of computations, let me elaborate on the previous post. We discussed
- the choice of data corresponding to an eigendecomposition of the correlation matrix
- the pesky √(N−1) that shows up when we relate the eigenvalues λ of a correlation matrix to the principal values w of the standardized data
- the computation of scores as F^T = √(N−1) u0
- the computation of scores as F^T = X (A^T)^{−1}, where A is invertible
- the computation of scores as projections of the data onto the reciprocal basis
- different factorings of the data matrix as scores times loadings
The three computations of scores, of course, are all the same; they only look different. Although the reciprocal-basis computation looks harder at first, it simplifies so considerably as to be the easier one in practice. (I will show you an example where A is not invertible, but not in this post.)
The choice of data is actually dictated by the SVD. If we do an eigendecomposition of the correlation matrix c,
then the data must be standardized, or a constant multiple of the standardized data (e.g. small standard deviates), because that gives us that V from the eigendecomposition matches v from the SVD (to within signs of columns).
Similarly, if we do an eigendecomposition of the covariance matrix, then the data must be centered or a constant multiple of the centered data. (We could define “small centered deviates” by dividing the centered data by √(N−1).) Any multiple of centered data gives us that V from the eigendecomposition matches v from the SVD (again, to within signs of columns).
Generally, then, if we do an eigendecomposition of X^T X or X X^T, then the data must be X or a multiple of it. (In that form it looks pretty obvious, huh?)
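Here is a quick numpy check of that claim; the random data and variable names are mine, just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
N = X.shape[0]

# Standardize the data: center each column, divide by its standard deviation.
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Eigendecomposition of the correlation matrix c = Xs^T Xs / (N-1).
c = Xs.T @ Xs / (N - 1)
lam, V = np.linalg.eigh(c)       # eigh returns ascending eigenvalues...
lam, V = lam[::-1], V[:, ::-1]   # ...so sort descending, to match the SVD

# SVD of the standardized data.
u, w, vT = np.linalg.svd(Xs, full_matrices=False)
v = vT.T

# V from the eigendecomposition matches v from the SVD, to within signs of columns.
assert np.allclose(np.abs(V.T @ v), np.eye(3), atol=1e-8)
```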
We had gotten used to not seeing that N−1, or its square root √(N−1). Both Davis and Malinowski used eigendecompositions in preference to SVDs, but they computed eigendecompositions of X^T X or X X^T, not of the correlation matrix. I was able to replicate their results by using an SVD of X instead of their eigendecompositions, and we had w = √λ: the principal values were equal to the square roots of the eigenvalues.
But Harman and Jolliffe used eigendecompositions of the correlation matrix, and the square roots of the eigenvalues are proportional, instead of equal, to the principal values: w = √(N−1) √λ.
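In the same spirit, a small numpy check of that proportionality (again with made-up random data):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12
X = rng.normal(size=(N, 4))
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardized data

# Eigenvalues of the correlation matrix, descending.
c = Xs.T @ Xs / (N - 1)
lam = np.linalg.eigvalsh(c)[::-1]

# Principal (singular) values of the standardized data, descending.
w = np.linalg.svd(Xs, compute_uv=False)

# Proportional, not equal: w = sqrt(N-1) * sqrt(lambda).
assert np.allclose(w, np.sqrt(N - 1) * np.sqrt(lam))
```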
First remark: the same thing happens if we use the covariance matrix and centered data instead of the correlation matrix and standardized data. (It is not uncommon for people to refer to X^T X as a “covariance matrix”; I try to reserve that term for X^T X / (N−1) with X centered.)
Second remark: we saw that Harman’s model and Bartholomew’s is to factor the (preferably standardized, possibly centered) data matrix using a weighted eigenvector matrix A = V √Λ:

X^T = A F,

and that in fact we could write that same factoring using the SVD:

X^T = v w u^T = (v √Λ)(√(N−1) u^T).
Third remark: Jolliffe wrote his model with symbols similar to Harman’s but with two completely different meanings: he writes
Z = X A
but this time Z is the principal components instead of the data, and his loadings A are the orthogonal eigenvector matrix (i.e. my V), and X is still the data matrix (with variables in columns). That is, in my usual notation,
Y = X V,
where I have written Y for Z because the product X V is precisely the new components of the data wrt V. His “principal components” are the new components of X.
Now, we can write that as a factoring of the data matrix: because V is orthogonal, Y = X V gives us X = Y V^T.
Funny thing, that looks just like Malinowski’s X = (u w) v^T, since Y = X V = u w (to within signs of columns).
Malinowski, of course, uses his decomposition for an arbitrary X: it may be standardized, or centered, or even raw data. Jolliffe uses his decomposition preferably for a correlation matrix, possibly for a covariance matrix. That difference aside, Jolliffe and Malinowski are using the same model.
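To see Jolliffe’s version in numpy (my names; random standardized data for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 15
X = rng.normal(size=(N, 3))
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# V: the orthogonal eigenvector matrix of the correlation matrix.
c = Xs.T @ Xs / (N - 1)
_, V = np.linalg.eigh(c)

# Jolliffe's "Z = X A": project the data onto V.
Y = Xs @ V

# Because V is orthogonal, this is a factoring of the data: X = Y V^T.
assert np.allclose(Y @ V.T, Xs)
```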
Let me be explicit about something: we may start with v from the SVD and use that v in place of V in the eigendecomposition. Going the other way is harder: the SVD has consistent signs on u and v, but V may not be sign-consistent with u. In the eigendecomposition, sign consistency is automatic between V and V^T, or between v and v^T. Ah, the analog of mixing u and V in the SVD would be to mix v and V in the eigendecomposition, but I can’t imagine that we would ever decompose the correlation matrix c as either v Λ V^T or V Λ v^T, using both v and V. That’s good, because it wouldn’t work in general.
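Here is that failure in numpy: a sign-flipped copy of V is a perfectly good eigenvector matrix on its own, but mixing it with V does not reproduce c. (The random data and names are mine.)

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20
X = rng.normal(size=(N, 3))
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
c = Xs.T @ Xs / (N - 1)

lam, V = np.linalg.eigh(c)
L = np.diag(lam)

# v: the same eigenvectors with one column's sign flipped. On its own it is
# still a valid orthogonal eigenvector matrix of c.
v = V.copy()
v[:, -1] = -v[:, -1]

assert np.allclose(V @ L @ V.T, c)      # consistent signs: reproduces c
assert np.allclose(v @ L @ v.T, c)      # v with its own transpose: also fine
assert not np.allclose(v @ L @ V.T, c)  # mixing v and V: does not work
```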
Harman and Bartholomew are factoring the data as

X^T = (V √Λ)(√(N−1) u^T) = A F,

which is similar to

X^T = (V w)(u^T),

while Jolliffe and Malinowski are factoring the data as

X = (u w)(V^T).

Note that I wrote singular values w, not the √Λ that appears in A; hence “similar to” for Harman & Bartholomew et al.
The first are very nearly associating the singular values w with v, the second are associating the singular values with u. That strikes me as a fundamental difference.
What is common is that they all are factoring the data matrix as scores times loadings.
Unfortunately, Davis is only sometimes in that camp. From his definitions on p. 504 – the ones I showed – his scores times his loadings do not reproduce the data. He defines loadings
and then defines scores as projections of the data X onto the loadings:
We know that, by projecting the data onto non-orthonormal bases, Davis is computing components of the data wrt the reciprocal bases instead of wrt the bases themselves.
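Setting Davis’s particular matrices aside, here is the general fact in numpy, with a made-up invertible, non-orthonormal basis B (columns are the basis vectors):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(6, 3))   # some data, rows are observations
B = rng.normal(size=(3, 3))   # a non-orthonormal basis, columns are basis vectors

# "Projections of the data onto B": dot each row of X with each basis vector.
proj = X @ B

# The reciprocal basis: columns of R = B (B^T B)^{-1}, i.e. inv(B).T for square B.
R = np.linalg.inv(B).T

# The projections are the components of the data wrt the reciprocal basis:
# each row of X is rebuilt from proj using R, not using B.
assert np.allclose(proj @ R.T, X)

# The components wrt B itself are instead the projections onto the reciprocal basis.
comps = X @ R
assert np.allclose(comps @ B.T, X)
```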
I don’t know why he uses scores and loadings that do not factor the data matrix. About the closest I can get to factoring the data is (making w square, and u, v conformable):
which just isn’t the same as
(So close and yet so far.)
Incidentally, if you are looking in Davis, remember that his uppercase U and V are my lowercase v and u respectively, not u and v; and his singular-value matrix is my w (made square).
Those definitions notwithstanding, Davis defines different scores on pp. 536-537, and this new definition is a factoring of the data matrix; more to the point, by this new definition of the scores, they can be computed as .
That is, when he discusses “factor analysis” around p. 536, he agrees with Harman & Bartholomew et al.
Tell me again why we get that pesky √(N−1)? Answer: because we want the eigenvalues of the correlation matrix, not the singular values of the data matrix.
And why do we want them? Because the eigenvalues are numerically equal to the variances of the new data Y wrt the basis V, i.e. the variances of the columns of Y,
Y = X V.
Why do we care that the eigenvalues are equal to the variances of the new data? Answer: I suspect that people wanted to make inferences about the new components of the data without actually computing them, i.e. without computing the scores. We can tell what the variances would be just from the eigenvalues.
And yet, the new data does not depend on the eigenvalues or the singular values. Take the orthogonal eigenvector matrix V, premultiply it by X (“project the data onto the basis vectors”), and we’ve gotten new data with the new variances: it has the same total variance as the original data X, but that variance has been redistributed maximally.
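A numpy check of that property (random data, my names):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 30
X = rng.normal(size=(N, 4))
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

c = Xs.T @ Xs / (N - 1)
lam, V = np.linalg.eigh(c)
lam, V = lam[::-1], V[:, ::-1]   # descending

Y = Xs @ V   # project the data onto the basis vectors

# The variances of the new data are the eigenvalues...
assert np.allclose(Y.var(axis=0, ddof=1), lam)

# ...and the total variance is unchanged (4 here: one per standardized column).
assert np.isclose(Y.var(axis=0, ddof=1).sum(), Xs.var(axis=0, ddof=1).sum())
```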
What’s important is that the new data does not depend on the matrix A. So why did they create A instead of using the orthogonal V? Answer: the F matrix.
What does depend on the matrix A is the F matrix. That pesky factor of √(N−1) gives us that the correlation matrix of F^T = √(N−1) u0 is the identity,
i.e. F^T is not only standardized but also uncorrelated. That’s in complete contrast to Y = X V, which has the redistributed variances. Maybe that was the purpose, and the creation of A is the means to that end. (To look at it another way: u is orthogonal, the cut-down u0 is orthonormal, and √(N−1) u0 is standardized, instead of orthonormal.)
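And a numpy check of that parenthetical, writing u0 for the cut-down u (random data, my names):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 25
X = rng.normal(size=(N, 3))
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Cut-down ("economy") SVD: u0 has orthonormal columns.
u0, w, vT = np.linalg.svd(Xs, full_matrices=False)

Ft = np.sqrt(N - 1) * u0   # the scores F^T

# F^T is standardized and uncorrelated: zero means, identity correlation matrix.
assert np.allclose(Ft.mean(axis=0), 0, atol=1e-10)
assert np.allclose(Ft.T @ Ft / (N - 1), np.eye(3))
```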
Nevertheless, we can’t have it both ways: either the new data is X V, with the redistributed variances, or the new data is F^T, with unit variances. I would go so far as to say that Harman & Bartholomew et al. are assessing one thing (the variances of the new data Y) while computing another thing (the standardized data F^T).
People talk about both. We’ll apply all this to an example.