Edit 5 Oct 2008: I had omitted the word “constant”; see the edit below.
The following example comes from Bartholomew et al. “The Analysis and Interpretation of Multivariate Data for Social Scientists.”
It is an excellent example with which to wrap up PCA / FA. (There’s a lot we haven’t done, but it’s almost time for me to move on.)
The example is “employment in 26 European countries”, “eurojob” for short, from chapter 5 (either 1st or 2nd edition), and data for both editions is available at http://www.cmm.bris.ac.uk/team/amssd.shtml . Please note that I am using the 1st edition of the book, and the 1st edition data.
When I first worked this example, I knew that something interesting happened, but not why; and, there was one thing I didn’t understand at all, back then.
One reason why this example is so good is that they provide both the data and a correlation matrix. In fact, most of their analyses in chapter 5 seem to be based on the correlation matrix, except in those few cases when they compute scores (for which they need the data). We get to return to our starting point, using the correlation matrix.
Here’s the raw data.
I remark that the matrix is 26×9. What we have is: for 26 countries, the % of people employed in 9 categories (agriculture, mining, etc.).
This would be a good, rather than excellent, example to end with simply because it takes us back to our roots, using the correlation matrix for its PCA analysis. I compute the correlation matrix, and I save it; but I also round it to 2 places, and show it to you:
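The computations in this post were done in Mathematica. For readers who want to follow along in another tool, here is a rough numpy sketch of this step — compute the correlation matrix of the columns, then round it to 2 places. The eurojob data itself isn’t reproduced here, so random numbers stand in for the 26×9 table; only the shapes and operations match.

```python
import numpy as np

# Stand-in for the eurojob data: the real table is 26 countries x 9
# employment categories (percentages). Random numbers play that role here.
rng = np.random.default_rng(0)
data = rng.random((26, 9))

# Correlation matrix of the columns (rowvar=False: columns are the variables).
R = np.corrcoef(data, rowvar=False)

# Round to 2 places, as in the book's printed matrix.
R2 = np.round(R, 2)

print(R2.shape)               # (9, 9)
print(np.allclose(R2, R2.T))  # still symmetric: True
```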
I do not quite match them. In fact, we differ in one number, at the end of the first row (and at the end of the first column, of course). To five places, I have the number as -0.56492. They show -.57 where I round to -.56.
That’s irrelevant. What is not irrelevant is the result of computing the eigenvalues of the rounded correlation matrix.
It could be a big deal, but it isn’t. Here be dragons, in either case. The point is that their apparent starting point is their printed (hence rounded) correlation matrix. (To be more precise: they clearly computed an eigendecomposition of the correlation matrix. I used their printed correlation matrix.) Whether I use theirs or mine, so long as it’s rounded to 2 places, life is interesting.
Let’s just do our thing: get the eigendecomposition of the rounded correlation matrix… and look at the eigenvalues.
I hope you took a deep breath right there. The smallest eigenvalue is negative. But the correlation matrix is supposed to be positive semi-definite. (Because it’s of the form XᵀX — up to a positive scalar — where X is the standardized data matrix.)
Alarm bells should be ringing in your head: eigenvalues of a positive semi-definite matrix are non-negative. Where did that negative eigenvalue come from?
Presumably the smallest eigenvalue is very close to 0, and the negative number is numerical error.
Well, sort of, but we can say more. Here are the eigenvalues of the unrounded correlation matrix:
We see that the last one is very small but positive. The computed correlation matrix is positive definite. The problem comes from rounding off the correlation matrix.
But that must be positive definite, too, you cry?
Symmetry alone is not enough to guarantee positive definiteness.
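A minimal illustration (my own, not from the book): a perfectly symmetric matrix can still have a negative eigenvalue.

```python
import numpy as np

# Symmetric, but not positive semi-definite:
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# eigvalsh is for symmetric (Hermitian) matrices; eigenvalues come back ascending.
print(np.linalg.eigvalsh(A))  # [-1.  3.]
```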
This is just the tip of an iceberg. There was nothing special about rounding to 2 digits, nothing special about a correlation matrix. It is not trivial to manipulate a positive (semi-) definite matrix and guarantee that subsequent matrices are still positive (semi-) definite. But that’s about all I’ll say about the numerical hassles.
As I said, the computed correlation matrix is, in fact, positive definite: its determinant and all its eigenvalues are positive. Mathematica® does just fine with it.
But the rounded correlation matrix – which is what they presented – is not positive definite: its determinant and one eigenvalue are negative.
So we now know two good reasons to provide data instead of just providing a correlation matrix: one, we can do more with the data; two, a rounded-off correlation matrix may not be positive semi-definite. And, bless them, they did provide the data.
Just this insight alone upgrades this to a very good example. And this negative eigenvalue was the “something interesting”. We’ll see very shortly why it happened. (Perhaps you can guess.)
To summarize. The smallest eigenvalue of the correlation matrix is extremely close to zero, and non-negative, as it should be. The real problem is that the correlation matrix is almost singular (not of full rank); then the rounded correlation matrix is almost singular too, but its smallest eigenvalue crossed to the other side of zero.
The lesson we have learned is: if the correlation matrix is barely of full rank, working from a rounded version of it could work out badly.
Now, why is the correlation matrix almost singular? I.e. why is the smallest eigenvalue so close to zero?
I swear I had no idea that this example would tie in so closely with recent posts. I knew that inadvertent reduction of rank was important, but I didn’t have a clear reason why.
What might cause the smallest eigenvalue to go to zero? All we did was compute a correlation matrix. That is, all we did was implicitly center the columns.
Ah ha! Just what are the row sums of the raw data?
Here they are:
The min, mean, and max are 99.7, 99.9923, and 100.4 respectively.
They’re not constant, but they’re awfully close to constant. Hence the smallest eigenvalue of the correlation matrix isn’t zero, but it’s awfully close to zero. I explained here that starting from (edit: constant) non-zero row sums, centering the columns leads to row sums that are identically zero.
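That mechanism is easy to see on toy data of my own devising (not the eurojob numbers): take a small matrix whose rows each sum to exactly 10, center the columns, and watch the row sums collapse to zero — taking the rank down with them.

```python
import numpy as np

# Toy data, not the eurojob table: every row sums to exactly 10,
# mimicking percentages that sum to a constant.
X = np.array([[1.0, 2.0, 7.0],
              [2.0, 4.0, 4.0],
              [3.0, 1.0, 6.0],
              [4.0, 5.0, 1.0],
              [5.0, 3.0, 2.0]])

# Center each column -- exactly what a covariance/correlation
# computation does implicitly.
Xc = X - X.mean(axis=0)

print(Xc.sum(axis=1))             # every row now sums to 0
print(np.linalg.matrix_rank(Xc))  # rank 2, not 3: one column is redundant
```

With constant row sums, subtracting the column means removes the same total from every row, so the centered rows all sum to zero — i.e. the centered columns are linearly dependent.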
So: we just got a real-life example of the inadvertent loss of rank caused by using the correlation matrix on constant-row-sum data. The computed correlation matrix is positive definite but almost singular (determinant almost zero but positive, smallest eigenvalue almost zero but positive); and then because we were nearly singular, the rounded correlation matrix could fail to be positive definite, and in fact failed to be positive semi-definite.
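The whole chain — constant row sums, singular correlation matrix, rounding pushing an eigenvalue negative — can be reproduced end to end on toy data of my own (again, not the eurojob numbers):

```python
import numpy as np

# Toy data, not the eurojob table: every row sums to exactly 10, so the
# correlation matrix is exactly singular (rank 2 out of 3).
X = np.array([[1.0, 2.0, 7.0],
              [2.0, 4.0, 4.0],
              [3.0, 1.0, 6.0],
              [4.0, 5.0, 1.0],
              [5.0, 3.0, 2.0]])

R = np.corrcoef(X, rowvar=False)

# Smallest eigenvalue of the exact correlation matrix: zero, up to
# floating-point noise.
print(np.linalg.eigvalsh(R)[0])

# Round to 2 places, as in a printed table, and look again:
R2 = np.round(R, 2)
print(np.linalg.eigvalsh(R2)[0])  # negative -- no longer positive semi-definite
```

The off-diagonal entries here are 0.3 and about -0.806; rounding the latter to -0.81 is all it takes to push the smallest eigenvalue below zero (to roughly -0.005).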
Next, we will confirm that part of their analysis based on the correlation matrix.