Less is more. And more is huge. It is easy for me to end up with huge posts to put out here, but I’d rather go with smaller.

Let’s get started with PCA / FA, principal components analysis and factor analysis.

In case it matters, I am using Mathematica to do these computations.

Here is an example, the first of several. This comes from Harman’s “factor analysis”. In order to emphasize the distinction between PCA and FA, he has one example of principal component analysis, and this is it.

Let me tell you up front what he did:

- Get some data;
- Compute its correlation matrix;
- Find the eigenstructure of the correlation matrix;
- Weight each eigenvector by the square root of its eigenvalue;
- Tabulate the results;
- Plot the original variables in the space of the two largest principal components.
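The steps above (minus the plot) can be sketched in code. The author works in Mathematica, but here is a NumPy version; the data is a random stand-in for Harman's table, which isn't reproduced here:

```python
import numpy as np

# Stand-in for Harman's data: n = 12 observations of k = 5 variables,
# observations in rows. (Random; not the actual table.)
rng = np.random.default_rng(0)
D = rng.normal(size=(12, 5))

# Correlation matrix of the five variables.
r = np.corrcoef(D, rowvar=False)          # 5 x 5

# Eigenstructure; eigh returns eigenvalues in ascending order,
# so reverse to put the largest first.
eigvals, P = np.linalg.eigh(r)
eigvals, P = eigvals[::-1], P[:, ::-1]

# Weight each eigenvector by the square root of its eigenvalue:
# this scales column j of P by sqrt(eigvals[j]).
A = P * np.sqrt(eigvals)

# Sanity checks: the weighted eigenvectors reproduce r,
# and the eigenvalues sum to k (the trace of a correlation matrix).
ok = np.allclose(A @ A.T, r) and np.isclose(eigvals.sum(), 5.0)
```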

I also need to say that his conceptual model is written

Z = A F,

and from the dimensions of the matrices it is clear that

A is square, k by k

Z and F are the same shape, with k rows.

We infer from its size that A will be derived from the eigenvector matrix, and that Z is derived from the given data matrix. From the shapes, we conclude that Z has observations in columns, rather than in rows. (If you’re used to econometrics or regression, you expect the transpose, observations in rows.)

But this is a fine thing, because we recognize that Z = A F is a change-of-basis equation for corresponding columns of Z and F; A is a transition matrix mapping new components (any one column of F) to old components (a column of Z).

The following data comes from Harman, p. 14. We would customarily say it has 5 variables (k = 5) and 12 observations (n=12); to be precise, we mean it has 12 observations per variable, since the total number of data points is 60. I have chosen to use regression notation, and denote the number of variables by k.

Here’s the data.

I have called it D, for data matrix; and I have displayed it with observations in rows. This is what I’m used to from regression analysis. This is also how Harman displayed it. The point of the discussion about Z is that Z is a transposed matrix, relative to D: D has variables in columns, Z has variables in rows. In addition, if Harman were to compute Z – which he does not – it would almost certainly be standardized data. (That is, subtract the mean of each variable, and then divide each by its sample standard deviation.)
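Standardization is short in code; a minimal NumPy sketch (the toy matrix is mine, not Harman's data):

```python
import numpy as np

# Toy stand-in for D: 3 observations of 2 variables, obs in rows.
D = np.array([[1.0, 2.0],
              [3.0, 5.0],
              [5.0, 11.0]])

# Subtract each column's mean and divide by its sample standard
# deviation (ddof=1 divides by n - 1).
Z_t = (D - D.mean(axis=0)) / D.std(axis=0, ddof=1)

# Harman's Z has variables in rows, so it is the transpose.
Z = Z_t.T

# Each standardized variable now has mean 0 and sample variance 1.
```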

He printed means and standard deviations. I checked his printed means…

Our numbers agree: despite using N in his discussion, he correctly used N-1 in the computation of the sample variance. I would have been stunned to disagree with his means, but he led me to expect different computed standard deviations. I am pleasantly surprised.
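The N versus N-1 distinction is exactly population versus sample standard deviation; in NumPy it is the ddof argument. A quick check on a textbook toy vector (not Harman's data):

```python
import numpy as np

# Eight data points; mean is 5, sum of squared deviations is 32.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

sd_pop    = x.std()         # divides by N:     sqrt(32 / 8) = 2.0
sd_sample = x.std(ddof=1)   # divides by N - 1: sqrt(32 / 7) ~ 2.138

# Harman's printed standard deviations use the N - 1 version.
```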

We compute the correlation matrix r:

Now (jumping from his p. 14 to p. 135) we get the eigenstructure of the correlation matrix r. Here is an orthogonal eigenvector matrix:

(I remind you that eigenvectors – even orthonormal ones – are not unique; as it happens, every one I got is the negative of his. That doesn’t matter, but for subsequent computations, I changed the sign of my matrix.)

(We have “diagonalized” the correlation matrix r. That is, we have computed a matrix P whose columns are unit eigenvectors, and a diagonal matrix L whose diagonal elements are the eigenvalues, such that r = P L P^T.)
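That diagonalization is easy to verify numerically; here is a NumPy check on a small correlation matrix whose values I made up for illustration:

```python
import numpy as np

# A made-up 3 x 3 correlation matrix (symmetric, ones on the diagonal).
r = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Columns of P are unit eigenvectors; L holds the eigenvalues.
L, P = np.linalg.eigh(r)

# The diagonalization: r = P L P^T, with P orthogonal.
check1 = np.allclose(P @ np.diag(L) @ P.T, r)
check2 = np.allclose(P.T @ P, np.eye(3))
```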

We also agree on the eigenvalues (which are unique):

Now he weights each eigenvector by the square root of its eigenvalue; e.g., the first eigenvector will have length √λ1 while the fifth will have length √λ5.

We end up with the following (rounded) matrix of weighted eigenvectors (where I have also multiplied all of mine by -1):
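One way to see what the weighting does: after multiplying each unit eigenvector by the square root of its eigenvalue, the j-th column of A has length √λj. A NumPy check with stand-in data:

```python
import numpy as np

# Random stand-in for the data; not Harman's table.
rng = np.random.default_rng(1)
r = np.corrcoef(rng.normal(size=(12, 5)), rowvar=False)

lam, P = np.linalg.eigh(r)
lam, P = lam[::-1], P[:, ::-1]        # largest eigenvalue first

# Weight each unit eigenvector by the square root of its eigenvalue.
A = P * np.sqrt(lam)

# Column j of A now has length sqrt(lam[j]).
lengths = np.linalg.norm(A, axis=0)
```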

He presents the following summary table, whose center is precisely our weighted eigenvectors:

I’ll have a lot to say about that table, but not yet. Oh, I should point out that the row labeled “variance” consists of the five eigenvalues and their sum (yes, exactly 5: the trace of the correlation matrix, since each standardized variable has variance 1).

There’s one more thing he did. Back on p. 16, as a preview of things to come, he wrote the second variable as a weighted combination of the principal components:

z2 = a21 P1 + a22 P2 + a23 P3 + a24 P4 + a25 P5

(where he used P rather than F to emphasize that these were principal components from PCA rather than factors from FA; you should read the P’s as F’s, but I don’t want to misrepresent exactly what he wrote).

But that is precisely the equation Z = A F written for the second row of the Z matrix: multiply the second row of A by each of the columns of F. I am happy to see this. It explains why people never display Z, and never compute F: they just want to describe the old variables in terms of the new ones.
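The row-by-row reading of Z = A F is easy to check mechanically; a NumPy sketch with random stand-ins for A and F (not Harman's matrices):

```python
import numpy as np

# Random stand-ins: A is 5 x 5 (weighted eigenvectors), F is 5 x 12
# (new variables, observations in columns).
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
F = rng.normal(size=(5, 12))

Z = A @ F

# The second row of Z is the second row of A times each column of F,
# i.e. z2 = a21 F1 + a22 F2 + a23 F3 + a24 F4 + a25 F5.
z2 = A[1, :] @ F
```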

Let us select all the coefficients of P1 and P2, not because only two new variables are important, but because it’s easy to plot things in 2D. By weighting the eigenvectors by √λ, he has emphasized the first eigenvector over the second, and the second over the third, etc. So let’s see what the first two tell us.

Here are the first two columns of the weighted eigenvector matrix A, i.e. the first two weighted eigenvectors.

Let me summarize Harman’s analysis to this point. For this example, he has:

- given us data: 5 variables, 12 observations;
- computed the eigenstructure of the correlation matrix;
- Weighted each eigenvector by the square root of its eigenvalue;
- Tabulated the results;
- Plotted the original variables in the space of the first two principal components.

What has he not done?

- He did not actually compute Z or F;
- He did not explain why his table looks the way it does;
- He did not explain the graph he drew;
- I wonder if the fact that z4 is different from the two clusters suggests that there really ought to be three new variables P1, P2, P3 instead of just two;
- Maybe we should do a 3D graph.

Enough for now. Next, I’ll talk about what he did.
