I want to look at reconstituting the data. Equivalently, I want to look at setting successive singular values to zero.

This example was actually built on the previous one. Before I set the row sums to 1, I had started with

I’m going to continue with Harman’s and Bartholomew’s model: Z = A F, where Z = X^T, X is the standardized data, and A is an eigenvector matrix weighted by the square roots of the eigenvalues of the correlation matrix of X.

I want data with one eigenvalue so large that we could sensibly retain only that one. Let me show you how I got that.

Get the SVD (Singular Value Decomposition) of t1… and look at w:

OK, keep u and v – just because they’re handy – but redefine w, and compute a new data matrix using this w:

Let t2 = u w v^T:

Standardize t2 to get data X (“the data”):
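As a sketch of this construction in NumPy (the names t1, t2, u, w, v follow the text, but the starting matrix and the new singular values here are made up for illustration):

```python
import numpy as np

# Stand-in for the earlier example's data matrix t1 (made up here).
rng = np.random.default_rng(0)
t1 = rng.normal(size=(5, 3))

# Get the SVD of t1 and look at w.
u, w, vt = np.linalg.svd(t1, full_matrices=False)
print(w)

# Keep u and v, but redefine w so the first singular value dominates
# (these replacement values are assumed, not from the post).
w_new = np.array([10.0, 2.0, 0.5])
t2 = u @ np.diag(w_new) @ vt

# Standardize t2 column-by-column to get "the data" X.
X = (t2 - t2.mean(axis=0)) / t2.std(axis=0, ddof=1)
```

The standardization uses the sample standard deviation (ddof=1); either convention works as long as it is used consistently.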

Get the SVD of X, X = u w v^T, but look at w… (we’ll see u and v later)

Get the eigenvalues of the correlation matrix and look at the percentages:

so our first eigenvalue is 81% of the sum, the second is nearly all the rest, 19%.
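The percentages are just each eigenvalue over the sum of all of them. A minimal sketch, using made-up data (so the 81% / 19% split of the post will not be reproduced, only the computation):

```python
import numpy as np

# Hypothetical standardized data X.
rng = np.random.default_rng(1)
t = rng.normal(size=(6, 3))
X = (t - t.mean(axis=0)) / t.std(axis=0, ddof=1)

# Eigenvalues of the correlation matrix, largest first,
# expressed as percentages of their sum.
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]
percent = 100 * eigvals / eigvals.sum()
print(percent)
```

For a correlation matrix the eigenvalues sum to the number of variables, so the percentages always total 100.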

Get the diagonal matrix of the square roots of the eigenvalues, Λ^(1/2):

Get A and the scores F^T, except that I never want more than 3 columns. (The first 3 columns of 2u are the components of the new data wrt the A basis.)

Check by confirming that X = F^T A^T, i.e. that we have factored the data matrix.
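Here is a sketch of that check with made-up data. I am assuming the correlation matrix is X^T X / (N−1) for standardized X, so that its eigenvalues are w² / (N−1); under that convention the scores are F^T = √(N−1) u:

```python
import numpy as np

# Made-up standardized data.
rng = np.random.default_rng(2)
t = rng.normal(size=(6, 3))
X = (t - t.mean(axis=0)) / t.std(axis=0, ddof=1)
N = X.shape[0]

u, w, vt = np.linalg.svd(X, full_matrices=False)

# Eigenvalues of the correlation matrix (assumed X^T X / (N-1)).
lam = w**2 / (N - 1)

# A: eigenvectors weighted by square roots of eigenvalues.
A = vt.T @ np.diag(np.sqrt(lam))

# The scores F^T (assumed sqrt(N-1) * u under this convention).
Ft = np.sqrt(N - 1) * u

# We have factored the data matrix: X = F^T A^T.
assert np.allclose(Ft @ A.T, X)
```

The two factors of √(N−1) cancel in the product, so F^T A^T = u w v^T = X regardless of the eigenvalue convention, as long as A and F^T use the same one.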

Let’s quickly compute the reconstituted data from 1 and 2 singular values. I could use square forms of w (w1 and w2) with u and v cut down to u1 and v1 (or u2, v2) to be conformable. To be specific,

and compute X1 = u1 w1 v1^T:

or take

and then X2 = u2 w2 v2^T:

You might note that my “1” and “2” indicate how many singular values I retained.

But when I said I would leave u and v untouched, I meant that I can keep u and v full-size

and use the following matrices for w, all the same size as X. The full-size w is:

and the two others are:

It’s easy enough to reconstitute the data that way,

and we do get the same answers for X1 and X2.
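A sketch of both routes, with made-up data: cut u, w, v down to be conformable, or keep u and v full-size and zero out the trailing singular values. The two reconstitutions agree:

```python
import numpy as np

# Made-up standardized data.
rng = np.random.default_rng(3)
t = rng.normal(size=(6, 3))
X = (t - t.mean(axis=0)) / t.std(axis=0, ddof=1)

u, w, vt = np.linalg.svd(X, full_matrices=False)

def reconstitute_cut(k):
    # square k-by-k form of w, with u and v cut down to be conformable
    return u[:, :k] @ np.diag(w[:k]) @ vt[:k, :]

def reconstitute_full(k):
    # full-size w with the trailing singular values set to zero
    w_k = np.concatenate([w[:k], np.zeros(len(w) - k)])
    return u @ np.diag(w_k) @ vt

X1 = reconstitute_cut(1)
X2 = reconstitute_cut(2)
assert np.allclose(X1, reconstitute_full(1))   # same answer either way
assert np.allclose(X2, reconstitute_full(2))
```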

Now, what if I take just the first two columns of F^T, and the first two columns of A (i.e. the first two rows of A^T)? (We saw this last time, too.) That is, I cut F^T down to:

and I cut A^T down to

Do we get X2? Yes.

What if I take just the first column of F^T, and the first row of A^T? That is, I take F^T to be:

and A^T to be

Is their product equal to X1? Yes.
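Both truncations can be checked in one sketch (made-up data; same assumed convention as before, with F^T = √(N−1) u and A the weighted eigenvector matrix):

```python
import numpy as np

# Made-up standardized data.
rng = np.random.default_rng(4)
t = rng.normal(size=(6, 3))
X = (t - t.mean(axis=0)) / t.std(axis=0, ddof=1)
N = X.shape[0]

u, w, vt = np.linalg.svd(X, full_matrices=False)
A = vt.T @ np.diag(w / np.sqrt(N - 1))   # loadings
Ft = np.sqrt(N - 1) * u                  # scores F^T

for k in (1, 2):
    # Xk from the truncated SVD...
    Xk = u[:, :k] @ np.diag(w[:k]) @ vt[:k, :]
    # ...equals the first k columns of F^T times the first k rows of A^T.
    assert np.allclose(Ft[:, :k] @ A.T[:k, :], Xk)
```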

What we just saw is obvious in retrospect, but worth stating explicitly. When we throw away small singular values or eigenvalues, we do not change the scores and loadings. What we change, instead, is the number of scores and loadings used.

**When we throw away a nonzero singular value or eigenvalue, we are throwing away a scores-and-loadings pair. We don’t change any of the other scores and loadings.**

We have 3 pairs of “scores & loadings”, and then for each pair we compute a product. The individual pairs, and the 3 products, are not affected by our decision to “drop a factor”. Instead of adding all three products, we may choose to add only the first two products, or to keep only the first product.
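That sum-of-products view can be sketched directly: each scores-and-loadings pair contributes one rank-1 matrix, and dropping a factor just drops its term from the sum (made-up data; same assumed convention for F^T and A as above):

```python
import numpy as np

# Made-up standardized data.
rng = np.random.default_rng(5)
t = rng.normal(size=(6, 3))
X = (t - t.mean(axis=0)) / t.std(axis=0, ddof=1)
N = X.shape[0]

u, w, vt = np.linalg.svd(X, full_matrices=False)
A = vt.T @ np.diag(w / np.sqrt(N - 1))   # loadings
Ft = np.sqrt(N - 1) * u                  # scores F^T

# One rank-1 product per scores-and-loadings pair.
products = [np.outer(Ft[:, i], A[:, i]) for i in range(3)]

# All three products: the full data.
assert np.allclose(sum(products), X)

# Only the first two products: X2, the 2-factor reconstitution.
X2 = u[:, :2] @ np.diag(w[:2]) @ vt[:2, :]
assert np.allclose(sum(products[:2]), X2)
```

The individual products are computed once; the only decision is how many of them to add.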

It is probably customary to say that we are retaining 1 or 2 factors when we retain 1 or 2 scores-loadings pairs.

That “the scores” are selected columns of 2u tells us that the individual scores & loadings cannot change no matter how many factors we keep.

Throwing away – choosing not to use – a nonzero scores-loadings pair does affect the reconstituted data. X2 is close to X, but not the same, and X1 is quite different.

To look at that another way, the means of X2 are zero, and the variances of X2 (2 factors retained) are:

“Not far from 1” is an understatement.

The means of X1 are also zero, but the variances of X1 (1 factor retained) are:

You might note that the X1 data is still centered, but it is no longer standardized. That was lost when we threw away a fairly large singular value.
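A sketch of that check, with made-up data (so the actual variance values differ from the post’s, but the pattern is the same: both reconstitutions stay centered, and the variances drift away from 1 as factors are dropped):

```python
import numpy as np

# Made-up standardized data with correlated columns.
rng = np.random.default_rng(6)
t = rng.normal(size=(6, 3)) @ np.array([[3.0, 1.0, 0.5],
                                        [0.5, 1.0, 0.2],
                                        [0.1, 0.3, 1.0]])
X = (t - t.mean(axis=0)) / t.std(axis=0, ddof=1)

u, w, vt = np.linalg.svd(X, full_matrices=False)
X1 = u[:, :1] @ np.diag(w[:1]) @ vt[:1, :]
X2 = u[:, :2] @ np.diag(w[:2]) @ vt[:2, :]

print(X2.mean(axis=0), X2.var(axis=0, ddof=1))  # means ~0; variances no longer exactly 1
print(X1.mean(axis=0), X1.var(axis=0, ddof=1))  # means ~0; total variance reduced further
```

Centering survives truncation because every column of u (for a nonzero singular value) is orthogonal to the vector of ones when X is centered; the lost variance is exactly the discarded singular values’ share.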

Next, I think I will show you what I reckon I would do, today, for PCA / FA.
