## Introduction

edited 16 Jan 2009: I found a place where I called F^T the loadings instead of the scores. That’s all.

I want to run thru what is admittedly a toy case, but this seems to be where I stand on the computation of PCA / FA.

Recall the raw data of example 9:

Get the mean and variance of each column. The means are

and the variances are

We see that the raw data will differ from the centered data, and that will differ from the standardized data. Let’s do the standardized data first, because that’s what we’ve been doing most recently.

Here’s what I’m going to do. For a data matrix X

- get the SVD,
- get the eigenvalues (in 2 cases, that’s the correlation matrix or the covariance matrix)
- form the diagonal matrix
- form the weighted eigenvector matrix
- form the ~~loadings~~ scores
- form the new data Y wrt v, Y = u w
- form Davis’ loadings
- form Davis’ scores.

I will also keep the largest “few” singular values w

and form the reconstituted data and the residual data , where:

.

From given raw data, I’m going to do all that for the raw data, the centered data, and the standardized data. Chances are only one of those is really appropriate for any given problem, but I don’t know how to decide, so I’m going to use my really cheap computing power to see it all.
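Since the whole plan is matrix arithmetic, here is a minimal NumPy sketch of it, run on a made-up 5×3 matrix standing in for the example 9 data (the actual numbers, and the centered and raw variants, are not reproduced here). The variable names follow the post: A, F^T, Y, A^R, S^R.

```python
import numpy as np

# Made-up 5x3 data as a stand-in for example 9; the post's actual
# numbers are not reproduced here.
X = np.array([[2., 4., 1.],
              [3., 1., 5.],
              [0., 2., 2.],
              [4., 3., 0.],
              [1., 5., 4.]])
N = X.shape[0]

# standardize: subtract the column means, divide by the column standard deviations
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# the SVD: Xs = u diag(w) v^T
u, w, vT = np.linalg.svd(Xs, full_matrices=False)
v = vT.T

# eigenvalues of the correlation matrix, recovered from the singular values
eigs = w**2 / (N - 1)

# weighted eigenvector matrix: each column of v scaled by sqrt(eigenvalue)
A = v * np.sqrt(eigs)

# scores F^T = sqrt(N-1) u, and new data Y = u w = Xs v
FT = np.sqrt(N - 1) * u
Y = Xs @ v

# Davis' R-mode loadings and scores (w as weights instead of sqrt(eigenvalues))
AR = v * w
SR = Xs @ AR
```

The checks fall out immediately: Z = A F (i.e. Xs = F^T A^T), Y = u w, and the covariance matrix of F^T is the identity.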

That lays out everyone’s models. For Harman and Bartholomew, we’re getting

(i.e. Z = A F)

for Jolliffe and Malinowski we’re getting

(i.e. X = R C and Y = X v)

and I’ve included Davis’s R-mode scores and loadings.

This is complicated by my doing it three times, but that encourages me to check things along the way. Two of the obvious checks are

but we can also check that

- Y has redistributed the variance
- F^T is uncorrelated: its covariance matrix is the identity

but we have to be very careful about the last two when we work with the raw data.

Finally, I will look across loadings v, A, A^R and across corresponding scores Y, F^T, S^R

Gentlemen, start your engines.

## Standardized

I standardize the data and call it Xs:

Although we saw most of these calculations in the previous post, my focus is a little different this time. Among other things, we will see just how few things need to be computed.

Get the SVD , labeled by “s”. For now, look only at w…

Get the eigenvalues of the correlation matrix:

If I simply do a pie chart, I don’t have to explicitly compute percentages:

Form the diagonal matrix of square roots …

Form the weighted eigenvector matrix A

Get F^T as the first three columns of 2u (where 2 = √(N−1), N = 5). This is as close to the u matrix itself as I need to get.

Check it by confirming that we have . We do.

We compute the new data Y wrt the v basis (Y = u w = Xv):

And we confirm the product – which amounts to confirming the SVD. (That is, from Y = u w, the SVD becomes .) Not a bad thing to do considering how many u’s and w’s I have floating around.
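Written out in NumPy (on a stand-in matrix, not the post’s data), the confirmation is two checks: the two routes to Y agree, and Y v^T gives back Xs.

```python
import numpy as np

# stand-in data (not the post's Xs), standardized the usual way
X = np.array([[2., 4., 1.],
              [3., 1., 5.],
              [0., 2., 2.],
              [4., 3., 0.],
              [1., 5., 4.]])
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

u, w, vT = np.linalg.svd(Xs, full_matrices=False)

Y_from_uw = u * w        # Y = u w  (w broadcast across the columns of u)
Y_from_Xv = Xs @ vT.T    # Y = Xs v

# from Y = u w, the SVD reads Xs = Y v^T
Xs_back = Y_from_uw @ vT
```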

Finally, Davis defines R-mode loadings A^R and scores S^R:

,

so we can compute them:

OK, he would of course drop the zero columns:

In case I’ve never pointed it out before, although Davis’ scores S^R are strange, his loadings A^R are exactly analogous to A. The difference is in using w for weights instead of √λ.
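That analogy is easy to exhibit numerically (stand-in data again): A weights the columns of v by √λ, A^R weights them by w, and since λ = w²/(N−1) each column of A^R is √(N−1) times the corresponding column of A.

```python
import numpy as np

# stand-in data, not the post's example 9
X = np.array([[2., 4., 1.],
              [3., 1., 5.],
              [0., 2., 2.],
              [4., 3., 0.],
              [1., 5., 4.]])
N = X.shape[0]
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

u, w, vT = np.linalg.svd(Xs, full_matrices=False)
v = vT.T

eigs = w**2 / (N - 1)
A = v * np.sqrt(eigs)   # loadings: columns of v weighted by sqrt(eigenvalues)
AR = v * w              # Davis' R-mode loadings: weighted by w instead
```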

We should display the three definitions of loadings A, v, A^R (and this is when I choose to see v from the SVD computed way back at the beginning):

Since we have no zeroes, we can display the ratios (no, I wouldn’t usually do this):

I hope there were no surprises there. Each column of A or A^R is a multiple of the corresponding column of v.

We should display the corresponding scores (“new data”) F^T, Y, S^R (where F^T and Y are new components of X wrt A and v resp., but S^R is new components wrt the reciprocal basis of A.)

Since we have no zeroes, we can display the ratios:

Again, one set of scores is a column weighted multiple of any other.

We could confirm that the new data Y are centered and have redistributed variances equal to the eigenvalues: each column of Y does have mean 0, and the variances are

and those are indeed the eigenvalues

And we could confirm that F^T is standardized: each column does in fact have mean 0 and variance 1. In fact, far more is true: F^T is uncorrelated, its covariance matrix is the identity.
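On the stand-in data, those claims about F^T are one-liners to verify:

```python
import numpy as np

# stand-in data, not the post's example 9
X = np.array([[2., 4., 1.],
              [3., 1., 5.],
              [0., 2., 2.],
              [4., 3., 0.],
              [1., 5., 4.]])
N = X.shape[0]
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

u, w, vT = np.linalg.svd(Xs, full_matrices=False)

FT = np.sqrt(N - 1) * u               # the scores

col_means = FT.mean(axis=0)           # each ~ 0
cov_FT = np.cov(FT, rowvar=False)     # the identity: uncorrelated, unit variances
```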

What about reconstituted data? I would look at w in preference to :

We know by now that setting to zero will have a very small effect on the data: the reconstituted matrix would differ from the original by a root sum of squares = . So I’m going to set all but the first w equal to zero, just to get reconstituted data that differs nontrivially from the original.

There are many equivalent ways to do this, as we saw last time. The ones that come to mind are

- take one column of F and one column of A.
- set two columns of F and two columns of A to zero.
- reduce w to a 1×1 matrix and take skinny forms of u and v.
- set two columns of Y to zero.
- leave sizes alone, but reset two w’s to zero.
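On the stand-in data, here are variants (5) and (3) side by side; they produce the same reconstituted matrix.

```python
import numpy as np

# stand-in data, not the post's example 9
X = np.array([[2., 4., 1.],
              [3., 1., 5.],
              [0., 2., 2.],
              [4., 3., 0.],
              [1., 5., 4.]])
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

u, w, vT = np.linalg.svd(Xs, full_matrices=False)

# variant (5): leave sizes alone, reset two w's to zero
w_cut = w.copy()
w_cut[1:] = 0.0
X1_v5 = u @ np.diag(w_cut) @ vT

# variant (3): reduce w to 1x1, take skinny u and v (first column / first row)
X1_v3 = w[0] * np.outer(u[:, 0], vT[0])
```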

I’ll do the last one, (5). We didn’t actually see the second-to-last one, (4), but I’ll show it to you after we get the reconstituted data. I zero out two entries in w:

and I compute :

Now recall our initial Xs:

and look at their difference:

Incidentally, we could square each element in Xs1

and add up all those squares: the sum is 2.26886. That, as we’ve seen before, is also the sum of squares of the omitted w’s; that is,

.
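The number 2.26886 belongs to the post’s data, but the identity itself, sum of squared residuals = sum of squares of the omitted w’s, checks out on any matrix, e.g. the stand-in:

```python
import numpy as np

# stand-in data, not the post's example 9
X = np.array([[2., 4., 1.],
              [3., 1., 5.],
              [0., 2., 2.],
              [4., 3., 0.],
              [1., 5., 4.]])
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

u, w, vT = np.linalg.svd(Xs, full_matrices=False)

w_cut = w.copy()
w_cut[1:] = 0.0               # omit the second and third w's
X1 = (u * w_cut) @ vT         # reconstituted data
resid = Xs - X1               # residual data

ss_resid = (resid**2).sum()   # sum of squares of the residual
ss_omitted = (w[1:]**2).sum() # sum of squares of the omitted w's
```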

Is it plausible to ask about reconstituted Y? Yeah, that’s just part of the previous calculation. We had computed ; the reconstituted Y is

Ah ha! And sure enough, the difference between Ys and Ys0 is extremely simple:

So the reconstituted Y was the first column of Ys and the other two columns set to zero. We could have just taken it and post-multiplied by vs^T to get the reconstituted X: this was variant (4) of ways to do the calculation.
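That is variant (4) in code, on the stand-in data: zero two columns of Y, then post-multiply by v^T.

```python
import numpy as np

# stand-in data, not the post's example 9
X = np.array([[2., 4., 1.],
              [3., 1., 5.],
              [0., 2., 2.],
              [4., 3., 0.],
              [1., 5., 4.]])
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

u, w, vT = np.linalg.svd(Xs, full_matrices=False)

Y = Xs @ vT.T          # the new data
Y0 = Y.copy()
Y0[:, 1:] = 0.0        # reconstituted Y: keep only the first column

X1 = Y0 @ vT           # reconstituted X, variant (4)
```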

OK, we’ve seen how to run thru the calculations, for the standardized data. There’s a lot of redundancy in all that I’ve computed, more than I need for any analysis of my own. Still, if I’m checking someone else’s work, I should have routinely gotten anything they came up with.

That’s enough for now; next time, centered data and raw data.
