## Introduction

The previous post discussed something interesting that Basilevsky did. (Search the bibliography, he’s there.) I can’t say I like it — because it leads to a basis which is non-orthogonal, and whose vectors are not eigenvectors of either the correlation or covariance matrices.

But I had to understand it.

I don’t know how widespread it is nowadays, but even if Basilevsky is the only author who does it, it’s another example of the lack of standardization (no pun intended, I swear) in PCA / FA. This branch of applied statistics is like the mythical Wild West: everybody’s got a gun and there are bullets flying all over the place. Law and order have not arrived yet.

OTOH, it’s nice to find something different in just about every book I open.

Let me set the stage again. What we have is the following 3 models:

Xc = Zc Ac^T

Xs = Zc Ar^T

Xs = Zs As^T

where X is data (variables in columns, observations in rows, and there are N rows), Z is principal components (PCs), and A is constructed from an eigenvector matrix V using the diagonal matrix Λ^(1/2) of the square roots of the eigenvalues.

Now is a good time, before we need to be reminded of it, to emphasize that all three models are of the same form:

X = Z A^T.

In the first model,

Xc = Zc Ac^T,

Xc is centered data, and

Ac = Vc Λc^(1/2),

where Vc is an orthogonal eigenvector matrix of the covariance matrix c:

c = Vc Λc Vc^T.

In the third model,

Xs = Zs As^T,

Xs is standardized data, and

As = Vs Λs^(1/2),

where Vs is an orthogonal eigenvector matrix of the correlation matrix r:

r = Vs Λs Vs^T.

In the second model,

Xs = Zc Ar^T,

we have the standardized data Xs as in the third model, but the PCs Zc from the first model. The transition matrix Ar may be found by normalizing the rows of Ac:

Ar = Σ^(-1) Ac,

where Σ is a diagonal matrix of the standard deviations (σi) of Xc.
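To make the second model concrete, here is a minimal numerical sketch in Python/NumPy (my choice of tool here, not Basilevsky's). Since his example has no data, this one simulates some; the mixing matrix used to generate it is made up, and the variable names mirror the post.

```python
import numpy as np

# Simulated data: 200 observations of 3 correlated variables (made-up mixing matrix)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [1.0, 3.0, 0.0],
                                          [0.3, 0.5, 1.0]])

Xc = X - X.mean(axis=0)                      # centered data
c = Xc.T @ Xc / (len(Xc) - 1)                # covariance matrix
sigma = np.sqrt(np.diag(c))                  # standard deviations
Xs = Xc / sigma                              # standardized data

lam, Vc = np.linalg.eigh(c)                  # eigendecomposition of c
Ac = Vc @ np.diag(np.sqrt(lam))              # Ac = Vc Λc^(1/2)
Ar = Ac / sigma[:, None]                     # Ar = Σ^(-1) Ac: rows of Ac normalized

Zc = Xc @ Vc @ np.diag(1.0 / np.sqrt(lam))   # PCs of the first model

print(np.allclose(Xc, Zc @ Ac.T))            # first model:  Xc = Zc Ac^T -> True
print(np.allclose(Xs, Zc @ Ar.T))            # second model: Xs = Zc Ar^T -> True
```

Why dividing by σi is exactly "normalizing the rows": row i of Ac has squared length (Ac Ac^T)ii = cii = σi², so scaling row i by 1/σi makes it a unit vector.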

## The example

Basilevsky gives us a covariance matrix c. (So we have no data, for this example.)

We will want both the correlation matrix r and the diagonal matrix Σ, so let’s get them. Σ is to be a diagonal matrix of the square roots of the variances, hence of the square roots of the diagonal of c. (I should call it s, because it’s made up of sample standard deviations, but I don’t want “s” as well as “As”, “Vs”, etc.) Here it is:

Once we have Σ, the fastest way for me to get the correlation matrix is

r = Σ^(-1) c Σ^(-1):
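In NumPy (again, my tooling, with a made-up 3×3 covariance matrix standing in for Basilevsky's, since his numbers aren't reproduced here), that computation is:

```python
import numpy as np

c = np.array([[4.0, 2.0, 0.6],         # stand-in covariance matrix, not Basilevsky's
              [2.0, 9.0, 1.5],
              [0.6, 1.5, 1.0]])

Sigma = np.diag(np.sqrt(np.diag(c)))   # Σ: diagonal matrix of standard deviations
Sigma_inv = np.linalg.inv(Sigma)
r = Sigma_inv @ c @ Sigma_inv          # correlation matrix: r = Σ^(-1) c Σ^(-1)

print(np.diag(r))                      # all ones, as a correlation matrix must have
```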

## From the covariance matrix:

So let’s do the eigendecompositions. First from the covariance matrix, which will give us the first model. We get eigenvalues…

an eigenvector matrix…

These are not quite Basilevsky’s numbers. The 4th eigenvector is off, by as much as .0014. Ah, the 2nd is off too, in two terms. Not great, but not a big deal. (Actually, I want to check his numbers, but not today. I will be using my numbers for eigenvectors and eigenvalues.)

(I suspect that most packages which do PCA / FA use iterative schemes to find one eigenvector after another; I’m getting used to seeing more and more errors as we move to the right in the eigenvector matrices. I think they’re suffering from accumulated round-off errors. Of course, I generally take it for granted that my answers are the right ones, but see below.)

Now I’m going to change the signs to agree with Basilevsky:

We will discover that Basilevsky should have changed the sign of the second column, too, but he didn’t so I won’t either.

Let’s keep going. We get the diagonal matrix Λc^(1/2) of square roots of the eigenvalues…

and Ac = Vc Λc^(1/2)…
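In code, the whole chain from c to Ac is short. This sketch uses a made-up 3×3 covariance matrix (Basilevsky's numbers are not reproduced here), and sorts the eigenvalues into the usual descending PCA order:

```python
import numpy as np

c = np.array([[4.0, 2.0, 0.6],      # stand-in covariance matrix, not Basilevsky's
              [2.0, 9.0, 1.5],
              [0.6, 1.5, 1.0]])

lam, Vc = np.linalg.eigh(c)         # eigenvalues (ascending) and orthogonal eigenvectors
order = np.argsort(lam)[::-1]       # reorder: largest eigenvalue first
lam, Vc = lam[order], Vc[:, order]

sqrt_lam = np.diag(np.sqrt(lam))    # the diagonal matrix Λc^(1/2)
Ac = Vc @ sqrt_lam                  # Ac = Vc Λc^(1/2)

# Since c = Vc Λc Vc^T, Ac reproduces the covariance matrix:
print(np.allclose(Ac @ Ac.T, c))    # True
```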

Basilevsky then displays equations for the variables, using Vc not Ac:

What he has written is

i.e.

Xc = Zu Vc^T,

where the Zu are unstandardized, in contrast to Zc and Zs which are standardized. (The Zu were computed from Vc, not from Ac.) I remark that Basilevsky did not compute Ac.

## From the correlation matrix:

Now let’s do the correlation matrix, which will give us the third model. Here are the eigenvalues…

and eigenvectors…

Let’s stop there for a moment.

I want to compare the two V’s: recall Vc…

He says, “… even though [one loading] is positive as a covariance loading, it becomes highly negative as a correlations loadings [sic].”

He correctly points out that the 2nd column of Vc and the 2nd column of Vs would have the same signs if one were multiplied by negative 1, and correctly concludes that we can take care of the sign of that element by changing the sign of the second column. (At least, I hope that’s what he said. His symbol denotes an element of what I’m calling V.)

Unfortunately, he missed the fact that the 3rd columns cannot be made to have the same signs. The 3rd column of Vc has 2 negative signs; the 3rd column of Vs has only 1.

**That is, even though the (2,3) element of Vc is positive, the (2,3) element of Vs is negative, when all other signs are the same.**

This serves as a reminder that Vc and Vs, eigenvectors of the covariance matrix and of the correlation matrix, do not have a simple relationship to each other.

I will also point out that both Ac and Ar will be derived from Vc by scaling by positive numbers, and As will be derived from Vs by scaling by positive numbers, so this sign discrepancy between the (2,3) elements of Vs and Vc will propagate to the (2,3) elements of Ac and Ar versus As.

Now, let’s get back to doing our thing. Get Λs^(1/2)…

and As…

Yes and yes, Basilevsky and I agree. Even to all his signs.

We use As and get the equations

These Xs are standardized, and so are the Zs, in contrast to the Xc and Zu of the first set of equations:

## Basilevsky’s model:

Basilevsky has shown us that if we define

Ar = Σ^(-1) Ac,

then we would have the second model,

Xs = Zc Ar^T.

Here’s Ar…

And here are the corresponding equations…

followed immediately by the second set computed earlier…

There are some satisfying similarities. The first columns differ significantly only in the last entry, and even that is .2 vs .36. For the second column, as we said, if we had changed its sign, all the signs would match up, and in particular the single large terms would then be +0.9324 and +0.9575.

I should emphasize that he calls both Ar and As “correlation loadings”, at one point or another.

We could also severely round both As and Ar to confirm their similarity:

As he said, we shoulda changed the sign of the second column of Vc.

Without data, there isn’t a whole lot more we can do here. But I do want to show you one last thing: let’s take a closer look at Ar. We computed it from

Ar = Σ^(-1) Ac,

and the rows of Ar are supposed to be of unit length. As usual, the fastest way to compute the lengths (more precisely, the squared lengths) of vectors in an array A is to compute either A A^T or A^T A. The former gives us dot products of all the rows with each other, including with themselves, while the latter gives us all the dot products among the columns.

So if we compute Ar Ar^T, the diagonal elements should be 1. We see that they are:

In fact, we see that Ar Ar^T is equal to our old friend the correlation matrix r:
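That is no numerical accident: Ar Ar^T = Σ^(-1) Ac Ac^T Σ^(-1) = Σ^(-1) c Σ^(-1) = r. Here is a quick NumPy check, using a made-up 3×3 covariance matrix in place of Basilevsky's:

```python
import numpy as np

c = np.array([[4.0, 2.0, 0.6],               # stand-in covariance matrix
              [2.0, 9.0, 1.5],
              [0.6, 1.5, 1.0]])

sigma = np.sqrt(np.diag(c))
lam, Vc = np.linalg.eigh(c)
Ac = Vc @ np.diag(np.sqrt(lam))              # Ac = Vc Λc^(1/2)
Ar = Ac / sigma[:, None]                     # Ar = Σ^(-1) Ac

r = c / np.outer(sigma, sigma)               # correlation matrix

print(np.allclose(np.diag(Ar @ Ar.T), 1.0))  # rows of Ar have unit length: True
print(np.allclose(Ar @ Ar.T, r))             # and Ar Ar^T is exactly r: True
```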

Don’t be distracted, however, by his scaling of the rows. The new basis – which gives us the PCs Zc from Xs – is specified by the columns of Ar, just as the columns of Ac and As were the bases that gave us Zc from Xc, and Zs from Xs. (As I said, all three models are of the same form: X = Z A^T. And as I’ve said before, we should keep our eyes on the linear algebra.)

So what does the basis Ar look like? (I’ve said some terrible things about it; let me justify them.) For this, we compute Ar^T Ar:

That the diagonals are not 1 says the columns are not of unit length; that the off-diagonal elements are not zero says the columns are not mutually orthogonal.

**Ar is not a very nice basis at all.**

I think it has some redeeming social value, but we’ll look at that when we have data.
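Both failures show up at once in the Gram matrix of the columns. A sketch with a made-up 3×3 covariance matrix (not Basilevsky's numbers):

```python
import numpy as np

c = np.array([[4.0, 2.0, 0.6],      # stand-in covariance matrix
              [2.0, 9.0, 1.5],
              [0.6, 1.5, 1.0]])

sigma = np.sqrt(np.diag(c))
lam, Vc = np.linalg.eigh(c)
Ar = (Vc @ np.diag(np.sqrt(lam))) / sigma[:, None]   # Ar = Σ^(-1) Vc Λc^(1/2)

G = Ar.T @ Ar                       # Gram matrix of the columns of Ar
print(np.allclose(G, np.eye(3)))    # False: columns neither unit length nor orthogonal
```

One curiosity: trace(Ar^T Ar) = trace(Ar Ar^T) = trace(r) = n, so the squared column lengths always sum to the number of variables even though no individual column need have length 1.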

## Looking at eigenvectors

Let’s check one more thing. The columns of Vc are eigenvectors of the covariance matrix: we should have

c Vc = Vc Λc.

Do we? We do. (When I compute the difference, I get zero, to within round-off.)

And columns of Ac are eigenvectors of the covariance matrix, too:

c Ac = Ac Λc.

Similarly, columns of Vs and As are eigenvectors of r, and we should have both

r Vs = Vs Λs

and

r As = As Λs.

We do.
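All four checks are one-liners in NumPy; here they are against a made-up stand-in covariance matrix. (A inherits the eigenvector property from V because the scaling Λ^(1/2) commutes with the diagonal Λ.)

```python
import numpy as np

c = np.array([[4.0, 2.0, 0.6],     # stand-in covariance matrix
              [2.0, 9.0, 1.5],
              [0.6, 1.5, 1.0]])
sigma = np.sqrt(np.diag(c))
r = c / np.outer(sigma, sigma)     # correlation matrix

lamc, Vc = np.linalg.eigh(c)       # eigendecomposition of c
lams, Vs = np.linalg.eigh(r)       # eigendecomposition of r
Ac = Vc @ np.diag(np.sqrt(lamc))
As = Vs @ np.diag(np.sqrt(lams))

print(np.allclose(c @ Vc, Vc @ np.diag(lamc)))   # True
print(np.allclose(c @ Ac, Ac @ np.diag(lamc)))   # True
print(np.allclose(r @ Vs, Vs @ np.diag(lams)))   # True
print(np.allclose(r @ As, As @ np.diag(lams)))   # True
```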

(This is why I’m so sure my answers are right: I’ve checked the eigendecompositions.)

What about Ar? Even supposing that its columns are eigenvectors of r, we don’t know what the eigenvalues would be. No problem, just see if

r Ar

is proportional to Ar. (That is, see if columns are proportional; see below.)

We have

and when we divide, element by element, by Ar, we get

so we conclude that the columns of Ar are not eigenvectors of r. (Put that the other way: if columns of Ar are eigenvectors, then applying r to them should yield columns which are proportional to the eigenvectors, i.e. proportional to Ar.)

Are the columns of Ar eigenvectors of c? Here’s c Ar…

divided by Ar…

Again, no; the columns of Ar are not eigenvectors of c.
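The element-by-element division trick is easy to script. With a made-up stand-in covariance matrix: each column of (c Ac)/Ac is constant, the constant being that column's eigenvalue, while the columns of (r Ar)/Ar wander, which is exactly the failed eigenvector test:

```python
import numpy as np

c = np.array([[4.0, 2.0, 0.6],       # stand-in covariance matrix
              [2.0, 9.0, 1.5],
              [0.6, 1.5, 1.0]])
sigma = np.sqrt(np.diag(c))
r = c / np.outer(sigma, sigma)

lam, Vc = np.linalg.eigh(c)
Ac = Vc @ np.diag(np.sqrt(lam))      # columns ARE eigenvectors of c
Ar = Ac / sigma[:, None]             # columns are NOT eigenvectors of r

ratio_good = (c @ Ac) / Ac           # element-by-element division
ratio_bad = (r @ Ar) / Ar

print(np.allclose(ratio_good, lam))          # True: each column constant = its eigenvalue
print(np.allclose(ratio_bad, ratio_bad[0]))  # False: columns are not constant
```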

Oh, do you need to see how that calculation works when it comes out right? Here’s c Ac…

and dividing by Ac gives us…

which says the 4 eigenvalues for the 4 columns are

and that’s exactly what we got a long time ago:

The purpose of this post was to demonstrate Basilevsky’s computations. Next time, I’ll do these kinds of calculations with data.
