harman gave us two outputs as a result of his analysis: a table and a picture.
let’s consider the picture first. it clearly shows z2 and z5 similar, z1 and z3 similar, and z4 roughly in the middle between them.
i don’t know about you, but this surprised me. should it have? well, let’s take a look at the correlation matrix, from which we got our results. i’m going to round it off just a little bit, so i can refer to 3-digit numbers instead of 5.
we confirm that there is a high correlation (.972) between z1 and z3, and one almost as large (.863) between z2 and z5. z4 is about equally related to z2 (.691) and to z5 (.778). the same information as the picture seems to be there, but i didn’t see it.
let’s learn from that blindness. how might we have seen it coming? this time, round off the correlation matrix severely (to the nearest .5).
there we are: on the off-diagonal, we have 1s for z1 & z3, z2 & z5 (good), and also for z4 & z5 (not so good). at the other extreme, we have 0s for z1 & z2 and z1 & z5, so z2 & z5 are far from z1. finally, we have either .5 or 1 for z4 vs everything else.
there was nothing sacred about .5. if we round to the nearest .25, instead, we get …
we see similar relationships, but now the only off-diagonal 1 is between z1 & z3; z2 seems to be equally related to z4 & z5. but z4 is still the only variable related to everything else. bear in mind that we – i at least – still think the third variable, P3 (cf. F3), might be important.
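severe rounding, by the way, is a one-liner. a NumPy sketch (the post’s own computations are in Mathematica); the matrix below is hypothetical, built from the four entries quoted above (.972, .863, .691, .778) with made-up values elsewhere:

```python
import numpy as np

# hypothetical correlation matrix for z1..z5: the four quoted entries are real,
# the remaining off-diagonal values are invented for illustration only
r = np.array([
    [1.000, 0.100, 0.972, 0.439, 0.022],
    [0.100, 1.000, 0.302, 0.691, 0.863],
    [0.972, 0.302, 1.000, 0.515, 0.122],
    [0.439, 0.691, 0.515, 1.000, 0.778],
    [0.022, 0.863, 0.122, 0.778, 1.000],
])

def round_to(m, step):
    """round every entry of m to the nearest multiple of step."""
    return np.round(m / step) * step

print(round_to(r, 0.5))    # severe: only 0, .5, 1 survive
print(round_to(r, 0.25))   # gentler: quarters
```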
conclusion: the picture we got showed us some strong relationships that are not quite so clearly seen by playing with the correlation matrix.
i find it reassuring that the relationships can be somewhat seen in the correlation matrix; i begin to trust that the relationships were not produced by our computations.
conclusion: severe rounding can be informative.
before we leave the picture for his table, let’s go to 3D. we take the first three columns of the weighted eigenvector matrix A…
… and we plot them as points in 3D:
that’s disappointing: z1 & z3 are still close, but z2 & z5 are not. ok, we have something to look at down the road. what we’re nibbling on the edges of is reduction of dimensionality: can we replace 5 variables by fewer? we are not done with this, not by a long shot.
open: investigate reduction of dimensionality.
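if you’d rather poke at the 3D picture numerically than visually, pairwise distances between the plotted points tell the same story. a NumPy sketch; the matrix A3 below is hypothetical (harman’s real numbers are in his table), chosen only so that z1 & z3 come out close while z2 & z5 don’t:

```python
import numpy as np

# hypothetical stand-in for the first three columns of the weighted eigenvector
# matrix A, one row per variable z1..z5
A3 = np.array([
    [0.581,  0.806,  0.119],   # z1
    [0.767, -0.545,  0.319],   # z2
    [0.672,  0.726,  0.045],   # z3
    [0.932, -0.104,  0.139],   # z4
    [0.791, -0.558, -0.323],   # z5
])

# euclidean distance between every pair of rows (i.e. between plotted points)
d = np.linalg.norm(A3[:, None, :] - A3[None, :, :], axis=-1)

for i in range(5):
    for j in range(i + 1, 5):
        print(f"z{i+1}-z{j+1}: {d[i, j]:.3f}")
```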
now let’s go look at his table. recall it:
there are a few things which i find confusing at best, misleading at worst. (in fact, i think the table is inconsistent.)
i do not count his use of P instead of F, although you might. maybe i should not have used his notation. he used P instead of F simply because his is a principal components example, but he will later do a factor analysis of the same data. eventually, he will obtain some other coefficients which he will multiply by F.
this is just like my distinguishing the orthogonal eigenvector matrix P from the weighted eigenvector matrix A. you haven’t seen me use P since we got A, but i want both available; we won’t see him compute the other coefficients, but he wanted both labels available. i do plan to discuss his factor analysis; unfortunately, the example i need to use is not based on this data.
the first thing i object to is the column headings. whether P or F, they are misleading. the column headed P1 contains the first column of the weighted eigenvector matrix A. P1 (cf. F1) should denote the first column of the F matrix. instead, he is using the column heading as a reminder that in the equation…
we will multiply, for example, the 3rd column number .319 by P3. numerically, that just says that the second row of Z is the matrix product of the second row of A with (each column F1, F2, … of) the F matrix. his headings really stand for the columns of F.
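that row-by-row reading of the matrix product is easy to verify. a NumPy sketch with random matrices of the right shapes (5 variables, 12 observations):

```python
import numpy as np

# the claim in miniature: row 2 of Z = A F is (row 2 of A) times the F matrix
rng = np.random.default_rng(5)
A = rng.normal(size=(5, 5))
F = rng.normal(size=(5, 12))
Z = A @ F

print(np.allclose(Z[1], A[1] @ F))   # True
```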
his fundamental model is

Z = A F
and for the second row of Z, that equation would be written with P1, …, P5 instead of F1, …, F5:

z2 = a21 P1 + a22 P2 + a23 P3 + a24 P4 + a25 P5
as i said, maybe i made a mistake in switching to his notation; i’ll keep it in mind for the future. and no, he did not write that model as Z = A P; although he changed the individual Fi to Pi, he did not change the matrix F to P. good thing, or i’d have had to use a symbol different from P for the orthogonal eigenvector matrix.
the second thing i object to is both of his “variance” labels. in fact, the numbers in the second-to-last row are the squared lengths of the columns of the weighted eigenvector matrix A. and the numbers in the last column are the squared lengths of the rows of the weighted eigenvector matrix A.
i shouldn’t be hasty. maybe those squared lengths are the variances.
ok, let’s compute F. hmm. we need Z. harman never computed Z. as i said, what he wanted was that drawing showing the relationship between the Z variables and the F variables.
ok, let’s figure out what Z might be. we can certainly construct a Z matrix with variances equal to 1, which is what he says they are.
(the only reason i found this confusing is that for his theoretical work he has taken Z to be “small standard deviates” – where you divide each standardized datum by √N – instead of “standardized”.)
in order for the variances of the Z variables to be 1, all we need to do is standardize the data. we recall our data matrix D…
we standardize D and call it X:
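in NumPy terms (the post works in Mathematica), that standardization step looks like this; the matrix D below is a random stand-in for the actual data:

```python
import numpy as np

# toy 12x5 stand-in for the data matrix D (harman's actual data not reproduced)
rng = np.random.default_rng(0)
D = rng.normal(size=(12, 5))

# standardize each column: subtract its mean, divide by its standard deviation.
# note: dividing by the population std (1/N) makes r = X^T X / N come out exact;
# Mathematica's Standardize uses the sample std (1/(N-1)) by default
X = (D - D.mean(axis=0)) / D.std(axis=0)
Z = X.T           # harman's Z: variables in rows, observations in columns

print(X.mean(axis=0))   # each ~0
print(X.var(axis=0))    # each ~1
```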
i have changed the name; this lets me distinguish the data matrix D from a design matrix X. further, we will let

Z = X^T
that may seem like a silly step to you, but i have enough trouble sorting out what’s going on without having to translate things in my head: i really want both X and Z. oh, why do i want both? because Z in harman’s model has variables in rows, not in columns. back to our model:

Z = A F, so F = A^-1 Z.
(BTW, A inverse exists because none of the eigenvalues were zero. A is an eigenvector matrix weighted by the (square roots of the) eigenvalues. it came from an orthogonal matrix P of unit eigenvectors. P is trivial to invert; just take the transpose: A will be invertible so long as the weights are nonzero, because the corresponding column of A will be zero if and only if a weight is zero.)
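that parenthetical is easy to make concrete. a NumPy sketch with a toy correlation matrix (my numbers, not harman’s), showing A = P Λ^(1/2) and its painless inverse:

```python
import numpy as np

# toy positive-definite correlation matrix
r = np.array([
    [1.0, 0.6, 0.3],
    [0.6, 1.0, 0.5],
    [0.3, 0.5, 1.0],
])

lam, P = np.linalg.eigh(r)      # eigenvalues (as a vector) and orthogonal P
A = P * np.sqrt(lam)            # A = P Λ^(1/2): scale column j by sqrt(lam_j)

# P is orthogonal, so P^-1 = P^T; hence A^-1 = Λ^(-1/2) P^T -- no solver needed
A_inv = (P / np.sqrt(lam)).T

print(np.allclose(A @ A_inv, np.eye(3)))   # True
print(np.allclose(A @ A.T, r))             # True: r = A A^T
```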
next, bizarre as you may think me, i’m going to take the transpose of the last equation. and i’m going to abuse notation (so i’ve been told), writing A^-T for the inverse transpose of A:

F^T = Z^T A^-T = X A^-T
why am i computing F^T, the transpose of F? for one thing, both Z and F have 12 columns and don’t fit on the page! for another, Mathematica wants variables in columns: its Mean, Variance, Standardize, Covariance, and Correlation commands each want F^T, not F. (did you think i was computing the correlation matrix step-by-step?)
ok, like the Z variables, the F variables have zero means:
that is to say,
and, like the Z variables, the F variables have unit variances:
oops. this is not ok. the table is wrong: harman said the variances were the eigenvalues. (and if you happen to know, vaguely or precisely, that PCA was supposed to give us new variables that redistribute the variance, you were really, really expecting to see the eigenvalues. welcome to the club.)
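you can replay that surprise with toy data. a NumPy sketch (random data standing in for harman’s; the shape of the result is the point):

```python
import numpy as np

# build r from standardized X, form the weighted eigenvector matrix A,
# compute F^T = X A^-T, and look at F's means and variances
rng = np.random.default_rng(1)
D = rng.normal(size=(12, 5))
X = (D - D.mean(axis=0)) / D.std(axis=0)

r = X.T @ X / len(X)            # correlation matrix
lam, P = np.linalg.eigh(r)      # eigenvalues and orthogonal eigenvectors
A = P * np.sqrt(lam)            # A = P Λ^(1/2)
F_T = X @ np.linalg.inv(A).T    # F^T = Z^T A^-T = X A^-T

print(F_T.mean(axis=0))         # each ~0
print(F_T.var(axis=0))          # each ~1 -- the eigenvalues are nowhere in sight
```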
what’s going on? it’s time to do a little algebra.
i need one key relationship. if we have a matrix M with variables in columns, with N rows, and with each variable having zero mean (called centered data), then the covariance matrix of M is given by

c = M^T M / N
further, if the variance of each variable is 1, then the covariance matrix c is also the correlation matrix r.
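that key relationship is easy to sanity-check against a library routine. a NumPy sketch; note that np.cov needs bias=True to divide by N the way we do:

```python
import numpy as np

# for centered columns, covariance = M^T M / N
rng = np.random.default_rng(2)
M = rng.normal(size=(100, 3))
M = M - M.mean(axis=0)          # center each variable

c = M.T @ M / len(M)
print(np.allclose(c, np.cov(M.T, bias=True)))   # True: bias=True divides by N
```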
let’s lay out all the equations we have.
one, X contains columns of standardized variables, so the correlation matrix r (= the covariance matrix) of X (or of Z) is given by

r = X^T X / N = Z Z^T / N
two, the covariance (not necessarily correlation) matrix of F is given by

c = F F^T / N
(i’m going to take it for granted that the means of the F values are zero; after all, the Fs are linear combinations of the Xs, which do have zero mean.)
three, the eigendecomposition was

r = P Λ P^T

with P orthogonal and Λ diagonal.
four, the weighted eigenvector matrix A is

A = P Λ^(1/2)

where Λ^(1/2) denotes a diagonal matrix whose entries are square roots of the entries of Λ. (all of the eigenvalues are non-negative in general; and in particular, my changing all the signs of columns of A amounts to choosing the negative square roots! eigenvectors are not unique.)
perhaps i should remind you that multiplying diagonal matrices is almost trivial: we multiply corresponding diagonal elements. that means, for example, that Λ and Λ^-1 have diagonal elements which are the inverses of each other. that’s also why it makes sense to take square roots of the elements of Λ and call the result Λ^(1/2).
finally, five, the fundamental model is

Z = A F, so F = A^-1 Z,
assuming that A is invertible (i.e. assuming all eigenvalues of r were positive, not just non-negative).
let’s compute the covariance (not correlation) matrix c of F. we have

c = F F^T / N

first we substitute F = A^-1 Z and F^T = Z^T A^-T:

c = A^-1 Z Z^T A^-T / N

then we substitute Z Z^T / N = r:

c = A^-1 r A^-T

then we substitute r = A A^T:

c = A^-1 A A^T A^-T = I
whoa! an identity matrix? boy, do we need to compute the covariance matrix of F for this data!
here it is, truncated so i don’t have to reformat a bunch of numbers of the form a x 10^-b:
an identity matrix, conceptually if not numerically. yes, the algebra was right. not only are the F variables of unit variance, they are completely uncorrelated with each other. yessssss. now i remember seeing that statement somewhere.
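the algebra and the arithmetic agree, and the same check runs on toy data. a NumPy sketch (the post’s computations are in Mathematica; the data here is random, not harman’s):

```python
import numpy as np

# numeric version of the algebra: c = A^-1 r A^-T should be the identity,
# and so should the covariance matrix of F
rng = np.random.default_rng(3)
X = rng.normal(size=(12, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)

r = X.T @ X / len(X)
lam, P = np.linalg.eigh(r)
A = P * np.sqrt(lam)
A_inv = np.linalg.inv(A)

c_algebra = A_inv @ r @ A_inv.T       # A^-1 r A^-T
F = A_inv @ X.T                       # F = A^-1 Z, variables in rows
c_numeric = F @ F.T / F.shape[1]      # covariance of F

print(np.allclose(c_algebra, np.eye(5)))   # True
print(np.allclose(c_numeric, np.eye(5)))   # True
```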
conclusion: the F variables are uncorrelated variables of unit variance.
conclusion: the eigenvalues are not the variances of the F variables. not for Z = A F, with A a weighted eigenvector matrix. tsk, tsk.
conclusion: harman’s table can be fixed most simply by changing the first-column label “Variance” to “Eigenvalue”.
nevertheless, his “variance” label suggests that we should be able to construct combinations of the Zs which have the eigenvalues for variances.
open: how do we find new variables whose variances are the eigenvalues?
but we’ve done enough for one post.
ok, ok, i won’t quite leave you hanging. what about using the orthogonal eigenvector matrix P instead of the weighted eigenvector matrix A?
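here is a hedged peek, in NumPy with random data (and G is my label, not harman’s): if we use P in place of A, the algebra above says the covariance of G = P^T Z is P^T r P = Λ – the eigenvalues show up as variances.

```python
import numpy as np

# same setup as before, toy data standing in for harman's
rng = np.random.default_rng(4)
X = rng.normal(size=(12, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)

r = X.T @ X / len(X)
lam, P = np.linalg.eigh(r)

G = P.T @ X.T                    # new variables from P, not A
c = G @ G.T / G.shape[1]         # covariance of G
print(np.allclose(c, np.diag(lam)))   # True: variances are the eigenvalues
```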