## PCA / FA example 1: harman. discussion 2

recall that we had discovered that harman’s new F variables were uncorrelated and standardized: they had mean 0 and variance 1. he had implied, however, that the variances of the new variables would be the eigenvalues of the correlation matrix of Z.

he didn’t get that. can we?

$Z = A F$

(where Z is the standardized data, A is a weighted eigenvector matrix, and F is the transformed, new, data), which implies

$F^T = Z^T \ A^{-T} = X \ A^{-T}$

(here, as below, $X = Z^T$: the standardized data with observations as rows.)

we take

$Z = P Y$,

(where P is an orthogonal eigenvector matrix, and Y is the transformed, new, data), i.e.

$Z^T = Y^T \ P^T$

hence

$Y^T = Z^T \ P^{-T} = X P$.

(in words, we fix his messy $A^{-T}$ by using an orthogonal eigenvector matrix P, for which $P^{-T}= P$.)
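the whole construction can be sketched numerically. this is on synthetic data (harman's five socio-economic variables aren't reproduced here): standardize, eigendecompose the correlation matrix, and form $Y^T = X P$ with an orthogonal eigenvector matrix P.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
X = rng.standard_normal((n, 5)) @ rng.standard_normal((5, 5))  # 12 obs, 5 vars

# standardize: each column gets mean 0 and variance 1 (population convention)
Z = (X - X.mean(axis=0)) / X.std(axis=0)

R = Z.T @ Z / n                        # correlation matrix of the Z variables
evals, P = np.linalg.eigh(R)           # P is orthogonal: P^{-T} = P
order = np.argsort(evals)[::-1]        # largest eigenvalue first
evals, P = evals[order], P[:, order]

Yt = Z @ P                             # Y^T = X P; columns are the new variables

# the variances of the Y variables are the eigenvalues...
var_Y = Yt.var(axis=0)                 # population variance (ddof=0)
print(np.allclose(var_Y, evals))       # True
# ...and the Ys are uncorrelated: the covariance matrix of Y is diagonal
cov_Y = Yt.T @ Yt / n
print(np.allclose(cov_Y, np.diag(evals)))   # True
```

the key step is the same as in the derivation: cov(Y) = $P^T R P$, which is diagonal with the eigenvalues on the diagonal precisely because P is an orthogonal eigenvector matrix of R.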

here’s the transpose of the Y matrix:

$Y^T = \left(\begin{array}{ccccc} -1.64367&0.955194&0.489728&0.0115288&-0.173057\\ 2.25697&1.0171&-0.14812&-0.592152&-0.0381633\\ 2.49521&-0.120329&0.518183&-0.146651&0.025367\\ -0.779124&1.73058&-0.377206&0.291712&-0.0255523\\ -0.564461&1.55725&-0.0522449&0.460059&-0.00751884\\ 1.17305&-1.54174&0.662974&0.40844&-0.000126565\\ 1.73452&1.57553&-0.172833&0.0361866&0.0856931\\ -0.0974502&-1.14587&-0.682363&0.0811312&0.073895\\ -1.32993&-0.75909&-0.320628&-0.110526&0.223974\\ -3.18609&0.0790644&0.54889&-0.437424&0.0779839\\ 0.384854&-1.7828&0.113252&0.160042&0.0140288\\ -0.443879&-1.56488&-0.579631&-0.162345&-0.256524\end{array}\right)$

and the variances of the Y variables?

${2.87331,\ 1.79666,\ 0.214837,\ 0.0999341,\ 0.0152554}$

ta da! those are so close to the eigenvalues…

${2.87331,\ 1.79666,\ 0.214832,\ 0.0999379,\ 0.0152551}$

that we all are quite sure they are equal in principle. right?

for the record, here is the covariance matrix of Y:

$\left(\begin{array}{ccccc} 2.87331&0&0&0&0\\ 0&1.79666&0&0&0\\ 0&0&0.214837&0&0\\ 0&0&0&0.0999341&0\\ 0&0&0&0&0.0152554\end{array}\right)$

like the Fs, the Ys are uncorrelated; unlike the Fs, the Ys have different variances. in fact, the first Y variable, $Y_1$, has the largest possible variance among linear combinations of the Xs, subject to the constraint that the total variance of the five new variables is constant (= 5). and so on: it's no accident that $Y_2$ has the second largest variance.
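one standard way to check that maximal-variance claim numerically: the variance of a unit-norm linear combination $Z a$ is $a^T R a$, a rayleigh quotient, which can never exceed the largest eigenvalue of R. a sketch on synthetic standardized data (again, not harman's):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 5))
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)   # standardized data, obs as rows
R = Z.T @ Z / len(Z)                        # correlation matrix

evals = np.linalg.eigvalsh(R)               # ascending: evals[-1] is largest

# variance of the unit-norm combination Z a is a^T R a;
# no random direction beats the top eigenvalue
for _ in range(1000):
    a = rng.standard_normal(5)
    a /= np.linalg.norm(a)
    assert a @ R @ a <= evals[-1] + 1e-12
print("no combination exceeded the top eigenvalue")
```

the bound is attained exactly when a is the first eigenvector, i.e. the first column of P, which is what makes $Y_1$ special.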

we have something new here, but it does not come from the F variables in harman's model. using an orthogonal eigenvector matrix as a change of basis, we have redistributed the variances of the Z data. harman did not accomplish this with the weighted eigenvector matrix.
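"redistributed" is literal: an orthogonal change of basis preserves the trace of the covariance matrix, so the Y variances must sum to the total variance of the five standardized Zs, namely 5 (and indeed the eigenvalues above sum to about 5). a quick check on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal((50, 5))
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)   # five standardized variables

R = Z.T @ Z / len(Z)                        # correlation matrix, trace = 5
evals, P = np.linalg.eigh(R)
Yt = Z @ P                                  # orthogonal change of basis

total = Yt.var(axis=0).sum()                # population variances of the Ys
print(total)                                # ≈ 5.0: redistributed, not created
```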

it gets worse. if we were to recreate that wonderful picture using the first two columns of the orthogonal eigenvector matrix -P…

$\left(\begin{array}{cc} 0.342731&0.601629\\ 0.452506&-0.406417\\ 0.396696&0.541664\\ 0.550056&-0.077817\\ 0.466738&-0.416428\end{array}\right)$

(yes, it is convenient to change the sign of P; we did it to A, too.) we plot those pairs of numbers and get the left-hand image; the right-hand image is the previous plot…

we see exactly the same groupings (2 & 5 together, 1 & 3 together, 4 alone), but the scales are different. maybe we should have stayed with P all along.

next? let’s see how jolliffe does PCA in “principal component analysis”.