## PCA / FA Example 4: Davis, and almost everyone else

I would like to revisit the work we did in Davis (example 4). For one thing, I did a lot of calculations with that example, and despite the compare-and-contrast posts towards the end, I fear it may be difficult to sort out what I finally came to.

In addition, my notation has settled down a bit since then, and I would like to recast the work using my current notation.

The original (“raw”) data for example 4 was (p. 502, and columns are variables):

$X_r = \left(\begin{array}{lll} 4 & 27 & 18 \\ 12 & 25 & 12 \\ 10 & 23 & 16 \\ 14 & 21 & 14\end{array}\right)$
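For anyone following along at the keyboard, here is that matrix in numpy, together with the column-centering that the later calculations start from (a sketch; the variable names are mine, not Davis's):

```python
import numpy as np

# Raw data for example 4 (Davis, p. 502): rows are observations, columns are variables.
X_r = np.array([
    [ 4, 27, 18],
    [12, 25, 12],
    [10, 23, 16],
    [14, 21, 14],
], dtype=float)

# Subtract each column mean to get the centered (deviation) data,
# the usual starting point for the PCA computations.
X = X_r - X_r.mean(axis=0)
print(X)   # first row: [-6.  3.  3.]
```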

## PCA / FA Bartholomew et al.: discussion

This is the 4th post in the Bartholomew et al. sequence in PCA/FA, but it’s an overview of what I did last time. Before we plunge ahead with another set of computations, let me talk about things.

I want to elaborate on the previous post. We discussed

• the choice of data corresponding to an eigendecomposition of the correlation matrix
• the pesky $\sqrt{N-1}$ that shows up when we relate the $\sqrt{\text{eigenvalues}},\ \Lambda,$ of a correlation matrix to the principal values w of the standardized data
• the computation of scores as $\sqrt{N-1}\ u$
• the computation of scores as $F^T$ where $X^T = Z = A\ F$
• the computation of scores as projections of the data onto the reciprocal basis
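These relations are easy to check numerically. Here is a minimal numpy sketch on made-up standardized data (the random data and all variable names are mine, not from any of the books):

```python
import numpy as np

# Made-up standardized data: center each column and scale to unit variance
# (ddof=1), so that X^T X / (N-1) is exactly the correlation matrix.
rng = np.random.default_rng(0)
raw = rng.normal(size=(10, 3))
X = (raw - raw.mean(axis=0)) / raw.std(axis=0, ddof=1)
N = X.shape[0]

corr = X.T @ X / (N - 1)
lam, V = np.linalg.eigh(corr)          # eigenvalues come out ascending...
lam, V = lam[::-1], V[:, ::-1]         # ...so flip to descending order

u, w, vt = np.linalg.svd(X, full_matrices=False)

# The pesky sqrt(N-1): the principal values w of the standardized data are
# sqrt(N-1) times Lambda, the square roots of the correlation eigenvalues.
print(np.allclose(w, np.sqrt((N - 1) * lam)))                   # True

# Scores as sqrt(N-1) u, or equivalently as projections of the data onto
# the reciprocal basis V Lambda^{-1} (equal up to the sign of each column).
scores = X @ V / np.sqrt(lam)
print(np.allclose(np.abs(scores), np.abs(np.sqrt(N - 1) * u)))  # True
```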

## PCA / FA Example 7: Bartholomew et al.: the scores

edited 17 Oct 2008 to round off the first two columns of 5 u.

This is the 3rd post about the Bartholomew et al. book.

## Introduction

It would be convenient if Bartholomew’s model were one we had seen before.

It is.

We got their scores and loadings just by following their instructions. Although they didn’t use matrix notation, their equations amounted to

loadings = the $\sqrt{\text{eigenvalue}}$-weighted eigenvector matrix, $A = V\ \Lambda$

“component score coefficients” = reciprocal basis vectors, $cs = V\ \Lambda^{-1}$

scores = $X\ cs$,

where V is an orthogonal eigenvector matrix of the correlation matrix, $\Lambda$ is the diagonal matrix of (nonzero) $\sqrt{\text{eigenvalues}}$, and X is the standardized data.
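The three equations above can be sketched in numpy; the standardized data here is made up for illustration, and the variable names are mine:

```python
import numpy as np

# Made-up standardized data (column means 0, unit variance with ddof=1).
rng = np.random.default_rng(1)
raw = rng.normal(size=(8, 3))
X = (raw - raw.mean(axis=0)) / raw.std(axis=0, ddof=1)

corr = X.T @ X / (X.shape[0] - 1)
lam, V = np.linalg.eigh(corr)
lam, V = lam[::-1], V[:, ::-1]       # eigenvalues in descending order
L = np.diag(np.sqrt(lam))            # Lambda: sqrt-eigenvalues on the diagonal

A = V @ L                            # loadings
cs = V @ np.linalg.inv(L)            # component score coefficients (reciprocal basis)
scores = X @ cs                      # scores

# The reciprocal basis undoes the loadings: A^T cs = I.
print(np.allclose(A.T @ cs, np.eye(3)))   # True
```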

Let’s recall Harman’s model.

## The familiar part

This is the second post about Example 7. We confirm their analysis, but we work with the computed correlation matrix rather than their published rounded-off correlation matrix.

I am pretty well standardizing my notation. V is an orthogonal eigenvector matrix from an eigendecomposition; $\lambda$ holds the associated eigenvalues, possibly as a list, possibly as a square diagonal matrix. $\Lambda$ holds the square roots of $\lambda$, possibly as a list, possibly as a square matrix, and possibly as a matrix with rows of zeroes appended to make it the same shape as w (below).

Ah, X is a data matrix with observations in rows. Its transpose is $Z = X^T$.

The (full) singular value decomposition (SVD) is the product $u\ w\ v^T$, with u and v orthogonal and w generally rectangular, since it must be the same shape as X.
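A quick numpy illustration of those shapes (the 4×3 matrix is arbitrary):

```python
import numpy as np

# Any 4x3 data matrix X, observations in rows.
X = np.arange(12, dtype=float).reshape(4, 3)

# Full SVD: u is 4x4, v is 3x3, both orthogonal; numpy returns only the
# singular values, so rebuild w as a rectangular matrix the same shape as X.
u, s, vt = np.linalg.svd(X, full_matrices=True)
w = np.zeros_like(X)
np.fill_diagonal(w, s)

print(u.shape, w.shape, vt.shape)    # (4, 4) (4, 3) (3, 3)
print(np.allclose(X, u @ w @ vt))    # True
```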

## PCA / FA Example 7: Bartholomew et al. Correlation matrix

edit 5 Oct 2008: I had omitted the word “constant”; see the edit.

The following example comes from Bartholomew et al. “The Analysis and Interpretation of Multivariate Data for Social Scientists.”

It is an excellent example with which to wrap up PCA / FA. (There’s a lot we haven’t done, but it’s almost time for me to move on.)

The example is “employment in 26 European countries”, “eurojob” for short, from chapter 5 (either 1st or 2nd edition), and data for both editions is available at http://www.cmm.bris.ac.uk/team/amssd.shtml . Please note that I am using the 1st edition of the book, and the 1st edition data.

When I first worked this example, I knew that something interesting happened, but not why; and, there was one thing I didn’t understand at all, back then.