## PCA / FA Malinowski Summary

Malinowski’s work is considerably different from everything else we’ve seen before.

First of all, he expects that in most cases one will neither standardize nor even center the data X. We can do his computations as an SVD of X, or an eigendecomposition of $X^{T}X$ or of $XX^T$ – but because the data isn’t even centered, $X^{T}X$ and $XX^T$ are not remotely covariance matrices. For this reason, I assert that preprocessing is a separate issue.
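A quick numpy sketch of that equivalence, using the small example matrix that appears later in these posts: the eigenvalues of the raw $X^{T}X$ are the squared singular values of X, whether or not X was centered, so none of Malinowski's computations require centered data.

```python
import numpy as np

# The rank-2 example matrix from the posts below, left uncentered.
X = np.array([[2., 3., 4.],
              [1., 0., -1.],
              [4., 5., 6.],
              [3., 2., 1.],
              [6., 7., 8.]])

# Singular values of the raw, uncentered X.
s = np.linalg.svd(X, compute_uv=False)

# Eigenvalues of X^T X (sorted descending). Since X is not centered,
# X^T X is not a covariance matrix -- but the algebra still holds:
# eigenvalues of X^T X = squared singular values of X.
evals = np.linalg.eigvalsh(X.T @ X)[::-1]

print(np.allclose(s**2, evals))  # True
```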

## introduction

For a good reason which I have not yet discussed, Malinowski wants to find x vectors which are close to $\hat{x} = H\ x$ vectors. (His x and $\hat{x}$ are usually written y and $\hat{y}$ for a least-squares fit.) He finds a possible x and tests it to see if $\hat{x}$ is close. He recommends computing an intermediate t vector, which is the $\beta$ for his least-squares fit to x.

Since he seems to care about t only when $\hat{x}$ is close to x, and since $\hat{x}$ is incredibly easy to compute directly, I prefer to delay the computation of t. Find t after we’ve found a good x.
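Here is a minimal numpy sketch of that delayed computation, using the example matrix from the posts below. The rank threshold and variable names are my own; the key point is that $\hat{x}$ comes straight from a projection, and t is only computed once the test succeeds.

```python
import numpy as np

# The rank-2 example matrix from the posts below.
X = np.array([[2., 3., 4.],
              [1., 0., -1.],
              [4., 5., 6.],
              [3., 2., 1.],
              [6., 7., 8.]])

u, w, vt = np.linalg.svd(X)
r = int(np.sum(w > 1e-10))   # numerical rank (2 here)
u1 = u[:, :r]                # cut-down u

# A suspected test vector; the first column of X is certainly in the span.
x = X[:, 0]

# Compute xhat directly as the projection H x = u1 u1^T x -- no t needed yet.
xhat = u1 @ (u1.T @ x)

if np.allclose(xhat, x):
    # Only now compute t: the least-squares solution of R t = x,
    # where R = u1 w1 is Malinowski's row matrix.
    R = u1 * w[:r]           # same as u1 @ diag(w1)
    t, *_ = np.linalg.lstsq(R, x, rcond=None)
```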

It will also turn out that he wants a collection of t vectors in order to pick a nicer basis than u or u1. And I’m not going to follow him there, because all of that is what practitioners call “non-orthogonal rotations”. (That strikes me as an oxymoron.) It’s what Harman spends most of his book doing, and that’s where I’ll look if ever I want to. It’s important, but I’m not going to look at it this time around.

Anyway, we factored the data matrix

## PCA / FA malinowski: example 5. target testing

Recall that we computed the SVD $X = u\ w\ v^T$ of this matrix: $X = \left(\begin{array}{lll} 2 & 3 & 4 \\ 1 & 0 & -1 \\ 4 & 5 & 6 \\ 3 & 2 & 1 \\ 6 & 7 & 8\end{array}\right)$

and we found that the w matrix was $w = \left(\begin{array}{lll} 16.2781 & 0. & 0. \\ 0. & 2.45421 & 0. \\ 0. & 0. & 0. \\ 0. & 0. & 0. \\ 0. & 0. & 0.\end{array}\right)$

Because w has only two nonzero entries, we know that X is of rank 2. Its three columns only span a 2D space.

Given a column of data x (a variable, in this example, of length 5), Malinowski wants to know if it is in that 2D space. As he puts it, “if the suspected test vector [x] is a real factor, then the regeneration $\hat{x} = R\ t$ will be successful.” He gives us a formula for computing t; by a successful regeneration, he means that $\hat{x}$ is close to x.
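We can run this target test numerically on the matrix above. The following numpy sketch is mine, not Malinowski's computation verbatim: it regenerates $\hat{x}$ as the projection onto the 2D column space and compares it with x.

```python
import numpy as np

X = np.array([[2., 3., 4.],
              [1., 0., -1.],
              [4., 5., 6.],
              [3., 2., 1.],
              [6., 7., 8.]])

u, w, vt = np.linalg.svd(X)
# w has only two nonzero singular values: about 16.2781 and 2.45421.
u1 = u[:, :2]

def regenerate(x):
    """Project a suspected test vector onto the 2D column space of X."""
    return u1 @ (u1.T @ x)

# A vector in the span (a column of X itself) regenerates successfully ...
good = X[:, 1]
# ... while an arbitrary vector generally does not.
bad = np.array([1., 0., 0., 0., 0.])

print(np.allclose(regenerate(good), good))  # True
print(np.allclose(regenerate(bad), bad))    # False
```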

## PCA / FA Malinowski: example 5.

(June 10: I have made 4 edits, all cosmetic. You may search on “edit:”.)

Malinowski (edit: “Factor Analysis in Chemistry”, 3rd ed.) does a lot of things differently from what we’ve seen. Fortunately, his model is simple enough, although his notation is… different. His model is

$X = R\ C$,

and he calls R and C the row and column matrices respectively. He wants X to have more rows than columns, so he transposes if necessary; then he chooses C to have more columns than rows, and R will have more rows than columns. For starters, then, his X matrix looks like the usual design matrix for regression. (Incidentally, he didn’t call it X.)

He chooses $C = {v_1}^T$, from the cut-down SVD. That is, I write the SVD of X as $X = u\ w\ v^T$,

where u and v are orthogonal and w is the same shape as X. But we know from the derivation and our experience with Davis that we may also write $X = u_1\ w_1\ {v_1}^T$,

where $w_1$ is square, diagonal, and invertible (it is a cut-down w), and $u_1$ and $v_1$ are the submatrices of u and v which are conformable with $w_1$. (We’ll see all this shortly.) We have dropped the parts of u, w, and v which are not required for reproducing X. (I remind you that what we’ve lost is the orthogonality of the matrices u and v.)
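The cut-down factorization, and what we keep and lose, can be checked in a few lines of numpy. This is my own sketch using the example matrix from the previous post: the cut-down factors still reproduce X exactly, $u_1$ still has orthonormal columns, but $u_1$ is no longer an orthogonal (square) matrix.

```python
import numpy as np

# The rank-2 example matrix from the post above.
X = np.array([[2., 3., 4.],
              [1., 0., -1.],
              [4., 5., 6.],
              [3., 2., 1.],
              [6., 7., 8.]])

u, w, vt = np.linalg.svd(X)   # full SVD: u is 5x5, vt is 3x3
r = 2                         # rank of X
u1, w1, v1t = u[:, :r], np.diag(w[:r]), vt[:r, :]

# Malinowski's factors: R = u1 w1 (row matrix), C = v1^T (column matrix).
R, C = u1 @ w1, v1t

# The cut-down factors still reproduce X exactly:
print(np.allclose(R @ C, X))                   # True

# u1 has orthonormal columns, but it is no longer an orthogonal matrix:
print(np.allclose(u1.T @ u1, np.eye(r)))       # True
print(np.allclose(u1 @ u1.T, np.eye(len(X))))  # False
```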