PCA / FA Malinowski Example 5: noise and both X and Y data

I think I understand what Malinowski is talking about on p. 63. He has no example to illustrate it, and I have to assume that he was a little sloppy: I must assume that the matrix T in eq. 3.136 is different from the matrix T in eq. 3.134. I don’t like having to assume that, but the math makes sense this way; it’s the best way I have found to reconcile his equations.

I had thought that eq. 3.136 was a typo, but I no longer think so.

If what I’m about to do is what Malinowski was suggesting, then the credit is his; in any case, the mathematics looks good to me, and if it’s not what he suggested, it is nevertheless my reaction to his work.

the data and its SVD



Why not orthogonal?

Consider the diagonalization of a matrix:

D = P^{-1}\ A\ P\ ,

or

A = P\ D\ P^{-1}\ .

A friend asked me why I couldn’t always choose P to be an orthogonal (more generally, a unitary) matrix. Actually, it was more like: remind me why I can’t do that. I fumbled for the answer. Sad dog.

We recall that an orthogonal matrix Q is real, square and satisfies

Q^T\ Q = I\ ,

where Q^T is the transpose of Q. That is to say, the inverse of Q is its transpose: Q^T = Q^{-1}\ . If our eigenvector matrix P is orthogonal, we may replace

A = P\ D\ P^{-1}\ .

by

A = P\ D\ P^{T}\ .

A unitary matrix is complex (possibly real), square and satisfies

Q^{\dagger}\ Q = I\ ,

where Q^{\dagger} is the conjugate transpose of Q. That is to say, the inverse is the conjugate transpose: Q^{\dagger} = Q^{-1}\ .

Finally, we recall that a matrix is normal if it commutes with its conjugate transpose:

A^{\dagger}\ A  = A\ A^{\dagger}\ .

Now we have to go all the way back to the Schur’s lemma post here and recall that

  1. Any matrix A can be brought to upper triangular form by some unitary matrix.
  2. A matrix A can be diagonalized by some unitary matrix if and only if A is normal.

That is to say, our guaranteed upper triangular matrix is in fact diagonal if and only if A is normal.
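Here is a small numpy sketch of that claim (my check, not anything from Malinowski): a symmetric matrix is normal, and its eigenvector matrix comes out orthogonal; a non-normal matrix can still be diagonalized, just not by an orthogonal P.

```python
import numpy as np

# A symmetric (hence normal) matrix: its eigenvector matrix is orthogonal.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w, P = np.linalg.eigh(A)                  # eigh handles symmetric/Hermitian matrices
print(np.allclose(P.T @ P, np.eye(2)))    # True: P is orthogonal

# A non-normal matrix: it has distinct eigenvalues, so it is diagonalizable,
# but its eigenvector matrix cannot be orthogonal.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
print(np.allclose(B.T @ B, B @ B.T))      # False: B is not normal
wB, PB = np.linalg.eig(B)
print(np.allclose(PB.T @ PB, np.eye(2)))  # False: PB is not orthogonal
print(np.allclose(np.linalg.inv(PB) @ B @ PB, np.diag(wB)))  # True: PB still diagonalizes B
```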

Return to our diagonalization: we have

D = P^{-1}\ A\ P\ ,

By (2), if A is not normal, then P cannot be unitary; if, in addition, P is real, then P cannot be orthogonal.

I suppose I should remind us all that the matrices X\ X^T and X^T\ X (cf. covariance and correlation matrices, R-mode and Q-mode) are normal, so that if we diagonalize them, our eigenvector matrices are orthogonal.
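A quick check of that remark with random data (a sketch of mine; the data matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
S = X.T @ X                   # the "correlation-style" product X^T X

# S is symmetric, hence normal, so its eigenvector matrix is orthogonal.
print(np.allclose(S @ S.T, S.T @ S))    # True: S is normal
w, V = np.linalg.eigh(S)
print(np.allclose(V.T @ V, np.eye(4)))  # True: eigenvectors are orthonormal
```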

As for the SVD, X = u\ w\ v^T\ , we are using two matrices u and v, not one matrix P, and u and v are guaranteed unitary.
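We can confirm that guarantee numerically; this sketch uses numpy's SVD on an arbitrary real matrix, where both factors come out orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
u, w, vT = np.linalg.svd(X)               # numpy returns v transposed

print(np.allclose(u.T @ u, np.eye(5)))    # True: u is orthogonal
print(np.allclose(vT @ vT.T, np.eye(3)))  # True: v is orthogonal
print(np.allclose(X, (u[:, :3] * w) @ vT))  # True: X = u w v^T
```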

As for my fumbling the answer…. An old friend said that you understand something when it’s intuitively obvious. Well, it just is not intuitively obvious to me that normality of the one gives me orthogonality of the other. I should probably take another look at the proof…. In the meantime, I’ll try to just remember the answer.

PCA / FA Malinowski: example 6. Why Target Testing?

We have seen that Malinowski cares about “target testing”. That is, having reduced the dimension of his data, he goes looking for other vectors that lie in or near the “factor space”. Let me show you why. I ought to say that I really like what we do here.

This will be a long post, but I hope most of it is familiar; and much of it will be matrices printed only for reference, should you be following the actual computations.

Here’s the data for what he calls a hypothetical example, on p. 6, way before we can actually follow what he’s doing. We have a problem with notation, so I’m going to call the data matrix D instead of X.

D = \left(\begin{array}{lllll} 0.005 & 0.031 & 0.063 & 0.091 & 0.046 \\ 0.04 & 0.172 & 0.356 & 0.444 & 0.218 \\ 0.103 & 0.283 & 0.484 & 0.471 & 0.208 \\ 0.116 & 0.323 & 0.562 & 0.548 & 0.241 \\ 0.125 & 0.318 & 0.516 & 0.45 & 0.185 \\ 0.104 & 0.267 & 0.43 & 0.376 & 0.154\end{array}\right)

Each row measures a response to a specific frequency of ultraviolet light; each column is a different substance.
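Before we get into the details, here is a minimal numpy sketch for anyone who wants to follow along with D (this is just one of the ways of getting the SVD; I make no claim about which one Malinowski uses):

```python
import numpy as np

# Malinowski's hypothetical data matrix, rows = UV frequencies, columns = substances.
D = np.array([
    [0.005, 0.031, 0.063, 0.091, 0.046],
    [0.040, 0.172, 0.356, 0.444, 0.218],
    [0.103, 0.283, 0.484, 0.471, 0.208],
    [0.116, 0.323, 0.562, 0.548, 0.241],
    [0.125, 0.318, 0.516, 0.450, 0.185],
    [0.104, 0.267, 0.430, 0.376, 0.154],
])

u, w, vT = np.linalg.svd(D)
print(w)   # singular values, largest first; their sizes hint at the number of factors
```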

get the SVD three ways


Happenings – 10 July

I know, of course, that my posts have been few and far between recently. What’s going on?

Nothing complicated. I’m working on 6 things, and none of them is ready to post. Once upon a time I saw a statement by Isaac Newton that the secret of his success was that he worked on any given problem until he had solved it. I’m sure there was more to his success than that. In any case, I tend to walk away from problems, let them stew while I do something else, and hope for new insight when next I pick them up. I remembered his comment precisely because it’s what I don’t do.

rotating coordinate systems: examples 2 & 3

2: released in rotating frame (linear motion, inertial frame)

(See the previous post, example 1, for notation.)

Suppose we are holding an object fixed on the merry-go-round at a distance R on the x-axis: it is stationary in the rotating frame. Now suppose that the surface is frictionless. We release the object at t = 0. What is its motion?

At t = 0 we have initial values of \rho and \nu:

\rho_0 = \{R,\ 0,\ 0\}

\nu_0 = \{0,\ 0,\ 0\}

We apply the transition matrix (setting t = 0; note that all axes coincide) to get initial r and v from initial \rho and \nu:

r_0 = T\ \rho_0 = \{R,\ 0,\ 0\}

v_0 = T\ \nu_0 - \omega\ N\ T\ \rho_0 = \{0,\ R \omega ,\ 0\}

(Yes, wrt the fixed frame, the initial position r_0 is on the x-axis, and the initial velocity v_0 is tangential, i.e. having only a y-component.)
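As a numerical check, here is a sketch under my assumptions about the notation from example 1: T(t) is the rotation about z carrying the rotating frame into the fixed frame, and N is the antisymmetric matrix chosen so that the sign in the velocity formula works out. Both of those conventions are assumptions on my part, and R and omega are just hypothetical numbers.

```python
import numpy as np

R, omega = 2.0, 1.5   # hypothetical values for the check

def T(t):
    # assumed form of the transition matrix: rotation about z by angle omega*t
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# assumed antisymmetric matrix N, chosen to match the signs above
N = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])

rho0 = np.array([R, 0.0, 0.0])
nu0 = np.zeros(3)

r0 = T(0) @ rho0                           # initial position in the fixed frame
v0 = T(0) @ nu0 - omega * N @ T(0) @ rho0  # initial velocity in the fixed frame
print(r0)   # [R, 0, 0]: on the x-axis
print(v0)   # [0, R*omega, 0]: tangential, y-component only
```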