Color: decomposing the RGB A transpose matrix

(abuse of) terminology

Sometimes I get tired of writing “xbar, ybar, zbar tables”, so I just write “xyz bar tables” or even “XYZ tables”. Similarly, for the rbar, gbar, bbar tables I write “rgb bar”. I’m not talking about anything new, just abbreviating the names.

Introduction

This is the third post about the example on p. 160 of W&S. Once again, I am going to decompose the reflected spectrum into its fundamental and its residual.

This time, however, I’m going to use the rbar, gbar, bbar tables (RGB) instead of the xbar, ybar, zbar (XYZ) tables. They did not do this.

I’ll tell you now there is one little twist in these calculations. We will need the ybar table, because we still need to use it to scale our results.

In addition to showing, as I did previously, the dual basis spectra and the orthonormal basis spectra for the non-nullspace, I will display the orthonormal basis for the nullspace.
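If the decomposition itself is what you’re after, here is a minimal numpy sketch of the mechanics. The tables and the spectrum below are made-up placeholders, not the W&S data; the point is just that the fundamental is the orthogonal projection of the spectrum onto the column space of the table matrix A, and the residual is whatever is left over, which contributes nothing to the tristimulus values. The ybar-based scaling mentioned above is a separate normalization and is not shown.

```python
import numpy as np

# Placeholder stand-ins for the real tables: n wavelength samples,
# with the three sensitivity curves (rbar, gbar, bbar) as the columns
# of A, and a reflected spectrum s sampled at the same wavelengths.
rng = np.random.default_rng(0)
n = 8
A = rng.random((n, 3))      # n x 3 table of color-matching functions
s = rng.random(n)           # reflected spectrum, as a vector

# Orthogonal projector onto the column space of A.
P = A @ np.linalg.inv(A.T @ A) @ A.T

fundamental = P @ s         # the part of s the tables can "see"
residual = s - fundamental  # the rest, in the nullspace of A^T

# The split is exact, and the residual is invisible to the tables:
assert np.allclose(fundamental + residual, s)
assert np.allclose(A.T @ residual, 0)

print(A.T @ s)              # tristimulus-like coordinates of s
print(A.T @ fundamental)    # the same numbers: the residual adds nothing
```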
Read the rest of this entry »

Color: decomposing the A transpose matrix and the reflected spectrum

This is the second post about the example on p. 160 of W&S. Here’s the first. (Incidentally, I am going to decompose the reflected spectrum into its fundamental and its residual. I do not think that they did this. Oh, this is what I did for Cohen’s toy example, but now it’s for real. Approximate, but for real.)

review the XYZ for this
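I won’t reproduce the post’s numbers here, but as a reminder of what computing the XYZ amounts to: the tristimulus values are weighted sums of the reflected spectrum against the xbar, ybar, zbar tables. Here is a hedged sketch with made-up placeholder tables; swap in the real tables and spectrum to get real numbers.

```python
import numpy as np

# Made-up placeholder tables and spectrum; the real ones are in W&S.
rng = np.random.default_rng(1)
n = 8
xbar, ybar, zbar = rng.random((3, n))
spectrum = rng.random(n)

# The tristimulus values are weighted sums of the spectrum against
# the three tables; stacked as columns, that's one matrix product.
A = np.column_stack([xbar, ybar, zbar])   # n x 3
X, Y, Z = A.T @ spectrum
print(X, Y, Z)

# Chromaticity coordinates, if you want them:
x, y = X / (X + Y + Z), Y / (X + Y + Z)
print(x, y)
```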

Read the rest of this entry »

PCA / FA example 4: Davis. Reciprocal basis 4.

(This has nothing to do with the covariance matrix of X; we’re back with A^R for the SVD of X.)

In the course of computing the reciprocal basis for my cut-down A^R

\left(\begin{array}{ll} 7.48331 & 0. \\ -3.74166 & -2.44949 \\ -3.74166 & 2.44949\end{array}\right)

I came up with the following matrix:

\beta = \left(\begin{array}{ll} 0.0445435 & 0 \\ -0.0890871 & -0.204124 \\ -0.0890871 & 0.204124\end{array}\right)

Now, \beta is very well behaved. Its two columns are orthogonal to each other:
Read the rest of this entry »
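Before reading on, you can check those claims numerically. Here is a small sketch of my own, not the post’s code: the two columns of \beta are orthogonal to each other, and \beta pairs off with the cut-down A^R the way a reciprocal basis should, with a_i \cdot b_j = \delta_{ij}.

```python
import numpy as np

# The cut-down A^R and the matrix beta from above.
A_R = np.array([[ 7.48331,  0.0     ],
                [-3.74166, -2.44949],
                [-3.74166,  2.44949]])

beta = np.array([[ 0.0445435,  0.0      ],
                 [-0.0890871, -0.204124],
                 [-0.0890871,  0.204124]])

# The two columns of beta are orthogonal to each other...
print(beta[:, 0] @ beta[:, 1])   # ~ 0

# ...and beta pairs off with A^R the way a reciprocal basis should:
# a_i . b_j = delta_ij, i.e. A_R.T @ beta is the identity (to rounding).
print(A_R.T @ beta)
```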

PCA / FA example 4: Davis. Davis & Harman 3.

Hey, since we have the reciprocal basis, we can project onto it to get the components wrt the A^R basis. After all that work to show that the S^R are the components wrt the reciprocal basis, we ought to find the components wrt the A^R basis. And now we know that that’s just a projection onto the reciprocal basis: we want to compute X B.

Recall X:

X = \left(\begin{array}{lll} -6 & 3 & 3 \\ 2 & 1 & -3 \\ 0 & -1 & 1 \\ 4 & -3 & -1\end{array}\right)

Recall B:

B = \left(\begin{array}{ll} 0.0890871 & 0. \\ -0.0445435 & -0.204124 \\ -0.0445435 & 0.204124\end{array}\right)

The product is:

X\ B = \left(\begin{array}{ll} -0.801784 & 0 \\ 0.267261 & -0.816497 \\ 0 & 0.408248 \\ 0.534522 & 0.408248\end{array}\right)

What are the column variances?

\{0.333333,0.333333\}

Does that surprise you? Read the rest of this entry »
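If you want to check the arithmetic before reading on, here is a quick numpy sketch of mine, not the post’s code, that recomputes X B and the column variances. Note that the variances come out to 1/3 only with the n-1 divisor.

```python
import numpy as np

# The design matrix X and the matrix B from above.
X = np.array([[-6,  3,  3],
              [ 2,  1, -3],
              [ 0, -1,  1],
              [ 4, -3, -1]], dtype=float)

B = np.array([[ 0.0890871,  0.0      ],
              [-0.0445435, -0.204124],
              [-0.0445435,  0.204124]])

XB = X @ B
print(XB)                         # matches the product shown above

# Column variances with the n-1 divisor (ddof=1): both are 1/3.
print(XB.var(axis=0, ddof=1))
```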

PCA / FA example 4: Davis. Reciprocal basis 2 & 3.

The reciprocal basis for A^R using explicit bases.

Here I go again, realizing that I’m being sloppy. I call

\{2,1,-3\}

the second data vector, but of course, those are the components of the second data vector. A lot of us blur this distinction between a vector and its components a lot of the time. So long as we work only with components, this isn’t an issue, but we’re about to write vectors. I wouldn’t go so far as to say that the whole point of matrix algebra is to let us blur that distinction, but it’s certainly a major reason why we do matrix algebra. But for now, let’s look at the linear algebra, distinguishing between vectors and their components.

I need a name for the second data vector; let’s call it s. We will write it two ways, with respect to two bases, and show that the two ways are equivalent.
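Here is a quick numerical preview of that equivalence, as a sketch of my own rather than the post’s derivation. The components of s with respect to the standard basis are (2, 1, -3); its components with respect to the columns of A^R come from dotting s with a reciprocal basis; and recombining the A^R columns with those components gives back the very same vector, because s happens to lie in the span of A^R’s columns.

```python
import numpy as np

# The second data vector, via its components in the standard basis.
s = np.array([2.0, 1.0, -3.0])

# The cut-down A^R, whose columns are the basis vectors of interest.
A_R = np.array([[ 7.48331,  0.0     ],
                [-3.74166, -2.44949],
                [-3.74166,  2.44949]])

# A reciprocal basis for those columns: B = A (A^T A)^{-1},
# so that A_R.T @ B is the identity.
B = A_R @ np.linalg.inv(A_R.T @ A_R)

# Components of s wrt the A^R basis: dot s with the reciprocal basis.
c = B.T @ s
print(c)            # ~ (0.267261, -0.816497)

# Writing s the second way, as a combination of the A^R basis vectors
# with those components, returns the same standard-basis components.
print(A_R @ c)      # ~ (2, 1, -3)
```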
Read the rest of this entry »

PCA / FA example 4: Davis. Scores & reciprocal basis 1.

The reciprocal basis for A^R, using matrix multiplication.

I keep emphasizing that A^Q is the new data wrt the orthogonal eigenvector matrix; and I hope I’ve said that the R-mode scores S^R = X\ A^R are not the new data wrt the weighted eigenvector matrix A^R. In a very real sense, the R-mode scores S^R do not correspond to the R-mode loadings A^R. This is important enough that I want to work out the linear algebra. In fact, I’ll work it out thrice. (And I’m going to do it differently from the original demonstration that A^Q is the new data wrt v.)

Of course, people will expect us to compute the scores S^R, and I’ll be perfectly happy to. I just won’t let anyone tell me they correspond to the A^R.

First off, we have been viewing the original data as vectors wrt an orthonormal basis. Recall the design matrix:

\left(\begin{array}{lll} -6 & 3 & 3 \\ 2 & 1 & -3 \\ 0 & -1 & 1 \\ 4 & -3 & -1\end{array}\right)

Let’s take just the second observation:

X_2 = (2,\ 1,\ -3)

Those are the components of a vector.
Read the rest of this entry »
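To make that distinction concrete numerically, here is a sketch of my own, not the post’s derivation. The scores S^R = X A^R and the components of the data with respect to the A^R basis are different matrices; the components come from dotting the data with a reciprocal basis B for A^R, and the two differ by a factor of (A^R)^T A^R, here diag(84, 12).

```python
import numpy as np

# The design matrix X and the weighted eigenvector matrix A^R.
X = np.array([[-6,  3,  3],
              [ 2,  1, -3],
              [ 0, -1,  1],
              [ 4, -3, -1]], dtype=float)

A_R = np.array([[ 7.48331,  0.0     ],
                [-3.74166, -2.44949],
                [-3.74166,  2.44949]])

# The R-mode scores.
S_R = X @ A_R
print(S_R)

# The components of the data wrt the A^R basis come from the
# reciprocal basis B (so that A_R.T @ B is the identity), not from A_R.
B = A_R @ np.linalg.inv(A_R.T @ A_R)
components = X @ B
print(components)

# They differ by a column scaling of A_R.T @ A_R, here diag(84, 12):
print(np.allclose(S_R, components @ (A_R.T @ A_R)))   # True
```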

transpose matrix & adjoint operator 2

(
begin digression
Just in case you’ve seen this in another form, let me make some connections. If you don’t recognize any of this digression, that’s OK; you can move along, there’s nothing to see here.
I am taking a linear operator L: V\rightarrow V, from a vector space V to the same vector space V. In fact, I have more than a vector space: V is an inner product space; I have a “dot product”.
What I call the reciprocal basis is often called the dual basis; in fact, Halmos calls it the dual basis. But that terminology is also associated specifically with the so-called dual space V* of linear functionals on V; in that case, the dual basis is a basis in V*. There can be a great deal of confusion here.
The dual space V* can be defined without an inner product on V; the inner product on V can be defined without ever mentioning the dual space V*. But if we introduce both the inner product and V*, then there is a natural isomorphism between elements of V and of V*. I have seen people treat the one-to-one relationship between elements of V and V* as an identity, and confuse the inner product of two vectors in V with the effect of a linear functional on a vector. (Worse, I have seen people assert that an inner product involves one element of V and one element of V*.)
There is a one-to-one correspondence between my right shoe and my left, but they are not identical. Isomorphism is not always identity.
Here, I have a finite-dimensional vector space V with an inner product on it. I have two bases (original and new) on V, and I want to construct a third basis for V. I call it the reciprocal basis to emphasize that it is not a basis on V*.
end digression
)
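For anyone who likes a concrete picture of that correspondence, here is a small numerical illustration of my own, in R^3 with the ordinary dot product; it is not anything from the post itself. Given a non-orthonormal basis of V, each dual-basis functional in V* can be represented, via the inner product, by a vector in V; those representing vectors form the reciprocal basis, and the representation is an isomorphism, not an identity.

```python
import numpy as np

# V = R^3 with the ordinary dot product; a non-orthonormal basis of V
# as the columns of E.  (My own toy example, not the post's.)
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The dual-basis functionals f_j in V* satisfy f_j(e_i) = delta_ij.
# With an inner product available, each f_j is represented by a vector
# b_j in V, via f_j(v) = <b_j, v>.  Those vectors are the reciprocal
# basis: the columns of B = (E^{-1})^T.
B = np.linalg.inv(E).T

# The pairing <b_j, e_i> really is delta_ij:
print(B.T @ E)        # identity matrix

# Applying f_j to a vector v is the same as dotting v with b_j.
# b_j lives in V, f_j lives in V*: isomorphic, but not identical.
v = np.array([2.0, -1.0, 3.0])
print(B.T @ v)        # coordinates of v in the basis E
print(E @ (B.T @ v))  # which do reconstruct v
```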
Let’s see how this plays out.

Read the rest of this entry »

transpose matrix and adjoint operator 1


Recall the post about Schur’s lemma. I provided an example of a matrix A which could be diagonalized (to B), but not by an orthogonal transition matrix. A is not a normal matrix: it does not commute with its transpose:

A\ A^T \neq A^T \ A.

It was the Monday morning after the post when I woke up thinking, “but B is a diagonal matrix, so it is its own transpose…

B^T = B 

and therefore it is trivially normal:

B \ B^T = B^2 = B^T \ B,

but I know that’s wrong.”

Read the rest of this entry »
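I don’t have the matrix from the Schur’s lemma post in front of me here, so as a stand-in, here is a small example of my own showing the same setup: a matrix that is diagonalizable, but only by a non-orthogonal transition matrix, and which is not normal, even though its diagonal form B is trivially normal.

```python
import numpy as np

# A stand-in for the post's matrix A (not the original): its eigenvalues
# 1 and 2 are distinct, so it is diagonalizable, but it is not normal.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

print(np.allclose(A @ A.T, A.T @ A))             # False: A is not normal

# Diagonalize it: A = P B P^{-1}.
eigvals, P = np.linalg.eig(A)
B = np.diag(eigvals)
print(np.allclose(A, P @ B @ np.linalg.inv(P)))  # True

# The transition matrix P is not orthogonal...
print(np.allclose(P.T @ P, np.eye(2)))           # False

# ...while B, being diagonal, is its own transpose and trivially normal.
print(np.allclose(B @ B.T, B.T @ B))             # True
# Similarity by a non-orthogonal P does not carry normality over to A.
```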