## Cohen’s example again

Let me now show you how I would do Cohen’s example. (His computations, pretty much, were the previous post.) I cannot over-emphasize that he deserves a lot of credit for getting the mathematics right, even if he didn’t name it correctly or do it beautifully.

I start with the A matrix:

This, as I have said, is a toy example of real color matching functions. Each column corresponds to one of the tri-stimulus values; each row represents a wavelength.

My summary of Cohen began with a different matrix — the E matrix — only because it was easier to enter into the computer. A is primary.

The next issue we have to cope with is that we really care about the transpose rather than about A. A’ will be applied to a spectrum, and will deliver three tri-stimulus values for it. I’m not convinced that any notational choice I make now will be without problems.

(As I said in the previous post, I use A’ for the transpose whenever it is convenient.)

I am going to let

B = A’.

B represents the linear operator I wish to study; it is the 3×7 matrix A’.

It maps R7 to R3. At most, it can be of rank 3; and then it would have a four-dimensional null space.

What shall we do with it? All together now, class: find its singular value decomposition (SVD).

B = u w v′,

where u and v are orthogonal (hence square) matrices, and w is as close to diagonal as it can be, given that it is the same shape as B.

The Mathematica command is

{u,w,v}=SingularValueDecomposition[B];

where the final semi-colon suppresses printing.
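For readers without Mathematica, the same computation can be sketched in NumPy. The 7×3 matrix A here is a made-up full-rank stand-in for the real color matching functions (the actual numbers are not reproduced in this post); note that NumPy returns the transpose of v.

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A (rows = wavelengths,
# columns = tri-stimulus values); the real numbers are not shown here.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T  # B = A', the 3x7 operator we study

# NumPy returns vh = v', where Mathematica's SingularValueDecomposition[B]
# returns {u, w, v} with B = u.w.v'.
u, s, vh = np.linalg.svd(B, full_matrices=True)
v = vh.T                   # 7x7 orthogonal
w = np.zeros_like(B)       # 3x7, same shape as B
np.fill_diagonal(w, s)     # singular values on the "diagonal"

assert np.allclose(B, u @ w @ v.T)      # B = u w v'
rank = int(np.sum(s > 1e-10 * s[0]))    # all three values nonzero => rank 3
```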

It is convenient to look first at the singular values themselves, the nonzero “diagonal” elements of the matrix w.

That all three are “significantly” larger than zero tells us that the matrix B (and therefore its transpose, A) is, indeed, of rank 3.

In case this is new to you, the w matrix is the same 3×7 shape as B, with the singular values on the “diagonal” and zeros everywhere else.

(And if this is new to you, check the tag cloud and the categories. There are a lot of examples of the Singular Value Decomposition. I could almost rename this blog “Applications of the SVD”.)

What about v?

It is 7×7. We know from the discussion of the four fundamental subspaces (look at the blue text) that the 4 rightmost columns of v are an orthonormal basis for the null space of B, and the 3 leftmost columns are an orthonormal basis for the pre-image of the range of B.

That is, we write v as a partitioned matrix [v] = [v1 v2], where v1 is 7×3 and v2 is 7×4.

We should confirm that B.v2 = 0, i.e. every column of v2 is in the null space of B:
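Here is that partition and the null-space check sketched in NumPy (self-contained, with a made-up stand-in for A):

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

v = np.linalg.svd(B, full_matrices=True)[2].T  # NumPy returns v'
v1 = v[:, :3]   # orthonormal basis for the pre-image of the range of B
v2 = v[:, 3:]   # orthonormal basis for the null space of B

assert np.allclose(B @ v2, 0)            # every column of v2 is in the null space
assert np.allclose(v.T @ v, np.eye(7))   # v is orthogonal, so v1 and v2 are orthonormal
```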

Once we have a basis, v1, for the pre-image of the range, we know that we can construct a projection operator as P = v1 v1′.

Because I didn’t call it R, I can now recall Cohen’s R matrix, computed using a pseudo-inverse. I assure you that it is the same: we have P = R.
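That agreement is easy to check numerically, with the pseudo-inverse route standing in for Cohen's computation of R (a sketch with a made-up A):

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

v1 = np.linalg.svd(B, full_matrices=True)[2].T[:, :3]
P = v1 @ v1.T                 # my projection operator

R = np.linalg.pinv(B) @ B     # pseudo-inverse route: A (A'A)^-1 A'

assert np.allclose(P, R)      # P = R
assert np.allclose(P @ P, P)  # idempotent, as a projection must be
assert np.allclose(P, P.T)    # and symmetric (orthogonal projection)
```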

On the other side, since B is of full rank, its range is equal to its codomain, both being R3, so the entire matrix u is an orthonormal basis for the range of B.

Now the thing to do is, as before, split some given spectrum into its fundamental and its residual (metameric black). We only really have one good test, an equal energy spectrum.

We get the fundamental of our test spectrum…

…by applying our projection operator to it.

and that is exactly what we got before (come on: P = R, so we have the same projection operator).

Then we get the residual n as the difference…

and that, too, is the same as before.
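In the same NumPy terms, the split of the equal energy spectrum into fundamental and metameric black looks like this (self-contained, made-up stand-in for A):

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

v1 = np.linalg.svd(B, full_matrices=True)[2].T[:, :3]
P = v1 @ v1.T

spectrum = np.ones(7)   # equal energy test spectrum
f = P @ spectrum        # the fundamental
n = spectrum - f        # the residual, a metameric black

assert np.allclose(B @ n, 0)             # residual has tri-stimulus {0,0,0}
assert np.allclose(B @ f, B @ spectrum)  # f carries the same tri-stimulus values
```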

As a refresher, let’s compute the components of the fundamental wrt the v1 basis. Since v1 is orthonormal, we may find the components of the fundamental by taking dot products, the fundamental dotted into each basis vector in v1. But that’s just v1′ f,

which says that f can be written as the sum

of the following three multiples of the basis vectors (I’m going to round off for clarity).

The sum does indeed (almost) match our fundamental. (The discrepancy comes from my rounding off.)
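That computation, sketched: take dot products (i.e. apply v1′) to get the components, then sum the multiples of the basis vectors to recover f (made-up stand-in for A, as before):

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

v1 = np.linalg.svd(B, full_matrices=True)[2].T[:, :3]
f = (v1 @ v1.T) @ np.ones(7)   # fundamental of the equal energy spectrum

c = v1.T @ f                   # components of f wrt the orthonormal basis v1
recon = c[0]*v1[:, 0] + c[1]*v1[:, 1] + c[2]*v1[:, 2]

assert np.allclose(recon, f)   # the sum of the three multiples is f (no rounding here)
```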


The last thing we need to get is the dual basis E for the rows of B. I just go look it up in the previous post…


Check that B E = I; it does.
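Rather than looking it up, the dual basis can also be computed directly as E = A (A’A)^-1, one way of writing the pseudo-inverse relationship. A sketch (made-up stand-in for A):

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

Ma = A.T @ A               # Cohen's Ma
E = A @ np.linalg.inv(Ma)  # dual basis for the rows of B

assert np.allclose(B @ E, np.eye(3))  # B E = I, as required of a dual basis
```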

And why do we care about the dual basis, i.e. the columns of E? Well, if we apply B to our test spectrum, we get


What are those three numbers? They are components of our test spectrum wrt the dual basis E!

That is, the sum of the following three vectors (which are multiples of the basis vectors in E)…

is

… which, once again, differs from our fundamental

only because of the rounding used to make the numbers readable.

To say that again: **the dual basis E gives us a geometric interpretation of the tri-stimulus values we computed by applying B (i.e. A’) to a spectrum. The tri-stimulus values are components of the spectrum (equivalently, of its fundamental) wrt the E basis.**
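That bolded claim is easy to check numerically: apply B to a spectrum to get the tri-stimulus values t, then take the t-weighted sum of the columns of E and compare it with the fundamental (made-up stand-in for A):

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

v1 = np.linalg.svd(B, full_matrices=True)[2].T[:, :3]
P = v1 @ v1.T
E = A @ np.linalg.inv(A.T @ A)  # dual basis

spectrum = np.ones(7)
t = B @ spectrum                # tri-stimulus values

# The t-weighted sum of the columns of E reproduces the fundamental.
assert np.allclose(E @ t, P @ spectrum)
```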

So. What did Cohen and I have that was the same? The matrices A and E, and the projection operator R. But note that where he computed R as

R = E A’,

and that’s short for R = A (A’A)^-1 A’,

I would use

R = v1 v1′.

(I’ll admit that’s not much of a savings if we have both A and E.)

What did we have that was related? Where he had two orthonormal bases F1 and F2, I have one, v1. All three of these bases span the same subspace of R7.

## Digression: ways to show that F1, F2, and v1 span the same subspace of R7

One way is to show that the columns of F1 and F2 are preserved by the projection operator P constructed from v1. We compute P F1 and subtract F1.

And yes, their difference is the zero matrix.

Similarly for F2.

Alternatively, F1 and F2 should generate the same projection operator as v1 did.

I assure you that F1 F1′ is the same as P and R. And similarly for F2 F2′.
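For the record, Cohen's F1 route can be checked end to end in NumPy, following his notation Ma = A’A, Me = Ma^-1, with Ge a lower-triangular Cholesky factor of Me (made-up stand-in for A):

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

v1 = np.linalg.svd(B, full_matrices=True)[2].T[:, :3]
P = v1 @ v1.T

# Cohen's route: Ma = A'A, Me = Ma^-1, Ge lower triangular with Me = Ge Ge'.
Ma = A.T @ A
Me = np.linalg.inv(Ma)
Ge = np.linalg.cholesky(Me)
F1 = A @ Ge

assert np.allclose(F1.T @ F1, np.eye(3))  # F1 is orthonormal
assert np.allclose(F1 @ F1.T, P)          # and generates the same projection
```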

Alternatively, again, we could simply write the columns of F1 (then F2) as a linear combination of the columns of v1, and solve for the components.

Alternatively, yet again, we could try to find a transition matrix relating F1 and v1, then F2 and v1. That’s awkward because they’re not square matrices. Guess what? Use a pseudo-inverse! (Very likely the next math post.)

I have no idea what is special about F1 and F2. They are orthonormal bases, like v1, for the pre-image of the range of B. Since they span the same space, they are related by rotations (and possibly a reflection).

## end digression, return to main thread

Finally, what did I do that Cohen did not? The singular value decomposition, which got me a basis, v2, for the null space of B — which he did not have. And I daresay that v1 was easier to get than either F1 or F2.

**I think having v2 is incredibly useful. I get a basis for metameric blacks.**
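Concretely: any linear combination of the columns of v2 is a metameric black, i.e. has tri-stimulus values {0,0,0}. A sketch with a made-up stand-in for A and arbitrary coefficients:

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

v2 = np.linalg.svd(B, full_matrices=True)[2].T[:, 3:]  # null-space basis

coeffs = np.array([1.0, -2.0, 0.5, 3.0])  # any four numbers will do
black = v2 @ coeffs                       # a metameric black

assert np.allclose(B @ black, 0)          # tri-stimulus values {0,0,0}
```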

(Of course, Cohen had computed the matrices Ma and Me, and Ga and Ge, but I don’t need them because I wasn’t computing F1 and F2.)

Let me summarize that.

Cohen

- A and E;
- Ma = A’A, Me = E’E, where Me = Ma^-1;
- Ga and Ge, Cholesky decompositions of Ma and Me;
- orthonormal bases F1 = A Ge and F2 = E Ga;
- the projection operator R = E A’ = A (A’A)^-1 A’.

Rip

- A;
- A’ = B = u w v’;
- [v] = [v1 v2], where both v1 and v2 are orthonormal bases;
- the projection operator R = v1 v1′;
- the dual basis E.

(Strictly speaking, A is the dual basis to E, but the relationship is symmetric. Nevertheless, the rows of A’ live in the dual space V*, while the columns of E live in the original space V.)

I dispensed with everything leading to F1 and F2. I use v1 in place of F1 and/or F2. I get a basis, v2, for the null space.

**I think the key point is using the projection operator to split a spectrum into fundamental and null vectors.** It’s a unique orthogonal decomposition into a vector which has the same tri-stimulus values as the original spectrum, and a vector in the null space (metameric black, with tri-stimulus values = {0,0,0}.)

(The decomposition is unique to A, but A itself is not unique. More to follow.)

November 16, 2009 at 11:57 pm

In this post I said, “I have no idea what is special about F1 and F2.”

I now understand what is special about Cohen’s F1 and F2 orthonormal bases.

There are two significant facts about each of them. Let us consider F1. Cohen emphasizes that the first column of F1 is proportional to the first column of E. I don’t yet know why — and I’ll confess I don’t particularly care — but it certainly appears to be true. (I know, someday I’ll have to work it out so that I know exactly when and why it’s true.)

And the first columns of F1 and E are proportional, rather than equal, precisely because each column of F1 is a unit vector, but the columns of E are not.

Oddly enough, Cohen does not mention the second property, which is equally important, and trivial to see (unlike the first). This one comes straight from the definition of F1.

By definition, F1 = A Ge, where Ge is lower triangular. That means that the third column of F1 is proportional to the third column of A; the proportionality constant is the (3,3) entry of Ge.

Again, we must have proportionality rather than equality because one is of unit length, the other is not.

So.

We can compute the first and third columns of F1 just by normalizing the first column of E and the third column of A. (Similarly, F2 = E Ga implies that the third column of F2 is proportional to the third column of E, and Cohen assures us that the first column of F2 is proportional to the first column of A.)

Can we by any chance compute the second column of F1 from the two we know?

Of course we can. Easily. F1 is an orthonormal basis for a 3D subspace. Cross product, anyone?
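One way to carry out that cross-product idea (my own implementation, not Cohen's): express the known first and third columns in v1 coordinates, where they are ordinary 3-vectors, cross them, and map back. That recovers the middle column up to sign. Again with a made-up stand-in for A:

```python
import numpy as np

# Made-up full-rank 7x3 stand-in for Cohen's A; B = A'.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 3))
B = A.T

v1 = np.linalg.svd(B, full_matrices=True)[2].T[:, :3]

E = A @ np.linalg.inv(A.T @ A)                   # dual basis
Ge = np.linalg.cholesky(np.linalg.inv(A.T @ A))  # lower triangular
F1 = A @ Ge                                      # Cohen's orthonormal basis

# Columns 1 and 3 of F1, by normalizing E[:,0] and A[:,2].
f1 = E[:, 0] / np.linalg.norm(E[:, 0])
f3 = A[:, 2] / np.linalg.norm(A[:, 2])
assert np.allclose(abs(f1 @ F1[:, 0]), 1)
assert np.allclose(abs(f3 @ F1[:, 2]), 1)

# Middle column: cross product taken in v1 coordinates, mapped back.
f2 = v1 @ np.cross(v1.T @ f3, v1.T @ f1)
assert np.allclose(abs(f2 @ F1[:, 1]), 1)  # matches up to sign
```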

(F1 and F2 were computed here, in the previous post: https://rip94550.wordpress.com/2009/09/28/cohen-visual-color-and-color-mixture/)