**Cohen** (“Visual Color & Color Mixture”, see the bibliography) did something very interesting. In fact, he did something useful which I had never seen before.

Although this post uses some matrices which we saw in the color posts, I think this can stand on its own: you need not have read the color posts. But if you are specifically interested in color, or in Cohen’s work, this post is very relevant.

**He was trying to describe how to find a transition matrix between two given data matrices.** This will come in handy — very handy! — whenever people give the alleged result of an unspecified linear transformation of a data matrix.

On p. 93, he wrote

D T = C in general

F T = A in particular

with

$T = (D'D)^{-1}\,D'\,C$.

Because D’D (that is, $D'D$) is invertible, we infer that D and C are taller than wide, like the A matrix… like our usual data or design matrix X.

We also infer that T is a small matrix: its size is c×c, where c is the number of columns of D. D and C are the same size.
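Cohen's recipe is easy to check numerically. Here is a minimal numpy sketch with made-up D and T (the variable names and sizes are mine, not Cohen's): if C really is D times some matrix, the formula recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: D is tall and thin (more rows than columns),
# and C is produced from D by some matrix T_true.
D = rng.standard_normal((21, 3))
T_true = rng.standard_normal((3, 3))
C = D @ T_true

# Cohen's formula: T = (D'D)^{-1} D' C
T = np.linalg.inv(D.T @ D) @ D.T @ C

# T recovers T_true, up to floating-point error.
print(np.allclose(T, T_true))  # True
```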

I do have to point out that in his equation (and his notation)

D T = C,

T is an attitude matrix! I will change that. (Trust me: you want me to change that.)

Since we have both XYZ (i.e. 1931 xyz bar) and RGB (1931 rgb bar) tables, and since they are supposed to be related by the transformation matrix T31…

… we should try the idea embodied in that equation for T. Assuming that there is a transition matrix T, let J and K stand for the XYZ and RGB matrices respectively.

As usual, let me use 20 nm intervals instead of the published 5 nm intervals, so J and K each have 21 rows and 3 columns.

In case you have not read the color posts, all I am saying is that J and K and T31 are given matrices, and T31 is supposed to be a transition matrix between J and K.

We do know — or take it as given — that for the transition matrix T31, old is XYZ ~ J, so let me write

K’ = T J’

That says T is a transition matrix which, applied to each column vector of J’, delivers the corresponding column of K’.

Transpose:

K = J T’

and then pretend $J^{-1}$ exists (even though J is rectangular):

$T' = J^{-1}\,K$,

and then substitute the appropriate pseudo-inverse:

$T' = (J'J)^{-1}\,J'\,K$.

(The inappropriate pseudo-inverse would use JJ’ instead — which cannot be of full rank, given the assumed shape of J, hence cannot be invertible.)
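A quick numerical check of that rank argument, with a made-up tall J: the small product J'J is invertible, while the large product JJ' cannot be.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tall, full-rank matrix like J: 21 rows, 3 columns.
J = rng.standard_normal((21, 3))

# J'J is 3x3 and of full rank, hence invertible...
print(np.linalg.matrix_rank(J.T @ J))  # 3

# ...but JJ' is 21x21 with rank at most 3, hence singular.
print(np.linalg.matrix_rank(J @ J.T))  # 3, not 21
```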

That equation defined the transpose T’; transposing it gives T itself, $T = K'\,J\,(J'J)^{-1}$.

Recall the given T31.

By doing it my way, I constructed the counterpart to T31, which is the published transition matrix between XYZ and RGB (1931) with XYZ considered the “old” basis.

T and T31 are very close, but not the same.

Even if I used the full tables, at 5 nm intervals, we would not get T31.

First, I am certain that if there were a transition matrix T, this computation would have found it. Since the computation did not yield T31, I infer that T31 is not the transition matrix.

Second, is T the transition matrix? We should apply T to the appropriate table and see what we get. We had assumed

K’ = T J’, or equivalently

K = J T’

so let’s compute a counterpart to K, namely Q = J T’, and compare it to K via the difference Q – K.


The largest entry in absolute value is 0.000069765. The three columns of the difference Q – K look like this:

Pretty small, and pretty random, but still not really zero. Well, maybe it wouldn’t be. Let’s try our magic formula on J and Q, since they are related by the transition matrix T. From

$T' = (J'J)^{-1}\,J'\,K$,

I compute what I call t’:

$t' = (J'J)^{-1}\,J'\,Q$,

and then the transpose of the transpose, t” = t, matches the T computed earlier.

So. That equation for T’ does recover the transition matrix if there is one. The discrepancy between T and T31 is real.

Maybe I’m making too big a deal of this. I infer that the XYZ and RGB tables do not span exactly the same subspace. They almost do, but not quite.

**To be more explicit: the 1931 XYZ (xbar, ybar, zbar) and RGB (rbar, gbar, bbar) tables are not exactly related by any transition matrix whatsoever, although the published T31 is close to being one for them.**

I have found a marvelous example — disappointing as a published result but marvelous as a bad example — where the discrepancy is significant. I expect to show it in another post, soon.

We’ve been looking at the computation in one way: if there is a transition matrix T such that

K’ = T J’

then

$T' = (J'J)^{-1}\,J'\,K$.

What if there is no such transition matrix T? We can still define T by that equation, and compute $T' = (J'J)^{-1}\,J'\,K$ (still assuming J is of full rank, with more rows than columns), but what are we getting, if T itself does not exist as a transition matrix?

That is, what if we compute T, and find that

$K' \neq T\,J'$?

Well, this will be clearer in the next post, but…. That equation for T’ looks an awful lot like the normal equations for a least-squares fit.

What I will illustrate in the next post is that $T' = (J'J)^{-1}\,J'\,K$ generates a transition matrix to the subspace spanned by J. That is, T is a transition matrix — any invertible matrix is! — but it relates J to a subspace other than (the one spanned by) K.

(In our case, we had 21 observations; the 3 columns of J and of K define 3D subspaces of $\mathbb{R}^{21}$. Those two 3D subspaces are almost the same, but not exactly.)
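To see numerically what the formula delivers when no exact transition matrix exists, here is a sketch with a made-up J and a deliberately perturbed K (all names and numbers are mine): the computed Q = J T' is the projection of each column of K onto the subspace spanned by J.

```python
import numpy as np

rng = np.random.default_rng(3)

J = rng.standard_normal((21, 3))
T_true = rng.standard_normal((3, 3))

# K is almost, but not exactly, J T_true': the small perturbation
# pushes the columns of K out of the subspace spanned by J.
K = J @ T_true.T + 1e-3 * rng.standard_normal((21, 3))

# The formula still computes: T' = (J'J)^{-1} J' K
T_prime = np.linalg.inv(J.T @ J) @ J.T @ K
Q = J @ T_prime

# Q is not K: there was no exact transition matrix...
print(np.allclose(Q, K))  # False

# ...but Q is the projection of K onto the column space of J:
# the residual K - Q is orthogonal to every column of J.
print(np.allclose(J.T @ (K - Q), 0))  # True
```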

Let me try to explain it here without a good (I mean, bad) example. If J and K span the same subspace, then any one column of K (think of it as the dependent variable y) can be written as a linear combination of the columns of J (think of it as the X matrix).

But if J and K do not span the same space, then the best we can do is to find that linear combination (call it yhat !) of columns of X which is as close to y as possible.

The columns of K, Q, and J really are examples of y, yhat, and X.

And what is T’? Well, if we write one column of it with y and X,

$(X'X)^{-1}\,X'\,y$,

we might recognize that as the coefficient vector in ordinary least squares.

Let me show you that, while providing another example of the utility of the pseudo-inverse.

The derivation of the normal equations for ordinary least squares is fairly complicated, if only because it involves the derivative of a matrix. But the pseudo-inverse would let us recover the equations themselves.

Deriving them is one thing; recovering the form of them once we know they exist is another.

We write our model (this is a quick & dirty recollection of the answer, not a derivation):

$y = X\,\beta$.

(I guess I have to talk about that. True is $\hat{y} = X\,\beta$. But I write y instead of yhat, as though we have equality rather than a projection onto a subspace.)

We pretend that X is invertible, and write

$\beta = X^{-1}\,y$.

Since X is not invertible, but X’X is (by assumption X is of full rank with more rows than columns), we replace the non-existent inverse by the pseudo-inverse and write

$\beta = (X'X)^{-1}\,X'\,y$.

The real derivation gave us that very answer. Once I know that, I never need to compare them again. I have a plausibility argument that gives me the known right answer.
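The same plausibility check is easy in numpy, with made-up X and y: for a tall, full-rank X, the Moore-Penrose pseudo-inverse and the normal-equations formula agree.

```python
import numpy as np

rng = np.random.default_rng(5)

# A tall, full-rank X and an arbitrary y.
X = rng.standard_normal((21, 3))
y = rng.standard_normal(21)

# The "recovered" normal-equations answer: beta = (X'X)^{-1} X' y
beta = np.linalg.inv(X.T @ X) @ X.T @ y

# For full-column-rank X, the Moore-Penrose pseudo-inverse is
# exactly (X'X)^{-1} X', so X^+ y gives the same beta.
print(np.allclose(beta, np.linalg.pinv(X) @ y))  # True
```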

This is analogous to confirming in the previous color post, among others, that if I compute

$E = A\,(A'A)^{-1}$,

then I have a dual (or reciprocal or biorthogonal) basis E such that

E’A = I.

In that case, I prefer to verify it computationally every time.
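That computational verification takes two lines in numpy. A sketch with a made-up basis A (mine, not one of the color bases):

```python
import numpy as np

rng = np.random.default_rng(6)

# A: a basis written as the columns of a matrix.
A = rng.standard_normal((3, 3))

# Dual (reciprocal, biorthogonal) basis: E = A (A'A)^{-1}
E = A @ np.linalg.inv(A.T @ A)

# Verify computationally, as the post recommends: E'A = I.
print(np.allclose(E.T @ A, np.eye(3)))  # True
```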

Incidentally, this trickery with the pseudo-inverse would also generate a true result if I had started with yhat instead of y — it is true that $\beta = (X'X)^{-1}\,X'\,\hat{y}$, but it’s not very useful because we don’t know yhat!

You might have noticed that $(X'X)^{-1}\,X'\,y$ corresponds, algebraically, to a column of T’. Asking if the first column of K is a linear combination of all the columns of J gives us the first column of T’; asking if the ith column of K is a linear combination of all the columns of J gives us the ith column of T’. This, too, will be in the next related post.
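A sketch of that column-by-column correspondence, with made-up X and K: each column of T’ is the least-squares coefficient vector for the corresponding column of K.

```python
import numpy as np

rng = np.random.default_rng(4)

# X plays the role of J; the columns of K are the dependent variables.
X = rng.standard_normal((21, 3))
K = rng.standard_normal((21, 3))

# The whole matrix at once: T' = (X'X)^{-1} X' K.
T_prime = np.linalg.inv(X.T @ X) @ X.T @ K

# Column by column: regressing the i-th column of K on X gives
# the i-th column of T'.
betas = [np.linalg.lstsq(X, K[:, i], rcond=None)[0] for i in range(3)]
print(np.allclose(np.column_stack(betas), T_prime))  # True
```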

## Summary

Given two data matrices J and K, tall and thin, we can define and compute

$T' = (J'J)^{-1}\,J'\,K$.

If the individual observations are in fact related by a transition matrix M, so that

K’ = M J’,

then our computed T is M:

T = M.

This is pretty slick. **Any time someone hands us some data and the alleged result of applying an unspecified linear transformation to it — we can find that linear transformation if it exists, or demonstrate that it does not exist (in which case, they goofed).**
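The whole test fits in a short function. This is my sketch, not Cohen's code; the name find_transition and the tolerance are hypothetical choices:

```python
import numpy as np

def find_transition(J, K, tol=1e-8):
    # Hypothetical helper: return M with K' = M J' (i.e. K = J M')
    # if such an M exists, else None. Assumes J is tall and of
    # full column rank.
    T_prime = np.linalg.inv(J.T @ J) @ J.T @ K  # T' = (J'J)^{-1} J' K
    if np.allclose(J @ T_prime, K, atol=tol):   # does K = J T' hold?
        return T_prime.T
    return None  # no transition matrix exists; they goofed

rng = np.random.default_rng(7)
J = rng.standard_normal((21, 3))
M = rng.standard_normal((3, 3))

# Exact relation: the helper recovers M.
print(np.allclose(find_transition(J, J @ M.T), M))  # True

# Perturbed relation: no transition matrix, and the helper says so.
print(find_transition(J, J @ M.T + 0.01) is None)  # True
```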

And if there is no such matrix M? Well, it’s simple enough: if there is no such M — and J and K are both of full rank — then our computed T must be something else, so we do not have equality:

$K' \neq T\,J'$.

That’s all it takes for us to know that there is no transition matrix M between J’ and K’. If there were, it would be T, but that didn’t work.

(That’s if J and K are both of full rank; if only one is not, then one of J’J or K’K is actually not invertible — as JJ’ and KK’ are not invertible — and T can only be computed in one direction. I expect that T is not invertible if J and K are not both of full rank.)

And when there is no transition matrix M, we nevertheless get a transition matrix T, but it’s between J’ and Q’ = T J’.

I will freely confess that I did not know any part of this before I read Cohen: neither that we could find the transition matrix so easily; nor what we were finding if it were not a transition matrix.

(Cohen implicitly assumes that there exists a transition M; he does not discuss what happens if there isn’t one.)

Next post on this topic, a numerical illustration.
