Cohen: “Visual Color and Color Mixture”

Introduction

Oct 10 edit: the third heading has been changed to “Computing E from A”

I want to work an example from Cohen, “Visual Color and Color Mixture” (see the bibliography, and this “books added” post). I am not, however, going to do it exactly the way he did. Nevertheless, I will show you everything he calculated.

Because I want to get everything of his into one post, I will break this into small sections. I expect that my next post will show you what I would have done instead.

All of his matrices can be found on p. 70 of his book.

What we have here is an extremely small example to illustrate “color matching functions” applied to light spectra, resulting in three real numbers which we call R,G,B or X,Y,Z. This is a prelude to using real color matching functions on real spectra. I will refer to the 3 numbers I get during the course of this example as “R,G,B”, in quotes because this is a toy example, and because down the road I’ll be computing “XYZ tristimulus values” in preference to RGB.

The A and E matrices: computing A from E


He has two matrices denoted A and E. They are closely related, and the typing is simpler if I start with the E matrix:

E = \left(\begin{array}{ccc} 1 & 0 & 2 \\ 0.5455 & -0.3636 & 1.5 \\ -0.5455 & -1.6364 & 0.5 \\ -0.8182 & -1.4545 & 1.3 \\ -1 & -1 & -0.5 \\ -0.7273 & -0.1818 & 0 \\ 0 & 1 & 0\end{array}\right)

Let me emphasize that the A matrix is primary, and we should have computed the E matrix from it.

Let me answer a question at the outset. What is the point of the A matrix? Applied to a spectrum — careful! it’s actually the transpose A’ that is applied to a spectrum — it generates R,G,B or X,Y,Z values. This will get us from light spectra to colors! And the matrix E contains spectra; more to follow about that, I assure you.

Given E, he would compute the A matrix as

A = E\ (E'E)^{-1}\

where I will frequently use A’ for the transpose A^T\ , simply because it is sometimes typographically more convenient — and it is an extremely common notation. But note that I will write A^{-T}\ for the inverse transpose. In general, I will use whichever notation is more convenient for any particular expression, and I may even mix them shamelessly.

We get (with 4 places, as he showed it)

A = \left(\begin{array}{ccc} 0.1219 & 0.034 & 0.2194 \\ 0.1052 & -0.0428 & 0.139 \\ 0.2045 & -0.3668 & -0.1085 \\ -0.4496 & 0.1018 & 0.2928 \\ -0.1611 & -0.0946 & -0.0523 \\ -0.5545 & 0.3052 & 0.2298 \\ -0.5429 & 0.4933 & 0.2885\end{array}\right)

Hey what? What did he accomplish with that? First, we may say that he found an A such that both

A’E = I,

and E’A = I,

i.e.

A^T\ E = E^T\ A = \left(\begin{array}{ccc} 1. & 0 & 0 \\ 0 & 1. & 0 \\ 0 & 0 & 1.\end{array}\right)
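If you want to check all of this numerically, here is a quick numpy sketch (this is my check, not anything in Cohen). It types in the E matrix above, computes A = E (E'E)^{-1}, and confirms both identities:

import numpy as np

# Cohen's toy E matrix (7 rows, 3 columns), as displayed above
E = np.array([[ 1.0,     0.0,     2.0],
              [ 0.5455, -0.3636,  1.5],
              [-0.5455, -1.6364,  0.5],
              [-0.8182, -1.4545,  1.3],
              [-1.0,    -1.0,    -0.5],
              [-0.7273, -0.1818,  0.0],
              [ 0.0,     1.0,     0.0]])

A = E @ np.linalg.inv(E.T @ E)           # A = E (E'E)^{-1}
print(np.round(A, 4))                    # compare with Cohen's A, shown above to 4 places
print(np.allclose(A.T @ E, np.eye(3)))   # True: A' is a left inverse of E
print(np.allclose(E.T @ A, np.eye(3)))   # True: A is a right inverse of E'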

A straightforward way of stating those relationships is: A’ is a left inverse of E; A is a right inverse of E’. We should also observe that by transposing

A’ E = I

we get

E’ A = I,

i.e. the second of those relationships. They are equivalent.

Now, we computed A from E as

A = E (E'E)^{-1}\ ,

Please note that the existence of (E'E)^{-1}\ depends on E being of full column rank — which in particular requires at least as many rows as columns. The matrix E is of rank 3, so the 3×3 matrix E’E is invertible. The 7×7 matrix EE’ is not, and can never be (not with E having 3 columns). And, of course, the matrices A and E themselves cannot possibly be invertible, since they are not square.

The A and E matrices: computing E from A

Had we been given the matrix A, we might guess that we could have computed E from A by some similar formula. Let’s try the obvious,

A\ (A'A)^{-1}\

and see what we get. We have

A' = (E'E)^{-T}\ E'\

A'A = (E'E)^{-T}E'\ E (E'E)^{-1} = (E'E)^{-T}\

(A'A)^{-1} = (E'E)^T

A\ (A'A)^{-1} = E\ (E'E)^{-1}(E'E)^T\ .

That doesn’t look very good — but wait a minute. We have spent a lot of time computing “dispersion matrices”, things of the form E’E — and they are symmetric. That is,

(E'E)^T = E'E\ .

So we continue, getting

A\ (A'A)^{-1} = E\ (E'E)^{-1}(E'E) = E\ .

That’s what we were hoping for, that E is computed from A exactly the same way as A was computed from E. Given the matrix A we would compute the matrix E as

E = A\ (A'A)^{-1}\ .

By the way, we saw, in that process, that

A'A = (E'E)^{-T} = (E'E)^{-1}\ .

This matters later when we define Ma and Me. That latest equation will tell us that Ma and Me are inverses.
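Here is the same round trip in numpy (my sketch, with E typed in as in the earlier snippet): recover E from A, and confirm that A'A and E'E are inverses.

import numpy as np

E = np.array([[1, 0, 2], [0.5455, -0.3636, 1.5], [-0.5455, -1.6364, 0.5],
              [-0.8182, -1.4545, 1.3], [-1, -1, -0.5], [-0.7273, -0.1818, 0],
              [0, 1, 0]], dtype=float)
A = E @ np.linalg.inv(E.T @ E)               # A from E, as before

E_back = A @ np.linalg.inv(A.T @ A)          # E = A (A'A)^{-1}
print(np.allclose(E_back, E))                # True: we get E back
print(np.allclose(A.T @ A, np.linalg.inv(E.T @ E)))   # True: A'A = (E'E)^{-1}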

The “real” relationship between A and E: dual (reciprocal) bases

All of Cohen’s work in this example treats A and E as matrices. There is another interpretation, a crucial interpretation. I will return to this in another post, but I have to tell you what he “really” did.

We have seen this computation before. Some of you may be sick of it. Well, it isn’t quite what we have seen before.

What we have seen before is a square matrix — specifically a transition matrix defining a basis — and we have wanted to compute the reciprocal basis. I even emphasized that I do not remember the answer, but I remember how to get it. If P is the given transition matrix, and R is the transition matrix for the reciprocal basis, then I want to compute all the dot products of the columns of R with the columns of P, and I want the answers to be an identity matrix. That is, I want an identity matrix to be the matrix product of rows of R’ with columns of P,

R’P = I

so R is the inverse transpose of P:

R = P^{-T}\ .
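Here is a tiny numerical illustration of that recipe (a made-up P, nothing to do with Cohen's matrices):

import numpy as np

# a made-up 3x3 transition matrix P; its columns are the basis vectors
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

R = np.linalg.inv(P).T                   # R = P^{-T}: columns of R are the reciprocal basis
print(np.allclose(R.T @ P, np.eye(3)))   # True: the dot products form an identity matrix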

Wait just a minute. For a reciprocal basis, we just wrote

R’P = I.

For A and E, we had

E’A = I.

Guess what? Each column of E is a reciprocal basis vector for the columns of A. What we have are three columns of A, 7-dimensional but spanning a 3-D subspace. And the three columns of E are a reciprocal basis.

For reasons that will be far more obvious later, here is a place where I would choose to call the columns of E a dual basis rather than a reciprocal basis. Computationally — in finite dimensional spaces — there is no difference, but conceptually it will help a lot even there. In fact, I would say that we need to distinguish them conceptually precisely because we can’t tell the difference numerically.

It also turns out that it’s better to think of the columns of E (column vectors) as the original basis, and the rows of A’ (row vectors) as the dual basis. It might be best to think of the pair, columns of E and rows of A’, as a set of dual bases, plural. Each row of A’ represents a so-called linear functional, a linear operator mapping a spectrum (a column vector) to a real number (the suffix -al in functional says the output is 1D). If this is new to you, relax. We’ll see it again, with real matrices A and E and real spectra.

I have barely discussed the dual space (that’s where dual vectors live, while reciprocal vectors live in the same space as the original basis) in these posts. We will get to it. If you know about it, good: for color theory, think dual basis rather than reciprocal basis. If you don’t know about the dual space, it’s okay to think reciprocal basis, for a while.

Okay, I’ll tell you that if you’ve ever seen differential forms, you’ve seen dual bases.

I should remark that Cohen never uses either term, reciprocal basis or dual basis. He knows that the relationship between A and E is crucial, he just doesn’t use the mathematical name for the relationship.

Now is a good time to remark, as I have before for other authors, that Cohen was almost certainly breaking new ground. I will not fault him for not saying it “correctly”; he deserves far too much credit for getting the mathematics right in the first place.

So, we still have exactly what we started with, the A and E matrices. In fact, we start with either one of them and get the other.

His dispersion matrices Me and Ma, and their Cholesky (LU) decompositions

I am going to show you the rest of what he did. We will later do almost all of this differently, but I had to know what he was doing. And if you are reading Cohen, then I have to show you a better way to do what he did. He computed (and named) the two intermediate dispersion matrices:

Me = E’E

Ma = A’A

and then he focuses on the inverses Me^{-1}\ and Ma^{-1}\ . Oh, he also knows that Me and Ma are themselves inverses, as I noted earlier when we worked out E in terms of A:

Me^{-1} = (E'E)^{-1} = A'A = Ma\ .

(That is, there are only two distinct matrices among these 4 names.)

Anyway, here are Me, and its inverse…

Me = \left(\begin{array}{ccc} 3.7936 & 3.0166 & 1.9818 \\ 3.0166 & 6.9586 & -2.7544 \\ 1.9818 & -2.7544 & 8.44\end{array}\right)

Me^{-1} = \left(\begin{array}{ccc} 0.8981 & -0.5429 & -0.3881 \\ -0.5429 & 0.4933 & 0.2885 \\ -0.3881 & 0.2885 & 0.3038\end{array}\right)

and, for example, Ma…

Ma = A^T A= \left(\begin{array}{ccc} 0.8981 & -0.5429 & -0.3881 \\ -0.5429 & 0.4933 & 0.2885 \\ -0.3881 & 0.2885 & 0.3038\end{array}\right)

which is, as it should be, Me^{-1}\ .

Although he did not describe it this way, he next computed two matrices by doing Cholesky decompositions of Ma and Me. I think it is safe to describe the Cholesky decomposition as a special case of the LU decomposition: it is what an LU decomposition gives for a positive definite symmetric (Hermitian in general) matrix, once the diagonal pivots are split symmetrically between the two factors, so that the resulting L and U matrices are transposes of each other — there’s really only one of L and U to be found.

If you are following along with Cohen, let me warn you that I am going to change notation. Why? Because there’s a little magic going on here and a slightly different notation will focus on it. Besides, his notation seems silly. He has A and E, ends up with F1 and F2 — and along the way he uses G and Gbar.

I’ll keep his F1 and F2, since they are among his final results, but I am going to use Ga and Ge for the intermediate G’s. Trust me: you want me to do that.

Let me just do them. For Ma, we get…

Ga = \left(\begin{array}{ccc} 0.9477 & 0 & 0 \\ -0.5729 & 0.4062 & 0 \\ -0.4095 & 0.1326 & 0.3442\end{array}\right)

What he has found is a lower triangular “square root” of Ma. That is, we have Ga such that

Ga Ga’ = Ma.

(This is his Gbar.)

We do it for Me, too…

Ge = \left(\begin{array}{ccc} 1.9477 & 0 & 0 \\ 1.5488 & 2.1354 & 0 \\ 1.0175 & -2.0279 & 1.8144\end{array}\right)

As before, we have gotten Ge such that

Ge Ge’ = Me.

Note that I have used Ga and Ge to denote what we took the Cholesky decomposition of: Ga came from A’A, and Ge came from E’E, which is the inverse of A’A.
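If you want to reproduce Ga and Ge yourself, numpy's np.linalg.cholesky returns exactly this kind of lower-triangular factor. Here is a sketch of mine (not how Cohen computed them), typing in Me from above and taking Ma as its inverse:

import numpy as np

Me = np.array([[3.7936,  3.0166,  1.9818],
               [3.0166,  6.9586, -2.7544],
               [1.9818, -2.7544,  8.44  ]])
Ma = np.linalg.inv(Me)

Ga = np.linalg.cholesky(Ma)      # lower triangular, Ga Ga' = Ma
Ge = np.linalg.cholesky(Me)      # lower triangular, Ge Ge' = Me
print(np.round(Ga, 4))           # compare with Cohen's Ga above
print(np.round(Ge, 4))           # compare with Cohen's Ge above
print(np.allclose(Ga @ Ga.T, Ma), np.allclose(Ge @ Ge.T, Me))   # True True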

His orthonormal bases F1 and F2

He then defines

F1 = A Ge

F2 = E Ga.

That is what I want you to see: he uses Ge with A, and Ga with E. This guarantees that F1 and F2 are orthonormal matrices. (See below.) And if I view Ge and Ga as transition matrices, then F1 is apparently a basis for the column space of A, F2 for the column space of E. I have to confess that I don’t really care about F1 and F2 — I can do far better.

But, here are Cohen’s F1 and F2:

F1 = \left(\begin{array}{ccc} 0.5134 & -0.3724 & 0.3981 \\ 0.2801 & -0.3734 & 0.2523 \\ -0.2801 & -0.5632 & -0.1968 \\ -0.4201 & -0.3765 & 0.5313 \\ -0.5134 & -0.0959 & -0.0948 \\ -0.3734 & 0.1857 & 0.417 \\ 0 & 0.4683 & 0.5234\end{array}\right)

F2 = \left(\begin{array}{ccc} 0.1287 & 0.2652 & 0.6884 \\ 0.111 & 0.0512 & 0.5163 \\ 0.2158 & -0.5985 & 0.1721 \\ -0.4744 & -0.4185 & 0.4475 \\ -0.17 & -0.4725 & -0.1721 \\ -0.5851 & -0.0739 & 0 \\ -0.5729 & 0.4062 & 0\end{array}\right)

Okay, just why did F1 and F2 turn out to be orthonormal? That is, they each satisfy F’ F = I:

F1^TF1 = F2^T F2 = \left(\begin{array}{ccc} 1. & 0 & 0 \\ 0 & 1. & 0 \\ 0 & 0 & 1.\end{array}\right)

Let’s see why. Let’s compute F1′ F1. We have

F1 = A Ge.

Then

F1′ F1 = Ge’ A’A Ge = Ge’ Ma Ge

= Ge' Me^{-1} Ge = Ge' (Ge Ge')^{-1} Ge

= Ge^T (Ge^{-T} Ge^{-1}) Ge = I\ I = I\ .

It works out because Ma and Me are inverses, and Ge is used with A to get F1. A similar calculation works for F2.

I want to point out that F1 and F2 are not dual to each other; we have, for example, F1′ F2 != I:

F1^T F2 = \left(\begin{array}{ccc} 0.541775 & 0.764073 & 0.350247 \\ -0.392952 & 0.598605 & -0.698041 \\ -0.743014 & 0.240551 & 0.624553\end{array}\right)

Of course not. Just as an orthonormal basis is its own reciprocal basis, so it “is” its own dual basis, at least in a finite dimensional space. I waffle and use quoted “is”, because the two bases which are dual to each other live in different vector spaces — but if one is orthonormal, the dual has the same numerical components. We will see this explicitly. In this case, it means that F1 “is” its own dual basis, and F2 “is” its own dual basis.

Let me say that again, approximately but simply: F1 is dual to itself, and F2 is dual to itself; they are not dual to each other.
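Both claims are easy to check in numpy (my sketch; E typed in as before, everything else computed from it):

import numpy as np

E = np.array([[1, 0, 2], [0.5455, -0.3636, 1.5], [-0.5455, -1.6364, 0.5],
              [-0.8182, -1.4545, 1.3], [-1, -1, -0.5], [-0.7273, -0.1818, 0],
              [0, 1, 0]], dtype=float)
A = E @ np.linalg.inv(E.T @ E)

Ge = np.linalg.cholesky(E.T @ E)   # Ge Ge' = Me
Ga = np.linalg.cholesky(A.T @ A)   # Ga Ga' = Ma
F1 = A @ Ge                        # Ge goes with A...
F2 = E @ Ga                        # ...and Ga goes with E

print(np.allclose(F1.T @ F1, np.eye(3)))   # True: orthonormal columns
print(np.allclose(F2.T @ F2, np.eye(3)))   # True
print(np.allclose(F1.T @ F2, np.eye(3)))   # False: F1 and F2 are not dual to each other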

But today we know how to get such bases without the Cholesky decomposition. We also know that we can use the same orthonormal basis for both of those spaces.

I will do that in the next post.

Using the A matrix: applied to the columns of E

I ask again, what is the point of the A matrix? Applied to a spectrum — careful! it’s actually the transpose A’ that is applied to a spectrum — it generates “R,G,B” values. Of course, this A matrix is a toy, but better to start with 3×7 than 3×81 or even larger. We’ll get there.

There are two interesting possible spectra to which we might apply A’, even from a toy matrix A. The obvious possibility, to me at least, is to apply A’ to E. (That is, we are applying A’ to 3 spectra at once.) Now, we already know the answer, because E was chosen as the dual basis. That is, we have already seen that

A’E = I.

That is, if we let the columns of E be E1, E2, E3, then we have

A’ E1 = (1,0,0)

A’ E2 = (0,1,0)

A’ E3 = (0,0,1).

In words, E1 is a spectrum which generates pure red; E2 generates pure green; and E3 generates pure blue. More precisely, E1 (etc.) is a spectrum which generates our chosen red. I’ll make more sense of that when we have real A matrices. (Oh, yes, there is more than one.)

Using the A matrix: applied to an equal energy (constant) spectrum

The second possibility is an equal energy spectrum:

ee= \{1,1,1,1,1,1,1\}

A^T\ ee = \{-1.27646,0.43012,1.00883\}

Let’s not worry too much about that negative value; what it means is that we cannot actually match the equal energy spectrum (with whatever our toy “R,G,B” light sources are); instead, we have to add red light to the equal energy spectrum, and match that resulting mixture.
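Here is that computation in numpy (a sketch of mine, with E typed in as before and A computed from it):

import numpy as np

E = np.array([[1, 0, 2], [0.5455, -0.3636, 1.5], [-0.5455, -1.6364, 0.5],
              [-0.8182, -1.4545, 1.3], [-1, -1, -0.5], [-0.7273, -0.1818, 0],
              [0, 1, 0]], dtype=float)
A = E @ np.linalg.inv(E.T @ E)

ee = np.ones(7)        # the equal energy (constant) spectrum
print(A.T @ ee)        # approximately [-1.27646, 0.43012, 1.00883]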

Perhaps now is a good time to say that the key point here is that we get 3 values; our perception of color is specified by 3 numbers. In practice, “CIE tristimulus values X,Y,Z” will be our objective.

The projection matrix R

There is one final crucial question. A’ maps from 7D to 3D. It is of full rank 3, so it has a nullspace of dimension 4. (If you need a refresher, now would be a good time to look at the fundamental subspaces.)

There are a whole lot of spectra out there whose R,G,B values are {0,0,0}. More importantly, any given spectrum can be split into two vectors, one in the nullspace of A’ and the other in the preimage of its range (i.e. in the range of A).

How would we find that decomposition?

Any direct sum decomposition has projection operators associated with it, so let’s find the projection operator onto the preimage of the range.

Malinowski did that. I did it differently. Cohen does it the same way Malinowski did, but it looks more compact because he defines both E and A. Cohen writes

R = E A’.

If we use

E = A\ (A'A)^{-1}\

to expand Cohen’s equation, we get

R = A\ (A'A)^{-1}\ A^T\ .

We have also seen that this is the “hat matrix” of least-squares regression, which projects the observations onto the subspace spanned by the columns of the design matrix X. That is, we have

yhat = X\ \beta\ ,

and

\beta = (X'X)^{-1}\ X'\ y\ ,

so

yhat = X\ (X'X)^{-1}X'\ y\

and if we define the hat matrix H by

H = X\ (X'X)^{-1}\ X'\ ,

we have

yhat = H y.

The key is that H is to X as R is to A.
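For what it's worth, here is a tiny least-squares sketch with made-up data (nothing to do with Cohen's numbers), just to see the hat matrix do its job:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(7, 3))                   # made-up 7x3 design matrix (full rank)
y = rng.normal(size=7)                        # made-up observations

H = X @ np.linalg.inv(X.T @ X) @ X.T          # the hat matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares coefficients
print(np.allclose(H @ y, X @ beta))           # True: H y is the fitted yhat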

Just for completeness, I compute R as Cohen did:

R = E\ A^T = \left(\begin{array}{ccccccc} 0.5608 & 0.3833 & -0.0124 & 0.136 & -0.2656 & -0.0949 & 0.034 \\ 0.3833 & 0.2815 & 0.0822 & 0.157 & -0.1319 & -0.0687 & -0.0428 \\ -0.0124 & 0.0822 & 0.4344 & 0.2251 & 0.2165 & -0.0821 & -0.3668 \\ 0.136 & 0.157 & 0.2251 & 0.6005 & 0.2014 & 0.3085 & 0.1018 \\ -0.2656 & -0.1319 & 0.2165 & 0.2014 & 0.2818 & 0.1344 & -0.0946 \\ -0.0949 & -0.0687 & -0.0821 & 0.3085 & 0.1344 & 0.3478 & 0.3052 \\ 0.034 & -0.0428 & -0.3668 & 0.1018 & -0.0946 & 0.3052 & 0.4933\end{array}\right)

Since we know that projection operators are idempotent, we can check that R^2 = R\ . (It is.)
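Here is R computed both ways in numpy (again my sketch, with E typed in as before), along with the idempotence check:

import numpy as np

E = np.array([[1, 0, 2], [0.5455, -0.3636, 1.5], [-0.5455, -1.6364, 0.5],
              [-0.8182, -1.4545, 1.3], [-1, -1, -0.5], [-0.7273, -0.1818, 0],
              [0, 1, 0]], dtype=float)
A = E @ np.linalg.inv(E.T @ E)

R = E @ A.T                                  # Cohen's R = E A'
R_hat = A @ np.linalg.inv(A.T @ A) @ A.T     # the same thing, written like the hat matrix
print(np.allclose(R, R_hat))                 # True: same matrix
print(np.allclose(R @ R, R))                 # True: R is idempotent
print(np.round(R, 4))                        # compare with the 7x7 matrix above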

The projection of the columns of E

Having gotten the projection operator R, let’s use it. First, apply it to E, the dual basis to the rows of A’.

R\ E = \left(\begin{array}{ccc} 1. & 0 & 2. \\ 0.5455 & -0.3636 & 1.5 \\ -0.5455 & -1.6364 & 0.5 \\ -0.8182 & -1.4545 & 1.3 \\ -1. & -1. & -0.5 \\ -0.7273 & -0.1818 & 0 \\ 0 & 1. & 0\end{array}\right)

I hope it is not a surprise that we just got E back: the projection of E is E.

But what does that mean? No part of any column of E is in the nullspace of A’.

The projection of the equal energy spectrum

Recall the equal energy spectrum:

ee = \{1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1\}\

I will denote the projection as f, for fundamental.

f = R\ ee = \{0.741189,0.660536,0.496875,1.73027,0.34193,0.850176,0.43012\}

Well, that has a nontrivial component in the nullspace, call it n. (We know that, as soon as we see f != ee.) We have

ee = f + n,

where n is simply the difference, n = ee – f, namely

n = \{0.258811,\ 0.339464,\ 0.503125,\ -0.730266,\ 0.65807,\ 0.149824,\ 0.56988 \}

Direct computation confirms that n is in the nullspace of A’:

A^T\ n = \{0,\ 0,\ 0\}

Hang on just one second. What would you call a color whose “R,G,B” values were all zero?

I would call it black.

Because n itself, however, is a nontrivial spectrum — in the sense that it has a nonzero entry at each of those 7 frequencies! — let us refer to that spectrum as metameric black. (Two different spectra are said to be metamers or metameric if they appear to be the same color under the same illumination.)

It’s black to our eyes. (Although I could guess that if it were sufficiently intense, we might find ourselves with sore eyes from the radiation that is hitting them.)

I would emphasize that n depends more on A’, in a sense, than on the original equal energy signal ee. Of course, it came from ee and its fundamental, but once I have this n, I could add it to any other spectrum (of length 7!) without affecting the observed color of that spectrum. (In fact, it will depend on the illumination, too, but I’ll show you that. For now, pretend the illuminant is equal energy, too.)

The point is that in the real world, with real spectra — from pine needles or from some Munsell color chip or anything — every “metameric black” we compute could be added to any other spectrum we have, without affecting the color we perceive.

This, if nothing else, reminds us that color is our perception of a light spectrum. And there are physical spectra in the visible region that appear black to us.

We can confirm that the fundamental f has precisely the same “R,G,B” values as the equal energy signal, by computing A’ f

A^T\ f = \{-1.27646,\ 0.43012,\ 1.00883\}

Recall, or recompute A’ ee

A^T\ ee = \{-1.27646,\ 0.43012,\ 1.00883\}

The same. We have indeed split ee into two parts

ee = f + n,

one of which (n) contributes nothing to the final R,G,B values, the other of which (f) contributes everything. This is why f is called the fundamental.
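The whole decomposition fits in a few lines of numpy (my sketch again, E typed in as before):

import numpy as np

E = np.array([[1, 0, 2], [0.5455, -0.3636, 1.5], [-0.5455, -1.6364, 0.5],
              [-0.8182, -1.4545, 1.3], [-1, -1, -0.5], [-0.7273, -0.1818, 0],
              [0, 1, 0]], dtype=float)
A = E @ np.linalg.inv(E.T @ E)
R = E @ A.T

ee = np.ones(7)       # the equal energy spectrum
f = R @ ee            # the fundamental
n = ee - f            # the metameric black

print(np.allclose(A.T @ n, 0))           # True: n is invisible, its "R,G,B" values are all zero
print(np.allclose(A.T @ f, A.T @ ee))    # True: f carries all the "R,G,B" information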

And, our observation that the columns of E are projected onto themselves says that the columns of E are also fundamental spectra.

And they are a basis, so every fundamental — everything we see — could be written as a linear combination of the columns of E. They’re not an orthonormal basis, so we would have to use the dual basis to compute coefficients of the linear combinations. Well, that’s exactly what we accomplish by applying A’ (the dual basis) to the fundamental.

Those 3 numbers (A’f = A’ ee) we get describe the fundamental f in terms of the basis vectors E1, E2, E3:

f = R E1 + G E2 + B E3.

They are simply the components of f wrt the basis E1, E2, E3.

They are not, however, the coefficients of the original signal ee. That signal does not lie entirely in the space spanned by E — the metameric black n is precisely the part of the original signal that is not in the space spanned by E.

(They also are not necessarily the RGB values we would use in RGB color space. After all, the components of f could be any real numbers, while RGB color space has restricted values, whether 0 to 1, 0 to 100, or 0 to 255. Don’t worry, we’ll get there.)

Summary

Cohen has a matrix A of “color matching functions”; it has 3 columns and lots more rows.

We construct a dual basis E via

E = A\ (A'A)^{-1}\ .

Since “dual basis” is a dual relationship, we could have computed — and did compute! — A from a given E:

A = E\ (E'E)^{-1}\

We construct two dispersion matrices

Me = E’E,

Ma = A’A,

and we saw that they were inverses.

We did Cholesky decompositions (LU decompositions of positive definite symmetric matrices) of Ma and Me, getting

Ga Ga’ = Ma,

Ge Ge’ = Me.

Then we used Ge and Ga to construct two orthonormal bases F1 and F2

F1 = A Ge,

F2 = E Ga.

Finally, Cohen computed the projection operator R onto the non-nullspace part of the domain of A’ (the preimage of the range of A’, equivalently the image of A).

Most importantly, at the end, we saw that an equal-energy (constant) spectrum could be split into two parts (f and n), one of which (f) contained all the “R,G,B” information, the other of which (n) was metameric black, invisible.

Next, I will show what I would do differently.


11 Responses to “Cohen: “Visual Color and Color Mixture””

  1. Amber Says:

    Hiya! Thanks for the blog. I’ve been digging around looking some info up for school, but I think I’m getting lost! Google led me here — good for you I guess! Keep up the good work. I will be coming back in a couple of days to see if there is any more info.

  2. rip Says:

    Hi Amber,

    You’re welcome. I try to put out a new post every weekend — usually Sunday, but sometimes Monday evening.

    Feel free to ask questions about what’s on the blog. (But I can’t promise to answer questions about _other_ material.)

  3. geppi Says:

    Hello Rip,

    I came across your blog some time ago and in the meantime I’ve read several of your posts, especially the ones on color. They turned out to be extremely helpful in understanding the core concepts and mathematical foundation of colorimetry and color science. So first I would like to thank you for all the insight that I’ve gained from reading your blog.

    Keep it up !

    What still puzzles me is the question of how far you can get in colorimetry with just affine geometry, i.e. without introducing a metric. Does Cohen’s R matrix require a metric?

    The question was raised the first time when I read the book “Color for the Sciences” by Jan Koenderink. (BTW this is a very interesting book because it deals with the theoretical concepts of color science and not the practical methods which are dominated by standards and committee definitions. It is completely different from all the other books about colorimetry and color science that I’ve seen so far and is in my opinion a very good complement to a book like Wyszecki & Stiles.)

    In chapter 8 on page 331 he discusses the so-called “Wyszecki hypothesis”, i.e. the idea that a spectrum can be split into a unique component that is causally effective for color vision and a component that is causally ineffective, thus a “black beam”. According to Koenderink this hypothesis is false, because there are infinitely many complements to the black space that have equal claim to be called “causally effective” and there is no way to choose between them.
    According to his reasoning the Wyszecki hypothesis is based on the definition of a particular metric in colorspace that finally leads to the orthogonal projection operator R (Cohens matrix).

    Your posts about Cohen have been the first that I’ve encountered which actually mention the concept of the dual space in conjunction with color matrix operations. Your remarks about the relationship of the A and E matrices have been extremely enlightening. Moreover they seem to confirm my understanding that the fundamental space and the black space are unique entities which don’t require a metric. The fundamental space is the preimage of the range of A and the black space is the nullspace of A. The actual representation of these spaces depends on the chosen base and the functionals of the dual base deliver the component values. They don’t require orthogonal base vectors, i.e. the base vectors can be oblique and there is no assumption that would require a metric up to that point.

    It is interesting that Koenderink presents the concept of the dual base at the beginning of his book and he also highlights the fact that in contrast to the dot product it doesn’t imply a metric. He is very careful to differentiate between column vectors (kets) and the row vectors or functionals of the dual space (bras). Which makes it all the more surprising that he claims that a metric would be required to get matrix R.

    However, there’s one other point that I don’t get:

    The fundamental space and the black space split the space of beams (or spectra) into two subspaces.
    Moreover the space of beams is the direct sum of the fundamental space and the black space.
    So let’s say I have matrix A and I have a base b1 for the nullspace of A and a base b2 for the preimage of the range of A.
    If I choose my bases b1 and b2 such that the spaces they span don’t have a vector in common (except for the null vector) I would call them mutually orthogonal. But hey, I didn’t define a metric. Is there a concept of orthogonality without a metric?

  4. rip Says:

    Hi Geppi,

    Thanks. I’m glad to have helped.

    The Koenderink book sounds interesting and I’ve ordered it.

    You asked:
    If I choose my bases b1 and b2 such that the spaces they span don’t have a vector in common (except for the null vector) I would call them mutually orthogonal. But hey, I didn’t define a metric. Is there a concept of orthogonality without a metric?

    Here are my thoughts.

    The x-axis and a 45° line have no vector in common, but they are not an orthogonal direct sum of the plane, because the two vectors are not orthogonal.

    But I don’t think we need an orthogonal direct sum. (We have one, in color theory, but I don’t think we need it.) I think we just need two of the four fundamental subspaces, and they don’t require a metric.

    When we split the beam space into the nullspace and the preimage of the range, any beam can be written uniquely as the sum of a black beam (null space) and color beam (preimage).

    Now, that did depend critically on the color matching functions, i.e. on the A’ matrix. Given that linear operator, I get the direct sum in terms of its fundamental subspaces. The A’ matrix maps some vectors to zero, and it maps some vectors to themselves. And any given vector can be written uniquely as the sum of one of each. All I need is a basis – not necessarily an orthonormal one – for each subspace. (And there’s your b1 and b2.)

    We’re getting orthogonality de facto, but without defining it. The relevant concept is “the annihilator” of a subspace; and Halmos’ “Finite Dimensional Vector Spaces” might be a good place to start.

    If I keep playing with this, I will probably work with a Minkowski metric (reverse the sign on the time component). It would be good to see what happens when we have a different metric.

    Still, I have to ask: what if we recognize that the A matrix is an approximation, and we investigate the thing it approximates? Let me see what Koenderink is actually doing.

    And there are applications like this in wavelets (so-called biorthogonal wavelets), and I want to think about them.

    In other words, I’m about where you are: it looks like I don’t need orthogonality, but I wonder a little bit. Oh, I know that we can define annihilators and work with them instead… but does that take care of everything?

  5. geppi Says:

    Hello Rip,

    as mentioned in another post I followed your recommendation and started to read Halmos. Here’s what I got from this so far:

    My b1 and b2 were the right example but with the wrong conclusion.
    Your objection with the x-axis and the 45° line did show the flaw in my naive thinking.

    So let me give it a second try.

    Let’s assume that beam space is the direct sum of the black space and some other space which we call fundamental space.
    As far as I understand it the black space is clearly defined by human color vision. If we have two different beams A and B and they evoke the same color sensation the difference A – B must lie completely in the black space.

    The black space does therefore not depend on any choice of primaries that we might use to gauge human color vision. They only define the actual representation in terms of the chosen base of primaries when setting up the A matrix from the color matching functions.
    The primaries are also not the only choice of base we make when gauging human color vision. In fact the primaries are not used as a base for the beam space but as a base for color space. Therefore the other base we have to chose is for the space of beams which is typically derived from the spectral decomposition into monochromatic beams of different wavelength.

    So talking in general terms of vector spaces and operators we have the beam space S that is mapped by the human color vision operator V to the color space C.
    The A matrix is just a representation of the operator V in terms of the base of single wavelength beams for vector space S and the base of primaries for vector space C.

    The black space is the set of all vectors in S that are mapped by V to the null vector. In general terms this is a concept independent of all choices of bases.

    Back to my simple example this gives us the one dimensional space which geometrically let’s say is the horizontal axis.
    The primaries used for the color matching matrix would then just specify some vector b1 used as a base for this space.
    Now to say that an assumed 2 dimensional beam space is the direct sum of this “black space” and another one dimensional space does not completely define this other space.
    It just says that the two spaces have to be complementary, i.e. beside spanning beam space they just have to be disjoint.
    Geometrically speaking this is the case for any line through the origin other than the horizontal axis.

    If I got it right, Koenderink’s criticism is that Cohen’s matrix R is picking one particular of these subspaces as the fundamental space.

    But I’m suspicious that he might completely miss the point with his reasoning.
    I read your posting about the four fundamental subspaces of a linear operator, and if I understand it right each linear operator splits its domain into two unique and well-defined subspaces: the nullspace and the preimage of the range, leaving no ambiguity regarding the latter.
    Therefore it would be the linear operator itself, i.e. human color vision, that makes the choice from all the spaces complementary to the black space. The choice of primaries as a base for color space and the choice of “wavelength” as a base for beam space would just lead to a particular representation of the fundamental space but it would be the same vector space that would be described by any other choice of primaries, i.e. base vectors.
    If this reasoning is valid, the only criticism left would be the choice of base to represent the fundamental space — but then the same criticism could be applied to the black space and, more important, nowhere is a metric required.
    Quite frankly I think the use of different bases and base transformations is so common in science that this would not reduce the generality of Wyszecki’s hypothesis and Cohen’s matrix R.

    On the other hand Halmos also helped me to eventually understand the concept of quotient spaces, and from a first glimpse I wonder how this applies to matrix R theory?
    At least it is the right concept to choose a “natural” complement which is independent of any choice of base or metric. However, having just discovered this concept I’m struggling to apply it to the formulation of the fundamental color space or to see how it is connected to Cohen’s matrix R.
    There is a reference to the algebra of quotient spaces in Cohen’s book in the Introduction section at the end of page xxiv and beginning of page xxv which was written by Michael H. Brill.

    Now I’m confused again. Does the linear operator select a particular fundamental subspace from the wealth of complements to the black space or is the algebra of quotient spaces required?

  6. rip Says:

    Geppi ended with

    “Does the linear operator select a particular fundamental subspace from the wealth of complements to the black space or is the algebra of quotient spaces required?”

    The linear operator defines its nullspace and the pre-image of its range. Quotient spaces are an alternative approach, but they are not essentially different.

    As I indicated before, we have orthogonality whether we want it or not: the nullspace and the pre-image of the range are orthogonal complements. This implicit orthogonality is what I’m wondering about….

    If, instead, we start with a vector space V and a subspace W, there are an infinite number of possible complements to W. But if we started with a linear operator, as we do in this case, we get handed the two subspaces.

    Brill’s introduction to Cohen’s book said some interesting things…. Thanks again for getting me into all this again.

  7. Ray Says:

    Rip,
    I just discovered your blog and your posts on color. The CIE tables have values for every 5 nm, and most of your posts reduce the resolution to 20 nm for convenience. Has anyone interpolated these tables to get a 1-nm resolution? Or has anyone developed an adequate equation which describes these functions? I have been searching on the web in my spare time, and haven’t found anything.
    Ray

    • rip Says:

      Hi Ray,

      Welcome.

      In fact, the CIE itself interpolated the tables to 1 nm intervals. See, for example, the table beginning on p 725 of Wyszecki & Stiles.

      As for an equation, my own strong preference is to stay with linear algebra – i.e. with the matrix of tabulated values. I use 20 nm intervals because the arrays can be displayed… I use 5 nm intervals because the spectra are smoother… I’m not sure I’ve ever seen the 1nm tables on the internet, but I suspect they must be there somewhere.

  8. Ray Says:

    Rip,
    I didn’t check Wyszecki & Stiles because the book was published back in 1982. The newest book I have is, “The Science of Color,” edited by Steven K. Shevell (2003, Elsevier/Optical Society of America), which again uses 5-nm increments in the tables. I thought it must be part of the CIE standards or something.

    As it happens, I just stumbled on a site from reading an article on color: The Colour & Vision Research Laboratory, which is part of University College London (http://www.cvrl.org). It even has 0.1-nm wavelength interpolation. I guess this laboratory was >20 pages down in various web search engines.
    Ray

  9. eric Says:

    Rip,
    I’ve enjoyed reading your posts – would you be interested in taking a look at a related application? I’m trying to understand how to apply cohen’s r-matrix to automated color correction.

    Eric

    • rip Says:

      Eric,

      I emailed you a week ago at the address you left when you posted this. If you want to talk, please reply to that email.

