the relationship between the raw and the orthogonalized data

OK, so we orthogonalized the Hald data, including the constant (the column of 1s).

What’s the relationship between the new variables and the old? We might someday get a new observation, and if we were using the fit to the orthogonalized data, we might want to see what it predicts for a new data point.

(In all honesty, I would use the original fit – but I still want to know what the relationship is.)

My notation is a little awkward. I’m going to stay with what was used in this post, in which I first showed how to find….

Let me start fresh. If we have two typical data matrices (i.e. taller than wide), and they are supposed to be the same data, how do we find the relationship?
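To make that concrete, here is a minimal sketch in Python; the array names and the toy data are mine, not the notation of the post. If X is the raw data (with the column of 1s) and Z is the orthogonalized version, the relationship is a square matrix T with Z = X T, and a new raw observation is carried to the orthogonalized variables by that same T.

```python
import numpy as np

# Toy stand-ins for the raw data X (with the column of 1s) and
# its orthogonalized version Z -- both taller than wide.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(8), rng.normal(size=(8, 3))])  # raw data, constant first
Z, _ = np.linalg.qr(X)                                      # one way to orthogonalize the columns

# If Z = X T, recover T from the data; with X of full column rank,
# the pseudoinverse gives the exact change-of-basis matrix.
T = np.linalg.pinv(X) @ Z
print(np.allclose(X @ T, Z))              # True: the relationship is exact

# A new raw observation (as a row) goes to the orthogonalized variables via T.
x_new = np.array([1.0, 0.3, -1.2, 0.7])   # constant term plus three new predictor values
z_new = x_new @ T
```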
Read the rest of this entry »

Rotations: figuring out what a rotation does

I want to write a short emphatic post making one major point. I’ve made it before, but I think it needs to be hammered home.

In addition to the links in this post, you could use the Tag Cloud to find related posts.

I mentioned in this happenings post, near the end, that I had found a nice introductory book on the control of aircraft (Thomas Yechout et al., “Introduction to Aircraft Flight Mechanics…”, AIAA 2003, ISBN 1-56347-434-4).

I looked at it yesterday morning. I was planning to spend the day working on the next regression post … and I did … but I figured I’d turn my kid loose first, and he wanted to play in that book.

I still think it’s a nice book… but two of their drawings for rotations are mighty confusing… at first glance, they appear to be wrong. It turns out they are right, but the display is unusual.

I’ve talked about this before, but let me put it in a post all by itself. (This post seemed essential when I thought they had made a mistake; it became merely desirable once I saw that the reader just has to take a careful look at their drawings.)

Suppose we are given the following rotation matrix…

Rz[\theta] = \left(\begin{array}{ccc} \cos (\theta ) & \sin (\theta ) & 0 \\ -\sin (\theta ) & \cos (\theta ) & 0 \\ 0 & 0 & 1\end{array}\right)
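Here is the kind of quick check I have in mind, as a numpy sketch of my own (not from the book): apply the matrix at θ = 90° to the unit x-vector and see where it lands.

```python
import numpy as np

def Rz(theta):
    """The matrix as written above: +sin above the diagonal, -sin below."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c,  s, 0],
                     [-s,  c, 0],
                     [ 0,  0, 1]])

x_hat = np.array([1.0, 0.0, 0.0])
print(np.round(Rz(np.pi / 2) @ x_hat, 3))   # [ 0. -1.  0.]
```

The unit x-vector goes to minus the unit y-vector. Read as a rotation of the vector itself, that is a rotation by -θ about the z-axis; read as new components of a fixed vector, it is what you get when the axes are rotated by +θ. Sorting out which of those a book intends is exactly why you have to look carefully at its drawings.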
Read the rest of this entry »

Color: Color-primary transformation XYZ to RGB

This post was originally titled “Glassner’s transformation XYZ to RGB”.

Introduction

There is a marvelous calculation in Glassner (vol 1, p 103; see my bibliography). Oh, I do not mean to imply that he originated this calculation, merely that I have only ever seen it in his book. And for all I know, he might actually be the originator.

edit: As I said in “Yet More Books on Color”, I found this calculation, done much the way Glassner did but not quite, in Giorgianni & Madden. I decided that I should use a title that did not pretty much credit it to Glassner, despite my disclaimer. end edit.

As he puts it: “Our goal is to find a matrix M which will take a three-element vector representing an XYZ color and transform it to an equivalent RGB vector for some particular monitor.”

The only information given to us will be the chromaticity coordinates x,y for each of the three phosphors, and for the white point.
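Here is a sketch of the calculation in Python, under assumptions of my own: the chromaticities below are round, roughly sRGB-like placeholder numbers (they are not Glassner’s, and not any particular monitor’s); the structure of the computation is the point.

```python
import numpy as np

# Placeholder chromaticities (x, y) for the three phosphors and the white point.
# Illustrative values only -- substitute the actual numbers for your monitor.
xy_R, xy_G, xy_B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)
xy_W = (0.3127, 0.3290)

def xyz_from_xy(x, y):
    """Full chromaticity vector (x, y, z) with z = 1 - x - y."""
    return np.array([x, y, 1.0 - x - y])

# Columns are the chromaticity vectors of the R, G, B phosphors.
C = np.column_stack([xyz_from_xy(*xy) for xy in (xy_R, xy_G, xy_B)])

# White point in XYZ, scaled so that Y = 1.
xw, yw = xy_W
white_XYZ = xyz_from_xy(xw, yw) / yw

# Scale each phosphor's column so that R = G = B = 1 reproduces the white point.
scale = np.linalg.solve(C, white_XYZ)
M_RGB_to_XYZ = C * scale           # multiplies column j by scale[j]

# The matrix we were after: XYZ -> RGB for this (hypothetical) monitor.
M = np.linalg.inv(M_RGB_to_XYZ)
print(np.round(M, 4))
```

Setting the column scales so that R = G = B = 1 reproduces the white point is the one step that uses the white-point chromaticity; everything else comes from the phosphor chromaticities alone.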
Read the rest of this entry »

Color: Cohen Figure 20, “an intriguing F matrix”

Typographic edits, 4 Jan 2010.

I struggled to make sense of pages 94-101 of Cohen’s “Visual Color and Color Mixture”.

I understand them now. Mostly.

The challenge was to derive a drawing of three basis vectors (Figure 20, p. 95), shown here:

He calls these three vectors both “an intriguing F matrix” and “This canonical orthonormal configuration….” But as far as I can tell, he never said where he got this particular one. He made it sound important, so I had to derive it. Figuring it out, as usual, was very educational.

What follows is a detective story.
Read the rest of this entry »

Example: Is it a transition matrix? Part 2

We had three matrices from Jolliffe, P, V, and Q. They were allegedly a set of principal components P, a varimax rotation V of P, and a quartimin “oblique rotation” Q.

I’ll remind you that when they say “oblique rotation” they mean a general change of basis. A rotation preserves an orthonormal basis: it cannot transform an orthonormal basis into a non-orthonormal one. And that is exactly what they mean here: a transformation from an orthonormal basis to a non-orthonormal basis (or possibly from a merely orthogonal basis to a non-orthogonal one). In either case, the transformation cannot be a rotation.

(It isn’t that complicated! If you change the lengths of basis vectors, it isn’t a rotation; if you change the angles between the basis vectors, it isn’t a rotation.)
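To illustrate, here is a small sketch of my own (not from Jolliffe): a rotation leaves the lengths of the basis vectors and the angles between them alone, which shows up in the Gram matrix; a shear, one example of an oblique change of basis, does not.

```python
import numpy as np

theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation
G = np.array([[1.0, 0.7],
              [0.0, 1.0]])                        # a shear: an oblique change of basis

I = np.eye(2)                                     # the standard orthonormal basis, as columns
for name, M in (("rotation", R), ("oblique", G)):
    new_basis = M @ I
    gram = new_basis.T @ new_basis                # lengths and angles of the new basis vectors
    print(name, np.allclose(gram, I))             # rotation: True, oblique: False
```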

Anyway, we showed in Part 1 that V and Q spanned the same 4D subspace of R^{10}\ .
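For what it’s worth, here is one way to check that kind of claim numerically; this is a sketch of my own, with made-up stand-ins for V and Q, not the computation from Part 1: if stacking the columns of the two matrices side by side does not increase the rank, they span the same subspace.

```python
import numpy as np

def same_column_space(A, B, tol=1e-10):
    """True when the columns of A and the columns of B span the same subspace."""
    rA  = np.linalg.matrix_rank(A, tol)
    rB  = np.linalg.matrix_rank(B, tol)
    rAB = np.linalg.matrix_rank(np.hstack([A, B]), tol)
    return rA == rB == rAB

# Illustrative stand-ins: B is A times an invertible matrix, so the spans agree.
rng = np.random.default_rng(1)
A = rng.normal(size=(10, 4))           # like V: 10 x 4
B = A @ rng.normal(size=(4, 4))        # like Q: same column space as A
print(same_column_space(A, B))         # True
```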

Now, what about V and P? Let me recall them:
Read the rest of this entry »

Example: Is it a transition matrix? Part 1

This example comes from PCA / FA (principal component analysis, factor analysis), namely from Jolliffe (see the bibliography). But it illustrates some very nice linear algebra.

More precisely, the source of this example is:
Yule, W., Berger, M., Butler, S., Newham, V. and Tizard, J. (1969). The WPPSL: An empirical evaluation with a British sample. Brit. J. Educ. Psychol., 39, 1-13.

I have not been able to find the original paper. There is a problem here, and I do not know whether the problem lies in the original paper or in Jolliffe’s version of it. If anyone out there can let me know, I’d be grateful. (I will present 3 matrices, taken from Jolliffe; my question is, does the original paper contain the same 3 matrices?)

Like the previous post on this topic, this one is self-contained. In fact, it has almost nothing to do with PCA, and everything to do with finding — or failing to find! — a transition matrix relating two matrices.
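In the same hedged spirit, here is a sketch of my own (not Jolliffe’s computation, and with made-up matrices) of what finding — or failing to find — a transition matrix looks like numerically: solve the least-squares problem A T = B, then check whether the fit is actually exact.

```python
import numpy as np

def transition_matrix(A, B, tol=1e-8):
    """Try to find T with A @ T = B; return T and whether the relation is exact."""
    T, *_ = np.linalg.lstsq(A, B, rcond=None)
    return T, np.allclose(A @ T, B, atol=tol)

rng = np.random.default_rng(2)
A  = rng.normal(size=(10, 4))
B1 = A @ rng.normal(size=(4, 4))     # really is A times something
B2 = rng.normal(size=(10, 4))        # generically is not

print(transition_matrix(A, B1)[1])   # True  -- a transition matrix exists
print(transition_matrix(A, B2)[1])   # False -- no exact T, only a least-squares fit
```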
Read the rest of this entry »

Transition matrix: to be or not to be

Cohen (“Visual Color & Color Mixture”, see the bibliography) did something very interesting. In fact, he did something useful which I had never seen before.

Although this post uses some matrices which we saw in the color posts, I think this can stand on its own: you need not have read the color posts. But if you are specifically interested in color, or in Cohen’s work, this post is very relevant.

He was trying to describe how to find a transition matrix between two given data matrices. This will come in handy — very handy! — whenever people give the alleged result of an unspecified linear transformation of a data matrix.
Read the rest of this entry »

attitude & transition matrices etc. – corrected 5-18, 6-13

I’ve made two changes in total, and you can search on “correction”.

“Etc.” here means the inverse transition matrix, but I didn’t want a longer title.

I know of no natural case in which we specify a linear coordinate transformation by giving its inverse attitude matrix, as such, but I’ll keep my eyes open.

The key relationship among them is this: to say that P is a transition matrix is equivalent to saying that P^T is an attitude matrix. (The inverse transition matrix, of course, is P^{-1}\ .) In the special case that P is orthogonal, the inverse is the transpose, P^T = P^{-1}\ , so the inverse transition matrix is the attitude matrix.

(OK, did you catch that? If our coordinate transformation is orthogonal, then the inverse attitude matrix is the transition matrix, so any time we specify a rotation by its transition matrix, we have just specified it by its inverse attitude matrix. But this doesn’t count. I’m interested in the specification conceptually, and for that I know of no case where we specify what should be understood primarily as the inverse attitude matrix.)
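Here is a tiny numeric check of that key relationship, again as a sketch of my own: for an orthogonal transition matrix P, the transpose (the attitude matrix) and the inverse (the inverse transition matrix) are one and the same.

```python
import numpy as np

theta = 0.3
# An orthogonal transition matrix P (a rotation of the axes about z).
P = np.array([[ np.cos(theta), np.sin(theta), 0],
              [-np.sin(theta), np.cos(theta), 0],
              [ 0,             0,             1]])

attitude           = P.T               # P is a transition matrix <=> P^T is an attitude matrix
inverse_transition = np.linalg.inv(P)  # the inverse transition matrix

# Because P is orthogonal, P^T = P^{-1}, so the two coincide.
print(np.allclose(attitude, inverse_transition))   # True
```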
Read the rest of this entry »