rotating coordinate systems: example 1

conventions and setup

As far as possible, I am going to stay with my notation. r and \rho are the old and new (fixed and rotating) components of the position vector; v and \nu are the time derivatives of r and \rho respectively; a and \alpha are the time derivatives of v and \nu respectively. (But R is a convenient scalar value, and will no longer denote the position vector whose components are r and \rho.)

v = \dot{r}

\nu = \dot{\rho}

a = \dot{v}

\alpha = \dot{\nu}

The rotating frame is the same in all these problems, so let's get its matrices once, up front, rather than recomputing them each time. The z-axis is our axis of rotation.

The attitude matrix for a CCW rotation of the axes (about the z-axis) is…

A = \left(\begin{array}{lll} \cos (t \omega ) & \sin (t \omega ) & 0 \\ -\sin (t \omega ) & \cos (t \omega ) & 0 \\ 0 & 0 & 1\end{array}\right)
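As a quick sanity check (my own aside, not part of the post): this A should be an orthogonal matrix with determinant 1 — a proper rotation of the axes — for any \omega and t. A few lines of numpy confirm it; the values of \omega and t below are arbitrary.

```python
import numpy as np

def attitude(omega, t):
    """Attitude matrix for a CCW rotation of the axes about the z-axis, as displayed above."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[ c,   s,   0.0],
                     [-s,   c,   0.0],
                     [ 0.0, 0.0, 1.0]])

A = attitude(omega=0.7, t=2.3)            # arbitrary test values
print(np.allclose(A @ A.T, np.eye(3)))    # True: A is orthogonal
print(np.isclose(np.linalg.det(A), 1.0))  # True: determinant +1, a proper rotation
```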

The transition matrix is… Read the rest of this entry »

rotating coordinate systems: background

I owe you derivations of three assertions. We will need a fourth one, too. (A quick numerical sanity check of these appears right after the list.)

  1. matrix multiplication by N is equivalent to some vector cross product
  2. the transition matrix is T =  1 - \sin (\theta )\ N + (1-\cos (\theta ))\ N^2
  3. \dot{T} = - \omega\ N\ T\
  4. N^3 = -N
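Before the derivations, here is the promised numerical sanity check — my own sketch, not a derivation. It assumes the usual convention that N is the cross-product matrix of the rotation axis, N\ x = \hat{z} \times x; if the sign convention for N in these posts is the opposite one, item 1 picks up a minus sign, but items 3 and 4 hold either way. The derivative in item 3 is checked with a central difference.

```python
import numpy as np

# skew (cross-product) matrix of the z-axis: N @ x == cross(zhat, x) -- assumed convention
N = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

# item 1: multiplication by N is a cross product (for this sign convention)
x = np.array([1.0, 2.0, 3.0])
print(np.allclose(N @ x, np.cross([0.0, 0.0, 1.0], x)))   # True

# item 4: N^3 = -N
print(np.allclose(np.linalg.matrix_power(N, 3), -N))       # True

# item 2's formula for T, with theta = omega * t
def T(t, omega=0.7):
    th = omega * t
    return np.eye(3) - np.sin(th) * N + (1.0 - np.cos(th)) * (N @ N)

# item 3: dT/dt = -omega N T, checked by a central difference
omega, t, h = 0.7, 1.3, 1e-6
Tdot = (T(t + h, omega) - T(t - h, omega)) / (2.0 * h)
print(np.allclose(Tdot, -omega * N @ T(t, omega)))          # True
```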

matrix multiplication by N

Read the rest of this entry »

rotating coordinate systems: equations

velocity

There are three key equations for rotational mechanics. Let me refer to them as “the equations”. Goldstein writes a general equation for “some arbitrary vector G”:

\left(\frac{\text{dG}}{\text{dt}}\right)_{\text{space}}=\left(\frac{\text{dG}}{\text{dt}}\right)_{\text{rotating}} + \omega \times G

a specific equation for velocity:

v_s=v_r + \omega \times r

(that’s the first equation with G = position vector r) and an equation for acceleration:

a_s=a_r + 2\ \omega \times v_r +\omega \times (\omega \times r)

Let’s look at all that. I will derive these three, and a general second-derivative equation, but I will have to return and derive some auxiliary facts.
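Before the derivation, here is a numerical illustration of the velocity and acceleration equations — my own sketch, not part of the post. It assumes the frame rotates at constant rate \omega about the z-axis, writes the space components as r = R(t)\ \rho with R(t) a CCW rotation of vectors, and identifies v_r and a_r with the rotating-frame derivatives R\ \dot{\rho} and R\ \ddot{\rho} expressed on space axes; the path \rho(t) is arbitrary. The space-frame derivatives are taken by finite differences.

```python
import numpy as np

omega_z = 0.9                              # constant spin rate about z (arbitrary)
w = np.array([0.0, 0.0, omega_z])          # the angular velocity vector omega

def R(t):
    """Rotate vector components from the rotating frame onto the space axes."""
    c, s = np.cos(omega_z * t), np.sin(omega_z * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# an arbitrary path, written in rotating-frame components, with its derivatives
rho     = lambda t: np.array([t, t**2, 1.0 + t])
rhodot  = lambda t: np.array([1.0, 2.0 * t, 1.0])
rhoddot = lambda t: np.array([0.0, 2.0, 0.0])

t, h = 1.7, 1e-4
r   = R(t) @ rho(t)
v_r = R(t) @ rhodot(t)        # velocity seen in the rotating frame, on space axes
a_r = R(t) @ rhoddot(t)       # acceleration seen in the rotating frame, on space axes

# space-frame velocity and acceleration by finite differences of r(t)
r_of = lambda s: R(s) @ rho(s)
v_s = (r_of(t + h) - r_of(t - h)) / (2.0 * h)
a_s = (r_of(t + h) - 2.0 * r_of(t) + r_of(t - h)) / h**2

print(np.allclose(v_s, v_r + np.cross(w, r), atol=1e-6))            # velocity equation: True
print(np.allclose(a_s, a_r + 2.0 * np.cross(w, v_r)
                        + np.cross(w, np.cross(w, r)), atol=1e-5))   # acceleration equation: True
```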
Read the rest of this entry »

Happenings – 21 June

I’m working on several things, and it’s possible I’ll have a technical post ready later today. OTOH, I’m going to a dinner party tonight, so this will be a short schoolday.

As you might guess from the recent posts about rotations, I have gotten caught up in rotating coordinate systems. The original cause was a nifty equation in the airplane control books. As is true of too many things, I can even find that equation in an old schoolbook, in this case my ancient copy of Goldstein. Worse, I highlighted it all those years ago. That equation writes the angular velocity \omega in terms of the time derivatives of the Euler angles and their rotation axes.
Read the rest of this entry »

books added 21 June

The following books have been added to the bibliography.

The Ashley book is a welcome addition to control of flight vehicles (Bryson; Blakelock): it’s got a lot more detail about the underlying dynamics. I have no idea when I bought it, but I eventually remembered that it was somewhere in my library, and was delighted to find its more detailed explanation – and excellent drawing – of the various coordinate systems in use for aircraft and missiles. This is material which the control theory books assume you’ve already seen in more detail elsewhere.

The Ideals & Varieties book is an introductory text which I am working thru with a friend. The third author, O’Shea, is the author of a recent book on the Poincaré conjecture which is what got me started on the geometry of surfaces.

The 3 mechanics books (Marion, Symon, and the Berkeley) were additional references (cf. Goldstein) for acceleration in rotating coordinate systems. I have listed the Berkeley text twice, for the same reason I list Schaum’s Outlines twice. I’ve always heard it called “the Berkeley mechanics book”, and that’s how I searched to see if it – and the rest of the series – were in print (no) and available used (yes).

I bought the Basilevsky Factor Analysis book because I wanted something more about noise in factor analysis methods (cf. Malinowski). It looks like a good and interesting book (I wasn’t expecting to find the Kalman filter in it), although it is the specific text in which I found the mistaken assertion that we could always choose the eigenvector matrix orthogonal. As I said when I corrected that very same careless error on one of my own SVD pages, I am inclined to be tolerant of other people’s mistakes: I make mistakes, too.
Read the rest of this entry »

PCA / FA Malinowski: Example 5. missing data.

Malinowski does use H for something else, namely missing data points. The X matrix must be complete, but a test vector x need not be.

For quick reference, X, H and u are

X = \left(\begin{array}{lll} 2 & 3 & 4 \\ 1 & 0 & -1 \\ 4 & 5 & 6 \\ 3 & 2 & 1 \\ 6 & 7 & 8\end{array}\right)

H =\left(\begin{array}{lllll} 0.203008 & -0.180451 & 0.218045 & -0.165414 & 0.233083   \\ -0.180451 & 0.327068 & -0.0827068 & 0.424812 & 0.0150376   \\ 0.218045 & -0.0827068 & 0.308271 & 0.0075188 & 0.398496   \\ -0.165414 & 0.424812 & 0.0075188 & 0.597744 & 0.180451   \\ 0.233083 & 0.0150376 & 0.398496 & 0.180451 & 0.56391\end{array}\right)

u = \left(\begin{array}{lllll} 0.327517 & 0.309419 & -0.813733 & 0.257097 & -0.262167   \\ -0.0107664 & -0.571797 & -0.464991 & -0.668451 &   0.0994427 \\ 0.538684 & 0.134501 & 0. & 0. & 0.831703 \\ 0.200401 & -0.746715 & 0. & 0.634172 & -0.00904025 \\ 0.749851 & -0.0404178 & 0.348743 & -0.291376 & -0.479133\end{array}\right)

Let’s try a magic vector, with one missing value, marked NA. This vector came to me in a dream. (Not! But it might as well have. There is no way in the real world I would know this vector.)

x = \{1,\ 2,\ 3,\ \text{NA},\ 5\}

Is there a value of NA which would put this vector in the 2D subspace? (Yes, but I know this because I used this vector to construct the data matrix X!)
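Here, as an aside of my own (not necessarily Malinowski's procedure; the variable names are mine), is one way to get at the answer numerically. Since H is the projector onto the 2D column space of X, the test vector lies in that subspace exactly when (I - H)\ x = 0; with a single missing entry, that condition is linear in the unknown, so least squares hands it to us.

```python
import numpy as np

X = np.array([[2.0, 3.0,  4.0],
              [1.0, 0.0, -1.0],
              [4.0, 5.0,  6.0],
              [3.0, 2.0,  1.0],
              [6.0, 7.0,  8.0]])

# H = u1 u1^T, the projector onto the 2D column space of X (X has rank 2)
u, s, vt = np.linalg.svd(X, full_matrices=False)
u1 = u[:, :2]
H = u1 @ u1.T

# x = (1, 2, 3, NA, 5): split into the known part and the unknown fourth entry
x_known = np.array([1.0, 2.0, 3.0, 0.0, 5.0])    # NA set to 0 for the moment
e4 = np.zeros(5)
e4[3] = 1.0                                       # marks the slot holding NA

# require (I - H)(x_known + NA * e4) = 0 and solve for NA by least squares
P = np.eye(5) - H
NA, *_ = np.linalg.lstsq(P @ e4.reshape(-1, 1), -P @ x_known, rcond=None)
x_filled = x_known + NA[0] * e4

print(NA[0])                                  # the value that puts x into the 2D subspace
print(np.allclose(H @ x_filled, x_filled))    # True: the completed x lies in the subspace
```

Writing it as a least-squares problem, rather than as a single division, means the same sketch works unchanged if more than one entry is missing.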
Read the rest of this entry »

PCA / FA Malinowski: Example 5. Simplified Target Testing

introduction

For a good reason which I have not yet discussed, Malinowski wants to find x vectors which are close to \hat{x} = H\ x. (His x and \hat{x} would usually be written y and \hat{y} in a least-squares fit.) He finds a candidate x and tests whether \hat{x} is close to it. He recommends computing an intermediate vector t, which is the \beta of his least-squares fit to x.

Since he seems to care about t only when \hat{x} is close to x, and since \hat{x} is incredibly easy to compute directly, I prefer to delay the computation of t. Find t after we’ve found a good x.
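Why is \hat{x} so easy to compute directly? Here is my one-line gloss, using only two facts already on the table: t is the \beta of a least-squares fit, and R = u_1\ w_1 from the cut-down SVD (see the earlier example 5 post, below). Since {u_1}^T u_1 = I and w_1 is invertible,

t = \left(R^T R\right)^{-1} R^T x = {w_1}^{-1}\ {u_1}^T x

and therefore

\hat{x} = R\ t = u_1\ w_1\ {w_1}^{-1}\ {u_1}^T x = u_1\ {u_1}^T x = H\ x\ .

That is, \hat{x} is just the orthogonal projection of x onto the 2D subspace; t itself can wait until we have an x worth keeping.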

It will also turn out that he wants a collection of t vectors in order to pick a nicer basis than u or u1. And I’m not going to follow him there, because all of that is what practitioners call “non-orthogonal rotations”. (That strikes me as an oxymoron.) It’s what Harman spends most of his book doing, and that’s where I’ll look if I ever want to. It’s important, but I’m not going to look at it this time around.

Anyway, we factored the data matrix
Read the rest of this entry »

PCA / FA Malinowski: example 5. target testing

Recall that we computed the SVD X = u\ w\ v^T\ of this matrix:

X = \left(\begin{array}{lll} 2 & 3 & 4 \\ 1 & 0 & -1 \\ 4 & 5 & 6 \\ 3 & 2 & 1 \\ 6 & 7 & 8\end{array}\right)

and we found that the w matrix was

w = \left(\begin{array}{lll} 16.2781 & 0. & 0. \\ 0. & 2.45421 & 0. \\ 0. & 0. & 0. \\ 0. & 0. & 0. \\ 0. & 0. & 0.\end{array}\right)

Because w has only two nonzero entries, we know that X is of rank 2: its three columns span only a 2D space.

Given a column of data x (a variable, in this example, of length 5), Malinowski wants to know if it is in that 2D space. As he puts it, “if the suspected test vector [x] is a real factor, then the regeneration \hat{x} = R\ t will be successful.” He gives us a formula for computing t; by a successful regeneration, he means that \hat{x}\ is close to x.
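As a concrete illustration — my own sketch, with my variable names, taking t to be the least-squares \beta (as noted in the simplified target testing post) and R = u_1\ w_1 — here is the regeneration test run on two candidates: a column of X itself, which is in the 2D space by construction, and an arbitrary vector which is not.

```python
import numpy as np

X = np.array([[2.0, 3.0,  4.0],
              [1.0, 0.0, -1.0],
              [4.0, 5.0,  6.0],
              [3.0, 2.0,  1.0],
              [6.0, 7.0,  8.0]])

# cut-down SVD: keep the two nonzero singular values (X has rank 2)
u, s, vt = np.linalg.svd(X, full_matrices=False)
u1, w1 = u[:, :2], np.diag(s[:2])
R = u1 @ w1                       # R = u1 w1, so that X = R C with C = v1^T

def regenerate(x):
    """Least-squares t for the model xhat = R t, and the regenerated xhat."""
    t, *_ = np.linalg.lstsq(R, x, rcond=None)
    return t, R @ t

for x in (X[:, 0],                                  # a column of X: in the 2D space by construction
          np.array([1.0, 1.0, 0.0, 0.0, 0.0])):     # an arbitrary vector: not in it
    t, xhat = regenerate(x)
    print(np.round(xhat, 3), "regeneration successful?", np.allclose(xhat, x, atol=1e-6))
```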
Read the rest of this entry »

PCA / FA Malinowski: example 5.

(June 10: I have made 4 edits, all cosmetic. You may search on “edit:”)

Malinowski (edit: “Factor Analysis in Chemistry”, 3rd ed.) does a lot of things differently from what we’ve seen. Fortunately, his model is simple enough, although his notation is… different. His model is

X = R C,

and he calls R and C the row and column matrices respectively. He wants X to have more rows than columns, so he transposes if necessary; then he chooses C to have more columns than rows, and R will have more rows than columns. For starters, then, his X matrix looks like the usual design matrix for regression. (Incidentally, he didn’t call it X.)

He chooses C = {v_1}^T, from the cut-down SVD. That is, I write the SVD of X as

X = u\ w\ v^T\ ,

where u and v are orthogonal and w is the same shape as X. But we know from the derivation and our experience with Davis that we may also write

X = u_1\ w_1\ {v_1}^T\ ,

where w_1 is square, diagonal, and invertible (it is a cut-down w), and u_1 and v_1 are the submatrices of u and v which are conformable with w_1. (We’ll see all this shortly.) We have dropped the parts of u, w, and v which are not required for reproducing X. (I remind you that what we’ve lost is orthogonality: u_1 and v_1 still have orthonormal columns, but they are no longer square, so they are not orthogonal matrices the way u and v were.)
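Here is a small numerical sketch of the cut-down SVD for our example matrix — mine, not the book's; the names follow the post. It confirms that u_1\ w_1\ {v_1}^T reproduces X, that u_1 and v_1 still have orthonormal columns, and that u_1\ {u_1}^T is not the identity, which is the orthogonality we gave up.

```python
import numpy as np

X = np.array([[2.0, 3.0,  4.0],
              [1.0, 0.0, -1.0],
              [4.0, 5.0,  6.0],
              [3.0, 2.0,  1.0],
              [6.0, 7.0,  8.0]])

u, s, vt = np.linalg.svd(X, full_matrices=False)   # s is roughly (16.2781, 2.45421, 0)

# cut down to the rank (two nonzero singular values)
k = 2
u1, w1, v1 = u[:, :k], np.diag(s[:k]), vt[:k, :].T

print(np.allclose(u1 @ w1 @ v1.T, X))        # True: X is reproduced exactly
print(np.allclose(u1.T @ u1, np.eye(k)))     # True: columns of u1 are orthonormal
print(np.allclose(v1.T @ v1, np.eye(k)))     # True: columns of v1 are orthonormal
print(np.allclose(u1 @ u1.T, np.eye(5)))     # False: u1 is no longer an orthogonal matrix
```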
Read the rest of this entry »