Happenings – 16 March


I’ve continued working on the same 4 things: special relativity, controls, 2D surfaces, and PCA. I’ve also added another: quantum mechanics.

For PCA, I’m finishing the example from Davis’ geology book. It’s not quite as straightforward as I thought, but I think I’ve got it all. I won’t be sure until it’s done, but I’m hopeful. I thought I would print it out yesterday while I took a walk around the block (it’s a long block); when I got back, I was shocked to see that it was still printing. I don’t know how many posts it will turn into, but that draft was 26 pages!

For control theory, I’ve been waffling. I had worked an interesting set of 3 examples in the Schaum’s: for a given system and response specification, design a lag, a lead, and a lag-lead compensator. This was particularly interesting because what experience I have is with PID control instead.

Now this is Schaum’s, so I was just doing the computations in 3 “solved problems”; it really wasn’t clear how they chose the parameters for each kind of control system. Frankly, they seemed to pull the parameters out of their… ah, out of thin air. It also wasn’t clear why any one of the 3 control systems was better than the others. And were they really suggesting that we decide among the 3 kinds by doing all 3 designs? Can’t we figure out up front which one to choose? (Nevertheless, I liked seeing that any of the 3 could be used to solve the same control problem.)

So I went looking for a book that might do a better job of explaining how to choose the parameters, and when to use lag, lead, or lag-lead compensation.

I found it in my two oldest books, which are effectively two editions of the same book (D’Azzo & Houpis, but not in the bibliography). I hadn’t expected to be using either of them, so I should add at least one to the bibliography. (This particular material seems to be common to both editions, so I’ll use whichever is the cleaner copy; both are used books, and at least one is heavily underlined.)

Looking through those books… the good news is that they do have a design strategy for choosing the kind of control, and a strategy for choosing the parameters. The bad news is that I can’t just pick it up in the chapter on using Bode plots: they do preliminary work using other methods.

Do I want to work through 3 chapters of an ancient text? Well, the Mathematica® add-on for controls is capable of doing all of those ancient techniques, so they must still be useful. I guess I should roll up my sleeves and get my hands dirty.

For 2D surfaces, I’ve finished taking notes on ch 1 in Bloch; I’m ready to work through it. I think I’ll talk more about this separately.

In special relativity, I was hung up on what are called the aberration formulas. One is the formula that tells us that stars appear bunched up in front of us as we move toward them at relativistic speeds; the other is crucial for particle collisions. In retrospect, what bothers me is that they are derived from velocity transformations rather than from distances. They relate angles in two coordinate systems, so why aren’t they derived from distances?

Once I realized that it was a side issue that was bothering me, I just shrugged it off and confirmed the light aberration formula; I should move into particle collisions Real Soon Now.
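
Here’s a minimal numerical sketch of that confirmation, done in Python rather than by hand: transform a light ray’s velocity components with the relativistic velocity-addition formulas, then compare the resulting angle with the aberration formula cos θ′ = (cos θ − β)/(1 − β cos θ). The particular β and θ are made-up test values.

```python
import numpy as np

beta = 0.6                        # assumed frame speed (units where c = 1)
gamma = 1.0 / np.sqrt(1.0 - beta**2)
theta = np.deg2rad(50.0)          # made-up propagation angle in the rest frame

# velocity components of the light ray in the rest frame
ux, uy = np.cos(theta), np.sin(theta)

# relativistic velocity addition into the frame moving at +beta along x
denom = 1.0 - beta * ux
ux_p = (ux - beta) / denom
uy_p = uy / (gamma * denom)

theta_from_velocities = np.arctan2(uy_p, ux_p)

# the light aberration formula, applied directly
theta_from_formula = np.arccos((np.cos(theta) - beta) / (1.0 - beta * np.cos(theta)))

print(np.rad2deg(theta_from_velocities), np.rad2deg(theta_from_formula))  # agree
# Propagation angles swing toward 180 degrees in the moving frame, i.e.
# incoming starlight arrives more nearly head-on: the stars bunch up ahead.
```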

The quantum mechanics problem I’m looking at is an old friend, in two respects. Got a copy of volume III of the Feynman lectures on physics? He does Stern-Gerlach experiments, for both 2-state and 3-state particles. That is, I think we’re measuring one component of spin or angular momentum, but life gets interesting if we, for example, look for the z-component first and then the x-component.

Feynman has physics arguments which lead to solutions. But I’ve seen Lie algebras since I first saw Feynman, and they’re another window on the same problem, and another part of math I’m looking for applications of. I’m still confused about what physicists are doing with Lie algebras, more specifically with representations of Lie algebras. Are they particles? Are they operators? I incline to think “operators”.

Anyway, how do we compute these probabilities? I did not understand that when I was a sophomore, but we had a handful of solutions we could plug into. A friend was reading Feynman and bounced off those calculations, too, so he asked me how to do them.

I don’t know. Yet. 

Anyway, that’s where I’m at. Along the way, there have been a few interesting tidbits. We’ll see if I can remember to put these out once a week, but here are the interesting observations so far this month.

In ch 8 of Bloch, when he discussed the Exp function, which I had seen before in differential geometry but never bothered with, I realized that it’s the same Exp function we find for Lie groups and Lie algebras. He also described it as wrapping the tangent plane onto the surface. Simple, easy, and so clear. Hey, I say, we’ve spent an entire book deforming surfaces; it’s way past time to deform the tangent plane too.
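
To make that wrapping concrete, here’s a small sketch for the unit sphere, where the exponential map has a closed form: a tangent vector v at p is carried along the great circle through p in the direction of v, landing at arc length |v|. (The sphere and the sample point are my choice of example, not Bloch’s.)

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere: wrap the tangent vector v at p
    onto the sphere along the geodesic (great circle) through p."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return p
    return np.cos(norm) * p + np.sin(norm) * (v / norm)

p = np.array([0.0, 0.0, 1.0])        # the north pole
v = np.array([np.pi / 2, 0.0, 0.0])  # tangent vector of length pi/2
print(sphere_exp(p, v))              # lands on the equator: [1, 0, 0]
```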

Working on a forthcoming blog entry about the reciprocal basis, I was reminded that the transition matrix for the reciprocal basis is A^{-T} = P^{-1}, where A and P are the attitude matrix and transition matrix for the original basis. I’ve been comfortable with the reciprocal basis in principle for quite a while, but I hadn’t recognized its transition matrix off the bat. In retrospect, I’ve seen it show up in PCA a few times.
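
Here’s a quick numerical check, under the conventions I’m assuming: the new basis vectors are the columns of the transition matrix P, and the attitude matrix is A = P^T. The basis itself is a made-up non-orthonormal example.

```python
import numpy as np

# a made-up non-orthonormal basis, stored as the columns of P
P = np.array([[2.0, 1.0],
              [0.0, 1.0]])
A = P.T                              # attitude matrix: basis vectors as rows

# build the reciprocal basis from its definition q_i . p_j = delta_ij,
# i.e. solve P^T q_i = e_i for each reciprocal vector q_i
Q = np.column_stack([np.linalg.solve(P.T, e) for e in np.eye(2)])

print(np.round(Q.T @ P, 12))                 # identity: the defining property
print(np.allclose(Q.T, np.linalg.inv(P)))    # reciprocal vectors as rows: P^{-1}
print(np.allclose(Q.T, np.linalg.inv(A).T))  # ... which equals A^{-T}
# whether you call A^{-T} = P^{-1} or its transpose "the" transition matrix
# of the reciprocal basis depends on your row/column conventions
```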

Speaking of PCA, and non-orthonormal bases (the only kind for which there is a nontrivial reciprocal basis), I have this wild idea that scaling the eigenvector matrix by the square roots of its eigenvalues amounts to defining a non-Euclidean inner product. Come to think of it, Christensen did say something like that, and I noted it, but I didn’t understand it. And, of all things, it was while I was looking at quantum mechanics – the linear algebra of particle states – that almost-understanding came to me in a flash. I still have to work it out.
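
While I work it out, here’s a tiny numerical sketch of one reading of the idea (my reading, not necessarily Christensen’s): if S = V Λ V^T with orthonormal eigenvectors V, and we scale to W = V Λ^{1/2}, then the Gram matrix W^T W = Λ is diagonal but not the identity, so using the columns of W as a basis carries a weighted, non-Euclidean inner product.

```python
import numpy as np

# a made-up symmetric positive definite "covariance" matrix
S = np.array([[4.0, 1.0],
              [1.0, 3.0]])

lam, V = np.linalg.eigh(S)       # eigenvalues, orthonormal eigenvectors
W = V @ np.diag(np.sqrt(lam))    # eigenvectors scaled by sqrt(eigenvalues)

# the Gram matrix of the scaled eigenvectors is diagonal but NOT the identity:
# as a basis, W carries a weighted (non-Euclidean) inner product
print(np.round(W.T @ W, 12))     # diag(lam)

# and the scaled eigenvectors reconstruct the original matrix
print(np.round(W @ W.T, 12))     # S
```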

And when I was looking at lag, lead, and lag-lead compensation, I decided it was time to actually look at the RC circuits they were waving under my nose. First I confirmed that Carstens, Schaum’s, and the oldest book all agreed on the circuits. Well, Carstens didn’t have lag-lead, but otherwise they agreed. Now if I see a circuit somewhere else, at least in the near future, I’ll recognize it, or see the difference if it isn’t the same. Anyway, the lag-lead circuit has 2 resistances and 2 capacitances: one RC pair in parallel, the other in series. I confirmed that if we let the C in parallel go to zero, or if we let the C in series go to infinity, then we get the other two circuits.
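
Here’s a symbolic check of those two limits, assuming the usual textbook lag-lead network (series arm R1 in parallel with C1, shunt arm R2 in series with C2); if your circuit is drawn differently, the transfer function below won’t match.

```python
import sympy as sp

s, R1, R2, C1, C2 = sp.symbols('s R1 R2 C1 C2', positive=True)

# transfer function of the passive lag-lead network described above
G = ((R1*C1*s + 1)*(R2*C2*s + 1)) / ((R1*C1*s + 1)*(R2*C2*s + 1) + R1*C2*s)

# let the parallel C go to zero: the lead section drops out, leaving a lag network
print(sp.simplify(sp.limit(G, C1, 0)))      # (R2*C2*s + 1)/((R1 + R2)*C2*s + 1)

# let the series C go to infinity: the lag section drops out, leaving a lead network
print(sp.simplify(sp.limit(G, C2, sp.oo)))  # R2*(R1*C1*s + 1)/(R1*R2*C1*s + R1 + R2)
```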

And when I was browsing the state space chapter in Franklin & Powell etc., I was struck by their comment that the system matrix cannot be diagonalized if there are repeated poles. Interesting. In general, having repeated eigenvalues does not mean that a matrix cannot be diagonalized; a repeated eigenvalue is necessary for non-diagonalizability, but not sufficient. (A matrix which cannot be diagonalized cannot have all distinct eigenvalues; there must be at least one repeated pair.) But do you remember those linear differential equations sophomore year, where exponentials (possibly complex) did not suffice for a general solution, and you had to multiply at least one of them by t, or by t and t^2, etc.? That’s because you had a non-diagonable matrix! Not that they showed us that. It seems as though a matrix which represents a system of (first-order) ODEs has enough extra structure that a repeated eigenvalue is sufficient for non-diagonability. We’ll see if that conjecture is true.
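
Here’s a small test of that conjecture on the simplest interesting case: the companion (state-space) matrix of y″ − 2y′ + y = 0, whose characteristic polynomial (s − 1)² has a repeated root, the same situation where sophomore ODEs forced a t·e^t into the general solution.

```python
import sympy as sp

# companion (state-space) matrix for y'' - 2y' + y = 0:
# x1 = y, x2 = y', so x' = A x with characteristic polynomial (s - 1)^2
A = sp.Matrix([[0, 1],
               [-1, 2]])

print(A.eigenvals())             # {1: 2} -- a repeated eigenvalue
print(A.is_diagonalizable())     # False: only one eigenvector for s = 1

# the Jordan form shows the off-diagonal 1 responsible for the t*exp(t) term
Pm, J = A.jordan_form()
print(J)                         # Matrix([[1, 1], [0, 1]])

# contrast: the identity matrix also has a repeated eigenvalue but is already
# diagonal, so repetition alone is not enough; the companion structure is
# what forces non-diagonability here
print(sp.eye(2).is_diagonalizable())   # True
```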
