The most interesting news this week isn’t really news… it’s rather old stuff as these things go… but it’s new to me. (One of my readers is probably going to say, “You should have learned this 7 years ago from Science News.” He should know… he renews my subscription every Christmas. For which I’m grateful.)

As I said last week, I just ordered 4 books. 2 of them have come in, and one of those was Ian Stewart’s “In Pursuit of the Unknown: 17 Equations That Changed the World”. I wasn’t expecting to learn anything from the book, but I did:

The **interplanetary superhighway**.

(Also known as the **interplanetary highway** and the **interplanetary transport network**. Searches on any of these 3 terms should be quite productive. I’ve provided a handful of links from just the first page of Google results, and from Wikipedia.)

Personally, I think it could better be called the interplanetary hiking trails – it’s a large collection of very low energy, but also very slow, pathways between equilibrium points in the solar system. It means, for example, that if we’re willing to spend more time at it, we can get to Jupiter without using a slingshot – gravitational assist – around Mars.

You may recall that there are equilibrium points called “Lagrange points” – five associated with each pair of orbiting bodies, whether Sun and planet or planet and satellite. At such equilibrium points, a tiny application of force suffices to keep an object in place, even at an unstable equilibrium. What may be surprising is that such equilibria are connected by low-energy-cost pathways.
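As a back-of-the-envelope check on where such points sit, here’s a minimal sketch of my own – not from the book or any of the linked references – using the Hill-radius approximation r ≈ a·(m/3M)^(1/3) for the distance of the Sun–Earth L1 and L2 points from Earth. The constants are standard values; the formula is only an approximation.

```python
# Rough location of the Sun-Earth L1/L2 points via the Hill-radius
# approximation r ~ a * (m / (3 M))**(1/3).  This is a sketch, not an
# exact computation of the collinear Lagrange points.
A_EARTH = 1.496e11      # Earth-Sun distance, meters
M_SUN = 1.989e30        # mass of the Sun, kg
M_EARTH = 5.972e24      # mass of the Earth, kg

r_hill = A_EARTH * (M_EARTH / (3 * M_SUN)) ** (1.0 / 3.0)
print(f"L1/L2 distance from Earth: {r_hill / 1e9:.2f} million km")
```

That comes out near 1.5 million km, which matches the usual figure quoted for the Sun–Earth L1 and L2 points.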

Anyway, here is a nice introduction. Here is a newsy summary. Here is NASA talking about setting up propellant depots. Here is a collection of videos and/or lectures… and here is another description by the same author. Here is a PDF of one book… and here is another book.

(No – are you sitting down? – I did not immediately order the book. Let me do a little more reading 1st.)

What else has been happening?

There was one earthquake in my vicinity in the past week, which brings the total for July – so far – to 11, a nice typical number… but experience suggests that there could easily be 5 earthquakes on any one of the 3 days remaining in July.

While working on the post about getting the final tableau of a linear programming problem, I saw a more compact way of assembling the algorithm… which makes me very glad that I didn’t rush into print.

While reading about time series analysis, I seem to have learned that the term “stationary” when applied to autoregressive models means “asymptotically stationary” instead of “second-order stationary” – which contradicts what most of my time series books suggest. That is, most of the books gloss right over the distinction. Of course, I’ll show it to you when I start this.
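A quick simulation makes that distinction concrete. This is my own sketch, with arbitrary parameter choices: an AR(1) process started from a fixed value far from equilibrium does not have constant variance at first, but its variance converges to σ²/(1−φ²) – asymptotically stationary rather than stationary from the start.

```python
import numpy as np

# Sketch: an AR(1) process x_t = phi * x_{t-1} + eps_t started from a
# fixed x_0 = 10 is not second-order stationary at the start, but its
# variance converges to sigma**2 / (1 - phi**2).
rng = np.random.default_rng(0)
phi, sigma = 0.8, 1.0               # arbitrary illustrative parameters
n_paths, n_steps = 20000, 200

x = np.full(n_paths, 10.0)          # deliberately far from equilibrium
var_path = []
for _ in range(n_steps):
    x = phi * x + rng.normal(0.0, sigma, n_paths)
    var_path.append(x.var())

limit = sigma**2 / (1 - phi**2)     # theoretical limiting variance
print(var_path[0], var_path[-1], limit)
```

The variance across paths starts near σ² = 1 (the starting point is deterministic) and settles down to the limiting value of about 2.78.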

Finally… the technical post that I was so confident of putting out last Monday… well, it obviously didn’t go. What happened? Well, while reading it over for final editing, I realized that my 3rd theorem in a set of 3 was actually quite different from the 1st 2.

Look, it is essentially the definition of a chi-square with n degrees of freedom – rather than a theorem – that it is the sum of squares of n independent standardized normal variables.

But then, if we do not know the mean of the normal variables, we can use their sample mean to standardize them… and then we get a theorem that says that the sum of squares of n independent normal variables, each standardized using the sample mean, is distributed as chi-square with n-1 degrees of freedom. We lose a degree of freedom because the sample mean is computed from the n variables themselves – a dependent quantity – rather than from the unknown theoretical mean.
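Those two facts are easy to see in a simulation. Here’s a sketch of my own, with made-up parameters: standardizing with the true mean gives a statistic whose average is n; standardizing with the sample mean drops the average to n-1.

```python
import numpy as np

# Sketch: sum of squares of n standardized normals has mean n (chi-square
# with n df); standardizing with the *sample* mean instead loses one
# degree of freedom, so the mean of the statistic is n - 1.
rng = np.random.default_rng(1)
n, trials = 10, 200_000
x = rng.normal(5.0, 2.0, size=(trials, n))   # mean 5, sd 2 -- arbitrary

known = ((x - 5.0) / 2.0) ** 2                               # true mean
sample = ((x - x.mean(axis=1, keepdims=True)) / 2.0) ** 2    # sample mean

print(known.sum(axis=1).mean())    # close to n     = 10
print(sample.sum(axis=1).mean())   # close to n - 1 = 9
```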

In both cases we end up with an unknown common variance, σ², by which we divided in order to standardize our normal variables. It’s OK that the variance is unknown, because it also occurs somewhere else, and the two will cancel when we construct t-tests or F-tests.

So far, so good. There is, however, another theorem. If we take the sum of squares of n residuals from a regression with k variables, then that sum of squares is distributed as chi-square with n-k degrees of freedom.

Wait a minute. The 1st 2 cases assume that our normal variables are both independent (or mostly) and standardized. The residuals are not independent – but the n-k takes care of that. Okay.

But the residuals do not have a common variance… the variance of the ith residual is σ²(1 − H_ii), where H is the hat matrix. (This post worked that out.)

So, although the final theorem is true, it is more complicated than the 1st 2 cases. At first glance, it should have terms in H_ii in it, but it doesn’t. And I wasn’t prepared to rewrite that part of the post Monday evening.
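Both halves of that can be checked numerically. The following is my own sketch with an invented design matrix: the residual variances really are σ²(1 − H_ii), unequal across observations, and yet the mean of the residual sum of squares, divided by σ², still comes out to n − k.

```python
import numpy as np

# Sketch: in OLS with hat matrix H = X (X'X)^{-1} X', the i-th residual
# has variance sigma**2 * (1 - H_ii) -- not a common variance -- yet the
# residual sum of squares behaves like sigma**2 * chi-square(n - k).
rng = np.random.default_rng(2)
n, k, sigma, trials = 12, 3, 1.5, 100_000

# fixed design: intercept plus k-1 random regressors (made up for illustration)
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
H = X @ np.linalg.inv(X.T @ X) @ X.T
beta = np.array([1.0, 2.0, -0.5])            # arbitrary true coefficients

eps = rng.normal(0.0, sigma, size=(trials, n))
y = X @ beta + eps
resid = y @ (np.eye(n) - H)                   # residuals e = (I - H) y

emp_var = resid.var(axis=0)                   # per-residual empirical variance
theo_var = sigma**2 * (1 - np.diag(H))        # sigma^2 (1 - H_ii)
print(np.max(np.abs(emp_var - theo_var)))     # small

ss_mean = (resid**2).sum(axis=1).mean() / sigma**2
print(ss_mean)                                # close to n - k = 9
```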

In fact, I haven’t rewritten it yet… but I certainly hope that post goes out this Monday… but I was thinking of adding something else, too….
