Happenings – 2011 Christmas Day

Merry Christmas, or happy winter solstice celebration, whichever you prefer, if you have a preference and if you observe a holiday.

This post is a day late because I was very lazy yesterday. I watched the Steelers shut out their opponent, instead of working on this post. Afterwards, I just puttered around the house.

I expect to have dinner with friends this evening, but even though it’s Christmas, I’ve decided to make up for yesterday and do what I should have done then.

Last weekend and during the week, mathematically…

I continued working on a second example of polynomial regression. As I said before, it is a juicier example than I originally realized. That’s still true, even though I found a typo in my Mathematica commands and the computational results are nicer than I had thought. Nevertheless, I think this will be an illuminating example.

It’s also complicated enough that it will span three posts. I was having trouble keeping track of all the things I had done – which is a clear and powerful signal that I have done too much for one post.

I am amused to see that on Dec 3 I said I had “2 small examples of multicollinearity when we try to fit polynomials to data”. Well, they are small – the second one, yet to come, has only 8 observations – but I will not try to cram the discussion and computations into one post.

I’ve also tracked down three – not just one – quantitative illustrations of the Higgs mechanism in three of my books. My situation, however, is that while I can follow the mathematics… I cannot justify the mathematics. I still might decide to post one of these illustrations, even before I understand it, just because it is so darned interesting and timely. After all, the Higgs mechanism is what gives mass to all the other particles that have mass. Way cool, huh?

My undergraduate alter ego hasn’t done any more circuit theory; my perpetual student has picked up linear programming again… reading instead of trying to compute… but I’m getting an itch to start computing, of course.

Now let me show you the generalized Stokes’ theorem. It subsumes more familiar equations – from undergraduate physics – involving the gradient, curl, and divergence of vector fields. They can all be replaced by one equation – and a geek should know that.

Let me begin by reminding you (you who have had electromagnetism in college) of the divergence theorem… also called Gauss’ theorem. In words, as applied to electromagnetism, it says that the flux of the electric field through the surface S of a volume V is proportional to the charge contained in the volume:

$\int_S E \cdot dA = 4 \pi \int_V \rho \, dV$.

The mathematical statement, however, is that the integral over the surface of the (normal component of the) electric field is equal to the integral of the divergence of the electric field over the volume bounded by the surface:

$\int_S E \cdot dA = \int_V \operatorname{div} E \, dV$.

Maxwell’s equation

$\operatorname{div} E = 4 \pi \rho$

says that those two right-hand sides are equal. That equality is what gets the usual electromagnetic statement from the general mathematical statement.
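If you like seeing numbers, the divergence theorem is easy to check numerically. Here is a quick sketch – a toy example of my own, not from any book – taking $E = (x, y, z)$, whose divergence is 3, and comparing the outward flux through the unit cube with the volume integral:

```python
# Numerical check of the divergence theorem for E = (x, y, z) on the
# unit cube [0,1]^3.  div E = 3, so both sides should come out to 3.
# (A toy example of my own, not one from the post.)

N = 20                      # grid resolution for midpoint sums
h = 1.0 / N
mids = [(i + 0.5) * h for i in range(N)]

def E(x, y, z):
    return (x, y, z)

def div_E(x, y, z):         # by hand: d/dx(x) + d/dy(y) + d/dz(z) = 3
    return 3.0

# Right-hand side: volume integral of div E
vol = sum(div_E(x, y, z) * h**3 for x in mids for y in mids for z in mids)

# Left-hand side: outward flux through the six faces of the cube
flux = 0.0
for u in mids:
    for v in mids:
        flux += E(1.0, u, v)[0] * h**2    # face x = 1, outward normal +x
        flux -= E(0.0, u, v)[0] * h**2    # face x = 0, outward normal -x
        flux += E(u, 1.0, v)[1] * h**2    # face y = 1, outward normal +y
        flux -= E(u, 0.0, v)[1] * h**2    # face y = 0, outward normal -y
        flux += E(u, v, 1.0)[2] * h**2    # face z = 1, outward normal +z
        flux -= E(u, v, 0.0)[2] * h**2    # face z = 0, outward normal -z

print(flux, vol)            # both should be very close to 3
```

The midpoint rule is exact here because the field is linear, but the same loop works for any smooth field you care to try.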

Then there is Faraday’s Law of Induction: the rate of change of the magnetic flux through an area A is proportional to the induced electromotive force along the curve C bounding the area:

$\int_C E \cdot ds = -\frac{1}{c} \frac{d}{dt} \int_A B \cdot dA$.

(A changing magnetic field generates a voltage drop, which generates a current.)

The mathematical statement (Stokes’ theorem; in the plane, Green’s theorem) is that

$\int_C E \cdot ds = \int_A \operatorname{curl} E \cdot dA$,

and we get from the math to the physics using Maxwell’s equation

$\operatorname{curl} E = -\frac{1}{c} \frac{\partial B}{\partial t}$.
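The planar case is just as easy to check numerically. Another toy example of my own: take $F = (-y, x)$, whose curl has z-component 2 everywhere, and compare the circulation around the unit circle with the integral of the curl over the unit disk – both should equal $2\pi$.

```python
import math

# Numerical check of Green's theorem (the planar Stokes theorem) for
# F = (-y, x) on the unit disk.  curl F has z-component 2 everywhere,
# so both sides should equal 2*pi.  (A toy example of my own.)

N = 1000
line = 0.0
for k in range(N):
    t = (k + 0.5) * 2.0 * math.pi / N       # midpoint angles around the circle
    Fx, Fy = -math.sin(t), math.cos(t)      # F at the point (cos t, sin t)
    dx, dy = -math.sin(t), math.cos(t)      # tangent vector d/dt (cos t, sin t)
    line += (Fx * dx + Fy * dy) * (2.0 * math.pi / N)

# Area integral of curl_z F = 2 over the disk, done in polar coordinates:
# integral of 2 * r dr dtheta for 0 <= r <= 1, 0 <= theta <= 2*pi.
M = 400
area = 0.0
for i in range(M):
    r = (i + 0.5) / M
    area += 2.0 * r * (1.0 / M) * 2.0 * math.pi

print(line, area)    # both should be close to 6.28318...
```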

Finally, there are two similar equations, one of which I hope we take utterly for granted after freshman calculus. That one is the Fundamental Theorem of Integral Calculus:

$F(b) - F(a) = \int_a^b F'(x) \, dx$.
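Even the familiar case is worth a one-line check. A throwaway example of my own: integrate $F'(x) = \cos x$ over [0, 1] numerically and compare with $F(1) - F(0) = \sin 1$.

```python
import math

# Midpoint-rule check of the fundamental theorem on [0, 1] for F(x) = sin(x),
# so F'(x) = cos(x).  (A throwaway example, not from the post.)
a, b, N = 0.0, 1.0, 10000
h = (b - a) / N
integral = sum(math.cos(a + (k + 0.5) * h) for k in range(N)) * h

print(integral, math.sin(b) - math.sin(a))   # should agree to ~8 places
```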

The other is a generalization of that from the interval [a,b] to a curve C:

$\Delta \varphi = \varphi(b) - \varphi(a) = \int_C \operatorname{grad} \varphi \cdot ds$.

(Some of you have been using that last equation since high school… though you didn’t do the integral over the given path; you used a simpler path for which the potential difference $\Delta \varphi$ was obvious. The change in gravitational potential energy is independent of the path taken: whether something travels down a playground slide or gets dropped straight down from the top of the slide, it hits the ground with the same speed – neglecting friction.)
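Path independence is the whole point, and it too can be checked numerically. A sketch with a made-up potential of my own, $\varphi(x, y) = x^2 + y$: integrate grad $\varphi$ along two different paths from (0, 0) to (1, 1) and compare both with $\varphi(1,1) - \varphi(0,0) = 2$.

```python
# Check that the line integral of grad(phi) is path-independent, for the
# made-up potential phi(x, y) = x*x + y, going from (0, 0) to (1, 1).
# phi(1,1) - phi(0,0) = 2, so both integrals should be close to 2.

def grad_phi(x, y):
    return (2.0 * x, 1.0)       # computed by hand from phi = x^2 + y

def line_integral(path, N=5000):
    """Midpoint-rule integral of grad(phi) . ds along path(t), 0 <= t <= 1."""
    total, dt = 0.0, 1.0 / N
    eps = 1e-6                  # step for a finite-difference tangent
    for k in range(N):
        t = (k + 0.5) * dt
        x, y = path(t)
        x2, y2 = path(t + eps)
        dx, dy = (x2 - x) / eps, (y2 - y) / eps
        gx, gy = grad_phi(x, y)
        total += (gx * dx + gy * dy) * dt
    return total

straight = line_integral(lambda t: (t, t))        # straight line
parabola = line_integral(lambda t: (t, t * t))    # curved path

print(straight, parabola)    # both should be close to 2
```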

What do all these equations have in common?

One integral is over an n-dimensional region; the other is over its (n-1)-dimensional boundary. We had a volume and the surface bounding it; an area and the curve bounding it; an interval and the endpoints bounding it; a curve and the endpoints bounding it. On one side we had a function; on the other, some differential operator applied to it.

The generalized Stokes’ Theorem is written

$\int_{\partial M} \omega = \int_M d\omega$.

$\partial M$ is the boundary of the “manifold” M – where the term manifold encompasses intervals, paths, areas, volumes, and on to higher dimensions – and $d\omega$ is the exterior derivative of the form $\omega$ – where a “form” encompasses anything we can integrate.

Depending on $\omega$, $d\omega$ turns out to be the ordinary derivative, the gradient, the curl, or the divergence.
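To make that dictionary explicit in three dimensions – a sketch, using the usual identifications of forms with functions and vector fields, and stated without proof:

```latex
\begin{aligned}
\omega = f \ \text{(a 0-form, i.e. a function)} &\quad\Longrightarrow\quad d\omega \ \leftrightarrow\ \operatorname{grad} f \\
\omega \ \leftrightarrow\ F \ \text{(a 1-form)} &\quad\Longrightarrow\quad d\omega \ \leftrightarrow\ \operatorname{curl} F \\
\omega \ \leftrightarrow\ F \ \text{(a 2-form)} &\quad\Longrightarrow\quad d\omega \ \leftrightarrow\ \operatorname{div} F
\end{aligned}
```

And on an interval, with $\omega = F$ a function, $d\omega = F'(x)\,dx$ – which recovers the fundamental theorem.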

As with most of my growing list of things a geek should know about mathematics, it isn’t just an equation or four – it’s that little extra. In this case, I think a geek should know the generalized Stokes theorem – not just its special cases.

That is, the equations

$\int_S E \cdot dA = \int_V \operatorname{div} E \, dV$

$\int_C E \cdot ds = \int_A \operatorname{curl} E \cdot dA$

$\varphi(b) - \varphi(a) = \int_C \operatorname{grad} \varphi \cdot ds$

$F(b) - F(a) = \int_a^b F'(x) \, dx$

are all special cases of

$\int_{\partial M} \omega = \int_M d\omega$.