Introduction to Group Theory and Cyclic Groups

My goal in the next few posts is to talk about low-order finite groups – that is, groups which contain a small number of elements.

My introduction to groups is going to be rather nonstandard. And it will be sketchy. Grab your favorite Introduction to Abstract Algebra or Introduction to Group Theory book. Suggestions:

  • Fraleigh, “A First Course in Abstract Algebra”. A popular introductory text. I own two different editions.
  • Dean, “Classical Abstract Algebra”, ISBN 0060416017. An excellent, if little known, introductory text.
  • Dummit & Foote, “Abstract Algebra”. Written for undergraduates, with enough material for grad students. This was my main reading this time around.
  • Schaum’s Outline of Group Theory. Cheap.
  • Armstrong, “Groups and Symmetry”, ISBN 0387966757. An excellent undergraduate text with emphasis on the actions of groups on geometric figures. I’ve been using this book, too.

I think the cleanest starting point is to define a group as follows. We start with a set G, and a binary operation * on it. That is, given any two elements a, b of the set G, we have the product a*b, and that product is an element of G. Specifically, we require that the set be closed under the operation (or product). (That rules out, for example, the dot product of two vectors – because the dot product is not a vector, but a scalar.)
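To make that concrete – the example is mine, not from any of the books above – take G = {0, 1, 2, 3} with a*b defined as addition mod 4. That is the cyclic group of order 4, and closure can be checked mechanically:

    g = Range[0, 3];
    cayley = Table[Mod[a + b, 4], {a, g}, {b, g}];  (* Cayley table for addition mod 4 *)
    MatrixForm[cayley]
    AllTrue[Flatten[cayley], MemberQ[g, #] &]       (* closure: every product lands back in G *)

By contrast, no such table exists for the dot product: its output is a scalar, not another vector.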

Happenings – 2012 Jan 28

Mathematically speaking, it’s been a quiet week.

I hope to put out a summary of multicollinearity this Monday… but, if necessary, I already have a different post written – a sketchy introduction to group theory. Finishing it off last weekend didn’t leave much time for other mathematics.

I did pick up chemical reactions again… namely, finding mechanisms – sequences of elementary reactions – that explain observed reaction rates… specifically, the dependence of rates on composition. It’s beginning to make sense, now that I’ve looked at it after a long layoff.

I also spent a little while looking through my books on spectrum analysis (that is, frequency domain analysis of time series). For this, I feel like one of the mythical 3 blind men trying to understand an elephant. I like having more than one way to come at something, but for spectrum analysis, there seem to be too many different ways of approaching it. (I know, first understand one, or maybe two together, and then move on to more.)

A 2nd book by Vladimir I. Arnold came in: “Catastrophe Theory”. The back of the book says that it “…provides a concise, non-mathematical review of the less controversial results in catastrophe theory.”

I beg to differ. One might call it “non-rigorous”, but it is hardly nonmathematical.

The most significant thing I got out of it was: “On odd-dimensional manifolds there can be no symplectic structures, but instead there are contact structures.”

The point is that symplectic structures can only be defined on even-dimensional spaces – I knew that… the prototype is the even-dimensional phase space of Hamiltonian mechanics. And I have heard of contact structures… without realizing that there was any relationship. At this point, that’s about all I know.
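For the record, here is the linear-algebra reason – my gloss, not Arnold’s. A symplectic structure requires a nondegenerate skew-symmetric form, and on an odd-dimensional space every skew-symmetric matrix A (with A^T = -A) is singular:

    \det A = \det A^{T} = \det(-A) = (-1)^{n} \det A

so for odd n we get det A = -det A, hence det A = 0, and the form is degenerate.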

So let me go learn some more about something.

Regression 1: ADM polynomials – 3 (Odds and Ends)

Edit Jan 29: a reference to the diary post of Jan 21 has been corrected to refer to Jan 14.

There are several things I want to show you, all related to our orthogonal polynomial fits.

  • Can we fit a 7th-degree polynomial to our 8 data points? Yes.
      – We can do it using regression.
      – We can do it using Lagrange interpolation.
  • Did Draper & Smith use the same orthogonalized data? Yes, but not normalized.
  • How did Draper & Smith get their values? They looked them up.
  • Were their values samples of Lagrange polynomials? No.
The bottom line is that, starting with half-integral values of x, all I need is the Orthogonalize command to apply Gram-Schmidt to the powers of x. I did that here. I don’t need to look up a set of equations or a pre-computed table of orthogonal vectors. Furthermore, I can handle arbitrary data that is not equally spaced.
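In Mathematica, that is only a few lines. Here is a sketch – assuming, from the half-integral values, that the x’s are the centered years 1986–1993:

    x = Range[1986, 1993] - 3979/2;                  (* subtract the mean year: -7/2, -5/2, ..., 7/2 *)
    powers = Table[N[x]^p, {p, 0, 7}];               (* 1, x, x^2, ..., x^7 as vectors *)
    orth = Orthogonalize[powers];                    (* Gram-Schmidt; normalized by default *)
    Chop[orth.Transpose[orth]] == IdentityMatrix[8]  (* True: the rows are orthonormal *)

Nothing in that depends on the x values being equally spaced – which is exactly the point.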


Happenings – 2012 Jan 21

What’s really memorable about this past week, for me, was a short collaboration between my alter egos the kid and the undergraduate. The undergrad was struggling with easy stuff – and the kid really, really wanted to get it right.

I had been struggling with circuit theory… specifically with RLC circuits. Yes, I solved them long ago in college – although, in fact, I didn’t actually study elementary differential equations until I was a graduate student. (I hadn’t had them before I transferred to Caltech as a sophomore… where they had been covered freshman year.) I had also never seen Laplace transforms until I was a graduate student. I learned both during the 1st course for which I was a TA.

And here I was, all bollixed up, unable to get what I expected.

On the other hand, I had gotten there after deciding that I needed to solve one of the simplest possible RLC circuits, rather than the more complicated ones to which I was trying to apply a slightly more sophisticated method.

Anyway, I woke up one morning dreaming about it, and determined to forget all the complicated stuff… just solve the second-order differential equation for current… and then solve the equation using Laplace transforms. In fact, I solved the voltage balance rather than the second-order ODE using Laplace transforms – and there is one little tricky detail….
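For concreteness, here is one way to write it down – a series RLC loop driven by v(t); this is a sketch, and the tricky detail is a guess. The voltage balance is

    L \frac{di}{dt} + R i + \frac{1}{C} \int_0^t i(\tau)\, d\tau + \frac{q(0)}{C} = v(t)

and taking Laplace transforms term by term,

    L \left( s I(s) - i(0) \right) + R I(s) + \frac{I(s)}{sC} + \frac{q(0)}{sC} = V(s)

The little tricky detail is presumably that initial-charge term q(0)/(sC): the voltage balance carries an initial condition that the current alone does not show.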

Happenings – 2012 Jan 14

I finished playing a very long game of Ascendancy in the middle of the week. My alter ego the kid is now reading Körner’s “The Pleasures of Counting”. My alter ego the undergraduate managed to put in a couple of hours on circuit theory (trying to understand the use of complex impedance)… he even started making headway in an old Dover book, Kron’s “Tensors for Circuits”… the last time I looked at it, I was lost.

I have some mathematics done for a post for this Monday… but I keep thinking about other things I might add to it. If I can’t stabilize the content, I may find it difficult to put the post out on time. Duh.

So who’s in charge? The managing editor wants to publish… but the mathematician isn’t ready to call it quits. We’ll see what happens.

While looking for an online reference to a particular formula – I’m horrified that I couldn’t find it in my own library! – I found the following site. It appears to deal primarily with undergraduate numerical mathematics. (And it did have the formula I was seeking.)

There are 2 points I need to emphasize. One, it has an index to a collection of YouTube videos – mathematics, of course.

Two, I have only looked at five of the videos… and I have found a mistake – two of them, actually – in one “slide”. Unfortunately, I did not find a blog post corresponding to the video… and I didn’t see any way to attach comments to the video. Ah, but I did just send an email.

Here’s a freeze-frame of the slide in question [screenshot]; the full video is here.

The lecturer asserts that a steeper slope implies a higher R^2 because the vertical distance between the data and the fitted line will be larger. Yes, but the vertical distance between the data and its mean value is also larger. The R^2 and the adjusted R^2 will not change. What will change? The estimated variance.

He also said that if the x values were more spread out, then the R^2 would be higher. That’s interesting, because if the x values are more spread out, then the computed slope would be lower… and according to the first point, the R^2 would be lower. In fact, the R^2 will be the same.

It is conceivable that I have completely misunderstood what he means, so let me be explicit. Take a 2-variable data set, x and y, and fit a regression. Now multiply y by 10 and fit another regression: you will get the same R^2 and adjusted R^2; the estimated variance will be 100 times larger.

Now multiply x by 10 (still using the 10 times y) and fit a 3rd regression: you will get the same R^2 and adjusted R^2, and the same estimated variance as for the second case.
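Here is a quick Mathematica check of both experiments, using made-up data – any 2-variable data set will do:

    SeedRandom[17];
    data = Table[{t, 2 t + RandomReal[{-1, 1}]}, {t, 10}];  (* synthetic x-y data *)
    props = {"RSquared", "AdjustedRSquared", "EstimatedVariance"};
    f1 = LinearModelFit[data, t, t];
    f2 = LinearModelFit[({1, 10} #) & /@ data, t, t];       (* y -> 10 y *)
    f3 = LinearModelFit[10 data, t, t];                     (* x -> 10 x and y -> 10 y *)
    #[props] & /@ {f1, f2, f3}

The R^2 and adjusted R^2 agree across all three fits; the estimated variances of the second and third are 100 times that of the first.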

So I’m saying that the R^2 and adjusted R^2 are not affected by vertical or horizontal scaling of the data. (We’ve seen that the R^2 and adjusted R^2 are the same for the Hald data and the standardized Hald data – and standardizing data is a change of scale. And I standardized everything, including the dependent variable, so I also had a change of vertical scale.)

(The numerical stability might be affected by changes of scale! We’ve just seen that taking powers of x = 1.986, 1.987, …, 1.993 leads to horrendous inversions of X’X, while centering the years themselves (i.e. replacing x by -7/2, -5/2, …, 7/2) eliminates the numerical inaccuracies.)
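If you want to see the conditioning for yourself, here is a sketch – my code, using exact arithmetic for the inverse so that round-off doesn’t contaminate the answer:

    cond[x_] := Module[{X, m},
      X = Table[t^p, {t, x}, {p, 0, 7}];
      m = Transpose[X].X;                     (* X'X, computed exactly *)
      Norm[N[m]] Norm[N[Inverse[m]]]]         (* 2-norm condition number *)
    cond[Range[1986, 1993]/1000]              (* raw x: horrendous *)
    cond[Range[1986, 1993] - 3979/2]          (* centered: far smaller *)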

In other words, just as I tell you to be careful reading my posts, I tell you to be careful reading his, or watching his videos. But you might well find something of interest among them….

I started this before I turned my kid and undergraduate loose, so let me let them get on with their play and work… before I turn to Monday’s post.

Regression 1: ADM polynomials – 2

Let’s look again at a polynomial fit for our small set of annual data. We started this in the previous technical post.

What we used last time was the years divided by 1000… because, as messy as our results were, they would have been a little worse using the years themselves.

But there’s a simple transformation that we ought to try – and it will have a nice side effect.

Just center the data. Start with the years themselves, and subtract the mean – giving x = -7/2, -5/2, …, 7/2.

I’ll observe that if we wanted to work with integers, we could just multiply by 2. In either case, our new x is not a unit vector.

Oh, the nice side effect? Our centered data is orthogonal to a constant vector.
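Both remarks are one-liners to verify – a sketch, assuming (from the x values quoted in the Jan 14 Happenings above) that the years are 1986–1993:

    years = Range[1986, 1993];
    x = years - Mean[years]          (* {-7/2, -5/2, ..., 7/2} *)
    x.ConstantArray[1, 8]            (* 0: orthogonal to a constant vector *)
    N[Norm[x]]                       (* about 6.48 - not a unit vector *)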

Let’s see what happens.

Happenings – 2012 Jan 7

I’m being a complete wastrel, taking a break from math by playing Ascendancy, my favorite computer game. I started Tuesday evening after work, began conquering the galaxy yesterday, and I’m going to keep playing for a while. We’ll see whether a technical post goes out on Monday evening.

Regression 1: Archer Daniels Midland (polynomials) – 1

Now I want to illustrate another problem, this time with the powers of x. The following comes from Draper & Smith, p. 463 – the Archer Daniels Midland data. It may be in a file somewhere, but – with only 8 observations – it was easier to type the data in. Heck, I didn’t even look.

[raw data table]

I have chosen to divide the years by 1000; in the next post I will do something else.
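In Mathematica terms, presumably something like this – the years 1986–1993 are my assumption, based on the x values quoted in the Jan 14 Happenings, and the y values are not reproduced here:

    x = Range[1986, 1993]/1000;          (* exact rationals; N[x] = {1.986, ..., 1.993} *)
    powers = Table[N[x]^p, {p, 0, 7}];   (* nearly parallel vectors - the coming trouble *)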

As for the y values: I typed integers and then divided by 100 once, rather than type decimal points.

