I thought about trying to put this post out last evening and last night, but I have a huge backlog of unread magazines, so I spent part of the evening reading in my recliner. I also record several crime shows during the week (NCIS, all three CSIs, Numb3rs, Bones, and Criminal Minds) and watch them on weekend evenings. That cuts down on channel surfing, and I can skip commercials. I did that after reading. Actually, I watched Numb3rs in real time and read magazines during the commercials. About 11:30 PM I briefly considered working on this draft.
So, as usual, it is about noon on Saturday as I begin this post. I have already written in my journal, and I have done my playing around with whatever mathematics the kid in me wanted to do this morning.
Algebra. I have never played with unique factorization domains (UFDs) to my satisfaction. That is, more to the point, I have never played with non-UFDs to my satisfaction, and I have been meaning to. The simplest example is the set of numbers of the form

a + b √−5,

where a and b are integers. It is not a UFD because the number 21, for example, does not have a unique factorization into irreducibles:

21 = 3 · 7 = (1 + 2 √−5)(1 − 2 √−5),
and we can show that all four of those factors are irreducible, so we have two distinct factorizations. Go ahead, crack open a “modern algebra” or “abstract algebra” text and look for this subject.
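If you want to see that concretely, here is a quick check of my own (a sketch in Python, using the standard factorizations usually given for 21 in this ring). The key tool is the norm N(a + b√−5) = a² + 5b², which is multiplicative; since no element has norm 3 or 7, the elements of norm 9, 21, and 49 cannot split into non-unit factors, which makes all four factors irreducible.

```python
# Represent a + b*sqrt(-5) as the pair (a, b).
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b r)(c + d r) with r^2 = -5
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return a * a + 5 * b * b

# Two distinct factorizations of 21
assert mul((3, 0), (7, 0)) == (21, 0)
assert mul((1, 2), (1, -2)) == (21, 0)

# The norm is multiplicative, and a^2 + 5b^2 is never 3 or 7,
# so factors of norm 9, 21, 49 cannot break into non-unit pieces.
small_norms = {norm((a, b)) for a in range(-10, 11) for b in range(-10, 11)}
assert 3 not in small_norms and 7 not in small_norms

print(norm((3, 0)), norm((7, 0)), norm((1, 2)), norm((1, -2)))  # 9 49 21 21
```

Units have norm 1, so a factor of norm 9 would force a factor of norm 3, and similarly for 21 and 49; the brute-force scan just confirms the little number-theory fact.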
I have had two nice conceptual breakthroughs during the week. One, I think I know how to isolate multi-collinear variables in data. It’s easy enough, using the singular value decomposition (SVD), to tell that a matrix is nearly singular; it should not be much harder to determine that the second and third variables, for example, are the culprits, i.e. almost linearly dependent. I hope to show you this soon; I may even have just given you enough of a hint to do it yourself.
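In case you want to run with that hint, here is a minimal sketch of one way it might go (my own illustration, not the promised post): the right singular vector belonging to the smallest singular value is a near-null direction of the data matrix, and its large entries point at the columns involved in the near-dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 2.0 * x2 + 1e-6 * rng.normal(size=n)  # nearly dependent on x2
A = np.column_stack([x1, x2, x3])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
v = Vt[-1]            # right singular vector for the smallest singular value

print(s[-1] / s[0])   # tiny ratio: A is nearly rank-deficient
print(np.abs(v))      # large entries mark the culprits (columns 2 and 3 here)
```

The near-null direction is approximately (0, 2, −1)/√5, so the first entry is essentially zero while the other two are large: the second and third variables are the almost linearly dependent pair.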
Two, I figured out the right way to do another computation of the D4 scaling function. What I have already shown you is to start with an approximation and improve it. The method was comparatively easy because it involved repeatedly applying a fixed filter; the idea was comparatively hard because… well, it just looked weird to upsample the filter coefficients.
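For contrast, here is a short Python sketch of that fixed-filter iteration, the cascade algorithm, as I would organize it. My bookkeeping upsamples the current approximation rather than the filter coefficients; the arithmetic is equivalent to iterating the product filter, but the details may differ from what I showed before. The D4 coefficients here are normalized to sum to 2.

```python
import math

s3 = math.sqrt(3.0)
# D4 filter coefficients, normalized so they sum to 2
c = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]

def cascade(n):
    # Start from a delta and repeatedly upsample-and-filter; after n rounds
    # the list approximates the scaling function sampled at spacing 2**-n.
    phi = [1.0]
    for _ in range(n):
        up = [0.0] * (2 * len(phi) - 1)
        up[0::2] = phi                        # insert zeros between samples
        out = [0.0] * (len(up) + len(c) - 1)  # convolve with the filter
        for i, u in enumerate(up):
            for k, ck in enumerate(c):
                out[i + k] += u * ck
        phi = out
    return phi

samples = cascade(6)
# Riemann sum of the samples approximates the integral of phi, which is 1
print(sum(samples) / 2**6)
```

Each round doubles the grid density, and the sum of the samples doubles too (the filter sums to 2), so the Riemann sum stays pinned at 1, consistent with the scaling function integrating to 1 over its support [0, 3].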
I mentioned that I originally planned to show you a different computation. I changed my mind because while the other computation was conceptually simple, its implementation in Matlab code was… challenging.
Well, I don’t need the Matlab code: what I need – what I have! – is a recursive programming language; Mathematica makes the other computation just as trivial in practice as it is in principle.
All I had to do was stop looking at their solution and think about solving it myself. Illumination was almost immediate, once I quit acting like a student.
The other computation, which I would call “a dyadic expansion”, is the one referenced in the link in Scot’s comment to my D4 scaling function post. The idea is incredibly simple. We have – trust me, we can get! – the exact values of the D4 scaling function at the integers; the dilation equation

φ(x) = c0 φ(2x) + c1 φ(2x − 1) + c2 φ(2x − 2) + c3 φ(2x − 3),

where the ck are the four D4 filter coefficients, will then permit us to compute the exact values at the half-integers… and then at the quarter-integers, and so on.
That is, where the first computation gets better approximations at each point, the second computation gets the exact values at more and more points. Doing that “by hand” is straightforward; cutting down on the overhead and inserting the new values in the correct locations did not look easy.
Recursion makes it very easy.
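To show what I mean, here is a minimal version of the dyadic expansion in Python (I did mine in Mathematica; this is a sketch, with the D4 coefficients normalized to sum to 2 and the known exact values of the scaling function at the integers as the starting point). At each level, every new midpoint x satisfies the dilation equation with 2x − k landing on points already known.

```python
import math
from fractions import Fraction

s3 = math.sqrt(3.0)
c = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]  # sums to 2

# Exact values of the D4 scaling function at the integers
phi = {Fraction(0): 0.0, Fraction(1): (1 + s3) / 2,
       Fraction(2): (1 - s3) / 2, Fraction(3): 0.0}

def refine(phi):
    # One dyadic step: the dilation equation gives each new midpoint value
    # from values already known at the coarser level.
    new = dict(phi)
    xs = sorted(phi)
    for a, b in zip(xs, xs[1:]):
        x = (a + b) / 2  # exact dyadic rational, thanks to Fraction
        new[x] = sum(ck * phi.get(2 * x - k, 0.0) for k, ck in enumerate(c))
    return new

for _ in range(3):
    phi = refine(phi)
# phi now holds exact (up to roundoff) values at spacing 1/8
```

Using Fraction for the grid points keeps every dyadic rational exact and makes the recursive lookup 2x − k trivial; points outside [0, 3] simply return 0, since the scaling function vanishes there. The first refinement gives, for example, φ(1/2) = (2 + √3)/4 and φ(3/2) = 0 exactly.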
Yes, you better believe I’ll be showing you that.
I am still struggling with a bibliography for wavelets. Last weekend, I realized afterwards, I was judging each of my books by whether or not it had code and examples. Hardly fair. So far, what I’ve shown you – and plan to show you – has come out of one book, and I gave you that reference; I feel bad about the lack of a bibliography, but not terrible.
Okay, I am already planning to show you something out of other books, but that is a little way down the road. I have implemented three algorithms from three books for Haar wavelets; all three give different answers. This reflects different, valid choices along the way.
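To illustrate one such choice (my own toy example, not taken from any of those three books): the Haar averaging-and-differencing step can divide by 2, giving true pairwise averages, or by √2, giving an orthonormal transform. Both are valid, and they give different coefficient values for the same data.

```python
import math

def haar_step(x, norm="ortho"):
    # One level of the Haar transform on an even-length list.
    # norm="ortho" divides by sqrt(2) (orthonormal); norm="mean" divides
    # by 2 (true pairwise averages). Both choices are legitimate.
    s = math.sqrt(2.0) if norm == "ortho" else 2.0
    pairs = list(zip(x[0::2], x[1::2]))
    averages = [(a + b) / s for a, b in pairs]
    details = [(a - b) / s for a, b in pairs]
    return averages, details

x = [4.0, 6.0, 10.0, 12.0]
print(haar_step(x, "mean"))   # ([5.0, 11.0], [-1.0, -1.0])
print(haar_step(x, "ortho"))  # same structure, every entry scaled by sqrt(2)
```

Same signal, same algorithm, two sets of numbers; a third book might put the √2 entirely on the reconstruction side. That is the kind of divergence I mean.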
In fact, we will see a different choice when I present my version of the dyadic expansion; it will differ slightly from the one HERE. And yes, I will do my best to describe the common choices.
On to mathematics.