Edit: 13 July, just one remark added. See “edit” below.
How are things going? As I said yesterday in “happenings”, the simplest answer is: right now I’m just thrilled when I can reproduce a drawing in a book, even if I don’t understand why the method works or where the function came from.
This is the post that I ended up with when I started out to write a “happenings” post yesterday. I promised you a picture. In fact, I’ll give you a mother wavelet for the linear spline scaling function. (That link occurs a few more times. What can I say? That’s what I’m building on, in more than one way.)
You may recall that I have shown you two ways to approximate a scaling function. The one I do not understand applies convolution and downsampling repeatedly. The one I do understand is the dyadic expansion, and I’ve been using it ever since I worked it out, in preference to the other method.
Well. I have now seen a scaling function which is infinite at all the dyadic points, so the dyadic expansion hasn’t got a prayer of working! But the convolution and downsampling algorithm gives me a drawing which seems to match the book.
I do not yet understand how that scaling function was determined, but at least I have a picture. (Not the picture you’ll see later in this post. Different function, different picture; soon but not now.)
I also have a picture of the corresponding mother wavelet. But this scaling function and mother wavelet are half of a biorthogonal set of four functions — and I don’t yet understand how to get them. (Biorthogonal means that I have a non-orthonormal basis and its reciprocal basis.) Worse, although I’ve seen pictures of the other pair of functions, I don’t know enough about them to even draw them!
But I expect to. Most of the time. Sometimes in the evening I wonder if I’m in over my head, but the feeling is usually gone when I sit down to do mathematics the next morning.
On the plus side, I have read about these new functions while looking at orthogonality — more precisely, while looking at relaxing the orthogonality assumptions. That is, while looking at biorthogonal and semi-orthogonal wavelets. (Okay, that last is not standard terminology, as far as I know: I think the second case is called “a semi-orthogonal multi-resolution analysis” and “pre-wavelets”. It just seems natural to me to apply the term “semi-orthogonal” to the wavelets, especially when I’m contrasting them with orthogonal or biorthogonal wavelets.)
Here is an overview of what is going on. I’ve said at least some of this in more detail before.
A scaling function $\varphi(x)$ and its integer translates define a space $V_0$. Then the function $\varphi(2x)$ and its integer translates define a space $V_1$. It is a major assumption that $V_0$ is a subspace of $V_1$ (and this, combined with the definition of $V_1$, gives us the dilation equation). Then we define the space $W_0$ as the difference between $V_1$ and $V_0$ (i.e. as the complement of $V_0$ in $V_1$). $W_0$ is the space of the mother wavelet. We have

$V_1 = V_0 \oplus W_0.$
If $\varphi$ and its integer translates are an orthonormal basis for $V_0$, then $W_0$ is in fact the orthogonal complement of $V_0$; the direct sum

$V_1 = V_0 \oplus W_0$

is an orthogonal direct sum, and we may find a function $\psi$ (the mother wavelet) whose integer translates form an orthonormal basis for $W_0$.
(Incidentally, in most of my books, people appear to use $\oplus$ for the direct sum whether or not it is orthogonal. That is, you usually can’t tell from the written decomposition whether or not it is orthogonal. You have been warned.)
(Another aside…. To see an example of a non-orthogonal direct sum decomposition, take $\mathbb{R}^2$, i.e. the xy-plane, and take the non-orthogonal basis (1,0) and (1,1). Then the x-axis and the line y=x are non-orthogonal 1D subspaces, and their direct sum is the xy-plane. One possible orthogonal direct sum, in contrast, is the x-axis and the y-axis.)
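That little decomposition is easy to check numerically. Here is a minimal sketch (Python, with names of my own choosing): split a vector into its piece along the x-axis and its piece along the line y = x.

```python
import numpy as np

# Non-orthogonal basis of R^2: the columns are (1,0) and (1,1).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 2.0])

# Coordinates of v in that basis: solve B @ [a, b] = v.
a, b = np.linalg.solve(B, v)

piece_x_axis = a * np.array([1.0, 0.0])    # component in span{(1,0)}
piece_diagonal = b * np.array([1.0, 1.0])  # component in span{(1,1)}

print(piece_x_axis, piece_diagonal)        # [1. 0.] [2. 2.]
print(piece_x_axis + piece_diagonal)       # [3. 2.] -- recovers v
```

Note that the two pieces are not orthogonal to each other, but they still add up to v, and the split is unique.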
But what if $\varphi$ and its integer translates are a basis for $V_0$, but not orthonormal? More specifically, what if they are not orthogonal? (If they are orthogonal, simply changing their sizes will make them orthonormal.)
You should be thinking of linear splines, those triangles that form a basis for $V_0$ but not an orthogonal one.
Well, we can still find an orthogonal direct sum, and we can even find a mother wavelet $\psi$ whose integer translates form a basis for $W_0$.
But just as the basis for $V_0$ is not orthogonal, neither is the basis for $W_0$. Still, because the direct sum is orthogonal, every element of $V_0$ is orthogonal to every element of $W_0$. They’re just not orthogonal to their own integer translates: $\psi$ may very well be orthogonal to many or most of its integer translates, but it is not orthogonal to all of them.
You know what? Pictures are good, so here are two pictures.
The linear spline scaling function, as we have seen, has three h coefficients, namely

$h = \left(\tfrac{1}{2},\ 1,\ \tfrac{1}{2}\right)$

… and it looks like this:
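That drawing can be reproduced from the dilation equation alone. Here is a sketch (Python, not the command I actually ran) of the dyadic expansion, assuming the convention $\varphi(x) = \sum_k h_k\,\varphi(2x-k)$ with $h = (\tfrac12, 1, \tfrac12)$: start from the values at the integers, then each pass fills in the next level of dyadic points from the previous one.

```python
from fractions import Fraction

# Dilation-equation coefficients for the linear spline (my assumed convention).
h = {0: Fraction(1, 2), 1: Fraction(1, 1), 2: Fraction(1, 2)}

# Start from the values at the integers: phi(1) = 1, zero at 0 and 2.
phi = {Fraction(0): Fraction(0), Fraction(1): Fraction(1), Fraction(2): Fraction(0)}

# Level j fills in the points with denominator 2^j, using level j-1.
for level in range(1, 8):            # down to dx = 1/128
    step = Fraction(1, 2 ** level)
    x = Fraction(0)
    while x <= 2:
        if x not in phi:
            # phi(x) = sum_k h_k * phi(2x - k); points outside [0,2] are 0.
            phi[x] = sum(h[k] * phi.get(2 * x - k, Fraction(0)) for k in h)
        x += step

print(phi[Fraction(1, 2)])   # 1/2
print(phi[Fraction(3, 4)])   # 3/4
```

Since the hat function satisfies the dilation equation exactly, every computed value agrees with the closed form $\varphi(x) = \max(0,\ 1 - |x-1|)$.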
Now, if we take as our mother wavelet a function whose g coefficients are

$g = \tfrac{1}{12}\,\left(1,\ -6,\ 10,\ -6,\ 1\right)$

(think of me as Moses waving stone tablets in your face saying, “God gave me these.”) …
…then we get:
The most significant feature may very well be that this mother wavelet has 5 coefficients while its scaling function has 3 coefficients.
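Because the hat function has the simple closed form $\varphi(x) = \max(0,\ 1-|x-1|)$, we can sketch the construction of the wavelet directly. The five coefficients, in the normalization $(1, -6, 10, -6, 1)/12$, are the ones I’m taking on faith — treat this as a sketch, not gospel.

```python
import numpy as np

def phi(x):
    """Linear spline (hat) scaling function, supported on [0, 2]."""
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

# The five g coefficients; the 1/12 normalization is my assumption.
g = np.array([1.0, -6.0, 10.0, -6.0, 1.0]) / 12.0

def psi(x):
    """Mother wavelet psi(x) = sum_k g_k phi(2x - k), supported on [0, 3]."""
    return sum(gk * phi(2.0 * x - k) for k, gk in enumerate(g))

# At the half-integers, psi picks off the g's one at a time,
# because phi(m) is 1 at m = 1 and 0 at the other integers.
xs = np.arange(0.0, 3.5, 0.5)
print(psi(xs))   # the values are 0, 1/12, -1/2, 5/6, -1/2, 1/12, 0
```

Note the supports: the scaling function lives on [0, 2], the wavelet on [0, 3] — one consequence of having 5 coefficients instead of 3.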
The second most important feature may be that this mother wavelet is a spline, like the scaling function. Some people know a lot about splines, and some of them would rather have splines than orthonormal bases.
That mother wavelet is orthogonal to all of the integer translates of the linear spline. It is not orthogonal to its own translates by $\pm 1$ or $\pm 2$, just as the linear spline is not orthogonal to its own translates by $\pm 1$.
This is the decomposition which is called a “semi-orthogonal multi-resolution analysis”, and that mother wavelet is called a “pre-wavelet”. So far, Strang & Nguyen (Strang, Gilbert and Nguyen, Truong. Wavelets and Filter Banks. Wellesley-Cambridge Press, revised edition, 1997. ISBN 0-9614088-7-1) is the only book in which I have found this. Oh, this is not an example of biorthogonal wavelets; for that, I need a second scaling function and two mother wavelets.
(edit: 13 July. I should remark that we’re not out of the woods in the semi-orthogonal case. Because our bases for $V_0$ and $W_0$ are not orthogonal, it seems to me that we will still need to find their reciprocal bases, for computing components of vectors.)
This is the function whose existence I believed in but couldn’t find. Well, now I’ve been given it but I don’t understand where those five coefficients came from.
As a reminder, the mother wavelet was computed from its g coefficients, using the analog of the dilation equation:

$\psi(x) = \sum_k g_k\,\varphi(2x - k).$
Please note that the scaling function was computed, as before, using the dyadic expansion; therefore the mother wavelet is computed only at dyadic points. In particular, neither of these is the terrible function I referred to, but have not yet shown you, which is infinite at all the dyadic points, and therefore must be computed by a different method.
Oh, a point is “dyadic” if it’s an integer divided by a power of 2; that is, all those points at which I have been computing scaling functions.
Let me show you some quick calculations of integrals. As usual, I am going to calculate the finite sum of areas of rectangles

$\sum f(x)\,dx$

with $dx = 1/128$, and $f(x)$ a product of two functions. Just for simplicity, I let x range from -6 to 6, because some of my functions are shifted. NOTE that the shift is given by k, and that value is set in the command.
First, let’s approximate $\int \varphi(x)\,\psi(x)\,dx$. It should be 0, because the mother wavelet is orthogonal to the scaling function:
Just to be clear: that calculation had $k = 0$.
The mother wavelet is also orthogonal to translates of the scaling function, so we expect $\int \varphi(x-k)\,\psi(x)\,dx = 0$ for other k as well. The approximating sum for $k = 1$ is…
and for $k = 2$, I get:
Those three were very good (and identical!) approximations to 0. If we now approximate $\int \varphi(x-3)\,\psi(x)\,dx$, we will get exactly zero, because the functions do not overlap at all; we have shifted the scaling function quite far enough.
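For the record, those sums can be sketched like so (in Python rather than the command I actually ran), with the hat function written in closed form and the five g coefficients in the normalization I’m assuming:

```python
import numpy as np

def phi(x):
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))    # hat function on [0, 2]

g = np.array([1.0, -6.0, 10.0, -6.0, 1.0]) / 12.0    # assumed normalization

def psi(x):
    return sum(gk * phi(2.0 * x - k) for k, gk in enumerate(g))

dx = 1.0 / 128.0
x = np.arange(-6.0, 6.0, dx)

# Riemann sums approximating the integral of phi(x - k) * psi(x).
for k in range(4):
    print(k, np.sum(phi(x - k) * psi(x)) * dx)
# each of these should come out (essentially) zero
```

The k = 3 sum is exactly zero in exact arithmetic, since the supports [3, 5] and [0, 3] overlap only at a point; the others are zero because of the semi-orthogonality.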
Now look at the inner product of two translates of wavelets. Just as the scaling function is not orthogonal to overlapping translates, the mother wavelet is not orthogonal to its overlapping translates.
Here, as one example, is (an approximation of) $\int \psi(x)\,\psi(x-1)\,dx$ …
We’ve seen numbers that approximate zero, and that one isn’t close.
What about scale? That still works: a wavelet at one scale is orthogonal to wavelets at a different scale. Here, as one example, is the approximation of $\int \psi(x)\,\psi(2x)\,dx$.
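The same sketch extends to the wavelet–wavelet sums — again with the hat function and my assumed normalization of the g coefficients:

```python
import numpy as np

def phi(x):
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))    # hat function on [0, 2]

g = np.array([1.0, -6.0, 10.0, -6.0, 1.0]) / 12.0    # assumed normalization

def psi(x):
    return sum(gk * phi(2.0 * x - k) for k, gk in enumerate(g))

dx = 1.0 / 128.0
x = np.arange(-6.0, 6.0, dx)

# Overlapping translates of the wavelet: not orthogonal.
overlap = np.sum(psi(x) * psi(x - 1.0)) * dx
print(overlap)        # clearly nonzero (about 0.046 with these coefficients)

# Wavelets at different scales: orthogonal.
cross_scale = np.sum(psi(x) * psi(2.0 * x)) * dx
print(cross_scale)    # essentially zero
```

The cross-scale orthogonality is just $W_0 \perp W_1$: the coarse wavelet lives in $V_1$, and $V_1$ is orthogonal to $W_1$.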
I expect that the next wavelets post will be a clean summary of multi-resolution analysis for orthogonal wavelets. I’ve been flailing a bit, and I think I owe you some clear-cut statements of what we can get.
I was all set to put out a post about consequences of orthogonality, a couple of weeks ago, until I realized that the two ideas (orthogonality of subspaces V,W and orthogonality of functions) needed to be distinguished at a deeper level. Not just conceptually, as I had, but in terms of their consequences.
This example, which has orthogonal subspaces V,W but has neither orthogonal translates of scaling functions nor orthogonal translates of wavelets, makes me very happy. I know we can break the link between the two ideas of orthogonality, and in fact I now have both the scaling function and the mother wavelet in my hands, so I can compute to my heart’s content and see what is no longer true in this case.
(There’s nothing special about linear splines. We can do this for higher order splines, too. Well, okay, I still have to figure out how to derive the g coefficients for the mother wavelet. And I’m pretty sure there were other possible solutions for the linear spline, with more than 5 coefficients, so there are probably an infinite number of solutions for any spline.)
I am now willing to return to the standard presentation of multi-resolution analysis of orthogonal wavelets, which assumes that both forms of orthogonality hold. I now have sufficient context for that standard presentation. That is enough to let me move on; I just needed to know their place in the universe. And, as I say, after all the confusion, I owe you an unambiguous summary of orthogonal wavelets.