edit jun 13: corrected one of the 4 equations at the end.
I have shown you one way to compute the D4 scaling function. I am not entirely comfortable with the method, but I don’t need to be: I can do better.
The method I’ve shown you begins with a terrible approximation — a constant function or signal — and improves it by repeatedly applying a fixed filter to it.
The primer by Burrus et al. also provides MATLAB® code for a quite different method. It turns out that we can find the exact values of the D4 scaling function at the integers. The dilation equation can then be used to find the exact values at the halves… then the exact values at the quarters… and so on. It’s called a dyadic expansion because it works for points whose denominator is a power of 2.
Their code, however, was challenging to figure out.
But I don’t need to figure it out. I can do better.
I can do so much better that it must be illegal, immoral, and fattening.
I don’t have to write any code at all — well, ok, I have to write an “If” clause. Mathematica® can do recursion, and the dilation equation is all I need. Oh, I need initial values, too.
Rather than distract you from this marvel by putting the details first, I’m going to show you the end product first.
the dyadic expansion simply & efficiently
To do a recursion, I need to have initial values. I’ll derive them later in this post, but the values of the scaling function at the integers 0, 1, 2, 3 are components of an eigenvector V:
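For reference, the exact values that come out of that eigenvector, normalized so the components sum to 1 (these are the standard D4 values; the derivation follows later in the post):

```latex
\varphi(0) = 0, \qquad
\varphi(1) = \frac{1+\sqrt{3}}{2}, \qquad
\varphi(2) = \frac{1-\sqrt{3}}{2}, \qquad
\varphi(3) = 0.
```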
I set those to be the function values at 4 points…
and then I define the function. (Whoa! I have to use a picture, apparently, because of special characters.)
(Actually, I can make that work. The problem is the < and > symbols, which are apparently being interpreted as nonsense HTML. Use & lt; and & gt; without the space after the “&”, which is what I did to make the code display.)
The set delayed operator (:=) says don’t do anything until I give it an actual input. The ordinary assignment (=) inside the definition just says “after you compute it, assign it”: that assignment is what builds the lookup table.
There is a way to see what Mathematica® knows at this point.
Let’s compute it at 1/2. We get
and let’s see what Mathematica® knows now:
He knows a lot, more than when he started. And he will return from the table as soon as he finds any known value, and fall thru to the last line — the recursion — only for unknown values. This is a combined lookup table and recursion.
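The same combined lookup table and recursion can be sketched outside Mathematica®, too. Here is a Python version (my own translation, not the post’s code): `functools.lru_cache` plays the role of the saved definitions, `Fraction` keeps the dyadic points exact, and the D4 coefficients c_k = √2·h_k and the integer values are entered in floating point.

```python
import math
from fractions import Fraction
from functools import lru_cache

SQRT3 = math.sqrt(3.0)

# Dilation-equation coefficients c_k = sqrt(2) * h_k for D4; they sum to 2.
C = [(1 + SQRT3) / 4, (3 + SQRT3) / 4, (3 - SQRT3) / 4, (1 - SQRT3) / 4]

# Values at the integers, normalized so they sum to 1 (derived later in the post).
PHI_AT_INTEGERS = {0: 0.0, 1: (1 + SQRT3) / 2, 2: (1 - SQRT3) / 2, 3: 0.0}

@lru_cache(maxsize=None)           # the lookup table: computed values are saved
def phi(t):
    """D4 scaling function at a dyadic rational t (pass a Fraction)."""
    if t <= 0 or t >= 3:           # phi vanishes outside (0, 3), and at 0 and 3
        return 0.0
    if t == int(t):
        return PHI_AT_INTEGERS[int(t)]
    # the dilation equation: phi(t) = sum over k of c_k * phi(2t - k)
    return sum(C[k] * phi(2 * t - k) for k in range(4))
```

Calling `phi(Fraction(1, 2))` fills the cache with whatever values it needed along the way; a second call is a pure lookup. The value at 1/2 comes out to (2 + √3)/4.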
(I know that both Java and C++ are recursive; it appears that MATLAB® is not.)
Now that is nice. It shows that Mathematica® needed a wide range of values for the computation, and it saved them — and, more importantly, the next time it needs, for example, φ(1/2), it will find it in that list before it gets to the final entry, which is the recursion.
I must emphasize that we are computing φ exactly, at new points. The values at old points are never changed in this process.
OK, let’s see what we get. There’s absolutely no computational reason to proceed in stages, but I want to make sure you see it grow.
Here I show the initial values (φ at the integers)…
We add the values at the halves.
We add the values at the quarters. And I can disdainfully tell it to “recompute” the values at the halves because I know it will look them up rather than actually compute them.
Add the eighths…
I note, from the position of the dots to the right of the maximum, that the function is changing very rapidly there. I wouldn’t be surprised if the convergence in our previous method is not very quick there.
getting the initial values
Let me now back up and show you how I got the scaling function at the integers. Oh, understand that it is zero outside the interval [0,3].
Here is the dilation equation for N=4, that is, for four nonzero filter coefficients h.
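In text form (a standard convention, with the √2 written explicitly; I’m assuming this matches the post’s images), the N = 4 dilation equation reads:

```latex
\varphi(t) \;=\; \sqrt{2}\,\bigl(h_0\,\varphi(2t) + h_1\,\varphi(2t-1) + h_2\,\varphi(2t-2) + h_3\,\varphi(2t-3)\bigr)
```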
First, evaluate the dilation equation at the integers 0 .. 3. If we simply set t = 0, we get
but then we need to zero out the three values of φ at the negative integers in that equation, getting
For t = 1,
and this time we set φ(-1) = 0, getting
For t = 2…
and we set φ(4) = 0, getting…
finally, t = 3…
and we set three of the values, φ(4), φ(5), and φ(6), to zero, getting
so, we have the system of equations
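Written out in the same √2 convention (my reconstruction from the steps above), the system is:

```latex
\begin{aligned}
\varphi(0) &= \sqrt{2}\,h_0\,\varphi(0)\\
\varphi(1) &= \sqrt{2}\,\bigl(h_2\,\varphi(0) + h_1\,\varphi(1) + h_0\,\varphi(2)\bigr)\\
\varphi(2) &= \sqrt{2}\,\bigl(h_3\,\varphi(1) + h_2\,\varphi(2) + h_1\,\varphi(3)\bigr)\\
\varphi(3) &= \sqrt{2}\,h_3\,\varphi(3)
\end{aligned}
```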
and we could solve them.
It is more instructive and useful to look at a matrix representation. That set of equations can be written
with the matrix M0 defined — in stages — as
and then I multiply it by √2, getting
(I created M0 using the diagonal bands, long before I knew how it was derived. Things were a little clearer to me without that ubiquitous √2.)
Oh, and the vector V holds the 4 values of φ at 0, 1, 2, 3:
But that matrix equation says that V is an eigenvector of M0, with eigenvalue 1. And we can find eigenthings.
To solve it, we need the values of h for the D4 scaling function.
For the D4 scaling function, then, M0 becomes
Here are its eigenvalues (first row), and corresponding eigenvectors…
We see that the first eigenvalue is 1, so the first eigenvector is the one we want.
That is a vector of length 1. For reasons we will discuss later, I want a vector whose components add up to 1. It’s still an eigenvector.
(The signs changed because I simply divided by the sum, and the sum was negative.) We take those as the exact values of φ at the integers 0 thru 3. You see that φ(0) = φ(3) = 0, as I have said before.
So: the initial values of the scaling function are the components of an eigenvector of a matrix M0, specifically of an eigenvector with eigenvalue 1. That is how I got them.
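A quick cross-check of this eigenvector story, sketched in Python (my own check, not the post’s code): build M0 by the band rule M0[i][j] = c(2i - j), with c_k = √2·h_k for the D4 filter, and confirm that the vector of integer values, normalized to sum 1, is mapped to itself.

```python
import math

s3 = math.sqrt(3.0)
# c_k = sqrt(2) * h_k for the D4 filter
c = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]

# M0[i][j] = c[2i - j] when the index is in range, 0 otherwise
M0 = [[c[2 * i - j] if 0 <= 2 * i - j <= 3 else 0.0 for j in range(4)]
      for i in range(4)]

# candidate eigenvector: the values of phi at t = 0, 1, 2, 3 (components sum to 1)
v = [0.0, (1 + s3) / 2, (1 - s3) / 2, 0.0]

Mv = [sum(M0[i][j] * v[j] for j in range(4)) for i in range(4)]
```

Mv reproduces v componentwise, i.e. M0 v = v, so v is indeed an eigenvector with eigenvalue 1.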
the simple recursion
Of course we’ve already set up a recursion combined with a lookup table. I just want to emphasize the distinction between what I actually do, and the unadorned recursion.
We are ready to write the recursion. It looks almost exactly like the dilation equation…
but instead of “==” I use the set delayed operator “:=”; the LHS “t” must become a placeholder “t_”; and the whole definition must be wrapped in an If test to set all values to zero outside of [0,3].
Oh, since I’m starting all over again (don’t want to get two sets of definitions mixed up), I need to set the values of h…
Again, I define the function values exactly as before, at 4 points…
And then I define the recursion:
Again, I can code that now, instead of using a picture.
The set delayed operator (:=) says don’t do anything until I give it an actual input. What’s different in this definition is that there is no inner assignment: nothing says that the result of the computation should be saved.
Let’s compute at 1/2 again. We get the same answer, of course:
and let’s see what Mathematica knows:
That’s exactly what we knew before we computed it at 1/2. This is a pure recursion. He recomputes everything every time he needs it.
Simple, but potentially expensive in computer time.
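For contrast, the unadorned recursion can also be sketched in Python (my translation, not the post’s code; same assumed D4 coefficients c_k = √2·h_k and integer values, in floating point). Nothing is cached, so every value is recomputed on every call.

```python
import math
from fractions import Fraction

s3 = math.sqrt(3.0)
c = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]
phi_at_integers = {0: 0.0, 1: (1 + s3) / 2, 2: (1 - s3) / 2, 3: 0.0}

def phi_pure(t):
    """Pure recursion: no lookup table, so nothing is ever saved."""
    if t <= 0 or t >= 3:
        return 0.0
    if t == int(t):
        return phi_at_integers[int(t)]
    # the dilation equation, evaluated afresh every time
    return sum(c[k] * phi_pure(2 * t - k) for k in range(4))
```

Same answers as the memoized approach, at the cost of repeated recomputation as the denominators grow.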
From the dilation equation and the h coefficients, we can get the values of the scaling function at the integers. We do this either by directly solving a system of equations, or by recognizing this particular system of equations as an eigenvector problem.
In either case, if we are at all uncertain what the matrix M0 looks like, we can construct it from the equations.
Note that because the values of φ at the integers are the components of an eigenvector, they are only determined to within a scalar multiple. That’s ok, we have a reason for normalizing so that the sum of the values is 1. (That, of course, is different from the sum of the squares of the values = 1.) And even that normalization will only get us to within a sign. If you ever plot a scaling function and get the negative of someone else’s picture, all you need to do is change the signs of the initial values at the integers, i.e. change the sign of the eigenvector.
Having the values of at the integers, we can get it at the halves, then the quarters, etc.
As I have said before, the purpose of this algorithm is to see what the scaling function looks like, and it will work for the mother wavelet, too. In practice, however, we can compute the wavelet coefficients of a signal using the filter coefficients h (and the corresponding coefficients for the mother wavelet) — we don’t need the values of the scaling function or the wavelet to get the wavelet coefficients.
(I believe that the previous algorithm, however, serves another purpose. It seems more than plausible that the previous algorithm is used to prove existence and uniqueness of the scaling function, once we have its filter coefficients h.)
For comparison with the next case, let me note that the 4 h’s satisfy the following 4 equations…
edit jun 13: the third and fourth equations were identical. i have corrected the third equation by changing the sign of the cosine term.
The Daubechies h’s are found by setting .
All I’ve done is push back the mystery of where the h’s came from. Don’t worry. We’ll get there.
I had thought that I would show you plots of other scaling functions, but it makes more sense to show you how to get the mother wavelet.
Next. Unless something really important comes up.