The dyadic expansion of Daubechies D4 scaling function

introduction

edit jun 13: corrected one of the 4 equations at the end.

I have shown you one way to compute the D4 scaling function. I am not entirely comfortable with the method, but I don’t need to be: I can do better.

The method I’ve shown you begins with a terrible approximation — a constant function or signal — and improves it by repeatedly applying a fixed filter to it.

The primer by Burrus et al. also provides MATLAB® code for a quite different method. It turns out that we can find the exact values of the D4 scaling function at the integers. The dilation equation can then be used to find the exact values at the halves… then the exact values at the quarters… and so on. It’s called a dyadic expansion because it works for points whose denominator is a power of 2.

Their code, however, was challenging to figure out.

But I don’t need to figure it out. I can do better.

I can do so much better that it must be illegal, immoral, and fattening.

I don’t have to write any code at all — well, ok, I have to write an “If” clause. Mathematica® can do recursion, and the dilation equation is all I need. Oh, I need initial values, too.

Rather than distract you from this marvel by putting the details first, I’m going to show you the end product first.

the dyadic expansion simply & efficiently

To do a recursion, I need to have initial values. I’ll derive them later in this post, but the values of the scaling function at the integers 0, 1, 2, 3 are components of an eigenvector V:

V = \{0.,1.36603,-0.366025,0.\}

I set those to be the function values at 4 points…

\{\varphi (0),\varphi (1),\varphi (2),\varphi (3)\} = V

and then I define the function:

\varphi[t\_]:=\varphi[t]=\text{If}\left[t <0 || t >3,0,\text{Chop}\left[\sum _{n=0}^3 h(n) \sqrt{2} \varphi (2\ t-n)\right]\right]

The set delayed operator (:=) says don’t do anything until I give it an actual input. The extra \varphi[t]= in \varphi[t\_] := \varphi[t]= just says “after you compute it, assign it”.
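In case the markup above is hard to read, here is the whole thing assembled as plain Mathematica input. This is only a sketch (I write phi for \varphi ), with the h’s and the integer values taken from the derivation later in this post.

    (* the D4 filter coefficients, exactly as given later in this post *)
    {h[0], h[1], h[2], h[3]} = {1 + Sqrt[3], 3 + Sqrt[3], 3 - Sqrt[3], 1 - Sqrt[3]}/(4 Sqrt[2]);

    (* the values of the scaling function at the integers: the eigenvector derived below *)
    {phi[0], phi[1], phi[2], phi[3]} = {0., 1.36603, -0.366025, 0.};

    (* set-delayed plus the extra "phi[t] =" turns the dilation equation into a memoized recursion *)
    phi[t_] := phi[t] =
      If[t < 0 || t > 3, 0,
        Chop[Sum[h[n] Sqrt[2] phi[2 t - n], {n, 0, 3}]]]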

There is a way to see what Mathematica® knows at this point.

?? \varphi

[screenshot: the output of ?? φ, showing the four stored values φ(0) through φ(3) and the recursive definition]

Let’s compute it at 1/2. We get

\varphi[1/2] = 0.933013

and let’s see what Mathematica® knows now:

[screenshot: the output of ?? φ after computing φ(1/2); the newly computed values have been added to the stored rules]

He knows a lot, more than when he started. And he will return from the table as soon as he finds any known value, and fall thru to the last line — the recursion — only for unknown values. This is a combined lookup table and recursion.

(I know that both Java and C++ support recursion; so, it turns out, does MATLAB®.)

Now that is nice. It shows that Mathematica® needed a wide range of values for the computation, and it saved them — and, more importantly, the next time it needs, for example, \varphi[1/2]\ , it will find it in that list before it gets to the final entry, which is the recursion.
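If you would rather inspect that lookup table programmatically than with ?? \varphi , DownValues will list it. A small sketch, using the phi from my snippet above:

    (* count the stored rules, compute two new dyadic points, and count again *)
    before = Length[DownValues[phi]];
    phi[1/4]; phi[3/4];
    Length[DownValues[phi]] - before   (* positive: the newly computed values were saved as rules *)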

I must emphasize that we are computing \varphi exactly, at new points. The values at old points are never changed in this process.

OK, let’s see what we get. There’s absolutely no computational reason to proceed in stages, but I want to make sure you see it grow.

Here I show the initial values (\varphi at the integers)…
[plot: φ at the integers]

We add the values at the halves.

[plot: φ at the integers and halves]

We add the values at the quarters. And I can disdainfully tell it to “recompute” the values at the halves because I know it will look them up rather than actually compute them.

[plot: adding the quarters]

Add the eighths…

[plot: adding the eighths]

Sixteenths…

[plot: adding the sixteenths]

Thirty-seconds…

[plot: adding the thirty-seconds]

Sixty-fourths…

[plot: adding the sixty-fourths]

One-twenty-eighths…

[plot: adding the one-twenty-eighths]
I note, from the position of the dots to the right of the maximum, that the function is changing very rapidly there. I wouldn’t be surprised if the convergence in our previous method is not very quick there.
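I made these plots with David Park’s Presentations package; if you don’t have it, a plain ListPlot over the dyadic grid gives essentially the same picture. A sketch, again using the phi from my earlier snippet:

    (* tabulate the scaling function on the dyadic grid with spacing 1/128 and plot the points *)
    pts = Table[{t, phi[t]}, {t, 0, 3, 1/128}];
    ListPlot[pts, AxesLabel -> {"t", "phi(t)"}]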

getting the initial values

Let me now back up and show you how I got the scaling function \varphi at the integers. Oh, understand that it is zero outside the interval [0,3].

Here is the dilation equation for N=4, that is, for four nonzero filter coefficients h.

\varphi (t)=\sqrt{2} h(0) \varphi (2 t)+\sqrt{2} h(1) \varphi (2 t-1)+\sqrt{2} h(2) \varphi (2 t-2)+\sqrt{2} h(3) \varphi (2 t-3)

First, evaluate the dilation equation at the integers 0 .. 3. If we simply set t = 0, we get

\varphi (0)=\sqrt{2} h(3) \varphi (-3)+\sqrt{2} h(2) \varphi (-2)+\sqrt{2} h(1) \varphi   (-1)+\sqrt{2} h(0) \varphi (0)

but then we need to zero out the three values of \varphi at the negative integers in that equation, getting

\varphi (0)=\sqrt{2} h(0) \varphi (0)

For t = 1,

\varphi (1)=\sqrt{2} h(3) \varphi (-1)+\sqrt{2} h(2) \varphi (0)+\sqrt{2} h(1) \varphi   (1)+\sqrt{2} h(0) \varphi (2)

and this time we set \varphi[-1] = 0, getting

\varphi (1)=\sqrt{2} h(2) \varphi (0)+\sqrt{2} h(1) \varphi (1)+\sqrt{2} h(0) \varphi (2)

For t = 2…

\varphi (2)=\sqrt{2} h(3) \varphi (1)+\sqrt{2} h(2) \varphi (2)+\sqrt{2} h(1) \varphi   (3)+\sqrt{2} h(0) \varphi (4)

and we set \varphi[4] = 0\ , getting…

\varphi (2)=\sqrt{2} h(3) \varphi (1)+\sqrt{2} h(2) \varphi (2)+\sqrt{2} h(1) \varphi (3)

Finally, t = 3…

\varphi (3)=\sqrt{2} h(3) \varphi (3)+\sqrt{2} h(2) \varphi (4)+\sqrt{2} h(1) \varphi   (5)+\sqrt{2} h(0) \varphi (6)

and we set three of the \varphi to zero, getting

\varphi (3)=\sqrt{2} h(3) \varphi (3)

So we have the system of equations

\begin{array}{c} \varphi (0)=\sqrt{2} h(0) \varphi (0) \\ \varphi (1)=\sqrt{2} h(2) \varphi (0)+\sqrt{2} h(1) \varphi (1)+\sqrt{2} h(0) \varphi (2)   \\ \varphi (2)=\sqrt{2} h(3) \varphi (1)+\sqrt{2} h(2) \varphi (2)+\sqrt{2} h(1) \varphi (3)   \\ \varphi (3)=\sqrt{2} h(3) \varphi (3)\end{array}

and we could solve them.
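If you do want to solve them directly, here is a sketch (my names: p0 … p3 for \varphi (0) … \varphi (3) ). Solve will warn that it cannot solve for all the variables, which is exactly the eigenvector ambiguity that the matrix form below makes explicit.

    (* the four equations above, with sqrt2h[[n+1]] standing for Sqrt[2] h(n) *)
    sqrt2h = {1 + Sqrt[3], 3 + Sqrt[3], 3 - Sqrt[3], 1 - Sqrt[3]}/4;
    Solve[{p0 == sqrt2h[[1]] p0,
           p1 == sqrt2h[[3]] p0 + sqrt2h[[2]] p1 + sqrt2h[[1]] p2,
           p2 == sqrt2h[[4]] p1 + sqrt2h[[3]] p2 + sqrt2h[[2]] p3,
           p3 == sqrt2h[[4]] p3}, {p0, p1, p2, p3}]
    (* p0 and p3 come out zero, and p2 is a fixed multiple of p1, which is left free *)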

It is more instructive and useful to look at a matrix representation. That set of equations can be written

M_0 \varphi = \varphi

with the matrix M0 defined — in stages — as

\left(\begin{array}{llll} \text{h0} & 0 & 0 & 0 \\ \text{h2} & \text{h1} & \text{h0} & 0 \\ 0 & \text{h3} & \text{h2} & \text{h1} \\ 0 & 0 & 0 & \text{h3}\end{array}\right)

and then I multiply it by \sqrt{2}\ , getting

\left(\begin{array}{llll} \sqrt{2} \text{h0} & 0 & 0 & 0 \\ \sqrt{2} \text{h2} & \sqrt{2} \text{h1} & \sqrt{2} \text{h0} & 0 \\ 0 & \sqrt{2} \text{h3} & \sqrt{2} \text{h2} & \sqrt{2} \text{h1} \\ 0 & 0 & 0 & \sqrt{2} \text{h3}\end{array}\right)

(I created M0 using the diagonal bands, long before I knew how it was derived. Things were a little clearer to me without that ubiquitous \sqrt{2}\ .)

Oh, and the vector \varphi holds the 4 values of \varphi at 0, 1, 2, 3:

\begin{array}{c} \varphi (0) \\ \varphi (1) \\ \varphi (2) \\ \varphi (3)\end{array}

But that matrix equation says that \varphi is an eigenvector of M0, with eigenvalue 1. And we can find eigenthings.

To solve it, we need the values of h for the D4 scaling function.

\left\{\text{h0}\to \frac{1+\sqrt{3}}{4 \sqrt{2}},\text{h1}\to \frac{3+\sqrt{3}}{4   \sqrt{2}},\text{h2}\to \frac{3-\sqrt{3}}{4 \sqrt{2}},\text{h3}\to \frac{1-\sqrt{3}}{4 \sqrt{2}}\right\}

For the D4 scaling function, then, M0 becomes

\left(\begin{array}{llll} \frac{1}{4} \left(1+\sqrt{3}\right) & 0 & 0 & 0 \\ \frac{1}{4} \left(3-\sqrt{3}\right) & \frac{1}{4} \left(3+\sqrt{3}\right) & \frac{1}{4}   \left(1+\sqrt{3}\right) & 0 \\ 0 & \frac{1}{4} \left(1-\sqrt{3}\right) & \frac{1}{4} \left(3-\sqrt{3}\right) & \frac{1}{4}   \left(3+\sqrt{3}\right) \\ 0 & 0 & 0 & \frac{1}{4} \left(1-\sqrt{3}\right)\end{array}\right)

or

\left(\begin{array}{llll} 0.683013 & 0. & 0. & 0. \\ 0.316987 & 1.18301 & 0.683013 & 0. \\ 0. & -0.183013 & 0.316987 & 1.18301 \\ 0. & 0. & 0. & -0.183013\end{array}\right)

Here are its eigenvalues (first row), and the corresponding eigenvectors, one eigenvector per row of the matrix…

\begin{array}{c} \{1.,0.683013,0.5,-0.183013\} \\ \left(\begin{array}{llll} 0. & -0.965926 & 0.258819 & 0. \\ 0.408248 & -0.816497 & 0.408248 & 0. \\ 0. & 0.707107 & -0.707107 & 0. \\ 0. & 0.408248 & -0.816497 & 0.408248\end{array}\right)\end{array}

We see that the first eigenvalue is 1, so the first eigenvector is the one we want.

\{0.,-0.965926,0.258819,0.\}

That is a vector of length 1. For reasons we will discuss later, I want a vector whose components add up to 1. It’s still an eigenvector.

\{0.,1.36603,-0.366025,0.\}

(The signs changed because I simply divided by the sum, and the sum was negative.) We take those as the exact values of \varphi at the integers 0 thru 3. You see that \varphi(0) = \varphi(3) = 0\ , as I have said before.

So: the initial values of the scaling function are the components of an eigenvector of a matrix M0, specifically of an eigenvector with eigenvalue 1. That is how I got them.
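Here is that computation as a short Mathematica sketch: build M0 from the h’s, ask for the eigensystem, and rescale the eigenvector belonging to eigenvalue 1 so that its components sum to 1.

    (* the D4 coefficients again *)
    {h[0], h[1], h[2], h[3]} = {1 + Sqrt[3], 3 + Sqrt[3], 3 - Sqrt[3], 1 - Sqrt[3]}/(4 Sqrt[2]);

    (* Sqrt[2] times the banded matrix built from the system of equations *)
    m0 = Sqrt[2] {{h[0], 0, 0, 0},
                  {h[2], h[1], h[0], 0},
                  {0, h[3], h[2], h[1]},
                  {0, 0, 0, h[3]}};

    {vals, vecs} = Eigensystem[N[m0]];
    pos = First[Ordering[Abs[vals - 1]]];      (* pick the eigenvalue closest to 1 *)
    vecs[[pos]]/Total[vecs[[pos]]]             (* {0., 1.36603, -0.366025, 0.} *)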

the simple recursion

Of course we’ve already set up a recursion combined with a lookup table. I just want to emphasize the distinction between what I actually do, and the unadorned recursion.

We are ready to write the recursion. It looks almost exactly like the dilation equation…

but instead of “==” I use the set delayed operator “:=”, the LHS “t” must become a placeholder “t_”, and the whole definition must be wrapped in an If-test to set all values to zero outside of [0,3].

Oh, since I’m starting all over again (don’t want to get two sets of definitions mixed up), I need to set the values of h…

\left\{\frac{1+\sqrt{3}}{4 \sqrt{2}},\frac{3+\sqrt{3}}{4 \sqrt{2}},\frac{3-\sqrt{3}}{4   \sqrt{2}},\frac{1-\sqrt{3}}{4 \sqrt{2}}\right\}

Again, I define the function values exactly as before, at 4 points…

\{\varphi (0),\varphi (1),\varphi (2),\varphi (3)\} = \{0.,1.36603,-0.366025,0.\}

And then I define the recursion:


\varphi[t\_]:=\text{If}\left[t <0 || t >3,0,\text{Chop}\left[\sum _{n=0}^3 h(n) \sqrt{2} \varphi (2\ t-n)\right]\right]

The set delayed operator (:=) says don’t do anything until I give it an actual input. What’s different in this definition is \varphi[t \_] := If... instead of \varphi[t \_] := \varphi[t] = If...\ . There is no assignment statement to \varphi[t] saying that the result of the computation should be saved.

Let’s compute \varphi[1/2] again. We get the same answer, of course:

\varphi[1/2] = 0.933013

and let’s see what Mathematica knows:

[screenshot: the output of ?? φ, showing only the four integer values and the definition]

That’s exactly what we knew before we computed it at 1/2. This is a pure recursion. He recomputes everything every time he needs it.

Simple, but potentially expensive in computer time.
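To get a feel for the cost, one could time the same table of dyadic values under both definitions. Here phiMemo and phiPure are my own stand-in names (not the notebook’s) for the memoized and the pure versions above.

    (* rough timing comparison; phiMemo and phiPure are hypothetical names for the two definitions *)
    First@AbsoluteTiming[Table[phiPure[t], {t, 0, 3, 1/64}]]
    First@AbsoluteTiming[Table[phiMemo[t], {t, 0, 3, 1/64}]]
    (* the pure recursion slows down rapidly as the grid is refined; the memoized one barely changes *)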

Summary

From the dilation equation and the h coefficients, we can get the values of the scaling function \varphi at the integers. We do this either by directly solving a system of equations, or by recognizing this particular system of equations as an eigenvector problem

M_0 \varphi = \varphi\ .

In either case, if we are at all uncertain what the matrix M0 looks like, we can construct it from the equations.

Note that because the values of \varphi at the integers are the components of an eigenvector, they are only determined to within a scalar multiple. That’s OK: we have a reason for normalizing so that the sum of the values is 1. (That, of course, is different from the sum of the squares of the values = 1.) And even that normalization will only get us to within a sign. If you ever plot a scaling function and get the negative of someone else’s picture, all you need to do is change the signs of the initial values at the integers, i.e. change the sign of the eigenvector.

Having the values of \varphi at the integers, we can get it at the halves, then the quarters, etc.

As I have said before, the purpose of this algorithm is to see what the scaling function looks like, and it will work for the mother wavelet, too. In practice, however, we can compute the wavelet coefficients of a signal using the filter coefficients h (and the corresponding coefficients for the mother wavelet); we don’t need the values of the scaling function or the wavelet to get the wavelet coefficients.

(I believe that the previous algorithm, however, serves another purpose. It seems more than plausible that the previous algorithm is used to prove existence and uniqueness of the scaling function, once we have its filter coefficients h.)

For comparison with the next case, let me note that the 4 h’s satisfy the following 4 equations…

edit jun 13: the third and fourth equations were identical. i have corrected the third equation by changing the sign of the cosine term.

h(0)=\frac{-\cos (a)+\sin (a)+1}{2 \sqrt{2}}

h(1)=\frac{\cos (a)+\sin (a)+1}{2 \sqrt{2}}

h(2)=\frac{\cos (a)-\sin (a)+1}{2 \sqrt{2}}

h(3)=\frac{-\cos (a)-\sin (a)+1}{2 \sqrt{2}}

The Daubechies h’s are found by setting a = \pi/3\ .
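As a quick check, we can confirm in Mathematica that a = \pi/3 in those four formulas reproduces the D4 coefficients listed earlier; a one-line sketch:

    (* verify that a = Pi/3 gives the Daubechies h's *)
    a = Pi/3;
    Simplify[{-Cos[a] + Sin[a] + 1, Cos[a] + Sin[a] + 1,
              Cos[a] - Sin[a] + 1, -Cos[a] - Sin[a] + 1}/(2 Sqrt[2]) ==
      {1 + Sqrt[3], 3 + Sqrt[3], 3 - Sqrt[3], 1 - Sqrt[3]}/(4 Sqrt[2])]
    (* True *)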

All I’ve done is push back the mystery of where the h’s came from. Don’t worry. We’ll get there.

I had thought that I would show you plots of other scaling functions, but it makes more sense to show you how to get the mother wavelet.

Next. Unless something really important comes up.


2 Responses to “The dyadic expansion of Daubechies D4 scaling function”

  1. Bill Maier Says:

    Very interesting article. Do you have a Mathematica notebook I can look at? I tried to piece it together from your article but I’m missing something and I don’t get the same results.

  2. rip Says:

    Hi Bill,

    You should have received a notebook via email.

As I have said, I use David Park’s “Presentations” package, so you may have to figure out how to do the graphics using Mathematica’s commands.

