## Introduction

I am going to do something a little risky. I’m going to show you how to approximate a particular function – the Daubechies D4 scaling function – even though I am a little shaky on the justification.

If you have ever looked at wavelets, you have probably seen this picture. It is a quintessential “father wavelet” – more commonly, I think, called a “scaling function”.

I want to compute it, even though the computation is not relevant to obtaining wavelet coefficients: just as this function is computed from its definition, so the wavelet coefficients which depend on it and its associated wavelets are computed from that definition rather than from the function’s values.

Talking about this is much like talking about one of one’s favorite movies: what makes something one of my favorite movies is deeply personal, and it doesn’t usually affect my friends that way. To them, at best, it’s just another movie; at worst, they think it is a dud.

I have wanted to know how to compute this function for a long, long time. You have noticed by now, however, that I tend to bounce from one subject to another; my desire to compute this function was put on the back burner.

Let me say again that in practice we do not compute this function: we use its defining characteristics instead of its actual values. But I just had to know how to do it – someday.

Well, it has been “someday” for a little while now; and I’ve been struggling to assemble this post.

It has gotten me to pick up wavelets again, and to look through my books on the subject. This particular computation has gotten me to look at some more challenging Matlab code, and it will lead me to study convolution. (Matlab code would be far less challenging if I had access to Matlab!)

This function is defined on the interval [0, 3]; it has no analytic formula. It is the solution of a two-scale difference equation, called variously “the dilation equation”, “the refinement equation”, and “the multi-resolution analysis (MRA) equation”:

$$\varphi(t) = \sum_n h(n)\,\sqrt{2}\,\varphi(2t - n)$$

and in this case, the scaling coefficients h are:

$$h = \left\{ \frac{1+\sqrt{3}}{4\sqrt{2}},\ \frac{3+\sqrt{3}}{4\sqrt{2}},\ \frac{3-\sqrt{3}}{4\sqrt{2}},\ \frac{1-\sqrt{3}}{4\sqrt{2}} \right\}$$

It is called D4 because it was invented by Ingrid Daubechies and it has 4 nonzero h’s. As I said, it is called a “scaling function” in contrast to “a wavelet”. We’ll see why, down the road.

If we look at wavelets, we see this function very often. But most books do not explicitly show us how to calculate it; and the calculation is not completely straightforward.

The method I am going to use comes out of DSP, digital signal processing. It is not a tool I use all that comfortably, but the calculation itself is rather simple. On the other hand, details keep piling up, so I am going to split this post.

The method is iterative. We will start with a constant signal, and apply a filter to it eight times. I printed a picture of the transformed signal after each iteration, and I showed it to two friends who had some familiarity with DSP – but who were not practitioners of it.

They both found the algorithm bewildering. I now understand it in general, but there are still details I wonder about. Nevertheless, what I did to my friends was just wave the pictures in their faces, and mutter about upsampling, downsampling, and convolution.

In this post I want to show you the pictures, the Mathematica code, and the underlying idea. In the next post I will fill in some of the underlying details.

## Overview

My starting point was Matlab code in the back of a book. If you have Matlab available to you, duplicating my calculations will be very easy – because their code is also available on the Internet.

The very beginning of the appendix containing the code says

“You are free to use these programs or any derivative of them for any scientific purpose but please reference this book. Updated versions of these programs and others can be found on our webpage at: [http://dsp.rice.edu/software/introduction-wavelets-and-wavelet-transforms-primer]”

except I have corrected the URL (replacing dsp-rice by dsp.rice).

The book is: Burrus, C. Sidney; Gopinath, Ramesh A.; Guo, Haitao. **Introduction to Wavelets and Wavelet Transforms, A Primer.** Prentice Hall, 1998. ISBN 0-13-489600-9.

If you want to see or to use that code, please go get it; while I have used – and will show you – derivative code of my own, I will not actually publish their code. (After all, they said I could use the programs; they didn’t say I could publish them.) Oh, the specific function you want is their “psa(h, kk)”.

They define two completely straightforward functions: downsample and upsample. Here is my Mathematica code for them.

dnsample[x_List] := Table[x[[i]], {i, 1, Length[x], 2}]

upsample[x_List] := Table[{x[[i]], 0}, {i, Length[x]}] // Flatten

The only subtlety in the downsample code is that it keeps the odd-indexed terms and drops the even-indexed ones, counting the first term as index 1. Similarly, the upsample code inserts a zero after each term, starting with the 1st.
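For readers without Mathematica (or Matlab), here is a sketch of the same two helpers in plain Python — my own translation, not the book’s code:

```python
def dnsample(x):
    # Keep the odd-indexed terms (1st, 3rd, 5th, ...), counting from 1;
    # i.e. drop the even-indexed ones.
    return x[::2]

def upsample(x):
    # Insert a single zero after each term, starting with the 1st.
    out = []
    for v in x:
        out.extend([v, 0])
    return out
```

For example, `dnsample([10, 20, 30, 40, 50])` gives `[10, 30, 50]`, and `upsample([1, 2])` gives `[1, 0, 2, 0]`.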

Then they define an upsample function which takes an additional (integer) argument, S, and instead of inserting a single zero between original values, inserts S-1 zeroes. Yes, I do know why their argument was S instead of S-1. You’ll see why, too. Here is my Mathematica code for it.

upsample[x_List, S_Integer] :=
  Table[{x[[i]], ConstantArray[0, {S - 1}]}, {i, Length[x]}] // Flatten

(Because Mathematica dispatches on argument patterns, I do not have to change the name of the second upsample function; it distinguishes just fine between the function upsample with two arguments and the function upsample with one argument.)

The tricky point for me in the code is the Matlab function “conv”. As far as I can tell, the following Mathematica code is the equivalent.

conv[x_List, y_List] := ListConvolve[x, y, {1, -1}, 0]

(The challenge I face is understanding the options for the ListConvolve command. I would not bet the farm – I might bet a Dover paperback or two – that my Mathematica command is always equivalent to the Matlab command.)
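As a cross-check on that equivalence, here is a plain-Python full convolution — the same operation Matlab’s conv(x, y) performs, producing an output of length Length[x] + Length[y] - 1:

```python
def conv(x, y):
    # Full (linear) convolution: out[m] = sum over j of x[j] * y[m - j],
    # summing only over valid overlaps; output length is len(x) + len(y) - 1.
    out = [0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out[i + j] += xi * yj
    return out
```

For example, `conv([1, 2, 3], [4, 5, 6])` gives `[4, 13, 28, 27, 18]`, which is exactly what Matlab’s conv returns for those inputs.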

## The algorithm

Now let me show you the code that generates the scaling function.

p = dnsample[conv[hu, p]] // N;

Take[%, 384];

The first line computes the convolution of two sequences, hu and p; it downsamples the result; and then it overwrites the sequence p. (The //N converts from rationals to reals.) The second line says that we are only going to plot the first 384 values in p. To rephrase that, this approximation to the scaling function is the first 384 points of the signal p.

(There are other ways to approximate the D4 scaling function; this is the most straightforward – even though I am not all that comfortable with convolution – because it repeatedly applies the same filter, “convolution followed by downsampling”, to an initial constant signal. I had originally thought to look at a different method in this book, until I realized that all I had to understand in this case was one filter.)

What exactly is hu? And where did the 384 come from?

We start with the h coefficients which define the D4 scaling function:

$$h = \left\{ \frac{1+\sqrt{3}}{4\sqrt{2}},\ \frac{3+\sqrt{3}}{4\sqrt{2}},\ \frac{3-\sqrt{3}}{4\sqrt{2}},\ \frac{1-\sqrt{3}}{4\sqrt{2}} \right\}$$

We multiply by $\sqrt{2}$, and give the result a new name, h2:

$$h2 = \sqrt{2}\,h = \left\{ \frac{1+\sqrt{3}}{4},\ \frac{3+\sqrt{3}}{4},\ \frac{3-\sqrt{3}}{4},\ \frac{1-\sqrt{3}}{4} \right\}$$

Why? Recall the dilation equation:

$$\varphi(t) = \sum_n h(n)\,\sqrt{2}\,\varphi(2t - n)$$

All we have done is absorb the $\sqrt{2}$ into the h’s. Our dilation equation is now

$$\varphi(t) = \sum_n h2(n)\,\varphi(2t - n)$$

Our function is defined by 4 values of h, and – because of these 4 specific values, **I think** – our function is nonzero on the interval [0, 3]. We are going to compute it at 128 samples per unit interval, and we have 3 intervals, so we take 384 = 3 * 128 points.
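Numerically, the coefficients are easy to sanity-check. Here is a small Python computation (the values are the standard D4 coefficients; the checks follow from the defining properties: the h’s sum to $\sqrt{2}$ and their squares sum to 1, so the h2’s sum to 2 and their squares sum to 2):

```python
import math

s2 = math.sqrt(2)
s3 = math.sqrt(3)

# D4 scaling coefficients h, and h2 = sqrt(2) * h
h  = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
      (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]
h2 = [s2 * c for c in h]

# sum(h) = sqrt(2)  =>  sum(h2) = 2
# sum(h^2) = 1      =>  sum(h2^2) = 2
print(sum(h2))                  # 2.0, up to rounding
print(sum(c * c for c in h2))   # 2.0, up to rounding
```

Numerically, h2 is approximately {0.683013, 1.183013, 0.316987, -0.183013}.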

In Mathematica, I used “rules” to set two parameters, K = 3 = 4 – 1, and S = 128, as follows:

ps = {K -> Length[h2] - 1, S -> 128}

which gives me

{K -> 3, S -> 128}

For this post, however, I have chosen to work with these specific numbers; my original Mathematica notebook – in contrast to the notebook for this post – is still “rule based” and I can change the parameters very easily.

Now comes a piece of real magic: we upsample the filter coefficients h2, introducing 127 zeros (and so they wrote the code to do the subtraction for them, 127 = 128 – 1). This we call hu (u for upsampled) and the function call is:

hu=upsample[h2,128];

As a result, the four nonzero values of hu are at indices 1, 129, 257, and 385.

**There is one of my first questions: since the final filter coefficient is at index 385, why did they not plot the first 385 values of p?**

In any case, hu is an upsampled set of filter coefficients. My friends and I were quite bewildered. We had all heard of upsampling a set of data – but upsampling the filter coefficients instead?

This, it turns out, is quite simple. It also has nothing to do with the up and down sampling characteristic of the filter bank approach to wavelets.

By taking 384 or 385 points, we have changed our timescale from [0, 3] to 1 .. 384 or 1 .. 385. That means we have to move the filter coefficients from the integers 0, 1, 2, 3 to the integers 1, 129, 257, 385 (or possibly to 0, 128, 256, 384, but I choose to start indexing at 1 instead of at 0).
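That change of scale is easy to verify directly: upsampling h2 by 128 leaves the four coefficients exactly 128 slots apart, at indices 1, 129, 257, 385 (counting from 1). A quick Python check, using a translation of the two-argument upsample:

```python
def upsample(x, S):
    # Insert S-1 zeros after each term.
    out = []
    for v in x:
        out.append(v)
        out.extend([0] * (S - 1))
    return out

s3 = 3 ** 0.5
h2 = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]
hu = upsample(h2, 128)

print(len(hu))  # 4 * 128 = 512
# 1-based positions of the nonzero entries:
print([i + 1 for i, v in enumerate(hu) if v != 0])  # [1, 129, 257, 385]
```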

So much for upsampling the filter coefficients h.

What about the iteration itself? Instead of the dilation equation, the authors write (note the superscripts (k) and (k+1)):

$$\varphi^{(k+1)}(t) = \sum_n h2(n)\,\varphi^{(k)}(2t - n)$$

This is reminiscent of a fixed point problem

x = f(x)

which we may try to solve as an iteration: guess an x; compute f(x); now call that x, because the equation says f(x) is x; if it converges, we have an answer. That is, the iteration

$$x_{k+1} = f(x_k)$$

might converge. On the other hand, the process might not converge, and even if it does, the answer need not be unique.
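A familiar concrete example: x = cos(x) has a unique fixed point near 0.739, and simple iteration finds it.

```python
import math

# Fixed-point iteration x_{k+1} = f(x_k) with f = cos,
# starting from the guess x = 1.
x = 1.0
for _ in range(50):
    x = math.cos(x)

print(x)  # approximately 0.7390851, the fixed point of cos
```

Here the iteration converges because cos is a contraction near the fixed point; as noted above, that is not guaranteed in general.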

We have a slightly different situation because the RHS is at time 2t-n while the LHS is at time t, and it is a function $\varphi$ we are trying to find, rather than a number x. Still, if we can find a nice starting function $\varphi^{(0)}$, we can compute $\varphi^{(1)}$, and then $\varphi^{(2)}$, etc.

The idea looks very plausible to me; what threw me for a loop (no pun intended) was digitizing the dilation equation. Understanding that upsampling the filter coefficients was just a change of scale went a long way toward understanding the algorithm.

I need to show you two things. One is the filter which is applied repeatedly, to get us from $\varphi^{(k)}$ to $\varphi^{(k+1)}$. The other is the initialization, $\varphi^{(0)}$.

Here is the initialization step:

The first line,

p = ConstantArray[1, {1152}]/9;

says we start with 1152 values of p, each of them equal to 1/9. Let me be clear: **I do not know why we choose a power of 2 (2^7 = 128) points per unit interval** – rather than, say, a power of 2 points for the entire interval [0, 3], nor **why we initialized to 1/9**; nor why, with 384 points of interest, **we initialized our constant signal to 3 times that length** (1152 = 3*384 = 9*128).

The second line,

Take[%, 384];

says that we are only interested in the first 384 points, i.e. we are only interested in the answer over the original range [0,3].

And, as I said before, **I don’t know why we look at 384 points instead of 385.**

The third line,

data = Transpose[{Range[384]/128, %}];

is used only to change the horizontal scale for plotting purposes: I don’t want a horizontal scale of 1 .. 384, but rather [0,3].

The fourth line is the specific graphics commands and options which I used. Unless you have David Park’s “presentation” package, you will need to use the Mathematica “ListPlot” command instead of “ListDraw”.

dwg[0] = Draw2D[{
    Thickness[.01], line, ListDraw[%, Joined -> True]
  }, Axes -> True, PlotRange -> {All, {-.5, 1.5}}, Background -> back,
  AspectRatio -> 1/GoldenRatio]

Although a plot of the initial function seems superfluous, let me err on the side of too many pictures.

I remind you that I have plotted the first 384 values, but the signal p starts out three times that long.

Okay, let’s apply our filter. Here is a snapshot of my code and the plot:

The first line of our code

p = dnsample[conv[hu, p]] // N;

says convolve hu with p, downsample, and replace the signal by that output. The convolution corresponds to the RHS $\sum_n h2(n)\,\varphi^{(k)}(2t - n)$, and the downsampling should have the effect of setting the result to time t rather than time 2t: $\varphi^{(k+1)}(t)$ rather than $\varphi^{(k+1)}(2t)$.

Once again, the second line of the code,

Take[%, 384];

says take the first 384 values of the sequence p. Note that p itself was redefined in the first line; the second line selects a subset of p, but the next iteration will use more than the 384 points selected on the second line.

The third line, as before,

data = Transpose[{Range[384]/128, %}];

just changes the scale on the horizontal axis. The fourth line, as before,

dwg[1] = Draw2D[{
    Thickness[.01], line, ListDraw[%, Joined -> True]
  }, Axes -> True, PlotRange -> {All, {-.5, 1.5}}, Background -> back,
  AspectRatio -> 1/GoldenRatio]

plots the first 384 values of the current sequence p.

## The plots and summary

The initialization, a constant signal = 1/9, $\varphi^{(0)}$:

After one application (convolve and downsample), $\varphi^{(1)}$:

After the second application, $\varphi^{(2)}$:

After the third…

After the fourth…

After the fifth…

After the sixth…

After the seventh…

Finally, after the eighth (which is the picture I started this post with)…

We could worry about how close the answer is, but – as I said – we don’t need this function numerically; what we need in practice is its 4 filter coefficients h.

The summary is simple enough. I am delighted that there is so little overhead involved in this algorithm: convolve and downsample, repeatedly. This is slick and quick.
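To make “convolve and downsample, repeatedly” fully concrete, here is the whole calculation as a standalone Python sketch — my translation of the algorithm above, including the 1152-point, 1/9 initialization that I still can’t fully justify:

```python
import math

def conv(x, y):
    # Full convolution, as in Matlab's conv.
    out = [0.0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out[i + j] += xi * yj
    return out

def dnsample(x):
    # Keep the odd-indexed terms, counting from 1.
    return x[::2]

def upsample(x, S):
    # Insert S-1 zeros after each term.
    out = []
    for v in x:
        out.append(v)
        out.extend([0.0] * (S - 1))
    return out

s3 = math.sqrt(3)
h2 = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]
hu = upsample(h2, 128)          # nonzero at indices 1, 129, 257, 385

p = [1.0 / 9.0] * 1152          # the constant initial signal
for _ in range(8):              # eight applications of the filter
    p = dnsample(conv(hu, p))

phi = p[:384]                   # the approximation on [0, 3], 128 samples per unit
```

Plotting phi against the times [i/128 for i in range(1, 385)] should reproduce the final picture (the pure-Python convolution is slow but adequate for a one-off calculation).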

I do have more details to show you. Soon.

April 20, 2009 at 7:41 pm

here’s a pretty clear description of the cascade, the indexing is over [0,1] though

http://cnx.org/content/m10486/latest/

April 25, 2009 at 12:28 pm

Thank you for the link.

He is describing what I would have called “the dyadic expansion” rather than the cascade algorithm. Regardless, it is a different algorithm from the one presented here. As I said in “Happenings Apr 25”, I expect to be presenting the dyadic expansion as an explicit recursion.

February 17, 2013 at 6:32 am

Rip,

Can you explain how the Daubechies factorisation method works ? Just a simple example – constructing a non-standard wavelet filter, formulas for the coefficients, etc 🙂