Details of approximating the D4 scaling function

the math

All too long ago, it seems, I showed you how to approximate the D4 scaling function. Let me fill in more of the details of the calculations.

I defined four functions and used three of them: two for upsampling (only one of which I used), one for downsampling, and one for convolution. Let me do what I want other people to do for me: show the code and show what it does. Whether I can produce the same code in the same language, or corresponding code in another language, the output serves as a check.

First, let’s see what the code does.

playing with upsample & downsample

So what does the one-parameter upsample do?

Here is a test signal:

x = \{4,3,2,1\}

I call the 1-argument upsample, which inserts zeroes after each value:

upsample[x] produces

\{4,0,3,0,2,0,1,0\}

They have a comment which says that their code does precisely the same thing. Note that we insert zeroes after every value in the signal, including the last.

Downsampling (dnsample) after that upsampling will recover the original signal:

dnsample[upsample[x]] = \{4,3,2,1\}

If we downsample again, of course, we lose the even-indexed terms (2nd and 4th, since I’m counting from 1 rather than from 0):

dnsample[x] = \{4,2\}

And upsampling cannot now recover the lost data:

upsample[\{4,2\}] = \{4,0,2,0\}

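The round trip above is easy to check in any language. Here is a minimal Python sketch (the names `upsample` and `dnsample` simply mirror the Mathematica functions) showing that downsampling undoes one upsampling, but that data lost to an extra downsample is gone for good:

```python
def upsample(x):
    """Insert one zero after every value, including the last."""
    out = []
    for v in x:
        out.extend([v, 0])
    return out

def dnsample(x):
    """Keep the 1st, 3rd, 5th, ... entries (counting from 1)."""
    return x[0::2]

x = [4, 3, 2, 1]
assert dnsample(upsample(x)) == x              # up then down recovers the signal
assert dnsample(x) == [4, 2]                   # down alone loses the 2nd and 4th entries
assert upsample(dnsample(x)) == [4, 0, 2, 0]   # up cannot restore them
```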
The two-parameter upsample function, called with integer S > 1, inserts S-1 zeroes after every value of its input signal. The one-parameter upsample function is equivalent to the two-parameter function called with “2”, because we insert 2-1 = 1 zero.

upsample[x] produces

\{4,0,3,0,2,0,1,0\}

while upsample[x,2] produces

\{4,0,3,0,2,0,1,0\}

For second parameter “3” (upsample[x,3]), as you should expect, we get…

\{4,0,0,3,0,0,2,0,0,1,0,0\}

… that is, 2 zeroes between each of the original values.

What if we iterate the one-parameter upsample function? Call it once…

\{4,0,3,0,2,0,1,0\}

… and call it again on the result…

\{4,0,0,0,3,0,0,0,2,0,0,0,1,0,0,0\}

We have 3 zeroes between each original value, not 2. Do it a third time…

\{4,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0\}

We have inserted 7 zeroes. N repetitions of upsample will give us 2^N-1 zeroes. It turns out that we are going to want 127 = 2^7-1 zeroes, so we could have used the original upsample function instead of writing a new one; but we would have had to call it 7 times. Yuck. The new one is simple enough and very convenient. In addition, when I’m trying to match the functionality of code in another language, I want as few differences as possible. They used the two-parameter function, so I did too, automatically. Then I – very briefly – considered using the one-parameter function instead.

Oh, we also saw that the one-parameter function can give us 1 or 3 or 7, etc. zeroes, but not other numbers of zeroes. If we were to use only one of these two functions, the two-parameter function is the more general.
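To see the pattern concretely, here is a Python sketch of both versions (the function names `upsample1` and `upsample2` are mine, standing in for the one- and two-parameter Mathematica functions): the two-parameter version inserts S-1 zeroes after each value, and iterating the one-parameter version N times matches the two-parameter version with S = 2^N.

```python
def upsample1(x):
    """One-parameter version: one zero after every value."""
    out = []
    for v in x:
        out.extend([v, 0])
    return out

def upsample2(x, S):
    """Two-parameter version: S-1 zeroes after every value."""
    out = []
    for v in x:
        out.append(v)
        out.extend([0] * (S - 1))
    return out

x = [4, 3, 2, 1]
assert upsample2(x, 2) == upsample1(x)                        # S = 2 matches the one-parameter version
assert upsample1(upsample1(x)) == upsample2(x, 4)             # two iterations: 3 zeroes
assert upsample1(upsample1(upsample1(x))) == upsample2(x, 8)  # three iterations: 7 zeroes
```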

Oh, oh. The reason for calling it with S-1 is just to let the computer do the arithmetic for us: the parameter S is the number of points per half-open unit interval (or, the points are 1/S apart).


Here is conv(u,v) from the online Matlab® documentation:

w(k)=\sum _{j} u(j)\ v(k+1-j)

(That formula came from a snapshot of the page. Interestingly, I went back to find it again, and I found something functionally the same, but written differently, with k – j instead of k + 1 – j. It doesn’t affect what follows, and I’ll point out why.)

In addition, the Matlab® documentation says that if the two sequences u and v are the same length n, we get

w(1) = u(1)*v(1)
w(2) = u(1)*v(2)+u(2)*v(1)
w(3) = u(1)*v(3)+u(2)*v(2)+u(3)*v(1)
…
w(n) = u(1)*v(n)+u(2)*v(n-1)+ … +u(n)*v(1)
…
w(2*n-1) = u(n)*v(n)
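That formula is easy to implement directly, treating out-of-range terms as zero. Here is a pure-Python sketch (0-based lists, shifted to match the 1-based formula):

```python
def conv(u, v):
    """Full convolution: w(k) = sum_j u(j)*v(k+1-j), 1-based,
    with out-of-range terms of v treated as zero."""
    n, m = len(u), len(v)
    w = []
    for k in range(1, n + m):          # k = 1 .. n+m-1
        s = 0
        for j in range(1, n + 1):      # j = 1 .. n
            i = k + 1 - j              # index into v
            if 1 <= i <= m:
                s += u[j - 1] * v[i - 1]
        w.append(s)
    return w

u = [1, 2, 3, 4]
v = [5, 6, 7, 8]
w = conv(u, v)
assert w[0] == u[0] * v[0]       # w(1) = u(1)*v(1)
assert w[-1] == u[-1] * v[-1]    # w(2n-1) = u(n)*v(n)
```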

Let’s look at that sum for n = 4, so we should get nonzero values w[1] through w[7]. I define the sum….

w[k_]=Sum[u[j] v[k+1-j],{j,4}]

… and Mathematica® prints

u[4] v[-3+k]+u[3] v[-2+k]+u[2] v[-1+k]+u[1] v[k]

If I ask for w[1], I get

w[1] = u[4] v[-2]+u[3] v[-1]+u[2] v[0]+u[1] v[1]

Oh, of course. v[m] = 0 except for m = 1, 2, 3, 4. If I tell Mathematica® to do that, then the expression reduces to

u[1] v[1]

and so on. The fourth term, in contrast, starts out as…

w[4] = u[4] v[1]+u[3] v[2]+u[2] v[3]+u[1] v[4]

and that’s what it remains; nothing vanishes from it.

Perhaps you see now why it doesn’t matter whether the definition has k – j or k + 1 – j. All that matters is the numerical value of the result, e.g. 4 or 5: v[4] is not zero, v[5] is zero, and it has nothing to do with the intermediate values of k and j that produce 4 or 5.

A table of the values w[0] through w[8] is…

\begin{array}{c} 0 \\ u[1] v[1] \\ u[2] v[1]+u[1] v[2] \\ u[3] v[1]+u[2] v[2]+u[1] v[3] \\ u[4] v[1]+u[3] v[2]+u[2] v[3]+u[1] v[4] \\ u[4] v[2]+u[3] v[3]+u[2] v[4] \\ u[4] v[3]+u[3] v[4] \\ u[4] v[4] \\ 0\end{array}

where I have deliberately chosen to confirm the two zero values for w[0] and w[8].
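The same confirmation can be done numerically in Python. This is just a sketch: the sample values of u and v are arbitrary nonzero numbers, and v is defined to vanish outside indices 1..4.

```python
u = {j: 10 + j for j in range(1, 5)}   # arbitrary nonzero values u(1)..u(4)
v = {j: 20 + j for j in range(1, 5)}   # arbitrary nonzero values v(1)..v(4)

def vv(m):
    """v(m), zero outside m = 1..4."""
    return v.get(m, 0)

def w(k):
    """w(k) = sum over j = 1..4 of u(j) * v(k+1-j)."""
    return sum(u[j] * vv(k + 1 - j) for j in range(1, 5))

assert w(0) == 0 and w(8) == 0   # the two zero values bracketing the table
assert w(1) == u[1] * v[1]       # only one term survives
assert w(7) == u[4] * v[4]       # likewise at the other end
```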

So why not use this instead of a convolution command which I don’t fully understand?
Because I don’t want to have to set things to zero myself. I want to reduce my overhead.

OK, so much for their equation for w. The following Mathematica® ListConvolve command matches that table of sums, but only w[1] through w[7] this time (and displays it as a column):

ListConvolve[Array[u, 4], Array[v, 4], {1, -1}, 0] // ColumnForm

The output is

\begin{array}{c} u[1] v[1] \\ u[2] v[1]+u[1] v[2] \\ u[3] v[1]+u[2] v[2]+u[1] v[3] \\ u[4] v[1]+u[3] v[2]+u[2] v[3]+u[1] v[4] \\ u[4] v[2]+u[3] v[3]+u[2] v[4] \\ u[4] v[3]+u[3] v[4] \\ u[4] v[4]\end{array}

I went with it. As I have said, I do not yet understand what these options to ListConvolve mean, but they appear to be what I want.

I have, quite frankly, taken one heck of a leap. I have matched the ListConvolve command to the conv command in one case, and it isn’t the case I’m interested in.

Don’t be afraid to take a big step if one is indicated; you can’t cross
a chasm in two small jumps.

David Lloyd George

There are several things I could do to investigate these implementations of convolution. I could have a friend play with the Matlab® conv command. I could install something called Octave – a free Matlab® clone – and play with it. I could simply use the given definition of w and play with it. I could take a serious look at the options to the ListConvolve command.

Or, since I now know how to compute the D4 scaling function exactly, using what I would call a dyadic expansion, I can just let it go and move on.

For now, I am going to move on.

upsampling the filter coefficients h

There are two things I want to check, but not here and not now.

First, I think this algorithm should work for any subdivision, not just for powers of 2: I think 100 should work as well as 128. If, however, we want to compare it to the dyadic expansion, we need powers of 2.

Second, I do want to investigate the convergence of this algorithm, by running more iterations of this approximation, and comparing the results to the exact answers.


Let’s see why they upsampled the filter coefficients h.

We have the dilation equation

\varphi(t) = \sum_n {h(n)\ \sqrt{2}\ \ \varphi(2\ t - n)}

and we have 4 nonzero h(n), but it is crucial to note that they are labeled from 0 to 3. Let me enforce that. (In Mathematica®, I use an array starting at index 0, instead of a list.)


(Having manually changed “(” and “)” to “[” and “]”, I’m not going to enforce the typography in what follows. Suffice it to say that h[0] is the 0th entry of an array. The h() etc. which appear below are translations to TeX of h[] in Mathematica®.)

Write out the dilation equation…

\varphi (t)=\sqrt{2}\  h(0) \varphi (2 t)+\sqrt{2}\  h(3)   \varphi (2 t-3)+\sqrt{2}\  h(2) \varphi (2 t-2)+\sqrt{2}\    h(1) \varphi (2 t-1)

We also know that \varphi(t) is zero outside the interval [0,3]. (In fact, it is zero outside the open interval (0,3), since (although I have not shown you) \varphi(0) = \varphi(3) = 0\ .)

Now we want to do three things:

  • change the scale to 0 .. 384;
  • shift the first time from 0 to 1;
  • discretize the time scale from [1, 385], using increments of 1/128.

(I am ever more certain that we should have plotted the first 385 points, even though their code plots only the first 384. I’ve gone back and done it, and it seems just fine. Of course, since I’ve told you that the D4 scaling function is 0 at 0 and at 3, neither the first nor the last approximation needs to be computed. On the other hand, I wanted to know that the algorithm displayed a reasonable value at point 385.)

To change the scale – that is, just to change the labels on the x-axis – from [0, 3] to [0,384] should involve changing only “n”.

We should now change the values of n from 0,1,2,3 to 0, 128, 256, 384. That’s it, the purpose of upsampling the filter coefficients h is just to change the time scale.

\varphi (t)=\sqrt{2}\  h(0) \varphi (2   t)+\sqrt{2}\  h(384) \varphi (2 t-384)+\sqrt{2}\  h(256)   \varphi (2 t-256)+\sqrt{2}\  h(128) \varphi (2 t-128)

Then, to shift the x-axis by 1, we change the values of n from 0, 128, 256, 384 to 1, 129, 257, 385.

\varphi (t)=\sqrt{2}\  h(385) \varphi (2 t-385)+\sqrt{2}\    h(257) \varphi (2 t-257)+\sqrt{2}\  h(129) \varphi (2   t-129)+\sqrt{2}\  h(1) \varphi (2 t-1)

And shift t to t+1.

\varphi (t+1)=\sqrt{2}\  h(385) \varphi (2   (t+1)-385)+\sqrt{2}\  h(257) \varphi (2   (t+1)-257)+\sqrt{2}\  h(129) \varphi (2   (t+1)-129)+\sqrt{2}\  h(1) \varphi (2 (t+1)-1)

but now we have \varphi(t+1) on the LHS and we want \varphi(t)\ . Perhaps better to call it \varphi(\tau) just for a moment.

\varphi (\tau )=\sqrt{2}\  h(385) \varphi (2 \tau   -385)+\sqrt{2}\  h(257) \varphi (2 \tau -257)+\sqrt{2}\    h(129) \varphi (2 \tau -129)+\sqrt{2}\  h(1) \varphi (2   \tau -1)

In other words, if we shift the origin, whatever we call it, it is still true that if we have “t” on the LHS then we have “2t” on the RHS.
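A quick numerical check makes the relabeling concrete. This is a Python sketch: the D4 coefficients below are the standard values for h(0)..h(3), and `upsample2` is my stand-in for the two-parameter upsample. Upsampling h by 128 puts the four coefficients exactly at positions 0, 128, 256, 384 (counting from 0), i.e. 1, 129, 257, 385 counting from 1.

```python
from math import sqrt

# Standard D4 scaling filter coefficients h(0)..h(3)
s3 = sqrt(3)
h = [(1 + s3) / (4 * sqrt(2)),
     (3 + s3) / (4 * sqrt(2)),
     (3 - s3) / (4 * sqrt(2)),
     (1 - s3) / (4 * sqrt(2))]

def upsample2(x, S):
    """Insert S-1 zeroes after every value."""
    out = []
    for v in x:
        out.append(v)
        out.extend([0] * (S - 1))
    return out

hup = upsample2(h, 128)
nonzero = [i for i, v in enumerate(hup) if v != 0]
assert len(hup) == 4 * 128
assert nonzero == [0, 128, 256, 384]   # 0-based; 1, 129, 257, 385 counting from 1
```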

the programming

Now let me say a bit more about my Mathematica® implementation of these functions. Let me recall them for you.

upsample[x_List] := Table[{x[[i]], 0}, {i, Length[x]}] // Flatten

upsample[x_List, S_Integer] :=
Table[{x[[i]], ConstantArray[0, {S - 1}]}, {i, Length[x]}] // Flatten

dnsample[x_List] := Table[x[[i]], {i, 1, Length[x], 2}]

conv[x_List, y_List] := ListConvolve[x, y, {1, -1}, 0]

upsample – downsample

The upsample function has only two parts, Table and Flatten. The heart of the upsample function is the Table command (Table[{x[[i]],0},{i,Length[x]}]), which, applied to our test signal…

x = \{4,3,2,1\}

creates a list of ordered pairs whose 2nd element is 0:

\{\{4,0\},\{3,0\},\{2,0\},\{1,0\}\}

The “Flatten” command then turns that list of lists into a single list, which is what I want:

\{4,0,3,0,2,0,1,0\}

I am sure that I could use an Append command, with some options, to accomplish the same thing – but why should I try to figure out its options when I already have a conceptually trivial implementation?

For the two-argument function, the single “0” has been replaced by

ConstantArray[0, {S - 1}]

which is, as you would hope and expect, a list of S-1 zeroes. For example, with S = 6, ConstantArray[0, {5}] (not to be confused with upsample[x, 6]) delivers

\{0,0,0,0,0\}

The Table command

Table[{x[[i]], ConstantArray[0, {2}]}, {i, Length[x]}]

gives me a list of lists of lists…

\{\{4,\{0,0\}\},\{3,\{0,0\}\},\{2,\{0,0\}\},\{1,\{0,0\}\}\}

and then the Flatten command reduces it to a 1D list, which is, as before, what I want:

\{4,0,0,3,0,0,2,0,0,1,0,0\}

The downsample function

dnsample[x_List] := Table[x[[i]], {i, 1, Length[x], 2}]

just uses a Table command to extract the odd-indexed entries of its input (the 1st, 3rd, and so on, counting from 1): the “2” is the increment.
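In Python the same extraction is a stride-2 slice, a one-line sketch:

```python
def dnsample(x):
    # x[0::2] keeps entries 1, 3, 5, ... (counting from 1); the step 2 is the increment
    return x[0::2]

assert dnsample([4, 0, 3, 0, 2, 0, 1, 0]) == [4, 3, 2, 1]
```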

My conv function

conv[x_List, y_List] := ListConvolve[x, y, {1, -1}, 0]

uses one set of example options I found in the Mathematica® documentation, and is known to match the Matlab® conv command in one test case. Maybe someday I’ll have to figure out all the options, but I have better things to do.

