Wavelet properties: orthogonality & counterexample


Edit 3 Aug. It is embarrassing to be so wrong in my guesses. Well, I understand more today than I did yesterday.

The two forms of orthogonality are not the same. We saw in the subsequent post about semi-orthogonality that we could have an orthogonal direct sum V_0 \oplus W_0 while having non-orthonormal bases in V_0 and W_0\ . Conversely, if we had a direct sum which was not orthogonal, we could still choose orthonormal bases in the two spaces. Okay, let me rephrase that: if we have bases in V_0 and W_0\ at all, I know we can make them orthonormal.

I am no longer sure about the following guess about integer translates of elements of W_0\ . And I am completely wrong about the conjecture in red.

Edit 28 Jun: I believe that once I know that \psi(t)\ is a wavelet (i.e. an element of W_0\ ), then so are all of its integer translates. This remark occurs once later, as well as here.

Let me speak in general terms first.

There seem to be two properties subsumed under the rubric “orthogonality”.
Edit June 29. “Seem” is the correct word. I am not ready to make this precise, but these two apparently different properties are very close, and possibly the same. I think I can say that if the scaling function and its translates are not orthogonal, then W_0 is not an orthogonal complement to V_0\ . To put that another way, we can still define a direct sum decomposition

V_1 = V_0 + W_0

but V_0\ and W_0 are not orthogonal subspaces. And so, when I convinced myself that “if \psi(t)\ is a wavelet (i.e. an element of W_0\ ), then so are all of its integer translates”, I was right… because I was assuming W_0\ to be orthogonal to V_0\ , and that’s one heck of an assumption.

I am looking into non-orthogonal subspaces, and into biorthogonal wavelets.

One is that we have, for example, the orthogonal direct sum decompositions

V_1 = V_0 \oplus W_0\

V_2 = V_1 \oplus W_1\

V_2 = V_0 \oplus W_0 \oplus W_1\ .

We are using the L2 inner product, \int f(t) g(t) \, dt\ . The functions f and g are orthogonal if \int f(t) g(t) \, dt = 0\ .

The first decomposition tells us that all the functions in V_0\ are orthogonal to all the functions in W_0\ . The second decomposition tells us the same thing about V_1\ and W_1\ : all the functions in V_1\ are orthogonal to all the functions in W_1\ .

The third decomposition tells us more: all the functions in V_0\ are orthogonal to all the functions in W_1\ ; and all the functions in W_0\ are orthogonal to all the functions in W_1\ .

In particular, the scaling function \varphi and its integer translates which defined V_0\ are orthogonal to all the wavelets in W_0\ and all the wavelets in W_1\ .

In other words, for the scaling function \varphi we should have

\int \varphi(t)\ \phi(t) \, dt\ = 0\ .

\int \varphi(t-k)\ \phi(t) \, dt\ = 0\ .

where \phi\ is any function (any wavelet) in any space W_i\ . (Yes, in particular this is true if \phi is the mother wavelet \psi\ ; we must hesitate to talk about the arbitrary translates or the integer translates of \psi\ until we know that they are in W_0\ .) Edit 28 Jun: I believe that once I know that \psi(t)\ is a wavelet (i.e. an element of W_0\ ), then so are all of its integer translates. We can stop hesitating.

Similarly, we would have, for example,

\int \varphi(2t)\ \phi(t) \, dt\ = 0\ .

where \phi\ is now restricted to any function (i.e. any wavelet) in W_i\ except W_0\ . Why? \varphi(2t) \in V_1\ , which is orthogonal to W_1\ and higher, but not necessarily to W_0\ .

Let me point out that one of the challenges here is that we don’t know much about the spaces W_0\ ; in particular, we don’t know yet if \psi(t) \in W_0\ implies that \psi(t-1) \in W_0\ .

The second property is that we may have that the scaling function and its integer translates are orthogonal:

\int \varphi(t)\ \varphi(t-k) \, dt\ = P\ \delta(k) \ .

where \delta(k) \ is the Kronecker delta, \delta(0) = 1\ and \delta(k) = 0\ for all k ≠ 0.

Here I do not need the distribution, the “Dirac delta function”, whose defining property is the inner product \int \delta(t)\ f(t) \, dt\  = f(0)\ . I just want to write one equation instead of these two:

\int \varphi(t)\ \varphi(t) \, dt\ = P\ .

\int \varphi(t)\ \varphi(t-k) \, dt\ = 0 \ , for k ≠ 0.
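Here is a quick numerical sketch of that condition (my own addition, in Python with NumPy; not part of the original post). I use the Haar scaling function — \varphi = 1 on [0,1), 0 elsewhere — as the concrete example, so P = 1.

```python
import numpy as np

# Numerical check of  integral phi(t) phi(t-k) dt = P delta(k)
# for the Haar scaling function (phi = 1 on [0,1), else 0), so P = 1.
def phi(t):
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

t = np.linspace(-3.0, 4.0, 70001)
dt = t[1] - t[0]

# Riemann-sum approximations of the inner products for k = -2..2
ips = {k: np.sum(phi(t) * phi(t - k)) * dt for k in range(-2, 3)}
for k, ip in ips.items():
    print(k, round(ip, 4))   # ~1 at k = 0, ~0 otherwise
```

The translates of the Haar function don’t overlap at all, which is why the off-diagonal inner products vanish; for the triangle function below, they won’t.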

However we write it, this is a property of major importance. We can deduce a whole lot more about wavelets in general, about a suitable mother wavelet, and about integer translations of wavelets. In particular, if the scaling function is orthogonal to its integer translates, then integer translates of the mother wavelet are also orthogonal to each other, and we speak of orthogonal wavelets.

This property does hold, of course, for the Haar system and for all the Daubechies scaling functions. (Technically, that’s redundant: we will eventually show that the Haar system is in fact D2, the Daubechies wavelets defined by 2 nonzero h’s.)

The reason I emphasize that there are two ideas of orthogonality here (orthogonal direct sums and orthogonal integer translates of the scaling function) is that I sometimes get the impression, and it may be a mistaken impression, that people are using the latter to prove consequences of the former. But the orthogonal direct sums are more general.

Recall that we can — and have — used non-orthonormal (no, I didn’t say non-orthogonal) bases in finite dimensional vector spaces. These lead us quickly to the definition of a reciprocal basis: we take dot products of a vector with the reciprocal basis to compute the basis-coefficients of that vector. (As I post this, at least, “reciprocal basis” is in the tag cloud, so you can look there.)

Well, we can do the same thing with wavelets. If we have a basis of wavelets but it is not orthonormal, then we need to construct the reciprocal basis. The pair, basis of wavelets and reciprocal basis, is called a bi-orthogonal system. Just a different name for something we’ve seen before.

(If we have an orthogonal basis which is not orthonormal, the reciprocal basis differs from the original only by scale factors on each vector but the directions are the same; this means we just end up with scale factors associated with the dot-products. I will show you this, below. But what it means is that, in practice, a merely orthogonal basis is almost as good as an orthonormal one.)

I am looking forward to playing with a bi-orthogonal system, so that I can look for the properties which depend on the orthogonal direct sum decomposition rather than on the orthogonality of integer translates of the scaling function.

If the integer translates are orthogonal but not orthonormal, we can fix that just by dividing \varphi\ by its norm \sqrt{P}\ , where P is defined by

\int \varphi(t)\ \varphi(t) \, dt\ = P\ .

digression on Fourier

Suppose we have a function which is precisely a finite Fourier series, say,

f(x) = 3 \cos (2 x)+2 \sin (x)\

That seems unambiguous and straightforward: our f(x) can be written exactly using \sin(x)\ and \cos(2x)\ . But, with the L2 inner product on {[0,\ 2\pi]}\

\int_0^{2 \pi } f(t)\ g(t) \, dt\

and the associated norm

||g|| = \sqrt{\int_0^{2 \pi } g(t)\ g(t) \, dt}\

we have that the set {1, cos[nx], sin[nx]} is orthogonal but not orthonormal. Each of sin(nx) and cos(nx) has norm \sqrt{\pi}\ (because the integral of the square is \pi\ )…

[image: trig norms]

and the constant function 1 (= cos (0x), if you will) has norm \sqrt{2}\ \sqrt{\pi}\ .

[image: const norm]
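These norms are easy to confirm numerically (a sketch of my own, in Python/NumPy). A Riemann sum over one full period is exact, up to floating point, for trig polynomials.

```python
import numpy as np

# Norms of cos(nx), sin(nx), and 1 under the L2 inner product on [0, 2 pi].
# A periodic Riemann sum (endpoint excluded) is exact for trig polynomials.
x = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
dx = 2*np.pi / 4096

norm = lambda g: np.sqrt(np.sum(g * g) * dx)

print(norm(np.cos(3*x)))       # ~ 1.77245, i.e. sqrt(pi)
print(norm(np.sin(5*x)))       # ~ 1.77245, i.e. sqrt(pi)
print(norm(np.ones_like(x)))   # ~ 2.50663, i.e. sqrt(2 pi)
```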

The orthonormal basis is constructed by dividing each of these functions by its norm; the reciprocal basis is constructed by dividing each of these functions by the square of its norm; equivalently, the reciprocal basis is constructed by dividing each of the orthonormal functions by the norm of the original function.

However we carry it out, the end result is that the set

\{\frac{1}{\sqrt{2\ \pi}}, \frac{cos[nx]}{\sqrt{\pi}}, \frac{sin[nx]}{\sqrt{\pi}}\}\

is an orthonormal basis; and the set

\{\frac{1}{2\ \pi}, \frac{cos[nx]}{\pi}, \frac{sin[nx]}{\pi}\}\

is the basis reciprocal to \{1, cos[nx], sin[nx]\}\ .

In this form…

f(x) = 3 \cos (2 x)+2 \sin (x)\

f(x) is given in terms of the orthogonal but non-orthonormal basis. We can confirm that its components {3, 2} wrt that basis can be computed as the dot products of f with the reciprocal basis. Only two terms are nonzero:

[image: get coeffs]

We are more likely to have simply been told that the coefficients are to be found using

\frac{1}{\pi}\ \int_0^{2 \pi } \sin(x)\ f(x) \, dx\

\frac{1}{\pi}\ \int_0^{2 \pi } \cos(2x)\ f(x) \, dx\ .

Nothing about a reciprocal basis, just a recipe with a god-given factor of \frac{1}{\pi}\ in front of the integrals. Conceptually, those factors belong inside the integrals, under the trig functions. But it’s the same calculation.
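Here is that recipe carried out numerically (my own check, Python/NumPy). Dotting f with the reciprocal basis — equivalently, putting the 1/\pi in front of the integrals — recovers the coefficients 2 and 3.

```python
import numpy as np

# Recover the coefficients of f(x) = 3 cos(2x) + 2 sin(x) via the
# 1/pi "recipe", i.e. dot products with the reciprocal basis.
f = lambda x: 3*np.cos(2*x) + 2*np.sin(x)

x = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
dx = 2*np.pi / 4096

b1 = (1/np.pi) * np.sum(np.sin(x)   * f(x)) * dx   # coefficient of sin(x)
a2 = (1/np.pi) * np.sum(np.cos(2*x) * f(x)) * dx   # coefficient of cos(2x)
print(b1, a2)   # ~2 and ~3
```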

If, on the other hand, we had constructed the orthonormal basis \{\frac{1}{\sqrt{2\ \pi}}, \frac{cos[nx]}{\sqrt{\pi}}, \frac{sin[nx]}{\sqrt{\pi}}\}\ , then the coefficients wrt the orthonormal basis are

[image: get 2nd coeffs]

which may look weird until I remind you that we are implicitly writing

f(x) = (3\ \sqrt{\pi})\  \frac{\cos (2 x)}{\sqrt{\pi}}+(2\ \sqrt{\pi})\  \frac{\sin (x)}{\sqrt{\pi}}\

which is exactly our function f.

The point I am belaboring – perhaps excessively – is that most of us have been coping with an orthogonal but non-orthonormal basis most of our lives, and constructing the reciprocal basis in such a case amounts to just sticking a scaling factor in front of the dot product calculation for the components. We get the components without explicitly describing the reciprocal basis.

We do much the same for wavelets and the scaling functions: so long as they are orthogonal to their integer translates, we cope with their norms not being 1.

a counterexample

Since I haven’t yet itemized the consequences of orthogonality under integer translation, I won’t show you that the following scaling function violates them. But I want to put this example out here, because it does satisfy all 6 consequences of the dilation equation; but it is not orthogonal under integer translation, and it will not satisfy the consequences of orthogonal integer translates.

And yet, because we have clearly defined spaces V_0\ , V_1\ , etc., we should have orthogonal direct sum decompositions.

Here it is. Just a triangle. You can also call it a linear spline. (And higher-order splines are also counter-examples.)


So, we can define the space V_0\ as the span of \varphi(t)\ and its integer translates \varphi(t-k)\ .

NOTE that because the triangle \varphi(t)\ is positive on (0,2), it cannot be orthogonal to two of its integer translates, namely \varphi(t-1)\ and \varphi(t+1)\ . They overlap and there are no negative values to cancel things out.

That is, it is false that \varphi(t)\ is orthogonal to all of its integer translates. All but two is two too few.

For example, we can look at \varphi(t)\ (black) and \varphi(t-1)\ (red) and their intersection (yellow)…


That the area under the intersection is positive tells us that these functions are not orthogonal. The inner product, however, is not the area. We can estimate the inner product by a sum…

[image: approx ip]

I should draw the product \varphi(t)\ \varphi(t-1)\

[image: exact ip pic]

but of course we can compute the inner product exactly, too. Since the functions are linear, their product is quadratic (where it’s not zero) and the integral is cubic. In fact, the inner product works out to 1/6.

[image: exact ip calc]
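A numerical version of that calculation (my own sketch, Python/NumPy), with \varphi\ the triangle of base 2 and height 1 supported on [0,2]:

```python
import numpy as np

# Overlap integral of the hat function with its integer translate:
# phi is the triangle of base 2, height 1, supported on [0, 2].
def phi(t):
    return np.maximum(0.0, 1.0 - np.abs(t - 1.0))

t = np.linspace(0.0, 3.0, 300001)
dt = t[1] - t[0]

ip = np.sum(phi(t) * phi(t - 1)) * dt
print(ip)   # ~ 1/6, so the translates are definitely not orthogonal
```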

Anyway, we take that triangular scaling function and all its integer translates to be V_0\ .

We can define V_1\ as the span of the set \{\varphi(2t-k)\}\ . Those are just narrower triangles of height 1, e.g. \varphi(2t)\ looks like…

[image: v1 triangle]

The really big question is: is V_0\ a subspace of V_1\ , V_0 \subset V_1\ ?

The answer is yes. (Otherwise this wouldn’t be a counterexample!) We have that

\varphi(t) = \frac{1}{2}\ \varphi(2t) +  \varphi(2t-1) +  \frac{1}{2}\ \varphi(2t-2)\ .

I can show it to you (the command shows that I am adding three functions in V_1\ to get my scaling function in V_0\ ):

[image: sum of triangles]
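The identity is easy to check on a grid of t values (my own check, Python/NumPy):

```python
import numpy as np

# Verify the two-scale identity for the hat function:
#   phi(t) = (1/2) phi(2t) + phi(2t-1) + (1/2) phi(2t-2)
def phi(t):
    return np.maximum(0.0, 1.0 - np.abs(t - 1.0))

t = np.linspace(-1.0, 3.0, 4001)
lhs = phi(t)
rhs = 0.5*phi(2*t) + phi(2*t - 1) + 0.5*phi(2*t - 2)
print(np.max(np.abs(lhs - rhs)))   # ~ 0 (machine precision)
```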

Now, that’s our dilation equation, except that we don’t have the factor of \sqrt{2}\ . To put that another way, if the h’s are

\{\frac{1}{2},\ 1,\ \frac{1}{2}\}\

then their sum is 2.

To put that a third way, right now we have A = 1.

No big deal; to change the sum we multiply and divide by \sqrt{2}\ . This gives us our usual dilation equation

\varphi(t) = \sum_{n} h(n)\ \sqrt{2}\ \varphi(2t-n)\ ,

with new h’s

\{\frac{1}{2 \sqrt{2}},\frac{1}{\sqrt{2}},\frac{1}{2 \sqrt{2}}\}\ .

Now, we have 6 properties to check…

  • The sum of the h’s = 2/A.
  • \varphi(t) = \sum_n {h(n)\ A\ \ \varphi(2\ t - n)}\ .
  • E = \int\ \varphi(t)\ dt \ .
  • The sum of the even h’s = the sum of the odd h’s.
  • \sum_k { \varphi(\frac{k}{2^j})} = 2^j \text{(for E = 1)}
  • \sum_k { \varphi(t+k)} = 1 \text{(for E = 1)}\

And I have been consistently referring to even h’s, for example, when I really mean h’s with even index, h(2n).

We have (one) set A = \sqrt{2}\ and (two) the sum of the h’s is \sqrt{2} = 2/A\ .

There is only one odd h, h[1], and it’s \frac{1}{\sqrt{2}}\ , and each of the two even h’s is half that value, so we do have (three) the sum of the even h’s equals the sum of the odd h’s. Now is a good time to remark that the number of h’s is not even.
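Those sums take one line each to confirm (my own check, Python/NumPy):

```python
import numpy as np

# The new h's {1/(2 sqrt 2), 1/sqrt 2, 1/(2 sqrt 2)}: their sum is
# sqrt(2) = 2/A (with A = sqrt 2), and the even-indexed and
# odd-indexed h's have equal sums.
h = np.array([1.0, 2.0, 1.0]) / (2*np.sqrt(2))

print(h.sum())                        # ~ 1.41421, i.e. sqrt(2)
print(h[0::2].sum(), h[1::2].sum())   # both ~ 0.70711, i.e. 1/sqrt(2)
```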

What about E = \int\ \varphi(t)\ dt \ ?

Our scaling function is a triangle of base 2 and height 1, so (four) its area is 1, so E = 1.

That triangle is continuous, so we should have (five) the generalized partition of unity

\sum_k { \varphi(t+k)} = 1 \text{(for E = 1)}\

Let’s try a few. Here are t = 1/2, 1/4, -3/8, 31/16. This sample is not a proof, but it’s all I personally need for now.

[image: generalized partition]
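Those sample evaluations can be scripted (my own check, Python/NumPy):

```python
import numpy as np

# Sample the generalized partition of unity, sum_k phi(t + k) = 1,
# at t = 1/2, 1/4, -3/8, 31/16, with phi the hat function.
def phi(t):
    return np.maximum(0.0, 1.0 - np.abs(t - 1.0))

sums = {t: float(sum(phi(t + k) for k in range(-5, 6)))
        for t in (0.5, 0.25, -3/8, 31/16)}
print(sums)   # every value is 1.0
```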

And since E = 1 we should have (six) the dyadic sum

\sum_k { \varphi(\frac{k}{2^j})} = 2^j\ .

Let’s try the first few.
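In Python/NumPy (my own check), the first few dyadic sums come out as expected:

```python
import numpy as np

# Dyadic sums: sum_k phi(k / 2^j) should equal 2^j when E = 1,
# with phi the hat function of base 2 and height 1.
def phi(t):
    return np.maximum(0.0, 1.0 - np.abs(t - 1.0))

sums = [float(sum(phi(k / 2**j) for k in range(-4, 2**(j + 1) + 5)))
        for j in range(4)]
print(sums)   # [1.0, 2.0, 4.0, 8.0]
```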



Those linear splines and their translates do provide us with spaces V_0\ , V_1\ , and so on. They satisfy the 6 properties I expect them to.

But I have no idea, so far, how to find a function in W_0\ . There are two obvious candidates — orthogonal to the scaling function — but they fail to be orthogonal to all of its integer translates.

This is not a big deal. We’re going to warp those splines to hell and back, I think. Or maybe it will be the scaling function. I’m not sure yet, but something will be transmogrified.

There is a way to get an orthonormal basis from the set of splines: the result is called Battle-Lemarie wavelets. In contrast to everything we’ve seen, however, they do not have finite support (equivalently, they do not have a finite number of h’s) — but they are useful nevertheless.

So, we will see splines again.

