## orthogonality

**Edit 3 Aug. It is embarrassing to be so wrong in my guesses. Well, I understand more today than I did yesterday.**

The two forms of orthogonality are not the same. We saw in the subsequent post about semi-orthogonality that we could have an orthogonal direct sum while having non-orthonormal bases in $V_0$ and $W_0$. Conversely, if we had a direct sum which was not orthogonal, we could still choose orthonormal bases in the two spaces. Okay, let me rephrase that: if we have bases in $V_0$ and $W_0$ at all, I know we can make them orthonormal.

**I am no longer sure about the following guess about integer translates of elements of $W_0$. And I am completely wrong about the conjecture in red.**

Edit 28 Jun: I believe that once I know that $w$ is a wavelet (i.e. an element of $W_0$), then so are all of its integer translates. This remark occurs once later, as well as here.

Let me speak in general terms first.

There seem to be two properties subsumed under the rubric “orthogonality”.

Edit June 29. “Seem” is the correct word. I am not ready to make this precise, but these two apparently different properties are very close, and possibly the same. I think I can say that if the scaling function $\varphi$ and its translates are not orthogonal, then $W_0$ is not an orthogonal complement to $V_0$. To put that another way, we can still define a direct sum decomposition

$V_1 = V_0 \oplus W_0$,

but $V_0$ and $W_0$ are not orthogonal subspaces. And so, when I convinced myself that “if $w$ is a wavelet (i.e. an element of $W_0$), then so are all of its integer translates”, I was right… because I was assuming $W_0$ to be orthogonal to $V_0$, and that’s one heck of an assumption.

I am looking into non-orthogonal subspaces, and into biorthogonal wavelets.

**One is that we have, for example, the orthogonal direct sum decompositions**

$V_1 = V_0 \oplus W_0$,

$V_2 = V_1 \oplus W_1$,

$V_2 = V_0 \oplus W_0 \oplus W_1$.

We are using the L2 inner product, $\langle f, g \rangle = \int_{-\infty}^{\infty} f(x)\, g(x)\, dx$. The functions f and g are orthogonal if $\langle f, g \rangle = 0$.

The first decomposition tells us that all the functions in $V_0$ are orthogonal to all the functions in $W_0$. The second decomposition tells us the same thing about $V_1$ and $W_1$: all the functions in $V_1$ are orthogonal to all the functions in $W_1$.

The third decomposition tells us more: all the functions in $V_0$ are orthogonal to all the functions in $W_1$; and all the functions in $W_0$ are orthogonal to all the functions in $W_1$.

In particular, the scaling function $\varphi$ and its integer translates which defined $V_0$ are orthogonal to all the wavelets in $W_0$ and all the wavelets in $W_1$.

In other words, for the scaling function $\varphi$ we should have

$\langle \varphi(x), w(x) \rangle = 0$,

$\langle \varphi(x-k), w(x) \rangle = 0$,

where $w$ is any function (any wavelet) in any space $W_j$. (Yes, in particular this is true if $w$ is the mother wavelet $\psi$; we must hesitate to talk about the arbitrary translates or the integer translates of $\psi$ until we know that they are in $W_0$.) **Edit 28 Jun: I believe that once I know that $w$ is a wavelet (i.e. an element of $W_0$), then so are all of its integer translates. We can stop hesitating.**

Similarly, we would have, for example,

$\langle \psi(x), w(x) \rangle = 0$,

where $w$ is now restricted to any function (i.e. any wavelet) in any $W_j$ except $W_0$. Why? $\psi \in W_0$, which is orthogonal to $W_1$ but not necessarily to the rest of $W_0$.

Let me point out that one of the challenges here is that we don’t know much about the spaces $W_j$; in particular, we don’t know yet if $w(x) \in W_0$ implies that $w(x-k) \in W_0$.

**The second property is that we may have that the scaling function $\varphi$ and its integer translates are orthogonal:**

$\langle \varphi(x), \varphi(x-k) \rangle = \delta_{0k}$,

where $\delta_{0k}$ is the Kronecker delta: $\delta_{00} = 1$, and $\delta_{0k} = 0$ for all k ≠ 0.

Note that I do not need the distribution, the “Dirac delta function”, whose defining property is the inner product $\langle \delta, f \rangle = f(0)$. I just want to write one equation instead of these two:

$\langle \varphi(x), \varphi(x) \rangle = 1$,

$\langle \varphi(x), \varphi(x-k) \rangle = 0$, for k ≠ 0.

However we write it, this is a property of major importance. We can deduce a whole lot more about wavelets in general, about a suitable mother wavelet, and about integer translations of wavelets. In particular, if the scaling function is orthogonal to its integer translates, then integer translates of the mother wavelet are also orthogonal to each other, and we speak of orthogonal wavelets.

This property does hold, of course, for the Haar system and for all the Daubechies scaling functions. (Technically, that’s redundant: we will eventually show that the Haar system is in fact D2, the Daubechies wavelets defined by 2 nonzero h’s.)
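For what it’s worth, the orthonormality of the integer translates is easy to check numerically in the Haar case. Here is a quick Python sketch (mine, not anything from these posts; it assumes the Haar scaling function is the indicator of [0, 1) and approximates the L2 inner products by a midpoint rule):

```python
# Midpoint-rule check that the Haar scaling function
# (the indicator of [0,1)) is orthonormal to its integer translates.

def haar(x):
    # Haar scaling function: 1 on [0,1), 0 elsewhere
    return 1.0 if 0.0 <= x < 1.0 else 0.0

def inner(f, g, a=-4.0, b=4.0, n=80000):
    # approximate the L2 inner product <f, g> by the midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n))

for k in range(-2, 3):
    # <phi(x), phi(x-k)>: 1 when k = 0, and 0 otherwise
    print(k, round(inner(haar, lambda x, k=k: haar(x - k)), 6))
```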

The reason I emphasize that there are two ideas of orthogonality here – orthogonal direct sums and integer translates of the scaling function – is that I sometimes get the impression – and it may be a mistaken impression – that people are using the latter to prove consequences of the former. But the orthogonal direct sums are more general.

Recall that we can — and have — used non-orthonormal (no, I didn’t say non-orthogonal) bases in finite dimensional vector spaces. These lead us quickly to the definition of a reciprocal basis: we take dot products of a vector with the reciprocal basis to compute the basis-coefficients of that vector. (As I post this, at least, “reciprocal basis” is in the tag cloud, so you can look there.)

Well, we can do the same thing with wavelets. If we have a basis of wavelets but it is not orthonormal, then we need to construct the reciprocal basis. The pair, basis of wavelets and reciprocal basis, is called a bi-orthogonal system. Just a different name for something we’ve seen before.

(If we have an orthogonal basis which is not orthonormal, the reciprocal basis differs from the original only by scale factors on each vector but the directions are the same; this means we just end up with scale factors associated with the dot-products. I will show you this, below. But what it means is that, in practice, a merely orthogonal basis is almost as good as an orthonormal one.)

I am looking forward to playing with a bi-orthogonal system, so that I can look for the properties which depend on the orthogonal direct sum decomposition rather than on the orthogonality of integer translates of the scaling function.

If the integer translates are orthogonal but not orthonormal, we can fix that just by dividing $\varphi$ by its norm $\sqrt{P}$, where P is defined by

$P = \langle \varphi(x), \varphi(x) \rangle = \int_{-\infty}^{\infty} \varphi(x)^2\, dx$.

## digression on Fourier

Suppose we have a function which is precisely a finite Fourier series, say,

$f(x) = 3 \cos(x) + 2 \sin(x)$.

That seems unambiguous and straightforward: our f(x) can be written exactly using sin(x) and cos(x). But, with the L2 inner product on $[-\pi, \pi]$,

$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x)\, g(x)\, dx$,

and the associated norm

$\|f\| = \sqrt{\langle f, f \rangle}$,

we have that the set $\{1, \cos(nx), \sin(nx)\}$ is orthogonal but not orthonormal. Each of sin(nx) and cos(nx) has norm $\sqrt{\pi}$ (because the integral of the square is $\pi$)…

and the constant function 1 (= cos(0x), if you will) has norm $\sqrt{2\pi}$.

The orthonormal basis is constructed by dividing each of these functions by its norm; the reciprocal basis is constructed by dividing each of these functions by the square of its norm; equivalently, the reciprocal basis is constructed by dividing each of the orthonormal functions by the norm of the original function.

However we carry it out, the end result is that the set

$\left\{ \frac{1}{\sqrt{2\pi}},\ \frac{\cos(nx)}{\sqrt{\pi}},\ \frac{\sin(nx)}{\sqrt{\pi}} \right\}$

is an orthonormal basis; and the set

$\left\{ \frac{1}{2\pi},\ \frac{\cos(nx)}{\pi},\ \frac{\sin(nx)}{\pi} \right\}$

is the basis reciprocal to $\{1, \cos(nx), \sin(nx)\}$.

In this form…

$f(x) = 3 \cos(x) + 2 \sin(x)$,

f(x) is given in terms of the orthogonal but non-orthonormal basis. We can confirm that its components {3, 2} wrt that basis can be computed as the dot products of f with the reciprocal basis. Only two terms are nonzero:

$\left\langle f, \frac{\cos(x)}{\pi} \right\rangle = 3$ and $\left\langle f, \frac{\sin(x)}{\pi} \right\rangle = 2$.
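If you want to verify those dot products without doing the integrals by hand, here is a small Python sketch (my own check, not part of the original derivation; it approximates the inner products on $[-\pi, \pi]$ by a midpoint rule):

```python
import math

# f(x) = 3 cos(x) + 2 sin(x): recover the components {3, 2} as inner
# products of f with the reciprocal basis elements cos(x)/pi and sin(x)/pi.

def f(x):
    return 3.0 * math.cos(x) + 2.0 * math.sin(x)

def inner(g, h, n=100000):
    # midpoint rule for the L2 inner product on [-pi, pi]
    a = -math.pi
    step = 2.0 * math.pi / n
    return step * sum(g(a + (i + 0.5) * step) * h(a + (i + 0.5) * step)
                      for i in range(n))

c1 = inner(f, lambda x: math.cos(x) / math.pi)      # component along cos(x)
s1 = inner(f, lambda x: math.sin(x) / math.pi)      # component along sin(x)
c0 = inner(f, lambda x: 1.0 / (2.0 * math.pi))      # component along 1
print(c1, s1, c0)   # approximately 3, 2, and 0
```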

We are more likely to have simply been told that the coefficients are to be found using

$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx$

and

$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx$.

Nothing about a reciprocal basis, just a recipe with a god-given factor of $\frac{1}{\pi}$ in front of the integrals. Conceptually, those factors belong inside the integrals, under the trig functions. But it’s the same calculation.

If, on the other hand, we had constructed the orthonormal basis $\left\{ \frac{1}{\sqrt{2\pi}},\ \frac{\cos(nx)}{\sqrt{\pi}},\ \frac{\sin(nx)}{\sqrt{\pi}} \right\}$, then the coefficients wrt the orthonormal basis are

$\{3\sqrt{\pi},\ 2\sqrt{\pi}\}$,

which may look weird until I remind you that we are implicitly writing

$f(x) = 3\sqrt{\pi}\, \frac{\cos(x)}{\sqrt{\pi}} + 2\sqrt{\pi}\, \frac{\sin(x)}{\sqrt{\pi}}$,

which is exactly our function f.

The point I am belaboring – perhaps excessively – is that most of us have been coping with an orthogonal but non-orthonormal basis most of our lives, and constructing the reciprocal basis in such a case amounts to just sticking a scaling factor in front of the dot product calculation for the components. We get the components without explicitly describing the reciprocal basis.

**We do much the same for wavelets and the scaling functions: so long as they are orthogonal to their integer translates, we cope with their norms not being 1.**

## a counterexample

Since I haven’t yet itemized the consequences of orthogonality under integer translation, I won’t show you that the following scaling function violates them. But I want to put this example out here, because it does satisfy all 6 consequences of the dilation equation; but it is not orthogonal under integer translation, and it will not satisfy the consequences of orthogonal integer translates.

And yet, because we have clearly defined spaces $V_0$, $V_1$, etc., we should have orthogonal direct sum decompositions.

Here it is. Just a triangle: zero outside [0, 2], rising linearly from 0 at x = 0 to 1 at x = 1, then falling linearly back to 0 at x = 2. You can also call it a linear spline. (And higher-order splines are also counter-examples.)

So, we can define the space $V_0$ as the span of $\varphi(x)$ and its integer translates $\varphi(x-k)$.

NOTE that because the triangle is positive on (0,2), it cannot be orthogonal to two of its integer translates, namely $\varphi(x-1)$ and $\varphi(x+1)$. They overlap and there are no negative values to cancel things out.

**That is, it is false that $\varphi$ is orthogonal to all of its integer translates.** All but two is two too few.

For example, we can look at $\varphi(x)$ (black) and $\varphi(x-1)$ (red) and their intersection (yellow)…

That the area under the intersection is positive tells us that these functions are not orthogonal. The inner product, however, is not the area. We can estimate the inner product by a sum…

I should draw the product $\varphi(x)\,\varphi(x-1)$…

but of course we can compute the inner product exactly, too. Since the functions are linear, their product is quadratic (where it’s not zero) and the integral is cubic. In fact, it’s

$\langle \varphi(x), \varphi(x-1) \rangle = \int_1^2 (2-x)(x-1)\, dx = \frac{1}{6}$.
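Here is a quick numerical confirmation in Python (a sketch of mine; it assumes the triangle is x on [0, 1] and 2 - x on [1, 2]):

```python
# Midpoint-rule check of <phi(x), phi(x-1)> for the triangle
# phi(x) = x on [0,1], 2 - x on [1,2], 0 elsewhere.

def hat(x):
    # the triangular (linear spline) scaling function
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

def inner(f, g, a=0.0, b=3.0, n=300000):
    # approximate the L2 inner product <f, g> by the midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n))

ip = inner(hat, lambda x: hat(x - 1))
print(ip)   # approximately 1/6 = 0.16666...
```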

Anyway, we take that triangular scaling function and all its integer translates to be a basis for $V_0$.

We can define $V_1$ as the span of the set $\{\varphi(2x-k)\}$. Those are just narrower triangles of height 1, e.g. $\varphi(2x)$ looks like….

**The really big question is: is $V_0$ a subspace of $V_1$, $V_0 \subset V_1$?**

The answer is yes. (Otherwise this wouldn’t be a counterexample!) We have that

$\varphi(x) = \frac{1}{2}\,\varphi(2x) + \varphi(2x-1) + \frac{1}{2}\,\varphi(2x-2)$.

I can show it to you (the command shows that I am adding three functions in $V_1$ to get my scaling function in $V_0$):
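Here is the same demonstration as a Python sketch (mine, not the Mathematica command; it assumes the triangle is x on [0, 1] and 2 - x on [1, 2]):

```python
# Check pointwise that phi(x) = (1/2) phi(2x) + phi(2x-1) + (1/2) phi(2x-2),
# i.e. that the triangle in V0 is a sum of three narrower triangles in V1.

def hat(x):
    # the triangular (linear spline) scaling function
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

ok = True
for i in range(-20, 61):
    x = i / 20.0   # sample points on [-1, 3]
    rhs = 0.5 * hat(2 * x) + hat(2 * x - 1) + 0.5 * hat(2 * x - 2)
    ok = ok and abs(hat(x) - rhs) < 1e-9
print(ok)   # True
```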

Now, that’s our dilation equation, except that we don’t have the factor of $\sqrt{2}$. To put that another way, if the h’s are

$h = \left\{ \frac{1}{2},\ 1,\ \frac{1}{2} \right\}$,

then their sum is 2.

To put that a third way, right now we have A = 1.

No big deal; to change the sum we multiply and divide by $\sqrt{2}$. This gives us our usual dilation equation

$\varphi(x) = \sqrt{2}\, \sum_k h_k\, \varphi(2x-k)$,

with new h’s

$h = \left\{ \frac{1}{2\sqrt{2}},\ \frac{1}{\sqrt{2}},\ \frac{1}{2\sqrt{2}} \right\}$.

Now, we have 6 properties to check…

- The sum of the h’s = 2/A.
- $\int_{-\infty}^{\infty} \varphi(x)\, dx = E$, and $\sum_k \varphi(x+k) = E$ (the generalized partition of unity).
- $\sum_k \varphi(k/2^j) = 2^j\, E$ (the dyadic sums).
- The sum of the even h’s = the sum of the odd h’s.

And I have been consistently referring to even h’s, for example, when I really mean h’s with even index, h(2n).

We have (**one**) set A = $\sqrt{2}$ and (**two**) the sum of the h’s is $2/A = \sqrt{2}$.

There is only one odd h, h[1], and it’s $\frac{1}{\sqrt{2}}$, and each of the two even h’s is half that value, so we do have (**three**) the sum of the even h’s equals the sum of the odd h’s. Now is a good time to remark that the number of h’s is not even.

What about E?

Our scaling function is a triangle of base 2 and height 1, so (**four**) its area is 1, so E = 1.

That triangle is continuous, so we should have (**five**) the generalized partition of unity

$\sum_k \varphi(t+k) = E = 1$.

Let’s try a few. Here are t = 1/2, 1/4, -3/8, 31/16. This sample is not a proof, but it’s all I personally need for now.
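Here is that sample as a Python sketch (mine, with the same assumed triangle: x on [0, 1], 2 - x on [1, 2]):

```python
# Check the generalized partition of unity, sum over k of phi(t + k) = 1,
# at the sample points t = 1/2, 1/4, -3/8, 31/16.

def hat(x):
    # the triangular (linear spline) scaling function
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

for t in (1/2, 1/4, -3/8, 31/16):
    total = sum(hat(t + k) for k in range(-4, 5))
    print(t, total)   # each total is 1.0
```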

And since E = 1 we should have (**six**) the dyadic sum

$\sum_k \varphi(k/2^j) = 2^j$.

Let’s try the first few.
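And a Python sketch of the first few dyadic sums (same assumed triangle):

```python
# Check the dyadic sums: sum over k of phi(k / 2^j) = 2^j, for small j.

def hat(x):
    # the triangular (linear spline) scaling function
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

for j in range(5):
    # phi is supported on [0, 2], so k ranges over 0 .. 2^(j+1)
    total = sum(hat(k / 2**j) for k in range(0, 2**(j + 1) + 1))
    print(j, total)   # expect 2**j
```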

## Remarks

Those linear splines and their translates do provide us with spaces $V_0$, $V_1$, and so on. They satisfy the 6 properties I expect them to.

But I have no idea, so far, how to find a function in $W_0$. There are two obvious candidates — orthogonal to the scaling function — but they fail to be orthogonal to all of its integer translates.

This is not a big deal. We’re going to warp those splines to hell and back, I think. Or maybe it will be the scaling function. I’m not sure yet, but something will be transmogrified.

There is a way to get an orthonormal basis from the set of splines: the result is called the Battle–Lemarié wavelets. In contrast to everything we’ve seen, however, they do not have finite support (equivalently, they do not have a finite number of h’s) — but they are useful nevertheless.

So, we will see splines again.
