## introduction

For a good reason which I have not yet discussed, Malinowski wants to find x vectors which are close to xhat vectors. (His x and xhat are usually written y and yhat for a least-squares fit.) He finds a possible x and tests it to see if xhat is close. He recommended computing an intermediate t vector, which was the coefficient vector for his least-squares fit to x.

Since he seems to care about t only when xhat is close to x, and since xhat is incredibly easy to compute directly, I prefer to delay the computation of t: find t after we've found a good x.
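To make the "delay t" point concrete, here is a minimal numpy sketch. The post's actual data matrix isn't reproduced here, so X is a random stand-in, and the shapes (5×3, two retained singular triplets) are assumptions of the sketch.

```python
import numpy as np

# Hypothetical stand-in: the post's actual data aren't shown,
# so use a random 5x3 matrix X and build R = u1 w1 from its SVD.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
u, sv, vt = np.linalg.svd(X)        # full SVD: X = u w v^T
u1, w1 = u[:, :2], np.diag(sv[:2])  # first two columns / singular values
R = u1 @ w1

x = np.ones(5)                      # a candidate x vector
xhat = u1 @ (u1.T @ x)              # compute xhat directly, no t needed

# Only if xhat turned out close to x would we bother computing t,
# the least-squares coefficients of x on R:
t, *_ = np.linalg.lstsq(R, x, rcond=None)
```

`np.linalg.lstsq` returns the least-squares coefficient vector t, and R @ t reproduces xhat, which is the sense in which t can wait until after xhat has been checked.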

It will also turn out that he wants a collection of t vectors in order to pick a nicer basis than u or u1. And I’m not going to follow him there, because all of that is what practitioners call “non-orthogonal rotations”. (That strikes me as an oxymoron.) It’s what Harman spends most of his book doing, and that’s where I’ll look if I ever want to. It’s important, but I’m not going to look at it this time around.

Anyway, we factored the data matrix as

$$X = R\,C, \qquad \text{with } R = u_1 w_1 \text{ and } C = v_1^T.$$

In addition, I’m keeping the full SVD handy: $X = u\,w\,v^T$.
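As a quick numerical check of that setup, here is a numpy sketch. Since the post's actual data matrix isn't shown, X is a random 5×3 stand-in, and taking C = v1ᵀ (so that R C is the rank-two piece of X) is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))           # hypothetical data matrix
u, sv, vt = np.linalg.svd(X)              # full SVD: X = u w v^T
w = np.zeros((5, 3))
np.fill_diagonal(w, sv)                   # rectangular diagonal w

u1, w1, v1t = u[:, :2], np.diag(sv[:2]), vt[:2]  # first two triplets
R, C = u1 @ w1, v1t                       # R = u1 w1, C = v1^T

# u w v^T reassembles X exactly; R C is the rank-2 part of that sum.
```

Note that R can also be read off as the first two columns of X v (i.e. of `X @ vt.T`), since X v = u w.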

For partial reference, recall that u1 is the first two columns of u.

We computed the hat matrix H directly as

$$H = R\,(R^T R)^{-1} R^T,$$

and we saw a few examples of computing $\hat{x} = H\,x$. In particular, we applied H to a vector of all 1s and looked at the resulting xhat.

I closed by saying that I could simplify the computation, clarify the concept, and even dispense with H if I chose.

Let’s see all that.
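Before simplifying anything, here is the direct computation as a numpy sketch, again with a random stand-in X since the post's data aren't reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))      # hypothetical data matrix
u, sv, vt = np.linalg.svd(X)
u1, w1 = u[:, :2], np.diag(sv[:2])
R = u1 @ w1

# Hat matrix computed directly, normal-equations style
H = R @ np.linalg.inv(R.T @ R) @ R.T

xhat = H @ np.ones(5)                # apply H to a vector of all 1s
```

The 5×5 matrix H maps any 5-vector to its least-squares fit in the column space of R.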

## computational simplification

We saw that $R^T R$ was quite simple (diagonal, just the square of w1) when we computed it. First, let’s confirm the simplicity of $R^T R$. Since

$$R = u_1 w_1,$$

we have

$$R^T R = w_1^T\,u_1^T u_1\,w_1 = w_1^T w_1 = w_1^2,$$

because $u_1^T u_1 = I$ (u1 is orthonormal although not orthogonal).
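That identity is easy to confirm numerically; a sketch with the same kind of random stand-in X:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
u, sv, vt = np.linalg.svd(X)
u1, w1 = u[:, :2], np.diag(sv[:2])
R = u1 @ w1

# u1 has orthonormal columns, so R^T R collapses to w1^2, a diagonal matrix
gram = R.T @ R
```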

Let’s take a look at the H matrix. We have

$$(R^T R)^{-1} = (w_1^2)^{-1} = w_1^{-2}, \qquad w_1^T = w_1,$$

because w1 is diagonal.

Now we can expand the definition of H:

$$H = R\,(R^T R)^{-1} R^T = u_1 w_1\;w_1^{-2}\;w_1 u_1^T = u_1\,u_1^T.$$

In other words, if we want the matrix H, we just compute $u_1 u_1^T$. That’s a lot easier than using the normal equations. (I should probably say that for regression, the hat matrix is defined as $H = X\,(X^T X)^{-1} X^T$; it simplified to $u_1 u_1^T$ here because R was special.)
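A quick numpy check that the two routes to H agree (random stand-in X, as before):

```python
import numpy as np

# Random stand-in for the post's data matrix (not the actual numbers)
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
u, sv, vt = np.linalg.svd(X)
u1, w1 = u[:, :2], np.diag(sv[:2])
R = u1 @ w1

H_normal = R @ np.linalg.inv(R.T @ R) @ R.T   # hat matrix via normal equations
H_simple = u1 @ u1.T                          # the simplified form
```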

## conceptual simplification

Fine, we have simplified the computation of H. This is about the time that I decided to find the matrix of H wrt the new basis u (not u1). I will confess that the hat matrix is a new friend of mine, not an old one; it took me a while to recognize him. The following computation helped a lot.

To transform H to the u basis, we would compute

$$u^{-1} H\,u,$$

i.e.

$$u^T H\,u,$$

because u, unlike u1, is orthogonal. We get

$$u^T H\,u = u^T\,u_1 u_1^T\,u = \operatorname{diag}(1,\,1,\,0,\,0,\,0).$$

OMG. Now that is a **projection operator**, in all its glory. It’s the identity on a 2D subspace, and the zero operator elsewhere.

It projects any vector onto the 2D subspace spanned by the data. All components other than the first two are mapped to zero. (If w1 were 3×3, we would have three 1’s, and be projecting onto a 3D subspace, and so on.)
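Numerically, transforming H to the u basis produces exactly that pattern; a sketch with the usual random stand-in X:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
u = np.linalg.svd(X)[0]       # full 5x5 orthogonal u
u1 = u[:, :2]
H = u1 @ u1.T                 # hat matrix in the original basis

# Matrix of H wrt the u basis; u is orthogonal, so u^{-1} = u^T
P = u.T @ H @ u
```

P comes out as the identity on the first two coordinates and zero elsewhere, i.e. diag(1, 1, 0, 0, 0).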

It turns out that the hat matrix H is always a projection operator; it doesn’t require the special form $R = u_1 w_1$.

Incidentally, there is a very simple test for whether a matrix A represents a projection operator: a matrix is **idempotent**, $A^2 = A$, if and only if the matrix represents a projection operator. And if A is idempotent then so is any similar matrix; i.e. if the linear operator is a projection, then all of its matrix representations are idempotent. It’s easy to show that: if B is similar to an idempotent matrix A,

$$B = S^{-1} A\,S,$$

then B is idempotent too:

$$B^2 = S^{-1} A\,S\;S^{-1} A\,S = S^{-1} A^2\,S = S^{-1} A\,S = B.$$
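Both halves of that test are easy to check numerically: H is idempotent, and so is any matrix similar to it. A sketch, where X is the usual random stand-in and S is a random (almost surely invertible) change-of-basis matrix, both assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
u1 = np.linalg.svd(X)[0][:, :2]
H = u1 @ u1.T                      # hat matrix, as u1 u1^T

S = rng.standard_normal((5, 5))    # random basis change, almost surely invertible
B = np.linalg.inv(S) @ H @ S       # a matrix similar to H
```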

We can show that the hat matrix is always a projection operator: with $H = X\,(X^T X)^{-1} X^T$ we get $H^2 = X\,(X^T X)^{-1} X^T X\,(X^T X)^{-1} X^T = X\,(X^T X)^{-1} X^T = H$, so H is idempotent.

I personally find it far more revealing, for this application, that $\hat{x} = H\,x$ is the projection of x into the 2D subspace than that it is the result of a least-squares fit. It is both, of course; I just prefer seeing H as a projection operator. That speaks to me.

So, although I computed $P = u^T H\,u$, I could (and should!) have computed

$$P = u^T\,u_1 u_1^T\,u.$$

Or, we could have computed H from that nice simple projection operator P:

$$H = u\,P\,u^T.$$

Yes, I found P from H, but now that we know that H has that form wrt the u basis, we could define P and compute H from it. Nevertheless, although the projection P is visually stunning, it’s not an easier way to compute H. I would still compute H from u1.
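Going the other way, building H from P, is a one-liner in numpy; a sketch with the usual random stand-in X:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
u = np.linalg.svd(X)[0]
u1 = u[:, :2]

P = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])   # the projection operator wrt the u basis
H = u @ P @ u.T                          # transform back to the original basis
```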

But only if I had some use for H.

## H is convenient, not essential.

Tell me again: what is xhat? It’s the projection of x into the 2D subspace spanned by u1.

So what would happen if we took x, found its components wrt the u basis, and zeroed out the components outside the subspace?

We would have the new components of xhat.

We would also have seen explicitly how far from the subspace x was. Then we could get the old components of xhat. (This works because u1 is a basis for the 2D subspace, and u1 is a subset of u.)

For an example, let’s take his all-1s vector. Get its new components by applying the inverse transition matrix $u^{-1} = u^T$ to x.

Looking at the new components of x, we see that the 4th component is very small and the 5th is small, but the 3rd component is as large as the 2nd: this vector isn’t close to being in the 2D subspace. Now zero out all but the first two components and we have the new components of xhat, a vector I’ll call s2. But we want the old components of xhat. Fine: the old components of xhat are found by applying the transition matrix u to s2, and we get exactly what we computed in the previous post by applying H to x.

Because we understand that xhat is the projection of x onto the 2D subspace, and because u was designed to include a subbasis for that 2D subspace, we can compute xhat without using H.
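That recipe, zeroing out the components outside the subspace, looks like this in numpy (random stand-in X, with the all-1s vector as the example x):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
u = np.linalg.svd(X)[0]
u1 = u[:, :2]

x = np.ones(5)
coords = u.T @ x          # new components of x wrt the u basis
s2 = coords.copy()
s2[2:] = 0.0              # zero out everything outside the 2D subspace
xhat = u @ s2             # back to old components: this is xhat, no H required

# The zeroed-out components measure how far x is from the subspace
dist = np.linalg.norm(coords[2:])
```

Because u is orthogonal, `dist` equals the distance from x to its projection xhat.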

As it happens, I find it easier to compute and use H for finding xhat, but I also really like knowing that I can get additional insight by looking at the new components of x, especially the ones outside the 2D subspace.

In summary then, if I must compare x with $\hat{x}$, I would compute $\hat{x} = H\,x = u_1 u_1^T\,x$. In the case that x and xhat are not close, that’s when I would use $u^T$ to look at the new components of x, to see the components outside the subspace.
