I’ve seen two things like this. First, the diagonalization of a square matrix A:

$$D = P^{-1} A P,$$

where

D is diagonal.

**Correction:** if A is symmetric, P may be chosen orthogonal.

We recall that not every matrix can be diagonalized; instead, every matrix can be brought to Jordan canonical form. OTOH the set of matrices which cannot be diagonalized is a subset of measure zero. Nevertheless, for some applications (e.g. differential equations), matrices which cannot be diagonalized play a significant role. I’m sure I’ll talk about this someday.
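A quick numerical sketch of that point (the specific matrix and the use of numpy are my choices, not from the post): the classic 2×2 Jordan block has a repeated eigenvalue but only a one-dimensional eigenspace, so no basis of eigenvectors exists and it cannot be diagonalized.

```python
import numpy as np

# A 2x2 Jordan block: the standard example of a non-diagonalizable matrix.
# Its only eigenvalue is 1, with a one-dimensional eigenspace.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

print(eigenvalues)   # repeated eigenvalue 1
# The two computed eigenvector columns are (numerically) parallel,
# so the eigenvector matrix is singular and cannot serve as P:
print(np.linalg.det(eigenvectors))
```

Since the eigenvector matrix is singular, there is no invertible P with P^{-1} A P diagonal.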

Looks a little different from the SVD, you say? We always have to pay attention to whether we decomposed a matrix or transformed it. Rewrite the transformation as a decomposition of A:

$$A = P D P^{-1},$$

and then as

$$A = P D P^T,$$

since P was chosen orthogonal.
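Here is that decomposition computed numerically (a sketch; the symmetric matrix is my example, not the post’s):

```python
import numpy as np

# Diagonalize a symmetric matrix: A = P D P^T with P orthogonal.
# np.linalg.eigh is numpy's routine for symmetric/Hermitian matrices.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eigh(A)   # columns of P are orthonormal eigenvectors
D = np.diag(eigenvalues)

# P is orthogonal (P^T P = I), and P D P^T reconstructs A:
print(np.allclose(P.T @ P, np.eye(2)))
print(np.allclose(P @ D @ P.T, A))
```

Because P is orthogonal, P^{-1} = P^T, which is exactly why the decomposition can be written either way.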

Now that resembles the SVD:

$$X = u\, w\, v^T.$$

A is square, X is not (in general); D is diagonal most of the time, while w is as close to diagonal as a rectangular matrix can get; and instead of one orthogonal matrix P, we have two orthogonal matrices u and v.

The SVD is a generalization of the eigenvector decomposition, from square matrices to matrices of arbitrary shape.

Among other things, this means that any square matrix can be diagonalized by the u and v of its SVD; it just can’t always be done with u = v = P.

To put that another way: if we can change the bases independently in the domain and codomain, then we can diagonalize any square matrix; but if we require the same change of basis in both the domain and codomain, then we may not be able to diagonalize the matrix.
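To see this concretely (a numpy sketch under my own choice of example), take the same non-diagonalizable Jordan block from above: no single P diagonalizes it, but the two orthogonal matrices from its SVD do.

```python
import numpy as np

# The non-diagonalizable Jordan block again:
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

u, s, vT = np.linalg.svd(A)
w = np.diag(s)
v = vT.T

# Two *different* orthogonal changes of basis diagonalize A:
print(np.allclose(u.T @ A @ v, w))   # u^T A v = w, with w diagonal
print(np.allclose(A, u @ w @ vT))    # and A = u w v^T reconstructs A
```

Here u ≠ v, which is precisely what the eigenvector decomposition could not offer: it insists on the same basis change on both sides.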
