## discussion

There was a lot going on in the example of the SN decomposition (2 of 3). First off, we found eigenvalues of a non-diagonable matrix A, and constructed a diagonal matrix D from them. Then we found 2 eigenvectors and 1 generalized eigenvector of A, and used them to construct a transition matrix P. We used that transition matrix to go from our diagonal D back to the original basis, and find S similar to D.

So S is diagonable while A is not. A and S have the same eigenvalues, and the columns of P should be eigenvectors of S. They are. The generalized eigenvector that we found for A is an (ordinary) eigenvector of S, but we needed that generalized eigenvector of A in order to construct S from D.

I wonder. Can I understand the distinction between eigenvectors and generalized eigenvectors by studying S and A? We’ll see.
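The observations above can be checked numerically. This is my own NumPy sketch, not the post's Mathematica session: the matrix A is Perko's example from the next excerpt, and the two eigenvectors and one generalized eigenvector are my reconstruction for that matrix, so treat those particular vectors as assumptions.

```python
import numpy as np

# Perko's example matrix; the vectors below are my own reconstruction
A = np.array([[1., 0., 0.],
              [-1., 2., 0.],
              [1., 1., 2.]])

v1 = np.array([1., 1., -2.])   # eigenvector for lambda = 1
e2 = np.array([0., 0., 1.])    # eigenvector for lambda = 2
g2 = np.array([0., 1., 0.])    # generalized eigenvector: (A - 2I) g2 = e2

P = np.column_stack([v1, e2, g2])   # transition matrix
D = np.diag([1., 2., 2.])           # the eigenvalues of A

S = P @ D @ np.linalg.inv(P)        # semisimple part, similar to D
N = A - S                           # nilpotent part

# A and S have the same eigenvalues...
print(np.sort(np.linalg.eigvals(S).real))   # [1. 2. 2.]

# ...the generalized eigenvector of A is an ordinary eigenvector of S...
print(S @ g2)                               # equals 2 * g2

# ...and N is nilpotent and commutes with S, as the SN decomposition requires
print(N @ N)                                # zero matrix
print(np.allclose(S @ N, N @ S))            # True
```

Note that S turns out lower triangular here, so its eigenvalues can be read off the diagonal directly.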

## let’s look at an example of the SN decomposition.

This comes from Perko p. 35, example 3. We will compute exp(A) using the SN decomposition described in the previous post (“1 of 3”).

We take the following matrix:

$A = \left(\begin{array}{lll} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 1 & 1 & 2\end{array}\right)$

I ask Mathematica® to find its eigenvalues and eigenvectors. For eigenvalues, I get

$\lambda = \{2,\ 2,\ 1\}$

For the eigenvector matrix, I get

$\left(\begin{array}{lll} 0 & 0 & -1 \\ 0 & 0 & -1 \\ 1 & 0 & 2\end{array}\right)$
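The column of zeros in that matrix is how Mathematica signals a defective eigenvalue: there is only one independent eigenvector for the repeated eigenvalue 2. A quick NumPy cross-check (my own sketch, not from the post) confirms this by computing the geometric multiplicity directly:

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [-1., 2., 0.],
              [1., 1., 2.]])

# eigenvalues: 2, 2, 1
eigvals = np.sort(np.linalg.eigvals(A).real)
print(eigvals)  # [1. 2. 2.]

# geometric multiplicity of lambda = 2:
# dim null(A - 2I) = 3 - rank(A - 2I)
rank = np.linalg.matrix_rank(A - 2 * np.eye(3))
print(3 - rank)  # 1 -> only one independent eigenvector, so A is defective
```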

## discussion

One correction 30 Aug 2008, per Brian Hall’s comment and counterexample here. See “edit” in this post.

Let me emphasize something I take for granted: I’m writing about theory as opposed to computations by computer. Every time I say, for example, that two matrices A and B are similar via

$B = P^{-1}\ A\ P$,

I am asserting that P is a transition matrix for a change-of-basis, and by definition it is invertible. In practice, the matrix P can be ill-conditioned, and computing $P^{-1}$ on a computer may be extremely hazardous.
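Both halves of that remark can be illustrated in a few lines. This is my own sketch with a hypothetical integer transition matrix P (any invertible matrix would do): similar matrices share eigenvalues, and the condition number of P is one measure of how hazardous computing $P^{-1}$ would be in floating point.

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [-1., 2., 0.],
              [1., 1., 2.]])

# a hypothetical change-of-basis matrix (invertible: det = 2)
P = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

B = np.linalg.inv(P) @ A @ P   # B is similar to A

# similarity preserves eigenvalues
print(np.sort(np.linalg.eigvals(A).real))  # [1. 2. 2.]
print(np.sort(np.linalg.eigvals(B).real))  # approximately [1. 2. 2.]

# the numerical caveat: a large condition number means inv(P) is hazardous
print(np.linalg.cond(P))       # modest here, so this P is well-behaved
```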

An updated version of a classic paper may be found here. Having just looked at it, I may pepper my theoretical discussion with some vaguely computational comments.

Back to my ivory tower. Well, it’s hardly that, but it’s still far removed from numerical algorithms.

The definition of the matrix exponential is easy enough to write: as the exponential of a number x has the expansion

$e^x = 1 + x + x^2\ /\ 2!\ + x^3\ /\ 3!\ +\ ...$

we define

$exp(A) = I + A + A^2\ /\ 2! + A^3\ /\ 3!\ +\ ...$

But how do we actually compute it (in theory)? Well, we could just try computing the powers of A and look for a pattern, but there are easier ways. Far easier ways. (And just doing it numerically can be a bad idea: we can encounter disastrous cancellation between one power of A and the next.)
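Both routes can be sketched side by side: the brute-force truncated series, and the SN shortcut exp(A) = exp(S) exp(N) = P e^D P^{-1} (I + N), which works here because N² = 0 and S and N commute. The particular P and D are my own reconstruction for the example matrix, so treat those values as assumptions.

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [-1., 2., 0.],
              [1., 1., 2.]])

# brute force: truncated power series  I + A + A^2/2! + A^3/3! + ...
expA_series = np.zeros((3, 3))
term = np.eye(3)
for k in range(25):
    expA_series += term
    term = term @ A / (k + 1)   # next term: A^(k+1) / (k+1)!

# the easier way: exp(A) = P e^D P^{-1} (I + N), my reconstructed P and D
P = np.array([[1., 0., 0.],
              [1., 0., 1.],
              [-2., 1., 0.]])
D = np.diag([1., 2., 2.])
S = P @ D @ np.linalg.inv(P)
N = A - S                       # nilpotent: N @ N = 0, so exp(N) = I + N
expA_sn = P @ np.diag(np.exp([1., 2., 2.])) @ np.linalg.inv(P) @ (np.eye(3) + N)

print(np.allclose(expA_series, expA_sn))  # True: the two routes agree
```

The series converges quickly for a matrix this small; the point of the warning above is that for larger entries the partial sums can suffer catastrophic cancellation, which the SN route avoids.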

## References for generalized eigenvectors, the matrix exponential and for the SN decomposition.

One correction 30 Aug 2008, per Brian Hall’s comment below. See “edit” in this post.

First, let’s talk about books which are already in the bibliography. It’s pretty easy to find statements that for matrix exponentials,

$exp(A+B) = exp(A)\ exp(B)$
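That identity is guaranteed when A and B commute; a quick numerical check (my own example, not from the post) shows it can fail otherwise. These two nilpotent matrices do not commute, and their exponentials are exact finite sums, so no approximation is involved:

```python
import numpy as np

# two non-commuting nilpotent matrices (my example)
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])
print(np.allclose(A @ B, B @ A))  # False: they do not commute

# A^2 = B^2 = 0, so the series terminates: exp(A) = I + A, exp(B) = I + B
expA = np.eye(2) + A
expB = np.eye(2) + B

# exp(A + B) in closed form: M = [[0,1],[1,0]] satisfies M^2 = I,
# so exp(M) = cosh(1) I + sinh(1) M
expApB = np.array([[np.cosh(1), np.sinh(1)],
                   [np.sinh(1), np.cosh(1)]])

print(expA @ expB)   # [[2, 1], [1, 1]]
print(expApB)        # entries cosh(1) ~ 1.543 and sinh(1) ~ 1.175
print(np.allclose(expA @ expB, expApB))  # False: the identity fails here
```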