transpose matrix & adjoint operator 2

(
begin digression
just in case you’ve seen this in another form, let me make some connections. if you don’t recognize any of this digression, that’s ok. you can move along, there’s nothing to see here.
i am taking a linear operator L: V\rightarrow V, from a vector space V to the same vector space V. in fact, i have more than a vector space: V is an inner product space; i have a “dot product”. 
what i call the reciprocal basis is often called the dual basis. in fact, halmos calls it the dual basis. but that terminology is also associated specifically with the so-called dual space V* of linear functionals on V; in that case, the dual basis is a basis in V*. there can be a great deal of confusion here. the dual space V* can be defined without an inner product on V; the inner product on V can be defined without ever mentioning the dual space V*. but if we introduce both the inner product and V*, then there is a natural isomorphism between elements of V and of V*. i have seen people think of the one-to-one relationship between elements of V and V* as an identity, and confuse the inner product of two vectors in V with the effect of a linear functional on a vector. (worse, i have seen people assert that an inner product involves one element of V and one element of V*.)
there is a one-to-one correspondence between my right shoe and my left, but they are not identical. isomorphism is not always identity.
here, i have a finite-dimensional vector space V with an inner product on it. i have two bases (original and new) on V, and i want to construct a third basis for V. i call it the reciprocal basis to emphasize that it is not a basis on V*.
end digression
)
let’s see how this plays out.

our new basis, defined by the columns of P, is not orthonormal. we need to figure out what the reciprocal basis has to be. its purpose is to make dot products – and transposes – come out right.
the columns of P are the basis vectors for the new basis in which our matrix A became the diagonal matrix B.
recall P:
P = \left(\begin{array}{cc} 1&1\\ 1&0\end{array}\right)
what are the dot products of those two basis vectors with each other? well, we have them wrt the original basis, so we can compute their dot products as P^T\ P (that’s a convenient way of getting them in one fell swoop):
P^T\ P =\left(\begin{array}{cc} 1&1\\ 1&0\end{array}\right) \times \left(\begin{array}{cc} 1&1\\ 1&0\end{array}\right)
= \left(\begin{array}{cc} 2&1\\ 1&1\end{array}\right)
in fact, we did that in the schur’s theorem post, but i let it slide after saying “but P is not orthogonal”. 
the “2” says the first vector is of length \sqrt{2}; the diagonal “1” says the second vector is of length 1. each off-diagonal “1” says that the cross-term dot products are 1 instead of the 0 we get from orthogonal vectors: that is, if the basis vectors are e_1 and e_2:
1 = e_1\cdot e_2 = |e_1| \ |e_2| \ \cos \theta = \sqrt{2}\ \cos \theta
so
\cos \theta = \frac{1}{\sqrt{2}}\ and \ \theta = 45^{\circ}
“we knew that.”
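if you'd like to check that arithmetic by machine, here is a minimal numpy sketch (the code and variable names are my own, not part of the original discussion):

```python
import numpy as np

# columns of P are the new basis vectors, written wrt the original basis
P = np.array([[1, 1],
              [1, 0]])

# all the pairwise dot products of the new basis vectors, in one fell swoop
G = P.T @ P
print(G)                                   # [[2 1]
                                           #  [1 1]]

# angle between the two new basis vectors
cos_theta = G[0, 1] / np.sqrt(G[0, 0] * G[1, 1])
print(np.degrees(np.arccos(cos_theta)))    # 45.0 (up to rounding)
```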
now, what do these two basis vectors look like wrt the new basis, i.e. wrt themselves? what are their new components? they're trivial wrt themselves:
(1,\ 0)
(0,\ 1)
the first new basis vector is 1 times itself, plus no part of the second vector; the second new basis vector is no part of the first vector plus 1 times itself.
how could we compute the euclidean inner product of the new basis vectors using new components? 
we can’t do it by just multiplying together the components. if we did that, we would get the identity matrix, and we would mistakenly conclude that the two basis vectors were orthonormal. 
there’s something going on with the inner product and the transformation to the new basis. (we should get to this, but not today. what we’re looking at is called the induced metric; what we’re about to do is the linear algebra equivalent of lowering indices using the metric tensor g_{ij} in tensor analysis.)
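to make that concrete, here is a small sketch (again my own illustration): multiplying the new components directly gives the identity, while sandwiching the Gram matrix P^T\ P between the component vectors recovers the correct inner products.

```python
import numpy as np

P = np.array([[1, 1],
              [1, 0]])
G = P.T @ P                  # Gram matrix of the new basis -- the induced metric

# wrt themselves, the new basis vectors have trivial components
E = np.eye(2)                # columns are (1, 0) and (0, 1)

print(E.T @ E)               # identity: the naive (wrong) dot products
print(E.T @ G @ E)           # [[2 1],[1 1]]: the correct dot products
```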
i want another basis, the so-called reciprocal basis. i want a pair of vectors whose dot products with the new basis are 1 and 0. to put it another way, for this reciprocal basis i want an attitude matrix \alpha such that when i multiply \alpha times P (the rows of the reciprocal basis times the columns of the new basis) i get the identity. bear in mind that i am writing a matrix equation in the original basis, where the inner products come out correctly.
that is, i want
\alpha \ P = I
but P is invertible, inverses are unique, and therefore 
\alpha = P^{-1},
so we want to define the reciprocal basis by that attitude matrix.
\alpha = P^{-1} = \left(\begin{array}{cc} 0&1\\ 1&-1\end{array}\right)
and then i would write the transition matrix for the reciprocal basis as the transpose of its attitude matrix. call it Q:
Q = \alpha^T = \left(\begin{array}{cc} 0&1\\ 1&-1\end{array}\right)
(yes, that was a trivial computation.) 
we now have three bases: original, new, and reciprocal.
it’s worth noting for future use that the transition matrix for the reciprocal basis is the inverse transpose of the transition matrix of the new basis: 
Q = P^{-T}.
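a quick numeric confirmation of both facts (my own check, nothing more):

```python
import numpy as np

P     = np.array([[1, 1],
                  [1, 0]])
alpha = np.linalg.inv(P)     # attitude matrix of the reciprocal basis
Q     = alpha.T              # transition matrix of the reciprocal basis

print(alpha)                                 # [[ 0.  1.]
                                             #  [ 1. -1.]]
print(alpha @ P)                             # identity, as required
print(np.allclose(Q, np.linalg.inv(P.T)))    # True: Q really is P^{-T}
```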
what i have claimed is this: just as A is diagonal in the new basis, the transpose of A is diagonal in the reciprocal basis; more, i claim that it is the very same diagonal matrix B. in our original basis, we have A and its transpose:
A = \left(\begin{array}{cc} 1&1\\ 0&2\end{array}\right)
A^T = \left(\begin{array}{cc} 1&0\\ 1&2\end{array}\right)
they represent L and L* in the original basis. in the new basis, whose transition matrix is P, we knew that A becomes the diagonal matrix B:
B = \left(\begin{array}{cc} 2&0\\ 0&1\end{array}\right)
but that A^T becomes C^T which is not diagonal:
C^T = \left(\begin{array}{cc} 3&1\\ -2&0\end{array}\right)
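if you'd like to verify those two change-of-basis computations numerically, here is a short numpy check of my own:

```python
import numpy as np

A  = np.array([[1, 1],
               [0, 2]])
P  = np.array([[1, 1],
               [1, 0]])
Pi = np.linalg.inv(P)

print(Pi @ A @ P)      # [[2 0],[0 1]]  = B, diagonal
print(Pi @ A.T @ P)    # [[3 1],[-2 0]] = C^T, not diagonal
```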
we also believe that C^T represents L* wrt the new basis; it was defined by the change-of-basis formula for a matrix. now, in the reciprocal basis, whose transition matrix is Q, we compute Q^{-1}\ A^T\ Q to see what A^T becomes.
Q^{-1}\ A^T\ Q = \left(\begin{array}{cc} 1&1\\ 1&0\end{array}\right) \times \left(\begin{array}{cc} 1&0\\ 1&2\end{array}\right) \times \left(\begin{array}{cc} 0&1\\ 1&-1\end{array}\right)
=\left(\begin{array}{cc} 2&0\\ 0&1\end{array}\right)
which is B, as promised.
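and the corresponding machine check (again, just my own sketch):

```python
import numpy as np

A = np.array([[1, 1],
              [0, 2]])
Q = np.array([[0, 1],
              [1, -1]])              # transition matrix of the reciprocal basis

print(np.linalg.inv(Q) @ A.T @ Q)    # [[2 0],[0 1]] = B
```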
to repeat: after we diagonalize A, getting B, we still say that the transpose of B – which is B itself – is the matrix of the adjoint of A. the catch is that we have to be using a different basis, the reciprocal basis, for B^T.
it is so tempting to say the following that i must:
the adjoint of B is B^T, but wrt a different basis.
there, i’ve said it. it’s terribly sloppy, sloppier than even i am comfortable being. the word adjoint should be reserved for an operator, as the word transpose is reserved for a matrix. but if we started with a matrix A, and we never really got our hands on the operator L, it can be awkward. say what works for you, but be ready to introduce the operator L as soon as you need the clarity.
in our case, A and A^T represent L and its adjoint L* wrt the original basis; B and C^T represent L and L* wrt the new basis; and B represents L* wrt the reciprocal basis. in tabular form, what we have so far is:
     original    new    reciprocal
L    A           B
L*   A^T         C^T    B
or, A and B represent L wrt the original and new bases; A^T, C^T and B represent L* wrt the original, new, and reciprocal bases.
so what represents L wrt the reciprocal basis?
i hope you answered C, and not just because it wasn’t listed! with that hole filled, we have:
     original    new    reciprocal
L    A           B      C
L*   A^T         C^T    B
to check on C, we transform A to the reciprocal basis by computing Q^{-1}\ A \ Q: we believe this is C.
Q^{-1}\ A \ Q = \left(\begin{array}{cc} 1&1\\ 1&0\end{array}\right) \times \left(\begin{array}{cc} 1&1\\ 0&2\end{array}\right) \times \left(\begin{array}{cc} 0&1\\ 1&-1\end{array}\right)
=\left(\begin{array}{cc} 3&-2\\ 1&0\end{array}\right)
and that is, indeed, the transpose of C^T, which is C:
C = \left(\begin{array}{cc} 3&-2\\ 1&0\end{array}\right)
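one more quick check of my own, confirming that this last computation really does give the transpose of C^T:

```python
import numpy as np

A  = np.array([[1, 1],
               [0, 2]])
Q  = np.array([[0, 1],
               [1, -1]])
Ct = np.array([[3, 1],
               [-2, 0]])        # C^T, the matrix of L* wrt the new basis

C = np.linalg.inv(Q) @ A @ Q
print(C)                        # [[3 -2],[1 0]]
print(np.allclose(C, Ct.T))     # True
```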
let me close by saying that if we start with a given matrix M, instead of a linear operator L, the presumption is that M is wrt an orthonormal basis. if that’s not true, someone should have said something.
we started with the matrix A; we never had any other definition of the linear operator it represents.
the pair of operators L and L* are represented by a pair of matrices M and M^T, so long as those matrices are taken wrt a basis and its reciprocal basis, respectively.
confusing? if a basis is not orthonormal, you’ve got to introduce the reciprocal basis because dot products using components are messed up; and you can always get the matrix of an adjoint L* by transposing the matrix of L, but it’s wrt the reciprocal basis.
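that isn't special to our 2x2 example. if A represents L wrt an orthonormal basis and M = P^{-1}\ A\ P represents L wrt a new basis, then since Q = P^{-T}, the matrix of L* wrt the reciprocal basis is Q^{-1}\ A^T\ Q = P^T\ A^T\ P^{-T} = \left(P^{-1}\ A\ P\right)^T = M^T. here is a random-matrix sanity check (my own sketch, not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))     # matrix of L wrt an orthonormal basis
P = rng.standard_normal((3, 3))     # transition matrix to an arbitrary (non-orthogonal) basis
Q = np.linalg.inv(P).T              # transition matrix to the reciprocal basis

M      = np.linalg.inv(P) @ A   @ P     # L  wrt the new basis
M_star = np.linalg.inv(Q) @ A.T @ Q     # L* wrt the reciprocal basis

print(np.allclose(M_star, M.T))         # True: they are transposes of each other
```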
if you think you’ve got it, then i have to ask you: what would have happened if we had started with a self-adjoint or normal matrix?

2 Responses to “transpose matrix & adjoint operator 2”

  1. Abera Says:

    i need the relation between the adjoint of A and its inverse, if A is a matrix.

  2. rip Says:

    I’m sorry, but I don’t know what you mean by “the relation”. Can you be more specific?

    I did point out that the transition matrix for the reciprocal basis is the inverse transpose of the transition matrix for the original basis.

    rip

