## PCA / FA example 4: davis. Reciprocal basis 2 & 3.

The reciprocal basis for $A^R$ using explicit bases.

Here I go again, realizing that I’m being sloppy. i call

$\{2,1,-3\}$

the second data vector, but of course, those are the components of the second data vector. a lot of us blur this distinction, between a vector and its components, a lot of the time. So long as we work only with components, this isn’t an issue, but we’re about to write vectors. i wouldn’t go so far as to say that the whole point of matrix algebra is to let us blur that distinction, but it’s certainly a major reason why we do matrix algebra. but for now, let’s look at the linear algebra, distinguishing between vectors and their components.

i need a name for the second data vector; let’s call it $s$. we will write it two ways, with respect to two bases, and show that the two ways are equivalent.

We have the original basis. We never write it explicitly, because the basis vectors have components (1,0,0), (0,1,0), and (0,0,1). let me call this basis $e_i$. because it’s an orthonormal basis, it is its own reciprocal basis, and i will call the reciprocal basis $e^i$. yes, i use the same symbol $e$, but the original basis has subscripts, while the reciprocal basis has superscripts.

We have the basis described by $A^R$; let me call those vectors $f_i$. we found the reciprocal basis (the matrix $B$), and i will call its vectors $f^i$. same convention: subscripts or superscripts on the common symbol $f$.

let us recall the transition matrix $B$ for the reciprocal basis:

$\left(\begin{array}{ll} 0.0890871 & 0. \\ -0.0445435 & -0.204124 \\ -0.0445435 & 0.204124\end{array}\right)$

let us also recall the design matrix $X$:

$\left(\begin{array}{lll} -6 & 3 & 3 \\ 2 & 1 & -3 \\ 0 & -1 & 1 \\ 4 & -3 & -1\end{array}\right)$

When we say the components of the second data vector are

$\{2,1,-3\}$

we are asserting that the second data vector is

$s = 2\ e_1 + e_2 - 3\ e_3$.

When we say that the first reciprocal basis vector has components given by the first column of B…

$\{0.0890871,-0.0445435,-0.0445435\}$

we are asserting that the first reciprocal basis vector is

$f^1= 0.0890871\ e^1\ -0.0445435\ e^2 - 0.0445435\ e^3$

Similarly, the second reciprocal basis vector, whose components are the second column of B, is

$f^2 = 0.\ e^1 - 0.204124\ e^2 + 0.204124\ e^3$

Let us recall (the first two columns of) $S^R$:

$\left(\begin{array}{ll} -67.3498 & 0. \\ 22.4499 & -9.79796 \\ 0. & 4.89898 \\ 44.8999 & 4.89898\end{array}\right)$

When we say that the second row of $S^R$

$\{22.4499,-9.79796\}$

contains the components of the second data vector wrt the reciprocal basis $\left(f^i\right)$, we are saying that

$s = 22.4499 f^1 - 9.79796 f^2$.

but we defined the vector s as

$s = 2\ e_1 + e_2 - 3\ e_3$.

The two forms of s should be the same vector. We start with…

$s = 22.4499 f^1 - 9.79796 f^2$.

Plug in the expressions for $f^1$ and $f^2$

$s = 22.4499 \left(0.0890871 e^1 - 0.0445435 e^2 - 0.0445435 e^3\right)$
$- 9.79796 \left(0. e^1 - 0.204124 e^2 + 0.204124 e^3\right)$

and Mathematica® simplifies that to

$s = 2. e^1 + 1. e^2 - 3. e^3$,

just what it should be. (Recall that the orthonormal basis is its own reciprocal, so $e^i = e_i$.)
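If you’d rather not do that substitution by hand, the same check runs numerically. Here is a minimal sketch in Python with NumPy (rather than Mathematica®), using the matrix $B$ and the components quoted above:

```python
import numpy as np

# transition matrix B: its columns hold the components of the
# reciprocal basis vectors f^1, f^2 wrt the orthonormal basis e^i
B = np.array([[ 0.0890871,  0.      ],
              [-0.0445435, -0.204124],
              [-0.0445435,  0.204124]])

# components of s wrt the reciprocal basis (second row of S^R)
s_recip = np.array([22.4499, -9.79796])

# expanding s = 22.4499 f^1 - 9.79796 f^2 in the e basis
# is just the matrix product B @ s_recip
s_e = B @ s_recip
print(s_e)  # close to (2, 1, -3), up to the rounding of the printed entries
```

To within the rounding of the displayed entries of $B$ and $S^R$, the product recovers the components (2, 1, −3).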

The reciprocal basis for $A^R$ using tensors.

Third, i can write this another way, and some of you will have seen it this way. If you have, i just want to refresh your memory. Using tensor notation (and the einstein summation convention), we could have written

$s = s_i\ f^i$ and $s = s^i\ f_i$,

which is another way of saying that the components of s wrt the reciprocal basis $f^i$ are $s_i$, and the components of s wrt the basis $f_i$ are $s^i$. It is absolutely crucial that, in each term, the component and the basis vector do not both carry subscripts or both carry superscripts: one index is up and the other is down.

what is significant is this pair of equations:

$s\ \cdot\ f_i = s_i$

and

$s\ \cdot\ f^i = s^i$,

which I offer as reminders; i’m not going to work them out for you at this time. they are another way of writing what we’ve been saying all along: dotting a vector into some basis vectors gives you the components wrt the reciprocal basis.
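Those dot-product formulas can also be checked numerically. The $f^i$ are the columns of $B$; the $f_i$ are not written out in this post, so the sketch below recovers them (an assumption on my part, not a step the post takes) as the rows of the pseudoinverse of $B$, which enforces the biorthogonality $f_i \cdot f^j = \delta_i^j$:

```python
import numpy as np

B = np.array([[ 0.0890871,  0.      ],
              [-0.0445435, -0.204124],
              [-0.0445435,  0.204124]])   # columns: f^1, f^2 in the e basis

# hypothetical reconstruction of the basis vectors f_i:
# the rows of pinv(B) satisfy f_i . f^j = delta_i^j by construction
F = np.linalg.pinv(B)                      # rows: f_1, f_2 in the e basis
assert np.allclose(F @ B, np.eye(2))       # biorthogonality check

s = np.array([2., 1., -3.])                # components of s wrt e_i

s_lower = F @ s        # s_i = s . f_i  -> components wrt the reciprocal basis f^i
s_upper = B.T @ s      # s^i = s . f^i  -> components wrt the basis f_i

# both expansions recover the same vector s
print(np.allclose(B @ s_lower, s, atol=1e-3))      # s = s_i f^i  -> True
print(np.allclose(F.T @ s_upper, s, atol=1e-3))    # s = s^i f_i  -> True
```

Note that `s_lower` comes out as (22.4499, −9.79796), the second row of $S^R$, which ties the tensor notation back to the explicit computation above.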