Questions – early February

When I look back over the past year, I see that I do not have all that many posts about the doing of mathematics — almost none, in fact. If anything, the introductory material for my “books added” posts is the closest I have come to talking about the doing of mathematics. Even those are more about “here’s what I don’t know and where I think I might be able to learn about it”. Fair enough, they’re something like what I had hoped to do; I’ll try to keep on writing non-trivial introductions to the “books added” posts.

I can point out that many of my “Davis” posts on PCA / FA are, in retrospect, mostly about my learning to use the SVD (singular value decomposition). They were as much about learning mathematics as doing it, and are a rare example – in public, on the blog, I hope! – of my blundering through something. I daresay the calculations were correct, but my assessment of issues was not very clear. It took me a while to decide what was important in that work.

So let’s see if I can do something else, too, to show the process. Let me try posting the questions I’m working on. Now, not everything I am doing is the answering of a clearly stated question. All too often my question is the all too general “what the hell is going on?” In that situation, there isn’t much I can say until I make some progress. In fact, some of the following questions are still a bit vague.

And I will confess that I am a little afraid of making these questions public. For one thing, they may seem silly; most of them will surely turn out to be simple. By publishing them, I may make myself look foolish and ignorant; and lazy, too, because surely I should have been able to answer the questions in less time than it took to publish them. (Yes and no.)

And how long will it take me to answer them? Will the list of questions grow so long as to become unmanageable? Will my readers start throwing the answers at me? (Hmm. Wouldn’t that be a good thing?)

Okay. Let me just dive in. I have three sets of questions: mechanics, principal components, and simplicial coordinates.

Let’s take the mechanics question first. My derivation of the equation

v = T\nu - \omega N r = T\nu + \omega \times r

assumed that \omega is constant. Now, you don’t have to read very much physics to find people saying that the equation is true whether \omega is constant or not. So the first question is simply:

is that equation true when \omega is not constant, and how do I prove it?

(The answer, of course, goes in reverse: I can prove it, therefore the equation is true even when \omega is not constant. Okay, I am posting a question for which I know the answer. On the other hand, I didn’t know the answer at the end of last weekend. I was completely unconvinced by my physics books, and finally figured out who would have a valid proof. It will take some explaining.)
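In the meantime, a numerical spot-check is easy, and it is part of what convinced me the claim was worth proving. Here is a minimal NumPy sketch of my own: the particular rotation T(t) and body-frame path r(t) are made up, and I write the cross-product term as \omega \times x, with x = T r the fixed-frame position.

```python
import numpy as np

def Rz(a):  # rotation about the z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(a):  # rotation about the y-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def T(t):   # a rotation whose angular velocity is NOT constant
    return Rz(t**2) @ Ry(np.sin(t))

def r(t):   # a point moving in the rotating frame (arbitrary choice)
    return np.array([1.0 + t, 2.0 * t, np.cos(t)])

t0, h = 0.7, 1e-6
x = lambda t: T(t) @ r(t)                       # fixed-frame position

v_direct = (x(t0 + h) - x(t0 - h)) / (2 * h)    # dx/dt, computed directly

Tdot = (T(t0 + h) - T(t0 - h)) / (2 * h)        # dT/dt
Omega = Tdot @ T(t0).T                          # skew-symmetric: Tdot = Omega T
omega = np.array([Omega[2, 1], Omega[0, 2], Omega[1, 0]])

nu = (r(t0 + h) - r(t0 - h)) / (2 * h)          # velocity seen in the rotating frame
v_formula = T(t0) @ nu + np.cross(omega, x(t0))

print(np.allclose(v_direct, v_formula, atol=1e-6))  # True
```

A numerical check is not a proof, of course, but it leaves me fairly confident that the equation survives a non-constant \omega.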

The PCA / FA questions all pertain to Basilevsky. We saw, in three posts starting here, that our usual \sqrt{\text{eigenvalue}}-weighted matrices A were in fact covariance matrices. I had not known that. In addition, we saw that if we scaled the rows of Ac (that is, the \sqrt{\text{eigenvalue}}-weighted matrix computed from centered data Xc), then we got a covariance (in fact, correlation) matrix between the standardized data Xs and the new centered data Zc. We also saw that Basilevsky focused on writing linear equations between the old variables and the new.
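For concreteness, here is the first of those facts as a small NumPy check. This is my own sketch, in my own conventions (covariance with divisor n−1, eigenvalues in whatever order eigh returns), which may differ from the conventions in those posts.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # made-up raw data
Xc = X - X.mean(axis=0)                       # centered data
n = len(Xc)

S = Xc.T @ Xc / (n - 1)                       # covariance matrix of Xc
lam, V = np.linalg.eigh(S)                    # eigenvalues and eigenvectors
A = V @ np.diag(np.sqrt(lam))                 # the sqrt(eigenvalue)-weighted matrix

Zc = Xc @ V @ np.diag(1.0 / np.sqrt(lam))     # unit-variance principal components
cross_cov = Xc.T @ Zc / (n - 1)               # covariance between old and new variables

print(np.allclose(cross_cov, A))              # True: A really is a covariance matrix
```

The cross-covariance between the original centered variables and the unit-variance principal components comes out to be exactly A.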

That we can write linear equations between two sets of variables is equivalent to saying that we have a transition matrix. So writing linear equations is nothing new and exciting.

To summarize it in very general terms: our original matrices A were simultaneously transition matrices, covariance matrices, and eigenvector matrices; when do these three properties coincide? Okay, I can get two precise questions out of that:

When is a transition matrix (“linear equations”) also a covariance matrix?

Is the orthogonal eigenvector matrix V itself a covariance matrix?

As written, those questions are not very precise, because our third A matrix (Ar) was not an eigenvector matrix: once we scaled its rows, its columns ceased to be eigenvectors of the centered data. Recall, however, that scaling the rows of Ac went hand in glove with scaling the columns of Xc to get standardized data. Well, that gives me another question:

Suppose we start with centered data Xc, but then scale its rows to constant sums. Is there a corresponding scaling of the columns of Ac which leads to a covariance matrix for the constant-row-sum data?

In addition, since the matrix Ar was gotten by scaling both the columns and the rows of V, it would seem to be rather a general matrix. The proof that Ar was a covariance matrix relied heavily on the original nature of Ac as weighted columns of V, and yet the Ar matrix seems to have been stretched pretty far from its beginnings. Because Ar seems almost arbitrary, even though the proof required an eigenvector matrix, I find myself wondering whether an arbitrary transition matrix is a covariance matrix. Maybe the way to phrase that is: I don't believe it, but then I have to believe that scaling both the columns and rows of a matrix preserves some of its structure.

I think I have to leave these questions a mixture of precision and vagueness.

The final question pertains to coordinates for simplices. Bloch tells us, “This ability to extend maps affine linearly is why simplices are more useful than arbitrary polygons.” (p. 123.)

Does he mean that the coordinates assigned to points inside a triangle are unique? Does he mean that coordinates cannot be assigned uniquely to the inside of a rectangle? Does he mean something more?

I am pretty sure that the coordinates assigned to points inside a triangle are unique; I’m just not sure that that is what he is talking about. I am pretty sure that coordinates cannot be assigned uniquely to the inside of a rectangle, but I haven’t decided how the assignment fails.
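For what it's worth, here is the simplest failure I can construct (a toy example of my own, which may or may not be the failure Bloch has in mind). Take the unit square with vertices v_1 = (0,0), v_2 = (1,0), v_3 = (1,1), v_4 = (0,1). Its center already has two different sets of affine coordinates:

\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = \tfrac{1}{4} v_1 + \tfrac{1}{4} v_2 + \tfrac{1}{4} v_3 + \tfrac{1}{4} v_4 = \tfrac{1}{2} v_1 + \tfrac{1}{2} v_3,

with both sets of coefficients summing to 1. Four vertices in the plane are affinely dependent, so such coordinates can never be unique there.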

As for a rectangle, the last time I looked at this — about two weeks ago — I was being a little more general, and looking at the coordinates assigned by two different triangulations of a quadrilateral. That is, I take the quadrilateral abdc, and the two triangles abc and abd. Consider a point which lies inside both triangles. It acquires coordinates from each triangle. Are the coordinates the same?

[Figure: bloch-subtriangles (the quadrilateral abdc with the triangles abc and abd)]

I do not know if that is the issue, but it’s what I’m looking at.
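Meanwhile, the question itself is easy to probe numerically. Here is a throwaway NumPy check, with made-up coordinates for the quadrilateral abdc (hypothetical; nothing hinges on the particular shape):

```python
import numpy as np

def barycentric(p, tri):
    """Affine (barycentric) coordinates of point p with respect to triangle tri."""
    (ax, ay), (bx, by), (cx, cy) = tri
    M = np.array([[ax, bx, cx],
                  [ay, by, cy],
                  [1.0, 1.0, 1.0]])
    return np.linalg.solve(M, np.array([p[0], p[1], 1.0]))

# made-up coordinates for the quadrilateral abdc
a, b = (0.0, 0.0), (4.0, 0.0)
c, d = (1.0, 3.0), (3.0, 3.0)

p = (2.0, 1.0)                        # lies inside both abc and abd
print(barycentric(p, (a, b, c)))      # [0.25       0.41666667 0.33333333]
print(barycentric(p, (a, b, d)))      # [0.41666667 0.25       0.33333333]
```

For these vertices, the point (2, 1) lies inside both triangles and gets coordinates (1/4, 5/12, 1/3) from abc but (5/12, 1/4, 1/3) from abd; so at least in this example, the two triangles disagree.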

In summary, then, I have some almost irrelevant questions about principal components; I have a computational question about the nitty-gritty of simplicial complexes; and I need to know whether a crucial equation in dynamics is true when the axis of rotation is not constant.
