Let’s talk about centering the data. Recall the questions:
- is there any chance that we can always center both the columns and the rows?
- is there any chance that we can always standardize both the columns and the rows?
- if we can’t have our cake and eat it too, should we give up the duality between R-mode and Q-mode?
- what if rows of the data matrix add up to 1 or 100 (i.e. the variables are percentages)?
(My third question is poorly phrased. We always have the duality between A’s and S’s; what I feared losing was the idea that Q-mode is just R-mode applied to the transpose; but what if X doesn’t have row-centered data?)
The third question is also a red herring. We can always make both the columns and the rows add up to zero. Proving it isn’t too hard; some convenient notation might simplify things. If the data is $X_{ij}$, then $\bar{X}_{\cdot j}$ and $\bar{X}_{i \cdot}$ are good symbols for the column means and row means, respectively. The grand mean (the mean of all the matrix values = mean of column means = mean of row means) would be denoted $\bar{X}_{\cdot \cdot}$.
BTW, it’s very handy that the mean of column means = mean of row means; we’ll see how handy, soon.
The only subtlety is that what we have here are row and column means of the original data, instead of, for example, column means of the original data and row means of the column-centered data. We can get “doubly-centered data” by computing

$Y_{ij} = X_{ij} - \bar{X}_{\cdot j} - \bar{X}_{i \cdot} + \bar{X}_{\cdot \cdot}.$
Why are we adding the grand mean? Because we subtracted it twice. If we had taken row means of the column-centered data, we wouldn’t add the grand mean back in; but I chose, and I’m sure most people choose, to take row means of the original data.
I leave the proof that we can always get doubly-centered data for the interested reader. Really, it shouldn’t be difficult.
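Here is a quick numeric check of the double-centering recipe, sketched in plain Python rather than Mathematica® (the numbers are made up; this is not Davis’ data):

```python
# Double-centering: Y[i][j] = X[i][j] - colmean[j] - rowmean[i] + grandmean.
# Made-up 4x3 data, just to verify the claim numerically.
X = [[1.0, 2.0, 4.0],
     [3.0, 5.0, 8.0],
     [2.0, 7.0, 6.0],
     [4.0, 1.0, 9.0]]

nrows, ncols = len(X), len(X[0])
colmeans = [sum(X[i][j] for i in range(nrows)) / nrows for j in range(ncols)]
rowmeans = [sum(row) / ncols for row in X]
grand = sum(colmeans) / ncols   # = mean of row means = mean of all 12 entries

Y = [[X[i][j] - colmeans[j] - rowmeans[i] + grand
      for j in range(ncols)] for i in range(nrows)]

# Both the column means and the row means of Y are zero (up to round-off).
new_colmeans = [sum(Y[i][j] for i in range(nrows)) / nrows for j in range(ncols)]
new_rowmeans = [sum(row) / ncols for row in Y]
print(all(abs(m) < 1e-12 for m in new_colmeans + new_rowmeans))  # → True
```

The proof falls out of the same arithmetic: averaging $Y$ down a column kills the column mean and the grand mean in pairs, and likewise across a row.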
It means that we can always do both the R-mode and Q-mode analyses on a doubly-centered design matrix X.
That had bothered me while I was looking at Q-mode. I knew that Davis’ data was row-centered as well as column-centered (had zero-mean rows as well as zero-mean columns, respectively), but I just assumed he had picked the numbers that way in order to avoid discussing the case when the rows were not zero-mean.
We can always have data that is both row-centered and column-centered simultaneously.
(We can have our cake and eat it too, so we don’t have to worry about Q-mode being done with non-centered data.)
NOTE that we cannot do that with standardized data (or, therefore, with the correlation matrices). That is, we cannot standardize both the columns and the rows. The unit column-variances get messed up when we standardize the rows after standardizing the columns.
As I draft this, it is an open question for me whether we can always arrange to have column-centered data with row sums equal to 1. Let me follow some great advice, scary as it is: let me guess. (From John A. Wheeler, the great physics teacher: if my guess turns out right, I have reinforced my intuition; if my guess turns out wrong, I correct my intuition.)
I think we can do it. Adding that grand mean back in looks promising for adding 1 back in.
(I won’t string you along any further: my intuition was wrong. We’ll see what happens instead. It’s almost as good.)
Let’s try an example. I start with a real simple set of numbers:
Square the first row, square-root the second row, leave the third alone:
While I’ve got it in this form, get what will be the row means.
(Much of Mathematica® works on rows; I’ve just been working with .) Transpose. This is my example data matrix.
Get the column means.
The grand mean is any of the following: the mean of row means… the mean of column means… or the mean of all 12 values. No matter how we compute it, we get
We get doubly-centered data by computing $X_{ij} - \bar{X}_{\cdot j} - \bar{X}_{i \cdot} + \bar{X}_{\cdot \cdot}$,
which gives us
(Yes, the column means are zero and the row means are zero.)
That was easy enough.
Let’s back up and look at just column-centered data from the original.
The column means, of course, are zero… the row means are not (these, of course, are the row means of the column centered data, not of the original data)…
but the grand mean is zero. (Go ahead, compute it if you must.)
The grand mean is zero because it is the mean of the column means, each of which is zero. This implies that the mean of the row means is zero. In particular, since 1 ≠ 0, we cannot have column-centered data with constant row sums = 1. (I said it was handy that the grand mean could be computed more than one way.)
Maybe I should emphasize what we just saw: the act of column-centering the data implies that the sum of the new row means is zero. If each row mean is the same, then that common value must be zero. That example did not have constant row sums, but we see what would have to happen if it did.
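The emphasized point can be checked numerically, again in plain Python with made-up numbers: column-center any matrix and the row means of the result must sum to zero.

```python
# Column-center a made-up 4x3 matrix; the new row means must sum to zero,
# because their sum equals (ncols times) the grand mean of the centered data.
X = [[2.0, 9.0, 4.0],
     [7.0, 5.0, 3.0],
     [6.0, 1.0, 8.0],
     [1.0, 4.0, 2.0]]

nrows, ncols = len(X), len(X[0])
colmeans = [sum(X[i][j] for i in range(nrows)) / nrows for j in range(ncols)]
C = [[X[i][j] - colmeans[j] for j in range(ncols)] for i in range(nrows)]

rowmeans = [sum(row) / ncols for row in C]
print(abs(sum(rowmeans)) < 1e-12)  # → True: no constant nonzero row sum is possible
```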
I probably do not need to work an example, but I’m going to. And I’m going to do it because I had actually worked this example first; then I realized what had happened, and I told you up front. So suppose we had started with a matrix where the rows each add up to 1. Go back to the raw data again….
but change the rows. Divide each row by its mean, and by 3. (I don’t want row mean = 1, I want row sum = 1.)
Get the column means…
and having said that we want row sums = 1, we confirm that each row mean is 1/3, because that’s an easier command in Mathematica®:
I then construct column-centered data:
We could confirm that the column means are zero; and we compute the row means…
We automatically got row-centered data, as expected.
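That whole example can be re-run as a plain-Python sketch (with made-up numbers standing in for mine): scale each row to sum to 1, column-center, and watch the row-centering come for free.

```python
# Made-up raw 4x3 data; rescale each row so its sum is 1
# (divide by the row mean, then by 3 -- i.e. divide by the row sum).
raw = [[1.0, 2.0, 3.0],
       [4.0, 5.0, 6.0],
       [7.0, 8.0, 9.0],
       [2.0, 4.0, 6.0]]
X = [[v / (sum(row) / 3) / 3 for v in row] for row in raw]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in X)  # row sums = 1, row means = 1/3

# Column-center...
nrows, ncols = len(X), len(X[0])
colmeans = [sum(X[i][j] for i in range(nrows)) / nrows for j in range(ncols)]
C = [[X[i][j] - colmeans[j] for j in range(ncols)] for i in range(nrows)]

# ...and the rows come out centered automatically.
rowmeans = [sum(row) / ncols for row in C]
print(all(abs(m) < 1e-12 for m in rowmeans))  # → True
```

The reason is the earlier observation: each new row mean is the common old row mean (1/3) minus the grand mean, and the grand mean of data with constant row means is that same constant.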
We could look at standardized data, but I’ll leave this to you, too: a counterexample should be convincing enough (and I’ve given you a matrix to play with). We can’t preserve unit variance of the columns when we then standardize the rows.
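If you’d rather not work the counterexample by hand, here is one sketched in plain Python (with made-up numbers, not the matrix from the example above): standardize the columns, then the rows, and check the column variances.

```python
# Population mean and variance (divide by n), to match column standardization.
def mean(v): return sum(v) / len(v)
def var(v):  m = mean(v); return sum((x - m) ** 2 for x in v) / len(v)

# Made-up 4x3 data.
X = [[1.0, 8.0, 3.0],
     [4.0, 2.0, 9.0],
     [6.0, 7.0, 1.0],
     [2.0, 5.0, 4.0]]
nrows, ncols = len(X), len(X[0])

# Standardize the columns (mean 0, variance 1 down each column).
cols = [[X[i][j] for i in range(nrows)] for j in range(ncols)]
cols = [[(x - mean(c)) / var(c) ** 0.5 for x in c] for c in cols]
S = [[cols[j][i] for j in range(ncols)] for i in range(nrows)]  # back to row form

# Now standardize the rows of S...
S2 = [[(x - mean(r)) / var(r) ** 0.5 for x in r] for r in S]

# ...and the columns of S2 no longer have unit variance.
colvars = [var([S2[i][j] for i in range(nrows)]) for j in range(ncols)]
print(colvars)
```

Running it, at least one column variance lands visibly away from 1, which is all a counterexample needs.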
To summarize:
1. we can always get doubly-centered data.
2. my guess was wrong: we cannot have column-centered data with constant row sums, unless the constant is 0.
3. but if the rows do have a common row mean (equivalently, a common row sum), then just centering the columns automatically gives us doubly-centered data.
4. we cannot always get doubly-standardized data.
5. if I center the columns, then I would be inclined to center the rows: use doubly-centered data in preference to only column-centered (assuming, as I prefer, that the variables are columns).
Let me elaborate on (5). For a matrix X with variables in columns, column-centered data means that $X^T X$ is proportional to the covariance matrix of X. If we want $X X^T$ to be proportional to the covariance matrix of $X^T$, then we need row-centered data. If we want both $X^T X$ and $X X^T$ to be proportional to covariance matrices, then we need doubly-centered data.
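The R-mode half of that claim is easy to check numerically. A plain-Python sketch (made-up numbers; two variables in columns): once the columns of X are centered, $X^T X$ is n times the covariance matrix of the column variables.

```python
# Made-up 4x2 data: 4 observations of 2 variables (variables in columns).
X = [[1.0, 2.0],
     [3.0, 7.0],
     [5.0, 4.0],
     [7.0, 3.0]]
n, p = len(X), len(X[0])

# Column-center.
colmeans = [sum(X[i][j] for i in range(n)) / n for j in range(p)]
C = [[X[i][j] - colmeans[j] for j in range(p)] for i in range(n)]

# X^T X on the centered data...
XtX = [[sum(C[i][j] * C[i][k] for i in range(n)) for k in range(p)]
       for j in range(p)]

# ...versus the covariance matrix computed directly (dividing by n).
cov = [[sum((X[i][j] - colmeans[j]) * (X[i][k] - colmeans[k])
            for i in range(n)) / n for k in range(p)] for j in range(p)]

ratio_ok = all(abs(XtX[j][k] - n * cov[j][k]) < 1e-12
               for j in range(p) for k in range(p))
print(ratio_ok)  # → True: X^T X = n * cov(X), entry by entry
```

Swapping the roles of rows and columns gives the Q-mode statement for $X X^T$, which is why doubly-centered data buys us both at once.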