## setup

Let’s just fit a parabola to noisy data. Instead of using real data I will manufacture some. First I pick some x values; you might note that they do not have to be equally spaced.

Now I generate 5 random variables u from a normal(0, .05) distribution. (The second parameter is the standard deviation, not the variance.) My data is generated as

y_i = x_i^2 + 2 + u_i.

That is, my 5 noise values are

and my 5 data values are

Here, then, are my (x,y) points:

Here is a plot of the true function without noise (y = x^2 + 2) and the (x,y) points (with noise).

In the real world, of course, what we have is only the 5 points; we want to fit a curve to them. I’m going to do this “by hand”, as I showed you in the expository post.
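If it helps to see the setup in code, here is a minimal numpy sketch of the data manufacture. The x values and noise draws below are hypothetical stand-ins of my own choosing; the post's actual numbers live in its output, not here.

```python
import numpy as np

# Hypothetical x values (stand-ins for the post's actual data).
# Note that they need not be equally spaced.
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])

# Five hypothetical noise values, standing in for draws from
# normal(0, 0.05) -- 0.05 being the standard deviation, not the variance.
u = np.array([0.03, -0.02, 0.05, -0.04, 0.01])

# Manufacture the data from the true function y = x^2 + 2, plus noise.
y = x**2 + 2 + u
print(y)
```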

## “by hand”

Let’s confirm all that, “by hand” as it were. I am going to just jump in and fit a general polynomial of degree 2 (i.e. y = β1 + β2 x + β3 x^2), following the computations described in the expository post.

I need a design matrix whose first column is 1s, whose second column is the x’s, and, because I want to fit a quadratic, whose third column is the squares of the x’s. That is, I construct the design matrix X
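In numpy, with the same hypothetical x values as before, that construction is one line:

```python
import numpy as np

# Hypothetical x values (the post's actual data are in its output).
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])

# Design matrix for a quadratic: columns of 1s, x's, and squared x's.
X = np.column_stack([np.ones_like(x), x, x**2])
print(X)
```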

We compute X^T X … and its inverse (X^T X)^-1 …

Incidentally, that is the step for which you probably do not want to write your own code: you want someone else to provide code for computing the inverse of a matrix.

Having the inverse, we compute the β̂ from the normal equations, β̂ = (X^T X)^-1 X^T y:

and get

That is, we have just fitted the equation…

to our data. We can plot it, but we shouldn’t see anything surprising. After all, the coefficients of the fitted equation are 1.97 instead of 2, .096 instead of 0, and .976 instead of 1. Pretty close.
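The computation so far can be sketched in numpy. Again, the data are hypothetical stand-ins generated from the true model y = x^2 + 2 plus noise, so the fitted coefficients won't match the post's 1.97, .096, and .976, but they should likewise land near the true 2, 0, and 1.

```python
import numpy as np

# Hypothetical data from the true model y = x^2 + 2, plus noise.
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])
u = np.array([0.03, -0.02, 0.05, -0.04, 0.01])
y = x**2 + 2 + u
X = np.column_stack([np.ones_like(x), x, x**2])

# Normal equations: beta-hat = (X^T X)^{-1} X^T y.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
print(beta)  # should land near the true coefficients (2, 0, 1)
```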

Now we compute the predicted values from our equation; we simply substitute the data values of X into the fitted equation:

The residuals are the difference between the predictions and the actual data. I never remember which is subtracted, but I write

y = ŷ + e,

and then I see that it’s e = y - ŷ.

So we compute the e’s and the sum of squares of the e’s (the residuals and the error sum of squares):
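As a numpy sketch (hypothetical data again), the residuals and the error sum of squares come straight from the fitted β̂:

```python
import numpy as np

# Same hypothetical data and quadratic fit as before.
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])
u = np.array([0.03, -0.02, 0.05, -0.04, 0.01])
y = x**2 + 2 + u
X = np.column_stack([np.ones_like(x), x, x**2])
beta = np.linalg.solve(X.T @ X, X.T @ y)

yhat = X @ beta   # predicted values
e = y - yhat      # residuals
SSE = e @ e       # error sum of squares
print(SSE)
```

A useful sanity check: least-squares residuals are orthogonal to every column of the design matrix, so X^T e should be (numerically) zero.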

We should plot the residuals (the 5th one is on the “5” on the x-axis).

For our design matrix, we have n = 5 observations and k = 3 variables. Then the “estimated error variance” SSE/(n-k) is…
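Continuing the hypothetical numpy sketch, n and k can be read off the design matrix itself:

```python
import numpy as np

# Same hypothetical data and quadratic fit as before.
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])
u = np.array([0.03, -0.02, 0.05, -0.04, 0.01])
y = x**2 + 2 + u
X = np.column_stack([np.ones_like(x), x, x**2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
SSE = e @ e

n, k = X.shape      # n = 5 observations, k = 3 fitted coefficients
s2 = SSE / (n - k)  # "estimated error variance" SSE/(n-k)
print(s2)
```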

Let’s get the total sum of squares. We compute the mean of the y values, center the data by subtracting the mean, and get the sum of squares (as the dot product of two zero-mean vectors).

The mean of the y’s is 7.87991; after subtracting it, we get centered data

and then

Now we can compute the R^2:

and the adjusted R^2:
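Those last few steps, sketched in numpy on the hypothetical data: center the y’s, take the dot product for SST, and form both R^2 and the adjusted R^2.

```python
import numpy as np

# Same hypothetical data and quadratic fit as before.
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])
u = np.array([0.03, -0.02, 0.05, -0.04, 0.01])
y = x**2 + 2 + u
X = np.column_stack([np.ones_like(x), x, x**2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
SSE = e @ e
n, k = X.shape

yc = y - y.mean()  # centered data
SST = yc @ yc      # total sum of squares

R2 = 1 - SSE / SST
adjR2 = 1 - (SSE / (n - k)) / (SST / (n - 1))
print(R2, adjR2)
```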

The covariance matrix of the β̂s is c = s^2 (X^T X)^-1:

and, in particular, the standard errors are the square roots of the diagonal elements of c:
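In the numpy sketch (hypothetical data as before), the covariance matrix and standard errors are:

```python
import numpy as np

# Same hypothetical data and quadratic fit as before.
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])
u = np.array([0.03, -0.02, 0.05, -0.04, 0.01])
y = x**2 + 2 + u
X = np.column_stack([np.ones_like(x), x, x**2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
n, k = X.shape
s2 = (e @ e) / (n - k)  # estimated error variance, needed here

c = s2 * np.linalg.inv(X.T @ X)  # covariance matrix of the beta-hats
se = np.sqrt(np.diag(c))         # standard errors
print(se)
```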

I remark that although the matrix inverse was one of the first things we computed, we need the estimated error variance s^2, computed much later, in order to get the standard errors.

The t-statistics are the β̂s divided by the standard errors:
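Finishing the numpy sketch on the hypothetical data, the t-statistics are an elementwise division; on data built without a linear term, the t-stat for x should come out small.

```python
import numpy as np

# Same hypothetical data and quadratic fit as before.
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])
u = np.array([0.03, -0.02, 0.05, -0.04, 0.01])
y = x**2 + 2 + u
X = np.column_stack([np.ones_like(x), x, x**2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta
n, k = X.shape
s2 = (e @ e) / (n - k)
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

t = beta / se  # t-statistics: coefficients over standard errors
print(t)
```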

Let me remark that some people present the β̂s and the t-statistics, others present the β̂s and the standard errors. No problem: given any two of the β̂s, t-statistics, and standard errors, you can compute the third.

## let Mathematica® do it

Here’s where you use whatever you’ve got available; of all my choices, I’ll go with Mathematica®. Just in case you’re also using Mathematica®, here’s the specific command I used, and the output:

(“data” consists only of the (x,y) data; “{1,x,x^2}” tells Mathematica® what design matrix to construct; the “x” after that says that the first column of “data” is “x”; the “Clear” command is because I need “x” to be a symbol, not a list of numbers. I wouldn’t have this problem if I didn’t insist on using “x” as the name of the independent variable, i.e. doing double duty.)

The first line, BestFit, shows us the fitted equation. Again, just in case you’re using Mathematica®, here’s the command that extracted the BestFit and the result:

That’s exactly what we computed by hand. Similarly, the FitResiduals are the residuals e:

The PredictedResponse are the 5 computed values, yhat, used to compute the residuals.

In the ParameterTable, “Estimate” refers to β̂, “SE” is its standard error, and “TStat” is its t-statistic.

PValue is a probability corresponding to the t-statistic. I don’t remember whether Mathematica® is doing a 1-sided or 2-sided test and the documentation doesn’t seem to say. I don’t really care: my rule of thumb is to compare the (absolute value of the) t-statistic to 1. If I ever need to know exactly what PValue means, I’ll grab a t-distribution from one of my stat books and see what Mathematica® did.

Mathematica’s standard errors and t-statistics, R^2, adjusted R^2, and estimated error variance all agree with our earlier computations.

Finally, the ANOVATable contains, among other things, the error and total sums of squares. (I’ll remark that for experimental data, the F-test can be more useful than the R^2 and the adjusted R^2: repeated x’s with different y’s means that we have some data points on vertical lines, so we cannot make the error sum of squares zero. That in turn means that the R^2’s cannot be 1, which in turn means that we can’t tell what constitutes “a good” R^2.)

Our computed SSE and SST were

respectively, and they agree with Mathematica®.

## better model, worse fit

So, I have confirmed that my recipe matches Mathematica®. So much for computation. What about the results themselves?

The parameter table, or our own calculations, showed a t-statistic of 0.431732 for the x term in the fitted equation. That the t-stat for x is less than 1 suggests that the linear term vanishes. That’s very nice, since we know that the true model did not have a linear term: we created the data without one. It’s the small t-statistic that says: for this data there’s a high probability that β2 = 0.

I cannot overemphasize that if our goal is the best interpolation, instead of finding the “true model”, we might choose to use the equation we have.

But I want to find the true model, so let’s drop the x term, and let Mathematica® do it all for us:

Our two fitted equations, then, are

so dropping the x term has gotten us a little closer to the true equation.

What do the R-squares tell us? Old and new are {0.999926, 0.999919}, so the new one is very slightly smaller. In absolute terms, the new fit is not as good as the first one. In this case the difference is tiny (minuscule, even), but if our goal is the best interpolation, then a higher R-squared equation may be preferable to a lower one.

What about the adjusted R-squares? Old and new are {0.999852, 0.999892}. The newer one is the higher; in some sense, the newer fitted equation is more likely true. Again, in this case, the difference is damned small.

Finally, for the newer fit, all the t-stats (all two of them!) are larger than 1. That’s why we might go with the new equation.

Incidentally, the estimated variance for the equation is smaller, 0.00457182 vs. 0.0062731.

And, FWIW, the sample variance of our noise – which we only know because we created this data – was 0.00453426.

The second equation has a smaller estimated error variance, even though the first equation has smaller errors in total. This is hand in glove with “closeness of fit” versus “true model”, R^2 versus adjusted R^2.

BTW, had I done the newer fit “by hand”, the design matrix would have had two columns: 1s and the squares. It’s ok to drop the x’s themselves. This is exactly how we specify a model, by computing and using whatever columns we choose. To be specific,
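Here is that two-column fit as a numpy sketch, on the same hypothetical data as the earlier sketches: the design matrix keeps the 1s and the squares, and simply omits the x column.

```python
import numpy as np

# Same hypothetical data; true model is y = x^2 + 2 plus noise.
x = np.array([0.2, 0.5, 1.1, 1.7, 2.3])
u = np.array([0.03, -0.02, 0.05, -0.04, 0.01])
y = x**2 + 2 + u

# Two-column design matrix: 1s and squares, no x column.
X2 = np.column_stack([np.ones_like(x), x**2])
beta2 = np.linalg.solve(X2.T @ X2, X2.T @ y)
print(beta2)  # should land near the true coefficients (2, 1)
```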

There you are. That’s how to fit a curve to the data.
