Regression 1: Example 3, Women’s Participation in the Labor Force

Let us work another example from Ramanathan. Perhaps I should emphasize that although I am using his dataset, this is not his analysis. Let me assure you that his analysis is worth reading.

The data

I described how to get the data from his website in the previous regression post. This is dataset DATA4-5.XLS, and it appears to be the same for both the 4th and 5th editions of the text. The data for the 3rd edition, however, is different, so the regressions come out slightly different.

Here is his description of the data from the 4th edition file, which included descriptive information; the 5th edition data contains the variable names but nothing else (that I saw). I am, in fact, using the 5th edition data, despite the 4th edition description.

DATA4-5: THE FOLLOWING ARE 1990 CENSUS DATA BY STATES PARTICIPATION RATE (IN %) OF ALL WOMEN OVER 16
Compiled by Louis Cruz

  • wlfp = persons 16 years & over–percent in labor force who are female
  • yf = median earnings (in thousands of dollars) by females 15 years & over with income in 1989
  • ym = median earnings (in thousands of dollars) by males 15 years & over with income in 1989
  • educ = females 25 years & over–percent high school graduate or higher
  • ue = civilian labor force–percent unemployed
  • mr = female population 15 & over–percent now married (excluding separated)
  • dr = female population 15 & over–percent who are divorced
  • urb = percent of population living in urban areas
  • wh = female population–percent 16 years and over who are white

The following commands import the Excel file… display the first line (variable names)… put the variable names into the list n1… create a data matrix d1 with the dependent variable WLFP in the last column… and print the dimensions of the data matrix….
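In case it helps to see the mechanics, here is a minimal sketch of those steps; it assumes the workbook is named DATA4-5.XLS, sits in the current working directory, has the variable names in its first row, and has wlfp as its first column (as in the description above). It is a sketch, not a transcript of my notebook.

(* import the first sheet; Import returns a list of sheets, each a list of rows *)
raw = Import["DATA4-5.XLS"][[1]];

n1 = raw[[1]]   (* the first row holds the variable names *)

(* build the data matrix d1, moving wlfp (assumed first) to the last column,
   and keep a name list in the same column order *)
d1 = Append[Rest[#], First[#]] & /@ Drop[raw, 1];
names = Append[Rest[n1], First[n1]];

Dimensions[d1]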

We see that we have 9 variables (8 independent) and 50 observations.

As I did last time, here is the data… it is, after all, already publicly available on the Internet. Oh, for some reason I had to do it in two pieces: the first 25 observations, then the remaining 25 observations.

\left(\begin{array}{ccccccccc} 15.735 & 25.532 & 66. & 6.9 & 53.26 & 9.43 & 60.4 & 58.42 & 52.3 \\ 25.62 & 35.621 & 86.1 & 8.8 & 59.07 & 11.91 & 67.5 & 54.18 & 66.4 \\ 18.976 & 27.292 & 78.2 & 7.2 & 54.15 & 11.56 & 87.5 & 64.35 & 54.8 \\ 14.736 & 21.92 & 66.1 & 6.8 & 56.52 & 9.38 & 53.5 & 65.49 & 51.9 \\ 23.123 & 31.98 & 75.5 & 6.6 & 50.8 & 11.01 & 92.6 & 55.24 & 57.7 \\ 20.001 & 29.285 & 83.8 & 5.7 & 54.67 & 11.99 & 82.4 & 69.12 & 62.5 \\ 24.057 & 35.485 & 78.8 & 5.4 & 51.61 & 8.82 & 79.1 & 71.28 & 60.9 \\ 20.256 & 30.222 & 77.4 & 4. & 52.01 & 9.18 & 73. & 64.52 & 61.1 \\ 17.867 & 25.789 & 74.2 & 5.8 & 53.58 & 11.08 & 84.8 & 69.14 & 52.8 \\ 18.291 & 26.684 & 70.1 & 5.7 & 52. & 10.33 & 63.2 & 55.84 & 59.9 \\ 20.073 & 27.147 & 78.4 & 3.5 & 55.33 & 9.03 & 89. & 24.69 & 63.3 \\ 16.122 & 25.475 & 80.2 & 6.1 & 60.92 & 9.75 & 57.4 & 70.06 & 56.1 \\ 20.325 & 31.406 & 75.7 & 6.6 & 50.82 & 9.06 & 84.6 & 62.48 & 57.7 \\ 17.101 & 28.197 & 74.9 & 5.7 & 54.74 & 10.4 & 64.9 & 70.97 & 57.4 \\ 16.465 & 25.391 & 80.6 & 4.5 & 56.79 & 7.97 & 60.6 & 75.88 & 57.8 \\ 17.336 & 26.535 & 80.9 & 4.7 & 57.48 & 9.59 & 69.1 & 70.41 & 58. \\ 16.058 & 25.011 & 65. & 7.4 & 55.79 & 9.74 & 51.8 & 72.41 & 51.2 \\ 15.993 & 25.876 & 68.4 & 9.6 & 49.96 & 8.88 & 68.1 & 51.8 & 50.2 \\ 17.406 & 26.024 & 79.4 & 6.6 & 55.44 & 9.95 & 44.6 & 77.73 & 57.5 \\ 22.36 & 32.078 & 78.2 & 4.3 & 50.34 & 8.57 & 81.3 & 56.8 & 63.4 \\ 23.09 & 32.749 & 79.6 & 6.7 & 47.69 & 8.07 & 84.3 & 74.02 & 60.3 \\ 20.263 & 32.344 & 76.8 & 8.2 & 51.63 & 10.06 & 70.5 & 65.28 & 55.7 \\ 19.756 & 29.475 & 82.8 & 5.1 & 55.36 & 8.13 & 69.9 & 73.57 & 62.5 \\ 14.472 & 22.251 & 64.5 & 8.4 & 50.09 & 8.57 & 47.1 & 49.64 & 52. \\ 17.421 & 27.06 & 73.1 & 6.2 & 53.97 & 9.71 & 68.7 & 69.14 & 56.4\end{array}\right)

The remaining 25 observations:

\left(\begin{array}{ccccccccc} 15.268 & 24.769 & 81.7 & 7. & 58.51 & 9.88 & 52.5 & 71.37 & 55.8 \\ 16.009 & 24.333 & 82.2 & 3.7 & 56.8 & 8.09 & 66.1 & 72.63 & 60.3 \\ 19.291 & 27.764 & 77.9 & 6.2 & 54.18 & 15.06 & 88.3 & 66.66 & 62.9 \\ 20.468 & 30.94 & 82.3 & 6.2 & 56.36 & 9.14 & 51. & 77. & 64.4 \\ 23.243 & 35.622 & 75.9 & 5.7 & 51.12 & 7.47 & 89.4 & 64.92 & 58.8 \\ 16.783 & 25.372 & 74.3 & 8. & 54.22 & 11.32 & 73. & 58.09 & 53.9 \\ 22.437 & 31.861 & 74.2 & 6.9 & 46.88 & 7.57 & 84.3 & 60.66 & 55.5 \\ 16.475 & 23.452 & 70.2 & 4.8 & 53.52 & 8.11 & 50.4 & 60.96 & 59.8 \\ 14.731 & 22.559 & 78.3 & 5.3 & 58.78 & 6.42 & 53.3 & 72.67 & 57.3 \\ 18.666 & 29.796 & 75.3 & 6.6 & 52.97 & 10.13 & 74.1 & 69.18 & 54.7 \\ 16.82 & 25.566 & 73.7 & 6.9 & 56.64 & 11.16 & 67.7 & 65.23 & 53.5 \\ 18.42 & 27.72 & 81.7 & 6.2 & 55.52 & 11.84 & 70.5 & 73.36 & 56.1 \\ 18.845 & 28.85 & 73.9 & 6. & 51.38 & 7.33 & 68.9 & 72.76 & 52.8 \\ 19.631 & 29.841 & 71.1 & 6.6 & 49.33 & 8.85 & 86. & 75.45 & 58.3 \\ 16.14 & 24.13 & 67.8 & 5.6 & 52.2 & 7.84 & 54.6 & 54.6 & 58.3 \\ 14.271 & 21.425 & 78.6 & 4.2 & 57.8 & 7.22 & 50. & 70.32 & 58.5 \\ 16.367 & 24.988 & 66.7 & 6.4 & 54.04 & 10.38 & 60.9 & 66.35 & 55.7 \\ 18.629 & 27.081 & 71.5 & 7.1 & 54.58 & 10.26 & 80.3 & 58.26 & 56.4 \\ 17.208 & 28.597 & 84.7 & 5.3 & 59.24 & 8.69 & 87. & 64.2 & 58.6 \\ 19.951 & 28.974 & 75.3 & 4.5 & 53.64 & 9.32 & 69.4 & 77.34 & 60.7 \\ 18.613 & 25.883 & 82. & 5.9 & 53.63 & 8.33 & 32.2 & 61.94 & 62.3 \\ 20.607 & 31.026 & 83.5 & 5.7 & 55.41 & 11.77 & 76.4 & 69.63 & 57.9 \\ 15.299 & 26.184 & 66. & 9.6 & 55.57 & 8.66 & 36.1 & 76.81 & 42.6 \\ 17.465 & 27.653 & 78.8 & 5.2 & 54.72 & 8.08 & 65.7 & 72.37 & 60.1 \\ 16.26 & 28.28 & 83.1 & 5.9 & 60.6 & 10.38 & 65. & 70.03 & 58.7\end{array}\right)

Forward Selection

For no particular reason, except to be different, let me run stepwise first. (Okay, until and unless I drop a variable, this is forward selection.)
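If you don’t have the stepwise code from my earlier posts handy, here is a bare-bones stand-in you could run instead: plain forward selection that, at each step, adds the variable giving the largest Adjusted R Squared. My actual code does more than this (it can drop variables, and it tabulates all the selection criteria), so take this as an assumption about the mechanics, not a reproduction.

forwardSelect[data_] := Module[{p, vars, chosen = {}, fits = {}, trial, best},
  p = Dimensions[data][[2]] - 1;   (* number of independent variables *)
  vars = Array[x, p];              (* x[1], ..., x[p] stand for the columns of data *)
  Do[
   trial = Table[
     {j, LinearModelFit[data, vars[[Sort@Append[chosen, j]]], vars]},
     {j, Complement[Range[p], chosen]}];
   best = First@MaximalBy[trial, #[[2]]["AdjustedRSquared"] &];
   chosen = Sort@Append[chosen, best[[1]]];
   AppendTo[fits, best[[2]]],
   {p}];
  fits]

reg1 = forwardSelect[d1];   (* one fit per step: reg1[[1]] is smallest, reg1[[-1]] uses all 8 *)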

We have three candidates; interestingly, Cp agrees with Adjusted R Squared… a few criteria would choose #6 instead of #7… and HQc would choose #4.

Let me save those three…

best1={reg1[[4]],reg1[[6]],reg1[[7]]};

Now let’s look at numbers 4 thru 7… (note that the Adjusted R Squared is printed below each parameter table.)
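If the regressions are stored as LinearModelFit objects, as in the sketch above, the tables and the Adjusted R Squared values can be printed like this (again a sketch, not my display code):

Do[
 Print[reg1[[i]]["ParameterTable"]];
 Print["Adjusted R Squared: ", reg1[[i]]["AdjustedRSquared"]],
 {i, 4, 7}]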

We see that regression #4 is the only one in which all of the t-statistics are “significant”. In #5, URB comes in with a t-statistic less than 2, but none of the earlier t-statistics falls very far. In #6, however, the t-statistic for YM has fallen below 2.

So take it out.

Stepwise

We see that YM is the 2nd name… so we drop it from the name list… and then we drop the 2nd column from the data matrix d1….
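In code, dropping YM might look like this (names2 and d2 are names I’m introducing here for the reduced list and matrix; in the notebook I may simply have overwritten n1 and d1):

names2 = Delete[names, 2];     (* YM is the 2nd independent variable *)
d2 = Delete[#, 2] & /@ d1;     (* remove its column; wlfp stays in the last column *)
reg2 = forwardSelect[d2];      (* rerun the (stand-in) stepwise on the reduced data *)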

Run stepwise…

Wow, almost unanimous. Even Cp agrees with the consensus. Note, however, that this Cp was computed using a different touchstone regression than the one in the previous stepwise output.

Save these two, anyway…

best2={reg2[[5]],reg2[[6]]};

Let’s look at numbers #5 through #7:

In regression #5, all t-statistics are significant. In regression #6, DR comes in with a t-statistic slightly less than 2, but no prior t-statistic falls below 2.

In regression #7, MR comes in with a t-stat less than 2, and DR is still below 2, but nothing earlier has fallen below 2. (In fact, DR has a higher t-stat in regression #7 than in #6… I suppose that if it had fallen a lot instead, I might drop DR… but I’m not sure. Hey, do what you like – this is largely heuristic. After all, I’m going to let the criteria check out whatever I run, so it won’t kill me to run too many regressions.)

I’m done. That is, I choose to stop here.

I’m going to combine the candidates from the two stepwise runs, and I’m going to add the original touchstone regression with all 8 independent variables (that’s reg1[[-1]], the last element of the list reg1).

As I said in the previous example… I don’t see how to relate two Cp values computed with different touchstone regressions. Since the first stepwise run used the 8-variable regression as its touchstone, and since the backward selection and all-possible-subsets runs that follow will use it too, I’m going to use it here as well.

On top of it all, since the choices in stepwise and in backward selection do not use Cp to get the small subsets of regressions, we’re really not being fair to Cp anyway.

Until and unless I change my code, all possible subsets is the only fair way to assess Cp… at least, that’s my take on it.
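For reference, Cp here is the usual Mallows statistic computed against a fixed touchstone: Cp = RSS/s^2 - n + 2p, where RSS is the candidate’s residual sum of squares, s^2 is the touchstone’s estimated error variance, n is the number of observations, and p is the number of fitted parameters (constant included). The helper below is a sketch built on LinearModelFit properties, not my actual routine.

cp[fit_, touchstone_] := Module[{rss, s2, n, p},
  rss = Total[fit["FitResiduals"]^2];      (* residual sum of squares of the candidate *)
  s2 = touchstone["EstimatedVariance"];    (* error variance of the touchstone regression *)
  n = Length[fit["FitResiduals"]];
  p = Length[fit["BestFitParameters"]];    (* fitted parameters, constant included *)
  rss/s2 - n + 2 p]

cp[#, reg1[[-1]]] & /@ best1   (* e.g. score the stepwise candidates against the 8-variable fit *)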

Anyway, here are the five candidates from reg1 and reg2, and the 8-variable regression for the touchstone.
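In code, that combination is just a Join plus a Union (reg being the name implied by the bestS assignment below):

reg = Union[Join[best1, best2, {reg1[[-1]]}]];   (* five candidates plus the 8-variable touchstone *)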

Those appear to be six different regressions, just judging from the constants. (Oh, the Union command doesn’t do much, and I should have dropped it. I did confirm that the last regression in this list is, as it should be, the 8-variable one.)

Select…

Save those three candidates…

bestS={reg[[2]],reg[[4]],reg[[5]]};

Backward Selection

Now let’s run backward selection…
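Again, if you don’t have my backward selection code, here is a bare-bones stand-in: start from the full model and repeatedly drop the variable with the smallest |t| statistic. As with the forward sketch, this is an assumption about the mechanics, not the code from the earlier posts; reg3 is a hypothetical name for the resulting list.

backwardSelect[data_] := Module[{p, vars, current, fits = {}, lm, tvals, worst},
  p = Dimensions[data][[2]] - 1;
  vars = Array[x, p];
  current = Range[p];
  While[current =!= {},
   lm = LinearModelFit[data, vars[[current]], vars];
   AppendTo[fits, lm];
   tvals = Rest[lm["ParameterTableEntries"]][[All, 3]];  (* t-statistics, constant row skipped *)
   worst = Ordering[Abs[tvals], 1][[1]];                 (* position of the smallest |t| *)
   current = Delete[current, worst]];
  Reverse[fits]]   (* smallest model first, so the full 8-variable fit is last *)

reg3 = backwardSelect[d1];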

This time Cp disagrees.

Note that we dropped MR first, then YM, and then DR.

I decided to look at regressions 5–8, and I did it in reverse order, with #8 first.

In the 3rd edition, Ramanathan ran 5 models. The first was all the variables… and the next one dropped YM, MR, and DR. Then he said he preferred to see what happened if he dropped them one at a time; in effect, he did backward selection without calling it that. He dropped only YM, then YM and DR, then YM, MR, DR, and URB. (He already had the regression that omitted YM, MR, and DR.) He effectively presented the results of a backward selection, to the point of dropping four variables.

On the different data of the 4th and 5th editions, my backward selection dropped MR, then YM, and then DR, instead of his YM, then DR, and then MR. As far as I can see, I would have matched his sequence on the 3rd edition data, but the sequence is different for the data shared by the 4th and 5th editions.

Let’s save the three candidates…

Let’s recall the three candidates from stepwise…

They appear to agree, judging once again from just the constant terms. (I have no idea why the model display changes.)

Both stepwise and backward selection would choose three candidates: one selected by Cp, one selected by HQc, and the third selected by the other thirteen criteria.

All Possible Regressions

Now, did they make the right choices? Let’s run all possible subsets:
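As before, here is a bare-bones stand-in for my all-possible-subsets code, in case you want to reproduce the idea: fit every nonempty subset of the 8 independent variables, 255 regressions in all. The indexing of my actual list (all[[189]] and so on, below) need not match this sketch.

allSubsetsFits[data_] := Module[{p, vars},
  p = Dimensions[data][[2]] - 1;
  vars = Array[x, p];
  Table[LinearModelFit[data, vars[[s]], vars], {s, Rest@Subsets[Range[p]]}]]

all = allSubsetsFits[d1];
Length[all]   (* 2^8 - 1 = 255 *)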

Nice. We get three candidates again: one chosen by Cp again, one chosen by HQc again, and the third chosen by the other thirteen criteria, again.

Save them.

bestA={all[[189]],all[[238]],all[[254]]};

We already know that stepwise and backward selection agreed, so compare backward selection with all possible regressions:

They agree… that was the choice of HQc.

Next?

They agree, too. That was the choice of the other thirteen criteria.

What’s the difference between these two candidates? The addition of DR, with a t-statistic slightly less than 2. I find it worth noting that only HQc decided against adding DR.

Next?

These are the Cp choices. They are comparable because they used the same touchstone regression. We see that backward selection (and therefore stepwise) did not agree with all possible subsets: backward selection and stepwise got the wrong answer.

Once again, I can’t be all that surprised. Backward selection uses the t-statistics to choose which variable to drop… a fair test might be to compute Cp for all the regressions with one fewer variable, and choose the best of those.
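That fair test is easy enough to sketch with the cp helper from above: from the current set of variables, drop whichever single variable leaves the smallest Cp. (cpBackwardStep is a hypothetical helper, not something I actually ran.)

cpBackwardStep[data_, current_List, touchstone_] := Module[{vars, trial},
  vars = Array[x, Dimensions[data][[2]] - 1];
  trial = Table[
    {j, cp[LinearModelFit[data, vars[[DeleteCases[current, j]]], vars], touchstone]},
    {j, current}];
  First@MinimalBy[trial, Last]]   (* returns {variable to drop, resulting Cp} *)

cpBackwardStep[d1, Range[8], reg1[[-1]]]   (* first Cp-guided drop from the full model *)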

As in the previous example, I myself wouldn’t give a second look to any of the selections made by Cp. I would look at the other two candidates – and if I were pressed for time, I would look only at the one selected by thirteen of the criteria.
