When my family returned from our spring break trip in early April, one of the first things my high-school-senior daughter did was check with friends about their latest college admissions news. While most of the cohort had solidified their college plans months ago, some top students were still awaiting admissions word from elite private schools that don’t announce decisions until the end of March.
One of my daughter’s friends, a premier student, unfortunately didn’t receive good news from her dream East Coast destination. A top 1% in class ranking, a perfect 36 on the ACT, and superior performance in a portfolio of college-level AP courses were apparently not enough to convince the admissions committee. So she’ll have to “settle” for a full ride as a designated freshman scholar to an honors program at one of the top state school computer engineering departments in the country. Pretty nice consolation if you ask me.
Knowing that I’m in the computer industry, the girl’s dad asked me if I thought her job prospects would be at all diminished by her “misfortune.” Would his daughter’s career earnings potential suffer because she didn’t get a degree from that top private school? I quickly said no, drawing on extensive experience recruiting out of colleges over the years. But in fact I wasn’t sure.
The question of whether there’s a career income benefit to a degree from an elite private college versus a less selective state university is an interesting one, and it got me thinking about how it might be tested. One idea would be to compare incomes of students who attended highly selective private schools with a control group who were rejected from those schools but were at the “top” of the reject list. A big disparity in the incomes of these proximate groups would suggest that the elite schools make an income difference. Alas, this so-called regression discontinuity design is not very practical in this case, since there’s no single measure used to guide the admissions process.
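To see why comparing applicants near the cutoff is attractive, here’s a toy regression discontinuity simulation in Python. All numbers are invented, and it assumes, contrary to real admissions, that a single composite score with a sharp cutoff decides who gets in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a single composite score with a sharp admission
# cutoff (a simplification -- the point of the article is that no such
# single measure exists in real admissions).
n = 20_000
score = rng.uniform(0, 100, n)
cutoff = 70.0
admitted = score >= cutoff

true_effect = 5.0  # invented income bump (in $1000s) from attending the elite school
income = 40 + 0.4 * score + true_effect * admitted + rng.normal(0, 8, n)

# Estimate the jump at the cutoff: fit a line on each side within a
# narrow bandwidth, then compare the two fits evaluated at the cutoff.
bw = 5.0
left = (score >= cutoff - bw) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + bw)

fit_left = np.polyfit(score[left], income[left], 1)
fit_right = np.polyfit(score[right], income[right], 1)
jump = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print(f"estimated discontinuity at cutoff: {jump:.2f}")
```

The appeal of the design is that students just above and just below the cutoff are nearly identical in everything but admission, so the jump in the fitted lines isolates the school’s effect. Without a single score driving decisions, though, there is no cutoff to exploit.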
A recent study from Princeton, “Estimating the Return to College Selectivity over the Career Using Administrative Earnings Data,” takes the question head-on and offers a cautionary tale on the use of regression techniques with observational data. The research reports on analyses of the 1976 and 1989 cohorts of the College and Beyond Survey, which comprised applications and transcripts of students from 34 public and private universities. These data were then merged with earnings information from the Social Security Administration’s Detailed Earnings Records for 1981 through 2007.
Before the Princeton study, most analyses had used simple multiple regression techniques relating income to school quality attributes such as average SAT score and Barron’s Index, as well as to individual student characteristics like SAT score and grades. The results from these models indicate a significant income advantage for selective schools over their less selective counterparts, an advantage that increases over time. Of course, if there were other key factors not accounted for in the regression equations – if the models were in fact incorrectly specified – then the resulting parameter estimates might be biased and the results potentially invalid.
The Princeton researchers were less credulous than their peers. One interesting finding was that for the subgroup of students who were admitted to an elite school but instead attended a less selective university, there was no noticeable income difference. This suggests that income may have less to do with the prestige of the school than with the capabilities and motivation of the students.
In follow-up work, the authors take the analysis a step further. In addition to the school and student attributes, they include an “unobserved” student characteristic, measured as the average SAT score of all the schools to which the student applied. The new variable serves as a proxy for motivation and ambition above and beyond measured student capability.
The results of regressions that include this variable alongside the others are provocative. With the motivation variable added to the model specification, the significance of the school selectivity indicator essentially evaporates. “We find that the return to college selectivity is sizeable for both cohorts in regression models that control for variables commonly observed by researchers, such as student high school GPA and SAT scores. However, when we adjust for unobserved student ability by controlling for the average SAT score of the colleges that students applied to, our estimates of the return to college selectivity fall substantially and are generally indistinguishable from zero.” In other words, it could well be that student capability, drive, and motivation, rather than school prestige, are the main drivers of lifetime income.
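The mechanics of this omitted-variable story are easy to reproduce with simulated data. The sketch below uses entirely made-up numbers, not the study’s data: incomes are driven only by an unobserved “ability” variable, yet a naive regression credits school selectivity with a sizeable effect. Adding an applied-schools average SAT as a proxy for ability shrinks that coefficient toward zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented world: income depends ONLY on unobserved ability;
# school selectivity merely tracks ability.
n = 50_000
ability = rng.normal(0, 1, n)                        # unobserved by the modeler
selectivity = ability + rng.normal(0, 1, n)          # avg SAT of school attended
applied_avg_sat = ability + rng.normal(0, 0.2, n)    # proxy: avg SAT of schools applied to
income = 50 + 10 * ability + rng.normal(0, 5, n)     # selectivity has ZERO true effect

def ols(predictors, y):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

naive = ols([selectivity], income)                    # omits ability entirely
controlled = ols([selectivity, applied_avg_sat], income)

print(f"selectivity coefficient, naive model:      {naive[1]:.2f}")
print(f"selectivity coefficient, with proxy added: {controlled[1]:.2f}")
```

The naive model hands selectivity a large, spurious coefficient because selectivity is correlated with the omitted ability term; once the proxy soaks up most of that signal, the selectivity coefficient collapses, which is exactly the pattern the Princeton researchers report.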
For those of us who believe in self-bootstrapping for success, the findings from this study are heartening. They also speak to the perils of ill-specified predictive models that are all too often the norm in the business world. If important variables are omitted or the functional form of the relationship between variables is incorrect, predictive analytics “findings” might well be spurious. Modelers beware!