It's back to school time, so, like every year, I rush out to drop $10 on the latest U.S. News compendium of Best Colleges. And, like every year, I'm immediately annoyed at what I read.

Yet I tolerate this frustration, actually finding the ratings an insightful illustration of performance measurement useful for BI.

After more than 20 years of ratings, the U.S. News Best Colleges is now the college bible for families of many high school juniors and seniors. Best Colleges has made its mark by establishing what it claims is an objective methodology for rating colleges based on readily accessible measures.

Among the key indicators are academic reputation, graduation and retention rates, student selectivity and financial resources. The weightings given to each measure and the secret sauce of how the whole is determined from the parts are tweaked from year to year.

There are a number of changes to this year's methodology from 2010. Chief among them is a heavier weighting of graduation rate performance (GRP) – from 5 percent to 7.5 percent of the overall score. GRP is implemented as the difference between actual and predicted graduation rates, with larger positive numbers apparently indicating more value-add from the school. I'm wary. What do Cal Tech, MIT, Carnegie Mellon and Georgia Tech have in common? All are among the top engineering schools in the country – and all have large deficit GRPs. Is this testimony to the absence of value-add at these schools, or instead acknowledgment that engineering education is tough? At the same time, are the surplus GRPs for prestigious sports schools like Notre Dame, UCLA, Virginia, Washington and Penn State proof of great education, or indicative of a few easy majors, perhaps to fuel athletic programs?
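
To make those mechanics concrete, here's a minimal sketch of how a weighted composite like this behaves. The measure names, scores and weights below are my own stand-ins, not U.S. News's formula, which it doesn't fully disclose; the point is simply that a GRP deficit (actual minus predicted graduation rate) costs a school more once its weight climbs from 5 to 7.5 percent.

# A rough sketch of the kind of weighted composite U.S. News describes. The
# measure names, weights and scores are invented for illustration; the real
# normalization and "secret sauce" are not public.

def grp(actual_grad_rate, predicted_grad_rate):
    """Graduation rate performance: actual minus predicted graduation rate,
    in percentage points. Positive = apparent value-add; negative = a deficit."""
    return 100 * (actual_grad_rate - predicted_grad_rate)

def composite(scores, weights):
    """Weighted sum of per-measure scores (assumed already on a common scale)."""
    return sum(weights[m] * scores[m] for m in weights)

# A hypothetical engineering school with a GRP deficit: an 86 percent actual
# graduation rate vs. 93 percent predicted from its entering-class profile.
scores = {"reputation": 90, "selectivity": 95,
          "financial_resources": 92, "grp": grp(0.86, 0.93)}   # grp = -7

weights_2010 = {"reputation": 0.45, "selectivity": 0.30,
                "financial_resources": 0.20, "grp": 0.05}
weights_2011 = {"reputation": 0.425, "selectivity": 0.30,
                "financial_resources": 0.20, "grp": 0.075}

# Same school, same underlying data: the GRP deficit now subtracts 0.525 points
# instead of 0.35, and the composite moves even though nothing about the school
# changed -- methodology, not quality.
print(composite(scores, weights_2010))   # roughly 87.05
print(composite(scores, weights_2011))   # roughly 84.63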

A second 2011 refinement changes the peer assessment of academic reputation to include a survey of high school counselors in addition to the existing survey of college presidents, provosts and deans. In contrast to last year's 25 percent weighting, this year only 15 percent of the score comes from college presidents et al., with the remaining 7.5 percent from counselors. Wisconsin and Illinois, both rated higher by academics than by counselors, dropped noticeably in the rankings.

It'd be interesting to determine whether the counselor survey methodology per se had a perverse impact on their scores. For example, were more counselors from California contacted, perhaps leading to a bias toward the local schools they know best?

The annual changes in measures and weights may have more impact on rankings than organic deltas in school quality. A few years ago, a significant methodology change catapulted Cal Tech to the top ranking. That change was essentially undone the next year and “sanity” restored with the return of Harvard/Princeton to No. 1. What would be interesting to see is the re-calculation of historical rankings with each year's new formula – chronicling the re-write of history.

In BI, this is akin to handling slowly changing dimensions by adding new records with effective-date flags to dimension tables rather than overwriting the old ones. I suspect what would be found is that most change is an artifact of methodology.
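
For what it's worth, here's a minimal sketch of that Type 2 pattern, in Python rather than SQL and with an invented methodology table: each edition's weights get their own row with effective dates, so prior formulas are preserved and past years could be re-scored under any edition – exactly the rewrite-of-history exercise described above.

from datetime import date

# Toy Type 2 slowly changing dimension for the ranking methodology itself.
# Columns and weights are invented; the idea is just that a formula change
# adds a new row with effective dates instead of overwriting the old one.
methodology_dim = [
    {"method_key": 1, "edition": 2010, "grp_weight": 0.050,
     "valid_from": date(2009, 9, 1), "valid_to": date(9999, 12, 31),
     "is_current": True},
]

def add_methodology_version(dim, edition, grp_weight, effective):
    """Type 2 change: expire the current row, then append the new version."""
    for row in dim:
        if row["is_current"]:
            row["is_current"] = False
            row["valid_to"] = effective
    dim.append({"method_key": len(dim) + 1, "edition": edition,
                "grp_weight": grp_weight, "valid_from": effective,
                "valid_to": date(9999, 12, 31), "is_current": True})

add_methodology_version(methodology_dim, 2011, 0.075, date(2010, 9, 1))

# Every historical formula survives, so any past year's data could be re-scored
# under any edition's weights to see how much "movement" is methodology alone.
for row in methodology_dim:
    print(row["edition"], row["grp_weight"], row["is_current"])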

One consistent finding of the U.S. News rankings is that no public university cracks the top 20, not even world-renowned UC Berkeley. This is a consequence of the rating methodology, reflecting a bias towards selectivity, graduation rates and financial resources that favors privates. Yet if you look at ratings of specific departments like business, engineering, biology and political science, many publics consistently stand out. Apparently, the whole is less than the sum of the parts. Indeed, ratings that obsess over reputation and research prowess routinely report half a dozen or more publics among the top 25 U.S. universities.

Selectivity is a major factor in the U.S. News rankings. The lower the acceptance rate and the higher the interquartile range of ACT/SAT scores of entering students, the more selective, and hence higher-rated, the school. Also critical is the percentage of entering students who graduated in the top 10 percent of their high school classes, higher numbers obviously better.

But percentage in the top 10 percent is problematic since it's uninformative for top schools. No. 1 Harvard reports 95 percent in the top 10 percent, while UC Davis cites 100 percent. Is UC Davis more selective than Harvard? Hardly. Probably two-thirds of Harvard freshmen graduated in the top 2 percent of their high school classes. I doubt UC Davis can match that. A continuous “density” tabulation of percentage class ranking would be invaluable for discriminating selectivity.
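
On made-up numbers, here's the sort of tabulation I have in mind. The bins and both "schools" below are hypothetical, but they show how two student bodies that are each 100 percent in the top 10 percent can look very different once the full rank distribution is counted.

from bisect import bisect_left

# Sketch of a "density" tabulation of class rank, on made-up data. rank_pct is
# a student's high school class rank expressed as a percentile (1 = top 1%).
BIN_EDGES = [2, 5, 10, 25, 50, 100]
BIN_LABELS = ["top 2%", "2-5%", "5-10%", "10-25%", "25-50%", "bottom 50%"]

def rank_density(rank_pcts):
    """Share of entering students falling into each class-rank bin."""
    counts = [0] * len(BIN_EDGES)
    for pct in rank_pcts:
        counts[bisect_left(BIN_EDGES, pct)] += 1
    n = len(rank_pcts)
    return {label: count / n for label, count in zip(BIN_LABELS, counts)}

# Two hypothetical schools, both "100 percent in the top 10 percent" under the
# usual cutoff, yet clearly different once the whole distribution is tabulated.
school_a = [1, 1, 2, 2, 2, 3, 4, 5, 6, 8]     # half the class in the top 2%
school_b = [6, 7, 7, 8, 8, 9, 9, 9, 10, 10]   # everyone just inside the cutoff
print(rank_density(school_a))
print(rank_density(school_b))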

Under-the-radar Lehigh University in Pennsylvania reports 93 percent of incoming freshmen from the top 10 percent of their high school graduating classes. But a footnote next to the 93 percent figure reveals that less than half of incoming freshmen actually submit their rank. This is because many high schools, especially privates and affluent publics with a surplus of outstanding students, no longer report individual class rank. Their reasoning? The forced ranking hurts their students in comparison to those of lesser schools. One bet I'd make: if high school rank were actually known for all Lehigh students, the top 10 percent figure would be quite a bit less than 93 percent.

Indeed, all high schools are not equal. My daughter, a high school junior, currently ranks in the top 3 percent of her class at our excellent suburban public school. Yet if she were to transfer to the selective private school a few miles away, she'd probably not even make the top 10 percent of that class. This shouldn't be surprising: private schools select their students; publics do not. In fact, several elite New England boarding schools – Exeter and Andover come to mind – routinely have students in the bottom half of their classes accepted to Ivies.

The annual findings are not all suspicious. Several schools that were not very prominent in my day now justifiably make the elite list. Washington U in St. Louis, Boston College and USC come to mind. Always excellent academically, but with just regional appeal back then, Wash. U has been an innovator in high school student marketing, perfecting the “recruit to reject” strategy for increasing the applicant pool and hence selectivity. Those efforts have paid off handsomely: Wash. U is now a to-die-for destination for many smart preps. I certainly can attest to its popularity in suburban Chicago. Boston College was once faithful to its mission of educating the sons and daughters of Boston Catholics. Now BC has a national cachet and processes over 30,000 applicants annually. And USC was once derisively referred to as “University of Second Choice” by UCLA snobs. Not any more.

My overall take on the booming college ratings “business”? It certainly helps sharpen the focus on the components of school quality, a good thing. It's also fun to monitor the evolving rankings methodology, evaluating pieces that might be pertinent for business performance measurement. But one has to be wary of methodological landmines in the interpretation of Best Colleges data. What looks like year-to-year change might in fact be little more than computational chicanery. When I was in college over 30 years ago, the consensus top five schools were Harvard, Princeton, Yale, Stanford and MIT. The top five today from my current research? Harvard, Princeton, Yale, Stanford and MIT.
