Relative Versus Absolute Predictions - I
BusinessWeek magazine often aims to be provocative with its lead articles, but the January 17, 2008 cover story, "Do Cholesterol Drugs Do Any Good?" seemed to hit a particularly sensitive public nerve. The article raised a host of issues, including consideration of the individual patient versus the population, the relationship between cholesterol levels and coronary health in general, and the relative benefits of statin drugs for a heart-healthy population versus those with coronary disease.1
The article's most important contribution, however, was to raise public awareness of the difference between relative and absolute risk of disease. Medical studies that deploy predictive models often quote performance differences in relative terms. An example is a control group experiencing 50 percent more disease than a group treated with a certain drug. On an absolute scale, however, that 50 percent might be much less meaningful: say, 2 percent of the treatment group experiences disease versus 3 percent of the control. In this instance, the 50 percent difference represents just one case out of 100. In other words, 100 patients would need to be followed to find one less occurrence of disease in the treatment group than in the control. Indeed, this calculation, known as the number needed to treat or NNT, and growing in popularity as a reporting statistic, represents the number of patients who must be treated to spare one person the disease. What seems compelling on a relative scale is often not nearly as meaningful in absolute terms.
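The arithmetic behind relative risk, absolute risk reduction, and NNT can be sketched in a few lines, using the 2 percent versus 3 percent figures from the example above:

```python
import math

# Disease rates from the example: 3% of the control group and
# 2% of the treatment group experience disease.
control_risk = 0.03    # disease rate in the untreated (control) group
treatment_risk = 0.02  # disease rate in the treated group

# Relative difference: the control group has 50% more disease than treatment.
relative_increase = (control_risk - treatment_risk) / treatment_risk

# Absolute risk reduction: just one percentage point.
arr = control_risk - treatment_risk

# Number needed to treat: patients treated to spare one person the disease.
nnt = 1 / arr

print(f"relative increase in control: {relative_increase:.0%}")  # 50%
print(f"absolute risk reduction:      {arr:.1%}")                # 1.0%
print(f"number needed to treat:       {nnt:.0f}")                # 100
```

The same 50 percent relative gap would yield a very different NNT at higher baseline rates, which is exactly why reporting the relative figure alone can mislead.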
Eric Rifkin and Edward Bouwer expand on the differences between relative and absolute risk in their gem of a book, The Illusion of Certainty: Health Benefits and Risks.2 The authors discuss the effectiveness of statins for treating heart disease, as well as a host of other important disease inquiries, including Vioxx and heart attacks, the benefits of colorectal cancer screening, ecological risk assessment, and the health risks of Asian oysters in the Chesapeake Bay. Rifkin and Bouwer also introduce the Risk Characterization Theater (RCT), a graphic that resembles a theater with 1,000 seats.3 For a given disease, the RCT shows the absolute risk - the number of additional occurrences of disease in the untreated group out of 1,000 total cases, as illustrated in Figure 1. A preponderance of filled seats is thus an indication of a large absolute difference between treated and untreated. In Figure 1, the darkened seats are bad, representing additional disease. Of course, the RCT can also be used when treatment and control are reversed so that control is good and treatment is bad. In addition, it is often quite meaningful to compare RCTs based on dimensional variables such as age and risk exposure.
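A minimal text sketch conveys the idea of the Risk Characterization Theater: 1,000 "seats," with darkened seats marking additional disease occurrences in the untreated group. The count of 10 filled seats here is hypothetical, chosen only to match the one-case-per-100 arithmetic discussed earlier; the seat layout is an invented stand-in for the book's graphic.

```python
def rct(filled, total=1000, per_row=50):
    """Render a theater of `total` seats as text, darkening (#) the
    first `filled` seats and leaving the rest (.) empty."""
    seats = "#" * filled + "." * (total - filled)
    return "\n".join(seats[i:i + per_row] for i in range(0, total, per_row))

# 10 additional cases per 1,000 untreated -- a sparse theater,
# despite a 50% relative difference between the groups.
print(rct(filled=10))
```

The visual punch of the RCT comes from how empty the theater usually is: a relative difference that sounds dramatic often darkens only a handful of the 1,000 seats.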
Relative Versus Absolute Predictions - II
With one son in college and another soon to be a high school senior, I'm probably closer to the college admissions process than I care to be. Last year, I remember listening to one parent's tirade about the inequities of the process. Her son, in the top 3 percent of his senior class with a high ACT score and a litany of Advanced Placement (AP) Program success that'd make your head spin, had been wait-listed for a prestigious private university. Another kid, a solid top 10 percent student with decent test scores, but without the éclat of her son, had not only been accepted to the same school, but also awarded a significant scholarship. How could that happen?
Colleges generally report characteristics of admitted or enrolled students, often by reporting the midrange of grades and test scores. A state university might, for example, note that the middle 50 percent of the incoming freshman class had high school grade point averages (GPAs) that ranged from 3.5 to 3.9, with ACT scores between 26 and 30. The university might admit over 50 percent of its applicants. An Ivy League school might have comparable interquartile GPAs of 3.9 to 4.0, with ACT scores of 31 to 34 - but accept only 15 percent of applicants.
A mistake many parents make with their college-bound firstborns is to underestimate the selectivity of top schools, assuming their kids have a strong chance for admission if they present grades and test scores at or above the median of those accepted in previous years. But while there's a relationship between grades, test scores, and admissions for the most selective schools, there's also a significant oversupply of quality applicants, leading ultimately to a surplus of disappointed families when acceptance and rejection letters are sent. Indeed, this is another example of flawed thinking based on the different interpretations of relative versus absolute risk.
Consider the following admission/rejection figures from Brown University,4 listed as the seventh most selective national university in 2008 by U.S. News and World Report, with an overall admissions rate of 14 percent. For the class of 2011, Brown accepted 29 percent of the high school valedictorians who applied, and 19 percent of applicants reporting a top 10 percent rank in class, but only 4 percent of those whose rank was lower than top 10 percent. Brown also admitted 29 percent of those applicants reporting the top score of 800 on the critical reading section of the SAT, in contrast to 20 percent of those who scored 750 to 799, 16 percent who scored 700 to 749, 11 percent of those scoring 650 to 699, and less than 10 percent of those under 650.5 So even with a strong relationship between credentials and odds of admission, there is still a scarcity of opportunity at Brown.
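The Brown figures make the same point as the theater: translating the reported relative admit rates into absolute outcomes per 1,000 applicants shows rejection dominating every group. The rates below come from the text; the per-group applicant counts of 1,000 are hypothetical, used only to put the rates on a common absolute scale.

```python
# Brown University admit rates for the class of 2011, as reported
# in the text, scaled to hypothetical groups of 1,000 applicants each.
admit_rates = {
    "valedictorian":    0.29,
    "top 10% of class": 0.19,
    "below top 10%":    0.04,
    "SAT CR 800":       0.29,
    "overall":          0.14,
}

for group, rate in admit_rates.items():
    admitted = round(1000 * rate)
    print(f"{group:<17} {admitted:>3} admitted, {1000 - admitted:>3} rejected per 1,000")
```

Even for valedictorians, more than 700 of every 1,000 applicants are turned away: a strong relative advantage, but long odds in absolute terms.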