Two Septembers ago, families of students at our suburban Chicago high school received an email from the principal detailing school performance over the previous 12 months. His note chronicled the considerable accomplishments of individual students, student organizations and school athletics. He also made special mention of the school's large number of new semi-finalists from the National Merit Scholarship program, a whopping 50 percent increase from the previous year.
While touting the new "scholars," the principal observed that the school had contracted with a local education company to coach top students on the vagaries of the PSAT, the test used to select semi-finalists. And though he stopped short of directly connecting the new program to the test results, I made a mental note to check back the following fall to see whether the large crop of scholars would persist from year to year. Call me suspicious.
As I feared, last year's email noted a much more modest-sized new class of semi-finalists and didn't mention the prep company at all. My guess is the principal realized the big jump in semi-finalists the previous year had been a random blip, the latest results more a return, or "regression," to the school's long-term mean.
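The effect is easy to see in a simulation with made-up numbers (not the school's actual data): if each year's count of semi-finalists is an independent draw around a long-term mean of 12, the year that follows a banner year tends to land back near 12, not near the banner figure.

```python
# Sketch with hypothetical numbers: yearly semi-finalist counts drawn
# independently around a long-term mean of 12 (std. dev. 3).
import random

random.seed(42)
counts = [random.gauss(12, 3) for _ in range(100_000)]

# Look only at years following an unusually good year (18+, about +2 sigma).
after_big_years = [counts[i + 1] for i in range(len(counts) - 1) if counts[i] >= 18]
avg_next = sum(after_big_years) / len(after_big_years)
print(f"{len(after_big_years)} banner years; average follow-up year: {avg_next:.1f}")
```

Because each year is drawn independently, the average follow-up comes out near the long-term mean of 12, which is exactly what "regression to the mean" describes.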
Confusing real change with an aberration that later returns to the long-term average is a common cognitive mistake leaders make when attempting to explain their organizations' successes and failures. Indeed, it seems we're often tripped up by such errors in thinking when it comes to probability and statistics; the failure to acknowledge regression to the mean is only one of many prominent misconceptions.
Our poor principal may have been guilty of a second, related fallacy as well: making assessments based on an inadequate sample size. I suspect, however, he wouldn't have been guilty of this mistake had the initial aberration been a 50 percent decline in scholars rather than an increase! The investment specialists who pester me constantly certainly see strong patterns in the small sample sizes that underpin their portfolio performance. Unfortunately, they generally come up empty when I insist on 10 years' experience as the minimum.
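A little arithmetic shows why the 10-year minimum matters. Suppose, as a deliberately unflattering assumption, that a manager's chance of beating the market in any given year is a coin flip. Lucky streaks are then common over three years and rare over ten:

```python
# Hypothetical model: each year a "manager" beats the market with
# probability 0.5, independent of every other year (pure luck, no skill).
p3 = 0.5 ** 3    # chance of beating the market three years running by luck
p10 = 0.5 ** 10  # chance of a ten-year streak by luck alone
print(f"3-year streak by chance: {p3:.1%}; 10-year streak: {p10:.2%}")
# prints: 3-year streak by chance: 12.5%; 10-year streak: 0.10%
```

Under this model, one luck-only manager in eight can show you a spotless three-year record, while a ten-year record filters out all but about one in a thousand.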
A few weeks ago, I waited in line at the 7-Eleven while a customer bought 10 lottery quick picks in succession, each new one after learning the previous was a bust. At purchase seven or eight he exclaimed to all in line: "I'm due." He wasn't, an unfortunate victim of the gambler's fallacy, "the tendency to think that future probabilities are altered by past events, when in reality they are unchanged." That the gentleman had picked eight losers in a row had no bearing on the ninth pick, just as tossing eight heads in a row with a fair coin doesn't make the ninth toss any more likely to be tails.
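A quick simulation makes the point: generate many nine-toss sequences of a fair coin, keep only those that open with eight straight heads, and look at the ninth toss. It comes up heads about half the time anyway.

```python
# Simulate fair-coin sequences and test the "I'm due" intuition.
import random

random.seed(0)
ninth_tosses = []
for _ in range(300_000):
    flips = [random.randint(0, 1) for _ in range(9)]  # 1 = heads, 0 = tails
    if all(f == 1 for f in flips[:8]):                # eight heads in a row
        ninth_tosses.append(flips[8])                 # record the ninth toss

frac_heads = sum(ninth_tosses) / len(ninth_tosses)
print(f"{len(ninth_tosses)} streaks; ninth toss was heads {frac_heads:.2f} of the time")
```

The streak is rare (about 1 sequence in 256 qualifies), but conditional on it having happened, the next toss is still a 50-50 proposition.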
The gambler's fallacy's cousin, the hot hand, often seen in sports, is equally pernicious. Hot hand thinking assumes that the gambler/shooter "is either 'hot' or 'cold,' depending on whether the luck is good or bad, and the good or bad luck will continue at a probability greater than chance." Indeed, the hot hand is more often than not just skill, independence and randomness playing out over many trials.
Ray Allen of the Boston Celtics is generally considered among the greatest 3-point shooters in NBA history, making almost 40 percent of his attempts over an illustrious career. If Allen takes ten 3-point shots per game during an 80-game season with a 0.4 chance of success on each, he'll hit a "hot" 6 or more in an average of over 13 games per year, and a smoking 7+ in more than 5 percent of games. Allen's hot hand is really nothing more than the expected distribution of results given his incredible shooting skill.
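Those figures follow directly from the binomial distribution, assuming each shot is an independent trial with a 0.4 make rate:

```python
# Binomial tail probabilities for 10 independent shots at a 0.4 make rate.
from math import comb

def binom_tail(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p6 = binom_tail(10, 0.4, 6)  # probability of making 6+ of 10 attempts
p7 = binom_tail(10, 0.4, 7)  # probability of making 7+ of 10 attempts
print(f"6+: {p6:.3f} (about {80 * p6:.1f} of 80 games); 7+: {p7:.3f}")
```

The tail probabilities come out to roughly 0.166 for 6-plus makes (about 13.3 games in an 80-game season) and 0.055 for 7-plus, matching the figures above with no streakiness assumed at all.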
While people struggle with basic statistics, they seem even more befuddled by the concepts of elementary probability. An axiom of basic probability is that the likelihood of the intersection of two events is less than or equal to the likelihood of either event separately. Yet researchers repeatedly find subjects guilty of the conjunction fallacy, judging the probability of A intersect B to be greater than the probability of A alone. Nobel laureate Daniel Kahneman and his collaborator Amos Tversky demonstrated the fallacy in action with "the Linda problem," in which subjects told a story about a fictitious Linda were more likely to believe she was both a bank teller and a feminist than a bank teller alone.
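The axiom can be checked in miniature with entirely hypothetical rates: however plausible the story makes the conjunction sound, the joint event "teller and feminist" is a subset of "teller," so its probability can never be larger.

```python
# Made-up marginal rates, chosen only to illustrate the axiom.
import random

random.seed(1)
N = 100_000
teller = [random.random() < 0.05 for _ in range(N)]    # hypothetical 5% rate
feminist = [random.random() < 0.60 for _ in range(N)]  # hypothetical 60% rate

p_teller = sum(teller) / N
p_both = sum(t and f for t, f in zip(teller, feminist)) / N
print(f"P(teller) = {p_teller:.3f}, P(teller and feminist) = {p_both:.3f}")
```

Whatever rates you plug in, the conjunction's frequency can only tie or fall below the single event's, since every "both" case is also a "teller" case.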
When it comes to the rules of conditional probability, even the intelligentsia can be far off the mark. Doctors and other highly educated medical staff are often tripped up by reasoning surrounding Bayes' theorem. Bayes' theorem states that the probability of a hypothesis given the evidence is equal to the probability of the evidence given the hypothesis multiplied by the probability of the hypothesis, all divided by the probability of the evidence: P(H|E) = P(H)*P(E|H)/P(E). In BayesSpeak, the posterior probability, P(H|E), is equal to the prior probability, P(H), times the likelihood ratio, P(E|H)/P(E).
Consider an example. A test for a disease that occurs in only 0.03% of the population has the following characteristics: if someone has the disease, the test produces a positive result 99% of the time and a false negative 1% of the time. If, on the other hand, a person doesn't have the disease, the test gives a negative result 95% of the time and a false positive 5% of the time. What is the probability that a person who tests positive actually has the disease?
Though that person is probably shaking in her boots, Bayes' theorem tells us that the probability she actually has the disease given the positive test result is P(D|+T) = P(D)*P(+T|D)/P(+T). Now since P(+T) = P(+T|D)*P(D) + P(+T|~D)*P(~D), which comes out to 0.99 * 0.0003 + 0.05 * 0.9997 = 0.050282, the desired probability is then 0.0003*(0.99/0.050282) = 0.0059, or about 0.6%, a much less menacing threat.
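Recomputing from the rates stated in the example makes each step explicit:

```python
# Bayes' theorem applied to the disease-test example above.
p_d = 0.0003           # prior: prevalence of the disease, P(D)
p_pos_given_d = 0.99   # sensitivity, P(+T | D)
p_pos_given_nd = 0.05  # false-positive rate, P(+T | ~D)

# Total probability of a positive test, over both diseased and healthy cases.
p_pos = p_pos_given_d * p_d + p_pos_given_nd * (1 - p_d)

# Posterior: P(D | +T) = P(D) * P(+T | D) / P(+T)
posterior = p_d * p_pos_given_d / p_pos
print(f"P(+T) = {p_pos:.6f}; P(D | +T) = {posterior:.4f}")
```

The overwhelming majority of positives come from the huge pool of healthy people hitting the 5% false-positive rate, which is why the posterior lands well under 1% despite the test's 99% sensitivity.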
There are several fallacies in applying Bayes' theorem to the real world. The first is the inversion fallacy, in which the probability of the evidence given the hypothesis is confused with the probability of the hypothesis given the evidence. The second is the base rate fallacy, in which assessing the probability of the hypothesis given the evidence fails to adequately factor in the probability, or base rate, of the hypothesis alone.
In the disease example, the inversion fallacy would confuse P(D|+T), which is small, with P(+T|D), which is almost one. Part of the problem might be inadequately accommodating the base rate, P(D), in the calculation. Since P(D) is very small, multiplying by it greatly reduces the posterior probability. Hence inversion and base rate neglect can often make small probabilities seem much larger than they actually are.
I think my conservative, self-made millionaire friend makes inversion or base rate mistakes when he argues that anyone could achieve the level of success he's enjoyed as a stock broker if she simply worked hard enough. He confuses the probability of getting rich given that you've worked hard, which, despite our fantasies, is small, with the probability that you've worked hard given that you're rich, which I'm sure is high. My friend inverts the probability of A given B into the probability of B given A. He also fails to account for the minuscule base rate (<3%) of millionaires in the population.
There's certainly no shortage of current investigation into the psychological, logical, economic and statistical fallacies that seem so pervasive in human behavior. I'll continue to write about these biases that trip up our decision-making in coming columns.