I’ve received several emails/comments about my recent series of blogs on Duncan Watts’ interesting book “Everything is Obvious: *Once You Know the Answer -- How Common Sense Fails Us.” Watts’ thesis is that the common sense that generally guides us well for life’s simple, mundane tasks often fails miserably when decisions get more complicated.

Three of the respondents suggested I take a look at “Thinking, Fast and Slow,” by psychologist Daniel Kahneman, who was awarded the Nobel Prize in Economic Sciences for the “seminal work in psychology that challenged the rational model of judgment and decision making” he conducted with his late collaborator, psychologist Amos Tversky.

I read “Thinking, Fast and Slow” a while back and agree with the respondents that it’s a terrific work explaining how humans routinely make mistakes in judgment. In fact, when I think of other books on the topic I’ve blogged about over the past few years, e.g. “The Halo Effect,” “Fooled by Randomness,” “The Drunkard’s Walk,” “The Invisible Gorilla,” and “Everything is Obvious,” I’d say “Thinking Fast …” is the most scientific and comprehensive of the bunch. The other books, in contrast, deliver their message about flawed judgment by combining science with entertaining narrative.

Perhaps TFS’s most significant contribution is as an exhaustive compendium of findings on judgment and decision making spawned by Kahneman and Tversky’s research in cognitive psychology. When the two started working together in the 1970s, the guiding principles of human behavior were that people generally acted rationally, and that on those occasions when they didn’t, explanations could be found among emotions such as fear, affection and hatred. Their work “challenged both assumptions … documented systematic errors in the thinking of normal people, and traced these errors to the design and machinery of cognition rather than to the corruption of thought by emotion.” Indeed, much of the book revolves around the many systematic errors, or biases, that recur regularly in human intuition.

TFS identifies two systems of the mind. “System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.”  

System 1 is, of course, thinking fast, while System 2 is thinking slow. System 1 is generally very good at the work it performs, but it harbors biases and has little understanding of logic and statistics. Since it operates automatically, its biases are unavoidable. System 2, on the other hand, with its enhanced monitoring and methodical vigilance, can prevent many System 1 errors, but it is too slow and inefficient for routine decisions. The best compromise is to “learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high.” Sound a bit like Watts’ common sense/uncommon sense dialectic?

A statistician by vocation, I’ve been interested in human statistical failings for a long time – actually, since reading Kahneman and Tversky’s seminal paper “Judgment Under Uncertainty: Heuristics and Biases” (Appendix A in TFS) in grad school many years ago. The article identifies cognitive biases resulting from judgmental heuristics that trip up even sophisticated scientists. The authors classify several of these statistical biases under the rubric “representativeness.”

A commonly made error pertains to the “law of small numbers”, where “our predilection for causal thinking exposes us to serious mistakes in evaluating the randomness of truly random events … if you follow your intuition, you will more often than not err by misclassifying a random event as systematic.” Think Super Bowl-winning New York Giants head coach Tom Coughlin, under fire when his team lost four straight games in the middle of the season, would agree?
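How often should we expect a “systematic-looking” losing streak from chance alone? Here’s a quick back-of-the-envelope simulation (my own illustrative sketch, not from the book) for an evenly matched .500 team playing a 16-game schedule of independent games:

```python
import random

def has_losing_streak(num_games=16, streak_len=4, p_win=0.5):
    """Simulate one season of independent games; return True if it
    includes a losing streak of at least streak_len games."""
    run = 0
    for _ in range(num_games):
        if random.random() < p_win:   # a win resets the run
            run = 0
        else:                         # a loss extends the run
            run += 1
            if run >= streak_len:
                return True
    return False

def estimate_streak_rate(trials=100_000):
    """Monte Carlo estimate of the chance a season contains such a streak."""
    hits = sum(has_losing_streak() for _ in range(trials))
    return hits / trials

print(f"P(4-game losing streak in a 16-game season) ~ {estimate_streak_rate():.2f}")
```

The estimate comes out around 0.4, so a four-game skid by a .500 team is weak evidence that anything systematic has gone wrong.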

That causes trump statistics is even more evident in Bayesian conditional-probability analysis. There are two components to a Bayesian calculation: the generic or base rates and the case-specific probabilities, both of which are combined by Bayes’ rule to determine the correct posterior probability. Yet individuals, looking for cause and effect, predictably ignore the base rates and overweight the specific probabilities in their personal computations, often with catastrophic results.

The author illustrates the bias with the following hypothetical example: A taxi was involved in a hit-and-run accident. Two cab companies, Green, with 85% of the taxis, and Blue, with 15%, operate in the city. A witness, whose reliability is calibrated at 80%, identified the hit-and-run taxi as Blue. What’s the probability that the cab involved in the accident is Blue? Ignoring the base rate that 85% of the taxis are Green, most respondents incorrectly conclude that the probability is 80%. Proper application of Bayes’ rule produces a much less certain figure of 41%.
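The arithmetic is simple to verify. Here’s a short Python sketch of the Bayes’ rule calculation for the taxicab problem, using the figures above and assuming, as the problem does, that the witness is equally reliable for both colors:

```python
def posterior_blue(base_blue=0.15, witness_accuracy=0.80):
    """P(cab is Blue | witness says Blue) via Bayes' rule."""
    base_green = 1.0 - base_blue
    # P(says Blue) = P(says Blue | Blue)P(Blue) + P(says Blue | Green)P(Green)
    p_says_blue = witness_accuracy * base_blue + (1 - witness_accuracy) * base_green
    return (witness_accuracy * base_blue) / p_says_blue

print(f"P(Blue | witness says Blue) = {posterior_blue():.2f}")  # about 0.41
```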

Our need to attribute cause and effect also makes us miss instances of “regression to the mean”, in which below-average performance is followed by above-average performance and vice-versa -- by chance alone. The problem, of course, is that there’s much more randomness and luck involved in predicting behavior and performance than we’d like to acknowledge. Kahneman’s personal regression-to-the-mean “heuristic” for predicting scores in a pro golf tournament is telling:

  • above-average score on day 1 = above-average talent + lucky on day 1
  • below-average score on day 1 = below-average talent + unlucky on day 1
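Kahneman’s heuristic is easy to see in a simulation. In the sketch below (my own illustration; the normal distributions and parameter values are assumptions, not Kahneman’s), each golfer’s daily score relative to par is talent plus independent daily luck. The golfers who post the best day-1 scores are, on average, noticeably closer to the field mean on day 2, because the luck doesn’t carry over:

```python
import random

def simulate_tournament(n_golfers=10_000, talent_sd=3.0, luck_sd=3.0, seed=42):
    """Score (relative to par) = talent + independent daily luck.
    Returns average day-1 and day-2 scores for the best day-1 decile."""
    rng = random.Random(seed)
    talents = [rng.gauss(0, talent_sd) for _ in range(n_golfers)]
    day1 = [t + rng.gauss(0, luck_sd) for t in talents]
    day2 = [t + rng.gauss(0, luck_sd) for t in talents]
    best = sorted(range(n_golfers), key=lambda i: day1[i])[: n_golfers // 10]
    avg_day1 = sum(day1[i] for i in best) / len(best)
    avg_day2 = sum(day2[i] for i in best) / len(best)
    return avg_day1, avg_day2

d1, d2 = simulate_tournament()
print(f"Best day-1 decile: avg day-1 score {d1:+.1f}, avg day-2 score {d2:+.1f}")
# With equal talent and luck variance, the day-2 average falls back about
# halfway toward the field mean of 0: regression to the mean.
```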

We’re also overconfident in our predictions, even when there’s no support for them. Without evidence, the “best” prediction is simply the average of all cases. But because of representativeness or “halo” biases, individuals often offer bold – though unwarranted – predictions. “Psychologists who conduct selection interviews often experience considerable confidence in their predictions, even when they know of the vast literature that shows selection interviews to be highly fallible.”

If we’re not very good statisticians, we might be even worse probabilists. A basic probability axiom holds that the likelihood of conjunctive (and) events A and B must be less than or equal to the likelihood of either event separately. Similarly, the probability of disjunctive (or) events A and B must be greater than or equal to the probability of either event individually. The finding from K&T’s research? Subjects judge conjunctive events as more likely than simple ones, which they in turn find more likely than disjunctive ones – precisely the reverse of what the math says.
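The axioms themselves are easy to state in code. A tiny illustration (made-up probabilities for two independent events A and B, nothing from the book):

```python
p_a, p_b = 0.6, 0.5                 # made-up probabilities for events A and B

p_and = p_a * p_b                   # conjunction, assuming independence
p_or = p_a + p_b - p_and            # disjunction, by inclusion-exclusion

assert p_and <= min(p_a, p_b)       # P(A and B) can't exceed either event alone
assert p_or >= max(p_a, p_b)        # P(A or B) can't be less than either event alone

print(f"P(A and B) = {p_and:.2f} <= min(P(A), P(B)) = {min(p_a, p_b):.2f}")
print(f"P(A or B)  = {p_or:.2f} >= max(P(A), P(B)) = {max(p_a, p_b):.2f}")
```

Intuition, per K&T, tends to run those inequalities the other way.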

Subsequent blogs will investigate systematic economic biases as well as Kahneman’s “prospects” for making us more rational and scientific in our decision making – guiding us from error-prone System 1 to the methodical System 2.
