My daughter's a pretty good high school volleyball player. Her travel team has enjoyed a measure of success over the last two years, and the players have started to attract attention from college programs. My daughter was delighted with a letter she received a few months ago from one college in particular, a school she dreams about independent of volleyball. When she shared her interest with one of the parents at the end-of-season club party, the adult mildly scoffed, opining that the school's volleyball program was a loser and destined to remain one, citing a dismal 2008 season in which the team lost more than 70% of its games overall and 90% in conference. Concerned, I checked the school's recent record and found something much different: a 60% winning record in the six years prior to 2008. Even taking the last dismal year into account, the school's seven-year performance had been competitive -- a middle-of-the-pack team in a very tough conference.

Several years earlier, when my daughter's cohort was preparing for high school, one of the families broke ranks with the consensus of having the girls attend our community's excellent public school, choosing instead an exclusive -- and expensive -- prep school. They cited better test scores as a primary consideration, noting a significant difference between the schools in the number of prestigious National Merit semifinalists the previous year. When I looked at the numbers for the five years prior, however, the National Merit difference disappeared. The parents' one-year sample from the public school happened to be a random low-water mark.

The recurring theme of Leonard Mlodinow's The Drunkard's Walk: How Randomness Rules Our Lives, first mentioned in Part I, is that the general population isn't innately proficient with probability and statistics, consistently making predictable mistakes in its internal calculations of likelihoods, patterns and measurement error.
As probabilists, we see patterns where random fluctuations might in fact prevail. As statisticians, we're often too quick to generalize from very small samples, in many cases making rash decisions out of a need to attribute cause and effect. And we generally don't understand the concept of measurement error at all, befuddled by variations in multiple measurements of the same thing. The scary part for decision-making is that the errors are both deductive -- from population to sample -- and inductive -- from sample to population.

Mlodinow takes us for a gentle ride through this blindness, adroitly mixing amusing historical developments in probability and statistics with illustrations of our predictable, quantitatively challenged irrationality. A long way from my P&S student days, I was quite entertained by the stories of important figures like Cardano, Galileo, Pascal, Fermat, Bernoulli, Laplace, Gauss, Galton, Pearson and Fisher. And the author's discussion of the Paramount and Columbia movie executives who were cashiered after short runs of failure, but whose legacy performance turned out strong post-departure, certainly resonates in the business world. Regression to the mean has probably made and broken more executives than we know. It often works quickly, too: look for the new coach of last year's 0-16 Detroit Lions to be a beneficiary this season.

I've written quite a bit about conditional probability and Bayes' Law, yet still found Mlodinow's treatment of the topic illuminating. Rather than derive the algebraic Bayesian formulas, the author sticks with the simple concept of sample space to demonstrate his ideas. Unconditional probability is thus just the ratio of included sample space cases to total cases; conditional probability further restricts the cases in the denominator. Mlodinow's discussion and calculations pertaining to false positives in medicine and law particularly hit the mark.
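The sample-space idea is easy to make concrete. A minimal sketch, using two fair dice as an illustrative example (my example, not one from the book): unconditional probability counts included cases over all 36 outcomes, while conditioning simply shrinks the denominator to the cases where the condition holds.

```python
import itertools

# Sample space for two fair dice: 36 equally likely outcomes.
space = list(itertools.product(range(1, 7), repeat=2))

# Unconditional probability: included cases / total cases.
# P(sum == 8): five outcomes -- (2,6),(3,5),(4,4),(5,3),(6,2)
p_sum8 = sum(1 for a, b in space if a + b == 8) / len(space)

# Conditional probability restricts the denominator to cases
# satisfying the condition.  P(sum == 8 | first die > 3)
restricted = [(a, b) for a, b in space if a > 3]
p_cond = sum(1 for a, b in restricted if a + b == 8) / len(restricted)

print(p_sum8)  # 5/36, about 0.139
print(p_cond)  # 3/18, about 0.167
```

No algebra required -- just counting cases, which is exactly the appeal of Mlodinow's approach.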
Alas, humans are not very adept at Bayesian thinking either, often making the mistake of inverting conditional probability -- confusing the probability of A given B with the probability of B given A. As an illustration, consider the all-American bromide: “You too can become a millionaire if you dedicate yourself and work hard.” In this case, we'd like to know the probability of becoming a millionaire, given that we work hard. Current demography suggests that about 2.2% of Americans are millionaires. If we assume that about 30% of Americans work hard, and that 90% of American millionaires are hard workers, then, using Bayes' Law, the probability of an individual becoming a millionaire, given that she works hard, is .022*.9/.3 = .066, or 6.6% -- hardly inspiring odds. The mistake that's often made is to confuse the 6.6% figure with the 90% of millionaires who worked hard to get where they are. That type of error often makes a good story, where work and persistence pay off. Mlodinow's discussion of the luck that helped Bill Gates get started with DOS, and the good fortune that launched Bruce Willis's acting career, can be incorrectly viewed through such an inverted worthiness lens.

In the end, human shortcomings with probability and statistics often become predictable irrationality: “...people have a very poor conception of randomness; they do not recognize it when they see it and they cannot produce it when they try; what's worse, we routinely misjudge the role of chance in our lives and make decisions that are demonstrably misaligned with our best interests.” Mlodinow cites the seminal work of cognitive scientists Amos Tversky and Daniel Kahneman, whose research on decision-making heuristics and biases became the foundation for the study of predictably irrational behavior. Early on, Kahneman learned that regression to the mean, not positive and negative reinforcement, best explained patterns of behavior in flight training.
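The flight-training effect is straightforward to simulate. In this sketch -- an illustrative model with made-up parameters, not Kahneman's actual data -- each landing is a fixed skill plus random noise. Pilots scolded after a terrible first landing improve on the second regardless of any reinforcement, simply because their bad score was mostly bad luck.

```python
import random

random.seed(7)

# Illustrative model: a landing score is stable skill plus noise.
def landing(skill):
    return skill + random.gauss(0, 2)  # noise dominates skill spread

pilots = [random.gauss(0, 1) for _ in range(5000)]
first = [landing(s) for s in pilots]
second = [landing(s) for s in pilots]

# Take the worst decile on the first landing (the "scolded" group)
# and measure their average change on the second landing.
worst = sorted(range(len(first)), key=lambda i: first[i])[:500]
improvement = sum(second[i] - first[i] for i in worst) / len(worst)
print(improvement)  # clearly positive: the group improves with no intervention
```

An instructor who scolds after every bad landing will observe apparent improvement and wrongly credit the scolding -- regression to the mean masquerading as cause and effect.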
Kahneman and Tversky repeatedly demonstrated in experiments that subjects assign a higher probability to the intersection of two events than to either event individually when the events together make a compelling story. This, of course, violates a basic tenet of probability. Among the biases they brought to attention are confirmation, availability and recency, all of which involve errors in sampling events to support decision-making. Dan Ariely, in a BusinessWeek article on optimism bias, notes that “This recession has delivered a huge lesson in how far human folly and irrationality can lead us astray ... extrapolating long-term projections from short-term trends...” The gambler's fallacy -- the expectation that one “is due” following a period of “bad luck” -- shows that many of us are not good probabilists, either. Finally, rats often outperform humans in tests to identify the color of random lights because they always guess the most common color rather than attempting to divine a pattern. That seems pertinent for the investment business.

All is not woe, however. Companies are increasingly coming to understand irrational behavior in their customer base; Dan Ariely argues for a basic corporate behavioral economics competency center. And to say that outcomes of human behavior have significant random elements is not to doom business prediction. On the contrary, randomness helps us understand that human behavior is complicated and not deterministic. As a consequence, BI needs to be especially attentive to design considerations that disentangle the effects of planned interventions from simple randomness. Prudent companies are those that understand how randomness and predictable irrationality drive their customers, while guarding against such errors in corporate management and intelligence.
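The rats-versus-humans result can be sketched in a few lines. Assuming an illustrative 75/25 green/red light (my numbers, not the book's), the "rat" strategy of always guessing the majority color is right about 75% of the time, while human-style probability matching -- guessing green 75% of the time -- is right only about 62.5% of the time (0.75² + 0.25²).

```python
import random

random.seed(42)
p_green = 0.75        # illustrative: green light appears 75% of the time
trials = 10_000
lights = [random.random() < p_green for _ in range(trials)]  # True = green

# "Rat" strategy: always guess the most common color (green).
rat_correct = sum(lights) / trials

# "Human" strategy: probability matching -- guess green 75% of the time.
human_correct = sum(
    (random.random() < p_green) == light for light in lights
) / trials

print(rat_correct, human_correct)  # roughly 0.75 vs roughly 0.625
```

Divining a pattern in pure noise costs the humans about twelve percentage points of accuracy -- which is indeed pertinent for the investment business.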
