We just returned from the 17th annual family vacation in the Outer Banks of North Carolina. Just as it seems to be every year, the week in Nags Head was grand, with lots of sun, surf, seafood and cold beer. What set this trip apart from others for me, though, was its "Bayesian" theme.
Stats geeks are certainly familiar with Bayes' theorem. And now, Bayes has become very popular in the business world as a statistical means to model learning. The attraction of Bayes is the constant updating of probabilities of what's known. Starting from scratch with an initial idea of where things stand – the "priors" – Bayes' theorem updates the "posteriors" from learning. The latest posteriors then become the next priors. Bayesian thinking seems a natural fit for helping organizations gain intelligence.
Formally, Bayes' theorem states that the probability of a hypothesis given the evidence is equal to the probability of the evidence given the hypothesis multiplied by the probability of the hypothesis, all divided by the probability of the evidence: P(H|E) = P(H)*P(E|H)/P(E). In Bayes parlance, the posterior probability, P(H|E), is equal to the prior probability, P(H), times the likelihood ratio, P(E|H)/P(E).
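To see the formula at work, here is a quick numeric sketch in Python. The diagnostic-test numbers (1% prevalence, 90% sensitivity, 5% false-positive rate) are my own hypothetical illustration, not from the article:

```python
# Bayes' theorem: P(H|E) = P(H) * P(E|H) / P(E)
# Hypothetical example: a condition with 1% prevalence (the prior),
# a test with 90% sensitivity P(E|H) and a 5% false-positive rate.

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    # P(E) by the law of total probability
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
    return p_h * p_e_given_h / p_e

p = posterior(0.01, 0.90, 0.05)
print(round(p, 3))  # 0.154
```

Even a strongly positive test leaves the posterior near 15% here, because the prior is so small; the posterior would then serve as the prior for the next piece of evidence.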
The teenagers of five years ago, now college-educated adults, embarked back then on what appears to be a Bayesian comparison of catch performance from the deep sea fishing boats originating in the Oregon Inlet. Twice each trip for the last five years, we've made late afternoon pilgrimages to the inlet to review the day's bounty from the returning heroes. Starting with "uniform priors" that assumed all boats were equal, we've recalculated the "posteriors" on each visit, updating our assessment and rankings. We now are assured beyond statistical doubt that Tuna Fever and Carolinian set the performance standard of the boats we've covered. Almost never is either at the very top of a day's rankings. Almost always is each in the top 20%. It's comforting to know your fishing group can expect 100 pounds of tuna and dozens of great-for-tacos mahi-mahi on a typical clear day.
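The article doesn't spell out the family's actual arithmetic, but the boat-ranking idea can be sketched with a simple Beta-Binomial model: start every boat at a uniform Beta(1,1) prior, count each dockside visit as a "good day" or not, and rank by posterior mean. The boat names and observations below are illustrative only:

```python
# Beta-Binomial sketch of the boat rankings (my assumption about the
# method; the article doesn't specify one). Uniform Beta(1,1) priors.
from collections import defaultdict

priors = defaultdict(lambda: [1, 1])  # boat -> [alpha, beta]

def update(boat, good_day):
    a, b = priors[boat]
    priors[boat] = [a + 1, b] if good_day else [a, b + 1]

def posterior_mean(boat):
    a, b = priors[boat]
    return a / (a + b)

# Hypothetical observations across several visits
for boat, good in [("Tuna Fever", True), ("Tuna Fever", True),
                   ("Carolinian", True), ("Carolinian", False),
                   ("Sea Dog", False)]:
    update(boat, good)

ranking = sorted(priors, key=posterior_mean, reverse=True)
print(ranking)  # ['Tuna Fever', 'Carolinian', 'Sea Dog']
```

Each visit's posterior becomes the next visit's prior automatically, which is exactly the Bayesian loop the article describes.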
Perhaps my most significant Bayesian pleasure of the vacation week was reading Sharon Bertsch McGrayne's splendid book: “The Theory That Would Not Die: How Bayes's Rule Cracked the Enigma Code, Hunted Down Russian Submarines & Emerged Triumphant From Two Centuries of Controversy.” McGrayne does a masterful job detailing the more than 200 years of squabbling surrounding the use of Bayes' theorem in statistics and science. What I found most enjoyable – though panned by many – is her deep but entertaining look at the individual protagonists, much of the information gathered from interviews with retired "players."
McGrayne introduces controversy from the start, arguing that while Englishman Thomas Bayes might have articulated the theorem in the 1740s essay "An Essay Toward Solving a Problem in the Doctrine of Chances," he mysteriously abandoned his work, leaving it to Pierre Simon Laplace later in the century to promote the "inverse probability of causes" for updating old knowledge with new. "Bayes combined judgements based on prior hunches with probabilities based on repeatable experiments. He introduced the signature features of Bayesian methods: an initial belief modified by objective new information."
It's the hunches or "subjective priors," the point of departure for Bayesian calculations, that are at the center of the statistical divide between Bayesians and the more purely scientific frequentists led by the work of Ronald Fisher, Jerzy Neyman and Egon Pearson. Frequentists see probability as the objectively measured, relative frequency of an outcome over a large number of trials, while Bayesians view probability as a more subjective concept tied to an individual's judgment of the likelihood of an event. Mainstream frequentists historically have struggled with the subjective prior probabilities of Bayesians, arguing they cannot be measured reliably. Bayesians, on the other hand, feel their approach is more pertinent and natural for solving real world problems.
The divide between the camps was passionate and personal as frequentism became statistical orthodoxy in the 19th and 20th centuries. Seeing themselves as non grata to the mainstream, Bayesians went "underground" to continue their work, often eschewing the "B" word to describe their efforts. Yet, as McGrayne poignantly chronicles, the contributions of underground Bayesians helped shape world history. The chapters on using Bayesian logic to tackle intractable cryptography problems and to help locate German U-boats during World War II are fascinating.
And it's most telling that, years later, the Bayesian Jerome Cornfield was among the first group of researchers to make the definitive claim of a linkage between smoking and lung cancer. Especially when frequentist leaders Fisher and Neyman, both heavy smokers and one a tobacco industry consultant, discredited the research. Fisher's counter hypotheses? "The first, believe it or not, was that lung cancer might cause smoking. The second was that a latent genetic factor might give some people hereditary predilections for both smoking and lung cancer. In neither case would smoking cause lung cancer."
The Bayesian/frequentist wars appear to be over now, with new generations of statistical scientists looking for answers to big data analytics anywhere they can find them. Though academic Statistics departments are still important, many of the advances in statistical science now come from biology, physics, the social sciences, mathematical science, computer science, business, finance et al. And computational science now finesses approximations to thorny mathematics that stymied Bayesians in the past.
Stanford statistician Brad Efron has experienced much of the Bayesian/frequentist discord in his storied career. An approach he now champions, Empirical Bayes, is a compromise that exploits the abundance of data available today to move from subjective to objective priors, at the same time harnessing computational power to circumvent intractable mathematics. "Empirical Bayes combines the two statistical philosophies, estimating the “priors” frequentistically to carry out subsequent Bayesian calculations." Efron's recent book, “Large Scale Inference …” is considered a major bridging contribution.
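The "estimate the priors from the data" idea can be shown with a minimal shrinkage sketch. This is my own illustration of the empirical-Bayes flavor of calculation, not Efron's code: the prior's spread is estimated frequentistically from the observed unit means, and each unit's estimate is then shrunk toward the pooled mean.

```python
# Minimal empirical-Bayes shrinkage sketch (illustrative; assumes a
# shared normal prior across units and a known sampling variance).
import statistics

def empirical_bayes_shrink(observed, sigma2):
    """observed: per-unit sample means; sigma2: known sampling variance."""
    grand_mean = statistics.mean(observed)
    total_var = statistics.pvariance(observed)
    # Prior variance estimated from the data itself: total spread
    # minus the sampling noise (floored at zero)
    tau2 = max(total_var - sigma2, 0.0)
    weight = tau2 / (tau2 + sigma2)  # trust in each unit's own data
    return [grand_mean + weight * (x - grand_mean) for x in observed]

# Hypothetical per-boat (or per-store, per-patient...) averages
shrunk = empirical_bayes_shrink([10.0, 12.0, 30.0, 11.0, 9.0], sigma2=4.0)
```

Extreme observations like the 30.0 get pulled toward the group mean, which is the practical payoff: noisy individual estimates borrow strength from the whole data set, with no subjective prior required.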
Fueled by the riveting Would Not Die story, the vacationers set out the final evening to use similar techniques to locate a misplaced lighter as we prepared to grill our last dinner. Alas, we were unsuccessful when one overwhelming "prior" – consumption of large amounts of Hefeweizen – torpedoed the efforts.
Look for increasing use of Bayesian approaches to the science of business in the coming years, with the Bayesian paradigm a better fit than frequentism for the way organizations analytically learn. The challenge will be to develop simplified methodologies and software to put Bayes to work. Empirical Bayes will hopefully be a foundation of an answer.