Last week's blog introduced Bayes theorem for conditional probability and contrasted the Bayesian and frequentist approaches in modern statistics, noting that the frequentist camp dominated when I studied 30 years ago. Now, however, though frequentism is still on top, the playing surface is much more level, with a steeper trajectory for Bayes.

There are a host of reasons for the ascent of Bayesian methods in recent generations. First is the increasing demand for applied statistical analysis – and new techniques – in the engineering, biological, epidemiological, social, physical, mathematical, computer and business sciences. Second is an evolution in statistical emphasis from scientific inquiry to organizational decision-making. As often as not, applied statisticians are now evaluated on the ability of their models to predict the future. Other things being equal, models that predict well are often preferred to those that explain well.
Third are advances in computation and numerical methods that can now closely approximate Bayesian calculations that were often intractable in the past. Fourth is the evolution of Bayesian analysis itself. “Empirical Bayes” now replaces the subjective priors so disdained by frequentists with priors estimated directly from the data. Indeed, in many ways the empirical Bayes branch looks a lot like traditional frequentist methods.

Distinguished Stanford statistician Brad Efron offers much insight on the evolution of statistics as a discipline. He argues that the debate between Bayesians and frequentists ultimately reflects different attitudes toward data analysis. Frequentists are conservative, cautious statistical purists aiming for universal conclusions, while Bayesians look to exploit all available evidence to make the quickest possible progress. Efron illustrates the tension by noting drug manufacturer Pfizer's Bayesian interest in using any and all available opinions and information to determine how well its new drugs will work, while the frequentist FDA looks only for objective, experimental proof of drug efficacy. In a sense, frequentists behave like exacting scientists; Bayesians, more like decision-making businessmen. In contrast to the inside view of frequentists, which focuses on statistical orthodoxy, the outside view of Bayesians fixes on answers to important strategic questions – wherever they may be found.

In an interview two years ago, Efron opined: “It’s pretty clear to me that statistics will play an increasing role helping 21st century sciences like biology, economics and medicine handle their large and messy data problems, and that an integration of frequentist and Bayesian approaches is probably ideal for meeting those data challenges.”

“One of the main responses of Bayesians to the objectivity demanded by frequentists is ‘uninformative priors’ that are devoid of specific opinions. And the bootstrap, initially developed from a purely frequentist perspective, might provide a simple way of facilitating a genuinely objective Bayesian analysis. Though still suggestive, the bootstrap/priors connection merits further investigation.”
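To get a feel for the connection Efron is hinting at, consider a toy illustration (mine, not his): for something as simple as a sample mean, the nonparametric bootstrap distribution often looks much like the posterior you would get under a vague, “uninformative” prior. The data and the known-variance normal model below are assumptions made purely for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=50)   # toy sample

# Nonparametric bootstrap distribution of the sample mean
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])

# Bayesian posterior for the mean under a vague (flat) prior and a
# known-variance normal model: N(x_bar, s^2 / n)
post_means = rng.normal(loc=data.mean(),
                        scale=data.std(ddof=1) / np.sqrt(data.size),
                        size=5000)

# The two distributions are typically very close -- the "suggestive"
# bootstrap/priors connection Efron alludes to.
print("bootstrap: mean %.3f, sd %.3f" % (boot_means.mean(), boot_means.std()))
print("posterior: mean %.3f, sd %.3f" % (post_means.mean(), post_means.std()))
```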

“Empirical Bayes combines the two statistical philosophies, estimating the ‘priors’ frequentistically to carry out subsequent Bayesian calculations. Bayes models may prove ideal for handling the massively parallel data sets used for simultaneous inference with microarrays, for example. The oft-repeated structure of microarray data is just what is needed to make empirical Bayes a suitable approach.”
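Here is a minimal sketch of the two-step recipe Efron describes, on hypothetical microarray-style data (the gene-level numbers and the normal-normal shrinkage model are my assumptions for illustration, not his worked example): estimate the prior from the ensemble of parallel measurements, then apply Bayes rule case by case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical microarray-style data: one noisy effect estimate per gene,
# each with (assumed known) sampling variance sigma2.
n_genes, sigma2 = 1000, 1.0
true_effects = rng.normal(0.0, 0.5, n_genes)                   # unknown in practice
z = true_effects + rng.normal(0.0, np.sqrt(sigma2), n_genes)   # observed

# Step 1 (frequentist): estimate the prior N(mu, tau2) from the data itself.
mu_hat = z.mean()
tau2_hat = max(z.var(ddof=1) - sigma2, 0.0)

# Step 2 (Bayesian): shrink each estimate toward mu_hat using the estimated
# prior -- the posterior mean under the normal-normal model.
shrink = tau2_hat / (tau2_hat + sigma2)
posterior_means = mu_hat + shrink * (z - mu_hat)

print("shrinkage factor: %.2f" % shrink)
print("raw estimate MSE:    %.3f" % np.mean((z - true_effects) ** 2))
print("empirical Bayes MSE: %.3f" % np.mean((posterior_means - true_effects) ** 2))
```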

From a business intelligence perspective, a major benefit of the Bayesian approach is the organizational learning that accumulates over time. Indeed, for BI the Bayesian model is best viewed as systematic learning for intelligence and business decision-making. The posterior probabilities from analysis one become the prior probabilities for analysis two; the posteriors from analysis two become the priors for analysis three, and so on. Thus the Bayesian orientation promotes learning and better decision-making, all the while refining priors to be more reliable and less “subjective” over time.
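A minimal sketch of that posterior-becomes-prior cycle, using a conjugate Beta-Binomial model; the quarterly conversion counts are invented purely for illustration.

```python
# Sequential Bayesian learning: the posterior from one analysis becomes
# the prior for the next. Conjugate Beta-Binomial model on hypothetical
# conversion-rate data arriving in quarterly batches.
alpha, beta = 1.0, 1.0           # vague Beta(1, 1) prior on the rate

batches = [                      # (conversions, trials) per quarter -- invented
    (12, 200),
    (30, 450),
    (25, 300),
]

for i, (successes, trials) in enumerate(batches, start=1):
    alpha += successes           # Bayesian update with this batch's likelihood
    beta += trials - successes
    estimate = alpha / (alpha + beta)    # posterior mean of the rate
    print(f"after analysis {i}: posterior mean = {estimate:.3f} "
          f"(Beta({alpha:.0f}, {beta:.0f}) becomes the next prior)")
```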

Adding to the growing legitimacy of Bayesian methods is the traction that the posterior ∝ prior × likelihood learning equation is getting as a plausible model of cognition in psychology. In a study that tested whether certain cognitive tasks were executed in accord with Bayes theorem, psychologists from Brown University and MIT conducted laboratory experiments, giving participants nuggets of “prior” information, then asking them to draw conclusions – to make posterior judgments.
A first example question went something like this: a movie has grossed $X million at some unknown point in its run (where X is 1, 6, 10, 40 or 100); what would you predict its total gross to be? A second: how long would you estimate the life span of an 18-year-old to be (choices: 18, 39, 61, 83 and 96 years)? Among the findings: participant judgments for life spans, movie grosses and several other similar questions were indistinguishable from optimal Bayesian predictions based on the empirical prior distributions. Though the findings didn’t align with the Bayesian predictions in every case, the results are certainly suggestive and warrant additional scrutiny.
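For the curious, here is a rough sketch (my reconstruction, not the study’s code) of what an “optimal Bayesian prediction” looks like for the movie-gross question: assume the gross observed so far falls uniformly between zero and the unknown total, weight candidate totals drawn from a prior by that likelihood, and report the posterior median. A log-normal stands in here for the study’s empirical prior over grosses.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder prior over total grosses; the study used an empirical
# distribution of real movie grosses, a log-normal stands in here.
totals = rng.lognormal(mean=3.0, sigma=1.2, size=100_000)   # $ millions

def predict_total(x_observed, prior_totals):
    """Posterior median of the total gross, assuming the amount seen so far
    is uniform on [0, total]: p(x | t) = 1/t for x <= t."""
    candidates = prior_totals[prior_totals >= x_observed]
    weights = 1.0 / candidates                 # likelihood of x under each total
    order = np.argsort(candidates)
    cdf = np.cumsum(weights[order]) / weights.sum()
    return candidates[order][np.searchsorted(cdf, 0.5)]

for x in (1, 6, 10, 40, 100):
    print(f"grossed ${x}M so far -> predicted total of about "
          f"${predict_total(x, totals):.0f}M")
```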

Indeed, the evidence that Bayes theorem supports models for both individual decision-making and organizational learning is certainly compelling motivation for additional investigation.

In Part 3 of this blog series, I'll discuss a Sloan Management Review article that touts Bayesian solutions to the current malaise of risk management.
