My last two blogs introduced Bayes' theorem as an important probability principle for business decision-making, and also discussed the evolving role of the Bayesian approach in modern statistics.

For many reasons, not the least of which is the demand for new statistical techniques and the ascendance of computation in the field, the use of Bayesian methods is on the rise. The Frequentist/Bayesian wars that marked my statistical studies seem an artifact of the past.

The paradigm of Bayesian thinking, in which the posterior is proportional to the prior times the likelihood (posterior ∝ prior × likelihood), also appears to reflect how individuals use information for decision-making and how organizations “learn” from their accumulating intelligence. The posterior probabilities from analysis one become the prior probabilities for analysis two; the posteriors from analysis two become the priors for analysis three, and so on. So it's not surprising that much attention is now being devoted to a Bayesian model for BI and analytics.
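To make that sequential flow concrete, here's a minimal sketch in Python (my own toy numbers, not from any of the articles discussed here) using a conjugate Beta-Binomial model, where each analysis's posterior becomes the prior for the next:

```python
from scipy import stats

# Toy conjugate Beta-Binomial updating: each "analysis" observes some number
# of successes out of n trials, and its posterior becomes the next prior.
prior_a, prior_b = 1.0, 1.0                  # start with a flat Beta(1, 1) prior

batches = [(7, 10), (12, 20), (30, 50)]      # (successes, trials) for analyses 1-3

for i, (successes, trials) in enumerate(batches, start=1):
    # Bayes: posterior is proportional to prior times likelihood; for the
    # Beta-Binomial pair this reduces to adding counts to the prior parameters.
    post_a = prior_a + successes
    post_b = prior_b + (trials - successes)
    print(f"Analysis {i}: posterior mean = {stats.beta(post_a, post_b).mean():.3f}")
    prior_a, prior_b = post_a, post_b        # posterior becomes the next prior
```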

I recently came across an interesting article in MIT's prestigious Sloan Management Review that puts Bayesian ideas to work in the critical (and obviously currently flawed) area of risk management. The authors, Adam Borison and Gregory Hamm, are unsparing in their criticism of the RM methods that did not protect us from the financial crisis of 2008.

Addressing the purported risk management failures at elite financial institutions, the authors opine: “Yet at these companies, and at others with comparable ‘cultures,’ risk management apparently performed quite dismally. How could this be? We contend that the answer lies in the concepts and practices of traditional risk management, which tend to look for risk in all the wrong places. That is, failure did not stem from merely paying lip service to risk management or from applying it poorly, as some have suggested. Instead, collapse resulted from taking on overly large risks under the seeming security of a risk-management approach that was in fact flawed. The more extensive the reliance on traditional risk management, we believe, the greater the risks unknowingly taken on and the higher the chances of corporate disaster.”

Borison and Hamm imply that the failings of modern risk management derive from a naïve statistical orientation that overemphasizes recent statistical history. “Traditional risk management has instead adopted the frequentist view, despite its three inherent, and major, shortcomings. First, it puts excessive reliance on historical data and performs poorly when addressing issues where historical data are lacking or misleading. Second, the frequentist view provides little room — and no formal and rigorous role — for judgment built on experience and expertise. And third, it produces a false sense of security — indeed, sometimes a sense of complacency — because it encourages practitioners to believe that their actions reflect scientific truth. Many of a corporation’s most important and riskiest decisions — which often do not fall into the narrow frequentist paradigm — are made without the help of the more sophisticated and comprehensive Bayesian approach.”

A Bayesian approach to risk management is clearly a step up from currently accepted RM practice, according to the authors. “Risk management built around the reality of judgment (supported by available data) is superior to risk management built around the fantasy of fact … the Bayesian perspective leads naturally to adjustments in the risk-management activity itself. Because Bayesian risk assessment combines both data and judgments, its underlying logic provides a built-in and rigorous way of updating assessments as new data arrive or new judgments emerge. Equally importantly, it provides a natural way to adjust risk-management activity in response to this learning.”

And as for the importance of subjective judgment in addition to the data: “Why is this distinction between fact alone and fact-plus-judgment so important? First, because the fact-alone perspective provides no effective guidance on issues — often those with greatest impact — where there are little or no frequency data. It’s a classic case of losing one’s keys where it is dark but looking for them under the street lamp because the light there is so much better. Second, because unlimited faith in historical data — even large amounts of it — leads to overconfidence and excessive risk taking. And finally, because a system based solely on historical fact inevitably lurches from crisis to crisis.”

In sum, Borison and Hamm argue that with such a narrow focus on recent history and the limiting frequentist perspective, it's little wonder that many financial institutions failed. “The shortcomings of the frequentist view — narrowness of thinking, unwillingness to accept the possibility of error and the inability to adapt to changing circumstances — clearly played a significant role in the recent financial collapses. Companies such as Citigroup Inc. and Merrill Lynch & Co. continued to increase their exposure to subprime mortgages even as evidence regarding deterioration in the housing market accumulated: ‘During the early years of the housing boom, default rates on all mortgages were unusually low.’ That led bankers — and, more important, rating agencies — to build unrealistic assumptions about future default rates into their valuation models. At these companies, early warning signs were ignored and unrealistic default rates were not adjusted until it was too late.”

The authors add that the wider Bayesian perspective that includes subjective judgment and dynamic learning might have headed off at least some of the problem. “With a Bayesian view, such problems may not have been eliminated altogether, but they could have been substantially reduced through more comprehensive and realistic risk assessment and more dynamic and adaptive risk management. Many measures are being deployed to recover from the collapses and to build a more robust system that prevents future crises — a shift from traditional risk management to Bayesian risk management should be a part of this effort.”

I agree with the authors' conclusions on the need for judgment to supplement data in our current analytic world and am similarly enthused about the prospects for help from Bayesian techniques. I must admit, though, that I'm underwhelmed by the article's RM illustrations as justification. For example, Borison and Hamm create a frequentist “straw man” when they discuss RM for a company dependent on rainfall for its well-being. In the authors' thinking, the frequentist risk manager is likely to assume a normal distribution of annual precipitation, consequently underestimating low-rainfall risk, whereas a Bayesian would not be so naïve and would furthermore supplement data with judgment. I would counter that it's a dumb frequentist who'd assume a normal distribution for such a key risk variable.
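To see the distinction in miniature, here is a hedged sketch with made-up rainfall figures (mine, not the authors'): a normal-distribution fit to a short history versus a Bayesian estimate that blends the same history with an expert prior on drought frequency.

```python
import numpy as np
from scipy import stats

# Hypothetical 20 years of annual rainfall (inches); one severe drought year.
rainfall = np.array([31, 28, 34, 30, 29, 33, 27, 32, 30, 31,
                     35, 29, 28, 33, 30, 18, 31, 34, 29, 32])
drought_threshold = 20  # inches of rain below which the business is in trouble

# "Naive frequentist" sketch: fit a normal distribution and read off the tail.
mu, sigma = rainfall.mean(), rainfall.std(ddof=1)
p_normal_tail = stats.norm.cdf(drought_threshold, loc=mu, scale=sigma)

# Bayesian sketch: treat each year as drought / no drought and combine the
# observed counts with an expert prior, Beta(2, 8) -- judgment that roughly
# one year in five is a drought year.  Posterior is Beta(2 + k, 8 + n - k).
k = int((rainfall < drought_threshold).sum())   # observed drought years
n = len(rainfall)
p_bayes = stats.beta(2 + k, 8 + n - k).mean()

print(f"Normal-fit estimate of a drought year:            {p_normal_tail:.3f}")
print(f"Posterior drought probability (data + judgment):  {p_bayes:.3f}")
```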

The authors also fail to mention empirical Bayes, which promotes a compromise between data and judgment by deriving objective prior distributions from multiple “experiments” rather than from subjective opinion – a way to “make the Bayesian omelet without breaking the Bayesian eggs.” This evolving approach, fueled by the data deluge and computational statistics, has the potential to turbo-charge Bayesian analysis into the statistical and business mainstream.
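For the curious, here is a minimal empirical-Bayes sketch (simulated data, my own construction, not from the article): the prior is estimated from many past “experiments” and then shrinks a new, small-sample estimate toward it.

```python
import numpy as np
from scipy import stats

# Hypothetical history: success counts and trial counts for 100 past
# "experiments" (branches, products, loan pools -- whatever the unit is).
rng = np.random.default_rng(0)
trials = rng.integers(50, 200, size=100)
true_rates = rng.beta(4, 16, size=100)          # unknown in practice
successes = rng.binomial(trials, true_rates)

# Empirical Bayes: estimate a Beta(a, b) prior from the pooled historical
# rates by the method of moments, instead of eliciting one from an expert.
rates = successes / trials
m, v = rates.mean(), rates.var()
strength = m * (1 - m) / v - 1                  # estimated a + b
a_hat, b_hat = m * strength, (1 - m) * strength

# Shrink a new, small-sample experiment toward the data-derived prior.
new_successes, new_trials = 3, 12
posterior_mean = (a_hat + new_successes) / (a_hat + b_hat + new_trials)
print(f"Estimated prior: Beta({a_hat:.1f}, {b_hat:.1f})")
print(f"Raw rate {new_successes / new_trials:.3f} shrinks to {posterior_mean:.3f}")
```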

All things considered, though, Borison and Hamm do an excellent job of explaining Bayesian concepts in the context of risk management. Indeed, that their article was published in SMR speaks volumes about the progress of Bayesian thinking in the business world. In the coming years, BI analysts should expect to see increasing use of Bayesian methods as models for both individual decision-making and organizational learning.