I seem to have settled on a pattern of writing about articles from the Harvard Business Review three times a year. I'm not sure whether that means three times a year HBR is particularly pertinent for BI or that three times a year I read HBR through a BI lens. Whatever. 
The October 2009 HBR spotlights Risk Management and Performance Measurement, both significant consumers of BI. Four Harvard Business School professors and an industry expert opine on the future of enterprise risk management in the interview article Managing Risk in the New World. Fooled by Randomness and The Black Swan author Nassim Taleb leads a co-authored, acerbic essay entitled The Six Mistakes Executives Make in Risk Management. Robert Merton, the 1997 economics Nobel prize winner, shares his wisdom in the insightful conversation Making the Financial Markets Safe. And Andrew Likierman, Dean of London Business School, sounds off on The Five Traps of Performance Measurement.
The HBS professors conduct a post-mortem of the financial crisis, suggesting many contributing causes of death. These include dangerously low savings rates and high debt levels, the false assurance of 25 years of prosperity, and demands to deliver ever-increasing returns to shareholders. More subtly, they blame the ascendance of financial engineering, opining that FE is simply a tool that can be used correctly or incorrectly: “Before you start to model risk, you have to think about whether you understand the nature of it.” The academics are also critical of the almost exclusive focus on short-term financial incentives, arguing they “encourage managers to take on high degrees of risk to generate return, leading to a big moral hazard problem.”
The proposals to redress the problems include scenario planning rather than the current futile attempts to predict extremely rare “Black Swan” events, the use of business risk heat maps to quantify exposure, a balanced scorecard approach to management, and the simple use of insurance to soften the blows of calamities. The authors are ambivalent on future financial regulation, arguing that while it must play a role going forward, “We need to be skeptical about the ability of regulators to understand the kinds of risks being taken on in innovative enterprises.” They promote the role of Chief Risk Officers (CRO), who'd be both conversant in technical risk management and act as trusted advisers to executive teams: “...we now live in an era when many of the concerns in running organizations are being reframed in terms of risk, which suggests risk professionals are likely to rise to the top.”
Echoing the observations of the HBS academics, Taleb continues his rants on the mistakes the financial services community made leading up to the recent implosion. He's adamant that we're powerless to predict extreme, low-probability Black Swan events, and would instead be better served focusing on mitigating their consequences. Companies, according to Taleb, are likewise ill-advised to study the past to predict future Black Swans. The authors' research shows that past events have little relation to future shocks; indeed, calamitous events like September 11 often have no predecessors. In place of this misguided focus, organizations should purchase insurance against such catastrophes, and that insurance planning should include provision for organizational redundancy to combat the most serious risks. Finally, Taleb et al. argue for establishing strong disincentives in addition to incentives to motivate organizational performance, and tout the importance of measuring loss prevention as well as gains, noting “... a company can be successful by preventing losses while its rivals go bust.”
Nobel laureate Merton proffers that it was not financial engineering per se, but its mistaken application, that precipitated the financial crisis. He notes, however, “In finance, as elsewhere, you have to make a trade-off between the benefits and risks of (financial) innovations.” Merton argues that certain easy-to-deploy regulations “would probably give more than 90% protection against the kind of systemic breakdown we've seen.” He likes MIT professor Andrew Lo's proposal for a “financial equivalent to the National Transportation Safety Board” which, while not a regulator, could inform the decisions of all constituents.
Business executives and BI analysts alike should pay special attention to Likierman's cautions on business performance measurement. Indeed, his caveats for PM reinforce those for risk management noted above. Likierman isn't much of a fan of the performance planning activities that BI often supports. He's particularly suspicious of the common stratagem of comparing results against plan, noting that a company “may be doing better than plan, but are you beating the competition ... you need information about the benchmarks that matter most – the ones outside the organization.” He's equally suspicious of looking back in time, comparing this year to last. Such backward-focused measurement often misses the point: “Look for measures that lead rather than lag the profits in your business.” Instead of evaluating performance post hoc, obsess on measures and models that predict performance into the future.
Likierman's a man after my own heart with his disdain for the widespread abuse of ROI for nonfinancial activities such as IT and HR, arguing that the numbers and causal analysis that underlie this thinking are easily gamed. My historical frustration with quantifying the benefits of new BI programs no doubt aligns with this suspicion. Likierman's cautions about the gaming of other performance metrics resonate as well. As a personal illustration, I can identify several occasions in my career when I've seen over-billings by consulting firms driven by performance incentives. “The moment you choose to manage by a metric, you invite your managers to manipulate it.” Finally, Likierman promotes a healthy skepticism of one-size-fits-all performance measures that aren't adaptive to dynamic business environments. He argues that, over time, credit agencies' AAA ratings evolved to represent the likelihood of default under normal conditions, not under the extremes of 2008-2009. Somehow I don't think that's much consolation to the recent bust's AAA losers.
What the HBR risk management wisdom has in common with the traps of performance measurement is the rejection of many tenets of traditional organizational planning and forecasting – often the mindless quarterly construction of forecasting spreadsheets and planning slide decks where all revenues are growing and profitable, all strategies successful, all risks mitigated. While I'm not sure the authors would consider themselves searchers, in contrast to planners, it's clear they take serious issue with the capricious thinking that can subvert existing “methodologies”. From this vantage point, a challenge for business and BI is to divine new risk management and performance measurement techniques and package them in a way that adds value to the complex environments of twenty-first century business.
