The recent mid-August meltdown of global credit, bond and stock markets has raised many questions about the use of computerized quantitative analysis. In an article in The New York Times,1 Clifford Asness, the co-founder of the money management firm AQR Capital Management, said, "When we make a boatload of money, we get imitators, and our risks increase. That's how capitalism works." Do all computer-based decisions have an Achilles' heel that leaves them vulnerable to huge errors?
Asness was describing his firm's sophisticated "market neutral" computer models. The models simultaneously take long and short positions of equal value in thousands of financial instruments. Trading by the minute, they detect almost imperceptibly undervalued instruments using a rule-based measure of sales momentum, pocketing small gains while the total investment portfolio remains constant. The winning trades beat the averages. Asness' point about imitators is that as more of them trade, they materially affect market values and can cause "herd behavior."
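As a rough illustration of the "market neutral" idea (not AQR's actual models), a dollar-neutral portfolio commits equal dollar amounts to its long and short sides, so net market exposure is near zero and returns come from relative mispricings rather than the market's direction. The instruments and signal values below are hypothetical:

```python
# Minimal sketch of a dollar-neutral ("market neutral") portfolio.
# Instrument names and signal values are hypothetical illustrations.

def build_dollar_neutral(signals, capital):
    """Go long the instruments with positive signals and short those with
    negative signals, committing equal dollars to each side so the
    portfolio's net market exposure is approximately zero."""
    longs = {k: v for k, v in signals.items() if v > 0}
    shorts = {k: v for k, v in signals.items() if v < 0}
    per_side = capital / 2
    positions = {}
    for name in longs:
        positions[name] = per_side / len(longs)    # long: positive dollars
    for name in shorts:
        positions[name] = -per_side / len(shorts)  # short: negative dollars
    return positions

portfolio = build_dollar_neutral(
    {"A": 0.8, "B": 0.3, "C": -0.5, "D": -0.6}, capital=1_000_000)
net_exposure = sum(portfolio.values())
print(round(net_exposure, 2))  # ~0.0: profit depends on relative moves only
```

The herd-behavior risk Asness describes arises when many funds hold similar positions: a forced unwind by one fund moves prices against all of them at once, regardless of how neutral each portfolio looks on paper.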
But during those dark August days, Asness' hedge fund and many like it got clobbered. What happened? The hedge funds were actually not the initial source of a ripple effect on the rest of the financial markets but rather the recipient of a ripple that began in a corner the funds had little or nothing to do with - namely, the subprime mortgage process. Initially the experts assured everyone that the credit problems were "contained," but the mess spread. Their problems bled right into the equity markets. This is troublesome to investors because despite investment managers' insistence that diversification - and therefore safety - can be attained through so-called asset allocations between stocks and bonds, it is apparent these investment instruments are intertwined.
What Lessons Can We Learn When Using Computers for Decisions?
This contagion affected all the other computerized formulas of the investment funds in lockstep, amplifying the downturn in market values. The explanation of why the "neutral" total position failed gets more complicated, involving the panicked unwinding of short-side trades, but that is not the issue I am raising. My interest is this: Are there lessons about robust quantitative analytics that business managers of real tangible assets, not financial instruments, can learn?
So that I do not frighten organizations that believe mastering analytics of all flavors is an important competence, let me allay your concerns. I think there is a big difference between investment managers who try to make profits using investors' money - their surplus cash - and internal managers whose goal is to make money selling nonfinancial products and services. One lesson based on this difference is that business decisions are ideally made to improve a supplier's relationship with its customers. The types of decisions reflect all those subjects that students learn in an MBA curriculum, such as new product development, risk management and customer service.
The Shift Toward Fact-Based Decision-Making
Decision making can no longer be based simply on intuition or what "feels right." Decisions increasingly involve various forms of quantitative analysis. For example, new product development requires the discipline of target costing. That is, a cost-plus pricing markup is not optimal. Companies must first determine the price elasticity of various customer segments for various product features and service levels, then subtract a desired profit margin from the market-driven price to determine a maximum allowable cost. With that ceiling cost in mind, they must work with their product engineers and component suppliers to drive the expected cost down to or below that maximum through feature trade-offs or process efficiencies. These efforts require analytics that apply activity-based costing principles, segment different types of customers, and predict customer demand behavior.
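The target-costing arithmetic can be sketched in a few lines. The segment prices and desired margin below are hypothetical illustrations, standing in for what real price-elasticity studies would produce:

```python
# Hedged sketch of target costing: work backward from what a customer
# segment will pay, rather than forward from cost plus a markup.
# Segment prices and the margin are hypothetical illustrations.

def max_allowable_cost(market_price, desired_margin):
    """Target cost ceiling = market-driven price minus the desired margin."""
    return market_price * (1 - desired_margin)

# Prices each segment will bear for a given feature/service bundle,
# derived (in practice) from price-elasticity studies.
segment_prices = {"premium": 120.0, "mainstream": 90.0, "value": 60.0}
desired_margin = 0.25  # 25% desired profit margin

for segment, price in segment_prices.items():
    ceiling = max_allowable_cost(price, desired_margin)
    print(f"{segment}: price {price:.2f} -> target cost ceiling {ceiling:.2f}")
```

The ceiling, not the engineers' first cost estimate, becomes the design constraint: if the expected cost exceeds it, features are traded off or processes improved until it fits.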
As another example of quantitative analysis, risk management requires evaluating a myriad of what-if scenarios, and setting customer service levels requires understanding which levels (including incentive deals and offers) are optimal - that is, how much or how little to spend on retaining, growing or acquiring various customer micro-segments. Computers only do what they are told, so if risk factors are not included in a model, the computer is "unaware" of them.
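What-if scenario evaluation can be sketched as a simple simulation over uncertain inputs. The demand and cost distributions below are hypothetical assumptions, chosen only to show the mechanics - and note that any risk factor left out of the model (the point above) simply never appears in the results:

```python
# Minimal what-if sketch for risk management: simulate profit under many
# combinations of uncertain demand and input cost.
# All distributions and parameters are hypothetical illustrations.
import random

random.seed(42)  # reproducible scenarios

def simulate_profit(price, n_scenarios=10_000):
    profits = []
    for _ in range(n_scenarios):
        demand = random.gauss(1000, 150)   # uncertain unit demand
        unit_cost = random.gauss(60, 8)    # uncertain input cost
        profits.append(demand * (price - unit_cost))
    return profits

profits = simulate_profit(price=90.0)
downside = sum(1 for p in profits if p < 0) / len(profits)
print(f"Mean profit: {sum(profits) / len(profits):,.0f}")
print(f"Chance of loss: {downside:.1%}")
```

Running thousands of scenarios turns a single-point forecast into a distribution, so managers can see not just the expected outcome but how likely the bad ones are.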
The nice thing about applying computer support to performance management (PM) is that its integrated methodologies examine behaviors from many angles - looking at customers, suppliers, employees, processes and economics. Any angle in isolation can be limiting, but integration adds speed to decision making, and extending PM with predictive analytics provides greater confidence as future uncertainty is reduced. With predictive analytics, organizations can be proactive - testing and learning in advance, before a minor problem becomes a major one.
Many Decisions Involve Ambiguity
Is there ambiguity present when making a business decision? Of course there is. In sports it is easier because many decisions made by coaches are binary. In baseball, when a runner is on base, should the batter bunt to advance the runner or hit away? Taking into account other considerations, such as the score and each player's running and hitting capabilities, coaches apply known probabilities to choose the best outcome.2 But business decisions are not all yes or no, approve or reject. There are ranges and degrees that typically have nonlinear relationships.
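The bunt-or-swing choice can be sketched as an expected-value comparison. The probabilities and run values below are hypothetical illustrations, not drawn from actual play-by-play data, which is where real run-expectancy tables come from:

```python
# Hedged sketch of a binary coaching decision: compare the expected runs
# of each choice. All probabilities and run values are hypothetical.

def expected_runs(outcomes):
    """outcomes: list of (probability, expected_runs_after_outcome)."""
    return sum(p * runs for p, runs in outcomes)

# Hypothetical numbers for a runner on first with no outs:
bunt = [(0.75, 0.66),   # successful sacrifice: runner on 2nd, 1 out
        (0.25, 0.24)]   # failed bunt: runner erased or extra out
swing = [(0.30, 1.10),  # hit or walk: runners advance
         (0.70, 0.50)]  # out made: runner holds, 1 out

ev_bunt, ev_swing = expected_runs(bunt), expected_runs(swing)
print(f"bunt EV: {ev_bunt:.3f}, swing EV: {ev_swing:.3f}")
decision = "bunt" if ev_bunt > ev_swing else "swing"
print(decision)
```

The binary case reduces to comparing two numbers; the business decisions discussed next are harder precisely because the choices form a continuum with nonlinear payoffs rather than two discrete branches.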
The broad description of enterprise performance management, as I have written about regularly in this monthly column, is all about applying modeling techniques with trade-off analysis and risk management. Decision support depends on the data. Software provides intelligence, not new data. Of course, it helps to start with clean source data to manage and integrate - a problem many organizations still wrestle with.