By Carol E. Curtis

Technology may be a key component of sound risk management, but it needs to be applied in conjunction with sound judgment rather than as a substitute for it, says David Rowe, EVP of risk management at SunGard Data Systems. "Trust your structural imagination," says Rowe, who provides strategic input on Wayne, Pa.-based SunGard's risk management offerings and acts as its spokesperson on risk issues. "Everybody knew that a general fall in housing prices would have a devastating effect. Everybody chose to ignore it."

Rowe, who works out of the financial services technology giant's London office, knows a thing or two about the subject. Before joining SunGard ten years ago, he spent a decade in banking and financial risk management, first as CFO of Security Pacific Securities, then as SVP of Bank of America Corp.'s risk management group. Rowe recently spoke with Securities Industry News about the evolution of risk management, where it failed during the credit crisis, and what SunGard clients are looking for as they struggle to build more effective systems.

How has risk management changed in the last several decades?

From 1986 into the early 1990s, a series of very public losses at Bankers Trust, Merrill Lynch and other firms served as catalysts for the creation of financial risk management as a recognized profession that cut across institutions. Along the way, it became significantly quantified, as illustrated by the value-at-risk [VAR] concept. Before that, there was no effective communication between the trading floor and trading management or senior management; nobody could express what the complicated mesh of limits meant. VAR was the first effective vehicle for communicating between the trading floor and management. But too many fell into the habit of calling VAR a worst-case loss, which it never was. And we always knew VAR did not assess what lurks beyond the 1 percent threshold.
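Rowe's point about the 1 percent threshold can be made concrete with a small sketch. Everything below is invented for illustration--the P&L series is simulated, not real data. A 99 percent historical-simulation VAR reports only the loss at the 1 percent threshold; averaging the losses beyond that threshold (the expected shortfall) shows how much can lurk in exactly the region VAR ignores.

```python
# Hypothetical sketch: 99% historical-simulation VAR vs. the tail beyond it.
# The P&L distribution and all parameters are invented for illustration.
import random

random.seed(0)
# Simulated daily P&L: mostly small moves, plus rare, much larger crash days.
pnl = [random.gauss(0, 1) for _ in range(10_000)]
pnl += [random.gauss(-8, 2) for _ in range(100)]  # fat left tail

losses = sorted(-x for x in pnl)      # losses as positive numbers, ascending
n = len(losses)
cutoff = int(0.99 * n)
var_99 = losses[cutoff]               # 99% VAR: the 1%-tail threshold
tail = losses[cutoff:]                # the losses VAR says nothing about
es_99 = sum(tail) / len(tail)         # expected shortfall: average tail loss

print(f"99% VAR:                {var_99:.2f}")
print(f"99% expected shortfall: {es_99:.2f}")
```

In this simulated book the expected shortfall comes out well above the VAR figure, illustrating why calling VAR a "worst-case loss" misleads: the worst 1 percent of days average far larger losses than the threshold itself.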
You also have insidious feedback loops. Once you put VAR in place, traders will abide by their limits--but they still want to make their returns. You need something besides VAR to look at extreme, potentially lethal events.

What other risk lessons are emerging from the credit crisis?

A key lesson relates to stress testing. The kinds of distributions we deal with in assessing operational risk are self-referential. Unlike in the physical sciences, in financial systems you can get huge sudden moves--a crystallizing event causes everyone to react in the same way at the same time. The behavior of the extremes is informed hardly at all by studying the middle 99 percent of the distribution.

How can this knowledge be used to make risk management more effective?

A lot of this is fundamentally cultural. The biggest institutions have found it difficult to assemble data on a timely basis--the biggest banks have become almost unmanageable. Risk management has to be a basic concern of everybody, and weighed at the level of the CEO.

There are also some lessons we have neglected. First, the problem of statistical entropy: you cannot squeeze information out of a set of data that the data do not possess in the beginning. A classic example is the AAA ratings on sub-prime mortgage tranches. With sub-prime lending, there simply was not a lot of evidence to support whatever conclusion you were trying to reach. We haven't spent enough time thinking about second-order uncertainty.

How is SunGard working with clients to improve their approach to risk management?

Simple summary answers are no longer sufficient to satisfy their demands. Functionality that goes beyond the results of what appears to many to be an opaque black box is coming to the fore. This requires systems that are more adaptive and flexible, and able to respond faster to changing needs than in the past.
It also demands the capability to store significantly more extensive details of the calculations.

How have customers' needs changed as a result of the meltdown?

Fundamentally, there is greater demand for added detail and the ability to do ad hoc inquiries and analyses.

Where does risk management fit into clients' IT budgets?

In the intermediate timeframe--say, two to five years--we foresee a significant upswing in spending on risk management infrastructure. Many, indeed most, firms were ill-equipped to marshal the risk information they needed quickly and reliably as the crisis unfolded. But the very stress of the crisis, both financially and in terms of demands on management time, is an obstacle to addressing the long-term shortcomings quickly. For us, that manifests itself in a surprising level of RFIs, RFPs and workshops, but a struggle in achieving final closure on new initiatives.

What is the appropriate role of technology in risk management?

We should be creating technology in support of sound judgment. There is a tendency to view technology as something that can replace it. You need to drill down and develop stress scenarios in the context of your current risk system, and do it efficiently. Institutions often stop short of running major stress scenarios in a way that accurately portrays their impact, [and] summary sensitivities tend to break down. One strategy you need to apply is diagnosing what kinds of events would be most injurious, given the nature of your current positions.

Three big technology developments assist with this: one, the emergence of grid computing and parallel processing as mainstream computer technology; two, the development of 64-bit architecture, which opens up possibilities [since] you have to retain immediate, speedy access to lots of results in order to analyze details; and three, the cost of memory, which has continued to drop dramatically, so employing the needed memory is not commercially prohibitive.

What should regulators be doing?
The approach seems to be that we need to be more intrusive and have more regulation. That is a knee-jerk reaction. The heart of the problem is institutions so large they can threaten the welfare of the entire country, and of other countries as well. If they are too big to fail, maybe they are too big to exist. Over the last 15 years, larger institutions have had lower capital requirements relative to their size. That is going just the wrong way--you create a significant degree of moral hazard. We also need greater coordination of bank oversight. Tougher supervision is not the ultimate answer.

Are there significant differences in the way the U.S. and Europe approach risk management?

There is not an obvious European view. Large multinational banks are global and staffed [globally]. There is quite a range of sensitivity to the appropriate technology--CEOs who are serious, and CEOs who view risk management as eyewash for regulators. In the U.S., the best example of doing it right is JP Morgan. [CEO and chairman Jamie Dimon] went on the warpath in 2006 to cut exposures, and he probably saved the bank. You need someone at the top with the gut feel to make that choice and the personal authority to carry the organization with that decision.

This article can also be found at SecuritiesIndustry.com.