Over the centuries, lending institutions have developed systems and processes to manage their risks, but the past few decades have seen the discipline of risk management evolve to new levels of sophistication, including advanced methods of quantification. Regulatory authorities have tracked these theoretical developments closely as an ideal means to monitor and ultimately control the riskiness of banks within their respective jurisdictions.
The Basel II Accord is the latest incarnation of these efforts and prescribes sweeping changes in the ways that banks calculate, monitor and report the amount of risk they take in their businesses. The Accord presents a detailed, standardized, rule-based scheme for calculating most major risks within the bank, which means that each bank must capture every relevant item of data that feeds into the measurement of risk.
International acceptance of the Basel initiative does not mean there are no critics; many bankers feel that the huge investments required for compliance are merely expenditures with no commensurate gain. One of the challenges that must be addressed is ensuring that the work done for Basel compliance also supports longer-term business objectives.
While Basel II compliance requirements apply directly to only a few large banks in a select few countries, ripple effects are already being felt in wider circles.1 Many other countries are adopting these standards, as are financial institutions not subject to the regulations - investment banks and nonbanking finance companies, to name a few. It therefore behooves companies in these industries to study closely the experiences of banks in these initiatives.
Garbage In, Garbage Out - It's All About the Data
The old computer adage applies equally well to risk management. Measurement of risk, and any commensurate response, can only be as good as the available data. The new regulations bring this aspect of risk management into sharp relief, since they place a singular focus on the quality and completeness of risk measurement within a bank.
These requirements have translated into a need for transaction-level transparency in the bank. This is a requirement not only of Basel II but also one espoused by the Sarbanes-Oxley legislation. In essence, the organization must be able to show how a particular item of data came to be, from its origin as part of a business transaction, through any modifications within the organization, to the reporting functions of the bank. This is a tall order and one that most organizations view as very difficult to achieve. Another allied issue of great importance is reconciliation with other statements being put out by the bank - for example, financial reports. To ensure consistency, it is important for the bank to reconcile data between risk and financial statements.
Manipulation, modification, tracing, reporting and data comparison make for a taxing job. Most organizations with disparate systems, processes and data repositories find these challenges almost insurmountable and "make do" with armies of people whose sole job is to massage data to fit a particular requirement. The resulting cost and inefficiency make for a pyrrhic victory at best. One is often confronted with the specter of highly skilled (and expensive) statisticians and mathematicians performing mundane data collection and proofing tasks.
While the first wave of banks tried to achieve Basel compliance with the same Band-Aid approaches, these are increasingly being seen as inadequate. Progressively more banks are opting to "do it right" by building a robust data infrastructure with an enterprise risk data warehouse (RDW) at its core. Such a data warehouse would encapsulate bank-wide risk management data (such as all consumer and commercial loans, investment and trading room deals, etc.) and allow users to query data and generate reports. Critically, the warehouse needs to be augmented by data management infrastructure such as data quality procedures, data stewardship processes, data security and metadata (data about data) technologies. Done properly, a data warehouse can offer the most cost-effective solution, both in implementation and in ongoing operations.
The Enterprise RDW - Not a Typical Data Warehouse
Over the past two decades, several best practices for building data warehouses have emerged. Most consist of an extract, transform and load (ETL) process that moves data from source systems, a data warehouse where the data is stored, and a business intelligence or analytics layer used to disseminate information to the user population. Significant amounts of time and energy have gone into structuring the process of constructing these components.
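To make the pattern concrete, the following is a minimal ETL sketch in Python. The file name, column names and the use of SQLite are purely illustrative assumptions, standing in for a bank's actual source extracts, ETL tooling and warehouse platform.

```python
# Minimal ETL sketch: extract loan records from a flat-file source extract,
# apply simple transformations, and load them into a warehouse staging table.
# File, column and table names are hypothetical.
import csv
import sqlite3

def extract(path):
    """Read raw rows from a source-system extract (CSV assumed)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Standardize types and reference data before loading."""
    cleaned = []
    for row in rows:
        cleaned.append({
            "loan_id": row["loan_id"].strip(),
            "balance": float(row["balance"]),      # amounts as numbers
            "currency": row["currency"].upper(),   # normalize currency codes
            "as_of_date": row["as_of_date"],       # keep ISO date string
        })
    return cleaned

def load(rows, conn):
    """Insert transformed rows into the warehouse staging table."""
    conn.execute("""CREATE TABLE IF NOT EXISTS stg_loan (
        loan_id TEXT, balance REAL, currency TEXT, as_of_date TEXT)""")
    conn.executemany(
        "INSERT INTO stg_loan VALUES (:loan_id, :balance, :currency, :as_of_date)",
        rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("risk_warehouse.db")
    load(transform(extract("commercial_loans_extract.csv")), conn)
```

In practice, a commercial ETL tool would replace the hand-written script, but the extract/transform/load separation remains the same.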
Building an RDW presents all these challenges and more, since there are enough variations to make this a treacherous and difficult task in the best of times. Some of these challenges are:
- Source System Changes: Regulatory requirements are forcing changes in source systems. Because of compressed timelines, it is entirely possible for source systems to be changing in parallel with data warehouse development.
- Heightened Need for Accuracy: RDWs, used for sensitive reporting, are held to far higher quality standards than their marketing counterparts (e.g., CRM warehouses). There is consequently a need for a process to correct erroneous data within the warehouse in advance of source-system changes.
- Dealing with Organizational Silos: An enterprise RDW also needs to collect data from disparate parts of the organization that are not adept at working together. Incompatible reference data, overlapping data sets and irreconcilable terminology are typical data warehouse problems, and they are exacerbated by the enterprise-wide nature of the task.
- The Funding Challenge: While data warehouses used for business improvement can be supported by ROI computations, funding an RDW is much more difficult. Risk management works best when losses are avoided, yet that very absence of materialized losses makes it difficult to fully appreciate the value of risk management investments; at best, one can point anecdotally to cases where a lack of risk management has led to huge losses. To be sure, several organizations are attempting to use metrics to tie risk events to resultant costs, but there is typically a qualitative component to these measurements, which leaves them open to debate.
Life Cycle of an Enterprise Risk Management Implementation
The parts of a risk management implementation program are shown in Figure 1. This transformational process moves the organization from its current state of (presumably inadequate) risk management processes, methodologies and systems to a future state where these are much better integrated and address specific business and regulatory needs. The striking thing about the diagram is the number of processes expected to run in parallel during the execution phase. Indeed, each of the segments, such as business policy and methodology, typically involves many projects - it is not unusual for a bank to run upwards of 50 projects simultaneously.
Figure 1: Enterprise Risk Management Implementation Program
The first step of this program is to understand the current state of the organization in terms of business processes, systems and data. As this is likely to be an area of imperfect understanding, it is important to focus time and adequate resources on the current state process, system and data analysis segments, the outcome of which is robust gap analysis documentation.
An enterprise transformational program such as this also involves business process re-engineering. While there is understandable reluctance to change processes "that work well" and to focus on solving "the technology problem," in many cases, changing processes is the most effective way to address new business and regulatory requirements. Business and system architecture tasks must therefore be allocated adequate time and resources to do a solid job of specifying the high-level future state of risk management within the organization. A plan that combines architecture with gap analysis will minimize course changes during the execution phase.
It is apparent that many components of this initiative have little direct connection to the data warehouse. They can, however, have very real impacts on the timeline and scope of the warehouse implementation. For example, there are likely to be projects redefining facility parameters that serve as inputs to the calculation of loss given default (LGD). These inputs will need to reside in the data warehouse as well, so such parallel development tracks impact warehouse tasks such as mapping source systems to the target data model and data profiling.
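To see why such inputs matter to the warehouse, recall the basic Basel II credit-risk relationship: expected loss is the product of probability of default (PD), loss given default (LGD) and exposure at default (EAD). The sketch below is a deliberately simplified illustration; the collateral-haircut logic and function names are assumptions for this article, not any prescribed LGD methodology.

```python
# Illustrative expected-loss calculation, showing why facility-level inputs to
# LGD (e.g., collateral values) must be available in the warehouse.
# The haircut and recovery logic here are simplified assumptions.

def loss_given_default(exposure, collateral_value, collateral_haircut=0.3):
    """Crude LGD estimate: recovery is capped at haircut-adjusted collateral."""
    recovery = min(exposure, collateral_value * (1.0 - collateral_haircut))
    return max(exposure - recovery, 0.0) / exposure if exposure else 0.0

def expected_loss(pd, exposure, collateral_value):
    """EL = PD x LGD x EAD, the basic Basel II credit-risk relationship."""
    lgd = loss_given_default(exposure, collateral_value)
    return pd * lgd * exposure

# Example: a 2% PD facility of 1,000,000 secured by 600,000 of collateral.
print(expected_loss(pd=0.02, exposure=1_000_000, collateral_value=600_000))
```

If a project elsewhere in the program redefines how collateral is recorded, both the warehouse data model and calculations like this one are affected, which is exactly why these parallel tracks must be coordinated.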
Keys to a Successful RDW Implementation
The leadership of the entire enterprise risk program would be well advised to give careful consideration to the points made in the previous section. The data warehouse development group, in contrast, does not have the broad authority needed to address these issues directly and is often constrained by project exigencies such as ill-defined business and/or system architectures that lead to changes in business requirements. Even well-run programs will encounter change: as regulators clarify rules, for example, there will be trickle-down effects on business processes, systems and data. Aside from change, there is also the challenging task of building the data warehouse while simultaneously dealing with changing source systems on a large scale.
The result is that the data warehouse development group must take many proactive steps to ensure success in the face of such daunting odds:
1. Engage business users and emphasize early definition of business and system architecture. While these responsibilities lie outside the warehouse development group, it is nevertheless important for the warehouse leaders to stress the importance of proper architecture definition early and often. This includes educating appropriate groups (such as the business process design group) on the tremendous downside of doing otherwise.
2. Stress iterative development. It is imperative that the warehouse be built in bite-size chunks. For a Basel implementation, iterations can be roughly equated to product groups known as asset classes. For example, one may tackle commercial loan portfolios first, followed by credit card portfolios and then mortgages.
The sequence of the iterations will be ordered to address the most important portfolios in the bank. The bank may also decide to pick low-hanging fruit first and pursue portfolios where little change is anticipated, keeping more difficult ones till later. Iterative development can thus alleviate the problem of changing source systems.
3. Take advantage of the divisional organization structure. Asset classes typically line up with the divisional organization of the bank (more or less), so mapping this to required systems and data can considerably accelerate development.
4. Release early and often. Iterative development permits frequent, rapid releases of warehouse data. The tangible advantage is that data can be used in important risk management applications, such as risk modeling and parameter estimation, even before the warehouse is live. The user community will experience quick wins, which can be a powerful positive force in promoting goodwill and minimizing inevitable organizational skepticism.
5. Use requirements definition as an early warning indicator. Business requirements definition comes in many forms, such as the development of the logical model, data mapping and associated data quality constraints and metrics. These aspects are doubly important in RDW development because the difficulty in properly defining requirements may be symptomatic of inadequately defined business or system architectures. It is imperative to escalate these issues to the right levels so the underlying problems can be resolved.
6. Data profiling - a make-or-break proposition. Data quality is a prominent goal of a risk management warehouse, so thorough data profiling is essential from the outset. Even the most seasoned systems analysts will not know every data characteristic of a source system, and the resulting imperfect mappings and quality-related business requirements can be refined only through effective data forensics (a minimal profiling sketch appears after this list).
7. Version control and data stubbing. Many source systems are likely to change in the course of the risk program, so effective project management discipline is imperative. Data files from source systems or stubs must be versioned and tied to ETL programs, data models, mapping documents and other artifacts. Strict version control of these artifacts will ensure that source-system changes will be managed in an orderly fashion.
8. Default rules - making do with bad data. Existing source system data will doubtless violate some quality rules, and it is not feasible to wait for the source systems to fix these problems, as they will likely be undergoing other changes simultaneously. Institute a process for setting up default rules to be applied when quality violations are observed in the data (a simple rule-driven sketch appears after this list). Because this is also a business requirements process, it is critical to set expectations for these kinds of rules with the business requirements teams.
9. Use the right technology. Warehousing technologies are becoming ever more sophisticated and scalable, but many still place severe limits on the amount of data that can be held in the warehouse, on query volume or on other metrics. Pick a technology that can scale to the expected data volume and breadth of usage. Warehouses have a way of becoming too successful, leading to scalability issues and, ultimately, user disillusionment.
10. BI tools are a distraction. Many teams spend far too much time focusing on business intelligence tools. The RDW provides massive benefit simply by aggregating data, so flashy BI tools are not really necessary to score wins with users. The user population is likely to be small at inception (a half-dozen users for the whole warehouse is not uncommon), so scalability of the BI layer is not as important, either.
A risk data warehouse also needs to be flexible and available, because users need to discover uses for the data by exploring it in an unstructured fashion. A relatively amorphous environment is therefore best at inception. Once there is buy-in from business users, there will be plenty of time (and money) to apply sophisticated BI technology on top of the warehouse.
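To make point 6 concrete, the following is a minimal profiling sketch using Python and pandas; the extract file and the particular statistics chosen are illustrative assumptions rather than a prescribed profiling standard.

```python
# Minimal data-profiling sketch (point 6): summarize null rates, distinct
# values and value ranges for a source extract before mapping it to the
# target data model. The file and columns are hypothetical.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Return one profiling row per column of the source extract."""
    rows = []
    for col in df.columns:
        series = df[col]
        rows.append({
            "column": col,
            "null_pct": series.isna().mean() * 100,
            "distinct": series.nunique(dropna=True),
            "min": series.min() if pd.api.types.is_numeric_dtype(series) else None,
            "max": series.max() if pd.api.types.is_numeric_dtype(series) else None,
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    source = pd.read_csv("commercial_loans_extract.csv")
    print(profile(source).to_string(index=False))
```

Even a simple report like this surfaces surprises - unexpected nulls, rogue codes, out-of-range amounts - before they become mapping defects.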
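Point 8 can likewise be reduced to a simple rule-driven substitution step. The rules below are invented for illustration; in a real program they would be negotiated with the business requirements teams and recorded alongside the data quality metrics.

```python
# Default-rule sketch (point 8): when a quality rule is violated, substitute
# an agreed default and flag the record for follow-up. The specific fields,
# predicates and defaults are illustrative assumptions only.

DEFAULT_RULES = {
    # field: (validation predicate, default value)
    "currency": (lambda v: isinstance(v, str) and len(v) == 3, "USD"),
    "balance": (lambda v: isinstance(v, (int, float)) and v >= 0, 0.0),
    "collateral_type": (lambda v: v in {"RE", "CASH", "SEC", "NONE"}, "NONE"),
}

def apply_default_rules(record):
    """Return a cleaned copy of the record plus the list of defaulted fields."""
    cleaned, applied = dict(record), []
    for field, (is_valid, default) in DEFAULT_RULES.items():
        if field not in cleaned or not is_valid(cleaned[field]):
            cleaned[field] = default
            applied.append(field)
    return cleaned, applied

record = {"loan_id": "C-1001", "currency": "us", "balance": -5.0}
clean, flags = apply_default_rules(record)
print(clean, flags)  # defaults applied to currency, balance, collateral_type
```

Logging which fields were defaulted, as the sketch does, gives the data stewardship process a concrete backlog to take back to the source systems.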
Building a risk warehouse is an intimidating task, especially in a time-challenged regulatory environment, but it's not an insurmountable one. By using proper project management techniques and employing adequate resources, in conjunction with the methods described in this article, it is eminently feasible to build a robust RDW that can generate tremendous goodwill and become a significant corporate asset with business-transformational potential.

References:
1. These are the so-called G-10 countries: Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Sweden, Switzerland, the United Kingdom, the United States and Luxembourg.
Dilip Krishna, CFA, consults on risk management in financial institutions and heads NCR Teradata's enterprise risk management practice. He has previously been involved with several large implementations of financial and trading systems in large North American institutions. He can be reached at email@example.com.