The parts of a risk management implementation program are shown in Figure 1. This transformational process moves the organization from the current state of (presumably inadequate) risk management processes, methodologies and systems to a future state where these are much better integrated and address specific business and regulatory needs. The striking thing about the diagram is the number of processes that are expected to run in parallel during the execution phase. Indeed, each segment, such as business policy and methodology, typically involves many projects - it is not unusual for a bank to run upwards of 50 projects simultaneously.
Figure 1: Enterprise Risk Management Implementation Program
The first step of this program is to understand the current state of the organization in terms of business processes, systems and data. Because this is likely to be an area of imperfect understanding, it is important to devote adequate time and resources to the current-state process, system and data analysis segments, the outcome of which should be robust gap analysis documentation.
An enterprise transformational program such as this also involves business process re-engineering. While there is an understandable reluctance to change processes "that work well" and a preference for solving "the technology problem" instead, in many cases changing processes is the most effective way to address new business and regulatory requirements. Business and system architecture tasks must therefore be allocated adequate time and resources to do a solid job of specifying the high-level future state of risk management within the organization. A plan that combines architecture with gap analysis will minimize course changes during the execution phase.
It is apparent that many components of this initiative have little direct connection to the data warehouse. They can, however, have very real impacts on the timeline and scope of the warehouse implementation. For example, there are likely to be projects redefining facility parameters that are inputs to the calculation of the loss-given-default (LGD) parameter. These inputs will also need to reside in the data warehouse, so these parallel development tracks will impact warehouse tasks such as mapping source systems to the target data model and data profiling.
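To make this dependency concrete, the following is a minimal sketch, assuming a toy recovery model: facility parameters such as collateral value feed the LGD estimate, so any project that redefines those fields forces rework of the mappings and profiling that load them. The field names and the formula here are illustrative assumptions, not a prescribed Basel calculation.

```python
from dataclasses import dataclass

@dataclass
class Facility:
    """Hypothetical facility record as it might land in the warehouse.

    Field names are illustrative; a real target data model would be
    driven by the bank's business requirements and the Basel rules."""
    exposure_at_default: float      # EAD in base currency
    collateral_value: float         # haircut-adjusted collateral
    unsecured_recovery_rate: float  # e.g., historical average for the seniority class

def estimate_lgd(f: Facility) -> float:
    """Toy LGD estimate: recover collateral first, then apply an
    unsecured recovery rate to the remainder; LGD = 1 - recovery rate."""
    if f.exposure_at_default <= 0:
        return 0.0
    secured = min(f.collateral_value, f.exposure_at_default)
    unsecured = f.exposure_at_default - secured
    recovered = secured + unsecured * f.unsecured_recovery_rate
    return 1.0 - recovered / f.exposure_at_default

# If a parallel project redefines how collateral_value is derived,
# every mapping and profiling task touching this input must be revisited.
print(estimate_lgd(Facility(1_000_000, 600_000, 0.25)))  # ≈ 0.30
```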
Keys to a Successful RDW Implementation
The entire enterprise risk program leadership would be well advised to give careful consideration to the points made in the previous section. The data warehouse development group, by contrast, does not have the broad authority needed to address these issues directly and is often constrained by project exigencies such as ill-defined business and/or system architectures that lead to changes in business requirements. Even well-run programs will encounter change. For example, as regulators clarify rules, there will be trickle-down effects on business processes, systems and data. Beyond such changes, there is the challenging task of building the data warehouse while source systems themselves are changing on a large scale.
The result is that the data warehouse development team must take many proactive steps to ensure success in the face of such daunting odds:
1. Engage business users and emphasize early definition of business and system architecture. While these responsibilities lie outside the warehouse development group, it is nevertheless important for warehouse leaders to stress the importance of proper architecture definition early and often. This includes educating appropriate groups (such as the business process design group) on the tremendous downside of doing otherwise.
2. Stress iterative development. It is imperative that the warehouse be built in bite-size chunks. For a Basel implementation, iterations can be roughly equated to product groups known as asset classes. For example, one may tackle commercial loan portfolios first, followed by credit card portfolios and then mortgages.
The sequence of iterations should address the bank's most important portfolios first. Alternatively, the bank may decide to pick low-hanging fruit first, pursuing portfolios where little change is anticipated and keeping more difficult ones until later. Iterative development can thus alleviate the problem of changing source systems.
3. Take advantage of the divisional organization structure. Asset classes typically align, more or less, with the bank's divisional organization, so mapping divisions to required systems and data can considerably accelerate development.
4. Release early and often. Iterative development permits frequent, rapid releases of warehouse data. The tangible advantage is that data can be used in important risk management applications, such as risk modeling and parameter estimation, even before the warehouse goes live. The user community will experience quick wins, which can be a powerful positive force in promoting goodwill and minimizing inevitable organizational skepticism.
5. Use requirements definition as an early warning indicator. Business requirements definition comes in many forms, such as development of the logical model, data mapping and the associated data quality constraints and metrics. These aspects are doubly important in RDW development because difficulty in properly defining requirements may be symptomatic of inadequately defined business or system architectures. It is essential to escalate these issues to the right levels so that the underlying problems can be resolved.
6. Data profiling - a make-or-break proposition. Data quality is a prominent goal of a risk management warehouse, so proper data profiling is essential to ensure data quality at inception. Even the most seasoned system analysts will not know all the data characteristics of a source system, and the resulting imperfect mappings and quality-related business requirements can be refined only by effective data forensics (a minimal profiling sketch follows this list).
7. Version control and data stubbing. Many source systems are likely to change over the course of the risk program, so effective project management discipline is critical. Data files from source systems, or stubs standing in for them, must be versioned and tied to ETL programs, data models, mapping documents and other artifacts. Strict version control of these artifacts ensures that source-system changes are managed in an orderly fashion (a manifest sketch also follows this list).
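To illustrate point 6, here is a minimal data forensics sketch in Python, assuming a delimited source extract; the file and column names in the usage comment are hypothetical. Even metrics this simple - null rates, distinct counts, frequent values - routinely surface source-system characteristics that no analyst interview would reveal.

```python
import csv
from collections import Counter

def profile_column(path: str, column: str, sample_limit: int = 100_000) -> dict:
    """Basic forensics on one column of a delimited extract:
    null rate, distinct-value count and most frequent values."""
    total = nulls = 0
    values = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            total += 1
            v = (row.get(column) or "").strip()
            if not v:
                nulls += 1          # empty or missing counts as null
            else:
                values[v] += 1
            if total >= sample_limit:
                break
    return {
        "rows_sampled": total,
        "null_rate": nulls / total if total else 0.0,
        "distinct_values": len(values),
        "top_values": values.most_common(5),
    }

# Hypothetical usage against a commercial-loan extract:
# print(profile_column("loan_master_2024Q1.csv", "facility_type"))
```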
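For point 7, one lightweight approach (an assumption of this sketch, not a prescribed tool) is a content-hash manifest that binds source extracts or stubs to the ETL programs, data models and mapping documents built against them. All file names here are hypothetical.

```python
import hashlib
import json
import pathlib
from datetime import date

def register_version(manifest_path: str, artifact_paths: list[str], note: str) -> dict:
    """Append a manifest entry tying artifacts together by content hash,
    so a source-system change is always traceable to the exact artifact
    versions that were built against the previous extract."""
    entry = {
        "date": date.today().isoformat(),
        "note": note,
        "artifacts": {
            p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in artifact_paths
        },
    }
    manifest = pathlib.Path(manifest_path)
    history = json.loads(manifest.read_text()) if manifest.exists() else []
    history.append(entry)
    manifest.write_text(json.dumps(history, indent=2))
    return entry

# Hypothetical usage:
# register_version("rdw_manifest.json",
#                  ["extracts/loan_master_stub.csv", "etl/load_loans.py",
#                   "mapping/loan_mapping.xlsx"],
#                  note="source system v2 field rename: cust_id -> customer_id")
```

The design choice is deliberate: hashing file content rather than trusting file names means a silently changed extract or stub is detected the moment it is re-registered.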