Organizations have long struggled to give users the means to access the information they need for monitoring processes and making decisions. A constant cycle of decentralization and centralization exists, swinging back and forth every three to five years. Apparently, central IT initiatives have not been good enough to break this cycle in favor of a true enterprise-wide solution, which at first sight has a great deal to offer.

Since the 1970s, organizations have been implementing means to support their users in the execution of managerial processes. And now, after the maturation of the concept of data warehousing for business intelligence (BI), we find ourselves still struggling with the challenge of giving business users the information they want and need.

BI users have been confronted with four major developments over the last 30 years: complexity, speed, change and the speed of change. The environment in which an organization operates is changing at an accelerating pace. The almost continuous wave of mergers and acquisitions and increasingly sophisticated market and competition engagement models are important contributors to change. The time available to make decisions is decreasing accordingly. Because of the unpredictable nature of the changes, it has become more difficult for managers to predict what information they will need. Moreover, the time to collect, prepare and reflect on information has virtually disappeared.

This means that information needs to be:

  1. Timely enough to be useful,
  2. Relevant for the decision at hand,
  3. Of sufficient quality and level of detail for the decision at hand, and
  4. Produced at acceptable costs in relation to the consequences of the decision.

While emphasizing the importance of speed and change, it is crucial not to fall into the trap of exaggeration. Decision-making at the tactical and strategic levels of the organization does not require near real-time data capture and integration. The requirements for this decision-making process are less vulnerable to change, at least within a "normal" reporting horizon. Operational monitoring is different: it is trending toward near real-time data capture. Confusion arises because organizations have become flatter, squeezing operational and tactical management into the same group of people. These managers develop two information usage patterns: the preformatted operational view on one hand and the free-format analytical view on the other.

Decentralization starts when users are dissatisfied because the centralized information system cannot produce the information they want: "I never get what I ask for, and if I get something, it's too late, not relevant, not good enough and costs too much money and effort." The outcome is that the manager starts an initiative to bypass the centralized department.

The pendulum swings back with a call for an enterprise-wide data warehouse once the Excel "fungus" has grown over the desktops: "When I ask for something, I'll get something, but the quality is no good; it is inaccurate, inconsistent and time-consuming (and thus money-consuming) for my people, who have better things to do."

Perhaps a reason for this unending cycle is that, in general, IT people are accustomed to working in an environment of controlled, planned and managed change. Unfortunately, in the managerial world, change is uncontrolled, unplanned and unmanaged.

What can be done to break through the deadlock? To start, IT will have to accept some basic facts of life:

  1. Change will occur, and time to react will decrease.
  2. The demand for timely, available, relevant, high-quality and consistent information will increase.
  3. Individual managers will never fund an infrastructural provision to serve the complete organization (the first pedestrian will never pay for the whole bridge), but once an individual manager is unhappy enough, he will always find some space in his budget to help himself.
  4. Consistency, probably one of the shortcomings causing the most pain in managerial information, is not dependent on the systems architecture and the individual applications but on the business logic of the data structures and definitions (the master data or reference data).

Once these facts of life are accepted, IT can start its journey. The objective is to get data out of the operational systems and turn it into actionable information at the users' desks. Therefore an information logistic infrastructure must be designed and implemented, as shown in Figure 1.

Figure 1: Information Logistic Infrastructure


Governance


From a governance perspective, an information logistic infrastructure must be:

  • Funded and managed as a corporate asset as it relates to the core information logistic services (e.g., the office infrastructure),
  • Funded and managed as a customer-specific service organization as it relates to the extended information logistics services,
  • Operated as a profit center (demand-driven capacity and customer-oriented servicing) instead of a cost center (budget-defined capacity and regulations-oriented operation),
  • Based on challenging (time to market) and flexible (change) service level agreements (SLAs) with the owners of both the back-end applications that support daily operations and the front-end applications that provide information, and
  • Prioritized by the combined customer community and not by the provider of the logistic services.

Almost all IT organizations are run as cost centers. Outsourcing the information logistic services to an external, specialized service provider (which is by definition a service-oriented profit center) is well worth considering. After all, most organizations have also outsourced the logistics of physical goods, with great success.

Architecture

From an architectural perspective, an information logistic infrastructure must be:


  • Capable of supporting different front-end applications with different front-end technologies for different types of users;
  • Capable of extracting data from a broad range of underlying applications using divergent technology and different instances of packaged applications based on the same technology;
  • Designed in such a way that changes in specific parts of the process can be isolated and carried through without the need to change the whole process (decoupled processes);
  • Supportive of push reporting (system-controlled distribution of a large number of standard reports to a large number of users) as well as pull reporting (user-initiated requests to retrieve existing reports or to compose and run ad hoc queries, reports and analyses);
  • Supportive of controlled write-back (closed-loop information logistics), from the front-end applications to data marts, the data warehouse and even the operational systems, so that BI has an explicit effect on the primary processes;
  • Capable of handling more technical metadata throughout the whole process;
  • Providing fully traceable (what is done where) and auditable (can someone sign "in blood" for the truth of it) process information.

And maybe the most important and probably the most difficult criterion to achieve:


  • Supportive of all perspectives on all facts (history, actuals and planned) and on all elements of the business model represented by the reference data (history, actuals and planned), at any given point in time (a minimal sketch follows below).
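
To make this last criterion concrete, here is a minimal sketch of point-in-time versioned reference data, assuming a simple valid-from/valid-to scheme. All names (the ProductGroupVersion record, the products and groups) are illustrative assumptions, not something the article prescribes.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class ProductGroupVersion:
    """One version of a reference data element (illustrative names)."""
    product_id: str
    product_group: str        # hierarchy node this product rolls up to
    valid_from: date          # first day this version applies
    valid_to: Optional[date]  # None = open-ended (current or planned)

# History, actuals and planned versions live side by side.
versions = [
    ProductGroupVersion("P42", "Consumer",  date(2004, 1, 1), date(2005, 12, 31)),
    ProductGroupVersion("P42", "Retail",    date(2006, 1, 1), date(2006, 12, 31)),
    ProductGroupVersion("P42", "Wholesale", date(2007, 1, 1), None),  # planned
]

def group_as_of(product_id: str, day: date) -> Optional[str]:
    """Return the product group effective at a given point in time."""
    for v in versions:
        if (v.product_id == product_id
                and v.valid_from <= day
                and (v.valid_to is None or day <= v.valid_to)):
            return v.product_group
    return None

# The same fact can be reported against the historical, current or
# planned structure, depending only on the chosen reporting date.
assert group_as_of("P42", date(2005, 6, 1)) == "Consumer"   # history
assert group_as_of("P42", date(2006, 6, 1)) == "Retail"     # actuals
assert group_as_of("P42", date(2007, 6, 1)) == "Wholesale"  # planned
```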

These capabilities can be met by a combination of clever architectural design (based on a functional split into different layers with controlled interaction), intermediate and temporary data storage, advanced business and data modeling (generic modeling is preferable to traditional ER modeling) and suitable technology. A carefully selected combination of products from specialized niche vendors may well be preferable to the products of the large, well-known vendors of enterprise resource planning (ERP) applications and general mainstream IT products. After all, this market is still developing, and in such a situation, big is not always beautiful.
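
The "functional split into different layers with controlled interaction" can also be sketched: each layer is an independent step that agrees only on the data handed to the next, so any step can be replaced in isolation (decoupled processes), while the pipeline records what was done where and when (traceable, auditable process information). The step names and row shapes below are invented for illustration.

```python
from datetime import datetime, timezone
from typing import Callable, List, Tuple

# Each layer agrees only on the shape of the rows passed between layers,
# so one layer can be replaced without touching the others.
Step = Callable[[list], list]

def extract(rows: list) -> list:
    return rows  # stand-in for reading from an operational system

def cleanse(rows: list) -> list:
    return [r for r in rows if r.get("amount") is not None]

def load_mart(rows: list) -> list:
    return rows  # stand-in for writing to a data mart

def run_pipeline(rows: list, steps: List[Step]) -> Tuple[list, list]:
    """Run each layer in turn, keeping an audit trail of what was done
    where and when."""
    audit = []
    for step in steps:
        rows_in = len(rows)
        rows = step(rows)
        audit.append({
            "step": step.__name__,
            "ran_at": datetime.now(timezone.utc).isoformat(),
            "rows_in": rows_in,
            "rows_out": len(rows),
        })
    return rows, audit

rows, audit = run_pipeline(
    [{"amount": 10}, {"amount": None}],
    [extract, cleanse, load_mart],
)
for entry in audit:
    print(entry)  # one record per step: traceable and auditable
```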

Master Data Management

Inconsistent information caused by problems related to master data is closely connected to the challenge of information logistics. Users will turn to the person or department that delivers the reports to them. If two different reports present different figures on the same subject, the information provider is asked to explain or correct the differences. From this perspective, the operator of the information logistics process has a large interest in solving these master data problems to provide the required service level. However, many aspects are connected to master data that have nothing to do with the operator of the information logistic process.

Master data is all about:

  • Assigning ownership of the definition of data structures and data elements to the right organizational business units,
  • Assigning data stewards (delegated owners) in these organizational units to execute the actual work,
  • Organizing a process in which the relevant functionaries (also in other organizational units) are able to say their piece, and
  • Organizing a workflow to route master data under construction through the organization for comment and approval (a minimal sketch of such a workflow follows this list).
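
Such a comment-and-approval workflow can be pictured as a small state machine. The states, actions and roles below are illustrative assumptions; the article prescribes only that new master data is routed through the organization before it becomes official.

```python
# A minimal master data approval workflow. States and actions are
# illustrative, not prescribed by the article.
TRANSITIONS = {
    ("draft", "submit"): "in_review",       # data steward finishes a draft
    ("in_review", "comment"): "in_review",  # functionaries say their piece
    ("in_review", "reject"): "draft",       # back to the data steward
    ("in_review", "approve"): "approved",   # owner signs off
}

def advance(state: str, action: str) -> str:
    """Apply one workflow action, refusing anything the process forbids."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")

state = "draft"
for action in ("submit", "comment", "approve"):
    state = advance(state, action)
print(state)  # approved -> ready for distribution to subscribing systems
```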

These aspects of the solution to the master data challenge are fully owned by the business and can only be addressed through its active leadership and participation. Here, IT really is only the provider of enabling applications and technology.
However, in this role there are still a number of things for IT to do:


  • Identify the applications in which the same master data is defined and maintained more than once, so that the business can decide which application is leading for each master data element.
  • Develop an architecture (preferably based on a near real-time publish-and-subscribe mechanism) to distribute master data elements to the relevant applications (a minimal sketch follows this list).
  • Select and implement the appropriate technology to support the definition and maintenance of master data and the workflow involved.
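
To illustrate the publish-and-subscribe idea, here is a minimal sketch in which the central repository publishes each approved change once and every subscribing application, the data warehouse among them, applies it locally. All names are invented; a real implementation would sit on message-oriented middleware rather than in-process callbacks.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class MasterDataHub:
    """Toy publish-and-subscribe hub: the central master data repository
    publishes approved changes; applications subscribe per element type."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, element_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[element_type].append(handler)

    def publish(self, element_type: str, change: dict) -> None:
        for handler in self._subscribers[element_type]:
            handler(change)  # each application applies the change locally

hub = MasterDataHub()
hub.subscribe("customer", lambda c: print("ERP applies", c))
hub.subscribe("customer", lambda c: print("data warehouse applies", c))

# One approved change in the repository reaches every relevant application.
hub.publish("customer", {"customer_id": "C-001", "name": "Acme Corp"})
```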

Figure 2 shows that the data warehouse is not the repository for all the master data but just one of the systems updated from the central master data repository. The data warehouse is shown in the middle of the circle, as the function is somewhat different compared to the operational systems depicted on the circumference. Ideally, the master data repository should contain all the master data, including the definitions and structure of all the hierarchies of the business model. Therefore, it is likely that a good master data repository bears a considerable resemblance to the modeling and storage structure of the data warehouse.

Figure 2: Conceptual Architecture

The current emphasis on master data is about ownership of definitions of data elements and initiation of new occurrences of the master data elements at the atomic level of the individual customer and the individual product. Although this is an important issue, it is even more important to reach organization-wide agreement on the definitions and structure of all the hierarchies of the business model.

The primary objective of these dimensions is to enable and ensure consistent reporting across functional and geographical boundaries as well as across managerial levels. This does not mean that all functions or local organizations should be obligated to use the same hierarchies. Product grouping from a logistics perspective can differ from product grouping from a marketing perspective. Sales grouping per customer in an organization with only a limited number of customers can differ from sales grouping in very large organizations with 10 or 50 times that number of customers.

Good insight into the why, how and what of these dimensions, and the way they must be linked to the atomic level of transactions (e.g., an order line), is essential to enable and maintain consistency. Data stewardship is a business responsibility, and the enterprise data warehouse is an infrastructure independent of any individual business. The data warehouse holds "the golden copy" of the reporting structures (whereas individual applications might have their own structures). This makes the master data management repository and the enterprise data warehouse two sides of one coin.
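
The point about linking hierarchies to the atomic transaction level can be made concrete: as long as every order line carries the atomic product key, the same lines can be rolled up through a logistics grouping and a marketing grouping and still reconcile to the same total. The groupings and amounts below are invented for illustration.

```python
from collections import defaultdict

# Atomic facts: every order line carries the atomic product key.
order_lines = [
    {"product_id": "P1", "amount": 100},
    {"product_id": "P2", "amount": 250},
    {"product_id": "P3", "amount": 50},
]

# Two legitimate but different hierarchies over the same atomic level.
logistics_group = {"P1": "bulk", "P2": "bulk", "P3": "parcel"}
marketing_group = {"P1": "premium", "P2": "budget", "P3": "budget"}

def roll_up(lines: list, grouping: dict) -> dict:
    """Aggregate atomic order lines up one hierarchy."""
    totals: dict = defaultdict(int)
    for line in lines:
        totals[grouping[line["product_id"]]] += line["amount"]
    return dict(totals)

by_logistics = roll_up(order_lines, logistics_group)
by_marketing = roll_up(order_lines, marketing_group)

# Different views, but both reconcile to the same grand total: the
# consistency that linking to the atomic level is meant to guarantee.
assert sum(by_logistics.values()) == sum(by_marketing.values()) == 400
print(by_logistics, by_marketing)
```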

Although we are convinced that shared master data management is the way to go, it will still be quite a journey. Once master data management has regained the place and importance it deserves, the call for the enterprise data warehouse might lose importance. This could give way to architectures based on a federation of data warehouses sharing parts of the business model with local extensions managed by individual organizational units close to the actual users.
