One of the key design factors in architecting the integrated enterprise is determining how to distribute processing to ensure the speed and bandwidth required to meet business needs. The parameters that drive this decision are industry- and business-specific. Financial-services companies using the Internet to interface with their customers require systems that are always available and that work in real time; they also must consider dynamic load balancing. Other operations, such as insurance claims processing, do not have such stringent processing requirements.

However, one area of performance is often overlooked – the area of change management. Without the ability to perform efficient, global impact analysis, even the optimal execution environment can fail when it needs modification. In order to better understand the requirements for developing an enterprise infrastructure that supports effective change management, we will first consider the types of change an organization must anticipate and then the technical challenges that must be addressed to successfully respond to these sources of change.

Sources of Change

The sources of change are extremely varied and include:

Regulatory requirements. Every industry is affected by this kind of change, although the frequency of these changes will vary. For example, healthcare, utilities and banking are heavily regulated industries in most countries. As a result, companies in these markets must expect to accommodate changes on a regular basis, ranging from relatively simple changes such as new tax rates or currencies (e.g., adoption of the euro) to the kind of sea change required when governments choose to reduce regulation. Deregulation, such as that previously seen in the banking industry and currently under way in the utility industry, fundamentally changes the way companies do business. It can lead either to dramatic failures, such as that of California's electric utilities, or to large numbers of mergers and acquisitions as companies try to acquire the systems and skills needed to compete.

Advances in technology. Once again, no industry is immune to this type of change. The survival of technology companies depends on their ability to generate and promulgate these advances, while the survival of other companies depends on their ability to adapt their business to them. The mom-and-pop hardware store will never be able to deploy systems such as those used by Wal-Mart or Home Depot; but unless it develops something that differentiates it from these behemoths, it won't survive. In fact, even large companies such as Montgomery Ward and International Harvester failed because they were unable to adapt their systems and strategy to changing market conditions.

New product offerings and market strategy. These changes are at the heart of capitalism: the constant drive to develop products that deliver more value or convenience to the consumer, whether that consumer is an individual or a corporation. Dell's success, in large part, can be attributed to its use of technology to minimize inventory, avoid middlemen and price dynamically.

In short, success in the marketplace has always depended upon the business owner's ability to generate and benefit from innovation. The irony is that information technology (IT), the very industry that has enabled innovation by increasing the speed and accuracy of communication, is now threatening to become the bottleneck that impedes it. This is because IT products have not evolved in ways that speed their own deployment into existing environments. While new IT products are often increasingly easy to deploy and manage by themselves, they too often ignore the three Ds that characterize the environments in which they must be deployed: data, delivery and deed.

Data. From an IT perspective, the oldest of these is data; the earliest use of computers was geared toward more rapid access to and manipulation of data. As a result, one of the biggest problems that must be addressed in developing a change management strategy is managing data integration. A recent article in InfoWorld cites IDC reports suggesting that for every dollar companies spend on middleware, software and installation, they spend between $5 and $20 to integrate disparate back-end systems, including legacy systems.1 Moreover, the problem is not limited to brick-and-mortar companies rife with legacy applications running under hierarchical and network database managers on mainframes. The failure of many dot-coms, toysrus.com among them, can be attributed to an inability to access data timely and accurate enough to deliver product as promised.

Delivery. With the advent and deployment of wide- and local-area networks, timely and accurate access to data depended not just on the efficiency of a database management system (DBMS) and the database schema (the layout of the data), but also on where the data resided and where it needed to be delivered. The Internet, with its "publish-and-subscribe" paradigm, was envisioned as simplifying this aspect of timely and accurate data access. However, the Internet's ability to deliver on its promise is limited by the fact that most of the data required to drive e-business initiatives resides – and must be maintained – in disparate databases residing on proprietary networks. The complexity of managing these networks and the messages/interfaces that connect the applications hosted on these networks is growing dramatically. This phenomenon is largely due to companies seeking to integrate applications across the enterprise, and to interface with trading partners and customers across the Internet.

Deed. Webster defines "deed" as an action – or, in the world of IT, a transaction: an event that should be recorded as having occurred. When applications were standalone, there were usually only one or two ways to enter a transaction into a system. In the integrated enterprise, however, a single event from the Internet may trigger a chain of transactions. Middleware vendors such as SeeBeyond and Vitria provide products that allow business users to define business processes, sets of rules that describe how transactions should be "chained" to achieve the desired behavior in the organization.
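
To make the notion of "chaining" concrete, the following is a minimal sketch in Python, not a representation of any vendor's product. The event name, the three downstream steps and the order payload are all invented for the example; in real middleware, each step would be a message routed to another system rather than a local function call.

```python
# Minimal sketch of transaction "chaining": one inbound event triggers a
# defined sequence of downstream transactions. Event names and handlers
# are hypothetical, not taken from any middleware product.

def reserve_inventory(order):
    print(f"Inventory: reserve items for order {order['id']}")

def create_invoice(order):
    print(f"Billing: create invoice for order {order['id']}")

def schedule_shipment(order):
    print(f"Logistics: schedule shipment for order {order['id']}")

# The "business process": the ordered chain of transactions each event should trigger.
PROCESS_RULES = {
    "web_order_received": [reserve_inventory, create_invoice, schedule_shipment],
}

def process_event(event_type, payload):
    for step in PROCESS_RULES.get(event_type, []):
        step(payload)  # in production, each step would be dispatched to another application

process_event("web_order_received", {"id": "A-1001"})
```

The point of the sketch is that the chain itself is a piece of business logic that lives outside any one application, which is precisely the information a change management strategy must capture.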

For a change management strategy to work, it must be able to capture the interfaces between deed, delivery and data. Unfortunately, most software products that address these layers have no means of capturing this information, although that shortcoming may not be apparent if one relies only on the marketing material. For example, most middleware providers minimize the importance of data access by pointing out that they provide or support adapters for packaged applications, standards such as ODBC, or other gateways. While these interfaces may simplify aspects of the problem, they come nowhere close to solving it.

For example, consider the case where an organization purchases a set of adapters between packaged applications. These adapters should understand the interrelationship between the data values represented in the different applications and the fact that transaction X in application Y should trigger a particular exchange of information with application Z. Additionally, the adapters should be able to interface directly with the applications. However, they would work in plug-and-play mode only if the data values were represented consistently in the two applications and if the adapters were designed to handle whatever customizations had been made to those applications. The odds of meeting both criteria are slim.

While companies strive for data consistency, the reality of how applications are implemented means that this consistency is only imperfectly achieved. As a result, for an adapter to work, some (usually relatively simple) code is often required to transform one or more data values. In fact, in his report "Demystifying B2B Integration," Simon Yates of Forrester Research quotes one of the 50 companies interviewed as saying, "We're using MQSeries and webMethods, and these products have a high degree of customer development that goes with them. They are not an out-of-the-box solution, so the learning curve is steep."
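
As an illustration of the kind of "relatively simple" transformation code adapters typically require, the sketch below maps a status code and a date format from one application's representation to another's. The field names and code values are hypothetical; real adapters accumulate dozens of such mappings, each of which is an interface dependency that impact analysis must be able to find.

```python
from datetime import datetime

# Hypothetical value mapping between two applications that represent the
# same fact differently.
STATUS_MAP = {"O": "OPEN", "C": "CLOSED", "H": "ON_HOLD"}

def transform_order(source_record):
    """Transform an order record from application Y's layout to application Z's."""
    return {
        "order_id": source_record["ORD_NO"].strip(),
        "status": STATUS_MAP[source_record["STAT_CD"]],
        # application Y stores dates as YYYYMMDD strings; Z expects ISO format
        "order_date": datetime.strptime(source_record["ORD_DT"], "%Y%m%d").date().isoformat(),
    }

print(transform_order({"ORD_NO": "A-1001 ", "STAT_CD": "O", "ORD_DT": "20010813"}))
```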

Complexity as a Fact of Life

The reality is this: The task of IT is and will remain complex for the following reasons:

  • Enterprise applications – whether legacy or packaged – are complex. The copybooks for some legacy COBOL files are tens of thousands of lines long. Oracle's MRP application has over 3,000 tables in its schema definition.
  • The interfaces (whether through messaging or Web services) required to implement the integrated enterprise usually involve thousands of snippets of custom code.
  • The rate of change is increasing at the same time that systems are trying to interface in real time. The result is that IT's window for responding to change is significantly reduced.

Meta Data Acquisition and Exchange

Without some methodology for capturing information describing the interfaces between deed, delivery and data in a queryable environment, it will be impossible to determine which interfaces are affected when something changes. The need for this kind of audit trail is driving a much broader definition of meta data. No longer is it sufficient to understand the data layouts and relationships. One must also capture the business rules and transformation logic, as well as the dependencies created by the expected sequence and timing of events that result from the definition of business processes. Key, then, to an effective change management strategy is the ability to acquire and centralize this information for rapid impact analysis. The centralization of this information in directories or a repository is critical for rapid determination of the technical changes required to react to some change, even if the actual implementation of the changes is carried out by different groups within the IT organization.
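
One way to picture such a repository is as a small set of queryable tables linking data elements, interfaces and business processes. The sketch below uses SQLite from Python's standard library, with invented system, interface and element names, to show how centralizing this meta data turns impact analysis into a simple query: which interfaces and processes are touched if a given data element changes?

```python
import sqlite3

# Toy meta data repository: which interfaces use which data elements,
# and which business process each interface supports. All names are invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE data_element  (name TEXT PRIMARY KEY, owning_system TEXT);
    CREATE TABLE interface     (name TEXT PRIMARY KEY, process TEXT);
    CREATE TABLE interface_uses (interface TEXT, element TEXT);
""")
db.executemany("INSERT INTO data_element VALUES (?, ?)",
               [("customer_id", "CRM"), ("credit_limit", "ERP")])
db.executemany("INSERT INTO interface VALUES (?, ?)",
               [("crm_to_billing", "order_to_cash"), ("erp_to_portal", "self_service")])
db.executemany("INSERT INTO interface_uses VALUES (?, ?)",
               [("crm_to_billing", "customer_id"),
                ("erp_to_portal", "customer_id"),
                ("erp_to_portal", "credit_limit")])

# Impact analysis: which interfaces (and business processes) are affected
# if the definition of customer_id changes?
for name, process in db.execute("""
        SELECT i.name, i.process
        FROM interface i
        JOIN interface_uses u ON u.interface = i.name
        WHERE u.element = ?""", ("customer_id",)):
    print(name, process)
```

A real repository would, of course, also carry the business rules, transformation logic and event sequencing described above; the sketch only illustrates why the information must be centralized and queryable.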

Achieving this goal places two major requirements on IT within the enterprise: one technical and one organizational. The technical requirement is that this meta data be captured as a side effect of using the software products chosen to address each of the 3-D layers, rather than documented by hand; only then can the meta data in the repository accurately represent what is happening when the software executes.

The second requirement is that this information be collected and owned by a single group in order to minimize the time required to determine what must change. Ideally, the collection would be automated by some check-in mechanism invoked when a project goes into production. If the group owning this information map is also the one in charge of implementing enterprise application integration, or EAI (the definition and execution of "deeds"), the necessary changes probably would not ripple into the organizations owning the delivery or data. If a change originates in either the data or delivery sector of the organization, the most serious impact, in terms of the number of other changes required, is likely to be in the interfaces between applications, reinforcing the need for ownership by some interface management service organization.
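
Such a check-in mechanism need not be exotic. The function below is a hypothetical sketch of the idea, not a feature of any particular product: a deployment step that registers an interface's meta data, stamped with when it went into production, in a central store.

```python
import json
from datetime import datetime, timezone

def check_in_interface(repository_path, interface_name, metadata):
    """Hypothetical deployment step: append an interface's meta data,
    stamped with its production date, to a central registry file."""
    record = {
        "interface": interface_name,
        "registered_at": datetime.now(timezone.utc).isoformat(),
        **metadata,
    }
    with open(repository_path, "a") as repo:
        repo.write(json.dumps(record) + "\n")

# Example check-in at go-live; names and elements are invented for illustration.
check_in_interface(
    "interface_registry.jsonl",
    "crm_to_billing",
    {"source": "CRM", "target": "Billing", "elements": ["customer_id", "invoice_total"]},
)
```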

Over the past 20 years, IT has evolved from providing productivity tools to being the backbone of a company's business. In IT's transition from the back office to the executive suite, it is not surprising that mistakes were made. With IT's increased importance, however, the potential damage caused by such mistakes becomes much more costly. In the past, one could point to a failed SAP implementation as a $10 million or $15 million mistake. Today, failure in a mission-critical application such as supply chain management can lead to significant losses on Wall Street and devastating results for the company. Without a solid methodology and infrastructure for change management, the risk of such failures greatly increases.

References

1. Tom Sullivan, "Take Your Medicine," InfoWorld, August 13, 2001.
