The Invisible Risk of Poor Data Quality in Change Management

Published May 20, 2008, 5:33pm EDT

With the success of today’s businesses tied so firmly to the performance of their IT departments, it should come as no surprise that any situation that causes IT processes to be slower, clumsier or less effective will have a devastating impact on the profitability of the business. Yet in the typical enterprise IT department, change management is exactly this type of drain on resources: responding to incidents is costly and time-consuming, and prominent analyst firms estimate that 50 to 80 percent of incidents are caused by poorly implemented change [1].

Looking at the big picture, several phenomena have converged to make change management so problematic at this stage. For years, the IT industry has focused on technology itself - investing at a rapid pace in machines that are smaller, faster or better than their predecessors - with the result that the contents of any given IT estate are now incredibly diverse and far from standardized. IT has also traditionally taken its direction from developers, whose advances in new platforms and applications have produced very complex application environments and an emphasis on external services, such as service-oriented architecture (SOA) and software as a service (SaaS), designed to speed time to value. Finally, businesses have long relied on the tribal knowledge of IT “gurus” to run a part of the business few people were expected to know anything about.

These trends are now reversing. Many businesses are shifting their focus from technology to processes through business service management initiatives, Information Technology Infrastructure Library (ITIL) adoption, better IT governance and data center consolidation (covering both platforms and infrastructure). At the same time, forward-looking companies are emphasizing the move from tribal knowledge and data silos to a more “industrial” view of IT. This new view calls for a comprehensive model of service dependencies that remains updated and reconciled as a source of truth, allowing us to reliably predict the results of change.

As an industry, we are far from having achieved this fully industrialized model, and in the meantime change management remains an area of failure, risk and burdensome expense. Why do so many changes fail? Simply put: users rely on bad data, bad data begets worse data, and users end up placing so little stock in the existing data that it hardly matters anyway.

For a change to go as expected, the business needs an accurate, up-to-date picture of all of the relevant data involved in a given process - including a register of assets, detailed attributes and metadata for all configuration items, and information about the dependencies between services, applications and the underlying infrastructure. Most businesses I have worked with estimate that this information needs to be at least 97 percent accurate for change control to be effective. However, most fall far short of that mark. A data center audit will typically reveal that existing data about hardware assets is approximately 90 percent accurate, data about applications and platforms only 80 percent accurate, and data about dependencies a mere 60 percent accurate. Because dynamic and complex dependencies are among the most difficult things to track manually, it is unsurprising that most businesses relying on manual data gathering perform so poorly in that area.
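As an illustration only, the kind of record described above - an asset register with attributes and metadata per configuration item, plus explicit dependency links - might be modeled along the following lines. This is a minimal sketch; all names and fields are hypothetical and not drawn from any particular CMDB product.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the data a change planner needs: configuration
# items (CIs) with attributes/metadata, and the dependency edges between
# services, applications and infrastructure.

@dataclass
class ConfigurationItem:
    ci_id: str                                       # unique identifier in the register
    ci_type: str                                     # e.g. "server", "application", "service"
    attributes: dict = field(default_factory=dict)   # detailed attributes and metadata

@dataclass
class Dependency:
    source: str   # ci_id of the dependent item (e.g. a service)
    target: str   # ci_id of the item it depends on

def impacted_items(change_target: str, dependencies: list[Dependency]) -> set[str]:
    """Items directly affected if change_target is modified."""
    return {d.source for d in dependencies if d.target == change_target}

# Example: a payroll service depending on an application, which depends on a database host.
register = [
    ConfigurationItem("svc-payroll", "service"),
    ConfigurationItem("app-payroll", "application", {"version": "4.2"}),
    ConfigurationItem("db-host-01", "server", {"os": "linux"}),
]
deps = [
    Dependency("svc-payroll", "app-payroll"),
    Dependency("app-payroll", "db-host-01"),
]
print(impacted_items("db-host-01", deps))   # {'app-payroll'}
```

Even in this toy form, the value of the dependency edges is obvious: without them, the impact of changing db-host-01 cannot be anticipated at all, which is exactly the category of data that audits find to be only about 60 percent accurate.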

The trouble doesn’t end there. Poor data quality is risky and expensive in its own right, but existing bad data also has a way of turning into very bad data fairly quickly. An organization implementing changes based on bad data operates differently than one making changes based on accurate, reliable data. Even in the absence of effective change management, people want to avoid disproportionate incident management costs; faced with excessive problems, users create workarounds to push changes through, often ignoring process or protocol. These out-of-band changes are “invisible” and cause the environment to drift further from its recorded state, making systems of record even less accurate and less relevant than they were before. This broken cycle of change management is illustrated in Figure 1.

Manual processes for tracking change requests and verifying successful changes may suit small organizations with few mission-critical IT processes and a very slow rate of change. For larger enterprises, data quality will always be something of a moving target, and the reliability of systems of record plummets when approved changes are recorded inaccurately or incompletely and unapproved changes are not recorded at all. Many organizations have rushed to implement configuration management databases (CMDBs) in the hope that simply having a comprehensive system of record will solve the problem of poor data quality, and some have taken a step in the right direction by implementing systematic or ad hoc audits to periodically refresh this data. The result, however, is that data quality is raised only at a given point in time. If data is actionable sometimes but falls below the 97 percent accuracy that users consider acceptable the rest of the time, the value of a CMDB is negligible - after all, change requests keep coming in, no matter how long ago the data was audited. An enormous number of hours and dollars go into manually maintaining a system that is useful only for short periods. As it turns out, users are very sensitive to data quality and will give a system of record a very short time to prove its value. After the first few major incidents, people lose faith in the data and rely on the resident IT guru’s knowledge instead of the system of record. This is why so many CMDB projects fail.
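To see why point-in-time audits are not enough, consider a deliberately simplified model of accuracy between audits. The weekly drift rate below is an assumption chosen purely for illustration, not a measured industry figure; only the 97 percent threshold comes from the discussion above.

```python
# Illustrative only: accuracy of a system of record that is refreshed by a
# quarterly audit but eroded weekly by unrecorded change. The 2% weekly drift
# and 99% post-audit accuracy are assumed parameters, not measured figures.

ACCURACY_AFTER_AUDIT = 0.99   # assumption: audits leave a small residual error
WEEKLY_DRIFT = 0.02           # assumption: share of records invalidated per week
TARGET = 0.97                 # accuracy users consider actionable (from the text)

accuracy = ACCURACY_AFTER_AUDIT
for week in range(1, 14):     # roughly one quarter between audits
    accuracy *= (1 - WEEKLY_DRIFT)
    status = "below target" if accuracy < TARGET else "ok"
    print(f"week {week:2d}: {accuracy:.1%} ({status})")

# With these assumptions, the record drops below the 97 percent threshold
# within two weeks of the audit and keeps declining until the next one.
```

Whatever the real drift rate in a given shop, the shape of the curve is the same: accuracy is restored briefly and then decays, so users encounter stale data far more often than fresh data.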

Ineffective change management is expensive in several ways, across the entire cycle of change. When there are large areas where the runtime environment and the system of record do not intersect, both false negatives and false positives are introduced. A false negative - an item that exists in the runtime environment but not in the system of record - creates a blind spot where users cannot anticipate the impact of a change. Rectifying a failed change requires rolling back the change, repairing any collateral damage, diagnosing the cause of the problem and executing the change again, which ends up costing an order of magnitude more than getting it right the first time. On the other side, false positives - items that appear in the system of record but no longer exist in the runtime environment - also introduce superfluous work and expense by causing users to weigh those items in change planning and impact analysis for no reason.
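Both gaps fall out of a straightforward comparison between what is discovered at runtime and what the system of record contains. A minimal sketch, assuming each side can be reduced to a set of item identifiers (the identifiers here are hypothetical):

```python
# Hypothetical identifiers; in practice these sets would come from automated
# discovery and from the CMDB/system of record respectively.
runtime_inventory = {"db-host-01", "app-payroll", "svc-payroll", "cache-02"}
system_of_record  = {"db-host-01", "app-payroll", "svc-payroll", "old-ftp-01"}

# False negatives: present at runtime but missing from the record
# (blind spots where the impact of change cannot be anticipated).
false_negatives = runtime_inventory - system_of_record   # {'cache-02'}

# False positives: recorded items that no longer exist at runtime
# (superfluous work in change planning and impact analysis).
false_positives = system_of_record - runtime_inventory   # {'old-ftp-01'}

print(false_negatives, false_positives)
```

Real reconciliation is harder than set subtraction - identifiers rarely match cleanly and attributes drift as well as existence - but the two error classes and their costs are exactly these.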

These expenses add up quickly. Analysts estimate that in a high-performing IT environment, unplanned work accounts for around five percent of staff time [2]. In the average IT environment, however, unplanned work consumes a whopping 45 percent of staff time [3].
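As a rough, purely illustrative calculation (the 200-person staff size is an assumption, not a figure from the analysts cited), the gap between those two percentages is the difference between a handful of people and nearly half the department:

```python
# Illustrative arithmetic using the percentages cited above; the staff size
# is an assumed example.
staff = 200
high_performer_fte = staff * 0.05   # ~10 people effectively consumed by unplanned work
average_shop_fte   = staff * 0.45   # ~90 people effectively consumed by unplanned work
print(high_performer_fte, average_shop_fte)   # 10.0 90.0
```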

Truly effective change management cannot take place without acceptable (97 percent accurate) data quality. Keeping data consistently at this level requires an automated, accurate and comprehensive model of dependencies within the IT environment, one that meets the following criteria (a simplified sketch of such an evidence-carrying record follows the list):

  • The model must be flexible enough to map complex distributed applications in a multivendor environment, capturing the breadth and depth of data in today’s global enterprises.
  • The model must identify custom applications and the nuances of deployment in a given IT environment.
  • The model must identify off-the-shelf applications in a timely fashion, keeping up with the daily pace of vendor product releases, upgrades, patches and licensing requirements.
  • The model must integrate with existing systems and service management tools.
  • The model must provide users with a collaborative interface so that an entire IT team can act on the same dynamic intelligence at the same time.
  • The model must prove the quality of the data it provides by letting users see a complete chain of evidence leading back to the source.
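The last criterion - a verifiable chain of evidence - can be pictured as provenance metadata attached to every dependency the model asserts. A minimal sketch under that assumption; the field names and the example observation are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch: every asserted dependency carries the evidence that
# produced it, so a user can trace the data back to its source.

@dataclass
class Evidence:
    source: str            # e.g. "network discovery", "config file scan"
    observed_at: datetime  # when the observation was made
    detail: str            # the raw observation the assertion was derived from

@dataclass
class DependencyRecord:
    source_ci: str
    target_ci: str
    evidence: list[Evidence]

record = DependencyRecord(
    source_ci="app-payroll",
    target_ci="db-host-01",
    evidence=[Evidence("config file scan",
                       datetime(2008, 5, 1, 14, 30),
                       "jdbc url in payroll.properties points at db-host-01")],
)

# A reviewer challenging the dependency can inspect the chain of evidence
# rather than taking the record on faith.
for e in record.evidence:
    print(f"{e.source} @ {e.observed_at:%Y-%m-%d}: {e.detail}")
```

The point of carrying evidence alongside every record is trust: users who can see where a dependency came from, and how recently it was observed, are far slower to abandon the system of record after an incident.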

The result of building a change management cycle underpinned by this type of automated dependency mapping is illustrated in Figure 2. As this cycle becomes more rigorous, it also becomes more lightweight. By reducing unplanned work and accelerating incident resolution, organizations give themselves the opportunity for greater agility and, as a result, greater profitability.


Although application dependency mapping is a cornerstone of mature change management, it is not the entire solution. Maintaining data quality needs to be a continuous process. Because systems of record must earn users’ trust in order to be used effectively, building transparency into service management tools and processes is essential.

At its core, this dependency model serves as a bridge between an organization’s view of IT and its view of the business. Using a model like this as a foundation for effective change management will take us one step further into the future, where IT is the business.

References:

  1. Jean-Pierre Garbani. “Invisible IT Risk: The Impact of Data Quality in Change & Configuration Management.” Forrester Research Webinar, January 23, 2008.
  2. Garbani.
  3. Garbani.
