
The Gift that Keeps on Giving

Published December 1, 2003

As we approach the holidays, we tend to focus our thoughts on others. We shop for gifts to give our loved ones. I do not remember where the saying "the gift that keeps on giving" originated. However, of the various gifts given during the holiday season, some (e.g., food gifts) are very transient. Some gifts are very meaningful and will be remembered for a lifetime because they touch an emotional or symbolic need of the recipient. Still other gifts "keep on giving" because they grow in value, as with gifts of financial instruments that increase in value over time. Figure 1 provides a comparison of gift types.

We can give our coworkers a gift when we implement information quality (IQ) improvement. Furthermore, when we improve processes to increase the quality of the information produced, we give our knowledge-workers a gift that keeps on giving.


Figure 1: Comparison of Gift Types

Unlike data cleansing, improving the defective processes that create inaccurate or incomplete data eliminates the errors that create the need for correction. Data cleansing corrects yesterday's errors; it cannot prevent tomorrow's. The differentiating ingredients that make an initiative an information "quality" initiative are its focus on the customer and its process improvement – not just inspection (data measurement) and correction (data cleansing). When you improve processes to prevent them from producing defective data and put them in control to produce consistent quality, you have given your enterprise a gift that keeps on giving. You have permanently eliminated the costs of process failure, and the associated scrap and rework, that the previously defective process would otherwise have kept producing. Furthermore, and more importantly, you may have prevented the loss of end customers – and their customer lifetime value – caused by the defective data.

If you have defective data that knowledge-workers cannot trust, you will have to conduct a data correction (cleansing) effort to bring the data up to a level the knowledge-workers can work with. A value-focused information quality management function will treat data correction as a one-time activity for a given database. That value-focused IQ function will also begin a parallel process improvement initiative to prevent recurrence of the defective data, thereby minimizing the need for future data correction.

Defective Processes Cause Defective Information

The economics of information process improvement are quite simple. Defective data does not just appear; it is produced by broken processes. Let us take a process that produces one million records at a cost of $5 each, for an annual production cost of $5 million. If the process has a current defect rate of 20 percent that causes downstream processes to fail, it will create 200,000 defective records per year, or 1,000,000 defective records in five years. I have seen defect rates of 25 to 35 percent.
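
For readers who want to check the arithmetic, a few lines of Python reproduce the volumes above (a minimal sketch; the variable names are mine, purely illustrative):

  # Baseline production and defect volumes from the example.
  records_per_year = 1_000_000
  cost_per_record = 5.00            # dollars per record
  defect_rate = 0.20                # 20 percent of records are defective

  annual_production_cost = records_per_year * cost_per_record
  defective_per_year = int(records_per_year * defect_rate)

  print(f"Annual production cost: ${annual_production_cost:,.0f}")       # $5,000,000
  print(f"Defective records per year: {defective_per_year:,}")           # 200,000
  print(f"Defective records in five years: {defective_per_year * 5:,}")  # 1,000,000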

The direct costs of such defective data can easily be $5 to $50 million or more per year in process failure and recovery and in hunting down and fixing the defective data, not counting opportunity costs. The costs of lost customers and missed opportunity can well be double or treble that! For this analysis, let us say that process failure, recovery, and the local hunting down and correcting of data cost $2.5 million in the year the defective records are created, plus an additional $250,000 in each subsequent year, compounded, for the remaining uncorrected data. Experience shows that not all defective data can be corrected. The five-year cost, including the cost of nonquality, is shown in Figure 2.


Figure 2: Five-Year Cost Due to Defective Processes
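
Figure 2's total can be reconstructed as follows. The carrying-cost wording above is terse, so this sketch assumes the one schedule I found that reproduces both the $16.25 million total and the $2.475 million first-year savings cited later: each year incurs the $2.5 million base failure cost plus $250,000 for each year's worth of defective data accumulated so far, including the current year's.

  # A reading of the nonquality cost schedule that matches Figure 2's
  # $16.25 million total. The exact year-by-year schedule is my
  # assumption; the article does not spell it out.
  base_failure_cost = 2_500_000   # process failure, recovery, hunting, fixing
  carrying_cost = 250_000         # per accumulated year of uncorrected data

  total_nonquality = 0
  for year in range(1, 6):
      year_cost = base_failure_cost + carrying_cost * year
      total_nonquality += year_cost
      print(f"Year {year}: ${year_cost:,.0f}")  # $2.75M, $3.0M, ... $3.75M

  print(f"Five-year cost of nonquality: ${total_nonquality:,.0f}")  # $16,250,000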

Data Cleansing Attacks Yesterday's Problems – Not Tomorrow's

Let us now say that a data correction (cleansing) initiative is undertaken, and that it is able to correct 90 percent of the incorrect data, as no data correction process will eliminate all errors. This still leaves 100,000 defective records. Some types of errors, such as errors in event data, can never be corrected, because the event has passed and cannot be re-observed. Other data will be far more costly to correct than it would have been to capture correctly at the source.

The costs of the data correction are incurred precisely because defective processes produced defective data. Additionally, the correction will not eliminate the defects perfectly; hence, we have both information scrap and rework. The total costs of data correction can span from $300,000 to several million dollars for large databases. For this example, let us say the cost of correction is $750,000, plus a $250,000 acquisition cost for data cleansing software and a $50,000 annual maintenance fee after the first year, for a five-year data cleansing cost of $1,200,000. These are data correction costs only and do not include the costs of process failure and local scrap and rework caused by the original defective data.
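
The five-year cleansing figure is a straightforward tally (again a minimal sketch with illustrative names):

  # Five-year data cleansing cost from the figures above.
  correction_effort = 750_000    # one-time data correction project
  software_purchase = 250_000    # cleansing software acquisition
  annual_maintenance = 50_000    # charged after the first year (years 2-5)

  five_year_cleansing = correction_effort + software_purchase + 4 * annual_maintenance
  print(f"Five-year cleansing cost: ${five_year_cleansing:,.0f}")  # $1,200,000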

However, here are the facts: if you do not improve the processes to prevent the current rate of error, then in five more years you will have 1,100,000 defective records, and you will have incurred the $5 to $50 million or more per year in local process failure and information scrap and rework costs all over again. (Note: this assumes that you correct the data in its source databases.)

Now let us look at the five-year cost. You still have the $25 million information production costs and you still have the $16.25 million costs of nonquality, but you have also added the $1 million data cleanup costs plus $200,000 maintenance fees over five years (see Figure 3).


Figure 3: Five-Year Cost with Data Cleansing Initiative

Now the five-year total cost of ownership is $42.45 million. Your total costs are actually higher in any cumulative year if you do nothing to improve the process! Additionally, you will need to spend that $750,000 for data correction, plus maintenance, again – and with a 90 percent correction rate, you will still have 110,000 defective records.
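
The $42.45 million figure is simply the sum of the components just listed:

  # Five-year total cost of ownership with cleansing but no improvement.
  production = 25_000_000    # 5 years x $5 million
  nonquality = 16_250_000    # Figure 2 total
  cleanup = 1_000_000        # correction effort plus software purchase
  maintenance = 200_000      # 4 years x $50,000

  print(f"Five-year TCO: ${production + nonquality + cleanup + maintenance:,.0f}")
  # Five-year TCO: $42,450,000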

The Gift that Keeps on Giving

Now let us say that you conduct your data correction and improve the process at the same time, using an improvement method such as Plan-Do-Check-Act that seeks to discover the root causes of the defects and then define improvements that prevent recurrence. It is very feasible to eliminate 90 percent of that 20 percent error rate, leaving a 2 percent error rate and only 20,000 defective records per year.

If the one-time cost of a process improvement initiative is $1 million, and the additional operating costs are a 10 percent increase – from $5.00 per record to $5.50 per record – the annual cost of producing one million records becomes $5.5 million. With a 90 percent reduction in the error rate and its costs of failure and scrap and rework, the first-year savings in waste and process failure will be $2.475 million, for a first-year return on investment of $0.975 million after the $500,000 in added operating costs and the $1 million investment (see Figure 4).
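
These first-year numbers tally as follows (the $2.75 million year-one nonquality cost is my inference from the schedule behind Figure 2):

  # First-year economics of the process improvement.
  improvement_investment = 1_000_000
  extra_operating_cost = 500_000     # 10 percent of the $5 million production cost
  year_one_nonquality = 2_750_000    # inferred from the Figure 2 schedule
  error_reduction = 0.90             # 90 percent of defects prevented

  first_year_savings = error_reduction * year_one_nonquality
  first_year_return = first_year_savings - extra_operating_cost - improvement_investment
  print(f"First-year savings: ${first_year_savings:,.0f}")  # $2,475,000
  print(f"First-year return:  ${first_year_return:,.0f}")   # $975,000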


Figure 4: Five-Year Cost with Process Improvement

The five-year total cost of ownership is $31.325 million. Compared with doing nothing, this is a five-year profit of nearly $10 million on a one-time investment of $1 million in a process improvement initiative. After the first year, even with the 10 percent additional operating cost, the improvement saves approximately $3 million each year by reducing the costs of process failure, recovery, and the local hunting, fixing, and workarounds caused by the defective information.
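
Here are the components behind the $31.325 million total (the residual-nonquality line is my inference: with the 90 percent error reduction, 10 percent of the Figure 2 schedule remains):

  # Five-year TCO with data correction plus process improvement.
  production = 27_500_000          # 5 years x $5.5 million
  improvement = 1_000_000          # one-time process improvement initiative
  residual_nonquality = 1_625_000  # 10 percent of the $16.25M schedule
  cleansing = 1_200_000            # one-time correction, software, maintenance

  tco_improved = production + improvement + residual_nonquality + cleansing
  print(f"Five-year TCO: ${tco_improved:,.0f}")  # $31,325,000

  # The "nearly $10 million" profit compares against doing nothing at all:
  do_nothing = 25_000_000 + 16_250_000           # production plus nonquality
  print(f"Profit vs. doing nothing: ${do_nothing - tco_improved:,.0f}")  # $9,925,000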

For information on how to conduct process improvement initiatives, see my book Improving Data Warehouse and Business Information Quality: Methods for Reducing Costs and Increasing Profits (New York: John Wiley & Sons, 1999), pp. 285-310; or Joseph M. Juran, "The Quality Improvement Process," in Juran's Quality Handbook, 5th ed. (New York: McGraw-Hill, 1999), pp. 5.7-5.71.

It is necessary to perform data correction to restore trust in the current databases. However, without a corresponding improvement of the processes that caused the defective data, you will not realize lasting benefits.

The real value in information quality management – and what makes it "quality" management – is realized when you conduct process improvement to eliminate the causes of the defective data.

What do you think? Let me know at Larry.English@infoimpact.com and have a most happy and joyous holiday time.
