This is the last of a three-part series that describes some fatal misconceptions about information quality. These misconceptions can undermine a well-intentioned information quality initiative and minimize business effectiveness. In this final part, misconceptions 6 and 7 are examined.

Misconception 6: Information quality problems can be edited out by implementing business rules.

There is a major movement today that addresses defining business rules. Business rules are business policies that govern business actions and, as such, may provide integrity rules for data. The temptation is great to think that once business rules have been defined and implemented, process quality has been achieved. This is not true. Well-implemented business rules provide important editing and validation of data; however, if they are not accompanied by process quality principles, the very rules that are meant to improve information quality can actually guarantee just the opposite. Consider the bank that discovered over two dozen of its customers had the same social security number. Root-cause analysis revealed that an edit routine required a valid nine-digit social security number when creating a customer record. But if the customer did not know their social security number, one information producer simply entered her own, because the application would not let her create a record without a nine-digit number. Valid social security number? Yes. Quality data? No! The implemented business rule actually "edited in" nonquality data.

There are two important requirements to move from "implemented business rules" to "information quality improvement." The first is to implement business rules properly.

Allow "unknown" or null values when information producers do not know the data. Forcing a value in order to create a record when that value is not known creates nonquality. Note that it is not acceptable to create "dummy" values such as 999-99-9999 or 000-00-0000 to represent the absence of a social security or social insurance number. To do so requires applications to have logic to exclude such values from being used. Without this "exception" logic, processes using these "dummy" values will fail, producing invalid tax reporting, for example.

Implement integrity rules at the right place. When creating knowledge about customers, it may not be possible to capture all facts about them, although the quality principle is to capture all possible data at the point the knowledge is knowable. Provide reasonability edits and validation as each attribute is created, including duplicate-record matching. When creating a loan for a customer, however, certain facts such as social security number must be known. It is the "create loan" process that must validate and verify that all required customer data exists and is correct. Implement edits within the processes that use the data to assure process integrity.

Finally, recognize the limitations of automated business rules. They define the required business policies, but application edits alone cannot guarantee data accuracy. Implement business rules from an "error-proofing" perspective (Juran calls this "foolproofing") to prevent inadvertent human error, but recognize that business rules in and of themselves cannot prevent inaccurate values.
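The principles above can be sketched in code: allow "unknown" at the point of capture, reject dummy placeholder values everywhere, and enforce the "must be known" rule only in the process that actually requires the fact (creating a loan). This is a minimal illustrative sketch; the function names, record shapes and rules are hypothetical, not drawn from the article.

```python
import re
from typing import Optional

SSN_PATTERN = re.compile(r"^\d{3}-\d{2}-\d{4}$")
# Placeholder values that would "edit in" nonquality data if accepted.
DUMMY_SSNS = {"999-99-9999", "000-00-0000"}

def validate_ssn(ssn: Optional[str], required: bool) -> None:
    """Validate an SSN; a missing value is honest data unless required."""
    if ssn is None:
        if required:
            raise ValueError("SSN is required for this process")
        return  # "unknown" is acceptable at the point of capture
    if not SSN_PATTERN.match(ssn):
        raise ValueError(f"not a valid SSN format: {ssn}")
    if ssn in DUMMY_SSNS:
        raise ValueError(f"dummy SSN rejected: {ssn}")

def create_customer(name: str, ssn: Optional[str] = None) -> dict:
    # Point of capture: record what is knowable; SSN may be unknown.
    validate_ssn(ssn, required=False)
    return {"name": name, "ssn": ssn}

def create_loan(customer: dict, amount: float) -> dict:
    # Point of use: the "create loan" process requires a verified SSN.
    validate_ssn(customer["ssn"], required=True)
    return {"customer": customer["name"], "amount": amount}
```

With rules placed this way, a customer record can be created with an unknown social security number, but a loan for that customer cannot be created until a real, non-dummy number is supplied.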

The second requirement for moving from business rule definition as an academic exercise to information quality improvement is to provide adequate training to information producers along with clear business procedures. The people who create data must understand: 1) the information customers who use the data; 2) those customers' quality requirements; 3) how the data is used; and 4) the costs of nonquality data. Without adequate training of information producers and without effective procedures, all the implemented business rules in the world will not produce quality data.

Misconception 7: Information quality is too expensive.

The fatal misconception is that it costs money to produce quality information. However, just the opposite is true. Information quality proponents around the world are being asked for a cost justification for changing the status quo, that is, for spending money to "improve" the current processes. The question is based on a perception that the current processes that produce information must be working properly. After all, we are conducting business successfully, and we are making money. While this is a fair question to ask, it is the wrong question. The real cost justification question is, "Can we afford the costs of information scrap and rework?"

The tragedy in this is that management has accepted the costs of poor quality information as a "normal" cost of doing business. In fact, I routinely find (with telecom companies, insurance companies, financial companies and manufacturing companies) that top management is generally unaware of the real costs of nonquality data. Management removed from the actual operations may not see the costs of nonquality information. Why do organizations accept the costs of knowledge workers hunting for information, correcting inaccurate data, sending inaccurate bills and requiring customers to change their addresses multiple times (because those customers exist redundantly in line-of-business databases) as normal business costs? They accept these costs because of a misconception that this rework is not really hurting the business.

Numerous information quality cost analyses I have conducted illustrate that the direct cost of nonquality information in the typical organization claims 15 to 25 percent of its revenue or operating budget. The costs of rework, workarounds, data correction and cleanup, creating and maintaining proprietary databases because of inaccessible or nonquality data in production databases, multiple handling of data in redundant databases and the like take an incredible, but often invisible, toll on the bottom line.

Is my experience unique? Information scrap and rework is to the information age what manufacturing scrap and rework was to the industrial age. Philip Crosby (Quality is Free) found the costs of manufacturing scrap and rework to be 15 to 20 percent of revenue. Juran (Juran on Planning) finds the costs of poor quality to be from 20 to 40 percent of sales. W. Edwards Deming (Out of the Crisis) cites Feigenbaum's estimate that "from 15 to 40 percent of the manufacturer's costs of almost any American product . . . is for waste embedded in it." The authors of the BBC video, Quality in Practice, state the costs of quality in the typical manufacturing company to be around 20 percent of sales, while those of the typical service company are around 30 percent of sales. In the information age, nonquality information contributes to nonquality products and services.

Management can, and will, understand that the costs of nonquality information are unacceptable when information professionals help management quantify those costs in tangible, bottom-line terms. Remember that American management accepted the costs of manufacturing scrap and rework until the Japanese demonstrated that these costs are not necessary and, through continuous process improvement, eliminated them.

Management will likewise continue to accept the costs of information scrap and rework until the competition eliminates its own nonquality data, thereby reducing its costs of information scrap and rework, increasing its product and service quality and increasing its customer satisfaction, resulting in increased market share.

The bottom line is that quality information increases the bottom line, both in reduced costs of conducting business and in increased opportunities resulting from accurate and managed knowledge about customers, products, services and sales.

What, then, is information quality? It is quality in all characteristics of information, such as completeness, accuracy, timeliness and clarity of presentation, that "consistently meets knowledge worker and end-customer expectations" so they can meet their objectives. The process of information quality improvement is the continuous improvement of any and all processes to eliminate the causes of defective data. The purpose is to reduce the costs of information scrap, rework and process failure, to increase customer and employee satisfaction and to increase business opportunity and profits.

What do you think? Send your comments to Larry English at larry.english@infoimpact.com or through his Web site at www.infoimpact.com.
