Larry would like to thank Susan Garza for writing this month's column.

The rubber has met the road, and it has failed. This is not to make light of a human tragedy, and it is not to declare Firestone and Ford the villains. It is to remind all of us that we are probably managing (well, at least collecting) information about people and products; that this information is hopefully being used to make decisions; and that these decisions can impact people's lives, either directly or indirectly, with consequences ranging from minor to very major. If this human equation does not, in turn, remind us of our responsibilities to plan, collect, secure, maintain, measure and share (appropriately) the information entrusted to us, then we are a hopeless bunch.

Of course, we are not hopeless. We have practices in place to:

  • Assess our customer and knowledge worker satisfaction with the availability, quality and presentation of the information they need.
  • Assure improvement of our information management processes to meet those needs.
  • Audit and continuously improve the quality of our information.
  • Assure appropriate confidentiality of the data that people have entrusted to us.

Don't we? What is the connection of all this to a tire problem, anyway? First, there are many analogies to be drawn between manufacturing hard goods and "manufacturing" information. A relevant example would be "fixing" things without knowing the root cause of the problem or the impact of the fixes we are making.
Second, there is a direct data connection. One of the first things reported about this problem was that Firestone and Ford (as well as any other organizations that collect relevant data) did not have effective processes to share the data and do the combined analysis and presentation that could have led to earlier research and action (Wall Street Journal, August 10, 2000). If all the organizations that had information about this problem had shared their data and were using data warehouse and data mining tools, the investigation and corrective actions could have begun much earlier. When Ford finally did obtain more data and performed some basic analysis, they pinpointed the source of most of the problems very quickly.

Third, if you are looking only at failure rates and not measuring the impact of failures, you do not know the quality of your product or service. Until this year, the initial U.S. complaint data for the tires showed a "reported" failure rate of about one tire per million. That would represent six-sigma quality (3.4 defects per million parts). Most of us would not be concerned about one failure per million products; in fact, most of us would be pleased to achieve such quality levels. This year, the actual complaints about Firestone tires grew to only 38 per million (U.S. data only), but the recall acknowledges that 14 percent or more of the tires may be defective (6.5 million tires recalled out of 47 million tires manufactured). The impact of the tire problem is currently estimated at 134 fatalities, $300 to $500 million (before any litigation) and two to three points in market share as of an August 9, 2000, CNN report.

Never fail to pay attention to any product or service issue that could affect the safety, security, health and integrity of people. Every time we have any failure in any product or service, there is almost always some impact on people's lives. We should not wait for the second failure to occur before we investigate the cause of the problem.

To continue the analogy, you may not be manufacturing tires where the rubber literally meets the road, but your company and its products or services are meeting the customer with every direct and indirect contact.

John Guaspari's book, The Customer Connection, defines quality as the presence of value as determined by customers, not just the absence of defects as determined by the producer. This implies that our information quality metrics must include having the right data about customer relationships; measuring the quality of the data about customer relationships; and assessing the data that customers and knowledge workers need with the quality measures (accuracy, completeness, timeliness and others) they require in order to provide value to their customers.

When your company meets the customer, what information do you have to tell you how well you are doing?

  • How do you know that your product or service is useful, meaningful and adding value to your customers' business or personal lives?
  • Can you integrate the data about all your customer contacts, from the prospect stage, to the acquisition stage, to the purchase stage, through the service stage and to the retention or loss of a customer stage? Can you quickly assimilate all the customer, company and third-party contact data, from the call center to the Web site to the letter to the president?
  • What is the quality of your information – is it the right information, is it accurate, is it understandable, is it timely and is it presented in a way that it truly represents the value of your products and services to the customer?

Many of us may be in the stage where we don't really know the quality of our data and have only opinions that vary according to each person's perspective. Some of us have developed metrics and can recite mean errors per data element or maybe per record. Do these metrics represent the validity of data as measured by automated methods, or do they represent the true accuracy of data as measured by how well the data represents reality? How many of us can say that we have control charts showing our ongoing sampling results by customer and can show that we are measuring what our customers have told us is important to them?
Those of you who have been measuring data quality may have noticed how error rates and deviations are very different when measuring at the data element versus record versus customer levels. It is enlightening to see how small error rates for individual parts (data elements in the information world) can translate into much larger error rates for a product (i.e., customer account). For example, a mean error rate of 0.2 percent for data elements (one error per 500 data element values), which seems rather insignificant, can translate into a mean error rate of 2.5 percent for records (one or more errors per 40 records) and into an error rate of more than 10 percent for customers (one or more errors per 10 customers). We may make only one very minor error (in our opinion), but that minor error may alienate the customer, causing them not to return.
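The compounding described above can be sketched with a simple probability calculation. This is an illustration only, assuming errors are independent and assuming hypothetical counts of about 13 data elements per record and 4 records per customer (the column does not state these figures; they are chosen so the numbers land near the 2.5 percent and 10 percent cited):

```python
def compound_error_rate(p: float, n: int) -> float:
    """Probability that at least one of n independent parts has an error."""
    return 1 - (1 - p) ** n

element_rate = 0.002          # one error per 500 data element values (from the column)
elements_per_record = 13      # assumed: roughly 13 data elements per record
records_per_customer = 4      # assumed: roughly 4 records per customer

record_rate = compound_error_rate(element_rate, elements_per_record)
customer_rate = compound_error_rate(record_rate, records_per_customer)

print(f"record-level error rate:   {record_rate:.1%}")
print(f"customer-level error rate: {customer_rate:.1%}")
```

Under these assumptions the record-level rate comes out near 2.6 percent and the customer-level rate near 10 percent, matching the column's point: a per-element rate that looks negligible becomes a one-in-ten problem at the level the customer actually experiences.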

If we have the right data, and it is understandable, accurate, timely and complete, are we using it appropriately? Is it presented to the right people in a useful format? Is it being secured and shared appropriately? For example, do we have a strong and enforced data security and information policy? Are we appropriately sharing data, within and outside our companies, that could contribute to advancing knowledge or improving the quality of personal lives and, therefore, benefit our customers and society?

As John Guaspari further points out, when it comes to quality, the customer has all the votes. So, how are your election campaigns (acquiring new customers) and re-election campaigns (keeping customers) coming along? What does your polling (metrics about customer relationships) show?

Is this about ethics or information quality? Yes, both. Let me hear from you at

Susan J. Garza is an associate of INFORMATION IMPACT International Inc. and president of Garza Data Consulting, Inc., Moorestown, New Jersey. Garza has more than twenty-five years of experience in all areas of information management. She is an author and speaker on data management topics.
