MAR 22, 2011 4:48pm ET


Retroactive Data Quality


As I, and many others, have blogged many times before, the proactive approach to data quality (defect prevention) is highly recommended over the reactive approach (data cleansing).
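As a rough sketch of that distinction (my own illustrative Python, with hypothetical field names, not code from the article): a proactive check rejects a defective record at the point of entry, while a reactive routine repairs records after they are already stored.

from datetime import datetime

def validate_order(record: dict) -> list[str]:
    """Defect prevention: reject a bad record before it enters the warehouse."""
    problems = []
    if not str(record.get("customer_id", "")).strip():
        problems.append("missing customer_id")
    try:
        datetime.strptime(str(record.get("order_date", "")), "%Y-%m-%d")
    except ValueError:
        problems.append("order_date is not YYYY-MM-DD")
    if not isinstance(record.get("quantity"), int) or record["quantity"] <= 0:
        problems.append("quantity must be a positive integer")
    return problems

def cleanse_orders(records: list[dict]) -> list[dict]:
    """Data cleansing: repair defects after they are already stored."""
    cleaned = []
    for r in records:
        r = dict(r)
        r["customer_id"] = str(r.get("customer_id", "")).strip()
        r["order_date"] = str(r.get("order_date", "")).strip()
        cleaned.append(r)
    return cleaned

# Proactive: validate_order() runs at the point of entry and blocks the defect.
# Reactive: cleanse_orders() runs later, after the defect has spread downstream.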


Comments (1)
Does anyone know where to get a flux capacitor these days? They are in short supply! The good news is you can buy a used DeLorean for less than the price of a flux capacitor!

I suspect many organizations have traveled to the past numerous times. It is obvious from the fact that they keep doing the same things over and over so they can predict the outcomes in advance. I think that gives them some comfort. Of course, this may be nothing more than an episode from the movie Groundhog Day.

The problem is that current data quality practices cannot "predict" the next error, except on the basis of having seen the error in the past. There are techniques to predict the probability of errors in large data sets that few organizations use or are even aware of, but these techniques do not identify a single-instance error, which is what we are seeking!
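As a hypothetical illustration of the kind of technique the commenter may have in mind (not drawn from the comment itself): one can estimate the overall error rate of a large data set from a simple random sample and attach a confidence interval. The estimate predicts roughly how many errors exist, but says nothing about which individual record is wrong.

import math
import random

def estimate_error_rate(records, is_defective, sample_size=1000, z=1.96):
    """Estimate the share of defective records from a simple random sample,
    with an approximate 95% confidence interval (normal approximation)."""
    if not records:
        raise ValueError("no records to sample")
    sample = random.sample(records, min(sample_size, len(records)))
    defects = sum(1 for r in sample if is_defective(r))
    p = defects / len(sample)
    margin = z * math.sqrt(p * (1 - p) / len(sample))
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Example: treat rows with an empty "email" field as defective.
# rate, low, high = estimate_error_rate(rows, lambda r: not r.get("email"))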

In some industries, such as financial services, some of the underlying data quality problems are systemic. Asset ratings and ever-more-complex risk assessments are two such examples. Both have been issues for decades, and the industry has tolerated sometimes unimaginable variances in the data.

E-mail address accuracy is another challenge. There are ways to authenticate and validate addresses, as well as statistical techniques to assess the overall quality of e-mail data, but single-instance errors are difficult to prevent.
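A minimal sketch of a format-level e-mail check, offered only as an illustration (real authentication would also require a mailbox-level step such as a confirmation message, which this does not attempt):

import re

# Format check only: a syntactically plausible address can still be undeliverable,
# which is why single-instance errors slip through format validation.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def looks_like_email(address: str) -> bool:
    return bool(EMAIL_RE.match(address.strip()))

print(looks_like_email("jane.doe@example.com"))  # True
print(looks_like_email("jane.doe@example"))      # False: no top-level domain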

Data quality is retroactive, while we hope it will be prophylactic!

Posted by Richard O | Wednesday, March 23 2011 at 10:33AM ET