We are in the process of loading a data warehouse from a mainframe source. Some of the files contain millions of rows. I am interested in how accurate the data is. Instead of checking all the records, I would like to sample only a few hundred or so. Is there a formula or rule of thumb that would give me a 95 percent or 99 percent confidence level that the data is correct, assuming each record is simply either right or wrong?


David Marco’s Answer: Sampling only a few hundred out of millions of records would not give you a 99 percent confidence level. However, here is the approach that I use: test the first record, the last record, and one record at each evenly spaced interval across the remaining records you wish to check. For example, if I wished to test 11 out of 100 records with IDs sorted 1–100, I would test record numbers 1, 100, 10, 20, 30, 40, 50, 60, 70, 80, and 90.
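The interval scheme above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the column itself; the function name and the assumption that record IDs run contiguously from 1 to the total are mine.

```python
def systematic_sample_ids(total, k):
    """Pick k record positions out of 1..total: the first record,
    the last record, and evenly spaced interior points.
    Assumes IDs are contiguous and sorted, as in the example."""
    if k < 2 or total < k:
        raise ValueError("need at least 2 picks and total >= k")
    interval = total // (k - 1)
    # First, last, then one ID at each interval boundary in between.
    return [1, total] + [i * interval for i in range(1, k - 1)]

# Reproduces the 11-of-100 example from the answer:
# 1, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90
print(systematic_sample_ids(100, 11))
```

Note that this is systematic (deterministic) sampling rather than random sampling, so it carries no statistical confidence level; it is a coverage heuristic.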

Sid Adelman’s Answer: Unless you really know sampling and are confident you can pull a representative sample, you’re taking a chance. Also, will your users be comfortable hearing there is a 95 percent chance that their results are correct? The nice thing about checking the accuracy of the data is that it’s usually not window dependent; you should be able to run these checks during off-hours. Unless you are terribly machine constrained, I’d run the checks against all the rows.
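Checking every row, as suggested above, amounts to running a set of validation rules over the full table during off-hours. Here is one minimal way that full pass might look in Python; the check names, rule logic, and dict-based rows are hypothetical, purely for illustration.

```python
def check_all_rows(rows, checks):
    """Run every named check against every row.
    Returns a list of (row_index, check_name) for each failure,
    so nothing depends on sampling."""
    failures = []
    for i, row in enumerate(rows):
        for name, check in checks.items():
            if not check(row):
                failures.append((i, name))
    return failures

# Hypothetical accuracy rules for illustration only.
checks = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_nonnegative": lambda r: r.get("amount", 0) >= 0,
}

rows = [
    {"id": 1, "amount": 5},      # clean row
    {"id": None, "amount": -2},  # fails both checks
]
print(check_all_rows(rows, checks))
```

Because the checks are independent per row, a pass like this parallelizes easily across partitions of the file, which keeps a full scan practical even for millions of rows.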
