Using data coverage and dimension to drive artificial intelligence outcomes
One of the biggest data quality challenges is acquiring the “right data” for enterprise machine learning use cases.
With “wrong data,” even though the business objective is clear, data scientists may not be able to find the right data to use as inputs to the ML service/algorithm to achieve the desired outcomes.
As any data scientist will tell you, developing the model is less complex than understanding and approaching the problem/use-case the right way. Identifying appropriate data can be a significant challenge. You simply must have the “right data.”
So, what does it mean to have the right data?
To start with, you may have identified a set of attributes in your organization’s daily transactions, such as “channel last used,” that are likely predictors of customer behavior. But if your organization isn’t currently collecting these data points, you have a challenge. Having been caught in this situation, many data scientists conclude that the more data collected, the better.
Another option, however, is to clearly scope the collection of data required for the use case based on research about the sensitivities and relationships between existing data attributes.
In other words, know your business and its data. By doing this up front, you’ll ensure that the time spent collecting new data isn’t wasted. Knowing this also enables you to define data quality rules that ensure that data is collected right the first time.
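Such collection-time rules can be expressed directly in code. The sketch below shows one way to validate a transaction record as it arrives, so problems are caught at the source rather than during cleansing. The field names (`channel_last_used`, `customer_segment`, `amount`) and allowed values are hypothetical, chosen only for illustration.

```python
# A minimal sketch of "collect it right the first time" data quality rules.
# Field names and allowed value sets are illustrative assumptions.

ALLOWED_CHANNELS = {"web", "mobile", "branch", "phone"}
ALLOWED_SEGMENTS = {"retail", "institutional"}

def validate_transaction(record: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the record passes."""
    issues = []
    if record.get("channel_last_used") not in ALLOWED_CHANNELS:
        issues.append("channel_last_used missing or not in allowed set")
    if record.get("customer_segment") not in ALLOWED_SEGMENTS:
        issues.append("customer_segment missing or not in allowed set")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        issues.append("amount must be a non-negative number")
    return issues
```

A record that fails any rule can be rejected or routed for correction immediately, which is far cheaper than repairing it downstream.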
Coverage of data as a data quality dimension
In the financial services sector, the term coverage is used to describe whether all of the right data is included.
For example, an investment management firm may serve different segments of customers, each associated with different sub-products. Without including all the transactions (rows) describing those customers and their products, your machine learning results may be biased or flat-out misleading. Admittedly, collecting all of the data (often from different sub-entities, point-of-sale systems, partners, etc.) can be hard, but it’s critical to your outcome.
More broadly speaking, coverage can be categorized under the completeness dimension of data quality; within the conformed dimensions standard it is called the record population concept. This should be one of the first checks performed before proceeding to other data quality checks.
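A record-population check like this can be automated before any modeling begins. The sketch below compares the segments and products present in an extract against the populations you expect to see; the expected sets are illustrative assumptions, not part of any standard.

```python
# A sketch of a record-population (coverage) check, run before other
# data quality checks. The expected segments/products are assumptions
# standing in for whatever populations your business actually serves.

EXPECTED_SEGMENTS = {"retail", "high_net_worth", "institutional"}
EXPECTED_PRODUCTS = {"equity_fund", "bond_fund", "money_market"}

def coverage_gaps(rows: list[dict]) -> dict[str, set]:
    """Report expected segments and products that have no rows in the extract."""
    seen_segments = {r.get("segment") for r in rows}
    seen_products = {r.get("product") for r in rows}
    return {
        "missing_segments": EXPECTED_SEGMENTS - seen_segments,
        "missing_products": EXPECTED_PRODUCTS - seen_products,
    }
```

A non-empty gap means some population is absent from the extract, and any model trained on it will be blind to that segment.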
With standard resources such as the conformed dimensions from IQ International, which explain the meaning of each possible data quality issue, you won’t get bogged down in cleansing work later.
Understanding conformed dimensions
Conformed dimensions are a standards-based version of the dimensions of data quality that you’re already familiar with, such as completeness, accuracy, validity, and integrity.
The IQ International standards offer robust descriptions of sub-types of data quality, called underlying concepts of a dimension, along with example data quality metrics for each. Using these standards at the beginning of your exercise will ensure your data is the best fit for your purpose.
In the next blog post, I’ll discuss what you can do when you don’t have enough data for the training phase of your project.