Data quality a bigger challenge than many organizations first realize
One of the biggest challenges of data management is ensuring data quality—and for many organizations this remains a struggle.
Companies “always face data quality issues, but often do not understand the extent of the challenge until they attempt to compile data to solve a business matter,” said Don Loden, managing director of data management and advanced analytics—business intelligence at global consulting firm Protiviti.
“This scenario becomes apparent when clients take on analytics projects to combine disparate data from multiple systems,” Loden said. “The reason that this phenomenon exists is that data quality constraints will never be consistent from system to system. That holistic design to support quality data across systems is intrinsic for cross-platform analytics design, but effectively out of scope for a single-source system alone.”
Organizations can use tools reactively, fixing data quality issues after the data has been created and collected by systems of record, Loden said. “This is done either within the analytics platform or by some master data-driven system,” he said.
Another option is to use proactive tools to correct, cleanse and standardize data at the time of creation.
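The proactive approach Loden describes can be as simple as validating and standardizing each record before it is written to the system of record. The sketch below is a hypothetical illustration only — the field names, the accepted state codes, and the rules themselves are invented for the example, not drawn from any particular product:

```python
import re

# Hypothetical sketch: standardize a customer record at creation time,
# before it reaches the system of record. Field names are invented.

KNOWN_STATES = {"TX", "CA", "NY"}  # abbreviated list for the sketch

def cleanse_record(raw: dict) -> dict:
    """Return a standardized copy of the record, or raise on bad data."""
    record = {}
    # Collapse extra whitespace and normalize the case of the name.
    record["name"] = " ".join(raw.get("name", "").split()).title()
    if not record["name"]:
        raise ValueError("name is required")
    # Standardize the state code to a known two-letter abbreviation.
    state = raw.get("state", "").strip().upper()
    if state not in KNOWN_STATES:
        raise ValueError(f"unknown state: {state!r}")
    record["state"] = state
    # Strip phone numbers down to digits, then sanity-check the length.
    digits = re.sub(r"\D", "", raw.get("phone", ""))
    if len(digits) != 10:
        raise ValueError(f"bad phone number: {raw.get('phone')!r}")
    record["phone"] = digits
    return record

print(cleanse_record({"name": "  ada   lovelace ", "state": "tx",
                      "phone": "(512) 555-0100"}))
# → {'name': 'Ada Lovelace', 'state': 'TX', 'phone': '5125550100'}
```

Because every system of record applies (or skips) rules like these differently, the same customer can look different in each source — which is exactly the cross-system inconsistency Loden says surfaces during analytics projects.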
“Reactive efforts often lead to proactive efforts, as the latter is more complex and expensive,” Loden said. “Reactive efforts are often a good way to understand the extent of the data quality problem and the relative value that can be produced by solving it.”
Looking ahead, Loden expects traditional data quality and data exploration tools to be augmented by techniques such as machine learning, extending the reach and productivity of these products.
“This will drive the fixing of data, over waiting to find data of suspect quality before fixing it,” he said. “Vendors are starting to shift in this direction, and I would expect this shift to accelerate.”
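As a toy stand-in for the kind of machine-learning-assisted profiling Loden anticipates, the sketch below learns a threshold from the data itself (a simple z-score on order amounts, chosen here purely for illustration) and flags suspect records for review as they arrive, rather than waiting for a downstream report to expose them. Commercial products use far richer models than this:

```python
import statistics

# Hedged sketch: flag records whose values sit unusually far from the
# mean of the batch. A z-score is a deliberately simple stand-in for the
# machine-learning techniques real data quality tools are adopting.

def flag_suspects(values, threshold=2.0):
    """Return indices of values more than `threshold` std devs from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

orders = [102.0, 98.5, 110.0, 95.0, 104.5, 9_999.0, 101.0, 97.5]
print(flag_suspects(orders))  # → [5] (the 9,999.0 entry)
```

The point of the example is the shift in posture: instead of a human noticing a suspect figure after the fact, the pipeline surfaces it proactively — the direction Loden expects vendors to accelerate toward.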