Insurance consultant Mark Gorman says that, for carriers, quality data should mean more than getting the numbers right on a report.

The data quality issue is a multi-faceted one for insurers. Insurance Networking News asked Mark Gorman of Lanesboro, Minn.-based Mark B. Gorman & Associates to look behind the numbers of INN's exclusive research on data quality. In partnership with INN, Gorman polled respondents representing 75 insurance organizations across all lines of business about their data quality, and says the breadth, depth and diversity of the data quality issue surprised him.

INN: What are the primary constituents of quality data?

MG: The primary constituents are best determined by what organizations are willing - or compelled - to invest to improve data quality, and by how proactive or reactive they are in addressing data quality issues. One organization I recently talked to is a good example. They have been able to reduce the time for monthly financial reconciliation from 21 days to three days.

While accuracy was primary for them, timeliness also was an important dimension of quality, and they were willing to invest the resources to achieve a three-day turnaround.

To measure the willingness to be proactive or reactive, we asked about data quality activity based on six stages of a data lifecycle (initial data capture, data entry, data extraction, data conversion, data delivery and data archival). More than 80% of all respondents "strongly agreed" or "agreed" data quality was being addressed in their firms at all six stages, and more than 40% of respondents indicated further investment in the next 12 months in data entry, initial data capture, data extraction and data conversion initiatives.

INN: What capabilities are most important for carriers to begin a successful data-quality initiative?

MG: To get at this issue, we asked the respondents what capabilities they currently support for data quality, and more than 60% indicated the ability to apply data standards and data validation. Data profiling and data monitoring came in next at about 50% of all respondents. Extract/transform/load (ETL), data augmentation and matching were capabilities supported by more than 40% of all respondents.
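To make the most widely supported capabilities concrete, here is a minimal sketch of rule-based record validation of the kind respondents describe. The policy fields, reference codes and rules below are hypothetical illustrations, not drawn from the survey.

```python
# Illustrative sketch of rule-based data validation; the policy-record
# fields, reference codes and rules are hypothetical, not from INN's survey.
from datetime import date

def validate_policy(record: dict) -> list[str]:
    """Return a list of data quality violations for one policy record."""
    errors = []
    # Standards check: state code must be a recognized two-letter abbreviation.
    if record.get("state") not in {"MN", "WI", "IA", "IL"}:  # demo subset only
        errors.append("state: not a recognized two-letter code")
    # Validation check: premium must be a positive number.
    premium = record.get("premium")
    if not isinstance(premium, (int, float)) or premium <= 0:
        errors.append("premium: must be a positive number")
    # Timeliness check: effective date cannot be in the future.
    effective = record.get("effective_date")
    if effective is not None and effective > date.today():
        errors.append("effective_date: cannot be in the future")
    return errors

if __name__ == "__main__":
    bad = {"state": "XX", "premium": -100, "effective_date": date(2099, 1, 1)}
    for violation in validate_policy(bad):
        print(violation)
```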

Interestingly, just one-third of the respondents indicated a corporate capability around de-duping of data. This may increase significantly as organizations invest more and more in master data management initiatives.
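As a hedged illustration of what a de-duping capability involves at its simplest, the sketch below normalizes a few customer fields into a match key and keeps one record per key. The fields and normalization rules are hypothetical; production matching is typically fuzzier.

```python
# Minimal sketch of record de-duplication via a normalized match key;
# the customer fields and normalization rules are hypothetical.
def match_key(record: dict) -> tuple:
    """Build a normalized key so trivially different duplicates collide."""
    name = " ".join(record.get("name", "").lower().split())
    zip5 = record.get("zip", "")[:5]
    dob = record.get("dob", "")
    return (name, zip5, dob)

def dedupe(records: list[dict]) -> list[dict]:
    """Keep the first record seen for each match key."""
    seen, unique = set(), []
    for rec in records:
        key = match_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

customers = [
    {"name": "Jane  Doe", "zip": "55949-1234", "dob": "1970-01-01"},
    {"name": "jane doe", "zip": "55949", "dob": "1970-01-01"},  # duplicate
]
print(len(dedupe(customers)))  # -> 1
```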

INN: So, is there a technology fix (e.g., de-duplication) to data quality issues, or is a broader approach necessary?

MG: While technology certainly supports the resolution of data quality issues, our research suggests a technology-only approach doesn't "work." The organization also needs to invest in the people and processes required to effectively deliver on the goals and objectives of the data quality initiatives.

Where any one of the three - people, process or technology - is out of balance, the results as a whole are less than optimal. Still, when we asked where data quality capabilities were most automated, the top three responses, with more than 30% of respondents indicating their solution was "automated," were data validation, the ETL process and data quality monitoring. Interestingly, the least automated capability was data profiling, with fewer than 15% of respondents indicating "automated."
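Data profiling lends itself to automation with even simple scripting. As an illustration only - the claims fields and sample rows below are hypothetical - a basic profiler can report per-column completeness and distinct-value counts:

```python
# Illustrative data profiling sketch: fill rate, distinct values and the
# most common value per column. The sample claims data is hypothetical.
from collections import Counter

def profile(rows: list[dict]) -> dict:
    """Return per-column fill rate, distinct-value count and top value."""
    columns = {key for row in rows for key in row}
    stats = {}
    for col in sorted(columns):
        values = [row.get(col) for row in rows]
        non_null = [v for v in values if v not in (None, "")]
        stats[col] = {
            "fill_rate": len(non_null) / len(rows),
            "distinct": len(set(non_null)),
            "top": Counter(non_null).most_common(1),
        }
    return stats

claims = [
    {"claim_id": 1, "cause": "hail", "state": "MN"},
    {"claim_id": 2, "cause": "", "state": "MN"},
    {"claim_id": 3, "cause": "wind", "state": None},
]
for col, col_stats in profile(claims).items():
    print(col, col_stats)
```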

INN: Are data quality issues getting enough priority with insurance organizations?

MG: We're starting to see examples of senior managers placing a greater priority on the people, processes and technology necessary to proactively resolve data quality issues at an enterprise level. That said, more than 55% of the respondents still ranked themselves in tiers 1 and 2 out of 5 tiers on their organization's commitment to data quality. The tiers were ranked based on a proactive versus reactive approach to data quality, the level of senior management awareness and involvement, whether the scope was enterprisewide, and on the breadth and depth of business-side involvement.

Finally, when asked where they felt additional attention needed to be paid, nearly 75% said that the business side should take a stronger role, and nearly 35% strongly agreed that C-level executives and line-of-business management should be more involved. So there is certainly room for improvement.

INN: What do data quality issues mean for expanded use of analytics?

MG: There appears to be a direct correlation between expanded use of analytics and enhanced emphasis on data quality. While changes in financial and statutory reporting requirements are certainly change drivers, the movement toward increased use of data-driven analytics also increases the visibility and transparency of data quality issues.

Organizations using data to automate decisions (e.g., underwriting decisions for personal lines auto) or to enhance decision support (e.g., scoring propensity for fraud) are encountering data quality issues at all stages - in the aggregation of data for analysis, in the decisions on which data characteristics to include and in the data capture during transaction processing. Also, since analytics draw not just on core data but on derived and third-party data as well, quality issues tend to be exacerbated.

Finally, those using data-driven analytics for decision-making quickly come to grips with the "fitness of the data for its purpose," which drives heightened urgency around resolving data quality issues.

This article can also be found at InsuranceNetworking.com.
