Data is data (or are data, for you grammar buffs), but on its own it is no more than static information awaiting use by intelligent beings (that would be us humans). Much has been written here—and elsewhere—about the need for better quality data, less redundant data and more easily accessible data—all of which are very important.
Yet there is another facet that trumps them all—namely, how the information is used. For example, if I somehow knew ahead of time that New Orleans was going to win the Super Bowl handily, that data would be useless unless I did something with it; say, mortgaged the farm and put down everything I had at the sports book on the Who Dat! gang. Now I’ll admit we usually aren’t dealing with such fascinating data in insurance and financial services; nonetheless it behooves us to make the best use of the data we work so hard to collect and protect.
To that end, IBM recently previewed new information monitoring software to help organizations expand their use of trusted information to improve decision-making.
In talking about critical decisions being made and muffed, IBM noted that “the number of high profile examples of data mismanagement is growing, making the need for proper oversight and use of information key to success.”
In a recent study from its Institute for Business Value, IBM found that superior data governance is critical to the success of top-performing companies. The study found that top performers were three times more likely than lower-performing companies to take a sophisticated approach to governing organizational information (42% versus 14%).
To help organizations address this issue, IBM has introduced a program that tracks the quality and flow of an organization's information and provides real-time alerts of potential flaws. “For example,” the company noted, “if a health insurance company was analyzing profit margins across different product lines (individual, group, HMO, Medicare, etc.), decision makers would immediately be alerted when a data feed from a specific geography was not successfully integrated.”
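To make that example concrete, here is a minimal sketch of the idea behind such an alert, not IBM's actual product: before reporting a margin figure, check which expected regional feeds actually arrived and flag any gaps. The region names and function names are hypothetical, chosen only for illustration.

```python
# Illustrative sketch (not IBM's software): alert decision makers when a
# regional data feed failed to integrate before they trust the totals.

EXPECTED_REGIONS = {"Northeast", "Southeast", "Midwest", "West"}  # hypothetical

def check_feed_completeness(loaded_feeds):
    """Return the set of expected regions whose feeds are missing."""
    return EXPECTED_REGIONS - set(loaded_feeds)

def profit_margin_report(margins_by_region):
    """Average the regional margins, raising an alert for any missing feed."""
    missing = check_feed_completeness(margins_by_region)
    if missing:
        # Real-time alert: the gap is surfaced before the number is used.
        print(f"ALERT: missing data feeds for {sorted(missing)}")
    return sum(margins_by_region.values()) / max(len(margins_by_region), 1)
```

The point of the sketch is that the quality check runs inline with the analysis, so the alert reaches the decision maker at the moment the flawed number would otherwise be consumed.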
Another IBM offering protects an organization's information by automatically recognizing and removing sensitive content from documents and forms. These efforts, like IBM's other data governance initiatives, are laudable, but detectable errors are only part of the picture. The real issue is that while it's fine to alert me to a problem, I need to have an idea how to respond to that message. When my bank alerts me that my checking account balance is low, I know just how to respond (e.g., by raiding my savings account to get the balance back up).
All this points to a need to have everyone in the enterprise up to snuff on what to do when data is somehow mismanaged. This is both a matter of common sense and of policy. It may be that in some cases we will be able to automate responses to alerts, but I suspect that most of us will want some leeway in how we respond to error messages. That's why it is critical to regularly assemble anyone who has permission to deal with and manipulate critical data in order to discuss ideas and agree on what steps should be taken, and when they should be taken.
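One way to capture such an agreement is to write the policy down as a simple routing table: each known alert type maps either to an automated response or to a named human escalation, and anything unfamiliar always goes to a person. This is a hypothetical sketch of that idea, with made-up alert types and actions:

```python
# Hypothetical sketch: route each alert to an automated fix or a human,
# following responses the data team agreed on in advance.

AUTO = "auto"
HUMAN = "human"

# Policy settled at the regular data-governance meeting (illustrative rules).
RESPONSE_POLICY = {
    "low_balance":  (AUTO,  "transfer_from_savings"),
    "missing_feed": (HUMAN, "notify_regional_data_owner"),
}

def route_alert(alert_type):
    """Return (who_handles, action); unknown alerts escalate to a human."""
    return RESPONSE_POLICY.get(alert_type, (HUMAN, "escalate_to_governance_team"))
```

The design choice worth noting is the default: automation handles only the cases the group has explicitly agreed on, preserving human leeway for everything else.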
When it comes to data governance—indeed as with any aspect of human-computer interaction—the most daunting task will be to strike the most profitable and efficient balance between automation and human input. In many ways, this will be unique to each enterprise, yet all will share the necessity of putting policies in place to deal with the many contingencies that can arise when dealing with critical data.
This article can also be found at InsuranceNetworking.com.