The insurance industry, though conservative, is not immune to change. New market opportunities, new books of business and new sales channels are familiar developments that can be accommodated without much disruption. But change is now arriving in the form of new data sources, such as in-vehicle geolocation and telemetry data, social networking sites and microblogging feeds such as Twitter. The natural question is which, if any, of this data is relevant, let alone useful, to insurers. Incorporating data from new sources, particularly “live” sources, into existing data warehouses is a technical challenge. Moreover, the sheer volume of this data, often referred to as big data, can be daunting. But with a slight shift in perspective, a powerful framework for analysis emerges, allowing mature legacy systems to become part of groundswell changes.
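To make the ingestion challenge concrete, consider a minimal sketch of micro-batching a live telematics feed into a warehouse staging table. The event fields (policy_id, ts, lat, lon, speed_kph) and the in-memory SQLite stand-in for the warehouse are illustrative assumptions, not any particular insurer's schema; a production pipeline would target the warehouse's own bulk-load path.

```python
import json
import sqlite3

# Stand-in "warehouse": an in-memory SQLite staging table. A real pipeline
# would target the enterprise warehouse's bulk-load interface instead.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE stg_vehicle_telemetry (
           policy_id TEXT, ts TEXT, lat REAL, lon REAL, speed_kph REAL
       )"""
)

def load_batch(raw_events):
    """Micro-batch raw JSON telemetry events into the staging table,
    dropping records that fail basic validation."""
    rows = []
    for raw in raw_events:
        try:
            e = json.loads(raw)
            rows.append((e["policy_id"], e["ts"], float(e["lat"]),
                         float(e["lon"]), float(e["speed_kph"])))
        except (ValueError, KeyError, TypeError):
            continue  # malformed event: skip it rather than poison the load
    conn.executemany(
        "INSERT INTO stg_vehicle_telemetry VALUES (?, ?, ?, ?, ?)", rows)
    conn.commit()
    return len(rows)

# Example: two well-formed events and one garbled one.
batch = [
    '{"policy_id": "P-1001", "ts": "2013-06-01T12:00:00Z",'
    ' "lat": 41.88, "lon": -87.63, "speed_kph": 54.0}',
    '{"policy_id": "P-1002", "ts": "2013-06-01T12:00:01Z",'
    ' "lat": 40.71, "lon": -74.01, "speed_kph": 32.5}',
    'not json at all',
]
print(load_batch(batch), "events staged")  # prints: 2 events staged
```

Micro-batching in this style is one common way to bolt a live feed onto a legacy warehouse without re-architecting it: the staging table absorbs the stream's volume and messiness, and existing ETL jobs promote validated rows downstream on their usual schedule.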

In the era of big data, new and potentially rich sources of data continually appear on the horizon. Alongside more data and cheaper storage comes pressure to build ever more sophisticated back-end and customer-facing applications. Steadily declining storage costs also allow for greater data retention, enriching BI and analytics. Cloud storage, though not yet fully proven for BI, makes data readily available across a geographically dispersed enterprise. But the data can be noisy and, at times, irrelevant.
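As one illustration of separating signal from noise, here is a minimal sketch of a relevance filter for a microblog feed. The post fields (lang, text) and the keyword list are hypothetical stand-ins; a real pipeline would work against the Twitter API's actual payloads and a far richer relevance model.

```python
# Keep only English-language posts that mention insurance-related terms.
# RELEVANT_TERMS is an illustrative assumption, not a production vocabulary.
RELEVANT_TERMS = {"claim", "policy", "premium", "accident", "insurer"}

def is_relevant(post: dict) -> bool:
    """Return True if the post is English and mentions a relevant term."""
    if post.get("lang") != "en":
        return False
    words = {w.strip(".,!?").lower() for w in post.get("text", "").split()}
    return bool(words & RELEVANT_TERMS)

feed = [
    {"lang": "en", "text": "Filed a claim after the hailstorm, waiting on my insurer."},
    {"lang": "en", "text": "Great coffee this morning!"},
    {"lang": "fr", "text": "Ma prime d'assurance a encore augmenté."},
]
print([p["text"] for p in feed if is_relevant(p)])
# Only the first post survives the filter.
```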
