At Progressive, big data is defined by data sets too large to store, organize and share. To counter that, the insurer recycles data through segmentation.

Much as Apple’s robot “Liam” recycles old iPhones for parts, Progressive uses segmentation to sort data from sensors or social media into usable segments, according to Pawan Divakarla, Progressive’s big data business leader.

“We recently surpassed $20 billion in premiums. That means we have a lot of data that has to be baked in,” said Divakarla. “It’s on us to use all the data we collect to give back to customers. Innovation is built into the way we think.”

Progressive’s challenge is scaling big data, Divakarla acknowledged at INN’s DigIn Conference this week in Austin. The insurer’s Snapshot usage-based insurance option alone has collected more than 15 billion miles of data over the past six years.

Distributed computing, however, has helped many facets of the insurer’s business, including claims, fraud and its UBI offering, he said. The approach also tells the insurer which information has actual value and which is just noise.

“We turned to distributed computing in 2013 so that we could process data in shorter amounts of time,” Divakarla said. “What used to take us a month to process started taking nine hours to do, increasing speed to insight.”

(This article appears courtesy of our sister publication, Insurance Networking News)
