© 2019 SourceMedia. All rights reserved.

Prediction models benefit from having built-in anomalies and glitches

At the Artificial Intelligence conference in New York last week, Ira Cohen, co-founder and chief data scientist at Anodot, discussed a novel approach for building more reliable prediction models by integrating anomalies and glitches in them, combining correlation analysis with the prediction of business blind spots.

Predicting business blind spots within time series data is a necessary but insufficient step, Cohen believes, because anomaly detection over a set of live data streams can cause anomaly fatigue, limiting effective decision making.

Cohen discussed ways to address this issue, either by predicting issues in a multidimensional space or on individual data streams, leveraging correlation analysis to minimize the number of false positives. This process surfaces actionable insights faster, he says.

Information Management spoke with Cohen about how organizations can take advantage of this process, and what results to expect.


Information Management: How does a data scientist go about combining correlation analysis with the prediction of blind spots to achieve desired results?

Ira Cohen: The correlation analysis discovers the links between different parts of the data - where the parts represent various measurements of the health of a business. When forecasting key measurements, the correlations help enrich forecasting models with additional data from other measurements/events, which substantially improves model accuracy.
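As a sketch of this idea, a correlation statistic can first confirm the link between two metric streams, and the correlated stream can then be folded into an autoregressive forecast as an extra regressor. The metrics, threshold and model below are illustrative assumptions, not Anodot's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical business metrics: signups drives revenue with a 1-step lag.
signups = rng.normal(100, 10, 200)
revenue = 5 * np.roll(signups, 1) + rng.normal(0, 5, 200)
signups, revenue = signups[1:], revenue[1:]

# Step 1: correlation analysis discovers the link between the two streams.
r = np.corrcoef(signups[:-1], revenue[1:])[0, 1]

# Step 2: if strongly correlated, enrich the forecast model for revenue
# with the lagged signups series as an additional regressor.
if abs(r) > 0.8:
    X = np.column_stack([np.ones(len(revenue) - 1), revenue[:-1], signups[:-1]])
else:
    X = np.column_stack([np.ones(len(revenue) - 1), revenue[:-1]])
y = revenue[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast for the next period.
x_next = np.array([1.0, revenue[-1], signups[-1]])
forecast = x_next @ coef
```

With the extra regressor the model recovers the true lagged relationship, which a univariate autoregression on revenue alone would miss.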

IM: And exactly what type of results are we expecting here?

Cohen: More accurate forecasts.

IM: Your session at the Artificial Intelligence conference in New York was about integrating anomalies and glitches into prediction models to make them more reliable. That sounds like a contradiction. What do you mean by that?

Cohen: Anomalies represent parts of the data that will prove harder to forecast. If those anomalies are used when training a forecast model, without the proper preparation, it can hurt accuracy. We developed methods that account for anomalies so they do not degrade the performance of the forecast models.
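One common way to account for anomalies before training, sketched here with a robust z-score (median and MAD) and local-median imputation; this is an illustrative approach, not necessarily the method Anodot uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical metric with a few injected glitches (spikes).
y = np.sin(np.linspace(0, 8 * np.pi, 400)) + rng.normal(0, 0.1, 400)
y[[50, 200, 310]] += 8.0

# Flag anomalies with a robust z-score: median and MAD are used instead
# of mean and standard deviation so the outliers cannot contaminate the
# very statistics used to detect them.
med = np.median(y)
mad = np.median(np.abs(y - med))
robust_z = 0.6745 * (y - med) / mad
anomalous = np.abs(robust_z) > 3.5

# Account for anomalies before training: replace flagged points with a
# local median so the forecast model never learns from the glitches.
clean = y.copy()
for i in np.flatnonzero(anomalous):
    lo, hi = max(0, i - 5), min(len(y), i + 6)
    window = np.delete(y[lo:hi], np.flatnonzero(anomalous[lo:hi]))
    clean[i] = np.median(window)
```

A forecast model fit on `clean` rather than `y` is no longer pulled toward the spikes, which is the kind of preparation the answer above describes.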

IM: You spoke about anomaly fatigue and how that can limit decision making. What is anomaly fatigue, and how does it restrict the decision-making process?

Cohen: Anomaly fatigue (also referred to as “alert fatigue”) often occurs when a monitoring system sends a high volume of alerts each day to support teams. They find there are too many alerts to deal with, especially when they include false positives or “alert storms” (one incident triggering many related alerts), and these teams progressively pay less attention to them.


This often leads to cases where:

  1. Real and important incidents are missed; or
  2. Investigating critical incidents happens too slowly because different alerts point to different root causes.

IM: How does an organization best avoid this problem?

Cohen: Reducing false positives is achieved by:

  1. Applying ML algorithms trained to detect meaningful anomalies.
  2. Giving support teams simple, quick means -- such as anomaly scoring, minimum duration and influencing metrics, to name a few features -- to filter out anomalies that are not important from a business perspective.
  3. Applying a system that automatically groups related anomalies/alerts, to quickly point the team to the most probable root cause.
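The three steps can be sketched as a small pipeline: score the detector's output, filter by score and duration, then group anomalies that start close together so one incident yields one alert. The thresholds, metric names and grouping rule are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Anomaly:
    metric: str
    start: int      # minutes since midnight, illustrative
    duration: int   # minutes
    score: float    # 0..1 significance assigned by the detector

# Hypothetical raw detector output, including low-value noise.
raw = [
    Anomaly("api_errors", 600, 12, 0.95),
    Anomaly("latency_p99", 603, 10, 0.88),
    Anomaly("cpu_node_7", 604, 2, 0.30),   # too short and low-scoring
    Anomaly("checkout_rate", 905, 15, 0.91),
]

# Steps 1-2: keep only anomalies that clear business-driven filters
# (anomaly score and minimum duration).
MIN_SCORE, MIN_DURATION = 0.8, 5
significant = [a for a in raw if a.score >= MIN_SCORE and a.duration >= MIN_DURATION]

# Step 3: group anomalies that start close together in time, so one
# incident produces one alert instead of an "alert storm".
significant.sort(key=lambda a: a.start)
incidents, WINDOW = [], 10
for a in significant:
    if incidents and a.start - incidents[-1][-1].start <= WINDOW:
        incidents[-1].append(a)
    else:
        incidents.append([a])
```

Here four raw alerts collapse into two incidents, with the related error and latency anomalies grouped so the team is pointed at a single probable root cause.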


IM: What would be the measure of success here?

Cohen: Reduction in mean time to detection and mean time to resolution, and the impact that has on incident costs.
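As a minimal illustration of those measures (conventions vary on where each clock starts; here both are measured from incident start, on made-up timestamps):

```python
from statistics import mean

# Hypothetical incident log: (started, detected, resolved), in minutes.
incidents = [
    (0, 12, 45),
    (100, 103, 160),
    (300, 330, 420),
]

# Mean time to detection and mean time to resolution.
mttd = mean(detected - started for started, detected, _ in incidents)
mttr = mean(resolved - started for started, _, resolved in incidents)
```

Tracking these two averages before and after deploying the anomaly pipeline gives a concrete measure of the cost impact Cohen describes.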
