Models are the foundation for predicting outcomes and making business decisions from data. But not all models are created equal. They vary enormously in capability, ranging from simple trend analysis to complex predictions and precise descriptions of how variables behave.

The most powerful and useful type is the analytical model, because it is both accurate and able to be analyzed, interpreted and understood by business users. In basic terms, an analytical model, also known as a mathematical model, is a mathematical equation describing how a system behaves. Newton's second law, F = ma, is a familiar example: a compact equation that both predicts and explains. Over the past few decades, analysts have struggled to develop analytical models, as they require rare skill, deep domain knowledge and vast amounts of time to build and tune.

The good news is that modern artificial intelligence (A.I.) can now infer these models directly from structured data. Analytical models are used in enterprises, the natural sciences, engineering disciplines and the social sciences to identify causation and to make predictions about future behavior.

The reason these models are used so widely across industries is that they generate the most accurate predictions along with the most concise explanations behind them. They give enterprises the ability to forecast, and to understand, how systems will react under entirely new and varying circumstances.

So why not use analytical models exclusively? The short answer is that other models are easier and cheaper to create. Linear models, polynomial models and spline models, for example, can fit curves and quantify past trends. They can estimate rates of change or interpolate between past values. The catch is that they are less powerful and less accurate. They underperform at extrapolating and predicting future data, because they are too simple to capture the general relationships hidden in large amounts of data.
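To make the interpolation-versus-extrapolation gap concrete, here is a minimal Python sketch using numpy. The exponential "true system" and the cubic degree are illustrative assumptions, not anything taken from a particular product:

```python
import numpy as np

# Illustrative data: a system that actually follows y = exp(0.5 * x).
x_train = np.linspace(0, 4, 30)
y_train = np.exp(0.5 * x_train)

# Fit a cubic polynomial -- a typical curve-fitting model.
coeffs = np.polyfit(x_train, y_train, deg=3)
poly = np.poly1d(coeffs)

# Interpolation (inside the training range) is close...
print(poly(2.0), np.exp(0.5 * 2.0))   # similar values

# ...but extrapolation (outside the range) drifts badly, because
# the polynomial has no notion of exponential growth.
print(poly(8.0), np.exp(0.5 * 8.0))   # very different values
```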

Similarly, these models often need to be quite large and high-dimensional just to capture global variation in the data at all, which in turn often makes them difficult or impossible to interpret.

Another path to explore is black box machine learning. Black box algorithms attempt to improve predictive accuracy over standard linear or nonparametric methods by using decision trees, neural networks, ensembles and the like.
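To see what "the same transformation over and over" looks like in practice (a point developed just below), here is a minimal numpy sketch of a small neural network. The shapes and random placeholder weights are illustrative assumptions; the point is that every unit applies the identical logistic function, whatever system actually generated the data:

```python
import numpy as np

def logistic(z):
    # The single nonlinear transformation reused at every unit.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Random placeholder weights for a 3-input, two-hidden-layer network.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 4)), rng.normal(size=4)
w3, b3 = rng.normal(size=4), rng.normal()

def predict(x):
    h1 = logistic(W1 @ x + b1)   # logistic, applied once...
    h2 = logistic(W2 @ h1 + b2)  # ...and again, identically...
    return w3 @ h2 + b3          # ...then combined linearly.

print(predict(np.array([0.2, -1.0, 0.5])))
```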

While they are more efficient at encoding trends in the data than the simpler models, they generally apply the same nonlinear transformation over and over (such as a logistic function or a split average; the sketch above illustrates the logistic case), regardless of the actual underlying system. This improves accuracy, but it makes these models virtually impossible to interpret meaningfully. Moreover, black box algorithms require significant expertise from their users.

A third option that many are curious about is the enormously large, complex models produced by deep learning methods. There is no denying these models perform extremely well on data like images and text.

An important caveat is that deep neural networks will use every single input available, even irrelevant variables and signals. While this may be beneficial with some data sets, it creates inaccurate results in most business applications, as it incorporates factors that may be completely irrelevant or spurious.
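As a deliberately simplified stand-in for this behavior (ordinary least squares rather than a deep network, purely to keep the sketch short; a flexible deep model exhibits the same tendency more strongly), the following shows a model assigning a nonzero weight to a feature that is pure noise:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50

# One genuinely predictive feature and one pure-noise feature.
signal = rng.normal(size=n)
noise_feature = rng.normal(size=n)            # unrelated to the target
y = 3.0 * signal + rng.normal(scale=0.5, size=n)

X = np.column_stack([signal, noise_feature])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# On a finite sample, the irrelevant feature still earns a nonzero
# weight -- the model "uses" a spurious input.
print(coef)  # e.g. [~3.0, small but nonzero]
```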

Analytical models achieve comparable accuracy without that complexity. Instead of reapplying the same generic transformation over and over, the structure of an analytical model is specific to the system being modeled. That is what makes the model's structure special: it is, by construction, the best-fitting structure for the data, and the simplest and most elegant hypothesis for how the system works.
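Here is a minimal sketch of what a system-specific structure buys you, using scipy's curve_fit. The exponential-decay form is an assumed example of a structure chosen to match the system:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assume domain knowledge (or automated discovery) suggests the
# system follows exponential decay: y = a * exp(-b * x) + c.
def model(x, a, b, c):
    return a * np.exp(-b * x) + c

x = np.linspace(0, 4, 40)
rng = np.random.default_rng(1)
y = model(x, 2.5, 1.3, 0.5) + rng.normal(scale=0.05, size=x.size)

params, _ = curve_fit(model, x, y, p0=(1.0, 1.0, 0.0))
print(params)  # close to the true (2.5, 1.3, 0.5)

# Because the structure matches the system, even far extrapolation
# stays sensible:
print(model(10.0, *params), model(10.0, 2.5, 1.3, 0.5))
```

Because the fitted object is a short equation with named parameters, a business user can read it directly, which is exactly the interpretability advantage the article describes.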

The drawback to analytical models has been the sheer computational effort required to build them. But that has changed in the past few years.

A new type of A.I. called “machine intelligence” leverages advances in computing to automatically create analytical models from raw data, giving analysts accurate – and interpretable – models in minutes.
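Eureqa itself is proprietary, but the open-source gplearn library implements the same underlying idea, symbolic regression, and gives a feel for it. All settings below are illustrative assumptions:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(200, 2))
# Hidden "law" the search should rediscover: y = x0**2 + sin(x1)
y = X[:, 0] ** 2 + np.sin(X[:, 1])

est = SymbolicRegressor(
    population_size=2000,
    generations=20,
    function_set=('add', 'sub', 'mul', 'sin'),
    parsimony_coefficient=0.01,  # pressure toward simpler equations
    random_state=0,
)
est.fit(X, y)

# The fitted model is a readable equation, not a black box.
print(est._program)  # e.g. add(mul(X0, X0), sin(X1))
```

The parsimony penalty is the design choice doing the interpretability work here: it pushes the search toward short equations rather than sprawling expression trees.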

Instead of completely automating a task or simply fitting data, machine intelligence makes discoveries in the data we collect and interprets them back to us automatically, enabling humans to do what they do best: ask the right questions, and apply the answers the right way. Automation has never looked so good.

(About the author: Michael Schmidt is the founder and CTO of Nutonian, a machine intelligence company that automates data-driven discovery for financial services, IoT and security organizations. Michael is the creator of the Eureqa project, a popular software program for discovering hidden mathematical relations in experimental data, and was named one of the "World's 7 Most Powerful Data Scientists" by Forbes.)
