This is the second correspondence on last week’s Predictive Analytics World (PAW) in San Francisco. About a year and a half ago, I wrote a book review of Super Crunchers by Yale economist Ian Ayres, in which I characterized super crunching as the amalgam of predictive modeling and randomized experiments. Randomization to treatment and control groups allows investigators to minimize the risk of study bias, so that, out of the gate, the only meaningful difference between the groups is that one is called treatment and the other control. Predictive modeling by itself allows analysts to infer relationships and correlations; the addition of experiments sharpens the focus to cause and effect. The combination of predictive modeling and experiments is thus a very potent tool in the business learning arsenal of hypothesize/experiment/learn.
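To make the mechanics concrete, here is a minimal sketch of a randomized experiment in Python. The subject count, baseline response rate, and lift are all invented for illustration; the point is only that a coin flip, not any subject attribute, determines group membership, which is what guards against study bias.

```python
# Minimal sketch of a randomized experiment with simulated data.
# A random coin flip assigns each subject to treatment or control, so any
# systematic difference in outcomes can be attributed to the treatment
# rather than to pre-existing differences between the groups.
import random

random.seed(42)

def run_experiment(n_subjects=10_000, base_rate=0.05, lift=0.01):
    """Simulate an A/B test and return the observed response rate per group."""
    results = {"treatment": [], "control": []}
    for _ in range(n_subjects):
        # Randomization: group membership is independent of any subject attribute.
        group = "treatment" if random.random() < 0.5 else "control"
        rate = base_rate + (lift if group == "treatment" else 0.0)
        results[group].append(1 if random.random() < rate else 0)
    return {g: sum(r) / len(r) for g, r in results.items()}

rates = run_experiment()
print(f"treatment response: {rates['treatment']:.3%}")
print(f"control response:   {rates['control']:.3%}")
print(f"estimated lift:     {rates['treatment'] - rates['control']:.3%}")
```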

The power of analytics plus experiments was well understood by PAW participants. Conference chair Eric Siegel noted the importance of experiments in demonstrating the value of predictive modeling, citing the oft-told story from Harrah’s Entertainment that “not using a control group” is grounds for termination. Siegel also detailed the champion/challenger approach to experimentation used by enterprise decision management practitioners.
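A rough illustration of the champion/challenger idea follows, with the same caveats: the model names, traffic split, and scoring logic are placeholders, and the sketch shows only how a small random slice of decisions can be diverted to a challenger model so its performance can be compared against the incumbent.

```python
# Minimal sketch of champion/challenger testing with synthetic outcomes.
# The incumbent (champion) model handles most decisions while a small random
# slice is routed to a candidate (challenger) model; outcomes are tracked
# separately so the two can be compared on live decisions.
import random

random.seed(7)

CHALLENGER_SHARE = 0.10  # fraction of decisions randomly diverted to the challenger

def champion_score(customer_id):
    return 0.6  # placeholder scoring logic for the incumbent model

def challenger_score(customer_id):
    return 0.7  # placeholder scoring logic for the candidate model

outcomes = {"champion": [], "challenger": []}
for customer_id in range(1_000):
    # Random routing plays the role of randomization to treatment and control.
    arm = "challenger" if random.random() < CHALLENGER_SHARE else "champion"
    score = challenger_score(customer_id) if arm == "challenger" else champion_score(customer_id)
    # Simulate whether the scored offer converted (synthetic outcome).
    outcomes[arm].append(1 if random.random() < score * 0.1 else 0)

for arm, hits in outcomes.items():
    print(f"{arm}: {sum(hits) / len(hits):.2%} conversion over {len(hits)} decisions")
```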
