I was a bit annoyed as I read the first couple of paragraphs in “A Step-By-Step Guide to Smart Business Experiments” by Eric Anderson and Duncan Simester in the March 2011 Harvard Business Review. In setting the stage for their article on business experiments, the authors contrast the ease of deploying experimentation with the difficulty of analytics.
“Analytics, which focuses on dissecting past data, is a complicated task. Few firms have the technical skills to implement a full-scale analytics program. Even companies that make big investments in analytics often find the results difficult to interpret, subject to limitations, or difficult to use to immediately improve the bottom line.”
I don't think thought-leading authors Ian Ayres (“Super Crunchers”) and Tom Davenport (“Competing on Analytics”) would agree. Both see analytics as the combination of “dissecting past data” using statistical/machine learning techniques and the experimental method. And my recent experiences with Strata and Predictive Analytics World leave me no doubt about the acceptance of analytics with predictive modeling techniques in the business world – with or without experiments. My take is that while businesses can do predictive modeling and experiments separately, it's in tandem that they deliver the most value. Tight experimental designs inspire cause-and-effect confidence in the findings of data dissection.
Fortunately, my irritation with the article was short-lived, as the authors offered wisdom for BI from their extensive experience deploying experiments in business settings. Their point of departure is that experiments, especially in marketing, are more pervasive than we think, citing examples of J. Crew, Pottery Barn, Capital One and most organizations that solicit charity contributions. I'd add data-driven companies Amazon, eBay (an interview with eBay CEO John Donahoe in the same edition of HBR mentions the company's “Culture of Experimentation”), Google and Facebook. Just about any time you visit one of those websites, you're participating in an experiment.
Critical features of business experiments are control groups and feedback mechanisms built from measurable outcomes. Urging businesses to think like scientists, the authors argue for randomization to treatment and control groups. In some cases, “it makes sense for the company to set up a treatment group, then use the remainder of the customer base as a control...The key to success is to ensure separation between them so that actions taken with one group do not spill over to the other.” While feedback measures can be either behavioral or perceptual, “experiments that measure behavior provide a more direct link to profit, particularly when they measure purchasing.”
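The treatment/control mechanics the authors describe are simple enough to sketch in code. Here is a minimal, hypothetical illustration – nothing from the article itself: the customer IDs, the purchase counts, and the use of a pooled two-proportion z-statistic to compare conversion rates are all my own assumptions about how such an experiment might be analyzed.

```python
import math
import random

def assign_groups(customer_ids, treatment_fraction=0.5, seed=42):
    """Randomly split customers into treatment and control groups."""
    rng = random.Random(seed)
    treatment, control = [], []
    for cid in customer_ids:
        (treatment if rng.random() < treatment_fraction else control).append(cid)
    return treatment, control

def two_proportion_z(conv_t, n_t, conv_c, n_c):
    """z-statistic for the difference between two conversion rates,
    using the pooled estimate of the common proportion."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return (p_t - p_c) / se

# Randomize 10,000 customers, then compare purchase rates
# (the 330 and 290 purchase counts are made up for illustration).
treatment, control = assign_groups(range(10000))
z = two_proportion_z(330, len(treatment), 290, len(control))
print(f"z-statistic: {z:.2f}")
```

A z-statistic near or beyond ±1.96 would suggest the treatment effect is unlikely to be chance at the conventional 5% level – which is exactly the kind of behavioral, purchase-based feedback measure the authors favor.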
Anderson and Simester articulate “Seven Rules for Running Experiments” which have to do with:
- Looking for simple victories – experiments involving individual customers, as well as ones that are easy to execute (technology-driven web experiments come to mind). Also, embrace smaller proofs of concept before embarking on large commitments.
- Measuring/slicing and dicing the data, looking for unique performance on certain metrics by subgroups of the experimental populations. Curiously, this is precisely the analytics the authors cautioned about upfront. They urge analysts to drill into aggregate results to see if findings persist with dimensional breakdowns. What's often true for the aggregate is reversed for significant sub-populations. “Simpson's paradox” (see URL below), discussed often in academia, illustrates this well.
- Developing out-of-the-box thinking. The article cites UK supermarket chain Tesco's discovery “that it was profitable to send coupons for organic food to customers who bought wild birdseed … If you never engage in 'what-if' thinking, your experiments are unlikely to yield breakthrough improvements.”
- Looking for natural experimental settings. Though often not randomized, natural “quasi-experiments” may still be strong enough to infer cause and effect of business interventions. As an illustration, the article notes that when an apparel retailer opened its first store in a certain state, it was then required to collect state sales tax on all state business – store, catalog and online – whereas earlier, online and catalog business had been tax free. The store opening “provided an opportunity to discover how sales tax affected online and catalog demand.”
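The aggregate-versus-subgroup reversal behind Simpson's paradox is easy to demonstrate with toy numbers. The coupon-response counts below are entirely made up (they're the classic kidney-stone figures relabeled for a marketing setting), but they show how campaign A can beat campaign B in every customer segment while B wins in the aggregate:

```python
# Hypothetical (successes, trials) per customer segment for two campaigns.
data = {
    "A": {"loyal": (81, 87),  "new": (192, 263)},
    "B": {"loyal": (234, 270), "new": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Campaign A has the higher response rate in EVERY segment...
for seg in ("loyal", "new"):
    print(seg, rate(*data["A"][seg]) > rate(*data["B"][seg]))  # True, True

# ...yet in aggregate, B wins, because B's volume is concentrated
# in the easier-to-convert "loyal" segment.
overall_a = rate(*map(sum, zip(*data["A"].values())))  # 273/350
overall_b = rate(*map(sum, zip(*data["B"].values())))  # 289/350
print(overall_b > overall_a)  # True
```

The paradox is driven by unequal segment mixes, which is why the authors' advice to drill from aggregate results into dimensional breakdowns matters so much.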
The authors concede that internal obstacles – resistance to an experimental culture by intuition-based decision makers – might well be more formidable than external barriers. Their final message: “Whether your experiments are large or small, natural or created, your goal as manager is the same – to shift your organization from a culture of decision making by intuition to one of experimentation …This will encourage out-of-the-box innovations that lead to real transformation.”
One caveat not mentioned in the HBR article: It's critical to corroborate findings over time by reproducing experiments akin to the original. What was once a clear treatment effect may well diminish to no difference or even a reversal later. Examples of such diminished treatment effects in the worlds of pharmaceuticals and psychology are described in a recent New Yorker article, “The Truth Wears Off.” Frustrated psychologist Jonathan Schooler refers to the problem as “'cosmic habituation' by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. 'Habituation is why you don’t notice the stuff that’s always there,' Schooler says. 'It’s an inevitable process of adjustment, a ratcheting down of excitement.'”
Those a bit rusty on the nuts and bolts of experimental design might want to take a look at this informative PDF slide deck from an OpenCourseWare class in Cognitive Science at MIT. As I've done for the past two years, I'll review other OpenCourseWare materials helpful for BI in a subsequent blog.