In the age of big data there are huge opportunities to provide customers with truly personalized experiences that reflect their needs and their desires. However, these opportunities are not without risk and it will take a systematic approach if we want to transform the digital experience from static and constant to dynamic and adaptive. Mining data for small nuggets of insight is not sufficient; to truly succeed we need to use data to drive the acquisition of knowledge about our customers, and to generate understanding of how we can design products to serve their needs and ours.

By themselves, data are not useful; it is only when we apply data to generate hypotheses, to validate theories and to make informed predictions that we can harness their power. The smart application of data is often the weakest link in an analytics-to-action pipeline. This is not because we fail to use sufficiently clever algorithms, but rather because we fail to compare the outcomes of different actions systematically.

The gold standard for comparing different “things” is a randomized trial: for a population of test subjects in a controlled environment we try one thing vs. the other, and systematically analyze the differences. Although this sounds straightforward, in reality there are many pitfalls for the would-be experimenter. The literature and history of scientific endeavor are littered with examples of poor experimental protocols; it is very easy to mislead oneself when trying to turn data into knowledge. If personal biases and accidental missteps can mislead Nobel prize winners, then the rest of us should be wary of blindly assuming simple A/B testing can lead us to better models of the world.
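To make “systematically analyze the differences” concrete, here is a minimal sketch in Python of the significance test that typically underlies a simple two-variant randomized trial. The conversion counts are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for comparing two conversion rates, using the
    pooled standard error of the difference in proportions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: variant B converts at 2.4% vs. 2.0% for variant A.
z = two_proportion_z(200, 10_000, 240, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 would be 'significant' at the usual 5% level
```

Note that even a seemingly healthy lift can fall short of significance at realistic traffic volumes, which is exactly why the comparison has to be done systematically rather than by eyeballing the rates.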

The Human Element

We are all very human in the way we approach testing. We have our personal biases and we tend to see the data in whatever fashion is most reflective of those biases. We stop our tests prematurely when the models we favor are producing the best results. We remove unusual data points when they work against us, and we keep them when they’re in our favor. These very human actions can destroy the validity of experiments and lead us to personalization models for our customers that worsen their experience and reduce our KPIs.
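The cost of stopping tests prematurely can be seen in a small simulation. The sketch below (illustrative Python; all parameter values are invented) runs A/A tests, in which both arms are identical, and shows how repeatedly peeking at the running result and stopping at the first significant-looking difference inflates the false-positive rate well above the nominal 5%:

```python
import random

def false_positive_rate(peeks, n_trials=1000, n_per_arm=1000,
                        base_rate=0.05, z_crit=1.96, seed=0):
    """Fraction of A/A tests (no true difference between arms) declared
    'significant' when the experimenter checks the running z-statistic
    `peeks` times and stops at the first threshold crossing."""
    rng = random.Random(seed)
    hits = 0
    step = n_per_arm // peeks
    for _ in range(n_trials):
        a = b = 0
        for k in range(1, peeks + 1):
            # Accumulate another batch of Bernoulli conversions per arm.
            a += sum(rng.random() < base_rate for _ in range(step))
            b += sum(rng.random() < base_rate for _ in range(step))
            n = k * step
            p = (a + b) / (2 * n)
            if 0 < p < 1:
                se = (2 * p * (1 - p) / n) ** 0.5
                if abs(a - b) / n / se > z_crit:
                    hits += 1  # false positive: stopped on a random fluctuation
                    break
    return hits / n_trials

print("one look: ", false_positive_rate(peeks=1))   # close to the nominal 5%
print("ten looks:", false_positive_rate(peeks=10))  # substantially inflated
```

A single look at the end of the test holds the error rate near its nominal level; checking ten times and stopping early roughly triples it, even though nothing about the underlying data changed.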

It’s not just constructing smart experiments that can be challenging when building optimized personalization strategies. We also should worry about how to incorporate existing domain knowledge into our decisions. It is very tempting to march blindly forward with a data-driven strategy and ignore decades of hard-won insight from marketers and merchandisers on how customers will behave and what they want. The biggest wins in personalization are going to come from the effective application of both heuristic domain knowledge and algorithmic machine learning to massive amounts of data.

Blended Approaches

If the big wins are going to come from smart experiments and effective blending of algorithms and human knowledge, what are the tools that we need to get there? To put experimental validation front and center of a personalization strategy requires a robust framework for doing experiments. Best practices in experimental design may be hard to set up, but they change little between experiments, so there is no need to invest resources in starting from scratch each time a retailer has a new personalization model to test. An effective experimental framework can exploit this paradigm, allowing for the rapid testing of both new models and existing heuristics by marketers, merchandisers and data scientists alike.
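One reusable building block in such a framework is deterministic traffic assignment. The sketch below is hypothetical Python (the function and parameter names are our own, not from any particular product): hashing the user and experiment IDs together means the same customer always sees the same variant, each new experiment gets an independent split, and no per-experiment infrastructure has to be rebuilt from scratch:

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str,
                   variants=("control", "treatment")):
    """Deterministically bucket a user into a variant for one experiment.

    Hashing user and experiment together makes assignments stable
    across sessions but independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment...
assert assign_variant("user-42", "homepage-banner") == \
       assign_variant("user-42", "homepage-banner")
# ...but may land in a different bucket for a different experiment.
print(assign_variant("user-42", "homepage-banner"),
      assign_variant("user-42", "checkout-flow"))
```

A marketer-facing framework would layer metric logging and the statistical analysis on top of a primitive like this; keeping assignment deterministic and self-serve is what makes rapid, repeated experimentation cheap for non-specialists.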

To get the most from data requires working fast. Tastes change and behaviors can shift on very short timescales, and the best personalization models need to adapt. In such a rapidly changing landscape it can be very tempting to forge onwards with an apparently attractive new strategy without first validating it in a systematic fashion. That can be a very, very expensive mistake; a mistake that can be easily avoided by working with an experimental framework that will force you to make decisions driven by the cycle of hypothesis and testing.

What are the characteristics that organizations should look for when implementing an experimental framework? The primary focus should be ease of use: experimenting should be within everyone’s purview, which means it should be achievable without requiring the involvement of data science specialists and IT at every step. A framework also needs to enforce good experimental technique while enabling users to apply their understanding of customers as effectively as possible: the user should generate the hypothesis and the experimental framework should do everything else necessary to answer the question.

We are at a technological tipping point that enables us to gather data at a scale and speed that is unprecedented. Further, advances in machine learning and predictive analytics at scale have transformed our ability to use these data to provide unique, personalized experiences for our customers. However, neither algorithms nor more data will succeed in building better experiences if we do not anchor our actions in the effective use of experiments.

All Information Management content is archived after seven days.