We hear in the media that data has become the source of all answers. The future of big data is promising; we’re in a new age in which we have so much information that our decisions can practically be made for us. However, the opposite seems to be true – many decision-makers simply stare blankly at the data in front of them thinking, “Now what?”
 
Edward McQuarrie, a marketing research professor at Santa Clara University, points out a common misconception in data literacy today: that the best insight is crafted from the data we have on hand. In reality, more insight can be found by taking stock of our internal assumptions and measuring them against the captured data. This requires the uncomfortable process of being proven wrong and adjusting one’s view accordingly. Although the difference seems subtle, the results are worlds apart.

Consider these examples.

The 2012 Election: Barack Obama’s Campaign

A story that marketers loved to talk about in 2012 was the success of President Obama’s fundraising campaign. Although the conversation about campaign funds in the 2012 election revolved around Super PACs, the email campaigns representing the incumbent candidate raised a whopping $690 million, which is certainly nothing to scoff at. The real story, however, lies in the process that the staff employed.

It’s easy to assume the email writers were so good that they knew how to grab people’s attention with catchy headlines and perfectly designed templates, but this view is too simplistic. In fact, the writers themselves admitted they were not very accurate at predicting which emails would resonate with recipients and which would not. How can this be true, when they were wildly successful? Is it luck? Not quite.

H1N1 – The Swine Flu

Most of us remember the 2009 swine flu pandemic, or H1N1, and how it was declared an international concern. It was a matter of national emergency, and projections in the U.S. estimated that half of the population could be infected, potentially resulting in as many as 90,000 related deaths. Schools were closed, vaccine manufacturing was scaled up and everyone began washing their hands a whole lot more often.

However, the results were much less dramatic than anticipated. Only one-third of the anticipated cases were reported, with approximately 12,300 related deaths by March 2010. This put the swine flu’s lethality at 0.02 percent, lower than the roughly 0.03 percent of an average seasonal flu strain.

The Difference in Approach

Both of these stories have something in common – data. This data had to be collected, aggregated and correlated in order to make enough sense for decision-makers to leverage. While they are worlds apart in function (and in what was at stake), the object lesson we should focus on is the difference in approach. The H1N1 case used data to extrapolate, whereas the Obama campaign used data to learn.

The swine flu situation caused a lot of initial alarm as a result of how the data was interpreted. With around 1,600 cases and 103 deaths reported in Mexico by late April 2009, the mortality rate of H1N1 could be calculated at roughly 6.4 percent. Add to this the fact that the flu spreads easily, with cases cropping up in many countries, and it is understandable how frightening this sounded at face value.

The projections were confounded in different ways. First, swine flu was actually underreported in Mexico. Some cases were mistaken for H3N2. Others were not serious enough to warrant medical attention (nor did they result in deaths). This means the denominator (the number of people who actually caught the virus) was higher than the CDC realized, which, in turn, means the mortality rate was lower. This would have impacted extrapolative models at the time. The quality of medical care, which varied among the affected countries, was another factor that did not get consideration. Treating all infections and cases as identical was a very simplistic assumption.
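To make the arithmetic concrete, here is a minimal sketch in Python of how a larger denominator of unreported infections deflates the apparent case-fatality rate, moving it from the early 6.4 percent figure toward something closer to what was eventually observed. The undercount multipliers are purely hypothetical and serve only to illustrate the sensitivity of the estimate.

```python
# Illustrative only: how underreporting of mild cases deflates a naive
# case-fatality estimate. The undercount multipliers below are hypothetical.

deaths = 103            # deaths reported in Mexico by late April 2009
reported_cases = 1_600  # cases reported at the same point in time

naive_cfr = deaths / reported_cases
print(f"Naive case-fatality rate: {naive_cfr:.1%}")  # roughly 6.4%

# If mild infections never reached a clinic, the true denominator was larger,
# and the same number of deaths implies a much lower fatality rate.
for undercount in (10, 100, 1_000):
    true_cases = reported_cases * undercount
    print(f"x{undercount:>5} undercount -> rate {deaths / true_cases:.3%}")
```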

The point of this example is not to demean vigilance. It serves to point out how difficult it still is to predict the future outright, particularly when a large number of factors come into play.

Business is equally complex. High-level executives, when presented with this type of information, want to use the insight to know what the future holds for the organization. This is because the potential to make prescient decisions based on such predictions is very alluring when dollars or even lives are on the line. It is a very natural reaction to desire predictions. It is important to remember, however, that there are limits to such predictions, because extrapolation skips the examination of assumptions within the data set. If you are not careful, this can amount to blind reliance on foundations that may have surprisingly little merit.

Compare that to the email campaign, where the writers’ assumptions were completely blown out of the water. By testing their assumptions against reality and proving themselves wrong initially, they wound up being more “right” in the long run.

Some key lessons they found from their marketing data:

  • Many of the people on their email list received a steady stream of messages and did not mind the volume. Very few ever unsubscribed.
  • Light profanity and racial targeting worked as triggers. Subject lines such as “Hell yeah, I like Obamacare” and “We’ve Got Your Back!” generated a lot of click-throughs.
  • Some of the best calls to action were also the least attractive in terms of visual design. They worked well because they garnered the most attention.

These insights only came about once the team crashed their ideas against their data sets, learning where their successes would lie and where they would not. It’s also interesting how counterintuitive these results were. In fact, had they stuck to traditional viewpoints (and extrapolated from them), they would have missed out on their best opportunities.

A Difference in Mindset

Professor McQuarrie pointed out that, at the end of the day, people don’t like being wrong. However, this attitude can prevent the sort of learning that will help companies grow and adapt.
 
Below are three barriers that can hinder many organizations on their way to achieving this learner’s mentality.

First, many employees are under immense pressure to deliver certainty. Audiences ask for indications from political polls. Shareholders wish to know exactly how much their stock investment will grow in the coming years. Business executives must know what’s coming down the sales pipeline so they can create financial predictions. This often forces hasty assumptions at the cost of nuance and cautious caveats (which may sound too uncertain).

Second, no one likes to challenge the status quo. If things are working, why bother to investigate any further and create more work? No one should doubt that Mitt Romney was able to raise some level of funds through online sources. We might even assume the campaign enlisted best practices that were well thought out. However, when the goal is to seek out opportunities and insights, sticking to the status quo is very unlikely to yield different results. After all, if the intent is to do things the same way, why bother measuring anything at all?

Lastly, continuous improvement, as McQuarrie states, is quite difficult. Disciplines that rely on modeling require that existing models go through iterations to improve upon past assumptions. While big data has been the buzzword of recent years and seems like a very new development, continuous improvement through the use of data has been around for a long time. We call it kaizen and tend to apply it to operations and manufacturing. We’ve also referred to it as A/B testing in direct marketing. President Obama’s 2012 email fundraiser is this exact concept in action. However, its use has not yet spread into all data-driven fields.
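For readers unfamiliar with the mechanics, here is a minimal sketch of the A/B-testing idea referenced above: two subject lines are sent to comparable groups of recipients, and their click-through rates are compared with a simple two-proportion z-test. The subject-line variants and all counts below are invented for illustration.

```python
# Minimal A/B test sketch: compare the click-through rates of two email
# subject lines. All counts are hypothetical; a real campaign would test
# many variants on much larger lists before sending the winner to everyone.
from math import erf, sqrt

def two_proportion_z_test(clicks_a, sends_a, clicks_b, sends_b):
    """Return (rate_a, rate_b, two-sided p-value) for the difference in rates."""
    rate_a, rate_b = clicks_a / sends_a, clicks_b / sends_b
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return rate_a, rate_b, p_value

# Variant A: plain, conversational subject line; variant B: polished template.
rate_a, rate_b, p = two_proportion_z_test(clicks_a=480, sends_a=10_000,
                                           clicks_b=410, sends_b=10_000)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  p-value: {p:.4f}")
```

The point is not the statistics but the posture: each send is treated as an experiment whose outcome can overturn the writers’ assumptions rather than confirm them.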

It behooves all of us, as consumers of data, to remember why we have it: We gather data to improve what we’re doing. That is fundamentally different from using it to extrapolate. Extrapolation is a comfort zone of answers, all too often built on a foundation of sand. Improvement is uncomfortable and confrontational, but it is built on the solid bedrock of reality. If big data is to be a commonplace tool, it stands to reason that the next step in competition is not merely to possess it, but to harness it. Harnessing it does not mean staring at charts until one sees a self-fulfilling prophecy. Wielding data properly means being open to having one’s own mind changed and adapting to insights that many others may be too stubborn to see.

The bad news is that it’s hard enough to do personally, much less lead teams and corporations toward a new paradigm. On the other hand, as I left the good professor’s office, he mentioned one last thing to me: “Always bet on apathy.”

And that really is the best news: The people who pursue this learner’s mentality will get a competitive advantage that most will not.


The views in this commentary do not necessarily reflect those of Information Management or SourceMedia.