I just finished an informative read, “Everything Is Obvious: *Once You Know the Answer – How Common Sense Fails Us,” by social scientist Duncan Watts.

Regular readers of Open Thoughts on Analytics won’t be surprised I found a book with a title like this noteworthy. I’ve written quite a bit over the years on challenges we face trying to be the rational, objective, non-biased actors and decision-makers we think we are.

So why is a book outlining the weaknesses of day-to-day, common sense thinking important for business intelligence and data science? Because both BI and DS are driven from a science of business framework that formulates and tests hypotheses on the causes and effects of business operations. If the thinking that produces that testable understanding is flawed, then so too will be the resulting BI and DS.

According to Watts, common sense is "exquisitely adapted to handling the kind of complexity that arises in everyday situations … But 'situations' involving corporations, cultures, markets, nation-states, and global institutions exhibit a very different kind of complexity from everyday situations. And under these circumstances, common sense turns out to suffer from a number of errors that systematically mislead us. Yet because of the way we learn from experience … the failings of commonsense reasoning are rarely apparent to us … The paradox of common sense, therefore, is that even as it helps us make sense of the world, it can actively undermine our ability to understand it."

The author argues that common sense explanations of complex behavior fail in three ways. The first error is that our mental model of individual behavior is systematically flawed. The second is that our explanations of collective behavior are even worse, often missing the “emergence” – one plus one equals three – of social behavior. And finally, “we learn less from history than we think we do, and that misperception skews our perception of the future.”

What are some of the “common” individual common sense foibles? The notion that humans behave rationally, that we’re “homo economicus” and that all behavior can be explained by individuals’ attempts to maximize utility, is certainly at the top of the list. A corollary to homo economicus is that if it’s determined that an intervention failed to attain its objectives, it must be the case that the incentive scheme didn’t work. All that needs to be done is “fix” the incentives.

Current research in psychology and economics, however, suggests instead that unbridled rationality is not the norm. Humans are very biased thinkers and often adopt simplifying (and irrational) heuristics such as framing, anchoring, availability and loss aversion to guide their behavior.

Framing, which occurs when “preferences can be influenced simply by changing the way a situation is presented,” is a particularly pervasive and insidious common sense error. In fact, “no matter how many times we fail to predict someone’s behavior correctly, we can always explain away our mistakes in terms of things that we didn’t know at the time. In this way, we manage to sweep the frame problem under the carpet – always convincing ourselves that this time we are going to get it right.”

Watts uses his training as a sociologist and position as scientist at Yahoo! to conduct field experiments on collective behavior. In one experiment called Music Lab, the author and his colleagues demonstrated the workings of “cumulative advantage” by showing how social influence impacted which songs were downloaded by participants. He also examined data from Yahoo! and Twitter to conclude that the tidy “Law of the Few” explanation of how exceptional people influence adoption of new products and innovations in social networks is far too simplistic. His finding? “The outcome depends far more on the overall structure of the network than on the properties of the individuals that trigger it.” “The Tipping Point” author Malcolm Gladwell probably wouldn’t agree.
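Cumulative advantage is easy to illustrate in a few lines of code. The sketch below is not Watts’s actual Music Lab design; it is a minimal, hypothetical Pólya-urn-style simulation in which each new listener downloads a song with probability proportional to its current download count, so small early differences snowball into large final gaps.

```python
import random

def polya_urn_sim(n_songs=10, n_downloads=1000, seed=42):
    """Rich-get-richer sketch: each new listener picks a song with
    probability proportional to its current download count."""
    random.seed(seed)
    counts = [1] * n_songs  # give every song one 'seed' download
    for _ in range(n_downloads):
        # random.choices draws one index, weighted by current popularity
        pick = random.choices(range(n_songs), weights=counts)[0]
        counts[pick] += 1
    return sorted(counts, reverse=True)

result = polya_urn_sim()
```

Run it with different seeds and the identity of the runaway winner changes, while the skewed, winner-take-most shape of the distribution does not: the signature of social influence that makes individual hits so hard to predict in advance.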

Chapter 5, "History the Fickle Teacher," is most illuminating. Watts argues there’s no shortage of reasons to challenge historical explanations of events and behaviors. Indeed, since historical explanations are constructed only after an outcome is known, history generally just describes rather than explains. The combination of framing, hindsight, creeping determinism, and sampling biases often leads to the post hoc fallacy – because A preceded B, A caused B – which undermines historical explanations.

The problem with history as a scientific method is, of course, that sequences of events occur just once. The inability to observe “counterfactuals” or perform experiments that control for important factors limits the production of evidence needed to infer cause-and-effect relationships. Without experiments, history turns to storytelling, which can easily bury unsuitable evidence and describe “something that happened at a particular point in time but do so in a way that invokes knowledge of a later point.”

The treatment of the roller-coaster ride of Silicon Valley giant Cisco by the business press is a comical illustration of such historical explanation. Lauded by Fortune as “computing’s new superpower” when its stock price was at $80 in 2000, Cisco was derided as lacking in strategy, execution and leadership just a year later following the Internet bust. History had “haloed” Cisco’s overall evaluation up and down to match its stock performance.

Next time I’ll focus on several “Everything Is Obvious” recommendations for addressing common sense mistakes in business thinking.

(Editor's note: For the second installment from Steve, "The Dream of Business Prediction," click here.)