I knew I was going to like my Outer Banks vacation read, "Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics and Society," from the very first pages of Chapter One. Author Jim Manzi introduced the book’s main theme – “that real experiments were required for adjudicating among competing theories for the effects of business interventions” – by illustrating it with the strongly opinionated macroeconomic analyses surrounding the stimulus program response to the 2008 economic crisis.
While “liberal” Nobel laureates argued that government spending should have been even higher than it ultimately was, conservative laureates countered that “it is a triumph of hope over experience to believe that more government spending will help the US today.” Worse than this 180-degree difference in assessment by the best in economics, though, is that there was no way either side could be proven wrong even after the fact. The conservatives scoffed that the stimulus didn’t accomplish what it purported to; the liberals countered that we’d have been in a full-scale depression without it – and what about the failure of Europe’s draconian conservative austerity policies anyway? “The key problem is that we have no reliable way to measure the counterfactual – that is, to know what would have happened had we not executed the policy – because so many other factors influence the outcome.” In short, there was no way to implement a controlled experiment to test the competing hypotheses. And without a smoking-gun test, there was no way politically charged economists would accept policy defeat.
Manzi established his experimental chops as a business consultant, later founding Applied Predictive Technologies (APT), a software company that uses experiments to establish causality. Uncontrolled’s an ambitious read, progressing the experimentation discussion from the philosophy of science, to the physical sciences, to the social sciences, to business strategy and, ultimately, to political action. Most of it works.
The author builds the case for the need for randomized experiments to establish cause and effect in social and business environments by taking on several prominent but methodologically less rigorous “quasi-experimental” studies published in the social sciences. In each case – the first asking whether Republican presidents “caused” income inequality, the second proposing that legalized abortion “caused” a reduction in crime – Manzi proffers alternative explanations consistent with the study data. Had randomized experiments been feasible for either case, “other factors equal” would have helped assure, within the limits of probability, definitive testing of competing hypotheses.
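The point about randomization is easy to see in a toy simulation (mine, not Manzi’s): when units self-select into an intervention based on a hidden factor that also drives the outcome, a naive before/after-style comparison is biased, while a coin-flip assignment balances that hidden factor across arms. The effect sizes and confounder here are invented for illustration.

```python
import random

random.seed(0)

TRUE_EFFECT = 2.0  # the intervention's real lift, known only to the simulation

def outcome(treated, confounder):
    # Outcome depends on both the intervention and a hidden confounder
    # (say, store size or customer affluence), plus noise.
    return TRUE_EFFECT * treated + 5.0 * confounder + random.gauss(0, 1)

def mean(xs):
    return sum(xs) / len(xs)

# Observational data: units with a high confounder are more likely to
# adopt the intervention on their own (self-selection).
obs_treated, obs_control = [], []
for _ in range(10_000):
    c = random.random()
    treated = random.random() < c          # adoption correlates with confounder
    (obs_treated if treated else obs_control).append(outcome(treated, c))

naive_estimate = mean(obs_treated) - mean(obs_control)

# Randomized experiment: a coin flip decides treatment, so the confounder
# is balanced across arms in expectation.
rct_treated, rct_control = [], []
for _ in range(10_000):
    c = random.random()
    treated = random.random() < 0.5        # randomization breaks the link
    (rct_treated if treated else rct_control).append(outcome(treated, c))

rct_estimate = mean(rct_treated) - mean(rct_control)

print(f"naive observational estimate: {naive_estimate:.2f}")  # biased well above 2.0
print(f"randomized estimate:          {rct_estimate:.2f}")    # close to 2.0
```

The naive comparison bundles the confounder’s contribution into the apparent treatment effect – exactly the “what would have happened anyway” problem the quasi-experimental studies face.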
One area where I think Manzi particularly hits the mark is in his focus on the generalizability of experimental results – what scientists call external validity. Even if a business or social experiment establishes the cause and effect “internal validity” of an intervention, it may have little to say about the contexts or conditions under which the findings hold. Manzi identifies the “causal density” complexity of human behavior as the culprit. While the results of experimental tests of a new vaccine might generalize across geographies, a charter school program that works in New York City may fail in Chicago, and it’s important to understand why. Alas, external validity concerns often temper the positive findings of business and social experiments.
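Manzi’s external-validity caution can also be sketched in code (again my illustration, not his): even a perfectly randomized experiment estimates the effect only in the context where it ran. Here the “true” per-city effects are assumptions built into the simulation to mimic effect modification across contexts.

```python
import random

random.seed(1)

# Hypothetical effect modification: the same intervention has a different
# true effect in each city, so an internally valid estimate from one
# context need not generalize to another.
TRUE_EFFECT = {"new_york": 3.0, "chicago": 0.0}

def run_experiment(city, n=5_000):
    # A clean randomized experiment within a single city.
    treated, control = [], []
    for _ in range(n):
        y = random.gauss(10, 2)            # baseline outcome with noise
        if random.random() < 0.5:          # randomized assignment
            treated.append(y + TRUE_EFFECT[city])
        else:
            control.append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

for city in TRUE_EFFECT:
    print(f"{city}: estimated lift {run_experiment(city):+.2f}")
```

Both estimates are internally valid, yet only replication across contexts reveals that the New York result doesn’t travel to Chicago – which is why Manzi stresses understanding where and why findings hold.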
Manzi’s overall take on the future of testing designs in business is that progress will be slow and methodical, with most experiments failing to demonstrate a strong intervention impact. He sees non-experimental analytics and experimental techniques as complements, with the former used for preliminary performance assessment and for suggesting hypotheses for subsequent experiments. Those experiments should in turn be replicated and their conditional/contextual limitations understood.
For rigorous testing methodologies to drive business results, organizations must be all-in, starting with commitment at the CEO level. In addition, there must be institutionalized processes in place to support a culture of repeatable experimentation. And, perhaps most difficult, the internal political ramifications of making decisions from data rather than experts must be addressed across the enterprise. Manzi recommends a small organizational entity focused on learning to design/execute/analyze experiments. The entity should have no decision-making authority, but remain close enough to operations that it doesn’t become “academically” marginal.
I often cite eminent Stanford statistician Brad Efron’s political-lens perspective on analytics, which rates approaches by their adherence to strong mathematical model foundations. Credulous, left-leaning data mining simply listens to the data with little a priori math. Centrist statistics is both model- and computation-based. And right-wing econometrics is mathematical-model obsessed.
You can think of Manzi’s design continuum in similar, liberal-conservative terms. Observational data pattern-finding (mining) is the credulous leftist. Pattern finding with multiple observations over time (panel/pooled regression) is left-of-center, though not as extreme. More sophisticated quasi-experimental designs embellishing pattern-matching are centrist. And true randomized experiments, the most skeptical, are hawkish conservatives. In this light, "Big Data Revolution" authors Viktor Mayer-Schonberger and Kenneth Cukier are Socialists; traditional academic survey social scientists are middle-of-the-roaders; and Manzi’s a tea partier.
Much as I liked Part I on Science and Part II on Social Science, I was less than enthused about Uncontrolled’s Part III, Political Action, finding it little more than a gospel of Manzi’s political philosophies. In the aggregate, though, it was easy for me to gloss over Part III and see Uncontrolled as an important book for our big data/analytics time, one that should be appreciated by readers of all political and data persuasions. One and a half thumbs up.