I've written several articles on the distinction between planners and searchers for BI. My experience with the planning-searching dichotomy originated with development economist William Easterly and his research on the international war on poverty. Easterly has little use for the heavy-handed central planning that seems to be the norm for foreign aid to impoverished countries, preferring instead the work of searchers, “who explore solutions by trial and error, have a way to get feedback on the ones that work, and then expand the ones that work, all of this in an unplanned, spontaneous way.”
Easterly's straw men contrast the work of Planners quite unfavorably with that of Searchers. “Planners determine what to supply; Searchers find out what is in demand. Planners apply global blueprints; Searchers adapt to local conditions. Planners at the Top lack knowledge of the Bottom; Searchers find out what the reality is at the Bottom. A Planner thinks he already knows the answers; he thinks of poverty as a technical engineering problem that his answers will solve. A Searcher admits he doesn’t know the answers in advance; he believes that poverty is a complicated tangle of political, social, historical, institutional, and technological factors. A Searcher only hopes to find answers to individual problems by trial and error experimentation. A Planner believes outsiders know enough to impose solutions. A Searcher believes only insiders have enough knowledge to find solutions, and that most solutions must be homegrown.” In short, in Easterly's world view, Planners execute elaborate top-down solutions that may or may not hit the mark. Searchers, by contrast, acknowledge they don't have the answers but set out systematically to learn on behalf of their constituents. I believe the same dichotomy that frames the war on poverty is also pertinent to the development of business strategy and intelligence.
If searching-planning is a continuum, I'm much more a searcher -- at least 80% toward the searching end. This preference reflects my bottom-up, inductive, analytics-driven approach to decision-making, in contrast to a deductive, top-down method. At the same time, I'm certainly more than just a searcher, and I recognize the need for upfront hypotheses and plans. For me, it's a matter of emphasis and balance as I struggle to reconcile the disparate approaches.
An interview in the October 22 Wall Street Journal Business Insight edition, “Learning from Corporate Flops,” with Columbia professor Rita Gunther McGrath, weighs in heavily on the planning-searching dilemma. In tandem with Ian MacMillan of Wharton, McGrath has developed an approach to corporate strategy formulation called Discovery-Driven Growth. Their thesis can be encapsulated in a single sentence: the only plan is to learn as you go. A searcher's guide to planning, of sorts.
According to McGrath: “Discovery-driven growth is a way of planning to grow that doesn’t require a lot of analytical information at the outset. It recognizes that many of the data that you need to make decisions don’t exist at the time that you have to make the decisions. It’s a plan to learn.” McGrath observes that even when new ventures are beautifully planned, decision-makers often make bad choices anyway: “The first thing is they take the untested assumptions that underlie the plan and treat them as facts. And what happens then is two interrelated cognitive things start to kick in. The first one is confirmation bias, our all-too-human tendency to embrace data that support what we believe to be true and reject data that might call into question what we believe to be true.”
The antidote, according to McGrath, is to recognize precisely where you are in the planning process. “Now what you’ve got is a roadmap to when you’re going to learn what. What you then can do is say, ‘If there are a bunch of assumptions that we can test early and cheaply, let’s do that first before we start messing around investing in plant and equipment.’ Instead of the goal being to predict what’s going to happen, your goal is to cost your company the minimum amount you possibly can while you’re learning what you need to learn.”
MacMillan embellishes: “You have to make assumptions. The key issue is to know from the start that the plan is wrong. That's the only thing I know about the plan -- that it's wrong. So, how do I plan in such a way that I come out with the right solution but not necessarily know what it is at the outset? That's what discovery-driven planning is all about. It's a plan to learn. We don't know. So what are we going to learn? What you really want to do is you want to learn cheap and you want to learn fast, and if it's wrong you want to go and do something else or you want to redirect ... I document all the assumptions that I'm making. And then I test these assumptions. I plan to test these assumptions at checkpoints. So, for instance, I might develop a model of the product if it's a product I'm making. I might do a market test. I might do some focus groups. But at each one of these checkpoints, what I do is come back and test whether my assumptions are right or not. To the extent that they are right, I continue. To the extent that they are not right, I shut down the project. So, I deliberately design checkpoints where I can learn. The last challenge is to creatively invest only as you learn. In the beginning you invest very little so you can afford to be wrong. As you get more and more confident in your assumptions, which may change, and as you redirect the project you may make bigger and bigger investments.”
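MacMillan's invest-as-you-learn discipline can be sketched in code: spend a little at each checkpoint, test a documented assumption, and shut down at the first failure rather than committing the big money upfront. The assumptions, budgets, and outcomes below are purely illustrative inventions of mine, not details from the interview:

```python
# A hypothetical sketch of discovery-driven checkpoints: invest only enough
# to test the next assumption, and stop the project the moment one fails.

def run_checkpoints(assumptions, budgets):
    """Walk the checkpoints in order, spending more only as assumptions
    survive testing; halt at the first falsified assumption."""
    spent = 0
    for (name, holds), budget in zip(assumptions, budgets):
        spent += budget          # invest just enough to run this test
        if not holds:            # assumption falsified: shut down the project
            return ("shut down", name, spent)
    return ("continue", None, spent)

# Each checkpoint tests one assumption; budgets grow with confidence.
assumptions = [
    ("customers will try a prototype", True),
    ("they will pay a premium price", False),  # falsified at the market test
    ("production scales economically", True),
]
budgets = [10_000, 50_000, 500_000]

print(run_checkpoints(assumptions, budgets))
# -> ('shut down', 'they will pay a premium price', 60000)
```

Note that the expensive third checkpoint is never reached: the project dies having spent $60,000 rather than $560,000, which is exactly the "learn cheap, learn fast" point MacMillan is making.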
I like the concept of discovery-driven planning a lot. It seems a reasonable compromise between pure planning and searching, combining the “theory-building” strengths of traditional strategic planning with the continuous review and show-me intelligence of directed searching. In a way, it's akin to the Bayesian model of learning: Start with the assumptions and hypotheses -- the priors and likelihood functions, respectively. Adapt those beliefs as new information is gathered, reacting to the strength of evidence. The evolving beliefs or posteriors are derived from the priors and the likelihood functions, providing for a continuous cycle of learn as you go, where the priors for the next iteration are posteriors from the last.
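That Bayesian learn-as-you-go cycle can be made concrete with a minimal sketch. The Beta-Binomial setup below is my own illustrative choice (the Beta prior is conjugate to the Binomial likelihood, so each update is just counting), not anything from the article:

```python
# A minimal sketch of Bayesian learn-as-you-go with a Beta-Binomial model.
# Illustrative scenario: we hold a belief about an unknown success rate p
# (say, the share of customers who adopt a product). Each cycle's posterior
# becomes the prior for the next round of evidence.

def update(alpha, beta, successes, failures):
    """One learning cycle: Beta(alpha, beta) prior plus Binomial evidence
    yields a Beta(alpha + successes, beta + failures) posterior."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Start with a weak, uniform prior belief about p: Beta(1, 1).
alpha, beta = 1.0, 1.0

# Simulated evidence gathered at successive checkpoints: (successes, failures).
checkpoints = [(3, 7), (12, 18), (45, 55)]

for s, f in checkpoints:
    alpha, beta = update(alpha, beta, s, f)
    print(f"belief about p: mean = {posterior_mean(alpha, beta):.3f} "
          f"(after {int(alpha + beta - 2)} observations)")
# Final belief: mean ≈ 0.430 after 140 observations.
```

Each pass through the loop is one turn of the learn-as-you-go cycle: the posterior from the last iteration is the prior for the next, and the belief sharpens as evidence accumulates.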
Steve Miller's blog can also be found at miller.openbi.com.