It’s been over a year since I’ve blogged about anything from the Harvard Business Review. For business intelligence, big data and analytics relevance, the more tech-focused MIT Sloan Management Review is probably a better bet at this point. But every now and then HBR hits the evidence-based mark with an article that can have a big influence on our discipline.

Like many HBR editions, September 2012 revolves around business strategy, the Editor’s introduction articulating the challenge of a “Proper Theory of Business (is) to marry empirical rigor with creative thinking.” Included in the spotlight is an intriguing piece entitled “Bringing Science to the Art of Strategy,” written by several B-school academics and the former chairman and CEO of Procter & Gamble. A credible team, it would seem.

I’ve authored several articles on the collaboration of strategy and science, cleverly naming the fusion “the science of business.” In fact, my “Science of Business Manifesto” does little more than steal from Kaplan and Norton’s classic “The Balanced Scorecard” which “builds upon the premise of strategy as hypotheses. Strategy implies the movement of an organization from its present position to a desirable but uncertain future position. Because the organization has never been to this future position, its intended pathway involves a series of linked hypotheses to be described as a set of cause-and-effect relationships that are explicit and testable. Further, the strategic hypotheses require identifying the activities that are the drivers (or lead indicators) of the desired outcomes (lag indicators).”

“Bringing Science to the Art of Strategy” expands on this notion, detailing a methodology for hypothesis-driven strategy development. The authors’ point of departure is that traditional strategic planning is broken, and actually perpetuates the status quo rather than producing new strategies. The reason? Conventional strategic planning is not scientific, lacking rigorous hypotheses and tests of same. “It is as though modern strategic planning decided to be scientific but then chopped off essential elements of science.”

The 7-step approach the authors propose starts with the “recognition that the organization must make a choice and that the choice has consequences.” In response to the choice comes formulation of two or more possibilities, or well-specified hypotheses, asking what must be true for them to be credible. “The need for a choice drives consideration of the wide range of possibilities to consider.”  The status quo of no change should be among those possibilities.

As an illustration, the authors cite P&G’s 1990s contemplation of entry into the beauty care market, where it lacked a skin care brand. Strategic possibilities to meet the felt need included going up-market with Oil of Olay, purchasing Clinique from Estee Lauder, or extending its Cover Girl brand into skin care.

Once the possibilities have been delineated, the methodology specifies the conditions for success – what must be true for each possibility to be credible. This step does not discuss what is true, but rather what would have to be true for the possibility to be viable.

Step 4 identifies barriers to success, sorting conditions according to their likelihood. “The ideal output is an ordered list of barriers to each possibility,” several of which are cause for concern.
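The barrier-ordering step lends itself to a simple sketch: score each success condition by how likely it is to hold, then sort so the least likely conditions – the biggest barriers – come first. The condition names and likelihood estimates below are hypothetical, invented purely for illustration; only the possibility names echo the P&G example.

```python
# Sketch of step 4: rank each possibility's success conditions by the
# estimated likelihood they hold, least likely (biggest barriers) first.
# Condition names and probabilities are hypothetical illustrations.

possibilities = {
    "Olay masstige": {
        "consumers will pay a premium price": 0.4,
        "retailers will grant shelf space": 0.7,
    },
    "Acquire Clinique": {
        "Estee Lauder is willing to sell": 0.2,
        "deal closes at an acceptable price": 0.5,
    },
}

def ordered_barriers(conditions):
    """Return (condition, likelihood) pairs sorted least to most likely."""
    return sorted(conditions.items(), key=lambda kv: kv[1])

for name, conds in possibilities.items():
    ranked = ordered_barriers(conds)
    print(f"{name}: " + ", ".join(c for c, _ in ranked))
```

The output is the “ordered list of barriers to each possibility” the authors call for, with the most worrisome conditions surfaced at the top of each list.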

At this point in the process, BI and performance measurement come into play as tests are designed on the barrier conditions. Prospective tests might involve analysis of existing company data, the conduct of customer surveys, focus group inquiries or other methods. A critical success factor is that all involved “believe that the test is valid and can form the basis for rejecting the possibility in question or generating commitment to it.”

After the testing methodology is identified, the tests must, of course, be conducted. The authors recommend that the “testers” be from outside the strategy group – and focus on testing only, not on revisiting conditions. With the P&G beauty care example, “the most challenging condition for the Olay masstige possibility related to pricing.” Hypothesis-driven tests examined premium price points between $12.99 and $18.99 with very different – and unexpected – results: $12.99 was really good, $15.99 was mediocre, and $18.99 was great.

With the possibilities-based methodology, the final “choice-making step becomes simple, even anticlimactic. The group needs only to review the analytical test results and choose the possibility that faces the fewest serious barriers.” In the P&G illustration, the company ended up following the methodology to its logical conclusion, “deciding to launch an upmarket product called Olay Total Effects for $18.99,” transforming a mass market product “into a prestige-like product line at a price point close to that of department store brands.”
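The anticlimactic choice step can be sketched the same way: once each possibility’s conditions have been tested, count the failed tests – the serious barriers that remain – and pick the possibility with the fewest. The possibility names loosely follow the P&G example, but the pass/fail outcomes below are invented for illustration, not the article’s actual results.

```python
# Sketch of the final choice step: each possibility carries the pass/fail
# outcome of its condition tests; choose the one with the fewest failures.
# The True/False outcomes here are hypothetical, not from the article.

test_results = {
    "Olay masstige":       {"premium pricing": True,  "retail support": True},
    "Acquire Clinique":    {"seller willing": False,  "price acceptable": False},
    "Cover Girl skin care": {"brand stretch": False,  "retail support": True},
}

def serious_barriers(results):
    """Count conditions whose test failed -- the barriers still standing."""
    return sum(1 for passed in results.values() if not passed)

def choose(candidates):
    """Return the possibility facing the fewest serious barriers."""
    return min(candidates, key=lambda p: serious_barriers(candidates[p]))

print(choose(test_results))  # -> "Olay masstige" (zero failed tests)
```

Under these invented outcomes the Olay masstige option wins with no failed tests, mirroring the article’s logical conclusion.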

I like the possibilities-based strategy development methodology. It’s sensible to me, making strategy seem less artsy and more scientific – evidence-based. The connections to business intelligence/performance management are strong, especially with the data-driven design and conduct of condition tests. At the end of the day, I guess what I like most is that in the experts-versus-analytics continuum, possibilities-based strategy pushes the needle towards the latter!