In Part 1 of “The Science of Business Manifesto,” I discussed at a high level using the long-established scientific method of observe, hypothesize, predict and test as a foundation for performance management and BI. In Part 2, I delve into a bit more detail, paying special attention to the transition from business to science – from strategy/hypothesis to test/evaluation.

For me, the most significant challenge in executing the science of business in practice is translating the language of business into the language of science. This is where Kaplan and Norton's Balanced Scorecard technique of formulating strategy maps comes in handy. The strategy map provides “a framework for describing and managing strategy in a knowledge economy … describes the process for transforming intangible assets into tangible customer and financial outcomes.”

According to Kaplan/Norton: “The Balanced Scorecard design process builds upon the premise of strategy as hypotheses. Strategy implies the movement of an organization from its present position to a desirable but uncertain future position. Because the organization has never been to this future position, its intended pathway involves a series of linked hypotheses to be described as a set of cause-and-effect relationships that are explicit and testable. Further, the strategic hypotheses require identifying the activities that are the drivers (or lead indicators) of the desired outcomes (lag indicators). The key for implementing strategy is to have everyone in the organization clearly understand the underlying hypotheses, to align resources with the hypotheses, to test the hypotheses continually, and to adapt as required in real time … With the Balanced Scorecard, the hypotheses underlying the strategy are made explicit through the strategy map's cause and effect linkages across the four perspectives. But hypotheses are just assumptions about how the world works; they need to be continually tested for their validity and rejected when evidence accumulates that expected linkages are not occurring. So the first task in strategy adaptation is testing the underlying hypotheses.” Thanks, K&N, for doing my explaining for me!

Bikard and Eesley reinforce the importance of hypothesis testing: “The ability to formulate hypotheses based on explicit assumptions, to test them, and to modify them over time is a major source of success … In what ways do entrepreneurs test their hypotheses? Scientists and engineers are well versed in the methods that can be used to test the product and other technical sides of the business. But how can entrepreneurs test other aspects of venture creation such as the market and customer demand? Mark Pincus, founder of online gaming giant Zynga, noted in a recent speech at Stanford University that his company experiments relentlessly and tests many new ideas before expending significant resources on them. It will even create pages on the site advertising games that do not exist yet and then track the clicks on those links to prioritize development. Pincus says that the biggest mistake that he made in his previous venture was only trying one experiment.”
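
Pincus's “fake door” test lends itself to a tiny sketch. Here is a minimal illustration in Python – the click log and game names are made up for the example, and nothing here reflects Zynga's actual stack – ranking not-yet-built games by the demand their teaser pages attract:

```python
from collections import Counter

# Hypothetical click log: one entry per click on a teaser page for a
# game that has not been built yet.
click_log = ["farm_sim", "word_duel", "farm_sim", "city_builder",
             "farm_sim", "word_duel", "city_builder", "farm_sim"]

# Rank candidate games by demonstrated demand; develop the leader first.
for game, clicks in Counter(click_log).most_common():
    print(f"{game}: {clicks} clicks")
```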

A key first step for business intelligence is transitioning from the hypotheses implied by strategy maps to testable and measurable constructs of the form “the more of A we do, the more of B that will result – ultimately leading to more C.” Once hypotheses are operationalized, performance indicators can be formulated and mapped to existing data. Also central are the designs used to ensure that the relationships between A, B and C are causal: A causes B, which in turn causes C. Randomized experiments are the platinum standard for design, but other methods, such as a natural time series with a comparison group, can be effective where randomization isn't feasible. Critical “deliverables” are both well-specified hypotheses and rigorous designs for testing.
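
To make the comparison-group design concrete, here is a minimal difference-in-differences sketch, assuming hypothetical store groups and KPI values of my own invention; it shows only the arithmetic of netting the intervention group's change against the trend shared with the comparison group:

```python
import pandas as pd

# Hypothetical weekly KPI observations for two store groups, before and
# after an intervention applied only to the first group.
df = pd.DataFrame({
    "group":  ["intervention"] * 4 + ["comparison"] * 4,
    "period": ["before", "before", "after", "after"] * 2,
    "kpi":    [100, 102, 115, 118,    # intervention stores
               101, 100, 104, 103],   # comparison stores
})

means = df.groupby(["group", "period"])["kpi"].mean()

# Change in each group from before to after.
delta_intervention = means["intervention", "after"] - means["intervention", "before"]
delta_comparison = means["comparison", "after"] - means["comparison", "before"]

# Difference-in-differences: the intervention's apparent effect, net of
# the trend shared with the comparison group.
effect = delta_intervention - delta_comparison
print(f"Estimated intervention effect on the KPI: {effect:.1f}")
```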

For author Ian Ayres, super-crunching is a combination of predictive modeling and the experimental method, fueled by the data deluge. In an interview three years ago, Ayres opined: “Predictive modeling, either statistical or machine learning, is the heart of Super Crunching. But you’re right, I am kind of obsessed with the power of randomized trials. With randomized experiments, businesses can test what causes what. By randomly assigning stores to two different groups, a business can powerfully estimate the impact of a policy.” From Ayres's vantage point, BI/analytics/experimentation provides the means to explore/test/revise business hypotheses.
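
Ayres's store experiment translates directly into code. A minimal sketch, with hypothetical stores and outcome numbers, of random assignment followed by a comparison of group means – the t-test is one reasonable way to gauge the evidence, not Ayres's prescription:

```python
import random
from scipy.stats import ttest_ind

random.seed(42)  # reproducible assignment for the illustration

# Hypothetical stores, randomly split into policy and control groups.
stores = [f"store_{i}" for i in range(20)]
random.shuffle(stores)
policy, control = stores[:10], stores[10:]

# After the trial period, collect the outcome metric per store
# (illustrative draws standing in for, say, weekly sales).
outcome = {s: random.gauss(105 if s in policy else 100, 5) for s in stores}

policy_vals = [outcome[s] for s in policy]
control_vals = [outcome[s] for s in control]

# Because assignment was random, the mean difference estimates the
# policy's causal impact; the t-test gauges its statistical strength.
impact = sum(policy_vals) / len(policy_vals) - sum(control_vals) / len(control_vals)
t_stat, p_value = ttest_ind(policy_vals, control_vals)
print(f"Estimated policy impact: {impact:.1f} (p = {p_value:.3f})")
```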

Tom Davenport and Jeanne Harris seem aligned with Ayres that the combination of predictive analytics and the experimental method is optimal for testing and revising business hypotheses. Their wonderful book, “Analytics at Work,” coins the clever acronym DELTA to describe the capabilities and assets needed as a foundation for supporting a science of business methodology. In the book, D stands for accessible, high-quality data; E represents an enterprise focus; L is enlightened analytic leadership; T represents strategic targets; and A denotes the analysts who power the initiative. Strategic targeting, which is all about finding analytics opportunities and setting the analytics ambition – generating hypotheses – is of special relevance to the science of business.

Davenport's under-the-radar but excellent HBR article “How to Design Smart Business Experiments” shares a wealth of practical experience on using the experimental method for testing hypotheses and evaluating business alternatives. Davenport's testing dogma:

  •  Create/refine hypotheses
  •  Design test
  •  Execute test
  •  Analyze test
  •  Plan rollout
  •  Roll out
  •  Document in a learning library (a sketch of such an entry follows this list)
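
The last step lends itself to a simple data structure. Here is a minimal sketch of what one learning-library entry might capture; the record type and its fields are my own hypothetical choices, not Davenport's:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExperimentRecord:
    hypothesis: str          # e.g., "the more A, the more B, leading to more C"
    design: str              # "randomized" or "quasi-experimental"
    kpi: str                 # the lag indicator that was measured
    estimated_effect: float  # measured impact on the KPI
    p_value: float           # strength of the evidence
    decision: str            # "roll out", "revise", or "abandon"
    run_date: date = field(default_factory=date.today)

# Accumulating records over time turns past tests into searchable knowledge.
learning_library: list[ExperimentRecord] = []
learning_library.append(ExperimentRecord(
    hypothesis="Targeted promotions raise repeat visits, lifting revenue",
    design="randomized",
    kpi="weekly repeat visits",
    estimated_effect=4.2,
    p_value=0.01,
    decision="roll out",
))
```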

Perhaps the most important lesson from this article is the criticality of test design – either experimental or quasi-experimental – to support BI measurement.

OpenBI uses a science of business methodology in many of its BI engagements, particularly those oriented to performance measurement/management. We've found the “strategy as hypotheses” approach to requirements gathering an effective means of communicating with business leaders, facilitating the translation of often arcane business concepts into BI and analytics. A secondary benefit is helping the business tighten its hypotheses about how it operates – or wants to operate – in turn encouraging the creation of precise KPIs for measurement. Finally, the thinking leads naturally to the use of data and analytics to evaluate business performance – to test and adapt the hypotheses. Our experience is that the business understands and enthusiastically takes to this approach to bridging the gap with technology/BI. Indeed, the ROI of BI as a facilitator for the science of business is generally a no-brainer, not the arduous exercise it often is when BI is incorrectly seen as an IT initiative.

In the end, “Hard Facts” authors Pfeffer and Sutton offer solid advice for those looking to adopt a scientific approach to running their businesses. Citing psychologist John Meacham, who notes that “the essence of wisdom is to hold the attitude that knowledge is fallible and to strive for a balance between knowing and doubting,” P&S propose that scientific leaders “treat their organization as an unfinished prototype … acting on what you know at a moment in time, based on the best available data you have, even as you try to create the conditions for learning more.”