I did a quick tally of my IM articles for the past 15 months and determined that over 75% were devoted in some fashion to the science of business, many to predictive analytics, methodology and luck. More of the methodology-focused columns have to do with randomized experiments than with non-experimental designs. Randomized experiments are, of course, the platinum standard for evaluating competing scientific hypotheses. Implemented properly, they inspire confidence that, aside from statistical chance, observed differences in measurement are due to factors under the control of the experimenter and not to confounding outside variables. The digital economy is a veritable playground for industrious “searchers” to use randomized experiments for developing and testing hypotheses on how businesses operate, in the process determining “optimal” factors for performance.
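For illustration only, here is a minimal Python sketch of the kind of randomized web experiment the column alludes to: visitors are randomly assigned to one of two offers, and the gap in conversion rates is checked with a two-proportion z-test. The offers, conversion rates and sample size are hypothetical, not taken from the article.

```python
import math
import random

# Hypothetical randomized experiment: each visitor is randomly assigned
# to offer A or offer B, and we record whether the visit converted.
random.seed(42)
visitors = ["A" if random.random() < 0.5 else "B" for _ in range(20000)]
# Assumed true conversion rates of 3.0% (A) and 3.6% (B), for simulation only.
converted = [random.random() < (0.030 if arm == "A" else 0.036) for arm in visitors]

def summarize(arm):
    n = sum(1 for a in visitors if a == arm)
    x = sum(1 for a, c in zip(visitors, converted) if a == arm and c)
    return n, x

n_a, x_a = summarize("A")
n_b, x_b = summarize("B")
p_a, p_b = x_a / n_a, x_b / n_b

# Two-proportion z-test: because assignment was random, any gap beyond
# statistical chance is attributable to the offer, not to confounders.
p_pool = (x_a + x_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"Offer A: {p_a:.3%}  Offer B: {p_b:.3%}  z = {z:.2f}  p = {p_value:.4f}")
```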

Alas, much of performance-measuring BI has to make do with less than full experimental control. Unlike website visitors, customers, offers and strategies in bricks-and-mortar environments often cannot be assigned at random. Without such a rigorous design, BI must be prepared to refute alternative explanations for its performance findings. Perhaps a performance gain is the result of a rising sector tide rather than company-specific activities. BI analysts needn't despair, though, since they can draw on non-randomized or “quasi-experimental” designs, almost as powerful as experiments, to help determine cause and effect in company performance.
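One quasi-experimental design suited to the “rising sector tide” objection, though not named explicitly in the column, is difference-in-differences: compare the company's before-and-after change with the same change for a sector benchmark that did not receive the initiative. A minimal Python sketch with hypothetical figures:

```python
# Hypothetical difference-in-differences: quarterly revenue growth (%) for the
# company versus a sector benchmark, before and after a new initiative.
company = {"pre": 2.1, "post": 5.4}   # assumed figures, for illustration only
sector  = {"pre": 1.9, "post": 3.8}

company_change = company["post"] - company["pre"]   # 3.3 points
sector_change  = sector["post"] - sector["pre"]     # 1.9 points

# The sector change estimates what would have happened anyway (the rising tide);
# the remainder is the gain attributable to company-specific activities.
did_estimate = company_change - sector_change
print(f"Company change: {company_change:.1f}  Sector change: {sector_change:.1f}  "
      f"DiD estimate: {did_estimate:.1f} points")
```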
