I did a quick tally of my IM articles for the past 15 months and determined that over 75% were devoted in some fashion to the science of business, many to predictive analytics, methodology and luck. More of the methodology-focused columns have dealt with randomized experiments than with non-experimental designs. Randomized experiments are, of course, the platinum standard for evaluating competing scientific hypotheses. Implemented properly, they inspire confidence that, aside from statistical chance, observed differences in measurement are due to factors under the control of the experimenter and not to confounding outside variables. The digital economy is a veritable playground for industrious “searchers” to use randomized experiments for developing and testing hypotheses on how businesses operate, in the process determining “optimal” factors for performance.
Alas, much of performance-measuring BI has to make do with less than full experimental control. Unlike with website visitors, it's often impractical to assign customers, offers or strategies randomly in bricks-and-mortar environments. Without such a rigorous design, BI must be prepared to refute alternative explanations for its performance findings. Perhaps a performance gain is the result of a rising sector tide rather than company-specific activities. BI analysts needn't despair, though, since they can draw on non-randomized or “quasi-experimental” designs almost as powerful as experiments to help determine cause and effect of company performance.
For those interested in the wealth of experimental and quasi-experimental designs and able to make the leap from psychology/education applications to business, I recommend Shadish, Cook and Campbell's comprehensive text, Experimental and Quasi-Experimental Designs for Generalized Causal Inference. The authors exhaustively explain the critical concepts of both internal and external validity, enumerating the many threats of concern to analysts. They then detail a host of experimental and quasi-experimental designs along with their strengths and weaknesses – the “threats” to their validity. Among the many useful quasi-experimental designs for business are simple pre-test/intervention/post-test, pre-test/intervention/post-test with control group, replication, reversed-treatment and cohort designs, and regression discontinuity.
But perhaps no designs are more natural for BI and performance measurement than the many variants of interrupted time series (ITS). The simplest ITS consists of a sequence of observations on a single group, followed by an intervention, and then additional measurements. Schematically, the simple ITS looks as follows:
O O O O O O X O O O O,
where O represents an observation or measurement, and X the intervention. ITS is generally much more powerful than the simple pre-test/intervention/post-test design, since the major plausible alternative explanation of history – the possibility that forces other than the treatment influenced the dependent variable – is diminished by the multiple measurements pre- and post-intervention.
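To make the schematic concrete, here's a minimal sketch of how an analyst might fit a simple ITS with segmented regression. The data are simulated, and every number – the intervention point, the effect size, the noise level – is an illustrative assumption, not a real figure:

```python
import numpy as np

# Simple interrupted time series: 6 observations, an intervention,
# then 5 more observations (O O O O O O X O O O O O, roughly).
rng = np.random.default_rng(42)
n_pre, n_post = 6, 5
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(float)      # indicator: after the intervention

# Simulated truth: baseline level 10, level jump of +5 at the intervention
y = 10 + 5 * post + rng.normal(0, 0.5, t.size)

# Segmented regression: y = b0 + b1*t + b2*post + b3*(t - n_pre)*post
# b2 estimates the level change at the intervention; b3 any slope change.
X = np.column_stack([np.ones_like(t, dtype=float),
                     t.astype(float),
                     post,
                     (t - n_pre) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated level change at intervention: {beta[2]:.2f}")
```

Because the model includes a pre-intervention trend term, a steady secular drift in the series is absorbed by b1 rather than mistaken for a treatment effect – the essence of the history protection described above.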
And the countless variations of simple ITS add even more protection against alternative explanations. Variant one includes a “control” group that doesn't get a treatment but is observed nonetheless:
O O O O O X O O O O O O
O O O O O O O O O O O.
Another possibility is a sequence of different interventions over time preceded and followed by measurements:
O O O O X1 O O O O O X2 O O O.
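The value of the control series in variant one is that a shared shock – the rising sector tide – moves both groups, so the treatment effect shows up as a change in the gap between them. A simulated sketch of that logic, using a simple difference-in-differences on the two series (all figures are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(12)
n_pre = 5                              # intervention after the 5th observation

common = 0.3 * t                       # shared "history": sector-wide trend
effect = np.where(t >= n_pre, 4.0, 0.0)  # treatment effect, treated group only

treated = 20 + common + effect + rng.normal(0, 0.5, t.size)
control = 20 + common + rng.normal(0, 0.5, t.size)

# Difference-in-differences: the sector trend cancels in the gap,
# leaving the treatment effect as the pre/post change in that gap.
gap = treated - control
did = gap[t >= n_pre].mean() - gap[t < n_pre].mean()
print(f"estimated treatment effect: {did:.2f}")
```

Note how the 0.3-per-period trend, which would bias a naive pre/post comparison on the treated series alone, drops out entirely when the control series is subtracted.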
ITS is a “natural” design, easy to deploy in the business world. Indeed, most BI is based on time series measurement, though many companies fail to exploit the multiple data points to aid interpretation.
An interesting illustration of ITS at work in the business world comes from an April 27, 2010 WSJ article, Home Care Yields Medicare Bounty. The article tells the story of the rapid growth of the home-health services business over the last 13 years fueled by evolving Medicare payment policies. Using time series data available from Medicare, analysts tracked changes in home-health utilization surrounding three different programs of reimbursement starting in 1997. The following is a schematic of the business time line detailed in the article, the X's representing different Medicare payment regulations, the O's measurements of home-health utilization and payment:
X1 O O O O X2 O O O O O O O O X3 O O O.
X1 – Medicare spending on home-health services dropped from $17B to $9B between 1997 and 2000.
X2 – In 2000, Medicare instituted a new, more salutary home-health reimbursement policy: $2200 per year per patient for 9 visits, with a bonus payment of $2200 kicking in at 10 or more visits.
X3 – In 2007, Medicare revised its policy to control outlays, eliminating the $2200 bonus payment at 10 visits, replacing it with smaller (several hundred dollar) bonuses at visit thresholds of 6, 14 and 20.
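The incentive cliffs in the X2 and X3 regimes can be sketched as toy payment functions. The $2,200 figures come from the article; the X3 bonus amounts are placeholders (I've assumed $300 per threshold), since the article says only “several hundred dollars”:

```python
# Toy model of the two reimbursement regimes described above, to show
# the incentive cliffs at the visit thresholds. Base amounts are from
# the article; the X3 bonus of $300 per threshold is a placeholder.

def payment_x2(visits: int) -> int:
    """2000 policy: $2,200 base, plus a $2,200 bonus at 10 or more visits."""
    return 2200 + (2200 if visits >= 10 else 0)

def payment_x3(visits: int) -> int:
    """Revised policy: smaller bonuses at the 6-, 14- and 20-visit thresholds."""
    bonus = sum(300 for threshold in (6, 14, 20) if visits >= threshold)
    return 2200 + bonus

# The X2 cliff: one extra visit doubles the payment
print(payment_x2(9), payment_x2(10))   # 2200 4400
```

Under X2, the entire financial incentive concentrates at the 10th visit; under X3 it is spread across three smaller steps – which is exactly the shift the utilization data below reflect.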
Subsequent analyses by the WSJ and a Yale professor of public health confirmed that home-health agencies were quick to adapt to the economics surrounding the Medicare payment changes. Among the analysts' findings:
After X1, one third of home-health agencies went out of business, with the industry leader's losses growing to $25M in 1998 from $1M in 1997. Home health was not the place to be between 1997 and 2000.
The X2 changes in payment policy of 2000 initiated a period of prosperity and rapid growth in home-health services. The industry leader's annual revenues grew from $88M to $1.5B by 2009, in turn making the company a Wall Street favorite. There are now over 10,000 home-health agencies, a figure up 50% from 2002.
Agencies appear to have responded to the largesse of 2000. One large provider reported just 3% of patients with 9 visits in 2005-2007, in contrast to 10% with exactly 10 visits. That same company reported 28.5% of patients receiving 10-12 visits in 2007, triggering the additional $2,200 payment – and big profits.
X3 starting in 2008 seems to have changed the behavior of home-health agencies as well. And it was at this point the Medicare auditing agency began noticing the questionable utilization patterns. “In its March report, the agency said that the industry-wide percentage of therapy visits in the 10-to-13 range dropped by about a third after the policy change in 2008.”
The number of patients getting precisely 10 visits declined by 64% from 2007 to 2008 for one prominent agency; the drop was 39% for a second agency and 27% for a third. The clustering of visits/patient also changed, with noticeable increases at the 6, 14 and 20 breakpoints. The large agency's percentage of patients with 14 visits rose 33%, while the percentage getting 20 increased 41%. It seems the utilization patterns quickly adjusted around visit parameters that optimized buck for the bang. Providers, of course, argue that what was optimized was the care delivered for their specific patient mix, with no consideration of payment. Go figure.
I wish I had a complete data set of home-health utilization and payments from 1997-2010 to graphically correlate provider care changes with deltas in Medicare payment policies. Even the partial data, however, suggest the health care system is quick to respond to changes in government payment programs. And, not surprisingly, the deltas are in the direction of optimizing individual provider economics. Similar deployment of ITS thinking will surely yield benefit for BI as well.
Steve also blogs at Miller.OpenBI.com.