The halo effect is a psychological bias in which evaluations of some traits of a person or object are generalized to other traits, even unrelated ones: judging attractive people as intelligent, for example. The bias was named by psychologist Edward Thorndike after his 1920 study, in which he found high correlations among commanding officers' evaluations of different personality traits of their soldiers. Those evaluated positively tended to receive high ratings on all traits; negative assessments showed the same consistency.
Rosenzweig uses the halo effect as a metaphor for the “business research” that underpins bestsellers such as “In Search of Excellence,” “Built to Last,” and “Good to Great.” Methodologically, that research is generally flawed, often relying on weak “case-control” designs without the controls: it examines successful companies at the points in time of their best performance to ferret out why they're successful. The top outcomes of those companies are then “haloed” into timeless management principles and traits those companies supposedly manifest. Ironically, regression to the mean in performance almost always catches up with the “winners.” Indeed, Campbell Soup has appeared in this “research” as both a winner and a loser!
In good times, Swedish industrial company ABB “was celebrated as bold and daring. Action had been preferred to lengthy analysis – the willingness to act was a reason for ABB's success. But once growth stalled and U.S. asbestos claims mounted, ABB's ambitious growth strategy was perceived differently. Now ABB was described as being both impulsive and foolish. In 2003, ABB's chairman, Jürgen Dormann, remembered: 'We had a lack of focus as Percy went on an acquisition spree. The company wasn't disciplined enough.'” Rosenzweig surmises: “So many of the things that we – managers, journalists, professors, and consultants – commonly think contribute to company performance are often attributions based on company performance.” And so in a halo-effect world, the same traits seen as positive when profits are high become negative when they disappear.
While the research behind the management classics is flawed, the haloed thinking that drives it makes for good stories that inspire readers. Rosenzweig quotes the late evolutionary biologist Stephen Jay Gould: “We are story-telling creatures, products of history ourselves. We are fascinated by trends, in part because they tell stories by the device of imparting directionality in time … But our strong sense to identify trends often leads us to detect a directionality that doesn't exist, or to infer causes that cannot be sustained.”
Indeed, Rosenzweig describes “Good to Great” as “the best story of them all … It speaks of Hedgehogs and Foxes. It speaks of Flywheels and Doom Loops – we can almost feel the momentum as companies surge upward to glory or spiral downward to their death … Why, even the title ‘Good to Great’ is an echo of Rags to Riches. No wonder the book has been such a hit – it speaks to a perennial dream and appeals to our deepest fantasies … In Mr. Collins's neighborhood, the simple story with the positive message is paramount. And if that story can be made to look like rigorous science, well, it would be all the more convincing.”
Alas, the real “science of business” often makes for less than riveting stories. One rigorous study of the relationship between CEO managerial style and company performance cited by the author identified controllable factors that, other things equal, could elevate performance by 4 percent. A second methodologically sound analysis that actually matched performance leaders with laggards – a real case-control design – found specific management practices responsible for 10 percent of the variation in firm performance, a very significant result.
Yet the fact that these findings are statistical rather than formulaic makes them less attractive to many writers and readers. “The result, according to James March and Bob Sutton of Stanford University, is that studies of organizational performance stand in two very different worlds. The first world speaks to practicing managers and rewards speculations on how to improve performance. Here we find studies whose main desire is to inspire and comfort. The second world demands and rewards adherence to rigorous standards of scholarship. Here, science is paramount, storytelling less so.”
An interesting recent blog on NYTimes.com sheds additional light on the stories versus statistics dialectic. “There is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled.”
The author then ties an individual's stance on stories versus statistics to his or her relative concern about the two types of statistical error: Type I, rejecting the no-relationship (null) hypothesis when in fact no relationship exists, and Type II, failing to detect a relationship that's actually there. It seems “story” people are more concerned about Type II errors, while statistics advocates obsess over Type I. “People who love to be entertained and beguiled or who particularly wish to avoid making a Type II error might be more apt to prefer stories to statistics. Those who don’t particularly like being entertained or beguiled or who fear the prospect of making a Type I error might be more apt to prefer statistics to stories.”
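The trade-off between the two error types is easy to see in simulation. The sketch below is my own illustration, not from Rosenzweig or the blog: when no true effect exists, any rejection of the null hypothesis is a Type I error; when a real effect exists, any failure to reject is a Type II error. The sample sizes, effect size, and threshold are arbitrary choices for the demonstration.

```python
import random
import statistics

random.seed(42)  # reproducible runs

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def rejection_rate(effect, trials=2000, n=30, threshold=1.96):
    """Fraction of trials in which |t| exceeds the threshold,
    i.e. the 'no relationship' null hypothesis is rejected."""
    rejections = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(effect, 1) for _ in range(n)]
        if abs(welch_t(a, b)) > threshold:
            rejections += 1
    return rejections / trials

# No true effect: every rejection is a Type I error (should be near 5%).
type1 = rejection_rate(effect=0.0)
# Real effect of half a standard deviation: failures to reject are Type II errors.
power = rejection_rate(effect=0.5)
type2 = 1 - power
print(f"Type I rate:  {type1:.3f}")
print(f"Type II rate: {type2:.3f}")
```

Lowering the threshold catches more real effects (fewer Type II errors) at the cost of more false positives (more Type I errors), which is exactly the tension between the "story" and "statistics" temperaments the author describes.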
He further points to what he calls “the most fundamental tension between stories and statistics. The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, context rather than raw data. Moreover, stories are open-ended and metaphorical rather than determinate and literal.” His conclusion? “At different times and places most of us can, should, and do respond in both ways.”
I agree with the business strategy commenter who notes: “my job is all about striking the right balance between stories and statistics. be rigorous, have your hard data to back up your claims, but make your claims memorable and give them authenticity through story. even a software engineering division of 1000 people will never all remember a statistic to powerful effect and outcome. but they will remember a powerful story about a person.”
My “method”? Use stories on the front-end to generate research hypotheses and on the back-end to craft the final messages to my audience. In the middle, it's research methods and statistics to quantify, evaluate and modify the thinking.
What do readers think? Where do you stand on stories vs. statistics for BI?