I regularly follow several social science blogs for new ideas on analytics design, statistical methods, advanced visualization and other quantitative goodies.

Two that I’d especially recommend to IM readers are Andrew Gelman’s Statistical Modeling, Causal Inference and Social Science, and the Social Science Statistics Blog, from the Institute for Quantitative Social Science at Harvard University. Business is, after all, a social science, and BI analysts can learn a lot from the vanguard work of applied academia. In 2011, I’ll periodically report on postings from these sites I find of special interest to IM readers, hoping to add value by making the material relevant for BI.

I recently came across an SSSB post that referenced a paper I couldn’t resist: “Seven Deadly Sins of Contemporary Quantitative Political Analysis,” by political scientist Philip Schrodt. I guess I’m a sucker for critiques of contemporary statistical practice, having written a number of blog posts on the topic over the years. I never tire of recommending “Statistical Modeling: The Two Cultures” by the late Leo Breiman, originator of the time-proven statistical learning technique random forests.

Schrodt’s paper is dense and rambling academese, but full of wisdom nonetheless. So I set out to distill the nuggets from the roughage.  As I progressed, I felt the way I do when I eat steamed crabs: I love the meat but dread the work to get it.

As with Breiman, Schrodt’s point of departure is current bad habits surrounding the application of linear regression models. Indeed, three of the deadly sins revolve around abuses of regression: kitchen-sink models that ignore the deleterious impact of strong relationships (collinearity) among predictors; a lack of understanding of what happens when model assumptions are violated (robustness); and the use of regression models to the exclusion of newer statistical learning techniques like support vector machines, decision trees and Breiman’s random forests.
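
To make the collinearity problem concrete, here’s a minimal sketch in R (the platform Schrodt himself praises later in the paper), using simulated data and invented variable names rather than anything from his paper:

set.seed(42)                                  # simulated, illustrative data only
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.05)                # x2 is nearly collinear with x1
y  <- 2 * x1 + rnorm(n)
summary(lm(y ~ x1))                           # x1 estimated near 2, with a tight standard error
summary(lm(y ~ x1 + x2))                      # kitchen-sink spec: estimates wobble, standard errors balloon
vif_x1 <- 1 / (1 - summary(lm(x1 ~ x2))$r.squared)  # a hand-rolled variance inflation factor
vif_x1                                        # values far above 10 flag serious collinearity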

Schrodt thinks, though, that linear regression is overburdened with even a few predictors. “With more than three independent variables, no one can do the careful data analysis to ensure that the model specification is accurate and that the assumptions fit as well as the researcher claims … Truly justifying, with careful data analysis, a specification with three explanatory variables is usually appropriately demanding – neither too easy nor too hard – for any single paper.”

In addition, Schrodt feels that today’s analysts over-interpret regression results, confusing observational controls with their more powerful experimental counterparts. The result is often statistical chicanery. “… [E]xcept in carefully randomized samples, and certainly not in populations and with sets of statistically independent variables, ‘controls’ merely serve to juggle the explained variance across often essentially random changes in the estimated parameter values. They are in no way equivalent to an experimental control. Yet too frequently these ‘control variables’ are thrown willy-nilly into an estimation with a sense that they are at worst harmless, and at best will prevent erroneous inferences. Nothing could be further from the truth.” My take: All the more reason to obsess over the design behind the analytics.
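
Schrodt’s point about “controls” juggling explained variance is easy to see by simulation. The sketch below is illustrative only, with made-up variable names, showing how the estimated “treatment” effect in observational data shifts with each specification:

set.seed(7)                                    # simulated observational data
n <- 500
confounder <- rnorm(n)                         # the lurking variable we never observe in practice
treat <- 0.8 * confounder + rnorm(n)
ctrl1 <- 0.9 * confounder + rnorm(n, sd = 0.3) # noisy proxies for the confounder
ctrl2 <- 0.9 * confounder + rnorm(n, sd = 0.3)
y <- confounder + 0.5 * treat + rnorm(n)
coef(lm(y ~ treat))["treat"]                   # no controls: estimate biased upward
coef(lm(y ~ treat + ctrl1))["treat"]           # one proxy control: the estimate moves
coef(lm(y ~ treat + ctrl1 + ctrl2))["treat"]   # another control: it moves again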

Like Breiman, Schrodt attributes much of the current regression malaise to the philosophical obsession with explanation over prediction, which he sees as a false and dangerous dichotomy. For Schrodt, you can’t explain what you can’t predict. Regression models, though, are popular with “theorists,” in part because they provide coefficients for the predictor variables, whose direction and strength are routinely used to accept or reject hypotheses. Contrast the purported explanatory power of linear models with “black box” statistical learning techniques that generally do a better job of prediction at the expense of interpretability.
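
Here is a toy comparison of the two cultures, assuming the randomForest package (the standard R implementation of Breiman’s algorithm) is installed from CRAN; the random train/test split of the built-in mtcars data is for illustration only:

library(randomForest)                          # assumes install.packages("randomForest") has been run
set.seed(3)
idx   <- sample(nrow(mtcars), 22)              # arbitrary train/test split of a toy data set
train <- mtcars[idx, ]
test  <- mtcars[-idx, ]
lin <- lm(mpg ~ ., data = train)               # interpretable coefficients, one per predictor
rf  <- randomForest(mpg ~ ., data = train)     # "black box" ensemble, no tidy coefficients
sqrt(mean((predict(lin, test) - test$mpg)^2))  # out-of-sample RMSE, linear model
sqrt(mean((predict(rf,  test) - test$mpg)^2))  # out-of-sample RMSE, random forest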

Overall, Schrodt sees an ominous trend towards complexity in statistical methods. “Not infrequently, complex methods whose assumptions are violated will perform worse than simpler methods with more robust assumptions … In the meantime, this bias towards complexity-for-the-sake-of-complexity (and tenure) has driven out more robust methods. If you can make a point with a simple difference of means test, I'll be decidedly more willing to believe your results because the t-test is robust and requires few ancillary assumptions.” I agree wholeheartedly, often eschewing statistical tests and using advanced visualization techniques alone to “prove” my assertions.
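
In that spirit, a difference-of-means test takes a couple of lines of base R; the groups below are simulated purely for illustration:

set.seed(1)
group_a <- rnorm(40, mean = 100, sd = 15)      # simulated measurements, group A
group_b <- rnorm(40, mean = 110, sd = 15)      # simulated measurements, group B
t.test(group_a, group_b)                       # Welch two-sample t-test by default; no equal-variance assumption
boxplot(list(A = group_a, B = group_b))        # often the picture alone makes the case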

Indeed, like growing numbers of statisticians, Schrodt’s not bullish on the state of frequentism, the currently accepted statistical orthodoxy, noting “incompatibilities between the hypothetico-deductive method and the frequentist statistical paradigm within which these linear models are embedded … It is not even logically consistent, suggesting that one of the reasons our students have so much difficulty making sense of it is that in fact it doesn’t make sense … The pathologies resulting from frequentism applied outside the rarified domain in which it was originally developed – induction from random samples – are legion and constitute a sizable body of statistical literature.” Ouch. At least my trusty random experiments and surveys appear safe!

Finally, according to Schrodt, the ascent of computational statistics using computer power and Monte Carlo techniques has hastened the demise of frequentism’s sacred cow: significance testing. “The ease of exploratory statistical computation has rendered the traditional frequentist significance test all but meaningless. Alternative models can now be tested with a few clicks of a mouse and a micro-second of computation (or, for the clever, thousands of models can be assessed with a few lines of programming).” I’m on the computational wagon as well, often using simulation techniques to solve probability and statistics problems without even consulting the “book.”
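
As a small example of that computational habit (mine, not anything from Schrodt’s paper), here is the classic birthday problem answered by brute-force simulation in R rather than by formula:

set.seed(2)
rooms  <- 100000                               # number of simulated rooms of 25 people
shared <- replicate(rooms, any(duplicated(sample(1:365, 25, replace = TRUE))))
mean(shared)                                   # simulation estimate, about 0.57; the analytic answer is roughly 0.569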

So what can be done to redress the current problems with the mainstream? Schrodt thinks that both statistics and the philosophy underpinning science must change simultaneously. And the statistical winner? None other than Bayesian induction. What is needed is a “better philosophy of statistical inference based on current Bayesian practice and incorporating this, rather than mid-20th century frequentism and logical positivism, into our methodological pedagogy … The way out is a combination of renewing interest in the logical positivist agenda, with suitable updating for 21st century understandings of stochastic approaches…. The quantitative community does, however, provide quite unambiguously the Bayesian alternative to frequentism, which in turn solves most of the current contradictions in frequentism … In short, we may be in a swamp at the moment, but the way out is relatively clear.” Sensible, if provocative.
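
For a taste of what that Bayesian alternative looks like in practice, here is a deliberately simple, illustrative-only conjugate update in R, a beta-binomial model with hypothetical counts and no significance test in sight:

prior_a <- 1; prior_b <- 1               # flat Beta(1, 1) prior on an unknown proportion
successes <- 37; trials <- 50            # hypothetical observed data
post_a <- prior_a + successes            # conjugate update: the posterior is Beta(post_a, post_b)
post_b <- prior_b + trials - successes
post_a / (post_a + post_b)               # posterior mean of the proportion
qbeta(c(0.025, 0.975), post_a, post_b)   # 95% credible interval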

Interestingly, Schrodt sees the ascendance of the open source R Project for Statistical Computing, with its army of enthusiastic worldwide contributors, as a vehicle to expand the frontier of statistical science. “I am actually somewhat optimistic, and much of that optimism can be summed up in a single letter: R … The open-source R has promoted the rapid diffusion of new methods: We are rapidly reaching a point where any new published statistical method will be accompanied by an implementation in R, at least available from the author, and if the method attains any sort of acceptance, from CRAN.”  

These sure are exciting times in the world of statistics. What a change from my bland grad school days of 30 years ago, when statistical computing was in its infancy and Bayes was persona non grata! For readers with a philosophy-of-science bent who’d like to understand the Bayesian alternative, I’d strongly recommend Colin Howson and Peter Urbach’s excellent book “Scientific Reasoning: The Bayesian Approach.”
