I just got back from a vacation week at the outer banks of North Carolina – yet another great confluence of family, sun, beach, hard crabs, tuna sushi and hefeweizen.

The primary beach read this year was “Expert Political Judgment: How Good Is It? How Can We Know?” by University of California psychologist Philip Tetlock. Tetlock’s research revolves around developing standards for evaluating the forecasting ability of “experts,” and, in addition, exploring what constitutes good judgment and why experts often badly miss the accuracy mark.

The author’s major forecasting/judgment performance criteria are correspondence and coherence. Correspondence is about predictive accuracy, while coherence concerns thinking correctly and learning along the way (especially from mistakes).

Tetlock’s points of philosophical departure are the extreme “radical skepticism” vantage in contrast to more conciliatory “meliorist” thinking. For radical skeptics, forecasting skill is equated with luck. “Their guiding precept is that, although we often talk ourselves into believing we live in a predictable world, we delude ourselves; history is ultimately one damned thing after another, a random walk with upward and downward blips …” Meliorists, on the other hand, maintain that the quest to forecast the future is a noble one, and that “there are better and worse ways of thinking that translate into better and worse judgments.”

Come to find out Tetlock’s a meliorist. He acknowledges as much, though, only after conceding that skeptics may be on to something. In his research data, which include a total of 27,451 ex-ante forecasts on political and economic events by both experts and dilettantes, “Who experts were – professional background, status and so on – made scarcely an iota of difference to accuracy. Nor did what experts thought – whether they were liberals or conservatives, realists or institutionalists, optimists or pessimists.” Pretty discouraging, it would seem.

Tetlock’s search for correlates of forecasting acumen did bear fruit, however, when he shifted from what experts thought to how they thought. Using a multivariate statistical technique called factor analysis on a 13-item questionnaire designed to determine cognitive style, he found variable “loadings” suggesting a resemblance to the distinctions between the protagonists of Isaiah Berlin’s famous essay “The Hedgehog and the Fox.”

Hedgehogs view the world through the lens of one defining idea while foxes draw on a wide variety of experiences that don’t trace to a single concept. In Tetlock’s research, “Low scorers look like hedgehogs: thinkers who ‘know one big thing’, aggressively extend the explanatory reach of that one big thing … and express considerable confidence that they are already pretty proficient forecasters … High scorers look like foxes: thinkers who know many small things, are skeptical of grand schemes … and are rather diffident about their own forecasting prowess.” In short, “the intellectually aggressive hedgehogs knew one big thing and sought, under the banner of parsimony, to expand the power of that big thing to ‘cover’ new cases; the more eclectic foxes knew many little things and were content to improvise ad hoc solutions to keep pace with a rapidly changing world.”

For Tetlock, the hedgehog-fox cognitive style dimension accomplished what none of the professional background and political orientation measures could: discriminating superior forecasting records. “On the two most basic measures of accuracy – calibration and discrimination – foxes dominate hedgehogs.”
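For readers curious about what those two scores actually measure, here is a minimal Python sketch – my own illustration, not Tetlock’s procedure. Calibration asks how closely a forecaster’s stated probabilities match observed frequencies (lower gap is better); discrimination asks how well the forecasts sort events from non-events (higher is better):

```python
def calibration_and_discrimination(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 0/1 results."""
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    # Group forecasts into probability bins (here, rounded to one decimal).
    bins = {}
    for p, o in zip(forecasts, outcomes):
        bins.setdefault(round(p, 1), []).append(o)
    # Calibration: weighted squared gap between each stated probability
    # and the frequency actually observed in that bin (lower is better).
    cal = sum(len(os) * (p - sum(os) / len(os)) ** 2
              for p, os in bins.items()) / n
    # Discrimination: weighted variance of bin frequencies around the
    # base rate -- how far the forecasts pull events apart (higher is better).
    dis = sum(len(os) * (sum(os) / len(os) - base_rate) ** 2
              for p, os in bins.items()) / n
    return cal, dis
```

On this accounting, the fox advantage Tetlock reports shows up as smaller calibration gaps and larger discrimination scores.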

Evaluation of forecasting prowess doesn’t stop at accuracy or correspondence, however; it extends as well to the coherence of the judgment process and to learning over time. Rational forecasters should modify their “priors” in a “Bayesian” manner as they assimilate new information. “Good judges should be good hypotheses testers: they should update their beliefs in response to new evidence … And good judges should not be revisionist historians … and resist the temptation of hindsight or ‘I knew it all along’ bias.” They should, in sum, be scientific learners in their forecasting. In Tetlock’s tests of these axioms, foxes again outperform: in the presence of new information, they adjust their “reputational bets” to align much more closely with Bayes’ rule than hedgehogs do. “Foxes Are Better Bayesians Than Hedgehogs.”
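The Bayesian updating Tetlock has in mind fits in a few lines. This is a generic illustration with invented numbers, not his reputational-bet procedure:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior belief in a hypothesis after one piece of evidence."""
    numer = prior * p_evidence_if_true
    return numer / (numer + (1 - prior) * p_evidence_if_false)

# A forecaster 90% sure of a thesis observes evidence twice as likely
# if the thesis is false (0.4) as if it is true (0.2):
posterior = bayes_update(0.9, 0.2, 0.4)  # about 0.82
```

Note that even unfavorable evidence only nudges a strong prior. Tetlock’s finding is that hedgehogs move even less than the rule prescribes, while foxes track the prescribed adjustment much more closely.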

Fascinating stuff. Questions that come to mind after reading this book include what other business attributes might be separated along the hedgehog-fox cognitive style dimension, and what these differences imply for the conduct of business. Can money be made on the knowledge that foxes are better prognosticators than hedgehogs? Next week I’ll continue the discussion, adding to the metaphor the planning/searching dialectic of development economist William Easterly.