One of the most striking things about attending the annual meeting of the American Economic Association after a long absence is that economics is now really all about the data. Older theorists such as Eric Maskin (who won the 2007 economics Nobel), Jean Tirole (the 2014 Nobelist) and Bengt Holmstrom were still accorded prominent roles as luncheon speakers at this week's gathering in San Francisco, but in the sessions where actual research was being presented most of the activity and excitement surrounded empirical work.
Daniel S. Hamermesh of the University of Texas documented this shift in a 2013 article in the Journal of Economic Literature. In 1963, 1973 and 1983, the majority of the articles published in the American Economic Review, Journal of Political Economy and Quarterly Journal of Economics, three of the field's most influential journals, were works of theory -- with theory's dominance peaking in 1983. By 2011, theory's share was down to 27.9 percent.
One cause seems pretty clear. The biggest shift toward empirical work occurred between 1983 and 1993, the same decade in which personal computers became commonplace. That made crunching data much easier for economics professors; the subsequent rise of the Internet and digitization of much that was once analog in the economy opened up a huge new array of data for them to crunch.
Hamermesh also tracked what kind of data the empiricists were using. In his taxonomy, borrowed data means “ready-made … government-provided … macroeconomic time series or … large household surveys,” while own data means that the authors of the article created the data set -- even if the source was government records, as it was with Thomas Piketty and Emmanuel Saez’s famous work on top incomes. The continued rise in empirical research since 1993 has been entirely in this latter category. Economics has also seen the advent of experimental work, most of it taking place in campus "labs" where students and other subjects participate in market-related games and exercises.
Disillusionment with theory has also been an issue. From the late 1930s through the 1970s, economics was full of excitement about grand mathematical models that seemed to explain everything about the world. Then some things happened that the grand models -- particularly the macroeconomic ones -- didn't explain very well, while a new generation of theorists took things in increasingly narrow and convoluted directions. The goal was often to make the theories more realistic, but the result, as Hamermesh puts it, was that:
Economic theory may have become so abstruse that editors of the leading general journals, recognizing that very few of their readers could comprehend the theory, have cut back on publishing work of this type.
Piketty, who was a promising young theorist at the Massachusetts Institute of Technology in the early 1990s, wrote in the introduction to "Capital in the Twenty-First Century" that he decided to move back to France in part because economists are less respected there and thus must "set aside their contempt for other disciplines and their absurd claim to greater scientific legitimacy, despite the fact that they know almost nothing about anything.” Then he went looking for some data to crunch.
Now that's what all the cool economics kids are doing. Here's a sample session title from this week: “Data Gold! Exploiting the Rich Research Potential of Lifetime Administrative Earnings Data Linked to the Census Bureau's Household SIPP Survey.” I actually didn't attend that one (it seemed a little too technical), but the most entertaining presentation I did see involved Philippe Aghion of the Collège de France bounding about the front of a crowded hotel ballroom, explaining how he and two co-authors had linked data from the European Patent Office, the Finnish statistical agency and the Finnish military to study whether people with high IQs invented more things and made more money than others. They did, even controlling for parental education and income. The meritocracy reigns! In Finland, at least.
My Bloomberg View colleague Noah Smith, who happens to be an academic economist, has written favorably about this shift to empiricism, although he has also raised some caveats on his personal blog. For a non-economist like me it all seems pretty refreshing. Theory is inevitably a bit arrogant; exploring the data isn't.
Still, the data can't tell us everything. Economics in the U.S. had an earlier empirical heyday in the 1920s and 1930s, led by Wesley Clair Mitchell, a Columbia University economist and co-founder of the National Bureau of Economic Research. It was the NBER that pioneered the systematic collection of macroeconomic data in the U.S., and Mitchell believed that if he could only gather enough data the secrets of the economy -- and in particular the business cycle -- would organically reveal themselves.
They didn't, and Mitchell was mostly flummoxed by the Great Depression. In a famous takedown of a 1946 book on business cycles by Mitchell and his successor as head of NBER, Arthur F. Burns (who went on to be a markedly unsuccessful Federal Reserve chairman in the 1970s), physicist-turned-economic-theorist Tjalling Koopmans complained that:
The movements of economic variables are studied as if they were the eruptions of a mysterious volcano whose boiling caldron [sic] can never be penetrated.
Today's economic empiricists aren't nearly that theory-shy. I heard lots of potential explanations this week for the trends and correlations found in the data. But they were usually offered in tentative tones. Thanks to the empirical boom, economists know more than ever before. But they seem to be learning that they are still awfully far from knowing everything.