While visiting the website of Dartmouth financial economist Ken French a few weeks ago, gathering portfolio returns data for my Ruby Monday series of blogs, I wandered into the Fama/French Forum and found a most interesting essay titled Luck versus Skill in Mutual Fund Performance. The article is actually the dumbed-down version of French and University of Chicago colleague Gene Fama's arcane academic paper, Luck versus Skill in the Cross-Section of Mutual Fund Returns. I downloaded both but found the former much more comprehensible than the latter. As discussed in my blog several weeks ago, attribution of luck versus skill is central to portfolio performance measurement. I suggested then that luck-versus-skill allocation should be a primary focus of business performance measurement as well.

French and Fama's simple Luck versus Skill essay is actually quite accessible for those with a basic statistics background. Recall the discussion of the Capital Asset Pricing Model (CAPM) and its applicability to BI. With the CAPM, returns for portfolio(j) = alpha(j) + beta(j)*market returns. In the assessment of investment portfolio performance, alpha represents manager skill, controlling for risk and the overall direction of the market. It's widely used as a differentiating measure of portfolio performance. A manager with a monthly alpha of 0.2 is adding 12*0.2, or 2.4%, annual return above “expectation” for his investors; a manager with an alpha of -0.1 is subtracting 1.2%. Consistent positive alphas over time make for rock-star fund managers.
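
In the spirit of the Ruby Monday series, here's a minimal Ruby sketch of that CAPM arithmetic. The monthly fund and market numbers are made up for illustration, not French/Fama data: a simple one-factor least-squares fit yields beta and alpha, and the annual figure is just 12 times the monthly alpha, as above.

    # hypothetical monthly excess returns, in percent
    fund   = [1.2, -0.4, 2.1, 0.8, -1.5, 3.0]
    market = [1.0, -0.6, 1.8, 0.5, -1.9, 2.7]

    n      = fund.size.to_f
    mean_f = fund.sum / n
    mean_m = market.sum / n

    # least-squares slope (beta) and intercept (alpha) for fund = alpha + beta*market
    cov   = fund.zip(market).sum { |f, m| (f - mean_f) * (m - mean_m) } / n
    var   = market.sum { |m| (m - mean_m)**2 } / n
    beta  = cov / var
    alpha = mean_f - beta * mean_m

    puts "monthly alpha: #{alpha.round(3)}, beta: #{beta.round(3)}"
    puts "approximate annual alpha: #{(12 * alpha).round(2)}%"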

French and Fama expand the CAPM into what they call a three-factor model, adding two additional variables to the equation, so that:
returns for portfolio(j) = alpha(j) + beta(j)*market returns + value(j)*value returns + small(j)*small returns.
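
Under the hood this is just a multiple regression with an intercept, and the intercept of the least-squares fit is the fund's alpha. Here's a minimal Ruby sketch using the standard-library Matrix class; the portfolio and factor numbers are hypothetical, not figures from the paper.

    require 'matrix'

    # hypothetical monthly returns, in percent
    portfolio = [1.1, -0.3, 2.0, 0.7, -1.2, 2.6]
    # factor returns for the same months: [market, value, small]
    factors = [
      [ 1.0,  0.3,  0.5],
      [-0.6, -0.1, -0.4],
      [ 1.8,  0.6,  0.9],
      [ 0.5,  0.2,  0.1],
      [-1.9, -0.5, -0.8],
      [ 2.7,  0.8,  1.1]
    ]

    # design matrix with a leading column of ones for the intercept (alpha)
    x = Matrix.rows(factors.map { |row| [1.0] + row })
    y = Matrix.column_vector(portfolio)

    # ordinary least squares: b = (X'X)^-1 X'y
    b = (x.transpose * x).inverse * x.transpose * y
    alpha, market, value, small = (0..3).map { |i| b[i, 0] }

    puts "alpha: #{alpha.round(3)}"
    puts "loadings: market #{market.round(3)}, value #{value.round(3)}, small #{small.round(3)}"

    # R-squared: the share of month-to-month return variance the factors explain
    fitted = x * b
    mean_p = portfolio.sum / portfolio.size
    ss_res = portfolio.each_with_index.sum { |r, i| (r - fitted[i, 0])**2 }
    ss_tot = portfolio.sum { |r| (r - mean_p)**2 }
    puts "R-squared: #{(1 - ss_res / ss_tot).round(3)}"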

The new factors enhance the predictive power of their model over the CAPM, allowing for sharpened estimates of alpha as a measure of manager skill. Luck versus Skill then uses estimates from this model to assess the performance of the population of managed mutual funds from 1982 to 2006, looking to distinguish skill from market common causes. In addition, with a to-die-for sample size of 3,156 funds, the authors are able to investigate another important factor – luck. They deploy bootstrapping techniques similar to those I discussed in Ruby Monday, Part 3, to ultimately partition alpha into real skill versus random luck. In the end, they apportion individual portfolio performance to market risk/common causes, luck and manager skill. It'd sure be nice for BI's work to similarly discriminate the common cause, skill and luck components of business performance.

The results of the authors' extensive statistical analyses are, alas, not so flattering for the mutual fund industry. They distinguish passive funds, inexpensive portfolios that simply mimic a market index, from managed funds, where investors pay dearly for the “skill” of one or more managers. A first-cut three-factor linear regression model fit to the 3,156 managed mutual funds' data for 1982-2006, with fund expenses netted out of returns, reveals a negative alpha indicating a 0.81% annual shortfall for overall managed fund performance relative to passive market indexes. Moreover, that negative alpha is statistically significant, “which is rather strong evidence that active mutual funds as a whole provide returns to investors below those of an equivalent portfolio of the three passive benchmarks”. Worse, the aggregate managed fund portfolio behaves much like the passive market benchmark anyway, with the index explaining 99% of the variance in month-to-month returns. In sum, the additional expenses that customers incur for the expertise of fund managers seem to add nothing but fees for investors!

Even when expenses aren't subtracted, the results are less than inspiring. Without managed fund costs (the expense ratio) weighing down returns, alpha is just barely positive, implying a scant and statistically non-significant 0.13% annual boost over passive index funds. Hardly an advertisement for the value-add of managed funds.

That funds in the aggregate are not worth their costs, however, doesn't imply there are no top performers. It just means that for every winner – a fund with a positive alpha – there are one or more losers, funds that deliver less than expected. French and Fama take a hard look at these winning and losing alphas, attempting to distinguish real skill from the good fortune that some of the 3,156 funds experienced by chance alone. And they use our old friend the bootstrap as their trusty guide.

Considering the wealth of data at their disposal as a population, the authors resample 10,000 times under the null hypothesis that alpha is zero to see how sample alphas might be distributed if they were, in fact, all zero. Of course, with the resamples, alpha is known to be zero, so any “significance” – either good or bad – is nothing more than randomness. The findings from the bootstrap are then compared with actual alphas to apportion skill from luck. The authors expected that “the worst performing funds should perform worse than we expect just by chance if every fund has a true alpha of zero, and the best performing funds should perform better than we expect by chance.”
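
To make the resampling logic concrete, here's a stripped-down Ruby sketch of the zero-alpha bootstrap idea for a single hypothetical fund. It uses the simple average monthly excess return as a stand-in for the full three-factor alpha, a simplification of the authors' procedure, which bootstraps the whole cross-section of funds jointly; all numbers are invented.

    # hypothetical monthly returns above benchmark, in percent
    excess = [0.4, -0.2, 0.9, -0.5, 0.7, 0.1, -0.3, 0.6, 0.2, -0.1, 0.8, -0.4]

    actual_alpha = excess.sum / excess.size

    # impose the null hypothesis: shift the data so its true alpha is exactly zero
    null_world = excess.map { |r| r - actual_alpha }

    # resample months with replacement 10,000 times and record the alpha
    # that chance alone produces in a world where true alpha is zero
    draws = 10_000
    boot_alphas = Array.new(draws) do
      sample = Array.new(null_world.size) { null_world.sample }
      sample.sum / sample.size
    end

    # how often does pure luck match or beat the fund's actual alpha?
    p_luck = boot_alphas.count { |a| a >= actual_alpha }.to_f / draws
    puts "actual monthly alpha: #{actual_alpha.round(3)}"
    puts "share of zero-alpha bootstrap runs at least this good: #{p_luck.round(3)}"

If a large share of the zero-alpha runs matches or beats the fund's actual alpha, that seemingly impressive alpha looks more like luck than skill.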

Once again, unfortunately, the statistical findings were not salutary for the managed fund industry. The authors confirm that “poorly performing funds do worse than we expect if true alpha is zero for all funds,” implying there's no shortage of poor performers. On the other hand, “the seemingly impressive alpha estimates of most of the best performers are actually low relative to what one would get in a world where true alpha is zero”. In other words, most positive alphas are actually the result of pure chance. The overall French/Fama assessment? “...fund managers do not have enough skill to produce risk adjusted returns that cover their costs.” Ouch!

A final blog on this common causes, luck and skill performance measurement perspective will examine a study of corporate outcomes published recently by Deloitte Consulting.

Steve Miller also blogs at miller.openbi.com.
