Written in tandem by a University of Chicago economics professor and a sports journalist, Scorecasting's format looks suspiciously like that of the award-winning bestsellers “Freakonomics” and “SuperFreakonomics.” If you're a sports fan and like the statistical sleuthing of Steven Levitt, this book's for you.
Among the fast-moving, self-contained chapters are incisive analyses debunking “the hot hand” for basketball shooters, demonstrating racial bias in the hiring of NFL head coaches, and explaining the Chicago Cubs' lost century. Alas, consistently incapable management and a clientele that responds more to beer prices than to wins and losses appear to undermine the Little Blue Machine.
I was most intrigued, though, by the book's two-chapter analysis of home field advantage. The authors' point of departure is an unequivocal performance advantage for home teams across sports and eras: home teams win from just above 53 percent of games in major league baseball to over 69 percent in college basketball. The authors debunk the conventional-wisdom explanations for home advantage: crowd support, the rigors of travel and scheduling, and unique home field characteristics.
Most such explanations get summarily dismissed like this one on a climate/weather factor in professional football: “After studying data from every NFL home game from every season between 1985 and 2009 – nearly 6,000 games – and matching those games to the outside temperature and wind, rain, and snow conditions, we found that cold weather teams are no more likely to win at home when the weather is brutally cold, nor are warm weather teams more likely to win at home when the temperature is awfully hot. And the winning percentages for dome teams immune from extreme weather conditions – our placebo test – do not vary with the weather any more than they do for cold and tropical weather teams … Contrary to conventional wisdom, weather gives a team no additional home field advantage.”
What then gives home teams such an edge? The authors provide a hint to their thinking in the introduction of their second home field advantage chapter. A University of Wisconsin student and Brewers' fan listens in frustration to a tied Cubs-Brewers game at Wrigley Field that's decided in the bottom of the ninth with two outs, bases loaded and a full count – by ball four. “What we've found is that officials are biased, confirming years of fans' conspiracy theories. But they're biased not against the louts screaming unprintable epithets at them. They're biased for them, and the bigger the crowd, the worse the bias. In fact, “officials' bias” is the most significant contributor to home field advantage.”
How can we be sure that it's referee bias, though? The fact that home teams win more, and that away teams commit more fouls (or are on the short end of close calls) than home teams, demonstrates only covariation, not causation. It could well be the case, for example, that away teams compensate for being away by playing more aggressively, thereby committing more fouls and in turn producing poorer outcomes. What might enable us to interpret covariation as causation are designs that allow analysts to refute alternative explanations for the established relationships. The authors cleverly apply several.
When random assignment to intervention and control isn't feasible, a “quasi-experimental” design commonly used by analysts is the “interrupted time series.” With ITS, a sequence of measurements is taken over time. At some point in the series, a change or interruption occurs. Measurements before the interruption are compared to those after. If the values before the change are similar and those after radically different, there's evidence that the interruption in some way caused the difference in measurement. If there's a control group that's measured over time and is not “interrupted,” so much the better. While not as powerful as a randomized experiment, ITS can nonetheless be a very useful analytical tool.
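The ITS logic described above can be sketched in a few lines of Python. The data here are entirely hypothetical (the season-by-season home winning percentages and the interruption point are invented for illustration); the point is the comparison, not the numbers.

```python
# Toy sketch of an interrupted time series (ITS) comparison.
# All figures below are hypothetical, not taken from the book.

# Season-by-season home winning percentages for a "treated" league,
# where an intervention arrives at index 5, and a "control" league
# with no intervention over the same span.
treated = [58.1, 58.7, 58.3, 58.9, 58.4,   # before the interruption
           56.2, 55.8, 56.1, 55.9, 56.0]   # after the interruption
control = [57.9, 58.2, 58.0, 58.3, 58.1,
           58.2, 57.9, 58.1, 58.0, 58.2]

CUT = 5  # index where the interruption occurs

def mean(xs):
    return sum(xs) / len(xs)

def pre_post_shift(series, cut):
    """Difference in mean level after vs. before the interruption."""
    return mean(series[cut:]) - mean(series[:cut])

treated_shift = pre_post_shift(treated, CUT)   # clearly negative
control_shift = pre_post_shift(control, CUT)   # near zero

# ITS evidence: a marked level shift in the treated series, but
# none in the uninterrupted control series, points to the
# interruption itself as the cause.
print(f"treated shift: {treated_shift:+.2f}")
print(f"control shift: {control_shift:+.2f}")
```

A real analysis would also model trends and serial correlation within each segment, but the before/after contrast against an uninterrupted control is the heart of the design.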
Soccer is a sport played with a continuously running clock. At the end of regulation time, generally 90 minutes, the referee continues the game for the amount of time lost to injury stoppage. A study of 750 Spanish league matches conducted by several economists found that when the home team was behind in a close match, the game was lengthened with an average of 4 minutes of injury time. If the home team was ahead, injury time was barely 2 minutes. Three injury minutes on average were given for a tie. For games in which either team had a commanding lead, there was no bias – the extra time was the same whether the home or away team led.
These results certainly suggest home team bias but are still open to alternative explanations. A first “interruption” in Spanish league play occurred in 1998, when the points awarded for the outcome of a game went from two to three for a win, in contrast to one for a tie and zero for a loss. With this change, the importance of a win over a draw increased substantially. “What did this do to the referee injury time bias? It increased it significantly. In particular, preserving a win against the possibility of a tie now meant a lot more to the home team, and so the referees adjusted the extra time accordingly to reflect those greater benefits.”
The smoking pistol interruption to the “soccer time series” occurred in the 2007 Italian league season when, because of hooligan-induced rioting, the “government forced teams with deficient security standards at their stadiums to play their home games without any spectators present.” When statisticians analyzed the results of 21 matches played in this anonymity, “What they found was amazing. When the home team played without spectators, the normal foul rate, yellow card, and red card advantage afforded home teams disappeared entirely … the home bias in favorable calls dropped … 23 percent for fouls, 26 percent for yellow cards, and 70 percent for red cards … the same referee overseeing the same two teams in the same stadium behaved dramatically differently when spectators were present versus when no one was watching.”
Closer to home, the use of technology to help manage games has changed referee/umpire call patterns in both professional football and major league baseball. In 2002, MLB introduced a digital technology, QuesTec, at a sample of ballparks “to track where the ball crosses the plate … to determine how closely an umpire's perception of the strike zone mirrored reality.” The authors conducted a study over the life of QuesTec, from 2002 to 2008, analyzing 5.5 million pitches thrown in all games, contrasting QuesTec vs. non-QuesTec parks, home and away. Their findings? “Called balls and strikes went the home team's way, but only in stadiums without QuesTec, that is, ballparks where umpires were not being monitored … We also found something surprising. Not only did umpires not favor the home team on strike and ball calls when QuesTec was watching them, they actually gave more strikes and fewer balls to the home team.”
The “interruption” of the 1999 introduction of instant replay in the NFL has helped shed light on professional football referee bias in a similar way. In the 14 years before instant replay, the home team won 58.5 percent of NFL games. That number dropped to 56 percent in the 10 years analyzed post-instant replay. “Before instant replay, home teams enjoyed more than an 8 percent edge in turnovers, losing the ball far less often than road teams. When instant replay came along to challenge wrong calls, the turnover advantage was cut in half.” At the same time, the pattern of penalties, not under the purview of instant replay, didn't change. “The discrepancy in number of penalties and yards per penalty given to home versus away teams hardly changed after instant replay. This helps to confirm that it is instant replay, not something else, that has driven down the winning percentage of home teams in the NFL.”
The authors' explanation for referee bias that creates a home field advantage in all sports that attract large audiences? Subconsciously, refs are making accommodations to the fans at the games, “1) because they want to fit in with the group and 2) because they believe the group is better informed than they are … If you want to make a decision and are unsure of your answer, wouldn't you look for other cues and signals to improve that answer?”
In the end, the use of clever designs and other “Freaky” techniques helped the authors interpret their findings, elevating the work from simply showing statistical correlation to demonstrating cause and effect.