|Fashion icon Bill Belichick|
Like God and his beetles, the internet has an inordinate fondness for ordered lists. Case in point: NFL power rankings. Nearly every major (and minor) sports site has its own subjective assessment of the relative strength of each NFL team - NFL.com, CBS, ESPN, and SBNation, just to name a few. There are also a variety of objective stat-based rankings from which to choose, from Advanced Football Analytics' simple and open-source team efficiency rankings, to Football Outsiders' more complex and proprietary DVOA model.
Falling somewhat in between a subjective and objective approach are my betting market rankings. They are objective in that the recipe is fixed ahead of time, and requires no judgment on my part (I'm just turning the crank). But the inputs to the model are the Vegas point spreads, which are subject to the whims and prejudices of the market - and the bookies who do their best to keep their books balanced.
There are also very simple (but surprisingly accurate) models that rely solely on scoring margin. There is what's known as the Simple Ranking System (or SRS), which you can find a version of at Pro Football Reference. SRS is based on average scoring margin for each team, with an iterative adjustment for strength of schedule.
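The iterative strength-of-schedule adjustment behind SRS can be sketched in a few lines of Python. This is a minimal illustration, not Pro Football Reference's actual implementation, and the game results below are hypothetical: each team's rating is its average scoring margin plus the average rating of its opponents, solved by repeated substitution.

```python
# A minimal SRS-style sketch (not Pro Football Reference's actual code).
# Each team's rating = average scoring margin, adjusted by the average
# rating of its opponents, solved by simple fixed-point iteration.

def srs(games, iters=200):
    """games: list of (team, opponent, margin) from the team's perspective."""
    teams = {t for g in games for t in g[:2]}
    ratings = {t: 0.0 for t in teams}
    for _ in range(iters):
        new = {}
        for t in teams:
            # margin in each game, credited with the opponent's strength
            adjusted = [margin + ratings[opp]
                        for team, opp, margin in games if team == t]
            new[t] = sum(adjusted) / len(adjusted)
        # re-center at zero so ratings are relative strengths
        mean = sum(new.values()) / len(new)
        ratings = {t: r - mean for t, r in new.items()}
    return ratings

# Hypothetical results: A beats B by 10, B beats C by 3, A beats C by 7
games = [("A", "B", 10), ("B", "A", -10),
         ("B", "C", 3), ("C", "B", -3),
         ("A", "C", 7), ("C", "A", -7)]
ratings = srs(games)
```

With those results, team A ends up with the highest rating and team C the lowest, and the ratings sum to zero by construction.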
Even simpler (in some respects) is the Elo-based ranking system published this year at FiveThirtyEight. Elo rankings were first developed to rank chess players. Nate Silver has extended that methodology to rank NFL teams. Elo and SRS share some similarities. Both are based solely on scoring margin. And both attempt to adjust for each team's strength of schedule. But a key feature that distinguishes Elo from SRS is that Elo uses results from the prior season as a starting point to develop the rankings for the current season. SRS is based solely on current season results.
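The core Elo mechanics can be sketched as follows. This is a bare-bones illustration: FiveThirtyEight's actual model also adjusts for margin of victory and home field, so treat the details here (the K factor, the one-third regression toward the league mean of roughly 1505) as an approximation of their published description rather than the real thing.

```python
# A bare-bones Elo sketch. FiveThirtyEight's actual model adds margin-of-
# victory and home-field adjustments; the constants here approximate
# their published description and are illustrative only.

MEAN = 1505  # approximate league-average rating on FiveThirtyEight's scale

def expected(r_a, r_b):
    """Probability that team A beats team B, given their ratings."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won, k=20):
    """Return both teams' new ratings after A plays B (a_won is 1 or 0)."""
    shift = k * (a_won - expected(r_a, r_b))
    return r_a + shift, r_b - shift

def preseason(r):
    """Carry last season's rating forward, regressed one-third to the mean."""
    return r + (MEAN - r) / 3
```

The `preseason` step is the feature that distinguishes Elo from SRS here: a team that ended last year at 1700 starts the new season at 1635 rather than at the league average, which is why a 2-2 start moves an Elo rating far less than it moves a current-season-only rating.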
Despite their ubiquity, it has never been clear to me what power rankings are really for. A survey of sports forums and article comment sections indicates that the most popular use is to start internet slap fights. But for this post, as I did last year, I will judge each ranking system on its ability to predict future wins. Following week four of the 2014 NFL season, I archived the power rankings for six different systems. We will now see how well each system predicted wins from weeks 5 through 16 (I am excluding week 17 results - too many garbage games). The table below ranks teams by week 5-16 win percentage:
|Weeks 5-16 | Week 4 Rankings|
The Patriots' 2-2 start to this season proved to be a head-fake that broke the ankles of many NFL pundits and commentators. From an October 4, 2014 USA Today article:
The New England Patriots are not a good football team. For fans expecting Bill Belichick and Tom Brady to pull a redux of 2012, when the team also started 2-2, leading to whispers that the dynasty might be crumbling, then rallied for an improbable 12-4 season that ended a tipped pass from the Super Bowl — don’t kid yourselves.

Following an embarrassing Monday Night loss that dropped the Patriots to 2-2, Steve Young and Trent Dilfer accused the Patriots of rebuilding - thus wasting a year of Tom Brady in his prime. Dilfer went further, insinuating that the Patriots were no longer trying to win a Super Bowl. Quoth the former Raven:
Let's face it: They're not good anymore. They're weak.

To be fair to everybody piling on at the time, the New England Patriots were terrible when judged on their 2014 play alone. DVOA, SRS, and Advanced Football Analytics all had the Pats in the bottom third of the league. ESPN, FiveThirtyEight, and my market rankings fared somewhat better, ranking New England as slightly above average. This is because these rankings don't restrict themselves to just current season data. The FiveThirtyEight Elo method does this explicitly, using the prior season's (regressed) Elo ranking as the starting point for the current season rank. ESPN and Vegas clearly viewed the Patriots' slow start in context. New England still had Tom Brady and Bill Belichick, making for a heck of a Bayesian prior.
In what can only be considered a sign of impending doom for the universe, Skip Bayless may have had it right when he refused to join the pile-on:
The New England Patriots will rise like the phoenix from the ashes of Monday night's 41-14 loss in Kansas City and land at the University of Phoenix Stadium in Glendale, Arizona, playing in Super Bowl XLIX.

But enough about New England and who got it wrong. This was actually a good year for most ranking systems, when judged by the Spearman Rank Correlation coefficient. This number measures how well two ordered lists agree with each other. It is what I used in my prior posts on this topic to judge the accuracy of each ranking system. Here are the results (alongside the 2007-2013 results):
|Week 4 Ranking Correlation to Future Wins|
For the second year in a row, the market-based rankings published here best correlated with future wins - evidence of market efficiency. And when averaged over the past eight years, the market rankings are the clear winner, with an average correlation of 55%. The FiveThirtyEight Elo rankings had a good first year, coming in second this season with a 73% correlation to future wins. And in general, the models that didn't rely solely on current season results fared better this year.