
A Study in Prediction Performance: Updates to the AV Ranking

August 30th, 2009

At Clashmore Mike we strive to monitor the college football landscape as a whole, objectively and without bias. While we have a vested interest in Notre Dame, we attempt to view the performance of the Irish via appropriate statistical metrics that benchmark on-field production (irrespective of how said production is measured).

One such example is the AV Ranking. Rather than rely on Jeff Sagarin, Anderson & Hester, Richard Billingsley, et al., Clashmore Mike has developed its own college football computer ranking formula.

As shown here and here, the AV Ranking was used to predict the winners of the 2008 conference championships and BCS bowl games.

Upon further review of the 2008 season, it was determined that the accuracy of the AV Ranking was lacking. Specifically, it struggled to correctly forecast the winner in contests where the two teams were separated by a narrow AV Ranking point margin.

While this is not surprising—it is always difficult to predict the winner of a closely matched contest with any regularity—there was certainly room for improvement.

What, Exactly, Was The Problem?

Never content to accept mediocrity, fellow Notre Dame and college football enthusiast Vince Siciliano and I spent the off-season identifying and quantifying the AV Ranking shortcomings that led to the inaccuracies described above. Two major culprits were identified, both stemming from improperly benchmarking teams to their competition.

First, opponents’ opponents were not considered in the strength of schedule (SOS) algorithm. This afforded the same credit for beating an 8-5 team from the WAC as for beating an 8-5 team from the SEC. No disrespect to the WAC, but LSU was a better team than Louisiana Tech in 2008.

Additionally, the AV Ranking made no attempt to statistically benchmark teams to their competition. Were Tulsa, Houston, Nevada, etc. really prolific offensive teams or did they artificially benefit from poor defensive competition?

This was discussed ad nauseam leading up to the national title game, when the potency of Oklahoma’s record-setting offense was questioned due to the host of poor defensive teams in the Big 12.

What Are The Answers?

Two problems require two solutions. The AV Ranking (complete description) previously consisted of four metrics: the aforementioned SOS, adjusted win percentage (AWP), margin of victory (MOV) and quality wins/losses (QWL). These four metrics are normalized and combined via a weighted average to achieve a final AV Ranking point value.

The SOS algorithm was updated to include two quantities, one that measures the strength of a team’s opponents and one that measures the strength of a team’s opponents’ opponents. The two were normalized and combined using a simple weighted average assigning considerably more value to the former.
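As a rough sketch, the two-level SOS combination described above might look like the following. The 75/25 split is an illustrative assumption; the post only says the opponents' term receives considerably more value than the opponents'-opponents term.

```python
# Sketch of the updated SOS calculation. The weights are illustrative
# assumptions, not the actual AV Ranking coefficients.

def strength_of_schedule(opp_strengths, opp_opp_strengths,
                         w_opp=0.75, w_opp_opp=0.25):
    """Combine opponent strength with opponents'-opponents strength.

    Each argument is a list of normalized strength values in [0, 1];
    considerably more weight goes to the direct opponents (w_opp).
    """
    opp_avg = sum(opp_strengths) / len(opp_strengths)
    opp_opp_avg = sum(opp_opp_strengths) / len(opp_opp_strengths)
    return w_opp * opp_avg + w_opp_opp * opp_opp_avg
```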

Concurrently, a new AV Ranking metric was created.

This fifth metric benchmarks a team’s production to its competition by defining ratios between the statistical averages of the team and its opponents. A similar version of this analysis was conducted for Notre Dame’s offense (mid-year and end-year) and defense (mid-year and end-year), and is a very useful tool for appropriately gauging a team’s production.

For example, suppose Team X averaged 25 points per game (PPG) against competition that allowed—on average—15 points a game. While 25 PPG seems rather pedestrian, it understates Team X’s ability to score. The difference ratio (here called a performance ratio) of Team X’s average PPG and the average points allowed by opposing defenses ((25 – 15)/15 = 0.67) adjusts for this disparity. In other words, as the difference ratio indicates, Team X averaged 67 percent more points than their competition typically allowed.
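The arithmetic above can be expressed directly; the function name is ours, for illustration only.

```python
def performance_ratio(team_avg, opp_allowed_avg):
    """Difference ratio between a team's average output and what its
    opponents typically allow; positive means the team outperformed
    its competition's usual baseline."""
    return (team_avg - opp_allowed_avg) / opp_allowed_avg

# Team X from the example: 25 PPG against defenses allowing 15 PPG.
ratio = performance_ratio(25, 15)  # → 0.67 when rounded to two places
```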

While a litany of statistics could be used to measure production, only 15 were selected:

  • Turnover margin
  • Third down efficiency (offensive and defensive)
  • Red zone efficiency (offensive and defensive)
  • Points per game (offensive and defensive)
  • Rushing yards per attempt (offensive and defensive)
  • Rushing yards per game (offensive and defensive)
  • Passing yards per attempt (offensive and defensive)
  • Passing yards per game (offensive and defensive)

Simply speaking (a full description can be seen here), these statistical categories were used to generate 15 performance ratios that were normalized and combined using a weighted average. Slightly more value was assigned to turnovers, third down efficiency, and red zone efficiency than the other ten statistical categories.

This metric was aptly termed the Team Performance Ratio (TPR) as it adjusts the statistical production of a team to its competition.
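The normalize-then-combine step for the TPR can be sketched as follows. The min-max normalization and the specific weights are assumptions; the post states only that turnovers, third down efficiency, and red zone efficiency carry slightly more weight than the other ten categories.

```python
# Sketch of combining the 15 performance ratios into a TPR score.
# Weights and the min-max normalization are illustrative assumptions.

def normalize(values):
    """Min-max normalize a list of values (one per team) to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def team_performance_ratio(ratios, weights):
    """Weighted average of one team's normalized performance ratios."""
    total = sum(weights)
    return sum(r * w for r, w in zip(ratios, weights)) / total
```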

What About The New Results?

The updated AV Ranking was generated for the 2008 season using the regular season only (i.e., no bowl-game statistics or win/loss outcomes). For comparison purposes, the values prior to these updates can be viewed here. The tables below show the top 25 AV Ranked teams in addition to the top ten teams in each of the five AV Ranking metrics (SOS, AWP, MOV, QWL and TPR).

Prior to the updates detailed above, the AV Ranking correctly predicted the winner of 79.8 percent of the regular season games but only 51.7 percent of the contests where the two teams were separated by a small AV Ranking point margin (using the season-end AV point values). The updated AV Ranking correctly predicted 80.8 percent of the regular season games and 62.7 percent of those with narrow margins. While the former isn’t a large improvement, the latter certainly is.
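As an illustration of how such accuracy figures can be tallied: pick the team with the higher AV point value in each game and compare overall accuracy against accuracy in narrow-margin games. The 0.05 cutoff is an assumed threshold; the post does not define what counts as a "small" AV point margin.

```python
# Hypothetical accuracy tally for an AV-style ranking. Each game is a
# pair (winner_points, loser_points) of season-end AV point values;
# the prediction is correct when the eventual winner had more points.

def prediction_accuracy(games, narrow=0.05):
    """Return (overall accuracy, narrow-margin accuracy).

    The narrow-margin figure covers only games where the two teams'
    AV point values differ by less than `narrow` (assumed cutoff).
    """
    overall = sum(w > l for w, l in games) / len(games)
    close = [(w, l) for w, l in games if abs(w - l) < narrow]
    close_acc = sum(w > l for w, l in close) / len(close) if close else None
    return overall, close_acc
```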

AV Ranking

Rank | Team           | Points | SOS | QWL | AWP | MOV | TPR
-----|----------------|--------|-----|-----|-----|-----|----
5    | Texas Tech     | 0.848  |  82 |   3 |   8 |  11 |  11
6    | Southern Cal   | 0.838  |  78 |  10 |   7 |   3 |   4
7    | Penn State     | 0.836  |  95 |   4 |   8 |   4 |   5
8    | Boise St       | 0.815  | 106 |  53 |   1 |   5 |   9
10   | Ohio State     | 0.795  |  23 |  18 |  12 |  16 |   6
14   | Oklahoma St    | 0.712  |  44 |  11 |  20 |  17 |  14
15   | Ball St        | 0.711  |  18 |  79 |   3 |  10 |  18
17   | Georgia Tech   | 0.692  |  71 |   8 |  20 |  31 |  24
18   | Oregon St      | 0.679  |  20 |   7 |  31 |  28 |  27
20   | Michigan St    | 0.664  |  26 |  57 |  20 |  48 |  55
21   | Boston College | 0.655  |  28 |  30 |  26 |  30 |  23
22   | Florida St     | 0.654  |  29 |  23 |  31 |  23 |  12
24   | Brigham Young  | 0.651  | 114 |  46 |  12 |  18 |  42
Adjusted Win Percentage (AWP)

Rank | Team         | Value
-----|--------------|------
1    | Boise St     | 0.893
3    | Ball St      | 0.83
7    | Southern Cal | 0.821
8    | Texas Tech   | 0.815
8    | Penn State   | 0.815

Strength of Schedule (SOS)

Rank | Team    | Value
-----|---------|------
2    | Utah St | 0.832

Team Performance Ratio (TPR)

Rank | Team         | Value
-----|--------------|------
4    | Southern Cal | 0.867
5    | Penn State   | 0.857
6    | Ohio State   | 0.842
9    | Boise St     | 0.779

Margin of Victory (MOV)

Rank | Team         | Value
-----|--------------|------
3    | Southern Cal | 0.916
4    | Penn State   | 0.886
5    | Boise St     | 0.878
10   | Ball St      | 0.763

Quality Wins/Losses (QWL)

Rank | Team         | Value
-----|--------------|------
3    | Texas Tech   | 0.706
4    | Penn State   | 0.538
7    | Oregon St    | 0.456
8    | Georgia Tech | 0.425
10   | Southern Cal | 0.39




This article is © 2007-2022 by De Veritate, LLC and was originally published at Clashmore Mike. This article may not be copied, distributed, or transmitted without attribution. Additionally, you may not use this article for commercial purposes or to generate derivative works without explicit written permission. Please contact us if you wish to license this content for your own use.