
Blueprint for BCS Championship Success I: Introduction and Approach

July 27th, 2010

It’s been 21 seasons since Notre Dame last won the national championship, and college football has certainly evolved during that time. The Associated Press and United Press International championships have been replaced by the Bowl Championship Series (BCS). Recruiting has shifted its focus to the talent hotbeds of California, Florida, and Texas. Two-tight-end, I-formation backfields have been replaced by spread, multiple-wide-receiver formations. And defenses routinely employ three-man fronts and hybrid linebacker/safety personnel to gain flexibility and counter the aforementioned offensive trend.

Despite these changes, football isn’t fundamentally different. The BCS may be a new era and offensive and defensive schemes may appear radically different from just 20 years ago, but the team characteristics critical to success are essentially the same. Football is about running, blocking and tackling—teams that display strong fundamentals in these areas execute well, excel on the field, and win championships.

But, apart from solid fundamentals, what are the characteristics of a national championship team? Are yardage output and the ability to pressure opposing quarterbacks crucial to success? And, if they are, what yardage total and how many sacks are needed to win a BCS title?

In more general terms, what are the common statistical metrics of a BCS champion, and at what rankings and values do championship teams perform in those metrics? Furthermore, what has been missing from recent Irish squads, and why have they not competed at a national championship level for so long?

Laying the Groundwork

This is the first installment of a five-part series dedicated to answering these two questions. The first question is addressed via an analysis that creates a blueprint for BCS championship success by identifying the common and/or important characteristics of previous champions. The second is answered by measuring the Irish against this standard. An outline of the five articles follows:

  1. Introduction And Approach—framing the problem and outlining the analysis approach
  2. Offensive Results—offensive results of the analysis including pertinent data trends
  3. Defensive Results—defensive results of the analysis including pertinent data trends
  4. Outlining The Blueprint—summarizing the offensive and defensive results, drawing conclusions, and defining the blueprint
  5. Measuring The Irish—benchmarking Notre Dame against the blueprint, identifying what has been missing and needs improvement, and examining how new head coach Brian Kelly’s philosophy aligns with these shortfalls

Analysis Approach: Creating the Blueprint

Here, the statistical metrics of the last 10 BCS national champions (2000-2009) were used. These 10 teams were selected for two reasons. First, finding data prior to 2000 is very challenging, if not impossible; value and ranking data weren’t even available for every investigated metric over the last 10 years. Second, 10 years of championship-team data in the BCS era is considered a large enough sample to evaluate the common characteristics of a modern college football champion. In some cases, however, all 20 title game participants were used to corroborate the importance of a particular metric and/or to distinguish between the characteristics of BCS title game winners and losers.

The metrics investigated in this assessment are listed below. Those in italics were found to be common and/or important characteristics of a BCS championship team; they will be presented in detail in the offensive and defensive results segments and are included in the blueprint. The remaining metrics either weren’t crucial components of a championship-caliber team or didn’t have enough data to support a definitive conclusion (more on this in the fourth installment).

Miscellaneous

  • Time of possession
  • Penalties
  • Turnover margin

Offense/Defense

Total

  • Third down efficiency
  • Red zone efficiency
  • Red zone touchdown efficiency
  • Yards per play (YPP)
  • Yards per game (YPG)
  • Points per game (PPG)

Rushing

  • Attempts (only for offense)
  • Yards per attempt (YPA)
  • Yards per game (YPG)
  • Touchdowns

Passing

  • Attempts (only for offense)
  • Yards per attempt (YPA)
  • Yards per completion (YPC)
  • Yards per game (YPG)
  • Touchdowns
  • Completion percentage
  • Sacks allowed (offense) and sacks (defense)
  • Attempts/sack allowed (offense) and attempts/sack (defense), normalized to account for differences in pass attempts
  • Pass efficiency

For each of these metrics, a two-fold analysis approach was employed.

First, a metric ranking cutoff was targeted such that the majority of the 10 champions (usually seven or more, i.e. 70-plus percent) ranked at or better than the cutoff. The upper limit of this ranking cutoff was approximately 30, i.e. a ranking corresponding to roughly the top 25 percent of Football Bowl Subdivision (FBS) teams. On occasion this limit was slightly exceeded to capture seven or more teams.

If the majority of teams ranked at or better than the cutoff, the metric was considered common, important to winning the national championship, and included in the blueprint. If a ranking cutoff within roughly the top 25 percent could not be established, i.e. the majority of teams fell outside it, the metric was not considered requisite to winning the title. Additionally, metrics whose rankings were spread over a large range with no discernible cutoff were excluded and not considered part of the blueprint.
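For readers who prefer to see the selection rule spelled out, here is a minimal sketch of the ranking-cutoff test in Python. The function name and the sample rankings are hypothetical; only the thresholds (seven of the 10 champions, a cutoff near 30) come from the description above.

```python
# A minimal sketch of the ranking-cutoff test. The sample rankings are
# hypothetical; the thresholds (7 of 10 teams, cutoff near 30) come from
# the article's description.

def passes_ranking_test(rankings, cutoff=30, min_teams=7):
    """Return True if at least min_teams champions ranked at or better
    than cutoff in this metric (a lower ranking is better)."""
    return sum(1 for r in rankings if r <= cutoff) >= min_teams

# Hypothetical national rankings in one metric, one per champion (2000-2009)
sample_rankings = [5, 12, 3, 28, 9, 41, 7, 15, 22, 11]

if passes_ranking_test(sample_rankings):
    print("Common metric: include it in the blueprint.")
else:
    print("Not requisite: exclude it from the blueprint.")
```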

Second, a metric value cutoff was targeted that captured the same number of teams as the metric ranking cutoff. This process was somewhat arbitrary, as a given metric value may produce a ranking of X in one year but a ranking of X +/- Y in another. The 2004 Trojans averaged 4.7 yards per carry, good for 23rd in the country, while Florida also averaged 4.7 yards per rush attempt in 2006 and ranked two spots better (21st). In most cases (including this example) the year-to-year disparity in ranking for a particular metric value was small.
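As a rough illustration of this second step, the sketch below derives a value cutoff from the teams already captured by the ranking cutoff. The (ranking, value) pairs and the weakest-captured-value rule are assumptions for illustration, not the exact procedure used in this analysis.

```python
# A sketch of pairing a value cutoff with the ranking cutoff. The
# (ranking, value) pairs below are hypothetical, e.g. rushing yards per
# attempt; for a "higher is better" metric the weakest value among the
# captured teams serves as the value cutoff.

champions = [(5, 5.6), (12, 5.1), (3, 5.9), (28, 4.6), (9, 5.3),
             (41, 4.2), (7, 5.4), (15, 5.0), (22, 4.8), (11, 5.2)]

ranking_cutoff = 30
captured_values = [value for rank, value in champions if rank <= ranking_cutoff]

value_cutoff = min(captured_values)  # flip to max() for "lower is better" metrics
print(f"Value cutoff of {value_cutoff} captures {len(captured_values)} of 10 teams")
```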

Essentially, the goal of this two-part assessment was to incrementally increase the metric ranking and value cutoffs to capture as many teams as possible without going too far beyond the top 25 percent. For example, if a ranking cutoff of 12 captured seven teams and a cutoff of 14 captured nine, 14 was used. But if a ranking cutoff of 16 captured nine teams and a cutoff of 27 captured 10, 16 was used; increasing the ranking cutoff by 11 spots was not considered worth the inclusion of a single additional team.
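One way this incremental trade-off could be expressed in code is sketched below. The max_cost_per_team value is an assumption chosen so that the 12-to-14 extension above would be accepted and the 16-to-27 extension rejected; it is a sketch under those assumptions, not the exact rule applied in the analysis.

```python
# A sketch of the incremental cutoff search: extend the ranking cutoff
# step by step, accepting an extension only when each newly captured
# team costs no more than a few ranking spots. max_cost_per_team is an
# assumption matching the 12->14 (accept) and 16->27 (reject) examples.

def choose_ranking_cutoff(rankings, max_cutoff=30, max_cost_per_team=5):
    cutoff = min(rankings)                        # tightest cutoff capturing one team
    captured = sum(r <= cutoff for r in rankings)
    for candidate in sorted(set(rankings)):
        if candidate > max_cutoff:
            break
        count = sum(r <= candidate for r in rankings)
        gained = count - captured
        if gained > 0 and (candidate - cutoff) <= max_cost_per_team * gained:
            cutoff, captured = candidate, count
    return cutoff, captured

sample_rankings = [5, 12, 3, 28, 9, 41, 7, 15, 22, 11]
print(choose_ranking_cutoff(sample_rankings))     # (15, 7) with these inputs
```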

First Impression(s) Can Be Deceiving

A quick glance at the relatively common statistics—e.g. yards per game, yards per play, and points per game—seems to indicate that championship caliber teams do everything well. And, to a certain extent, this is true. With perhaps one exception, no BCS championship team in recent memory had a serious weakness that repeatedly showed up in the box score.

But digging deeper reveals a different conclusion, provides insight into the makeup of the past 10 champions, and gives context to how these teams approached the game. As the analysis will show, certain aspects of team performance are far more important than others, and some are not critical at all.

Up next: the offensive results of the analysis, including pertinent data trends.

