If you didn't watch Boise State's simultaneously epic and tragic loss to the University of Nevada last night, you've probably heard about it already. Chances are, you've also already heard the arguments: either this proves that Boise didn't belong in the National Title conversation in the first place, or this proves that even the WAC teams still have to compete every week, and that Boise's dominance meant it was eminently worthy of its ranking.
The truth of the matter is that the vast majority of people have already chosen a side in the AQ versus non-AQ argument, and are not at all interested in trying to think objectively. That a game like Boise's loss to Nevada can affirm either position just goes to show that most people - and probably most people on both sides of the argument - don't have the slightest clue what they're talking about. The challenge, however, is not remaining objective, but rather finding appropriate evidence to determine whether a team like a Boise State or a TCU deserves national recognition and, in the event of an undefeated season, a chance to play for the National Championship.
The issue here is sample size. I've been accused, before, of being too baseball-centric in my thinking about football, but I think that accusation just brushes off the bigger issue. The reality is that football - and college football in particular - is a sport of small sample sizes. A full season is only a dozen games, there's no balanced schedule, and even the very best teams might lose thanks to a bad bounce on a fumble, a mistaken penalty call, or a missed chip-shot field goal. What's more, a football team is not a single entity playing in a vacuum. It might very well be that Nevada is better than Boise State, but Boise is better than Hawaii, and Hawaii is better than Nevada.
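To put a rough number on how little a dozen games tells you, here's a minimal sketch. The 80% single-game win probability is a number I'm making up purely for illustration, not anyone's actual rating, but it makes the point: even a genuinely dominant team runs the table only a small fraction of the time.

```python
import random

# Hypothetical numbers for illustration only: a team that wins any
# single game 80% of the time, playing a twelve-game schedule.
WIN_PROB = 0.80
GAMES = 12
SEASONS = 100_000

undefeated = 0
for _ in range(SEASONS):
    wins = sum(random.random() < WIN_PROB for _ in range(GAMES))
    if wins == GAMES:
        undefeated += 1

# 0.8 ** 12 is about 0.069, so this team finishes unbeaten in only
# roughly 7% of its simulated seasons.
print(f"Undefeated seasons: {undefeated / SEASONS:.1%}")
```

In other words, a single loss in a sample that small is closer to noise than to a verdict.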
What I mean is this: Nevada might be "better" because their roster is perfectly adapted to defeat Boise State's, and so on. Match-ups are everything in football, and gameplans the rest. It is almost certainly true that Alabama and Auburn have better players than Oregon, and that Oregon has better players than TCU or Boise. I don't doubt that for a minute. What I do suspect, however, is that just as Oregon would struggle in the SEC, Alabama and Auburn would struggle in the PAC-10. The reason? The match-ups are different, the styles are different, the gameplanning is different.
I don't know any of that for certain, but it's a suspicion I have from the aggregate of small samples I've seen. I would add, however, that I do put some stock in computer rankings. Yes, the very same rankings that the media loves to disparage for their nonsensical orderings of teams and conferences (Sagarin, heading into this week, and if you let him keep scoring margin as part of the equation, which the BCS doesn't, ranks Oregon 1, Stanford 2, and Auburn 5, among other things). Of course, the media doing the disparaging here has its own ranking system, one that is entirely subjective and heavily weighted toward the most recent result.
Given football's small sample sizes, you've probably heard that you have to consider a team's "entire body of work." You have to consider strength of schedule, and prestige of opponents, and other intangible factors. The thing is, most of that is not only tangible, it's quantifiable, and the computers do a much better job of actually considering a team's whole body of work than even the most well-informed voter does. A computer can seamlessly weigh games played, opponents played, strength of schedule two or three or four tiers deep, and overall conference strength. The different computers weight all of those things differently, of course, but the fact that the BCS uses a variety of rankings and drops each team's best and worst is a mark in its favor.
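For the curious, here's a minimal sketch of that drop-the-extremes idea, with made-up rankings. The real BCS formula converts each computer rank into points before combining them, but the trimming step works essentially like this.

```python
# Made-up computer ranks for three hypothetical teams across six polls;
# these are not the actual 2010 numbers.

def trimmed_average(ranks):
    """Drop a team's single best and single worst computer rank,
    then average whatever is left."""
    trimmed = sorted(ranks)[1:-1]
    return sum(trimmed) / len(trimmed)

computer_ranks = {
    "Team A": [1, 2, 1, 3, 1, 2],
    "Team B": [2, 1, 4, 2, 3, 1],
    "Team C": [5, 6, 2, 5, 7, 6],
}

for team, ranks in computer_ranks.items():
    print(f"{team}: {trimmed_average(ranks):.2f}")
```

The appeal is that no single outlier computer - or single fluke result - can swing a team's number very far on its own.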
There is heavy resistance to computer rankings because they often don't pass the proverbial smell test - just like fielding statistics that say Derek Jeter isn't a great fielder. Oh wait.
In all seriousness, though, the computers here are constrained by sample size in a way that advanced baseball statistics generally are not, but the analogy is a strong one. We have our old statistics - won-loss record, conference and conference standing, opponent lists - and we have newer, more advanced statistics that take into account score differential, opponents' opponents, and so on. The problem is, when the more advanced, harder-to-calculate statistics show that Auburn is #5, or show that a one-loss Stanford is better than an undefeated Boise State or TCU, we scoff at them because they don't align with our subjective, entrenched, traditional approaches.
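To make "opponents' opponents" concrete, here's a toy version of the sort of simple rating system some of these rankings resemble: a team's rating is its average scoring margin plus the average rating of the teams it played, iterated until it settles. The games and margins below are invented to mirror the Nevada > Boise > Hawaii > Nevada circle from a few paragraphs up; this is not any of the actual BCS computers, just a sketch of the idea.

```python
# A toy strength-of-schedule rating, not any of the actual BCS computers.
# Each team's rating is its average scoring margin plus the average rating
# of its opponents, so the same margin counts for more against strong
# opponents than weak ones. The games and margins below are invented.

games = [  # (winner, loser, margin of victory)
    ("Nevada", "Boise State", 3),
    ("Boise State", "Hawaii", 21),
    ("Hawaii", "Nevada", 7),
]

teams = {t for g in games for t in g[:2]}
ratings = {t: 0.0 for t in teams}

for _ in range(100):  # iterate until the ratings stop moving
    new_ratings = {}
    for team in teams:
        margins, opp_ratings = [], []
        for winner, loser, margin in games:
            if team == winner:
                margins.append(margin)
                opp_ratings.append(ratings[loser])
            elif team == loser:
                margins.append(-margin)
                opp_ratings.append(ratings[winner])
        new_ratings[team] = (sum(margins) / len(margins)
                             + sum(opp_ratings) / len(opp_ratings))
    ratings = new_ratings

for team, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{team:12s} {rating:+.1f}")
```

Run on these invented results, Boise State still comes out on top despite the loss - precisely the sort of answer that makes people distrust the computers, and precisely the sort of answer the "whole body of work" argument should welcome.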
Of course, as I've said before, the biggest problem here isn't even sample size or public perception or the media or the stickiness of tradition. No, the biggest problem is cognitive dissonance: the notion in our heads that one team must be better than another, the idea that we can quantify football team quality at all. Is Boise State a 93 or a 94 out of 100? Do their kicker's misses last night drop them all the way to 83? Is it maybe true that they would crush Oregon, but be crushed by Auburn (or vice versa)? Is it maybe true that, even over a full season, they could compete in the SEC (because a full season is still a small sample)? Is it maybe true, on the other hand, that they would go 3-9 playing Alabama's schedule?
It's impossible to say, and, what's more, it's irrelevant. Most of those questions are predicated on the notion that there's a simple answer to the question "Which team is better?" There's not. There never is. And even if there were, the samples we're working with in football are too small to answer it. All we have, then, is the drama of single games, of missed field goals and overtime wins. The point of college football is not to crown the best team in the nation, but to crown a Champion.