Cane from the Bend wrote: Exactly; which is why I've always been against the preseason polls.
I've heard this argument a lot lately, but I honestly do not see how it mitigates the problem in any way. Whichever teams the consensus deems the best will be ranked in that order at week 8 (or whenever), just as they would have been in a preseason poll. If some unexpected loss occurs by week 8, then whatever effect it has on the ongoing preseason poll, the non-preseason poll will integrate it in exactly the same way, so having no preseason poll doesn't help.
The problem is that teams are now ranked on quality instead of performance--that is what changed to create this problem, so that is what must change (and specifically change back to the way it used to be, where the best teams that lose might still be considered the best, but the rankings note that those teams simply did not EARN it that year).
A possible solution is for each entity to publish two polls: one official ranking (based on performance) and one power ranking (based on quality). The two polls will rarely be identical, since performance and quality occasionally diverge, but it would satisfy the urge to put the team you subjectively believe is the best in the country at the #1 spot, even if they've had an unfortunate letdown along the way.
As it turns out, quality (i.e., how good a team is) is entirely subjective, and that's not a good thing for rankings when the only real objective measurement we have is what happens on the field (i.e., a win vs. a loss). That is why the W-L record is one of the few metrics one can use to construct a valid ranking system. If other factors are considered, like SoS or margin of victory, that's fine, but those should be minor factors, not the dominant ones. Today, that paradigm is turned on its head. For example (and I neither favor nor dislike either of these teams), Michigan dropped 4 points after a WIN over Akron and another 4 points after a WIN over UConn, for a grand total of -8 in the polls for an undefeated team after 2 wins and a bye. In contrast, over the same period 1-loss South Carolina dropped only 1 point despite another LOSS and 2 so-so wins. That tells me subjective metrics have completely overrun the objective ones. Current rankings hardly use the objective metrics they were supposed to be based exclusively on. This is terrible, as it makes perception more important than performance, and great teams already have a significant advantage--now underdogs must overcome both instead of just one.
To illustrate how the AP poll SHOULD look, Ohio State should probably be ranked #1. Even though almost everyone agrees they are not the best team in the country, that doesn't matter for rankings--only for power rankings. It's very hard to ignore an 18-game win streak, since the "W" is the ultimate objective metric. If SoS is added as a factor, then Oregon should probably be #1 (noting that OSU, Oregon, Bama, and Clemson all have similar SoS according to the CFP, with Oregon's being the toughest). Yet the AP has Bama overwhelmingly at #1, which can only be explained by the fact that the position is awarded mostly because the consensus subjectively believes Bama is the best team in the country--a metric that should NOT even be a factor in being ranked #1. After all, if it's really true that Bama is the best team in the country, it is more likely than not they will earn the spot via performance. I honestly do not mean to disparage Bama (I like watching them play, and it's certainly not their fault they are so good), but at this point in the season there truly ought to be a healthy split of #1 votes between them and the other 4 teams in the top 5, yet there is not. The other four all have a considerable argument for the #1 spot, and the fact that this is not represented in the polls is the clearest evidence that the polls are not serving their ostensible purpose.