Saturday, May 06, 2006

A note about Big 12 power rankings

Quite understandably, my system for ranking Big 12 baseball programs, which I introduced this season, has drawn the ire of Kansas Jayhawk fans. As such, I thought I'd post a more detailed explanation of what the rankings attempt to measure. I also want to illustrate the concepts behind the rating number.

First, let me acknowledge my credentials, or lack thereof, depending upon your perspective. I am first and foremost a writer. I majored in English. I spend my spare time reading Henry Miller, Hemingway, Joyce, Colette, etc. My greatest aspiration is to publish a novel, and I've got a burgeoning manuscript always beckoning me in my "writing room," where I clack away on an old manual Underwood. Of course my job, or at least part of it, is to write about sports, an occupation that I absolutely relish.

All this is preamble to admitting that I am not an engineer. I did not attend MIT. I have never been a Bill James research assistant. At the same time, I do have a profound belief in numbers and statistical analysis. I also have a certain aptitude with numbers - once upon a time, I made my living as an accountant. I had no formal training; it just came naturally to me. Statistics have always seemed to me like a conduit between me and the teams I watch and cover. We keep track of what they do, and if you know how to decode that information logically and accurately, you can know far more about the teams than you would just by reading game stories and catching a game now and again.

There are few things that bother me more in sports media than "power rankings" based solely on subjective observation. In baseball in particular, I do not believe that you can adequately differentiate between teams by watching an occasional game. If you're going to rank teams in this sport, you need some sort of objective criteria as your foundation.

That is all background. Here is the nut of the system.

My Big 12 ratings system attempts to rate teams based on true ability. It does not attempt to replicate the won-loss standings. A baseball team's won-loss record is a mixture of ability, execution, quality of competition and luck. That last component is more prevalent than anyone wants to admit and is also why I felt the need to establish a power ranking system in the first place.

In big-league baseball, we have known for years that the runs scored and runs allowed by a team are a more accurate measure of team strength than won-loss record. Given enough games, a team will eventually play to its run differential. However, the number of games even in a 162-game schedule is not enough for this to occur with all teams. Thus the standings at the end of the season do not reflect exactly the order of teams by run differential. They are skewed by noise such as clutch hitting and record in one-run games. And that's great - that's baseball. That's why we love it.

Still, I think it's important to have an accurate picture of the true ability of teams. At the end of the regular season, it gives us a better idea of who is likely to succeed in the postseason. After that, it gives us a better idea of who will sink or rise the next season. For teams whose record is significantly under their power ranking (like Missouri), we can better come to grips with our disappointment: they should have won more games. For teams who outperform their power ranking (like Kansas), we can have that much more appreciation for their accomplishments.

So, to summarize, the object is to rank teams by their true level of ability. And to do this, we want to use runs scored and runs allowed.

In college baseball, this is not nearly as straightforward as it is in Major League Baseball. The difference, of course, is the wildly different levels of talent from conference to conference and even from team to team within a conference. (Some schools just don't put much emphasis on their baseball programs.)

The crux of my rankings system is this: for each game a team plays, after I enter the final score, the system adjusts for four things:

1. Who was the opponent?
2. Where was the game played?
3. When was the game played?
4. Was it a weekend game?

If you're playing a team like St. Louis University, you don't get full credit for your run differential. In fact, if you win by, say, 5-4, the system actually registers that as a defeat. The worse your opponent is, the more you need to beat them by to get positive credit. Games against very small schools, like Nebraska-Kearney, don't count in the system at all.
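
To make that concrete, here is a rough Python sketch of how such a handicap might work. The scale factor and ratings below are made-up numbers for illustration - they are not the actual coefficients in my system:

    def opponent_adjusted_margin(our_runs, opp_runs, opp_rating, avg_rating=50.0):
        """Credit a game's margin relative to what a team should do against
        an opponent of this strength. The 0.2 runs-per-rating-point scale
        is hypothetical."""
        expected_margin = (avg_rating - opp_rating) * 0.2
        return (our_runs - opp_runs) - expected_margin

    # A 5-4 win over a weak opponent (rating 20) registers as a negative:
    # actual margin +1, expected margin +6, adjusted margin -5.
    print(opponent_adjusted_margin(5, 4, opp_rating=20))  # -5.0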

Home teams win more often in college baseball and, especially early in the season, northern teams don't play at home nearly as often as teams in the South. Thus the run differential is adjusted again depending on where the game was played.

Since we want to identify teams that are going to do well in the postseason, we try to identify those schools that are improving as the season progresses, and vice versa. Thus, each game a team plays has more value to the overall rating than the previous game. By the end of the season, a game carries many times more value than the first game of the season.
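
One simple way to express that kind of escalating weight - the 5 percent growth rate here is a hypothetical stand-in, not my actual figure:

    def recency_weight(game_number, growth=1.05):
        """Each game counts a bit more than the one before it. At 5% growth
        per game, game 56 carries 1.05**55, or roughly 14.6 times, the
        weight of game 1."""
        return growth ** (game_number - 1)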

We all know that weekend games in college baseball are more indicative of team strength than midweek games; that is the way pitching staffs are structured. Thus, weekend games have double the value of midweek games in this system.
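
Folding the weekend doubling into that same weight is a one-liner (again, just a sketch):

    def game_weight(game_number, is_weekend, growth=1.05):
        """The escalating weight from above, doubled for weekend games."""
        w = growth ** (game_number - 1)
        return 2 * w if is_weekend else w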

Once the system adjusts the runs scored and runs allowed for each game, one final question is asked: what was the margin of victory? At a certain point, runs are superfluous. Is there a real difference between a 22-1 victory and a 9-2 victory? There is some, but if you chart won-loss records against run differentials, you get a gradually rising slope until you hit somewhere between five and six runs. Beyond that, it's just overkill. Thus the system does not reward "extra" runs above a margin of victory of six.
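
The cap itself is the simplest piece of the whole system:

    def capped_margin(margin, cap=6):
        """Runs beyond a six-run margin are overkill; clamp to +/- cap."""
        return max(-cap, min(cap, margin))

    # After capping, 22-1 and 9-2 really are the same game:
    print(capped_margin(22 - 1))  # 6
    print(capped_margin(9 - 2))   # 6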

The adjusted runs scored and runs allowed figures are totaled and plugged into a formula similar to Bill James's Pythagorean system: runs scored squared, divided by runs scored squared plus runs allowed squared. The total is then multiplied by 100, which expresses the number on a scale of 0 to 100, though I've begun carrying it out to one decimal so that teams don't appear to be tied when they are not.
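
In code, that final step looks like this (the adjusted season totals in the example are invented):

    def pythagorean_rating(adj_runs_scored, adj_runs_allowed):
        """Bill James's Pythagorean expectation, scaled to 0-100 and
        carried to one decimal, as described above."""
        rs2 = adj_runs_scored ** 2
        ra2 = adj_runs_allowed ** 2
        return round(100 * rs2 / (rs2 + ra2), 1)

    print(pythagorean_rating(350, 300))  # 57.6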

That's about it. Pretty simple.

What can skew these results? A couple of things. First, getting squashed by a really inferior team can set you back. But more than anything, a team with an extreme record in close games will see its rating drift away from its won-loss record. That is what has occurred in the case of Kansas. Up and down its schedule, you see that Kansas wins the close ones but often loses the blowouts. Last Sunday was the Jayhawks' season in microcosm: they lost the first game of a doubleheader to OU 17-5 but won the second game 7-5. As I mentioned, OU doesn't get full credit for that 12-run margin in the first game - it is capped at six - but KU still comes out minus-four (minus six plus two) even before you begin to make the other adjustments.
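
Using the cap function from the sketch above, the raw math on that doubleheader:

    # KU's Sunday doubleheader vs. OU, before any other adjustments:
    game1 = capped_margin(5 - 17)  # -6 (the 12-run loss caps at six)
    game2 = capped_margin(7 - 5)   # +2
    print(game1 + game2)           # -4, the minus-four cited above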

Overall, I think the system paints a very accurate picture. Nebraska is the best team in the Big 12. Texas is also strong but is a little down from what it has been in the past. Oklahoma is a veteran team that is really coming on. After that, you have two distinct groups. Missouri, Baylor and OSU are all flawed teams with good talent. Missouri has underachieved significantly and the discrepancy between its won-loss record and its power ranking reflects that.

Then you have the bottom four: Kansas, Texas Tech, Kansas State and Texas A&M. Kansas just hasn't been able to avoid the blowouts consistently enough to raise its rating. Sure, the Jayhawks have a chance to finish fourth in the conference by won-loss record. That would be a great accomplishment. But with the teams packed so tightly together, can you really say that a 10-11 team is obviously better than a 9-12 team based strictly on the conference standings? This is the complaint I've been getting from KU fans.

Other fans point to the RPI, which has Kansas in the top 40. I don't know the exact formula for RPI, but I am almost certain it doesn't use run differentials. It simply uses winning percentage adjusted for strength of schedule. (Someone is welcome to correct me if I am wrong in this belief.) Besides, it is not my intent to replicate the RPI system; that system is already out there. And as I don't know how it works, I don't trust it. It is important information because the NCAA uses it, and that is why I list it with the power rankings. But I don't know that it does what I set out to do: rate teams by their true level of ability.

KU has had a great season and I really hope it continues. But my system tells me that the Jayhawks have overachieved and, ultimately, I think the system will prove to be correct. In any event, I'm glad folks are paying attention to the ratings and they've obviously accomplished one thing that I wanted them to do: spark discussion.

- Brad
