Thursday, February 24, 2011

A Music Project

I cannot tell you how many times I've started to work on my Beethoven-Wagner post and stopped, unsure as to how exactly to proceed.  You see, the problem is, every time I try to put the thoughts together, I end up falling back to all of the work I did for my undergraduate thesis, tracing the spiritual and political motions of Beethoven's 3rd and 9th Symphonies.  More importantly, I reflect on how many holes I left in that project.  Even though the result was a nearly 40 page paper (cut down from a maximum of over 50 thanks to the wisdom of my advisor), it was a synopsis of what I really wanted to accomplish.

In reality, what I wanted to write was a book and not an essay.  Not a normal book about music, either.  Like the St. John's music curriculum that I went through three years in a row as an assistant, I wanted to think about and write about and talk about music at the intersection of rigorous music theory, simple music appreciation, real music history, and, perhaps above all, music philosophy.  I suppose you could call all of that, in short, "musicology," but that only obscures the particulars.

The problem is, my Beethoven-Wagner post - which, really, could be a one paragraph long "isn't this cool?" - hinges upon at least some semblance of musical knowledge from all of the above fields.  On some level my perception of the link between Tristan und Isolde and the 9th Symphony is merely "feeling," but there's so much history, music theory, and, perhaps above all, a philosophy of how to understand the meaning of music tied up in my understanding of the two works.  That makes it impossible, as T.S. Eliot says in Prufrock, to say exactly what I mean.

Does all that sound pretentious?  Perhaps, but this is one area where I don't care about sounding pretentious.  I respect that musical tastes differ, of course, but I also think that the out-of-hand dismissal of the project of understanding music - and especially classical music - smacks of willful ignorance.  Too often I have heard - from people who love baroque music to people who love hip hop - that to study music is to rob it of its life, its passion, and its power.  To me, that is an excuse.*  It is a shameful agreement to disagree.  For example, the tired "classical music is boring but I'm ok if you like it" argument strikes me as being lazy, principally because even a cursory study of classical music will demonstrate that it is certainly not boring.  Perhaps classical music concerts are boring, and the stuffy culture surrounding the music is often boring, and writing about classical music is frequently boring, but the music itself is intricate and complex, full of (objectively) more harmonic, rhythmic, formal, and melodic variation than the vast majority of modern music.  That means it might be more difficult, more complex, and thus less immediately accessible - all legitimate complaints - but hardly more boring.

*Furthermore, it misses the point.  One's appreciation of music increases with its study, just as in every other field.  It is difficult to find a cosmologist who is not even more awestruck by the universe than an amateur stargazer.  There are few sabermetricians who are less avid baseball fans than the RBI-junkie.  Few musicians, composers, and even music critics are not violently passionate about music, despite, and indeed because of, their knowledge.**

**OK, total tangent on this one.  Have you ever noticed that "despite" can usually be substituted for "because of," even though the two are, in principle, opposites?  It is an interesting quality of the narratives we build about success and failure that the two are so often interchangeable.  For example, "despite his speech impediment, the child grew up to be a successful politician."  Perhaps true, but "because of" fits well, too, because it is often the case, in such stories, that overcoming the speech impediment is what gave the child the confidence and drive to become a politician in the first place.  At the very least, the "victory despite impossible odds" story gets told because of the impossible odds.  Which is perhaps more to the point: the "despite" is the reason the story exists in the first place, and thus, while the success may not owe to the thing being overcome, the fact that it is noteworthy at all does.

Phew, where were we?

All of which brings me to the project: learning to listen to and talk about classical music.  Now, I have already begun that process - a process which inspired my thesis, and which was a significant factor in my decision to apply to UCSD's Communications program (with the aim of studying the learning and cultural acquisition of music) for my PhD - but I am no expert.  I am, I would argue, an expert learner, however, and that means that maybe, just maybe, I can drag someone somewhere along with me.  Or, at the very least - and more precisely, in any case - I can chronicle my learning so that it is traceable.  The project, then, is to listen, and in listening, to learn to listen, to Beethoven's 3rd.  If all goes well, then we can launch into the 9th and Tristan und Isolde, and perhaps other pieces as well (indeed, I have already done this with other posts on Bizet's Farandole and Brahms's Variations on a Theme by Haydn, and even Sacris Solemnis, a setting of Beethoven's 7th, but not as well as I would have liked).

Now this is not a one post project, or even a five post project.*  The Eroica's first movement, alone, might take a half dozen posts.  Nor will these posts be continuous.  You can still expect political wranglings, game reviews and observations, philosophical or literary musings, and of course, now that the season is coming, baseball analysis in this space.  Some might say that's a weakness of this blog: it lacks focus.  But I consider it a strength: it is vast, it contains multitudes.

*Indeed, the secret goal, here, is to write a book.  Perhaps a book no one would read, or - more importantly in our commercial age - no one would publish, but a book nonetheless.  Ambitious?  Perhaps, but you have to start somewhere, and whereas aspiring writers used to toil in anonymity for years before having their writing rejected by publishers, modern writers at least have the benefit of toiling publicly.

Regardless of the number of posts, or the amount of time, however, we'll be forming a narrative.  Perhaps not a perfectly linear one, and perhaps not a perfectly literary one.  Rather, I'm hoping to build - for myself and for my readers - a mythology of music, a philosophy of music, and, most importantly, a deeper understanding not only of how music works, but why it works, and why, above all, it matters.  That's no small goal, I realize, but it is a goal I believe in, and a goal that I think we forget too easily in a world where music is everywhere, and largely unexamined.  Let us examine it, friends, that we might find more joy in it that way.

Sunday, February 20, 2011

Losing the WAR: Rockies Pitchers

Last time we looked at the worst single-season offensive performances by Rockies players, putting together a 13-man team that would give the modern-day Pittsburgh Pirates a run for their money.  Today we'll fill out the roster with the 12 worst pitchers in Rockies history, beginning with the worst of the worst.

Starting Pitcher - Mike Hampton, 2002, -1.5 WAR
Other Stats: 7-15, 6.15 ERA, 78 ERA+, 74 Ks, 91 BBs, Silver Slugger Award (best hitting pitcher), $9.5 million salary

Yeah, he walked more hitters than he struck out.  Actually, Hampton was worth -0.6 WAR total if you count his positive 0.9 WAR offensive performance in 2002, but even with that boost Hampton would still crack this list.  I don't know what's more damning for Hampton, that he was a significantly more valuable offensive player than pitcher for the Rockies in 2002, or that he was, at the time, playing with the richest contract ever given to a pitcher.  The Rockies may have only paid $9.5 million for Hampton's considerable talents in 2002, but they continued to pay $10+ million a year to Hampton until 2008.  That's a lot of money for a guy who put up a negative WAR in his Rockies career.
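A quick aside on ERA+, since it shows up in every stat line in this post: it's the league ERA, adjusted for the pitcher's home park, divided by the pitcher's own ERA, times 100, so 100 is average and higher is better.  Here's a rough sketch of the idea; the league ERA and Coors park factor below are my own illustrative guesses, not official figures:

```python
def era_plus(era, lg_era, park_factor=1.0):
    """ERA+ ~ 100 * (park-adjusted league ERA) / pitcher's ERA; 100 is average."""
    return 100 * (lg_era * park_factor) / era

# Assumed numbers: a ~4.1 league ERA, a ~1.17 Coors park factor, Hampton's 6.15 ERA.
print(round(era_plus(6.15, lg_era=4.11, park_factor=1.17)))  # lands right around his 78
```

The point being: a 6.15 ERA at Coors is bad, but not quite as bad as a 6.15 ERA in a neutral park would be.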

Really, there's no question that Hampton deserves to be the "ace" of this team.  No player had a more disappointing career with the Rockies.  After Hampton was signed in 2001 - coming off of a career season for the Astros in 1999 and another solid season in 2000 with the New York Mets - expectations were extremely high for the Rockies.  There was talk of contending for a playoff spot or even a World Series title.  There was talk about the Rockies finally being able to attract high-quality free agents to Denver.  There was talk of the Monforts opening up the checkbook.  And then Hampton struggled through 2001 and collapsed in 2002, and the Rockies traded him in 2003, setting the franchise back five years.

Starting Pitcher - Pedro Astacio, 1998, -0.7 WAR
Other Stats: 13-14, 6.23 ERA, 83 ERA+, 39 HR surrendered (league leader), 17 HBP (league leader)

Pedro Astacio was the best pitcher on the Rockies from 1997 until he was traded in 2001, but the 1998 season was an exception.  Astacio was still figuring out pitching at Coors - evidenced by the 39 homers he surrendered in '98 (and the 38 in '99) - and the Rockies had not hatched the humidor idea.  Add to that some bad luck - Astacio's FIP was 5.24, still bad, but much better than his 6.23 ERA - and Astacio's season looks much worse than it really was.
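For the curious, FIP is built from only the outcomes a pitcher controls directly, ignoring balls in play (which is where Coors luck lives).  A sketch of the standard formula follows; note that the stat line in the example is made up for illustration, not Astacio's actual 1998 numbers, and the constant (roughly 3.10) actually varies a bit by season:

```python
def fip(hr, bb, hbp, k, ip, constant=3.10):
    """Fielding Independent Pitching: only HR, walks, hit batters, and strikeouts
    count; the constant is chosen so league FIP roughly matches league ERA."""
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + constant

# Made-up Coors-era line: homer-prone, average strikeout rate.
print(round(fip(hr=35, bb=60, hbp=10, k=150, ip=200.0), 2))
```

The gap between a pitcher's ERA and his FIP is a decent first guess at how much luck and defense, rather than skill, shaped a season - which is exactly the Astacio story above.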

Despite a rough '98, it's tough to criticize Astacio overmuch because all of those things that went wrong for him in '98 went right in '99 (except the home runs).  He finished 17-11 with a 5.04 ERA (actually a 115 ERA+), recorded 210 strikeouts, and put up a 4.56 FIP at the height of the Coors Field bandbox era.  In all, Astacio amassed 5.3 WAR in '99, easily compensating for his '98 debacle.  Which really goes to show, it was - and is - far too easy to misinterpret pitching performances by Rockies pitchers.

Starting Pitcher - Jamey Wright, 2005, -0.6 WAR
Other Stats: 8-16, 5.46 ERA, 88 ERA+, 171.1 IP, 201 H, 101 K, 81 BB

Jamey Wright owns the dubious distinction of being one of only 15 pitchers in all of major league history with a career ERA of 5.00 or higher whilst pitching at least 1000 innings.  A part of that comes from pitching for the Rockies for 6 seasons, but Wright has spent 9 seasons on other teams, so it's not all Coors's fault.  Jamey's 2005 was, needless to say, pretty bad, but not spectacularly so.  Really, the Rockies couldn't have expected much more from Wright than what he gave them: over 150 innings of below average pitching, good enough to not be completely embarrassing, but not good enough to actually contribute to a winning team.  At the time the Rockies were still a couple years away from competing, so there was little harm in turning the ball over to Wright every fifth day, especially given his under $1 million salary.

Starting Pitcher - Byung-Hyun Kim, 2006, -0.1 WAR
Other Stats: 8-12, 5.57 ERA, 88 ERA+, 1.55 WHIP, 57 priceless at bats

If Jamey Wright somehow had an unembarrassing -0.6 WAR season in 2005, Kim had an extremely embarrassing -0.1 WAR season in 2006.  His pitching was pretty much in line with Wright's in 2005.  He was, in short, an inning-eater (though, because he came up as a closer and because he had a propensity for not throwing a lot of strikes, Kim didn't tend to make it very deep in games), a stop-gap while the Rockies waited for Ubaldo Jimenez and company to vault into the Majors in 2007.  What made Kim embarrassing was his hitting.  Of course, you can't dock a pitcher for being a bad hitter, but I have never seen someone look more lost at the plate than Byung-Hyun Kim did.  He tried to hit like Ichiro, and ended up looking like a 5-year-old using a bat about 12 ounces too heavy.*  Miraculously, Kim hit .160 in 2006, in the best offensive season of his career (his career line: .124/.156/.144).  As a fan who watched many of those 8 hits he collected in 2006, I can safely say that he was lucky.

*This is also a reasonable description of how he looked when he pitched.

Starting Pitcher - Josh Fogg, 2006, -0.4 WAR
Other Stats: 11-9, 5.49 ERA, 89 ERA+, 1.55 WHIP, 1 absurd shutout, 93 K, 60 BB, 206 H, 172 IP

Josh Fogg was a thoroughly unremarkable pitcher throughout his Rockies career.  He became a fan favorite in 2007 thanks to a smattering of lucky performances against big name pitchers, earning himself the nickname "Dragon Slayer."  In reality, he was a player who never seemed to be trying all that hard, and who never, in his entire career, managed an ERA+ better than 97 (his 2007 number).  Even so, his pitch-to-contact style - as evidenced by his combined 153 walks and strikeouts in 172 innings in 2006 - led to an improbable shutout.  Against the Seattle Mariners, Fogg pitched a two-hitter, walking one and thus facing the minimum 27 opponent batters thanks to three Mariners double plays.  Needless to say, his game score of 83 was by far his best in an otherwise uninspiring season.

Mop-up / Spot Starter - Jose Acevedo, 2005, -0.9 WAR
Other Stats: 2-4, 6.47 ERA, 1.59 WHIP, 74 ERA+

The Rockies have become somewhat famous for their reclamation projects.  Fogg, Kim, and Jorge De La Rosa (especially De La Rosa) have earned Bob Apodaca a reputation as something of a miracle worker.  The thing is, the Rockies take a flier on about 5 pitchers a year who they hope they can transform from mediocre to passable, or passable to good.*  The reality is, for every De La Rosa or Jason Hammel that Apodaca does manage to set right, there are a dozen Jose Acevedos who never make it.  Which is fine, because they don't really cost anything.  The problem with Acevedo in particular, however, was that he stuck around long enough to put up miserable numbers in 2005.  Not surprisingly, '05 was Acevedo's last season as a Major Leaguer.

* This year's project is John Maine, a former Met who was excellent in 2006 and 2007, good in 2008, and then completely fell off of a cliff.  Keep your eye on him, though, because he's got as much raw talent as anyone 'Dac has had to work with in a long time.

Relief Pitcher - Mike DeJean, 1999, -1.0 WAR
Other Stats: 2-4, 8.41 ERA, 56 G, 61.0 IP, 83 H, 13 HR, 32 BB, 31 K, 1.89 WHIP

No, that's not a typo.  Mike DeJean pitched his way to an 8.41 ERA over 61 innings in 1999.  He really did surrender 13 home runs while facing 288 batters.  DeJean is a case study for evaluating relief pitchers, because while his 1999 is unfathomably bad - and, really, it's tough for relievers to pick up even this much in the way of positive or negative WAR because they pitch so relatively few innings - DeJean was actually a good pitcher for the Rockies in 1997, 1998, and 2000.  What's more, he pitched well for a few other teams before returning to Colorado for some solid performances in 2005.  The moral of the story?  Even one transcendently bad season for a relief pitcher does not mean that pitcher is bad.

Relief Pitcher - Gabe White, 2001, -0.4 WAR
Other Stats: 1-7, 6.25 ERA, 67.2 IP, 18 HR

White was unlucky enough to surrender 18 homers in 2001 against only 290 batters faced.  I say "unlucky," because throughout most of his career White did much better than that.  Indeed, like DeJean, White had more good seasons than bad, though because many of his teams used him as a lefty specialist, he rarely accumulated much more than 0.5 WAR in either direction.

White - like Astacio - is the rare player on this list who would make the "best Rockies ever" team, too.  In 2000, the year before his homer-happy debacle that earned him this spot, White put up a career-high 3.3 WAR, finishing 11-2 (out of the bullpen!) with a 2.17 ERA.  That's pretty good, and goes to show how misleading the small sample sizes of relief pitchers can be.  In 2000, for example, White surrendered 5 homers in 83.0 innings, struck out 82 and walked only 14.  In 2001, he gave up 18 (as listed above), struck out 47 and walked 26.  So, three times more homers, about half as many strikeouts, and nearly twice as many walks.  Did he change that much?  Was he burnt out after a busy 2000?  Was he just unlucky in 2001?  Was he exceptionally lucky in 2000?  It's hard to say; probably the answer is some combination of the four, but we don't have enough data to reach a conclusion.

Relief Pitcher - Matt Herges, 2008, -0.4 WAR
Other Stats: 3-4, 5.04 ERA, 93 ERA+, 46 K, 24 BB, 1.60 WHIP, 64.1 IP

Herges, or as a friend calls him, "Hergie Pie," is the definition of a journeyman, barely-above-replacement pitcher.  Taking away his exceptional (for him) first and second full seasons with the Dodgers, he was worth 0.4 WAR over his career, bouncing around so much that he actually ended up playing for every single team in the National League West, as well as the Marlins, the Indians, and the Expos.  While there is little worth mentioning about Herges, it is significant that he did not start his Major League career until he was 29 years old, but managed to last until he was 39.  Unfortunately, the Rockies gave him the biggest payday of his career after a solid campaign in the miracle 2007 season, floating him $2 million for his services in 2008.  Like just about everyone else with the team, he couldn't replicate his 2007 success, and the Rockies were stuck with a (relatively) expensive below-replacement-level, 38-year-old pitcher with the dubious distinction of being mentioned in the Mitchell Report.  Fun.

Lefty Specialist - Mike Munoz, 1995, -0.7 WAR
Other Stats: 2-4, 7.42 ERA, 73 ERA+, 37 K, 27 BB, 9 HR, 43.2 IP, 64 G

Munoz wins the "most unexpected member of this team" award, at least for me.  Perhaps because I was still an optimistic young fan in 1995, far too young to really know much of anything about baseball except that I loved the Rockies and that they were making the playoffs in their third year of existence, I didn't ever consider that maybe some of the players might not be all that good.

Munoz had a better year with Colorado in 1994, but throughout his career he was always a LOOGY (Lefty One Out GuY).  Anyone who appears in over 450 games but pitches only 364 innings is probably facing one or two batters at a time most nights.  And that was Munoz.  The result is that a LOOGY sometimes has a bad season because of a few bad outings, or because he just happens to miss the strike zone on a few key pitches.  Despite the small sample size and pitching in the inaugural season of Coors Field, however, it would be impossible to leave Munoz's '95 off of this team.  For a LOOGY to accumulate -0.7 WAR - in a shortened season no less - is impressive.  In a bad way.

Setup Man - Todd Jones, 2003, -1.4 WAR 
Other Stats: 1-4, 8.24 ERA, 61 ERA+, 39.1 IP, 61 H, 8 HR, 2.01 WHIP

-1.4 WAR in less than forty innings!  I had an unofficial rule that pitchers had to reach 50 innings to be considered for one of these spots (to weed out some of the -0.2 WAR in 10 innings kind of guys), but Jones easily earned this spot despite his midseason release and subsequent signing with Boston.  Jones was a "veteran presence" guy gone wrong for Colorado.  Once upon a time he was a good closer for the Detroit Tigers, but by his mid-30s he was - or looked - done as a Major League pitcher.  Coors Field proved too much for him, and he was, to top it off, something of an ass, known for his homophobia, his bigotry, and his general closed-mindedness.  On a team built - for better or worse - around character, Jones did not fit in.  For some reason the Sporting News gave him a column late in his career,* and the writing was (or is, it seems he may still have this gig), well...  It was subpar.

*Indeed, his column began while he was on the Rockies, and while it was called "The Closer," he never played that role in Colorado.  Somehow he miraculously became a passable closer again from 2005-2008.

Closer - Shawn Chacon, 2004, -1.7 WAR
Other Stats: 1-9, 7.11 ERA, 70 ERA+, 35 Saves in 44 Chances, 1.94 WHIP

And at last we reach the end.  Assuming this team somehow ever had a lead going into the ninth inning, it would be almost certain to squander it, thanks to Shawn Chacon.  The Greeley native was the Rockies' golden child, becoming the first Colorado pitcher to make an All-Star game in 2003.  Thanks to injuries, and in an effort to preserve his arm, the Rockies moved Chacon to the pen in 2004, slotting him in as their closer.  The results were disastrous, and Chacon was gone from the Rockies by mid-2005, traded to the Yankees for Ramon Ramirez.  Of course, Ramirez was later flipped to the Kansas City Royals for Jorge De La Rosa, so it's not all bad.

Anyway, despite his solid 2003, Chacon never really demonstrated that he had the talent to pitch at the Major League level.  Rockies fans and ownership were overly optimistic about him from day one, and while he may have stuck - had he not faced injury trouble - as a number four or five starter, he was overmatched as an ace.  He was also overmatched - or at least misplaced - as a closer, struggling to handle the role, especially in a ballpark as unforgiving as Coors Field.


Ironically, the Rockies probably have a much better "worst ever" team than most franchises thanks to their relatively recent entry to the league (the same reason they have a much worse "best ever" team).  Even so, the team we've assembled in these last two posts would be absolutely dreadful.  There are a couple good hitters who can't field and a couple good fielders that can't hit, but mostly it's players who can't hit, field, pitch, or do much of anything else on a baseball field.  Or, at least, players who couldn't do much of anything for a particular season.

Baseball-Reference WAR (as opposed to Fangraphs WAR; we used BR in these posts) is calculated to a standard of 52 wins.  That is, a replacement-level team should win roughly 52 games (and lose 110).  This Rockies team, as a whole, is a -21.5 WAR team, meaning they would go about 30-132, a .185 winning percentage.  I also played around with a lineup tool that estimated the offense would produce roughly 750 runs.  Given that total, we can estimate - using the Pythagorean expectation - that in order to finish 30-132, this team would surrender roughly 1575 runs.  Or, in other words, the average score of a Worst Rockies Ever game would be about 10 to 4.5.  Ten!  If that seems high, take a look at the pitching staff again, and remember that it has a pretty much terrible defense - especially in the outfield - to back it up.  Ten runs a game might even seem low.
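For anyone who wants to check the arithmetic, here is a sketch using the simple exponent-2 Pythagorean expectation.  The 52-win baseline, the -21.5 team WAR, and the 750-run offense are the inputs above; Baseball-Reference actually uses a slightly different exponent, so treat the run total as a ballpark figure:

```python
GAMES = 162
REPLACEMENT_WINS = 52          # B-R's replacement-level baseline over a full season
team_war = -21.5               # total WAR of this all-worst roster

wins = round(REPLACEMENT_WINS + team_war)   # about 30 wins, i.e. a 30-132 record
win_pct = wins / GAMES

runs_scored = 750              # lineup-tool estimate for the offense
# Pythagorean expectation: win% = RS^2 / (RS^2 + RA^2); solve for runs allowed.
runs_allowed = runs_scored * ((1 - win_pct) / win_pct) ** 0.5

print(f"{wins}-{GAMES - wins} ({win_pct:.3f}), "
      f"~{runs_allowed:.0f} runs allowed ({runs_allowed / GAMES:.1f} per game)")
```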

Fortunately, the real Rockies are much, much better than this.  And, even more fortunately, the season is about to start.

Wednesday, February 16, 2011

Losing the WAR: Colorado Rockies Hitters

A popular mini-project that baseball fans like to play around with is building fictional rosters of the best players in team history, or the best players in the league today, or the best players all-time.  The thing is, these lists are usually focused upon just that: the best.  What if we looked at the question from the other side?  Who were the worst players in a team's history?  How bad would an all-time worst Rockies (since, you know, we're all Rockies fans in Nicht Diese Tone land) roster be?

I went through Baseball-Reference's Rockies pages to dig up the worst single-season performances in Rockies history, by position.  For simplicity's sake I used WAR as my metric of choice, but in a few cases of ties I went with a combination of other statistics and personal opinion.  The roster, along with comments, is as follows:

Catcher - Joe Girardi, 1995, -0.6 WAR
Other Stats: .262/.308/.359, 58 OPS+, 76 Ks to 29 BBs.

Girardi was never a great player, and this was one of his worst seasons.  To his credit, the Rockies did make the playoffs in '95, but not so much due to Joe's contributions.  His .308 OBP was especially uninspiring, and while he did manage to hit 8 homers, that "power" was almost certainly the result of the inaugural season at Coors Field (6 of the 8 came at home).  Indeed, Girardi was a test-case for Coors Field, hitting .291 at Coors and .228 on the road.  Remarkably, Girardi - who was 30 years old in '95 - went on to have a number of decent, if unspectacular seasons, with the Yankees and Cubs in the late '90s.

First Base - Todd Helton, 2010, 0.4 WAR
Other Stats: .256/.362/.367, 87 OPS+, 8 HR, 37 RBI, 118 G, 90 Ks to 67 BBs.

Let's get this one over with...  Helton is, as you know, probably the best player in Rockies history.  Well, that's not totally clear, because there's a good case to be made for Larry Walker.  But Helton has spent his entire career in purple pinstripes, and figures to retire with Colorado in the next few seasons.  Indeed, it was practically inevitable that he would hold this "worst ever season by a Rockies first baseman" spot if only because the Rockies have only had two starting first basemen in their entire history.  Think about that for a minute.  The Rockies have been around since 1993, and the only contenders for this spot are Helton and Andres Galarraga.  The Big Cat actually did have a season of 0.2 WAR in 1994, but his value that year - like Helton's in 2010 - came entirely from defense, and Andres still slugged over .500 in the strike-shortened '94 campaign (it's just that the park adjustment is so big that his offense looks bad).

Anyway, it's about time Helton started moving towards retirement.  2010 was the first season in Helton's career in which he struck out more than he walked (first since his call-up in '97, anyway, when he struck out 11 times and walked 8 in 35 games), and his 8 home runs are, frankly, sad for a first baseman playing in Colorado.

Second Base - Mike Lansing, 1998, -0.2 WAR
Other Stats: .276/.325/.411, 78 OPS+, 18 GDP, 12 HR (after hitting 20 the year before in Montreal)

Lansing is the rare player who actually got worse after coming to Coors Field.  In 1997, playing for the Expos, he hit .281/.338/.472, mashed 20 homers, and accumulated 2.5 WAR.  Not a bad season.  Hardly what you build a roster around, but exactly the kind of player you want to have to fill out a team.  Lansing fell off the proverbial cliff in '98, though, and while he was still a solid defender, his inability to hit made him worse than replacement level for a team in dire need of an identity up the middle and at the top of the lineup.

Third Base - Ian Stewart, 2009, 0.2 WAR
Other Stats: .228/.322/.464, 95 OPS+, -0.8 defensive WAR

The Rockies have never been a great team at third base, but have usually been solid.  Charlie Hayes, Jeff Cirillo, Garrett Atkins.  No one's idea of world-beaters, but good enough to play on a contender.  Stewart, of course, was supposed to be the guy who went out and beat worlds.  A top draft pick, a highly touted prospect (so good that the Rockies didn't pick Evan Longoria when they had the chance), and now a frequent "potential breakout" player every season, Stewart just hasn't put it together, and time is running out.

One of Stewie's biggest problems (besides his work ethic, which many experts have called into question) is his reliance on the long ball.  Stewart is very much a "three true outcomes" player (meaning a large percentage of his at bats end in either walks, homers, or strikeouts) who works the count and waits for his pitch.  The problem is, he's often too patient, taking drive-able pitches, and thus doesn't draw nearly enough walks.

That said, in 2009 Stewart's value was hindered significantly because the Rockies trotted him out to second base in 21 games and the outfield for 9, enough to ruin his fielding WAR for the whole season.  Stewie is actually a decent, if unspectacular, fielder at third - certainly better than his predecessor, Atkins.  What's more, while Stewart ultimately had an unimpressive 2009, Atkins gets honorable mention (and a bench spot) for being even worse than Stewart in this playoff year.  Atkins misses out on the starting spot only because the Rockies benched him midway through the season, leaving him nowhere near the theoretical 500 PA requirement I set for starters on this team.

Shortstop - Neifi Perez, 1998, -1.2 WAR
Other Stats: 162 G, 712 PA, .274/.313/.382, 68 OPS+, -0.9 defensive WAR, 22 Sac Bunts

Better known to Cubs fans as "Ne!f!," Perez was, for a time, possibly the worst player in baseball.  It's truly remarkable that he lasted in the majors from 1996 to 2007, posting a career WAR of exactly 0.1.  He was, in short, replacement level for his career, just as good or bad as any number of minor leaguers.  What makes his longevity even more surprising - or maybe explains it - is his dubious distinction of being one of the few players to face three separate suspensions for testing positive for amphetamines.  Yeah, Neifi is just that kind of player.

Anyway, Perez's 1998 was one of his worst seasons, and yet he played every single game, led the team in plate appearances, and often hit near the top of the lineup (and was asked to give up outs by sacrifice bunting a lot).  What's more, in addition to spotty-at-best offensive ability, WAR believes that Perez was a vastly overrated defender.  His 1998 numbers bear that out.  Even using traditional metrics, it's hard to see him as a great shortstop: he committed 20 errors for a fielding percentage of .975.  Not terrible, but far from what you expect out of a guy with a .313 OBP playing at Coors Field.

Left Field - Dante Bichette, 1999, -2.8 WAR
Other Stats: .298/.354/.541, 102 OPS+, 6 SB and 6 CS, 34 HR, 133 RBI

By WAR, this was the worst season of any player in Rockies history.  Not even Mike Hampton's 2002 (we'll get there) could touch Bichette in '99.  At 35 Dante was done - certainly done in the field - his legs gone and his bat slipping.  Still, Dante was a decent hitter late in his career (his 102 OPS+ means he was just above league average), and by traditional numbers like HR and RBI he still looked pretty good.  The biggest issue here was defense.  13 errors in left field is no one's idea of acceptable, leading to an abysmal .952 fielding percentage.  Add to that an utter lack of range (his Range Factor of 1.78 was well below the league average of 1.98) and you've got a recipe for disaster.  Obviously Bichette can still be the cleanup hitter for this all-time worst Rockies team, but he's going to give up just as much as he provides on offense by being a butcher in the field.

Center Field - Cory Sullivan, 2005, -1.1 WAR
Other Stats: .294/.343/.386, 83 OPS+, 83 Ks to 28 BBs, 23 EBH (meaning 88 singles), 64 R

Ugh.  In a long line of no-hit, kind-of-fast, ok defensive center fielders,* Sullivan has the dubious distinction of being the worst.  And that's saying something; not everyone can be worse than Curtis Goodwin.  Sullivan, however, was remarkable because unlike the other "kind-of-fast" players, he wasn't really fast at all.  For his career - which is now almost 500 games long - he has only 32 stolen bases, and he has been caught 10 times.  Not exactly a speed demon (Sullivan's replacement Willy Taveras once stole 68 in a single season).

*To wit: Juan Pierre, Curtis Goodwin, Tom Goodwin, Darryl Hamilton, Willy Taveras, and now Dexter Fowler.**  The Rockies LOVE this type of player, for some reason.

**No, Dexter Fowler can't hit.  He can't.  This is an argument you won't win.  In over 1000 PAs in his career, Fowler's slash line is .259/.351/.401.  At Coors Field half the time (his road slashes: .220/.308/.334; translation: yikes!).  That's not cutting it.  Look, I get that he has an awesome name, and that he's super fast, and he's probably the coolest player on the Rockies (except maybe Ubaldo), but that doesn't change the fact that he's basically another iteration*** of the Rockies center-field prototype.

***OK, I'll concede that he has a chance to be better because he's 24.  These things aren't predictive, but his top comp at age 24, according to baseball reference, is Lou Brock.  Then again, some of his other top comps include Felix Pie and Jermaine Allensworth.  So, yeah.  There's that.

I bring up the speed issue because that's really Sullivan's biggest problem.  He fits the mold of a certain type of player, but doesn't actually do the things those players do.  So, for example, he hits a lot of singles, but he doesn't steal bases or stretch extra base hits enough to really make up for his lack of power.  He doesn't score many runs, in part because he often hits at the bottom of the lineup, but also because he can only ever get himself to first base.  He also strikes out a ton, and rarely walks.  His biggest strength, in many ways, is his skill as a bunter.  Plus, he's an overrated defensive player (career -1.8 WAR in the outfield).  Sullivan, for his career, is below replacement level (-1.2 WAR total).  How do players like him stick in the Majors?  We'll never know.

Right Field - Brad Hawpe, 2008, -1.7 WAR
Other Stats: .283/.381/.498, 121 OPS+, 134 Ks to 76 BB, -4.1 defensive WAR

Dante and Brad are on this list for the exact same reason.  Only, Dante was, early in his career, a passable defensive outfielder.  This was never true of Hawpe, who has always been a first baseman playing right field.  2008 was his worst defensive season (at a stunning -4.1 WAR, or over 40 runs!) by far, and his defense alone earned him his spot among the worst Rockies of all time.

Let's not let Hawpe off the hook offensively, however.  In 2007 Brad was a monster at the plate, setting career highs in home runs, RBI, runs scored, BB, OBP, SLG, and OPS+.  In 2008 he regressed hard, his offensive WAR dropping from 3.2 to 2.4.  That's not a ton, but it's still nearly a full win.  Throw in a truly horrid defensive season, and you have a recipe for disaster.  Also, I still blame Hawpe for blocking Seth Smith - an infinitely more talented player - starting in 2008 and throughout Smith's early career.  Hawpe's illusory value as a hitter is at least partially responsible for stunting Smith's development to the point that he may never realize his potential.  That and Jim Tracy being a terrible* manager.

*By all accounts Tracy is great in the clubhouse, well-liked, and good at getting the best effort out of his guys, for what that's worth.  But tactically...  Ew.  And he does seem to have an irrational dislike of certain players (Chris Iannetta, Seth Smith) and an irrational love for others (Spilly, Jason Giambi).  Plus, he's going to pitch Ubaldo Jimenez's arm off within the next two seasons.  Mark my words, Ubaldo will be done as an effective pitcher by 2013.  And that really upsets me.

Given Hawpe's age in 2007 (28 years old), you might think the Rockies could have seen some drop-off coming.  Traditionally players peak around 27 or 28, and even younger defensively.  But the Rockies held onto Hawpe, who promptly fell off of a cliff in 2010.  Add him with Garrett Atkins to the "players the Rockies maybe should have gotten rid of sooner" trio.  The final member of the trio?

Bench, Outfield - Ryan Spilborghs, 2009, -0.7 WAR
Other Stats: 393 PA, .241/.310/.395, 1 amazing walkoff grandslam, 77 OPS+

OK, yes, Ryan Spilborghs hit a walkoff grand slam against the Giants in a key moment in the playoff race in an extra inning game that - to top it off - I attended (which is remarkable because I don't live in Colorado anymore, and didn't at the time; that was I think the only home game I made it to in 2009).  So that goes in his favor.  On the other hand, he was a miserable defender, and put up an OBP of .310.  .310!  As a Rockie!

Spilly has been a passable fourth outfielder throughout his career, but in 2009 and 2010 (now that he's over 30) he has suddenly found himself playing 130+ games a season.  Problem is, if you take away his 2.1 WAR 2007 (in his age 27 season, also known as his prime), he's a career below replacement level player.  He also has, as any Rockies fan with a sense of aesthetics can tell you, below replacement level facial hair.

Bench, Outfield - Darryl Hamilton, 1999, -1.0 WAR
Other Stats: .303/.374/.389, 76 OPS+, -0.8 defensive WAR, 4 SB and 5 CS, 91 G, traded mid season

Look who it is!  Another no-hit, kind-of-fast, ok-at-defense center fielder!  Hamilton was perhaps the worst of the bunch, lasting only one season (that is, half of '98 and half of '99) before getting traded.  The best part?  The Rockies got Hamilton for... Wait for it... Wait for it... Ellis Burks!  As in, the best center fielder in Rockies history (at least before Cargo, who will likely settle in RF anyway).

Now Burks was 32 at the time, but he went on to have productive seasons in 1999, 2000, 2001 and 2002.  And, what's more, Hamilton was also 32, so it's not like the Rockies were getting younger.  Hamilton, meanwhile, was flipped in '99 for Rigo Beltran (who pitched 12 innings for the Rockies and gave up 15 runs) and Brian McRae (another no-hit, kind-of-fast center fielder who played for the team for about a week).  So, yeah, Darryl Hamilton was a go-between in the classic "trade Ellis Burks for nothing" move.

Anyway, Hamilton was Cory Sullivan before Cory Sullivan was Cory Sullivan.  At least, to his credit, he did have a few good seasons early in his career.  As a Rockie, however, he was less than replacement level, and he especially stands out because the Rockies gave up one of their best players for him.

Bench, Infield - Walt Weiss, 1994, -0.7 WAR
Other Stats: .251/.336/.303. Wait, look at that again.  A .303 SLG.  In Colorado.  That's all you need to know.

Walt Weiss once hit a ball out of the infield in the air, but the wind was blowing out at 30 MPH that day.

Weiss owns the dubious distinction of "leading" the NL with his bafflingly low .303 SLG in 1994.  That Weiss couldn't hit for power, at least, was made up for by his inability to hit for average, his mediocre defense, and his complete lack of speed.  He did, at least, have a pretty good eye, and rarely struck out.  So there's that.  But everything else about him - throughout his career - screamed "bench player."  And yet, he was a starter for no less than nine seasons, once made the All-Star game, and won the Rookie of the Year in 1988.  Weiss did have good seasons - he even played decently well for the Rockies from 1995-1997 - but '94 was one of his worst.

Bench, Infield - Garrett Atkins, 2009, -0.2 WAR
Other Stats: .226/.308/.342, 64 OPS+, 9 HR in 399 PA, $7.0 Million salary.

Ironically, Garrett's worst season was also his highest-paid one.  Despite only really playing half a full load - thanks to a mid-season benching - Atkins accumulated -0.7 WAR offensively and was, in fact, one of the inspirations for this series.  In all, I couldn't elevate him over Stewart because he didn't play enough, but his 2009 remains one of the worst performances in Rockies history from a player everyone expected so much more from.  Atkins was only 29 in 2009, and was coming off of a challenging 2008.  But in '08 he still hit 21 homers and posted an SLG of .452, ending up with a 1.0 WAR.  His complete and total collapse was surprising to say the least.  Hell, it's worth taking a look at his career arc, by WAR:

2003 and 2004: -0.7 WAR (September callups)
2005: 1.5 WAR
2006: 6.4 WAR
2007: 2.9 WAR
2008: 1.0 WAR
2009: -0.2 WAR
2010: -1.0 WAR (with Baltimore)

Atkins went from MVP votes in 2006 to not-good-enough-to-start-for-the-Orioles in 2010.  Wow.

Bench, Catcher - Kirt Manwaring, 1997, -2.1 WAR
Other Stats: .226/.291/.276!!!!!!! 375 PA, 1 HR, 10 GDP, 38 OPS+, -0.2 defensive WAR

Actually, those stats don't even capture the worst part of Kirt Manwaring's 1997.  Let me run another set of numbers by you, so you can compare them with Manwaring's.

.297/.386/.535, 120 OPS+, 17 HR in 298 PA, +0.2 defensive WAR, 1.8 total WAR.

That was Manwaring's "backup," Jeff Reed.  Reed had the single best season of his career in 1997, outperforming Manwaring in every conceivable way.  And yet the two split time close to evenly all season.  Indeed, Manwaring had 77 more plate appearances than Reed, despite an OBP and SLG below .300!

To top it off, the Rockies actually contended in 1997, finishing 83-79, only 7 games back of the Giants.  Larry Walker won the MVP with his best season, and the team was really only a couple pieces and parts away from a playoff appearance.  One of those pieces was Jeff Reed, who languished on the bench while Kirt Manwaring put up what has to be the single worst offensive season in Rockies history.

I know what you're thinking: if Manwaring was that bad overall, what did his splits look like?  I'm glad you asked.  Away from Coors, Kirt's line was a shocking .198/.279/.222.  That, my friends, is pitcher territory.  In theory, Manwaring was a terrific defensive catcher, and while WAR doesn't like him in 1997, evaluating catcher defense is notoriously difficult.  Even if, however, we give Manwaring the benefit of the doubt...  Even if we throw a full win at him on defense, he'd still have one of the worst seasons in Rockies history.

The only thing keeping Manwaring out of the "starting lineup" for this team is his lack of playing time.  If you are so inclined, though, you can slot him in instead of Girardi.

That wraps up our look at the worst position players in Rockies history.  Next up, the pitching staff.  You'll never guess who the "ace" is. (Hint: he's a better hitter than Kirt Manwaring).

Tuesday, February 15, 2011

NBA League Size and Competitiveness, Part the Last: Calculating Competition

Today we embark on a journey through the perilous land of inventing your own statistics.  As a wrap-up for this series on NBA competitiveness and league size, I wanted to create a kind of competitiveness index based upon the regular season results in any given season.  That has proved to be a more difficult task than I originally anticipated, for reasons which will become clear.

First off, though, why would I want to do something like this?  Well, as I discussed in Part One, I'm reading The Book of Basketball, by Bill Simmons, and was struck by how definitively he assesses which NBA seasons were competitive and which weren't.  In particular, some of the early seasons in the NBA sparked comments like, "Everyone had a good team back then."  I wanted to try to figure out if he was right because, as sabermetrics has taught us, often people who are passionate and well-informed fans of a sport still don't really understand what's going on.

For example, in baseball it was long believed that carrying a .300 batting average alone was sufficient to make you a good hitter.  "A .300 hitter" was - still is - an honorable appellation, as well as a sine qua non of baseball success.  What about a player like Juan Pierre, though, whose career .298 average puts him close enough to be called a .300 hitter?  Is he really any good?  Old-time baseball wisdom would say yes.  He's fast, he hits for a high average, and he's the kind of guy that people assume is a good fielder, whether he is or not.  But even offensively, you can dive deeper into his batting lines and see that he's a deeply, deeply flawed player.

You see, Juan Pierre does not really draw walks.  Nor does he hit for power.  So despite a career .300 average, he sports a .347 OBP - not bad, but not good enough for someone who aspires to be an integral part of a team's success.  Moreover, his .366 career slugging percentage means that he's a singles hitter.  Those many hits he does generate aren't in the gaps or over the fence (as evidenced by his 14 career homers in almost 1600 games).  Now, the traditional baseball viewpoint would be that all of Pierre's singles are made up for by his stolen bases...  Which is fair, except he has led the league in caught stealing six times, and in stolen bases only three times.
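Since the argument here turns on the difference between batting average, OBP, and slugging, here's a minimal sketch of how the three slash-line stats are computed.  The counting stats below are made up for illustration (they are not Pierre's actual totals), chosen to produce a Pierre-style "empty .300" line:

```python
def slash_line(ab, h, bb, hbp, sf, doubles, triples, hr):
    """Return (AVG, OBP, SLG) from basic counting stats."""
    singles = h - doubles - triples - hr
    total_bases = singles + 2 * doubles + 3 * triples + 4 * hr
    avg = h / ab
    obp = (h + bb + hbp) / (ab + bb + hbp + sf)
    slg = total_bases / ab
    return avg, obp, slg

# A made-up season: lots of singles, few walks, almost no power.
avg, obp, slg = slash_line(ab=600, h=180, bb=30, hbp=5, sf=5,
                           doubles=20, triples=5, hr=2)
print(f"{avg:.3f}/{obp:.3f}/{slg:.3f}")  # a .300 average, but modest OBP and SLG
```

Note how the .300 average survives while OBP and SLG stay modest: with few walks and almost no extra-base hits, all those singles don't add up to much.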

 Which is all to say that being a .300 hitter alone used to look great, and still looks great.  But looks can be deceiving.  No one should confuse Juan Pierre with a great hitter.  Similarly, sometimes a league might look competitive without actually being competitive.  And so I embarked on this little blog-project to prove Simmons right and/or wrong.

In Part Two I showed that, while defining competition - let alone measuring it - is very difficult, we can at least see that, as the league gets larger, so too does the standard deviation of winning percentage.  From one perspective that means that the league is getting less competitive - in the sense that teams are less jumbled together - but from another it means the league is getting more competitive - in the sense that there are more elite teams in any given season.  And that's exactly what my work for today's post shows.

What I did was develop a formula for "competitiveness," using the number of above-.500 teams, the mean of their winning percentages, and the standard deviation of their winning percentages.  My reasoning was this: if a higher percentage of teams are above .500 in a given season, the league is more competitive.  Similarly, the higher the average winning percentage of those teams, the more competitive the league is.  Lastly, the more condensed those winning percentages are (the lower the standard deviation), the more competitive the league is.  The advantage of this approach, of course, is that we can completely ignore any team that finished .500 or worse.  Those teams, I reasoned, don't really count (even if many of them do make the playoffs, thanks to the NBA's "everybody makes it" attitude towards the postseason).  The disadvantage, as the statistically acute among you will see, is that the components I have selected here are all closely related to standard deviation of winning percentage league wide.

What does that mean?  Well, let me show you.

X - Number of teams, Y - "Competitiveness"
My formula for competitiveness is messy, but worth sharing.  Brackets indicate the separate components, which I tried to normalize so that 1 was more or less "average":

[(0.5 + Percentage of teams above .500)] x [10 x (mean above .500 - .5)] x [(stdev above .500 - mean above .500) / (stdev above .500 + mean above .500)] x 50

I multiplied the whole thing by 50 just to pull it up into a more readable and intuitive range.  Basically, 50 is normal (as you can see, the trendline above is close to, though not quite at, 50), while anything above 50 is a particularly competitive season, and anything below 40 is uncompetitive.
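For the spreadsheet-averse, here is a minimal Python sketch of the formula, implemented exactly as written above (the function name and the toy season are mine).  One caveat: as transcribed, the third bracket is negative whenever the standard deviation of the above-.500 winning percentages is smaller than their mean - which it essentially always is - so this literal transcription produces negative values; the workbook may normalize that component differently.

```python
from statistics import mean, stdev

def competitiveness(win_pcts):
    """'Competitiveness' index, as defined in the formula above,
    from one season's list of team winning percentages."""
    winners = [w for w in win_pcts if w > 0.500]
    frac = len(winners) / len(win_pcts)   # share of above-.500 teams
    m = mean(winners)                     # mean win pct of those teams
    s = stdev(winners)                    # sample std dev of those teams
    # The three bracketed components, multiplied together and scaled by 50.
    # Note that (s - m) is negative whenever s < m.
    return (0.5 + frac) * (10 * (m - 0.5)) * ((s - m) / (s + m)) * 50

# A toy eight-team "season":
print(round(competitiveness([0.70, 0.60, 0.55, 0.50,
                             0.45, 0.40, 0.35, 0.30]), 1))
```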

Now this graph alone doesn't show you anything problematic.  Like our graph from Part Two, it has a weak but present upward trend, and...  Wait.  It looks very similar to that graph.

So I graphed "competitiveness" by year, and made the following line graph:

"Competitiveness" (Y) by season (X)

 I then did the same with Standard Deviation of winning percentage:

STDEV of Wpct (Y) by season (X)
Now you may notice that these two graphs look almost exactly the same.  With a sinking feeling - starting to realize the folly of my ways - I graphed the two against each other:

Competitiveness (Y) against STDEV of Wpct (X)
The result is unambiguous.  My "Competitiveness" ranking basically tells me that when the standard deviation of winning percentage league-wide is high, the competitiveness is also high.  Which, of course, is the opposite of what I was suggesting in Part Three.  Yeah.

The result is hardly surprising, as I said, because of the components in my formula.  While the percentage of better-than-.500 teams may not have much bearing on standard deviation of winning percentage, obviously when the mean of the winning percentage of teams above .500 is higher, so too will be the standard deviation of winning percentage of all teams.  Meanwhile, the final component of my formula - accounting for standard deviation of above-.500 winning percentages - will be inversely related to standard deviation of winning percentages league wide, but not enough, obviously, to disrupt the high correlation between "competitiveness" and SD of winning percentage league wide.
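"Looks almost exactly the same" can also be quantified: a Pearson correlation between the two series makes the redundancy explicit.  A self-contained sketch (the toy data is mine; the actual season-by-season values are in the workbook linked at the end of the post):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related series correlate at 1.0; a scatter like the
# one above, hugging a straight line, would come out close to that.
print(round(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]), 6))
```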

But really, this only goes to show that "competitiveness" is a highly ambiguous term.  Where one fan might think the most competitive season is the one where all of the teams are bunched together, another might prefer the one with five great teams and five terrible ones.  It's really a matter of perspective.

Simmons makes his determination, then, in probably the best place: the skill of the players in the league.  You do have to be careful here - any evaluation of a player's skill is heavily influenced by the relative skills of his contemporaries, and things like changing league sizes mess with our understanding of what is good and what is great - but probably the best way to assess the competitiveness of the league at any point is to assess the overall skill of the players in the league at that time.  That's a much more challenging project, but I can imagine going through players and seeing where great careers overlap, and figuring out when talent has been at its apex and nadir.  Of course, Simmons does that kind of thing for a living - though without relying too much on numerical analysis and going more with his perception, a more-than-fair, if perilous, approach.  John Hollinger also does that for his living, relying absolutely on numbers.  So between the two of them, you can probably get a good sense of what's going on.

Finally, if you want to see the nuts and bolts of my work - messy as it is - I've posted my workbook to Google Docs.  Do with it what you will.

Friday, February 11, 2011

NBA League Size and Competitiveness, Part Three: Outliers

Before I dive into more statistical analysis in an effort to answer the question as to whether a larger NBA leads to stiffer competition or not, I want to take a brief (ha!) interlude to consider a few outlier seasons.  In Part Two we saw this graph:

Again, X is number of teams, Y is standard deviation of winning percentage
 There are, as you can see, a handful of data points here that are particularly far from the trendline, in both directions.  From the ultra-competitive mid-1950s to the mess that was the last two seasons of the ABA, I've picked the eight most notable points on either side of the trendline to discuss in this post.  I'll be grouping these by era.

The Early Days - 1950s

The 50s were an interesting time for the NBA.  The league was small, there was no three point line, and the shot clock didn't come to the league until the 1954-55 season.  Moreover, the league - and the country - had not quite worked out a number of racial issues, and so the league was dominated by white players.  What's more, because basketball was still new, there were not throngs of kids who grew up playing the game (football and especially baseball were the sports of the time in America), meaning it was harder to find talented athletes.

The 1952-53 season was one of the "least competitive" - at least by standard deviation of winning percentage - in the history of the NBA.  With a SD of .198, the league was both top and bottom heavy.  Interestingly, no team won more than 70% of their games, but the SD is so high because only one team won between 40 and 60% (the Fort Wayne Pistons).  At the high end, the New York Knicks went 47-23, the Syracuse Nationals went 47-24, the Boston Celtics went 46-25, the Minneapolis Lakers (which makes much more sense than the Los Angeles Lakers) went 48-22, and the Rochester Royals went 44-26.  On the other hand, the Baltimore Bullets and Philadelphia Warriors went 16-54 and 12-57 respectively.
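If you want to reproduce these SD numbers yourself, the calculation is straightforward.  The sketch below uses only the seven records quoted above - Fort Wayne and the rest of the league are omitted - so the result runs a bit higher than the full-league .198 (it also uses the population standard deviation; a sample standard deviation would run higher still):

```python
from statistics import pstdev

# 1952-53 records quoted above (a subset of the league):
records = {
    "New York Knicks":       (47, 23),
    "Syracuse Nationals":    (47, 24),
    "Boston Celtics":        (46, 25),
    "Minneapolis Lakers":    (48, 22),
    "Rochester Royals":      (44, 26),
    "Baltimore Bullets":     (16, 54),
    "Philadelphia Warriors": (12, 57),
}

win_pcts = [w / (w + l) for w, l in records.values()]
print(round(pstdev(win_pcts), 3))  # higher than .198, since mid-pack teams are omitted
```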

Despite the disparity in winning percentages in 1952, point differentials were much smaller.  The league's high-scorers from Rochester averaged 86.3, while the Indianapolis Olympians averaged 74.6 points per game.  Both are astoundingly low by modern standards, but more remarkable is the gap - or lack thereof - between the two.  Consider the 2009-2010 NBA, in which the Phoenix Suns averaged 110.2 ppg, while the New Jersey Nets put up only 92.4.  As for point differential, the Milwaukee Hawks went 27-44, but were outscored, on average, only 77.4 to 75.9.  One gets the sense that they fell behind, and then had the clock milked against them.

The shot clock changed everything in the NBA, and it's no accident that three of the most competitive seasons in NBA history were 1954-55, 1955-56, and 1956-57.  Those were the first three seasons of the shot clock, and the clumping of W-L records alone shows that teams were really struggling to understand how to play in a transforming league.  SDs for those years were .087, .061, and .054(!).

That the league was changing was obvious.  In 1954-55 the Boston Celtics averaged over 100 points per game (on both defense and offense), both NBA firsts.  No team stood out in those three seasons, however, despite the complete transformation of the game.  In part this was due to a small league, in part a lack of standout talent, in part a shorter season, and in part, of course, a drastic rule change.  It took until the 1957-58 season for some team to start to pull away from the pack, some team to start to "get it" in this new era of no-running-out-the-clock basketball.  That team?  The Boston Celtics.  Not surprisingly, they had been the best offensive team in the league before the shot clock, and they continued to be for years afterwards.  What catapulted them to dominance, however, was defense.  They kept scoring, and added Bill Russell as a rookie in 1956-57.  Even that year they won their first NBA championship and finished 44-28, but the league as a whole was still bunched together (the Western Division featured no less than three 34-38 teams).

As Russell's career took off, the competitiveness of the league disappeared.  The Celtics became such a dominant force that by the 1959-60 season, the pendulum had swung so far in the other direction that a 59-16 Celtics team, combined with a 19-56 Cincinnati squad, led to a SD of .188, the second highest in league history.  Of course, 1959 is one of those seasons where we have to wonder about what "competitiveness" really means.  The uneven talent distribution did mean that Cincinnati, Minneapolis, New York, and Detroit got hammered far more often than not, but the other four teams were all about .600.  Boston's 59-16 was countered by Philadelphia's 49-26, and Syracuse's 45-30.  The St. Louis Hawks finished 46-29 in the West, as well.

By now scoring was way, way up as well.  The Celtics, in 1959, averaged 124.5 on offense, and 116.2 on defense.  Average!  No wonder Wilt Chamberlain was able to put up 37.6 a game, or Bob Cousy averaged almost 10 assists.  Anyway, this is one of those seasons, ironically, where "everyone had a good team" in the opinion of Bill Simmons, and he has a point.  Everyone may not have had a good team, but four teams definitely did.  Consider the following key players (win shares in parentheses):

Boston -  Bill Russell (13.8), Bob Cousy (7.9), Bill Sharman (7.8), Tom Heinsohn (7.7)
Philadelphia - Wilt Chamberlain (17.0), Tom Gola (9.9), Paul Arizin (9.2)
Syracuse - Dolph Schayes (9.5), George Yardley (9.0), Larry Costello (8.0)
St. Louis - Cliff Hagan (11.8), Bob Pettit (11.5), Clyde Lovellette (9.0)

Who cares if Minneapolis's Elgin Baylor (11.5, with no support from anyone else on the team) was the only other really good player in the league?  The best teams were all loaded with talent.  Which, of course, only raises the question: which is better, a league with 4 great and 4 terrible teams, or a league with 8 teams that all have a chance to beat each other?

The End of the ABA, and the Merger - 1970s

The early and mid 1970s were an interesting time for the NBA.  The ABA - which included three-pointers and lots more slam dunks - emerged as a competitor, but also hemorrhaged money and was responsible for seasons that were, by any measure, extremely uncompetitive.  It was, in short, more of a show league, but it put pressure on the NBA just the same.  1972 was a pivotal year, in particular, because the NBA switched TV partners, moving to CBS from ABC after the season.  It was also, by standard deviation, the single least competitive season in NBA history.

On the plus side, a 68-14 Boston Celtics team romped through its division, while 60-22 Milwaukee and 60-22 Los Angeles led the way in the Western Conference.  The New York Knicks ended up upsetting Boston in the Conference Finals and went on to crush Los Angeles in five games in the Finals.  But, like 1959, 1972 was marked by a handful of great teams and a handful of truly awful ones.

On the minus side was Philadelphia, finishing an unheard-of 9-73, a full 59 games behind Boston.  Buffalo - from Philadelphia and Boston's division - went 21-61, meaning the Atlantic had two of the best and two of the worst teams in the NBA.  Portland, cellar-dwellers in the West, was also an uninspiring 21-61, and their division mates, the Sonics, went 26-56.  In all, it was a year of extremes, in a league waiting for a merger (which would bring, among others, Julius Erving to the NBA), with its TV deal caught in limbo, and yet with all of the attention that a third New York - Los Angeles finals brought to the league.  Throw in Wilt Chamberlain and Kareem Abdul-Jabbar, and the NBA was making inroads in mainstream America.

Meanwhile, the ABA was falling apart.  The 1974-75 and 1975-76 seasons were, well, horrible.  With the merger looming, and teams facing bankruptcy, competitive balance suffered.  Posting SDs of .191 and .190, the last two seasons of the ABA were more or less a joke.  The league was already down to 10 teams in 1974; after a 27-57 season, Memphis closed up shop, and San Diego and Utah both played fewer than 20 games in 1975, finishing 3-8 and 4-12 respectively before calling it quits.  The final season of the ABA, then, featured only 7 teams, including a Virginia squad that finished 15-69 for the second year in a row.  Ultimately, in a top-heavy league, it made sense for the New York Nets, the Denver Nuggets, the San Antonio Spurs, and the Indiana Pacers to make the jump to the NBA as the ABA finally closed its doors.

The 1975-76 NBA season had been, in stark contrast to the ABA, highly competitive (SD of .105).  At 54-28, Boston led the Eastern Conference, while a 59-23 Golden State team led the West.  Other than those two teams, however, no one finished higher than .600, while only one team - 24-58 Chicago - finished below .300.  Parity was the word, and so the addition of four good teams from the ABA, along with a redistribution of talent from the folding ABA teams, meant that 1976-77 would be one of the most competitive ever in the NBA.

With a SD of .098, 1976-77 is the most competitive season since the merger.  It's no accident that it happened the first year after the merger, for reasons discussed above.  For the second straight season, only one NBA team finished below .300, the New York Nets.  Meanwhile, the other ABA transfers did better, with San Antonio and Denver posting solid above .500 seasons (Denver, in fact, won their division), and Indiana finishing 36-46.  No one really stood out in 1976, however, with the 53-29 Lakers the class of the league.

This was no diluted league, however, and while the lack of great teams might frustrate some, there was no shortage of great players.  Take a look at some of the leader boards to see what I mean:

1) Pete Maravich - New Orleans
2) Kareem Abdul-Jabbar - Los Angeles
3) David Thompson - Denver
4) Billy Knight - Indiana
5) Elvin Hayes - Washington

1) Kareem - Los Angeles
2) Moses Malone - Houston
3) Artis Gilmore - Chicago
4) Elvin Hayes - Washington
5) Bill Walton - Portland

Win Shares:
1) Kareem - Los Angeles
2) Gilmore - Chicago
3) Hayes - Washington
4) Dr. J - Philadelphia
5) Bobby Jones - Denver

Not even mentioned in those statistical categories are first team All-NBA-er Paul Westphal and 2nd teamers George Gervin, George McGinnis, and Jo Jo White.  Rookie of the Year Adrian Dantley, All-Stars Dan Issel, Bob Lanier, Rick Barry, Dave Cowens, John Havlicek, Bob McAdoo, Rudy Tomjanovich, and Earl Monroe are also worth mentioning.  In short, it was a banner year, talent-wise, for the NBA.  It just so happened that few of those players were teammates (Issel, Bobby Jones, and David Thompson with Denver were possibly the best trio in the league, but not good enough to get past Bill Walton's eventual champion Portland).

The NBA continued to see parity in 1977-78 (SD of .111) and 1978-79 (SD of .103), but eventually we settled into a happy medium as Larry Bird and Moses Malone came into their own in the early 80s, followed, of course, by MJ.

Modern Era - 1990s and Beyond 

There's not as much to say about the NBA since the merger.  As the three-point line became an accepted part of the game, and as free agency settled in and the draft became what it is today, changes to the league structure have become much smaller.  As you can see in the graph, there's a much narrower range of variability from one season to another in the modern NBA, and that's probably a better measure of competitiveness than anything else.  The modern NBA has struck a balance, for the most part, between too few and too many teams being in contention each season, with room for the occasional extreme.

A couple noteworthy seasons include 1983-84 and 2006-07 (with SDs of .115 and .132, respectively), which were both on the "competitive" end of our SD spectrum.  1983 was Bird's first MVP season, featuring a - guess who? - Los Angeles vs. Boston finals.  While no team really pulled away, both Boston and Los Angeles were excellent, and the SD is so low mainly because no team was truly awful.  The 27-55 Chicago Bulls, of course, were one of the league's worst, and bad enough to land none other than Michael Jordan in the draft the next season (at #3 overall).*

* This was a crazy, crazy draft.  Check out some of the picks, here:
1) Hakeem Olajuwon - Houston
2) Greg Oden - Portland.  Oops, I meant Sam Bowie.  It's just, they're exactly the same player.  And, just like with Kevin Durant, the next guy was maybe a little better.
3) Michael Jordan - Chicago
4) Sam Perkins - Dallas 
5) Charles Barkley - Philadelphia 
7) Alvin Robertson - San Antonio
9) Otis Thorpe - Kansas City 
11) Kevin Willis - Atlanta
16) John Stockton - Utah

2006-07, meanwhile, was a parity hodge-podge, both for teams and for players.  Dirk Nowitzki won the MVP because, hey, why not?  And then his Dallas team proceeded to get dismantled by the eight-seed Golden State Warriors in the first round.  So that was maybe a bad choice.  Meanwhile the Spurs and boring Tim Duncan coasted through the regular season (finishing 58-24, which is pretty good for coasting) only to absolutely dominate the playoffs, beating Denver 4-1, Phoenix 4-2, Utah 4-1, and sweeping LeBron's Cavaliers in the finals.  Much as I hate the Spurs, Tim Duncan is, you know, really really good, and was at his best (or close) in 2006-07.

Our last two notable seasons are 1996-97 and 1997-98, responsible for two of the higher SDs in league history at .191 and .189.  These were the last two seasons of the Jordan Bulls, whose dominance in the East was matched by - in the regular season anyway - the Stockton-Malone-Ostertag (joke) Jazz in the West.  The NBA was kind of on cruise control in the late 90s.  Parity was at an all-time low - at least since the merger - but no one seemed to mind that Utah, Miami, Chicago, Seattle, Los Angeles, and Houston were winning at or above 60 games a season while the rest of the league was mediocre or terrible.  The NBA brass had to be happy, because New York was at least decent for most of the decade, meaning the huge media markets of Chicago, LA, and New York were drawing viewership, while Utah was a nice wrinkle and a good foil to the Bulls.  That is, they were good enough to win a game or two, but not good enough to really challenge for the title as long as Jordan had at least one leg.


Diving into individual seasons and eras tells more about the methodology I've been using than the results.  The fact is, competitiveness is subjective, and while some people will prefer parity, others prefer leagues like those of the late 90s, when parity is non-existent because a small handful of teams dominate every year.  Either type of league can be successful.  In Europe, the Premiership and other soccer leagues are routinely extremely top-heavy.  When was the last time someone not named Chelsea, Manchester United, or Arsenal won the EPL?  Answer: Blackburn Rovers in 1994-95 (during Alan Shearer's prime)*.  Yeah.  And yet, people keep watching even though the same three teams are at the top every year.

* And no, I won't apologize to other Americans for knowing who Alan Shearer is.

I would argue, though, that the EPL is supremely competitive for exactly that reason: there is a small set of teams that must get a result basically every match.  Then there are a lot of other teams that are also well-matched, and while they're not fighting for the league title, they are trying to avoid relegation, or climb high enough to qualify for the Europa League, if not the Champions League.  The same is more or less true in the NBA.  While, realistically, it's hard to win a seven-game series against superior opposition, there's still plenty of incentive - in revenue, principally - to make the playoffs, and for better teams there's the incentive of home-court advantage that drives competition throughout the regular season.

If anything, the biggest flaw in the NBA's competitiveness in the modern era - the real cause of the larger SDs we see now - is not dilution, league size, free agency, or the salary cap.  I'm sure those things contribute, but I think the biggest culprit is incentive: there's undeniably incentive for good teams to win games, but there's also incentive for bad teams to lose games.  Because the NBA draft lottery is designed to give inferior teams a better chance at higher picks, tanking is all too common, and tanking has as much effect on standard deviations of winning percentage as title chasing does.  If only the NBA had a relegation system!  But that's a post for another time.

Tuesday, February 8, 2011

NBA League Size and Competitiveness, Part Two: The Data

"Errors using inadequate data are much less than those using no data at all." - Charles Babbage

That is the spirit in which this post will proceed.  The question we're trying to get at is whether or not a larger league in the NBA (or, really, in any sport) leads to a more "diluted" product.  I've reinterpreted this question as the following: is the league more or less competitive when there are more teams?  As discussed in Part One, it's not easy to tell where the overall talent level of a league is, because all the statistics players compile - all the games they play, the championships teams win, and so on - are contextual.  The best player from the 1950s was still the best player of the 1950s, and looks great in retrospect, even if he wouldn't make an NBA roster today.

Therefore, I transformed "diluted" into "competitive," because it seems to me that the one stands for the other.  That is, when we say the league is diluted, what we're really saying is that the league is not competitive: that there are too many players who are not good enough to hang with the few good ones, and that the few good ones are causing a small set of teams to dominate.  In a non-diluted league - in a competitive league - "everyone has a good team," to use Bill Simmons's language from my last post.  The result: no one - or almost no one - has a team that just trounces everyone else.

So today we're going to make a first pass at the data.  That first pass?  Looking, simply, at the standard deviation of winning percentage for each year in NBA (and ABA) history.  When was the league the most competitive (smallest standard deviation), when was it the least competitive (largest standard deviation), and is there any discernible trend as the league expands (does a larger league tend to be more or less competitive)?  Without further ado, here's a graph of our results.  X-axis is number of teams, Y-axis is standard deviation of winning percentage.

[Graph: x-axis is number of teams, y-axis is standard deviation of winning percentage]

As you can see, there is a slight upwards trend here, but it's pretty small.  The R value (R-squared is on the graph) is about 0.17, which is not really significant unless you're doing social sciences research.  Nevertheless, a big reason why our correlation is so small is how spread out the standard deviations of winning percentage were when there were only eight to ten teams in the league.  As you can see, the left-most data points are much, much more spread out than the right-most, with values ranging from barely over 0.05 all the way to almost 0.20.  What does this mean?  It means that, in the league's most competitive season, a mere 5% separated average teams from good teams, and almost everyone was within 10%.  Put in sports-fan-friendly terms, the best winning percentage in the league was about .600, while the worst was about .400.  That's baseball territory.  On the other hand, in the least competitive season, the separation led to a best team with an .800 winning percentage and a worst team with a .200 winning percentage.  Now, that's not precise (we'll get into the exact numbers shortly), but that's roughly what standard deviation tells you.
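To make that translation concrete, here's a quick back-of-envelope sketch: treat the league's best and worst teams as sitting about two SDs from the .500 mean.  The helper function and the exactly-two-SDs rule are mine, for illustration; real seasons aren't perfectly normal.

```python
# Rough translation of a league-wide SD of winning percentage into an
# intuitive (worst, best) range, treating the extremes as ~2 SDs from .500.
# Illustrative only -- actual seasons aren't perfectly normal.

def implied_range(sd, mean=0.500, k=2.0):
    """Approximate (worst, best) winning percentages for a given SD."""
    return (mean - k * sd, mean + k * sd)

print(implied_range(0.05))  # most competitive: roughly .400 to .600
print(implied_range(0.15))  # least competitive: roughly .200 to .800
```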

So, back when there were only eight professional teams, there was a huge variety from year to year.  Of course, with only eight teams we expect more variety in standard deviation, because, hey, fewer data points means each data point has more influence.  Thus, if one team wins 90% of their games one season in an eight team league, that value is going to skew the overall standard deviation much more than if one team wins 90% of their games in a 30 team league.  And, indeed, the 95-96 Chicago Bulls (who went 72-10) did not make 95-96 anywhere close to one of the least competitive seasons in NBA history.  Had that happened in 1960, the story would have been different.  For example, one of the "least competitive" - by standard deviation of winning percentage - seasons in NBA history was 1952, when a 12-57 Philadelphia team joined a 16-54 Baltimore team to drag the whole league down.
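Here's a sketch of that skew effect, assuming one 90%-win juggernaut while the other teams split the remaining wins evenly among themselves (a simplification I'm making for illustration; real schedules aren't balanced like this):

```python
import statistics

# How much one juggernaut skews league-wide SD in an 8-team league
# versus a 30-team one.  The remaining teams share the shortfall
# equally, which keeps the league average at exactly .500.

def sd_with_one_dominant_team(n_teams, dominant_pct=0.90):
    rest = (0.500 * n_teams - dominant_pct) / (n_teams - 1)
    return statistics.pstdev([dominant_pct] + [rest] * (n_teams - 1))

print(round(sd_with_one_dominant_team(8), 3))   # small league: big SD
print(round(sd_with_one_dominant_team(30), 3))  # big league: much smaller SD
```

With eight teams, the lone 90%-win team drags the league-wide SD up to about .151; in a thirty-team league, the very same record produces an SD of only about .074.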

If we take only the seasons since the merger - that is, only seasons in which the league has had more than 20 teams - we get the following instead:

Now we have an R value of .46, which is getting much closer to significant.  Indeed, while random variation obviously plays a huge role - as do hard-to-quantify things like player skill and pre-NBA training, as well as injury management and so on - there seems to be little doubt that larger leagues are at least somewhat less competitive, according to standard deviation of winning percentage.  Consider that there have been only five seasons in which the SD of winning percentage was under 0.15 since the league went to 25 teams in 1988, whereas there were over 20 such seasons in the 40 years before then.

What this really shows, though, is not competitiveness or dilution, but talent distribution.  That is, regardless of the level of talent in the league at any given time, the more spread out that talent is, the lower the standard deviation of winning percentage will be.  The more concentrated it is, conversely, the higher the standard deviation of winning percentage will be.  Whether this is a measure of dilution is up for debate.  Also, while in a smaller league a larger standard deviation might mean less competitiveness (one or maybe two dominant teams), in a 30-team league it might be exactly what we want (five or six really good teams).

For example, this season the Miami Heat have Dwyane Wade, LeBron James, and Chris Bosh.  The Los Angeles Lakers have Lamar Odom, Kobe Bryant, and Pau Gasol.  The Boston Celtics have Kevin Garnett, Paul Pierce, Ray Allen, and Rajon Rondo.  Any of those ten players would be the best or second-best player on most other teams.  Whether because of finances, smarts, collusion, or some combination of factors, we're currently watching a league where talent has conglomerated onto a small set of teams that routinely beat up on inferior opposition.  I haven't run the (still-changing) numbers from this season's NBA, but so far there's a team with an .840 winning percentage (San Antonio), three teams above .700 (Dallas, Miami, Boston), and six more teams above .600 (Chicago, Atlanta, Orlando, Oklahoma City, Los Angeles, New Orleans).  On the other side of the coin, there's a .154 Cleveland team, plus five other teams below .300 (Sacramento, Minnesota, Washington, Toronto, and New Jersey).

Is this season's NBA competitive or not?  There are, at this point, ten legitimately good teams, any of whom - given the right breaks - could win the NBA Finals.  That sounds extremely competitive to me.  On the other hand, there are also at least six teams that are flat out awful, meaning that a large portion of each day's games are over before they start.  Is Cleveland really going to beat Miami?  Does Minnesota stand a chance against Oklahoma City?  Even though upsets happen, I doubt any circumstance would arise where a fan would feel like one of those inferior teams really deserved to beat one of the top ones.  They would need lots of lucky breaks.  And that, I think, is a mark of an uncompetitive league, when the bottom third of the league stands little to no chance against the top third.

But wait.  Does that mean the league is uncompetitive, or does it just mean that talent is distributed unevenly?  The latter is certainly true.  The former is more a question of taste and perspective.  We could say the same about the league being diluted.  When it comes down to it, if you make the league smaller, players who seemed great in a 30-team league will look merely good, and players who seemed good will look average.  Does that mean the league is better or worse?  Or does it just mean that our perspective changes?

Consider a historical example.  When the ABA and the NBA merged for the 1976-77 season, the NBA had one of its most competitive seasons ever, with a standard deviation of winning percentage under .100.  Bill Simmons says, in The Book of Basketball, that this one time the league actually under-expanded, as the 18 teams from the NBA and the 8 remaining viable teams from the ABA became 22 instead of 26.  That meant that a lot of ABA talent got redistributed to (mostly) bad NBA teams, meaning that talent was spread about as evenly as ever in the history of the league.  The 53-29 Lakers led the NBA that season, with 50-32 Denver and 50-32 Philadelphia on their heels.  Those were the only three teams that won 60% of their games or more.

So where does 1976-77 sit in terms of dilution, competitiveness, and distribution of talent?  Really, we can only answer the latter.  Talent was widely distributed.  Was the league diluted?  Was it competitive?  That's a matter of opinion.  Because talent was widely distributed, it was certainly competitive in a broad sense, but many fans would rather see great teams (and, by extension, terrible teams) than good ones (and merely bad ones).  As for dilution, that's a more complicated question still.

See, when the league goes from 8 teams to, say, 12 teams, we'll tend to think of it as diluted because players who weren't previously good enough now are.  Similarly, contraction seems to eliminate dilution, because suddenly all of those marginal players are gone.  But give it ten years after expansion and contraction, and we no longer feel that way, because it's a matter of perspective and perception.  If the NBA cut 10 teams this offseason, the result would definitely be a short-term feeling of "raising the level" of the league, and probably increased competitiveness in the sense of a smaller standard deviation of winning percentages.  However, after 10 seasons of the new, 20 team NBA, we'd get used to seeing guys who had previously been their team's #1 as role players on the "deeper" teams in the smaller league.  New draft picks who once would have been franchise guys for bad teams would suddenly never be at the top of the league.  These new would-be stars, however, would never be thought of as franchise players who turned into role players.  We'd just consider them role players.  Suddenly, over time, the league would start to look a lot like it does now, only with fewer teams.

The same goes in the other direction.  A more diluted league is all well and good to talk about, but no one talks about how diluted NCAA Division I college basketball is, despite the fact that it adds new teams almost every year.  Sure, talent distribution is pretty extreme in the NCAA, but even so there are usually a good 20 or so teams that, given the right breaks and a favorable series of match-ups in March Madness, have a legitimate chance to win the Tourney every season (if you think this is an exaggeration, consider Butler).  Is the NCAA diluted?  Maybe, in some sense, but in another sense it's almost a crazy question to ask.

I would argue the same is true in the NBA.  Is the NBA diluted or not?  That's not really a good question, because being diluted is relative, and the stats players compile are relative, and even wins and losses are relative.  Bill Simmons believes the NBA is diluted because he grew up watching a league with a dozen teams in it.  I don't, because I grew up watching a league with 27-30 teams in it.  NCAA fans are used to the 300-something Division I teams, so there's never really a discussion.

What we do have, however, are some interesting measures of talent distribution and, in some sense, competitiveness.  We've already seen the big picture: as the league expands, standard deviations of winning percentage both tighten up (less variety from season to season) and trend slightly (very slightly) upwards.  Next time, we'll dive a little deeper into that data and look at some of the outlier seasons.  Moreover, we haven't given up on the competitiveness question - when has the league been most competitive?  With more teams or with fewer? - so we're going to tease out only the good teams and run the same analysis on them (that is, how many good teams are there in a season, and how good are they?).  Stay tuned.

Friday, February 4, 2011

NBA League Size and Competitiveness, Part One: Introduction

Thanks to a generous friend, I've recently begun reading Bill Simmons's colossal The Book of Basketball.  I say colossal because, as you may not be aware, the book is about as long as Anna Karenina.  It's long, it's big, and so far, anyway, it's extremely entertaining.  A significant portion of the book seems to be Simmons - perhaps better known simply as "The Sports Guy" - taking digs at Vince Carter, Kareem Abdul-Jabbar, and Wilt Chamberlain, whilst trumpeting (who else, for a kid who grew up in Boston?) anyone who played for the Celtics, and especially Bill Russell.  Which is all very fun.

Anyway, in the first few chapters, Simmons has already made a point - well, he's made many points - with which I disagree.  He is a firm believer, it seems, that expansion has diluted the NBA, and that the league's competitiveness was much higher when he was a kid.  "Back in my day," he never says, but might as well, "basketball players had to try harder, because every night they played against teams filled with All-Stars."  Now, "my day," in this case, refers to the Russell era of the late 50s and early 60s, when the Boston Celtics - despite the supposedly huge competitive balance of the NBA at the time - won eight championships in a row (and nine out of ten, and eleven out of thirteen).  If only we could have that again!

In all seriousness, though, Simmons does an excellent job describing what makes for success in the NBA, and, frankly, he knows way more about it than I do.  He points out - rightly so - that basketball statistics are deeply flawed, because they don't capture the magical things that allow teams to win games.  I would say that Simmons is right: points scored, assists, rebounds, blocks, and steals do not accurately measure a player's contribution to his team.  Not even close.  That doesn't mean statistics have no place in basketball; it just means that basketball statistics have to get better, and, what's more, that might be impossible, because unlike in baseball, a team's success in basketball has more to do with how teammates work together than with how individuals perform.  (Inhales.)  The linchpin of Simmons's argument, here, is that Wilt Chamberlain - for all his statistical dominance - was a terrible teammate whose teams rarely won championships, while Bill Russell was actually a better and more valuable player, as evidenced by his bevy of MVP awards and championship rings.  And you know what, I buy it.

I still don't buy, however, that the modern NBA is somehow watered down compared to the NBA of the 60s, and I do think that statistics can demonstrate why.  A while back I explored how NBA rosters are constructed, using Win Shares, and discovered that teams, as a whole, follow a highly predictable model.  That model, to rehash, is that the average "best player" on a team accumulates 9.3 Win Shares in a season, and each subsequent player accumulates less in a logarithmic way.  The stunning result was, at least for the season I looked at, a correlation coefficient of exactly one.  League wide, there's a very strong trend towards a regular distribution of success on the court.
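For the curious, that roster model can be sketched in a few lines.  The 9.3 Win Shares figure is from my old post, but the functional form and the decay coefficient below are assumptions for illustration, not the fitted values:

```python
import math

# A sketch of the roster model: the best player is worth about 9.3 Win
# Shares, and each subsequent player falls off logarithmically.  The
# decay coefficient (3.2) is an assumed value, chosen for illustration.

def expected_win_shares(rank, top=9.3, decay=3.2):
    """Predicted Win Shares for a team's rank-th best player (rank >= 1)."""
    return max(0.0, top - decay * math.log(rank))

roster = [round(expected_win_shares(k), 1) for k in range(1, 9)]
print(roster)  # strictly decreasing from 9.3
```

One sanity check on the assumed decay: the top eight players here sum to roughly 40 Win Shares, which is about what an average team's 41 wins would suggest.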

What does this have to do with competitive balance?  Not much, but I want to point out that it jibes well with the qualitative description of successful NBA teams that Simmons gives in his book.  He argues that teams need a great player, followed by a couple of All-Stars, followed by some key role players.  If you look at my old post and the graph with the Lakers and Celtics, it's easy to see that the model fits well with that description.

Now, I bring this up because Simmons points out how many All-Stars were on the Celtics and their rival Lakers and Warriors back in the 60s.  The teams were stacked, he tells you, replete with great talent.  Not like today, when many teams are lucky to have even one All-Star.

Of course, the easiest hole to poke in this argument - that teams had more All-Stars back in the 60s - is that it's a direct result of a smaller league.  Not because the talent level was necessarily higher, but because there were fewer players from which to draw an All-Star team.  Of course the Celtics had a bunch of All-Stars in the 60s, because the league only had eight teams.  That means that, even if every team were equal, filling each division's 12-man All-Star roster would mean taking three players from each of that division's four teams!  Since the Celtics were also the best team in the league, it's only reasonable that they would have four or five All-Stars in any given season.
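The slot arithmetic is worth spelling out.  I'm assuming two 12-man All-Star rosters (the modern convention; 60s rosters were sometimes sized differently) purely for illustration:

```python
# Average All-Star slots available per team, league-wide, under the
# assumption of two 12-man rosters.  Roster sizes varied historically;
# these defaults are for illustration.

def all_star_slots_per_team(n_teams, roster_size=12, rosters=2):
    return roster_size * rosters / n_teams

print(all_star_slots_per_team(8))   # a 1960s-sized league: 3 slots per team
print(all_star_slots_per_team(30))  # the modern league: 0.8 slots per team
```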

Compare that to today's league.  With 30 teams, it's hard to have even two All-Stars from the same team, because an individual player has to outshine so many others.  What's more, a second- (or third-) best player on a given team is going to have an even harder time, because he has to look better than the best player on many other teams - not easy to do given the limited and flawed statistics available in the modern NBA.  I realize that may be a bit opaque, so let me clarify using the simplest example: points.

Consider two teams that score 100 points per game.  On Team A, King Star scores 25 a game, whilst his brother Duke Star scores 20 a game.  Thing is, Duke takes way fewer shots, because he's more accurate, and is generally just a more efficient player than King Star, despite King's gaudy numbers.  Now, King Star is a perennial All-Star and fan favorite, and he's still plenty good, so he's going to the All-Star Game no matter what.  Duke is on the cusp, especially because Team B - which also scores 100 a game - features Selfish McGee (also known as Allen Iverson), a player who plays the same position as Duke, but scores 30 points a game on twice as many shots, thanks to a higher-paced offense and a team that has no other reliable scorers.  So Duke Star, in order to make it to the All-Star Game, has to outplay either King or Selfish in the eyes of the people who make these decisions.

Of course, that's no different now than it was 40 years ago.  What's different now is that Duke is up against his equivalent on 14 other teams (whether they be like Duke, like Selfish, or like the heretofore unmentioned Crappy Sullivan), instead of 3.  Suddenly Duke, who's just as good as - maybe even better than - the number two guy the Celtics had back in 1962, doesn't even make the All-Star Game, while he would have been a shoo-in, at least as a backup, back in the 60s.
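To put a number on Duke's efficiency case, here's a points-per-shot sketch for the three hypothetical players.  Only the scoring averages come from the example above; the shot attempts are assumptions I'm making so the efficiency gap is concrete:

```python
# Points per shot for the three hypothetical players.  Scoring averages
# are from the example; shot attempts are made-up assumptions.

def points_per_shot(points, shots):
    return points / shots

players = {
    "King Star": (25, 22),      # gaudy scorer, lots of shots
    "Duke Star": (20, 14),      # fewer points, far fewer shots
    "Selfish McGee": (30, 28),  # twice Duke's shot attempts
}

for name, (pts, shots) in players.items():
    print(f"{name}: {points_per_shot(pts, shots):.2f} points per shot")
```

Under these assumptions Duke is comfortably the most efficient of the three, which is exactly the kind of thing the raw scoring column hides.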

Phew.  OK, with all that out of the way, let's actually get to the point of the post, which is how to assess whether the NBA is more or less competitive now.  How do we do this?  Is it best to look at players or teams?  What statistics should we use, in order to compare across eras?  In fact, there are many ways we could study the question, but the easiest and most intuitive, to me anyway, is simply to look at wins and losses.  I'm struck by a sentence in The Book of Basketball, which goes something like this: "I'm telling you, everyone had a good team back then."  Now, I know what Simmons really means is that almost everyone had a good team, because he knows that, even then, there were cellar-dwellers.  The reality is, every game played has a winner and a loser, and one of the constants in all sports is that, league-wide, the average winning percentage is always exactly .500.  It goes without saying that, in order to get to .500, there will always be some teams that are much better, some that are much worse, and some that are right about in the middle.

What do we make of the claim, then, that everyone had a good team?  Well, what I think Simmons means is that more teams - a larger group - were well above average, and fewer were really bad (and, likely, fewer were really great).  He might phrase that as "more great teams, fewer average ones," but that's just a perceptual thing.  We live in a time when "average," in sports, has come to mean "bad," and "mediocre" has come to mean "absolutely terrible."  Ironically, "terrible" is something we don't actually dislike: the Timberwolves are terrible, but in a lovable kind of way.  It's mediocre teams we can't stand.

Anyway, how do we test whether or not everyone had a good team, given that we take it to mean that there was better competitive balance - that fewer teams were terrible, and fewer were so good that the games weren't even worth playing?  Well, there's a pretty easy - if tedious - way, one that does not require digging into the deeply flawed player statistics of the 60s.  We can, in fact, compare across eras and leagues easily - as Simmons does when he says that the modern NBA is watered down compared to the old NBA - using wins and losses.  It's simple, really.  We just need to look at standard deviations of win-loss records throughout NBA history, and we'll see when the NBA has been at its most competitive.  In short, smaller standard deviations mean the league is more competitive, while larger ones mean the league is less competitive (more top- and/or bottom-heavy).
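In code, the measure is only a couple of lines per season.  Here's a sketch on a made-up 8-team, 82-game season; the records are invented, but wins and losses each sum to 328, so the league averages exactly .500:

```python
import statistics

# Standard deviation of winning percentage for one invented 8-team,
# 82-game season.  (Records are made up for illustration; they sum
# correctly, so the league-wide average winning percentage is .500.)
records = [(60, 22), (55, 27), (45, 37), (42, 40),
           (40, 42), (35, 47), (30, 52), (21, 61)]
pcts = [w / (w + l) for w, l in records]

mean_pct = sum(pcts) / len(pcts)
sd = statistics.pstdev(pcts)
print(round(mean_pct, 3), round(sd, 3))
```

(Note the choice of `pstdev`: we have the whole "population" of teams for a season, not a sample from it.)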

Now, there are some concerns here.  First off, those old leagues were so small that our sample size is going to be tiny.  Standard deviations don't mean a lot when you're talking about 8 data points.  That is, they don't mean a lot if you're trying to be predictive based on only 8 data points.  But, in this case, I think we'll be fine, because we're just trying to deduce how "spread out" the quality of teams has been throughout NBA history.  Standard deviation is exactly the statistic we want.  Since we'll be able to get a broad view of competitiveness, we'll be able to take the first steps towards assessing the competitiveness or watered-down-ness of the NBA across eras, regardless of silly things like small league sizes making it easier to win championships (because, hey, fewer opponents) or make it to the All-Star Game.

I honestly don't know what I'll find in doing this, even though my hypothesis is that the modern NBA is, if anything, more competitive than the NBA of the 60s.  I might be wrong.

As an extra outlet (for both me and Simmons), I'll also calculate the mean and standard deviation of the smaller set of "good" teams in the league.  I haven't yet decided how to draw this line, but I'm initially thinking that anyone above .500 makes the cut.  Basically, if we find a relatively constant standard deviation across time, we'll still want to test whether, in certain eras, the "good teams" are more evenly balanced with each other.  Now, this will be built into our bigger SD calculation, but we'll also be cutting out noise like a team or two that finishes with a winning percentage of .130 and thereby makes the whole league's SD look way bigger than it otherwise would be.  Indeed, I think Simmons would agree that the occasional really, really bad team shouldn't count against any assessment of the competitiveness of the league as a whole, and so we'll do a parallel calculation that cuts out those really bad teams.

So to recap, here's the method: I'll be going through every season of professional basketball on (oh the wonders of being unemployed), and putting every team's W-L record into a spreadsheet.  From there, it's easy to calculate mean (which will always be half the games in the season) and standard deviation of wins per league per year.  The lower that SD, the more competitive the league.  I'll also pull out just the above .500 teams, and run the same calculations, to see if maybe there was more competitiveness amongst the good teams than in the league as a whole.  Finally, I'll do a smaller cut of outliers, removing just the really really bad teams (teams more than 2 SDs from the mean), and recalculate the league without their nefarious influence.
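The whole method sketches out to a few lines per season.  The winning percentages below are invented, and I'm reading "more than 2 SDs from the mean" as a cut on the low side only, since the goal is to drop the really, really bad teams:

```python
import statistics

def season_sds(pcts):
    """League-wide SD, SD among the above-.500 teams, and SD after
    trimming teams more than 2 SDs below the league mean."""
    league_sd = statistics.pstdev(pcts)
    mean = sum(pcts) / len(pcts)
    good = [p for p in pcts if p > 0.500]
    good_sd = statistics.pstdev(good)
    trimmed = [p for p in pcts if p >= mean - 2 * league_sd]
    trimmed_sd = statistics.pstdev(trimmed)
    return league_sd, good_sd, trimmed_sd

# One invented 8-team season with a single dreadful outlier at .130.
pcts = [0.680, 0.620, 0.580, 0.540, 0.500, 0.490, 0.460, 0.130]
league_sd, good_sd, trimmed_sd = season_sds(pcts)
print(round(league_sd, 3), round(good_sd, 3), round(trimmed_sd, 3))
```

In this toy season, dropping the .130 team cuts the league's SD roughly in half, which is exactly the kind of noise the parallel calculation is meant to remove.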

What will I find?  You'll have to come back to my next epically long blog post to find out, because I don't know yet.