Friday, November 27, 2009

An Argument for Scott Joplin as a Great Composer

Someday I'll write up what I'm sure will be a contentious piece about what makes Great Music, and whether there is such a thing at all. For now, I want to share some brief thoughts about Scott Joplin, who was at once an extremely popular composer in his time (around the turn of the century... 19th to 20th) and remains revered as the "King of Ragtime" even today.

In academic and musicological circles, there is much hand-wringing over whether Jazz deserves to be studied as Art Music. It is undoubtedly culturally significant, as a predecessor to essentially all modern popular music, and as a unique artistic outgrowth of a democratic idealism. Whether it is "Art Music," however, is questionable. Do Jazz musicians consider carefully the artistic and aesthetic value of every note? Do they see themselves as composers, or just as musicians? On some level, these are nonsensical and unimportant questions (what matters more is the music, no?), but there is a certain scholarly significance to the why of music, and not just the what.

Before Jazz, however, there was ragtime, and while there is a fairly straightforward formalism to the rag that makes it an unlikely source of "great" music, I believe that Joplin is actually the rare Great Composer who transcends his time.

Listening to Joplin's music - and comparing it to that of other ragtime composers - is probably the best way to test my claim. Joplin's music is more nuanced, more melodically reflective, more groundbreaking (he wrote what we might call the first "fusion" piece in "Solace," which combined ragtime sensibilities with Latin rhythms). Joplin's mid-life visit to Europe - which spurred him to learn European harmony and counterpoint - turned him into an even better composer. A meeting with Debussy inspired the French master to try his hand at ragtime, with disastrous results, while Joplin's incorporation of European methods was seamless, enriching his later music.

Joplin even wrote two operas - one of which is lost, the other of which was not performed during his lifetime - but like many great minds, he was ahead of his time. Joplin's operas were about race at a time when that was dangerous. Joplin, indeed, was black at a time when it was almost impossible to be famous and black (famous for a good reason, anyway). And yet he was. His Maple Leaf Rag was the highest-selling piece of sheet music of all time during his life, and the first to sell more than 1,000,000 copies.

Without getting into the qualities of Joplin's music - an effort too involved at this busy time to be reasonable - I'll close with this argument. In no other genre does one composer's music dominate so much. There are many famous Romantics, many Classical composers, many masters of the minuet. But in ragtime, there is only Scott Joplin. Like Bach with the fugue, Joplin was so masterful with the rag that subsequent composers have used only aspects of it (the syncopated rhythms, for example).

Think of it this way: of the dozen or so most famous rags ever written, all were written by Scott Joplin. It's not that he was the best composer of his time, in his genre. In a very real sense, he was the only one. The King of Ragtime.

Wednesday, November 25, 2009

MLB Awards in Brief

The baseball fan in me can't help but remark on a stunning turn of events in this year's MLB awards. If you follow baseball, you are aware that the two biggest awards given to individual players at the end of a season are the MVP and Cy Young awards. Traditionally the MVP is given to a position player - though there have been a few exceptional cases where a pitcher wins the award - while the Cy Young is by definition granted to the best pitcher.

If you follow baseball, you are probably also aware that there is endless wrangling over what the "valuable" in Most Valuable Player means. Because of that term, many voters - pooled from the Baseball Writers Association of America - refuse to vote for a player from a losing team. Likewise, many voters will only vote for players who collect a lot of meaningless, contextual counting statistics like Runs Batted In, which is a poor measure of a player's value, at best.

Historically, however, the player with the most RBI on a playoff team is a virtual lock to win the MVP. This season, Mark Teixeira of the World Champion Yankees led the American League in RBI, but the near-unanimous choice for MVP was Minnesota's Joe Mauer. While this is not the stunning turn of events I mentioned at the outset, it remains notable, primarily because Mauer is a catcher who "only" hit 28 Home Runs and "only" had 96 RBI.

Of course, the more important statistics - as baseball fans, analysts, front offices, and finally, it seems, writers have come to realize - are on-base percentage and slugging percentage. Mauer led the league in both of those statistics (and also led the league in batting average, for what some people are calling the "modern triple crown"), and played catcher on top of that. While that may seem unimportant - everyone on the field has to hit - it is actually astounding that Mauer has been as successful as he has at this point in his career as a catcher. Simply put, catchers are almost never good hitters, and yet Mauer is one of the best in all of baseball.
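
For anyone who doesn't live in box scores, both statistics have simple standard definitions. Here's a minimal sketch of the formulas (the formulas are the standard ones; the sample season line is invented purely for illustration):

```python
def obp(h, bb, hbp, ab, sf):
    """On-base percentage: times reached base per plate appearance."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slg(singles, doubles, triples, hr, ab):
    """Slugging percentage: total bases per at-bat."""
    return (singles + 2 * doubles + 3 * triples + 4 * hr) / ab

# Invented season line, just to show the shape of the calculation:
print(round(obp(h=190, bb=75, hbp=2, ab=520, sf=5), 3))                   # ~0.444
print(round(slg(singles=120, doubles=30, triples=1, hr=28, ab=520), 3))   # ~0.567
```

The point of the pairing: OBP measures how often a hitter avoids making an out, SLG measures how much damage he does when he connects, and Mauer led the league in both.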

The actual shockers, however, in this year's awards come in the Cy Young department. Mauer and Pujols (the NL MVP) were virtual shoo-ins by season's end, especially since both the Twins and Cardinals made the playoffs. Much more surprising were the victories of Zack Greinke and Tim Lincecum in the Cy Young balloting.

Greinke had a phenomenal season, and if you read Joe Posnanski (just look at my links to find him), you've probably heard all about it. Greinke, however, pitches for the Kansas City Royals, who were laughably bad this season, and who left Greinke with only 16 wins. The Cy Young Award may go, in theory, to the best pitcher in the league, but most voters believe that good pitchers collect wins regardless of their team's ability. That is, of course, a fallacy, but it is a pervasive viewpoint.

Or it has been. In a year where Felix Hernandez of the Mariners was almost as good as Greinke, and won 19 games, Greinke's 16 was enough for the Cy Young. That is a radical reversal of the history of Cy Young voting, reflected even in Bill James's Cy Young predictor. The Cy Young predictor, as you might guess, predicts who will win the Cy Young based upon historical voting patterns, and this year it had Hernandez solidly over Greinke, mainly because of the difference in wins between them. Greinke, however, won the award, because apparently voters have started to realize that assigning pitchers wins and losses is a silly exercise.
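
For the curious, the predictor is just a points formula. Here's a sketch of it as it is commonly published (I'm reproducing the reported weights, so treat them as an approximation rather than gospel); notice that a win is worth 6 points while a strikeout is worth 1/12 of a point, which is exactly why win totals have historically driven the forecast:

```python
def cy_young_points(ip, er, so, sv, sho, w, l, won_division=False):
    """Bill James's Cy Young predictor, as commonly published: credit
    for innings and strikeouts, a heavy weight on wins, and bonuses
    for saves, shutouts, and a division title."""
    victory_bonus = 12 if won_division else 0  # reported value; an assumption here
    return (5 * ip / 9 - er) + so / 12 + 2.5 * sv + sho + (6 * w - 2 * l) + victory_bonus

# Hypothetical stat lines, only to show how three extra wins swamp everything else:
print(cy_young_points(ip=230, er=55, so=240, sv=0, sho=2, w=16, l=8))
print(cy_young_points(ip=230, er=55, so=240, sv=0, sho=2, w=19, l=8))  # +18 points
```

That 18-point swing from wins alone dwarfs anything a pitcher can make up in strikeouts, which is what made Greinke's victory such a break from the pattern.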

More evidence for this change in approach is apparent in the NL Cy Young voting, where Tim Lincecum won the award with only 15 wins. As in the AL, there was a 19-game winner in Adam Wainwright of the Cardinals, and another 17-game winner - Chris Carpenter - from the same team. Many thought Carpenter and Wainwright would finish first and second, but in a very close race, Lincecum beat out Carpenter for the award. I won't argue why that was the correct choice, but I believe it was. More importantly, however, it reflects the slow-but-steady transformation of the BBWAA into a more reasonable and logical organization. There's little doubt, of course, that if Wainwright had won one more game he would have run away with the award, and so there hasn't been a total paradigm shift, but the days of the total dominance of the "win" as a measure of pitcher effectiveness are coming to an end.

Wednesday, November 18, 2009

Ubuntu!

It's not that I gave up on Windows; I just found something better. I'm a member of the generation that grew up with computers, and while I may be more in-the-weeds than most of my friends, I'm far from an expert. I don't even know how to program, really. When I tell people that I use Linux - that I'm a full-on, non-dual-booting, hardcore Ubuntu* user - they tend to think that I'm crazy, or stupid, or so extremely computer-savvy that I simply don't notice that, "hello," it's Linux.

*I should include Desmond Tutu's explanation of the meaning of "Ubuntu" (h/t to Wikipedia): "One of the sayings in our country is Ubuntu - the essence of being human. Ubuntu speaks particularly about the fact that you can't exist as a human being in isolation. It speaks about our interconnectedness. You can't be human all by yourself, and when you have this quality - Ubuntu - you are known for your generosity. We think of ourselves far too frequently as just individuals, separated from one another, whereas you are connected and what you do affects the whole world. When you do well, it spreads out; it is for the whole of humanity." Enough said.

I'm going to tell you a secret, though. Ubuntu is easier to use than Windows* and it's easier to use than OSX. Those are probably contentious things to say, especially since many people have a fierce loyalty to either PC or Mac these days, a situation encouraged by both companies for many of the same reasons that Coke and Pepsi encourage rivalry (or Democrat and Republican, for that matter).** That said, I strongly believe Ubuntu is easier because of my experience with all three operating systems and because of the philosophical backgrounds and business models of each of the companies involved.

*Caveat: I haven't used Windows 7 yet, though I plan on giving it a whirl (don't tell anyone) just to see if it's really as good as people say. That said, if the Registry still exists, Ubuntu is still easier to use.

**This is, again, framing the debate. If you have to be either PC or Mac, how can you be Linux? If you have to vote Democrat or Republican, how can you vote Green or Independent or Libertarian? 'Practicality' ends up being the watchword, along with 'standardization,' and while those things are not useless, they are overrated, as Ubuntu shows.


My experience, of course, is far from standard. I grew up with PCs: mining their inner workings for bugs and viruses, trying to improve performance and boot times, poking physically at hardware and metaphorically at software that was behaving oddly. All of that made Windows intuitive to me. In an age before the Internet could troubleshoot almost every problem you might encounter, I more or less knew where to look when something went wrong with my PC.

The trouble was, sometimes there were absurdly counter-intuitive problems. Your CD drive isn't working? It's running too slow? You might think it's a hardware issue, or a driver issue, but what if it isn't? I distinctly recall battling for hours with my PC once only to discover that the OS itself (Windows XP) was storing errors inside the hardware profile for the CD drive, causing it to slow down. If that doesn't make sense to you, fine, don't worry. The point is, Windows engages in strange behaviors because it is built in such a complicated way.

Macintosh, on the other hand, is a whole different mystery. Because they assume that their users are not experts, they child-proof everything. If you want to manipulate the basic operation of your computer, good luck. Macintosh offers the ultimate in aesthetic customization, but don't you dare try to find your way into the kernel (OSX, like Linux, is Unix-based), because then you'll make Mr. Apple very sad, and probably void your warranty along the way.

The differences in my experiences with the two major Operating System powers may not be typical - most users are fine with OSX, because they have no desire to mess around with the "guts" of their system - but they nevertheless stem from the respective business models of Microsoft and Apple.

Let me tell you a secret. Microsoft and Apple are not competitors. That's not entirely true, of course, because otherwise those commercials with the Jimmy Fallon lookalike and the pudgy dude in the business suit wouldn't exist. But, fundamentally, Microsoft and Apple are very, very different companies. Microsoft is, as you might guess, a software company. They produce Windows, they make games, they develop web browsers, they create file-types, and they make their money on the dominance of one program suite: Microsoft Office. Despite the gains of Apple, it's worth noting that Microsoft Office - that's right - comes with your brand new iMac. When you buy from Apple, you're still buying from Microsoft, even if you hate them. But that makes sense, because Microsoft is, again, a software company.

Apple, on the other hand, is a hardware company. Oh sure, they have software (iTunes, iMovie, iInsertProgramFunctionHere - they're creative like that), but they make their money on selling computers and phones and iPods. While Microsoft tries to dominate all things software and couldn't really care less what hardware package you use (Dell, HP, Lenovo, Sager, Toshiba, whatever), you cannot buy an Apple computer without OSX, and you cannot run OSX without an Apple. Hardware and software are one and the same, and because Apple integrates those two aspects of their production so well, they're also fantastically cheap.

Ha! Anyone who has shopped for a computer knows, of course, that a Mac with comparable hardware costs significantly more than a PC. Macintosh will tell you that's because of convenience and security and all sorts of other hogwash, but in reality it's because they have a monopoly on the hardware and software they provide, and because they're terribly inefficient compared to the PC tag-team of Microsoft and its hardware producers. Apple has to spend significant time and energy developing, troubleshooting, and building hardware. Apple also has to spend time developing, troubleshooting, and building software. On the other hand, Microsoft only builds software, and Dell only builds hardware. So the partnership of hardware company with software company leads to increased efficiency and increased competition, meaning a lower price for Mr. End-User, you.

That's not to say that Apple's model is flawed. It obviously is not. Because Apple controls all aspects of the computer's development, it has been able to build things like the iPhone and the iPod, and has made them easy to sync with your iMac. That's something that the PC world lags in, and will always lag in, because of the business models in place. Neither approach is right or wrong, of course, because both have advantages and drawbacks.

Which leads me, finally, to the actual subject of this post: Ubuntu. Ubuntu is a free, open-source Operating System that debuted in 2004. It is a distribution of Linux, which is a free, open-source kernel upon which many Operating Systems have been built. If you've ever used the Internet - especially at a large company - you've probably used Linux without knowing it. That's because well over half of the servers in the world run on Linux due to its reliability.* Unfortunately, Linux is also incredibly complicated and technical, and was hardly the stuff of everyday-user software for a long time. That didn't stop Linux from making a push into the OS world in the 90s, but much like the original solar panel, it was introduced before it was actually useful, and lost an undue amount of credibility.

*Back in the 90s, the story used to be that Microsoft servers had to be rebooted at least once a day or else they'd crash. Linux servers could run a month.

Ubuntu is an effort to change all that. Their slogan is "Linux for human beings," and it shows. The GUI (graphical user interface) is clean and simple, and it comes with free software that can accomplish just about anything a PC or Mac can. Where Windows has Internet Explorer and OSX has Safari, Ubuntu ships with Firefox (which is free, and open-source). Where Windows and OSX machines typically come loaded with MS Office, Ubuntu ships with OpenOffice (which is free, and open-source, and does everything Office does). Where you can buy Photoshop for OSX or Windows, Ubuntu comes with GIMP - which does almost everything Photoshop does - for free.

Because Ubuntu has become by far the most widespread Linux distribution in the world, it is achieving a level of standardization that allows its software to be well-supported. What's more, if there's something you want to do that you can't find a program for, it's easy to go into the Software Manager and find - in most cases - a free, open-source program that meets your needs. That's because there are a lot of Linux users out there who do know how to program, and who have already built software that does what you need it to.

What I'm telling you is this: Ubuntu is easy to use. There are undeniably issues that require a little finagling from time to time, but most of those can be solved by a quick trip to Mr. Internet, and that's true of every OS anyway. Because Ubuntu combines the security of OSX (there are no Linux viruses) with the powerful and intuitive interface of Windows (I'm not being sarcastic, the right-click functionality on PCs is a simple but profound advantage over Apple) and an endless opportunity for customization that leaves both of the major developers in the dust, I honestly do think that Ubuntu is the finest OS currently available. That would be true regardless of price, but consider this: a state-of-the-art laptop that Apple would charge you $3,000 for, or that Lenovo would charge $2,000 for, can be had with Ubuntu for around $1,300. A lower quality laptop you could get for much, much less, and all because you're not paying for Office, Windows, Norton, or any of the other proprietary software that is factored into the cost of your computer without you even knowing it when you buy from one of the big companies.

There are philosophical questions about open-source software. Would open-source developers be innovative without commercial developers to pave the way (Ubuntu, after all, does borrow from both Windows and OSX)? Is it fair that so many contributors to Linux - either at the kernel level or at the user-software level - go unpaid for their work? Is there a need for open-source developers like Canonical (makers of Ubuntu), when big companies like Microsoft, Apple, and Google are producing functional and effective software and hardware already?

Those are hard questions that I don't have an answer for. I do know, however, that as the computing world moves more and more towards the cloud, even the large companies are being forced to become more "open-source." The reason is efficiency. Where Microsoft has to pay a team of highly trained experts huge salaries to develop their software, Ubuntu has a massive and dispersed workforce, many of whom are not in the employ of Canonical. Microsoft will likely never try to copy Canonical's business model,* but many companies are increasingly operating in a hybrid fashion.

*They do have one, by the way, and do intend to become profitable in the long run. One of Ubuntu's big missions is "Edubuntu," which is a software package designed specifically for schools. The nation of Bulgaria - as it begins a one-to-one laptop program - has decided to run its entire school system on Edubuntu. They pay nothing to do this, but because they have such a large infrastructure, they do pay Canonical for technical support. In the end, this is still way cheaper than Windows or Apple would be, but Canonical also makes money out of the deal. If Ubuntu can break into the US education market, Canonical stands to become quite wealthy.

If you're at all familiar with modern trends in computing, you know that the elephant in the room here is Google. Google, like Microsoft, is a software company. But unlike Microsoft, they are a Cloud Computing company as well, which means, among other things, they know the power of crowd-sourcing. When Apple released the iPhone, there was an explosion of Applications ("There's an App for that!") developed by users for everything from popping bubble wrap to learning to speak English. Those apps were developed, however, within a framework dictated by Apple, because the iPhone is run on proprietary software and hardware.

Google recently released Android, an open-source operating system for phones. If they can get market penetration like Apple did (and Apple is one of the better companies around at marketing, something Google has never really needed to do since their money comes from advertisements and not sales), the Apps developed will put the iPhone to shame. Google recognized that it didn't need to protect its software, because it wasn't making money on the software to begin with. Instead, Google benefits most when users can do whatever they want with the phone (with their browser, with their email account, with their blog, with their search engine, et cetera) because the more you customize, the more Google knows how to advertise effectively to you, and the more money Google makes.

Brilliant? Yes. And also by far the biggest competition for Microsoft and Apple there is. And also the biggest competition for Ubuntu, and Linux in general. Apple and Microsoft may continue to be powerful companies for a long, long time, but they also need to adapt fast to the realities of the new computing world. In my opinion, Google and Canonical are the two software companies most poised to become major powers in the next few decades. The philosophical difference here is not whether open-source is good: both companies will use open-source software because open-source is inevitable. Rather, the philosophical difference is whether advertisement belongs in your Operating System, and whether a company should collect and store your personal information so that they can better market to you. This is no trivial consideration, of course, but it will almost certainly get very little play in the media.

It is also true that Ubuntu is a major underdog as it takes on Google (and Apple and Microsoft), but it has already made tremendous gains since its inception in 2004. Consider that, by the estimates of Canonical CEO Mark Shuttleworth, Ubuntu reached over 8 million users worldwide in 2006. Consider that, in 2006, there were 6 million Xboxes sold worldwide. Apples to oranges, of course, but whereas you've probably heard of the Xbox, most people haven't heard of Ubuntu, even though it's more widespread than the Xbox. It is also worth noting that while actual numbers of Ubuntu users are unclear, the distribution can say this: it has more users today than it did yesterday.

Don't get on the bandwagon just because I said so, though. Just go to Ubuntu's website and download the latest distribution. Throw it on a CD, and you can boot it up without installing anything on your computer. It doesn't cost anything, after all.
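
(One practical tip: it's worth verifying the download before you burn it. Here's a minimal sketch in Python - the file name and checksum below are placeholders, not real values; substitute whatever Ubuntu's download page lists for your release:)

```python
import hashlib

# Placeholders -- use the actual ISO name and the MD5 published on ubuntu.com.
ISO_PATH = "ubuntu-desktop.iso"
PUBLISHED_MD5 = "paste-the-published-checksum-here"

def md5_of(path, chunk_size=1 << 20):
    """Compute a file's MD5 digest, reading it in 1 MB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    match = md5_of(ISO_PATH) == PUBLISHED_MD5
    print("Checksum OK - burn away" if match else "Mismatch - re-download the ISO")
```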

Saturday, November 14, 2009

Storytelling and the Digital Age

I've been thinking a lot about stories recently. What is a story? Why do we love them so much? Does it matter if a story is true? What does it mean for a story to be true, anyway? Some of those questions are at the heart of doing research in the Social Sciences (like Education), but they are also important in the way that we live day-to-day, and the way we communicate with others.

These questions are at the forefront of The Princess Bride, by William Goldman, which I am currently rereading. The story itself is blissfully inconsequential, full of absurdity and drama, humor and even a little sarcasm. That makes it enjoyable to read, but it's not the kind of book that warrants dissection. No, it's basically a children's adventure, meant to be read aloud, meant to be immersive and silly. But it is punctuated by another story: Goldman's narrative about hearing the story as a child, trying to pass it on to his own child (a son, which he does not have), and finding that he needed to abridge the original. This story is as fictitious as the story about Buttercup and Westley, but it is so believable that myriad forums and websites have discussions about whether or not it is actually true. Was there a Morgenstern (the supposed author of the original story)? Is The Princess Bride really based on true events? How could Goldman simply invent this incredibly elaborate and utterly conceivable back story, and why would he?

Mixed in with this background story is a thin layer of truth. Goldman was married at the time he was writing The Princess Bride; the book was born as a story to tell his daughters, and he did end up getting divorced from his wife (as the background story suggests is likely). He even includes a few parenthetical stories in the various introductions - and there are many - that say "this is a true story," to set them apart.

Why does Goldman tell the story this way? I won't pretend to be able to get into his head, nor will I try. Rather, I wonder why we tell stories at all. I know that they are enjoyable, and engaging, and immersive. I know that most people love to imagine different worlds, to take on different identities and to play with the possibilities those worlds and identities propose. So much learning happens, indeed, from the ability to imagine oneself as someone else, to take on a new identity. So much happiness comes from the ability to imagine a better world, or at least a different one. Perhaps that is why the Waldorf schools emphasize myth and imagination so much.

Learning and fun aren't the only reasons here, though. There's also the storyteller, who is not always seeking to teach. Perhaps he is always seeking to entertain, but with what tools, and why? Is there a desire for self-expression, no matter how fictitious the story ends up being? Is there a pride in invention? Is there a sense of community the storyteller becomes a part of?

I think of Homer, and the story of Achilles. So much of that story must have been false, but it didn't matter anymore. It was a story that captured what it was to be a Greek, what it was to be a man, to struggle with the knowledge of your own mortality, to be great and finite all at once. It was also a story of clashing steel and spears and gruesome deaths made poetry. All of those things we find in our modern stories - whether they be movies or books or the innovative blog posts at Cardboard Gods or just a series of pictures. I would argue that some video games are built to tell stories, too, some obviously so - Mass Effect, The Witcher, God of War - some not as overtly, but in a way, more deeply - Fall from Heaven, Europa Universalis, Out of the Park Baseball, PeaceMaker. Stories come in so many forms, with such variety in authorship, reality, and audience.

What was the Iliad to Homer? Who was it for? His audience has turned out to be thousands of years' worth of Westerners, even though his audience was originally just groups of interested men and the children they hoped would grow up to be warriors. Was it for Homer, too? And was his audience also an author? That is the modern trend, where blogs are replacing the page, and interactive games are replacing the TV screen, and people are connected with each other almost always, and having conversations. Authorship and audience are becoming the same thing, and yet, that doesn't seem so strange. It's certainly not a return to what stories used to be, but it is a moving forward that simultaneously looks back. "The human voice is making a comeback," a Professor here at Stanford told me. She's right, and while the story is once again becoming central to what it means to communicate, to think, and to learn, it is a very different kind of story that we are beginning to see. Let me tell a little story, to give a little perspective before the end of this post.

The original movies were just films of stage plays. There was no zooming, no panning, no cutting or splicing or, really, editing of any kind. The movie was shot from beginning to end, because it was just another medium to capture and distribute a whole methodology of storytelling that already existed (and had existed for millennia). It wasn't until much later that filmmakers distanced themselves from their theatrical roots, and began to take advantage of their ability to mix in music, to cut from one actor to another, to take things out of order (consider an extreme version of this: Memento). Eventually, filmmakers imagined things that would have been impossible not only to do without film, but even to imagine without film. Technology is often like that.

The kinds of stories, the kind of communication and co-authorship, the kinds of interactive games computers and the internet make possible are only now starting to become apparent. For a long time, computers have been used to do what we always did before, just faster. Even the name - computer - suggests that it does little except perform mathematical operations. Stories, however, that would have been unimaginable before the internet are now starting to crop up. Games are allowing for stories to be told in a way that was inconceivable not twenty years ago. Authorship is becoming collaborative at a level unseen in history. And the effect is probably going to be greater than what anyone anticipates. The internet is one of the most world-changing inventions of human history, not because of what it has done, but because of what it is yet to do.

I cannot say what exactly will change, and how, and to what degree. But through it all there will be language - of one kind or another - and stories, and self-expression, and communication. Indeed, these things are becoming more and more prominent. Those who bemoan the death of reading fail to recognize that, in many ways, more people are reading more often than ever before. They're just reading blogs and tweets and text messages and facebook walls. That may not be satisfying to us bibliophiles, who cling to the printed page, but it shows a profound potential for art and beauty and good writing even in the digital age. What that writing is, and will be, what stories will be told, who knows? Regardless, the future of reading and writing - the future of stories - is an exciting one.

Tuesday, November 10, 2009

Health Care: It All Becomes Clearer

Only the most indoctrinated of Democrats and the most stubbornly hopeful of progressives can really celebrate the passage of the current Health Care Reform bill through the House.* Rhetoric comparing it to the Social Security bill during the Depression is, shall we say, a bit overblown. The bill - which is already being touted as too liberal for the Senate - is far from the populist, socialist mess that you may have heard it described as (if only it were).

* Much like passing a kidney stone, I think. Or passing gas.

Over at Counterpunch, Rose Ann DeMoro provides the most lucid discussion of the pros and cons of the bill that I have seen. There are many improvements in this bill over what we have now, and it is undeniable that insurance companies would have preferred that things remain the same. That said, the single most frustrating aspect of this bill is that it mandates that Americans buy insurance without actually making it any cheaper. This is a death wish for a shaky economy (what will people buy when all of their money goes towards health care?) already ravaged by the exploitative business practices of corporations in almost every sector. More importantly, it is socialism of the worst kind: government mandate without government regulation of the industry in question. It is as if all public schools were abolished, replaced with private schools, and parents were forced to send their children anyway (and don't look now, but that's the darker side of the charter school movement*).

* Ok, ok, I'll do a post on this later.

What has puzzled me throughout this debate, however, is the quiet non-involvement of businesses in other industries. Sure, giant health care companies have a lot to gain by mandated insurance, but small businesses and struggling corporations in other sectors are often devastated by the health care plans they provide for their employees. Why, I have wondered, doesn't General Motors back Universal Health Care? Why doesn't Goldman Sachs, or AIG, or Coke, or Viacom? These companies all have to pay insurance benefits for their employees. Imagine if they could shift that cost to the government in a single payer system.

Only recently did it hit me. Mandated insurance. Every American must buy health insurance. Most companies are required to provide insurance already, but surely they wouldn't...

They would. The silence from other industries comes from a simple fact: mandated health insurance, plus the elimination of the requirement to provide employee health benefits. There are nuances of course, but in the battle between big corporate lawyer and "little guy" lawyer, who do you think will win?

Yes. It's that bad.

Monday, November 9, 2009

If There's a Slow Down...

It's because I'm entering the end of the academic quarter, when projects, essays, and bribes are due. Wait...

In all seriousness, project-based learning has its benefits and drawbacks. Stanford clearly believes in learning by doing, and has the relationships necessary to make the "doing" not merely simulation, but reality in almost every class. We have a chance to present the prototypes we build, or the ideas we generate, not just to teachers, but to Silicon Valley companies and to the world's largest charitable foundations.

The drawback - such as it is - to that model, especially in this environment, is that it tends to be back-heavy. As the quarter draws to an end, projects are due in every class, and in many classes the reason to do well is not a grade, or credit, but the opportunity to network with the best and the brightest that the field of education has to offer. I have found that posting here - even for my itty-bitty audience (and what is 'audience' anyway?) - helps keep my mind nimble and aware of things other than my classwork. Sometimes it helps me to contextualize that classwork more broadly. Sometimes it is just fun. Regardless, it's here to stay, but the next few weeks will see a slowdown.

That said, if any of my papers are pertinent to a wider audience, you'll probably see them up here. So maybe fewer posts, but longer posts, are in the wings.

Saturday, November 7, 2009

Why Baseball Doesn't Need a Salary Cap

In the wake of the 27th World Series title for the New York Yankees, baseball fans all over are crying again that the sport needs a salary cap. How dare the Yankees spend their way to victory year after year? How dare baseball allow the poor Royals and Pirates to be little more than AAA teams for their big market competitors in New York, Los Angeles, Boston, and Chicago? Baseball, the argument goes, needs to be more like basketball and football, which have great competitive parity because of their salary caps. Except, that's not really true. There are a number of holes in the "MLB-needs-a-salary-cap" argument, and I hope to point out as many as I can here, but perhaps the most obviously absurd is the "basketball and football have a salary cap, and therefore have parity" argument.

Consider the NBA. Since 1999-2000, there have been 10 NBA Championships, won by the following teams:

Los Angeles Lakers - 4 times
San Antonio Spurs - 3 times
Detroit Pistons - once
Miami Heat - once
Boston Celtics - once

The Lakers have also lost the Championship twice, meaning they have represented the Western Conference 6 times in the last 10 years, and as you probably noticed, San Antonio represented the West in 3 of the other 4. Parity? I know that the Western Conference in the NBA is considered extremely strong, every year, but that does not mean there is much parity. Conference Championships may be a crude measure, but the larger point holds, I think: the salary cap does not stop the Lakers from being a force almost every year.

Let's look at the NFL, too. Since 2000, again, there have been 9 Super Bowl winners. They are:

New England Patriots - 3 times
Pittsburgh Steelers - 2 times
Indianapolis Colts - once
Baltimore Ravens - once
Tampa Bay Buccaneers - once
New York Giants - once (beat New England)

Even without the numbers, I think it's pretty easy to look back and think about the "good teams" of the last ten years in the NFL. New England has been competitive repeatedly, despite the salary cap. It is notable, of course, that there have been many good NFL teams that come from small markets (unlike in the NBA, where being a big-market team is, interestingly, almost as important as it is in baseball), but the main point here is that a salary cap does not ensure parity. Consider, in the NFL, the other side of the equation as well. More and more teams are habitually terrible, much like in baseball. The Detroit Lions, for example, have been a laughingstock for almost a decade.

Why is there not, actually, parity in these major sports? The salary cap is almost a non-entity in the equation, and in the NBA, in particular, it actually serves to help large market teams. How is that possible?

Basketball, like baseball, generates a huge portion of its revenue from ticket sales and local TV coverage. That means that, once a large market team gets rolling, it stands to make way more money than a small market team. The Minnesota Timberwolves were successful for a year or two, but that success was unsustainable because there simply aren't enough T-Wolves fans out there to allow the franchise to rake in the dough from advertisers. The Lakers, on the other hand, have an enormous fan base, and their success translates not only into slightly higher ticket sales, but massively higher advertising revenues from local TV coverage.*

*Actually, in basketball, the import of TV money is even greater than in baseball. Baseball stadiums are huge by comparison, and each venue hosts 81 games in a season. Basketball has a shorter season, with fewer games, and much smaller arenas. That means more TV viewers, plus a schedule that emphasizes night and weekend games to ensure fans can watch a much higher percentage of the season than the baseball faithful can.

Why is TV revenue so important in a sport with a salary cap? Because higher revenues mean higher profits, which mean more money to invest into the team. Because the salary cap restricts the amount of money that can be invested directly into players, that extra money gets invested into facilities, better trainers, superior scouting departments, high-tech gizmos, and - if they happen - shady dealings. The Lakers, because they cannot spend more on their roster than the Thunder can, reap instead the huge advantage of spending more on supporting their roster. In a salary-capped league, this also means that player retention is better: who would want to leave the perks of Lakertown for Oklahoma City, especially when OKC can't pay you any more than the Lakers can, anyway?

In the NFL, this is less of an issue because revenues are more equalized (by virtue of weekly national coverage). Nevertheless, there is an important lesson in the lack of parity there, too, because it shows how important non-player personnel can be. Management - especially general managers and scouts - plays a vital role in the success of a baseball team. This is also true of NFL teams, where the Head Coach has tremendous influence on the outcome of a game. Consider the Patriots: are their players so superior to those of other NFL teams, or is their success more a result of Bill Belichick? The analogy with baseball is imperfect, because baseball managers contribute very little to a team's success, but the point is that there are places other than players to spend money.

Were baseball to adopt a salary cap, there would be two major outcomes. The first is that the massive profits that baseball teams enjoy would be directed into the hands of owners, and away from players. Players are an easy target, and we often decry how awful a society must be to pay these whiny 20-somethings millions of dollars to hit a ball with a stick, but that misses the point. A salary cap doesn't change the amount of money that goes into the MLB, it just changes who sees that money at the end of the day. If players can only make so much, owners make that much more, and if there's anything worse than a society that makes millionaires of men who hit balls with sticks, it's a society that would rather make billionaires of the men who hire those millionaires.

The second result of a salary cap would be a redirection of the resources of a team like the Yankees into other markets. Already the Yankees enjoy a tremendous advantage in amateur development, facilities, training staff, and foreign scouting. Imagine if they could spend only $100 million on player salaries, and could re-direct that extra $100 million they'd be saving towards their minor league teams and foreign scouting departments. Imagine the clubhouse they could build for their players. Imagine the vast competitive advantage they would enjoy in almost every other aspect of the game. It's impossible to know just how a salary cap would actually work, but if the NBA is any indication, a salary cap would not keep the Yankees from enjoying the same kind of dynastic power that they already do, and that the Lakers enjoy in the NBA.

Of course, the Yankees already are a perennial power - along with their huge-market cohorts, the Boston Red Sox, the Los Angeles Dodgers, and the Los Angeles Angels (of Anaheim, I'm told) - which is what has everybody so up in arms. The illusion of parity in baseball is really just the parity of the playoffs. A seven-game series is too short to determine the superior team in baseball, where a superior team may win 55% of the time (compared to, say, in the NBA, where the superior team wins 70% of the time). The Yankees and Phillies could have played their recent World Series 100 times, and the Yankees may have won 60 of them. Now add in two more rounds of the playoffs, including a five-game series at the start (which increases the odds for an inferior team). It's no wonder the Yankees hadn't won the World Series since 2000. But remember that they made the playoffs every year in between but one.
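
If you want to check that arithmetic, the odds of taking a best-of-seven follow directly from the per-game win probability. Here's a minimal sketch, assuming each game is an independent coin flip at a fixed probability (a simplification, but it makes the point):

```python
from math import comb

def series_win_prob(p, wins_needed=4):
    """Probability that a team winning each game with probability p
    takes a best-of-(2 * wins_needed - 1) series."""
    # Sum over how many losses (0..3) the team absorbs before its 4th win;
    # the clinching game is always a win, hence the binomial coefficient.
    return sum(
        comb(wins_needed - 1 + losses, losses) * p**wins_needed * (1 - p) ** losses
        for losses in range(wins_needed)
    )

print(round(series_win_prob(0.55), 3))  # ~0.608: a 55% team wins about 61 series in 100
print(round(series_win_prob(0.70), 3))  # ~0.874: NBA-grade superiority shines through
```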

This, it seems, is the problem that gets everyone so worked up, the problem that a salary cap would, in theory, fix. But whether parity is even desirable or not is an important question that many sports fans forget to ask. While a competitive game is certainly fun, how much would College Football fans enjoy the sport if UNLV and Mississippi State had as good a chance to win the National Championship every year as USC, Florida, or Ohio State? I would argue that - because of graduation - College Football has more parity than almost any other major sport in this country, and yet it is the lack of parity sport-wide (or the parity, you might say, at the fairly large upper tier) that makes the sport so engaging. That and the allegiance every alumnus has to his respective alma mater. But TCU graduates still like watching the National Championship, even if they never expect to get there.

Of course, we expect better of baseball, where every team is supposed to "have a chance." This isn't top-heavy European soccer, after all, where the top leagues are dominated by a small handful of elite teams, and the rest fight to avoid relegation. Baseball is about hope and justice and the American way, and stuff, right? Except it's not, really. Baseball is largely about money - at least on the National scale - and it is good money for the league if the Yankees are competitive each and every year. So let them be! Baseball, for the true fan, isn't about World Series victories anyway. It's about summer afternoons at the stadium, rooting for the home team whether they are 60-30 or 30-60, up by four runs or down by eight. It's about a hot dog and a beer in the bleachers, a heckler insulting himself more than the opposition, the drama of the pitcher and the batter, and the wonderful statistics all of that produces. It's about arguing with the guy in the row behind you who you've never met whether Seth Smith is better than Brad Hawpe (he is), and whether the Rockies are a better pitching team than hitting (they are), and whether Clint Barmes sucks, despite all his home runs (he does).

Baseball is too long a haul, each season, to be about championships. The Yankees fans for whom a season without a World Series is a failure are missing out on something. There can be tremendous joy in watching your team exit in the first round of the playoffs, simply because it was a good season. There can be tremendous joy in finishing last, too (just look at the '93 Rockies). I don't mean to deny that there is frustration in losing, and I wouldn't suggest that you want your team to be a failure, but real baseball fans show up in Cincinnati and Pittsburgh and Kansas City every year even though they know their teams aren't any good. Why? Because they want to watch baseball, pure and simple. Wins and losses be damned. Baseball is the closest sport to poetry there is, and poetry is as much about losers as it is winners anyway, if not more so.*

*Consider this Tennyson poem, for example. Or anything Greek.

If there is something wrong with baseball - with the Yankees and the Red Sox - then a salary cap is a terrible solution. I don't know what the 'solution' is, but I do know that it has to address, not the output of the Yankees, but the intake. Creative, intelligent people will always know how to use their superior resources - should they have them - to their advantage, and no amount of restricting that resource usage will stop the Yankees. Eliminating the resource advantage? Sure, but how are you going to do that? And do you want to anyway? On some level, I was glad the Yankees won the World Series this year; there's something right about it. Let them win five more in the next decade, let them be champions, and be desperate and incredulous when they aren't. Just let me watch baseball, and I'll be happy.

Friday, November 6, 2009

Stress as a Choice

It is not uncommon, just about everywhere, to hear people talk about how stressed they are. This usually stems from having too much to do and not enough time to do it, and manifests itself as a self-fulfilling inability to actually be productive. While stress certainly has external contributors, I believe that it is, at its heart, a personal decision about how to respond to those external contributors. Without going into psychology or trying to dissect that, I think it's fair to point out that there are many people who feel overwhelming stress when they have fairly little work to do, while others skate by whistling and smiling despite working almost non-stop.

Choosing stress, however, does not strike me as a conscious decision. Few people would say: "Oh good, I have enough to do now that I can start feeling stressed about it." Granted, there are certainly people who need the threat of a looming deadline in order to actually complete a task (similar to the people who need the threat of a torturous afterlife in order to be moral?), but that, I feel, is a different kind of stress than the crippling, miserable, and unhealthy stress that is so prevalent. Indeed, I might call a deadline motivation, and while you might say that's just the same thing as stress, I would say, in response, that that is my point. The words we choose to describe a similar phenomenon tell us a lot about our intellectual habits.

What is the origin of this choice, then? Why do so many people choose stress? Because the choice is unconscious, I suspect that stress is a habit started very early in life. Children who have stressed parents are probably quick to learn stress themselves, and are likely to respond in concert. Add to that our achievement-driven academic system - which rewards neither intelligence nor effort, really - and you have children at every tier of the system terrified about how they are possibly going to succeed (and out-compete their friends). "That's life," you may say, but that again is exactly the point: we choose a stressful world, and impose it upon ourselves and our children.

While I firmly believe that too much work gets done in the world (that's a post for another time), I think it is even more troubling that so much time is spent worrying about getting work done. There is something backwards about a culture that is more concerned with what it does and how it is to do those things than why. Choosing stress is about forgetting why, if not personally, then societally. Of course, the widespread epidemic of stress in the world - especially in Western countries - is probably a symptom of an unexamined why. If purposelessness breeds stress and frustration, it is no wonder a culture built upon profit is rife with them.

That's probably also a post for another time. Meanwhile, I don't think there's a need to delve too deeply into the cosmic - or at least cultural - sources of stress. More important is the personal decision, which is tied so closely to personal habits and expectations. I have seen people who respond to mountains of work as a fun challenge to be overcome - even when that work is not ostensibly engaging - which seems to me a better choice.

Call that attitude "life as a game," where there is always fun to be had in almost any effort, if you make that choice. Sometimes that might involve a good deal of fantasy; sometimes, perhaps, too much. But we already know what too much stress looks like.

Tuesday, November 3, 2009

Games and Learning

(This post borrows heavily from James Paul Gee's ideas in his book What Video Games Have to Teach Us About Learning and Literacy)

It might be that video game designers know more about learning and teaching than many educators do.

I don't mean educational game designers. I mean developers like Rockstar Games - makers of Grand Theft Auto - and Blizzard - makers of Diablo, Warcraft, and Starcraft.

That seems like an outrageous claim. After all, who hasn't heard about those horrible video games, tearing at the fabric of society, turning our kids into murderers and thieves with no sense of morality or justice? Intuition and reason suggest that social violence is a natural outcome of digital violence, and if we're operating at that level, how can we even begin to talk about learning and teaching?

Of course, if you buy that games cause kids to become dangerous, you definitely buy that they are excellent teachers. They may not be teaching good content, but they are effective at what they do teach. That, however, is not the angle I would take. I believe that games are not, in fact, always good at teaching content, and certainly not at shaping external social behavior. While there's plenty of noise about the dangers of violent games (or violent TV, or violent anything), there is little reason to believe that exposure to violence on a screen makes students into killers. It's simply a much more complicated equation than that.

No, games - and especially the wildly successful ones like Grand Theft Auto - are good teachers of metacognition - of thinking as such - because they are built on sound, well-implemented theories of learning. This is not because the guys who make GTA look for good learning theories when they build their engine and their plot, but rather it is a result of a competitive market: good games must satisfy players. What satisfies the gamer? An experience that is neither too easy, nor too difficult. Controlled frustration - where each stage of the game is difficult enough to require a creative use of already developed skills and newly acquired ones, but not so obtuse as to shut down the player - is vital to the marketability of any game.

Anyone can play GTA and run around stealing cars and killing innocent civilians, but that's not the point of the game, and it's ultimately not satisfying long-term for the player to simply get arrested or killed over and over as his crime-spree spirals out of control. Instead GTA, in order to be successful in a highly competitive game market, has to provide an involved plot with substantive challenges at every point. This is a more rewarding gaming experience, and it is what has made GTA so successful, even among people who have a sense of morality and justice and who are otherwise intelligent and hard-working people.

How does this happen? How do you achieve controlled frustration? This is a difficult question to answer, and I won't pretend to know how. What I will do, instead, is explain why it is an important question.

One of the key components of education is engagement, but too often students do not stay engaged in lessons. Why not? Is it really fair to demand that every lesson be "fun"? Of course not, but it is fair that the overall experience be one of controlled frustration, and that every lesson should be rewarding in a meaningful way. Most students who shut down do so because things are too hard or too simple for them. Like gamers, they will only "play the game" if the game challenges without overwhelming. That's not a remarkable insight, and teachers don't need to look to game designers to understand why their students "check out." Instead, what is valuable is that good games manage to maintain the interest of a wildly diverse pool of players despite a standardized "curriculum." There is personalization, of course, but there is some magic in having such a dynamic game world aligned with what is usually a structured and fundamentally limited landscape.

Game designers are faced with the challenge of operating in an extremely limited game world, with code as their only resource, and must create an environment in which suspension of disbelief is possible, and in which the gamer will be encouraged to continue playing. This is no small feat, and few professions ask their members to do more with less. And yet, gaming is a multi-billion dollar industry worldwide. This, I believe, is the result of a highly sophisticated sense of exactly how to engage the player.

Many educators think of technology as a tool to contribute to learning, or, alternatively, as a replacement for traditional models of instruction. It is neither; it is a whole new environment in which learning can take place. Traditional content can be ported to technology, traditional skills and lessons can be aided by technological devices and software. But the true power of technology lies in the whole new world it provides, where the fundamentals of student-teacher interaction can transform. That does not mean the elimination of traditional teaching - any more than the explosion of video games has meant the death of board games, card games, or tag - instead it means a whole new world of opportunity for educators. Just as video games were first adaptations of board games (for example, Dungeons and Dragons became Baldur's Gate, Neverwinter Nights, and countless others), but have now expanded into unique genres especially designed for computers (Portal, Mass Effect, and Demigod come to mind here), education will almost certainly follow a similar path.

But there is no need, as education expands into the digital world, to reinvent the digital wheel. Scaffolding is a big deal in education, but scaffolding digitally is different than it is in a lesson plan, and game designers already get it.

It is no accident that magazines like Edutopia - as well as more formal research journals - are beginning to embrace games not only as a potential source of usable information, but also as allies in the broader educational mission. The distrust for gaming that many Baby Boomers feel, while certainly a valid emotional reaction to a cultural artifact that is not obviously beneficial, increasingly comes across as a reluctance to recognize the value of the innovation of a new generation. Gee, in fact, speculates that many older people dislike games simply because they are too hard to pick up and play for a generation that did not grow up with computers. The deeper hatred of gaming, as such, is just a rationalization for not being able to get into these bizarre and complicated games that kids are so good at.

So what does any of that have to do with metacognition? Well, the key here is in the particulars of a given game. Some games are fairly formulaic, and straightforward, but many - more than you think - actually force the player to consciously ask himself "Why can't I get past this part?" In order to answer that question, the player has to extract himself from the game so he can reflect on what skills and knowledge he has acquired so far, and how he has used them to get past earlier parts of the game. He has to, then, consider whether or not he can recombine those skills and abilities to solve this new problem, or if he must develop an altogether new skill or ability. In short, the player has to consciously reflect not just on the game itself, but on the way he is playing the game, in order to be successful. That is metacognition, and a holy grail for any teacher.

Imagine if students - as a matter of course - simply had to reflect upon how they were learning in order to "get past" the next stage of their education. In reality, this is probably true. Few graduate students are unfamiliar with their own thought-process. But how many of those students had conscious metacognition built into their curricula as students? More importantly, how often was that metacognition built in effectively, and not merely as a stunt? How often were those students - even the most successful, brightest, straight-A students - compelled to solve a learning problem for themselves through metacognition? I'd guess not often. And yet anyone who has played Grand Theft Auto has, albeit about somewhat less practical subject matter.

I've hit on a number of topics here, many of which could probably bear explication. The broader point, though, that ties all of those sub-topics together is this: games have something to teach us about learning, and may not be the "waste of time" we often consider them to be.

Sunday, November 1, 2009

Beethoven Quotations

In general I am wary of quotations - or, rather, I am wary of basing a world-view on one-sentence sayings - because they can be reductionist. But there are many which capture a personality well, or which expose the complexity of a situation, or which are just funny. Beethoven, though a musician who almost never wrote opera (the Ninth Symphony, of course, has lyrics) and generally did not write songs, had his fair share of revelatory sayings. Here are my favorites:

"I despise a world which does not feel that music is a higher revelation than all wisdom and philosophy."

“Music is the mediator between the spiritual and the sensual life.”

“Music is the one incorporeal entrance into the higher world of knowledge which comprehends mankind but which mankind cannot comprehend.”

“What you are, you are by accident of birth; what I am, I am by myself. There are and will be a thousand princes; there is only one Beethoven.” - said to a Prince who, as you might guess, elicited the ire of Mr. Beethoven.

“No friend have I. I must live by myself alone; but I know well that God is nearer to me than others in my art, so I will walk fearlessly with Him.”

"The barriers are not erected which can say to aspiring talents and industry, 'Thus far and no farther.'"

And his dying words: "Applaud friends, the comedy is over."

While his music perhaps gives a stronger sense of who Beethoven is than his words do, it is also easy for our modern ears - which have grown accustomed to a different kind of music - to dismiss Beethoven as grumpy, violent, or sad. He is none of those things. His music is some of the most emotionally charged and passionate ever written, but he also composed some of the most humorous, joyful, and spiritual music of his era. What makes Beethoven so remarkable is that he manages to be all of those things - grumpy, humorous, violent, joyful, sad, and spiritual - all at once. He is an exquisitely human composer who knows how to reach into the depths of the human spirit, and to laugh while doing it.

(Though not so much on the joyful, playful side - at least not obviously so - this is an excellent recording of the "Appassionata" sonata. Do yourself a favor and at least listen to the last couple minutes, though of course the whole thing is amazing.)