Tuesday, December 6, 2011


A short poem inspired by accidental poetry overheard in a hall.

Remember When I Crocheted the Virus?

Remember when I crocheted the virus?
It was soft and deadly plush
Like afternoon rain growing into a thunderstorm,
Setting off a wildfire.
What I'm saying is, I authored that destruction,
But I made it with care, out of love.

Monday, November 28, 2011

Response to "Toc"

From my "Future of English Studies" course.  Toc is a new-media literature project.  Here's the website.  Despite my critique below, it is worth taking a look at.

As a piece of writing, Toc has its virtues. As a piece of electronic media, it's a disaster.

Disclaimer: I'm a gamer. I have been since I was young. I grew up on Civilization II, Front Page Sports Baseball, and Doom. As important to me in college as the "Great Books" and the discussions that went with them was the purchase of my first real computer, and the weekend (and wee-morning) hours I poured into Knights of the Old Republic, Neverwinter Nights (1 and 2), and countless other strategy and RPG titles. It is true that video games often do not contain great - or even good - writing, and that their stories, when they have any, are largely derivative. What they do well, however, is interface.

Toc suffers because it is "interactive" without the interaction actually amounting to anything. Sure, there's a certain randomness (and, the authors would likely argue, timelessness) to the order in which you encounter the various fragments of the story, based upon what you click when, but the fragments are sufficiently free-standing that the order hardly matters. Instead, the interaction implies the game world - you have to click precisely on the little blue, green, and red lines to get the appropriate media to operate - without embracing it. The experience of the reader remains mostly passive.

A passive reader experience is fine, of course, but in this case it's the "mostly" that's the problem. The experience isn't entirely passive, and what interaction there is proves either too limiting or too frustrating, or both. The lack of controls on the videos (mostly they cannot be paused or rewound) is a UI disaster. Clicking on the little lines is finicky. The "lens" through which the textual parts of the story are read looks bland, at best. The mix of beautiful hand-drawn images and cool 3D effects in the videos with poor and outdated CGI is jarring.

In all, the authors of Toc, it seems to me, couldn't decide whether they were making this story for readers or for users. They embraced technology as a medium without embracing the design principles that go with it. Sure, the video game industry has its faults, but design is not one of them. Sure, UI people don't have all the answers as to how we can best interact with computers, but they have some good ideas. Sure, modern digital distribution of software of all kinds raises copyright questions, but the archaic "insert CD to install" - without even an autorun function! - places Toc several years behind the times (as does its lack of a patch for the few but noticeable bugs).

Worst of all, Toc is doomed by its lack of flexibility. Though ostensibly friendly with both major operating systems, it requires Apple's QuickTime, which has a tenuous and unhappy relationship with Windows. Toc does not run in Linux (I tried), and I suspect it will struggle once the current generation of its dependent technologies has passed (that is, once we're on to OS 11, Windows 8, and QuickTime Whatever). Even if it can run, it will certainly feel outdated (as it already does). This is true for most digital media, of course, but it is particularly ironic here. For a work so sensitive to time and its illusions, Toc is perhaps more vulnerable to time's passage than most works of art and literature.

Monday, November 21, 2011

Questions for the Liberal Arts Major

The title is a bit misleading here.  Basically, I'm working on drafting a dialogue (of all things) about the future of liberal arts education for my "Future of English Studies" course.  The goal is, in short, to reimagine the St. John's College Great Books program - or something like it - with an eye towards important modern concepts like multiculturalism and technology.  The fundamental question is, how much do you lose from the radical dialogic pedagogy of the college in modernizing it (or post-modernizing* it, I guess)?

* Post-modernism still makes me viscerally uncomfortable, even as I recognize that almost everyone - including myself - with any semblance of education in the modern world believes in it implicitly.  Truth is relative?  Of course.  Context matters?  Duh.  Moral sensibilities have more to do with culture than with eternal, Platonic forms? Yeah, I guess so.  That doesn't change, though, that I also dislike post-modernism.  I think this is in part due to its horrible name.  Someone should have foreseen a problem with future naming of philosophical movements when they called their own time "modern."  Someone else should have realized that calling the next movement "post-modernism" was also silly.  What comes next?  Post-post-modernism?

The goal in this post is not to write my draft, but to pose questions.  That is, I don't even plan on trying to answer or discuss those questions here: that's what the dialogue will do.  I just want to pose questions.  So without further ado, here are, in no particular order, questions for the liberal arts major:

- Is it desirable for every student in a college to share a reading list with every other student?
- Is it possible to share a reading list in a multicultural curriculum?
- What makes for a good classroom discussion, and how important is a shared reading list - or even a shared reading - to that project?
- How could a St. John's-like program teach writing effectively without abandoning its pedagogical roots?
- What is more important to St. John's pedagogy: the illusion of equality in the classroom, the apparent absence of grades, the commitment to the shared reading (and no outside sources), or the participation of a sufficient percentage of the students in the class?
- How many students and tutors is ideal?
- What is the purpose of a liberal arts education?  How can we justify it in the modern world?
- Could St. John's work as a multicultural institution?  That is, is it merely the reading list, or is it the entire structure that is racist and sexist? (Can the subaltern speak?)
- Indeed, is higher education in general (not just St. John's) not culturally hegemonic?
- What subjects should make up "tutorials?" At St. John's we do Math, Laboratory, Music, and Language.  Are these the best options?  What is the goal of the tutorial?
- If Husserl's "Crisis of the European Sciences" organizes the traditional St. John's, what text or texts would best organize a modern St. John's?
- How should a student be assessed in a dialogic classroom?
- Is it possible to just update the reading list and keep everything else about St. John's the same?
- If we did update the reading list, what would be thrown out or condensed, and what would be added?  Isn't it too ironic to have a multicultural, post-modern canon?
- What about increasingly prominent non-textual works of art and philosophy, like movies, documentaries, albums, and born-digital documents like blogs or video games?  What would it mean to study these, and would it be possible to do so in a dialogic classroom?
- Can elements of the St. John's program be recreated online?
- What would a fully digital St. John's seminar look like; what would it gain over the traditional model, and what would it lose?
- How important is the credential to a liberal arts education, both practically and theoretically?
- How should questions like these even be decided?  That is, how should a modern liberal arts program be run politically and socially?

I'm sure I could pose more questions, but this seems to me a good start.  Of course, if you have any thoughts, I'm happy to hear them.

Monday, November 7, 2011

Reading Asterios Polyp's Chart

In my "Future of English Studies" course this week we read David Mazzucchelli's Asterios Polyp.  It's a wonderful graphic novel that you should go read.  Then you can come back and read this post.  I'll do my best, as always, to explain my astrological analysis in non-astrological terms.

Doing a natal chart for a fictional character is always tricky business, but in the case of Asterios Polyp I feel justified.  The book itself draws attention to Asterios's birthday and sign, as one of his key companions in the story is a self-proclaimed "Goddess" who studies astrology and, hilariously, lets Asterios know not to worry "if you fall in love with me, everyone does."  She's a Pisces, we're told, but she must be an Aquarius cusp or a Leo rising the way she comes across in the book.  She's, uh, forward.  And one of only two other characters as opinionated as Asterios.

While Asterios, as a rational thinker, dismisses astrology and its nebulous determinations and interpretations, I thought it might be interesting to see whether his chart tells us anything more about him than "he's a Cancer-Gemini cusp."  Now, the book tells us a few things - birth date, and a number of significant life events - but it gives no time or place, so we have to do a little rectification.  Guessing at place is easy enough.  Asterios lives and works in New York, and his parents are immigrants from Eastern Europe (hence the "Polyp," a shortening of a longer, presumably Greek, name).  Assuming that Asterios was born in New York City seems safe enough.

Time is a little trickier.  I ended up deciding on 5AM, for a number of reasons.  First off, we learn that Asterios is born a twin - his brother dies - after a 30+ hour labor.  For whatever reason, I imagine the doctor's decision to perform a c-section as happening at the end of a second long night of labor.  As for astrological reasons, the first argument for this birth time is that it puts most of Asterios's planets on the Eastern side of his chart.  Asterios is a fairly self-absorbed character who, moreover, is an architect used to shaping his own world.  That jibes better with the Eastern hemisphere of self-determination than with the Western.  Moreover, we learn that Asterios has not yet designed something that was actually built, so he's more of a theoretical architect.  That his Northern hemisphere is stronger than his Southern with a 5AM birth time supports this facet of his personality: despite his vociferousness and manifest brilliance, much of his work (Saturn) remains below the horizon (in the fourth house).

The other virtues of a 5AM birth time include a Piscean midheaven, supported by the positive and, I think, transformative relationship he has with our aforementioned Goddess and her family.  Also, Sagittarius - the sign of philosophy and higher education - fills most of the 6th house of work, another sensible configuration given that Asterios works in Academia.  Finally, and most importantly, this birth time puts his Sun in Cancer in the first house, but keeps his ascendant as Gemini.  The book as a whole is largely concerned with how Asterios deals with duality, and so a Gemini ascendant makes sense, as does a first house Sun, as Asterios is extremely self-absorbed for much of the book (it is worth noting that he is a progressed Leo, too, making him something of a showman).

Ultimately, we can't rely too heavily on the houses here, despite how sensible the rectified chart looks.  Nevertheless, they can serve as guideposts for interpreting the signs and planets.

Without further ado, here's the chart.

Asterios Polyp Rectified Chart - Generated by OpenAstro

I won't go into too much detail here.  There's a lot here, and, even to my surprise, it fits Asterios extremely well.  Indeed, the chart's correlation to what we know about Asterios as a character is strong enough that I wonder whether Mazzucchelli consulted an astrologer while he was writing.  Anyway, let's look at the highlights.  What really jumps out here?

Short (technical) answer: T-squares. Big, messy T-squares. Mercury in Gemini squares a Saturn-Moon conjunction and also squares Chiron. Indeed, throw in Jupiter and you're on the verge of a Grand Cross.

Non-technical answer: Asterios has serious conflict and difficulty in his chart. His workmanlike attitude belies the difficulty he has in translating his work into reality. Nevertheless, he's so deeply invested in what he does that he makes it a part of his home. He has a strong aesthetic that he must see realized in his home, and even slight deviation from his expected and desired order of things is profoundly upsetting to him.

He has a hard time expressing and understanding this quality, tending to overwhelm his interlocutors precisely because he's not as self-assured as he seems. He struggles to communicate how deeply he is what he does and says. That is, ideas are not merely ideas to Asterios, but are rather a core part of his personality. This multi-faceted sense of self is so upsetting to Asterios that he subsumes it in his subconscious, refusing to realize that the way he lives is peculiar and personal, instead externalizing it as a philosophical position.

Because of this internal conflict, duality takes the place of complexity in Asterios's thought. In a beautiful piece of astrological serendipity, Chiron sets off both the internal and external senses of self with a deep, personal wound. For Asterios, this wound is his dead brother. Indeed, his splitting the world into dualities is his attempt to heal the loss of his twin. Not only that, in healing himself - so he thinks - he finds a means by which he might heal others. That is, he can bring them a simplifying dualism.

That Asterios does not understand that dualism is, of course, the irony at the heart of his character, and the quality that sets the plot in motion.  The story is very much an effort by Asterios to better connect with complexity.  Fittingly, this is represented by the place to which his T-square opens up: Pisces in the 10th house.  Asterios has Jupiter - a planet of growth - in Pisces, and it is no surprise that his growth throughout the book is both Piscean - in the sense of embracing emotional complexity - and social, as signified by the 10th house.

There are other interesting aspects in Asterios's chart, but this T-square (which is nearly a Grand Cross) is really the heart of the thing.  How well it fits perhaps raises a question: how much am I mapping the book onto the chart, and how much am I mapping the chart onto the book?  That is, which is prior?

The answer, of course, is neither.  Yes, my familiarity with Asterios as a character colors my reading of his chart, but should I know nothing about him, I would come to similar, if more abstract, conclusions.  The forces at work here - the important planets, signs, and houses - have very broad meanings that, nevertheless, are narrow enough to allow only particular interpretations.  That the events of Asterios Polyp fit so well with that interpretive baseline, yielding a rich, specific chart seems to me no accident.

Now, the question is, does this tell us anything about Asterios we didn't already know?  Perhaps not.  But it does give me a different language through which I might explain the things I sensed in reading.  Just like writing a reflection on a text, doing a chart might open up linguistic and artistic interpretive pathways that would otherwise stay closed.

To the skeptic, then, that opening up of new pathways is the value of astrology.  It is not about fate.  It is about understanding.

Sunday, October 23, 2011

Backwards Design and Higher Education

A note about the title: it's actually a joke.  There really isn't any meaningful backwards design in higher education.  But I'm not here to complain.  No, there's actually very little backwards design at any level of education, and for the most part I've managed to enjoy being a student throughout my life.  Even without backwards design, schools are wonderful places to establish networks, to have conversations, to try out ideas, and, if you put yourself in the right mental space, to learn by failing.*

* That could be its own post, but we'll save it for now.

Before I can say anything about higher education, I need to talk about backwards design, briefly.  Back in 1998, Grant Wiggins - who happened to go to a very good Great Books college - and Jay McTighe wrote a book called "Understanding by Design."  It's essentially a handbook for curriculum writers, advocating an approach that starts at the end and works its way backwards.  This idea was hardly new to educational theory by 1998 - indeed, in one of my courses this week we've been reading a piece from 1949 that advocates the same idea - but for whatever reason the formulation by Wiggins and McTighe grabbed ahold of the world of educational practice.

The essence of the concept is this.  First, you figure out what core idea or ability you want students to finish a class with, then you figure out how you're going to know whether they have it.  And voila, your curriculum is done!

That's not entirely true, but it's pretty close.  Sure, you can import a handful of secondary and tertiary ideas or abilities with which students should gain familiarity, if not mastery.  And sure, there's still the work of actually writing the curriculum after that.  But figuring out the end goal and the assessment really is more than half the battle.  I would say that, in writing my creative writing curriculum this summer, for example, I spent a good two or three weeks deciding on an end goal and an assessment, and maybe a day or two on writing the actual curriculum from there.  That is, once you know what you're trying to do, it's not so hard to figure out whether any particular activity or lesson plan fits into that bigger frame.

The funny thing is, in higher education this doesn't seem to happen at all.  I could speak to my current classes - though a couple are better than others on this front - but rather I want to point out a deeper and less personal issue.  It should seem obvious to anyone who has been to college or graduate school that the vast majority of Professors do not use anything resembling backwards design in their curricula (heck, most of them just stand up and lecture every week, and then have their TAs administer and grade a content-knowledge test at the end).  The question is, why?

At the heart of the problem, it seems to me, is a dichotomous conflict between research and pedagogy.  The difference between a Professor and a researcher at a think tank or consulting firm is, primarily, this: the Professor, in addition to doing research, teaches.  Perhaps it is easier to find Professor jobs than pure research jobs, or perhaps Professors like the idea of and prestige associated with University positions.  Regardless, once in the Academy, Professors do not, actually, get to choose one or the other.  Or, rather, they are not assessed, themselves, on both fronts.

For the most part, academic survival depends upon research and publishing.  While a great many people will defend the "publish or perish" mentality of the academic market as a necessary part of a meritocracy, it has an unintended side effect.  Professors, because they are evaluated almost entirely on research, do not spend time or energy designing or executing their pedagogical functions.  They are, in short, bad teachers.  And they are not necessarily bad teachers by choice, but rather by necessity.  A Lecturer (not a full Professor) at Stanford I spoke with this week related a story of a colleague whose Dean told him not to spend so much time teaching.  "If you ever want to get tenure," he (more or less) said, "You have to cut back on your teaching and get to work doing research and publishing."

It's important to note that "cut back on your teaching" does not mean teach fewer courses.  No, the admonition is to take your teaching less seriously, to spend less time and effort on designing a good curriculum, on employing effective pedagogy, on evaluating whether your students are understanding the material.  The result, for students, is long, boring lectures and even longer, even more boring reading assignments that float aimlessly in an ethereal mist, never to be connected to their studies except in their own minds.  In short, none of the habits of mind - design, making informed connections, creative generation of questions, and so on - that make for good research are modeled for students in the classroom.

So what does that have to do with backwards design?  While Wiggins and McTighe talk mostly about courses, I think there's an argument to be made that educational structures as a whole can be subjected to a similar analysis.  In the case of higher education - and especially graduate studies - the analysis leads to disturbing revelations.

- At the level of the individual course, there is no clear sense of what a student ought to be getting out of the course, nor how anyone (except maybe the student) will know whether the course succeeded.

- Institutionally, there is little coherence to the student experience except what the student is capable of bringing to it herself.

- It is not clear that the assessment procedures we have in place - that is, the dissertation - are effective measures of whether students are adequately prepared to do meaningful research.

I've hit on the first of these above, but the second two deserve quick explication.

Institutional incoherence is a big problem in the humanities in particular, where students routinely take ten years or more before they finish their studies.  While there are many factors that cause this problem, one of the most important is the lack of a clear objective for graduate students at an institutional level.  That is, Universities do a very poor job of saying "what we want out of our PhDs in English is _____."  Of course, they do generally have something to fill in that blank with, but it's rarely something that makes sense, or that would be agreeable across the department (let alone amongst students).  Now, that may not be a problem, per se, except it leads to our other issue.

How do you assess outcomes (or processes, even) when you do not know what outcomes (or processes) you desire?  In the case of the University, the dissertation has long been the be-all-end-all.  Why? Because of tradition.  Oh, sure, there's more to it than that, but not much more.

In the modern world, I think it's fair to ask whether the dissertation is an adequate reflection of whatever kind of learning students are meant to do.  That is, the dissertation exists, primarily, as a kind of pre-monograph, a preface to a first book.  But is our goal to turn all PhD students into writers of books anymore?  How many graduate school programs define their success based on whether or not graduates go on to publish books?  While that may have been the model in the past, the fact that publishing itself is undergoing rapid transformation in the modern age demonstrates that it's probably not the best model for the future.

What's more, the dissertation is written, published, and defended individually.  It is, perhaps, in some ancillary sense a collaborative experience, but it is still held up as an indication of individual achievement.  The problem is, Professorial research is decreasingly individual.  Collaboration has taken hold in much of the academy, and, indeed, part of the goal of graduate studies probably ought to be habituating students - many of whom have been stuck in highly uncollaborative environments through the whole of their academic lives - to working in teams.  And yet, our final assessment is monolithic, and, what's more, it's almost unimaginable that it be anything but the work of a single person.

All of which shows not only a disconnect between purpose and assessment in higher education,* but a total lack of consideration of that disconnect.  What is the purpose of a graduate education? How do we know that we've achieved that purpose?

* We haven't even touched undergraduate, which is its own messy can of worms.

A great many institutions obviously do a fine job creating future scholars, and so the system is working fine on a certain level.  The question is not, however, whether it has worked or is working, but rather how it works and whether it will continue to work.  Despite missing the kind of clear curricular structure I've mentioned above, the Academy has always had a resilience, thanks largely to its clever, self-motivated members.  It is not clear, however, that survival alone means that the system actually works.

Tuesday, October 18, 2011

Watching Myself Read

For my "Future of English" course this week we were asked to watch ourselves read and to reflect on the process.  This is my result.

In trying to watch myself read, I was surprised to find that I already do watch myself read. That is, I sat down to read and said "ok, now to watch myself read" and found that, as I started to read, I was reading exactly the way I always do. So, really, that wasn't what surprised me, actually. What surprised me was that I didn't realize that I always watch myself read. You could say I haven't watched myself watching myself read.

This continuously self-aware division into reader and watching-reader leads to interesting contradictions. I am an extremely critical reader, on the one hand. I am an extremely agreeable reader, on the other. I naturally filter what I read through the lens of other familiar texts, but I am often hard-pressed to say just exactly which text I am filtering my understanding through, or even if I actually am filtering at all. I have a sophisticated and refined interpretive ideology, and yet strive to adopt the ideology of the text I am reading.

In short, I know my reading extremely well on a surface, procedural, level, but understand it almost not at all at a deeper, existential level. That is, I know how I read, but I don't know exactly who the I is that is doing the reading. It's not the I that eats dinner with my wife, that's for certain. Indeed, that dinner-eating I often spends much of those dinners trying to understand what the reader I has just experienced, appropriating the readerly experience and reinterpreting it for my real life, synthesizing and analyzing, arguing and explaining.

As an almost schizophrenic reader, I generate a lot of ideas as I read, and especially shortly after, or whenever I pause to take a (mental) breath, often by glancing at the clock or leafing through the upcoming pages or checking my Twitter feed. In these times of pause, the more familiar, more argumentative, more discursive I interposes on the receptive, reader I. This is the I that has always watched the reader I. This is the I that is critical. This is the I that searches for connections. This is the I that is consciously ideological. This is the I that is always the same.

The reader I, on the other hand, is much more mysterious, despite how conscious my bifurcation is. I find myself reading texts back-to-back that contradict each other, and, at the reader level, agreeing with both. My reader-self will follow even the most dubious argument to its conclusion, nodding in assent the whole way. Nevertheless, the reader-I is not wholly passive. It is the reader part, not the interpretive part, that shifts voices as an author does, that can tell whether Eagleton is speaking his own opinion or is rehashing the views of someone else.

In observing my reader I this week, an interesting phrase occurred to me. It is this: "the audience of your reading." My reader reads, while the interpretive I asks questions like "who is the audience of your reading?" I might as well say, while I am reading, a part of me is always writing - or preparing to write - also. Whether that writing ends up actually written is immaterial, I engage in mental preparation for it either way.

I would not go so far as to say that, for me, reading is writing. Rather, that they are different is exactly why my reading process is so bifurcated. Then again, the reader I and the writer I almost always coexist. They are not, perhaps, so schizophrenic after all. Rather they are like Aristophanes's lovers (from Plato's Symposium), amorphous blobs meant to be together. Indeed, they are not only meant to be together, but incapable of surviving without each other.

Perhaps the true challenge of this assignment, however, is not in recognizing this dichotomy. Rather, it has been in getting the reader I to read itself. The writer I is used to interpreting the transmissions of the reader, and the writer I is used to looking at and analyzing itself. The reader, however, never really gets a chance to turn that equation in the other direction*. Because this reader I is so impossibly anti-ideological and anti-interpretive, its voice in the process is non-existent. It is, after all, the writer within me that writes this very reflection. Even when I read my own writing, the reader I adopts its traditional role, treating the text as if it is not my own, letting the interpretive I do the interpretive work.

* Except, perhaps, when I'm doing astrology, but that involves the procedural trick of placing myself outside myself in a system of formal codes.

Reflection is a telling word, in fact. When I look in the mirror, what I see is not myself, but a reflection of myself. In trying to turn the interpretive mirror on myself, what I discover, above all, is that the same interpreter that interprets my reading also interprets myself. This could be maddening, but in fact it bothers me not at all. The system, it seems to me, works. I see no occasion for dramatized crisis. Nevertheless, it is interesting to observe and articulate this strange division-cum-non-division, and it is surprising to me that I had never noticed something so fundamental to my own reading before.

Wednesday, October 12, 2011

An Alternative Pedagogy for the PhD Core


Because my last post was a little bit negative, and perhaps somewhat dramatic,* I want to offer a constructive follow-up.  Let me make explicit, first, the criticism that is really at the heart of yesterday's post.  Then, I'll offer an alternative. 

* I never, ever write overly-dramatic posts, do I?

The problem with my two required courses is that they are not, at least so far, good courses.  One is a Doctoral Proseminar that every first year PhD student has to take.  Its stated objectives are, in part, taking a broad look at the field of education research and, in part, getting to know the rest of the cohort.  The other required course - the spark for yesterday's existential angst - is the first leg of a three course series of introductory research methods courses required within the first two years by all Education PhD students. 

The How

How are these courses bad?  Well, they are both co-taught, but there is not even a semblance of collaboration between the Professors co-teaching the courses.  In both sections, one Professor gets up and lectures while the other sits around looking bored, only occasionally interjecting.  At some point the Professors might switch places when the topic changes, but that's really it.  There's no dynamic interaction, there's no prior planning, there's no engagement.

The lack of interaction is perhaps not damning, but the lack of student participation is.  Now, a student in one of these courses might very well object that, yes, we do participate some.  This is true.  The lecture style in both classes is not pure, incessant blather.  There is some opportunity for back-and-forth.  But there is no room for dialogue, even so.  The path of the "conversation" is, if not predetermined by the Professor, managed in its entirety by him.*  Students don't have the opportunity to speak to each other without Professorial interpolation.  That's not dialogue, that's Q&A. 

* I say 'him' because three of the four Professors in question are male, and the three men are the chief pontificators.

A word on the classroom that both of these courses take place in.  It's not exactly conducive to dialogue.  It's basically a small-to-medium sized meeting hall, perfect for a breakfast get-together.  There are a number of round tables that fit four people each, with a longer desk at the front of the room, in front of a SmartBoard (that, incidentally, none of the Professors knows how to use).  At the back of the room is a blackboard.  Notably, one wall contains creative artifacts made by STEP (Stanford Teacher Education Program) students, who use this same room more dynamically.

For a typical class, students will read a few articles and do a kind of preparation activity.  In the Proseminar these are basically note-taking activities designed to reinforce good reading skills.  In the methods course, these are writing activities, some of which have proven interesting and valuable, and others, well, yeah.  My last post addresses that.  Regardless, with the rare exception of some small-group work at the tables, whether a student has done the reading or not has no particular bearing on the course because the Professor simply stands around expounding about said reading for far too long, asking theoretically probing questions that the same five students answer.

In short, it's a typical University class.  But what is striking is how different it is from my experience at this very University as a Master's student.  It is said that the difference between the Master's level and the PhD level is that the latter is more focused on research.  It seems that, in addition, because research is the important thing, pedagogy goes totally out the window.

A final note on the how and why of the problems with these required courses.  In addition to troubling pedagogical practices, the curricula of both courses are far from compelling.  There is no clear "this is what you're getting out of this class."  That means that, while the assessments are fairly good, there's not a strong sense of how said assessments measure whatever it is that we as students are supposed to be learning.  For example, the book review required in Proseminar may develop good habits of mind, but it does not connect in any meaningful way with the readings or the lectures, at least so far.  In the methods course, we're supposed to design a study around our research interests and questions, but in the first three weeks (30% of the quarter), we've not even spoken about research questions, research design, or what makes for a good study, let alone actually done anything.* 

* It's becoming a source of personal amusement that, in a certain sense, my tennis course is pedagogically superior to my other courses.  Each session we show up, we warm up by working on whatever part of our skill set we want to, then the instructor shows us a new skill or a wrinkle on an old one and we go practice it for a half-hour as he wanders around giving pointers.  And you know what?  My ability to play tennis is improving much faster than my ability to research.  I know it's not a totally fair analogy, but that doesn't mean it's not worth thinking about. 

The Why

Why are these courses the way they are?  Perhaps the Professors teaching these courses are doing it because they're trying to curry favor with the administration, and thus they don't want to invest in designing a strong curriculum or practicing good pedagogy.  Perhaps the course curriculum, because it is designed by committee and not by the teacher, is innately unfocused.  Perhaps no one has recognized the problems with the room the courses are housed in, and therefore no one has tried to come up with a more engaging way of using the space.

Whatever the reason, at its heart is this: Stanford University, like most institutions of higher education, is primarily interested in research.*  While many Professors like to teach, they frequently have little to no actual teaching training, and their jobs are in no way dependent on the quality of their teaching.  Students, similarly, are generally less invested in classes** because they're wrapped up in research agendas and assistantships and meeting other requirements and trying to survive in the Bay Area on the roughly $20,000 a year they make as PhD students. 

* Well, research and football, anyway.

** A quotation from a Doctoral student well-along on her path: "Your classes don't matter."  In that case, I wonder, why do we have them at all?

In summary, Professors are not accountable to their employers for their teaching.  They are accountable to their students, but their students don't particularly care whether they teach well or not.  As a result, there's no particular motivation to improve a course, no particular need to assess whether it is "working" or not,* and no particular place for a student who does care about the quality of his courses (and the pedagogy therein) to voice concerns.  It's a self-perpetuating, broken system.  Except it's not broken at all: it's exactly what almost everyone involved wants it to be, which indicates that maybe the real problem is much, much deeper.  But we'll have to leave that for another time. 

* At Stanford in particular such an assessment would be confounded by the fact that most students here are extremely good at doing well in and taking as much as they can from poorly taught classes.  Otherwise they wouldn't have made it to Stanford in the first place. 

An Alternative Pedagogy 

Vogon Guard: "Alright, so what's the alternative?"
Ford Prefect: "Well, stop doing it, of course!  Tell them you're not going to do it anymore.  Stand up to them!"
Vogon Guard: "Doesn't sound that great to me."
  - Douglas Adams, Hitchhiker's Guide to the Galaxy

The alternative to lecturing is, simply enough, not lecturing.  The alternative to none - or few - students participating is getting them all to participate.  Perhaps this is my St. John's education speaking, but I still think there's a lot to be gained from students talking to each other, and there's no reason that can't happen in these required courses.

How does that work?  First of all, no more powerpoint slides.  No more prepared lectures or conversational agendas.  No more "this is what this article means" declarations.  Questions - even pointed ones - to shape a conversation are fine, but presupposed answers are death to inquiry.  If the goal is for new doctoral students to learn to interpret research, to discuss or analyze a text, and to be able to construct an argument as to what that text means, then it's imperative that they get practice at actually doing it.  That does not mean "write a summary," that means dialogue, conversation, and argumentation.

So instead of two Professors trading off droning - with occasional interruption - at 30 students, let's put all of the manpower and brainpower in the classroom to work.  Split the class into two groups of fifteen,* and send one Professor off with each group.**  Sit in one great big circle, let the Professor ask a question about the text, and let the students work together to try to answer that question.  Have, in other words, a dialogue. 

* And rotate the groups around each week, so there's always a different mix in each group.

** Alternatively, split into even smaller groups - say five groups of six - and let the Professors float around, or put them into different groups each session.

Does that sound like St. John's?  Of course it does.  And why do I suggest it?  Because it works.  Stanford PhD students are smart people.  If you put fifteen of them in a room with a text and ask them to figure out what it means and why it's important, odds are they're going to succeed.  So why not give it a chance?  They'll be developing interpretive skills, learning to talk to each other about research (which, vitally, may not even be in their area), and getting to know each other much better than they can when they're sitting four-to-a-table and being lectured at for two hours.

It is true that Professors are generally experts in these fields while students are not, but their wealth of experience does not mean that they are innately better readers than their students, or that they can say something more insightful about a text.  What's more, even if they are better at those things, students will not learn simply by watching them talk.  I cannot learn tennis without swinging for myself (and sometimes hitting the ball into the net), so how can I be expected to learn to speak to my future colleagues about research without being given a chance to do so, even if sometimes our interpretations are wrong, or we cut each other off, or we oversimplify?  By doing will we learn, not by watching.

The Professor's role in this picture is to be the net.  When I hit a tennis ball too short, I can tell.  When I say something stupid, the Professor can chime in.  But here's the really cool part: even that can be given over to students.  If you let us talk to each other, we'll learn how to point out each other's mistakes, as much as highlight each other's strengths.  In short, we'll form and learn to be a part of an intellectual community much like the one we're supposedly entering as future PhD-holding scholars.

Perhaps the objection could be raised that dialogue doesn't happen among Professors, either, and that therefore such a pedagogical system would not prepare students for the Academy.  If that's so, then this alternative to lecturing becomes doubly important: we need academics who do not merely pontificate, but who can actually communicate.  The only way we'll get them is by training them to do so from the beginning.  And anyway, it's a lot easier to transfer the ability to dialogue into giving a good lecture than the other way around.

A St. John's-ian dialogue, of course, is not the only valid alternative to a lecture class.  Not all good classes are discussion classes.  I do, however, pose it as a radical opposite pole.  Somewhere in between is an equally good place where co-teachers actually work together, where the affordances of the room are taken into consideration when shaping the curriculum, and where the students are allowed to practice and be engaged with the material (and each other) for the whole two to three hours.  Such pedagogical strategies exist.  I wish that some enterprising Professor teaching a required, core course would use them.

Tuesday, October 11, 2011

When Doing Poorly on an Assignment is a Crisis

I knew, when I decided to come to Stanford, that there would be days I would wish I had gone to UCSD instead (just as the reverse would have been true).  I did not expect that one of those days would come so early.

Those that know me know that I care little for grades.  I am perfectly capable of assessing my own learning.  What I desire, instead, is feedback, constructive criticism, helpful advice.  Harsh doesn't hurt.  On the contrary, the more direct the feedback, the more specific the criticism, the better.  I want to be a better writer, a better thinker, a more skilled lover of wisdom.

In one (or two) of my courses this quarter, however, I'm feeling something of a crisis of purpose.  The course is a core requirement for all Stanford PhD students, a course that, in principle anyway, is at the heart of what we're doing and learning as future researchers.  The course is a methods course, an introduction to research methodology and thinking.

It is in this course that we recently read a piece by Jerome Kagan.  The piece was the first chapter of his "The Three Cultures," and, frankly, it's one of the worst pieces of writing I've read in a long time.  Its point - that different research methodologies* have different cultures - was blindingly obvious, but its construction was totally inane, ranging from disorganized to grossly oversimplified to needlessly complex.  Phrases like "the critical point is" appeared over and over, often referring to disparate ideas, while the phrase "to put it simply" appeared in front of one of the most complex formulations in the text.  A single paragraph mentioned algae, bees, and ferrets in an effort to make a point about language, but the point got lost in the bestiary. 

* That is, the hard sciences, the social sciences, and the humanities.

All of this was prefaced by two inexcusable writing decisions.  The first was a table that shouldn't have been a table. The table charted "dimensions of research cultures" against those cultures, with the horrifying result that some cells contained whole sentences formatted like bad poems.  Note to self: anytime you put a 20 word sentence in a table, try to make the cells wide enough so that you do not have one word per line.

The other decision - perhaps graver - was an invocation of Ludwig Wittgenstein near the opening of the text.  Wittgenstein, if you don't know, is famous for destabilizing theories of language and meaning with his not-always-clear, fragmentary texts.  He challenged the assumption that words meant the same thing all the time, and ultimately convinced everyone from philosophers to linguists to social scientists that context is really important - perhaps the only thing that is important - to meaning.

Kagan uses Wittgenstein, then, to begin his treatise on the three cultures.  His citation of the philosopher is made to preface his own observation that, within different research paradigms, different words have different meanings.  Fear, for example, means a different thing to an English professor than to a Behavioral Psychologist.

Which is all well and good, of course, if blindingly obvious.  No, the offense here was pointing to Wittgenstein as the divider of the disciplines.  It perhaps did not strike Kagan as supremely ironic that Wittgenstein himself probably would not have been all that excited by the theoretical division of the "three cultures" of modern academic research, that Wittgenstein's ways of thinking were in equal part scientific, social, and humanistic, that his methodology was not easy to categorize by the very system that Kagan found Wittgenstein to be the father of.

Behind my disgust at reading page after page (after page) of the drivel in "The Three Cultures"* was a lurking fear.  You see, this was not something I was reading on my own.  No, this was one of the first pieces that I was meant to read as a PhD student at Stanford University.  In some sense, this was canonical, brilliant, an important work for me to contemplate and consider.

* I prefer to be a generous reader, but in this case I cannot think of a single redeeming quality of the piece.

With even greater horror I turned to my assignment: I must summarize this monster and, what's more, apply its reasoning to my own potential research interests.  The written summary is, I think, a poor piece of pedagogy and assessment as is, but it's doubly hard when the work in question is of such poor quality.

Being the contrarian that I am (nicht diese Töne, after all), I decided to write a poem.  I suspected this might get me in trouble, to a degree, but I have always tested academic limits (often to my benefit) throughout my time as a student.  This was not, for example, the first poem I've turned in on a non-poetry assignment, and historically my assessors have appreciated the change of pace, as well as the effort at creative insight.  Some have leveled a warranted "don't do that again" at me, as well, but at least there was respect for the process.

In this case, I borrowed some of Kagan's language, inserted a one-sentence statement of the simple fact - that different research cultures are different, basically - and ended with the observation that "somebody misappropriated the Wittgenstein."  It was, in my opinion, a funny but not inaccurate piece of analytical and synthetic work considering the quite dull material with which I was presented.  It was not a good poem, but it was not meant to be.  Good poems (and perhaps good writing of any kind) need subject matter worth writing about.  Of course I could have, in less time and probably with better results - at least from the Professorial point of view - done a traditional summary, but if I'm going to spend hours reading a piece of drivel, I intend not to bore myself by writing the same kind of drivel in response.*

* It is worth noting that the Kagan piece was poor writing by academic standards as well.  It was flowery and full of needless metaphors.  It was organized so that multiple and unclear ideas populated each sentence.  It was a mess.

In response to my poem (and, for the other reading, an interpolation of the reading into Plato's Meno; another, I thought, interesting and synthetic attempt to summarize the work without driving myself totally mad at the grade-school-book-reportiness of it all), I received a "check-minus" with a note that I did not summarize Kagan, and that I should look at my classmates' example passages for help.

Hence my crisis.  I know how to summarize a piece of writing.  I can write a clear and concise sentence when it is worth writing.  But I am not here to learn how to regurgitate simple information poorly presented.  And yet, increasingly, that is how I feel my classes are designed.  Classes taught ostensibly by critical pedagogues - who believe in student-run classrooms and dialogue and activity - are two-hour lectures.*  Courses in research methodology involve no training in research or methodology, but rather middle-school level assignments that ask primarily that I demonstrate not that I understand the ideas of a text or am striving to make them my own, but rather that I have done the reading, and am able to copy and paste important ideas into equally vapid academic jargon.**

* A fact that would amuse me were it not so sad.  It is a shocking revelation that the people who teach teachers to teach (and teach education researchers to assess good teaching) - that is, School of Education Professors - are such poor teachers.

** Perhaps that is "methodology" to the modern researcher?  If so, academia is in worse trouble than I thought.

Perhaps I was mistaken in thinking that Education research was open to people who value ideas as much as research, people who value creativity, process, and inspiration as much as method and practice.  It is not that I discount research, method, or practice, it is rather that I see so little of the other stuff that I'm beginning to wonder, well, whether wonder is a part of the equation at all.

Again, I don't really care about grades.  I could get a check-double-minus or a D or whatever and that wouldn't bother me.  No, what bothers me is that the feedback I received misses the point entirely.  It bothers me that the conversation is one way.  That the pedagogy is so deeply flawed.  That the underlying philosophy is so undemocratic.  That the value system is so blindly accepted that it cannot see that, maybe, a whimsical, irreverent, and sarcastic spark might have merit beyond "not being a summary."  Of course it wasn't a summary; it was a condemnation.

If a summary had been worth writing, I would have written one.  Give me an assignment worth doing, an article worth reading, a conversation to have (instead of a lecture to attend to), and I will produce high quality work.*  Give me a class worth taking - do not waste my time for three hours at a time with your pontificating and your holier-than-thou elitism (tenure alone does not make you interesting).  I, too, am an intellectual.  I, too, love ideas, perhaps more than you would believe.  I, too, can speak and listen.  I, in my own way, am well-read.  I have stood in front of students and shut myself up so that they might speak, and it was wondrous.  Yet, if no such thing can happen even here, at Stanford University, in a PhD program, in Education, can we hope for it to happen anywhere else?

* I might as well say the same for any and every student, and yet this lesson learned so well in research has not been learned in practice even by the very researchers who teach it.

It is not my assignment that makes me wonder about all of this.  That is but a small and ultimately meaningless symptom of a much deeper problem.  Nevertheless, it is an indicative example of the bigger picture: a case in which, it now seems obvious to me, the result was destined both because of how I was inevitably to respond to the assignment, and how the assignment, so to speak, was going to respond back to me.  And so the deeper culture becomes the question, and leaves me with an exhortation.

Instead of demanding that I fit into your narrow boxes, academia, you would do well to invite and celebrate your radicals, your creative thinkers, your irreverent teachers, your trouble-makers.  I know that UCSD does, and yet I chose Stanford, in part, because I believed that it did as well.  Now I'm not so sure.

Monday, October 3, 2011

Four Random Thoughts on Reading Gerald Graff's Professing Literature (and Sundry Other Works)

Ok, so this is a response to my reading for my "Future of English" class this week, but I believe it is actually broad enough that it should more or less make sense to people who aren't in the class.


When I taught creative writing last summer, we spent a day with Italo Calvino's Invisible Cities. It's an enigmatic book, a difficult one, but intensely perceptive. Getting high schoolers to appreciate – nay, to comprehend – the project of the work I feared would be impossible. As we read sections together, my fears were realized: they had no idea what was going on.

So instead of forcing a conversation, I jumped into the later part of the activity sooner. I sent them out of the classroom (this was a private school, so I was allowed to do this) to find a place on campus that they found interesting, and then to compose something in the style of Calvino (or at least his translator). An example:

A City In Perpetual Motion

The city in perpetual motion is constantly rising upwards as its inhabitants' visions grip the stars with the dreams swirling in their minds. The city is made of glass allowing a communal flow of energy to circulate through its streets. Glass wall upon glass wall are lined and stacked to form endless buildings, those too, upon glass. The glossy skyscrapers are filled with rarities and talents and treasures, seemingly weightless and untainted. The glassy floor is covered in flourishing plots of herb and flower, each of their seeds pulled from the farthest edges of the universe. As the city in perpetual motion rises upward, it expands, its inhabitants determined to take their place above the stars.

Perhaps not as skillful as Calvino – certainly not as impressive as the vast, interwoven collection he produces, complete with self-reference and surprising invisible threads and the strange Marco Polo / Kublai Khan back story – but good enough to give rise to a question. Did the students actually fail to understand Invisible Cities? What was I looking for? What ought I have done?


It strikes me that the history Graff describes is a history of Hegelian dialectic aborted. Time after time, apparent theoretical, pedagogical, and interpretive opposites have circled each other in the English Department, seeking synthesis, but unable to find it thanks to the unwillingness or inability of the institution to engage the conflict.


I wonder what a similar history of the School of Education would look like? What conflicts have been subsumed into the now-incoherent structure of the institution? Certainly the tripartite division between the researchers who identify themselves with 1) the humanities, 2) the social sciences, 3) the hard sciences comes from some historical disagreement that Schools of Ed participated in, but continue to shield their students from (instead requiring methods courses in each of these research techniques separately, without asking them to consider the fundamental question: what is research?).


What does the ideal University look like, or is there such a thing? Perhaps a better question is: if one had the power to build not just a University or a College, but indeed the Educational system or, deeper still, the entire society itself from scratch, what would it look like? I suspect one would need to go to social values to make any meaningful, fundamental change. Why?

Is it maybe the case that, given our social values, the system we have is the ideal, or at least a very good representation of those values?

Monday, September 26, 2011

Two Research Ideas

One of my first assignments here at Stanford was to draft a possible research question and propose a methodology to use in answering that question.  Because I have multiple interests, I wrote two.

The Dialogic Pedagogy of St. John's College


What is a dialogue? Obviously there is plenty of research about how to support dialogue in the classroom, and the role of discussions in learning. From my own limited experience, however, as a graduate student and as a teacher, there is a tremendous difference between dialogue as it is conceived in such research and dialogue as it occurs in, particularly, the classrooms of St. John's College.

As a graduate of St. John's, I have firsthand experience in those classrooms. Far from providing answers, my experience only raises more questions. What is it about the culture, the pedagogy, and the organization of learning at St. John's that makes their classrooms unique? How is it that students in the St. John's seminar are able to speak so directly to each other about such complicated texts with minimal Professorial intervention? Or are they actually able to do so, after all? What is the relationship between the dialogic pedagogy of the college and its Great Books curriculum? Do they support each other, are they separable, or do they detract from each other? Perhaps most importantly: what, if anything, does the pedagogy – as distinct from the curriculum – have to teach educational systems at large?

All of these questions interest me, and they serve as a backdrop for a possible research agenda. In the short term, however, I want to focus particularly on undergraduate programs and dialogue. That is, I want to address this question: What constitutes a dialogue at St. John's, and how is it distinct from dialogues in other undergraduate academic environments?


In order to begin to answer that question, an initial qualitative, ethnographic foray into the college would be invaluable. A visit to the Santa Fe campus would allow me to conduct interviews with current students and faculty, as well as observe seminars in action with a researcher's – rather than a student's – eye. Video analysis of seminars could also be helpful, though that would require both the necessary equipment and approval from the school (recording of seminars is traditionally forbidden). While direct observation and interviews would only be possible while I am on campus, with permission and help I might be able to record video of a seminar or seminars throughout a semester or more.

In addition to doing qualitative data collection at St. John's, I would want to follow a similar interview and observation (and/or video analysis) protocol at one or more other institutions. Perhaps the most feasible option would be to look at undergraduate courses at Stanford, as well as another local college or university (such as San Jose State). I would strive to find courses which describe themselves as “seminars” and which intend to use discussion-based pedagogies. I would also attempt, at this stage, to match specific content across the colleges in question.

Having collected my data, I would code and analyze in the qualitative tradition, with an eye towards answering my initial question, but also with the hope of discovering the appropriate path towards addressing the deeper questions that inspire my research.

Design in Game Development and Curriculum Construction

J.P. Gee, among others, has noticed that game developers do an excellent job of scaffolding learning into their products. That is, gamers learn how to play the game from simply playing it, whether because there is a built-in tutorial, or because the mechanics of the game are somehow made obvious by experimentation, or because the game is similar enough to other games in the genre that new players can be expected to transfer skills and strategies. Regardless, learning to play a game is a significant portion of what makes games fun.

School environments and activities, on the other hand, are frequently designed with a much more acute eye for learning, but too often without the same success. What, then, is it about the game design process that makes games so effective? How does that process differ from the curriculum and educational design process? Is the difference cultural, mechanical, or philosophical? 

I'm particularly interested in an analysis of the design process in game and curriculum development, as it pertains to learning in those respective environments. The question is not, then, whether students learn better from games, or what games do so effectively. It is how games are designed and how that process differs in education.


I believe this question lends itself to an ethnographic approach. On the one hand, ethnography of a game development studio – or, indeed, a multitude of studios, as different studios likely have different design processes – and on the other, a similar ethnography of a school or other academic institution engaged in curriculum and activity design. Many public schools, of course, have little say in the details of their curricula, so perhaps a textbook company or other curriculum developer (Foss, for example) would serve as an interesting foil instead.

Spending time observing and interviewing participants in the design processes in these two environments, I would hope to discover what, if any, vocabulary is shared, and what is different. I would also hope to see to what degree learning is an explicit or implicit part of the design process. I expect a variety of other notable differences might arise as well, in terms of relationship to prototyping, the degree to which the process itself arises organically during development, and the passion and engagement of the people working on the design, among other things.


Ultimately, my hope is that the game design process might hold some keys as to how educators might help to unlock the powerful learning potential of games in education, without forcing us to conclude that boring “educational games,” irrelevant commercial games, or tacky gamification are the only options. That is, perhaps traditional learning might benefit at the level of design from contact with the game design world. And, in reverse, perhaps the game design world can benefit from contact with the education design process.

Thursday, September 15, 2011

The Future of the Blog

As I am about to begin working on my PhD in Education at Stanford University, I have been reflecting on this blog and the role it will play in my studies. While I was an MA student I wrote here frequently, and I expect I'll continue to write frequently as a PhD student. I feel, however, that I put an undue amount of pressure on myself to produce content for this blog at a fairly regular rate. Undue because, ultimately, I have a very small readership (though I very much appreciate those of you who do read the blog regularly), and thus mostly am writing for myself.

The upshot of all of this reflection is that, while I'm not planning on discontinuing the blog by any stretch, I'm planning on scaling it back significantly. Instead of my target of two posts a week, I'm hoping to get to a post every now and again... Maybe one every couple of weeks. I'm still going to work on my Beethoven project, of course, as time and willpower permit, and likewise with my still nascent novel (which has been more difficult than I anticipated), and I fully anticipate occasional game ravings, political rants, and educational reflections to make their way to this space. But the pace at which I've tried to write here over the past couple of years will be lessened because, well, there's just too much else to do.

In particular, there's too much else to read. Not only will I be taking on a heavy PhD reading load, but I've found that the Internet is bustling with great writing, and the decision to be an active producer of writing sometimes gets in the way of my desire to read more. With that in mind, I want to encourage my readers to take a look at Dirk Hayhurst, at Eric Nusbaum, at Joe Posnanski. These are writers who I follow (or have recently begun to follow) and who are teaching me - whether they intend to or not - to write better.

Perhaps what I'm trying to say is this. As an undergraduate I read a lot and wrote a little. Since then, I've written a lot and read not as much. Now I think it's time for the pendulum to swing back in the other direction, and a few years from now I'll pick up the pen (or the keyboard) and become a writer again. Of course, that's not really a dichotomy - one may write and read a lot at the same time - but I nevertheless feel that one's focus naturally tends towards one or the other at any given time. For now, it's back to being a reader and a student.

Tuesday, September 6, 2011

Ferrero, Sequoia, and Finding a Home

Juan Carlos Ferrero used to be the best tennis player in the world.  Back in 2003, in the middle of that five-year vacuum between Sampras and Federer, Ferrero joined Andy Roddick, Lleyton Hewitt, and even, briefly, Andre Agassi in taking his turn at the top of the tennis-ranking mountain.  Now Ferrero is 31, still young by the standards of society, but positively ancient in the world of tennis.

Ferrero fell short of the quarterfinals at this year's U.S. Open after losing to the colorfully named Janko Tipsarevic.  And had he managed to win that match, an almost certain defeat at the hands of current world number one Novak Djokovic awaited at the other end.  Regardless, I have little to say about the Tipsarevic match, except that it was a hard-fought four-setter.  The match before it was the interesting one.


On our way to San Diego, my wife and I suddenly found ourselves in bumper-to-bumper traffic.  Now that's no surprise in Southern California, but in this case the traffic was particularly bad.  The highway, it turns out, was on fire.  Or, rather, a fire raged on both sides of I-15 in Victorville, just north of the northern reaches of the Los Angeles metropolitan area.  After two and a half hours of trying to find a way around the flames, we turned north instead, setting out on an unexpected path.

We ended up in Lake Isabella, on the doorstep of Sequoia National Forest.  Big trees, we thought.  We want to see big trees.  We rolled into a shabby old motel near the lake at almost 10:00 PM, after starting our day in Flagstaff, Arizona, and called it a minor, if unexpected, victory.


Gael Monfils might be the most entertaining player on the ATP World Tour.  He dives to make his shots, he lopes around the court, he pouts, he yells, he engages in a subtle rope-a-dope that announcers - and, in theory, his opponents - mistake for nonchalance.  Above all, he plays with joy and the curse of grace, and so the crowd loves him and is yet frustrated by his failure to be better.

The "curse of grace" is a term Joe Posnanski uses to describe Carlos Beltran.  Some athletes are so gifted, so refined, so graceful at what they do that they appear not to be trying.  If only, we think, he would try a little harder, he'd be truly great.  Gael Monfils tries hard.  You don't become as good a tennis player as he is (#7 in the world entering the U.S. Open) without practicing hours and hours and hours a day, without pushing yourself, without learning how to give more than you think you have.  And yet he looks like he's not trying, because he's just that graceful, and because - despite the criticism it brings - he wants you to think that he's not trying.

Juan Carlos Ferrero, whose own grace is largely a memory by now, beat Gael Monfils in five sets last week.


After a long drive through the Kern River valley, up into the mountains, we finally arrived at the big trees.  The first one lurked behind a turn on the road, eliciting cries of surprise and awe as we passed.  Even if you've watched Planet Earth, even if you've seen one of the countless pictures of the Giant Sequoias, nothing can prepare you for actually seeing one.

We happily paid to park next to the somewhat touristy Trail of 100 Giants trailhead.  Rarely - though more so in National Parks than anywhere else - is the touristy vibe of a place irrelevant to its own internal grace.  The Giant Sequoias along the trail we walked were in no way lessened by the paved path that weaved between them, nor by the screaming of children unaccustomed to a screen-less walk through a natural monument.

One child in particular screamed and screamed and screamed.  He didn't want to go towards the leaning tree, or away from something he liked, or along a path near so many bees.  The exact nature of his protest was hard to make out, but it was, I'm certain, directional.  It seemed to me, above all, that he didn't want to confront the Sequoias.  No child is able to comprehend those trees.  Indeed, probably no adult can really understand what it means for a tree to be 1,500 years old.  Constantine is ancient history; the birth of a Sequoia, even more so.

Infinity is not nearly as overwhelming a concept as finite but large.  The Sequoias are giants - too big to see the top of, over a dozen feet in diameter at the base, sometimes scarred with burn marks as large as an entire cedar tree - but it is their relationship to time, more than space, that truly impresses.  What can a child do but cry when confronted with a 75-year-old baby of a Sequoia that's older than Grandpa, and yet only at the beginning of its young life?


Monfils played as Monfils plays.  Ferrero played as he is forced to at his age.  After a gritty first-set tiebreaker, Monfils's power and agility won out in tight second and third sets.  He looked to be in command of the match, but he plays an expensive brand of tennis.  Every dive, every collapse, the effort of chasing down so many of Ferrero's well-placed shots caught up with Monfils.  He still held serve in most every game down the stretch, but a break here and a break there, and Ferrero closed out the match 6-4, 6-4 in the final two sets.

Tennis - especially at a Major Tournament like the Open - is a grueling sport.  As important as power, speed, and agility are, stamina and the ability to pace oneself are just as important.  Ferrero rarely overexerted himself in his match with Monfils, playing just hard enough to push the Frenchman, just pesky enough to stay in the first three sets and steal one of them.  Then, by the time the fourth set rolled around the match was already well over three hours old, and Monfils's grace and nonchalance turned into fatigue, and Ferrero showed the instinct that allows you to be the best in the world, if only for a short time.


The tree we spent the most time with was dead.  It had fallen well over 100 years ago, and yet it had barely decayed.  For the Sequoias even death moves in slow motion.  Lying on the ground, it's easier to appreciate the sheer size of one of these Giants.  Walking from base to tip is roughly equivalent to traversing three tennis courts.  Climbing onto the curved peak affords one a view of a dangerous fall.  Bringing the virtually imperceptible giants to human scale only further reinforces how unfathomably huge they are standing up.

On the exposed faces of the fallen Sequoia people had scrawled initials, expressions of undying love, crude jokes, and brief political rants.  It's tempting to be upset about the human need to spoil natural beauty, but in this case anger is as perfectly irrelevant as the illegible carvings.  These trees do not exist on any human scale, and our petty efforts to make them ours - by writing on them, or by protecting them from writing - miss their point.  Indeed, they do not have, in any human sense, a point.  They are merely trees.  What power we exert over or in service of them says more about us than it does about them.


 I wonder about players like Juan Carlos Ferrero.  What motivates him to keep playing when he knows that he stands little chance against the Roger Federers and Rafael Nadals and Novak Djokovics of the modern game?  As time passes the gap between himself and the top players grows, and as new top players find their way to the peak of their own games, Ferrero will continue to be the same Ferrero who reached his peak in 2003, and has been fighting to maintain a semblance of that greatness ever since.  Sure, another big paycheck, another endorsement deal or two can't hurt, but defeat does hurt for a champion.

"The love of the game" is a hackneyed and inadequate response to the question.  Of course Ferrero loves the game, and of course he believes in himself.  Somewhere, though, in the back of his mind, he has to know that the victories left to him are the ones that are not won during prime time, with the lights shining on him and the TV cameras rolling.  He might beat Gael Monfils, who is number seven in the world, but that is still a minor, unexpected victory.  Maybe, though, even for a former world champion, that's all that matters sometimes.


Now we're searching for a home in the South Bay.  The trees are behind us.  Their monolithic presence has given way to human concerns: a place to sleep, food to eat, a toilet to shit in.  As we search we've been forced to reconnect with commerce, with the artificial rules and regulations of the social contract, with the cultural codes that run even deeper than that, with the games that you have to play.  The U.S. Open is still on TV.

And what is a home, really?  It's a place to stay, but - as the saying goes - you can carry it on your back.  It's a transitory phenomenon, a place defined by its residents, a moment.  And yet the question "Where do you live?" usually elicits a one-word response: Honolulu, Denver, San Francisco.  These capital-letter stand-ins for the particular moments we spend in our homes speak to eternity, a time that spans our own lives.  In the end, a home is both the moment and the eternity, the place and the Place.  Perhaps we don't always look for these things, but we find them anyway.  Or, anyway, they find us.


What does Juan Carlos Ferrero have to do with the Sequoias?  Nothing, really.  Ferrero operates on a human scale, searching for a victory this week, and another one next week.  His career as a tennis player will be over soon, even though it only just started, even by the standards of a normal human "career."  The Sequoias, on the other hand, lie mostly outside of human perception.  Comprehending what it means for anything to exist for 1,500 years is essentially impossible for a creature that lives less than 100 years.  Eternity is not the precise word, but it might as well be.

As I search for a home, I'm really searching for a place where I can filter my perceptions, understand my interactions, and ground my existence.  Wrapped up in a home is an understanding of home in its moments and in its eternities.  Ferrero and the Sequoias are experiences I filter through myself, and as I look for a home, I look for a way of understanding not the meaning of tennis or really big trees, but rather something about myself.  I know where I am going, what I am going to do, and perhaps even a little bit about who I am.  Still, there's something else that a home represents in time, in space, and in understanding.

That something is wrapped up in eternity and in moments, in trees and tennis courts, and, above all, in the way we move through it all.

Wednesday, August 31, 2011

Win Probability and Tennis

As a huge baseball statistics nerd, I love Fangraphs.com.  Well, I don't really read many of their articles, nor do I engage in their often contentious comment threads.  Rather, I love the thing that made it famous: win probability graphs.  If you haven't seen one, they look like this.

Hey look, the Royals lost a game they should have won!

The premise is simple.  Given any game situation, you can calculate - based on the number of expected runs scored for both the remainder of the inning and the remainder of the game - how likely each team is to win.  Certain events, like home runs, tend to increase your odds significantly, while others, like strikeouts with the bases loaded and two outs, tend to hurt them.  Of course, context becomes extremely important, and the graph will fluctuate more in closer games like the Royals versus Tigers game above.

Now the cool part is not even the graph itself, but the "Leverage Index" beneath it.  Basically, the amount of change that is possible/likely in any given situation is mapped beneath the graph.  This leads to a clear, quantitative mapping of high and low leverage situations.  This, in turn, leads to cool calculations of things like "clutch," which fangraphs calculates by comparing a player's performance in low leverage situations to his performance in high leverage ones (instead of the standard, and unsatisfying, "close and late").

All of this is old hat to baseball fans, and my point isn't to recap what you already know or could find on fangraphs.  Rather, I want to put out a call to coders, web developers, and tennis fans to do this for tennis.  I've seen a man named Jeff Sackmann produce a "Tennis Win Expectancy Graph" for a particular match, but as far as I know, there is no widely available tool to let tennis fans calculate win probability on their own, much less a live scoreboard like on fangraphs.

If anything, win probability for tennis should be simpler than for baseball.  Rather than having to calculate run expectancies, the set of variables is much, much smaller.  If you use the tour-wide statistic, as Sackmann does, that the server will win 64% of points, you can, theoretically, easily produce a win probability algorithm.  Turning that into a graph is obviously not too difficult, given Sackmann's own work.
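To make that concrete, here's a minimal Python sketch of the innermost piece of such an algorithm: the probability that the server holds serve from any point score, given that flat 64% figure. The function name and structure are my own illustration, not Sackmann's code; set- and match-level probabilities chain the same kind of recursion one level up (games instead of points).

```python
from functools import lru_cache

# Tour-wide probability that the server wins a point (the figure Sackmann uses).
P_SERVE = 0.64

@lru_cache(maxsize=None)
def game_win_prob(server_pts, returner_pts, p=P_SERVE):
    """Probability the server wins the current game from this point score.

    Scores are raw point counts (0, 1, 2, 3, ...), not 15/30/40.
    """
    # Game over: four or more points with a two-point lead.
    if server_pts >= 4 and server_pts - returner_pts >= 2:
        return 1.0
    if returner_pts >= 4 and returner_pts - server_pts >= 2:
        return 0.0
    # Deuce territory (3-3 and beyond) can repeat forever, so use the
    # closed form: from deuce, the server eventually wins two points in
    # a row before losing two in a row with probability p^2/(p^2 + q^2).
    if server_pts >= 3 and returner_pts >= 3:
        deuce = p * p / (p * p + (1 - p) * (1 - p))
        if server_pts == returner_pts:       # deuce
            return deuce
        if server_pts > returner_pts:        # advantage server
            return p + (1 - p) * deuce
        return p * deuce                     # advantage returner
    # Otherwise, condition on who wins the next point.
    return (p * game_win_prob(server_pts + 1, returner_pts, p)
            + (1 - p) * game_win_prob(server_pts, returner_pts + 1, p))
```

With p = 0.64, `game_win_prob(0, 0)` comes out around 0.81 - holding serve is far likelier than winning any individual point, which is exactly the kind of asymmetry a point-by-point graph (and a leverage index built on it) would surface.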

So let's make it happen! Tennis would benefit from win probability graphs as much as baseball does.  Questions that win probability (and leverage index) could help answer include:
1) Do great players "raise their level" on key points, or are they just always better?
2) How much of a difference in point-by-point leverage is there between a three and five set match?
3) What is the "most important" point in a given match?
There are countless others, so I won't list them all.  But even these should whet the appetite of the statistically-minded tennis fan.

Now you're probably thinking: "great, so do it yourself."  If only I could.  I have spent most of the day watching the U.S. Open and playing around with Excel.  But while the mathematics are not beyond me (though they are hairier than you might think), the coding is.  Perhaps someone else wants to take the baton?  If so, I'd be a happy contributor, cheerleader, and partner in the process, to whatever degree I can.

Hey, fangraphs isn't just graphs, it's also writers.  If someone starts "tennisgraphs," I'll be happy to write for it.

Saturday, August 27, 2011

The Corporate Death Penalty

If corporations are people, they should not only have the rights of people, but also the responsibilities.  They should be able to suffer the same consequences individual people do for immoral and illegal actions, including, in particularly heinous crimes, the death penalty.

Now what I'm not going to say here is that corporations are not people.  It should be obvious that they are not, but the notion that they are has been a legal reality in the United States for over 100 years.  So as much as the Democratic Party was ostensibly up in arms over Mitt Romney's "Corporations are people" gaffe, they've been on the same bandwagon for over a century (since, in fact, the days when the Democrats were the conservatives and the Republicans the liberals).

No, the point here is that, if corporate personhood is a legal and philosophical reality in the United States already, we ought to extend it to its logical limits.  As is, the corporation (capitalized?  I think maybe yes)...  Ahem, Mr. Corporation is protected by the constitutional rights of individuals.  That is, his individual rights are protected under the 1st and 14th amendments, among others.

Obviously, this has created all kinds of problems, not the least of which being the (increasingly less) recent Citizens United vs. Federal Election Commission Supreme Court ruling that allows corporate entities to spend unlimited sums of money on political causes.  The result was the meteoric rise in spending in the 2010 midterms, and the even more insane fundraising that has already occurred in the 2012 election cycle (Barack Obama is on pace to shatter George Bush's reelection fundraising totals, and not because of grassroots support).  Of course, corporate personhood has been around and caused trouble for a long time before Citizens United, but this particular piece of legal interpretation has dire, self-perpetuating consequences in a way that few previous corporate personhood rulings and legislation have.

How do we get ourselves out of this mess?  Well the root of the problem is, in some sense, that we've endowed personhood on entities that have no conscience, no fundamental ethical code, and no accountability to anyone except for shareholders.  Because most shareholders are only distantly involved in the companies they are invested in, and because of the cultural maxim that all public companies must maximize profits at all costs,* there is little concern for little stuff like making sure people or the environment aren't harmed by defective (or even effective) products and services.  The result is a set of beings with human rights, but without human responsibilities.

* Indeed, they must not only maximize profits, they must actually make more profits than they were expected to make if they are to do well in the market.

Getting rid of corporate personhood is well nigh impossible.  There's no political traction for it, no reason that either major party will challenge a more than century-old legal precedent.  What I propose, then, is to force the corporate person to have a conscience by forcing him to be accountable for his actions.  There have been a great many well-known cases where corporations knowingly engaged in immoral behavior at the expense of consumers, the environment, or both.  While there are repercussions for such actions, they are usually slight and monetary.  While there are people held accountable for heinous corporate crimes, their prison sentences are lenient and their fines meaningless (and frequently uncollected).

Rather than approaching corporate responsibility quantitatively, let's approach it qualitatively.  When a corporation is convicted of a heinous crime - the intentional killing of a human being (or murder, as we call it in human-speak), for example - it should be subject to the same kinds of penalties an individual is subject to.  In particular, I believe we should sometimes put corporations to death.

What does that mean, exactly?  It means that the corporation is dissolved, its assets seized and liquidated, and its board and CEO barred from serving with any other for-profit corporation in the future.  Is that too harsh?  Ironically, a great many people would say "yes," even though I'm talking about a corporation convicted of murder.  "Why," the argument goes, "should the people inside the company (and the shareholders) be legally responsible for the actions of the corporation?  The CEO is not the murderer."  To which I respond, yes, but the corporation is being put to death, not the CEO.  The CEO and board, however, do share some responsibility for the corporation, and thus should not be allowed to serve with other for-profit corporations in the future.

What about the shareholders?  Well, they could conceivably cash out between indictment and conviction.  Otherwise, their investment would come to nothing.  What about the employees?  Well, unfortunately, they'd be out-of-work.  Now if that seems harsh and unfair to employees and shareholders (the "little people" in the equation), think about the other side of the equation.  If every single employee and every single shareholder of a corporation has that much stake in the corporation not committing heinous crimes, suddenly said company actually does have a conscience.  The CEO and board stand to lose plenty under this proposal, but the employees and the shareholders stand to lose even more, which makes them powerful advocates for the moral behavior of the corporation.

While the corporate death penalty - complete with severe professional ramifications for the board, as well as loss of employment for the employees and loss of investment for shareholders - would be a significant step towards ensuring a more ethical moral climate in the corporate world, it's far from a silver bullet.  Indeed, the hassle of prosecution, the potential economic ramifications of a "too big to fail" company being put to death, and the difficulty of enforcing the stipulations that would be necessary to stop the most corrupt of CEOs and boards from profiting even from the death of their companies (see Ken Lay at Enron) would prove significant hurdles to implementing such a proposal.  Hopefully, however, the very attempt would at least force us to recognize the absurdity of legal corporate personhood to begin with.

The biggest hurdle to the corporate death penalty, however, is that corporations would resist it, just as they resist rescinding corporate personhood in general.  Given their political influence, it would be nearly impossible to gain traction on a bill that allowed corporations to be put to death.  Given the Supreme Court's political leanings, it is also hard to believe they would uphold such a bill (though, in the process, they might at least be forced to declare the individual death penalty unconstitutional).  Nevertheless, I suggest that it might be an easier path to the end of corporate tyranny than any other we have before us.

Wednesday, August 24, 2011

Returning to Coors Field

For the first time in over a year, I made it to Coors Field for a Rockies game on Monday.  Though I missed the real excitement by one day (yesterday's game had a little bit of everything), the opener of the Astros series had a couple innings worth remembering.

First off, the Rockies - after much mocking by me and my fellow attendees for their abysmal offense - managed to score six runs in the first inning.  This offensive explosion proved enough to win the game, though the home team, in another shocker, managed to tack on to the lead in later innings.

All was not peachy in Rockie-ville, however, as J.C. Romero and Josh Roenicke combined to surrender 4 runs and record one out in the ninth inning of what had been a 9-1 game.  Rafael Betancourt was summoned to close out the game, which he did promptly by retiring two batters in, roughly, seven and a half hours.

The final out was a strikeout of former Rockie Clint Barmes, who batted with two runners on base and a chance to bring the game to one run.  Even as a die-hard Rockies fan, a small part of me secretly wished that Barmes would hit the ball out.  As one expects with Barmes, however, occasional power comes with frequent strikeouts and popups, and so the game was hardly in doubt.  Betancourt, especially, is the kind of pitcher that Barmes struggles to hit well, and while he had a battler's at-bat, he never seemed likely to square anything up.

The details of the game, of course, were nothing remarkable.  Instead, the reason I mention it in this space is the chance to return to Coors Field.  Recently Rob Neyer sang the only slightly exaggerated praises of the park I grew up in.  While even I'm hesitant to call it the best of the last 50 years, there's no doubt that it's an amazing stadium.  The views of the mountains alone are enough to separate Coors from most any other ballpark.

Unfortunately, we sat away from the mountains, instead finding ourselves behind the left field foul pole.  Our slightly-obstructed view of the infield meant that Mark Ellis was hard to see.  Which is fine, because he still barely feels like a Rockie to me (though an offseason re-signing seems plausible, if unfortunate).  On the other hand, we had an unparalleled view of Eric Young Jr., who continues, for some reason, to play out-of-position in left field.  Though, as fellow attendee Joe observed, Eric Young Jr. is out of position no matter where he plays, so there is that.

Me and the view from our seats.

As for ballpark banter, my favorite incident was our critique of a nearby fan's unfortunate sign.  It read "CarGo's #5 Fan."  Clever, at first blush, because CarGo's number is, of course, 5.  Not clever because it lacks the usual fanbole (fan hyperbole, per Joe Posnanski) of always using superlatives.  Perhaps, we discussed, it could be an ironic sign?  But the other side of the same poster-board said that it was the fan's first game at Coors, which suggests, instead, an unintentional faux pas in sign construction.

With Eric Young Jr. continuously in view, we hatched a plan for how to improve the sign.  Eric Young Jr. wears the number 1 - because, hey, that's what 5'10''* utility infielders/outfielders/fast-guys-who-don't-play-defense wear.  Combine that with EY Jr.'s status as the son of Eric Young - of first Colorado home run fame - and you have the much better sign: "EY2's #1 fan."  Much, much better.

* Standing next to Dexter Fowler, Eric Young looks like a small child.  Dexter is 6'4'', though he only weighs, per baseball reference, 10 pounds more than EY.  Also, for some reason Dex can't steal bases.

The original sign showed its weaknesses in particular when CarGo came up with the bases loaded in the 8th inning.  As Joe observed, "CarGo's #5 fan must be really excited right now."  "Yes," I responded, "but there are four people more excited."

Perhaps all of this sounds irrelevant.  But that's exactly the point.  Watching baseball on television (or MLB.tv, anyway) for the past year, I forgot how different it is to watch in person, with friends.  There's a vitality to the experience that is lacking even on the most vivacious Purple Row game thread.  The game is alive.  What you lose in ability to see strikes and balls on television, you gain in the ability to watch the defense.  What you lose in announcing (which, frankly, is little with the Rockies broadcast crew) you gain in the quips and jabs of surrounding fans.

To those of you who live near baseball stadiums, this is not news.  But living in Hawaii - despite its many virtues - makes you forget the magic of baseball.  Even a mediocre, meaningless August game at Coors Field is enough to remind me why I love the sport, and why, even in the leanest and meanest times, I'll always be a Rockies fan.  In the end, it's not about the Rockies at all.  It's about being there - about feeling and seeing the game around you.  And, in my case, it's about Coors Field, because that's where I learned to love and understand the game, and the place that will always define my fandom.  Even if I live the rest of my life in the Bay Area, the Rockies will be my team because Coors Field is my baseball home.  And it was good to be home again, if only for one game.