Some thoughts on algorithmic criticism

Over the past couple of weeks, one of the central arguments we have encountered in class is the notion that, despite the buzz surrounding its potential, the core issues of the digital humanities are not radically different from those of the humanities in general.  To take a case in point, in his talk for the MLA, Ted noted that scholars have been using “digital tools” for a long time to conduct basic searches through archives and databases.  As a result, he suggested that the primary novelty associated with the growth of digital humanities as a recognized field has been not a shift in practice but rather a shift towards heightened reflexivity about the search process.

Clearly, in thinking about the application of digital tools to literary criticism, it makes sense to reflect on the way searches are conducted.  Yet, in practice, I found it interesting that while the Stanford Literary Lab pamphlet on “Quantitative Formalism” and Tanya Clement’s article on Gertrude Stein’s The Making of Americans were fairly open in discussing the process of data collection, neither seemed particularly transparent about acknowledging the continuities between their interpretative practices and the tradition of twentieth-century literary criticism.

To be specific, as I read through these pieces, I found myself hung up on what may appear to be an obvious question (at least at first glance):  Is the distant reading of algorithmic criticism little more than a form of “close reading” for the digital age?  Unlike commentators who have expressed anxiety that the kind of distant reading Franco Moretti advocates in Graphs, Maps, Trees might get in the way of close reading, I have a different problem:  I fear that it might replicate the worst tendencies of New Criticism, namely ahistoricism.  When we think about New Criticism, we probably think of W.K. Wimsatt and Monroe Beardsley’s emphasis on “the text itself” in essays like “The Intentional Fallacy” (1946) or concepts of “unity” found in Cleanth Brooks’ work on the heresy of paraphrase in The Well-Wrought Urn: Studies in the Structure of Poetry (1947).  Like New Criticism, algorithmic criticism locates meaning in the text itself; however, as we have seen in several examples, it is often far more interested in word choice as a matter of quantification.  But as most of us probably know, the problem with formalism is that it has the potential to obscure the subjectivity of interpretation, as well as the historical context of literary production.  Consequently, my concern is that if we don’t think carefully about how we use these tools, we run the risk of repeating the missteps of previous critical movements.

I found myself coming back to this problem again and again in the work from the Stanford Literary Lab and Tanya Clement, albeit to varying degrees.  Since the Stanford Literary Lab’s work on quantitative formalism is primarily focused on technology and the research process, it is difficult to critique, as its observations are largely cursory and anecdotal.  Thus, while the discovery that Dickens’ “language remains basically the same” as he moves from novel to novel doesn’t advance my knowledge of Dickens or nineteenth-century literature, it does tell me a good bit about some of the problems one encounters when designing and applying tools (15).

Far more troubling than the meta-reflective nature of the Stanford pamphlet is Clement’s work.  In short, her reliance on “the data,” which stands in for the text itself, seems to assume the existence of a unity or pattern that may or may not exist.  Although I don’t claim to be a specialist on Gertrude Stein, the fact that other modernists such as Virginia Woolf and James Joyce often edited their work to obscure meaning leads me to question the utility of looking for a discernible structural arc in a highly experimental work.  Ultimately, I’m suspicious about whether Clement advances our understanding of a work like The Making of Americans.  In many ways, she seems to transform a highly affective, rhythmic, poetic work into little more than a bad grammar lesson.

For the sake of clarity, in levying this critique, I’m not suggesting that digital tools don’t have an application.  But I do think that it is important for us to consider whether or not essays like Clement’s alienate readers in their desire to emulate the sciences.  Whether or not it’s fair, literary theory and criticism have often been accused of incomprehensibility.  As interest in the digital humanities presents an opportunity for the humanities to become more relevant, it seems foolish to squander such cultural capital in a turn to the quantitative.

In my own practice, I have envisioned using digital tools in a supportive mode for arguments that are rooted in more established forms of theoretical discourse.  Am I an outlier in this regard, or do others share my interests and concerns?  I’d be curious to hear more about the varying assumptions and aspirations that others have brought to this course.
