What the New Computational Rigor Should Be
Lauren F. Klein
Writing about the difficulties of evaluating digital scholarship in a recent special issue of American Quarterly devoted to DH, Marisa Parham proposes the concept of “The New Rigor” to account for the labor of digital scholarship as well as its seriousness: “It is the difference between what we say we want the world to look like and what we actually carry out in our smallest acts,” she states (p. 683). In “The Computational Case against Computational Literary Studies,” Nan Z. Da also makes the case for a new rigor, although hers is more narrowly scoped. It entails both a careful adherence to the methods of statistical inquiry and a concerted rejection of the application of those methods to domains—namely, literary studies—that fall beyond their purported use.
No one would argue with the former. But it is the latter claim that I will push back against. Several times in her essay, Da makes the case that “statistical tools are designed to do certain things and solve specific problems,” and for that reason, they should not be employed to “capture literature’s complexity” (pp. 619-20, 634). To be sure, there exists a richness of language and an array of ineffable—let alone quantifiable—qualities of literature that cannot be reduced to a single model or diagram. But the complexity of literature exceeds even that capaciousness, as most literary scholars would agree. And for that very reason, we must continue to explore new methods for expanding the significance of our objects of study. As literary scholars, we would almost certainly say that we want to look at—and live in—a world that embraces complexity. Given that vision, the test of rigor then becomes, to return to Parham’s formulation, how we usher that world into existence through each and every one of “our smallest acts” of scholarship, citation, and critique.
In point of fact, many scholars already exhibit this new computational rigor. Consider how Jim Casey, the national codirector of the Colored Conventions Project, is employing social network analysis—including the centrality scores and modularity measures that Da finds lacking in the example she cites—in order to detect changing geographic centers for this important nineteenth-century organizing movement. Or how Lisa Rhody has found an “interpretive space that is as vital as the weaving and unraveling at Penelope’s loom” in a topic model of a corpus of 4,500 poems. This interpretive space is one that Rhody creates in no small part by accounting for the same fluctuations of words in topics—the result of the sampling methods employed in almost all topic model implementations—that Da invokes, instead, in order to dismiss the technique out of hand. Or how Laura Estill, Dominic Klyve, and Kate Bridal have employed statistical analysis, including a discussion of the p-values that Da believes (contra many statisticians) are always required, in order to survey the state of Shakespeare studies as a field.
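The fluctuation at issue is real and easy to demonstrate: topic models fit by Gibbs sampling are stochastic, so different random seeds can yield different topic-word assignments, while the same seed reproduces the same fit. The toy sketch below implements a minimal collapsed Gibbs sampler for LDA; it is not the implementation behind any of the studies cited here, and the corpus and parameter values are invented purely for illustration.

```python
import random
from collections import Counter

def lda_gibbs(docs, k, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Fit a tiny LDA model by collapsed Gibbs sampling.

    docs: list of documents, each a list of word tokens.
    k: number of topics. Returns the top three words per topic.
    """
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)

    # Randomly assign every token to a topic, then tally the counts.
    z = [[rng.randrange(k) for _ in d] for d in docs]
    doc_topic = [Counter(zd) for zd in z]
    topic_word = [Counter() for _ in range(k)]
    topic_count = [0] * k
    for d, zd in zip(docs, z):
        for w, t in zip(d, zd):
            topic_word[t][w] += 1
            topic_count[t] += 1

    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                # Remove this token's current assignment from the counts.
                t = z[di][wi]
                doc_topic[di][t] -= 1
                topic_word[t][w] -= 1
                topic_count[t] -= 1
                # Resample its topic from the collapsed posterior.
                weights = [
                    (doc_topic[di][tt] + alpha)
                    * (topic_word[tt][w] + beta) / (topic_count[tt] + V * beta)
                    for tt in range(k)
                ]
                t = rng.choices(range(k), weights=weights)[0]
                z[di][wi] = t
                doc_topic[di][t] += 1
                topic_word[t][w] += 1
                topic_count[t] += 1

    return [[w for w, _ in topic_word[t].most_common(3)] for t in range(k)]
```

On a corpus this small, calling `lda_gibbs(docs, k=2, seed=0)` and then `seed=1` will often surface different top words per topic. Rhody's practice treats that variability as something to be accounted for interpretively; Da treats it as disqualifying.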
That these works are authored by scholars in a range of academic roles, including postdoctoral fellows and DH program coordinators as well as tenure-track faculty, and are published in a range of venues, including edited collections and online as well as domain-specific journals, further points to the range of extant work that embraces the complexity of literature in precisely the ways that Da describes. But these works do more: they also embrace the complexity of the statistical methods that they employ. Each of these essays involves a creative repurposing of the methods it borrows from more computational fields, as well as a trenchant self-critique. Casey, for example, questions how applying techniques of social network analysis, which are premised on a conception of sociality as characterized by links between individual “nodes,” can do justice to a movement celebrated for its commitment to collective action. Rhody, for another, considers the limits of topic modeling, a tool “designed to be used with texts that employ as little figurative language as possible,” for her research questions about ekphrasis. These essays each represent “small acts,” and necessarily so. But taken alongside the many other examples of computational work that is methodologically sound, creatively conceived, and necessarily self-critical, they constitute the core of a field committed to complexity in both the texts it elucidates and the methods it employs.
In her formulation of “The New Rigor,” Parham—herself a literary scholar—places her emphasis on a single word: “Carrying, how we carry ourselves in our relationships and how we carry each other, is the real place of transformation,” she writes. Da, the respondents collected in this forum, and all of us in literary studies—computational and not—might linger on that single word. If our goal remains to celebrate the complexity of literature—precisely because it helps to illuminate the complexity of the world—then we must carry ourselves, and each other, with intellectual generosity and goodwill. We must do so, moreover, with a commitment to honoring the scholarship, and the labor, that has cleared the path up to this point. Only then can we carry forward the field of computational literary studies into the transformative space of future inquiry.
LAUREN F. KLEIN is associate professor at the School of Literature, Media, and Communication, Georgia Institute of Technology.