Argument
(This response follows Nan Da’s previous “Errors” response)
Nan Z. Da
First, a qualification. Given the time constraints of this forum, I can address only a portion of the issues raised by the forum participants, and only in ways that remain imprecise. I do plan to issue an additional response that addresses the more fine-grained technical issues.
“The Computational Case against Computational Literary Studies” was not written for the purposes of refining CLS. The paper does not simply call for “more rigor” or for replicability across the board. It is not about figuring out which statistical mode of inquiry best suits computational literary analysis. It is not a method paper; as some of my respondents point out, those are widely available.
The article was written to empower literary scholars and editors to ask logical questions about computational and quantitative literary criticism should they suspect a conceptual mismatch between the result and the argument or perceive the literary-critical payoff to be extraordinarily low.
The paper, I hope, teaches us to recognize two types of CLS work. First, there is statistically rigorous work that cannot actually answer the question it sets out to answer or doesn’t ask an interesting question at all. Second, there is work that seems to deliver interesting results but is either nonrobust or logically confused. The confusion sometimes issues from something like user error, but it is more often the result of the suboptimal or unnecessary use of statistical and other machine-learning tools. The paper was an attempt to demystify the application of those tools to literary corpora and to explain why technical errors are amplified when your goal is literary interpretation or description.
My article is the culmination of a long investigation into whether computational methods and their modes of quantitative analysis can have purchase in literary studies. My answer is that what drives quantitative results and data patterns often has little to do with the literary-critical or literary-historical claims made by the scholars who report finding such results and uncovering such patterns, though it sometimes looks as if it does. If the conclusions we find in CLS corroborate or disprove existing knowledge, this is not a sign that they are correct but that they are tautological at best, merely superficial at worst.
The article is agnostic on what literary criticism ought to be and makes no prescriptions about interpretive habits. The charge that it takes a "purist" position is pure projection. The article aims to describe what scholarship ought not to be. Even the appeal to reading books in the last pages of the article does not presume the inherent meaningfulness of "actually reading"; it only serves as a rebuttal to the use of tools for simple classification tasks in which human judgment would be immeasurably more accurate and much less expensive.
As to the question of Exploratory Data Analysis versus Confirmatory Data Analysis: I don’t prioritize one over the other. If numbers and their interpretation are involved, then statistics has to come into play; I don’t know any way around this. If you wish to simply describe your data, then you have to show something interesting that derives from measurements that are nonreductive. As to the appeal to exploratory tools: if your tool will never be able to explore the problem in question, because it lacks power or is overfitted to its object, your exploratory tool is not needed.
It seems unobjectionable that quantitative methods and nonquantitative methods might work in tandem. My paper is simply saying: that may be true in theory, but it falls short in practice. Andrew Piper points us to the problem of generalization, of how to move from local to global, from probative to illustrative. This is precisely the gap my article interrogates, because that is where the collaborative ideal begins to break down. One may call the forcible closing of that gap any number of things (a new hermeneutics, epistemology, or modality), but in the end the logic has to be clear.
My critics are right to point out a bind. The bind is theirs, however, not mine. My point is also that, going forward, it is not for me or a very small group of people to decide what the value of this work is, nor how it should be done.
Ed Finn accuses me of subjecting CLS to a double standard: “Nobody is calling in economists to assess the validity of Marxist literary analysis, or cognitive psychologists to check applications of affect theory, and it’s hard to imagine that scholars would accept the disciplinary authority of those critics.”
This is faulty reasoning. For one thing, literary scholars ask for advice and assessment from scholars in other fields all the time. For another, the payoff of the psychoanalytic reading, even as it seeks extraliterary meaning and validity, is not for psychology but for literary-critical meaning, where it succeeds or fails on its own terms. CLS wants to say: "It's okay that there isn't much payoff in our work itself as literary criticism, whether at the level of prose or sophistication of insight; the payoff is in the use of these methods, the description of data, the generation of a predictive model, or the ability for someone else in the future to ask (maybe better) questions. The payoff is in the building of labs, the funding of students, the founding of new journals, the cases made for tenure lines and postdoctoral fellowships and staggeringly large grants." When these are the claims, more than one discipline needs to be called in to evaluate the methods, their applications, and their results. Because a printed critique of literary scholarship is generally not refuted by pointing to things still in the wings, we are dealing with two different scholarly models. In this situation, then, we should be maximally cross-disciplinary.
NAN Z. DA teaches literature at the University of Notre Dame.