Errors
Nan Z. Da
This first of two responses addresses errors, real and imputed; the second response is the more substantive.
1. There is a significant mistake in footnote 39 (p. 622) of my paper. In it I attribute to Hugh Craig and Arthur F. Kinney the argument that Marlowe wrote parts of some late Shakespeare plays after his (Marlowe’s) death. The attribution is incorrect. What Craig asks in “The Three Parts of Henry VI” (pp. 40-77) is whether Marlowe wrote segments of these plays. I would like to extend my sincere apologies to Craig and to the readers of this essay for the misapprehension that it caused.
2. The statement “After all, statistics automatically assumes” (p. 608) is incorrect. A more accurate statement would be: in standard hypothesis testing, a 95 percent confidence level means that, when the null hypothesis is true, the test will correctly fail to reject it 95 percent of the time.
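To make the corrected statement concrete, here is a minimal simulation of it: draw two samples from the same distribution many times, so that the null hypothesis of equal means is true, and count how often a t-test at the 5 percent significance level fails to reject.

```python
# Minimal illustration: under a true null, a test at the 5 percent significance
# level fails to reject roughly 95 percent of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_obs, alpha = 10_000, 50, 0.05

fail_to_reject = 0
for _ in range(n_tests):
    # Both samples come from the same normal distribution, so the null is true.
    a = rng.normal(size=n_obs)
    b = rng.normal(size=n_obs)
    _, p_value = stats.ttest_ind(a, b)
    if p_value >= alpha:
        fail_to_reject += 1

print(fail_to_reject / n_tests)  # approximately 0.95
```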
3. The description of various applications of text-mining/machine-learning (p. 620) as “ethically neutral” is not worded carefully enough. I obviously do not believe that some of these applications, such as tracking terrorists using algorithms, are ethically neutral. I meant that there are myriad applications of these tools: for good, ill, and otherwise. On balance it’s hard to assign an ideological position to them.
4. Ted Underwood is correct that, in my discussion of his article “The Life Cycle of Genres,” I confused the “ghastly stew” with the randomized control sets used in his predictive modeling. Underwood also does not make the elementary statistical mistake that my article suggests he has made (“Underwood should train his model on pre-1941” [p. 608]).
As to the charge of misrepresentation: it is difficult to paraphrase a paper whose “single central thesis … is that the things we call ‘genres’ may be entities of different kinds, with different life cycles and degrees of textual coherence.” Underwood’s thesis here refers to the relative coherence of detective fiction, gothic, and science fiction over time, with 1930 as the cutoff point.
The other things I say about the paper remain true. The paper cites various literary scholars’ definitions of genre change, but its implicit definition of genre is “consistency over time of 10,000 frequently used terms.” It cannot “reject Franco Moretti’s conjecture that genres have generational cycles” (a conjecture that most would already find too reductive) because it is not using the same testable definition of genre or change.
5. Topic Modeling: my point isn’t that topic models are non-replicable but that, in this particular application, they are non-robust. Among other evidence: if I remove one document out of one hundred, the topics change. That’s a problem.
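The check is easy to sketch. What follows is a schematic version of it, using scikit-learn’s LDA and a stand-in corpus rather than the documents and topic-modeling software at issue in the essay: fit a topic model on one hundred documents, refit after removing a single document, and compare the top words of each topic.

```python
# Schematic robustness check (stand-in corpus, not the documents from the essay):
# refit the topic model after dropping one document and compare top words.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in corpus of 100 plain-text documents.
documents = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:100]

def top_words(docs, n_topics=10, n_top=10, seed=0):
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()
    return [frozenset(vocab[i] for i in topic.argsort()[-n_top:])
            for topic in lda.components_]

full = top_words(documents)
dropped = top_words(documents[:42] + documents[43:])  # remove one arbitrary document

# If the topics were robust, most topics in `full` would closely match a topic
# in `dropped`; in a non-robust fit the best overlaps are much smaller.
for topic in full:
    best_overlap = max(len(topic & other) for other in dropped)
    print(best_overlap, "of 10 top words preserved")
```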
6. As far as Long and So’s essay “Turbulent Flow” goes, I need a bit more time than this format allows to rerun the alternatives responsibly. So and Long have built a tool in which there are thirteen features for predicting the difference between two genres—Stream of Consciousness and Realism. They say: most of these features are not very predictive alone but together become very predictive, with that power being concentrated in just one feature. I show that that one feature isn’t robust. To revise their puzzling metaphor: it’s as if someone claims that a piano plays beautifully and that most of that sound comes from one key. I play that key; it doesn’t work.
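The ablation behind the piano metaphor can be sketched abstractly. The following uses synthetic data rather than Long and So’s actual features, corpus, or pipeline; it only illustrates the shape of the test: train a classifier on all thirteen features, retrain without the single dominant feature, and compare cross-validated accuracy.

```python
# Schematic ablation with synthetic data (not Long and So's features or texts):
# one feature carries most of the signal; removing it collapses accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_texts, n_features = 200, 13
labels = rng.integers(0, 2, size=n_texts)      # 0 = Realism, 1 = Stream of Consciousness
X = rng.normal(size=(n_texts, n_features))     # features 1-12 are pure noise in this toy setup
X[:, 0] += 2.0 * labels                        # feature 0 carries the signal

full = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
ablated = cross_val_score(LogisticRegression(max_iter=1000),
                          np.delete(X, 0, axis=1), labels, cv=5).mean()

print(f"all 13 features: {full:.2f}; without the dominant feature: {ablated:.2f}")
```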
7. So and Long argue that by proving that their classifier misclassifies nonhaikus—not only using English translations of Chinese poetry, as they suggest, but also Japanese poetry that existed long before the haiku—I’ve made a “misguided decision that smacks of Orientalism. . . . It completely erases context and history, suggesting an ontological relation where there is none.” This is worth getting straight. Their classifier lacks power because it can only classify haikus with reference to poems quite different from haikus; to be clear, it will classify equally short texts that share keywords with haikus as haikus. Keyword overlap is their predictive feature, not mine. I’m not sure how pointing this out is Orientalist. As for their model, I would if pushed say it is only slightly Orientalist, if not determinatively so.
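The behavior I am describing can be shown with a toy example, using invented one-line texts rather than their corpora or their model: a bag-of-words classifier trained to separate haiku from a deliberately dissimilar contrast class will place any short text that shares the haiku keywords in the haiku class, whatever its actual form or tradition.

```python
# Toy illustration with invented texts (not the corpora or model under discussion):
# when the contrast class is lexically very different from haiku, keyword overlap
# alone pulls any short, haiku-adjacent text into the haiku class.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

haiku_like = ["autumn moon frog pond silence",
              "cherry blossom spring rain mountain"]
contrast = ["the committee shall review the quarterly budget in session",
            "industrial output rose sharply amid sustained market volatility"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(haiku_like + contrast, ["haiku"] * 2 + ["not haiku"] * 2)

# A short text that merely shares keywords with the haiku (it could be a
# translated line from a much older poem) is labeled a haiku.
print(model.predict(["moon over mountain pass, rain on river at dusk"]))
```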
8. Long and So claim that my “numbers cannot be trusted” and that my “critique . . . is rife with technical and factual errors”; in a similar vein, their response ends with the assertion that my essay doesn’t “encourag[e] much trust.” I’ll admit to making some errors in this article, though not in my analyses of Long and So’s papers (the errors mostly occur in section 3). I hope to list all of these errors in the more formal response that appears in print or else in an online appendix. That said, an error is not the same as a specious insinuation that the invalidation of someone’s model indicates Orientalism, pigheadedness, and so on. Nor is an error the same as So’s recent claim that “CI asked Da to widen her critique to include female scholars and she declined,” which is not an error but a falsehood.
NAN Z. DA teaches literature at the University of Notre Dame.