Alex Gil
28 June 2023
“Is it useful?” Bethany Nowviskie, the American pragmatist, asked me about a decade ago, almost in passing. The question came as a revelation. She wasn’t talking about an app or some civic action. She was talking about one blog post or another that I was threatening to write. I was late to this insight, which has never really left “theory,” from Plato until the present. We are reminded by various sorts of theorists, literary or otherwise, that writing and literacy can be, and have been, used to mobilize massive historical forces (Edward Said), to enter contracts (J. L. Austin), to shape institutions (Michel Foucault), or even to budge them from within (Sara Ahmed).
It seems clear that many in the AI community, including Emily Bender and her coauthors, are hoping to put to good use the idea that intentionless language is impossible. Whether they realize it or not, they echo similar claims made by Steven Knapp and Walter Benn Michaels in “Against Theory.” If it doesn’t have intention, it is not language; it doesn’t mean anything; and we should move on, as one presumably does from an uncanny encounter by an American shore. With that line of argument they hope to mount a credible resistance to the environmental damage and precarious labor conditions generated by the AI industry; to deactivate the fearmongering around the rather silly marketing fiction of a “Singularity”; or, at a more mundane level, to rein in the oracular relationships to predictive text that we see forming all around us, even among academics who should know better but apparently don’t. They are asking us, in short, the same thing that Knapp and Michaels were asking us: to move on, nothing to see here, just waves. If only it were that simple. It was wishful thinking in the original argument, and it remains so. If one were to see a second stanza appear, written by a wave, two things would happen: first, we would check to see whether the second stanza has any meaning; and because it does, the next day the beach would be flooded with scientists in hazmat suits. If they are not religious, they won’t be looking for intention but for a mechanism.

Improbable philosophical scenarios, as common as they are in the literature, remain elusive in real life. The rise of consumer computing, the internet, machine-generated text of all sorts, and our all-too-human programming languages have turned the wave poem into a banal occurrence, not a freak accident at the beach, as Hannes Bajohr reminded us recently in “Artificial and Post-Artificial Texts.” The deus ex machina returning meaning to the scribbles is no longer Knapp and Michaels’s mad scientists on a submarine but the machina itself, the world. Something happened to our use and usage of meaning and intention along the way, and our own time is asking us, begging us, to reformulate our theories of language and meaning, sans intention. We are suddenly faced with a machine that can consistently produce grammatical and meaningful sentences with a mechanism different from our brains (however factually wrong the output of both can be). We are prompted to account for this phenomenon seriously. This, I find, is one of the most promising uses of generative AI, one that, ironically, does not involve using it.
What I would call a first mechanics would be interested in explaining how these machines process data to produce meaningful language. But to describe the mechanics by which machine learning or human brains can produce sentences is only half the picture. These descriptions reduce language production to the individual machine or human but ignore how language is (re-)produced at scale. Printed text almost always involves a social and material infrastructure as well, and the picture of meaning-making is never complete without looking at these other mechanics. I would call these the second mechanics of textual (re-)production. In a special issue of Critical Inquiry, Knapp and Michaels got a chance to answer their critics. The response to “Against Theory” that gets the least attention from them is the one from the textual scholar Hershel Parker. I can see why. In “Lost Authority: Non-Sense, Skewed Meanings, and Intentionless Meanings,” Parker reminds us that a literary text goes through a very specific process, one that involves multiple intentions bearing on any given modern printed book. This process was, give or take, the same for printed materials over the past few centuries, pace William Blake and other self-printers. With the advent of computers, which (re-)produce text beyond print, the social configuration didn’t change much for some kinds of writing, but the illusion of self-printing became readily available to anyone with a personal computer and the internet. It becomes harder under these conditions to know which text was generated by typing and which through an algorithmic process, since the output can be the same. A world where we can’t tell a bot apart from a human being based on output alone is a world where we don’t see the second wave. We could, but we would have to do research far too often on unfamiliar outputs, and who has time?
Going forward, most of us will only have time to look for useful meaning, not intention, for better or worse.
Those in the literary professions are the first to acknowledge that authorship is directly tied to the rise of private intellectual property in the eighteenth and nineteenth centuries, and to the uses of intentionality along with it. Artists seeking to avail themselves of relevant laws will continue to put intention to good use, but we should not let our inquiries be guided by private interests. Considering the first and second mechanics of textual (re-)production in light of LLMs demands of us a new theory of language, one that can no longer ignore the sum total, or statistics, as factors in meaning-making. We have work to do. LLMs work with a massive data set of preexisting human language tied to real human cultures and politics. Their outputs will most likely be deployed right back into the digital cultural record, reproducing its gaps and biases.[1] Research into the whole is only beginning, and the work is cut out for those who know how to wield and interpret their macroscopes. The machine may be one step ahead of us, but only because we are always behind it.
Alex Gil is Senior Lecturer II and Associate Research Faculty of Digital Humanities in the Department of Spanish and Portuguese at Yale University. His research interests include Caribbean culture and history, scholarly technology design for different infrastructural and socioeconomic environments, and the ownership and material extent of the sum-total of our cultural and scholarly record.
[1] See Roopika Risam, New Digital Worlds: Postcolonial Digital Humanities in Theory, Praxis, and Pedagogy (Evanston, 2019).
