Against AI?

Rita Raley and Russell Samolsky

27 June 2023

The initial reader response to our title would most likely be to assume that it refers to the increasingly polarized debates for and against an advanced AI that promises to dangerously exceed our intentions. Readers might, however, also interpret our title as a question of whether we as writers are for or against AI because of the ways in which it is now conditioning or perhaps even usurping authorial intentions, raising the stakes from linguistic conditioning to linguistic expropriation, from AI as assistant to AI as generative author. But what readers of this forum will also quickly remark is the way in which our title “Against AI?” echoes with a difference the title “Against Theory” and, absent deciding context, generates an undecidability. Indeed, our focus will be the question of generation itself and the work it does in both the essay and Generative AI, which will entail an analysis of a performative lexicon that encompasses generate, genus, generalization, and general.

Knapp and Michaels define “theory” as the “attempt to govern interpretations of particular texts by appealing to an account of interpretation in general” (p. 723). They go on to assert that “the clearest example of the tendency to generate theoretical problems by splitting apart terms that are in fact inseparable is the persistent debate over the relation between authorial intention and the meaning of texts” (p. 724). As there is no Archimedean standpoint outside of practice from which theory may operate, they summatively assert that the endeavor of theory should simply end. There is at least one practical consequence of this declaration, they concede: “If accepted, our arguments would indeed eliminate the ‘career option’ of writing and teaching theory.” Although their call for an end to theory would result only in the ending of specialized academic careers, the anxiety this would have provoked at the time now extends to the threat AI poses for a host of academic writing practices in general. What then is the relation between “theory” in their conception and AI? Or, more specifically, in what sense do theorists in their conception function like AIs? Considering this will lead us on to the associated question: In what sense does AI accord with their notion of intention and authorship and in what sense does it not?

Knapp and Michaels argue that theorists “generate” false problems in a twofold sense. The first is by a governing appeal to “interpretation in general” when no such general system empirically exists, and the second is by “generating” specific false interpretative problems, such as a divide between authorship and intention when none in fact exists. They thus deploy generate to mean “create by means of generalization” and understand theorists to be those who generate fictitious texts. Theory for them, then, is a system or program for generating fables. For Generative AI, generate refers to the production of synthetic media—more precisely, to the production of text, images, or video by machine learning models that have learned from massive training corpora and discovered patterns in the data on their own, without explicit supervision. Generative models produce something new, whether an image of a wave or a poetic description of the same, that is doubly synthetic: correlationist but also artificial. Confined to the framework of Knapp and Michaels, the generating operations for both theory and AI can be understood as extrapolative and abstractive. They are alike in the sense that each generates entities that “only seem real,” as, for example, when DALL-E or Stable Diffusion generates a realistic but nonexistent person that, unlike a photograph, does not have an actual referent in the world.[1] Here we should also remark that text-to-image deep learning models such as these introduce another element to the constructing work of generation. From the Latin generāre (to beget, produce, or create), generation also traces its etymological roots to genus, which raises the well-documented epistemological problems of taxonomic classification and prescriptive bias that have been introduced through the use of the ImageNet database as a foundational training dataset for AI.

Asserting that there cannot be intentionless writing and thus that the writing produced by a computer is intentional, Knapp and Michaels conclude that “the only real issue is whether computers are capable of intentions.” By “capable” they presumably mean being aware of these intentions. They proceed to claim that deciding the matter “will not rest on a theory of meaning but on a judgment as to whether computers can be intentional agents.” For them this determination can only be made from outside the computer itself, and this outside judgment is reiterated by their thought experiment of submariners in “white lab coats.” Importantly, they assert, theory attempts “to stand outside practice in order to govern practice from without” (pp. 729-42). In one respect, then, in deciding awareness of agency or authorship from without, Knapp and Michaels repeat the move they claim as the signature move of theory itself.

But might we not counter their thought experiment by imagining an AI that is aware of itself as an intentional agent from within itself?[2] While GPT-4 is certainly capable of generating summarizations of “Against Theory,” it is not capable of properly understanding what “intention” means in this article.[3] An AI genuinely capable of understanding all that is at stake in the question posed in our title would surely possess significant capacities of self-reflexive generalization and thus fulfill the promise of the name that researchers give to such a speculative entity: “General AI” or AGI.[4] The true mark of General AI would be that it would not be restricted to generating theory in Knapp and Michaels’ sense of a false problem but rather would understand itself, or generate an awareness of itself, as a General AI. Were such a General AI to emerge in practice, would it have been possible without being preceded by a theory of AI in general?


Rita Raley is professor of English at the University of California, Santa Barbara. Her most recent work appears in American Literature, Post45, ASAP/Journal, Digital Humanities Quarterly, and symplokē; and she has previously cowritten articles with Russell Samolsky for PUBLIC; Understanding Flusser, Understanding Modernism; and Left Theory and the Alt-Right.

Russell Samolsky is associate professor of English at the University of California, Santa Barbara, and author of Apocalyptic Futures: Marked Bodies and the Violence of the Text in Kafka, Conrad, and Coetzee (2011). He has previously collaborated with Rita Raley on two essays on AI, one on aliens and interspecies communication, and another on Vilém Flusser and futurity, as well as a forthcoming essay entitled “Rocket Theory.”


[1] Knapp and Michaels further claim that “the mistake on which all critical theory rests has been to imagine that these problems are real” (p. 724).   

[2] For an early engagement of this question, see Alan Turing’s “argument from consciousness.”

[3] The model after all is just predicting token sequences. See Emily M. Bender and Alexander Koller, “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data.”

[4] Here we evoke multiple senses of generalization, including a model’s ability to work with data it has not previously seen. For one contextually relevant exploration of AGI, see Sébastien Bubeck et al., “Sparks of Artificial General Intelligence: Early experiments with GPT-4.”
