Novel Corona: Posthuman Virus

N. Katherine Hayles

 

The novel coronavirus is posthuman in at least two senses. First, and most obviously, it is oblivious to human intentions, desires, and motives. In the US, this has led to the spectacle, refreshing despite the virus’s appalling toll on human lives, of politicians unable to spin “alternative facts” beyond a certain point—the point marked by bodies piling up in morgues. As many have observed, the virus does not distinguish between Democrats and Republicans, liberals and conservatives, Christians and Jews, Evangelicals and Muslims. In a country as deeply partisan as the US, this has opened new possibilities for dialogue. Canny governors, for example Gavin Newsom of California, are realizing the advantages of putting policies ahead of politics, abstaining from criticism of Donald Trump even when deserved. The US Congress has come together with remarkable speed to pass stimulus legislation, and even Trump has had to tone down his early dismissal of the virus as a “Democratic hoax” in favor of a more fact-based approach (although never without some propaganda).

The second sense is more technical, although not difficult to grasp. In evolutionary terms, humans and viruses have adopted diametrically opposed strategies. Humans have achieved dominance within their evolutionary niche by evolving toward increased cognitive complexity, developing language with associated changes in brain and body, evolving elaborate social structures, and in very recent human history, augmenting their capacities with advanced technical devices, including artificial intelligence. Viruses, by contrast, have evolved toward increased simplicity. Viruses replicate by hijacking a cell’s machinery and using it to proliferate, which allows them to have a much smaller genome than the cell itself, a characteristic favoring rapid replication.


In broad scope, then, these two strategies appear completely opposed. However, recent research is painting a more complex picture. As Annu Dahiya argues, the idea that viruses cannot replicate without cells (because they use the cell’s machinery to turn out copies of themselves) is now being questioned.[1] She recounts a series of experiments by Sol Spiegelman’s lab at the University of Illinois Urbana-Champaign in the early 1970s that show this with elegant simplicity. After demonstrating that viral RNA could indeed self-replicate, albeit in vitro rather than in vivo, Spiegelman combined viral Qβ phage RNA, the enzyme RNA replicase, and salts in a test tube. After viral replication, he then diluted the solution multiple times by discarding most of the test medium and adding more medium enriched with RNA replicase and nutrients. This was equivalent to creating an environment in which, to use a human analogy, 90 percent of the population dies and the remainder spreads out over the previously crowded terrain, then 90 percent of them die, and so on. This creates an intense selective pressure favoring those entities that can replicate the fastest. As Dahiya summarizes, “the most successful replicating viral RNAs successively shortened their sequences through each serial transfer. This resulted in them losing almost all genetic information that did not relate to the binding of RNA replicase. While the initial Qβ phage had 3600 nucleotides, the RNA phage at the end of the experiment possessed only 218.”[2]

Similar results were obtained by Thomas Ray in his Tierra experiment, designed to create comparable competitive conditions in a simulated environment within the computer, where artificial species competed for CPU time in which to replicate. Ray found that within twenty-four hours, an entire complex ecology had evolved, including species that (like viruses) had lost the portion of their genome coding for replication and instead were using the code of other species to carry out the task. The shortened genome allowed them to replicate at an increased speed, giving them a selective advantage over species with longer codes. Moreover, these viral-like replicators were then parasitized in turn by other species that had lost even more of their code and used the replicators’ code to carry out their own replication (code that in turn relied on the longer genomes of the species the replicators had parasitized), a strategy that Ray called hyper-parasitism.
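The selection dynamic common to Spiegelman’s serial transfers and Ray’s Tierra can be made concrete with a few lines of code. The sketch below is only an illustration of the logic, not the actual laboratory protocol or Tierra’s program: every number in it (tube size, number of transfers, deletion rule) is an assumption chosen for legibility, and replication speed is simply taken to be inversely proportional to genome length.

```python
# Toy model of serial-transfer selection: shorter genomes replicate faster,
# and each transfer carries only a small sample of molecules into a fresh tube.
# Parameters are illustrative assumptions, not Spiegelman's protocol.
import random

START_LEN = 3600   # nucleotides in the initial Q-beta RNA (figure quoted above)
CORE_LEN = 218     # the replicase-binding core retained at the end of the experiment
TUBE_SIZE = 500    # molecules seeding each new tube (assumed)
TRANSFERS = 60     # number of serial transfers (assumed)

def replicate(length):
    """Copy one molecule; deletions occasionally trim sequence outside the core."""
    if length > CORE_LEN and random.random() < 0.3:
        length -= random.randint(1, (length - CORE_LEN) // 3 + 1)
    return length

population = [START_LEN] * TUBE_SIZE
for t in range(1, TRANSFERS + 1):
    # Growth then dilution, compressed into one weighted resampling step:
    # a molecule's chance of seeding the next tube is proportional to its
    # replication speed, taken here as START_LEN / length.
    weights = [START_LEN / g for g in population]
    founders = random.choices(population, weights=weights, k=TUBE_SIZE)
    population = [replicate(g) for g in founders]
    if t % 10 == 0:
        print(f"transfer {t:2d}: mean genome length ~ {sum(population) / TUBE_SIZE:.0f} nt")
```

Run repeatedly, the mean genome length falls toward the 218-nucleotide floor. The sketch is meant only to show the shape of the argument: under repeated dilution, differential replication speed is enough to strip a replicator down to the minimum that still gets copied.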

These results encourage us to understand the present situation as a pitched battle between different evolutionary strategies. On the human side are the advantages of advanced cognition, including ventilators, PPE, and of course, the race to find a vaccine. On the novel coronavirus side are the advantages of rapid replication enabled by a very short genome, and extreme contagion through its ability to disperse through the air and to live for many hours on a variety of surfaces. Recent research has indicated that people may be most contagious before they show symptoms, which has led to novel corona being labeled a stealth virus. (Perhaps the stealth strategy evolved to ensure maximum spread through a population before individuals became too sick to move about.) In evolutionary terms, the novel coronavirus has hit the jackpot, having successfully made the leap from bats to the planet’s most populous large mammal, humans. Comparing the two strategies so far, the score is staggeringly one-sided: coronavirus, 140,000 and counting; humans, 0.

Amidst all the pain, suffering, and grief that this virus has caused humans, are there any lessons we might learn, any scrap of silver lining that we can snatch from the global chaos and wreckage? In addition to imposing reality-based constraints on political discourse, the virus is like being hit across the head by a two-by-four; it reminds us with horrific force that although humans are dominant within our ecological niche, many other niches exist that may overlap with ours and that operate by entirely different rules. It screams at jet-engine volume that we are interdependent not only with each other but also with the entire ecology of the earth. And finally, it makes devastatingly clear how unprepared we are: unprepared to cope with the virus’s effects, of course, but equally important, unprepared to meet the philosophical challenges of reconceptualizing our situation in terms that do justice both to the unique abilities of humans and to the limitations and interdependencies upon which those abilities depend.

This interdependence is illustrated through the new kinds of origin stories being written about the emergence of life on earth. The recent discoveries of ancient giant viruses, with genomes almost as large as those of bacteria, suggest that they may have played a crucial role. These giants contain genes that encode for translation machinery, something previously believed to exist only in cellular organisms. Moreover, they include multiple genes that encode for enzymes catalyzing specific amino acids, another task that cells perform. Investigating these complexities, recent research is accumulating evidence that virus-like elements may have catalyzed some of life’s critical stages, including the evolution of DNA, the formation of the first cells, and the evolutionary split into the three domains of archaea, bacteria, and eukaryotes. Modern viruses may have evolved from the ancient giants through stripping-down processes similar to those described above, jettisoning parts of their genome to facilitate faster replication.

In addition to the participation of viruses in life’s beginnings, another kind of interdependence has been the discovery of ancient virus DNA within human stem cells. Stem cells are crucial to human reproduction because they are pluripotent, having the ability to transform into all the different kinds of cells in the body as the fetus grows. Recent studies have found that one class of endogenous retroviruses, known as HERV-H, has DNA that is active in human embryonic stem cells but not in other types of human cells. Moreover, researchers have discovered that if this activity is suppressed by adding bits of RNA, the treated cells cease to act like stem cells and instead begin to act like fibroblasts, cells common in animal connective tissues. Without the pluripotency provided by stem cells, human reproduction could not work. Ironically then, the viral contamination that is posing a deadly threat to contemporary humans is also, in another guise, critical for human reproduction.

These complexities suggest that a binary of us-versus-them, humans versus viruses, is far too simple a formulation for understanding our relation to each other and to the larger ecologies within which we are immersed. If the novel coronavirus is posthuman, other viruses, such as those in stem cells, are human at their/our core. We need a thorough reconceptualization of the concepts and vocabularies with which to describe and analyze these complex interdependencies, as well as the ways in which humans, as a species, are interdependent with one another. The pandemic offers an opportunity to rethink the ways in which we can identify with each other and with life forms radically different from us.

As a start, I would like to suggest three terms for consideration.[3] The first is humans as species-in-common, an idea emphasizing the commonalities that all humans share with one another, notwithstanding all the ethnic, racial, geopolitical, and other differences that exist between us. We can see flashes of this idea throughout history, including in the present pandemic, a situation that overruns all borders and geopolitical differences to strike at humans everywhere. The second term is species-in-biosymbiosis, an idea recognizing the ways in which different species interpenetrate, for example in the human biome. The third is species-in-cybersymbiosis, emphasizing the ways in which artificial agents, especially artificial intelligences, are actively collaborating with humans to shape our shared world. I offer these brief sketches as a first pass at what a more adequate framework might look like. Notwithstanding its devastating effects, the pandemic invites us to think new thoughts, try out novel ideas, and suggest formulations that can lead to better futures for us and for the more-than-human organisms with which we share the planet.

17 April 2020


N. Katherine Hayles, the James B. Duke Professor of Literature Emerita at Duke University and Distinguished Research Professor of English at the University of California, Los Angeles, teaches and writes on the relations of literature, science, and technology in the twentieth and twenty-first centuries. She has published ten books and over one hundred peer-reviewed articles, and she is a member of the American Academy of Arts and Sciences. Her most recent book is Unthought: The Power of the Cognitive Nonconscious (2017). She is a frequent contributor to Critical Inquiry.


[1] Annu Dahiya, “The Conditions of Emergence: Toward a Feminist Philosophy of the Origins of Life” (PhD diss., Duke University, in progress).

[2] Ibid., p. 166.

[3] I am developing these terms in more depth in a forthcoming book.


Viral Times

Peter Szendy

Despite the warning signs, despite the news from China, it was as if we had woken up overnight in a completely different world. Wholly different but exactly the same.

The Emergency of Being at a Standstill

For some, including me, everything stopped. Immobilization has visible effects, at least through the eyes of the machines that continue to fly while humans are nailed to the ground: satellites show the sky cleansed of polluting emissions over China, Milan, or Paris.

For others, there has been an acceleration without measure. Faced with the increasing rapidity of contagion and the multiplication of serious or fatal cases, health care staff are overwhelmed, exhausted. Amazon is hiring at full tilt to try to honor an exploding volume of orders, while workers at the firm’s US warehouses are starting to strike to protest the lack of protection at their workplace. And intense human and machine activity is required for the mass monitoring of mobile phone geolocation data to check compliance with confinement: our immobility prompts a massive mobilization.

Hypervelocity and standstill are like two extremes that belong together. Speaking of the “jet-man” who flies jet planes, Roland Barthes wrote that his “vocation” consists in “overtaking motion, in going faster than speed.”[1] The jet-world is today stopped at the very tip of a precipitation that continues behind the scenes, in a shadow economy.

This freezing in acceleration has come as a result of another temporal paradox: nothing has changed, nothing has happened, but everything that seemed unthinkable, incredible, or impossible has now become obvious, madly obvious and yet so banal.

Years, decades of neoliberal dismantling of health and research infrastructures, as we knew, could only lead to a foreseeable catastrophe. And the inexorable destruction of animal habitats has for a long time increased the risk of zoonoses, those passages of a virus from one species to another. Nothing new, therefore, befell us. Rather, a process we knew well without wanting to recognize it suddenly crystallizes before our eyes.

The event has precisely the form of an internal polyphony made of superimposed temporalities and layers of velocities. It appeared as the unlikely and startling novelty of something that, after all, had already happened a long time ago. I suddenly woke up in another world, the same world. A world at a standstill because it goes faster than itself.


Epidemics or Endemics

These simultaneous but asynchronous times constitute the mediality of the event today, its way of happening, of occurring through the milieus and media that carry it. What the current pandemic reveals is the speed differentials that shape the coming of the event, that carve and distend it from the inside.

At a microscopic level, the lifespan of the virus, according to the studies conducted so far, varies considerably depending on the element in which it evolves: from a few hours in the air (in aerosol form) to several days on steel or plastic. On a planetary scale, one cannot fail to be struck also by the complex spread of contagion: far from the immediacy that a certain imaginary of globalized interconnection would lead us to expect, what we see is a virulence that explodes in the United States two months after it did in China; while China, where restaurants are filling up again, is preparing for a second viral wave. Here, the virus is arriving in force; there it comes back in a loop. And the temporality of the unconfinement to come promises to be even more entangled, with likely relapses and resumptions.

How are we to understand the contemporaneity of this event which unfurls like a wave while winding around itself? I mean: how are we to understand not only its temporal regimes—its evolutions, its peaks and its course, its ebb—but also its way of being concurrent (or not) with major changes in our societies?

In the last of his lectures delivered at the Collège de France in 1976 (“Society Must Be Defended”), Michel Foucault introduced a distinction between epidemics and “what might broadly be called endemics.”[2] Foucault did so while identifying and relating to each other “two technologies of power which were established at different times and which were superimposed”: on the one hand “a disciplinary technology” for which “the body is individualized”; on the other, “an insurancial [assuranciel] or regulatory technology” relating to “the biological or bio-sociological processes characteristic of human masses” – that is, what he proposes to call “a ‘biopolitics’ of the human race” (“S,” pp. 243, 249-50; trans. mod.).

Now, to this complex paradigm shift corresponds, for Foucault, a nosological mutation which seems more clearly marked or punctuated:

At the end of the eighteenth century, it was not epidemics that were the issue, but something else—what might broadly be called endemics. . . . Death was no longer something that suddenly swooped down on life—as in an epidemic. Death was now something permanent, something that slips into life, perpetually gnaws at it, diminishes it and weakens it. [“S,” pp. 243-44]

Forms of disease and technologies of power are interrelated, coimplicated, Foucault says. And the question that seems to be on everyone’s lips today, even in silent or unheard ways, is this one: What is the coronavirus contemporary with? Or, rather, what is it the metonymy or synecdoche of? That is to say, to what regime or to what technology of power does it attach itself with the spikes of its crown? What is the organism or organization of power—sovereign, disciplined, or biopolitical—that hosts it and is systemically related to it?

To give this question its full scope, we also have to consider, on the one hand, that among the “domains” or “fields of intervention” that “appeared in the late eighteenth century” with the birth of biopolitics, there is what Foucault calls the “control over the relations between . . . human beings insofar as they are a species, insofar as they are living beings, and their environment” (“S,” p. 245); ecology, in sum, is also contemporary with biopower.

We must then consider, on the other hand, the extension of the Foucauldian analyses that Gilles Deleuze proposed in 1990 in his “Postscript on Control Societies,” where he suggests setting up “a correspondence between any society and some kind of machine.”[3] What he calls “control societies”—a generalization of disciplines and biopower outside their institutional walls and even into the micropores of the social fabric—is for him the era of “viral contamination” par excellence.[4]

What about the coronavirus, then? What kind of society hosts it? And what nosologico-political paradigm would it belong to?

While epidemiologists expect Covid-19 to become a new seasonal disease, one may wonder, according to the Foucauldian distinction, whether we are dealing with epidemics or endemics. Unless we are rather facing the resurgence of an epidemic temporality from the very heart of the endemic “homeostasis” regulated by biopolitics (“S,” p. 246). What we should therefore reflect upon is a contamination that can no longer be contained within the distinction between epidemics and endemics—a contamination that contaminates these categories themselves, the one by the other. What we could well witness, then, is a panendemic that would be contemporary neither with past societies of sovereignty, of course, nor with disciplinary societies and their biopolitical developments, nor even with the Deleuzian “forms of control” (contrôlats) that prolong them.

After becoming pandemic, the epidemic could end up endemic, though still punctuated by epidemic peaks; but the reverse is also true: the endemic plague of healthcare systems under capitalism has exploded into a pandemic crisis. The latter is the subject of permanent statistical monitoring, of course, but it seems to thwart insurancial preparation and regulatory controls. In short, what arises with this nosological formation which is both new and familiar is perhaps the very time differential between these paradigms to which it belongs while exceeding them in every way.

A Crisis of Crisis?

I would be inclined to say that these paradigms are put in crisis, if the event called coronavirus did not overflow even the category of crisis itself. In their Communist Manifesto, speaking of the “periodical return” of the “commercial crises” that shake capitalist society, Karl Marx and Friedrich Engels described them as a “social epidemic” (gesellschaftliche Epidemie). But the regularity of these crises ended up consecrating the phrase “endemic crisis.”[5]

The very notion of crisis is still part of what it puts in crisis: by determining the threat as a crisis, “one tames it, domesticates it, neutralizes it,” Jacques Derrida wrote when questioned in 1983 about “the idea that the current world is in crisis.”[6] The crisis, especially when it is endemic, is already the horizon for a way out of the crisis. This is why Derrida could add: “In its turn in crisis, the concept of crisis would be the signature of a last symptom, the convulsive effort to save a ‘world’ that we no longer inhabit.”[7]

Promises have been made in recent weeks that would have been unthinkable a few months ago, for example that of resuscitating a dying public health system. It remains to be seen whether these promises will be kept (the signs are not encouraging).[8] More or less tacit commitments are also made regularly, for example about the temporary and exceptional nature of the mass surveillance measures deployed or currently experimented with. Here too, everything is ready, and everything remains to come.

Whether the coronavirus will end up being just one more crisis, perhaps more memorable than others, remains to be seen. And above all to be decided. A decision which must be taken now but that will have to be taken again, again and again, later.

What the coronavirus will have been, we will have to remember without erasing its time differentials. We will have to keep alive the experience of the heterochronies that wove the medial texture of the event.

It will decidedly have taken several times for it to happen to us.

15 April 2020


Peter Szendy is David Herlihy Professor of Comparative Literature and Humanities at Brown University.


[1] Roland Barthes, “The Jet-man,” in Mythologies, trans. Annette Lavers (New York, 1972), p. 71.

[2] Michel Foucault, “Society Must Be Defended”: Lectures at the Collège de France, trans. David Macey (New York, 2003), p. 243.

[3] Gilles Deleuze, “Postscript on Control Societies,” in Negotiations, trans. Martin Joughin (New York, 1995), p. 180.

[4] Ibid.

[5] Karl Marx and Friedrich Engels, The Communist Manifesto, trans. Samuel Moore (London, 2002), p. 225.

[6] Jacques Derrida, “Economies of the Crisis,” in Negotiations: Interventions and Interviews, 1971-2001, trans. Elizabeth Rottenberg (Stanford, Calif., 2002), pp. 70-1.

[7] Ibid.

[8] See Laurent Mauduit and Martine Orange, “Hôpital public: la note explosive de la Caisse des dépôts,” mediapart.fr, 1 Apr. 2020.


Watch Timothy Bewes interview Peter Szendy about his CI blog post for the Cogut Institute for the Humanities.


The Universal Right to Breathe

Achille Mbembe

Translated by Carolyn Shread

 

Already some people are talking about “post-Covid-19.” And why should they not? Even if, for most of us, especially those in parts of the world where health care systems have been devastated by years of organized neglect, the worst is yet to come. With no hospital beds, no respirators, no mass testing, no masks nor disinfectants nor arrangements for placing those who are infected in quarantine, unfortunately, many will not pass through the eye of the needle.

1.

It is one thing to worry about the death of others in a distant land and quite another to suddenly become aware of one’s own putrescence, to be forced to live intimately with one’s own death, contemplating it as a real possibility. Such is, for many, the terror triggered by confinement: having to finally answer for one’s own life, to one’s own name.

We must answer here and now for our life on Earth with others (including viruses) and our shared fate. Such is the injunction this pathogenic period addresses to humankind. It is pathogenic, but also the catabolic period par excellence, with the decomposition of bodies, the sorting and expulsion of all sorts of human waste – the “great separation” and great confinement caused by the stunning spread of the virus – and along with it, the widespread digitization of the world.

Try as we might to rid ourselves of it, in the end everything brings us back to the body. We tried to graft it onto other media, to turn it into an object body, a machine body, a digital body, an ontophanic body. It returns to us now as a horrifying, giant mandible, a vehicle for contamination, a vector for pollen, spores, and mold.

Knowing that we do not face this ordeal alone, that many will not escape it, is vain comfort. For we have never learned to live with all living species, have never really worried about the damage we as humans wreak on the lungs of the earth and on its body. Thus, we have never learned how to die. With the advent of the New World and, several centuries later, the appearance of the “industrialized races,” we essentially chose to delegate our death to others, to make a great sacrificial repast of existence itself via a kind of ontological vicariate.

Soon, it will no longer be possible to delegate one’s death to others. It will no longer be possible for another to die in our place. Not only will we be condemned to assume our own demise, unmediated, but farewells will be few and far between. The hour of autophagy is upon us and, with it, the death of community, as there is no community worthy of its name in which saying one’s last farewell, that is, remembering the living at the moment of death, becomes impossible.

Community – or rather the in-common – is not based solely on the possibility of saying goodbye, that is, of having a unique encounter with others and honoring this meeting time and again. The in-common is based also on the possibility of sharing unconditionally, each time drawing from it something absolutely intrinsic, a thing uncountable, incalculable, priceless.

2.

There is no doubt that the skies are closing in. Caught in the stranglehold of injustice and inequality, much of humanity is threatened by a great chokehold as the sense that our world is in a state of reprieve spreads far and wide.

If, in these circumstances, a day after comes, it cannot come at the expense of some, always the same ones, as in the Ancienne Économie – the economy that preceded this revolution. It must necessarily be a day for all the inhabitants of Earth, without distinction as to species, race, sex, citizenship, religion, or other differentiating marker. In other words, a day after will come but only with a giant rupture, the result of radical imagination.

Papering over the cracks simply won’t do. Deep in the heart of this crater, literally everything must be reinvented, starting with the social. Once working, shopping, keeping up with the news and keeping in touch, nurturing and preserving connections, talking to one another and sharing, drinking together, worshipping and organizing funerals begins to take place solely across the interface of screens, it is time to acknowledge that on all sides we are surrounded by rings of fire. To a great extent, the digital is the new gaping hole exploding Earth. Simultaneously a trench, a tunnel, a moonscape, it is the bunker where men and women are all invited to hide away, in isolation.

They say that through the digital, the body of flesh and bones, the physical and mortal body, will be freed of its weight and inertia. At the end of this transfiguration, it will eventually be able to move through the looking glass, cut away from biological corruption and restituted to a synthetic universe of flux. But this is an illusion, for just as there is no humanity without bodies, likewise, humanity will never know freedom alone, outside of society and community, and never can freedom come at the expense of the biosphere.

3.

We must start afresh. To survive, we must return to all living things – including the biosphere – the space and energy they need. In its dank underbelly, modernity has been an interminable war on life. And it is far from over. One of the primary modes of this war, leading straight to the impoverishment of the world and to the desiccation of entire swathes of the planet, is the subjection to the digital.

In the aftermath of this calamity there is a danger that rather than offering sanctuary to all living species, sadly the world will enter a new period of tension and brutality.[1] In terms of geopolitics, the logic of power and might will continue to dominate. For lack of a common infrastructure, a vicious partitioning of the globe will intensify, and the dividing lines will become even more entrenched. Many states will seek to fortify their borders in the hope of protecting themselves from the outside. They will also seek to conceal the constitutive violence that they continue to habitually direct at the most vulnerable. Life behind screens and in gated communities will become the norm.

In Africa especially, but in many places in the Global South, energy-intensive extraction, agricultural expansion, predatory sales of land and destruction of forests will continue unabated. The powering and cooling of computer chips and supercomputers depends on it. The purveying and supplying of the resources and energy necessary for the global computing infrastructure will require further restrictions on human mobility. Keeping the world at a distance will become the norm so as to keep risks of all kinds on the outside. But because it does not address our ecological precariousness, this catabolic vision of the world, inspired by theories of immunization and contagion, does little to break out of the planetary impasse in which we find ourselves.

4.

All these wars on life begin by taking away breath. Likewise, as it impedes breathing and blocks the resuscitation of human bodies and tissues, Covid-19 shares this same tendency. After all, what is the purpose of breathing if not the absorption of oxygen and release of carbon dioxide in a dynamic exchange between blood and tissues? But at the rate that life on Earth is going, and given what remains of the wealth of the planet, how far away are we really from the time when there will be more carbon dioxide than oxygen to breathe?



Rafael Lozano-Hemmer, “Last Breath”, 2012. Shown here: Rafael Lozano-Hemmer: Obra Sonora, Carroll / Fletcher Gallery, London, United Kingdom, 2014. Photo by: Grace Storey, Carroll/Fletcher Gallery.

Before this virus, humanity was already threatened with suffocation. If war there must be, it cannot so much be against a specific virus as against everything that condemns the majority of humankind to a premature cessation of breathing, everything that fundamentally attacks the respiratory tract, everything that, in the long reign of capitalism, has constrained entire segments of the world population, entire races, to a difficult, panting breath and life of oppression. To come through this constriction would mean that we conceive of breathing beyond its purely biological aspect, and instead as that which we hold in-common, that which, by definition, eludes all calculation. By which I mean, the universal right to breathe.

As that which is both ungrounded and our common ground, the universal right to breathe is unquantifiable and cannot be appropriated. From a universal perspective, not only is it the right of every member of humankind, but of all life. It must therefore be understood as a fundamental right to existence. Consequently, it cannot be confiscated and thereby eludes all sovereignty, symbolizing the sovereign principle par excellence. Moreover, it is an originary right to living on Earth, a right that belongs to the universal community of earthly inhabitants, human and other.[2]

Coda

The case has been pressed already a thousand times. We recite the charges with eyes shut. Whether it is the destruction of the biosphere, the take-over of minds by technoscience, the criminalizing of resistance, repeated attacks on reason, generalized cretinization, or the rise of determinisms (genetic, neuronal, biological, environmental), the dangers faced by humanity are increasingly existential.

Of all these dangers, the greatest is that all forms of life will be rendered impossible. Between those who dream of uploading our consciousness to machines and those who are sure that the next mutation of our species lies in freeing ourselves from our biological husk, there’s little difference. The eugenicist temptation has not dissipated. Far from it, in fact, since it is at the root of recent advances in science and technology.

At this juncture, this sudden arrest arrives, an interruption not of history but of something that still eludes our grasp. Since it was imposed upon us, this cessation derives not from our will. In many respects, it is simultaneously unforeseen and unpredictable. Yet what we need is a voluntary cessation, a conscious and fully consensual interruption. Without which there will be no tomorrow. Without which nothing will exist but an endless series of unforeseen events.

If, indeed, Covid-19 is the spectacular expression of the planetary impasse in which humanity finds itself today, then it is a matter of no less than reconstructing a habitable Earth to give all of us the breath of life. We must reclaim the lungs of our world with a view to forging new ground. Humankind and biosphere are one. Alone, humanity has no future. Are we capable of rediscovering that each of us belongs to the same species, that we have an indivisible bond with all life? Perhaps that is the question – the very last – before we draw our last dying breath.

13 April 2020

[A version of this post appears in French at AOC]


Achille Mbembe is the author of Brutalisme (Paris, 2020). He is the cofounder with Felwine Sarr of Ateliers de la pensée in Dakar.


[Is translation still permissible in Covid-19? We know that its reach is across borders, that it commingles in a way that is rapidly disappearing into a seemingly distant past, that it transfers and transforms. Now, under the regime of social distancing, where I show my care for you by stepping away, what is it to translate? For there’s no reading more intimate than a translation – a bodily intimacy that adopts the rhythm of the lungs, the pulse of the heart, the coursing of the blood through the text to the point that we ask, whose breath is it anyway?

I know that this text kept me alive – merci, Achille Mbembe. That it came out of the blue, bringing a breath of fresh air – thank you, Hank Scotch. And that I’ll pass it on to you, readers of Critical Inquiry, hoping that it frees up the atmosphere. Because we need to breathe together. And there is no solitary breath—Carolyn Shread, translator]

[1] Building on the term’s origins as a mid-twentieth-century architectural movement, I have defined brutalism as a contemporary process whereby “power is henceforth constituted, expressed, reconfigured, acts and reproduces itself as a geomorphic force.” How so? Through processes that include “fracturing and fissuring,” “emptying vessels,” “drilling,” and “expelling organic matter,” in a word, by what I term “depletion” (Achille Mbembe, Brutalisme [Paris, 2020], pp. 9, 10, 11).

[2] See Sarah Vanuxem, La propriété de la Terre (Paris, 2018), and Marin Schaffner, Un sol commun. Lutter, habiter, penser (Paris, 2019).

 


Ground-Zero Empiricism

Lorraine Daston

I am used to waking up in the seventeenth century. As a historian of early modern science, that’s where I spend a lot of time. But it is strange that everyone else is suddenly keeping me company there.

No, I don’t mean the plague. Fortunately for us, Covid-19 is nowhere near as deadly as the diseases caused by the bacterium Yersinia pestis. From its arrival in Pisa in 1348 to the last great outbreak in Marseilles in 1720, the bacterium killed at least 30 percent of Europe’s population and probably a comparable number along its path from South Asia to the Middle East. That would translate to ninety-nine million deaths in the US alone. No one, not even the gloomiest epidemiologists, thinks Covid-19 will carry off almost a third of the world’s population.


Yet, beyond that tepid reassurance, there’s not much consensus as to just how deadly the virus is; observed case-fatality rates in places where the disease has spread so far range from 12.7 percent (30.25 deaths per 100,000 inhabitants, this latter a better gauge when testing is still spotty) in Italy to 2.2 percent (3.14) in Germany, although the two countries have comparable (and comparably good) health systems. For the US, the current observed rate is 3.6 percent (5.04); in China, 4 percent (0.24). (All figures from the Johns Hopkins University Coronavirus Resource Center.) There is always variability in how the same bug affects different individuals: age, sex, income, medical care, genetic dispositions, nutrition, and many other factors all play a role. But within large samples of hundreds of thousands of patients, stable averages ought to emerge and converge, at least in roughly similar populations. Why are these numbers all over the map?
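The two gauges in the paragraph above differ only in their denominators, which is why they can diverge so sharply. A small sketch in Python, using deliberately made-up round numbers rather than the Johns Hopkins figures, shows how the observed case-fatality rate swings with the breadth of testing while deaths per 100,000 inhabitants does not:

```python
# Illustrative only: the figures below are invented round numbers, not the
# April 2020 data cited in the text.

def case_fatality_rate(deaths: int, confirmed_cases: int) -> float:
    """Deaths as a percentage of confirmed cases (the tested denominator)."""
    return 100.0 * deaths / confirmed_cases

def deaths_per_100k(deaths: int, population: int) -> float:
    """Deaths per 100,000 inhabitants (the population denominator)."""
    return 100_000 * deaths / population

deaths, population = 20_000, 60_000_000
for confirmed in (160_000, 900_000):        # sparse testing vs. mass testing
    print(f"confirmed cases {confirmed:>7,}: "
          f"CFR = {case_fatality_rate(deaths, confirmed):4.1f}%, "
          f"per 100k = {deaths_per_100k(deaths, population):.1f}")
```

With deaths held fixed at 20,000, widening the tested denominator drags the case-fatality rate from 12.5 percent down to about 2.2 percent, while deaths per 100,000 stays at 33.3 throughout. The point is purely arithmetical; the real Italian and German numbers differ for the further reasons of variability the passage itself lists.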

That’s what I meant when I said that we’ve suddenly been catapulted back to the seventeenth century: we are living in a moment of ground-zero empiricism, in which almost everything is up for grabs, just as it was for the members of the earliest scientific societies – and everyone else — circa 1660. For them, just figuring out what a phenomenon was (Was heat or luminescence or for that matter, the plague, all one kind of thing?), how best to study it (Collect comprehensive natural histories? Count instances? Perform experiments – if so, what kind? Systematically observe – if so, what exactly, and how long?), why it happened when and where it did, and, above all, what to do with it or about it — none of these basic questions had an agreed-upon answer. It wasn’t just a question of lacking knowledge. We will always lack knowledge, which is why research is never-ending. There was no settled script for how to go about knowing.

Of course, I exaggerate the analogy between then and now. Thanks in no small part to the ingenuity, sagacity, and sheer persistence of thousands and thousands of researchers since the seventeenth century, we are the heirs not only to knowledge (what a virus is, what it does, and how to thwart it) but also to a diverse repertoire of ways of knowing, from well-designed experiments and systematic observations, already being refined and yoked together in the seventeenth century, to chemical assays and statistical analysis to computer simulations. And by researchers, I mean not only natural philosophers in their curled periwigs or professors in their white lab coats but legions of lynx-eyed investigators everywhere, at sea and in fields, in cities and in kitchens, noting events and correlations: the bark that lowers fever; the cloud formation that portends a storm; the lackluster stone that shines in the dark with a cool light.  They all helped draft our script for how to go about knowing – a lengthy, intricate, and well-rehearsed script that guides our efforts to understand, among many other things, Covid-19 and its perplexingly various manifestations.

Yet, in moments of radical novelty and the radical uncertainty novelty emits, like a squid obscuring itself in ink, we are temporarily thrown back into a state of ground-zero empiricism. Chance observations, apparent correlations, and anecdotes that would ordinarily barely merit mention, much less publication in peer-reviewed journals, have the internet buzzing with speculations among physicians, virologists, epidemiologists, microbiologists, and the interested lay public. Is it true that more men are dying than women, and if so, in which age groups? Are the differences between observed case-fatality rates real or an artifact of how much various countries test for the number of infected persons (the denominator of the fraction) and/or how causes of death are registered? For example, some countries count the death of anyone who tested positive for Covid-19 as a death due to the virus, no matter what other factors (such as diabetes, for example) might have played a role; other countries use dominant or proximate causes in their classifications; both systems have their pros and cons.

Quite aside from the fog of statistics, there are basic facts yet to be ascertained. Is the disease airborne (and if so, how long can it linger in the air)? Do some antiviral drugs help alleviate symptoms in acute cases – and for whom? How much do ventilators, even when available, prolong the life of patients sick enough to warrant their use? Does Covid-19 cause heart attacks? Medical staff from Wuhan and Hackensack, Seoul and London, Bergamo and New York City are frantically exchanging observations on Twitter about therapies and “curious cases” (a very seventeenth-century term).

At moments of extreme scientific uncertainty, observation, usually treated as the poor relation of experiment and statistics in science, comes into its own. Suggestive single cases, striking anomalies, partial patterns, correlations as yet too faint to withstand statistical scrutiny, what works and what doesn’t: every clinical sense, not just sight, sharpens in the search for clues. Eventually, some of those clues will guide experiment and statistics: what to test, what to count. The numbers will converge; causes will be revealed; uncertainty will sink to tolerable levels. But for now, we are back in the seventeenth century, the age of ground-zero empiricism, and observing as if our lives depended on it.

10 April 2020


Lorraine Daston is director at the Max Planck Institute for the History of Science, Berlin, permanent fellow of the Wissenschaftskolleg zu Berlin, and visiting professor in the Committee on Social Thought at the University of Chicago. Her most recent book is Against Nature (2019). She is a frequent contributor to Critical Inquiry and a member of the editorial board.


Daston’s CI blog post was recently featured in Clifford Marks and Trevor Pour’s New Yorker article “What We Don’t Know About the Coronavirus.”

 

 


A Letter to Oliver Vogel

Alexander Garcia Düttmann

Translated by James Fontini

Dear Oliver,

Many years ago, the press you work for published a book of mine with the subtitle Thinking and Talking About a Virus. If I were to write about a virus again today, about this virus called corona, I would conceivably choose a similar subtitle, only slightly altered. The subtitle would read: “That a Virus Is Thought about and Spoken Of”. The first subtitle should indicate that the discourses emerging from the human immunodeficiency virus (HIV) were indeed of different sorts, but nonetheless developed into specific discursive patterns, which in turn minimized the disparity. On the other hand, the second subtitle would indicate the presupposition of such a development, which is to say the power of the virus to generate discourses. To my eyes, the present is less characterized by the question of “How to think and talk about a virus?” than by the fact that a virus is thought about and spoken of so much, so incessantly, and so tirelessly.


For nothing has struck me more in the past few weeks or so than the endless procession and parade of scientists, experts, researchers, doctors, of academics, artists and politicians, of sociologists, philosophers, political scientists, historians, cultural workers, who – one behind the other, one after the other, one before the next – constantly express their thoughts on the new pandemic to the public in print, on TV, and on the internet. Everyone knows something about the meaning of this pandemic and its repercussions, whether or not it announces the end of capitalism, allows for the emergence of new social solidarity, or ratifies the general state of exception that reduces life to bare life. It is as if this march of the minds – or ghost train? – was already in motion before one could even speak of an epidemic, let alone a pandemic.

While virologists work to develop a vaccine as quickly as possible – a medicine that will stop the pandemic – in overcrowded hospitals and clinics it is not merely a matter of life and death but of decisions of life and death that must be made quickly, as the next decision already imposes itself. Personnel are pushed to the limits of what is doable and what is reasonable. Protective masks, ventilators, even beds, are lacking. When this unpredictable, almost implausible, and immensely accelerated activity doubles in the superstructure, a communication explosion occurs.

Yet does not this doubling, the hectic drive and meddling of those who reflect on the crisis, also contain a comic dimension? It is as if it were also a crowning of the coronavirus,[1] mirroring or forestalling its virulence and unstoppability, prostrating respectfully before it while simultaneously warding it off, covering over the traces it has not yet left behind. If self-reflection was once tied to the end, if it was once an attempt to understand an experience or a challenge one had undergone, it now comes before the start, before one has actually experienced something.

When a total mobilization of spirit takes place, one that is acknowledgement and ostracism in one, a being-caught-up-in and a not-letting-near, when all minds are ready and willing to fulfill a task, it is hardly surprising that people reach for ways of thinking and speaking that are easily recognizable and provide something of a quick, calming platitude. It does not matter how apocalyptic they may sound or how much they may call for pause and mindfulness. Some seek new guidelines and directives for business to carry on as smoothly as possible, without getting too bogged down or delayed. Others invoke creativity, which, in order to justify its returns, must prove itself, especially in critical times.

It seems to me, then, that the only ones who can respond to the pandemic are those for whom the comic aspect I mentioned triggers an astonished laughter. Without such laughter, I fear, there is a lurking danger of the pandemic contributing only to the consolidation of social tendencies whose momentum it brings to a standstill, at least on the face of it. Let me ask you this: What sort of social distancing, if any, does astonished laughter practice?

Yours,

Alex

Bad Nauheim, 25 March 2020

[This was originally published in the magazine Hundertvierzehn]


Alexander García Düttmann teaches philosophy and aesthetics at University of the Arts in Berlin. He is the author of numerous books, including At Odds with AIDS. Talking and Thinking about a Virus (1996), Philosophy of Exaggeration (2004), Visconti: Insights into Flesh and Blood (2008), and, most recently, Love Machine. The Origin of the Work of Art (2018). He is also the editor of Jacques Derrida’s lecture course Théorie et pratique (2017) and appears as an actor in Albert Serra’s new film Liberté (2019).

James Fontini is working on a PhD about Heidegger and topology at University of the Arts in Berlin.


[1] Düttmann writes “dem Coronavirus noch einmal die Krone aufsetzen”, a reference in part to the morphology that gives coronaviruses their name. The phrase “etwas die Krone aufsetzen” can also refer to “crowning” or “capping something off” in the sense of concluding or bringing something to its end–Translator’s note.


 


COVID-19 Metaphors

Norman MacLeod

In her 1978 essay “Disease as Political Metaphor,” Susan Sontag demonstrated that the trope of the infectious malady has been used throughout human history as a metaphor to represent, describe, and critique failures of the polis by critics of culture and politics. The present COVID-19 crisis is ripe — some might say “rife” — with further examples that embody the complete spectrum from profound to ridiculous. The fact that many of the metaphors being used have been expropriated from my own fields of evolutionary biology and earth science simply serves to underscore the difficulties, and the opportunities (some unrealized to date), that the metaphoric mode of communication entails.


First, let’s get some facts straight about viruses. They get a bad rap from the press. Ask just about anyone about them and all you hear are complaints. It’s “disease this” and “infection that.” No one seems to have anything nice to say about viruses, which is a shame because you wouldn’t be here reading this without them.

Viruses are the most ubiquitous life forms on the planet, with 10³¹ individuals cited as one recent estimate of their number. They are also one of the least understood. They live everywhere in nature and everywhere both on and inside of you. Less than one percent are known to be pathogenic, but many more are known to be symbiotic (which means they assist the host), mutualistic (which means both host and virus benefit from the association), or benign (which means we don’t know what they do). In addition, viruses’ modus operandi of targeting specific cell types and interrupting these cells’ genetic functioning means they can be used to destroy certain cell types selectively (e.g., cancer, HIV) and repair genetic damage in others. So, next time someone asks you about viruses, show a little respect.

These days when people say something has “gone viral,” they are almost always using the term as a metaphor for an event that touches a great number of people and news of which is passed from individual to individual, usually via social media. As metaphors go, it’s not too bad. Of course, there’s nothing especially virus-like about microbial infection. A wide variety of small beasties spread disease among individuals via close proximity and/or physical contact (e.g., bacteria, prions). But the term virus sounds much closer to vita (Latin for “the life force”) and so is obviously the better choice for representing any event, idea or philosophy that touches large numbers of people.

More interesting is the somewhat neglected aspect of the viral disease metaphor’s cultural extrapolation. Viruses are not designed to damage or kill their hosts. The point of a virus is life, not death. Because viruses need living cells to reproduce over time, they have developed transmission strategies that make the finding of living hosts quick and efficient. Ideally, a pathogenic virus will enter a living system and have sufficient time to make many copies of itself before it is eliminated by the host’s immunological defenses. The virus survives and the host survives. That’s the model. The problem with pathogenic viruses — especially those hosts have not encountered before — is that, in the system’s efforts to find, and develop a means of neutralizing, the virus, the state of the body is changed, sometimes beyond the point at which the body (especially a weakened body) can remain alive. Consequently, it’s not the virus that kills; it’s the body’s reaction to the virus that kills.

The COVID-19 virus is special but not for the reason that most people think. Its infection of our bodies is nothing noteworthy as viruses go. But COVID-19 has also infected our cultures, our economics, and our politics, worldwide. It’s a virus and it’s a meme. In order to reduce the inferred levels of mortality in at-risk individuals, our societies have reacted in unprecedented ways: by mandating the shutdown of economic and cultural activities, by curtailing the individual (and increasingly legal) rights of citizens, and by forcing both individuals and family groups into physical isolation for an as yet unspecified time interval. It remains to be seen whether these societal reactions will be sufficient to mitigate the damage the virus will inflict on human populations. In a moral sense, we have no choice but to endure them in the hope they will. But just as a body’s reaction to a pathogenic virus can leave it in a weakened state, and so susceptible to other infections that would not prove problematic had the virus not come along, the economic, social, and cultural reactions the COVID-19 meme has caused will leave our societal bodies in much weakened states. It will take a substantial interval and sustained efforts for our societies to recover from their reaction to this infection. In thinking about the legacy COVID-19 will leave, especially in light of the knowledge that similar microbe/meme pandemics have happened in the past and will happen in the future, it will be important to remember that, unlike our body’s immunological reactions, we are in control of how our societies react to this and future infections. It is in our power to learn from this infection and so establish structures that will recognize the danger and take steps to mitigate harmful societal responses both to future pandemics and to other events of a holistically environmental nature, as they arrive.

Of course, this process is nothing more, or less, than an example of cultural adaptation and, as with all forms of adaptation, the key to success is diversity. But herein also lies danger. To draw, once again metaphorically, on natural systems: in the immediate wake of a supervolcano eruption, asteroid impact, or ice age, the strategies that will lead to successful postevent diversification are unknown. Some lineages remain more or less unchanged and continue to pursue old established ways. Others undergo rapid and profound alteration in their approaches to life. Success always belongs to whichever strategies work best for whatever reason. Moreover, adaptations that confer an advantage, whatever their origin and however slight, can eventually displace those that don’t, irrespective of the success the latter may have enjoyed previously. Prior incumbency is no guarantor of success in the aftermath of a profound dislocation.

As it is with nature, so it is with the social factors of culture, economics, and politics. Humans can do many things that are highly unusual, even unique. But, by definition, humans can never do anything that’s unnatural. Owing to the manner in which human cultures have responded to the COVID-19 infection, many of their most cherished mores, traditions, and institutions have, to all intents and purposes, been suspended. It’s far too early to tell which will survive after the crisis has passed and in what state. What can be said with a fair degree of certainty, however, is that aspects of the world of tomorrow may be very different from the world of yesterday and that the challenges we’ll face in coping with that world won’t end with our society’s survival; they’ll only have begun.

6 April 2020


Norman MacLeod teaches at the School of Earth Sciences and Engineering, Nanjing University, China. NMacLeod@nju.edu.cn


Anticipatory Care

Carol J. Adams

I’ve been talking to my dogs more frequently these days because, as I tell them, they have no idea about the coronavirus pandemic or at least aren’t communicating their thoughts about the issue to me.

Today, Inky the rescued Minpin lay on the penultimate step to the second floor as I leaned over to address him on this subject: “You don’t know anything about this, do you?” He was barely committed to the conversation. As I rubbed his back, I thought about a recent call from my sister.


She had called to urge me to set up a sick room in our house, now. Now, she directed, before either my partner or I fell ill with the virus. She listed items to put in the room (extra sheets and towels, cleaning supplies) and the tasks of preparation we should do now. I dutifully wrote the list down but found that when she said “and clear off all surfaces, now,” I stopped. Perhaps we have more tchotchkes than most, but the idea of clearing off all the surfaces of the designated sickroom seemed to be going too far.

The fault line between the real and imagined threat of coronavirus could be found right there—will I do something about those surfaces populated with items? The idea is that if you remove them now, there would be fewer surfaces that might carry the virus to wipe down if someone were ill with the disease. I should have been thinking of the real possibilities of the needs of an ill person. As someone who provided care to others, shouldn’t I have traversed the space between the real and the possible more easily? A quick read of Jessica Lustig’s “What I learned when my husband got sick with coronavirus” could have jumpstarted my priorities.

All the years of caregiving for our parents that my sisters and I shared, following what I called “the rule of the good daughter,” and here was the first test of what a post-parental caregiving situation might require. I was balking.

My sister continued, “Don’t shake out the dirty clothes before washing them.” Who, I wondered, shakes out clothes before washing them anyway? But such a specific image, the sick person’s clothes, the need to wash them, the need to wash your own clothes after tending the sick person, the person standing in front of the washing machine, seemed to bridge the real and the possible with urgency.

She said, “You need a Plan B, you get sick, and Plan C the person you live with gets sick.” She was urging what we might call anticipatory care. Of professional caregiving, in crisis because of federal inaction, an eviscerated health care system, and the absence of needed equipment, there is not much we individually can do—unless we are sewing masks, a nineteenth-century answer in a twenty-first-century world. For collective caregiving, our responsibility is clear—flatten the curve. Personal caregiving such as Jessica Lustig describes (Advil in a plastic dish, doing laundry, trying to get her husband to eat something, wiping down the shower, switches, faucets, well . . . everywhere) is learned in the work of it. Reports such as Lustig’s may awaken us to anticipatory care—this practical work of thinking about “what if the virus strikes home?”


It is said about a creative venture that you have to imagine something before it can come into existence. But, in general, this is not how personal caregiving evolves. A need comes into existence and caregiving is the necessary and immediate response—a broken leg, Alzheimer’s progressing—so often we figure it out on the go.

Anticipatory care, as my sister urged, and as Lustig implies, recognizes that we or someone we love may be next. It might begin with ensuring all capable members of a household—if we live in a household—know where the sheets and cleaning agents are, know how to use the dishwasher and the washing machine, if we are lucky enough to have access to those appliances. The Centers for Disease Control help us if we struggle with imagining what is required.

What if we acknowledged that a part of life is thinking ourselves toward our own deaths, thinking ourselves toward a possible role of caregiver or care receiver?

I ended my essay in Critical Inquiry with a question: “It’s a quandary and an epistemological concern: can the noncaregiving world comprehend or encompass the world of caregiving?” Lustig has seen no sign of it:

It’s as if we [her family] are in a time warp, in which we have accelerated at 1½ time speed, while everyone around us remains in the present — already the past to us — and they, blissfully, unconsciously, go about their ordinary lives, experiencing the growing news, the more urgent advisories and directives, as a vast communal experience, sharing posts and memes about cabin fever, about home-schooling, about social distancing, about how hard it all is, while we’re living in our makeshift sick ward, living in what will soon be the present for more and more of them.

If the pandemic situates us so much closer to caregiving than many have ever been, still anticipatory care on its own won’t transform a noncaregiver into a caregiver. Lustig talks about the people walking past the door of a clinic through which she and her very ill husband must exit, the passing pedestrians oblivious to the illness on the other side of the door. Think of the crowds who showed up to watch the USNS Comfort arrive in the New York harbor—arriving, they hoped, for them.

Those who have been through hell have told us what we need to do, tchotchkes be damned.

5 April 2020


Carol J. Adams is a feminist scholar and activist whose written work explores the cultural construction of overlapping and interconnected oppressions, as well as the ethics of care. Adams’s first book, The Sexual Politics of Meat, is now celebrating its thirtieth anniversary. She is also the author of Burger and coauthor with Virginia Messina of Protest Kitchen: Fight Injustice, Save the Planet, and Fuel Your Resistance One Meal at a Time, and of many other books. A new and updated The Pornography of Meat (2004) will appear in the fall of 2020. The visual accompaniment to The Sexual Politics of Meat will include 340 images sent from around the world by Adams’s readers, whom she calls “grassroots sociologists.” She is working on a memoir of her mother based on her essay from the New York Times, “Finding Myself in My Mother’s Calendars.”

www.caroljadams.com


Biopolitics in the Time of Coronavirus

Daniele Lorenzini

 

In a recent blog post, Joshua Clover rightly notices the swift emergence of a new panoply of “genres of the quarantine.” It should not come as a surprise that one of them centers on Michel Foucault’s notion of biopolitics, asking whether or not it is still appropriate to describe the situation that we are currently experiencing. Neither should it come as a surprise that, in virtually all of the contributions that make use of the concept of biopolitics to address the current coronavirus pandemic, the same bunch of rather vague ideas are mentioned over and over again, while other—no doubt more interesting—Foucauldian insights tend to be ignored. In what follows, I discuss two of these insights, and I conclude with some methodological remarks on the issue of what it may mean to “respond” to the current “crisis.”


The “Blackmail” of Biopolitics

The first point that I would like to make is that Foucault’s notion of biopolitics, as he developed it in 1976,[1] was not meant to show us just how evil this “modern” form of power is. Of course, it was not meant to praise it either. It seems to me that, in coining the notion of biopolitics, Foucault wants first and foremost to make us aware of the historical crossing of a threshold and more specifically of what he calls a society’s “seuil de modernité biologique” (“threshold of biological modernity”).[2] Our society crossed such a threshold when the biological processes characterizing the life of human beings as a species became a crucial issue for political decision-making, a new “problem” to be addressed by governments—and this, not only in “exceptional” circumstances (such as that of an epidemic), but in “normal” circumstances as well.[3] This is a permanent concern, one that defines what Foucault also calls the “étatisation du biologique” (the “nationalization of the biological”).[4] To remain faithful to Foucault’s idea that power is not good or bad in itself, but that it is always dangerous (if accepted blindly, that is, without ever questioning it), one could say that this “paradigm shift” in the way in which we are governed, with both its positive and its horrible outcomes, no doubt corresponds to a dangerous extension of the domain of intervention of power mechanisms. We are no longer governed only, nor even primarily, as political subjects of law, but also as living beings who, collectively, form a global mass—a “population”—with a natality rate, a mortality rate, a morbidity rate, an average life expectancy, etc.

In “What is Enlightenment?” Foucault claims that he wants to refuse the “‘blackmail’ of Enlightenment”—that is, the idea that we have to be either “for” or “against” it—and address it instead as a historical event that still characterizes, at least to a certain extent, what we are today.[5] I would like to suggest, in an analogous way, that it would be wise for us to refuse the “blackmail” of biopolitics: we do not have to be “for” or “against” it (what would that even mean?), but address it as a historical event that still defines, at least in part, the way in which we are governed, the way in which we think about politics and about ourselves. When, in the newspapers or on social media, I see people complaining about others not respecting the quarantine rules, I always think about how astonishing it is to me, on the contrary, that so many of us do respect them, even when the risk of sanctions, in most situations, is quite low. I also noticed the panoply of quotes from Discipline and Punish, in particular from the beginning of the chapter “Panopticism,”[6] which of course perfectly resonates with our current experience of the quarantine, as it describes the disciplinarization of a city and its inhabitants during a plague epidemic. However, if we just insist on coercive measures, on being confined, controlled, and “trapped” at home during these extraordinary times, we risk overlooking the fact that disciplinary and biopolitical power mainly functions in an automatic, invisible, and perfectly ordinary way—and that it is most dangerous precisely when we do not notice it.

Instead of worrying about the increase of surveillance mechanisms and indiscriminate control under a new “state of exception,” I therefore tend to worry about the fact that we already are docile, obedient biopolitical subjects. Biopolitical power is not (only) exercised on our lives from the “outside,” as it were, but has been a part of what we are, of our historical form of subjectivity, for at least the past two centuries. This is why I doubt that any effective strategy of resistance to its most dangerous aspects should take the form of a global refusal, following the logic of the “blackmail” of biopolitics. Foucault’s remarks about a “critical ontology of ourselves”[7] may turn out to be surprisingly helpful here, since it is the very fabric of our being that we should be ready to question.

The (Bio)Politics of Differential Vulnerability

The second point that I would like to discuss—a crucial one, but alas one that I rarely find mentioned in the contributions mobilizing the notion of biopolitics to address the current coronavirus pandemic—is the inextricable link that Foucault establishes between biopower and racism. In a recent piece, Judith Butler rightly remarks on “the rapidity with which radical inequality, nationalism, and capitalist exploitation find ways to reproduce and strengthen themselves within the pandemic zones.” This comes as a much-needed reminder in a moment in which other thinkers, such as Jean-Luc Nancy, argue on the contrary that the coronavirus “puts us on a basis of equality, bringing us together in the need to make a common stand.” Of course, the equality Nancy is talking about is just the equality of the wealthy and the privileged—those who are lucky enough to have a house or an apartment to spend their quarantine in, and who do not need to work or can work from home, as Bruno Latour already observed. What about those who are still forced to go to work every day because they can neither work from home nor afford to lose their paycheck? What about those who do not have a roof over their head?

In the last lecture of “Society Must Be Defended,” Foucault argues that racism is “a way of introducing a break into the domain of life taken over by power: the break between what must live and what must die.”[8] In other words, with the emergence of biopolitics, racism becomes a way of fragmenting the biological continuum—we all are living beings with more or less the same biological needs—in order to create hierarchies between different human groups, and thus (radical) differences in the way in which the latter are exposed to the risk of death. The differential exposure of human beings to health and social risks is, according to Foucault, a salient feature of biopolitical governmentality. Racism, in all of its forms, is the “condition of acceptability” of such a differential exposure of lives in a society in which power is mainly exercised to protect the biological life of the population and enhance its productive capacity.[9] We should therefore carefully avoid reducing biopolitics to the famous Foucauldian formula “making live and letting die.”[10] Biopolitics does not really consist in a clear-cut opposition of life and death, but is better understood as an effort to differentially organize the gray area between them. The current government of migration is an excellent example of this, as Martina Tazzioli convincingly shows when talking of “biopolitics through mobility.”[11] Indeed, as we are constantly, sometimes painfully reminded these days, biopolitics is also, and crucially, a matter of governing mobility—and immobility. Maybe this experience, which is new for most of us, will help us realize that the ordinary way in which “borders” are more or less porous for people of different colors, nationalities, and social extractions deserves to be considered as one of the main forms in which power is exercised in our contemporary world.

In short, biopolitics is always a politics of differential vulnerability. Far from being a politics that erases social and racial inequalities by reminding us of our common belonging to the same biological species, it is a politics that structurally relies on the establishment of hierarchies in the value of lives, producing and multiplying vulnerability as a means of governing people. We might want to think about this next time that we collectively applaud the “medical heroes” and “care workers” who are “fighting the coronavirus.” They deserve it, for sure. But are they really the only ones who are “taking care” of us? What about the delivery people who make sure that I receive what I buy while safely remaining in my quarantined apartment? What about the supermarket and pharmacy cashiers, the public-transportation drivers, the factory workers, the police officers, and all of the other people working (mostly low-income) jobs that are deemed necessary for the functioning of society? Don’t they also deserve—and not exclusively under these “exceptional” circumstances—to be considered “care workers”? The virus does not put us on a basis of equality. On the contrary, it blatantly reveals that our society structurally relies on the incessant production of differential vulnerability and social inequalities.

The Political Grammar of the Crisis

Foucault’s work on biopolitics is more complex, rich, and compelling for us today than it appears to be under the pen of those who too quickly reduce it to a series of anathemas against disciplinary confinement and mass surveillance or who misleadingly utilize it to talk about the state of exception and bare life.[12] I do not want to suggest, however, that the notion of biopolitics should be taken as the ultimate explanatory principle capable of telling us what is happening and what the “solution” to all of our problems is—and this, not only because of the “historically differentiated character of biopolitical phenomena” correctly emphasized by Roberto Esposito, but also for a deeper methodological reason. Our political thought is a prisoner of the “grammar of the crisis” and its constrained temporality, to the extent that critical responses to the current situation (or, for that matter, to virtually all of the recent economic, social, and humanitarian “crises”) do not seem able to look beyond the most immediate future.[13] Thus, if I agree with Latour that the current “health crisis” should “incite us to prepare for climate change,” I am far less optimistic than he is: this will not happen unless we replace the crisis-narrative with a long-term critical and creative effort to find multiple, evolving responses to the structural causes of our “crises.” To elaborate responses, instead of looking for solutions, would mean avoiding short-term problem-solving strategies aimed at changing as little as possible of our current way of living, producing, traveling, eating, etc. It would mean exploring alternative social and political paths in the hope that these experiments will last longer than the time between the present “crisis” and the next one, while acknowledging that these transformations are necessarily slow, since we cannot just get rid of our historical form of being in the blink of an eye. In a word, it would mean having faith in our capacity to build a future, not only for ourselves, but for countless generations yet to come. And to actually start doing it.

 

New York City

2 April 2020


Daniele Lorenzini is Assistant Professor of Philosophy at the University of Warwick, where he is also Deputy Director of the Centre for Research in Post-Kantian European Philosophy. A coeditor of Foucault Studies, his most recent books include La force du vrai: De Foucault à Austin (2017) and Éthique et politique de soi: Foucault, Hadot, Cavell et les techniques de l’ordinaire (2015).


[1] See Michel Foucault, The History of Sexuality, Volume 1: An Introduction (New York: Pantheon Books, 1978), 135-145; Michel Foucault, “Society Must Be Defended”: Lectures at the Collège de France, 1975-1976 (New York: Picador, 2003), 239-263.

[2] Foucault, The History of Sexuality, Volume 1, 143 (translation modified).

[3] Foucault, “Society Must Be Defended”, 244.

[4] Ibid., 240 (translation modified).

[5] Michel Foucault, “What is Enlightenment?”, in The Foucault Reader, ed. Paul Rabinow (New York: Pantheon Books, 1984), 42-43.

[6] See, e.g., this dossier on “Coronavirus and Philosophers”. To read Foucault’s analysis in full, see Michel Foucault, Discipline and Punish: The Birth of the Prison (New York: Vintage Books, 1977), 195-200.

[7] Foucault, “What is Enlightenment?”, 47.

[8] Foucault, “Society Must Be Defended”, 254 (translation modified).

[9] Ibid., 255-256 (translation modified).

[10] Foucault, The History of Sexuality, Volume 1, 138-141; Foucault, “Society Must Be Defended”, 241-243.

[11] Martina Tazzioli, The Making of Migration: Biopolitics of Mobility at Europe’s Borders (London: Sage, 2019), 106. Although this has passed virtually unnoticed, in the first volume of his History of Sexuality, Foucault mentions migrations as one of the main areas in which biopolitical mechanisms of power function. See Foucault, The History of Sexuality, Volume 1, 140.

[12] See, e.g., Giorgio Agamben’s texts on coronavirus, as well as Gordon Hull’s critical response.

[13] See Daniele Lorenzini and Martina Tazzioli, “Critique without Ontology: Genealogy, Collective Subjects, and the Deadlocks of Evidence”, Radical Philosophy, forthcoming.


Hanging in the Air

Andrea Brady

 

Being less than an activity

we empty out the life that hangs

like code in the air, but for how long

does it survive there if the air is white and lush,

more benevolent to the city than ever, whose leaves are out

of a season we are missing. It hangs

on the window like a recrimination,

a rainbow trail, the wolf’s chalky invite

to the last kid hiding in the clock.

And like a call; and is filled with calls

of the chattering species

whose voices are carried from house to house

parties and face times, many heard, the more silent.

 

And like nothing but indifference

growing warmer in the tangled biome to its human

carriers. We pick our way prudently down the street.

The person who passes is like us

a matrix of infection. We turn around at the head

of the aisle that has someone in it, and wash our hands

and shrink. Our hands are very dry now. Our mean gestures have all changed.

 

 

When in this poem I say we I mean a nuclear family in London

who are lucky. Having outside space.

The ball keeps getting kicked over the fence, and there is someone

there to return it.

 

 

A friend, who is Chinese, has been repeatedly abused in the street.

Mean gestures, filthy speech. The street is also the space

where our neighbours are clapping. Where we perform distance

to contain the bad humours that may be hidden

in another body. Hidden inside a room that can’t be left

because of the news, the violent man, the guard, the border. It is now

very easy to get sectioned. We consider ourselves indefinitely

separated from our friends and lovers and nothing will be the same

until it is, and the amazonification

of the planet will be complete, and we’ll be released

from our incommensurate lockdown to party and write poems

upon poems about the virus and the discourse of war.

And some will still not be able to go out into

the streets still full of the performance of abuse.

 

 

For now we pick apart the hem looking for silver linings

inside the garment of bad surprises.

 

 

My kids have been teaching me about black holes, clock time

and dentistry in ancient Egypt.

I thought the singularity was a site of infinitely dense matter

but it’s the profound energy that distorts space and time.

They’re overjoyed to learn that if they tried to pass

through its horizon they’d be spaghettified,

their whole body a stream of plasma

one atom wide. If your being was not then empty

it would be still, watching the universe shift

and quicken before it.

 

 

Right now I’m writing this standing up because I’m teaching

and working and printing and feeding and remembering

and in pain. When you’re sick or in pain it’s hard to remember

what it was like not to be, the self that streamed

painlessly through another world is not yourself, the light

stuttering on her face was not your light or your face.

How could I have been so stupid not to notice

how easy it has always been for me to move down the street?

Right now I am trying to read and not read the accounts

of the anaesthetists. I misread the inhalation

of toxic gas as toxic glass. I don’t want to think of all the people alone

 

 

I tell the kids to write about their experiences

of this big historical singularity

and hide its data from them. I could say it’s like the way

the black hole can’t be seen but shifts everything around it

but that’s a comparison in a poem and the kids just laugh.

They know that the collapse of everything clears

the air at least. How cool the sky would always be

without the scratching of motors. We could lag together,

smooth in our suspension.

 

We stay in the yard.

 

In its green and yellow is an image

of the lungs we will be given

if we cross the horizon and abandon

the nuclear family, private property, obedient domains.

 

1 April 2020

 


Andrea Brady’s books of poetry include The Strong Room (Crater, 2016), Dompteuse (Book Thug, 2014), Cut from the Rushes (Reality Street, 2013), Mutability: Scripts for Infancy (Seagull, 2012), and Wildfire: A Verse Essay on Obscurity and Illumination (Krupskaya, 2010). She is Professor of Poetry at Queen Mary University of London, where she founded the Centre for Poetry and the Archive of the Now.


Would a Shaman Help?

Michael Taussig

 

A friend in the Midwest asks if a shaman could help in the present crisis.

Given presidential grandstanding and the run on toilet paper and guns, it seems like a reasonable question. But it all depends on what kind of shamanism and what kind of help.

Shamanism is no substitute for science as regards virology, but as performance art sparking the imagination, it could dampen panic, ease social isolation, and promote cohesion. As a Happening it may not have raised the Pentagon during the Vietnam War, but it emboldened the imagination that brought that war to an end. During Occupy Wall Street in downtown NYC, you could smell burning sage learnt from Native American shamanism. Attempting to resist the white man from the Rockies to the Plains, the Ghost Dancers were massacred, but now the white man needs to form the magic circle, compose the songs, and start dancing too. And for sure it will be a magical circle seeing as we are now in strict isolation.

Giorgio de Chirico’s melancholy paintings of Roman arcades and streets without people are no less shamanic, capturing the aura Walter Benjamin found in Eugène Atget’s photographs of Paris streets likewise without people. Being alone in cities with empty streets and piazzas is more shamanic than the “real thing.”


With his prescient focus on viral epidemics and on words as mutating viruses, William S. Burroughs would certainly be asking my friend’s question, especially as regards his notion of the “composite city” as a mosaic of fabulous forms. For him it all began in 1953 with his eye-opening encounter taking the hallucinogen yagé (ayahuasca) with shamans in the Putumayo region of southwest Colombia, which I visited annually from 1972 to 1999.

The phantasmatic properties of viral pandemics in the fiction that followed paralleled his yagé experience with shamans. His curiosity was writerly, becoming a few years later a conscious method of cutting up images and, with that practice, confronting “Control,” spiritual no less than political.

As with yagé, the cut ups were intended to connect language with the body in galvanic upheavals of subject-object relations for which the all-night wordless song is essential.


Shamanism is primarily a means for buffering rumor and paranoia. Yet it depends on that too. Who is bewitching (read infecting) who? Fox News and Trump are pretty good at this shamanic warfare. Hence our need for an alternative. It is not a choice but a necessity.

The yagé séance is a small-group unscripted theatrical exorcism of the malevolence the sorcerer projected into oneself. Relief depends on visions flowing into one like a blue substance, storytelling, and the fiercely visceral sensations that recur in wave-like rhythms with the divine hum of the shaman, the hum of the waking world.

But could anything like this be achieved in a situation of social distancing and lock-down? Can you on your lonesome cook up image and music repertoires, say like Alice Coltrane, so as to engage inner fear with global meltdown? Here’s the thing: due to the pandemic the gates of creativity swing wide open. We have to become our own shaman.

An important yagé trope for me and I think for my Putumayo friends was to see the shamanic experience as journeying through the “space of death.” Dante presented a version, but his is famously symmetrical and ordered. The yagé space of death is not.

Shamanic magic today owes much to colonial projections of magical power to the primitive. That, combined with the at times terrifying sensation of dying under the influence of yagé, made me think of the Iberian conquest of Latin America as bringing together the magical underworlds of Africa, the Iberian Peninsula, and indigenous America. In my estimation, akin to the historiographic practices of Benjamin and Aby Warburg, this is alive as occult force today yet easily abused by people looking for the shamanic fix, including dime-a-dozen shamans themselves.

My friend’s question begs the big picture. How have we been looking at climate change?

One opinion is that we in the West long ago disenchanted nature.

But what the question opens up is the thought that with global meltdown we now live in a reenchanted universe for which the aesthetic of a dark surrealism is relevant. It is a mutating reality of metamorphic sublimity that never lets you know what is real and what is not. Born from WWI, there is a lot of Dada here too, with its shock effects and montage. We were told the bourgeoisie had gotten bored with that. But now, have not Dada and surrealism returned with a vengeance? Before, it was avant-garde subsiding into history. But now, with the reenchantment of nature, history is subsiding into Dada, and it’s not so boring, not with swans and dolphins being sighted (so it is said) in the now clear canal water of Venice where people are dying in quantities and “Death in Venice” recurs as if an Eternal Return while tourists flee in their pestilential cruise ships in a replay of Michel Foucault’s Great Confinement.

Perhaps the strangest things of all are the masks of the medico della peste, the doctor of the plague, for sale until lockdown in the ubiquitous tourist curio shops in Venice. It is an unsettling mask with a long beak that I could never make sense of. Now I get it. The beak was the fifteenth-century equivalent to the surgical mask of today (and people think the germ theory of disease is modern!). It was filled with sweet-smelling flowers. A drawing by Paul Hirst in 1721 is spooky in the extreme. It shows a beak-masked plague doctor with huge goggles and an overflowing gown so large it could encompass the universe. He is the epitome of the black plague and the coronavirus. That would be the sympathetic magic of like affects like as the fourteenth century meets today.


Of course they were a superstitious lot back then, not like today as people scurry for toilet paper and guns.

As people die, the pope has just announced that you can confess directly to God. Opera singers belt out arias from their balconies. It seems like the shamanism I was describing: lavish images in the space of death, as the divine hum like a candle in the night steadies the soul in our reenchanting world.

Shamanism coexists with allopathic medicine, with penicillin and dialysis machines, for example. It’s not one or the other. What the latter lacks, however, along with political economy, is the divine hum of the reenchanted universe that opens the doors of perception just as the virus does. That’s what I’ll tell my friend.

High Falls, NY

30 March 2020


Michael Taussig teaches anthropology at Columbia University. He is the author of The Devil and Commodity Fetishism (1980); Shamanism, Colonialism, and the Wild Man (1987); The Nervous System (1992); Mimesis and Alterity (1993); Law in a Lawless Land (2003); and My Cocaine Museum (2004).

 


The Rise and Fall of Biopolitics: A Response to Bruno Latour

Joshua Clover

How swiftly do genres of the quarantine emerge! Notable among them is the discovery of the relation between the present pandemic and onrushing climate collapse. The driving force of this genre is not holy shit two ways for a lot of people to die but the realization, or hope, that the great mobilizations of state resources currently being unspooled to address COVID-19 prove the possibility of a comparable or greater mobilization against ecological catastrophe, an even greater threat if somewhat less immediate. There is to be sure a certain mixing of analogies: in the United States, confronting climate change is conventionally likened to the New Deal or Marshall Plan, schemes to hedge against the charisma of communism, while addressing the pandemic decisively takes the language of war itself, a “war footing,” “wartime president,” and so on. This is an interesting slippage, no doubt, though both analogies rely on a vision of preserving global hegemony. Insert rueful laugh.

Bruno Latour provides a recent example of this genre; it appeared dually in Le Monde and Critical Inquiry on 25 March, here under the title “Is This a Dress Rehearsal,” and in French under the more prosaic but imperative “Health Crisis Demands We Prepare for Climate Change.”[1] The short piece is filled with the author’s habits of mind such as the inevitable “Latour Litany,” a list of all the various actors human and inhuman in an “entire network,” enumerated with an insistent leveling of its contents where what matters is that all these actors stand in ratio with each other, mute equivalents. It is as if exchange value had taken up a side hustle as a theorist. The goal is to demonstrate yet again the indistinction of nature and society toward discovering the obvious truth that “The pandemic is no more a ‘natural’ phenomenon than the famines of the past or the current climate crisis.”

But here problems arise for the comparison, as the author himself admits. Writing from France, he notes that Emmanuel Macron’s capacity to confront the pandemic is not of a kind with even his least gesture toward (purported) climate abatement, recalling how his gas tax was met not with relief and a thirst for more but with the riots of the Gilets Jaunes movement. Per Latour, this is because Macron — and ostensibly other leaders — have not forged the kind of new state that climate collapse will require. Instead, “we are collectively playing a caricatured form of the figure of biopolitics that seems to have come straight out of a Michel Foucault lecture.”

He means Foucault’s final lecture on the theme Society Must Be Defended, describing a new kind of power. Whereas once “Sovereignty took life and let live,” he writes, we discover toward the end of the eighteenth century “the emergence of a power that . . . in contrast, consists in making live and letting die.” This is the famous formula of biopolitics: the sovereign power to make live and let die.

Latour notes that this power’s deployment in the present moment includes “the obliteration of the very many invisible workers forced to work anyway so that others can continue to hole up in their homes.” Rightly so — this is a peculiarly awful time to be a delivery worker, from the warehouse or restaurant to the driver anxiously tossing a box on your porch. Recent days have presented an even more devastating turn: pronouncements by various governmental figures who, noting the economic devastation of COVID-19, proclaimed that people would have to abandon quarantine procedures after a fortnight at the very most and return to work so as to avoid cratering the economy. This despite the medical certainty that this would lead to more transmissions and more deaths. Forty-four years and five days after Foucault’s lecture, Donald Trump tweeted, WE CANNOT LET THE CURE BE WORSE THAN THE PROBLEM ITSELF. AT THE END OF THE 15 DAY PERIOD, WE WILL MAKE A DECISION AS TO WHICH WAY WE WANT TO GO! If this was in any way opaque, two days later Texas Lieutenant Governor Dan Patrick speculated, “are you willing to take a chance on your survival in exchange for keeping the America that America loves for its children and grandchildren? And if that is the exchange, I’m all in.”

But this course of action is not speculative at all: rather it seems to be the express plan of the state, coming soon. Look, to save the economy, we’re gonna have to kill some folks. Like, a lot. Horrified humans immediately noted this was a blood sacrifice to capitalism, and who could disagree? This is the most dramatic political development since the early hours of the millennium, if not very much longer. It must seem like the apotheosis of biopolitics: a crackpot sovereign deciding at national scale who will be made to live, who let die.

Except for the way in which this was, in the clearest manner, the reverse. By 22 March, Goldman Sachs was already predicting an unparalleled 2.5 million new jobless claims; this would prove optimistic.


Meanwhile the Senate tinkered with its relief bill. The massive transfers to corporations were a given, for which 2008 now appears as a dress rehearsal. The haggling endeavored to dial in the exact size of the direct payment to citizens. It would need to restore enough aggregate demand to keep the economy breathing (a ventilator of sorts) while taking care not to give a single prole the incentive to be, in the face of a global and terrifying pandemic poised to kill millions absent assiduous measures taken by all, lazy. And it is to this delicate measure that presidents must also dance, not the measure decided on by the legislature, but the measure of that abstraction “the economy.” Nothing could have thrown Foucault’s formulations about sovereignty and regimes of power, and especially the limits of these ideas, into clearer relief than this week’s pronouncements, provisions, and data.

This is not to say there is no such thing as biopolitics nor any power to make live and let die. Clearly there is; clearly it is this that is wielded by all the Trumps great and small. Nonetheless it is apparent that the sovereign is not sovereign. Rather he is subordinated entirely to the dictates of political economy, that real unity of the political and economic forged by capital and its compulsions. Make live and let die is simply a tool among others in this social order whose true logic, from Trump’s tweet to Dan Patrick to the Senate bill, is the power employed always as a ratio of make work and let buy.

Here we must take a final turn toward where we began and reenter the genre named at the outset. The link between coronavirus and climate is more direct than a mere analogy between two threats that challenge our senses of scale and temporality and so seem to demand something like a state to address them. Rather, it turns out that one shows us the character of the other with horrific lucidity. We should not be surprised to discover that, like the 2008 economic collapse, the pandemic has significantly reduced emissions globally. The reductions have been particularly marked in China and Italy, the two most devastated nations. We might expect, glancing at the rate of spread and those unemployment numbers, that we will see similar results from the United States. Maybe we will get right with the Paris Accords after all.

This is not to say that we should imagine the virus as a redeemer; that is a particularly grotesque fantasy. Its role in a temporary retreat of planetarily fatal emissions is nonetheless informative. Ecological despoliation is a consequence not of humans, as the name “Anthropocene” and Latour’s essay suggest, but of industrial production and its handmaidens, and only forces that can bring it to heel will allow us to prepare for climate change. Capital, with its inescapable drive to reproduce itself, is not some actor in a network, equivalent to other actors, but an actual cause. The compulsion to produce, and to produce at a lower cost than competitors, in turn compels the burning of cheap and dirty fuels to drive the factories, to move the container ships, even to draw forth from the ground the material components of “green energy” sources. The Gilets Jaunes did not riot because they objected to ecological policies but because the economy dictates that they find jobs in places they cannot afford to live, and to which they must therefore commute.


We must take this fact with the utmost seriousness: that Foucault’s new regime of power appears in the late eighteenth century, which is to say, alongside the steam engine and the industrial revolution, which is also to say, alongside the liftoff of anthropogenic climate change. We need to stop fucking around with theory and say, without hesitation, that capitalism, with its industrial body and crown of finance, is sovereign; that carbon emissions are the sovereign breathing; that make work and let buy must be annihilated; that there is no survival while the sovereign lives.

29 March 2020


Joshua Clover is a professor of English at the University of California, Davis.  He is also a faculty member in the Department of Comparative Literature and affiliated faculty in the French and Italian departments, Film Studies Program, and the Designated Emphasis in Critical Theory. He is affiliated with the Mellon Research Initiative in Racial Capitalism. His most recent book is Riot. Strike. Riot: The New Era of Uprisings (2016).


[1] This translation mine; the remainder come from the English text.


Is This a Dress Rehearsal?

Bruno Latour

The unforeseen coincidence between a general confinement and the period of Lent is still quite welcome for those who have been asked, out of solidarity, to do nothing and to remain at a distance from the battle front. This obligatory fast, this secular and republican Ramadan can be a good opportunity for them to reflect on what is important and what is derisory. . . . It is as though the intervention of the virus could serve as a dress rehearsal for the next crisis, the one in which the reorientation of living conditions is going to be posed as a challenge to all of us, as will all the details of daily existence that we will have to learn to sort out carefully. I am advancing the hypothesis, as have many others, that the health crisis prepares, induces, incites us to prepare for climate change. This hypothesis still needs to be tested.


What allows the two crises to occur in succession is the sudden and painful realization that the classical definition of society – humans among themselves – makes no sense. The state of society depends at every moment on the associations between many actors, most of whom do not have human forms. This is true of microbes – as we have known since Pasteur – but also of the internet, the law, the organization of hospitals, the logistics of the state, as well as the climate. And of course, in spite of the noise surrounding a “state of war” against the virus, it is only one link in a chain where the management of stocks of masks or tests, the regulation of property rights, civic habits, gestures of solidarity, count exactly as much in defining the degree of virulence of the infectious agent. Once the entire network of which it is only one link is taken into account, the same virus does not act in the same way in Taiwan, Singapore, New York, or Paris. The pandemic is no more a “natural” phenomenon than the famines of the past or the current climate crisis. Society has long since moved beyond the narrow confines of the social sphere.

Having said that, it is not clear to me that the parallel goes much further. After all, health crises are not new, and rapid and radical state intervention does not seem to be very innovative so far. One need only look at President Macron’s enthusiasm to take on the figure of head of state that he has so pathetically lacked until now. Much better than terrorist attacks – which are, after all, only police business – pandemics awaken in leaders and those in power a kind of self-evident sense of  “protection” – “we have to protect you” “you have to protect us” – that recharges the authority of the state and allows it to demand what would otherwise be met with riots.

But this state is not the state of the twenty-first century and ecological change; it is the state of the nineteenth century and so-called biopower. In the words of the late Alain Desrosières, it is the state of what is rightly called statistics: population management on a territorial grid seen from above and led by the power of experts.[1] This is exactly what we see resurrected today – with the only difference that it is replicated from one nation to the next, to the point of having become world-wide. The originality of the present situation, it seems to me, is that by remaining trapped at home while outside there is only the extension of police powers and the din of ambulances, we are collectively playing a caricatured form of the figure of biopolitics that seems to have come straight out of a Michel Foucault lecture. Including the obliteration of the very many invisible workers forced to work anyway so that others can continue to hole up in their homes – not to mention the migrants who, by definition, cannot be secluded in any home of their own. But this caricature is precisely the caricature of a time that is no longer ours.

There is a huge gulf between the state that is able to say “I protect you from life and death,” that is to say from infection by a virus whose trace is known only to scientists and whose effects can only be understood by collecting statistics, and the state that would dare to say “I protect you from life and death, because I maintain the conditions of habitability of all the living people on whom you depend.”

Think about it. Imagine that President Macron came to announce, in a Churchillian tone, a package of measures to leave gas and oil reserves in the ground, to stop the marketing of pesticides, to abolish deep ploughing, and, with supreme audacity, to ban outdoor heaters on bar terraces. If the gas tax triggered the yellow-vests revolt, then imagine the riots that would follow such an announcement, setting the country ablaze. And yet, the demand to protect the French people for their own good and from death is infinitely more justified in the case of the ecological crisis than in the case of the health crisis, because it affects literally everyone, not a few thousand people – and not for a time but forever.

It is clear that such a state does not exist — and maybe fortunately so. What is more worrying is that we do not see how that state would prepare the move from the one crisis to the next. In the health crisis, the administration has the very classic educational role and its authority coincides perfectly with the old national borders – the archaism of the sudden return to European borders is painful proof of this. In the case of ecological change, the relationship is reversed: it is the administration that must learn from a multiform people, on multiple scales, what will be the territories upon which people are trying to survive in many new ways as they seek to escape from globalized production. The present state would be completely incapable of dictating measures from above. If in the health crisis, it is the brave people who must relearn to wash their hands and cough into their elbows as they did in primary school, in the case of the ecological mutation, it is the state that finds itself in a learning situation.

But there is another reason why the figure of the “war against the virus” is so unjustified: in the health crisis, it may be true that humans as a whole are “fighting” against viruses – even if they have no interest in us and go their way from throat to throat killing us without meaning to. The situation is tragically reversed in ecological change: this time, the pathogen whose terrible virulence has changed the living conditions of all the inhabitants of the planet is not the virus at all, it is humanity! But this does not apply to all humans, just those who make war on us without declaring war on us. For this war, the national state is as ill-prepared, as badly calibrated, as badly designed as possible because the battle fronts are multiple and cross each one of us. It is in this sense that the “general mobilization” against the virus does not prove in any way that we will be ready for the next one. It is not only the military that is always one war behind.

But finally, you never know; a time of Lent, whether secular or republican, can lead to spectacular conversions. For the first time in years, a billion people, stuck at home, find this forgotten luxury: time to reflect and thereby discern that which usually and unnecessarily agitates them in all directions. Let’s respect this long, painful, and unexpected fast.

26 March 2020

[This post was originally published in French in Le Monde.]


Bruno Latour is an emeritus professor associated with Sciences Po médialab.


[1] Alain Desrosières, The Politics of Large Numbers: A History of Statistical Reasoning,  trans. Camille Naish (Cambridge, Mass., 2002).

 


When Movies Get Sick

Kyle Stevens

 

Space is never just space. Sometimes we think of it as the air around us. Sometimes we think of it as a thing in which to find a WiFi signal. Sometimes it’s what we need when we’ve had an argument with someone we love. Perhaps most often potentiality is the value assigned to it: What can be put here? Who can live there? Which plant can grow everywhere? Rarely is space treated as inherently dangerous, villainous (that is in part what makes the films of Fritz Lang or Kira Muratova exceptional), yet that is precisely one of the tectonic shifts wrought by living in the era of Covid-19. It has suddenly unfastened the values that traditionally attach to proximity, particularly regarding human bodies, as a vocabulary focusing on the distance between them—social distancing, self-isolation, quarantine—becomes part of our quotidian language. Questions of what is close, what is far, and what is far too close have become matters of life, death, and illness. The space between bodies is a measure of harm, even violence. The embrace is no longer the signifier of core social values. Standing six feet apart is. Distance has become the sign of intimacy—of respect, care, concern, shared understanding of a shared world, a sense of belonging to a form of life.

Within discussions of film aesthetics, cinephiles tout the value of cinema for inviting audiences to attend to bodies in space, referring to the composition recorded by the camera and whose projection is offered up for our pleasure. Adrian Martin provides an account of the traditional view of a film’s “mise en scène as the movement of bodies in space—a space constantly defined and redefined by the camera.”[i] This idea motivates a critical recommendation to see, at least to some degree, beyond characters situated within a narrative to the pleasures of graphic compositions and, perhaps from there, to aesthetic questions of scale, shape, line, and so forth.

But as the perception of space is reconditioned in life under Covid, our encounters with fictional spaces, and with what and how they express, alter. Even when we know stories are not set in the present, the new regimes of bodily organization affect how we might see onscreen space. When watching movies, I have lately found myself wincing ever so slightly at people dancing in clubs or at a friend running up to another in the street for a hug. So-called negative space between characters, traditionally construed as an aesthetic choice, now takes on a biopolitical urgency, a politicized and medical meaning. Space comes to the foreground as negative space, but is it properly called negative if we worry that it is full of contagions? Space remains unsubstantial but no longer quite so inert when the invisible has become urgently visible.

We might of course think about how we receive narratives overall during this new era. (When we are confined to our homes, will we be more sympathetic to Jeff in Rear Window [dir. Alfred Hitchcock, 1954], who is concerned with the well-being of his infirm neighbor?) But I suspect that our perception of bodies in space will more subtly restructure, and that the situation of onscreen figures will strike us, not necessarily consciously, but affectively, differently. Consider a few examples.


The opening of The Sound of Music (dir. Robert Wise, 1965) has been a classic image of freedom, of joy. Away from, as we soon learn, the confines of a sexually and vocally repressive convent, the wide open space affords Maria sovereignty of expression. Now, however, that sense of release is bolstered by the perception that she is safe, away from the threat that other bodies bring. Perhaps this was always part of why the image registered as freeing, perhaps others are always a threat to one’s sovereignty. Yet what was once an affordance of nature begins to fade into an affordance structured by the lack of others.


Similarly, in Alfred Hitchcock’s North by Northwest (1959), the very openness of the field seemed to anticipate the possibility of threat coming from all sides. Hitchcock even shows us that there is nothing around our protagonist in shots that cover a full 360-degree range. However, now that field is also reassuringly devoid of people. Openness becomes safety.


But when another character shows up yet does not approach, do we now register this as a sign of tension, of awkwardness, or of propriety, caution, even care? Or the film may become even more Hitchcockian as the Cary Grant character is forced to confront the fact that there is no visible difference between friend and foe. Is the man the savior he needs or an enemy infector? The risk he takes in walking towards the other man is now more suspenseful as he may be thrusting himself upon the knife.


In Last Year at Marienbad (1961), Alain Resnais employed social distancing to solicit contemplation of social alienation, but now this configuration also transforms into an image of social responsibility. Not alienation, but care for others, of self-care and social care. It is an image in which love and duty meet, not a vision of postwar alienation, but a foreshadowing of future forms of being.


In How to Marry a Millionaire (dir. Jean Negulesco, 1953), a framing that emphasized the interval between two people previously suggested remoteness, that they were not destined for coupledom (if not to demonstrate CinemaScope). Now it seems more like a respectable span for two people getting to know one another.


How much more romantic has become a touch, especially a touch of the face? In Portrait of a Lady on Fire (dir. Céline Sciamma, 2019) the borders of bodies are blurred such that a hand may belong to either lover, now overlaying our affective swoon with a frisson of anxiety. Perhaps it will no longer seem coincidental that we use falling to name the entry into both love and illness.


In Claire Denis’s Beau Travail (1999), the regimented, equidistant bodies—configurations often seen in musicals, too—suggested rigid conformity, fascism. Now they appear mass ornamental, an ideal arrangement.


For this reason, solitude registers anew. Baxter, in The Apartment (dir. Billy Wilder, 1960), working alone in the office, seems lucky to have gotten out of the house. Loneliness is less available to visual signification than it once was.


As is friendship. In Tangerine (dir. Sean Baker, 2015), the distance at which two friends walk signals a latent hostility in their relationship. Alexandra is frustrated with Sin-Dee’s anger and impulsive behavior. But now it may not read that way. This may simply look like how two friends walk together.



Cultural minorities never needed Immanuel Kant to tell them that space is subjective and “not something objective and real, nor a substance, nor an accident, nor a relation.” The registration of proximity as aggressiveness, menace, is well known to queer subjects who fear detection. It is in fact the default queer mode of inhabiting public spaces. Tea and Sympathy (dir. Vincente Minnelli, 1956) understood this. Contrast the proximity of the cishet macho fellows on the beach with the world of women, in which a young, burgeoning homosexual tries to hide himself in plain sight. The women are social distancing, which allows him to, as well. Here, nearness means danger and distance means security. In this context, proximity has not only been a historical marker of intimacy but of privilege, of a confidence in one’s belonging with and around others.

I am reminded, too, of the 1938 jazz standard “The Nearness of You,” written by Hoagy Carmichael and Ned Washington: “It’s not the pale moon that excites me, that thrills and delights me, oh no, it’s just the nearness of you.” Once tender, were this sung to an unrequited love today, these would be the words of a psychopath.


Kyle Stevens is a visiting assistant professor of film studies at MIT. He is the author of Mike Nichols: Sex, Language, and the Reinvention of Psychological Realism (2015), coeditor of the two-volume collection Close-Up: Great Screen Performances (2018), and editor of the forthcoming The Oxford Handbook of Film Theory. His essays have appeared in Critical Inquiry, Cinema Journal, Critical Quarterly, Film Criticism, and World Picture, as well as in several edited collections.


My thanks to Daniel Morgan for helpful feedback on this topic.

[i] Adrian Martin, Mise en Scène and Film Style: From Classical Hollywood to New Media Art (New York, 2014), p. 45.


To Quarantine from Quarantine: Rousseau, Robinson Crusoe, and “I”

Catherine Malabou

In May of 1743, a vessel from Corfu carrying the bodies of crew members who had died of a mysterious disease arrived in Messina. The ship and cargo were burned, but cases of a strange new disease were soon thereafter observed in the hospital and in the poorest parts of the town; and in the summer, a frightening plague epidemic developed, killing forty to fifty thousand people, and then disappeared before it could spread to other parts of Sicily. Rousseau was traveling from Paris to Venice and was forced to halt in Genoa because of the epidemic. He narrates his quarantine in the Confessions (1782):

It was at the time of the plague at Messina, and the English fleet had anchored there, and visited the Felucca, on board of which I was, and this circumstance subjected us, on our arrival, after a long and difficult voyage, to a quarantine of one-and-twenty days.

The passengers had the choice of performing it on board or in the Lazaretto, which we were told was not yet furnished. They all chose the Felucca. The insupportable heat, the closeness of the vessel, the impossibility of walking in it, and the vermin with which it swarmed, made me at all risks prefer the Lazaretto. I was therefore conducted to a large building of two stories, quite empty, in which I found neither window, bed, table, nor chair, not so much as even a joint-stool or bundle of straw. My night sack and my two trunks being brought me, I was shut in by great doors with huge locks, and remained at full liberty to walk at my ease from chamber to chamber and story to story, everywhere finding the same solitude and nakedness.

This, however, did not induce me to repent that I had preferred the Lazaretto to the Felucca; and, like another Robinson Crusoe, I began to arrange myself for my one-and twenty days, just as I should have done for my whole life. In the first place, I had the amusement of destroying the vermin I had caught in the Felucca. As soon as I had got clear of these, by means of changing my clothes and linen, I proceeded to furnish the chamber I had chosen. I made a good mattress with my waistcoats and shirts; my napkins I converted, by sewing them together, into sheets; my robe de chambre into a counterpane; and my cloak into a pillow. I made myself a seat with one of my trunks laid flat, and a table with the other. I took out some writing paper and an inkstand, and distributed, in the manner of a library, a dozen books which I had with me. In a word, I so well arranged my few movables, that except curtains and windows, I was almost as commodiously lodged in this Lazeretto, absolutely empty as it was, as I had been at the Tennis Court in the Rue Verdelet. My dinners were served with no small degree of pomp; they were escorted by two grenadiers with bayonets fixed; the staircase was my dining-room, the landing-place my table, and the steps served me for a seat; and as soon as my dinner was served up a little bell was rung to inform me I might sit down to table.

Between my repasts, when I did not either read or write or work at the furnishing of my apartment, I went to walk in the burying-ground of the Protestants, which served me as a courtyard. From this place I ascended to a lanthorn which looked into the harbor, and from which I could see the ships come in and go out. In this manner I passed fourteen days. [1]

Being told like the rest of humanity to “stay at home” because of the pandemic, I immediately remembered this passage from the Confessions. While all of his companions of misfortune chose to stay confined together on a boat, Rousseau decided to be locked up in the lazaretto instead. A lazaretto is a hospital for those affected with contagious diseases. A felucca, or Mediterranean sailing ship, could also be set apart for quarantine purposes. Obviously, the two possibilities were offered to travelers in Genoa, and Rousseau thought he had better leave the boat and stay on his own in the building.

MESSINA

One can read this episode by solely focusing on the idea of choice: What is best in a time of confinement? Be quarantined with other people? Or be quarantined alone? I must say that I spent some time wondering about such an alternative. If I had had the choice between the two options, what would I have done? (I am on my own, by the way, sheltered in quasi-total isolation in Irvine, California.)

There is something else, perhaps more profound, in this passage, which is that quarantine is only tolerable if you quarantine from it—if you quarantine within the quarantine and from it at the same time, so to speak. The lazaretto represents this redoubled quarantine that expresses Rousseau’s need to isolate from collective isolation, to create an island (insula) within isolation. Such is perhaps the most difficult challenge in a lockdown situation: to clear a space in which to be on one’s own while already separated from the community. Being cooped up on a boat with a few others of course generates a feeling of estrangement, but estrangement is not solitude, and solitude is, in reality, what makes confinement bearable. And this is true even if one is already on one’s own. I noticed that what made my isolation extremely distressing was in fact my incapacity to withdraw into myself, to find this insular point where I could be my self (in two words). I am not talking here of authenticity, simply of this radical nakedness of the soul that allows one to build a dwelling in one’s house, to make the house habitable by locating the psychic space where it is possible to do something, that is, in my case, write. I noticed that writing only became possible when I reached such a confinement within confinement, a place in the place where nobody could enter and that at the same time was the condition for my exchanges with others. When I was able to get immersed in writing, conversations through Skype, for example, became something else. They were dialogues, not veiled monologues. Writing became possible when solitude started to protect me from isolation. One has to undress from all the coverings, clothes, curtains, masks, and meaningless chattering that still stick to one’s being when one is severed from others. Social distance is never powerful enough to strip one of what remains of the social in the distance. Sheltering in place has to be a radical Robinson Crusoe experience, an experience that allows one to construct a home out of nothing. To start anew. Or to remember.

CRUSOE

I wonder if Foucault, at the end of his life, did not turn to the ethics of the self—care of the self, technologies of the self, government of the self—out of the same necessity: the urge to carve out a space for himself within the social isolation with which AIDS was insidiously threatening him. Perhaps Foucault was looking for his island, his absolute (ab-solutus) land, where he would have found the courage to speak and write before he died. Those who have seen in his late seminars a nihilistic individualist withdrawal from politics have totally missed the point.

We know that Karl Marx made fun of eighteenth-century robinsonades like Rousseau’s. Marx said that the origin of the social can by no means be a state of nature where isolated men finally come to meet and form a community. Solitude cannot be the origin of society.

This may be true, but I think it is necessary to know how to find society within oneself in order to understand what politics means. I admire those who are able to analyze the current crisis caused by the COVID-19 pandemic in terms of global politics, capitalism, the state of exception, ecological crisis, China-US-Russia strategic relationships, etc. Personally, at the moment, I am on the contrary trying to be an “individual.” This, once again, is not out of any individualism but because I think that an epoché, a suspension, a bracketing of sociality, is sometimes the only access to alterity, a way to feel close to all the isolated people on Earth. Such is the reason why I am trying to be as solitary as possible in my loneliness. Such is the reason why I would also have chosen the lazaretto.

23 March 2020


Catherine Malabou is a professor of philosophy at the Centre for Research in Modern European Philosophy, Kingston University, and of European languages and literatures and comparative literature at the University of California, Irvine. She is the author of Ontology of the Accident: An Essay on Destructive Plasticity (2012), Before Tomorrow: Epigenesis and Rationality (2016), and, most recently, Morphing Intelligence: From IQ Measurement to Artificial Brains (2019).


[1] Jean Jacques Rousseau, The Confessions, trans. pub., 2 vols. (London, 1903), 1:273-74.


Filed under 2020 Pandemic

The Climatic Virus in an Age of Paralysis


Nikolaj Schultz  

The collective reaction following COVID-19 seems to be a double-edged sword. On the one hand, the state of exception continues to generate fear, panic, and anxiety, in all of their respective differences. On the other hand, to more than a few people, the fear strangely enough seems to go hand in hand with a feeling of relief. If I am not wrong on this point, and even if the feeling of relief will pass as the crisis gets worse, then what are the origins of this dramatic emotional division?

My hypothesis is that we can only explain this two-sided collective psychology if we understand the social reactions to the new coronavirus as related to the helplessness societies are experiencing in the face of another civilizational tragedy: climate change.

The Triumph of Death, Pieter Bruegel the Elder

As already noted by many commentators, the enormous panic and action readiness that the virus has instilled in both the public and the state make the political and social reactions to climate change look rather vague. Obviously, the paradox here is that despite the great tragedy that the virus is, the enormous consequences of climatic mutations will, in all probability, by far surpass those of the virus.

The actions taken against the virus are without a doubt necessary, but we are still experiencing a huge, paradoxical distance between consequences and action. Citizens and social scientists would perhaps explain this gap with an analysis underlining the relation between affect and abstraction. Here, the argument would be that climate change does not accumulate affect and action because it is abstract, while the danger of the virus is concrete and thus affects people and provokes action. In other words, two phenomena at different levels of abstraction, and two different reactions, only one of which creates the necessary affect and agency.

However, this alone cannot explain the panic, the action readiness, and the civil mobilization that the virus has accumulated. If we wish to understand the magnitude of this reaction, it is perhaps more plausible to imagine that the public and the state are indeed affected by the enormous, abstract climatic risk but that those affects, out of sheer helplessness, are now projected onto the concrete risk of the virus, which is certainly easier to grasp: “Finally – a tangible apocalypse.”

My point is not that the draconian interventions in the infrastructure of society are unreasonable or exaggerated. My point is simply that we do not understand the sudden panic, action readiness, or civil sacrifice in a hitherto paralyzed society if we do not – at least partly – see this reaction as the result of a collective psychic milieu that climate change has made neurotic. Two different phenomena, but perhaps one and the same reaction in the end.

At first glance, this analysis seems unrealistic; it is too speculative, too psychoanalytic, too hypothetical. However, what is really unrealistic is imagining that there are no mass psychological consequences when a civilization for fifty years consciously ignores the proof of the catastrophic consequences of its actions and moves forward unabated in the same direction.

When a civilization walks blindfolded past four of nine planetary boundaries and continues directly into a sixth mass-extinction event, how could that not create a psychic environment of collective panic, one that can now finally be compensated for in facing the virus?

Now, this is exactly why the panic goes hand in hand with the relief. Am I wrong to suspect that the virus has not only accumulated fear but also a certain peace of mind in at least some people? Does my intuition lead me astray if I detect, next to people’s anxiety, almost a sense of balance? When we are not short of breath, these days, are we then not breathing even better than before?

As noted above, this doubleness seems difficult to explain. However, if we understand these affects as partly deriving from the climatic changes, then we find at least two reasons why it is a logical outcome: on the one hand, because the pandemic now allows for a concrete drain of the collective anxiety that the climate’s abstract risks accumulate; on the other hand, and perhaps more importantly, because we are right now seeing exactly how all the social systems that we thought made the ecological transition impossible – production, consumption, mobility, etc. – are not chiseled in stone but are in fact changeable.

If we today are relieved about the world taking a break for a while, then it is not just due to the vulgar banality that “people do not want to go to work.” If we feel a certain balance in the pandemic’s dramatic reorganization and short-circuiting of society’s social and economic systems, then it is due to sensing that the “acceleration society”[i] can be stopped and that its unimaginable consequences might not, after all, be inevitable.

Thus, the point is not only, as Slavoj Žižek has argued, that the virus is a strike at the heart of capitalism. He is probably right that it is, but if this flutter of capitalism’s heart leads to a collective feeling of relief, then it is because the lurking climatic catastrophe no longer appears as an absolute necessity. The relief emerges because the concrete crisis has shown us that the abstract crisis might not be unavoidable.

So, as Karl Polanyi would say, society can still defend itself.[ii] And not only are we watching social systems change, we are even discovering how social values are changing accordingly. Sure, some people are reinventing themselves as Ayn Randian sovereign individuals by hoarding toilet paper, and a few of the ultra-rich escapists that Bruno Latour and I have previously discussed as a geosocial elite have fled to New Zealand, where they are hiding from the virus in their climate-secured bunkers.[iii] However, as Rune Lykkeberg notes, in general the panic seems to have generated practices of solidarity that were impossible even to imagine a few weeks ago.

This only makes the relief even bigger. Both our material and our social destiny are still negotiable. And if this is an important realization, it is of course because of the hope that we – when the time is right – will be able to take advantage of the current collectivist momentum and its political energy to create a realistic connection between the direction of civilization and its earthly, material conditions of existence. However, the possibility of this is much greater if we understand that it might already be the absence of such a connection that we are reacting to, in panic as well as with relief.

This relief might very well disappear from the horizon within a few days or weeks, when the virus crisis reaches its peak. Fear will be all we have left, and we will unequivocally wish ourselves back to the days when everything was as it used to be. However, this does not necessarily make its insights any less important or valid – perhaps even the contrary.

It will be a strange spring and perhaps even a strange summer. However, maybe the concrete threat has given us a number of cognitive and practical strategies to counter the more abstract crisis that we are facing with climatic mutations. Despite its tragedies, the virus might end up as an emancipatory tool in an age of paralysis.

21 March 2020


Nikolaj Schultz, sociologist, is a PhD fellow in the Department of Sociology, University of Copenhagen. He is currently a visiting scholar in Paris, where he is working with the cosupervisor of his PhD thesis, Bruno Latour, on developing the concept of geosocial classes.


[i] Hartmut Rosa, Social Acceleration: A New Theory of Modernity (New York, 2013).

[ii] Karl Polanyi, The Great Transformation (Boston, 1944), esp. pt. 2, “Self-Protection of Society,” pp. 136-228.

[iii] See Rupert Neate, “Super-rich jet off to disaster bunkers amid coronavirus outbreak,” The Guardian, 11 Mar. 2020, https://www.theguardian.com/world/2020/mar/11/disease-dodging-worried-wealthy-jet-off-to-disaster-bunkers, and Edward Helmore, “Coronavirus lifestyles of the rich and famous: how the 1% are coping,” The Guardian, 13 Mar. 2020, https://www.theguardian.com/world/2020/mar/13/coronavirus-lifestyles-of-the-rich-and-famous-how-the-1-are-coping

 


Filed under 2020 Pandemic

Is Barbarism with a Human Face Our Fate?

Slavoj Žižek

These days I sometimes catch myself wishing to get the virus – in this way, at least the debilitating uncertainty would be over. . . A clear sign of how my anxiety is growing is how I relate to sleep. Till around a week ago I was eagerly awaiting the evening: finally, I can escape into sleep and forget about the fears of my daily life. . . Now it’s almost the opposite: I am afraid to fall asleep since nightmares haunt me in my dreams and awaken me in panic – nightmares about the reality that awaits me.


What reality? These days we often hear that radical social changes are needed if we really want to cope with the consequences of the ongoing epidemic (I myself am among those spreading this mantra) – but radical changes are already taking place. The coronavirus epidemic confronts us with something that we considered impossible; we couldn’t imagine something like this really happening in our daily lives – the world we knew has stopped turning around, whole countries are in lockdown, many of us are confined to our apartments (but what about those who cannot afford even this minimal safety precaution?), facing an uncertain future in which, even if most of us survive, an economic mega-crisis lies ahead. . . What this means is that our reaction to it should also be to do the impossible – what appears impossible within the coordinates of the existing world order. The impossible happened, our world has stopped, AND impossible is what we have to do to avoid the worst, which is – what? (I owe this line of thought to Alenka Zupančič.)

I don’t think the biggest threat is a regression to open barbarism, to brutal survivalist violence with public disorders, panic lynching, etc. (although, with the possible collapse of health care and some other public services, this is also quite possible). More than open barbarism, I fear barbarism with a human face – ruthless survivalist measures enforced with regret and even sympathy but legitimized by expert opinions. A careful observer easily notices the change in tone in how those in power address us: they are not just trying to project calm and confidence, they also regularly utter dire predictions – the pandemic is likely to take about two years to run its course, and the virus will eventually infect 60-70 percent of the global population, with millions of dead. . . In short, their true message is that we’ll have to curtail the basic premise of our social ethics: the care for the old and weak. (Italy has already announced that, if things get worse, difficult decisions about who gets to live may have to be made for those over eighty or with underlying conditions.) One should note how the acceptance of such a logic of the “survival of the fittest” violates even the basic principle of military ethics, which tells us that, after the battle, one should first take care of the heavily wounded even if the chance of saving them is minimal. (However, upon a closer look, this shouldn’t surprise us: hospitals are already doing the same thing with cancer patients.) To avoid a misunderstanding, I am an utter realist here – one should even plan to enable a painless death for the terminally ill, to spare them unnecessary suffering. But our priority should nonetheless be not to economize but to help unconditionally, irrespective of costs, those who need help, to enable their survival.

So I respectfully disagree with Giorgio Agamben, who sees in the ongoing crisis a sign that “our society no longer believes in anything but bare life. It is obvious that Italians are disposed to sacrifice practically everything — the normal conditions of life, social relationships, work, even friendships, affections, and religious and political convictions — to the danger of getting sick. Bare life — and the danger of losing it — is not something that unites people, but blinds and separates them.” Things are much more ambiguous: it DOES also unite them – to maintain a corporeal distance is to show respect to the other because I also may be a virus bearer. My sons avoid me now because they are afraid that they will contaminate me (what is to them a passing illness can be deadly for me).

In recent days, we hear again and again that each of us is personally responsible and has to follow the new rules. The media are full of stories about people who misbehave and put themselves and others in danger (a guy entered a store and started to cough, etc.) – the problem here is the same as with ecology, where the media again and again emphasize our personal responsibility (did you recycle all used newspapers, etc.). Such a focus on individual responsibility, necessary as it is, functions as ideology the moment it serves to obfuscate the big question of how to change our entire economic and social system. The struggle against the coronavirus can only be fought together with the struggle against ideological mystifications, plus as part of a general ecological struggle. As Kate Jones put it, the transmission of disease from wildlife to humans is “a hidden cost of human economic development. There are just so many more of us, in every environment. We are going into largely undisturbed places and being exposed more and more. We are creating habitats where viruses are transmitted more easily, and then we are surprised that we have new ones.”

So it is not enough to put together some kind of global healthcare for humans; nature should be included too – viruses also attack plants, which are the main sources of our food, like potatoes, wheat, and olives. We always have to bear in mind the global picture of the world we live in, with all the paradoxes this implies. For example, it is good to know that the lockdown in China saved more lives than the number of those killed by the virus (if one trusts official statistics of the dead):

Environmental resource economist Marshall Burke says there is a proven link between poor air quality and premature deaths linked to breathing that air. “With this in mind,” he said, “a natural – if admittedly strange – question is whether the lives saved from this reduction in pollution caused by economic disruption from COVID-19 exceeds the death toll from the virus itself.” “Even under very conservative assumptions, I think the answer is a clear ‘yes’.” Just two months of reduced pollution levels, he says, likely saved the lives of 4,000 children under five and 73,000 adults over 70 in China alone.

We are caught in a triple crisis: medical (the epidemic itself), economic (which will hit hard whatever the outcome of the epidemic), plus (not to be underestimated) mental health – the basic coordinates of the life-world of millions and millions are disintegrating, and the change will affect everything, from flying during holidays to everyday bodily contacts. We have to learn to think outside the coordinates of the stock market and profit and simply find another way to produce and allocate the necessary resources. Say, when the authorities learn that a company is keeping millions of masks, waiting for the right moment to sell them, there should be no negotiations with the company – the masks should simply be requisitioned.

The media have reported that Trump offered one billion dollars to the Tübingen-based biopharmaceutical company CureVac to secure the vaccine “only for the United States.” The German health minister, Jens Spahn, said a takeover of CureVac by the Trump administration was “off the table”; CureVac would only develop a vaccine “for the whole world, not for individual countries.” Here we have an exemplary case of the struggle between barbarism and civilization. But the same Trump threatened to invoke the Defense Production Act that would allow the government to ensure that the private sector could ramp up production of emergency medical supplies:

Trump announces proposal to take over private sector. The US president said he would invoke a federal provision allowing the government to marshal the private sector in response to the pandemic, the Associated Press reported. Trump said he would sign an act giving himself the authority to direct domestic industrial production “in case we need it.”

When I used the word communism a couple of weeks ago, I was mocked, but now there is the headline “Trump announces proposal to take over private sector” – could one have imagined such a headline even a week ago? And this is just the beginning – many more measures like this should follow, plus local self-organization of communities will be necessary if the state-run health system comes under too much stress. It is not enough just to isolate and survive – for some of us to do this, basic public services have to function: electricity, food, and medical supplies. . . (We’ll soon need a list of those who recovered and are at least for some time immune, so that they can be mobilized for the urgent public work.) It is not a utopian communist vision; it is a communism imposed by the necessities of bare survival. It is unfortunately a version of what, in the Soviet Union in 1918, was called “war communism.”

As the saying goes, in a crisis we are all socialists – even Trump is considering a form of UBI, a check for a thousand dollars to every adult citizen. Trillions will be spent violating all the market rules – but how, where, for whom? Will this enforced socialism be socialism for the rich (remember the bailing out of the banks in 2008 while millions of ordinary people lost their small savings)? Will the epidemic be reduced to another chapter in the long sad story of what Naomi Klein called “disaster capitalism,” or will a new (more modest, maybe, but also more balanced) world order emerge out of it?

18 March 2020

[CORRECTION. For clarification, the author has asked us to modify the following: “Italy already announced that, if things get worse, those over eighty or with other heavy diseases will be simply left to die.” The sentence now reads: “Italy has already announced that, if things get worse, difficult decisions about who gets to live may have to be made for those over eighty or with underlying conditions.” Furthermore, President Trump did not “invoke the Defense Production Act.” The sentence now reads: “But the same Trump threatened to invoke the Defense Production Act that would allow the government to ensure that the private sector could ramp up production of emergency medical supplies.” -Ed.]


Slavoj Žižek, dialectical-materialist philosopher and psychoanalyst, is codirector at the International Center for Humanities, Birkbeck College, University of London. He is a frequent contributor to Critical Inquiry.


Filed under 2020 Pandemic

Recoding Relations: Dispatches from the Symposium for Indigenous New Media

David Gaertner and Melissa Haberl

In June 2018, scholars, developers, artists, and community members from over twenty institutions and three continents gathered on the ancestral and unceded territory of the WSÁNEĆ, Lkwungen, and Wyomilth peoples to participate in the inaugural Symposium for Indigenous New Media (SINM). As part of the University of Victoria’s annual Digital Humanities Summer Institute (DHSI), #SINM2018 sought to highlight Indigenous innovation with digital technology and new media and to create a space for relationship building between the digital humanities (DH) and Indigenous studies. Scholars from across the social sciences and the humanities presented research on topics ranging from Indigenous video games and virtual reality, to communications technology, language revitalization, and new media, to digital texts, social media analytics, and archival digitization. Our specific intent was to interrogate the critical relationship between DH and Indigenous studies, generating more robust ways to consider how key concepts in Indigenous studies—namely, land, language, sovereignty, and self-determination—translated (or failed to translate) into digital spaces and practices.


There is an urgent need to decolonize DH theory and practice. Many Indigenous scholars and community members resist the digital humanities because of concerns raised by their communities about the expropriation of data. These concerns are not unfounded. Indeed, just after our symposium ended, the translation company Lionbridge was accused of mining Facebook for access to Te reo Māori (the Māori language), which, in turn, they were mobilizing for profit: “Data sovereignty has become a real issue,” Peter-Lucas Jones (Te Aupōuri, Ngāi Takoto, Ngāti Kahu) told interviewers about this incident, “now we have a situation where there is economic gain for our reo and if there is economic gain, it should be for our own Māori people, not an American company.”[1] The Lionbridge incident illustrates how digital technologies reproduce and amplify ongoing histories of settler colonialism, which exploit Indigenous resources and knowledges for non-Indigenous cultural and financial gain. We argue that digital extraction is not simply symptomatic of settler colonialism; it is a constitutive piece of terra nullius: the erasure of Indigenous peoples as peoples, with inherent rights and millennia-long histories of research, science, and knowledge mobilization. If DH cannot, or will not, recognize Indigenous data sovereignty—that is, Indigenous peoples’ inherent right to steward and mobilize their own knowledges without interference—it will remain, even when mobilized with the best of intentions, part of the problem. If it is able to grapple with the legacies of colonialism embedded in technology and knowledge mobilization schematics, however, we argue that DH has the potential to meaningfully contribute to decolonization. This is the balance on which the symposium operated.

“Recoding Relations,” the title of this blog post and the podcast series that preceded it, means shifting our perspective on the objectives of DH: from data extraction to relationship building; from settler state-based perspectives to anticolonial methodologies; from saviour narratives to reciprocal knowledge exchanges. In other words, “recoding relations” is a call to be attentive to the “how” of DH or, more specifically, the relationality of DH as a practice. Leanne Betasamosake Simpson writes that “ultimately we access knowledge through the quality of our relationships and the personalized context we collectively create—the meaning comes from the context and the process, not the content.”[2] In this sense, “recoding relations,” as informed by Simpson, means being attentive to the relationships we cultivate in DH—not just those that are amplified through our projects and publications but also those that go unheard or are rendered unheard (intentionally or not) through our work. It means assessing the contexts through which we inherit DH (academia, settler colonialism, Western technology) and using those contexts to interpret the processes through which we enact digital research (data scraping, visualization, textual encoding, and others). It means putting people before platforms and consent before code.

SINM was informed by Indigenous interventions into technology. Our goal was to build technical and cultural capacity through the symposium and to articulate our conclusions via open access knowledge mobilization, in the form of blog posts, newspaper articles, and podcasts.[3] Our findings were broad in scope but address a number of key intersecting takeaways. (1) Emphasize relationships over tools: rather than engaging DH as a means to collect, analyze, and visualize data, we argue for imagining it as a site of activation for community building and knowledge sharing. We look towards a DH that is willing to build meaningful relationships with community and individuals in ways that exceed the boundaries of what is typically understood as the digital. Emphasizing relationships also means overcoming the deficit model, which has historically framed Indigenous peoples as inherently lacking and therefore in need of (Euro-Christian) support. Building reciprocal relationships based in equality means reaching out to communities before, during, and after a project and lending support and resources, as well as providing training, so that they can continue to build digital projects without the P.I. or the initial research team; it means shifting the critical gaze away from Indigenous communities and towards the colonial systems that produce deficit. (2) Affirmed, ongoing consent: settler colonialism (colonialism, as seen in settler states such as Canada, the US, Australia, and New Zealand, that is premised on the displacement and erasure of Indigenous people) is already built out of a nonconsensual relationship. We argue that DH can help to make nonconsent visible via big data and immersive, geospecific visualizations. We also argue that we must hold the DH community accountable to the highest standards of informed, ongoing, relational, and reciprocal consensual research. This includes, but is not limited to, data sovereignty and the OCAP® principles.[4] (3) Include Indigenous thinkers and programmers in your syllabi: the settler colonial project functions, historically and presently, by simplifying Indigeneity and relegating it to the past. Foregrounding Western models of “progress” and technology fundamentally contributes to the erasure of Indigenous innovation. As such, we argue for DH research and pedagogy that holds up Indigenous technologies in the past, present, and future. A huge part of this means training the next generation of DH scholars, from all backgrounds, to read with Indigenous technologies and towards decolonial methodologies, as developed by Indigenous scholars and activists.

In what follows, we summarize the major themes of SINM as they arose out of presentations, workshops, and discussion groups as a means to build on the above three points. We’ve organized those themes into five categories: (1) (re)worlding through new media; (2) the digital divide and Indigenous technological tradition; (3) historicizing Indigenous new media; (4) challenges, relationships, and suggested practices; and (5) decolonizing the digital humanities. Overall, we argue that Indigenous new media is not tangential to DH but that it is in fact foundational to how we understand digital scholarship as a community-oriented practice and relationship. It is our hope that the details shared in this blog post contribute to a deeper relational engagement with Indigenous studies in DH and lead to further work between and across the two fields. Working together, we are hopeful that DH and Indigenous studies can produce significant decolonial digital interventions at a moment when more of this work is desperately needed.[5]

(Re)Worlding through New Media

While barriers to Indigenous participation in DH persist, SINM participants spoke to the powerful ways in which Indigenous peoples are harnessing and repurposing digital technology as a means of self-representation and storytelling, decolonial education, and relationship building. They also attested to the power of digital technologies as potential tools for political mobilization and expressions of sovereignty. During a panel on gaming and animation, Mohawk Communications Studies MA candidate Maize Longboat shared his work on Indigenous video-game development. Longboat argued that video games offer “a narrative medium for Indigenous peoples to tell their stories in ways that other media simply can’t.”[6] His presentation focused on his experience developing his own video game (Terra Nova) as part of a research-creation project for his MA. He posed the question, “What makes Indigenous video games?” and noted that he is still exploring how his game will be informed by his experience as a Mohawk person.[7] Longboat explained that Indigenous video games have a unique narrative quality and are grounded in direct cultural connections to a territory’s original inhabitants. Yet at the same time, the development process and mechanics of the medium are traditionally Western. “How do we contend with that tension?” he asked, and, most importantly, “How do experiential forms of media expand our ways of knowing?” He positioned video games as a means to express long-standing Indigenous knowledges, identities, and cultures but also indicated that gaming offers a way to build on intellectual and cultural traditions by creating new stories and storytelling platforms for and by Indigenous peoples. “Ongoing systems of colonization,” explained Longboat, “seek to relegate Indigenous peoples and identity to a past time that is separate from our contemporary era of digital technology.” Longboat pushes back against that narrative by recognizing Indigenous peoples as “present and active participants in the technological world” and his work contributes to a growing movement of Indigenous developers who are world-making and decolonizing through video games.[8] Terra Nova, a two-player, cooperative puzzle platformer, illustrates how Indigenous epistemologies translate into game play and mechanics while holding up videogame development as an extension of Indigenous storytelling.

Virtual and augmented reality developers Caroline and Michael Running Wolf had a similar message about the power of Indigenous new media and its capacity to connect people across distance, language, and culture. Caroline, of the Crow Nation, and Michael, of the Northern Cheyenne Nation, hail from what is currently known as Montana. In 2016 they travelled to neighbouring North Dakota to join thousands of Indigenous and allied people gathered at Oceti Sakowin, commonly known as the Standing Rock water protector camp. The Running Wolfs created a virtual reality (VR) platform based on their experiences at the gathering and later brought it to a conference on language conservation in Hawai’i. There, they showed it to a Siberian grandmother who chose to watch the victory song that erupted after then-President Obama announced that the US government would halt the Dakota Access pipeline efforts for a time. As Michael explained,

she didn’t speak a lick of English, she didn’t speak any native language outside her home country, and yet she got it. She understood the power of this event that we had captured through technology and transported. So I think that’s the power of this technology — that we can take video from this alien place, North Dakota, and show it to someone from Siberia, in Hawai’i. And it transported her, and she just got it emotionally, what was going on [at] this event of joy.

This story elucidates the power of Indigenous VR to create spaces for understanding and decolonial education all rooted in an ethic of relationship building. The capacity of the app to connect someone from the other side of the world, with no knowledge of the language being spoken, to the people, struggles, and triumphs at Standing Rock is particularly significant in a colonial system designed to segregate and disassociate Indigenous peoples from settler society while alienating individual struggles as a means of control. The Standing Rock VR app, along with other VR and augmented reality (AR) projects developed by the Running Wolfs, works to not only hold up Indigenous experiences and resistance but to forge new social realities and decolonial futures by facilitating learning and building empathy and community through virtual worlds.[9]

Gaming and social media are two spaces in which we are witnessing Indigenous resurgence. According to Métis scholar Aubrey Hanson, “resurgence is an Indigenizing impulse; it acknowledges colonialism and domination through resistance but it does not focus solely on colonialism as the most important concern. Instead, resurgence insistently focuses on Indigenous communities as sites of power and regeneration.”[10] Social media, and in particular #NativeTwitter, represents a critical space where Indigenous resurgence is taking place. Understanding the labour that Indigenous peoples put into making Twitter an effective platform for anti-racist and anti-white supremacist work is key to unpacking and reconfiguring the DH/Indigenous studies relationship.[11]  During the SINM panel on digital ecologies, Nehiyaw (Cree) Applied Psychology scholar Jeffrey Ansloos presented on his current research with Twitter, where he uses social media analytics as a means of analyzing social and political dimensions of Indigenous mental health as they’re expressed online. In particular, his research aims to strengthen a qualitative understanding of decolonial efforts on Twitter and to “explicate the polity of cultural revitalization activities” happening on the site. He spoke about how #NativeTwitter is repurposing the platform to not only revitalize Indigenous cultures, but to mobilize politically and to assert sovereignty. His research into language revitalization on the site found that “the [Twitter] ecology is producing an opportunity where there is language learning, but not in the way we have understood it — not merely to indigenize, but also to speak politically . . . and to strategically engage systems of the settler state.” Ansloos argued that while cultural revitalization online can indeed support Indigenous mental health, this cannot be achieved through “a neoliberal framing of indigenization or cherry-picking culture.” Rather, he explained that the Indigenous community’s relationship with these social media projects is fundamentally “renegotiating Indigenous peoples’ sovereignty with the settler state.” Ansloos’s findings and political orientation push us to think beyond the sometimes-limited framework of cultural revitalization. Instead of “indigenizing,” his work highlights the necessity of decolonizing, and of productively engaging with the ways in which the Indigenous Twitter community is already doing this work. That is to say, in order to decolonize DH, it is not enough to simply invite more Indigenous peoples into the field. Rather, allied scholars must first work to make the field safe and viable for Indigenous peoples and Indigenous knowledges. Ansloos’s research speaks to the richness of data in online environments like #NativeTwitter and how analyzing these ecologies can in turn inform and encourage resurgence in social policy and practice.

The Digital Divide and Indigenous Technological Tradition

More than presenting on the ways in which Indigenous communities are taking up technology, SINM participants also explained that Indigenous new media engagement is not novel but a continuation of a long history of Indigenous technological innovation. At the same time, the symposium grappled with how that history exists in tension with current realities of ongoing colonization, material inequality, and systemic barriers to information and communication technologies (ICTs). The historical and ongoing exclusion of Indigenous communities and reserves from these rapidly evolving industries and technologies presents a major problem, and as Jasmin Winters put it, “challenging the digital divide is no small feat.” Winters presented on her involvement with the First Nations Technology Council — an Indigenous-led organization in BC working to ensure that Indigenous peoples have “equal access to the tools, training, and support required to maximize the opportunities presented by technology and innovation.”[12] She explained that the council aims to address practical issues “like the actual building of digital infrastructure such as fibre optics, increasing supplies of hardware and software in communities, and creating more opportunities for careers in existing tech industries.” The council also does advocacy work around “the potential of digital tools for the pursuit of Indigenous rights to self-determination and sovereignty.” In this way, the council fills a critical gap, offering services and support that practically address the material impacts and injustices of the digital divide while providing infrastructural support that can be leveraged towards the proliferation of Indigenous resurgence.

Systems of oppression and digital inequality, however, must not belie the ways in which Indigenous peoples have technologically innovated since time immemorial. Winters noted that the Technology Council “first and foremost recognizes Indigenous peoples as always having been innovators in science and technology.” She stressed that we “need to position Indigenous peoples as the original innovators on these territories” and cited Cheryl L’Hirondelle, who writes “that to be truly free and self-governing, [Indigenous peoples] must also acknowledge and be aware of [their] pre-contact ingenuity as inventors and technologists — experts in new media and avatars of innovation.”[13] Sara Humphreys furthered this argument during her talk on the Cogewea Project.[14] According to Humphreys, “Indigenous ontology and epistemology expressed ideals of cyberspace before cyberspace was thought of as technology.” As evidence for this claim, she cited the centrality of interconnectedness within Indigenous worldviews, the storing of data via sign systems, and uses of multilayered, multimedia communication systems. In turn, Ashley Caranto Morford presented on one such example of precolonial Indigenous digital technology. Morford’s presentation built out of the foundational work of Cherokee scholar Angela Haas, who writes that the wampum belts made by Woodlands Indigenous peoples “extend human memory . . . via interconnected, nonlinear designs and associative message storage and retrieval methods” and have thereby functioned as hypertextual digital technology for over a thousand years — long before the invention of Western hypertext in the twentieth century.[15] According to Morford, Haas “calls on us to rethink the digital” as not only that which involves computers and computer technology, but as that “which relies on the intricate work of the fingers, or digits, to create complex code.” Morford then turned to her own research on precolonial and ongoing Philippine tattooing practices: “These practices rely on the fingers to code significant aspects of our cultures through an intimate hand tapping technique that requires a bamboo stick and lemon tree thorn, water and soot,” and as such, are also “forms of decolonial digital technology.” In sum, Winters, Humphreys, and Morford all demonstrated the long-standing genealogies of Indigenous technology while illustrating how those technologies translate into contemporary platforms and practices. At stake for all of these scholars were expressions of Indigenous technologies that informed and expanded contemporary definitions of the digital, namely through advanced cataloguing and representational techniques.

Considering these perspectives and traditions, we need to reject deficit- and damage-based approaches when moving towards the creation of a more just and equitable digital future. Deficit- and damage-based narratives look towards documenting exploitation and colonial oppression to elucidate the contemporary issues faced by Indigenous peoples and leverage redress. As Eve Tuck puts it, “common sense tells us this is a good thing, but the danger . . . is that it is a pathologizing approach in which the oppression singularly defines the community.”[16] Winters argues that “decolonizing the technology sector means first challenging deficit-based notions of the digital divide and understanding the impact and legacy of colonization on Indigenous knowledge,” as well as “challenging linear worldviews of development and innovation.” This is to say that beginning from the idea that DH or technology can “save” Indigenous peoples reproduces deficit-based narratives while eliding Indigenous innovation. In the Q&A for their panel, Mark Turin added that while Indigenous peoples have always engaged technology in deep and insightful ways, this does not mean that state structures have been supporting or facilitating that work. The DH and tech communities must hold up both of these realities by first recognizing the Indigenous histories at play, while also working to end digital inequality through strengths-based approaches, for instance supporting the work already being done by organizations like the First Nations Technology Council.

Historicizing Indigenous New Media

While histories of Indigenous creation with technology go back millennia, the now constantly evolving field of Indigenous new media developed more recently and specifically through the leadership of Indigenous women. During his symposium keynote, David Gaertner traced the emergence of the field to 1996 and to two key interventions: Loretta Todd’s essay critiquing the colonial underpinnings of the internet, “Aboriginal Territories in Cyberspace,” and Skawennati’s CyberPowWow, which Gaertner argues was “the first Indigenous territory in cyberspace.”[17] Gaertner explained that “it was no small feat” that these women made these interventions at a time and in an environment that was (and often still is) openly hostile towards women and Indigenous peoples. Since their emergence, online spaces have been, and continue to be, disproportionately violent toward black and Indigenous peoples, other people of color, as well as women, queer, trans, and Two-Spirit individuals and communities. This is particularly true in respect to Indigenous women, who frequently face a combined force of racist, sexist, and colonial harassment and abuse online. Despite this, Gaertner noted, it also continues to be Indigenous women, like Anishinaabe video-game developer Elizabeth LaPensée, who “do the heavy lifting” in respect to confronting this behaviour and calling out violence, such as when LaPensée “intervened to stop the 2014 rerelease of the Atari platformer Custer’s Revenge — a game in which the objective is the rape of Pocahontas.” Indigenous women have in this way led the charge in building safer, more just, and more equitable digital worlds, and their intellectual and creative contributions form the backbone of Indigenous new media and, in some ways, new media itself. Indeed, some of the most cited new media and digital technology scholars at SINM were Todd, Skawennati, Angela Haas, and Marisa Duarte — all Indigenous women whose digital innovations and critical interventions have helped shape the field from the outset. Scholars and developers currently working in DH and new media need to hold up this labour and do far better in respect to supporting Indigenous women and addressing colonial and patriarchal violence when it occurs both online and offline.

Challenges, Relationships and Suggested Practices

Closing his talk during the SINM Indigitization workshop, Cultural Coordinator of Cowichan Tribes Chuck Seymour remarked: “[Indigenous peoples] are the most studied people, but the least understood.” “Why are we not understood?” he asked. “You don’t speak our language.” Seymour was presenting on his work with the Cowichan Tribes Cultural Education Department and their process of digitizing cultural heritage materials so that their history and language can be kept alive and accessible for future generations. His words threw into sharp relief a larger truth that was discussed by other presenters at the symposium: that while non-Indigenous scholars continue to pursue research and projects in Indigenous contexts, there remains a significant gap in understanding and lived experience between these academics and the Indigenous communities they seek to work with. The material challenges and demands communities face as a result of ongoing settler colonial occupation are often missed or ignored by academics working in the digital humanities and the academy more broadly, and thus the colonial dynamic of research goes largely unchanged. Addressing these gaps through ongoing relationship building, community-led research, and cultural sensitivity training, while not traditionally thought of as “digital,” is thus key to innovating ethical and meaningful relationships between DH and Indigenous studies.

Sarah Dupont, program manager at Indigitization — a collaborative initiative that works to support Indigenous communities and organizations with the conservation, digitization and management of community knowledge — dedicated most of her time at SINM to discussing issues of capacity for Indigenous digital initiatives. In particular, she outlined how Canadian government and industry demands on Indigenous nations, handed down in the form of thousands of annual referrals, often make doing archival digitization or other digital projects a difficult trade-off for communities.[18] Committing time and resources to this work is a substantial challenge in a colonial context where nations are constantly faced with proposals for natural resource development on their territories, or other threats to land, sovereignty, and culture. Dupont explained that nations are also often working with extremely limited resources, small staff numbers, and technical constraints, especially in more remote First Nations that may not have access to IT departments or up-to-date communications technology. Cultural heritage work, for instance, often operates on contingent funding, she explained, which leads to difficult cycles of “startup and collapse” for many communities. These are some of the challenges communities are facing, and Dupont argues that when academics make a commitment to work with a nation, they need to understand the resources and demands that community is dealing with and adjust their practice and objectives accordingly.

Settler students and scholars interested in or already working in Indigenous contexts also need to appreciate the living history of academic appropriation, misrepresentation, and exploitation. Linda Tuhiwai Smith writes that, “from the vantage point of the colonized, the term ‘research’ is inextricably linked to European imperialism and colonialism,” and the ways that academic research has been used to subjugate and dehumanize Indigenous peoples remain a “powerful remembered history for many of the world’s colonized peoples.”[19] Dupont explains that “colonial organizations have historically worked against Indigenous control of Indigenous information,” while Gaertner noted that universities have been complicit in the theft of Indigenous land and knowledge since the onset of colonization. These histories and their ongoing effects on Indigenous communities entail critical responsibilities. DH and new media scholars, and academics in general, need to recognize that they are working out of a space that is deeply implicated in colonial violence and in turn make visible and resist that legacy in the ways they carry out their work. As Gaertner argued in his keynote, “settler colonialism is already premised off a non-consensual relationship” and thus “we need to hold ourselves to a higher standard as DH researchers working in Indigenous studies.”

Bearing in mind the historical connection between research and colonization, several SINM participants also offered helpful guidelines for engaging Indigenous communities. The Running Wolfs suggested “6Rs” for nonexploitative data collection and research: respect, relevance, responsibility, reciprocity, relationality, and representation. They emphasized that research must be done with communities, which involves researchers building respectful and reciprocal relationships. Caroline explained that scholars need to ask themselves what and how they can give back to the communities they work with and that they also need to be aware of the forms of representation they create through their work. Drawing on the words of Deanna Reder, Gaertner offered similar suggestions and argued that researchers need to be better relations to Indigenous peoples. “Being a good relation,” he explained, involves forming “meaningful relationships with communities and individuals, which requires time and emotional labour.” Gaertner stressed the importance of free, prior, informed, and ongoing consent, and insisted that Indigenous buy-in cannot be an afterthought but must be secured before and throughout a project. “A yes at the beginning,” he said, “is not a yes at the middle, nor is it a yes at the end.” Citing the First Nations Principles of OCAP®, Gaertner also noted that scholars and developers in the digital humanities need to “take data sovereignty seriously” and that while it may be legal to use data in a certain way, Indigenous communities may have different rules for data stewardship that must be respected and followed.[20]

Decolonizing the Digital Humanities  

More than being good relations and ethical researchers, the digital humanities need to carve space for Indigenous knowledge, worldviews, and ways of being, and better attend to colonial legacies in the field. DH scholars need to recognize how mainstream ways of engaging with digital spaces, for instance with trends regarding open access and open education, often work against the values, concerns, and rights of Indigenous peoples. As Kimberly Christen explains, for the past two decades, demands for increased information freedom by the free and open source software community have combined with debates about open access, digital rights management, and intellectual property rights. Yet those pushing to resist private control over digital spaces often do not consider—or actively deny—Indigenous rights to managing their information and knowledge online.[21] “The celebration of openness, something that began as a reaction to corporate greed and the legal straightjacketing of creative works,” writes Christen, “has resulted in a limited vocabulary with which to discuss the ethical and cultural parameters of information circulation and access in the digital realm.”[22] The open access and Creative Commons movements in this way fail to recognize or respond to culturally specific contexts and social realities, such as the rights of Indigenous communities to uphold protocols governing who may access, use, and disseminate their digital heritage materials, and how and when. Morford in turn argued at SINM that “Creative Commons licensing and the public domain are not necessarily ethical, and often are a means of benefiting and protecting the colonialist and the colonial system.” She gave an example of historical photos of Philippine Indigenous peoples taken by early colonial zoologists and asked: “Did the ancestors whose photos were taken by white researchers with malicious colonial intents, and whose photos are now in the public domain, consent to have their images taken and used in such a way?” The DH community, and indeed all people involved in the broad scope of the open access movement, have a responsibility to address these concerns and to build space for discussing issues like Indigenous consent, protocol, and sovereignty within larger debates regarding open access.

The digital humanities and technology sectors also need to acknowledge the ways in which whiteness, colonialism, and harmful Western ideologies have shaped the internet. “Since its beginning,” explain Jason Lewis and Skawennati, “cyberspace has been imagined as a free and open space, much like the New World was imagined by the Europeans.”[23] Indeed, as Loretta Todd wrote in 1996, the internet was built as an extension of millennia of Western and colonial philosophy and “has in fact been under construction for at least the past two thousand years.”[24] Todd argues that “a fear of the body, aversion to nature, a desire for salvation and transcendence of the earthly plane created a need for cyberspace,” and that the “tension [in Western culture] between the need to know all . . . and the limitations of the body and the senses, of the physical world, [extended] a need for a new site for the ‘heart and mind’ of man.”[25] During her presentation at the symposium, Humphreys argued that “there are limits to knowledge” and that, despite its depiction in literature, “cyberspace is not limitless and utopic” and cannot be treated as such. We remain accountable to people and place when we contribute to and engage in the digital world, and Humphreys stressed that we must be responsible to the communities we represent when we use these spaces. Deciphering what this looks like is thus a key component of what decolonial DH is and should be.

Power is an essential consideration in a DH/Indigenous studies relationship. According to Treena Chambers, “too often we see the politics of the powerful as the norm” in the digital humanities. That is to say that technology, as a tool of power, carries with it particular sets of ideologies that are often elided via its application. Chambers notes that, in this sense, technology itself is political, speaking to the need to apply critical analysis to the tools we use, not just the results they produce. Other presenters stressed similar points and offered different perspectives on how to improve and decolonize DH. Symposium co-organizer and Nisga’a writer Jordan Abel further argued during his SINM keynote that “DH needs to be a space . . . with generative, porous borders”—that it needs to be an “interdisciplinary and intersectional” community that encourages work that can in turn “engender understanding across forms of difference.” Finally, beyond addressing the realities of ongoing colonization and inviting critical scholarship, Ansloos argued that the digital humanities must not simply seek to “indigenize” or treat Indigenous peoples as “sprinkles on the academic cupcake” but that the DH community needs to support decolonization and Indigenous sovereignty in practical, material ways, be that through funding opportunities, training, resourcing, reciprocal research, and/or MOUs.

Conclusion

Throughout the many presentations at SINM 2018 there lingered a constant notion that digital technologies themselves cannot achieve the goals of Indigenous communities or dismantle colonization. Participants noted that it is not technology alone but people and relationships that have the power to support Indigenous and decolonial futures, and while SINM was itself an important space for people to connect, share ideas, and discuss common challenges, there remains much work to be done in terms of community building and supporting the relationships necessary for decolonial digital innovation in DH. It is our hope that this blog post furthers those conversations and leads to continued capacity-building across DH and Indigenous studies.

For more on SINM, including audio excerpts from the above described presentations, please download our four-part podcast miniseries Recoding Relations, which you can find here: https://www.recodingrelations.org  

David Gaertner is an Assistant Professor in the Institute of Critical Indigenous Studies at the University of British Columbia. He has published broadly on Indigenous literature, Indigenous new media, and the digital humanities. His articles have appeared in Canadian Literature, American Indian Culture and Research Journal, and Bioethical Inquiry, amongst others. He is the editor of Sôhkêyihta: The Poetry of Sky Dancer Louise Bernice Halfe and the co-editor of Read, Listen, Tell: Indigenous Stories from Turtle Island. His monograph, The Theatre of Regret: Objecting to Reconciliation with Indigenous Arts and Literatures, is forthcoming from UBC Press.

Melissa Haberl is a BA graduate in History and First Nations and Indigenous Studies at the University of British Columbia and a creator of the 2018 Symposium for Indigenous New Media’s Recoding Relations podcast series. She currently lives in Berlin, Germany.


[1] “Indigenous Data Theft,” te hiku media, 10 Aug. 2018, https://tehiku.nz/te-hiku-tv/haukainga/8037/indigenous-data-theft

[2] Leanne Betasamosake Simpson, “Bubbling Like a Beating Heart: Reflections on Nishnaabeg Poetic and Narrative Consciousness,” in Indigenous Poetics in Canada, ed. Neal McLeod (Ontario, 2014), p. 112.

[3] Aside from our four-part podcast miniseries, Symposium RA, Autumn Schnell, also produced the essay “It’s Time to Queer the Digital Humanities,” The Talon, 29 Jan. 2019, https://thetalon.ca/its-time-to-queer-the-digital-humanities/

[4] See Tahu Kukutai and John Taylor, Indigenous Data Sovereignty: Toward an Agenda (2016), p. xxii. See also OCAP (Ownership, Control, Access, Possession), https://fnigc.ca/ocap

[5] Miriam Posner writes that “DH needs scholarly expertise in critical race theory, feminist and queer theory, and other interrogations of structures of power in order to develop models of the world that have any relevance to people’s lived experience. Truly, it is the most complicated, challenging computing problem I can imagine, and DH hasn’t even begun yet to take it on” (Miriam Posner, “What’s Next: The Radical, Unrealized Potential of Digital Humanities,” Miriam Posner’s Blog, 27 July 2015, https://miriamposner.com/blog/whats-next-the-radical-unrealized-potential-of-digital-humanities/). While this claim is five years old now, we believe that issues of race, gender, and indigeneity are just as pressing now in 2020 as they were in 2015. DH still has an enormous amount of work to do.

[6] Maize Longboat, presentation, Symposium for Indigenous New Media, Victoria B.C., June 2018. Unless otherwise indicated, all subsequent references to content from presentations, keynotes and related discussions are from the Symposium for Indigenous New Media held as part of the Digital Humanities Summer Institute at the University of Victoria on 10–11 June 2018.

[7] You can download and play Terra Nova at https://maizelongboat.itch.io/terra-nova

[8] For other key projects, see the Aboriginal Territories in Cyberspace (AbTec) research-creation network, http://abtec.org/; Achimostawinan Games, http://abtec.org/iif/residencies/achimostawinan-games/; Skins 5.0, http://skins.abtec.org/skins5.0/; and work by Skawennati, http://www.skawennati.com/ and Elizabeth LaPensée, http://www.elizabethlapensee.com/.

[9] For more virtual and augmented reality projects developed by the Running Wolfs, see Buffalo Tongue Inc., http://buffalotongue.org/, and Madison Buffalo Jump and others at http://runningwolf.io/vr.html.

[10] Aubrey Hanson, “Reading for Reconciliation? Indigenous Literatures in a Post-TRC Canada,” ESC: English Studies in Canada 43, nos. 2-3 (2017): 74.

[11] Here, we bear in mind Lisa Nakamura’s work on social media labour; see Lisa Nakamura, “The Unwanted Labour of Social Media: Women of Colour Call out Culture As Venture Community Management,” new formations: a journal of culture/theory/politics 86 (2016): 106–12.

[12] For more information on their objectives, projects, and current opportunities, see the First Nations Technology Council website, http://www.technologycouncil.ca/

[13] Cheryl L’Hirondelle, “Codetalkers Recounting Signals of Survival,” in Coded Territories: Tracing Indigenous Pathways in New Media Art, ed. Steven Loft and Kerry Swanson (Calgary, 2014), p. 147.

[14] To learn about the Cogewea Project, see http://www.philome.la/smhumphreys/the-cogewea-project

[15] Angela M. Haas, “Wampum as Hypertext: An American Indian Intellectual Tradition of Multimedia Theory and Practice,” Studies in American Indian Literatures 19, no. 4 (2008): 80–81.

[16] Eve Tuck, “Suspending Damage: A Letter to Communities,” Harvard Educational Review 79, no. 3 (2009): 413.

[17] CyberPowWow was an Indigenous online gallery, live chat space, and mixed-reality event active from 1997 to 2004. To learn more about the space, see CyberPowWow, http://www.cyberpowwow.net/.

[18] Dupont explained during her presentation that “referral” is a generic term used by the Crown, First Nations, or both when referencing a potential statutory or policy decision that may adversely affect or impact the Aboriginal or treaty rights of a nation. Referrals typically relate to the land, water, and natural resources of a nation and typically include consultation requests from industries such as oil and gas, wind energy, hydro, forestry, and mining.

[19] Linda Tuhiwai Smith, Decolonizing Methodologies: Research and Indigenous Peoples (London, 2012), p. 1

[20] The First Nations Principles of OCAP® stand for ownership, control, access, and possession. To learn more about OCAP®, see the First Nations Information Governance Centre, https://fnigc.ca/ocapr.html.

[21] Kimberly Christen, “Does Information Really Want to Be Free? Indigenous Knowledge Systems and the Question of Openness,” International Journal of Communication 6 (2012): 2870.

[22] Ibid., p. 2874.

[23] Jason Lewis and Skawennati Tricia Fragnito, “Aboriginal Territories in Cyberspace,” Cultural Survival Quarterly 29, no. 2 (2005).

[24] Loretta Todd, “Aboriginal Narratives in Cyberspace,” in Immersed in Technology: Art and Virtual Environments, ed. Mary Anne Moser and Douglas MacLeod (Cambridge, Mass., 1996), p. 155.

[25] Ibid.

 


Cosmology and Class: An Interview with Bruno Latour by Nikolaj Schultz

In this conversation with the sociologist Nikolaj Schultz, Bruno Latour elaborates his analysis of our new climatic regime and presents new ideas on its consequences for political and social theory. With the earth reacting to our actions, we face a cosmological shift that leaves us all divided and lost in space. The quintessential political question of our times is finding a place to land. Globalists continue to believe in the project of modernization, populists flee back to the land of the old, while a few escapists simply try to take off to other planets. How to respond? According to Latour, the task becomes reinventing the old socialist tradition beyond the system of production, something we can only do if we retheorize the concept of social class to include a wider array of material conditions of existence than Marx’s definition of class alluded to.

 


Cosmology and Division

Nikolaj Schultz (NS): In Facing Gaia you try to historically situate our present encounter with an earth suddenly reacting to our actions by comparing two different scientific discoveries.[1] In the seventeenth century, Galileo Galilei raises his telescope to the moon and shortly after concludes that our earth is similar to all the other planets of the universe. Some 350 years later, James Lovelock instead concludes that our earth is dissimilar to all the other planets. What are the symmetries and asymmetries of these two discoveries and what do they tell us about where we are in history?

Bruno Latour (BL): Galileo and Lovelock both try to cope with moving earths but two different kinds. Galileo discovered that the earth was moving around the sun and disturbed everybody by saying so. First, there was the quarrel with the church and, second, there were the major consequences his discoveries had on social order. This is well known from the history of science and because of Bertolt Brecht’s extraordinary play The Life of Galileo. People believed they were in one cosmos before suddenly learning that the earth was moving. They did not know where they were in space and they felt lost—even if the practical consequences of Galileo’s discovery for daily life were close to zero. So, at hand we have a famous discovery with major impacts for physics and astronomy that simultaneously disturbs the whole establishment of the church and the social world.

Now, I contrast this with Lovelock’s similar but different discovery of another kind of moving earth. What Lovelock and Lynn Margulis discover is not simply that the earth is moving but that earth is being moved, to use Michel Serres’s expression.[2] The earth is reacting to the actions of humans. This new sort of movement of the earth is immensely more important, not least in terms of consequences for the social order, and thus also more disputed. So, with a gap of three hundred years, we have two discoveries of moving earths, and what interests me is that they both bring along extraordinary changes in cosmology and in our understanding of space. It is another powerful example of a question which has interested me for forty years, namely the link between science and society, between cosmology and social order. While Galileo’s discovery marked the beginning of modern cosmology, I see Lovelock’s and Margulis’s discoveries as marking the end of modern cosmology. Right now, when we hear about their discovery that the earth is being moved, we find ourselves in the same shoes as the people who in 1610 were worried about Galileo messing up their cosmology by proving that the earth was moving. We are as lost as they were.

NS: So, to talk with Alexandre Koyré, if Galileo took us from the “Closed World to the Infinite Universe,” then Lovelock is bringing us back from the infinite universe to a closed cosmos on earth.[3] Why has the figure of this return to earth, Gaia, been so misunderstood?

BL: Most importantly because it was understood through a wrong idea of space. Gaia was immediately associated with the idea of the globe and with the idea of the earth as an organism. This meant it was quickly used by biologists and New Age people to return to the old, Greek idea of earth considered as one big animal. But this was not what Lovelock was interested in. Instead, he was interested in how life forms—including bacteria, vegetation, insects, and others—had provided so many changes in the chemical circulation of the atmosphere that it became impossible to understand air, water, mountains, or plate tectonics without taking into consideration the dynamic agencies of these life forms. With the help of his instruments, Lovelock was studying pollution and had realized that pollutants were able to spread everywhere on Earth. This made him intuit that what modern industry was doing perhaps had been done for billions of years by all life forms on Earth. He meets Margulis, who studied the consequences bacteria had on the atmosphere, climate, rivers, and mountains, and together they arrive at this extraordinary entity called Gaia. An entity with nothing in common with the idea of the earth being alive as an organism. Instead, it is an argument about the ways life forms continue to transform their own conditions of existence to the point where they engineer the whole surface of the earth.

NS: So, the fundamental consequence of Gaia is that entities make up their own environments. This not only means that climate is the historical result of agencies, it also means that space itself is the offspring of time. With Gaia, space is not in the background, space is continuously constructed by dynamic life forms. Why is this difficult to understand cosmologically?

BL: Not least because of the cartographic tradition, invented at the time of Galileo. Cartography gave us a sort of taken-for-granted definition of space as a frame inside which objects and people reside. With this definition of space, you cannot see how space itself is constructed by the agencies of life forms. With this gaze, you miss how life forms are not in space but that they make space. One example is how bacteria produce the oxygen of the atmosphere that all life forms breathe. Bacteria are not in the frame, they make the frame. This you cannot see if you approach space cartographically. If you approach space from the view of the globe, or as a map, you remain stuck inside a frame, with difficulties understanding what life is. These difficulties have burdened biology and ecology since the seventeenth century.

With Gaia the situation is reversed. The trick of Lovelock and Margulis is to say, “If there is an earth, soil, and sea, it is because life forms are producing their own environment.” Life forms are not sitting in the environment, they produce the environment. In biology, Margulis’s ideas and her notion of holobionts are becoming mainstream now. Today, everybody knows that our bodies are made of microbes, for example. So, the idea that we are seized and maintained by the agencies of life forms is beginning to become common sense. The amusing thing is that this idea of space as the product of agencies is an old actor-network theory argument that we developed completely separately in sociology.

NS: What are the political consequences of this concept of space? Previously, you have conceptualized this spatial or cosmological shift with the notion of a new climatic regime.

BL: Like Galileo, Lovelock is not interesting for his politics. What I am nonetheless interested in is the political consequences of being lost in space after the discovery of Gaia. This is somehow what I try to map very grossly in Down to Earth.[4] My argument is that what we all have in common is no longer moving forward through progress but that we are lost in space. What we all have in common is no longer having an exact idea of where we are in space or on what soil or land we reside. And I think this shows clearly in the political disputes of today.

First, in what are normally referred to as populist movements and their questions of “What are our borders and what are the people inside our borders?” Questions posed all over Europe and, of course, most vividly with Brexit. Second, it shows with those who say “Let’s go on,” “Business as usual,” “Let’s maintain the modernist tradition of progress.” The ideal of globalization, if you want. Both these positions are simply affects asking where we are, on what soil or land we reside. Now, the problem is that both these positions are too abstract in terms of material existence. The lands the populists want to go back to—the England of Johnson, the Italy of Salvini, the France of the Front National—are not real countries. They are imaginary versions of what the land would have been years ago. But the land of the globalists is just as imaginary, as they imagine that the earth will accept infinite modernization. So, we are lost in space.

NS: So, politics is now ordered by the question of land, but we are all lost in space because none of the political territories that modernity offers us has any ecological or economic foundation.

BL: Exactly. Look at the example of Brexit, for me a great experiment in territorial redescription. It started with an imaginary space based on ideas of identity and borders. Three years later it is a complete mess. The English learned day by day, bit by bit, what they were actually depending on as a territory—dependencies always transcending the nation state. If you leave the EU, you will be in trouble getting medicine and fresh food, your labor force will have weaker protection of its rights, and so on. So, one talks about identity and about walls, but slowly you realize that you do not only depend on identity but, more importantly, on a long list of other conditions of existence. Our ignorance about what makes our countries thrive is immense. This is what I try to allude to when I say that we are spatially lost.

NS: Yesterday in Paris, you attended the defense of French philosopher Pierre Charbonnier’s habilitation.[5] One of his arguments is that there is a disconnect between where the moderns think they live and the territory they actually live off. How is this connected to the current spatial confusion? Why the difficulty of understanding that to have politics you need to have a land and a people corresponding?

BL: Yes, there is a disconnect between the two sorts of land that we inhabit. On the one hand, there is the land from where we have our rights—the nation state—which is the territory that we understand ourselves as living in. On the other hand, there is the land we live from, which is the territory where we get our resources. We sort of know these two territories are connected, but because of the material history of the Moderns—first the colonies, then the discovery of coal and later oil—they have divorced. So, if people have lost their sense of space, it is because of this divorce, which has made it difficult for people to describe the world out of which they get their prosperity and the entities that allow them to subsist. And what Charbonnier investigates is simply how this disconnect has become bigger and bigger ever since the “discovery” of America.

In one chapter, there is an interesting simile that helps us understand the argument and its relevance for political ecology. In the beginning of the nineteenth century, Johann Gottlieb Fichte wrote a book responding to the English project of liberalism, arguing that in inventing the global world the English were completely hypocritical.[6] They pretended to be civilized and tolerant while simultaneously exploiting the whole planet. If Germany wanted to be tolerant, Fichte said, it would need to close down its borders, forbid commerce, and instead juxtapose the land out of which it lived with the land that gives rights to its citizens. Fichte probably did not imagine this to be possible even in the nineteenth century, but it is a fine description of a sort of utopia, where the legal country is reconciled with the material country.

I think this is a good way of grasping our current situation. Because, in fact, political ecology has nothing to do with green stuff or nature. It is about how the new, moving earth forces everybody to ask again the question of what to subsist on. This question of subsistence is a main feature of what I call the new climatic regime. Everyone is simply trying to find out which land to live off and live in. This is also why the Trumpists are climate deniers. You study that yourself, namely the question of how some are saying: “We don’t share the same earth as you.”[7] Something impossible to reconcile with modernism because modernism was supposed to be the progress of all—even if it really wasn’t.

NS: Yes. It is difficult to believe in modernism when you see a picture of Elon Musk’s Tesla sports car floating around in outer space. This did not look like progress for all—this was progress or emancipation for the wealthy few. And when you look at how other Silicon Valley tech billionaires are trying to colonize Mars, then it certainly does not look like modernism either, as the classic question “Is there life on Mars?” is rephrased from a civilizational question into a question for the one percent who try to escape Earth. This is the ideological essence of what you have previously called offshore politics or planet exit: the escape from earthly, material limits by the few.

BL: With a lot of money put into it . . .

NS: Lots of money and lots of technology. For these elites it is a case of deus ex machina, and as techsters they are God’s chosen few. It is definitely not a coincidence that the wildfires in California never reached Silicon Valley—and with the prospect of ecological, civilizational collapse they take off and go to Mars. The problem is that you rather quickly find out that Mars is uninhabitable. It is not very nice up there. So, the tech billionaires shift from planet B to plan B and invest in luxury climate-secured escape bunkers in places like New Zealand, so they can escape civilizational collapse. It sounds anecdotal, but it has been studied in detail by investigative journalists.[8]

BL: So, they hedge their bets, one on Mars and one in New Zealand. How many people are we talking about?

NS: Exactly. Crushed under the weight of the new moving earth, they choose to escape and leave the rest of us behind. They do not live in the Anthropocene; they live in the Misanthropocene. Steve Huffman, the cofounder and CEO of Reddit, estimates that around fifty percent of the Silicon Valley tech elites have bought escape property around the world. Escape-bunker property for the ultrarich has become a billion-dollar business.[9] The interesting thing is that it is not even secret, even if it sounds like a neo-Balzacian conspiracy theory when one says that the rich are escaping the planet during nighttime. They actually say so themselves. Yet, it is not an unproblematic move for the rich. A lot of things could go wrong: How do you make sure that your security guards do not turn their weapons against you? Are you supposed to bring the family of the pilot of your private jet when you escape? A lot of questions arise, but it is still better than “staying with the trouble,” as your friend Donna Haraway would say.[10]

BL: But they are not climate skeptics; they are deniers, right? They recognize that there is a planetary danger?

NS: Yes, that is exactly why they take off. Again, climate denial arises not despite the fact that the climatic mutations are real, it arises because the climatic mutations are real and because the price of solidarity is too high to pay. It is the same with Trumpism. These are just the people that take the extreme consequences and choose to leave.

BL: But how do they cope with the fact of being alone and not following the old logic of modernism? How do they cope morally with leaving behind the rest of us? They must sort of reinvent themselves as atomized agents.

NS: That is what should be studied now. If we could describe the material conditions of existence and the moral economies of these exiters and compare them with those who are stuck behind, deprived of habitable territory, we would probably have a better grasp on tomorrow’s class struggle, a struggle over territory and not over the means of production.

BL: When Musk sent his Tesla into space, he said that it was “silly but fun.” The space adventures of the twentieth century were certainly not silly and fun; they were part of a progressive modernity. Seeing space adventure become a caricature for just a few people is very shocking. So, at this moment, I think we are exactly at a place where we are literally living on different planets. One of these is the modernist, globalist planet; the other one is the identity, localist planet; and the third is the escapist or exit planet that you study. We are completely divided about which planet or which land we live on. This is what I try to show in Down to Earth.

NS: Yes, when Musk said that it was “silly but fun” it was good proof that modernity was dead. But we only capture the dividedness you speak about if we remember that these people are very serious about escaping. They put billions of dollars into it. And, somehow, this move of escapist ideology was not a big surprise. Only a few months before, Donald Trump took America out of the COP21 Agreement. What did he do just after? He announced that he was going to “Make America Great again . . . on Mars.”

Class and Description

BL: The question is how to respond to this division of space. Here, I want to go back to what we spoke about before. As Charbonnier shows, the question is now about how to restart the socialist tradition. A tradition that was in fact always interested in the question of the divide between the land, the industry, and the legal framework in which people live. One can even say that socialism was about this disconnect. Yet, it is also true that socialism never succeeded in connecting with ecology. For this reason, Charbonnier’s hero is the same as mine. Karl Polanyi was one of the few in the socialist tradition who articulated the idea that both the labor force and the land resist production. In The Great Transformation he maintains that it is not a question of production but of what I call processes of engendering, the ways in which things are brought to the world.[11]

This is the connection I am interested in. Can we, within the socialist tradition, rearticulate the questions concerning ecology as questions of existence, of survival, of generation, of reproducing, of giving birth and of losing territory? As a philosopher, I see the first contours of what we have been calling geosocial classes, a notion that would perhaps allow us to redo for the present situation what the socialists did fairly well until the 1950s and the beginning of The Great Acceleration. This would allow us to reconnect the land that we live in and the land we live from, as well as to connect ecology with socialism within the framework of politics as usual. But I gave you the task of finding these classes. How would you approach the question? What would be a good definition of geosocial classes?

NS: I think your intuition in Down to Earth is correct. These are not classes defined by their position in the production system; they are classes defined by their territorial conditions of survival, their material conditions of existence or reproduction. Defining geosocial classes means taking the cosmology of our new moving earth seriously when approaching the social question and using it to redescribe social classes in a way that extends their Marxist definition. While social classes were defined by their ownership over the means of production, geosocial classes are defined by their dependence on a wider array of material conditions of existence that allow social groups to survive or thrive. If we had such a definition of classes, we could delineate a people corresponding to the new climatic question of the twenty-first century, and to geohistory, in the same way Marx made a people correspond to the social question of the nineteenth century and to social history.[12]

The terribly difficult question is how to map this empirically. First, it would be necessary to define the territory on which different collectives live, by describing what entities or actors different social collectives depend on to reproduce. If we did this, we would first see that the networks of existence that allow different social groups to survive and reproduce would look very heterogeneous. But what would also be clear is how some social groups would share means of reproduction with some groups more than others—similarities and dissimilarities that would allow us to reclassify social groups on the basis of material conditions of existence. Perhaps this would even allow us to redefine exploitation as the surplus of existence that some social groups profit from, by describing how the livelihoods of some collectives prevent others from accessing a habitable territory. I think that the Yellow Vests in France perhaps showed us the urgency of the geosocial question. Would you agree?

BL: I think the Yellow Vest affair started with an interesting moment of geosocial inquiry, as it was a matter of salary, taxation, gas, landscape, and social justice. So you are right; initially, the connection was made. But you cannot have a political position if you cannot describe your own territory. So nothing came out of it, precisely because they did not have the vocabulary, tools, or political movement to help them articulate this link. We are extraordinarily bad at describing what allows people to subsist. We talk a lot about identity, we have a lot of discussions about values, but please describe to me the territory in which you survive, in which you invest, and which you might want to defend. I think the lack of such descriptions is what renders the political scene so interesting but also so violent today. We begin to realize that this is the real question, but we do not know how to answer it. This is also why I am interested in the episode of the Cahiers de Doléances, because it was exactly an initiative directed towards territorial descriptions and questions of social justice in one and the same breath. The Yellow Vests did not manage to maintain this link.

So if the question of geosocial classes is difficult to answer, it is because we all have very little idea about where we get our subsistence from. We have simply lost the habit of describing what we are attached to, what we are connected to, and what allows us to survive. In a way, Marxism used to be a vocabulary that allowed such descriptions of our conditions of subsistence, which we could use to locate ourselves inside the system of production. Can we do the same thing today with what I call the processes of engendering? From Proudhon to Marx, socialism described the practical and material realities of industrial society. They described where people within this society got their subsistence from, which allowed people to position themselves in the system of production. But today, we live in a different world. Today, if one had to describe the practical, material world in which one lives, it would not only be about industry; we would furthermore have to add entities like the climate, carbon dioxide, water, bugs, earthworms, soil, and others—the wider array of material conditions of existence that you spoke about before. And this is what ecologists never managed to bring to the attention of socialists. It is still the question of inequality, of justice, and of the material world out of which we get our subsistence; it is simply that the world has changed form.

NS: Yes. The interesting thing is that in the first period of the Yellow Vests, when there was a moment of geosocial description, they actually enjoyed support and were able to mobilize affects internally and externally. When they lost their territorial descriptions, it turned violent. It seems that in some situations, violence does not occur when indignation reaches a certain level; it occurs when you are no longer able to describe who you are, what you are attached to, and with whom you fight.

BL: Yes, they completely lost their territorial descriptions and instead went on to ask for the head of the president. . . . Macron then offered them a grand debate, but we learned nothing from it because people simply gave their opinions. But the opinions of people who have neither a land nor a world to describe are useless. A million and a half answers to the debate and not one single description of where we reside and with whom. Values? Yes. Identity? Yes. But no territories. If you have lost the ability to describe the land or the territory on which you reside—understood in the etiological sense as the list of entities you rely on to subsist—then you simply cannot do politics. If you have no territory, you have no politics.

NS: So to restart politics, we need to redescribe our territories, our lands, and our people. How come we lost the ability to do so? Were we atomized by neoliberalism, which is fundamentally an ideology and politics of disattachments?

BL: Of course, this is one of the reasons. But you can also simply just lose the habit and culture of doing politics if it is not constantly maintained. Redescription is a general rule of the social sciences, but today I would say that this is the political question. Let us not forget that ecological mutations are unprecedented. We have never before had a moment where we had to reengineer the whole system of reproduction piece by piece, house by house, mobility by mobility, food by food. We have the experience of production and modernization, but we do not have any experience of reproduction and remodernization. For eight billion people, every single material entity that binds their societies together and makes them live is controversial. Meat is controversial, clothes are controversial, transport is controversial. In this situation, we cannot skip the phase of description of territory, unless we want to end up in an abstract world of identity or values. This is what happened in England. If we do not do the work of description, we cannot go forward.

NS: This leads me to my next question. Forty years ago, you started your career by following natural scientists in the laboratory. Now, you are interested in a new sort of science and a new sort of scientist. In Down to Earth you dedicate a chapter to critical zones and critical-zone scientists, and you are currently doing an exhibition on the topic. Why are you interested in these topics and how are they related to the task of description?

BL: First, critical zones and critical-zone scientists are words used in geoscience, hydrology, geomorphology, geochemistry, and in soil sciences to denote the thin crust or skin of the Earth and the scientists who are studying it. And, yes, if I have been following and studying these scientists for five years now, it is exactly because I think they help with the redescription of territories in a very practical way. First, because they are not global. They are not working with the Earth as the globe. Rather, it is the Earth as a thin skin. Everything on which life forms live exists only here, on a pellicle of the earth a few kilometers thick, reaching from the atmosphere down to a few kilometers into the rocks. So, what they study is comparable to Lovelock’s discoveries. It is another tool to get away from the idea of nature, which is simply too big, abstract, and imprecise. When you study critical zones, you study a series of things or connections on the crust of the Earth, so it has a modest reach. It is about very limited entities; it is not the whole cosmos. The second interesting thing about these sciences is that they explicitly study the differences between what they see in the laboratory and what they see in the field. Again, there is this modesty; it is a boots-on-the-ground type of science—a bit like natural history or like Alexander von Humboldt’s natural science.

NS: It is another epistemology.

BL: Yes. Epistemologically, they are far from the other sciences that I have been following for many years. And since they underline the discrepancies between their observations and the chemical reactions, it means that they are redescribing and rematerializing the question of territory, which we simultaneously try to redescribe and rematerialize in political and social theory. This is also where there is a link between Lovelock’s discovery, the political question of geosocial classes and critical zones. This is why I am interested in them and why I am also doing an exhibition on the topic.

NS: Why an exhibition? What is the role of the artists in it?

BL: Exhibitions allow you to do a thought experiment in a limited space that cannot be done in any other way. Every time I have done an exhibition, the question at hand was completely impossible to raise in a book but possible to raise in a space. Why? Because you are able to submit people to an experiment. This is what I mean by thought exhibitions. It is a way to use limited space, art, and artists to bombard visitors with expressions and then see what happens with them. The last one I did, Reset Modernity in 2016,[13] bombarded visitors with objects, asking if they could reset their vision of modernity.[14] The current exhibition—simply called Critical Zones: Landing on Earth—is somehow easier. It basically offers a lot of scientific facts and art from which the visitors can learn to redescribe and revisualize the Earth’s surface on which they live but of which they are not conscious, in large part because of the cartographic imaginary we spoke about before. The problem remains the same. We always think of the Earth seen from the outside. If you say “Earth,” what typically comes to people’s minds is the globe. But despite all the talk about the Blue Planet, only the people who are out in space experience the Earth like that. We are not out in space; we are inside critical zones. And this is what we need to visualize. Here, the importance of artists is that they help us multiply the visions of the Earth, viewed from the inside and not from the outside. It sounds simple, but it is absolutely crucial not to imagine the planet as the globe if we want to land on Earth. The globe is too big and too abstract. So, what we simply try to do is to invent with scientists and artists a vocabulary for this landing. In a way, it is surprising that we even have to do so. Why should we have to land? Are we not on Earth? In a way no, because the Moderns took off on an interesting and somehow beautiful journey, as visualized by Musk and his Tesla, but now we realize that we have to land again without crashing. As I say in Facing Gaia (2017), we are exactly in the same position as when we “discovered” the New World and when the cartographers had to redraw their maps. Four centuries later, we discover a new, moving earth. Not in extensity but in intensity, an earth which is reacting to our actions. For that you need new descriptions, and you need new visualizations.

 

Notes

This conversation between Bruno Latour and Nikolaj Schultz took place at The Queens Hall, Royal Danish Library on 29 May 2019. It has since been edited and substantially revised. Selections of the conversation were first published as an audio file by the Danish newspaper Dagbladet Information for their podcast series “European Ideas.”

 

Nikolaj Schultz, sociologist, is a PhD Fellow at the Department of Sociology, University of Copenhagen. He is currently a visiting scholar in Paris, where he is working with cosupervisor of his PhD thesis, Bruno Latour, on developing the concept of geosocial classes. Bruno Latour, sociologist and philosopher, is Professor Emeritus at Sciences Po, Paris. He is currently preparing the Critical Zones. Observatories for Earthly Politics exhibition, cocurated with Martin Guinard, Peter Weibel, and Bettina Korintenberg, set to open 8 May 2020 at the ZKM | Center for Art and Media Karlsruhe.                                                                

[1] See Bruno Latour, Facing Gaia: Eight Lectures on the New Climatic Regime (London, 2017).

[2] See Michel Serres, The Natural Contract (Ann Arbor, Mich., 1995), p. 86.

[3] See Alexandre Koyré, From the Closed World to the Infinite Universe (Baltimore, 1957).

[4] See Latour, Down to Earth: Politics in the New Climatic Regime (London, 2017).

[5] See Pierre Charbonnier, Abondance et liberté. De la révolution industrielle au changement climatique (habilitation thesis, École des Hautes Études en Sciences Sociales, Paris, 2019).

[6] See Johann Gottlieb Fichte, The Closed Commercial State, trans. Anthony Curtis Adler (New York, 2012).

[7] See Nikolaj Schultz, “Life as Exodus,” in Critical Zones: Observatories for Earthly Politics, ed. Latour (forthcoming).

[8] See Evan Osnos, “Doomsday Prep for the Super-Rich,” New Yorker, 22 Jan. 2017, www.newyorker.com/magazine/2017/01/30/doomsday-prep-for-the-super-rich

[9] See Julie Turkewitz, “A Boom Time for the Bunker Business and Doomsday Capitalists,” The New York Times, 13 Aug. 2019, www.nytimes.com/2019/08/13/us/apocalypse-doomsday-capitalists.html?searchResultPosition=9

[10] See Donna Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham, N.C., 2016).

[11] See Karl Polanyi, The Great Transformation: The Political and Economic Origins of Our Time (Boston, 2001).

[12] See Schultz, “Geo-Social Classes: Stratifications in the System of Engendering,” in Critical Zones.

[13] See Latour, Reset Modernity (Cambridge, Mass., 2016).

[14] See Latour,  Reset Modernity.


Tales of the 1940s: A Conversation between Werner Sollors and Françoise Meltzer

Coeditor Françoise Meltzer and Werner Sollors discuss Sollors’s The Temptation of Despair: Tales of the 1940s (2014). Read Sollors’s “‘Better to Die by Them than for Them’: Carl Schmitt Reads ‘Benito Cereno’” in the Winter 2020 issue of Critical Inquiry.

You can also listen and subscribe to WB202 at:

iTunes

Google Play

TuneIn


ON OKWUI ENWEZOR (1963-2019)

 

Terry Smith

I first met Okwui Enwezor in 1997, at Bard College in upstate New York, when the curatorial team for Global Conceptualism: Points of Origin, 1950s–1980s was assembled by the project organizers: the artist Luis Camnitzer, the scholar Rachel Weiss, and the curator Jane Farver. The exhibition opened at the Queens Museum, New York, in April 1999, and traveled elsewhere in the US. The aim was to show New Yorkers and other Americans that conceptualist practices from elsewhere were not pale imitations of European and US models. Instead, they had originated throughout the world in response to local conditions and were usually more political in intention and effect. At the Bard workshop, each curator was challenged to prove that “our” artists—the artists from the region we represented—met these criteria. Okwui and I were provoked by this: me to show that, in Australia and New Zealand, there were both imitators and originators, but more importantly to demonstrate that conceptualism was more an “art in transit” than an art locked into local settings. Okwui’s answer was better. Fresh from curating the 2nd Johannesburg Biennial, he boomed, “Of course, there are some artists who are clearly international conceptualists, yet work in unique ways.” He showed South African Willem Boshoff’s braille text pieces. “But,” he continued, “the point is that, as Yoruba knowledge tells us, in Africa, artists emerge from a long tradition of ideas, language, and performance.” Thus, Frédéric Bruly Bouabré. African art, he was saying, has always been conceptual and political—on a broader scale, and in deeper, more embedded ways than anything you can imagine.


Global Conceptualism was reviled at the time; it is now regarded as a landmark exhibition, a harbinger of the “global exhibitions” to come, which are now almost the norm for exhibitions that aim to be seriously consequential. Okwui was not only a pioneer of this form, he quickly became its leading exponent. What drove him to take on such ambitious projects? What enabled him to succeed, so often, over more than two decades?

His personal qualities were evident to all who knew him. A love of life. A large laugh. A generosity of spirit. High intelligence. A constant quest for more knowledge; an incessant self-education. A gift for friendship. He was a demanding companion and a challenging colleague. Of course, he had unlimited ambition—for himself and for his projects. His natural inclination to leadership was tempered by an instinct towards collective action. True grit. Unbending integrity. Impatience with stupidity; hatred of cupidity. An instinctive educator; a great teacher (he was much loved at the University of Pittsburgh, where I brought him to teach, straight after Documenta 11). He was an inspiring, indefatigable collaborator, as I found out when he and Nancy Condee and I worked together in Pittsburgh to stage the conference that led to Antinomies of Art and Culture: Modernity, Postmodernity, Contemporaneity (Duke, 2008). Above all, he was a visionary, a dreamer.

He possessed a love of art that encompassed continents and centuries, thus making a random stroll through the Metropolitan Museum, New York, his greatest happiness as a private visitor to an art gallery. It is one of the world’s losses that we never got to see Okwui in the role of director of that museum, or an equivalent institution. While the United States, to its great credit, was able to elect, and reelect, a black president (for whom we both voted), black directors of major museums are few and far between. The situation in Europe is no better. We talked about this structural exclusion, which he felt keenly. The world’s geopolitical turning would, we dared to hope, eventually lead to change, despite the current reactionary regressions. It is a matter of deep regret that his life was cut short—he died in Munich on 15 March 2019—and that we will never see him break through that wall, as he did so many others.

But Okwui Enwezor amounted to much more than the sum of his personal qualities, and a lot more than the list of his formal identities. This became truly clear when I visited Documenta 11, the fifth platform of which was at Kassel in June 2002. For me, a defining moment occurred in the Documenta Halle, in the installation From/To by Fareed Armaly and Rashid Masharawi. Armaly, an artist of Lebanese-Palestinian descent, born in the US and resident in Stuttgart, designed a floor grid of orientations based on territories claimed by Palestine. Masharawi, a Palestinian filmmaker, born in the Shati refugee camp and resident of Ramallah, presented an engrossing program of Palestinian film. The projection space included an illuminated wall map showing the actual locations of Israeli settlements on the West Bank. It became obvious at a glance that Israel was establishing “facts on the ground” that would make the two-state solution supposedly desired by all parties a practical impossibility.

An informed, free press would have made this known to all, but these were the months after 9/11. The War on Terror had been declared by the oligarchs who were then, as now, in command of nations. Information inimical to their interests was systematically eclipsed, even in “free” societies. In the United States, where we were living, opposition was rare, and when exceptional intellectuals such as Noam Chomsky and Susan Sontag raised their voices against the tide of misinformation, mindless patriotism, and fearful retreat from critique, they were pilloried. Okwui and his team, and the artists in the exhibition, did not fear such criticism. They had a larger duty: to show the world to us as it was, and to imagine the world as it might be, after the legacies of colonialism are finally overcome.

Okwui called this: opening “The Black Box.” Not just creating spaces for photography, video, and documentary, but also exposing the world’s unconscious, its centuries of repression. Under his guidance, the exhibition became a space of liberation.

A certain trajectory emerges in the series of his exhibitions that began at the Guggenheim Museum, New York, in 1996, with In/sight: African Photographers, 1940 to the Present, and continued with Trade Routes: History and Geography, 2nd Johannesburg Biennial (1997); The Short Century: Independence and Liberation Movements in Africa, 1945–1994 (2001–2002); Documenta 11 (2002); and then through several others, up to and including his recent major achievements. It was no accident that he located the continuous reading of undervalued yet essential texts at the core of most of his exhibitions. Nor that, at Venice in 2015, it was Karl Marx’s Capital. Thomas Piketty’s globalized version, Capital in the Twenty-First Century, had been published the year before. Okwui wanted us to remember the real thing, to help us imagine All the World’s Futures more clearly.

No curator working today matches the scope of Okwui’s vision. I see him as the Karl Marx of contemporary curating. I say this with full awareness that each of us is a clutch of contradictions, as was Marx himself. Okwui’s deep understanding of the kinds of work that art does in the world parallels Marx’s grasp of the importance of modes of production, and how when they change, the world changes. These are not abstractions. They are insights into how things are, and how they might get worse, or better, or both. Compare any of his exhibitions, with their world-historical sweep, to the mainstream surveys of contemporary art, vaguely shaped according to a generalizing, pluralistic theme—for example, most editions of the Venice Biennale. In contrast, Okwui became the master of what we might call the contemporary, historical, and critical exhibition. Triumphant Scale was a great title for the El Anatsui exhibition at the Haus der Kunst, Munich: it describes Okwui’s achievements equally well.

In a conversation that we had in Munich in 2013, which was published two years later in my book Talking Contemporary Curating, he said this:

To me the fundamental challenges that a curator faces today are how to provoke an engaged confrontation with works of art, how to make that experience legible, and how to use it to open up forms of engagement with the world. Exhibitions, in this sense, open up the surplus value of art. They create value of many kinds, simply because each time artworks are exhibited they accrue new meaning, new force, and open out new possibilities, while not necessarily changing their shape. In turn, art changes the perceptions of those it engages—so, to make an exhibition is to theorize the place of art not only in institutions, but also in public spaces, and, if you will, in the world.

To truly value the surplus value of art, and to never use it for its exchange value—that was what Okwui believed that contemporary curating should do.

In the week before he died, on 15 March 2019, I spent many hours each day by his hospital bed in Munich, as the vast complexities of his life converged upon us. It was a privilege to be there with him then, as it had been, so often but never often enough, since 1997.

 


TERRY SMITH is Andrew W. Mellon Professor of Contemporary Art History and Theory in the Department of the History of Art and Architecture at the University of Pittsburgh, and Professor in the Division of Philosophy, Art, and Critical Thought at the European Graduate School. In 2010, he became Australia Council Visual Arts Laureate and received the Frank Jewett Mather Award from the College Art Association (USA). Books include What is Contemporary Art? (2009), Contemporary Art: World Currents (2011), Thinking Contemporary Curating (2012), Talking Contemporary Curating (2015), The Contemporary Composition (2016), One and Five Ideas: On Conceptual Art and Conceptualism (2017), and Art to Come: Histories of Contemporary Art (2019). See http://www.terryesmith.net/web


Talking about the Rule of Law with Robert Mueller and E. P. Thompson

Aziz Z. Huq

 

“What is remarkable (we are reminded) is not that the laws were bent but the fact that there was, anywhere in the eighteenth century, a Rule of Law at all.”[1]

 

Predictably and painfully, the public exorcism of the Mueller investigation came to an ashen close this week. Its climax—or perhaps more accurately, bathetic anticlimax?—took the form of a pair of congressional hearings where the Special Counsel, along with an assistant, testified. These were a peculiar blend of ghostly whispers conjuring evasions and circumlocutions, a surfeit of the usual theatrical bluster and malarkey—thank you, Jim Jordan—and the occasional huffs of exasperation by Mueller himself. The latter were not, though, evinced by any pained splinter of concern at the documented fact of presidential criminality. Rather, the special counsel showed the most energy when his rock-ribbed prosecutorial reputation seemed under interrogation. In contrast, the sedulous documentation in volume 2 of the Mueller report, which persuasively adumbrates almost a dozen discrete instances of presidential obstruction of justice, has vanished into little or nothing. At the hearing, their echo yielded nothing but “euphoria” from the White House.[2] The reason is easy enough to see: no reality-show producer is rushing to book Mueller on the strength of his ethereal performance.[3] But absent some histrionic moment, like something out of A Few Good Men (1992), it is hard to see how the hearing could have made much impact in the first instance.

These events would be no surprise to readers of E. P. Thompson’s masterful history Whigs and Hunters. Its first 250-odd pages, after all, are a powerful, even unforgettable testimonial to the potency of “bad law, drawn by bad legislators, and enlarged by the interpretations of bad judges” (WH, p. 267). Those pages sketch indelibly the Black Act of 1723, a cruel enclosure of land and customary rights as an incident of class war, one that extended the death penalty to deer stealing, tree cutting, and burning. In the body of Thompson’s account, law is a tyranny, as red in tooth and claw as the rapacious English land-owning class that wielded it without compunction or hesitation.

Yet, in a passage that has puzzled many of his admirers and ideological fellow travelers, Thompson ends his account with a paean to the rule of law. Despite everything in Whigs and Hunters, he nonetheless praises the “remarkable” virtue of law. This inheres in its “principles of equity and universality which, perforce, has to be extended to all sorts and degrees of men” (WH, p. 259). This quality of generality is common to many leading definitions of the otherwise protean rule of law, from Joseph Raz’s to Lon Fuller’s.[4] Hardly a panacea—remember Anatole France’s pungent line about the law’s “majestic equality”—law still seemed to hold for Thompson the promise of some vestigial constraint on the wielding of state power against the vulnerable.[5]

One hears something of an echo of Thompson’s sentiment, I think, in calls from the left for the machinery of criminal justice—responsible for the evil of the mass incarceration of African Americans, the horrors of the war on drugs, and far more—to crank into action against Trump. There has been an idea on the left that prosecutors will come swinging to the rescue, ending the moral catastrophes of the Trump presidency, without reckoning with its structural causes or institutional continuities.

The hope was always an unlikely one, quite apart from its selective and culpable omissions about American criminal justice’s longer historical record. At the Mueller hearings, the hope plainly flared and died. The documented instances of serious criminality—witness tampering of the sort that gives drug dealers a bad name—are simply ignored, drowned out with cries of “no collusion.”

I think this is an opportune moment for thinking about the role that law plays in constraining power, and state power in particular, in a putatively liberal and democratic context. In particular, this is an opportunity to think about the circumstances in which that constraining role can be anticipated, and when it is likely to fail. When, that is, should we anticipate the cooperation of law with “tyranny,” and when should we anticipate the vindication of “principles of equity and universality”?

Something of an answer to this question germinates within the text of Whigs and Hunters itself. For Thompson, the “essential precondition for the effectiveness of law, in its function as ideology, is that it shall display an independence from gross manipulation and shall seem to be just” (WH, p. 263). That is, to function effectively as ideology, the law cannot be wholly supervened by the naked policy preferences of a hegemonic class. It can’t be too ruthless or zealous in its pursuit of dominion. The civilizing force of the law’s touch thus depends on the political implausibility of (too much) hypocrisy. Appearances, or the compulsion to keep them up, turns out to matter. This sort of mechanism contrasts with the notion, associated most recently with Frederick Schauer, that law constrains only when it can credibly threaten coercion or force.[6]

Once one focuses on the legitimacy of law as a normative system independent of “gross manipulation” as the effective vector of law’s constraint, the failure of the Mueller report to generate consequences commensurate with its accusatory force becomes clearer. For three related structural reasons, the current occupant of the White House has no need even to gesture toward or perform fealty to the rule of law in terms of his personal conduct. Both for him and for members of his political coalition—both legislators who ignore his criminality and judges who ignore his nakedly unconstitutional animus—the civilizing bent of law’s generality no longer has a gravitational pull. These reasons, to be clear, aren’t general in scope. They are local to a particular historical moment. They may be, therefore, exemplary rather than exhaustive of the conditions in which Thompson’s aspiration founders.

To begin with: there is simply no partisan political logic to law’s restraint, even where what is at issue are serious violations of the criminal law. In a separation-of-powers system, just as in a parliamentary system, a president in large measure stays afloat through his or her ability to influence legislators. Republican legislators in safe seats—whether because of the “big sort”[7] or just gerrymandering[8]—have to worry not about the general election but the primary. Unless Republican primary voters are motivated to care about legality, Republican legislators are not likely to be moved by allegations of law violation. That is, they have to worry about the slice of the population that is probably most likely to approve of Trump, and perhaps most likely to be enraptured by his venal and venial sins. But Trump commands an approval rating consistently greater than 80 percent among Republican voters.[9] That approval may be correlated with a distaste for certain elements of liberal constitutionalism.[10] Tellingly, it peaked after his recent racist attacks on four women legislators of color.[11] This means that Trump has nothing to fear from copartisan legislators; they, in contrast, have everything to fear from his ability to provoke a primary challenge.

Second, the American public sphere is organized around media that are structurally oriented toward the dilution and distortion of information harmful to Trump, and the elevation of information (true or not) that helps him. This is not the internet, but the pro-Trump cable news shows that populate much of the public imaginary. In a brilliant dissection of political misinformation during the 2016 election, Yochai Benkler, Robert Faris, and Hal Roberts have shown that misinformation’s diffusion is not the result of Putin’s troll farms or Cambridge Analytica, but rather the decision of cable news hosts, acting as trusted intermediaries, to push out Pizzagate or deep state conspiracies.[12] Benkler et al.’s compelling empirical analysis decisively undermines the canard that social media or Facebook alone has critically undermined the possibility of a shared public sphere by enabling the spread of disinformation.[13] Given the structural alignment between the Trump presidency and Fox News in particular, it was never likely that any wrongdoing (barring perhaps the infamous possibility of murder in cold daylight) would move the needle.

Finally, there is the content of the law itself. To an extent that nonlawyers perhaps do not appreciate, the law’s substance is not predetermined by the text of, say, a criminal statute or the Constitution. The questions whether the president can obstruct justice, or whether he or she can be indicted as a result, are not resolved by Article II of the Constitution or the obstruction statute directly. Rather, they are interpretations of the law. Interpretations depend on the institutional context in which they are manufactured. Law’s penumbra, as Thompson appreciated, is a function therefore of its institutional conditions.

The law of criminal penalties is in the first instance the work of prosecutors, who must interpret vague or general statutes before applying them. The only formal legal authority on whether a president can be indicted is, therefore, controlled by the Department of Justice, a body that reports to the president.[14] Even before one gets to the stacking of the federal bench with former prosecutors and White House lawyers, or starts to dabble in the airy suppositions of “unitary executive” theory, there is a (yet again structural) bias against the kind of generality that Thompson savored—at least when it comes to the president. The point here is absolutely not that the president is “above” or “beyond” the law: the relationship between law and the presidency is far too complex,[15] far too mutually constitutive, to support so broad and inchoate a claim.[16] Rather, the specific forms of legality that the president can invoke, or that can be invoked against him or her, are a function of institutional context, and in the context of the criminal law there is a profound inconsistency between the aspiration of “principles of equity and universality” and the brute fact of institutional motivation.

On the same day as the Mueller hearing, a federal judge in Washington, D.C. upheld a Trump rule that would bar almost all asylum claimants because they had moved through a third country without first seeking asylum there, a rule that even the neoliberal Economist decried.[17] Law, in at least one of the forms that Thompson recognized and documented in Whigs and Hunters, remains alive and well in America. Where and how it grips is a function of institutional context, though, and should be the focus of careful and situated analysis—not a matter of aspiration or hope alone.

[1] E. P. Thompson, Whigs and Hunters: The Origins of the Black Act (London, 1975), p. 259; hereafter abbreviated WH.

[2] Eliana Johnson and Melanie Zanona, “‘Euphoria’: White House, GOP exult after a flat Mueller performance,” Politico, 24 July 2019, https://www.politico.com/story/2019/07/24/robert-mueller-testimony-gop-white-house-1430049

[3] Peter Baker, “The Blockbuster That Wasn’t: Mueller Disappoints the Democrats,” New York Times, 24 July 2019, https://www.nytimes.com/2019/07/24/us/politics/trump-mueller-democrats.html

[4] See Joseph Raz, “The Law’s Own Virtue,” Oxford Journal of Legal Studies 39, no. 1 (Spring 2019): 1-15; and Lon L. Fuller, The Morality of Law (New Haven, Conn., 1969).

[5] Anatole France, Le Lys Rouge (Paris, 1960).

[6] See Frederick Schauer, The Force of Law (Cambridge, Mass., 2015).

[7] Richard Florida, “America’s ‘Big Sort’ Is Only Getting Bigger,” CityLab, 25 Oct. 2016, https://www.citylab.com/equity/2016/10/the-big-sort-revisited/504830/

[8] See Nolan McCarty, Keith T. Poole, and Howard Rosenthal, “Does Gerrymandering Cause Polarization?” American Journal of Political Science 53, no. 3 (July 2009): 666-680.

[9] “Trump approval ratings,” FiveThirtyEight, 30 July 2019, https://projects.fivethirtyeight.com/trump-approval-ratings/

[10] Aziz Z. Huq, “The People Against the Constitution,” Michigan Law Review 116, no. 6 (2018): 1123.

[11] “Republican Support for Trump Rises After Racially Charged Tweets,” Investing.com, 17 July 2019, https://www.investing.com/news/politics/republican-support-for-trump-rises-after-racially-charged-tweets-reutersipsos-poll-1925530

[12] Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (New York, 2018).

[13] The post-2016 criticism of social media, in any case, was always at odds with the known demography of the Trump coalition. More recent studies showing widespread distrust of social media on the right and left only undermine it further; see Katerina Eva Matsa and Elisa Shearer, “News Use Across Social Media Platforms 2018,” Journalism.org, 10 Sep. 2018, https://www.journalism.org/2018/09/10/news-use-across-social-media-platforms-2018/

[14] See “A Sitting President’s Amenability to Indictment and Criminal Prosecution,” The United States Department of Justice, 10 Dec. 2018, https://www.justice.gov/olc/opinion/sitting-president’s-amenability-indictment-and-criminal-prosecution

[15] Aziz Z. Huq, “Binding the Executive (by Law or by Politics),” University of Chicago Law Review 79 (May 2012): 777.

[16] “While You Were Tweeting,” The Economist, 20 July 2019, https://www.economist.com/leaders/2019/07/20/the-white-house-ditches-half-a-century-of-immigration-law

[17] See Spencer S. Hsu, “Federal judge allows Trump administration rule restricting asylum access to continue,” Washington Post, 24 July 2019, https://www.texastribune.org/2019/07/24/federal-judge-upholds-trump-administration-rule-restricting-asylum-acc/; and “The White House ditches half a century of immigration law,” The Economist, 20 July 2019, https://www.economist.com/leaders/2019/07/20/the-white-house-ditches-half-a-century-of-immigration-law


Filed under 2016 election, Mueller Report

The Palestinian Shoah?

David Simpson

First, note the italics. I mean the film, not the event. We have all been well schooled in the moral orthodoxy whereby nothing can or should be compared to the Shoah, which was indeed a genocide of staggering and exceptional proportions, one whose millions of dead indeed deserve not to be jumbled together as simply one set of victims among many in modern history. Speaking about the Shoah has generated a unique level of attentiveness and deference; some feel that nothing can be said by way of explanation, or that no restorative gesture can be adequately imagined, or that any comparison with anything else is an outrage. Some say that it is best remembered as an instance of absolute evil, one that will forever stand as the limit case of human cruelty and depravity. All explanations soon seem to come to the point where something irrational must be confronted. The disturbances generated by any attempt at explanation are not likely to disappear. Claude Lanzmann’s Shoah (1985) has a good deal to do with this situation.

Lanzmann’s film generated an uncommonly intense set of responses, all now part of the record. Whether out of choice or necessity, Lanzmann barely interviewed the Nazi perpetrators: figures like the Polish train driver at Treblinka had to do most of the work of accounting for the agents. Lanzmann was a Zionist, and historical complexity is no part of his film. But the testimony of the victim survivors is unforgettable. Above all it is suffused by the melancholic passage of time; these are among the last who will speak from personal experience, who saw and felt the culture of the death camps. The Palestinian survivors of the Nakba (catastrophe) are also reaching old age; they too have little time left to be recognized and recorded.

Shoah had worldwide distribution and massive publicity. It has become an unignorable centerpiece of film history, both for its topic and its methods, and at over nine hours in length it demands a serious commitment from its audiences, one commensurate, no doubt, with the gravity of its subject. It is unlikely that Andy Trimlett and Ahlam Muhtaseb’s 1948: Catastrophe and Creation, produced largely by community funding (it was twice refused NEH support), released in late 2017 and running for not much more than an hour, will get anywhere near this level of attention.[1] Indeed at least one city council in the US actively sought to prevent its being shown. The current weaponization of anti-Semitism, which seeks to identify any critique (or even historical analysis) of Israel or Zionism as an ethno-racial attack on all Jews, will ensure that many of us who see this film will see it in the way I saw it, at a one-off showing in a Unitarian church attended by persons already sympathetic to the cause of Palestinian rights. Alternatively, we can resort to Amazon Prime. It is worth doing so.


These limits on public circulation are to be regretted, for the film deserves the widest distribution. It is the outcome of much research and some ninety interviews with those who lived through 1948 in Palestine as it was becoming Israel, interspersed with the comments of modern historians of the Middle East. It offers more or less equal time to members of the Jewish militias and to their victims, and in this sense it records both sides; but equal time does not imply moral equivalence, nor does it pretend that there is no agreement about the harsh facts of what occurred.  As I am writing, things are going rapidly from bad to worse for the Palestinians, and it is unlikely that we have reached bottom. 1948 does not claim that what happened was a Shoah equivalent; the film is modest in its documentation of actual deaths (on both sides) and is scrupulously sensitive to the anguish of those who felt or now feel terrible about their role in the “cleansing” of Arab villages and neighborhoods. Even when we are told the story of a baker and his son who were thrown alive into an oven by Jewish soldiers, there is a remarkable lack of melodrama or coercive emotionalism. On the contrary, we are made to see how absolutely normal such events are among those who feel that being at war justifies the rapes, tortures, and murders committed. The Deir Yassin massacre figures in, of course, but only as one among many other stories of violent expulsions all over Palestine.

Absent here is any reference to the obfuscating question as to whether Israel has a “right to exist,” as if any state anywhere has ever had such a right, or has been innocent of founding violence. The old canard about the two-state solution that was supposedly on offer only to be refused by the Palestinians is shown for what it was: a massively uneven division that gave more than half of the land, and the best of it, to what was then a Jewish minority. Muhtaseb and Trimlett have done for film what Thomas Suárez’s State of Terror (2016)—also probably destined to remain a hard-to-find book—did for the print record: they bring to life the exhaustive evidence from the archive (or what the author has been allowed to see of it) that carefully planned terrorism and violence were the foundations of Israel both before and after it achieved statehood.[2]

If the film is not “even-handed” in the habitual American sense whereby one position is set against the opposite position, whatever the issue, and no one raises awkward questions about facts, it is because the history being remembered is itself not even-handed. One side had the weapons, the training and the violent ethno-nationalist motivation, and the other did not. In the present day, the winners are taking more and more of the land, and look as if they might take it all. In so doing they are bound to confirm and compound by more and more violence their own status as unwelcome occupiers, and enact more and more punitive legislation, all the while trying to persuade the world that they are an inclusive, nonracial democracy. Many of the old Irgun and Palmach fighters report what they did and what they saw without excessive sentiment and without explicit apology, but their discomfort and occasional distress are palpable, and they share with their victims, however reluctantly, a dignified commitment to establishing the record, to witnessing. They are neither vindicated nor excused, but there are no denials. The concluding voiceover in 1948 does not ask what degree of right and wrong exists here, but whether it has been worth it; and if it has not been worth it, then what happens next? In the face of the militant triumphalism and historical misrepresentation enacted by the current Israeli government and its apologists, this new way of asking an all-too old question should be welcomed and circulated as widely as possible.

David Simpson is Distinguished Professor and G. B. Needham Chair Emeritus at the University of California–Davis. His most recent book is States of Terror: History, Theory, Literature (2019).

Footnotes

[1] Andy Trimlett and Ahlam Muhtaseb, dir. 1948: Catastrophe and Creation (Portland, OR: Collective Eye, 2017), 85 min. http://www.1948movie.com

[2] See Thomas Suárez, State of Terror: How Terrorism Created Modern Israel (Bloxham, 2016).


Filed under Uncategorized

All That Heaven Allows: Robert Pippin and Tom Gunning Discuss the Work of Douglas Sirk

Robert Pippin and Tom Gunning discuss Douglas Sirk’s film All That Heaven Allows (1955). Pippin’s “Love and Class in Douglas Sirk’s All That Heaven Allows” was published in the Summer 2019 issue of Critical Inquiry.

You can also listen and subscribe to WB202 at:

iTunes

Google Play

TuneIn


Filed under Podcast

Q & A with Robert Mueller on Legal Writing, Imagined before House Committees

Richard H. Weisberg

 

On 24 July, Robert Mueller is scheduled to appear before various House committees. My close, literary reading of his famous report raises several questions that I would ask him were we face to face on camera. Imagine this dialogue between Mueller (“A”) and me (“Q”), which ends with my questioning the report’s conclusions as to Part I.

Q – Mr. Mueller, thanks for your thorough investigation. I know you wanted legal precision in your written analysis, but did you and your team also aim to meet the highest standards of expository writing skill and stylistic excellence?

A – Yes. I instructed everyone who participated in drafting the report to follow the rules you set down in When Lawyers Write, your book I’ve been consulting for years.[1]

Q – I’m flattered but not surprised, because almost every sentence practices what I preached there: strong choices of subjects and verbs; good organization of paragraphs and sections; near-perfect punctuation and use of “that” or “which”; little verbosity, and only one case of significantly awkward variation in word use; keeping the reader on track . . .

A – Well, the Russian names challenged us; we could hardly sort them out ourselves!

Q – Like the first-time reader of ANNA KARENINA! Still, your famous control and patience maximize the reader’s chances of following this cast of characters, in some cases from introduction to indictment . . .

A – I’m especially proud of my “chapter” linking Deripaska to Manafort, and Kislyak to Sessions. Not exactly Pierre and Natasha, but Tolstoi made up his characters’ names, while I was handed Kilimnik, Akhmetov, Serhiy Lyovochkin, and Veselnitskaya, among other oligarchs, devotees of Trump properties worldwide, dabblers in Eastern Ukrainian politics, and abusers of the court system. What was the one awkward variation?

Q – Maybe we’ll have a chance to get to that. What counts first, though, is that I’ve rarely read a legal document, short or long, that so often flows with an elegance worthy of Benjamin N. Cardozo, Oliver Wendell Holmes, Lincoln or JFK at their best . . .

A – Please, red is not my best facial color in front of all these cameras. And the only other time I’ve addressed the public on this matter, people said that I was stiff and unclear. I know that my writing is better than my oratory, but do you really think I’m as good as Judge Cardozo, one of my heroes?

Q – Yes; consider the following representative sentence from your crucial Part II passage on the so-called witch hunt, where McGahn resists Trump’s apparent order to fire you [see pp. 345–50]:[2]

First, McGahn’s clear recollection was that the President directed him to tell Rosenstein . . . that “Mueller has to go.” McGahn is a credible witness with no motive to lie or exaggerate given the position he held in the White House. McGahn spoke to the President twice and understood the directive the same way both times, making it unlikely that he misheard or misinterpreted the President’s request . . .

A – Sorry to interrupt, but yes, this is my favorite long paragraph. The simple transitive verbs follow your “directive” to choose the most active noun in your thought and make that the subject of the sentence. I avoid sentences that look like “the cat was eaten by the dog” just by making the dog—here McGahn—the subject. Five words instead of seven, simple transitive verbs, no evasive passivity: “The dog ate the cat” all the way down!

Q – And it sets up the denouement of the paragraph’s plot: “In response to that request, McGahn decided to quit because he did not want to participate in events that he described as akin . . .”

A – I put in that “akin” myself during a final edit!

Q – “. . . as akin to the Saturday Night Massacre.” Now comes the coup de grace, your rhetorical brilliance in mounting to a climax through the parallel usage of everyday verbs. It’s like the greatest, most mind-blowing judicial opinion ever written, Cardozo’s Hynes v. New York Central Railroad . . .

A – Yeah, you bring that 1921 piece of prose to light for all of us in When Lawyers Write! 231 N.Y. 229, I’ve memorized it. Every lawyer and judge should read it once a month. Every literate nonlawyer, too, just like Stendhal read sections of the Code Napoleon each night.[3]

Q – Maybe Cardozo is watching these hearings today from a perch in the heavenly Sanhedrin. He would want me to emphasize your active verb choices, which follow from your fine choice of subjects:

[McGahn] called his lawyer, drove to the White House, packed up the office, prepared to submit a resignation letter with his chief of staff, told Priebus that the President had asked him to do “crazy shit,” and informed Priebus and Bannon that he was leaving.  [P. 351]

A – I tried to imitate Cardozo in Hynes. All you have to do as a lawyer is forget the obfuscation and go for lucidity, just like Cardozo when he describes the railroad’s careless termination of a day of swimming and diving on the shores of the Hudson:

Hynes followed to the front of the springboard and stood poised for his dive. At that moment a crossarm with electric wires fell from the defendant’s pole. The wires struck the diver, flung him from the shattered board, and plunged him to his death below.

Q – Did you see the irony of answering the White House’s convolutions with sheer simplicity?

A – Yes; as Cardozo taught you and then me, the form of our writing matches its substance. If you deceive through stilted or imprecise language, your listener can see through to the lies you’re telling.

Q – And if you write with directness and to the point, the truth of what you write comes through?

A – I hope so.

Q – Your report is so well written that its occasional slippage stands out awkwardly.

A – You mean the way I refuse to exonerate the President on obstruction? Everybody says they wanted a yes or a no, like with the conclusion on conspiring with the Russians.

Q – No, not at all. Your language there perfectly suited the substance of your statement, but I think nonlawyers who are going for the jugular one way or another get upset with subtleties (see p. 264).[4] After almost two years of waiting for you, people wanted red meat, and good lawyers don’t pander. The Attorney General’s “four page summary of a 300-page report is highly inadequate,” people said, but few had the patience or skill to work through all those pages knowledgeably;[5] they might have been satisfied by a four-page summary that suited their preconceptions. In fact, your style throughout is of a piece with the excellence we have just discussed, and there is only that one flaw I mentioned.

A – I’ll accept such a verdict. Only one flaw?

Q – Potentially fatal. . . . I’m afraid I come out of this believing that the President and his campaign did conspire with the Russians on election fraud!

A – But my contrary conclusion is the one everyone has come to accept!

Q – It’s your own confusing language. If the report were not otherwise so well written, I would not expect clarity in its conclusions. But when you fudge on a key verb, and do so at a crucial stage, you lose me.

A – Which verb?

Q – “Established.”

A – Yes, that word is crucial. I use it almost every time I make a conclusion based on evidence.

Q – The first time you define your usage, you begin to slip:

When substantial credible evidence enabled the Office [why is “office” not the subject? How does a nonhuman agency “enable” anything?] to reach a conclusion with confidence, the report states that the investigation established that certain actions or events occurred. A statement that the investigation did not establish particular facts does not mean there was no evidence of those facts. [P. 60]

What does that mean?

A – I admit it’s not up to the rest, perhaps. I wanted the reader to know that the verb “established” as used by the report goes beyond just finding a few credible facts that might create an inference of culpability.  Established means almost complete assurance that a fact or series of facts meets the legal standard for whatever crime is being discussed, conspiracy or coordination in Part I—I reject right on that page the obscure use of “collusion”—and obstruction of justice in Part II.

Q – Could you have said “‘established’ means a fact or series of facts is credible beyond a reasonable doubt”?

A – I think that’s the way it works as the report progresses. That’s why, on the present record and given our rules, we reached no conclusion regarding obstruction of justice.

Q – But right in Part I, before you get to obstruction, you vary the verb usage from “established” to other words that are vague or undefined.

A – Examples?

Q – Sometimes the variance seals your point by exceeding the definition you’ve given for “established,” and that’s OK:

The investigation did not uncover evidence of Manafort’s passing along information about Ukrainian peace plans to the candidate or anyone else in the Campaign or the Administration. [P. 188; my emphasis]

A – OK.  If established is a difficult standard as applied, as you say, “did not uncover” is an even more definitive phrase to show an absence of culpability.  Fine. I recall using it a few pages later too (see p. 202).

Q – But the other variations on establish diminish the report’s credibility.

A – What other verbs do I use besides uncover?

Q – “Identify” is linked to the word “evidence” more than once in the report (pp. 187, 189, 225). What did you mean to accomplish by giving a synonym for an already defined word? You wouldn’t do that in drafting a deed or a will, would you? It’s at best needlessly confusing, and it’s harder to understand, I think, than uncover.

A – I think you’re nitpicking. Maybe I should have stuck with establish, but the variations you’ve “identified” so far strike me as similar in enhancing the word’s meaning, not diminishing its force. You might have referenced just now “find” as a variation too: we “did not find evidence” beyond a reasonable doubt that Campaign officials acted as agents of Russia (p. 241). Every major statement I make about conspiracy reverts to or doubles down on the word established, right? “Ultimately,” we conclude, “the investigation did not establish that the Campaign coordinated or conspired with the Russian government in its election-interference activities” (p. 231). Consistent enough for you?

Q – I can’t concede the point quite yet. The weakest link, and the one that most concerns me about this uncharacteristic stylistic slippage, relates to your chapter on the infamous Trump Tower meeting of 9 June 2016, and the possible violation there of campaign finance laws. Key Trump campaign representatives Trump, Jr., Manafort, and Jared Kushner met with various Russians, enthusiastically anticipating derogatory information about Hillary. There might have been criminal violations that day alone, notably of campaign finance prohibitions on foreign contributions of many kinds, including “anything of value” such as information (p. 244) . . .

A – Of course—our analysis of that meeting is as long and as incisive as Crime and Punishment’s sections on the investigation of Raskolnikov! I start by recognizing that this episode gets very close to Trump but conclude that (see p. 168) . . .

Q – Let me quote your conclusion:

On the facts here, the government would unlikely be able to prove beyond a reasonable doubt that the June 9 meeting participants had general knowledge that their conduct was unlawful. [P. 245]

A – Kind of choppy, I admit, but that scienter requirement—they had to act knowingly and willfully—was the stumbling block for us under the relevant statute. We got some evidence but did not “obtain” much regarding scienter.

Q – But again there is immediate and troubling slippage in your verb usage! “The investigation,” you go on,

has not developed evidence that the participants in the meeting were familiar with the foreign-contribution ban. . . . While Manafort [for example] is experienced with political campaigns, the Office has not developed evidence showing that he had relevant knowledge of these legal issues. [Pp. 245–46; my emphasis]

A – We messed up there. I take your point.

Q – Made out of admiration for the care elsewhere. What could you have meant by “has not developed evidence”?  Aren’t you admitting that if you had moved the investigation along—“developed” this part of it—you might have met the legal requirement of the campaign’s knowing violation of law?

A – Well . . .

Q – Let’s take from this dialogue that even the conclusion of absence of conspiracy and cooperation, as well as what you say on obstruction of justice, needs to be explored further?

A – Maybe, but not by me. I did my best, and the report stands, warts and all.

Q – Small warts indeed on a fine body of writing. Thanks, Mr. Mueller, for being an excellent lawyer.

 


 

[1] See Richard H. Weisberg, When Lawyers Write (Boston, 1987).

[2] Page references are to the Washington Post version of the Report (2019), following the number on the lower right of each page.

[3] See Weisberg, When Lawyers Write, p. 6.

[4]

If we had confidence after a thorough investigation of the facts that the President clearly did not commit obstruction of justice, we would so state. Based on the facts and the applicable legal standards, however, we are unable to reach that judgment. . . . While this report does not conclude that the President committed a crime, it also does not exonerate him.

The language continues the report’s practice of strong stylistic choices; the frustration it evoked cannot be blamed on “legalese.”

[5] Senator Kirsten Gillibrand, quoted in the New York Times, 26 June 2019.

 

Richard H. Weisberg is Floersheimer Professor of Constitutional Law at Cardozo Law School, Yeshiva University, and formerly Assistant Professor of Romance Languages and Comparative Studies in Literature at the University of Chicago.


Filed under 2016 election, Mueller Report, The Trump Election: Night Thoughts

Reading the Mueller Report

The textual icon of our moment is surely the Mueller Report. It is the most discussed and least read book in many years. It must rank among the most eagerly anticipated and anticlimactic publications in the modern history of the book. How important is it? Or rather, what, precisely, is its importance? Does it matter that it is boring, especially for people who have watched the entire narrative unfold publicly over the last two years? Will it come alive (as some hope) when the movie version of the report is produced by the author’s testimony before Congress in the coming weeks?

Critical Inquiry is interested in the question of the Mueller Report as both a text and an iconic event. We hope to publish a few brief (1500 word) invited essays that assess the significance of the report, along with its reception. If you have an idea for such an essay, please send a letter with a brief precis of your idea to the editors at cisubmissions@gmail.com.

We inaugurate this forum with an essay by Richard H. Weisberg, professor at Cardozo Law School and the author of When Lawyers Write (1987).

W. J. T. Mitchell

Editor

 


 


Filed under The Trump Election: Night Thoughts, Uncategorized

Seventy Into ’48: The State as a Scandal

Khaled Furani

A state, is called the coldest of all cold monsters. Coldly lieth it also; and this lie creepeth from its mouth: ‘I, the state, am the people’…where all are poison-drinkers, the good and the bad: the state, where all lose themselves, the good and the bad: the state, where the slow suicide of all—is called ‘life.’ —Friedrich Nietzsche, Thus Spake Zarathustra, 1883

We are summoned today to reflect on the seventieth anniversary of 1948. On this occasion, I present a certain “gift” to my conqueror. It is in a sense an absurd gift. In a “birthday card,” I extend a gift of truth, or rather regions of truth that may come with an effort towards self-recognition. These are regions that both conqueror and conquered—inhabiting discrepant conditions of fear due to discrepant power at their disposal—may rarely visit, just as one may rarely plunge into one’s own darkness. It is a gift of recognition that 1948 is a truth of a darkness unfolding. That year—and probably a further past—lives with us still, not behind us in the past. We are seventy years into 1948, not simply since 1948. What does it mean to be seventy years into the darkness of 1948?

I do not claim 1948 as ongoing merely due to the ongoing conquest of land, by means both legal and extra-legal. Rather, 1948 stands for unfinished business, by which I mean the variegated business of finishing off the Palestinian body, one-by-one and collectively. The Palestinian’s language, home, memory, land, water, and physical and political body must be cleared away, must vanish, for purity to be attained, for victory to be declared, for death itself to be conquered, for security to be achieved. Or so runs the illusion.

So long as purity stands for security, we ought to be on the alert for a “genocidal desire” at work. This is a desire for massive death for the sake of purity of the Jewish state (meaning composed purely of Jewish bodies) whose symptoms include: erasure of the Arabic language, destruction of historic and living homes, excision and criminalization of native memory, confiscation of lands, pollution of fields, obliteration and ghettoization of villages and towns, theft and contamination of water supplies, withholding of medicine and medical care, experimentation and weapons testing on populations, and elimination of bodies, directly and by proxy.

This genocidal desire seems to find nourishment in fear, fear that lives, for example, in the hoary but protean slogan promising a people said to be without a land a land said to have no people. That is, the Palestinian must not be so that the Israeli can be, just as wild nature must be extirpated from civilization. This genocidal desire has a traceable frequency of appearances, as well as effects. A common alarmist call maligns even Palestinian eggs and sperm going about their work. I am talking about the refrain of “demographic threat.” Then there is the frequent appearance of inciteful graffiti under bridges, on highways, and in streets and alleys throughout the country—“death to the Arabs” and “Kahane was right”—etched with apparent impunity. For tracing some of this desire’s effects, consider all those uprooted from the land. Read their poets. Fadwa Tuqan inscribed their unmet wish on her tomb: “It is enough for me to die on her and be buried in her, under her soil, melt and vanish, and come back to life as weed in her soil, as a flower.” Her wish to escape dying in exile, a wish to return to life in her own soil, even if only as a weed, should perhaps be enough to recognize the destruction wrought by this genocidal desire. In case it is not, I offer some numbers.

Photo by Mohamad Badarne

Traces of a Genocidal Desire

One woman each month. Two children each month. One man each day last month, and perhaps every month since 2000. I am citing a rough but rather probable “slow trickle” of hidden murder: a generally unreported rate of destroyed Palestinian bodies under Israel’s many hands, not including mass killings as in declared military “operations,” also known as “mowing the lawn.” Some bodies are murdered by “on duty” weapons and others by rampant “off duty” weapons. Some bodies are eradicated by soldiers or police in Jerusalem, the West Bank, or Gaza. Other bodies are annihilated in a carefully managed self-destruction of Palestinian citizenry of Israel. Via its selective surveillance and “law enforcement,” one eye of the state never sleeps—it watches for and prosecutes words, even poems in cyberspace—while the other eye “turns blind” when it comes to the influx of weapons for killing ourselves. As one hand tracks weapons and words across the physical and virtual earth, the other appears paralyzed to act against them in this very land.

This destruction of physical bodies is perhaps the most brutal of lenses through which to see how we are seventy years now into an abyss that is ’48, seventy years into the unfinished business of finishing off the Palestinian body, multifariously, collectively, and yes, corporally. Seventy years, but actually longer, of not only wanting more land but also fewer and fewer Palestinians. Thus, by no means a deviation, the “Nationality Law,” like the “Law of Return,” is but one law in a battery of legislation for fulfilling the principle of purity.

This protean principle stems from the fear of impurity and can even be found at work every time fear lives in uttering “Arab” as a way not to see or say “Palestinian,” and “minorities” or “the sector” to see neither. But who is really a minority in this landscape? What enables a powerful minority of immigrants not to recognize a majority in whose midst it keeps bulldozing its way to a fortress? Who pays for this fortress and its enabling landscape that is the modern “Middle East”? At what price?

Photo by Razan Shalabi

The Price of Traps

Trap 1: Cement

In its relentless quest for purity, Israel is caught, as I see it, in a kind of scandal, from the Greek skandalon, in the sense of a trap, one that can be typified by “cement and weeds.” Clearly, like any metaphor, it has its limits, but it helps me express the recurring drama of an Israel as a prevailing culture of cement and a peasants’ verdant and fecund Palestine, now destroyed and buried over, remaining only as weeds that grow through cracks, to pollinate and spread out through the air. The debacle for Israel is that despite all efforts at purification and eradication, “the weeds” never really go away. Israel is doomed to pour ever-sprawling cement and spew ever-toxic pesticides, to ultimately no avail. I am not sure what degree of obtuseness is required to not recognize where life, any life, is or is not viable: in the thorny, undesired, yet green of the weeds or in the cold, hard, grey of cement.

Trap 2: The Ghetto Incarnate

While Jews coming from Europe aspired to a kind of freedom when colonizing Palestine, it is unfreedom that they have built with their own hands. This unfreedom is the same kind that comes with models like the shtetl or the crusader’s castle, crisscrossed by all sorts of ramparts, immediately visible and less so. Aspiring to rootedness at “home,” rather than growing amidst the age-old olive trees, they sought to uproot them and plant instead fast-growing, concealing, highly flammable pines imported from their xenophobic oppressors. Loyal to its European baggage, the more Israel purges the roots of Palestine the more it plunges into its own grave. Through a coursing river, it planted a mikveh, a still pool for purification. And the river in this case would be the Arab-Muslim “civilizational space”—historically a home for flourishing Jewish traditions, among others—reduced to a fragmented, faltering complex of nation-states. Caught in a pendulum between Jewish and democratic, Israel fails to wonder if it should be a state or something better than a state. Fleeing from the diseases of purificatory Europe with its plaguing “cures,” Israel brings putrefaction to the entire body of the “Middle East,” by which I mean modern sovereignty’s aseptic powers.

Trap 3: Vitality and Vitiation

The cage of the Ghetto Incarnate is ensnared by other cages, peculiar to Israel being a state, and being a state here, making the Jews’ “homecoming” very impiously unbecoming. As a state, and like any state, Israel is so worried about its death that it suffocates the possibility of its citizens coming into an authentic relation with theirs. And it so venerates “life,” that is, its life, that it vitiates access to a genuine life that recognizes life’s companion: death. It calls upon God only to end up acting like one. And on its altar, its citizenry is requested to surrender and sacrifice a basic sense of humility, a basic recognition of interdependence and fragility in themselves and in the universe. Israel thereby doubles down on its zarut, that is, its foreignness, as a kind of avodah zarah (idol worship), which should be a stranger to Abrahamic tradition and strange to take root in the land from which this very tradition grew.

In the meantime, we as autochthones of this place, descendants of its fellaheen and Bedouin, as organic guardians of the land’s evolving consciousness, including the Sumerian, Akkadian, Babylonian, Assyrian, Pharaonic, Persian, Phoenician, Philistine, Nabatean, Canaanite, Syriac, Aramaic, Hebraic, Hellenic, and Latin, among others, to be sure, that make up Palestine, attempt to thrive among their remains or risk our own calcification. Doing so means recognizing and confronting the cages first erected seventy years ago, but maybe much earlier. Perhaps we should be asking what it means to be 102 years into the darkness of Sykes-Picot and 370 years into the darkness of the Peace of Westphalia, the peace that pacified us by waging a fatal war on our sense of life and above all on life’s precariousness.

Photo by Razan Shalabi

[This paper originated as a talk given at a panel on “70 to ’48: Reflections on Local Time,” held by the Sociology and Anthropology Department at Tel Aviv University on December 27, 2018.]

Khaled Furani is an associate professor in the Department of Sociology and Anthropology, Tel Aviv University.


Filed under Uncategorized

More Responses to “The Computational Case against Computational Literary Studies” 

Earlier this month, Critical Inquiry hosted an online forum featuring responses to and discussion about Nan Z. Da’s “The Computational Case against Computational Literary Studies.” To accommodate further commentary on Da’s article and on the forum itself, we have created a new page for responses.

RESPONSES

  • Taylor Arnold (University of Richmond).
  • Duncan Buell (University of South Carolina, Columbia).

 


Taylor Arnold

As a statistician who has worked and published extensively within the fields of digital humanities (DH) and computational linguistics over the past decade, I have been closely following Nan Z. Da’s article “The Computational Case against Computational Literary Studies” and the ensuing conversations in the online forum. It has been repeatedly pointed out that the article contains numerous errors and misunderstandings about statistical inference, Bayesian inference, and mathematical topology. It is not my intention here to restate these same objections. I want to focus instead on an aspect of the work that has gone relatively undiscussed: the larger role to be played by statistics and statisticians within computational DH.

Da correctly points out that computational literary studies, and computational DH more generally, takes a large proportion of its methods, theories, and tools from the field of statistics. And yet, she also notes, scholars have had only limited collaborations with statisticians. It is easy to produce quantitative evidence of this fact. There are a total of zero trained statisticians (holding either a Ph.D. in statistics or an academic appointment in statistics) amongst: the 25 members on the editorial board of Cultural Analytics, 11 editors of Digital Humanities Quarterly, 22 members of the editorial board for Digital Scholarship in the Humanities, 10 members of the executive committee for the Australasian Association for Digital Humanities, 9 members of the executive committee for the Association for Computers and the Humanities, 9 members of the executive committee for the European Association for Digital Humanities, and the 4 executive council members in the Canadian Society for Digital Humanities.[1] While I do have great respect for these organizations and many of the people involved with them, the total absence of any professional statisticians—and in many of the cited examples, lack of scholars with a terminal degree in any technical field—is a problem for a field grounded, at least in part, in the analysis of data.

In the last line of her response “Final Comments,” Da calls for a peer-review process “in which many people,” meaning statisticians and computer scientists, “are brought into peer review.” That is a good place to start but not nearly sufficient. I, and likely many other computationally trained scholars, am already frequently asked to review papers and abstract proposals for the aforementioned journals and professional societies. Da as well has claimed that her Critical Inquiry article was also vetted by a computational reviewer. The actual problem is instead that statisticians need to be involved in computational analyses from the start. To only use computational scholars at the level of peer-review risks falling into the classic trap famously described by Sir Ronald Fisher: consulting a statistician after already having collected data is nothing more than “a post mortem examination.”[2]

To see the potential for working closely with statisticians, one need look no further than Da’s own essay. She critiques the overuse and misinterpretation of term frequencies, latent Dirichlet allocation, and network analysis within computational literary studies. Without a solid background in these methods, however, the article opens itself up to the obvious (at least to a statistician) counterarguments offered in the forum by scholars such as Lauren Klein, Andrew Piper, and Ted Underwood. Had Da cowritten the article with someone with a background in statistics—she even admits that she is “far from being the ideal candidate for assessing this work,”[3] so why she would undertake this task alone in the first place is a mystery—these mistakes could have been avoided and replaced with stronger arguments. As a statistician, I also agree with many of her stated concerns over the particular methods listed in the article.[4] However, the empty critiques of what not to do could and should have been replaced with alternative methods that address some of Da’s concerns over reproducibility and multiple hypothesis testing. These corrections and additions would have been possible if she had heeded her own advice about engaging with statisticians.

My research in computational digital humanities has been a mostly productive and enjoyable experience. I have been fortunate to have colleagues who treat me as an equal within our joint research, and I believe this has been the primary reason for the success of these projects. These relationships are unfortunately far from the norm. Collaborations with statisticians and computer scientists are too frequently either unattributed or avoided altogether. The field of DH often sees itself as challenging epistemological constraints in the study of the humanities and transcending traditional disciplinary boundaries. These lofty goals are attainable only if scholars from other intellectual traditions are fully welcomed into the conversation as equal collaborators.

[1] I apologize in advance if I have missed anyone in the tally. I did my best to be diligent, but not every website provided easily checked contact information.

[2] R. A. Fisher, Presidential Address to the First Indian Statistical Congress, 1938, Sankhya 4: 14-17.

[3] https://critinq.wordpress.com/2019/04/03/computational-literary-studies-participant-forum-responses-day-3-4/

[4] As a case in point, just last week I had a paper accepted for publication in which we lay out an argument and methodologies for moving beyond word counting methods in DH. See: Arnold, T., Baillier, N., Lissón, P., and Tilton, L. “Beyond lexical frequencies: Using R for text analysis in the digital humanities.” Linguistic Resources and Evaluation. To Appear.

TAYLOR ARNOLD is an assistant professor of statistics at the University of Richmond. He codirects the distant viewing lab, an NEH-funded project that develops computational techniques to analyze visual culture on a large scale, with Lauren Tilton. He is the co-author of the books Humanities Data in R and Computational Approach to Statistical Learning.

 


Duncan Buell

As a computer scientist who has been collaborating in the digital humanities for ten years now, I found Da’s article both well-written and dead on in its arguments about the shallow use of computation. I am teaching a course in text analysis this semester, and I find myself discussing repeatedly with my students the fact that they can computationally find patterns which are almost certainly not causal.

The purpose of computing being insight and not numbers (to quote Richard Hamming), computation in any area that looks like data mining is an iterative process. The first couple of iterations can be used to suggest directions for further study. That further study requires more careful analysis and computation. And at the end one comes back to analysis by scholars to determine if there’s really anything there. This can be especially true of text, more so than with scientific data, because text as data is so inherently messy; many of the most important features of text are almost impossible to quantify statistically and almost impossible to set rules for a priori.

Those first few iterations are the fun 90 percent of the work because new things show up that might only be seen by computation. It’s the next 90 percent of the work that isn’t so much fun and that often doesn’t get done. Da argues that scholars should step back from their perhaps too-easy conclusions and dig deeper. Unlike with much scientific data, we don’t have natural laws and equations to fall back on with which the data must be consistent. Ground truth is much harder to tease out, and skeptical calibration of numerical results is crucial.

Part of Da’s criticism, which seems to have been echoed by one respondent (Piper), is that scholars are perhaps too quick to conclude a “why” for the numbers they observe. Although, for the purpose of making things seem more intuitive, scientists often speak as if there were a “why,” there is in fact none of that. Physics, as I learned in my freshman class at university, describes “what”; it does not explain “why.” The acceleration due to gravity is 9.8 meters per second per second, as described by Newton’s equations. The empirical scientist will not ask why this is but will use the fact to provide models for physical interactions. It is the job of the theorist to provide a justification for the equations.

There is a need for more of this in the digital humanities. One can perform all kinds of computations (my collaborators and I, for example, have twenty thousand first-year-composition essays collected over several years). But to really provide value to scholarship one needs to frame quantitative questions that might correlate with ideas of scholarly interest, do the computations, calibrate the results, and verify that there is causation behind the results. This can be done and has been done in the digital humanities, but it isn’t as common as it should be, and Da is only pointing out this unfortunate fact.

DUNCAN BUELL is the NCR Professor of Computer Science and Engineering at the University of South Carolina, Columbia.


Filed under Uncategorized

Bruno Latour and Dipesh Chakrabarty: Geopolitics and the “Facts” of Climate Change

Bruno Latour and Dipesh Chakrabarty visited WB202 to discuss new “questions of concern” and the fight over “facts” and climate change in the world after Trump’s election. Latour and Timothy Lenton’s “Extending the Domain of Freedom, or Why Gaia Is So Hard to Understand” appeared in the Spring 2019 issue of Critical Inquiry. Chakrabarty’s “The Planet: An Emergent Humanist Category” is forthcoming in Autumn 2019.

You can also listen and subscribe to WB202 at:

iTunes

Google Play

TuneIn


Filed under Podcast

Computational Literary Studies: Participant Forum Responses, Day 3

 

Stanley Fish

Some commentators to this forum object to my inclusion in it in part because I have no real credentials in the field. They are correct. Although I have now written five pieces on the Digital Humanities—three brief op-eds in the New York Times, an essay entitled “The Interpretive Poverty of Data” published in the blog Balkinization, and a forthcoming contribution to the New York University Journal of Law & Liberty with the title “If You Count It They Will Come”—in none of these do I display any real knowledge of statistical methods. My only possible claim to expertise, and it is a spurious one, is that my daughter is a statistician. I recently heard her give an address on some issue in bio-medical statistics and I barely understood 20 percent of it. Nevertheless, I would contend that this confessed ignorance is no bar to my pronouncing on the Digital Humanities because my objections to it are lodged on a theoretical level in relation to which actual statistical work in the field is beside the point. I don’t care what form these analyses take. I know in advance that they will fail (at least in relation to the claims made from them) in two ways: either they crank up a huge amount of machinery in order to produce something that was obvious from the get go—they just dress up garden variety literary intuition in numbers—or the interpretive conclusions they draw from the assembled data are entirely arbitrary, without motivation except the motivation to have their labors yield something, yield anything. Either their herculean efforts do nothing or when something is done with them, it is entirely illegitimate. This is so (or so I argue) because the underlying claim of the Digital Humanities (and of its legal variant Corpus Linguistics) that formal features––anything from sentence length, to image clusters, to word frequencies, to collocations of words, to passive constructions, to you name it—carry meaning is uncashable. They don’t unless all of the factors the Digital Humanities procedures leave out—including, but not limited to, context, intention, literary history, the idea of literature itself—are put back in. I was pleased therefore to find that Professor Da, possessed of a detailed knowledge infinitely greater than mine, supports my relatively untutored critique. When she says that work in Computational Studies comes in two categories—“papers that present a statistical no result finding as a finding” and “papers that draw conclusions from its finding that are wrong”—I can only cheer. When she declares “CLS as it currently exists has very little explanatory power,” I think that she gives too much credit to the project with the words “very little”; it has no explanatory power. And then there is this sentence, which to my mind, absolutely clinches the case: “there are many different ways of extracting factors and loads of new techniques for odd data sets, but these are atheoretical approaches, meaning, strictly, that you can’t use them with the hope that they will work magic for you in producing interpretations that are intentional” and “have meaning and insight.” For me the word intentional is the key. The excavation of verbal patterns must remain an inert activity until added to it is the purpose of some intentional agent whose project gives those patterns significance. Once you detach the numbers from the intention that generated them, there is absolutely nothing you can do with them, or, rather (it is the same thing) you can do with them anything you like. 
At bottom CLS or Digital Humanities is a project dedicated to irresponsibility masked by diagrams and massive data mining. The antidote to the whole puffed-up thing is nicely identified by Professor Da in her final paragraph: “just read the texts.”

 

STANLEY FISH is a professor of law at Florida International University and a visiting professor at the Benjamin N. Cardozo School of Law. He is also a member of the extended Critical Inquiry editorial board.


Computational Literary Studies: Participant Forum Responses, Day 3

Final Comments

Nan Z. Da

(This is the last of three responses to the online forum. The others are “Errors” and “Argument.”)

I want to state that nothing about this forum has been unbalanced or unfair. I wrote the article. Those who may not agree with it (in part or in its entirety) have every right to critique it in an academic forum.

What my critics and neutral parties on this forum seem to want from “The Computational Case” is nothing short of: (1) an across-the-board reproducibility check (qua OSC, as Piper suggests), plus (2) careful analyses of CLS work in which even the “suppression” of tiny hedges would count as misrepresentation, plus (3) a state-of-the-field for computational literary studies and related areas of the digital humanities, past and emergent. To them, that’s the kind of intellectual labor that would make my efforts valid.

Ted Underwood’s suggestion that my article and this forum have in effect been stunts designed to attract attention does a disservice to a mode of scholarship that we may simply call critical inquiry. He is right that this might be a function of the times. The demand, across social media and elsewhere, that I must answer for myself right away for critiquing CLS in a noncelebratory manner is a symptom of the social and institutional power computational studies and the digital humanities have garnered to themselves.

Yes, “field-killing” is a term that doesn’t belong in scholarship, and one more indication that certain kinds of academic discourse should only take place in certain contexts. That said, an unrooted rhetoric of solidarity and “moreness”—we’re all in this together—is a poor way to argue. Consider what Sarah Brouillette has powerfully underscored about the institutional and financial politics of this subfield: it is time, as I’ve said, to ask some questions.

Underwood condemns social media and other public responses. He has left out the equally pernicious efforts on social media and in other circles to invalidate my article by whispering—or rather, publicly publishing doubts—about Critical Inquiry’s peer review process. It has been suggested, by Underwood and many other critics of this article, that it was not properly peer-reviewed by someone out-of-field. This is untrue—my paper was reviewed by an expert in quantitative analysis and mathematical modeling—and it is damaging. It suggests that anyone who dares to check the work of leading figures in CLS will be tried by gossip.

Does my article make empirical mistakes? Yes, a few, mostly in section 3. I will list them in time, but they do not bear on the macro-claims in that section. With the exception of a misunderstanding in the discussion of Underwood’s essay, none of the rebuttals presented in this forum on empirical grounds have any substance. Piper’s evidence that I “failed at basic math” refers to a simple rhetorical example in which I rounded down to the nearest thousand for the sake of legibility.

Anyone who does serious quantitative analysis will see that I am far from being the ideal candidate for assessing this work. Still, I think the fundamental conflict of interest at issue here should be obvious to all. People who can do this work on a high level tend not to care to critique it, or else they tend not to question how quantitative methods intersect with the distinctiveness of literary criticism, in all its forms and modes of argumentation. In the interest of full disclosure: after assessing the validity of my empirical claims, my out-of-field peer reviewer did not finally agree with me that computational methods work poorly on literary objects. This is the crux of the issue. Statisticians or computer scientists can check for empirical mistakes and errors in implementation; they do not understand what would constitute a weak or conceptually confused argument in literary scholarship. This is why the guidelines I lay out in my appendix, in which many people are brought into peer review, should be considered.

NAN Z. DA teaches literature at the University of Notre Dame.

 


Computational Literary Studies: Participant Forum Responses, Day 3

Mark Algee-Hewitt

In 2010, as a new postdoctoral fellow, I presented a paper on James Thomson’s 1730 poem The Seasons to a group of senior scholars. The argument was modest: I used close readings to suggest that in each section of the poem Thomson simulated an aesthetic experience for his readers before teaching them how to interpret it. The response was mild and mostly positive. Six months later, having gained slightly more confidence, I presented the same project with a twist: I included a graph that revealed my readings to be based on a pattern of repeated discourse throughout the poem. The response was swift and polarizing: while some in the room thought that the quantitative methods deepened the argument, others argued strongly that I was undermining the whole field. For me, the experience was formative: the simple presence of numbers was enough to enrage scholars many years my senior, long before Digital Humanities gained any prestige, funding, or institutional support.

My experience suggests that this project passed what Da calls the “smell test”: the critical results remained valid, even without the supporting apparatus of the quantitative analysis. And while Da might argue that this proves that the quantitative aspect of the project was unnecessary in the first place, I would respectfully disagree. The pattern I found was the basis for my reading, and to present it as if I had discovered it through reading alone was, at best, disingenuous. The quantitative aspect of my argument also allowed me to connect the poem to a larger pattern of poetics throughout the eighteenth century. And I would go further to contend that just as the introduction of quantification into a field changes the field, so too does the field change the method to suit its own ends; and that confirming a statistical result through its agreement with conclusions derived from literary-historical methods is just as powerful as a null hypothesis test. In other words, Da’s “smell test” suggests a potential way forward in synthesizing these methods.

But the lesson I learned remains as powerful as ever: regardless of how they are embedded in research, regardless of who uses them, computational methods provoke an immediate, often negative, response in many humanities scholars. And it is worth asking why. Just as it is always worth reexamining the institutional, political, and gendered history of methods such as new history, formalism, and even close reading, so too is it important, as Katherine Bode suggests, to think through these same issues in Digital Humanities as a whole. And it is crucial that we do so without erasing the work of the new, emerging, and often structurally vulnerable members of the field that Lauren Klein highlights. These methods have a powerful appeal among emerging groups of students and young scholars. And to seek to shut down scholarship by asserting a blanket incompatibility between method and object is to do a disservice to the fascinating work of emerging scholars that is reshaping our critical practices and our understanding of literature.

MARK ALGEE-HEWITT is an assistant professor of English and Digital Humanities at Stanford University where he directs the Stanford Literary Lab. His current work combines computational methods with literary criticism to explore large scale changes in aesthetic concepts during the eighteenth and nineteenth centuries. The projects that he leads at the Literary Lab include a study of racialized language in nineteenth-century American literature and a computational analysis of differences in disciplinary style. Mark’s work has appeared in New Literary History, Digital Scholarship in the Humanities, as well as in edited volumes on the Enlightenment and the Digital Humanities.


Computational Literary Studies: Participant Forum Responses, Day 3

Katherine Bode

Da’s is the first article (I’m aware of) to offer a statistical rejection of statistical approaches to literature. The exaggerated ideological agenda of earlier criticisms, which described the use of numbers or computers to analyze literature as neoliberal, neoimperialist, neoconservative, and more, made them easy to dismiss. Yet to some extent, this routinized dismissal instituted a binary in CLS, wherein numbers, statistics, and computers became distinct from ideology. If nothing else, this debate will hopefully demonstrate that no arguments––including statistical ones––are ideologically (or ethically) neutral.

But this realization doesn’t get us very far. If all arguments have ideological and ethical dimensions, then making and assessing them requires something more than proving their in/accuracy; more than establishing their reproducibility, replicability, or lack thereof. Da’s “Argument” response seemed to move us toward what is needed in describing the aim of her article as: “to empower literary scholars and editors to ask logical questions about computational and quantitative literary criticism should they suspect a conceptual mismatch between the result and the argument or perceive the literary-critical payoff to be extraordinarily low.” However, she closes that path down in allowing only one possible answer to such questions: “in practice” there can be no “payoff … [in terms of] literary-critical meaning, from these methods”; CLS “conclusions”––whether “corroborat[ing] or disprov[ing] existing knowledge”––are only ever “tautological at best, merely superficial at worst.”

Risking blatant self-promotion, I’d say I’ve often used quantification to show “something interesting that derives from measurements that are nonreductive.” For instance, A World of Fiction challenges the prevailing view that nineteenth-century Australian fiction replicates the legal lie of terra nullius by not representing Aboriginal characters, establishing instead their widespread prevalence in such fiction; and, contrary to the perception of the Australian colonies as separate literary cultures oriented toward their metropolitan centers, it demonstrates the existence of a largely separate, strongly interlinked, provincial literary culture.[1] To give just one other example from many possibilities, Ted Underwood’s “Why Literary Time is Measured in Minutes” uses hand-coded samples from three centuries of literature to indicate an acceleration in the pace of fiction.[2] Running the gamut from counting to predictive modelling, these arguments are all statistical, according to Da’s definition: “if numbers and their interpretation are involved, then statistics has come into play.” And as in this definition, they don’t stop with numerical results, but explore their literary-critical and historical implications.

If what happens prior to arriving at a statistical finding cannot be justified, the argument is worthless; the same is true if what happens after that point is of no literary-critical interest. Ethical considerations are essential in justifying what is studied, why, and how. This is not––and should not be––a low bar. I’d hoped this forum would help build connections between literary and statistical ways of knowing. The idea that quantification and computation can only yield superficial or tautological literary arguments shows that we’re just replaying the same old arguments, even if both sides are now making them in statistical terms.

KATHERINE BODE is associate professor of literary and textual studies at the Australian National University. Her latest book, A World of Fiction: Digital Collections and the Future of Literary History (2018), offers a new approach to literary research with mass-digitized collections, based on the theory and technology of the scholarly edition. Applying this model, Bode investigates a transnational collection of around 10,000 novels and novellas, discovered in digitized nineteenth-century Australian newspapers, to offer new insights into phenomena ranging from literary anonymity and fiction syndication to the emergence and intersections of national literary traditions.

[1]Katherine Bode, A World of Fiction: Digital Collections and the Future of Literary History (Ann Arbor: University of Michigan Press, 2018).

[2]Ted Underwood, “Why Literary Time is Measured in Minutes,” ELH 85.2 (2018): 341–365.


Computational Literary Studies: Participant Forum Responses, Day 3

 

Lauren F. Klein

The knowledge that there are many important voices not represented in this forum has prompted me to think harder about the context for the lines I quoted at the outset of my previous remarks. Parham’s own model for “The New Rigor” comes from diversity work, and the multiple forms of labor—affective as much as intellectual—that are required of individuals, almost always women and people of color, in order to compensate for the structural deficiencies of the university. I should have provided that context at the outset, both to do justice to Parham’s original formulation, and because the same structural deficiencies are at work in this forum, as they are in the field of DH overall.

In her most recent response, Katherine Bode posed a series of crucial questions about why literary studies remains fixated on the “individualistic, masculinist mode of statistical criticism” that characterizes much of the work that Da takes on in her essay. Bode further asks why the field of literary studies has allowed this focus to overshadow so much of the transformative work that has been pursued alongside—and, at times, in direct support of––this particular form of computational literary studies.

But I think we also know the answers, and they point back to the same structural deficiencies that Parham explores in her essay: a university structure that rewards certain forms of work and devalues others. In a general academic context, we might point to mentorship, advising, and community-building as clear examples of this devalued work. But in the context of the work discussed in this forum, we can align efforts to recover overlooked texts, compile new datasets, and preserve fragile archives, with the undervalued side of this equation as well. It’s not only that these forms of scholarship, like the “service” work described just above, are performed disproportionally by women and people of color. It is also that, because of the ways in which archives and canons are constructed, projects that focus on women and people of color require many more of these generous and generative scholarly acts. Without these acts, and the scholars who perform them, much of the formally-published work on these subjects could not begin to exist.

Consider Kenton Rambsy’s “Black Short Story Dataset,” a dataset creation effort that he undertook because his own research questions about the changing composition of African American fiction anthologies could not be answered by any existing corpus; Margaret Galvan’s project to create an archive of comics in social movements, which she has undertaken in order to support her own computational work as well as her students’ learning; or any number of the projects published with Small Axe Archipelagos, a born-digital journal edited and produced by a team of librarians and faculty that has been intentionally designed to be read by people who live in the Caribbean as well as by scholars who work on that region. These projects each involve sophisticated computational thinking—at the level of resource creation and platform development as well as of analytical method. They respond both to specific research questions and to larger scholarly need. They require work, and they require time.

It’s clear that these projects provide significant value to the field of literary studies, as they do to the digital humanities and to the communities to which their work is addressed. In the end, the absence of the voices of the scholars who lead these projects, both from this forum and from the scholarship it explores, offers the most convincing evidence of what—and who—is valued most by existing university structures; and what work—and what people—should be at the center of conversations to come.

LAUREN F. KLEIN is associate professor at the School of Literature, Media, and Communication, Georgia Institute of Technology.


Computational Literary Studies: Participant Forum Responses, Day 2

 

Ted Underwood

More could be said about specific claims in “The Computational Case.” But frankly, this forum isn’t happening because literary critics were persuaded by (or repelled by) Da’s statistical arguments. The forum was planned before publication because the essay’s general strategy was expected to make waves. Social media fanfare at the roll-out made clear that rumors of a “field-killing” project had been circulating for months among scholars who might not yet have read the text but were already eager to believe that Da had found a way to hoist cultural analytics by its own petard—the irrefutable authority of mathematics.

That excitement is probably something we should be discussing. Da’s essay doesn’t actually reveal much about current trends in cultural analytics. But the excitement preceding its release does reveal what people fear about this field—and perhaps suggest how breaches could be healed.

While it is undeniably interesting to hear that colleagues have been anticipating your demise, I don’t take the rumored plans for field-murder literally. For one thing, there’s no motive: literary scholars have little to gain by eliminating other subfields. Even if quantitative work had cornered a large slice of grant funding in literary studies (which it hasn’t), the total sum of all grants in the discipline is too small to create a consequential zero-sum game.

The real currency of literary studies is not grant funding but attention, so I interpret excitement about “The Computational Case” mostly as a sign that a large group of scholars have felt left out of an important conversation. Da’s essay itself describes this frustration, if read suspiciously (and yes, I still do that). Scholars who tried to critique cultural analytics in a purely external way seem to have felt forced into an unrewarding posture—“after all, who would not want to appear reasonable, forward-looking, open-minded?” (p. 603). What was needed instead was a champion willing to venture into quantitative territory and borrow some of that forward-looking buzz.

Da was courageous enough to try, and I think the effects of her venture are likely to be positive for everyone. Literary scholars will see that engaging quantitative arguments quantitatively isn’t all that hard and does produce buzz. Other scholars will follow Da across the qualitative/quantitative divide, and the illusory sharpness of the field boundary will fade.

Da’s own argument remains limited by its assumption that statistics is an alien world, where humanistic guidelines like “acknowledge context” are replaced by rigid hypothesis-testing protocols. But the colleagues who follow her will recognize, I hope, that statistical reasoning is an extension of ordinary human activities like exploration and debate. Humanistic principles still apply here. Quantitative models can test theories, but they are also guided by theory, and they shouldn’t pretend to answer questions more precisely than our theories can frame them. In short, I am glad Da wrote “The Computational Case” because her argument has ended up demonstrating—as a social gesture—what its text denied: that questions about mathematical modeling are continuous with debates about interpretive theory.

TED UNDERWOOD is professor of information sciences and English at the University of Illinois, Urbana-Champaign. He has published in venues ranging from PMLA to the IEEE International Conference on Big Data and is the author most recently of Distant Horizons: Digital Evidence and Literary Change (2019).


Computational Literary Studies: Participant Forum Responses, Day 2

 

Katherine Bode

The opening statements were fairly critical of Da’s article, less so of CLS. To balance the scales, I want to suggest that Da’s idiosyncratic definition of CLS is partly a product of problematic divisions within digital literary studies.

Da omits what I’d call digital literary scholarship: philological, curatorial, and media archaeological approaches to digital collections and data. Researchers who pursue these approaches, far from reducing all digit(al)ized literature(s) to word counts, maintain––like Da––that analyses based purely or predominantly on such features tend to produce “conceptual fallacies from a literary, historical, or cultural-critical perspective” (p. 604). Omitting such research is part of the way in which Da operationalizes her critique of CLS: defining the field as research that focuses on word counts, then criticizing the field as limited because focused on word counts.

But Da’s perspective is mirrored by many of the researchers she cites. Ted Underwood, for instance, describes “otiose debates about corpus construction” as “well-intentioned red herrings” that divert attention from the proper focus of digital literary studies on statistical methods and inferences.[1] Da has been criticized for propagating a male-dominated version of CLS. But those who pursue the methods she criticizes are mostly men. By contrast, much digital literary scholarship is conducted by women and/or focused on marginalized literatures, peoples, or cultures. The tendency in CLS to privilege data modeling and analysis––and to minimize or dismiss the work of data construction and curation––is part of the culture that creates the male dominance of that field.

More broadly, both the focus on statistical modelling of word frequencies in found datasets, and the prominence accorded to such research in our discipline, put literary studies out of step with digital research in other humanities fields. In digital history, for instance, researchers collaborate to construct rich datasets––for instance, of court proceedings (as in The Proceedings of the Old Bailey)[2] or social complexity (as reported in a recent Nature article)[3]––that can be used by multiple researchers, including for noncomputational analyses. Where such research is statistical, the methods are often simpler than machine learning models (for instance, trends over time; measures of relationships between select variables) because the questions are explicitly related to scale and the aggregation of well-defined scholarly phenomena, not to epistemologically novel patterns discerned among thousands of variables.

Some things I want to know: Why is literary studies so hung up on (whether in favor of, or opposed to) this individualistic, masculinist mode of statistical criticism? Why is this focus allowed to marginalize earlier, and inhibit the development of new, large-scale, collaborative environments for both computational and noncomputational literary research? Why, in a field that is supposedly so attuned to identity and inequality, do we accept––and foreground––digital research that relies on platforms (Google Books, HathiTrust, EEBO, and others) that privilege dominant literatures and literary cultures? What would it take to bridge the scholarly and critical––the curatorial and statistical––dimensions of (digital) literary studies and what alternative, shared futures for our discipline could result?

KATHERINE BODE is associate professor of literary and textual studies at the Australian National University. Her latest book, A World of Fiction: Digital Collections and the Future of Literary History (2018), offers a new approach to literary research with mass-digitized collections, based on the theory and technology of the scholarly edition. Applying this model, Bode investigates a transnational collection of around 10,000 novels and novellas, discovered in digitized nineteenth-century Australian newspapers, to offer new insights into phenomena ranging from literary anonymity and fiction syndication to the emergence and intersections of national literary traditions.

[1]Ted Underwood, Distant Horizons: Digital Evidence and Literary Change (Chicago: University of Chicago Press, 2019): 180; 176.

[2]Tim Hitchcock, Robert Shoemaker, Clive Emsley, Sharon Howard, and Jamie McLaughlin, et al., The Proceedings of the Old Bailey (http://www.oldbaileyonline.org, version 8.0, March 2018).

[3]Harvey Whitehouse, Pieter François, Patrick E. Savage, Thomas E. Currie, Kevin C. Feeney, Enrico Cioni, Rosalind Purcell, et al., “Complex Societies Precede Moralizing Gods Throughout World History,” Nature March 20 (2019): 1.


Computational Literary Studies: Participant Forum Responses, Day 2

 

Argument

(This response follows Nan Da’s previous “Errors” response)

Nan Z. Da

First, a qualification. Due to the time constraints of this forum, I can only address a portion of the issues raised by the forum participants and in ways still imprecise. I do plan to issue an additional response that addresses the more fine-grained technical issues.

“The Computational Case against Computational Literary Studies” was not written for the purposes of refining CLS. The paper does not simply call for “more rigor” or for replicability across the board. It is not about figuring out which statistical mode of inquiry best suits computational literary analysis. It is not a method paper; as some of my respondents point out, those are widely available.

The article was written to empower literary scholars and editors to ask logical questions about computational and quantitative literary criticism should they suspect a conceptual mismatch between the result and the argument or perceive the literary-critical payoff to be extraordinarily low.

The paper, I hope, teaches us to recognize two types of CLS work. First, there is statistically rigorous work that cannot actually answer the question it sets out to answer or doesn’t ask an interesting question at all. Second, there is work that seems to deliver interesting results but is either nonrobust or logically confused. The confusion sometimes issues from something like user error, but it is more often the result of the suboptimal or unnecessary use of statistical and other machine-learning tools. The paper was an attempt to demystify the application of those tools to literary corpora and to explain why technical errors are amplified when your goal is literary interpretation or description.

My article is the culmination of a long investigation into whether computational methods and their modes of quantitative analysis can have purchase in literary studies. My answer is that what drives quantitative results and data patterns often has little to do with the literary-critical or literary-historical claims being made by the scholars who claim to be finding such results and uncovering such patterns—though it sometimes looks like it. If the conclusions we find in CLS corroborate or disprove existing knowledge, this is not a sign that they are correct but that they are tautological at best, merely superficial at worst.

The article is agnostic on what literary criticism ought to be and makes no prescriptions about interpretive habits. The charge that it takes a “purist” position is pure projection. The article aims to describe what scholarship ought not to be. Even the appeal to reading books in the last pages of the article does not presume the inherent meaningfulness of “actually reading” but only serves as a rebuttal to the use of tools that wish to do simple classifications for which human decision would be immeasurably more accurate and much less expensive.

As to the question of Exploratory Data Analysis versus Confirmatory Data Analysis: I don’t prioritize one over the other. If numbers and their interpretation are involved, then statistics has to come into play; I don’t know any way around this. If you wish to simply describe your data, then you have to show something interesting that derives from measurements that are nonreductive. As to the appeal to exploratory tools: if your tool will never be able to explore the problem in question, because it lacks power or is overfitted to its object, your exploratory tool is not needed.

It seems unobjectionable that quantitative methods and nonquantitative methods might work in tandem.  My paper is simply saying: that may be true in theory but it falls short in practice. Andrew Piper points us to the problem of generalization, of how to move from local to global, probative to illustrative. This is precisely the gap my article interrogates because that’s where the collaborative ideal begins to break down. One may call the forcible closing of that gap any number of things—a new hermeneutics, epistemology, or modality—but in the end, the logic has to clear.

My critics are right to point out a bind. The bind is theirs, however, not mine. My point is also that, going forward, it is not for me or a very small group of people to decide what the value of this work is, nor how it should be done.

Ed Finn accuses me of subjecting CLS to a double standard: “Nobody is calling in economists to assess the validity of Marxist literary analysis, or cognitive psychologists to check applications of affect theory, and it’s hard to imagine that scholars would accept the disciplinary authority of those critics.”

This is faulty reasoning. For one thing, literary scholars ask for advice and assessment from scholars in other fields all the time. For another, the payoff of the psychoanalytic reading, even as it seeks extraliterary meaning and validity, is not for psychology but for literary-critical meaning, where it succeeds or fails on its own terms. CLS wants to say, “it’s okay that there isn’t much payoff in our work itself as literary criticism, whether at the level of prose or sophistication of insight; the payoff is in the use of these methods, the description of data, the generation of a predictive model, or the ability for someone else in the future to ask (maybe better) questions. The payoff is in the building of labs, the funding of students, the founding of new journals, the cases made for tenure lines and postdoctoral fellowships and staggeringly large grants.” When these are the claims, more than one discipline needs to be called in to evaluate the methods, their applications, and their results. Because printed critique of certain literary scholarship is generally not refuted by pointing to things still in the wings, we are dealing with two different scholarly models. In this situation, then, we should be maximally cross-disciplinary.

NAN Z. DA teaches literature at the University of Notre Dame.

 


Computational Literary Studies: Participant Forum Responses, Day 2

 

Errors

Nan Z. Da

This first of two responses addresses errors, real and imputed; the second response is the more substantive.

1. There is a significant mistake in footnote 39 (p. 622) of my paper. In it I attribute to Hugh Craig and Arthur F. Kinney the argument that Marlowe wrote parts of some late Shakespeare plays after his (Marlowe’s) death. The attribution is incorrect. What Craig asks in “The Three Parts of Henry VI” (pp. 40-77) is whether Marlowe wrote segments of these plays. I would like to extend my sincere apologies to Craig and to the readers of this essay for the misapprehension that it caused.

2. The statement “After all, statistics automatically assumes” (p. 608) is incorrect. A more correct statement would be: In standard hypothesis testing, a 95 percent confidence level means that, when the null hypothesis is true, you will correctly fail to reject it 95 percent of the time. (A brief simulation illustrating this point appears after this list of corrections.)

3. The description of various applications of text-mining/machine-learning (p. 620) as “ethically neutral” is not worded carefully enough. I obviously do not believe that some of these applications, such as tracking terrorists using algorithms, are ethically neutral. I meant that there are myriad applications of these tools: for good, ill, and otherwise. On balance it’s hard to assign an ideological position to them.

4. Ted Underwood is correct that, in my discussion of his article on “The Life Cycles of Genres,” I confused the “ghastly stew” with the randomized control sets used in his predictive modeling. Underwood also does not make the elementary statistical mistake I suggest he has made in my article (“Underwood should train his model on pre-1941” [p. 608]).

As to the charge of misrepresentation: paraphrasing a paper whose “single central thesis … is that the things we call ‘genres’ may be entities of different kinds, with different life cycles and degrees of textual coherence” is difficult. Underwood’s thesis here refers to the relative coherence of detective fiction, gothic, and science fiction over time, with 1930 as the cutoff point.

The other things I say about the paper remain true. The paper cites various literary scholars’ definitions of genre change, but its implicit definition of genre is “consistency over time of 10,000 frequently used terms.” It cannot “reject Franco Moretti’s conjecture that genres have generational cycles” (a conjecture that most would already find too reductive) because it is not using the same testable definition of genre or change.

5. Topic Modeling: my point isn’t that topic models are non-replicable but that, in this particular application, they are non-robust. Among other evidence: if I remove one document out of one hundred, the topics change. That’s a problem.

6. As far as Long and So’s essay “Turbulent Flow” goes, I need a bit more time than this format allows to rerun the alternatives responsibly. So and Long have built a tool in which there are thirteen features for predicting the difference between two genres—Stream of Consciousness and Realism. They say: most of these features are not very predictive alone but together become very predictive, with that power being concentrated in just one feature. I show that that one feature isn’t robust. To revise their puzzling metaphor: it’s as if someone claims that a piano plays beautifully and that most of that sound comes from one key. I play that key; it doesn’t work.

7. So and Long argue that by proving that their classifier misclassifies nonhaikus—not only using English translations of Chinese poetry, as they suggest, but also Japanese poetry that existed long before the haiku—I’ve made a “misguided decision that smacks of Orientalism. . . . It completely erases context and history, suggesting an ontological relation where there is none.” This is worth getting straight. Their classifier lacks power because it can only classify haikus with reference to poems quite different from haikus; to be clear, it will classify equally short texts with overlapping keywords close to haikus as haikus. Overlapping keywords is their predictive feature, not mine. I’m not sure how pointing this out is Orientalist. As for their model, I would if pushed say it is only slightly Orientalist, if not determinatively so.

8. Long and So claim that my “numbers cannot be trusted,” that my “critique . . . is rife with technical and factual errors”; in a similar vein, their response ends with the assertion that my essay doesn’t “encourag[e] much trust.” I’ll admit to making some errors in this article, though not in my analyses of Long and So’s papers (the errors mostly occur in section 3). I hope to list all of these errors in the more formal response that appears in print or else in an online appendix. That said, an error is not the same as a specious insinuation that the invalidation of someone’s model indicates Orientalism, pigheadedness, and so on. Nor is an error the same as So’s recent claim that “CI asked Da to widen her critique to include female scholars and she declined,” which is not an error but a falsehood.
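To make the corrected statement in point 2 concrete, here is a minimal simulation (an editorial illustration in Python, assuming NumPy and SciPy; it is not code from the article or from any study discussed in this forum). Two samples are repeatedly drawn from the same distribution, so the null hypothesis is true by construction, and a t-test at the 0.05 level rejects only about 5 percent of the time.

# Editorial illustration: when the null hypothesis is true, a test at the
# 0.05 level should reject roughly 5 percent of the time and correctly
# fail to reject roughly 95 percent of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials = 10_000
rejections = 0
for _ in range(trials):
    a = rng.normal(loc=0.0, scale=1.0, size=50)  # both samples drawn from
    b = rng.normal(loc=0.0, scale=1.0, size=50)  # the same distribution
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        rejections += 1

print(f"rejected a true null in {rejections / trials:.1%} of trials")
# Expected output: close to 5%, i.e., a correct failure to reject about 95% of the time.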

NAN Z. DA teaches literature at the University of Notre Dame.

 


Computational Literary Studies: Participant Forum Responses

 

Ted Underwood

In the humanities, as elsewhere, researchers who work with numbers often reproduce and test each other’s claims.[1] Nan Z. Da’s contribution to this growing genre differs from previous examples mainly in moving more rapidly. For instance, my coauthors and I spent 5,800 words describing, reproducing, and partially criticizing one article about popular music.[2] By contrast, Da dismisses fourteen publications that use different methods in thirty-eight pages. The article’s energy is impressive, and its long-term effects will be positive.

But this pace has a cost. Da’s argument may be dizzying if readers don’t already know the works summarized, as she rushes through explanation to get to condemnation. Readers who know these works will recognize that Da’s summaries are riddled with material omissions and errors. The time is ripe for a theoretical debate about computing in literary studies. But this article is unfortunately too misleading—even at the level of paraphrase—to provide a starting point for the debate.

For instance, Da suggests that my article “The Life Cycles of Genres” makes genres look stable only because it forgets to compare apples to apples: “Underwood should train his model on pre-1941 detective fiction (A) as compared to pre-1941 random stew and post-1941 detective fiction (B) as compared to post-1941 random stew, instead of one random stew for both” (p. 608).[3]

This perplexing critique tells me to do exactly what my article (and public code) make clear that I did: compare groups of works matched by publication date.[4] There is also no “random stew” in the article. Da’s odd phrase conflates a random contrast set with a ghastly “genre stew” that plays a different role in the argument.

More importantly, Da’s critique suppresses the article’s comparative thesis—which identifies detective fiction as more stable than several other genres—in order to create a straw man who argues that all genres “have in fact been more or less consistent from the 1820s to the present” (p. 609). Lacking any comparative yardstick to measure consistency, this straw thesis becomes unprovable. In other cases Da has ignored the significant results of an article, in order to pour scorn on a result the authors acknowledge as having limited significance—without ever mentioning that the authors acknowledge the limitation. This is how she proceeds with Jockers and Kirilloff (p. 610).

In short, this is not an article that works hard at holistic critique. Instead of describing the goals that organize a publication, Da often assumes that researchers were trying (and failing) to do something she believes they should have done. Topic modeling, for instance, identifies patterns in a corpus without pretending to find a uniquely correct description. Humanists use the method mostly for exploratory analysis. But Da begins from the assumption that topic modeling must be a confused attempt to prove hypotheses of some kind. So, she is shocked to discover (and spends a page proving) that different topics can emerge when the method is run multiple times. This is true. It is also a basic premise of the method, acknowledged by all the authors Da cites—who between them spend several pages discussing how results that vary can nevertheless be used for interpretive exploration. Da doesn’t acknowledge the discussion.
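For readers unfamiliar with how the method behaves, the following sketch (an editorial illustration in Python; docs stands in for a hypothetical list of plain-text documents, and scikit-learn is assumed rather than the tooling used in the articles under discussion) fits the same LDA topic model twice with different random seeds and compares each run’s top words. The variability it exposes is the acknowledged premise described above, and it is what practitioners treat as material for exploration rather than as error.

# Editorial illustration: fit one LDA topic model twice with different random
# seeds and compare the top words of each topic. `docs` is a hypothetical
# list of plain-text documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_words(docs, seed, n_topics=10, n_words=10):
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(counts)
    vocab = vec.get_feature_names_out()
    return [frozenset(vocab[i] for i in topic.argsort()[-n_words:])
            for topic in lda.components_]

run_a = top_words(docs, seed=0)
run_b = top_words(docs, seed=1)

# Count topics whose top-ten word lists coincide exactly across the two runs;
# the number is typically well below n_topics, which is expected behavior.
matching = sum(1 for topic in run_a if topic in set(run_b))
print(f"{matching} of {len(run_a)} topics share identical top words across runs")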

Finally, “The Computational Case” performs some crucial misdirection at the outset by implying that cultural analytics is based purely on linguistic evidence and mainly diction. It is true that diction can reveal a great deal, but this is a misleading account of contemporary trends. Quantitative approaches are making waves partly because researchers have learned to extract social relations from literature and partly because they pair language with external social testimony—for instance, the judgments of reviewers.[5] Some articles, like my own on narrative pace, use numbers entirely to describe the interpretations of human readers.[6] Once again, Da’s polemical strategy is to isolate one strand in a braid, and critique it as if it were the whole.

A more inquisitive approach to cultural analytics might have revealed that it is not a monolith but an unfolding debate between several projects that frequently criticize each other. Katherine Bode, for instance, has critiqued other researchers’ data (including mine), in an exemplary argument that starts by precisely describing different approaches to historical representation.[7] Da could have made a similarly productive intervention—explaining, for instance, how researchers should report uncertainty in exploratory analysis. Her essay falls short of that achievement because a rush to condemn as many examples as possible has prevented it from taking time to describe and genuinely understand its objects of critique.

TED UNDERWOOD is professor of information sciences and English at the University of Illinois, Urbana-Champaign. He has published in venues ranging from PMLA to the IEEE International Conference on Big Data and is the author most recently of Distant Horizons: Digital Evidence and Literary Change (2019).

[1]Andrew Goldstone, “Of Literary Standards and Logistic Regression: A Reproduction,” January 4, 2016, https://andrewgoldstone.com/blog/2016/01/04/standards/; and Jonathan Goodwin, “Darko Suvin’s Genres of Victorian SF Revisited,” October 17, 2016, https://jgoodwin.net/blog/more-suvin/.

[2]Ted Underwood, “Can We Date Revolutions in the History of Literature and Music?,” The Stone and the Shell, October 3, 2015, https://tedunderwood.com/2015/10/03/can-we-date-revolutions-in-the-history-of-literature-and-music/; and Ted Underwood, Hoyt Long, Richard Jean So, and Yuancheng Zhu, “You Say You Found a Revolution,” The Stone and the Shell, February 7, 2016, https://tedunderwood.com/2016/02/07/you-say-you-found-a-revolution/.

[3]Nan Z. Da, “The Computational Case against Computational Literary Studies,” Critical Inquiry 45 (Spring 2019): 601-39.

[4]Ted Underwood, “The Life Cycles of Genres,” Journal of Cultural Analytics, May 23, 2016, http://culturalanalytics.org/2016/05/the-life-cycles-of-genres/.

[5]Eve Kraicer and Andrew Piper, “Social Characters: The Hierarchy of Gender in Contemporary English-Language Fiction,” Journal of Cultural Analytics, January 30, 2019, http://culturalanalytics.org/2019/01/social-characters-the-hierarchy-of-gender-in-contemporary-english-language-fiction/.

[6]Ted Underwood, “Why Literary Time is Measured in Minutes,” ELH 85.2 (2018): 341-65.

[7]Katherine Bode, “The Equivalence of ‘Close’ and ‘Distant’ Reading; or, Toward a New Object for Data-Rich Literary History,” MLQ 78.1 (2017): 77-106.

 


Computational Literary Studies: Participant Forum Responses

 

The Select

Andrew Piper

Nan Z. Da’s study published in Critical Inquiry participates in an emerging trend across a number of disciplines that falls under the heading of “replication.”[1] In this, her work follows major efforts in other fields, such as the Open Science Collaboration’s “reproducibility project,” which sought to replicate past studies in the field of psychology.[2] As the authors of the OSC collaboration write, the value of replication, when done well, is that it can “increase certainty when findings are reproduced and promote innovation when they are not.”

And yet despite arriving at sweeping claims about an entire field, Da’s study fails to follow any of the procedures and practices established by projects like the OSC.[3] While invoking the epistemological framework of replication—that is, to prove or disprove the validity of both individual articles and an entire field—her practices follow instead the time-honoured traditions of selective reading from the field of literary criticism. Da’s work is ultimately valuable not because of the computational case it makes (that work still remains to be done), but because of the way it foregrounds so many of the problems that accompany traditional literary-critical models when used to make large-scale evidentiary claims. The good news is that this article has made the problem of generalization, of how we combat the problem of selective reading, into a central issue facing the field.

Start with the evidence chosen. When undertaking their replication project, the OSC generated a sample of one hundred studies taken from three separate journals within a single year of publication to approximate a reasonable cross-section of the field. Da, on the other hand, chooses “a handful” of articles (fourteen by my count) from different years and different journals with no clear rationale for how these articles are meant to represent an entire field. The point is not the number chosen but that we have no way of knowing why these articles and not others were chosen and thus whether her findings extend to any work beyond her sample. Indeed, the only linkage appears to be that these studies all “fail” by her criteria. Imagine if the OSC had found that 100 percent of articles sampled failed to replicate. Would we find their results credible? Da, by contrast, is surprisingly only ever right.

Da’s focus within articles exhibits an even stronger degree of nonrepresentativeness. In their replication project, the OSC establishes clearly defined criteria through which a study can be declared not to replicate, while also acknowledging the difficulty of arriving at this conclusion. Da, by contrast, applies different criteria to every article, making debatable choices, as well as outright errors, that are clearly designed to foreground differences.[4] She misnames authors of articles, mis-cites editions, mis-attributes arguments to the wrong book, and fails at some basic math.[5] And yet each of these assertions always adds up to the same certain conclusion: failed to replicate. In Da’s hands, part is always a perfect representation of whole.

Perhaps the greatest limitation of Da’s piece is her extremely narrow (that is, nonrepresentative) definition of statistical inference and computational modeling. In Da’s view, the only appropriate way to use data is to perform what is known as significance testing, where we use a statistical model to test whether a given hypothesis is “true.”[6] There is no room for exploratory data analysis, for theory building, or for predictive modeling in her view of the field.[7] This is particularly ironic given that Da herself performs no such tests. She holds others to standards to which she herself is not accountable. Nor does she cite articles where authors explicitly undertake such tests[8] or research that calls into question the value of such tests[9] or research that explores the relationship between word frequency and human judgments that she finds so problematic.[10] The selectivity of Da’s work is deeply out of touch with the larger research landscape.

All of these practices highlight a more general problem that has for too long gone unexamined in the field of literary study. How are we to move reliably from individual observations to general beliefs about things in the world? Da’s article provides a tour de force of the problems of selective reading when it comes to generalizing about individual studies or entire fields. Addressing the problem of responsible and credible generalization will be one of the central challenges facing the field in the years to come. As with all other disciplines across the university, data and computational modeling will have an integral role to play in that process.

ANDREW PIPER is Professor and William Dawson Scholar in the Department of Languages, Literatures, and Cultures at McGill University. He is the author most recently of Enumerations: Data and Literary Study (2018).

[1]Nan Z. Da, “The Computational Case against Computational Literary Studies,” Critical Inquiry 45 (Spring 2019): 601-639. For accessible introductions to what has become known as the replication crisis in the sciences, see Ed Yong, “Psychology’s Replication Crisis Can’t Be Wished Away,” The Atlantic, March 4, 2016.

[2]Open Science Collaboration, “Estimating the Reproducibility of Psychological Science,” Science 349, no. 6251 (28 August 2015): aac4716. DOI: 10.1126/science.aac4716.

[3]Compare Da’s sweeping claims with the more modest ones made by the OSC in Science even given their considerably larger sample and far more rigorous effort at replication, reproduced here. For a discussion of the practice of replication, see Brian D. Earp and David Trafimow, “Replication, Falsification, and the Crisis of Confidence in Social Psychology,” Frontiers in Psychology May 19, 2015: doi.org/10.3389/fpsyg.2015.00621.

[4]For a list, see Ben Schmidt, “A computational critique of a computational critique of a computational critique.” I provide more examples in the scholarly response here: Andrew Piper, “Do We Know What We Are Doing?” Journal of Cultural Analytics, April 1, 2019.

[5]She cites Mark Algee-Hewitt as Mark Hewitt, cites G. Casella as the author of Introduction to Statistical Learning when it was Gareth James, cites me and Andrew Goldstone as co-authors in the Appendix when we were not, claims that “the most famous example of CLS forensic stylometry” was Hugh Craig and Arthur F. Kinney’s book and that it advances a theory of Marlowe’s authorship of Shakespeare’s plays, which it does not, and miscalculates the number of people it would take to read fifteen thousand novels in a year. The answer is 1,250, not 1,000 as she asserts (presumably at a pace of one novel per month per reader: 15,000 ÷ 12 = 1,250). This statistic is also totally meaningless.

[6]Statements like the following also suggest that she is far from a credible guide to even this aspect of statistics: “After all, statistics automatically assumes that 95 percent of the time there is no difference and that only 5 percent of the time there is a difference. That is what it means to look for p-value less than 0.05.” This is not what it means to look for a p-value less than 0.05. A p-value is the estimated probability of getting data at least as extreme as what we observed, assuming our null hypothesis is true. The smaller the p-value, the more surprising our data would be if the null hypothesis were true. The aforementioned 5% threshold says nothing about how often there will be a “difference” (in other words, how often the null hypothesis is false). Instead, it says: if the null hypothesis is in fact true, we will mistakenly conclude that there is a difference only 5% of the time. Nor does “statistics” “automatically” assume that .05 is the appropriate cut-off. It depends on the domain, the question, and the aims of modeling. These are gross oversimplifications.

[7]For reflections on literary modeling, see Andrew Piper, “Think Small: On Literary Modeling.” PMLA 132.3 (2017): 651-658; Richard Jean So, “All Models Are Wrong,” PMLA 132.3 (2017); Ted Underwood, “Algorithmic Modeling: Or, Modeling Data We Do Not Yet Understand,” The Shape of Data in Digital Humanities: Modeling Texts and Text-based Resources, eds. J. Flanders and F. Jannidis (New York: Routledge, 2018).

[8]See Andrew Piper and Eva Portelance, “How Cultural Capital Works: Prizewinning Novels, Bestsellers, and the Time of Reading,” Post-45 (2016); Eve Kraicer and Andrew Piper, “Social Characters: The Hierarchy of Gender in Contemporary English-Language Fiction,” Journal of Cultural Analytics, January 30, 2019. DOI: 10.31235/osf.io/4kwrg; and Andrew Piper, “Fictionality,” Journal of Cultural Analytics, Dec. 20, 2016. DOI: 10.31235/osf.io/93mdj.

[9]The literature debating the values of significance testing is vast. See Simmons, Joseph P., Leif D. Nelson, and Uri Simonsohn. “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” Psychological Science 22, no. 11 (November 2011): 1359–66. doi:10.1177/0956797611417632.

[10]See Rens Bod, Jennifer Hay, and Stefanie Jannedy, Probabilistic Linguistics (Cambridge, MA: MIT Press, 2003); Dan Jurafsky and James Martin, “Vector Semantics,” Speech and Language Processing, 3rd edition (2018): https://web.stanford.edu/~jurafsky/slp3/6.pdf; for the relation of communication to information theory, M. W. Crocker, V. Demberg, and E. Teich, “Information Density and Linguistic Encoding,” Künstliche Intelligenz 30.1 (2016): 77-81, https://doi.org/10.1007/s13218-015-0391-y; and for the relation to language acquisition and learning, L. C. Erickson and E. D. Thiessen, “Statistical Learning of Language: Theory, Validity, and Predictions of a Statistical Learning Account of Language Acquisition,” Developmental Review 37 (2015): 66–108, doi:10.1016/j.dr.2015.05.002.


Computational Literary Studies: Participant Forum Responses

 

Trust in Numbers

Hoyt Long and Richard Jean So

 

Nan Da’s “The Computational Case against Computational Literary Studies” stands out from past polemics against computational approaches to literature in that it purports to take computation seriously. It recognizes that a serious engagement with this kind of research means developing literacy in statistical and other concepts. Insofar as her essay promises to move the debate beyond a flat rejection of numbers, and towards something like a conversation about replication, it is a useful step forward.

This, however, is where its utility ends. “Don’t trust the numbers,” Da warns. Or rather, “Don’t trust their numbers, trust mine.” But should you? If you can’t trust their numbers, she implies, the entire case for computational approaches falls apart. Trust her numbers and you’ll see this. But her numbers cannot be trusted. Da’s critique of fourteen articles in the field of cultural analytics is rife with technical and factual errors. This is not merely quibbling over details. The errors reflect a basic lack of understanding of fundamental statistical concepts and are akin to an outsider to literary studies calling George Eliot a “famous male author.” Even more concerning, Da fails to understand statistical method as a contextual, historical, and interpretive project. The essay’s greatest error, to be blunt, is a humanist one.

Here we focus on Da’s errors related to predictive modeling. This is the core method used in the two essays of ours that she critiques. In “Turbulent Flow,” we built a model of stream-of-consciousness (SOC) narrative with thirteen linguistic features and found that ten of them, in combination, reliably distinguished passages that we identified as SOC (as compared with passages taken from a corpus of realist fiction). Type-token ratio (TTR), a measure of lexical diversity, was the most distinguishing of these, though uninformative on its own. The purpose of predictive modeling, as we carefully explain in the essay, is to understand how multiple features work in concert, not alone, to identify stylistic patterns. Nothing in Da’s critique suggests she is aware of this fundamental principle.

Indeed, Da interrogates just one feature in our model (TTR) and argues that modifying it invalidates our modeling. Specifically, she tests whether the strong association of TTR with SOC holds after removing words in her “standard stopword list,” instead of in the stopword list we used. She finds it doesn’t. There are two problems with this. First, TTR and “TTR minus stopwords” are two separate features. We actually included both in our model and found the latter to be minimally distinctive. Second, while the intuition to test for feature robustness is appropriate, it is undercut by the assertion that there is a “standard” stopword list that should be universally applied. Ours was specifically curated for use with nineteenth- and early twentieth-century fiction. Even if there was good reason to adopt her “standard” list, one still must rerun the model to test if the remeasured “TTR minus stopwords” feature changes the overall predictive accuracy. Da doesn’t do this. It’s like fiddling with a single piano key and, without playing another note, declaring the whole instrument to be out of tune.
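To make the procedural point concrete, here is a minimal sketch (an editorial illustration, not Long and So’s code; passages, labels, STOPWORDS_CURATED, and STOPWORDS_STANDARD are hypothetical placeholders, and a generic logistic-regression classifier stands in for their model) of what rerunning the model involves: remeasure the disputed feature under each stopword list, refit the whole classifier, and compare cross-validated accuracy, rather than inspecting a single feature in isolation.

# Editorial illustration: if a single feature ("TTR minus stopwords") is
# remeasured under a different stopword list, the whole model must be refit
# and its overall accuracy recompared. `passages`, `labels`, and the two
# stopword lists are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def type_token_ratio(tokens):
    # Lexical diversity: unique word types divided by total tokens.
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def features(passage, stopwords):
    tokens = passage.lower().split()
    content = [t for t in tokens if t not in stopwords]
    # The real model has thirteen features; two suffice for the sketch.
    return [type_token_ratio(tokens),      # TTR
            type_token_ratio(content)]     # "TTR minus stopwords"

def cv_accuracy(stopwords):
    X = np.array([features(p, stopwords) for p in passages])
    y = np.array(labels)  # 1 = stream of consciousness, 0 = realism
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# The relevant comparison is between whole models, not a single remeasured feature.
print("accuracy, curated stopword list:  ", cv_accuracy(STOPWORDS_CURATED))
print("accuracy, 'standard' stopword list:", cv_accuracy(STOPWORDS_STANDARD))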

But the errors run deeper than this. In Da’s critique of “Literary Pattern Recognition,” she tries to invalidate the robustness of our model’s ability to classify English-language haiku poems from nonhaiku poems. She does so by creating a new corpus of “English translations of Chinese couplets” and tests our model on this corpus. Why do this? She suggests that it is because they are filled “with similar imagery” to English haiku and are similarly “Asian.” This is a misguided decision that smacks of Orientalism. It completely erases context and history, suggesting an ontological relation where there is none. This is why we spend over twelve pages delineating the English haiku form in both critical and historical terms.

These errors exemplify a consistent refusal to contextualize and historicize one’s interpretative practices (indeed to “read well”), whether statistically or humanistically. We do not believe there exist “objectively” good literary interpretations or that there is one “correct” way to do statistical analysis: Da’s is a position most historians of science, and most statisticians themselves, would reject.  Conventions in both literature and science are continuously debated and reinterpreted, not handed down from on high. And like literary studies, statistics is a body of knowledge formed from messy disciplinary histories, as well as diverse communities of practice. Da’s essay insists on a highly dogmatic, “objective,” black-and-white version of knowledge, a disposition totally antithetical to bothstatistics and literary studies. It is not a version that encourages much trust.

Hoyt Long is associate professor of Japanese literature at the University of Chicago. He publishes widely in the fields of Japanese literary studies, media history, and cultural analytics. His current book project is Figures of Difference: Quantitative Approaches to Modern Japanese Literature.

Richard Jean So is assistant professor of English and cultural analytics at McGill University. He works on computational approaches to literature and culture with a focus on contemporary American writing and race. His current book project is Redlining Culture: A Data History of Race and US Fiction.

1 Comment

Filed under Uncategorized

Computational Literary Studies: Participant Forum Responses

 

What the New Computational Rigor Should Be

Lauren F. Klein

Writing about the difficulties of evaluating digital scholarship in a recent special issue of American Quarterlydevoted to DH, Marisa Parham proposes the concept of “The New Rigor” to account for the labor of digital scholarship as well as its seriousness: “It is the difference between what we say we want the world to look like and what we actually carry out in our smallest acts,” she states (p. 683). In “The Computational Case against Computational Literary Studies,” Nan Z. Da also makes the case for a new rigor, although hers is more narrowly scoped. It entails both a careful adherence to the methods of statistical inquiry and a concerted rejection of the application of those methods to domains—namely, literary studies—that fall beyond their purported use.

No one would argue with the former. But it is the latter claim that I will push back against. Several times in her essay, Da makes the case that “statistical tools are designed to do certain things and solve specific problems,” and for that reason, they should not be employed to “capture literature’s complexity” (pp. 619-20, 634). To be sure, there exists a richness of language and an array of ineffable—let alone quantifiable—qualities of literature that cannot be reduced to a single model or diagram. But the complexity of literature exceeds even that capaciousness, as most literary scholars would agree. And for that very reason, we must continue to explore new methods for expanding the significance of our objects of study. As literary scholars, we would almost certainly say that we want to look at—and live in—a world that embraces complexity. Given that vision, the test of rigor then becomes, to return to Parham’s formulation, how we usher that world into existence through each and every one of “our smallest acts” of scholarship, citation, and critique.

In point of fact, many scholars already exhibit this new computational rigor. Consider how Jim Casey, the national codirector of the Colored Conventions Project, is employing social network analysis—including the centrality scores and modularity measures that Da finds lacking in the example she cites—in order to detect changing geographic centers for this important nineteenth-century organizing movement. Or how Lisa Rhody has found an “interpretive space that is as vital as the weaving and unraveling at Penelope’s loom” in a topic model of a corpus of 4,500 poems. This interpretive space is one that Rhody creates in no small part by accounting for the same fluctuations of words in topics—the result of the sampling methods employed in almost all topic model implementations—that Da invokes, instead, in order to dismiss the technique out of hand. Or how Laura Estill, Dominic Klyve, and Kate Bridal have employed statistical analysis, including a discussion of the p-values that Da believes (contramany statisticians) are always required, in order to survey the state of Shakespeare studies as a field.

That these works are authored by scholars in a range of academic roles, including postdoctoral fellows and DH program coordinators as well as tenure-track faculty, and are published in a range of venues, including edited collections and online as well as domain-specific journals; further points to the range of extant work that embraces the complexity of literature in precisely the ways that Da describes. But these works to do more: they also embrace the complexity of the statistical methods that they employ. Each of these essays involve a creative repurposing of the methods they borrow from more computational fields, as well as a trenchant self-critique. Casey, for example, questions how applying techniques of social network analysis, which are premised on a conception of sociality as characterized by links between individual “nodes,” can do justice to a movement celebrated for its commitment to collective action. Rhody, for another, considers the limits of the utility of topic modeling, as a tool “designed to be used with texts that employ as little figurative language as possible,” for her research questions about ekphrasis. These essays each represent “small acts” and necessarily so. But taken alongside the many other examples of computational work that are methodologically sound, creatively conceived, and necessarily self-critical, they constitute the core of a field committed to complexity in both the texts they elucidate and the methods they employ.

In her formulation of the “The New Rigor,” Parham—herself a literary scholar—places her emphasis on a single word: “Carrying, how we carry ourselves in our relationships and how we carry each other, is the real place of transformation,” she writes. Da, the respondents collected in this forum, and all of us in literary studies—computational and not—might linger on that single word. If our goal remains to celebrate the complexity of literature—precisely because it helps to illuminate the complexity of the world—then we must carry ourselves, and each other, with intellectual generosity and goodwill. We must do so, moreover, with a commitment to honoring the scholarship, and the labor, that has cleared the path up to this point. Only then can we carry forward the field of computational literary studies into the transformative space of future inquiry.

LAUREN F. KLEIN is associate professor at the School of Literature, Media, and Communication, Georgia Institute of Technology.

3 Comments

Filed under Uncategorized

Computational Literary Studies: Participant Forum Responses

 

What Is Literary Studies?

Ed Finn

This is the question that underpins Da’s takedown of what she calls computational literary studies (CLS). The animus with which she pursues this essay is like a search light that creates a shadow behind it. “The discipline is about reducing reductionism,” she writes (p. 638), which is a questionable assertion about a field that encompasses many kinds of reduction and contradictory epistemic positions, from thing theory to animal studies. Da offers no evidence or authority to back up her contention that CLS fails to validate its claims. Being charitable, what Da means, I think, is that literary scholars should always attend to context, to the particulars of the works they engage.

Da’s essay assails what she terms the false rigor of CLS: the obsession with reductive analyses of large datasets, the misapplied statistical methods, the failure to disentangle artifacts of measurement from significant results. And there may be validity to these claims: some researchers use black box tools they don’t understand, not just in the digital humanities but in fields from political science to medicine. The most helpful contribution of Da’s article is tucked away in the online appendix, where she suggests a very good set of peer review and publication guidelines for DH work. I can imagine a version of this essay that culminated with those guidelines rather than the suggestion that “reading literature well” is a bridge too far for computational approaches.

The problem with the spotlight Da shines on the rigor of CLS is that shadow looming behind it. What does rigor look like in “the discipline” of literary studies, which is defined so antagonistically to CLS here? What are the standards of peer review that ensure literary scholarship validates its methods, particularly when it draws those methods from other disciplines? Nobody is calling in economists to assess the validity of Marxist literary analysis, or cognitive psychologists to check applications of affect theory, and it’s hard to imagine that scholars would accept the disciplinary authority of those critics. I am willing to bet Critical Inquiry’s peer review process for Da’s article did not include federal grants program officers, university administrators, or scholars of public policy being asked to assess Da’s rhetorical—but central—question “of why we need ‘labs’ or the exorbitant funding that CLS has garnered” (p. 603).

I contend this is actually a good idea: literary studies can benefit from true dialog and collaboration with fields across the entire academy. Da clearly feels that this is justified in the case of CLS, where she calls for more statistical expertise (and brings in a statistician to guide her analysis in this paper). But why should CLS be singled out for this kind of treatment?

Either one accepts that rigor sometimes demands literary studies should embrace expertise from other fields—like Da bringing in a statistician to validate her findings for this paper—or one accepts that literary studies is made up of many contradictory methods and that “the discipline” is founded on borrowing methods from other fields without any obligation validate findings by the standards of those other fields. What would it look like to generalize Da’s proposals for peer review to other areas of literary studies? The contemporary research I find most compelling makes this more generous move: bringing scholars in the humanities together with researchers in the social sciences, the arts, medicine, and other arenas where people can actually learn from one another and do new kinds of work.

To me, literary studies is the practice of reading and writing in order to better understand the human condition. And the condition is changing. Most of what we read now comes to us on screens that are watching us as we watch them. Many of the things we think about have been curated and lobbed into our consciousness by algorithmic feeds and filters. I studied Amazon recommendation networks because they play an important role in contemporary American literary reception and the lived experience of fiction for millions of readers—at least circa 2010, when I wrote the article. My approach in that work hewed to math that I understand and a scale of information that I call small data because it approximates the headspace of actual readers thinking about particular books. Small data always leads back to the qualitative and to the particular, and it is a minor example of the contributions humanists can make beyond the boundaries of “the discipline.”

We desperately need the humanities to survive the next century, when so many of our species’ bad bets are coming home to roost. Text mining is not “ethically neutral,” as Da gobsmackingly argues (p. 620), any more than industrialization was ethically neutral, or the NSA using network analysis to track suspected terrorists (Da’s example of a presumably acceptable “operationalizable end” for social network analysis) (p. 632). The principle of charity would, I hope, preclude Da’s shortsighted framing of what matters in literary studies, and it would open doors to other fields like computer science where many researchers are, either unwittingly or uncaringly, deploying words like human and read and write with the same kind of facile dismissal of methods outside “the discipline” that are on display here. That is the context in which we read and think about literature now, and if we want to “read literature well,” we need to bring the insights of literary study to broader conversations where we participate, share, educate, and learn.

ED FINN is the founding director of the Center for Science and the Imagination at Arizona State University where he is an associate professor in the School of Arts, Media, and Engineering and the Department of English.

1 Comment

Filed under Uncategorized

Computational Literary Studies: Participant Forum Responses

 

Sarah Brouillette

DH is here to stay, including in the CLS variant whose errors Nan Da studies. This variant is especially prevalent in English programs, and it will continue to gain force there. Even when those departments have closed or merged with other units, people with CLS capacities will continue to find positions—though likely contractually —when others no longer can. This is not to say that DH is somehow itself the demise of the English department. The case rather is that both the relative health of DH and the general decline in literary studies—measured via enrollments, number of tenured faculty, and university heads’ dispositions toward English—arise from the same underlying factors. The pressures that English departments face are grounded in the long economic downturn and rising government deficits, deep cuts to funding for higher education, rising tuition, and a turn by university administrators toward boosting business and STEM programs. We know this. There has been a foreclosure of futurity for students who are facing graduation with significant debt burdens and who doubt that they will find stable work paying a good wage. Who can afford the luxury of closely reading five hundred pages of dense prose? Harried anxious people accustomed to working across many screens, many open tabs, with constant pings from social media, often struggle with sustained reading. Myself included. DH is a way of doing literary studies without having to engage in long periods of sustained reading, while acquiring what might feel like job skills. It doesn’t really matter how meaningful CLS labs’ findings are. As Da points out, practitioners themselves often emphasize how tentative their findings are or stress flaws in the results or the method that become the occasion for future investment and development. That is the point: investment and development. The key to DH’s relative health is that it supports certain kinds of student training and the development of technologically enhanced learning environments. One of the only ways to get large sums of grant money from the Social Sciences and Humanities Research Council of Canada (SSHRC) is to budget for equipment and for student training. Computer training is relatively easy to describe in a budget justification. Universities for their part often like DH labs because they attract these outside funders, and because grants don’t last forever, a campus doesn’t have to promise anything beyond short-term training and employment. As for the students: to be clear, those with DH skills don’t necessarily walk more easily into jobs than those without them. But DH labs, which at least in Canada need to be able to list training as a priority, offer an experience of education that has an affective appeal for many students—an appeal that universities work hard to cultivate and reinforce. This cultivation is there in the constant contrasts made between old fashioned and immersive learning, between traditional and project-based classrooms, between the dull droning lecture and the experiential . . . well, experience. (The government of Ontario has recently mandated that every student have an opportunity to experience “work-integrated learning” before graduation.) It is there also in the push to make these immersive experiences online ones, mediated by learning management systems such as Brightspace or Canvas, which store data via Amazon Web Services. Learning in universities increasingly occurs in data capturable forms. 
The experience of education, from level of participation to test performance, is cultivated, monitored, and tracked digitally. Students who have facility with digital technologies are, needless to say, at an advantage in this environment. Meanwhile the temptation to think that courses that include substantial digital components are more practical and professional – less merely academic – is pretty understandable, as universities are so busily cultivating and managing engagement in a context in which disengagement otherwise makes total sense. DH is simply far more compatible with all of these observable trends than many other styles of literary inquiry.

SARAH BROUILLETTE is a professor in the Department of English at Carleton University in Ottawa, Canada.

2 Comments

Filed under Uncategorized

Computational Literary Studies: Participant Forum Responses

 

Katherine Bode

Nan Z. Da’s statistical review of computational literary studies (CLS) takes issue with an approach I also have concerns about, but it is misconceived in its framing of the field and of statistical inquiry. Her definition of CLS—using statistics, predominantly machine learning, to investigate word patterns—excludes most of what I would categorize as computational literary studies, including research that: employs data construction and curation as forms of critical analysis; analyzes bibliographical and other metadata to explore literary trends; deploys machine-learning methods to identify literary phenomena for noncomputational interpretation; or theorizes the implications of methods such as data visualization and machine learning for literary studies. (Interested readers will find diverse forms of CLS in the work of Ryan Cordell, Anne DeWitt, Johanna Drucker, Lauren Klein, Matthew Kirschenbaum, Anouk Lang, Laura B. McGrath, Stephen Ramsay, and Glenn Roe, among others.)

Beyond its idiosyncratic and restrictive definition of CLS, what strikes me most about Da’s essay is its constrained and contradictory framing of statistical inquiry. For most of the researchers Da cites, the pivot to machine learning is explicitly conceived as rejecting a positivist view of literary data and computation in favor of modelling as a subjective practice. Da appears to argue, first, that this pivot has not occurred enough (CLS takes a mechanistic approach to literary interpretation) and, second, that it has gone too far (CLS takes too many liberties with statistical inference, such as “metaphor[izing] … coding and statistics” [p. 606 n. 9]). On the one hand, then, Da repeatedly implies that, if CLS took a slightly different path—that is, trained with more appropriate samples, demonstrated greater rigor in preparing textual data, avoided nonreproducible methods like topic modelling, used Natural Language Processing with the sophistication of corpus linguists—it could reach a tipping point at which the data used, methods employed, and questions asked became appropriate to statistical analysis. On the other, she precludes this possibility in identifying “reading literature well” as the “cut-off point” at which computational textual analysis ceases to have “utility” (p. 639). This limited conception of statistical inquiry also emerges in Da’s two claims about statistical tools for text mining: they are “ethically neutral”; and they must be used “in accordance with their true function” (p. 620), which Da defines as reducing information to enable quick decision making. Yet as with any intellectual inquiry, surely any measurements—let alone measurements with this particular aim—are interactions with the world that have ethical dimensions.

Statistical tests of statistical arguments are vital. And I agree with Da’s contention that applications of machine learning to identify word patterns in literature often simplify complex historical and critical issues. As Da argues, these simplifications include conceiving of models as “intentional interpretations” (p. 621) and of word patterns as signifying literary causation and influence. But there’s a large gap between identifying these problems and insisting that statistical tools have a “true function” that is inimical to literary studies. Our discipline has always drawn methods from other fields (history, philosophy, psychology, sociology, and others). Perhaps it’s literary studies’ supposed lack of functional utility (something Da claims to defend) that has enabled these adaptations to be so productive; perhaps such adaptations have been productive because the meaning of literature is not singular but forged constitutively with a society where the prominence of particular paradigms (historical, philosophical, psychological, sociological, now statistical) at particular moments shapes what and how we know. In any case, disciplinary purity is no protection against poor methodology; and cross disciplinarity can increase methodological awareness.

Da’s rigid notion of a “true function” for statistics prevents her asking more “argumentatively meaningful” (p. 639) questions about possible encounters between literary studies and statistical methods. These might include: If not intentional or interpretive, what is the epistemological—and ontological and ethical—status of patterns discerned by machine learning? Are there ways of connecting word counts with other, literary and nonliterary, elements that might enhance the “explanatory power” (p. 604) and/or critical potential of such models and, if not, why not? As is occurring in fields such as philosophy, sociology, and science and technology studies, can literary studies apply theoretical perspectives (such as feminist empiricism or new materialism) to reimagine literary data and statistical inquiry? Without such methodological and epistemological reflection, Da’s statistical debunking of statistical models falls into the same trap she ascribes to those arguments: of confusing “what happens mechanistically with insight” (p. 639). We very much need critiques of mechanistic—positivist, reductive, and ahistorical—approaches to literary data, statistics, and machine learning. Unfortunately, Da’s critique demonstrates the problems it decries.

 

KATHERINE BODE is associate professor of literary and textual studies at the Australian National University. Her latest book, A World of Fiction: Digital Collections and the Future of Literary History (2018), offers a new approach to literary research with mass-digitized collections, based on the theory and technology of the scholarly edition. Applying this model, Bode investigates a transnational collection of around 10,000 novels and novellas, discovered in digitized nineteenth-century Australian newspapers, to offer new insights into phenomena ranging from literary anonymity and fiction syndication to the emergence and intersections of national literary traditions.

1 Comment

Filed under Uncategorized

Computational Literary Studies: Participant Forum Responses

 

Criticism, Augmented

Mark Algee-Hewitt

A series of binaries permeates Nan Z. Da’s article “The Computational Case against Computational Literary Studies”: computation OR reading; numbers OR words; statistics OR critical thinking. Working from these false oppositions, the article conjures a conflict between computation and criticism. The field of cultural analytics, however rests on the discovery of compatibilities between these binaries: the ability of computation to work hand in hand with literary criticism and the use of critical interpretation by its practitioners to make sense of their statistics.

The oppositions she posits lead Da to focus exclusively on the null hypothesis testing of confirmatory data analysis (CDA): graphs are selected, hypotheses are proposed, and errors in significance are sought.[1]

But, for mathematician John Tukey, the founder of exploratory data analysis (EDA), allowing the data to speak for itself, visualizing it without an underlying hypothesis, allows researchers to avoid the pitfalls of confirmation bias.[2]This is what psychologist William McGuire (1989) calls “the hypothesis testing myth”: if a researcher begins by believing a hypothesis (for example, that literature is too complex for computational analysis), then, with a simple manipulation of statistics, she or he can prove herself or himself correct (by cherry-picking examples that support her argument).[3]Practitioners bound by the orthodoxy of their fields often miss the new patterns revealed when statistics are integrated into new areas of research.

In literary studies, the visualizations produced by EDA do not replace the act of reading but instead redirect it to new ends.[4]Each site of statistical significance reveals a new locus of reading: the act of quantification is no more a reduction than any interpretation.[5]Statistical rigor remains crucial, but equally as essential are the ways in which these data objects are embedded within a theoretical apparatus that draws on literary interpretation.[6]And yet, in her article, Da plucks single statistics from thirteen articles with an average length of about 10,250 words each.[7]It is only by ignoring these 10,000 words, by refusing to read the context of the graph, the arguments, justifications, and dissentions, that she can marshal her arguments.

In Da’s adherence to CDA, her critiques require a hypothesis: when one does not exist outside of the absent context, she is forced to invent one. Even a cursory reading of “The Werther Topologies” reveals that we are not interested in questions of the “influence of Werther on other texts”: rather we are interested in exploring the effect on the corpus when it is reorganized around the language of Werther.[8]The topology creates new adjacencies, prompting new readings: it does not prove or disprove, it is not right or wrong – to suggest otherwise is to make a category error.

Cultural analytics is not a virtual humanities that replaces the interpretive skills developed by scholars over centuries with mathematical rigor. It is an augmented humanities that, at its best, presents new kinds of evidence, often invisible to even the closest reader, alongside carefully considered theoretical arguments, both working in tandem to produce new critical work.

 

MARK ALGEE-HEWITT is an assistant professor of English and Digital Humanities at Stanford University where he directs the Stanford Literary Lab. His current work combines computational methods with literary criticism to explore large scale changes in aesthetic concepts during the eighteenth and nineteenth centuries. The projects that he leads at the Literary Lab include a study of racialized language in nineteenth-century American literature and a computational analysis of differences in disciplinary style. Mark’s work has appeared in New Literary History, Digital Scholarship in the Humanities, as well as in edited volumes on the Enlightenment and the Digital Humanities.

[1]Many of the articles cited by Da combine both CDA and EDA; a movement of the field noted by Ted Underwood in Distant Horizons (p. xii).

[2]Tukey, John. Exploratory Data Analysis New York, Pearson, 1977.

[3]McGuire, William J. A perspectivist approach to the strategic planning of programmatic scientific research.” In Psychology of Science: Contributions to Metascience ed. B. Gholson et al. Cambridge: Cambridge UP, 1989. 214-245. See also Frederick Hartwig and Brian Dearling on the need to not rely exclusively on CDA (Exploratory Data Analysis, Newbury Park: Sage Publications, 1979) and John Behrens on the “hypothesis testing myth.” (“Principles and Procedures of Exploratory Data Analysis.” Psychological Methods, 2(2): 1997, 131-160.

[4]Da, Nan Z. “The Computational Case against Computational Literary Analysis.” Critical Inquiry 45(3): 2019. 601-639.

[5]See, for example, Gemma, Marissa, et al. “Operationalizing the Colloquial Style: Repetition in 19th-Century American Fiction” Digital Scholarship in the Humanities, 32(2): 2017. 312-335; or Laura B. McGrath et al. “Measuring Modernist Novelty” The Journal of Cultural Analytics (2018).

[6]See, for example, our argument about the “modularity of criticism” in Algee-Hewitt, Mark, Fredner, Erik, and Walser, Hannah. “The Novel As Data.” Cambridge Companion to the Noveled. Eric Bulson. Cambridge: Cambridge UP, 2018. 189-215.

[7]Absent the two books, which have a different relationship to length, Da extracts visualizations or numbers from 13 articles totaling 133,685 words (including notes and captions).

[8]Da (2019), 634; Piper and Algee-Hewitt, (“The Werther Effect I” Distant Readings: Topologies of German Culture in the Long Nineteenth Century Ed Matt Erlin and Lynn Tatlock. Rochester: Camden House, 2014), 156-157.

1 Comment

Filed under Uncategorized

Computational Literary Studies: A Critical Inquiry Online Forum

Beginning on 1 April, this Critical Inquiry online forum will feature responses to and discussion about Nan Z. Da’s “The Computational Case against Computational Literary Studies.” This essay raises a number of challenges for the field of computational literary studies. As Da observes in the first sentence of this piece:

This essay works at the empirical level to isolate a series of technical problems, logical fallacies, and conceptual flaws in an increasingly popular subfield in literary studies variously known as cultural analytics, literary data mining, quantitative formalism, literary text mining, computational textual analysis, computational criticism, algorithmic literary studies, social computing for literary studies, and computational literary studies (the phrase I use here).

Since its publication in Critical Inquiry on 14 March 2019, this essay has already prompted numerous responses online. For instance, in The Chronicle of Higher Education on 27 March 2019, Da published a companion piece to the Critical Inquiry essay, “The Digital Humanities Debacle,” and Ted Underwood published a defense of the digital humanities and cultural analytics, “Dear Humanists: Fear Not the Digital Revolution.” Other responses have emerged across social media.

In order to continue this conversation in a shared space, Critical Inquiry has invited several practitioners and critics in the digital humanities and computational literary studies to respond. This group of participants includes several of the scholars discussed in Da’s essay, as well as a few additional contributors to and critics of the field.

CONTRIBUTORS

RESPONDENT

AFTERWORD

  • Stanley Fish (Yeshiva University). Response

This forum begins with a series of short responses from participants. It then continues for several days with an open-ended discussion. We invite you to follow along and return to this page, as the blog will be updated several times a day to incorporate new posts. In order to enable mutual responses, we have limited the number of primary contributors. However, the comments will remain open to readers outside this group, who should feel free to contribute their own thoughts. We look forward to a generative discussion.

Patrick Jagoda (Executive Editor, Critical Inquiry, University of Chicago)

 


DAY 1 RESPONSES 


Criticism, Augmented

Mark Algee-Hewitt

A series of binaries permeates Nan Z. Da’s article “The Computational Case against Computational Literary Studies”: computation OR reading; numbers OR words; statistics OR critical thinking. Working from these false oppositions, the article conjures a conflict between computation and criticism. The field of cultural analytics, however, rests on the discovery of compatibilities between these binaries: the ability of computation to work hand in hand with literary criticism and the use of critical interpretation by its practitioners to make sense of their statistics.

The oppositions she posits lead Da to focus exclusively on the null hypothesis testing of confirmatory data analysis (CDA): graphs are selected, hypotheses are proposed, and errors in significance are sought.[1]

But, for mathematician John Tukey, the founder of exploratory data analysis (EDA), allowing the data to speak for itself, visualizing it without an underlying hypothesis, allows researchers to avoid the pitfalls of confirmation bias.[2] This is what psychologist William McGuire (1989) calls “the hypothesis testing myth”: if a researcher begins by believing a hypothesis (for example, that literature is too complex for computational analysis), then, with a simple manipulation of statistics, she or he can prove herself or himself correct (by cherry-picking examples that support her argument).[3] Practitioners bound by the orthodoxy of their fields often miss the new patterns revealed when statistics are integrated into new areas of research.
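
To make the contrast concrete, here is a minimal, purely illustrative sketch (invented toy data, not drawn from any of the studies under discussion) of the two postures: a confirmatory test run on one pre-selected feature, versus an exploratory pass that simply plots every feature before any hypothesis has been committed to.

```python
# Illustrative only: toy data standing in for a per-passage feature table.
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,  # two hypothetical sets of passages
    "type_token_ratio": np.r_[rng.normal(0.52, 0.05, 50), rng.normal(0.55, 0.05, 50)],
    "mean_sentence_length": np.r_[rng.normal(18, 4, 50), rng.normal(21, 4, 50)],
    "pronoun_rate": np.r_[rng.normal(0.08, 0.02, 50), rng.normal(0.09, 0.02, 50)],
})

# The confirmatory (CDA) posture: test one pre-selected feature against a null hypothesis.
a = df.loc[df.group == "A", "type_token_ratio"]
b = df.loc[df.group == "B", "type_token_ratio"]
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.2f}, p = {p:.3f}")

# The exploratory (EDA) posture: visualize every feature first, and only then decide
# which patterns are worth reading further.
df.boxplot(column=["type_token_ratio", "mean_sentence_length", "pronoun_rate"], by="group")
plt.show()
```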

In literary studies, the visualizations produced by EDA do not replace the act of reading but instead redirect it to new ends.[4] Each site of statistical significance reveals a new locus of reading: the act of quantification is no more a reduction than any interpretation.[5] Statistical rigor remains crucial, but equally as essential are the ways in which these data objects are embedded within a theoretical apparatus that draws on literary interpretation.[6] And yet, in her article, Da plucks single statistics from thirteen articles with an average length of about 10,250 words each.[7] It is only by ignoring these 10,000 words, by refusing to read the context of the graph, the arguments, justifications, and dissensions, that she can marshal her arguments.

In Da’s adherence to CDA, her critiques require a hypothesis: when one does not exist outside of the absent context, she is forced to invent one. Even a cursory reading of “The Werther Topologies” reveals that we are not interested in questions of the “influence of Werther on other texts”; rather, we are interested in exploring the effect on the corpus when it is reorganized around the language of Werther.[8] The topology creates new adjacencies, prompting new readings: it does not prove or disprove, and it is not right or wrong; to suggest otherwise is to make a category error.
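
By way of a hedged illustration only (this is not the procedure used in “The Werther Topologies,” and the file names below are hypothetical), one crude way to reorganize a corpus around the language of a focal text is to score every document by its lexical similarity to that text and then read the corpus in the resulting order.

```python
# Toy sketch: rank corpus documents by similarity to a focal text's vocabulary profile.
import glob
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical files: one focal text and a folder of corpus documents.
focal_text = Path("werther.txt").read_text(encoding="utf-8")
corpus_paths = sorted(glob.glob("corpus/*.txt"))
corpus = [Path(p).read_text(encoding="utf-8") for p in corpus_paths]

# One shared vocabulary for the focal text and the corpus.
vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform([focal_text] + corpus)

# Similarity of each corpus document to the focal text.
scores = cosine_similarity(X[0], X[1:]).ravel()

# The corpus reordered by proximity to the focal text's language: new adjacencies
# that prompt new readings rather than proving or disproving influence.
for path, score in sorted(zip(corpus_paths, scores), key=lambda t: t[1], reverse=True)[:10]:
    print(f"{score:.3f}  {path}")
```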

Cultural analytics is not a virtual humanities that replaces the interpretive skills developed by scholars over centuries with mathematical rigor. It is an augmented humanities that, at its best, presents new kinds of evidence, often invisible to even the closest reader, alongside carefully considered theoretical arguments, both working in tandem to produce new critical work.

 

MARK ALGEE-HEWITT is an assistant professor of English and Digital Humanities at Stanford University where he directs the Stanford Literary Lab. His current work combines computational methods with literary criticism to explore large scale changes in aesthetic concepts during the eighteenth and nineteenth centuries. The projects that he leads at the Literary Lab include a study of racialized language in nineteenth-century American literature and a computational analysis of differences in disciplinary style. Mark’s work has appeared in New Literary History, Digital Scholarship in the Humanities, as well as in edited volumes on the Enlightenment and the Digital Humanities.

[1] Many of the articles cited by Da combine both CDA and EDA, a movement of the field noted by Ted Underwood in Distant Horizons (p. xii).

[2] Tukey, John. Exploratory Data Analysis. New York: Pearson, 1977.

[3] McGuire, William J. “A perspectivist approach to the strategic planning of programmatic scientific research.” In Psychology of Science: Contributions to Metascience, ed. B. Gholson et al. Cambridge: Cambridge UP, 1989. 214-245. See also Frederick Hartwig and Brian Dearing on the need not to rely exclusively on CDA (Exploratory Data Analysis. Newbury Park: Sage Publications, 1979) and John Behrens on the “hypothesis testing myth” (“Principles and Procedures of Exploratory Data Analysis.” Psychological Methods 2(2): 1997, 131-160).

[4] Da, Nan Z. “The Computational Case against Computational Literary Studies.” Critical Inquiry 45 (3): 2019. 601-639.

[5] See, for example, Gemma, Marissa, et al. “Operationalizing the Colloquial Style: Repetition in 19th-Century American Fiction,” Digital Scholarship in the Humanities, 32(2): 2017. 312-335; or Laura B. McGrath et al. “Measuring Modernist Novelty,” The Journal of Cultural Analytics (2018).

[6] See, for example, our argument about the “modularity of criticism” in Algee-Hewitt, Mark, Fredner, Erik, and Walser, Hannah. “The Novel As Data.” Cambridge Companion to the Novel, ed. Eric Bulson. Cambridge: Cambridge UP, 2018. 189-215.

[7] Absent the two books, which have a different relationship to length, Da extracts visualizations or numbers from 13 articles totaling 133,685 words (including notes and captions).

[8] Da (2019), 634; Piper and Algee-Hewitt, “The Werther Effect I,” in Distant Readings: Topologies of German Culture in the Long Nineteenth Century, ed. Matt Erlin and Lynn Tatlock (Rochester: Camden House, 2014), 156-157.


Katherine Bode

Nan Z. Da’s statistical review of computational literary studies (CLS) takes issue with an approach I also have concerns about, but it is misconceived in its framing of the field and of statistical inquiry. Her definition of CLS—using statistics, predominantly machine learning, to investigate word patterns—excludes most of what I would categorize as computational literary studies, including research that: employs data construction and curation as forms of critical analysis; analyzes bibliographical and other metadata to explore literary trends; deploys machine-learning methods to identify literary phenomena for noncomputational interpretation; or theorizes the implications of methods such as data visualization and machine learning for literary studies. (Interested readers will find diverse forms of CLS in the work of Ryan Cordell, Anne DeWitt, Johanna Drucker, Lauren Klein, Matthew Kirschenbaum, Anouk Lang, Laura B. McGrath, Stephen Ramsay, and Glenn Roe, among others.)

Beyond its idiosyncratic and restrictive definition of CLS, what strikes me most about Da’s essay is its constrained and contradictory framing of statistical inquiry. For most of the researchers Da cites, the pivot to machine learning is explicitly conceived as rejecting a positivist view of literary data and computation in favor of modelling as a subjective practice. Da appears to argue, first, that this pivot has not occurred enough (CLS takes a mechanistic approach to literary interpretation) and, second, that it has gone too far (CLS takes too many liberties with statistical inference, such as “metaphor[izing] … coding and statistics” [p. 606 n. 9]). On the one hand, then, Da repeatedly implies that, if CLS took a slightly different path—that is, trained with more appropriate samples, demonstrated greater rigor in preparing textual data, avoided nonreproducible methods like topic modelling, used Natural Language Processing with the sophistication of corpus linguists—it could reach a tipping point at which the data used, methods employed, and questions asked became appropriate to statistical analysis. On the other, she precludes this possibility in identifying “reading literature well” as the “cut-off point” at which computational textual analysis ceases to have “utility” (p. 639). This limited conception of statistical inquiry also emerges in Da’s two claims about statistical tools for text mining: they are “ethically neutral”; and they must be used “in accordance with their true function” (p. 620), which Da defines as reducing information to enable quick decision making. Yet as with any intellectual inquiry, surely any measurements—let alone measurements with this particular aim—are interactions with the world that have ethical dimensions.

Statistical tests of statistical arguments are vital. And I agree with Da’s contention that applications of machine learning to identify word patterns in literature often simplify complex historical and critical issues. As Da argues, these simplifications include conceiving of models as “intentional interpretations” (p. 621) and of word patterns as signifying literary causation and influence. But there’s a large gap between identifying these problems and insisting that statistical tools have a “true function” that is inimical to literary studies. Our discipline has always drawn methods from other fields (history, philosophy, psychology, sociology, and others). Perhaps it’s literary studies’ supposed lack of functional utility (something Da claims to defend) that has enabled these adaptations to be so productive; perhaps such adaptations have been productive because the meaning of literature is not singular but forged constitutively with a society where the prominence of particular paradigms (historical, philosophical, psychological, sociological, now statistical) at particular moments shapes what and how we know. In any case, disciplinary purity is no protection against poor methodology; and cross disciplinarity can increase methodological awareness.

Da’s rigid notion of a “true function” for statistics prevents her asking more “argumentatively meaningful” (p. 639) questions about possible encounters between literary studies and statistical methods. These might include: If not intentional or interpretive, what is the epistemological—and ontological and ethical—status of patterns discerned by machine learning? Are there ways of connecting word counts with other, literary and nonliterary, elements that might enhance the “explanatory power” (p. 604) and/or critical potential of such models and, if not, why not? As is occurring in fields such as philosophy, sociology, and science and technology studies, can literary studies apply theoretical perspectives (such as feminist empiricism or new materialism) to reimagine literary data and statistical inquiry? Without such methodological and epistemological reflection, Da’s statistical debunking of statistical models falls into the same trap she ascribes to those arguments: of confusing “what happens mechanistically with insight” (p. 639). We very much need critiques of mechanistic—positivist, reductive, and ahistorical—approaches to literary data, statistics, and machine learning. Unfortunately, Da’s critique demonstrates the problems it decries.

 

KATHERINE BODE is associate professor of literary and textual studies at the Australian National University. Her latest book, A World of Fiction: Digital Collections and the Future of Literary History (2018), offers a new approach to literary research with mass-digitized collections, based on the theory and technology of the scholarly edition. Applying this model, Bode investigates a transnational collection of around 10,000 novels and novellas, discovered in digitized nineteenth-century Australian newspapers, to offer new insights into phenomena ranging from literary anonymity and fiction syndication to the emergence and intersections of national literary traditions.

 


 

Sarah Brouillette

DH is here to stay, including in the CLS variant whose errors Nan Da studies. This variant is especially prevalent in English programs, and it will continue to gain force there. Even when those departments have closed or merged with other units, people with CLS capacities will continue to find positions—though likely contractually—when others no longer can. This is not to say that DH is somehow itself the demise of the English department. The case rather is that both the relative health of DH and the general decline in literary studies—measured via enrollments, number of tenured faculty, and university heads’ dispositions toward English—arise from the same underlying factors.

The pressures that English departments face are grounded in the long economic downturn and rising government deficits, deep cuts to funding for higher education, rising tuition, and a turn by university administrators toward boosting business and STEM programs. We know this. There has been a foreclosure of futurity for students who are facing graduation with significant debt burdens and who doubt that they will find stable work paying a good wage. Who can afford the luxury of closely reading five hundred pages of dense prose? Harried, anxious people accustomed to working across many screens, many open tabs, with constant pings from social media, often struggle with sustained reading. Myself included. DH is a way of doing literary studies without having to engage in long periods of sustained reading, while acquiring what might feel like job skills.

It doesn’t really matter how meaningful CLS labs’ findings are. As Da points out, practitioners themselves often emphasize how tentative their findings are or stress flaws in the results or the method that become the occasion for future investment and development. That is the point: investment and development. The key to DH’s relative health is that it supports certain kinds of student training and the development of technologically enhanced learning environments. One of the only ways to get large sums of grant money from the Social Sciences and Humanities Research Council of Canada (SSHRC) is to budget for equipment and for student training. Computer training is relatively easy to describe in a budget justification. Universities for their part often like DH labs because they attract these outside funders and because, since grants don’t last forever, a campus doesn’t have to promise anything beyond short-term training and employment.

As for the students: to be clear, those with DH skills don’t necessarily walk more easily into jobs than those without them. But DH labs, which at least in Canada need to be able to list training as a priority, offer an experience of education that has an affective appeal for many students—an appeal that universities work hard to cultivate and reinforce. This cultivation is there in the constant contrasts made between old-fashioned and immersive learning, between traditional and project-based classrooms, between the dull droning lecture and the experiential . . . well, experience. (The government of Ontario has recently mandated that every student have an opportunity to experience “work-integrated learning” before graduation.) It is there also in the push to make these immersive experiences online ones, mediated by learning management systems such as Brightspace or Canvas, which store data via Amazon Web Services.

Learning in universities increasingly occurs in data-capturable forms. The experience of education, from level of participation to test performance, is cultivated, monitored, and tracked digitally. Students who have facility with digital technologies are, needless to say, at an advantage in this environment. Meanwhile, the temptation to think that courses that include substantial digital components are more practical and professional – less merely academic – is pretty understandable, as universities are so busily cultivating and managing engagement in a context in which disengagement otherwise makes total sense. DH is simply far more compatible with all of these observable trends than many other styles of literary inquiry.

SARAH BROUILLETTE is a professor in the Department of English at Carleton University in Ottawa, Canada.


What Is Literary Studies?

Ed Finn

This is the question that underpins Da’s takedown of what she calls computational literary studies (CLS). The animus with which she pursues this essay is like a searchlight that creates a shadow behind it. “The discipline is about reducing reductionism,” she writes (p. 638), which is a questionable assertion about a field that encompasses many kinds of reduction and contradictory epistemic positions, from thing theory to animal studies. Da offers no evidence or authority to back up her contention that CLS fails to validate its claims. Being charitable, what Da means, I think, is that literary scholars should always attend to context, to the particulars of the works they engage.

Da’s essay assails what she terms the false rigor of CLS: the obsession with reductive analyses of large datasets, the misapplied statistical methods, the failure to disentangle artifacts of measurement from significant results. And there may be validity to these claims: some researchers use black box tools they don’t understand, not just in the digital humanities but in fields from political science to medicine. The most helpful contribution of Da’s article is tucked away in the online appendix, where she suggests a very good set of peer review and publication guidelines for DH work. I can imagine a version of this essay that culminated with those guidelines rather than the suggestion that “reading literature well” is a bridge too far for computational approaches.

The problem with the spotlight Da shines on the rigor of CLS is that shadow looming behind it. What does rigor look like in “the discipline” of literary studies, which is defined so antagonistically to CLS here? What are the standards of peer review that ensure literary scholarship validates its methods, particularly when it draws those methods from other disciplines? Nobody is calling in economists to assess the validity of Marxist literary analysis, or cognitive psychologists to check applications of affect theory, and it’s hard to imagine that scholars would accept the disciplinary authority of those critics. I am willing to bet Critical Inquiry’s peer review process for Da’s article did not include federal grants program officers, university administrators, or scholars of public policy being asked to assess Da’s rhetorical—but central—question “of why we need ‘labs’ or the exorbitant funding that CLS has garnered” (p. 603).

I contend this is actually a good idea: literary studies can benefit from true dialog and collaboration with fields across the entire academy. Da clearly feels that this is justified in the case of CLS, where she calls for more statistical expertise (and brings in a statistician to guide her analysis in this paper). But why should CLS be singled out for this kind of treatment?

Either one accepts that rigor sometimes demands literary studies should embrace expertise from other fields—like Da bringing in a statistician to validate her findings for this paper—or one accepts that literary studies is made up of many contradictory methods and that “the discipline” is founded on borrowing methods from other fields without any obligation to validate findings by the standards of those other fields. What would it look like to generalize Da’s proposals for peer review to other areas of literary studies? The contemporary research I find most compelling makes this more generous move: bringing scholars in the humanities together with researchers in the social sciences, the arts, medicine, and other arenas where people can actually learn from one another and do new kinds of work.

To me, literary studies is the practice of reading and writing in order to better understand the human condition. And the condition is changing. Most of what we read now comes to us on screens that are watching us as we watch them. Many of the things we think about have been curated and lobbed into our consciousness by algorithmic feeds and filters. I studied Amazon recommendation networks because they play an important role in contemporary American literary reception and the lived experience of fiction for millions of readers—at least circa 2010, when I wrote the article. My approach in that work hewed to math that I understand and a scale of information that I call small data because it approximates the headspace of actual readers thinking about particular books. Small data always leads back to the qualitative and to the particular, and it is a minor example of the contributions humanists can make beyond the boundaries of “the discipline.”

We desperately need the humanities to survive the next century, when so many of our species’ bad bets are coming home to roost. Text mining is not “ethically neutral,” as Da gobsmackingly argues (p. 620), any more than industrialization was ethically neutral, or the NSA using network analysis to track suspected terrorists (Da’s example of a presumably acceptable “operationalizable end” for social network analysis) (p. 632). The principle of charity would, I hope, preclude Da’s shortsighted framing of what matters in literary studies, and it would open doors to other fields like computer science where many researchers are, either unwittingly or uncaringly, deploying words like human and read and write with the same kind of facile dismissal of methods outside “the discipline” that are on display here. That is the context in which we read and think about literature now, and if we want to “read literature well,” we need to bring the insights of literary study to broader conversations where we participate, share, educate, and learn.

ED FINN is the founding director of the Center for Science and the Imagination at Arizona State University where he is an associate professor in the School of Arts, Media, and Engineering and the Department of English.


What the New Computational Rigor Should Be

Lauren F. Klein

Writing about the difficulties of evaluating digital scholarship in a recent special issue of American Quarterly devoted to DH, Marisa Parham proposes the concept of “The New Rigor” to account for the labor of digital scholarship as well as its seriousness: “It is the difference between what we say we want the world to look like and what we actually carry out in our smallest acts,” she states (p. 683). In “The Computational Case against Computational Literary Studies,” Nan Z. Da also makes the case for a new rigor, although hers is more narrowly scoped. It entails both a careful adherence to the methods of statistical inquiry and a concerted rejection of the application of those methods to domains—namely, literary studies—that fall beyond their purported use.

No one would argue with the former. But it is the latter claim that I will push back against. Several times in her essay, Da makes the case that “statistical tools are designed to do certain things and solve specific problems,” and for that reason, they should not be employed to “capture literature’s complexity” (pp. 619-20, 634). To be sure, there exists a richness of language and an array of ineffable—let alone quantifiable—qualities of literature that cannot be reduced to a single model or diagram. But the complexity of literature exceeds even that capaciousness, as most literary scholars would agree. And for that very reason, we must continue to explore new methods for expanding the significance of our objects of study. As literary scholars, we would almost certainly say that we want to look at—and live in—a world that embraces complexity. Given that vision, the test of rigor then becomes, to return to Parham’s formulation, how we usher that world into existence through each and every one of “our smallest acts” of scholarship, citation, and critique.

In point of fact, many scholars already exhibit this new computational rigor. Consider how Jim Casey, the national codirector of the Colored Conventions Project, is employing social network analysis—including the centrality scores and modularity measures that Da finds lacking in the example she cites—in order to detect changing geographic centers for this important nineteenth-century organizing movement. Or how Lisa Rhody has found an “interpretive space that is as vital as the weaving and unraveling at Penelope’s loom” in a topic model of a corpus of 4,500 poems. This interpretive space is one that Rhody creates in no small part by accounting for the same fluctuations of words in topics—the result of the sampling methods employed in almost all topic model implementations—that Da invokes, instead, in order to dismiss the technique out of hand. Or how Laura Estill, Dominic Klyve, and Kate Bridal have employed statistical analysis, including a discussion of the p-values that Da believes (contra many statisticians) are always required, in order to survey the state of Shakespeare studies as a field.
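
The fluctuation Rhody accounts for is easy to observe directly. The sketch below is generic and illustrative, not a reproduction of her work: it fits the same topic model to the same stand-in corpus under two random seeds (using scikit-learn's variational implementation as a stand-in for the sampling-based implementations at issue), and the topics come back in a different order with somewhat different word lists.

```python
# Illustrative only: run-to-run variation in a topic model of a stand-in corpus.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in corpus (not a poetry corpus); any collection of documents shows the effect.
docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]

vec = CountVectorizer(max_features=5000, stop_words="english")
X = vec.fit_transform(docs)
vocab = vec.get_feature_names_out()

for seed in (0, 1):
    lda = LatentDirichletAllocation(n_components=10, random_state=seed).fit(X)
    print(f"\nrandom_state={seed}")
    for k, comp in enumerate(lda.components_[:3]):
        top_words = [vocab[i] for i in comp.argsort()[-8:][::-1]]
        print(f"  topic {k}: {' '.join(top_words)}")

# Topic indices do not align across runs, and the word lists shift: random
# initialization (and, in Gibbs-sampling implementations such as MALLET, the
# sampling itself) makes each run a slightly different object to interpret.
```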

That these works are authored by scholars in a range of academic roles, including postdoctoral fellows and DH program coordinators as well as tenure-track faculty, and are published in a range of venues, including edited collections and online as well as domain-specific journals, further points to the range of extant work that embraces the complexity of literature in precisely the ways that Da describes. But these works do more: they also embrace the complexity of the statistical methods that they employ. Each of these essays involves a creative repurposing of the methods it borrows from more computational fields, as well as a trenchant self-critique. Casey, for example, questions how applying techniques of social network analysis, which are premised on a conception of sociality as characterized by links between individual "nodes," can do justice to a movement celebrated for its commitment to collective action. Rhody, for another, considers the limits of the utility of topic modeling, as a tool "designed to be used with texts that employ as little figurative language as possible," for her research questions about ekphrasis. These essays each represent "small acts," and necessarily so. But taken alongside the many other examples of computational work that are methodologically sound, creatively conceived, and necessarily self-critical, they constitute the core of a field committed to complexity in both the texts they elucidate and the methods they employ.

In her formulation of "The New Rigor," Parham—herself a literary scholar—places her emphasis on a single word: "Carrying, how we carry ourselves in our relationships and how we carry each other, is the real place of transformation," she writes. Da, the respondents collected in this forum, and all of us in literary studies—computational and not—might linger on that single word. If our goal remains to celebrate the complexity of literature—precisely because it helps to illuminate the complexity of the world—then we must carry ourselves, and each other, with intellectual generosity and goodwill. We must do so, moreover, with a commitment to honoring the scholarship, and the labor, that has cleared the path up to this point. Only then can we carry forward the field of computational literary studies into the transformative space of future inquiry.

LAUREN F. KLEIN is associate professor at the School of Literature, Media, and Communication, Georgia Institute of Technology.


Trust in Numbers

Hoyt Long and Richard Jean So

 

Nan Da's "The Computational Case against Computational Literary Studies" stands out from past polemics against computational approaches to literature in that it purports to take computation seriously. It recognizes that a serious engagement with this kind of research means developing literacy in statistical and other concepts. Insofar as her essay promises to move the debate beyond a flat rejection of numbers and toward something like a conversation about replication, it is a useful step forward.

This, however, is where its utility ends. “Don’t trust the numbers,” Da warns. Or rather, “Don’t trust their numbers, trust mine.” But should you? If you can’t trust their numbers, she implies, the entire case for computational approaches falls apart. Trust her numbers and you’ll see this. But her numbers cannot be trusted. Da’s critique of fourteen articles in the field of cultural analytics is rife with technical and factual errors. This is not merely quibbling over details. The errors reflect a basic lack of understanding of fundamental statistical concepts and are akin to an outsider to literary studies calling George Eliot a “famous male author.” Even more concerning, Da fails to understand statistical method as a contextual, historical, and interpretive project. The essay’s greatest error, to be blunt, is a humanist one.

Here we focus on Da's errors related to predictive modeling. This is the core method used in the two essays of ours that she critiques. In "Turbulent Flow," we built a model of stream-of-consciousness (SOC) narrative with thirteen linguistic features and found that ten of them, in combination, reliably distinguished passages that we identified as SOC (as compared with passages taken from a corpus of realist fiction). Type-token ratio (TTR), a measure of lexical diversity, was the most distinguishing of these, though uninformative on its own. The purpose of predictive modeling, as we carefully explain in the essay, is to understand how multiple features work in concert, not in isolation, to identify stylistic patterns. Nothing in Da's critique suggests she is aware of this fundamental principle.
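
To illustrate that general principle with a minimal sketch (simulated data and a generic logistic regression, not the authors' actual corpus, features, or pipeline), one can compare a classifier's cross-validated accuracy when it is given all features at once versus a single feature alone:

```python
# Illustrative sketch with simulated data (not the authors' corpus or features):
# compare cross-validated accuracy using all features together vs. one feature alone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Hypothetical matrix of 13 stylistic measurements per passage;
# labels: 1 = stream of consciousness, 0 = realism (simulated).
X = rng.normal(size=(n, 13))
y = (X[:, :10].sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
acc_all = cross_val_score(model, X, y, cv=5).mean()
acc_one = cross_val_score(model, X[:, [0]], y, cv=5).mean()
print(f"accuracy, all thirteen features: {acc_all:.2f}")
print(f"accuracy, one feature alone:     {acc_one:.2f}")
```

In a setup like this, no single feature needs to be strongly predictive on its own for the full model to perform well; that is the sense in which features work "in concert."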

Indeed, Da interrogates just one feature in our model (TTR) and argues that modifying it invalidates our modeling. Specifically, she tests whether the strong association of TTR with SOC holds after removing the words in her "standard stopword list" rather than those in the stopword list we used. She finds it doesn't. There are two problems with this. First, TTR and "TTR minus stopwords" are two separate features. We actually included both in our model and found the latter to be minimally distinctive. Second, while the intuition to test for feature robustness is appropriate, it is undercut by the assertion that there is a "standard" stopword list that should be universally applied. Ours was specifically curated for use with nineteenth- and early twentieth-century fiction. Even if there were good reason to adopt her "standard" list, one would still have to rerun the model to test whether the remeasured "TTR minus stopwords" feature changes the overall predictive accuracy. Da doesn't do this. It's like fiddling with a single piano key and, without playing another note, declaring the whole instrument to be out of tune.
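
For concreteness, here is a minimal sketch of the measurement at issue, with a hypothetical three-word stopword list standing in for any curated one: type-token ratio is the number of distinct word types divided by the number of tokens, and its value shifts depending on which words are stripped before counting.

```python
# Illustrative sketch only: type-token ratio (TTR) computed with and without
# a hypothetical stopword list (not the curated list used in the study).
import re

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def ttr(toks):
    # distinct word types divided by total tokens
    return len(set(toks)) / len(toks) if toks else 0.0

passage = "the rain fell and fell and she thought of the rain and of him"
stopwords = {"the", "and", "of"}  # hypothetical stand-in list

toks = tokens(passage)
print("TTR:", round(ttr(toks), 3))
print("TTR minus stopwords:", round(ttr([t for t in toks if t not in stopwords]), 3))
```

Swapping in a different stopword list changes only this remeasured feature; whether that change matters for the model's overall accuracy can only be seen by retraining and re-evaluating the full model.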

But the errors run deeper than this. In Da's critique of "Literary Pattern Recognition," she tries to invalidate the robustness of our model's ability to distinguish English-language haiku from nonhaiku poems. She does so by creating a new corpus of "English translations of Chinese couplets" and testing our model on this corpus. Why do this? She suggests that it is because they are filled "with similar imagery" to English haiku and are similarly "Asian." This is a misguided decision that smacks of Orientalism. It completely erases context and history, suggesting an ontological relation where there is none. This is why we spend over twelve pages delineating the English haiku form in both critical and historical terms.

These errors exemplify a consistent refusal to contextualize and historicize one’s interpretative practices (indeed to “read well”), whether statistically or humanistically. We do not believe there exist “objectively” good literary interpretations or that there is one “correct” way to do statistical analysis: Da’s is a position most historians of science, and most statisticians themselves, would reject.  Conventions in both literature and science are continuously debated and reinterpreted, not handed down from on high. And like literary studies, statistics is a body of knowledge formed from messy disciplinary histories, as well as diverse communities of practice. Da’s essay insists on a highly dogmatic, “objective,” black-and-white version of knowledge, a disposition totally antithetical to both statistics and literary studies. It is not a version that encourages much trust.

HOYT LONG is associate professor of Japanese literature at the University of Chicago. He publishes widely in the fields of Japanese literary studies, media history, and cultural analytics. His current book project is Figures of Difference: Quantitative Approaches to Modern Japanese Literature.

RICHARD JEAN SO is assistant professor of English and cultural analytics at McGill University. He works on computational approaches to literature and culture with a focus on contemporary American writing and race. His current book project is Redlining Culture: A Data History of Race and US Fiction.

 


The Select

Andrew Piper

Nan Z. Da's study published in Critical Inquiry participates in an emerging trend across a number of disciplines that falls under the heading of "replication."[1] In this, her work follows major efforts in other fields, such as the Open Science Collaboration's "reproducibility project," which sought to replicate past studies in the field of psychology.[2] As the authors of the OSC collaboration write, the value of replication, when done well, is that it can "increase certainty when findings are reproduced and promote innovation when they are not."

And yet despite arriving at sweeping claims about an entire field, Da's study fails to follow any of the procedures and practices established by projects like the OSC.[3] While invoking the epistemological framework of replication—that is, to prove or disprove the validity of both individual articles as well as an entire field—her practices follow instead the time-honoured traditions of selective reading from the field of literary criticism. Da's work is ultimately valuable not because of the computational case it makes (that work still remains to be done) but because of the way it foregrounds so many of the problems that accompany traditional literary critical models when used to make large-scale evidentiary claims. The good news is that this article has made the problem of generalization, of how we combat the problem of selective reading, into a central issue facing the field.

Start with the evidence chosen. When undertaking their replication project, the OSC generated a sample of one hundred studies taken from three separate journals within a single year of publication to approximate a reasonable cross-section of the field. Da, on the other hand, chooses "a handful" of articles (fourteen by my count) from different years and different journals, with no clear rationale for how these articles are meant to represent an entire field. The point is not the number chosen but that we have no way of knowing why these articles and not others were chosen, and thus whether her findings extend to any work beyond her sample. Indeed, the only linkage appears to be that these studies all "fail" by her criteria. Imagine if the OSC had found that 100 percent of the articles sampled failed to replicate. Would we find their results credible? Da, by contrast, is surprisingly only ever right.

Da's focus within articles exhibits an even stronger degree of nonrepresentativeness. In their replication project, the OSC establishes clearly defined criteria through which a study can be declared not to replicate, while also acknowledging the difficulty of arriving at this conclusion. Da, by contrast, applies different criteria to every article, making debatable choices, as well as outright errors, that are clearly designed to foreground differences.[4] She misnames authors of articles, mis-cites editions, mis-attributes arguments to the wrong book, and fails at some basic math.[5] And yet each of these assertions always adds up to the same certain conclusion: failed to replicate. In Da's hands, the part is always a perfect representation of the whole.

Perhaps the greatest limitation of Da's piece is her extremely narrow (that is, nonrepresentative) definition of statistical inference and computational modeling. In Da's view, the only appropriate way to use data is to perform what is known as significance testing, where we use a statistical model to test whether a given hypothesis is "true."[6] There is no room for exploratory data analysis, for theory building, or for predictive modeling in her view of the field.[7] This is particularly ironic given that Da herself performs no such tests. She holds others to standards to which she herself is not accountable. Nor does she cite articles where authors explicitly undertake such tests,[8] or research that calls into question the value of such tests,[9] or research that explores the relationship between word frequency and human judgments that she finds so problematic.[10] The selectivity of Da's work is deeply out of touch with the larger research landscape.

All of these practices highlight a more general problem that has for too long gone unexamined in the field of literary study. How are we to move reliably from individual observations to general beliefs about things in the world? Da's article provides a tour de force of the problems of selective reading when it comes to generalizing about individual studies or entire fields. Addressing the problem of responsible and credible generalization will be one of the central challenges facing the field in the years to come. As with all other disciplines across the university, data and computational modeling will have an integral role to play in that process.

ANDREW PIPER is Professor and William Dawson Scholar in the Department of Languages, Literatures, and Cultures at McGill University. He is the author most recently of Enumerations: Data and Literary Study (2018).

[1] Nan Z. Da, "The Computational Case Against Computational Literary Studies," Critical Inquiry 45 (Spring 2019): 601-39. For accessible introductions to what has become known as the replication crisis in the sciences, see Ed Yong, "Psychology's Replication Crisis Can't Be Wished Away," The Atlantic, March 4, 2016.

[2] Open Science Collaboration, "Estimating the Reproducibility of Psychological Science," Science 349, no. 6251 (28 August 2015): aac4716, DOI: 10.1126/science.aac4716.

[3] Compare Da's sweeping claims with the more modest ones made by the OSC in Science, even given their considerably larger sample and far more rigorous effort at replication, reproduced here. For a discussion of the practice of replication, see Brian D. Earp and David Trafimow, "Replication, Falsification, and the Crisis of Confidence in Social Psychology," Frontiers in Psychology, May 19, 2015, doi.org/10.3389/fpsyg.2015.00621.

[4] For a list, see Ben Schmidt, "A computational critique of a computational critique of a computational critique." I provide more examples in the scholarly response here: Andrew Piper, "Do We Know What We Are Doing?" Journal of Cultural Analytics, April 1, 2019.

[5] She cites Mark Algee-Hewitt as Mark Hewitt, cites G. Casella as the author of Introduction to Statistical Learning when it was Gareth James, cites me and Andrew Goldstone as co-authors in the Appendix when we were not, claims that "the most famous example of CLS forensic stylometry" was Hugh Craig and Arthur F. Kinney's book advancing a theory of Marlowe's authorship of Shakespeare's plays (a theory they do not advance), and miscalculates the number of people it would take to read fifteen thousand novels in a year. The answer is 1,250, not 1,000 as she asserts. This statistic is also totally meaningless.

[6] Statements like the following also suggest that she is far from a credible guide to even this aspect of statistics: "After all, statistics automatically assumes that 95 percent of the time there is no difference and that only 5 percent of the time there is a difference. That is what it means to look for p-value less than 0.05." This is not what it means to look for a p-value less than 0.05. A p-value is the probability of obtaining data at least as extreme as what we observed, assuming the null hypothesis is true. The smaller the p-value, the more surprising our data would be if the null hypothesis were true. The 5 percent threshold says nothing about how often there will be a "difference" (in other words, how often the null hypothesis is false). Instead, it says: if the null hypothesis is true, data extreme enough to make us declare a difference will turn up only 5 percent of the time. Nor does "statistics" "automatically" assume that .05 is the appropriate cut-off. It depends on the domain, the question, and the aims of modeling. These are gross oversimplifications.
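
A small simulation makes that distinction concrete (a hypothetical illustration, not drawn from any of the studies under discussion): when the null hypothesis is true, tests at the 0.05 level produce p < 0.05 in roughly 5 percent of trials, which is the error rate the threshold controls.

```python
# Illustrative sketch: when the null hypothesis is true (both samples come from
# the same distribution), roughly 5% of t-tests yield p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
trials = 5000
false_positives = 0
for _ in range(trials):
    a = rng.normal(size=30)
    b = rng.normal(size=30)  # no real difference between the groups
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1
print(f"share of p < 0.05 under a true null: {false_positives / trials:.3f}")  # about 0.05
```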

[7] For reflections on literary modeling, see Andrew Piper, "Think Small: On Literary Modeling," PMLA 132.3 (2017): 651-658; Richard Jean So, "All Models Are Wrong," PMLA 132.3 (2017); Ted Underwood, "Algorithmic Modeling: Or, Modeling Data We Do Not Yet Understand," in The Shape of Data in Digital Humanities: Modeling Texts and Text-based Resources, eds. J. Flanders and F. Jannidis (New York: Routledge, 2018).

[8] See Andrew Piper and Eva Portelance, "How Cultural Capital Works: Prizewinning Novels, Bestsellers, and the Time of Reading," Post-45 (2016); Eve Kraicer and Andrew Piper, "Social Characters: The Hierarchy of Gender in Contemporary English-Language Fiction," Journal of Cultural Analytics, January 30, 2019, DOI: 10.31235/osf.io/4kwrg; and Andrew Piper, "Fictionality," Journal of Cultural Analytics, December 20, 2016, DOI: 10.31235/osf.io/93mdj.

[9] The literature debating the value of significance testing is vast. See Joseph P. Simmons, Leif D. Nelson, and Uri Simonsohn, "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant," Psychological Science 22, no. 11 (November 2011): 1359–66, doi:10.1177/0956797611417632.

[10] See Rens Bod, Jennifer Hay, and Stefanie Jannedy, Probabilistic Linguistics (Cambridge, MA: MIT Press, 2003); Dan Jurafsky and James Martin, "Vector Semantics," Speech and Language Processing, 3rd ed. (2018): https://web.stanford.edu/~jurafsky/slp3/6.pdf; for the relation of communication to information theory, M. W. Crocker, V. Demberg, and E. Teich, "Information Density and Linguistic Encoding," Künstliche Intelligenz 30.1 (2016): 77-81, https://doi.org/10.1007/s13218-015-0391-y; and for the relation to language acquisition and learning, L. C. Erickson and E. D. Thiessen, "Statistical Learning of Language: Theory, Validity, and Predictions of a Statistical Learning Account of Language Acquisition," Developmental Review 37 (2015): 66–108, doi:10.1016/j.dr.2015.05.002.

 


Ted Underwood

In the humanities, as elsewhere, researchers who work with numbers often reproduce and test each other's claims.1 Nan Z. Da's contribution to this growing genre differs from previous examples mainly in moving more rapidly. For instance, my coauthors and I spent 5,800 words describing, reproducing, and partially criticizing one article about popular music.2 By contrast, Da dismisses fourteen publications that use different methods in thirty-eight pages. The article's energy is impressive, and its long-term effects will be positive.

But this pace has a cost. Da’s argument may be dizzying if readers don’t already know the works summarized, as she rushes through explanation to get to condemnation. Readers who know these works will recognize that Da’s summaries are riddled with material omissions and errors. The time is ripe for a theoretical debate about computing in literary studies. But this article is unfortunately too misleading—even at the level of paraphrase—to provide a starting point for the debate.

For instance, Da suggests that my article “The Life Cycles of Genres” makes genres look stable only because it forgets to compare apples to apples: “Underwood should train his model on pre-1941 detective fiction (A) as compared to pre-1941 random stew and post-1941 detective fiction (B) as compared to post-1941 random stew, instead of one random stew for both” (p. 608).3

This perplexing critique tells me to do exactly what my article (and public code) make clear that I did: compare groups of works matched by publication date.4 There is also no "random stew" in the article. Da's odd phrase conflates a random contrast set with a ghastly "genre stew" that plays a different role in the argument.

More importantly, Da’s critique suppresses the article’s comparative thesis—which identifies detective fiction as more stable than several other genres—in order to create a straw man who argues that all genres “have in fact been more or less consistent from the 1820s to the present” (p. 609). Lacking any comparative yardstick to measure consistency, this straw thesis becomes unprovable. In other cases Da has ignored the significant results of an article, in order to pour scorn on a result the authors acknowledge as having limited significance—without ever mentioning that the authors acknowledge the limitation. This is how she proceeds with Jockers and Kirilloff (p. 610).

In short, this is not an article that works hard at holistic critique. Instead of describing the goals that organize a publication, Da often assumes that researchers were trying (and failing) to do something she believes they should have done. Topic modeling, for instance, identifies patterns in a corpus without pretending to find a uniquely correct description. Humanists use the method mostly for exploratory analysis. But Da begins from the assumption that topic modeling must be a confused attempt to prove hypotheses of some kind. So, she is shocked to discover (and spends a page proving) that different topics can emerge when the method is run multiple times. This is true. It is also a basic premise of the method, acknowledged by all the authors Da cites—who between them spend several pages discussing how results that vary can nevertheless be used for interpretive exploration. Da doesn’t acknowledge the discussion.
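
To see why varying output is a premise of the method rather than a flaw, consider a minimal sketch (a toy corpus and scikit-learn's LDA implementation, used here purely for illustration and not drawn from any of the cited studies): fitting the same model with two different random seeds can yield differently composed or differently ordered topics, which is exactly what practitioners expect and work with.

```python
# Illustrative sketch on a toy corpus: the same topic model, fit with two
# different random seeds, can surface different topic-word groupings.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the sea and the ship and the storm",
    "the garden the rose the summer light",
    "the ship sailed into the storm at sea",
    "roses bloomed in the summer garden",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
vocab = vec.get_feature_names_out()

for seed in (0, 1):
    lda = LatentDirichletAllocation(n_components=2, random_state=seed).fit(X)
    top_words = [[vocab[i] for i in comp.argsort()[-3:]] for comp in lda.components_]
    print(f"seed {seed}: {top_words}")
```

Exploratory use of such a model means reading across runs for recurring groupings, not treating any single run as a uniquely correct description of the corpus.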

Finally, "The Computational Case" performs some crucial misdirection at the outset by implying that cultural analytics is based purely on linguistic evidence and mainly diction. It is true that diction can reveal a great deal, but this is a misleading account of contemporary trends. Quantitative approaches are making waves partly because researchers have learned to extract social relations from literature and partly because they pair language with external social testimony—for instance the judgments of reviewers.5 Some articles, like my own on narrative pace, use numbers entirely to describe the interpretations of human readers.6 Once again, Da's polemical strategy is to isolate one strand in a braid and critique it as if it were the whole.

A more inquisitive approach to cultural analytics might have revealed that it is not a monolith but an unfolding debate between several projects that frequently criticize each other. Katherine Bode, for instance, has critiqued other researchers' data (including mine), in an exemplary argument that starts by precisely describing different approaches to historical representation.7 Da could have made a similarly productive intervention—explaining, for instance, how researchers should report uncertainty in exploratory analysis. Her essay falls short of that achievement because a rush to condemn as many examples as possible has prevented it from taking time to describe and genuinely understand its objects of critique.

TED UNDERWOOD is professor of information sciences and English at the University of Illinois, Urbana-Champaign. He has published in venues ranging from PMLA to the IEEE International Conference on Big Data and is the author most recently of Distant Horizons: Digital Evidence and Literary Change (2019).

1. Andrew Goldstone, "Of Literary Standards and Logistic Regression: A Reproduction," January 4, 2016, https://andrewgoldstone.com/blog/2016/01/04/standards/; Jonathan Goodwin, "Darko Suvin's Genres of Victorian SF Revisited," October 17, 2016, https://jgoodwin.net/blog/more-suvin/.

2. Ted Underwood, "Can We Date Revolutions in the History of Literature and Music?", The Stone and the Shell, October 3, 2015, https://tedunderwood.com/2015/10/03/can-we-date-revolutions-in-the-history-of-literature-and-music/; Ted Underwood, Hoyt Long, Richard Jean So, and Yuancheng Zhu, "You Say You Found a Revolution," The Stone and the Shell, February 7, 2016, https://tedunderwood.com/2016/02/07/you-say-you-found-a-revolution/.

3. Nan Z. Da, “The Computational Case against Computational Literary Studies,” Critical Inquiry 45 (Spring 2019): 601-39.

4. Ted Underwood, “The Life Cycles of Genres,” Journal of Cultural Analytics, May 23, 2016, http://culturalanalytics.org/2016/05/the-life-cycles-of-genres/.

5. Eve Kraicer and Andrew Piper, “Social Characters: The Hierarchy of Gender in Contemporary English-Language Fiction,” Journal of Cultural Analytics, January 30, 2019, http://culturalanalytics.org/2019/01/social-characters-the-hierarchy-of-gender-in-contemporary-english-language-fiction/

6. Ted Underwood, "Why Literary Time is Measured in Minutes," ELH 85.2 (2018): 341-65.

7. Katherine Bode, “The Equivalence of ‘Close’ and ‘Distant’ Reading; or, Toward a New Object for Data-Rich Literary History,” MLQ 78.1 (2017): 77-106.


DAY 2 RESPONSES 


Argument

Nan Z Da

First, a qualification. Due to the time constraints of this forum, I can only address a portion of the issues raised by the forum participants and in ways still imprecise. I do plan to issue an additional response that addresses the more fine-grained technical issues.

“The Computational Case against Computational Literary Studies” was not written for the purposes of refining CLS. The paper does not simply call for “more rigor” or for replicability across the board. It is not about figuring out which statistical mode of inquiry best suits computational literary analysis. It is not a method paper; as some of my respondents point out, those are widely available.

The article was written to empower literary scholars and editors to ask logical questions about computational and quantitative literary criticism should they suspect a conceptual mismatch between the result and the argument or perceive the literary-critical payoff to be extraordinarily low.

The paper, I hope, teaches us to recognize two types of CLS work. First, there is statistically rigorous work that cannot actually answer the question it sets out to answer or doesn’t ask an interesting question at all. Second, there is work that seems to deliver interesting results but is either nonrobust or logically confused. The confusion sometimes issues from something like user error, but it is more often the result of the suboptimal or unnecessary use of statistical and other machine-learning tools. The paper was an attempt to demystify the application of those tools to literary corpora and to explain why technical errors are amplified when your goal is literary interpretation or description.

My article is the culmination of a long investigation into whether computational methods and their modes of quantitative analyses can have purchase in literary studies. My answer is that what drives quantitative results and data patterns often has little to do with the literary critical or literary historical claims being made by the scholars who claim to be finding such results and uncovering such patterns—though it sometimes looks like it. If the conclusions we find in CLS corroborate or disprove existing knowledge, this is not a sign that they are correct but that they are tautological at best, merely superficial at worst.

The article is agnostic on what literary criticism ought to be and makes no prescriptions about interpretive habits. The charge that it takes a “purist” position is pure projection. The article aims to describe what scholarship ought not to be. Even the appeal to reading books in the last pages of the article does not presume the inherent meaningfulness of “actually reading” but only serves as a rebuttal to the use of tools that wish to do simple classifications for which human decision would be immeasurably more accurate and much less expensive.

As to the question of Exploratory Data Analysis versus Confirmatory Data Analysis: I don’t prioritize one over the other. If numbers and their interpretation are involved, then statistics has to come into play; I don’t know any way around this. If you wish to simply describe your data, then you have to show something interesting that derives from measurements that are nonreductive. As to the appeal to exploratory tools: if your tool will never be able to explore the problem in question, because it lacks power or is overfitted to its object, your exploratory tool is not needed.

It seems unobjectionable that quantitative methods and nonquantitative methods might work in tandem. My paper is simply saying: that may be true in theory but it falls short in practice. Andrew Piper points us to the problem of generalization, of how to move from local to global, probative to illustrative. This is precisely the gap my article interrogates because that's where the collaborative ideal begins to break down. One may call the forcible closing of that gap any number of things—a new hermeneutics, epistemology, or modality—but in the end, the logic has to be clear.

My critics are right to point out a bind. The bind is theirs, however, not mine. My point is also that, going forward, it is not for me or a very small group of people to decide what the value of this work is, nor how it should be done.

Ed Finn accuses me of subjecting CLS to a double standard: “Nobody is calling in economists to assess the validity of Marxist literary analysis, or cognitive psychologists to check applications of affect theory, and it’s hard to imagine that scholars would accept the disciplinary authority of those critics.”

This is faulty reasoning. For one thing, literary scholars ask for advice and assessment from scholars in other fields all the time. For another, the payoff of the psychoanalytic reading, even as it seeks extraliterary meaning and validity, is not for psychology but for literary-critical meaning, where it succeeds or fails on its own terms. CLS wants to say, "it's okay that there isn't much payoff in our work itself as literary criticism, whether at the level of prose or sophistication of insight; the payoff is in the use of these methods, the description of data, the generation of a predictive model, or the ability for someone else in the future to ask (maybe better) questions. The payoff is in the building of labs, the funding of students, the founding of new journals, the cases made for tenure lines and postdoctoral fellowships and staggeringly large grants." When these are the claims, more than one discipline needs to be called in to evaluate the methods, their applications, and their results. Because printed critique of certain literary scholarship is generally not refuted by pointing to things still in the wings, we are dealing with two different scholarly models. In this situation, then, we should be maximally cross-disciplinary.

NAN Z. DA teaches literature at the University of Notre Dame.


Errors

Nan Z. Da

This first of two responses addresses errors, real and imputed; the second response is the more substantive.

1. There is a significant mistake in footnote 39 (p. 622) of my paper. In it I attribute to Hugh Craig and Arthur F. Kinney the argument that Marlowe wrote parts of some late Shakespeare plays after his (Marlowe’s) death. The attribution is incorrect. What Craig asks in “The Three Parts of Henry VI” (pp. 40-77) is whether Marlowe wrote segments of these plays. I would like to extend my sincere apologies to Craig and to the readers of this essay for the misapprehension that it caused.

2. The statement “After all, statistics automatically assumes” (p. 608) is incorrect. A more correct statement would be: In standard hypothesis testing a 95 percent confidence level means that, when the null is true, you will correctly fail to reject 95 percent of the time.

3. The description of various applications of text-mining/machine-learning (p. 620) as "ethically neutral" is not worded carefully enough. I obviously do not believe that some of these applications, such as tracking terrorists using algorithms, are ethically neutral. I meant that there are myriad applications of these tools: for good, ill, and otherwise. On balance it's hard to assign an ideological position to them.

4. Ted Underwood is correct that, in my discussion of his article "The Life Cycles of Genres," I confused the "ghastly stew" with the randomized control sets used in his predictive modeling. Underwood also does not make the elementary statistical mistake I suggest he has made in my article ("Underwood should train his model on pre-1941" [p. 608]).

As to the charge of misrepresentation: paraphrasing a paper whose “single central thesis … is that the things we call ‘genres’ may be entities of different kinds, with different life cycles and degrees of textual coherence” is difficult. Underwood’s thesis here refers to the relative coherence of detective fiction, gothic, and science fiction over time, with 1930 as the cutoff point.

The other things I say about the paper remain true. The paper cites various literary scholars’ definitions of genre change, but its implicit definition of genre is “consistency over time of 10,000 frequently used terms.” It cannot “reject Franco Moretti’s conjecture that genres have generational cycles” (a conjecture that most would already find too reductive) because it is not using the same testable definition of genre or change.

5. Topic Modeling: my point isn’t that topic models are non-replicable but that, in this particular application, they are non-robust. Among other evidence: if I remove one document out of one hundred, the topics change. That’s a problem.

6. As far as Long and So’s essay “Turbulent Flow” goes, I need a bit more time than this format allows to rerun the alternatives responsibly. So and Long have built a tool in which there are thirteen features for predicting the difference between two genres—Stream of Consciousness and Realism. They say: most of these features are not very predictive alone but together become very predictive, with that power being concentrated in just one feature. I show that that one feature isn’t robust. To revise their puzzling metaphor: it’s as if someone claims that a piano plays beautifully and that most of that sound comes from one key. I play that key; it doesn’t work.

7. So and Long argue that by proving that their classifier misclassifies nonhaikus—not only using English translations of Chinese poetry, as they suggest, but also Japanese poetry that existed long before the haiku—I've made a "misguided decision that smacks of Orientalism. . . . It completely erases context and history, suggesting an ontological relation where there is none." This is worth getting straight. Their classifier lacks power because it can only classify haikus with reference to poems quite different from haikus; to be clear, it will classify equally short texts whose keywords overlap with those of haikus as haikus. Overlapping keywords is their predictive feature, not mine. I'm not sure how pointing this out is Orientalist. As for their model, I would, if pushed, say it is only slightly Orientalist, if not determinatively so.

8. Long and So claim that my "numbers cannot be trusted," that my "critique . . . is rife with technical and factual errors"; in a similar vein their response ends with the assertion that my essay doesn't "encourag[e] much trust." I'll admit to making some errors in this article, though not in my analyses of Long and So's papers (the errors mostly occur in section 3). I hope to list all of these errors in the more formal response that appears in print or else in an online appendix. That said, an error is not the same as a specious insinuation that the invalidation of someone's model indicates Orientalism, pigheadedness, and so on. Nor is an error the same as the claim, recently made by So, that "CI asked Da to widen her critique to include female scholars and she declined," which is not an error but a falsehood.

NAN Z. DA teaches literature at the University of Notre Dame.


Katherine Bode

The opening statements were fairly critical of Da’s article, less so of CLS. To balance the scales, I want to suggest that Da’s idiosyncratic definition of CLS is partly a product of problematic divisions within digital literary studies.

Da omits what I’d call digital literary scholarship: philological, curatorial, and media archaeological approaches to digital collections and data. Researchers who pursue these approaches, far from reducing all digit(al)ized literature(s) to word counts, maintain––like Da––that analyses based purely or predominantly on such features tend to produce “conceptual fallacies from a literary, historical, or cultural-critical perspective” (p. 604). Omitting such research is part of the way in which Da operationalizes her critique of CLS: defining the field as research that focuses on word counts, then criticizing the field as limited because focused on word counts.

But Da’s perspective is mirrored by many of the researchers she cites. Ted Underwood, for instance, describes “otiose debates about corpus construction” as “well-intentioned red herrings” that detract attention from the proper focus of digital literary studies on statistical methods and inferences.[1] Da has been criticized for propagating a male-dominated version of CLS. But those who pursue the methods she criticizes are mostly men. By contrast, much digital literary scholarship is conducted by women and/or focused on marginalized literatures, peoples, or cultures. The tendency in CLS to privilege data modeling and analysis––and to minimize or dismiss the work of data construction and curation––is part of the culture that creates the male dominance of that field.

More broadly, both the focus on statistical modelling of word frequencies in found datasets, and the prominence accorded to such research in our discipline, puts literary studies out of step with digital research in other humanities fields. In digital history, for instance, researchers collaborate to construct rich datasets––for instance, of court proceedings (as in The Proceedings of the Old Bailey)[2] or social complexity (as reported in a recent Nature article)[3]––that can be used by multiple researchers, including for noncomputational analyses. Where such research is statistical, the methods are often simpler than machine learning models (for instance, trends over time; measures of relationships between select variables) because the questions are explicitly related to scale and the aggregation of well-defined scholarly phenomena, not to epistemologically-novel patterns discerned among thousands of variables.

Some things I want to know: Why is literary studies so hung up on (whether in favor of, or opposed to) this individualistic, masculinist mode of statistical criticism? Why is this focus allowed to marginalize earlier, and inhibit the development of new, large-scale, collaborative environments for both computational and noncomputational literary research? Why, in a field that is supposedly so attuned to identity and inequality, do we accept––and foreground––digital research that relies on platforms (Google Books, HathiTrust, EEBO, and others) that privilege dominant literatures and literary cultures? What would it take to bridge the scholarly and critical––the curatorial and statistical––dimensions of (digital) literary studies and what alternative, shared futures for our discipline could result?

KATHERINE BODE is associate professor of literary and textual studies at the Australian National University. Her latest book, A World of Fiction: Digital Collections and the Future of Literary History (2018), offers a new approach to literary research with mass-digitized collections, based on the theory and technology of the scholarly edition. Applying this model, Bode investigates a transnational collection of around 10,000 novels and novellas, discovered in digitized nineteenth-century Australian newspapers, to offer new insights into phenomena ranging from literary anonymity and fiction syndication to the emergence and intersections of national literary traditions.

[1] Ted Underwood, Distant Horizons: Digital Evidence and Literary Change (Chicago: University of Chicago Press, 2019), pp. 180, 176.

[2] Tim Hitchcock, Robert Shoemaker, Clive Emsley, Sharon Howard, and Jamie McLaughlin, et al., The Proceedings of the Old Bailey, http://www.oldbaileyonline.org (version 8.0, March 2018).

[3] Harvey Whitehouse, Pieter François, Patrick E. Savage, Thomas E. Currie, Kevin C. Feeney, Enrico Cioni, Rosalind Purcell, et al., "Complex Societies Precede Moralizing Gods Throughout World History," Nature (March 20, 2019): 1.


Ted Underwood

More could be said about specific claims in “The Computational Case.” But frankly, this forum isn’t happening because literary critics were persuaded by (or repelled by) Da’s statistical arguments. The forum was planned before publication because the essay’s general strategy was expected to make waves. Social media fanfare at the roll-out made clear that rumors of a “field-killing” project had been circulating for months among scholars who might not yet have read the text but were already eager to believe that Da had found a way to hoist cultural analytics by its own petard—the irrefutable authority of mathematics.

That excitement is probably something we should be discussing. Da’s essay doesn’t actually reveal much about current trends in cultural analytics. But the excitement preceding its release does reveal what people fear about this field—and perhaps suggest how breaches could be healed.

While it is undeniably interesting to hear that colleagues have been anticipating your demise, I don’t take the rumored plans for field-murder literally. For one thing, there’s no motive: literary scholars have little to gain by eliminating other subfields. Even if quantitative work had cornered a large slice of grant funding in literary studies (which it hasn’t), the total sum of all grants in the discipline is too small to create a consequential zero-sum game.

The real currency of literary studies is not grant funding but attention, so I interpret excitement about “The Computational Case” mostly as a sign that a large group of scholars have felt left out of an important conversation. Da’s essay itself describes this frustration, if read suspiciously (and yes, I still do that). Scholars who tried to critique cultural analytics in a purely external way seem to have felt forced into an unrewarding posture—“after all, who would not want to appear reasonable, forward-looking, open-minded?” (p. 603). What was needed instead was a champion willing to venture into quantitative territory and borrow some of that forward-looking buzz.

Da was courageous enough to try, and I think the effects of her venture are likely to be positive for everyone. Literary scholars will see that engaging quantitative arguments quantitatively isn’t all that hard and does produce buzz. Other scholars will follow Da across the qualitative/quantitative divide, and the illusory sharpness of the field boundary will fade.

Da’s own argument remains limited by its assumption that statistics is an alien world, where humanistic guidelines like “acknowledge context” are replaced by rigid hypothesis-testing protocols. But the colleagues who follow her will recognize, I hope, that statistical reasoning is an extension of ordinary human activities like exploration and debate. Humanistic principles still apply here. Quantitative models can test theories, but they are also guided by theory, and they shouldn’t pretend to answer questions more precisely than our theories can frame them. In short, I am glad Da wrote “The Computational Case” because her argument has ended up demonstrating—as a social gesture—what its text denied: that questions about mathematical modeling are continuous with debates about interpretive theory.

TED UNDERWOOD is professor of information sciences and English at the University of Illinois, Urbana-Champaign. He has published in venues ranging from PMLA to the IEEE International Conference on Big Data and is the author most recently of Distant Horizons: Digital Evidence and Literary Change (2019).


DAY 3 RESPONSES


Lauren F. Klein

The knowledge that there are many important voices not represented in this forum has prompted me to think harder about the context for the lines I quoted at the outset of my previous remarks. Parham’s own model for “The New Rigor” comes from diversity work, and the multiple forms of labor—affective as much as intellectual—that are required of individuals, almost always women and people of color, in order to compensate for the structural deficiencies of the university. I should have provided that context at the outset, both to do justice to Parham’s original formulation, and because the same structural deficiencies are at work in this forum, as they are in the field of DH overall.

In her most recent response, Katherine Bode posed a series of crucial questions about why literary studies remains fixated on the “individualistic, masculinist mode of statistical criticism” that characterizes much of the work that Da takes on in her essay. Bode further asks why the field of literary studies has allowed this focus to overshadow so much of the transformative work that has been pursued alongside—and, at times, in direct support of––this particular form of computational literary studies.

But I think we also know the answers, and they point back to the same structural deficiencies that Parham explores in her essay: a university structure that rewards certain forms of work and devalues others. In a general academic context, we might point to mentorship, advising, and community-building as clear examples of this devalued work. But in the context of the work discussed in this forum, we can align efforts to recover overlooked texts, compile new datasets, and preserve fragile archives with the undervalued side of this equation as well. It's not only that these forms of scholarship, like the "service" work described just above, are performed disproportionately by women and people of color. It is also that, because of the ways in which archives and canons are constructed, projects that focus on women and people of color require many more of these generous and generative scholarly acts. Without these acts, and the scholars who perform them, much of the formally published work on these subjects could not begin to exist.

Consider Kenton Rambsy’s “Black Short Story Dataset,” a dataset creation effort that he undertook because his own research questions about the changing composition of African American fiction anthologies could not be answered by any existing corpus; Margaret Galvan’s project to create an archive of comics in social movements, which she has undertaken in order to support her own computational work as well as her students’ learning; or any number of the projects published with Small Axe Archipelagos, a born-digital journal edited and produced by a team of librarians and faculty that has been intentionally designed to be read by people who live in the Caribbean as well as for scholars who work on that region. These projects each involve sophisticated computational thinking—at the level of resource creation and platform development as well as of analytical method. They respond both to specific research questions and to larger scholarly need. They require work, and they require time.

It’s clear that these projects provide significant value to the field of literary studies, as they do to the digital humanities and to the communities to which their work is addressed. In the end, the absence of the voices of the scholars who lead these projects, both from this forum and from the scholarship it explores, offers the most convincing evidence of what—and who—is valued most by existing university structures; and what work—and what people—should be at the center of conversations to come.

LAUREN F. KLEIN is associate professor at the School of Literature, Media, and Communication, Georgia Institute of Technology.


Katherine Bode

Da’s is the first article (I’m aware of) to offer a statistical rejection of statistical approaches to literature. The exaggerated ideological agenda of earlier criticisms, which described the use of numbers or computers to analyze literature as neoliberal, neoimperialist, neoconservative, and more, made them easy to dismiss. Yet to some extent, this routinized dismissal instituted a binary in CLS, wherein numbers, statistics, and computers became distinct from ideology. If nothing else, this debate will hopefully demonstrate that no arguments––including statistical ones––are ideologically (or ethically) neutral.

But this realization doesn't get us very far. If all arguments have ideological and ethical dimensions, then making and assessing them requires something more than proving their in/accuracy; more than establishing their reproducibility, replicability, or lack thereof. Da's "Argument" response seemed to move us toward what is needed by describing the aim of her article as: "to empower literary scholars and editors to ask logical questions about computational and quantitative literary criticism should they suspect a conceptual mismatch between the result and the argument or perceive the literary-critical payoff to be extraordinarily low." However, she closes that path down by allowing only one possible answer to such questions: "in practice" there can be no "payoff … [in terms of] literary-critical meaning, from these methods"; CLS "conclusions"––whether "corroborat[ing] or disprov[ing] existing knowledge"––are only ever "tautological at best, merely superficial at worst."

Risking blatant self-promotion, I'd say I've often used quantification to show "something interesting that derives from measurements that are nonreductive." For instance, A World of Fiction challenges the prevailing view that nineteenth-century Australian fiction replicates the legal lie of terra nullius by not representing Aboriginal characters, by establishing their widespread prevalence in such fiction; and contrary to the perception of the Australian colonies as separate literary cultures oriented toward their metropolitan centers, it demonstrates the existence of a largely separate, strongly interlinked, provincial literary culture.[1] To give just one other example from many possibilities, Ted Underwood's "Why Literary Time is Measured in Minutes" uses hand-coded samples from three centuries of literature to indicate an acceleration in the pace of fiction.[2] Running the gauntlet from counting to predictive modelling, these arguments are all statistical, according to Da's definition: "if numbers and their interpretation are involved, then statistics has to come into play." And as in this definition, they don't stop with numerical results, but explore their literary critical and historical implications.

If what happens prior to arriving at a statistical finding cannot be justified, the argument is worthless; the same is true if what happens after that point is of no literary-critical interest. Ethical considerations are essential in justifying what is studied, why, and how. This is not––and should not be––a low bar. I’d hoped this forum would help build connections between literary and statistical ways of knowing. The idea that quantification and computation can only yield superficial or tautological literary arguments shows that we’re just replaying the same old arguments, even if both sides are now making them in statistical terms.

KATHERINE BODE is associate professor of literary and textual studies at the Australian National University. Her latest book, A World of Fiction: Digital Collections and the Future of Literary History (2018), offers a new approach to literary research with mass-digitized collections, based on the theory and technology of the scholarly edition. Applying this model, Bode investigates a transnational collection of around 10,000 novels and novellas, discovered in digitized nineteenth-century Australian newspapers, to offer new insights into phenomena ranging from literary anonymity and fiction syndication to the emergence and intersections of national literary traditions.

[1]Katherine Bode, A World of Fiction: Digital Collections and the Future of Literary History (Ann Arbor: University of Michigan Press, 2018).

[2]Ted Underwood, “Why Literary Time is Measured in Minutes,” ELH 85.2 (2018): 341–365.


Mark Algee-Hewitt

In 2010, as a new postdoctoral fellow, I presented a paper on James Thomson’s 1730 poem The Seasons to a group of senior scholars. The argument was modest: I used close readings to suggest that in each section of the poem Thomson simulated an aesthetic experience for his readers before teaching them how to interpret it. The response was mild and mostly positive. Six months later, having gained slightly more confidence, I presented the same project with a twist: I included a graph that revealed my readings to be based on a pattern of repeated discourse throughout the poem. The response was swift and polarizing: while some in the room thought that the quantitative methods deepened the argument, others argued strongly that I was undermining the whole field. For me, the experience was formative: the simple presence of numbers was enough to enrage scholars many years my senior, long before Digital Humanities gained any prestige, funding, or institutional support.

My experience suggests that this project passed what Da calls the "smell test": the critical results remained valid, even without the supporting apparatus of the quantitative analysis. And while Da might argue that this proves that the quantitative aspect of the project was unnecessary in the first place, I would respectfully disagree. The pattern I found was the basis for my reading, and to present it as if I had discovered it through reading alone was, at best, disingenuous. The quantitative aspect of my argument also allowed me to connect the poem to a larger pattern of poetics throughout the eighteenth century. And I would go further to contend that just as the introduction of quantification into a field changes the field, so too does the field change the method to suit its own ends; and that confirming a statistical result through its agreement with conclusions derived from literary historical methods is just as powerful as a null hypothesis test. In other words, Da's "smell test" suggests a potential way forward in synthesizing these methods.

But the lesson I learned remains as powerful as ever: regardless of how they are embedded in research, regardless of who uses them, computational methods provoke an immediate, often negative, response in many humanities scholars. And it is worth asking why. Just as it is always worth reexamining the institutional, political, and gendered history of methods such as new history, formalism, and even close reading, so too is it important, as Katherine Bode suggests, to think through these same issues in Digital Humanities as a whole. And it is crucial that we do so without erasing the work of the new, emerging, and often structurally vulnerable members of the field that Lauren Klein highlights. These methods have a powerful appeal among emerging groups of students and young scholars. And to seek to shut down scholarship by asserting a blanket incompatibility between method and object is to do a disservice to the fascinating work of emerging scholars that is reshaping our critical practices and our understanding of literature.

MARK ALGEE-HEWITT is an assistant professor of English and Digital Humanities at Stanford University where he directs the Stanford Literary Lab. His current work combines computational methods with literary criticism to explore large scale changes in aesthetic concepts during the eighteenth and nineteenth centuries. The projects that he leads at the Literary Lab include a study of racialized language in nineteenth-century American literature and a computational analysis of differences in disciplinary style. Mark’s work has appeared in New Literary History, Digital Scholarship in the Humanities, as well as in edited volumes on the Enlightenment and the Digital Humanities.

 

 

 

To read the Summer 2020 journal responses by Leif Weatherby, Ted Underwood, and Nan Da, click the links below:

Ted Underwood, Critical Response II. The Theoretical Divide Driving Debates about Computation

 



Progressive Surge Propels Turning Point in US Policy on Yemen


Protesters call for an end to US involvement in the war in Yemen, November 2018 in Chicago. The blue backpacks stand for the 40 children killed in an air strike on a school bus that used an American-made bomb. CHARLES EDWARD MILLER [CREATIVE COMMONS LICENSE BY SA 2.0]

By Danny Postel

This article was originally published in Middle East Report.

The US House of Representatives passed a potentially historic resolution on February 13, 2019, calling for an end to US military support for the Saudi-led coalition’s intervention in Yemen that began in 2015. Although the US government has never formally declared its involvement in the war, it assists the coalition with intelligence and munitions and supports the aerial campaign with refueling and targeting. The United States is therefore complicit in the myriad atrocities the coalition has committed against Yemeni civilians, which Human Rights Watch and Amnesty International have characterized as war crimes.1

What is already historic about the resolution (introduced by Democratic Representatives Ro Khanna of California and Mark Pocan of Wisconsin) and its Senate counterpart (introduced by Independent Bernie Sanders of Vermont, Republican Mike Lee of Utah and Democrat Chris Murphy of Connecticut) is their invocation of the War Powers Resolution of 1973, which restrains a president’s capacity to commit forces abroad. Aimed at preventing “future Vietnams,” the act gives Congress the authority to compel the removal of US military forces engaged in hostilities absent a formal declaration of war.

The House resolution marked the first time Congress flexed its War Powers muscle in the 45 years since the act’s passage. The Senate had passed a parallel resolution in December, but the measure died when the House Republican leadership refused to bring it to a vote. These congressional moves not only register opposition to US involvement in this war but also strike a major blow against unlimited executive power when it comes to launching war.

This long overdue Congressional action to constrain executive war-making, however, would not have been possible without a tremendous grassroots mobilization against US involvement in this disastrous war and the surging progressive tide that is raising deeper questions about US foreign policy.

Anti-war activists in the United States have been organizing against US support for the Saudi intervention in Yemen since 2015. While these efforts made an impact on the public debate about Yemen, they failed to move the policy needle—until an unexpected chain of events in late 2018 gave the campaign new traction and occasioned a momentous grassroots mobilization. The national organizing campaign is led by a combination of Yemen-oriented groups (the Yemen Peace Project, the Yemeni Alliance Committee and others) along with more established anti-war organizations like Just Foreign Policy, Win Without War, Code Pink and Peace Action. The addition of the ascendant Democratic Socialists of America contributed to the momentum. Yet it was the confluence of events outside the control of these groups—but to which these groups were well-positioned to rapidly respond—that propelled the campaign into broad Congressional support for War Powers resolutions in early 2019.

This campaign is poised to change not only US policy on Yemen, but possibly the longstanding US-Saudi relationship. To be sure, major obstacles stand in the way of such a shift—notably, the Israel lobby and the swampy Donald Trump-Jared Kushner ties with Gulf monarchs. But the tide is now turning, and the 2020 presidential election could change the equation even more dramatically.

Game-Changers

The Barack Obama administration gave the green light for the Saudi bombing campaign in 2015, dubbed Operation Decisive Storm, as a way to placate Saudi Arabia’s furious opposition to the Iran nuclear deal, which the Saudis viewed as a betrayal and a sign that Washington was pivoting to Tehran.2 Some commentators retrospectively regard the Iran deal as wrongheaded given the catastrophe that has unfolded in Yemen.3 But this imagines that Obama’s decision to sign off on the kingdom’s military campaign was automatic or inevitable. It was neither. The problem was not the Iran deal itself, but rather the decision to appease the Saudis in Yemen.

The Saudis viewed Trump’s election as a godsend. Here was someone who embraced their assertion that Iran was the source of most of the region’s problems and shared their determination to isolate and confront Tehran.4 Trump’s first foreign visit as president was to Riyadh, where he told the ensemble of autocrats, monarchs and thugs what they wanted to hear: They have US support. Immediately after the May 2017 gathering, the Saudis stepped up their aerial assault on Yemen, and Trump announced a massive new weapons deal with the kingdom.

As the war intensified and the humanitarian crisis deepened, a broad coalition of US anti-war activists emerged and shifted their attention to Yemen, initiating a variety of educational events, protests and meetings to pressure congressional leaders. Despite their efforts, it took two events in the summer of 2018—one a horrific act of violence in Yemen that illuminated all that was wrong with US involvement, and the other a horrific act of violence in Istanbul not directly related to the war itself—to spark a major opening in public consciousness and on Capitol Hill.

On August 9, 2018, a Saudi-led coalition warplane bombed a school bus in Saada, northern Yemen, killing several dozen children between the ages of six and 11. Mainstream media coverage of this event was unusually extensive and graphic, with CNN airing chilling video footage of the final moments inside the bus before the bomb struck. The video ran in heavy rotation and went viral on social media. The visceral imagery of children on a school bus struck a deep nerve among many Americans who otherwise had not been following events in Yemen.

Reports that the warplane in question was sold to Riyadh by Washington, and that the bomb was manufactured in the United States, began to materialize. The Yemen-based human rights organization Mwatana played an important role by providing CNN access to a cache of documents showing fragments of American-made bombs at the scene of multiple attacks in which civilians were killed and injured, going back to 2015.5 Mwatana’s engagement with the US media also drew upon the knowledge and connections of US-based organizations that had long been working to draw attention to the direct role of the United States in the little-understood war. The horror of the school bus bombing, followed by this investigative surge, had a palpable effect on public opinion as Washington’s direct role in the suffering of Yemeni civilians came into public focus.6

The second event, the October 2, 2018 assassination of the Saudi journalist Jamal Khashoggi, was the game-changer. When it was revealed that the Washington Post contributor was dismembered with a bone saw in the Saudi consulate in Istanbul and that Khashoggi’s murder was directed by the highest levels of the Saudi regime, virtually the entire Washington foreign policy world condemned Saudi Crown Prince Mohammed bin Salman for his brazen brutality.

“The Khashoggi killing shocked official Washington, which was forced to overcompensate for having endorsed Crown Prince Mohammed bin Salman as an enlightened reformer,” Yasmine Farouk observes. “The humanitarian consequences of the war in Yemen added to that, so that the kingdom in its entirety has become entangled in the current polarization of US politics.”7

Many Yemenis are ambivalent about what might be called the Khashoggi effect—the ways in which the Saudi journalist’s brutal murder has drawn attention to the injustices of the war in Yemen. Abdulrasheed Alfaqih, Executive Director of Mwatana, conveys this ambivalence in his observation that “Yemen is one big Saudi consulate.” “All Yemenis are like Khashoggi,” he notes, “but without the Washington Post.”

But Khashoggi’s murder proved pivotal on the legislative front, when a handful of Republican senators joined Democrats in their support for Senate Joint Resolution 54, the War Powers measure to end US support for the coalition’s military operations in Yemen. Just a few months earlier, in March 2018, this resolution had been rejected by the Senate. But following the school bus bombing, revelations of Washington’s complicity in such atrocities and the Khashoggi affair, the Senate passed the Sanders-Lee-Murphy resolution in December 2018. While outgoing Speaker Paul Ryan blocked the House resolution on his way out of office, a new version, House Joint Resolution 37, passed the Democratic-controlled House in February 2019. Euphoria was widespread in progressive circles: Anti-war activists celebrated not just the passage of the resolution, but the critical role they played in bringing it about.

Mobilizing a Coalition

Since the beginning of 2018, a coalition of organizations has worked around the clock mobilizing grassroots support for congressional action. Groups like Win Without War, Just Foreign Policy, the Yemen Peace Project, Code Pink, Peace Action, the Yemeni Alliance Committee, the Friends Committee on National Legislation, Action Corps and the Fellowship of Reconciliation have worked closely with congressional allies, providing policy expertise and helping draft resolutions (both Senate and House versions). These organizations have mobilized their members and supporters around the country to pressure their congressional representatives to co-sponsor and vote for the resolutions. They organized rallies at US Senate offices in Nevada, Arizona, Illinois, New York, New Jersey, Rhode Island and Maine (as well as on Capitol Hill), resulting in grassroots and media pressure on every Democrat who voted against the Yemen resolution in March 2018, which had a direct impact on the historic Senate vote in December.

While many efforts were coordinated, the mobilization was broad and diffuse enough to pressure congressional representatives across the country. In November, the Yemeni Alliance Committee, Just Foreign Policy and Action Corps organized rallies at the San Francisco and Los Angeles offices of two key House Democrats, Nancy Pelosi (then House Minority Leader, now Speaker) and Adam Schiff. Until then, Pelosi’s position on Yemen was unclear.8 Yemeni and Yemeni-American activists figured prominently in both actions. Within 24 hours of the rallies, both Pelosi and Schiff agreed to co-sponsor the original House resolution.

Employing creative means, Chicago activists led by Voices for Creative Nonviolence, Just Foreign Policy and the Chicago chapter of Peace Action held a powerful demonstration in November 2018 at Chicago’s Federal Building, placing 40 blue backpacks on the ground bearing the names of the children killed in the Saudi air strike on their school bus. A teach-in on US involvement in the Yemen war held the next day at a packed auditorium at Loyola University featured the Yemeni-Canadian activist and Michigan State professor Shireen Al-Adeimi, who has emerged as one of the key voices on Yemen. Students at Loyola, DePaul and the University of Chicago have made Yemen a central focus of their activism.

Democratic Socialists of America, which now has more than 55,000 members nationally, has also played an important role. In November 2018 the organization issued a forceful statement on Yemen. In January 2019, it held a national video conference to educate and spark its members to participate in the National Day of Action for Yemen on February 4, 2019, which mobilized support for the current House and Senate resolutions to end US support for the Saudi military intervention.

A Left-Right Alliance on Yemen?

Yemen has become an important subplot in a larger story: the development of a new progressive foreign policy vision in Congress. A central figure in this story is Rep. Ro Khanna, who was first elected to Congress in 2016 and has emerged as a leading member of the Congressional Progressive Caucus. With his frequent appearances on such shows as All In with Chris Hayes, Democracy Now! and the Intercepted podcast, Khanna has become a prominent voice in progressive and anti-war circles. Khanna goes beyond advocating simply for the end of US support for the Saudi-led coalition in Yemen: He wants to stop all US military assistance to Saudi Arabia.

At the same time, Khanna is part of a disconcerting trend in certain quarters of the anti-war left, sometimes expressing affinity with right-wing reactionaries whose opposition to neoconservatism overlaps with their own. In February 2019, Khanna tweeted about an article by Fox News’ Tucker Carlson in The American Conservative magazine: “Tucker Carlson offers a devastating critique of interventionism and shows how much of the foreign policy establishment has failed the American people. There is an emerging, left-right coalition of common sense for a foreign policy of restraint.”9

Carlson may be a critic of neoconservatism, but he is also a defender of white nationalism and a purveyor of demonizing rhetoric about immigrants and Muslims. Praising someone like Carlson—especially without offering this caveat—risks rendering Khanna’s anti-war position hostile to Yemeni-Americans and many other allies in the progressive push to end the war in Yemen.

Talk of a left-right coalition has been gaining traction in some anti-war circles, particularly since Trump’s election. To be sure, the War Powers resolution could not have made progress without making common cause with some conservatives. Republican Sen. Mike Lee of Utah, for example, has been an instrumental ally on Yemen. But to speak of a broad left-right coalition, as Khanna and others do, risks alienating many progressives who fiercely oppose “America First” nationalism (read: white nationalism).

Rep. Tulsi Gabbard of Hawaii is also frequently quoted and retweeted in anti-war circles despite her well-documented Islamophobia, her enthusiastic support for the chauvinistic Hindu nationalism of Indian Prime Minister Narendra Modi, her praise for brutal dictators like Egypt’s Abdel Fattah al-Sisi and her cooperation with the right-wing organization that arranged her trip to Syria to meet with the war criminal Bashar al-Assad.10

The troubling politics of this left-right coalition did not originate in Congress. Many progressives and anti-war activists, for example, contributed to the virality of this tweet from Sen. Rand Paul: “Sunnis have been killing Shia since the massacre at Karbala in 680 AD. If we wait until they stop killing each other, we will stay for a thousand years or more. I agree with @realDonaldTrump. Bring the troops home.”11 Many progressives, however, oppose building a left-right coalition that overlooks Orientalist and racist distortions about the Middle East and Muslims on the basis of shared support for a smaller US military footprint.12 Such a coalition would be hostile, if not unrecognizable, to many of the people in whose name progressive activists often claim to speak.

Bernie, the Democratic Party and US-Saudi Relations

Unlike in 2016, when Bernie Sanders seemed to shy away from foreign policy issues, foreign policy has become a major focus as he enters the presidential race for 2020.13 In recent months he has issued an internationalist manifesto and delivered a major foreign policy address at Johns Hopkins University’s School of Advanced International Studies.14 With Sanders’ timely leadership on ending US involvement in the war in Yemen, his increasingly critical views on US-Saudi relations and his broader anti-authoritarian internationalist vision, the contours of a Sanders-administration foreign policy are taking shape and could become a reality: Every poll shows Sanders beating Trump in a general election. As with domestic issues, Sanders’ influence over the terms of the Democrats’ foreign policy debate will be significant.

Moreover, every Democratic senator running for president is on board as a co-sponsor of the Sanders-Lee-Murphy resolution on Yemen. This development is remarkable and may portend a major shift in US foreign policy—at least toward Saudi Arabia. Resetting US relations with the Saudi kingdom, which Gilbert Achcar has felicitously called “the most reactionary state on earth,” would go well beyond the Obama-Clinton-Kerry legacy—indeed, well beyond any previous Democratic administration—and have far-reaching repercussions in the Middle East.15

If US policy moves in this progressive direction, the grassroots mobilization to end US involvement in the war in Yemen—particularly the surge of 2018 and 2019—will be a key reason.


Danny Postel is assistant director of the Middle East and North African Studies Program at Northwestern University. He is involved in the activist mobilization to end US support for the Saudi military intervention in Yemen.


Endnotes

1 Human Rights Watch, “Yemen: Civilians Bombed, Shelled, Starved,” January 17, 2019; Amnesty International, “Yemen: The forgotten war,” September 2015.

2 John M. Willis, “Operation Decisive Storm and the Expanding Counter-Revolution,” Middle East Report Online, March 30, 2015.

3 See Joshua Keating, “What if the Iran Deal Was a Mistake?” Slate, February 6, 2018.

4 See Danny Postel and Nader Hashemi, “Playing with Fire: Trump, the Saudi-Iranian Rivalry, and the Geopolitics of Sectarianization in the Middle East,” Mediterranean Yearbook 2018 (Institut Europeu de la Mediterrània, 2018).

5 Nima Elbagir, Salma Abdelaziz, Ryan Browne, Barbara Arvanitidis and Laura Smith-Spark, “Bomb that killed 40 children in Yemen was supplied by the US,” CNN, August 17, 2018; and Nima Elbagir, Salma Abdelaziz and Laura Smith-Spark, “Made in America: Shrapnel found in Yemen ties US bombs to string of civilian deaths over course of bloody civil war,” CNN, September 2019.

6 Borzou Daragahi, “Majority of Americans want congress to cut arms sales to Saudi Arabia over Yemen war, survey finds,” The Independent, November 26, 2018.

7 Yasmine Farouk, “Guilt by Association,” Diwan, February 15, 2019.

8 Sarah Lazare, “Nancy Pelosi Finds Time for War Hawks—But Not Yemeni-American Peace Advocates,” In These Times, December 7, 2018.

9 The tweet is available at https://twitter.com/i/web/status/1096467708831510530

10 Branko Marcetic, “Tulsi Gabbard Is Not Your Friend,” Jacobin, May 26, 2017; Soumya Shankar, “Tulsi Gabbard Is a Rising Progressive Star, Despite Her Support for Hindu Nationalists,” The Intercept, January 5, 2019; Alex Rowell, Tim Mak and Michael Weiss, “Tulsi Gabbard’s Fascist Escorts to Syria,” Daily Beast, January 26, 2017; Evan Hill, “Tulsi Gabbard’s Deceptive Foreign Policy,” The Nation, January 17, 2019.

11 The tweet is available at https://twitter.com/randpaul/status/1085600177682071552?lang=en

12 For a critique of this Orientalist narrative about ancient sectarian hatreds, see Nader Hashemi and Danny Postel, eds., Sectarianization: Mapping the New Politics of the Middle East (Oxford, 2017).

13 Peter Beinart, “It’s Foreign Policy That Distinguishes Bernie This Time,” The Atlantic, February 21, 2019.

14 Bernie Sanders, “A new authoritarian axis demands an international progressive front,” The Guardian, September 13, 2018.

15 Nada Matta, “What Happened to the Arab Spring? An Interview with Gilbert Achcar,” Jacobin, December 17, 2015.


Icarus High and Low: Yaron Ezrahi and the Fate of the Israeli Political Imaginary

Daniel Bertrand Monk


“Today, we should have to add: it is part of morality not to be at home in one’s home.”

Theodor W. Adorno, Minima Moralia


Theodor Adorno’s famous adage that a “wrong life cannot be lived rightly” appeared at the end of an aphorism entitled “Refuge for the Homeless.” It explored how the predicament of “private life”—its impossible necessity—could be derived from the repertoire of possible attitudes one might take towards “its arena”: our homes. “Dwelling, in the proper sense, is now impossible,” Adorno explained, because the immanent development of technology—including things like the invention of concentration camps and the carpet bombing of cities—had killed houses altogether. He was suggesting that the “possibility of residence” implies a distinction—however minimal—between the private sphere (our own place) and the place assigned it by the social order. Under present circumstances, then, to make oneself at home is to deny knowledge of the conditions that have exterminated the possibility of a refuge at all. On the other hand, to live like a refugee—that is, to make a fetish of one’s lack of attachment to home—is no alternative: “a loveless disregard for things . . . necessarily turns against people too.”[1]


One could say much the same thing for a homeland. Recognizing the important distinction to be preserved between an extinguished private existence and an equally lifeless public sphere in Adorno’s aphorism, it may be productive—just for a moment—to squint one’s eyes, extrapolate to a different level of analysis, and think of his “Refuge for the Homeless” as an invitation to analyze the corresponding relation between guilt and knowledge in a people’s efforts to make itself at home. This, indeed, was the entire intellectual project of Yaron Ezrahi, for whom the rationalization of national and collective existence was the principal political question underlying modern political history, practice, and thought. A political scientist by training and a brilliant historian of science by grit, Ezrahi transformed himself into his own experiment. As he sought to lay claim to the way that his own status as a citizen of Israel exhumed contrarieties at the level of thought, and not just ethics, he raised generalizable questions about the place of the rightless in the domain of those who enjoy “the right to have rights.”[2]

For Ezrahi, one’s polity is always in question—irreducibly, and without romanticism—and its forms are, by extension, necessarily contingent. Democracy, in particular, is a perpetual referendum on its own possibility and little more. In fact, by the end of his life Ezrahi concluded that democracy “must be imagined and performed” in order to exist at all.[3] The “inhabitants” of an order “regulated by the imaginary of self-government by the people” find it difficult to “recognize the fictive-performative foundations of . . . [their] . . . world,” even as that same imaginary appeals in flawed, but important, ways to something beyond these normative foundations.[4] Under these circumstances, Ezrahi seemed to suggest in his last works, the adjudication of political meaning may now be closer to aesthetic criticism, as we both practice and assess collective and intersubjective performances of legitimacy. (This affective turn in his political thought was developed, in part, in partnership with his wife, the musicologist Ruth HaCohen [Pinczower].)[5]

Ezrahi’s best-known work of public scholarship, Rubber Bullets (1997), demonstrates the fate of any argument that would seek to stand critically between a conflict and its imaginary. An indictment of Israeli self-talk concerning the first Palestinian Intifada, the work presents the eponymous ammunition, then used by the IDF against stone-throwing Arab youths, as a shorthand for the way that the nation sought to reconcile its repression of the Palestinian people with Israel’s self-conception as a liberal democratic state. (The rubber bullet, so the conventional argument went, responds with superior force without breaking the bounds of proportionality demanded by justice.) Smashing this kind of chatter to bits, Ezrahi exposed the contradiction between force and law suspended in the rubber bullet: it was at once affirmative (in the sense that it was a perverse excuse for a perverse condition), and at the same time, rubber bullet talk—in all its permutations—was potentially critical: the palpable lack of reconciliation between norms and reason it laid bare by virtue of its own existence necessarily appealed to a different standard of politics.[6]

The standstill captured in Ezrahi’s account of a political imaginary was sometimes experienced as a betrayal of thought. Nationalist critics accused him of being a fifth columnist, while security hawks accused him of prioritizing abstractions like justice over the protection of civilians. Conversely, some academic interlocutors treated Ezrahi as an apologist for the state. As one review phrased it, Ezrahi’s critique of the rubber bullet revealed the limits of “Liberal Zionist Angst.”[7] Moreover, in a phenomenon referred to in Hebrew slang as “hafuch al-hafuch”—a metacritical “inverse of the inverse”—Ezrahi’s own criticism of Israeli military policy was itself treated as evidence of the way the same self-talk he analyzed could be drafted into a second-order legitimacy. (In other words, political analysis here became the intellectuals’ contribution to a form of Intifada-era Israeli cultural production commonly known as the shoot’n’cry genre: “If we can grieve about the injury we cause, we must still be just.”)

The reception of Ezrahi’s public scholarship is an allegory of the fate of any argument that would refuse to make itself too much at home in the realm of given positions. Eschewing communitarian rationalizations of the state, Ezrahi could also not adopt valorizations of exile as an Archimedean lever on thought. As a result, his intellectual project sometimes ran afoul of established interpretations of the Israel/Palestine conflict, which have tended to disqualify arguments that fail to ratify an either-or logic of recrimination.[8] (Conflating interpretations of the structure of contention with the moral judgments embedded in its moments, this logic of recrimination requires one to fall on one side or the other of the rubber bullet, or risk appearing as the custodian of a phony middle ground.) To the extent that the rubber bullet was understood solely as a trope for millennial Israeli politics per se, the critical force of Ezrahi’s thought went unnoticed or was willfully set aside.

In fact, Ezrahi’s interlocutors were right and wrong. Right, because as a metonymy the rubber bullet connoted a fully realized politics. Its temporality was implicitly retrospective inasmuch as the trope pointed to a political truth that was presumably already in existence. As a result, the moral job of the critic was to call that political truth by its proper name. Wrong, because Ezrahi—whose academic research had focused primarily on the relation between scientific rationality and politics—also understood the rubber bullet as a technology.[9] Anticipating a new materialist turn in political thought (and, particularly, its concern with the agentic capacities of entire classes of objects), Ezrahi understood technologies to be form-constitutive in the socio-political sphere. As artefacts of a fictive “escape” from politics, in Ezrahi’s terms, technologies can be part of new political imaginaries that their very existence may help conjure into being.[10] Pointing towards the future, the valuations of the political associated with them are tinged by a generalizable indeterminacy. In his last work, Ezrahi presented this finding succinctly: “political order has no basis other than an unstable, ungrounded, elusive and inherently debatable human authority.”[11]

A “lay epistemology” of coherence once premised on the existence of god, nature, or reason as our image of the universal has now “disintegrated,” Ezrahi argued, leaving humanity with a politics premised upon nothing but the same “debatable human authority.” In this context, the given dynamics of contention concerning the Israel/Palestine conflict have themselves become a “refuge” from indeterminacy among those who might otherwise have to take responsibility for reimagining the political order.

And this is no way to live. Those who would treat the realities of this very human conflict as exceptional by reducing them to the certainties of a reified politics of blame—the “truths” and “lies” that reassert the power of given dichotomies—were, and are, constructing for themselves a regressive utopia, Ezrahi seemed to suggest. The way we talk about this conflict is itself evidence of a species of thinking that cannot make itself at home in a present where legitimation crises are irreducible. It is this exceptionalization of thinking about Israel/Palestine in relation to our thinking about conflicts in general that Ezrahi challenged, precisely by rejecting the given alternatives and, in the process, opening himself up to charges of betrayal and complicity alike.


Daniel Bertrand Monk is the George R. and Myra T. Cooley Professor of Peace and Conflict Studies and Professor of Geography and Middle-East Studies at Colgate University.


[1]Theodor Adorno, Minima Moralia: Notes on a Damaged Life (New York, 1978), p. 39.

[2]Hannah Arendt, “The Perplexities of the Rights of Man,” Headline Series; New York 318 (Winter 1998): 88, search.proquest.com/docview/228271571/citation/E8F4234B1C334242PQ/1; and “‘The Rights of Man’: What are They,” Modern Review 3, no. 1 (1949): 24-37.

[3]“A democratic society cannot fully or at every moment be a democracy. Its precarious existence depends upon mutually reinforcing democratic ideas, political culture, political imaginaries, institutions, and practices. These very elements, which make a system of government democratic, almost never fully coexist in any society” (Yaron Ezrahi, Imagined Democracies: Necessary Political Fictions [New York, 2012], p. 1).

[4]Ibid., p. 3.

[5]See Ruth HaCohen Pinczower and Ezrahi, Lehalchin Koach, Lashir Herut [Composing Power, Singing Freedom] (Jerusalem, 2017).

[6]Ezrahi, Rubber Bullets: Power and Conscience in Modern Israel (New York, 1997).

[7]Ilan Pappe, “Liberal Zionist Angst. Review of Yaron Ezrahi, Rubber Bullets,” Journal of Palestine Studies 29, no. 1 (1999): 95–96, doi.org/10.2307/2676438.

[8]Daniel Bertrand Monk, “The Intractability Lobby: Material Culture and the Interpretation of the Israel/Palestine Conflict,” Critical Inquiry 36 (Spring 2010): 601–8, doi.org/10.1086/653415.

[9]Ezrahi, The Descent of Icarus: Science and the Transformation of Contemporary Democracy (New York, 1990).

[10]Ezrahi, “Technology and the Illusion of the Escape from Politics,” in Technology, Pessimism, and Postmodernism, ed. Ezrahi, E. Mendelsohn, and H. Segal (Amherst, Mass., 1995), pp. 29-38.

[11]Ezrahi, Can Democracy Recover? (unpublished manuscript, excerpt).


Catherine Malabou on Life: A Critical Inquiry Interview

Catherine Malabou stopped by the office of Critical Inquiry for a short and informal audio interview during her visit to the University of Chicago two years ago. We talked about her two CI essays, her book Before Tomorrow: Epigenesis and Rationality (2017), and her work in progress.

You can also listen and subscribe to WB202 at:

iTunes

Google Play

TuneIn


Davidson and His Interlocutors, Part 2: An Interview with Arnold I. Davidson

Coeditor Richard Neer interviews Arnold Davidson about, among other things, his writing on music. This interview expands on the work featured in “Davidson and His Interlocutors,” a Winter 2019 special issue of Critical Inquiry. This is the second part of a two-part interview.

You can also listen and subscribe to WB202 at:

iTunes

Google Play

TuneIn
