“Terror” at the University of Chicago

W. J. T. Mitchell

The attached poster (redacted to remove students’ names) was created by the David Horowitz Freedom Center. It was posted anonymously and illegally in several places at the University of Chicago last week, as classes were resuming. It is the second time that Horowitz’s campaign of intimidation has targeted students and faculty at the University of Chicago.

David Horowitz is a well-known flack for the radical right. His “Freedom Center” has been designated as a hate group by the Southern Poverty Law Center. He believes that anyone who exercises his or her right to free speech to criticize the state of Israel or defend the rights of Palestinians with nonviolent means is automatically a supporter of terrorism. He has carried this message to several American universities, including Berkeley, Irvine, DePaul, and now the University of Chicago. His tactics are similar to those of sexual harassers and racists who like to hang nooses to intimidate black students or send threatening Twitter messages to bully those they disagree with. Fortunately, his name appears on these posters; the people who sneak around posting them may be able to hide, but he cannot.

So far the University of Chicago’s administration has declined to call out Horowitz by name and to demand that he cease and desist from promoting these defamatory attacks. If you feel that universities should be more proactive about banning these kinds of attacks, please make your views known. This kind of campaign has nothing to do with freedom of speech. It is a form of hate speech, and like the recent Russian attacks on US elections, it corrupts social media by circulating lies and slander to produce division and paranoia, not to mention its potential for damaging the careers of vulnerable students and faculty.

I am proud to say that I was one of the professors subjected to this vile, slanderous attack. If we are known by the company we keep, we are perhaps even better known by the enemies we make. And I am happy to call David Horowitz and his nasty, cowardly minions my enemies. FDR put it best in 1936, when he said “I welcome their hatred.” He was referring to the right-wing reactionaries who tried to block the New Deal that rescued the United States from the worst depression in history. I hope Horowitz will send me a copy of his poster so I can hang it up in my office. The artist who drew my portrait has flattered me by making me look a bit like Salman Rushdie. Can I assume that this means that the Ayatollah Horowitz has declared a fatwa on me?


A Case for Neurohumanities

Ana Hedberg Olenina

1. Introduction: The Current Status of Neuroscience Vis-à-Vis the Humanities

Over the past twenty years, evolving technologies have allowed us to map the activity of the brain with unprecedented precision. Initially driven by medical goals, neuroscience has advanced to the level where it is rapidly transforming our understanding of emotions, empathy, reasoning, love, morality, and free will. What is at stake today is our sense of the self: who we are, how we act, how we experience the world, and how we interact with it. By now nearly all of our subjective mental states have been tied to particular patterns of cortical activity. Beyond the radical philosophical implications, these studies have far-reaching social consequences. Neuroscientists are authoritatively establishing norms and deviations; they make predictions about our behavior based on processes that lie outside our conscious knowledge and control. The insights of neuroscience are being imported into the social sphere, informing debates in jurisprudence, forensics, healthcare, education, business, and politics. A recent collection of essays, compiled by Semir Zeki, a leading European proponent of applied neuroscience, in collaboration with the American lawyer Oliver Goodenough, calls for further integration of lab findings into discussions of public policy and personnel training.[1] Neuroscience thus plays an increasingly active role in shaping society, intervening in arenas traditionally overseen by the humanistic disciplines: political science, law theory, sociology, history, and philosophy.

In the cultural sphere, neuroscience has invigorated the study of art psychology by highlighting the neurophysiological processes that accompany the creation and appreciation of art. Within the burgeoning transdisciplinary field of neuroaesthetics, researchers are evaluating the responses of the amygdala to paintings ranked by subjects as pleasant or unpleasant, documenting patterns of distraction during the reading of Jane Austen’s novels, and exploring the neural mechanisms involved in watching dance – to name but a few recent high-profile projects.[2] Yet it is not always clear how the data gathered in these cutting-edge studies could figure in crucial disciplinary debates within literature, visual art, or performance studies. More often than not, laboratory experiments operate with reductive models that take only a limited set of variables into account. In their current state, the neuro subfields within the humanities make little use of the wealth of knowledge accumulated by established methods of interpretation, such as historical contextualization, hermeneutics, formal analysis, semiotics, narratology, sociological reception studies, and gender and ideological critique.

The “neuro-turn” sweeping the humanities has already generated a great deal of skepticism. Many of these objections revolve around the mind-body problem. As the philosopher Alva Noë, a longtime critic of applied neuroscience, puts it, no research has ever been able to demonstrate how consciousness arises out of brain processes.[3] The reduction of our mind to the latter is not simply a pet peeve of entrenched humanists; rather, it is bad science. To give an obvious example, writes Noë, considering depression as solely a neurochemical brain disorder would mean disregarding the social and psychological factors that contributed to it.[4] By analogy, detailing the functional anatomy of the brain will not provide us with a full picture of the subject’s unique lifetime experiences, which have influenced the formation of cortical synapses. What is more, in explaining mental states on the basis of brain processes, scientists often draw on animal research without duly acknowledging the vast gap that separates us from other species. In doing so, researchers frequently fall prey to what Raymond Tallis calls the fallacy of “Darwinitis” (to be distinguished from legitimate Darwinism), in which the complexity of our mental behavior is reduced to a simplified account of evolutionary adaptation.[5]

The fact that the study of the social and cultural life of the mind is now being outsourced to neuroscience is a direct consequence of the routine defunding of the humanities. As Joseph Dumit points out, the undercutting of the humanities and more traditional psychology “means that certain arenas of inquiry are being starved of evidence.”[6] In the long run, such erosion of specialist knowledge is not good for neuroscience itself – if it indeed aspires to a nuanced view of a system as infinitely complex as the human mind. With regard to financial backing, an alarming trend is currently starting to thwart the prospects of neuroscience as such. Once, fundamental research into the functioning of the central nervous system relied on the big pharmaceutical companies, which invested in developing new drugs for brain disorders. Between 2009 and 2012, however, the majority of international drug corporations, such as GlaxoSmithKline, AstraZeneca, Pfizer, Merck, Sanofi, and Novartis, began to wind down these programs, having realized that it was more profitable to issue slightly modified versions of already existing, FDA-approved medications than to pursue the costly and risky research for new products.[7] Paradoxically, then, in the midst of surging enthusiasm for all things neuro, new medical research on the brain is shrinking. The plateauing serves the current business model of the drug producers, but to presume that everything we need to know about the brain has already been discovered is preposterous.
And yet, as another sign of the times, Dumit cites the complete exclusion of experimental cognitive neuroscience from The Human Brain Project (HBP): a multibillion-dollar EU venture to simulate the entirety of cerebral processes on the computer.[8] The HBP was founded on the astounding premise that “previous neuroscientific research has already generated most of the data necessary for understanding the human brain from genes to cognition.”[9] Once again, the plateauing of fundamental research is being justified by a new priority: the translation of biological processes into digital codes and algorithms. To model the brain in silico would secure the EU’s status as a world leader in neurotechnology. It is disturbing to think that research standards within neuroscience proper are so tightly tied to the political and business priorities of its funders.

2. What Can We Learn from History?

Looking back in time may help us understand the promises and limitations of neuroscience and its impact on the cultural and public spheres. My own research focuses on what may be seen as the precursor of neuroscience at the turn of the twentieth century – the discipline of physiological psychology, which pioneered the systematic quest for the physical underpinnings of mental states. In the late nineteenth century, laboratories of experimental psychology introduced instruments, procedures, and modes of representation that focused on patterns of muscular contractions and changes in vital signs as markers of nervous activity. These data were then deployed in the study of cognitive and affective processes. New scientific discourses rapidly penetrated the broader cultural sphere, generating wide interest in the question of how the body participates in and reflects affective and cognitive processes.

My work examines the repercussions of these methods for the arts, revealing the factors that motivated writers, actors, and filmmakers in the 1910s-1920s to reformulate corporeality in accordance with recent trends in science. These factors ranged from a search for a more immediate transcription of unconscious creative impulses in handwriting, articulatory movements, and gestures, to utilitarian concerns with optimizing labor efficiency and raising the effectiveness of spectacles and propaganda. Both a history and a critical project, my book attends to the ways in which artists and theorists dealt with the materialist reductionism inherent in biologically-oriented psychology – at times, endorsing the positivist, deterministic outlook, and at times, resisting, reinterpreting, and defamiliarizing scientific notions. I am particularly interested in cases in which the explanatory power of science was overstretched, leading to dubious results. For example, in 1928, the inventor of the polygraph lie detector, William Moulton Marston, was recruited by Universal Studios to gauge the emotional responses of film spectators by recording changes in their respiration patterns and systolic blood pressure. Yet, Marston’s findings only replicated gender stereotypes of his time in suggesting that women spectators are predisposed to fall for scenes of romantic conquest.[10]

Overall, what I have learned from my study is that:

  • Science always exists in contexts, both institutional and political;
  • Science is not neutral: biases play into the design of experiments and the interpretation of data, as well as the extrapolation of findings beyond each individual experiment;
  • The application of science in other areas – law, business, education, or aesthetics – is never a direct, transparent channeling of “truth” to achieve more “progressive” results.

This explains why I am alarmed by the news of technologies such as “brain fingerprinting” entering the arsenal of police interrogators.[11] Brain fingerprinting supposedly can reveal whether the subject has any vivid emotional memories associated with the circumstances of the crime, as it detects surges of electrical activity in the brain in response to the interrogator’s prompts. Heavily criticized by leading neuroscientists as underdeveloped, this technology has nevertheless already been adopted in court procedures in India and is currently being tested in Singapore and the state of Florida.[12] Working on the nineteenth and early twentieth centuries, I am very familiar with the devastating social consequences of discredited scientific concepts such as phrenology, Alphonse Bertillon’s photographic galleries of rogues, and the polygraph lie detector. And I could not agree more with the Italian neuroscientists Paolo Legrenzi and Carlo Umiltà, who warn that laboratories of applied neuroscience often misrepresent the revelatory powers of brain research.[13] I believe that errors in science will eventually be corrected by science itself, but the intervention of the humanities is necessary in order to avoid the oversimplification of premises used in experiments and to warn policy makers about rushed wholesale applications of neuroscientific data.

In their book asserting the usefulness of neuroscience for law, Zeki and Goodenough brush off historical misgivings about a purely biological perspective on the mind:

To the extent that biological approaches had been included in the great arguments of the twentieth century between fascism, communism, capitalism, socialism, dictatorship and liberal democracy, they wore a distorted and appropriately discredited aspect that had more to do with political expediency than with any accurate application of the admittedly limited science of the time. But that biology had been thus misused in the past is not a good reason for not taking into account its findings in the future, always of course with appropriate safeguards.[14]

Yet, who will be issuing the safeguards if the humanities continue to erode?

3. Conclusion

The humanities can help neuroscience become aware of its current blind spots, define more profound questions for research experiments, and design more sensitive and responsible methods for applying scientific insights outside the laboratory. In particular, I would like to highlight several issues where the sharing of expertise between neuroscience and the humanities would be crucial.

  • In the field of neuroaesthetics, how can we account for the complexity of human engagement with art objects? Too often we hear of studies that operate with a reductive model of aesthetic experience, relying on the subject’s reports of pleasure correlated with certain cortical activity and formal patterns of the art piece. Yet, to be impressive, art does not necessarily need to be pleasure-inducing. Moreover, the perceptual properties of an art piece are not the only variable shaping our response; a much greater role is played by our cultural knowledge, memory, and imagination. Is there a way to create an empirical, quantitative method to factor in these variables? This formidable task cannot be accomplished without cultural historians, communication specialists, psychologists, and sociologists. Working towards this goal would give us a more nuanced view of individual, contextualized, situational reactions, instead of the limited sets of universal, ahistorical laws that neuroscience gravitates toward.
  • What can we learn from the past? An inquiry into the social and political consequences of biologically-oriented approaches to the human mind, prominent in the nineteenth and twentieth centuries, may help us anticipate the potential dangers of overstretching neuroscience’s findings. Likewise, a study of artists’ and cultural theorists’ engagement with neurophysiological psychology in the past provides both cautionary tales and forgotten insights relevant for contemporary research priorities.
  • Foucauldian historiography may help open our eyes to the very functioning of what Nima Bassiri calls the current “regime of neuroscientific reasoning.”[15] The history of science teaches us that “the emergent authority of the neurosciences is a consequence of, among other things, complex political, economic and material contingencies rather than a consequence of quasi-metaphysical revelations of the brain’s processes.”[16] What factors compel today’s research institutions, educators, politicians, and law enforcement agencies to embrace neuroscientific explanations of the human mind? In what way does such reframing of our individual selves reflect the anxieties and impasses of our culture at this particular historical moment?
  • In light of the recent assertions that gender identity and sexual orientation are fixed during the fetal development of the brain, it is crucially important to draw on the expertise of women’s and gender studies specialists in the humanities. In working with neuroscientists, these experts could help create more nuanced categories of gender identity to be used in experimental setups, as well as more comprehensive and responsible interpretations of results. Moreover, as Sigrid Schmitz and Grit Hoppner argue in their article on “neurofeminism,” the recent research on the plasticity of the brain points to the “social influences on the gendered development of the brain and of behavior,” therefore opening up further avenues for transdisciplinary collaboration between brain scientists and the humanities.[17]
  • Last but not least, the humanities may help prevent the uncritical overstretching of “neuro-facts” and “neuro-explanations” in the popular media and in applied neuroscience technologies. Particularly sensitive areas where such intervention is needed include law theory, criminology, and court ethics.

Ana Hedberg Olenina is an assistant professor of comparative literature and media studies at Arizona State University and the founder of an interdisciplinary research cluster, Embodied Cognition in Performance. Her essays on performance in Soviet avant-garde cinema, modern dance, and the history of applied psychology have appeared in journals such as Discourse and Film History, as well as in several anthologies in Russia, the US, and Germany. Her current book project, Psychomotor Aesthetics: Movement and Affect in Russian and American Modernity, traces the ways in which early-twentieth-century film directors, actors, and performance theorists used the psychological ideas of their time to conceptualize expressive movement and the transference of emotion.

[1] See Oliver R. Goodenough and Semir Zeki, Law and the Brain (New York, 2006), p. xiii.

[2] See Zeki and T. Ishizu, “The Brain’s Specialized Systems for Aesthetic and Perceptual Judgment,” European Journal of Neuroscience 37 (2013): 1413-20; Natalie Phillips, “Distraction as Liveliness of Mind: A Cognitive Approach to Characterization in Jane Austen,” Theory of Mind and Literature, ed. Paula Leverage (West Lafayette, Ind., 2011), pp. 105-22; and Bettina Bläsing et al., The Neurocognition of Dance: Mind, Movement and Motor Skills (New York, 2010).

[3] See Alva Noë, Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (New York, 2009), p. vi.

[4] See ibid., p. viii.

[5] See Raymond Tallis, Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity (Durham, England, 2011).

[6] Joseph Dumit, “The Fragile Unity of Neuroscience,” Neuroscience and Critique: Exploring the Limits of the Neurological Turn, ed. Jan Vos and Ed Pluth (New York, 2016), pp. 223-30, 226.

[7] See ibid., p. 226.

[8] See ibid., p. 226.

[9] Phillip Haueis and Jan Slaby, “Brain in the Shell: Assessing the Stakes and the Transformative Potential of the Human Brain Project,” Neuroscience and Critique, p. 120.

[10] See Ana Olenina, “The Doubly Wired Spectator: Psychophysiological Research on Cinematic Pleasure in the 1920s,” Film History: An International Journal 27, no. 1 (2015): 29-57.

[11] See David Cox, “Can Your Brain Reveal You Are a Liar?” BBC Future, 25 Jan. 2016, www.bbc.com/future/story/20160125-is-it-wise-that-the-police-have-started-scanning-brains

[12] See ibid.

[13] See Paolo Legrenzi and Carlo Umiltà, Neuromania: On the Limits of Brain Science (New York, 2011).

[14] Goodenough and Zeki, Law and the Brain, p. xii.

[15] Nima Bassiri, “Who Are We, If We Are Indeed Our Brains?” Neuroscience and Critique, p. 45.

[16] Ibid.

[17] Sigrid Schmitz and Grit Hoppner, “Neurofeminism and Feminist Neurosciences: A Critical Review of Contemporary Brain Research,” Frontiers in Human Neuroscience, 25 July 2014, journal.frontiersin.org/article/10.3389/fnhum.2014.00546/full


Hurricanes!

Bill Ayers

A natural and expected reaction to the disasters in Texas and Florida is the normal, everyday human response: as fellow creatures, we will help you. Of course.

But when we watch Governors Abbott and Scott rolling up the sleeves of their work shirts, donning their “NAVY” baseball caps, and offering the optics of responsible leadership, it’s only fair to point out that these guys and their donors and allies are leading climate change deniers, that they’ve intentionally underfunded infrastructure development and safety programs, that they are austerity hawks who consistently serve the interests of the banksters and their hedge-fund homies, that they are vicious America-firsters and proponents of the harshest treatment of immigrants, and that they always seem to want FEMA, the EPA, and Washington “off our backs…” except for right now. They urge us to keep politics away from a “natural disaster,” and with the complicity of the bought media and the chattering class it is done—endless images of flood and storm, less and less illuminating as the catastrophe rolls forward, and not a peep about the climate chaos brought on by human-caused change and run-away predatory capitalism. And within the ballooning hypocrisy this: immigrant scrutiny and harsh treatment will be suspended for the storm, so please go to shelters; after the storm, back to normal: scapegoating, targeting, exploiting, oppressing. The gathering catastrophic storms here in Chicago and around the country—terrible schools, scarce jobs and crisis-level unemployment, shoddy health care, inadequate housing, and occupying militarized police forces—are of no interest to the political and financial classes, or the 1%. It’s up to us to organize and rise!

Originally posted at https://billayers.org/2017/09/10/hurricanes/


A Response That Isn’t

Chad Wellmon, Andrew Piper, and Yuancheng Zhu

The post by Jordan Brower and Scott Ganz is less a response than an attempt to invalidate by suggestion. Debate over the implications of specific measurements or problems in data collection is essential to scholarly inquiry. But their post fails to deliver empirical evidence for their main argument: that our descriptions of gender bias and the concentration of institutional prestige in leading humanities journals should be met with deep doubts. Their ethos and strategy of argumentation are to instill doubt via suspicion rather than to achieve clarity about the issues at stake. They do so by proposing strict disciplinary hierarchies and methodological fault lines as a means of invalidating empirical evidence.

Yet as we will show, their claims are based on a misrepresentation of the essay’s underlying arguments; unqualified overstatements of the invalidity of one measure used in the essay; and the use of anecdotal evidence to disqualify the study’s data. Under the guise of empirical validity, their post conceals its own interpretive agenda and plays the very game of institutional prestige that our article seeks to understand and bring to light.

We welcome and have already received pointed criticisms and incisive engagements from others. We will continue to incorporate these insights as we move forward with our work. We agree with Brower and Ganz that multiple disciplinary perspectives are warranted to fully understand our object of study. For this reason we have invited Yuancheng Zhu, who holds a PhD in statistics and is now a research fellow at the Wharton School of the University of Pennsylvania, to review our findings and offer feedback.

With respect to the particular claims Brower and Ganz make, we will show:

  1. they address only a portion of––and only two of seven total tables and figures in––an article whose findings they wish to refute;
  2. their proposed heterogeneity measure is neither mathematically more sound nor empirically sufficient to invalidate the measure we chose to prioritize;
  3. their identification of actual errors in the data set does not invalidate the statistical significance of our findings;
  4. their anecdotal reasoning is ultimately deployed to defend a notion of “quality” as an explanation of extreme institutional inequality, a defense for which they present no evidence.

1. Who Gets to Police Disciplinary Boundaries?

Brower and Ganz argue that our essay belongs to the social sciences and, therefore, that neither the humanities nor the field to which it actually aspires to belong, cultural analytics, has a legitimate claim to the types of argument, evidence, and knowledge that we draw upon. Such boundary keeping is one of the institutional norms we hoped to put into question in our essay, because it is a central strategy for establishing and maintaining epistemic authority.

But Brower and Ganz’s boundary policing is self-serving. Although they identify the entire essay as “social science,” they only discuss sections that account for roughly 35 percent of our original article and only two of seven figures and tables presented as evidence. Our essay sought to address a complex problem, and so we brought together multiple ways of understanding a problem, from historical and conceptual analysis to contemporary data, in order better to understand institutional diversity and gender bias. Brower and Ganz ignored a majority of our essay and yet sought to invalidate it in its entirety.

2. Claiming that HHI Is “Right” Is Wrong.

Brower and Ganz focus on two different methods of measuring inequality as discussed in our essay, and they suggest that our choice of method undermines our entire argument. In the process, they imply that we did not use two different measures or discuss HHI (or address other questions, such as gender bias). They also omit the seven other possible measures we could have used. In other words, they present a single statistical measure as a direct representation of reality, rather than as one method for modeling a challenging concept.

If we view the publication status of each year as a probability distribution over the institutions, then coming up with a single score is simply an attempt to summarize a multidimensional object with one number. Doing so inevitably entails a loss of information, no matter how one chooses to do it. Just as the mean, median, or mode summarizes the average position of a sample or a distribution, the type-token score—or the HH index—summarizes “heterogeneity” from a particular perspective. Brower and Ganz call the use of the type-token ratio a “serious problem,” but in most circumstances one does not call using the mean rather than the median to summarize data a serious problem.

If there is not a single appropriate score to use, which one should we choose? The first question is what assumptions we are trying to model. The type-token ratio we used assumes that the ratio of institutions to articles is a good representation of inequality. The small number of institutions represented across articles suggests that there is a lack of diversity in the institutional landscape of publishing. The HH index looks at the market share of each actor (here, institutions), so that the more articles that an institution commands, the more concentrated the “industry” is thought to be. Because the HH index is typically used to measure financial competitiveness, it is based on the assumption that simply increasing the number of actors in the field decreases the inequality among institutional representation––that is, that more companies means more competitiveness. But as we argue in our piece, this is not an assumption we wanted to make.

Here is a demonstration of the problem, drawn from email correspondence that we received from Ganz prior to publication:

For example, imagine in year 1, there are 10 articles distributed equally among five institutions. Your heterogeneity metric would provide a score of 5/10 = 0.5.

Then in year 2, there are 18 articles distributed equally among six institutions. We would want this to be a more heterogeneous population (because inequality has remained the same, but the total number of institutions has increased). However, according to your metrics, this would indicate less heterogeneity (6/18 = 0.33).

In our case, we do not actually want the second example to suggest greater heterogeneity. In effect the number of articles has increased by 80 percent, but institutional diversity by only 20 percent. In our view heterogeneity has decreased in this scenario, not increased. Having more institutions (more actors in the model) is for us not an inherent good. It’s the ratio of institutions to articles that matters most to us.
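The disagreement between the two measures in this example can be checked directly. Here is a minimal sketch in Python (our own illustration, not the essay's actual analysis code), using the article counts from the example above:

```python
from collections import Counter

def type_token_ratio(articles):
    # Heterogeneity as the ratio of distinct institutions to articles.
    return len(set(articles)) / len(articles)

def effective_number(articles):
    # 1/HHI: the number of equal-sized institutions that would
    # produce the same concentration.
    n = len(articles)
    hhi = sum((c / n) ** 2 for c in Counter(articles).values())
    return 1 / hhi

# Year 1: 10 articles spread equally over 5 institutions.
year1 = [f"inst{i}" for i in range(5) for _ in range(2)]
# Year 2: 18 articles spread equally over 6 institutions.
year2 = [f"inst{i}" for i in range(6) for _ in range(3)]

# The type-token ratio falls (0.5 -> 0.33) while the HHI-based
# effective number rises (5 -> 6): the two measures disagree
# precisely because they encode different assumptions.
print(type_token_ratio(year1), effective_number(year1))
print(type_token_ratio(year2), effective_number(year2))
```

The example makes the choice of assumptions concrete: the HHI rewards any increase in the number of actors, while the type-token ratio penalizes article growth that outpaces institutional growth.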

The second way to answer the question is to understand the extent to which each measure would (or would not) represent the underlying distributions of the data in different ways. Assuming that the number of articles for each journal is relatively similar each year, the type-token score and the HH index actually belong to the same class of metrics, the Rényi entropy. The HH index is equivalent to the entropy with alpha equal to 2 (ignoring the log and the constant), and the type-token score corresponds to alpha equal to 0 (it is the log of the number of “types”; we assume that the number of tokens is relatively constant). To put it more mathematically, the HH index corresponds to the L2 norm of the probability vector, and the type-token score corresponds to the L0 norm. Given that the L1 norm of the probability vector is 1 (probabilities sum to 1), the HH index and the type-token score tend to be negatively correlated. There is, then, not much of a difference between the options. Another special case is alpha = 1, which gives the usual definition of entropy.
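This family relationship can be verified numerically. In the sketch below (our own illustration, with an arbitrary probability vector), the Rényi entropy H_a = (1/(1-a)) log(sum of p_i^a) gives the log of the number of types at a = 0 and -log(HHI) at a = 2, with Shannon entropy as the a -> 1 limit:

```python
import math

def renyi_entropy(probs, alpha):
    # Rényi entropy of a probability vector; alpha = 1 is the
    # Shannon limit, handled as a special case.
    probs = [p for p in probs if p > 0]
    if alpha == 1:
        return -sum(p * math.log(p) for p in probs)
    return math.log(sum(p ** alpha for p in probs)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]   # an arbitrary example distribution
hhi = sum(q ** 2 for q in p)

# alpha = 0: exp(H_0) is the number of types (here, 4 institutions).
assert math.isclose(math.exp(renyi_entropy(p, 0)), len(p))
# alpha = 2: exp(H_2) is the "effective number," i.e. 1/HHI.
assert math.isclose(math.exp(renyi_entropy(p, 2)), 1 / hhi)
```

So the two contested scores sit at opposite ends of a single one-parameter family, which is why treating one as "right" and the other as a "serious problem" is hard to justify on mathematical grounds alone.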

A big assumption is that the number of articles each year stays relatively constant. It is also debated and debatable which of the two (TT score or HH index) is more sensitive to sample size. If we look at the article distributions for each journal, the assumption of a constant number of articles is in this case a fair one to make. Once it becomes nonzero, the number of publications for each journal stays relatively unchanged in terms of scale. It is indeed the case that sample size will affect both metrics, just as sample entropy is affected by sample size. One could eliminate the effect of sample size by randomly downsampling each year to the same number of articles (or perhaps by aggregating neighboring years and then downsampling).

If the two metrics are similar, then why do they appear to tell different stories? In fact, upon further review they appear to be telling the same story. In figure 1, we see the two scores plotted for each journal. The first row shows the type-token scores for each of the four journals, red for institutions and blue for PhDs. The second row shows 1/HHI, the effective number. In none of the plots do we see the dramatic decrease of heterogeneity in the early years shown in figure 4 of the original essay, or the consistently strong increase of heterogeneity that Brower and Ganz argue for. The first row and the second row agree with each other in terms of the general trend most of the time. This is because in our figure 4 and in Brower and Ganz’s replication, the four journals are aggregated. When two journals (Critical Inquiry and Representations) come into play in the late seventies and the eighties, the scores are dragged down because, on average, those two journals are less diverse. Hence, the two metrics do give us the same trend once the journals are disaggregated.

So when we pull apart the four journals, what story do they tell? If we run a linear regression model on each of the journals individually, since 1990 there has either been no change or a decline of heterogeneity for both measures (with one notable exception, PMLA for author institutions which has increased). In other words, either nothing changes about our original argument, or things actually look worse from this perspective.

We were grateful to Brower and Ganz when they first shared their thinking about HHI and tried to acknowledge that gratitude, even while disagreeing with their assumptions, in our essay. Understanding different models and different kinds of evidence is, we’d suggest, a central value of scholarship in the social sciences or in the humanities. That is why we discussed the two measures together. But to suggest that the marginal differences between the scores invalidate an entire study is wrong. It is also not accurate to imply that we made this graph a centerpiece of our essay—“its most provocative finding,” in their words.

Consider how we frame our discussion of the time-based findings in our essay. We point out the competing ways of seeing this early trend and emphasize that post-1990 levels of inequality have remained unchanged. Here is our text:

Using a different measure such as the Herfindahl-Hirschman Index (HHI) discussed in note 37 above suggests a different trajectory for the pre-1990 data. According to this measure, prior to 1990 there was a greater amount of homogeneity in both PhD and authorial institutions, but no significant change since then. In other words, while there is some uncertainty surrounding the picture prior to 1990, an uncertainty that is in part related to the changing number of journal articles in our data, since 1990 there has been no significant change to the institutional concentrations at either the authorial or PhD level. It is safe to say that in the last quarter century this problem has not improved.

In other words, based on what we know, it is safe to say that the problem has remained unchanged for the past quarter century, though one could argue that in some instances it has gotten worse. If you turned the post-1990 data into a Gini coefficient, the degree of institutional inequality for PhD training would be 0.82, compared to a Gini of 0.45 for U.S. wealth inequality. But for Brower and Ganz, this recent consistency is overshadowed by the earlier improvement that they detect. To insist that institutional diversity is improving is, at best, to miss the proverbial forest for the trees. At worst, it’s misleading. Their argument is something like: We know there has been no change to the extremely high levels of concentration for the past twenty-five years. But if you just add in twenty more years before that then things have been getting better.
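The 0.82 figure above is a Gini coefficient computed over per-institution article counts. For readers who want to check numbers like these, a minimal implementation (the counts below are invented for illustration) looks like this:

```python
def gini(counts):
    """Gini coefficient of a list of nonnegative counts (0 = perfect equality)."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula: G = (2 * sum(i * x_i)) / (n * total) - (n + 1) / n,
    # with ranks i = 1..n over the sorted counts
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

# Perfectly equal shares give a Gini of 0
assert gini([5, 5, 5, 5]) == 0.0
# All articles from a single institution approach the maximum, (n - 1) / n
assert abs(gini([0, 0, 0, 12]) - 0.75) < 1e-12
```

On article counts per PhD-granting institution, a value near 0.8 indicates that a small set of institutions accounts for the great majority of articles.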

Their second example about the relative heterogeneity between journals reflects a similar pattern: legitimate concern about a potential effect of the data that is blown into a dramatic rebuttal not supported by the empirical results.

In the one example of PhD heterogeneity, they show that a random sample of articles always has PMLA with more diversity than Representations and yet our measure shows that Representations exhibits more diversity than PMLA. What is going on here?

It appears Representations is unfairly being promoted in our model because it publishes so many fewer articles than PMLA (PMLA has more than twice as many articles as Representations). But notice how they choose to focus on the two most disparate journals to make their point. What about for the rest of the categories?

Interestingly, when it comes to institutional diversity, the only difference that their proposed measure makes is to shift the relative ranking of Representations. What is troubling here is the fact that they chose not to show this similarity when they replicated our findings, which are shown here:

                 Author                    PhD
Rank         Ours      HHI            Ours      HHI
1            PMLA      PMLA           NLH       NLH
2            NLH       NLH            Rep       PMLA
3            Rep       CI             PMLA      CI
4            CI        Rep            CI        Rep

 

In other words, our essay overestimates one journal’s relative diversity. We agree that their example is valid, important, interesting and worth using. But as an argument for invalidation it fails. How could their measure invalidate our broader argument when it reproduces all but one of our findings?

Given both measures’ strong correlation with article output, we would argue that the best recourse is to randomly sample from the pools to control for sample size rather than rely on yearly data. In this way we avoid the trap of type-token ratios that are overly sensitive to sample size and the HHI assumption of more institutions being an inherent good. Doing so for 1,000 random samples (of 100 articles per sample), we replicate the rankings produced by the HHI score (ditto for a Gini coefficient). So Brower and Ganz are correct in arguing that we overrepresented Representations’ diversity, which should be the lowest for all journals in both categories. We are happy to revise this in our initial essay. But to suggest as they do that this invalidates the study is a gross oversimplification.
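The resampling procedure can be sketched as follows. The journal data here are invented stand-ins, not the actual dataset, while the sample and repetition counts (100 articles, 1,000 samples) mirror the ones described above:

```python
import random
from collections import Counter

def effective_number(labels):
    """Inverse HHI: the effective number of institutions in a list of labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return 1 / sum((c / total) ** 2 for c in counts.values())

def mean_diversity(labels, sample_size=100, n_samples=1000, seed=0):
    """Mean effective number over fixed-size random subsamples,
    which controls for journals publishing different numbers of articles."""
    rng = random.Random(seed)
    return sum(
        effective_number(rng.sample(labels, sample_size))
        for _ in range(n_samples)
    ) / n_samples

# Two invented journals with the same article count but different concentration
journal_a = ["U%02d" % (i % 40) for i in range(400)]                  # even spread
journal_b = ["U00"] * 200 + ["U%02d" % (i % 10) for i in range(200)]  # one dominant

# With sample size held fixed, the comparison reflects concentration alone
assert mean_diversity(journal_a) > mean_diversity(journal_b)
```

Because every journal is judged on subsamples of identical size, neither the type-token ratio’s sensitivity to sample size nor differences in yearly article counts can drive the ranking.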

3. When Is Error a Problem?

Brower and Ganz’s final major point is this: “We are concerned that the data, in its current state, is sufficiently error-laden to call into question any claims the authors wish to make.” This is indeed a major cause for concern. But Brower and Ganz provide little evidence for their sweeping claim.

Brower and Ganz are correct to point out errors in our data set, a data set which we made public months ago precisely in hopes that colleagues would help us improve it. This is indeed a nascent field, and we do not have the same long-standing infrastructures in place for data collection that the social sciences do. We’re learning, and we are grateful to have generous people reading our work in advance and helping contribute to the collective effort of improving data for public consumption.

As with the above discussion about HHI, the real question is, what is the effect of these errors in our data set? Is the data sufficiently “error-laden” to call into question any of the findings, as they assert? That’s a big claim, one that Brower and Ganz could have tested but chose not to.

We can address this issue by testing what effect random errors might have on our findings. This too we can do in two ways. We can either remove an increasing number of articles to see what effect taking erroneous articles out of the data set might have, or we can randomly reassign labels according to a kind of worst-case scenario logic. In the case of gender, it would mean flipping a gender label from its current state to its opposite. What if you changed an increasing number of people’s gender—how would that impact estimates of gender bias? In the case of institutional diversity, we could relabel articles according to some random and completely erroneous university name (“University of WqX30i0Z”) to simulate what would happen if we mistakenly entered data that could not contribute to increased concentration (since we would choose a new fantastic university with every error). How many errors in the data set would be necessary before assumptions about inequality, gender bias, or change over time need to be reconsidered?
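A sketch of the institutional worst-case simulation might look like the following. The affiliation data and function names are hypothetical illustrations, not the code behind figure 2:

```python
import random
from collections import Counter

def hhi(labels):
    """Herfindahl-Hirschman Index over a list of institution labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

def inject_institution_errors(labels, error_rate, rng):
    """Relabel a fraction of articles with unique fictitious universities,
    the worst case for concentration: each error can only dilute it."""
    out = list(labels)
    n_errors = int(error_rate * len(out))
    for j, idx in enumerate(rng.sample(range(len(out)), n_errors)):
        out[idx] = "FakeU_%d" % j  # every error is a brand-new institution
    return out

rng = random.Random(42)
# Hypothetical concentrated field: half the articles from one institution
labels = ["Harvard"] * 500 + ["U%03d" % i for i in range(500)]

baseline = hhi(labels)
# Even with 10 percent of the data replaced by pure noise,
# the measured concentration barely moves
corrupted = hhi(inject_institution_errors(labels, 0.10, rng))
assert corrupted < baseline          # noise can only reduce concentration
assert corrupted > 0.5 * baseline    # but the signal clearly survives
```

The same template works for gender: flip a growing fraction of labels and watch how slowly the estimated bias erodes.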

Figure 2 shows the impact that those two types of errors have on three of our primary metrics. As we can see, removing even 50 percent of articles from our data set has no impact on any of our measures. The results for gender bias and overall concentration are more sensitive to random errors. But here too it takes more than 10 percent of articles (or over 500 mistakes) before you see any appreciable shift (before the Gini drops below 0.8 for PhDs and 0.7 for authors). Gender equality is only achieved when you flip 49 percent of all authors to their opposite gender. And in no case does the problem ever look like it’s improving since 1990.

But what if those errors are more systematic—in other words, what if the errors they identify are not random but have a particular quality about them (for example, if everyone wrongly included had actually gone to Harvard)? Let’s take a look. Here are the errors they identify:

  • 100 mislabeled titles
  • twenty-three letters that should not be considered publications
  • one omitted article that was published but not included because it was over our page filter limit
  • eight articles that appear in duplicate and one in triplicate
  • one mislabeled gender (sorry Lindsay Waters)

First, consider those 100 mislabeled titles. We were not counting titles, but rather institutional affiliations. While they do matter for the record (and we have corrected them; the corrected titles will appear in the revised version of our publicly available data set), they have little bearing on our findings.

In terms of duplicates, all but one duplicate occurred because authors have multiple institutional affiliations. We have clarified this by adding article IDs and a long document explaining all instances of duplicates, which will be included with the revised data.

So what about those letters? Actually, the problem is worse than Brower and Ganz point out. We inadvertently included a number of texts below the six-page filter we had set as our definition of an article. We are thankful that Brower and Ganz have helped identify this error. After a review of our dataset, we found 251 contributions that did not meet our article threshold. These were extremely short documents (one or two pages), such as forums, roundtables, and letters that should not have been included.

So, do these errors call into question our findings? How do they impact the overall results?

Here is a list of the major findings before and after we cleaned our dataset:

 

                                           Before      After

Gini coefficient
  PhD institution                          0.816       0.816
  Author institution                       0.746       0.743

Diversity over time (since 1990)
  # cases of decrease                      3           3
  # cases of no change                     5           5
  # cases of increase                      0           0

Journal Diversity Ranking                  PMLA        PMLA
                                           NLH         NLH
                                           CI          CI
                                           Rep         Rep

Gender Bias (% Women)
  4 Journal Yearly Mean                    30.4%       30.7%
  4 Journal Yearly Mean Since 2010         39.4%       39.5%

 

Finally, they say we have failed to adequately define our problem, once again invalidating the whole undertaking:

Wellmon and Piper fail to adequately answer the logically prior question that undergirds their study: what is a publication?

What is a publication, indeed? And why and how did printed publication come to be the arbiter of scholarly legitimacy and authority in the modern research university? We think these are important and “logically prior” questions as well and that’s why we devoted the first 3,262 words of our essay to considering them. This hardly exhausts what is a complex conceptual problem, but to suggest we didn’t consider it is disingenuous.

So let’s start by granting Brower and Ganz their legitimate concern. Confronted with the historical and conceptual difficulty of defining a publication, we made a heuristic choice. For the purposes of our study, we defined an article as a published text of six pages or more in length. It would be interesting to focus on a narrower definition of “publication,” as a “research article” of a specified length that undergoes a particular type of review process across time and publications. But that in no way reflects the vast bulk of “publications” in these journals. Imposing norms that might be better codified in other fields, Brower and Ganz’s desired definition overlooks the very real inconsistencies that surround publication and peer review practices in the humanities generally and in these journals’ histories in particular. As with their insistence on a single measure, they ask for a single immutable definition of a publication for a historical reality that is far more varied than their definition accounts for. Their insistence on definitional clarity is historically anachronistic and disciplinarily incongruous. It is precisely this absence of consensus and self-knowledge within humanities scholarship––and the consequences of such non-knowledge––that our piece aims to bring to light.

Clearly more work can be done here. Subsetting our data by other parameters and testing the extent to which this impacts our findings would indeed be helpful and insightful. And we welcome more collaboration to continue to remove errors in the dataset. In fact, after the publication of our essay, Jonathan Goodwin kindly noted anomalies in our PhD program size numbers, which when adjusted change the correlation between program size and article output from 0.358 to 0.541.

4. Is Quality Measurable?

In sum, we readily concede that the authors raise legitimate concerns about the quality and meaning of different measures and how they might, or might not, tell different stories about our data. This is why we discuss them in our piece in the first place. We also appreciate that they have drawn our attention to errors in the dataset. We would be surprised if there were none. The point of statistical inference is to make estimations of validity given assumptions about error.

What we do not concede is that any of these issues makes the problem of institutional inequality and gender disparity in elite humanities publishing disappear. None of the issues Ganz and Brower raise invalidates or even undermines the basic findings surrounding our sample of contemporary publishing––that scholarly publishing in these four prestige humanities journals is massively concentrated in the hands of a few elite institutions, that most journals do not have gender parity in publishing, and that the situation has not improved in the past quarter century.

There are many ways to think about what to do about this problem. And here the authors are on even shakier evidentiary ground. We make no claims in our piece about what the causes of this admittedly complex problem might be. “Where’s the test for quality?” they ask. This is precisely something we did not test because the data in its current form does not allow for such inferences. In this first essay, which is part of a longer-term project, we simply want readers to be aware of the historical context of academic publication in the humanities and introduce them (and ourselves) to its current state of affairs for this limited sample.

Ganz and Brower, by contrast, assume, in their response at least, that quality––a concept for which they provide no definition and no measure––is the cause of the institutional disparity we found. They suggest that blind peer review is the most effective guarantor of this nebulous concept called “quality.” They provide no evidence for their claims. But there is strong counterevidence that peer review is not, in fact, as robust a control mechanism as the authors wish to insinuate. For a brief taste of how complex and relatively recent peer review is, we would recommend Melinda Baldwin’s Making “Nature”: The History of a Scientific Journal as well as studies of other fields such as Rebecca M. Blank’s “The Effects of Double-Blind versus Single-Blind Reviewing: Experimental Evidence from The American Economic Review” or Amber E. Budden’s “Double-Blind Review Favours Increased Representation of Female Authors.”[1]

These are complicated issues with deep institutional and epistemic consequences. It is neither analytically productive nor logically coherent to conclude, as Ganz and Brower do, that because high prestige institutions are disproportionately represented in high prestige publications, high prestige institutions produce higher quality scholarship. It is precisely this kind of circular logic that we hope to question before asserting that the status quo is the best state of affairs.

In our essay we simply argue that whatever filter we in the humanities are using to adjudicate publication systems (call it patronage, call it quality, call it various versions of blind peer review, call it “Harvard and Yale PhDs are just smarter”) has been remarkably effective at maintaining both gender and institutional inequality. This is what we have found. We would welcome a debate about the causes and the competing goods that various filtering systems must inevitably balance. This is precisely the type of debate our article hoped to invoke. But Brower and Ganz sought to invalidate our arguments and findings by anecdote and quantitative obfuscation. And the effect, intended or not, is an argument for the status quo.

[1] See Melinda Baldwin, Making “Nature”: The History of a Scientific Journal (Chicago, 2015); Rebecca M. Blank, “Effects of Double-Blind versus Single-Blind Reviewing: Experimental Evidence from The American Economic Review” (American Economic Review 81 [Dec. 1991]: 1041–67); and Amber E. Budden et al., “Double-Blind Review Favours Increased Representation of Female Authors” (Trends in Ecology and Evolution 23 [Jan. 2008]: 4–6).

Chad Wellmon is associate professor of German studies at the University of Virginia. He is the author, most recently, of Organizing Enlightenment: Information Overload and the Invention of the Modern Research University and coeditor of Rise of the Research University: A Sourcebook. He can be reached at mcw9d@virginia.edu. Andrew Piper is professor and William Dawson Scholar of Languages, Literatures, and Cultures at McGill University. He is the director of .txtLAB, a digital humanities laboratory, and author of Book Was There: Reading in Electronic Times. Yuancheng Zhu holds a PhD in statistics and is a research fellow at the Wharton School of the University of Pennsylvania.

 

Leave a comment

Filed under Uncategorized

One Finch, Two Finch, Red Finch, Blue Finch: Measuring Concentration and Diversity in the Humanities, A Response to Wellmon and Piper

Jordan Brower and Scott Ganz

 

Introduction

In “Publication, Power, and Patronage: On Inequality and Academic Publishing,” Chad Wellmon and Andrew Piper motivate their study in a laudable spirit: they seek to expose and root out elitism in the name of a more egalitarian and truly meritocratic academy.[1] That the study at the same time makes a claim for more studies of its kind— “What we need in our view is not less quantification but more” (“P”)—seems justifiable based on the results it found. We find, then, an argument for the continued practice of the digital humanities (DH).

But this study is not DH as we typically understand the term. Wellmon and Piper are not producing new software or a digital archive, or offering an interpretation of a large corpus of books using quantitative methods. Rather, they are humanists making a claim about social organization, where the organization in question is their own field. This is an important distinction to make. Rather than holding their study to a research standard held by other digital humanists, we ought instead to evaluate their work using the rubrics of disciplines that answer similar kinds of questions.

Specifically, Wellmon and Piper assess the heterogeneity of university representation in top humanities journals as an indicator of the extent to which publication practices in the humanities are corrupted by “patterns and practices of patronage and patrimony and the tight circulation of cultural capital” (“P”). Perhaps unknowingly, the authors find themselves a part of a long and contentious literature in the social sciences[2] and natural sciences[3] over the creation and interpretation of metrics for diversity (and its opposite, concentration) that continues through the current decade.[4] The authors put themselves into the shoes of ecologists seeking novel data in unexplored terrain. Traditional bibliometric indicators of status and concentration in the sciences that rely on citation and coauthorship lose traction in the humanities.[5] As such, the authors seek to do what any good ecologist might: they go out into the field and count species.

In their analysis, the field is represented by articles published in four prominent humanities journals, and observations are individual articles. Observations are grouped into species by examining their university affiliation: is that finch Harvard crimson or Yale blue? Then the raw counts are aggregated into summary metrics that try to capture the concept of heterogeneity. The latter half of their paper presents conclusions drawn from their expedition.

The first two parts of this essay examine a pair of questions associated with this effort. First, how closely does Wellmon and Piper’s constructed measure of heterogeneity reflect what is usually meant by heterogeneity? Second, are the data collected representative of the field of the humanities that they seek to analyze? In our final section, we turn to a brief consideration of the broader cultural and political motivations for and implications of this study.

We conclude that the heterogeneity metric is inappropriate. We also worry that the data may not be representative of the field of the humanities due to numerous recording errors and a lack of conceptual clarity about what constitutes a publication. As two pillars of statistical analysis are the representativeness of the sample and the consistency of measure, we believe the study fails to achieve the level of methodological rigor demanded in other fields. There are many aspects of Wellmon and Piper’s study that live up to the highest standards of scientific method. Our criticism would not have been possible had the authors’ data and methods not been transparent or had the authors not willingly engaged in lengthy correspondence. However, the shortcomings of their quantitative analysis corrupt the foundations of their study’s conclusions.

Our essay is also a call for digital humanists to take seriously the multidisciplinary nature of their project. At a time when universities are clamoring to produce DH scholarship, it is imperative that humanities scholars subject that work to the same level of rigorous criticism that they apply to other types of arguments. At the same time, DH scholars must admit that the criticism they seek is different in kind. This is to say that in order to take DH work seriously, scholars must take the methods seriously, which means an investment in learning statistical methods and a push towards coauthorship with others willing to lend their expertise.

 

Measuring Heterogeneity

The latter half of Wellmon and Piper’s analysis measures the heterogeneity in the data they collect. Their “heterogeneity score,” which is the total number of unique universities divided by the total number of articles, seeks to capture a spectrum from “institutional homogeneity” to “institutional difference” (“P”). They justify their metric through reference to the similar type-token ratio metric of vocabulary richness.

There are two serious problems with Wellmon and Piper’s measure. The first is that heterogeneity is not synonymous with richness. Heterogeneity instead is associated with both richness and evenness.[6] In the present context, richness refers to the number of unique universities represented in each journal. Evenness refers to the extent to which articles are equally distributed among the institutions represented. A good metric for heterogeneity should therefore increase with the number of universities represented and increase with evenness of representation across universities. Wellmon and Piper treat a journal that publishes authors from one university eleven times and authors from nine other universities one time each the same as a journal that publishes authors from ten universities two times each.
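The example is easy to verify directly. In the sketch below (the institution labels are invented), both journals receive an identical type-token score, while a measure that incorporates evenness, such as the inverse HHI discussed later in this essay, separates them:

```python
from collections import Counter

def tt_score(labels):
    """Wellmon and Piper's heterogeneity score: unique institutions / articles."""
    return len(set(labels)) / len(labels)

def effective_number(labels):
    """Inverse HHI, which rewards evenness as well as richness."""
    counts = Counter(labels)
    total = sum(counts.values())
    return 1 / sum((c / total) ** 2 for c in counts.values())

# Journal A: one university eleven times, nine others once each (uneven)
journal_a = ["U0"] * 11 + ["U%d" % i for i in range(1, 10)]
# Journal B: ten universities, two articles each (perfectly even)
journal_b = ["U%d" % i for i in range(10)] * 2

# The type-token score cannot tell the two apart...
assert tt_score(journal_a) == tt_score(journal_b) == 0.5
# ...while an evenness-aware measure can
assert effective_number(journal_b) > effective_number(journal_a)
```

Both journals have twenty articles and ten unique universities, so richness alone is blind to the concentration in journal A.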

Another useful property of a heterogeneity metric is that it should not decline as the total number of observations increases. Whether the ecologist spends a day or a month counting species on a tropical island should not affect the assessed level of heterogeneity, on average. (That said, if an ecologist spends one month each on two different islands and records more observations on the first than the second, that might well indicate greater ecological diversity on the first island.) In this respect, the Wellmon-Piper heterogeneity metric also fails, because larger observation counts will mechanically produce lower scores indicating more homogeneity. (As Brian Richards notes, the type-token ratio, too, faces this shortcoming, which is why linguists assign their subjects a fixed number of tokens.[7]) The probability of observing a first-time publisher decreases with each additional article recorded. Journals that publish more articles (such as PMLA) will therefore tend to have lower heterogeneity scores than those that publish fewer articles (like Representations).

The following thought experiment demonstrates this troubling property of Wellmon and Piper’s heterogeneity score. We take one thousand random samples of sixty articles from PMLA and Representations, journals that are indicated to be approximately equal in their heterogeneity score with respect to PhD institution. We then calculate the mean of Wellmon and Piper’s heterogeneity score across the samples as the number of articles grows from ten to sixty. Figure 1 displays the trend of the mean heterogeneity score for PMLA (black solid line) and Representations (red solid line) (fig. 1). For all article counts, PMLA is identified as considerably more heterogeneous than Representations. However, when evaluated at the average number of articles per year (represented for PMLA and Representations by the black and red dotted lines, respectively), Representations receives a higher mark for diversity (indicated by the fact that the dotted red line exceeds the dotted black line). This is the type of perverse outcome a metric for heterogeneity should seek to avoid.
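This mechanical dependence on sample size is easy to reproduce. In the sketch below, a single synthetic journal is drawn from fifty institutions; its mean type-token score falls steadily as the number of sampled articles grows, even though the underlying diversity never changes:

```python
import random

def tt_score(labels):
    """Type-token heterogeneity score: unique institutions / articles."""
    return len(set(labels)) / len(labels)

def mean_tt(labels, k, n_samples=1000, seed=1):
    """Mean type-token score over random subsamples of k articles."""
    rng = random.Random(seed)
    return sum(tt_score(rng.sample(labels, k)) for _ in range(n_samples)) / n_samples

# One synthetic journal: 600 articles spread over 50 institutions
rng = random.Random(0)
articles = ["U%02d" % rng.randrange(50) for _ in range(600)]

# The same journal looks steadily less "heterogeneous" as more articles
# are observed, purely as an artifact of sample size
scores = [mean_tt(articles, k) for k in (10, 30, 60)]
assert scores[0] > scores[1] > scores[2]
```

The probability that the next article introduces a first-time institution shrinks with every article already counted, which is exactly the mechanism described above.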

Standard measures of diversity and concentration avoid these pitfalls. They decompose into a function of the equality of the shares across the groups represented and the total number of groups. They are not mechanically tied to the number of observations. One metric of concentration that has both of these characteristics is the Herfindahl-Hirschman Index (HHI). Use of the HHI and similar indices is widespread. For example, the HHI is used by the US Department of Justice when considering the competitiveness implications of potential mergers. The HHI is one of a class of metrics that are a function of the weighted sum of the shares of overall resources allocated to each group that is observed in a population.[8] The standard HHI equals the sum of the squared market shares of each firm in an industry or, in our setting, the share of the number of articles published in a journal by authors from each university. The range of the HHI thus spans from 1/N to 1, where N is the number of firms in an industry. The inverse of the HHI (in other words, 1/HHI) is a commonly used measure of diversity (in ecology, the inverse of the HHI is referred to as a “Hill number”). This metric corresponds to the number of firms in an industry in which all firms have equal market share with the equivalent HHI as the one under observation. As such, it is often called the “effective number” of firms. Imagine an industry with four firms, one with half of the market share and the others with one sixth each. The HHI equals (1/2)² + 3 · (1/6)² = 1/4 + 1/12 = 1/3, which is the same as the HHI in an industry with three firms with equal market share. The effective number of firms is, therefore, three.
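The worked example can be checked in a few lines; this is a sketch, with the share vector taken from the text:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares."""
    return sum(s ** 2 for s in shares)

def effective_number(shares):
    """1/HHI: the number of equal-share firms with the same HHI."""
    return 1 / hhi(shares)

# One firm with half the market, three with a sixth each
shares = [1/2, 1/6, 1/6, 1/6]
assert abs(hhi(shares) - 1/3) < 1e-12
assert abs(effective_number(shares) - 3) < 1e-12

# The same HHI as an industry of three firms with equal market share
assert abs(hhi([1/3, 1/3, 1/3]) - 1/3) < 1e-12
```

Unlike the type-token score, both quantities depend only on the shares, not on how many observations were used to estimate them.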
 Figure 2 recreates figure 1 using the effective number of universities metric (fig. 2). PMLA remains considerably more diverse with respect to PhD affiliation than Representations for all sample sizes. However, now the difference in the number of articles published per year creates a larger divergence between the estimated level of institutional diversity across the two journals.


Heterogeneity Comparisons Over Time and Across Journals

Using the effective number of universities metric changes many of the quantitative conclusions in the study. The trend toward journals publishing more articles over time and the differences between the count of articles published annually across the four journals leads Wellmon and Piper to mistakenly identify more recent and larger journals as less heterogeneous. For example, we reproduce Wellmon and Piper’s Figure 4, which examines the trend in heterogeneity across the four journals over time, using the effective number of universities metric in figure 3 (fig. 3). In the figure, the black line indicates the heterogeneity with respect to the authors’ current institution and the red line indicates heterogeneity with respect to the authors’ PhD institution. Wellmon and Piper’s graph indicates a long-term decline in heterogeneity, but little change since 1990. While we also observe little change since 1990, the “effective number of firms” metric indicates a longer-term trend towards greater diversity.

Similarly, we come to different conclusions about the relative level of heterogeneity across the four journals.[9] In table 1, we present the effective number of universities for each journal, both in terms of the current and PhD university affiliations of the authors, along with 95 percent confidence intervals, using the methodology in Chao and Jost.[10]

We find that New Literary History (NLH) and PMLA are the most heterogeneous, both in terms of the author’s current and PhD affiliations. Representations is the least heterogeneous. Critical Inquiry (CI) falls in the middle. Unlike Wellmon and Piper, we find this ranking to be consistent across the types of author affiliation. The journals with more diverse institutional representation in terms of current author affiliation are also more diverse in terms of where the author received their PhD. However, we do observe that there is greater disparity across journals when examining the diversity of the authors’ current affiliation than the PhD affiliation.

 

Continue reading

2 Comments

Filed under Uncategorized

Tzvetan Todorov (1939–2017)

Françoise Meltzer

 

Tzvetan Todorov—the literary theorist, historian, philosopher, structuralist, and essayist—died in Paris at the age of seventy-seven in February of this year. His importance to every one of the disciplines and subjects to which he turned his attention is enormous. A Bulgarian born in 1939, Todorov emigrated to Paris to do graduate work and was a student of Roland Barthes. His Bulgarian experience under Soviet communism gave him a mistrust of “everything the state defends or that is related to the public sphere.”[1] But the fall of the Berlin Wall in 1989 changed that mistrust: “I felt like I was no longer conditioned by those childhood and teenage years living in a totalitarian world.” Thus it is unsurprising that Todorov’s intellectual trajectory took a strong turn toward ethics and politics beginning in the early eighties (as if sensing the end of Soviet communism) and continued in that direction until his death.


Todorov famously began as a structuralist, well-schooled in the Russian formalism of the twenties and the Prague School of Linguistics of the thirties. Early in his career, he translated the Russian formalists into French, Théorie de la littérature. Textes des formalistes russes (1965). One might say that his form of structuralism is politically “safe” to a certain extent, searching as it does for repetitive patterns and deep meanings that are frequently unrelated to their manifestations (plot events and narrative systems, for example) and certainly unconcerned—at least overtly—with the hegemony of the state. Literature and Signification (1969) continued in this vein and also put him on the map as the scholar who created a renaissance in rhetoric.

Todorov continued with structuralist analyses, which he combined with semiotics and a study of narrative systems (along with Gérard Genette, Barthes, and the early Fredric Jameson). For this approach to narrative, in collaboration with Algirdas Julien Greimas and Barthes, Todorov coined the term narratology. All types of narrative, he wrote, “pertain less to poetics than to a discipline which seems to me to have solid claim to the right of existence, and which could be called narratology.”[2] The sentence is taken from an article, “The 2 Principles of Narrative,” which analyzed what Vladimir Propp had called “functions” (in Russian fairy tales)—the succession of recurring plot elements that Propp showed could be mapped, or listed, in succession. Todorov added, contra Propp’s system, that the relationship between the units (or functions) cannot be only one of succession but must “also be one of transformation.”[3] For example, a narrative may present events at the beginning of its récit, but the reader will see them differently if the same events return at the end. And yet even this syntactical power of transformation is not what is to be valued most in a narrative, Todorov adds. Narratives can be further broken down into the gnoseological or the mythological type; more layers need to be added in parsing any given plot’s events. Even in the early seventies, then, he was already drifting away from structuralism and its ancestors, the Russian formalists. Though Todorov continued to work in semiotics, structuralist approaches, and narratology—for example, in his 1971 The Poetics of Prose, which continues the analysis of narrative (récit) on which the article in question draws—something else was brewing.

Well known and much admired in France, by 1970 Todorov had authored many books and had helped to found, with Genette, the journal Poétique. In the same year, shortly before The Poetics of Prose, he published a work so immediately important and successful that it was added to the French school curriculum the year after its appearance, where it remains today. That book is The Fantastic: A Structural Approach to a Literary Genre. This is not the place to summarize Todorov’s famous argument. Suffice it to say that to this day, no student or scholar can write on the fantastic without alluding to Todorov’s seminal work on the subject.

But, to repeat, something else was brewing. Todorov’s early life under communism had profoundly marked him: “Today I believe,” he said in a late interview, “that my initial interest in questions of form and structure in literature . . . was closely linked to the fact that debating ideas was impossible in a totalitarian country.” If you wanted to say anything about literature in that context, he continued, you had the choice “between serving the purposes of official propaganda and focusing on the formal aspects of the text alone.” So he concentrated solely on the formal aspects of texts. By the early eighties, however, as Soviet communism was collapsing, Todorov was changing. In 1982 The Conquest of America: The Question of the Other appeared in France (the English translation followed two years later). The book, a type of echo and counter-response to Alexis de Tocqueville’s Democracy in America, examines the Mesoamerican Indian population’s confrontation with the Spanish conquistadors of the sixteenth century. Tocqueville, curiously enough, had written that while the Spanish had been horrible in their treatment of native populations, they were nonetheless unable to eradicate the native populations of North America. The United States succeeded in doing this, Tocqueville continues, “with felicity” and “without shedding any blood.” Setting this appalling and absurd conclusion aside, Todorov writes his study around a different question: to what extent did the fact that the Aztecs had no notion of the Other, while the Spaniards had a very clear and (let us say) xenophobic and racist one, contribute to the destruction of the pre-Columbian civilizations of Mexico and the Caribbean? Could it explain the Aztec passivity in the face of the brutal conquerors? Todorov mainly consulted the archives of Columbus and “then of his contemporaries and companions.” He concludes his work with the following: “For Cortés, the conquest of knowledge leads to the conquest of power. I take the conquest of knowledge from his example, even if I do so in order to resist power.”

From Conquest onward, Todorov engaged deeply with issues of ethics in the political realm. He does not believe that history obeys a system, he writes in conclusion, but believes rather that “to become conscious of the relativity . . . of any feature of our culture is already to shift it a little, and that history (not the science but its object) is nothing other than a series of such imperceptible shifts.”[4] If Michel Foucault explored the tectonic shifts that occurred and caused changes in varying power structures in varying eras and discourses, Todorov is more optimistic; he believes that uncovering historical “shifts” can create a shift in itself, such that an event can be wrested from its underpinnings and historical behaviors can be somewhat modified in turn. “I myself,” says Todorov in another interview, “aspire less today than in the past to produce a text reducible to its theses; I try to enrich it with stories, other people’s or my own, and, as we know, stories give rise to interpretations, not refutations.” As one reader of Todorov puts it, he teaches us to eradicate, or deeply to question, the binary of “them and us.” What followed was rooted in ethics: On Human Diversity: Nationalism, Racism and Exoticism in French Thought (1989); Facing the Extreme: Moral Life in the Concentration Camps (on the “social schizophrenia specific to totalitarian regimes,” 1991); A French Tragedy: Scenes of Civil War, Summer ’44 (on the French Resistance’s killing of thirteen pro-Nazi militiamen and the Nazi revenge murder of thirty-eight Jews in Saint-Amand, 1994); The Fragility of Goodness (on the rescue of Bulgarian Jews, 1999); A New World Disorder: Reflections of a European (2003, two years after September 11 and on the eve of the Iraq war); Duties and Delights: The Life of a Go-Between (his intellectual autobiography, 2002)—and too many other books and articles to mention here.[5] “Only totalitarianism,” writes Todorov, “makes it obligatory to love one’s country” (a statement we would do well, at present in the United States, to keep in mind).

“From now on,” writes Todorov in 2007, “I will stick by and large to the humanist family. This unique perspective prohibits me from any claim to an evenhanded clarification of the other families: I shall systematically privilege one of the voices in the dialogue of the past.”

This, from a man who had, in the first half of his career, carefully avoided polemics. The same year, he published Literature in Danger, a manifesto arguing that current trends in criticism have made literature an “object of closed, self-sufficient, absolute language,” a “smothering corset” enclosed by “factual formal games, nihilistic whining and solipsistic egotism.” Literature must be freed from the “formalist ghetto that is of interest only to other critics.” Formalism, nihilism, and solipsism, he wrote, are endangering the literary enterprise.

Todorov’s greatest influence was Raymond Aron, but he also drew on—again, just to name a few—Michel de Montaigne, Benjamin Constant, Jean-Jacques Rousseau, Henry James, Oscar Wilde, Rainer Maria Rilke, Mikhail Bakhtin, the poet Marina Tsvetaeva (whom he translated), and Edward Said. Human rights, Islam, the question of Europe, economic conditions, racism, genocide, the Holocaust, humanism, colonization, fanaticism, ethics, and moral philosophy—these were, to name the most salient, Todorov’s passionate concerns as of the early eighties. Rather than attacking other critics, “who are not there to contradict you and you can ridicule the person to your heart’s delight,” Todorov came to prefer a more measured and solitary approach: “Asserting your conception of the world without worrying too much about other people’s conceptions seems to me at once more difficult and more interesting.” As his friend Thomas Pavel put it recently, what Todorov wished to express above all in his writings was “his simple and sincere friendship for humanity and its cultures.”

After the murder of a French priest last year in France, Todorov remarked, “To systematically bomb a town in the Middle East is no less barbaric than to slit somebody’s throat in a French church. Actually, it destroys more lives.” He was against all forms of fanaticism, from the left or the right: “Certain ideological stances could be defined as the simple refusal to recognize this or that boundary,” he wrote, as if anticipating contemporary arguments about borders. His approach to history was ethical; his concern was with how to treat the representation of other cultures; he believed that self-knowledge develops through knowledge of the Other; and he held that goodness can exist even in the most evil of contexts.

In his eulogy to his teacher, “The Last Barthes,” Todorov noted that he owed his mentor a great deal. And now, after Barthes’s death, writes Todorov, “I will owe him more every day.” Todorov opened his Barthes encomium with these words: “He belonged, in France, to that small group at the top of the intellectual pyramid; he was one of those writers whose books you were always supposed to have read, books which could become the subject of conversation among strangers.”[6] The same may be said of Todorov, and a great many of us will continue to owe him more every day.

 

[1] Sewell Chan, “Tzvetan Todorov, Literary Theorist and Historian of Evil, Dies at 77,” New York Times, 7 Feb. 2017, http://www.nytimes.com/2017/02/07/world/europe/tzvetan-todorov-dead.html

[2] Tzvetan Todorov, “The 2 Principles of Narrative,” Diacritics 1 (Autumn 1971): 44.

[3] Ibid., 39.

[4] Todorov, The Conquest of America: The Question of the Other, trans. Richard Howard (Norman, Okla., 1999), p. 254.

[5] All dates refer to the original French publication.

[6] Todorov, “The Last Barthes,” trans. Howard, Critical Inquiry 7 (Spring 1981): 454, 449.

 

TODOROV and CRITICAL INQUIRY

The following essays (and interview) by Todorov were published in past issues of the journal.

“The Verbal Age,” trans. Patricia Martin Gibby, Critical Inquiry 4 (Winter 1977): 351-71.

“The Last Barthes,” trans. Howard, Critical Inquiry 7 (Spring 1981): 449-54.

“Critical Response: ‘Race,’ Writing, and Culture,” trans. Loulou Mack, Critical Inquiry 13 (Autumn 1986): 171-81.

Interview with Danny Postel, “Moving Targets,” trans. Gila Walker, Critical Inquiry 34 (Winter 2008): 249-73.

(Danny Postel’s interview is available to read for free on our website http://criticalinquiry.uchicago.edu)


Reports of Its Death Were Premature: A Response to Gabriel Noah Brahm

David Palumbo-Liu

In “The End of Identity Liberalism at MLA: Saying ‘No’ to Discrimination on the Basis of Nationality,” an essay derived from his op-ed in The Jerusalem Post, Gabriel Noah Brahm makes a number of pronouncements regarding the death of this and the lack of value of that. One core element of his essay addresses a concern we share—the new presidency of Donald Trump and the notion of a posttruth and indeed even postreason age. However, we differ in terms of what or who might be responsible for this state of affairs. Brahm argues that disdain for the truth began with postmodernism and other associated intellectual ills. Happily, according to him, the academy has now been delivered from such evils by a historical shift evident in the recent votes in Philadelphia at the meeting of the Modern Language Association.

Brahm’s argument is that the votes at the Modern Language Association help us understand a fundamental shift away from political correctness, which he describes as follows:

The self-righteous politics of selective outrage associated with “p.c.” makes vacuous expressions of indignation over abstractions like White Privilege, Western Colonialism, Neoliberalism or Global Capitalism more important than concrete scholarship rooted in reasons and evidence. Where p.c. prevails in the humanities, careful attention to complex works of literary merit worth reading is jettisoned in favor of simplistic moralizing, always harping on the same monotonous litany of concerns.

He declares “a victory for facts over trendy ‘post-truth’ epistemology” on the grounds that the Delegate Assembly voted down a resolution to endorse Palestinian civil society’s call for an academic boycott of Israel. As coeditor, with Cary Nelson, of The Case Against Academic Boycotts of Israel, Brahm has more than a passing interest in the topic.

To seriously address Brahm’s assertion that Israel is a “progressive cause” would take more space than is allotted here, but readers interested in pursuing that line of reasoning, as expounded by Brahm and others, can refer to the volume just cited. Part of what I would say in response can be found in my review of that book, published in symplokē.

Instead of getting involved in what Brahm himself argues is a “complex” issue—Israel-Palestine—I will use this opportunity to focus specifically on what actually happened at the MLA, as those events occasioned Brahm’s op-ed.

I argue that much of what transpired at MLA smacked of the white-supremacist tactics and thematics we associate with Trump (amongst them his signature attacks on “political correctness,” which indeed sound a lot like Brahm’s), and that, pace Brahm, it is precisely the fields Brahm associates with “p.c.” that provide us with the tools we need to understand what happened in Philadelphia and also what is going on with our new presidential administration.

Let us thus turn to “the facts” and not Brahm’s opinions. Let’s look at two public debates on the floor of the Delegate Assembly of the MLA that show the use of Trumpist tactics and thematics—in both cases the truths and facts that Brahm wishes to rescue were explicitly suppressed by the so-called MLA Members for Scholars’ Rights, which trampled on precisely those rights. Rather than, as Brahm glosses the events, “effectively vindicat[ing] both academic freedom and academic responsibility, over the pseudo-academic license to indoctrinate at will,” the antiboycott vote exhibited the antiboycott side’s political will precisely to abrogate academic freedom and academic responsibility, to shut down dialogue, and in so doing to violate the basic premises of academic inquiry.

First, a resolution was put forward that decried the denials of academic freedom to Palestinians, and placed the blame for that on the Palestinian Authority and Hamas. Now the Palestinians themselves and various international human rights groups have indeed borne witness to both these groups impinging on their rights—I have no argument there. But conspicuous in its absence was any mention whatsoever of the state of Israel and its own responsibilities in this area.

That resolution, put forward by the antiboycotters, is completely of a piece with the argument voiced by the white supremacist Trump campaign that “black on black” killing is more to blame for black deaths than police violence, which is in turn of a piece with the violence of the US state as exemplified in the so-called justice system and the prison industrial complex. In much the same way, denials of academic freedom to Palestinians are an essential part of the virtual apartheid system that exists in Israel and indeed preserves Israel in its current form.

Resolution proposer Russell Berman’s disingenuous offer to table the resolution in a spirit of “reconciliation” was seen by many for what it was—a desire to prevent even discussing the question of whether or not Israeli state policies might play any role in the suffering of the Palestinians. Debate of the issue would inevitably have exposed facts about Israel’s constant violations of academic freedom that its supporters are eager to keep concealed. The bad faith of Berman’s offer of “reconciliation” was made patently clear when, despite the calls to allow debate, he refused to withdraw his motion to table the resolution and discussion of it indefinitely. At that moment of silencing, the bad faith behind the claim that we should not boycott institutions because we want to preserve “dialogue” was exposed; at that point a free inquiry into the “truth” was terminated by those purporting to argue “for scholars’ rights.”

In removing the possibility of discussing such a fundamental issue, Berman and those who voted for his motion violated one of the basic principles of something they always hold out to be our beacon: liberalism. As John Stuart Mill wrote, “the peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.”

Those mounting analyses of political and cultural phenomena from the standpoints of postcolonial studies, studies in race and ethnicity, and others Brahm associates with “political correctness” are not “expressing” “indignation” when they see such blatant acts of hypocrisy emanate from self-anointed guardians of the liberal west—they are issuing an indispensable analytical critique that, among other things, helps shed light on how the rhetoric of “reconciliation” may be used to cover the tracks of exertions of raw power.

Second, the antiboycott resolution, which, having been passed by the same assembly, must now be voted on by the general membership, demands that the MLA “refrain from endorsing the boycott.” This resolution amounts to a prohibition of a mode of protest that the United States Supreme Court has declared a constitutionally protected form of free speech (NAACP v. Claiborne Hardware, 1982). The resolution would make even the act of endorsing, in a nonbinding manner, a boycott of Israel impossible; the MLA would be acquiescing to the same kind of silencing we saw in Russell Berman’s motion to table discussion on Resolution 3 indefinitely.

Given that Trump has vowed to destroy BDS, if it passes that resolution, the MLA will be doing Trump’s work for him, and at the expense of its own members and of their right to deliberate the issue. But beyond that, passing that resolution would also set a terrible and destructive precedent—it would mean that it is permissible to deny MLA members the right to comment on what the United States Supreme Court holds to be a basic right.

In both of these cases and many others during the convention, scholars engaged in critiques of colonial knowledge, of gender epistemologies, of privilege and power and race had more than enough material upon which to shine a light. In numerous other scholarly associations that do not regard histories of race and colonialism merely as matters of “trendy ‘post-truth’ epistemology,” the logic and justice of the boycott have made themselves felt. These associations not only engage with the facts of racial discrimination and injustice that the Trump administration is likely to make all the more urgent in both Israel and the US but also recognize that the study of these facts requires scholars also to take action for justice. In this, they represent not the past, nor a “trend,” but an indispensable and permanent element of both current and future scholarship.

Indeed, no matter what the general membership decides in the spring, it’s hard to imagine that, even if the antiboycott measure is passed, this will be a sign of the “end” of anything—the Modern Language Association voted down a resolution to support the anti-apartheid boycott, after all. With truth comes power, and the more this issue is debated, the stronger the case for BDS appears. This is why opponents of the boycott resolution felt debate had to be tabled and a resolution to deprive scholars of their right to free speech introduced.

While the move to table the resolution placing the blame for Palestinian suffering solely on Palestinians was passed by the Delegate Assembly, one should note that the margin of that vote was exceedingly slim: eighty-three “yes” to seventy-eight “no”—it passed by the narrowest margin of any of the resolutions, five votes. The antiboycott resolution also won by a very small margin: eight votes out of a total of 194 votes cast. The margin in the vote against the resolution to endorse the BDS call was the widest, seventy-nine “yes” to 113 “no.” However, it is important to put this in perspective. That a small handful of volunteers could muster a 40 percent vote in favor—with both presidential candidates, the governor of New York, numerous state legislatures, two hundred college presidents, twelve past presidents of the MLA, and major Israeli organizations aiding the other side directly or indirectly—is remarkable. And with the Trump administration bent on endorsing more settlement building and more violations of human rights, it is highly likely that the pro-boycott side will grow in strength.

 
