How do we acknowledge the past without making unearned judgments about the moral failures of our ancestors?
At the recent meeting of the American Historical Association in Philadelphia, members predictably debated how academics should evaluate historical figures who acted within the framework of their generation’s social norms. We know that Thomas Jefferson and many of America’s founders owned slaves. At the time of the country’s founding, cultural leaders were content to exclude women, African Americans and others from wealth and access to real power. We can’t ignore such serious offenses. Yet lives sometimes need to be assessed with an eye to the complex binaries that can exist within the same person.
It is now a social science given that key institutions—the church, education, government and most of the working world—carried built-in biases against citizens who were clearly entitled to equal protections and opportunities. Any number of politicians would like to challenge what is now vilified as ‘critical race theory.’ But there is no question that earlier narratives and practices across the culture perpetuated embedded racial and gender biases. Think of Woodrow Wilson, Ronald Reagan or James Webb. Each carried prejudices that explain serious leadership deficits. Reagan, for example, was slow to act on the AIDS crisis that tore through the gay community. I’ll add another: growing up in Colorado, I don’t remember any schooling that covered the displacement or massacre of the indigenous people who originally inhabited the region. It’s possible my younger, distracted self missed something, but the sad story of the Sand Creek Massacre was definitely not a preferred narrative.
To our credit, most of us feel a degree of cognitive dissonance on discovering that beloved institutions or figures were carriers of poisonous prejudices. When we apply our newer sensibilities to what we see in history’s rear-view mirror, we can’t help but cringe at mainstream attitudes that were once accepted, mostly without dissent.
The challenge of “presentism”
How do we acknowledge the details of the past without making unearned judgments about the moral failures of our ancestors? To do so is sometimes labeled “presentism,” an urge to render assessments of individuals while bypassing the necessary work of accurately placing their lives within the context of their own world.
At the conference James Sweet, a black studies historian at the University of Wisconsin, noted that “repairing historical wrongs” is important, but the job of a historian is to offer context, giving “as full a render of the past as our sources allow.” But his view was doubted by many, who believe it is wrong to separate description from necessary judgment—especially in an era when many leaders on the right would like to prohibit classroom discussions of racial or sexual discrimination.
Like most others, I’m incensed by this kind of misguided legislating. But if we believe we are now ahead of the curve in moving toward moral justice, we should probably think again. As George Scialabba recently noted in Commonweal, “it is pretty certain that the average educated human of the twenty-third century will look back at the average educated human of the twenty-first century and ask incredulously about a considerable number of our most cherished moral and political axioms, ‘How could they have believed that?’” His complaint centers on everyday social inequalities that we rarely notice: for example, the fact that an American CEO can make 300 times what their employees take home. We only notice it when someone reminds us to look. The point is that the moral certainty that allows definitive judgments about short-sighted ancestors is perpetually reflexive: the same lens will eventually be turned on us. There is no finite geography of moral certitude we can claim as our own. There is always another, higher peak beyond the one we thought we had just topped.
If we all “hang out” virtually, we make ourselves smaller.
A few days ago I watched a car drifting on its own across a sloped parking lot, motor off. There was an occupant, but he was lost to everything except the text he was writing. He was clearly headed for trouble on the other side of the lot when he finally realized that the laws of physics had put him in the path of others. I fear this is us: drifting, even while the world waits, too preoccupied with a screen to notice.
As a case in point, Brian Chen’s recent technology piece in the New York Times (December 29, 2022) eagerly described coming advances in digital media: better iPhones, new virtual reality equipment, software that allows people to “share selfies at the same time,” and social media options that provide new “fun places to hang out.”
So glib and so short-sighted. When did a few inches of glass with microchips become a “place”? Language like this makes one wonder whether, as students, these technology journalists ever encountered the rich expanses of social intelligence that come to life in real time. Too few technology mavens seem to give any weight to the ranges of human experience predicated on hard-won achievements of cognition and competence. Consumer-based digital media are mostly about speed rather than light. If we all “hang out” virtually, we make ourselves smaller, using the clever equivalent of a mirror to avoid noticing our diminished relevance.
Most social media sites give us only the illusion of connection. This is perhaps one reason movies, sports and modern narratives are so attractive: we can at least witness people in actual “places” doing more with their lives than exercising their thumbs. Spending time with young children also helps. In their early years children reflect our core nature by seeking direct and undivided attention; no virtual parenting, please. In expecting more than nominal indifference, they may be more like their grandparents than their parents.
A.I. pollutes the idea of authorship
Among more changes awaited next year, Chen described a “new chatty assistant” from an A.I. firm. The software, called ChatGPT, allows a seemingly sentient chatbot to act as a person’s “research assistant,” or perhaps generate business proposals, or even write research papers. He’s enthusiastic about how these kinds of programs will “streamline people’s work flows.” But I suspect they require us to put our minds in idle: no longer burdened with functioning as agents of thought. Apparently the kinks to be worked out are no more than technical, freeing a person from exercising complex problem-solving skills. Indeed, the “work” of a computer-generated report cannot be said to come from the person at all. As with so many message assistants, A.I. pollutes the idea of authorship. Who is in charge of the resulting verbal action? Hello, HAL.
Consider how much worse it is for teachers of logic, writing, grammar, vocabulary, research and rhetoric, let alone their students. All ought to be engaged in shaping minds that are disciplined, smart about sources, and able to apply their life experiences to new circumstances. It is no wonder that the increasing presence of intellectual fakery makes some college degrees nearly meaningless. Paying for an A.I.-generated college paper is bad enough; generating plans for action from a self-writing word processor is a nightmare for all of us who expect our interlocutors to be competent, conscious and moral free agents.