

The Great Appreciators

Informed criticism is clearly diminished as a cultural mainstay, in part because we have made it so much easier to produce and distribute simulations of cultural products.

This is an era in American life when the young seem more interested in becoming content creators than content appreciators. To be sure, this is a broad and inexact distinction. But it is clear that a large segment of younger Americans today are ready to self-identify as musicians, songwriters, filmmakers, writers or audio producers, without much experience or training. The results are usually predictably modest: unplanned videos, under-edited and “published” books, magazine-inspired blogs, or derivative music produced in front of a computer. Without doubt, serendipity has always had a place in producing wonderful new talent. But it is also true that more of us want to count ourselves as part of the broad media mix made possible by nearly universal internet access. It’s now hardly surprising to meet a middle schooler who edits their own videos or, after a fashion, curates their own web presence. As YouTube demonstrates, self-produced media content is unmistakably popular.

If this first quarter of the new century is the age of the content producer, then—broadly speaking—the last half of the previous century was an era for witnessing and reflecting on breathtaking talent. The decline of this impulse is a loss. An appreciator is more than a consumer. These are folks with an understanding of the history and conventions of a form, and with an equal interest in exploring how new works can build on and stretch even the most stale of cultural ideas. The best work of appreciators can caution us, encourage us, or fire us with the enthusiasm that comes with new insights. Productive analysis can help us fathom what we do not yet understand.

Pauline Kael

In the previous century, critics and essayists covering all kinds of art were ubiquitous. Periodicals and big-city newspapers routinely published considered assessments of trend-setters in popular culture, fiction, television, theater and film. Some combined their pieces in book-length studies of the period that are still worth reading. Michael Arlen and Neil Postman wrote insightful analyses of news and entertainment television. Pauline Kael and Roger Ebert were among many popular reviewers producing novel assessments of films and the film industry. They were matched by music critics like Michael Kennedy, Dave Marsh, Gene Lees and Donal Henahan, who provided appraisals of performers and performances. Their counterparts in the visual arts included writers like Robert Hughes, Walter Benjamin, and Jerry Saltz: all exploring the vagaries of talent and cachet in that enigmatic world.

Among countless publications, readers pored over this criticism in the pages of The Dial, The New Yorker, Gramophone, Paris Review, Harper’s, The Atlantic, New York Review of Books and Rolling Stone. And no self-respecting daily newspaper considered itself complete without its own music and film critics. Bigger-city papers also added performance reviews of dance, along with urbanists’ assessments of a city’s newest additions to its skyline.

Beyond the obvious daily fare of book and theater reviews in many twentieth-century news outlets, there was an entire world of appreciators with appetites for reconsidering the rivers of culture that came from distant headwaters. For example, Gramophone was founded in 1923 by the Scottish author Compton Mackenzie, who understood that there was an appetite for essays about the composers and performers captured in the new recordings of the era. He proved the unlikely proposition that many wanted to read about music almost as much as they wanted to hear it.

Criticism Has Diminished as a Cultural Mainstay

Susan Sontag

With video and digital media still mostly in the future, Americans in the last century had the time and the will to know the backstories of the cultural products of the day. Indeed, some writers like Norman Mailer, Susan Sontag, Joan Didion and Janet Malcolm became intellectual thought leaders. They helped to explain what artistic mastery should look like. And they had counterparts in a range of academic thinkers—T. W. Adorno, David Riesman, Marshall McLuhan and Kenneth Burke, for example—whose deeper cultural probes would soak into the fabric of the nation’s undergraduate curriculum. Sampling the output of so many professional appreciators would keep liberal arts students preoccupied for years, and sometimes forever.

Scene from Citizen Kane, photographed by Gregg Toland

To be sure, our interest in understanding the nation’s cultural output has not vanished. But criticism is clearly diminished as a cultural mainstay, in part because we have made it so much easier to produce and distribute simulations of cultural products. I use the word “simulations” because the impulse to be a content producer often bypasses the intellectual labor that goes into value-added art. So many today proceed without a grounding in the canons of a particular form: its histories, possibilities, and innovators. I suspect the desire to be an immediate practitioner in a realm that is barely understood is usually fed by the promise of fame. The result, as my colleagues in film sometimes lament, is that students want to be producers of video stories before they have considered the durable conventions of narrative: for example, the norms of a written screenplay, or how this first written map is converted into the visual “language” and grammar of film. To cite a specific case, it would be useful for a young filmmaker to know how cinematographer Gregg Toland used light and shadow to create the unmistakable visual palette of Citizen Kane (1941), or how Steven Spielberg and John Williams exploited the tricky business of musical underscoring to leave audiences suitably terrified by Jaws (1975).

In our schools and colleges, the equipment to make art is frequently given to students who have only a rudimentary understanding of how it might be used. The youthful conceit that progress is made by setting aside what has come before is mostly an excuse to avoid the work of contemplation that creates competence and a lasting passion for an art form.


Turing, and the Bogus Rivalry with Machine-Based Intelligence

IBM’s Watson (Wikipedia.org)

In reality, humans have nothing to fear. Most measures of artificial intelligence use the wrong yardsticks.

We are awash in articles, books and films about the coming age of the “singularity”: the point at which machines will supposedly duplicate and surpass human intelligence. For decades it’s been the stuff of science fiction, reaching perhaps its most eloquent expression in Stanley Kubrick’s 1968 motion picture, 2001: A Space Odyssey. The film is still a visual marvel. Who would have thought that Strauss waltzes and images of deep space could be so compatible? Functionally, the waltzes juxtapose the familiar with a hostile void, making the film a surprising celebration of all things earthbound. But that’s another story.

The central agent in the film is the HAL 9000 computer, which begins to shut down the crew’s life support during a long voyage, mostly because it “thinks” the humans aren’t up to the enormous task facing them.

Kubrick’s vision of very smart computers is also evident in the more recent A.I. Artificial Intelligence (2001), a project started just before his death and eventually brought to the screen by Steven Spielberg. It’s a dystopian nightmare. In the film, intelligent “mechas” (mechanical robots) are generally nicer than the humans who created them. In pleasant Haddonfield, New Jersey, of all places, they are shot on sight for sport.

Fantasies of machine intelligence have lately given way to IBM’s “Deep Blue” and “Watson,” mega-computers with amazing memories and—in Watson’s case—a stunning natural-language capability that is filtering down to all kinds of devices. If we can talk to machines, aren’t we well on our way to the singularity?

For one answer, consider the Turing Test, the challenge laid down by the World War II code-breaker Alan Turing. A variation of it has been turned into a recurring world competition. The challenge is to construct a “chatterbot” that can pass for a human in blind side-by-side “conversations” that include real people. For artificial intelligence engineers, the trick is to fool a panel of questioners at least 30 percent of the time over five minutes of questioning. According to the BBC, a recent winner at a competition organized by the University of Reading in the U.K. passed itself off as a Ukrainian teen (“Eugene Goostman”) speaking English as a second language.

In fact, humans have nothing to fear. Most measures of “human-like” intelligence, such as the Turing Test, use the wrong yardsticks. These computers are never embodied. The rich information of non-verbal communication is not present, nor can it be. Approximations of human features are not enough. For example, Watson’s “face” in its famous Jeopardy challenge a few years ago was a set of cheesy electric lights. Moreover, these smart machines tend to be asked questions that we would ask of Siri or other informational databases. What they “know” is often defined as a set of facts, not feelings. And, of course, these machines lack what we so readily reveal in our conversations with others: that we have a sense of self, that we have an accumulated biography of life experiences that shape our reactions and dispositions, and that we want to be understood.

Just the issue of selfhood should remind us of the special status that comes from living through dialogue with others. A sense of self is complicated, but it includes the critical ability to be aware of another’s awareness of who we are. If this sounds confusing, it isn’t. This process is central to all but the most perfunctory communication transactions. As we address others we are usually “reading” their responses in light of what we believe they have already discerned about us. We triangulate between our perceptions of who we are, who they are, and what they are thinking about our behavior. This all happens in a flash, forming what is sometimes called “emotional intelligence.” It’s an ongoing form of self-monitoring that oils the machinery of complex relationships. Put this sequence together, and you get a transaction full of feedback loops involving estimates of intention and interest, and—frequently—a general desire born in empathy to protect the feelings of the other.

It’s an understatement to say these transactions are not the stuff of machine-based intelligence, and probably never can be. We are not computers. As Walter Isaacson reminds us in his recent book, The Innovators, we are carbon-based creatures with chemical and electrical impulses that mix to create unique and idiosyncratic individuals. This is when the organ of the brain becomes so much more: the biographical homeland of an experience-saturated mind. With us there is no central processor. We are not silicon-based. Nearly infinite forms of consciousness are possible in a brain with 100 billion neurons and 100 trillion connections. And because we often “think” in ordinary language, we are so much more—and sometimes less—than an encyclopedia on two legs.