

0 for 2 or 3 for 3?

Was I wrong about A.I.? I believe my arguments still stand, and they are clearer if we accept the long-standing idea that communication involves the assessment of three essential components: a source, a message, and an audience.

The trouble with writing is that our words sometimes hang around to remind others of the outmoded antiques we once proposed as innovative thoughts. Twice I’ve offered views on what I considered the non-threatening nature of A.I.: once in 2015, and again last year. While it would not be a new experience for me, was I wrong? In this case, I don’t think so.

The upshot of these posts is that A.I. messages will always be problematic because they are not sourced by a single human. We need information about a source to estimate its credibility. Perhaps I was a tad wide of the mark in one piece when I said that “humans have nothing to fear” from A.I. But I still think my primary argument stands. It rests on the centuries-old dictum that messages must be measured against the credibility and motivations of the human agent making them.

In terms of influencing the larger debate, I may be 0 for 2. But I believe nothing has changed if we accept the old dictum that communication involves three essential components: a message, an audience and a source. A.I. systems carry no information about the agent behind a message. A.I. is more encyclopedic and less able to judge good information and sources. In an earlier essay I noted that A.I. “lacks the kind of human information that we so readily reveal in our conversations with others. We have a sense of self and an accumulated biography of life experiences that shapes our reactions and dispositions.” In short, the communication that should matter to us is always measured against the known character and motivations of a human source. Knowing something about a source is a key part of understanding what is being said. What do we believe? It depends on who is doing the telling. Should we accept an A.I. version of the claims made frequently in the U.S. about illegal voting? A.I. might dig up background data, but we would still need a fair-minded expert on American voting habits to draw an accurate conclusion. Obviously, we would want to qualify the source to rule out reasons that might bias their views.

As I noted in previous posts, most meaningful human transactions are not the stuff of machine-based intelligence, and probably never can be. We are not computers. As Walter Isaacson reminds us in The Innovators, we are carbon-based creatures with chemical and electrical impulses that mix to create unique and idiosyncratic individuals. This is when the organ of the brain becomes so much more: the biographical homeland of an experience-saturated mind. With us there is no central processor. We are not silicon-based. There are nearly infinite forms of consciousness in a brain with 100 billion neurons and 100 trillion connections. And because we often “think” in nuanced language and metaphors, we are so much more—and sometimes less—than an encyclopedia on two legs.

We triangulate between our perceptions of who we are, who the source is, and how the source is processing what they think we know. This monitoring is full of feedback loops that can produce estimates of intention shaped by relevant lived experience.

Just the idea of selfhood should remind us of the special status that comes from living through dialogue with others. A sense of self is complicated, but it includes the critical ability to be aware of another’s awareness of who we are. If this sounds confusing, it isn’t. This process of making character estimations is central to all but the most perfunctory communication transactions. The results are feelings and judgments that make us smarter about another source’s claims and judgments.


The one gap in my thinking is what could be called the “Dave” problem. What is to be done with computers that “think” they know best, and set in motion actions their human designers failed to anticipate? It was a problem in Stanley Kubrick’s 2001: A Space Odyssey, and it is surely possible given a bad designer, or one intent on creating havoc. But to some extent, this has always been the case with automated systems.

Finally, as I wrote in a previous post: “Everyone seems to be describing humans as information-transfer organisms. But, in truth, we are not particularly good at creating reliable accounts of events. What we seem hardwired to do is to add to our understanding of events around us” by determining the credibility of a source.

Any thoughts? 0 for 3? Write to woodward@tcnj.edu.

Sora Will Be a Game Changer

I would love to be wrong, but filmed entertainment seems to be facing its own equivalent of the robotic assembly line.

A little-reported but hugely significant white flag of surrender surfaced a few weeks ago when the producer and actor Tyler Perry suddenly canceled a planned expansion of his Atlanta studios. A dozen new sound stages had been projected, but that was before he saw what he considered a “mind blowing” demonstration.

Perry changed his mind after he viewed a collection of short videos produced by Sora, a video generator from OpenAI. From just verbal prompts, a fabricated scene emerged as an instant “video” that was difficult to distinguish from a sequence a Hollywood production company might take days to set up. The crane shots in some of these fake videos are stunning. The characters look like they have been groomed for their parts. Shadows are mostly authentic. And the live action from people and animals looks mostly “real.” As the Washington Post noted in an excellent must-see article, the images and actions are “shockingly realistic.” The article and its examples are best seen on a computer screen. Here’s a sample of one of the videos with its text prompt, as cited by the Post.

[Verbal Prompt: A cat waking up its sleeping owner demanding breakfast. The owner tries to ignore the cat, but the cat tries new tactics and finally the owner pulls out a secret stash of treats from under the pillow to hold the cat off a little longer. (OpenAI)]

We expect most institutions to evolve incrementally: slow enough to allow for adjustments to new realities. That may not be the case here. Every trade in the film and video industry must be asking how it will fit into a world of narrative storytelling in which anyone without experience in computer-generated imagery can “create” stunning video effects.

To be sure, things aren’t perfect in this early generation of Sora. Look at a sample of an invented scene from a 1930s movie, also cited by the Post. It looks great, but Sora doesn’t know how to light a cigarette:

[Verbal Prompt: A person in a 1930s Hollywood movie sits at a desk. They pick up a cigarette case, remove a cigarette and light it with a lighter. The person takes a long drag from the cigarette and sits back in their chair. Golden age of Hollywood, black and white film style. (OpenAI)]

Hollywood is not alone in confronting technological advancement, but the ease of use of this technology makes it an existential threat to the film world as we know it. Producers and content providers will love this tool. But it can only be a blow to the artists and trades that make traditional film and video projects. No wonder actors were so concerned about securing a new contract that would prohibit the use of their likenesses without their permission. I would love to be wrong, but the future of “filmed” entertainment seems to be facing its own equivalent of the robot revolution in automobile production.

A colleague who knows about these things notes that crews have been dealing with computer-generated sets and effects for years. An actor can now appear to be walking down a street in Prague while passing in front of a green screen in Burbank. And many of these craftspeople are still working. There’s also the example of recent films like Poor Things (2023), with actual Victorian sets built on sound stages and the inventive use of the crafts that go with a period piece. My colleague also wonders if many A.I. scenes aren’t essentially rip-offs of other location videos, slightly modified to seem more original than they are. Newer generations of this software should help clarify the charge of “mere copying.”

To be sure, the future appears bright, at least for copyright lawyers. Then, too, actors in dense, dialogue-driven roles construct their screen personas carefully. Performances come from assumed motivations and hard-to-fake nuances. Can a fully integrated performance like Emma Stone’s in Poor Things really be put together from just verbal directions? Even so, an upheaval is bound to happen as seemingly recognizable persons are placed in novel settings and given words they never uttered.

A.I. appears to be a new and fearsome challenge for the film industry, but it is an even greater threat to the culture as a whole if journalists and public figures must face an endless tangle of anger and confusion over real and fabricated words and images.
