

0 for 2 or 3 for 3?

Was I wrong about A.I.? I believe my arguments still stand, and they are clearer if we accept the long-standing idea that communication involves the assessment of three essential components: a source, a message, and an audience.

The trouble with writing is that our words sometimes hang around to remind others of the outmoded antiques we once proposed as innovative thoughts. Twice I’ve offered views on what I considered the non-threatening nature of A.I.: once in 2015, and again last year. Being wrong would not be a new experience for me, but was I wrong here? In this case, I don’t think so.

The upshot of these posts is that A.I. messages will always be problematic because they are not sourced by a single human. We need information about a source to estimate its credibility. Perhaps I was a tad wide of the mark in one piece to say that “humans have nothing to fear” from A.I. But I still think my primary argument stands. It’s based on the centuries-old dictum that messages must be measured against the credibility and motivations of the human agent making them.

In terms of influencing the larger debate, I may be 0 for 2. But I believe nothing has changed if we accept the old dictum that communication involves three essential components: a message, an audience, and a source. A.I. systems carry no information about the source of a message. A.I. is more encyclopedic and less able to judge good information and good sources. In an earlier essay I noted that A.I. “lacks the kind of human information that we so readily reveal in our conversations with others. We have a sense of self and an accumulated biography of life experiences that shapes our reactions and dispositions.” In short, the communication that should matter to us is always measured against the known character and motivations of a human source. Knowing something about a source is a key part of understanding what is being said. What do we believe? It depends on who is doing the telling. Should we accept an A.I. version of the claims made frequently in the U.S. about illegal voting? A.I. might dig up background data. But we would still need a fair-minded expert on American voting habits to draw an accurate conclusion. Obviously, we would want to qualify the source to rule out reasons that might bias their views.

As I noted in previous posts, most meaningful human transactions are not the stuff of machine-based intelligence, and probably never can be. We are not computers. As Walter Isaacson reminds us in The Innovators, we are carbon-based creatures with chemical and electrical impulses that mix to create unique and idiosyncratic individuals. This is when the organ of the brain becomes so much more: the biographical homeland of an experience-saturated mind. With us there is no central processor. We are not silicon-based. There are nearly infinite forms of consciousness in a brain of 100 billion neurons and 100 trillion connections. And because we often “think” in nuanced language and metaphors, we are so much more, and sometimes less, than an encyclopedia on two legs.

We triangulate among our perceptions of who we are, who the source is, and how the source is processing what they think we know. This monitoring is full of feedback loops that can produce estimates of intention shaped by relevant lived experience.

Just the idea of selfhood should remind us of the special status that comes from living through dialogue with others. A sense of self is complicated, but it includes the critical ability to be aware of another’s awareness of who we are. If this sounds confusing, it isn’t. This process of making character estimations is central to all but the most perfunctory communication transactions. The results are feelings and estimations that make us smarter about another source’s claims and judgments.


The one gap in my thinking is what could be called the “Dave” problem. What is to be done with computers that “think” they know best, and set in motion what human designers failed to take into account? It was a problem in Stanley Kubrick’s 2001: A Space Odyssey, and it is surely possible, whether because of a careless designer or one with the intention of creating havoc. But to some extent, this has always been the case with automated systems.

Finally, as I wrote in a previous post: “Everyone seems to be describing humans as information-transfer organisms. But, in truth, we are not particularly good at creating reliable accounts of events. What we seem hardwired to do is to add to our understanding of events around us” by determining the credibility of a source.

Any thoughts? 0 for 3? Write to woodward@tcnj.edu.


A Pronoun Test of A.I. Sentience


By necessity, A.I. must assume a kind of fraudulent authorship, easily revealed in meaningless pronouns.

The recent flurry of news about refined A.I. “intelligence” that can process and mimic coherent discourse, if not authentic emotional states, has been hard to miss. But in the breathless rush to proclaim the human-like capabilities of ChatGPT and other language-based systems, something basic has been overlooked. The real deficit of these programs is their inability to handle the human processes represented in meaningful pronouns.

Obviously, these systems are using language and grammar forms we recognize, but their shortcomings are concealed by their verbosity. We now have chatbots that can talk more than our worst oversharing relatives.

Here’s what’s missing. When we use it, the pronoun “I” is the human equivalent of the North Star. Our awareness of it gives us the power to take ownership of objects, needs, feelings, and a reserved space in what is usually a growing social network. Children learn this early, building an emerging sense of self that expands rapidly in the first few years. Eventually they will distinguish the meanings of other pronouns that allow for the possibility of not just “I,” but “we,” “you,” and “them” as well. This added capacity is a major threshold. It’s an immense task to fathom other “selves” with their distinct social orbits and prerogatives. Adequate consideration of another’s “otherness” is a lifelong process that even adult humans struggle to master. As examples of this capacity’s importance, consider the elaborate backstories of motivation that you routinely apply when talking to a friend or family member. What is heard and what is understood may be two very different things. Understanding others is a delicate process of inference-making that cannot be duplicated by a machine that lacks the requisite social and organic lineage.


This shift from “I” to “we” also enables us to assert intellectual and social kinship, one biological creature to another, bound by an awareness of similar arcs that include learning, living, and dying. These natural processes motivate us to assert our own sense of agency: to be engines of action and reaction. We “know” and often boldly announce our intentions, at the same time doing our best to infer them in others. Estimations of motive shape most of our conversations with others. Think of the “I” statements used by others as sitting atop a deep well of attitudes and feelings we struggle to bring to the surface.

So it is clear that every time ChatGPT composes a message to us, it must depend on fraudulent pronouns, stated or implied. It uses forms of everyday language that conceal the fact that it has no resources of the self: no capability to “feel” as a sentient being. This leaves it unable to gain even a rudimentary sense of what others are about.
