
What I Got Wrong and Right about Artificial Intelligence

Personhood is a unique state rooted in carbon-sourced biology, not easily replicated by silicon-based machines. 

In 2015 I wrote that, in reality, “humans have nothing to fear” from the growth of artificial intelligence, and that “most measures” of it use the wrong yardsticks.

Well, knock me over with a feather.


I seem to have been wrong about that. Job losses caused by new uses of A.I. make it apparent that many word- and data-handling jobs have indeed been given to computers running A.I. programs. The first contact many of us have with doctors’ offices, food services, or even mental health services is some chatbot masquerading as the functional equivalent of an adaptable and sensitive person. The hubris that makes that possible is our mistake. I feel like a fraud every time I “chat” with a machine. But the fraud is on the other side.

Banks and Silicon Valley tech firms are now beginning to purge their staffs. Some estimates suggest that organizations and businesses in the near future will have twenty percent fewer employees. Even so, I would still guess that A.I. is not going to cut it in some functions. Imagine, as a new retailer, touting the guarantee that a customer with a problem will reach a real customer service person immediately. That’s a claim I saw in an ad recently, representing a unique selling proposition.

What I missed in the first post here was that my mind was too focused on those workers whose jobs are either creative or tied to the trickiest forms of human problem-solving. And my heart goes out to people who have been let go for nothing worse than serving as one of the human faces of an organization.



One key point in that rash post still stands and seems to be ignored by many in the A.I. community. It hinges on what personhood means, including having a sense of self. If this sounds wooly, it isn’t. If we think that computers, robots, or chatbots have a sense of individual identity, I would beg to differ. Without a personal human history that includes the biology of living in the physical world and adapting to a socially mediated and carbon-based life cycle, a machine is just a machine. We have a biography, a family lineage, a sense of place, and a collection of life-transforming experiences. Our lives must reckon with the processes of attraction, illness, aging, and fostering new beings as members of a tribe. A machine can only fake the experiences and feelings of a human being.

GW: "Alexa, How are you feeling today?"

Amazon A.I. Assistant: "My Monday is starting off marvelously." 
 
(This actual response can’t help but be fraudulent. Forms of “me” suggest a living person, a being, someone’s son or daughter, and a social intelligence based on a lifetime of interactions. “Marvelously” suggests an ordinary-language stab at an unearned feeling.)

All of these features are essential prerequisites for a sense of self, which is constructed using the feedback and interactions of other humans. Humans can estimate the interiority of another person from the wealth of experiences that we and they have undergone. How does that get communicated in terms of the social-intelligence values of empathy, sympathy, or feelings of alienation or identification? These states of mind are more than the products of algorithms in the large language models of A.I. They are unique to the human mind. It’s another reason to resurrect the idea of a person’s “soul,” and perhaps to routinely italicize artificial as a reminder that the word truncates the much richer meanings behind “intelligence.”


As I previously noted, just this issue of selfhood should remind us of the special status that comes from living through dialogue with others. A sense of self is complicated, but it includes the critical ability to be aware of another’s awareness of who we are. If this sounds confusing, it isn’t. This process is central to all but the most perfunctory communication transactions. As we address others we are usually “reading” their responses in light of what we believe they have already discerned about us. We triangulate between our perceptions of who we are, who they are, and what we imagine they may be thinking about our behavior. Put this sequence together, and you get a transaction full of feedback loops that involve estimates of intention and interest, and—frequently—a general desire born of human social intelligence to protect the feelings of others.

It’s an understatement to say these transactions are not the stuff of machine-based intelligence, and probably never can be. To be sure, the intricacies of many newer A.I. systems are beyond me, but I am still comfortable asserting that the feelings, attitudes, experiences, and beliefs that create human agency cannot be generated by GPUs, TPUs, and NPUs programmed to produce simulacra of consciousness. As Walter Isaacson reminds us in The Innovators, we are carbon-based creatures with chemical and electrical impulses that create unique and idiosyncratic individuals. This is when the organ of the brain becomes so much more: the biographical homeland of an experience-saturated mind. With us there is no central processor. We are not silicon-based. There are nearly infinite forms of consciousness in a brain of 100 billion neurons with 100 trillion connections. And because we often “think” in ordinary language, we are so much more—and sometimes less—than an encyclopedia of large language algorithms.