Was I wrong about A.I.? I believe my arguments still stand, and they are clearer if we accept the long-standing idea that communication involves assessing three essential components: a source, a message, and an audience.
The trouble with writing is that our words sometimes hang around to remind others of the outmoded antiques we once proposed as innovative thoughts. Twice I’ve offered views on what I considered the non-threatening nature of A.I.: once in 2015, and again last year. Being wrong would not be a new experience for me, but in this case I don’t think I was.
The upshot of these posts is that A.I. messages will always be problematic because they are not sourced by a single human. We need information about a source to estimate their credibility. Perhaps I was a tad wide of the mark when I wrote in one piece that “humans have nothing to fear” from A.I. But I still think my primary argument stands. It’s based on the centuries-old dictum that communication messages must be measured against the credibility and motivations of the human agent making them.
In terms of influencing the larger debate, I may be 0 for 2. But I believe nothing has changed if we accept the old dictum that communication involves three essential components: a message, an audience and a source. A.I. systems carry no information about the author of a message. A.I. is more encyclopedic and less able to judge good information and sources. In an earlier essay I noted that A.I. “lacks the kind of human information that we so readily reveal in our conversations with others. We have a sense of self and an accumulated biography of life experiences that shapes our reactions and dispositions.” In short, the communication that should matter to us is always measured against the known character and motivations of a human source. Knowing something about a source is a key part of understanding what is being said. What do we believe? It depends on who is doing the telling. Should we accept an A.I. version of the claims made frequently in the U.S. about illegal voting? A.I. might dig up background data. But we would still need a fair-minded expert on American voting habits to draw an accurate conclusion. It is obvious we would want to qualify the source to rule out reasons that might bias their views.
As I noted in previous posts, most meaningful human transactions are not the stuff of machine-based intelligence, and probably never can be. We are not computers. As Walter Isaacson reminds us in The Innovators, we are carbon-based creatures with chemical and electrical impulses that mix to create unique and idiosyncratic individuals. This is where the organ of the brain becomes so much more: the biographical homeland of an experience-saturated mind. With us there is no central processor. We are not silicon-based. There are nearly infinite forms of consciousness in a brain with 100 billion neurons and 100 trillion connections. And because we often “think” in nuanced language and metaphors, we are so much more—and sometimes less—than an encyclopedia on two legs.
We triangulate between our perceptions of who we are, who the source is, and how the source is processing what they think we know. This monitoring is full of feedback loops that can produce estimates of intention shaped by relevant lived experience.
Just the idea of selfhood should remind us of the special status that comes from living through dialogue with others. A sense of self is complicated, but it includes the critical ability to be aware of another’s awareness of who we are. If this sounds confusing, it isn’t. This process of making character estimations is central to all but the most perfunctory communication transactions. The results are feelings and judgments that make us smarter about another source’s claims and judgments.
The one gap in my thinking is what could be called the “Dave” problem. What is to be done with computers that “think” they know best and set in motion outcomes their human designers failed to anticipate? It was the problem in Stanley Kubrick’s 2001: A Space Odyssey, and it is surely possible given a careless designer, or one intent on creating havoc. But to some extent this has always been the case with automated systems.
Finally, as I wrote in a previous post: “Everyone seems to be describing humans as information-transfer organisms. But, in truth, we are not particularly good at creating reliable accounts of events. What we seem hardwired to do is to add to our understanding of events around us” by determining the credibility of a source.
Any thoughts? 0 for 3? Write to woodward@tcnj.edu.