Triangulating Toward the Truth

It is not enough for a thinking adult to remain captive to the highly corrupted spaces of video fantasists.

With media such as YouTube, we have reached a point where the presumption going forward will have to be that content is fake until it is verified. More and more images and audio clips are A.I. fabrications.

Last week I was briefly taken in by a YouTube post, since taken down by the platform, in which columnist George Will described a supposedly sudden transformation within the Republican Congressional caucus: their separation from President Trump, and even the possibility of using the 25th Amendment to remove the President from office. The video looked like Will and more or less duplicated his usual clipped cadences.

The first clue that all was not as it seemed was the source, which was not his newspaper, The Washington Post, but some sort of A.I. group called “Inside the Union.” A second was that the words put into Will’s mouth were not quite what he would use at this moment. Only a “synthetic content” flag, visible briefly in the corner of the video, indicated that it was an A.I. fabrication.

My attention was initially heightened because I hoped the sudden report might be true. Alas, no one at the AP, The New York Times, or The Wall Street Journal was reporting anything like this supposed GOP insurrection. Clearly Mr. Will had become an unwilling avatar for someone else’s political agenda.


Tech leaders who present themselves as in the forefront of the race to the future haven’t even left the starting blocks in terms of controlling the veracity of their offerings.

As we know, A.I. technology is capable of ever more convincing fakes. It was only a matter of time until fake news became ubiquitous across the political spectrum. As noted in an earlier post, we may be OK with images of a cat minding the fry grill at McDonald’s; the joke is obvious. But we should be on guard when the likeness of a person with a curated reputation is hijacked in complete defiance of what they actually believe. Elon Musk’s Grok image generator has similarly been used repeatedly to create false and malicious images that can end up on other sites. And, obviously, X, YouTube, and other platforms are not immune. Tech leaders who like to present themselves as at the forefront of the race to the future haven’t even left the starting blocks when it comes to controlling the veracity of their offerings.

A person’s reputation for accuracy may be the most important character trait they have. Routine fakery should not be allowed to rob them of it. Whether we want to or not, all of us are going to have to learn to do what journalists and prosecutors do to test the credibility of their sources.

Their method is sometimes called “triangulation,” where a given story is checked against other sources known for credible reporting. To be sure, this takes a little bit of time. As landmark movies about journalism remind us, investigative reporters usually need two or three sources to confirm that a narrative is accurate. Think of All the President’s Men (1976), Shattered Glass (2003) or Spotlight (2015). The related and honorable practice of fact-checking is also a tradition at major news outlets and legendary at The New Yorker. In addition, triangulation usually means getting out of the video media bubble and moving on to more reliable human and print sources. It is not enough for a thinking adult to remain in the highly corrupted spaces of video fantasists.

All of this is a reminder that schools should be regularly teaching some version of a course in evidence and sources in the middle and upper grades. Every citizen needs to know what high and low credibility look like, as well as some of the basic rules of evidence. Navigating the swamps of digital media, where anything can be faked, is going to require cognitive screening skills that will have to become second nature.


0 for 2 or 3 for 3?

Was I wrong about A.I.? I believe my arguments still stand, and are clearer if we accept the solid idea that communication involves the assessment of three essential components: a source, message, and audience.

The trouble with writing is that our words sometimes hang around to remind others of the outmoded antiques we once proposed as innovative thoughts. Twice I’ve offered views on what I considered the non-threatening nature of A.I.: once in 2015, and again last year. While it would not be a new experience for me, was I wrong? In this case, I don’t think so.

The upshot of these posts is that A.I. messages will always be problematic because they are not sourced by a single human. We need information about a source to estimate their credibility. Perhaps I was a tad wide of the mark in one piece to say that “humans have nothing to fear” from A.I. But I still think my primary argument stands. It is based on the centuries-old dictum that messages must be measured against the credibility and motivations of the human agent making them.

In terms of influencing the larger debate, I may be 0 for 2. But I believe nothing has changed if we accept the old dictum that communication involves three essential components: a message, an audience, and a source. A.I. systems carry no information about the carrier of a message. A.I. is more encyclopedic and less able to judge good information and sources. In an earlier essay I noted that A.I. “lacks the kind of human information that we so readily reveal in our conversations with others. We have a sense of self and an accumulated biography of life experiences that shapes our reactions and dispositions.” In short, the communication that should matter to us is always measured against the known character and motivations of a human source. Knowing something about a source is a key part of understanding what is being said. What do we believe? It depends on who is doing the telling. Should we accept an A.I. version of the claims made frequently in the U.S. about illegal voting? A.I. might dig up background data. But we would still need a fair-minded expert on American voting habits to draw an accurate conclusion. It is obvious we would want to qualify the source to rule out reasons that might bias their views.

As I noted in previous posts, most meaningful human transactions are not the stuff of machine-based intelligence, and probably never can be. We are not computers. As Walter Isaacson reminds us in The Innovators, we are carbon-based creatures with chemical and electrical impulses that mix to create unique and idiosyncratic individuals. This is when the organ of the brain becomes so much more: the biographical homeland of an experience-saturated mind. With us there is no central processor. We are not silicon-based. There are nearly infinite forms of consciousness in a brain with 100 billion neurons and 100 trillion connections. And because we often “think” in nuanced language and metaphors, we are so much more—and sometimes less—than an encyclopedia on two legs.

We triangulate between our perceptions of who we are, who the source is, and how the source is processing what they think we know. This monitoring is full of feedback loops that can produce estimates of intention shaped by relevant lived experience.

Just the idea of selfhood should remind us of the special status that comes from living through dialogue with others. A sense of self is complicated, but it includes the critical ability to be aware of another’s awareness of who we are. If this sounds confusing, it isn’t. This process of making character estimations is central to all but the most perfunctory communication transactions. The results are feelings and judgments that make us smarter about another source’s claims and judgments.


The one gap in my thinking is what could be called the “Dave” problem. What is to be done with computers that “think” they know best, setting in motion what human designers failed to take into account? It was the problem in Stanley Kubrick’s 2001: A Space Odyssey, and it remains possible because of a bad designer, or one with the intention of creating havoc. But to some extent, this has always been the case with automated systems.

Finally, as I wrote in a previous post: “Everyone seems to be describing humans as information-transfer organisms. But, in truth, we are not particularly good at creating reliable accounts of events. What we seem hardwired to do is to add to our understanding of events around us” by determining the credibility of a source.

Any thoughts? 0 for 3? Write to woodward@tcnj.edu.