Category Archives: Models

Examples we can productively study

A Case of Failed Journalism Ethics

Source: Wikipedia.org

Members of the Fourth Estate should honor the idea of privacy when there is no compelling public need to know.

The massive hack into Sony and Columbia Pictures’ computers at the end of 2014 was recently made worse by a decision by WikiLeaks to catalogue and release 30,000 company documents and 173,000 employee e-mails. The original cyber-attack, which included theft of the sophomoric film The Interview, is sometimes attributed to North Korea. Others who follow these things aren’t so sure.

What is apparent is that WikiLeaks founder Julian Assange has decided that the presumed confidentiality we all expect as a functional necessity in organizational life means nothing. His rationale for releasing the documents is generic and lame: Sony is “newsworthy and at the centre of a geo-political conflict.” And so—with too many hangers-on within a compliant American press—we can all be voyeurs to employees’ health documents, their social security numbers, their phone numbers, and their private communications. Apparently Assange is determined to “fence” this stolen data to the rest of us. His egregious decision is made even worse by his choice to set up a separate website to display the material and make it fully searchable.

Sony has a right to expect that its internal affairs—which apparently involve no violations of any laws—are private. This is simply an unauthorized peek into someone else’s business that is really none of our business.

What makes the theft and publication of these materials even worse is the willingness of many news outlets to mine this stolen property to pander to their readers. It’s still early, and already the New York Post, CNN, the Associated Press, the New York Times, The Verge, Gawker, the Daily Beast, New York Magazine and others have published gossipy stories from these memos. To my knowledge no major news organization has editorialized against the practice, even though the fig leaf of legitimate “news” keeps slipping out of place with every innocuous story ginned out of the celebrity e-mails.

This gossip won’t be repeated here. But you get the idea if you remember the temptations of news sites that carried images of an unclothed Jennifer Lawrence stolen from Apple’s “cloud.”

Too few journalists have taken the principled position of Slate’s Jacob Weisberg:

“News outlets should obviously cover the story of the hack itself, the effect on Sony, the question of how it happened, and who’s responsible. This is a big and legitimate news story. But when it comes to exploiting the fruits of the digital break-in, journalists should voluntarily withhold publication. They shouldn’t hold back because they’re legally obligated to—I don’t believe they are—but because there’s no ethical justification for publishing this damaging, stolen material.”

Federal courts have ruled that the press can publish most kinds of material stolen by a third party. There is understandable value in immunizing journalists against governmental sources that would like to suppress evidence of failed policies or simple malfeasance. There is no question the nation was well-served by the unauthorized release and publication of the Pentagon Papers in 1971. But nothing so grand is at risk here. This is the equivalent of opportunists looting a store that has been torn open by an earthquake.

Assange’s effort to frame his voyeurism in the language of “public service” is an abuse of the term. His error of judgment should also be apparent to members of the Fourth Estate, who must honor the idea of privacy when there is no compelling public need to know.

Comments: Woodward@tcnj.edu


Turing, And The Bogus Rivalry With Machine-Based Intelligence

IBM’s Watson. Source: Wikipedia.org

In reality, humans have nothing to fear. Most measures of artificial intelligence use the wrong yardsticks.

We are awash in articles, books and films about the coming age of “singularity”: the point at which machines will supposedly duplicate and surpass human intelligence. For decades it’s been the stuff of science fiction, reaching perhaps its most eloquent expression in Stanley Kubrick’s 1968 motion picture, 2001: A Space Odyssey. The film is still a visual marvel. Who would have thought that Strauss waltzes and images of deep space could be so compatible? Functionally, the waltzes have the effect of juxtaposing the familiar with a hostile void, making the film a surprising celebration of all things earthbound. But that’s another story.

The central agent in the film is the HAL-9000 computer that begins to turn off the life support of the crew during a long voyage, mostly because it “thinks” the humans aren’t up to the enormous task facing them.

Kubrick’s vision of very smart computers is also evident in the more recent A.I. Artificial Intelligence (2001), a project started just before his death and eventually brought to the screen by Steven Spielberg. It’s a dystopian nightmare. In the film, intelligent “mechas” (mechanical robots) are generally nicer than the humans who created them. In pleasant Haddonfield, New Jersey, of all places, they are shot on sight for sport.

Fantasies of machine intelligence have lately given way to IBM’s “Deep Blue” and “Watson,” mega-computers with amazing memories and—with Watson—a stunning speech recognition capability that is filtering down to all kinds of devices. If we can talk to machines, aren’t we well on our way to singularity?

For one answer, consider the Turing Test, the challenge laid down by the World War II code-breaker Alan Turing. A variation of it has been turned into a recurring world competition. The challenge is to construct a “chatterbot” that can pass for a human in blind, side-by-side “conversations” that also include real people. For artificial intelligence engineers, the trick is to fool a panel of questioners at least 30 percent of the time over 25 minutes. According to the BBC, a recent winner was a computer program from the University of Reading in the U.K. It passed itself off as a Ukrainian teen (“Eugene Goostman”) speaking English as a second language.
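To see how little “understanding” a chatterbot needs, here is a minimal sketch in the spirit of the classic ELIZA program: it reflects fragments of the questioner’s own words back with canned templates, and falls back on stock deflections when nothing matches. The rules and phrasing below are purely illustrative, not taken from any actual competition entry.

```python
import random
import re

# Reflection rules: (pattern, response template). The bot "understands"
# nothing; it only echoes captured text back inside a canned sentence.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Stock deflections for input no rule matches; a persona excuse
# ("English as a second language") conveniently covers odd replies.
DEFLECTIONS = [
    "I see. Please go on.",
    "Interesting. What makes you say that?",
    "Sorry, my English is not so good. Can you rephrase?",
]

def reply(message: str, rng: random.Random = random) -> str:
    """Return a reflected response if a rule matches, else a deflection."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return rng.choice(DEFLECTIONS)

print(reply("I am worried about machines"))
# Reflects the speaker's words: "Why do you say you are worried about machines?"
```

A panel chatting with such a bot for a few minutes can be fooled precisely because it keeps the conversation about the questioner, never volunteering a self of its own—which is the essay’s point about wrong yardsticks.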

In fact, humans have nothing to fear. Most measures of “human-like” intelligence, such as the Turing Test, use the wrong yardsticks. These computers are never embodied. The rich information of non-verbal communication is not present, nor can it be. Proximate human features are not enough. For example, Watson’s “face” in its famous Jeopardy challenge a few years ago was a set of cheesy electric lights. Moreover, these smart machines tend to be asked questions that we would ask of Siri or other informational databases. What they “know” is often defined as a set of facts, not feelings. And, of course, these machines lack what we so readily reveal in our conversations with others: that we have a sense of self, that we have an accumulated biography of life experiences that shapes our reactions and dispositions, and that we want to be understood.

Just the issue of selfhood should remind us of the special status that comes from living through dialogue with others. A sense of self is complicated, but it includes the critical ability to be aware of another’s awareness of who we are. If this sounds confusing, it isn’t. This process is central to all but the most perfunctory communication transactions. As we address others we are usually “reading” their responses in light of what we believe they have already discerned about us. We triangulate among our perceptions of who we are, who they are, and what they are thinking about our behavior. This all happens in a flash, forming what is sometimes called “emotional intelligence.” It’s an ongoing form of self-monitoring that oils our complex relationships. Put this sequence together, and you get a transaction full of feedback loops that involve estimates of intention and interest, and—frequently—a general desire, born in empathy, to protect the feelings of the other.

It’s an understatement to say these transactions are not the stuff of machine-based intelligence, and probably never can be. We are not computers. As Walter Isaacson reminds us in his recent book, The Innovators, we are carbon-based creatures with chemical and electrical impulses that mix to create unique and idiosyncratic individuals. This is where the organ of the brain becomes so much more: the biographical homeland of an experience-saturated mind. With us there is no central processor. We are not silicon-based. There are nearly infinite forms of consciousness in a brain with 100 billion neurons and 100 trillion connections. And because we often “think” in ordinary language, we are so much more—and sometimes less—than an encyclopedia on two legs.