This is what the erosion of personal choice can look like. We may not act on the best options available, because others are choosing them for us.
In the West we cherish the idea of individual freedom. We act on the belief that we have ‘agency.’ That is to say, the choices that matter are ours alone to make. It’s a basic tenet of American life. Within the broad boundaries set by a civil society, no one has the right to deny or ignore those choices. But we are beginning to hear more from mathematicians and others about “decisions” that are not as informed or self-generated as we may think.
Hannah Fry, a mathematician at University College London, has written about “tiny decisions on our behalf” that can be made without our blessing or our awareness. In an interview reproduced in Vox, she notes that algorithms now affect many functions within our lives: “From what we choose to read and watch to who we choose to date, algorithms are increasingly playing a huge role. And it’s not just the obvious cases, like Google search algorithms or Amazon recommendation algorithms” (Vox, October 1, 2018).
The broad palette of options presented to us from internet-based material has obviously been preselected: first by advertisers who pay for high search placement, and then by algorithms used by internet providers to ostensibly match our interests. If you have puzzled over why certain Facebook feeds go to some people and not to others, you may have a growing sense that someone else is dealing the cards from a stacked deck. What someone sees at a given site is always something of a mystery: partly a function of the digital footprints we leave every time we click, and partly the work of algorithms we cannot inspect. We already know this, but it’s easy to forget about choices we never see.
By shifting decisions to mathematical formulas composed of triggering conditions we do not know, we have essentially given up some of our autonomy.
At its most basic, an algorithm of this kind is just a formula for selecting content that seems appropriate for a given consumer or class of consumers. It’s an efficient gatekeeping tool. And, to be sure, we have always had gatekeepers channeling some content in our direction and filtering out other items. But most of those decision-makers, especially in the news business, presumably use journalistic or source-credibility standards for winnowing content. Yet based on what I’ve seen from various feeds, it’s clear that those standards have been replaced by triggers that have little to do with the quality of a given story. For example, I see lots of celebrity gossip from unknown “publications” in my Google Play feed, even though my interests lie elsewhere.
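To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what a trigger-based selection formula could look like. Nothing here reflects any platform’s actual code; the story fields, weights, and scoring rule are all invented, purely to show how a ranking can reward engagement triggers while barely registering source quality.

# Hypothetical scoring formula: rank stories by engagement triggers, not credibility.
stories = [
    {"title": "City council passes budget", "credibility": 0.9,
     "celebrity": False, "predicted_clicks": 0.02},
    {"title": "You won't believe how these actors have changed", "credibility": 0.2,
     "celebrity": True, "predicted_clicks": 0.11},
]

def score(story):
    s = 5.0 * story["predicted_clicks"]          # reward what tends to get clicked
    if story["celebrity"]:
        s += 0.3                                 # an engagement "trigger"
    s += 0.1 * story["credibility"]              # source quality barely registers
    return s

for story in sorted(stories, key=score, reverse=True):
    print(f"{score(story):.2f}  {story['title']}")

In this toy version the gossip item wins the top slot, not because it is better journalism, but because the formula’s weights say it will be clicked.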
By shifting decisions to mathematical formulas built from triggering conditions we never see, we do not know what we have essentially given up. Even a system genuinely based on probabilities and past behavior is bound to yield results that are less than they should be. So when we are given “choices,” ranging from the best Asian restaurant “nearby” to the most qualified news source for a specialized story, the recommendations are based on criteria we generally do not know.
All of this suggests that we have less to fear from robots than from programmed servers that only appear to be offering targeted information. In the twenty-first century this is what the erosion of agency can mean: too often we are acting on options that have been set by others.
We have evidence that internet users are less interested in tracking the provenance of a story than consumers of straight print media.
It comes as no surprise to any thoughtful consumer that most media make money by attracting eyeballs to the ads they have strung around their content. In print media this is the role of display advertising. In conventional television the clusters of ads that interrupt program content serve the same function. Even so, in the large-scale public migration to internet sites, many consumers of “new” media seem not to have noticed how close genuine news sits to the qualitatively different “sponsored content” alongside it. Sometimes these “stories” at the end of a section feature an interesting picture, the promise of a shocking revelation, and, always, another set of pages that will pull us in to see even more ads. These “news” items are sometimes labeled “Promoted Stories” or content “From Our Partners.”
On one particular day the popular website The Daily Beast ran sponsored articles from a range of self-interested groups at the end of real journalistic pieces. One “article,” entitled “Do This Every Time You Turn on Your PC,” was really selling “Scanguard,” a product that promises to speed up balky computers. Another, “How to Fix Your Fatigue,” was clickbait from a food-supplement “doctor.” And an ancestry-research service was embedded in a third “news story” entitled “What did People Eat in the 1800s?”
Sometimes this clutter of “advertorial” content has no appeal. But we may find it irresistible to take a time-wasting detour baited by headlines like “You won’t believe how the actors in ‘Gilmore Girls’ have changed.” At the risk of giving away my Calvinist/Methodist roots, all this spontaneous grazing pulls us away from more purposeful tasks. As if we needed it, a writing course at the University of Pennsylvania is actually called “Wasting Time on the Internet.”
As things go, advertising masquerading as news probably doesn’t qualify as a crime against humanity. And there can be little question that news sites of all sorts need the revenue stream of advertising that allowed print media to prosper for well over a century. But a problem remains: paid web content is now melded so seamlessly into the mix of stories offered on many sites that we may fail to notice that we have passed from the hands of editors and journalists into a strategic marketing world dominated by advertisers and copywriters.
In greater numbers Americans don’t consider the self-serving nature of much online content.
This doesn’t pose a serious problem for a savvy reader. But we have more evidence that internet users are less interested in tracking the provenance of a story than consumers of straight print media are. In greater numbers Americans don’t consider the self-serving nature of online content, even when solid expertise and neutrality should weigh heavily on what we “know,” especially if we are researching subjects as consequential as health information. This lack of critical insight makes Americans a bit less intelligent, turning us into better consumers than citizens.
Add in another factor that makes the problem of accepting low-credibility sources even more unsettling. Typically, our memory for content outlasts our memory for where it came from. This so-called “sleeper effect” means there is a window of time when we are more likely to remember a stray fact or assertion than the source it came from. You know the effect if you have ever heard yourself say, “I don’t remember where I saw it, but I do remember seeing . . .” It’s at this point that the paid flacking of clickbait creates the greatest opportunity for cognitive mischief: its content outlasts what should be reasonable suspicions about its fictions and limitations.