Digital addiction, algorithms, and Artificial Intelligence
– by Geoff Olson –
There are only two industries that call their customers ‘users’: illegal drugs and software.
– statistician Edward Tufte
Over the past decade, media commentators and academics have thoroughly mined the downside of social media dependency. Yet a new Netflix documentary manages to sift a few dark gems from an overworked seam. In The Social Dilemma, a conga line of higher-ups from Silicon Valley share their disenchantment with the business models and engagement practices of their former employers – even though some played a part in inventing them.
These disaffected tech workers insist social media has largely devolved into algorithm-fueled influencer farms, tilled by users’ free labour and irrigated by online pissing matches. With every click, swipe, like and share, every search term, facial expression and social link, you are seduced into more “engagement.” What’s shoveled into your personalized “feed” – a fittingly barnyard descriptor – are all the things you, and others like you, are most likely to respond and react to, through dopamine-driven feedback loops.
It has been said that “if you’re not buying the product, you are the product.” Author and virtual reality pioneer Jaron Lanier alters this credo in the documentary. More precisely, it’s “the gradual, imperceptible change in your own behaviour and perception that is the product,” he insists.
The Social Dilemma resurrects a clip of Napster co-founder Sean Parker explaining how he and his colleagues went about “exploiting a vulnerability in human psychology…we understood this consciously and we did it anyway.” Since the 2016 US presidential election, infamously gamed by the exploits of Cambridge Analytica and Facebook, we’ve been well into a world of increasing political polarization, with solipsism winning out over citizenship.
It’s well known how YouTube’s recommendation algorithms drive engagement by pushing extremes. Outraged by a video about Antifa? Here’s a clip on the supposed Democrat-connected pedophile ring at a pizza parlour. Disgusted by a feed of Trump’s malapropisms? Here, you’ll love supposed proof of him being controlled by puppetmaster Putin. Intrigued by a clip highlighting weaknesses in the Big Bang theory? Get real, here’s a ninety-minute doc proving Flat Earth theory!
“Truth is boring,” says one tech higher-up in the film, explaining how outrageous claims are incentivized and promoted because they get more clicks. However, one person’s manifesto is another person’s manure, and the tech industry is short on doggy bags. Cathy O’Neil, author of Weapons of Math Destruction, nails the problem. Google doesn’t know what the truth is, she says in the doc. “They don’t have a proxy for truth that’s better than a click.”
A number of the wealthy whistleblowers in The Social Dilemma confess how they became addicted to social media platforms, even while completely aware of how they were being seduced. Of course, addiction isn’t a bug of social media, it’s a feature – and there are no restrictions or protections for the young shackled to their magic rectangles, as there are for sales of liquor and cigarettes.
“There has been a gigantic increase in depression and anxiety for American teenagers which began right around between 2011 and 2013,” says social psychologist Jonathan Haidt in one scene. The only mass change in society that correlates with this is the rise of social media and the availability of smartphones. Self-harm by teenage girls was stable until around 2010–2011, at which point it rose dramatically: a 62 percent increase for older teen girls, and 189 percent for preteen girls. Worse yet, a similar pattern prevails for suicide in the same demographic.
(One study suggests it is not social media per se that is responsible for this appalling pattern: it’s the actual time that obsessive social media “engagement” subtracts from moments of unmediated person-to-person contact. But the end result remains the same.)
The conscience of The Social Dilemma is Tristan Harris, a former Google “design ethicist”. In one scene, a bored-looking senior sitting next to Harris at a Chicago tech seminar objects that none of this is new. From automobile print ads to TV commercials, marketing has always been about shifting the behaviour of the consumer in a profitable direction. He has a point; capitalism’s time-tested tricks are still in play, with old-school salesmanship sharpened on the whetstone of behavioural psychology. But there is one key difference, which Harris recognizes: artificial intelligence (AI).
Harris uses a lab analogy: “We’re pointing these engines of AI back at ourselves to reverse-engineer what elicits responses from us. Almost like you’re stimulating nerve cells on a spider to see what causes its legs to respond.” AI tools aren’t just incredibly sophisticated, they are self-improving. Other tools, like televisions and automobiles, may improve linearly year by year, but only AI re-engineers its own code to improve exponentially.
In 2000, Wired editor Kevin Kelly put a question to Google co-founder Larry Page. Why, with so many web search companies out there, were Page and colleague Sergey Brin getting into the game by offering search for free? According to Kelly, Page responded, “Oh, we’re really making an AI.”
“Rather than use AI to make its search better, Google is using search to make its AI better,” Kelly explained in his 2010 book What Technology Wants. In other words, each user search instructs the company’s machine intelligence to sharpen its inventory of concepts. For example, image searches for “dog” teach Google’s AI to refine its visual interpretation of the noun, independent of the breed, angle of view or lighting.
“Our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences,” Kelly concluded. Brands with brains. Monopolies with minds, in effect.
Even now, nonhuman bots join Internet trolls and sock puppets in divisive digital debates. By 2018, bots, scrapers and automated scripts constituted 38 percent of all Internet traffic, with 20 percent of it classified as “bad bots,” according to Distil Networks.
A number of high-profile thinkers, from Elon Musk to the late Stephen Hawking, have expressed concern that self-improving AI systems will scale up to a point where they exceed not only human intelligence, but even our ability to understand their decision paths. What will social media look like then?
There’s a sobering moment in The Social Dilemma when Harris stands before a screen displaying a chart with a line curving upward. One point on the line marks the predicted future moment when AI overtakes all our strengths. Yet there is a second point lower on the line, indicating when AI has overtaken our weaknesses – that is, the user’s ability to actively or passively resist the engagement algorithms of social media. Harris insists that AI has already passed this point.
On one side of a digital device is a hairless primate with a wet, plodding brain that evolved at an Ice Age pace. On the other side of the device are hectares and hectares of server farms with algorithms moving at near-light speeds, programmed to keep the user attached to his or her feed. We call it “the cloud,” but it’s really the earthbound sandbox of a youthful and growing AI. It’s not what you’d call a fair fight.
Most of us under the age of 60 will likely live long enough to witness the emergence of AI systems demonstrating autonomous, superhuman intelligence. Such systems don’t even have to attain consciousness (by whatever metric is used to measure that spectral subjective state) to become problematic for human survival in our present state. They don’t even have to be all that visible, and for the most part they aren’t now. All AI has to do to “win” is outperform human beings at all levels, including any strategized attempts to rein in its powers.
But that’s in a possible future. As for the present, The Social Dilemma closes with some good near-term suggestions for policing both the tech monopolies and our personal habits. Yet none of them address the perverse incentives that were with us well before cybernetic systems were a twinkle in Norbert Wiener’s eye, and Silicon Valley startups ballooned into socially distorting monoliths. The deeper problem is touched on by Justin Rosenstein, a former engineer with Facebook and Google:
“We live in a world in which a tree is worth more, financially, dead than alive. A world in which a whale is worth more dead than alive. For so long as our economy works in that way, and corporations go unregulated, they’re going to continue to destroy trees, to kill whales, to mine the earth, and to continue to pull oil out of the ground, even though we know it is destroying the planet and we know it is going to leave a worse world for future generations.”
Having mined natural capital from the Earth like an invading alien force, corporate capitalism is now massively extracting private information from citizens across the globe. By altering us in socially and psychologically damaging ways – beyond our ability to effectively resist – it is strip-mining our very souls.
Does all this add up to a zero-sum game of silicon versus carbon? No one can say for sure at this point. But whatever weirdness is around the corner, we’d all prefer to believe a place will remain in the future for human dignity, creativity, curiosity, and face-to-face community. To say nothing of compassion and love. To preserve these non-robotic values, we’ll need to rethink some of our older, dumber and more dangerous code, involving markets, profit, and “returning value to the shareholder.”
The Social Dilemma is now playing on Netflix. And to read more on the promise and perils of Artificial Intelligence, check out Geoff Olson’s e-book, Machinations of Loving Grace, at www.books.Apple.com