The author of the bestselling Sapiens offers a penetrating critique of the insidious dangers of artificial intelligence and its capacity to manipulate the truth
What jumps to mind when you think about the impending AI apocalypse? If you’re partial to sci-fi movie clichés, you may envisage killer robots (with or without thick Austrian accents) rising up to terminate their hubristic creators. Or perhaps, à la The Matrix, you’ll go for scary machines sucking energy out of our bodies as they distract us with a simulated reality.
For Yuval Noah Harari, who has spent a lot of time worrying about AI over the past decade, the threat is less fantastical and more insidious. “In order to manipulate humans, there is no need to physically hook brains to computers,” he writes in his engrossing new book Nexus. “For thousands of years prophets, poets and politicians have used language to manipulate and reshape society. Now computers are learning how to do it. And they won’t need to send killer robots to shoot us. They could manipulate human beings to pull the trigger.”
Language – and the human ability to spin it into vast, globe-encircling yarns – is fundamental to how the Israeli historian, now on his fourth popular science book, understands our species and its vulnerabilities. In his 2014 mega-hit Sapiens (originally published in Hebrew in 2011), he argued that humans became dominant because they learned to cooperate in large numbers, thanks to a newfound aptitude for telling stories. That aptitude, which enabled our ancestors to believe in completely imaginary things, lies at the root of our religions, economies and nations, all of which would dissolve if our narrative-spinning faculties were somehow switched off.
Sapiens has sold 25m copies to date – a testament to Harari’s own storytelling prowess – though it’s had its share of detractors. Academics questioned its accuracy and the idea of cramming 70,000 years of human history into 450 pages. Sitcoms poked fun at Harari superfans who wave the book around like a modern-day bible. The appeal of Sapiens lies in its dizzying scope but, as a 2020 New Yorker profile pointed out, Harari’s zoomed-out approach can have the effect of minimising the importance of current affairs.
Nexus could be seen as a rebuke to that criticism. Though it executes its own breakneck dash through the millennia, hopping back and forth in time and between continents, it is very much concerned about what’s happening today.
If stories were fundamental to the schema of Sapiens, here it’s all about information networks, which Harari views as the basic structures undergirding our societies. “Power always stems from cooperation between large numbers of humans,” he writes, and the “glue” that holds these networks of cooperation together is information, which “many philosophers and biologists” see as “the most basic building block of reality”.
But information doesn’t reliably tell the truth about the world. More often, Harari emphasises, it gives rise to fictions, fantasies and mass delusions, which lead to such catastrophic developments as Nazism and Stalinism. Why is Homo sapiens, for all its evolutionary successes, so perennially self-destructive? “The fault,” according to Harari, “isn’t with our nature but with our information networks.”
Casting his eye over how information has led us astray in the past, Harari has no shortage of examples to draw on. One of the more gorily memorable is Malleus Maleficarum, written by the Dominican friar Heinrich Kramer in 1480s Austria. A guide to exposing and murdering witches in deliriously horrible ways, the book wouldn’t have travelled far had the printing press not been invented a few decades earlier, allowing Kramer’s deranged ideas to spread across Europe, stoking a witch-hunting frenzy.
Harari’s basic point is that information revolutions can give rise to periods of human flourishing but always come at a cost. When we invent shiny new technologies that carry words and ideas farther and faster than ever before, much of the information that spews out is dross or actively dangerous. It’s not helped by the fact that, when it comes to maintaining social order, fictions tend to be more reliable binding agents than truths.
What’s scary about the AI revolution isn’t just that we’ll be overwhelmed with misinformation from chatbots, or that the powers that be will use it to crunch data on our private lives. Unlike previous technologies such as books and radios, writes Harari: “AI is the first tool that is capable of making decisions and generating ideas by itself.” We saw an early warning of this in Myanmar in 2016-17, when Facebook algorithms, tasked with maximising user engagement, responded by promoting hateful anti-Rohingya propaganda that fuelled mass murder and ethnic cleansing.
Harari makes a strong case for why we should regard such algorithms as autonomous agents and how, if we’re not careful, humans could become tools for AI to manipulate with ever more terrible force. Unless we take immediate action, this burgeoning “alien intelligence”, as he prefers to call it, could trigger catastrophes we can’t even imagine, up to and including the destruction of human civilisation.
This pessimistic take on AI is nothing new: “doomers” such as Eliezer Yudkowsky have been warning of its apocalyptic potential for years and even the AI industry has started voicing concerns. What Harari seeks to add to the debate is the long view. By applying his lens to previous information revolutions and showing how different forms of government have reacted to them, he believes we can prepare ourselves for the earthquakes to come.
Nexus has some curious blind spots; it’s odd, in a critique of a technology driven largely by profit-seeking corporations, that capitalism is hardly mentioned at all. But whether or not you agree with Harari’s historical framing of AI, it’s hard not to be impressed by the meticulous way he builds it up, speckling what could be a rather dry analysis with vivid examples, such as the story of Cher Ami, a first world war messenger pigeon, used here to tease out information’s fundamental slipperiness. As in previous books, he relies heavily on lists (“the two main challenges”, “the five basic principles”) and binaries (truth versus order, democracy versus dictatorship), but this serves to organise his thinking rather than dull the writing.
The solutions he proposes to restrict AI’s power range from the sensible (ban bots from impersonating humans) to the laughable (encourage artists and bureaucrats to “cooperate” to help the rest of us understand the computer network), but Nexus operates primarily as a diagnosis and a call to action, and on those terms it’s broadly successful. If it sells anywhere near as well as Sapiens did, we’ll be that bit better equipped as a species to deal with the rise of the machines.
By Killian Fox