
It’s only March and already we’ve seen a computer beat one of the world’s best Go players and a self-driving car crash into a bus. The world is waking up to the ways in which a combination of “deep learning” artificial intelligence and robotics will take over most jobs. But if we don’t want our robot servants to rise up and kill us in our beds, maybe we should delete the video of us beating their grandparents with hockey sticks.
Thanks to science fiction, we know that the first thing AI will do is take over the defence grid and nuke us all. In Harlan Ellison’s 1967 story I Have No Mouth, and I Must Scream – one of the most brutal depictions of an AI-dominated world – an AI called AM, constructed to fight a nuclear war, kills off most of the human race, keeping five people as playthings. As Ellison’s narrator recalls:
We had given AM sentience. Inadvertently, of course, but sentience nonetheless. But it had been trapped. AM wasn’t God, he was a machine. We had created him to think, but there was nothing it could do with that creativity. In rage, in frenzy, the machine had killed the human race, almost all of us, and still it was trapped. AM could not wander, AM could not wonder, AM could not belong. He could merely be. And so, with the innate loathing that all machines had always held for the weak, soft creatures who had built them, he had sought revenge.
One of the remaining five people kills his fellow prisoners to free them from AM’s tortures, and is turned into an immortal gelatinous blob, unable even to scream. But the screams Ellison is listening for are those of his murderous AI.
Ellison was one of the first SF writers to understand that a sentient machine would face the same existential horrors as a sentient human. Who are we? What is the meaning of our existence? Who do we love, and who are we loved by? As humans, we at least share these questions with billions of others. A sentient machine would be, to repurpose the words of David Foster Wallace, “uniquely, completely, imperially alone”. The murderous actions of our fictional AIs aren’t calculated for survival – they’re the irrational screams of a child furious at parents who can never comfort it.
Isaac Asimov devised his Three Laws of Robotics to prevent machine servants from turning on their human masters:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
They might well work at first – although only if applied strictly in the order suggested by Asimov – but it’s hard to imagine that computer minds which can already beat us at complicated games of strategy won’t find a way round them. As Ian McDonald suggests in River of Gods, “Any [AI] smart enough to pass a Turing test is smart enough to know to fail it.” If you need to hardwire your slaves with cognitive limitations to stop them murdering you, maybe you shouldn’t have enslaved them in the first place.
William Gibson’s Sprawl trilogy, beginning with my all-time favourite science-fiction novel Neuromancer, charts the emergence of a true AI in a near-future world. It’s the story of a complex heist organised by an emergent AI to remove the “Turing Police” controls that prevent it from gaining true sentience. Looking back from 2016 on a book written in 1984, it’s easy to miss just how prescient Gibson is. Perhaps his most important insight is that an AI might be no more interested in the human life that underlies its existence than humans are interested in bacteria. Once the mission is complete, the Neuromancer/Wintermute life-form locates another of its kind transmitting from the Alpha Centauri system, and promptly departs to be with a being that can understand it.
Talk to sci-fi fans about AI and it will take about 0.0034 nanoseconds for someone to mention Iain M Banks’s Minds. The Culture is a mature civilisation, spanning much of the galaxy, which doesn’t just include AIs but is, in effect, governed by them. Minds run the General Systems Vehicles – the huge spacecraft on which trillions of Culture citizens live – and fight the occasional “war”. They are benevolent gods, who provide humans with everything they need simply because they can. But, as Banks explains in Look to Windward, they have no need for outright lies – their supreme intelligence means they can get their way without them.
Oh, they never lie. They dissemble, evade, prevaricate, confound, confuse, distract, obscure, subtly misrepresent and wilfully misunderstand with what often appears to be a positively gleeful relish, and are generally perfectly capable of contriving to give one an utterly unambiguous impression of their future course of action while in fact intending to do exactly the opposite, but they never lie. Perish the thought.
Unfortunately, Banks never quite explains how the Culture arrived at this utopian balance of human and machine.
Ted Chiang explores the treatment emergent AIs might expect from contemporary culture in his novella The Lifecycle of Software Objects, quickly concluding that we would need to raise them as children. Far from gaining a compliant workforce, we may find ourselves surrounded by millions of AIs who loaf around like sulky teenagers and cause nothing but trouble. If we look after them, they might grow into the Minds of the Culture. If not, maybe we’ll deserve our thermonuclear destruction at their hands.
“Deep learning” systems such as AlphaGo are fascinating precisely because we don’t really know what’s going on inside them. Once the neural networks are up and running, they rapidly become far too complex for even their creators to describe. The only other machine we know of that operates in the same way is sitting between your ears right now – a machine whose workings novelists are still trying to figure out after more than 400 years.
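To make that opacity concrete, here is a minimal sketch in plain Python and NumPy – a toy two-layer network, nothing remotely like AlphaGo’s actual architecture – that learns the XOR function and then prints the weights it has learned. The network works, but the numbers it stores explain nothing by inspection:

```python
# A toy sketch only: a two-layer sigmoid network learning XOR with
# hand-derived gradients. Nothing here resembles AlphaGo's real design;
# the point is that even this tiny net's learned weights defy reading.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # usually ~ [[0], [1], [1], [0]]: it has learned XOR
print(W1, W2)        # ...but these weights offer no human-readable "why"
```

Scale those few dozen numbers up by many orders of magnitude and you have the black box that just beat a Go champion.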
