Man vs. Machine
The limits (?) of artificial intelligence.
Apr 28, 2014, Vol. 19, No. 31 • By JAMES C. BANKS
The failure to explore and monitor the threat [of dangerous artificial intelligence] is almost society-wide. But that failure does not . . . alter the fact that we will have just one chance to establish a positive coexistence with beings whose intelligence is greater than our own.
This is not just the closing paragraph of James Barrat’s new manifesto; it is the overarching theme of this book. And when Barrat speaks of the threat of artificial intelligence, he wants you to know it is a matter of your life and death.
Barrat knows that we have heard all of this before, from Isaac Asimov and Arthur C. Clarke and James Cameron, but the foremost merit of Our Final Invention is that Barrat manages to distance himself credibly from the fantasists who have preceded him. This is not to say his argument is so persuasive that we should begin storing weapons and resign ourselves to becoming the protagonists of an H. G. Wells novel. But it does mean he raises issues that deserve to be considered more than they are.
Barrat knows how hard this argument is to take seriously, and he tries to stave off the dismissive laughter it invites. For instance, he acknowledges that “entertainment has inoculated [artificial intelligence] from serious consideration in the not-so-entertaining category of catastrophic risks. For decades, getting wiped out by artificial intelligence . . . has been a staple of popular movies, science-fiction novels, and video games.” The sort of apocalyptic scenarios Barrat fears do not sound much like a plotline from one of the Terminator films. The end of the world he describes comes with something that sounds closer to a whimper than a bang.
Barrat is more concerned with artificial intelligence than with humanoid robots, and, unlike so many of the novels and movies that he believes inoculate us against the real threat, he argues that artificial intelligence is worrisome precisely insofar as it is unlike human intelligence.
The problem with machines is not that they are broadly malignant but that they are not specifically benign. Barrat has a few examples of how extremely intelligent, amoral machines might pose a threat. What happens if, for instance, an artificially intelligent machine with the ability to self-replicate does not know when to stop itself and consumes our natural resources in its drive to reproduce? Questions like this should concern us; but there is a problem with Barrat’s argument, stemming from the tricky business of defining intelligence itself.
At one point, Barrat picks up the New Oxford American Dictionary to help him define “artificial intelligence,” but even when he talks about some of the symptoms of intelligence—such as the ability to self-program, or to learn from one’s mistakes—the systems that Barrat fears always seem to have the superficial appearance of intelligence without actually being intelligent. Artificial intelligence might be able to calculate pi to more decimal places than any human ever could; but this is intelligence still mediated through human experience and human programming, for human purposes.
Barrat comes close to grasping this point a number of times, without ever putting his finger on it. He asks Richard Granger, an expert in “computational neuroscience,” whether a computer would ever be able to develop intelligence on the same level as a human, considering that a computer has no senses. It is a good question, but one for which Granger does not have much of an answer. His response is that Helen Keller managed to develop a human level of intelligence—an answer that ignores the fact that those lacking one or more of the senses can compensate through the senses they do have.
And whether our senses are limited or not, they still serve as the medium through which we experience, evaluate, and learn to make rational decisions about the world. A machine could conceivably have sensors for picking up objects or avoiding perilous situations, but it will only know to avoid contact with a hot oven to the extent that a human mind has dictated that it will do so.
Barrat and other techno-critics might argue that this does not matter: After all, a steamroller does not need an IQ to run over you; a nuclear weapon does not need to think to explode. But it matters because, as long as we are unable to invent a fully self-conscious machine, machines will always have to borrow their consciousness or drive from humans. Toward the end of the book, Barrat mentions Moravec’s Paradox: Tasks that humans find easy, robots tend to find difficult, whereas tasks that humans find difficult, robots tend to find easy. Barrat sees this as evidence of how imperiled we are: If computers can so easily acquire reasoning capabilities, how are we to defend ourselves? But the paradox cuts the other way. It is the smaller things—adaptability, creativity, the ability to think on one’s feet and reconfigure one’s environment for survival—that are at the root of human success, and these are precisely the tasks that machines find hardest.
Nonetheless, if Barrat does not always make a convincing case, his predictions are preferable to those of some of the techno-utopians he interviews. If there is anything more disturbing than the prospect of being destroyed by a self-replicating computer that feeds itself by harvesting carbon, it is the visions of people like Ray Kurzweil, a man who “plans to fend off death” through dieting and exercise “until technology finds a cure he’s certain will come.”
Immortality has long been a pursuit of the “transhumanist” movement, but there is nothing immortal about the sort of goal that Kurzweil is setting for humanity. It may carry us past ninety, and it may hold the promise of indefinite, if not eternal, life. But in the transhumanist world that Kurzweil and others dream of, we would still be fed with the same food, killed by the same weapons, subject to the same diseases, and cooled and warmed by the same winter and summer.
In such a world—a world in which people may have the ability to live forever but are not guaranteed to do so—would anyone muster the courage to set foot outside his front door? Would people still desire to raise children if they had no intention of leaving any legacy to them? Or would we become a society of old minds trapped in young bodies, seeking no more wisdom than will preserve those bodies for another year?
Time will tell. But, as the Savage, the hero of Aldous Huxley’s Brave New World, points out, a world without adversity makes virtue irrelevant. It is also a world in which humans could survive, but in which their humanity would not last an hour.
James C. Banks is a writer in New York.