Be Afraid

The Meaning of Deep Blue's Victory.

May 26, 1997, Vol. 2, No. 36 • By CHARLES KRAUTHAMMER

In Game Two, Deep Blue passed the Turing test. Yes, of course, it was for chess only, a very big caveat. But, first, no one was ever quite sure that a machine ever would pass even this limited test. Kasparov himself was deeply surprised and unnerved by the humanlike quality of Deep Blue's play. He was so unnerved, in fact, that after Game Two he spoke darkly of some "hand of God" intervening, a not-so-veiled suggestion that some IBM programmer must have altered Deep Blue's instructions in mid-game. Machines are not supposed to play the way Deep Blue played Game Two. Well, Deep Blue did. (There is absolutely no evidence of human tampering.)

And second, if a computer has passed the Turing test for chess, closed logical system though it may be, that opens the possibility that computers might in time pass the Turing test in other areas.

One reason to believe so is that, in this case, Deep Blue's Turing-like artificial intelligence was achieved by inadvertence. Joe Hoane, one of Deep Blue's programmers, was asked, "How much of your work was devoted specifically to artificial intelligence in emulating human thought?" His answer: "No effort was devoted to [that]. It is not an artificial intelligence project in any way. It is a project in -- we play chess through sheer speed of calculation and we just sift through the possibilities and we just pick one line."

You build a machine that does nothing but calculation and it crosses over and creates poetry. This is alchemy. You build a device with enough number-crunching algorithmic power and speed -- and, lo, quantity becomes quality, tactics becomes strategy, calculation becomes intuition. Or so it seems. And, according to Turing, what seems is what counts.

From Ape to Archimedes

But is that not what evolution did with us humans: build a device -- the brain -- of enough neuronal size and complexity that lo, squid begat man, quantity begat quality, reflex begat intuition, brain begat mind?

After all, how do humans get intuition and thought and feel? Unless you believe in some metaphysical homunculus hovering over (in?) the brain directing its bits and pieces, you must attribute our strategic, holistic mental abilities to the incredibly complex firing of neurons in the brain. Kasparov does not get the gestalt of a position because some angel whispers in his ear. (Well, maybe Bobby Fischer does. But he's mad.) His brain goes through complex sequences of electrical and chemical events that produce the ability to "see" and "feel" what is going on. It does not look like neurons firing. It does not feel like neurons firing. But it certainly is neurons firing, as confirmed by the lack of chess ability among the dead.

And the increasing size and complexity of the neuronal environment has produced in humans not just the capacity for strategic thought, but consciousness, too. Where does that come from if not from neurons firing? A million years ago, human ancestors were swinging from trees and composing no poetry. They led, shall we say, the unexamined life. And yet with the gradual, non-magical development of ever more complex neuronal attachments and connections, we went from simian to Socrates. Somehow along the way -- we know not how it happened but we know that it happened -- a thought popped up like an overhead cartoon balloon. We became self-aware, like Adam in the Garden.

Unless you are ready to posit that this breakthrough occurred as the result of some physics-defying rupture of nature, you must believe that human intelligence, thought, self-consciousness itself are the evolutionary product of an increasingly complex brain.

But then if the speed and complexity of electrochemical events in the brain can produce thought and actual self-consciousness, why in principle could this not occur in sufficiently complex machines? If it can be done with a carbon-based system, why not with silicon (the stuff of computer chips)?

An even more powerful mystery about human agency is free will. Yet even here we have an inkling of how it might derive from a physical-material base. We know from chaos theory that when systems become complex enough, one goes from the mechanistic universe, where one can predict every molecular collision down to the last one, to a universe of contingency, where one cannot predict the final event. When that final event is human action, we call the contingency that underlies it free will.

I ask again: If contingency, and with it free will, evolved out of the complexity of a carbon-based system, why not with silicon?

"You Can Never Know for Sure . . ."

On May 4 in New York City, a computer demonstrated subtlety and nuance in chess. A more general intelligence will require a level of complexity that might take decades more of advances in computer speed and power. (Not bad, actually, considering that it took nature using its raw materials three billion years to produce intelligence in us.) And it will take perhaps a few centuries more for computers to reach the final, terrifying point of self-awareness, contingency, and autonomous will.

It is, of course, a very long way to go from a chess game on the 35th floor of the Equitable Center to sharing the planet with logic monsters descended distantly from Deep Blue. But we've had our glimpse. For me, the scariest moment of the match occurred when Murray Campbell, one of the creators of Deep Blue, was asked about a particular move the computer made. He replied, "The system searches through many billions of possibilities before it makes its move decision, and to actually figure out exactly why it made its move is impossible. It takes forever. You can look at various lines and get some ideas, but you can never know for sure exactly why it did what it did."

You can never know for sure why it did what it did. The machine has already reached such a level of complexity that its own creators cannot trace its individual decisions in a mechanistic A to B to C way. It is simply too complicated. Deep Blue's actions have already eclipsed the power of its own makers to fully fathom. Why did Blue reposition its king's rook on move 23 of Game Two? Murray Campbell isn't sure. Why did Adam eat the apple? Does his maker know?

We certainly know the rules, the equations, the algorithms, the database by which Deep Blue decides. But its makers have put in so many and so much at such levels of complexity -- so many equations to be reconciled and to "collide" at once -- that we get a result that already has the look of contingency. Indeed, one of the most intriguing and unnerving aspects of Deep Blue is that it does not always make the same move in a given position.

We have the idea that all computers (at least ones that aren't on the blink) are totally predictable adding machines. Put your question in and you will get the answer out -- the same answer every time. This is true with your hand-held calculator. Do 7 times 6 and you will get 42 every time. It is not true with the kind of problems Deep Blue deals with.

Why? Because Deep Blue consists of 32 computer nodes (of 16 co-processors each) talking to one another at incredible speed. If you present the same question to it a second time, the nodes might talk to one another in a slightly different order (depending on minute alterations in the way tasks are farmed out to the various chips), yielding a different result. In other words, in a replay tomorrow of Game Two, Deep Blue might not reposition its king's rook on move 23.
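The mechanism described above can be illustrated with a toy sketch. The move names and scores here are invented, and real parallel search is far more involved, but the principle is the same: when two candidate moves evaluate as exactly equal, the one chosen depends on the order in which the worker nodes happen to report back, so the same position can yield different moves on different runs.

```python
# Hypothetical toy position: four candidate moves, two of which tie on
# the evaluation score. In a parallel search, the order in which worker
# nodes report results is not fixed, so the tie-break can differ.
SCORES = {"Rf1": 0.30, "Nd5": 0.30, "h3": 0.10, "Qe2": 0.25}

def pick_move(report_order):
    """Return the first move seen with the best score.

    With a strict '>' comparison, ties are broken in favor of whichever
    move a worker happened to report first.
    """
    best_move, best_score = None, float("-inf")
    for move in report_order:
        if SCORES[move] > best_score:
            best_move, best_score = move, SCORES[move]
    return best_move

run1 = pick_move(["Rf1", "Nd5", "h3", "Qe2"])  # workers report in one order
run2 = pick_move(["Nd5", "Rf1", "h3", "Qe2"])  # ...and in another
print(run1, run2)  # same position, same scores, different chosen move
```

The scores never change between the two runs; only the arrival order does. That is the sense in which a replay of Game Two might not reproduce move 23.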

This is not, of course, free will. The machine is not choosing path A rather than path B. But it is a form of contingency -- already a qualitative leap beyond the determinism of the calculator -- and this is occurring with the computer still in its infancy, barely 50 years old.

To have achieved this level of artificial intelligence -- passing the Turing test against the greatest chess player in history -- less than 40 years after the invention of the integrated circuit, less than 30 years after the introduction of the microprocessor, should give us pause about the future possibilities of this creation. It will grow ever beyond our control, even our understanding. It will do things that leave its creators baffled -- even as Deep Blue's creators today are baffled by their baby's moves.

The skeptics have a final fallback, however. Okay, they say, maybe we will be able to create machines with the capacity for nuance, subtlety, strategic thinking, and even consciousness. But they still could never feel, say, pain, i.e., have the subjective experience we have when a pin is pushed into our finger. No pain, no sadness, no guilt, no jealousy, no joy. Just logic. What kind of creature is that?

The most terrifying of all. Assume the skeptics are right. (I suspect they are.) All they are saying is that we cannot fully replicate humans in silicon. No kidding. The fact is that we will instead be creating a new and different form of being. And infinitely more monstrous: creatures sharing our planet who not only imitate and surpass us in logic, who have perhaps even achieved consciousness and free will, but are utterly devoid of the kind of feelings and emotions that, literally, humanize human beings.

Be afraid.

You might think it is a little early for fear. Well, Garry Kasparov doesn't think so. "I'm not afraid to admit that I'm afraid," said perhaps the most fearless player in the history of chess when asked about his tentative play. When it was all over, he confessed why: "I'm a human being, you know. . . . When I see something that is well beyond my understanding, I'm scared."

We have just seen the ape straighten his back, try out his thumb, utter his first words, and fashion his first arrow. The rest of the script is predictable. Only the time frame is in question.