The Magazine

Be Afraid

The Meaning of Deep Blue's Victory.

May 26, 1997, Vol. 2, No. 36 • By CHARLES KRAUTHAMMER

Deep Blue, however, having calculated every possible outcome of the next 10 or 15 moves, had determined it could (1) capture the pawn, then (2) bring its expeditionary force back to defend its king exactly a hairsbreadth before Kasparov could deliver the fatal checkmate, thus (3) foil Kasparov's attack -- no matter how he tried it -- and then (4) win the game thanks to the extra pawn it had captured on its hair-raising gambit.

So it calculated. And so, being exactly right, it won.

No human would have tried this because no human could have been certain that in this incredibly complex position he had seen every combination. Deep Blue did try it because, up to a certain horizon (10 to 15 moves into the future), it is omniscient.

Game One in Philadelphia became legend. It was a shock to Kasparov's pride and a tribute to the power of brute tactical calculation. But that is all it was: tactics.
 
Playing Like a Human

Fast forward to Game Two of this year's match, on May 4. This time the machine won, but in a totally different way.

It did not use fancy tactics -- tactics being the calculation of parry and thrust, charge and retreat, the tit-for-tat of actual engagement, the working out of "If I do A and you do B and I do C, then X." Game Two allowed for no clever tactics. The position was closed, meaning that both sides' pieces were fairly locked in, with very few tactical and combinational opportunities.

Kasparov had deliberately maneuvered the game into this structure. He knew (from Game One in Philadelphia) that when the armies are out in the open and exchanging fire rapidly, the machine can outcalculate him. He knew that his best chance lay in a game of closed positions, where nothing immediate is happening, where the opposing armies make little contact, just eyeing each other warily across the board, maneuvering their units, making subtle changes in their battle lines.

Such strategic, structural contests favor humans. After all, Kasparov does not evaluate 200 million positions per second. He can evaluate three per second at most. But he has such intuition, such feel for the nuances and subtleties that lie in the very structure of any position, that he can instinctively follow the few lines that are profitable and discard the billions of combinations that Deep Blue must look at. Kasparov knows in advance which positions "look" and "feel" right. And in closed strategic games like Game Two, look and feel are everything.

The great chess master Savielly Tartakower once said: "Tactics is what you do when there is something to do. Strategy is what you do when there is nothing to do." Strategic contests are contests of implied force and feints, of hints and muted thrusts. They offer nothing (obvious) to do. And they are thus perfectly suited to human flexibility and "feel."

Calculators, on the other hand, are not good at strategy. Which is why historically, when computers -- even the great Deep Blue -- have been given nothing tactically to do, no tit-for-tat combinations to play with, they have tended to make aimless moves devoid of strategic sense.

Not this time. To the amazement of all, not least Kasparov, in this game drained of tactics, Deep Blue won. Brilliantly. Creatively. Humanly. It played with -- forgive me -- nuance and subtlety.

How subtle? When it was over, one grandmaster commentator was asked where Kasparov went wrong. He said he didn't know. Kasparov had done nothing untoward. He made no obvious errors. He had not overlooked some razzle-dazzle combination. He had simply been gradually, imperceptibly squeezed to death by a machine that got the "feel" of the position better than he.

Why is this important? Because when Deep Blue played like a human, even though reaching its conclusions in a way completely different from a human, something monumental happened: Deep Blue passed the Turing test.
 
The Turing Test

In 1950, the great mathematician and computer scientist Alan Turing proposed the Turing test for "artificial intelligence." It is brilliantly simple: You put a machine and a human behind a curtain and ask them questions. If you find that you cannot tell which is the human and which is the machine, then the machine has achieved artificial intelligence.

This is, of course, a mechanistic and functional way of defining artificial intelligence. It is not interested in how the machine -- or, for that matter, how the human -- comes to its conclusions. It is not interested in what happens inside the black box, only in what comes out: the results. You cannot tell man and machine apart? Then there is no logical reason for denying that the machine has artificially recreated or recapitulated human intelligence.