Preempting Terrorism

The case for anticipatory self-defense.

Jan 28, 2002, Vol. 7, No. 19 • By MICHAEL J. GLENNON
THE BUSH DOCTRINE, as promulgated by President Bush following the events of September 11, contemplates preemptive use of force against terrorists as well as the states that harbor them. If the United Nations Charter is to be believed, however, carrying out that doctrine would be unlawful: The Charter permits use of force by states only in response to an armed attack. In 1945, when the Charter was framed, this prohibition against anticipatory self-defense may have seemed realistic. Today, it is not. Indeed, it is no longer binding law.

Since time immemorial, the use of force has been permitted in self-defense in the international as well as all domestic legal systems, and for much the same reason: With states as with individuals, the most elemental right is survival. So powerful has been its claim that the right of self-defense was considered implicit in earlier treaties limiting use of force by states; the Kellogg-Briand Peace Pact of 1928, like the 1919 Covenant of the League of Nations, made no mention of it.

In 1945, the right was made explicit. Article 51 of the United Nations Charter states expressly: "Nothing in the present Charter shall impair the inherent right of individual or collective self-defense if an armed attack occurs against a Member of the United Nations. . . ." Self-defense thus emerged as the sole purpose under the Charter for which states may use force without Security Council approval.

While the Charter professes not to "impair" the inherent right to self-defense, it does precisely that. Prior to 1945, states used defensive force before an attack had occurred, to forestall an attack. The plain language of Article 51 permits defensive use of force only if an armed attack occurs. If none has occurred, defensive force--"anticipatory self-defense"--is not permitted.

This new impairment of the right of self-defense was widely seen as sensible when the Charter was adopted. States had often used the claim of self-defense as a pretext for aggression. (The Nazi defendants at Nuremberg argued that Germany had attacked the Soviet Union, Norway, and Denmark in self-defense, fearing that Germany was about to be attacked.) If profligate use of force was ever to be reined in, narrower limits had to be imposed. And those limits had to be set out with a bright line; qualifying defensive rights with words like "reasonable," "imminent," or even "necessary" would leave states too much discretion and too much room for abuse. The occurrence of an actual armed attack was thus set up as an essential predicate for the use of force. The new requirement narrowed significantly the circumstances in which force could be used. And it set out a readily identifiable and, it was thought, objectively verifiable event to trigger defensive rights. Phony defensive justifications would be less plausible and war would be less frequent, thereby vindicating the first great purpose of the Charter--"to maintain international peace and security."

The impairment was realistic, it was further thought, because the need for anticipatory defense would diminish. The reason was that the Security Council would pick up where individual states were now compelled by the Charter to leave off. The Council, to be equipped with its own standing or standby forces, was authorized to use force in response to any "threat to the peace"--authority far broader than that accorded individual states. Coupled with the requirement that states report to the Security Council when using defensive force, this new institution--this "constabulary power before which barbaric and atavistic forces will stand in awe," as Churchill described it--would make anticipatory self-help a thing of the past.

Everyone knows it didn't work out that way. Throughout the Cold War the Security Council deadlocked repeatedly on security issues. States never gave the Council the peace enforcement troops contemplated by the Charter's framers. The Council authorized (rather than used) force only haphazardly "to maintain or restore international peace and security." And, as discussed later, states continued to use force often, obviously not in response to armed attacks.

STILL, like most states, the United States never formally claimed a right to anticipatory self-defense--i.e., to use armed force absent an armed attack, so as to prevent one from occurring. During the 1962 Cuban Missile Crisis, the United States declined to rely upon Article 51, claiming instead that the "quarantine" of Cuba was authorized by the Organization of American States (and implicitly by the Security Council). When Israel seemed to assert a right to use defensive force to prevent an imminent Arab attack in June 1967, and even when Israel squarely claimed that right in attacking an Iraqi nuclear reactor in 1981, the United States steered clear of the issue of anticipatory self-defense. In 1986, however, the United States finally did claim the right to use "preemptive" force against Libya following the bombing of a Berlin night club that killed two Americans.

This last incident is worth considering closely: The Libyan bombing highlights the doctrinal confusion surrounding self-defense and also marks a proverbial "paradigm shift" in American thinking on the question. Why insist upon an actual armed attack as a precondition for the use of force? The axiomatic answer, under long-standing dogma, is of course that force is necessary to protect against the attack. But by acknowledging that its use of force against Libya was preemptive, the United States in effect moved beyond the conventional justification. The Berlin bombing was obviously over and finished; no use of force was, or conceivably could have been, instrumental in "defending" Americans killed at the Berlin club. The United States was not, in this sense, responding defensively. It was engaged in a forward-looking action, an action directed at future, not past, attacks on Americans. Its use of force against Libya was triggered by the Berlin attack only in the sense that that attack was evidence of the threat of future attacks. Evidence of Libyan capabilities and intentions sufficient to warrant preemptive force might well have taken (and, in fact, also did take) the form of intelligence reports. From a purely epistemological standpoint, no actual armed attack was necessary.

Although the United States did not spell out its thinking this explicitly, in later incidents it acted on precisely this future-looking rationale. True, the United States was in each instance able to argue that actual armed attacks had occurred. But in each of those subsequent incidents, the United States was responding to evidence of future intent and capability, not defending against past action. Its objective was to avert future attacks through preemption and deterrence.

In 1993, for example, the United States fired cruise missiles at the Iraqi intelligence headquarters in Baghdad following an alleged effort by Iraq to assassinate President Bush. But the assassination attempt was long since over; the United States used force not to defend against illicit force already deployed, but to discourage such force from being deployed in the future. In 1998, the United States fired cruise missiles at a terrorist training camp in Afghanistan and a pharmaceutical plant in Sudan following attacks on U.S. embassies in Kenya and Tanzania. Again, the provocation had ended; in no way can the United States be seen as having defended itself against the specific armed attack to which its embassies had been subject.

So, too, with the use of force against Afghanistan following September 11. The armed attack against the World Trade Center and the Pentagon was over, and no defensive action could have ameliorated its effects. The U.S. use of force was prompted by the threat of future attacks. And it was evidence of that threat--gleaned from multiple intelligence sources, not simply from the September 11 attack--to which the United States responded with its action against Afghanistan. That action could well have been warranted even if September 11 had never occurred. The problem lay in the future, not the past.

In each of these incidents, the United States justified its action under Article 51 of the Charter, claiming to be engaged in the defensive use of force. But in fact something different was going on. In each incident, the United States was--as it acknowledged forthrightly following the 1986 bombing of Libya--engaged in the use of preemptive force. The two are not the same. The justification for genuine defensive force was set forth by U.S. Secretary of State Daniel Webster in the famous Caroline case of 1837. To use it, he wrote, a state must "show a necessity of self-defense, instant, overwhelming, leaving no choice of means, and no moment of deliberation." (This formula continues to be widely cited by states, tribunals, and commentators as part and parcel of the law of the Charter.) Obviously, in none of the incidents canvassed above can the American use of force be said to meet the Caroline standard. None of the American armed responses needed to be, or was, instant. In each the United States deliberated for weeks or months before responding, carefully choosing its means. Those means were directed not at defending against an attack that had already begun, but at preempting, or deterring, an attack that could begin at some point in the future.

In fact, the United States had long ago accepted the logic of using armed force without waiting to be attacked. In the early 1960s, President Kennedy seriously considered launching a preemptive strike against the People's Republic of China to prevent it from developing nuclear weapons. In 1994, President Clinton contemplated a preemptive attack against North Korea for the same reason. During the Cold War, the United States retained the option of launching its nuclear weapons upon warning that a nuclear attack was about to occur--before the United States actually had been attacked--so as to protect command and control systems that were vulnerable to a Soviet first strike.

It thus came as no dramatic policy change when, in the Bush Doctrine, the United States publicly formalized its rejection of the armed attack requirement and officially announced its acceptance of preemption as a legitimate rationale for the use of force. "Every nation now knows," President Bush said on December 11, "that we cannot accept--and we will not accept--states that harbor, finance, train, or equip the agents of terror."

THAT FORMALIZATION was overdue. Twenty-first-century security needs are different from those imagined in San Francisco in 1945.

First, as noted above, the intended safeguard against unlawful threats of force--a vigilant and muscular Security Council--never materialized. Self-help is the only realistic alternative.

Second, modern methods of intelligence collection, such as satellite imagery and communications intercepts, now make it unnecessary to wait for an actual armed attack before obtaining convincing proof of a state's hostile intent.

Third, with the advent of weapons of mass destruction and their availability to international terrorists, the first blow can be devastating--far more devastating than the pinprick attacks on which the old rules were premised.

Fourth, terrorist organizations "of global reach" were unknown when Article 51 was drafted. To flourish, they need to conduct training, raise money, and develop and stockpile weaponry--which in turn requires communications equipment, camps, technology, staffing, and offices. All this requires a sanctuary, which only states can provide--and which only states can take away.

Fifth, the danger of catalytic war erupting from the use of preemptive force has lessened with the end of the Cold War. It made sense to hew to Article 51 during the Cuban Missile Crisis, when two nuclear superpowers confronted each other toe-to-toe. It makes less sense today, when safe-haven states and terrorist organizations are not themselves possessed of preemptive capabilities.

Still, it must be acknowledged that, at least in the short term, wider use of preemptive force could be destabilizing. The danger exists that some states threatened with preemptive action (consider India and Pakistan) will be all too ready to preempt probable preemptors. This is another variant of the quandary confronted when states, in taking steps to enhance their security, unintentionally threaten the security of adversaries--and thus find their own security diminished as adversaries take compensatory action.

But the way out of the dilemma, here as elsewhere, is not underreaction and concession. The way out lies in the adoption of prudent defensive strategies calculated to meet reasonably foreseeable security threats that pose a common danger. Such strategies generate community support and cause adversaries to adapt perceptions and, ultimately, to recalibrate their intentions and capabilities. That process can take time, during which the risk of greater systemic instability must be weighed against the risk of worldwide terrorist attacks of increased frequency and magnitude.

The greater danger is not long-term instability but the possibility that use of preemptive force could prove incomplete or ineffective. It is not always possible to locate all maleficent weapons or facilities, which creates the risk that some will survive a preemptive strike and be used in retaliation. Similarly, if a rogue state such as Iraq considers itself the likely target of preemptive force, its leaders may have an incentive to defend with weapons of mass destruction--weapons they would not otherwise use--in the belief that they have nothing to lose. A reliable assessment of likely costs is an essential precondition to any preemptive action.

THESE ARE the sorts of considerations that policymakers must weigh in deciding whether to use preemptive force. Preemption obviously is a complement, not a stand-alone alternative, to non-coercive policy options. When available, those options normally are preferable. The point here is simply that preemption is a legitimate option, and that--the language of the Charter notwithstanding--preemption is lawful. States can no longer be said to regard the Charter's rules concerning anticipatory self-defense--or concerning the use of force in general, for that matter--as binding. The question--the sole question, in the consent-based international legal system--is whether states have in fact agreed to be bound by the Charter's use-of-force rules. If states had truly intended to make those rules obligatory, they would have made the cost of violation greater than the perceived benefits.

They have not. The Charter's use-of-force rules have been widely and regularly disregarded. Since 1945, two-thirds of the members of the United Nations--126 states out of 189--have fought 291 interstate conflicts in which over 22 million people have been killed. In every one of those conflicts, at least one belligerent necessarily violated the Charter. In most of those conflicts, most of the belligerents claimed to act in self-defense. States' earlier intent, expressed in words, has been superseded by their later intent, expressed in deeds.

Rather, therefore, than split legal hairs about whether a given use of force is an armed reprisal, intervention, armed attack, aggression, forcible countermeasure, or something else in international law's over-schematized catalogue of misdeeds, American policymakers are well advised to attend directly to protecting the safety and well-being of the American people. For fifty years, despite repeated efforts, the international community has been unable to agree on when the use of force is lawful and when it is not. There will be plenty of time to resume that discussion when the war on terrorism is won. If the "barbaric and atavistic" forces succeed, however, there will be no point in any such discussion, for the law of the jungle will prevail. Completing that victory is the task at hand. And winning may require the use of preemptive force against terrorist forces as well as against the states that harbor them.

Michael J. Glennon is a fellow at the Woodrow Wilson International Center for Scholars in Washington, D.C., and professor of law at the University of California, Davis, Law School. He is the author of "Limits of Law, Prerogatives of Power: Interventionism after Kosovo" (Palgrave, 2001).