To the Editors:

Chapman and Harris are right to question the costs in money, opportunities, and civil liberties of many of the policies adopted in response to 9/11. And they are right to call attention to the vulnerability of the human mind to fallacies in statistical reasoning, as in people's overestimation of the dangers posed by air travel, shark attacks, and trace levels of carcinogens. But they are not correct in saying that the responses to 9/11 are consequences of fallacious statistical reasoning. The classic experiments by Paul Slovic, Amos Tversky, and Daniel Kahneman demonstrating those fallacies presuppose a number of conditions that are not met by the events of 9/11.

First, since every event is unique, estimating risk requires one to define some class of events to be treated as equivalent, and then to compare the frequency of those events with the number of opportunities for such events to occur. For a singular event like 9/11, the equivalence class could be defined in many ways. If it is defined as "airplanes crashed into buildings," then the probability of the event multiplied by the number of deaths per event may indeed be smaller than other risks we tolerate. (Even then, one could question C&H's characterization of the casualty rates for 9/11-like events, because if a few parameters had been different - the hour of the day, the time available for people to escape before the towers collapsed, the success of the passenger mutiny over Pennsylvania - the death toll could have been far higher.) But if one defines the class as "acts designed to inflict as many American deaths as possible" - which could include nuclear bombs simultaneously set off in New York, Los Angeles, and Chicago - then the multiplication gives a very different result, and taking expensive measures to prevent such events is not necessarily irrational. Similarly, one gets very different risk estimates for the class "anthrax attacks" (probably small) and the class "biological attacks, including smallpox" (possibly catastrophic).

In general, it is fairly straightforward to define an equivalence class for events with physical definitions such as plane crashes, shark attacks, and lung cancer deaths. But it is not at all straightforward to define the equivalence class for events such as terrorist attacks, which are limited only by the ideology, ingenuity, and resources of the perpetrators. Prior to 9/11, people had little reason to estimate that the equivalence class "terrorist attack" included massive destruction of American lives and landmarks brought about by well-funded suicidal fanatics exploiting hitherto unrecognized vulnerabilities of a technologically advanced democracy. The events of 9/11 provide new information relevant to estimating those unknowns.

Second, a probability estimate is specific to an interval of time in which the causal structure of the world remains unchanged. If the world has changed, all bets are off. If I notice that a nefarious character has just tampered with a slot machine, then ignoring the published odds is not fallacious. Or to take an example from the psychologist Gerd Gigerenzer, it would not be irrational to keep one's child out of a river that had no previous fatalities after hearing that a neighbor's child was attacked there by a crocodile that morning: there was no crocodile in the river before then, but now there is. For this reason one cannot use the rate of major terrorist attacks in, say, the past 10 years to estimate the rate in the next 10 years. Wahhabism and anti-Americanism may be more widespread, nuclear weapons more available, copycats more emboldened, and so on. Because of these uncertainties, anyone who claims to have calculated the mathematically correct probability that a horrendous terrorist attack will take place in the next year would be talking through his hat.

There is a third reason that terrorist attacks cannot be equated with the kinds of risks that people have been shown to treat irrationally. Nonhuman causes of death (such as sharks, airplane part failures, and carcinogens) don't take into account how people react to them. Human causes of death (such as terrorists) do. Bin Laden had no negotiable demands, but thought that American society was so decadent and spiritually bankrupt that a few easily inflicted humiliating blows would lead to its collapse. A public response of defiance and solidarity, and the implementation of extensive preventive security measures, could change such calculations in the minds of future terrorists. Similarly, if we calibrated our response to the anthrax attacks by cost-benefit comparisons to other risks, future bioterrorists could be emboldened to inflict exactly as many deaths as we decided we could endure. But pulling out all the stops to combat this new kind of threat, even if seemingly irrational on narrow actuarial grounds in the short run, could deter perpetrators in the long run, since they would have to factor this determination into their own calculations. Another way of putting it is that dealing with terrorists is a problem in game theory, not just a problem in risk estimation.

I don't disagree with Chapman and Harris's opposition to some of the measures taken by the Bush administration and other authorities. But it is not correct to call the strong response to 9/11 a symptom of fallacious statistical reasoning or human cognitive limitations.

Steven Pinker
Peter de Florez Professor of Psychology
Department of Brain and Cognitive Sciences