It's Probably True

What are the chances of great minds thinking alike?

Jun 22, 2009, Vol. 14, No. 38 • By DAVID GUASPARI

Seven tosses might be required, so Pascal only considers futures with all seven. But then he goofs by treating every future with precisely three heads and three tails (and one D) as a tie between Harry and Tom--instead of awarding the pot to whoever reached three first. He recognizes that this gives the wrong answer and mistakenly concludes that Fermat's method applies only to two-player games. That so fine a mind should make so "obvious" an error shows how hard it was to grasp what Fermat was doing.
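
To make Fermat's method concrete, here is a minimal sketch in Python, assuming the setup implied above: three players (Harry = H, Tom = T, and a third represented by D), each round giving one player a point, the first to reach three points taking the pot. The second starting position below is purely illustrative and not taken from Pascal's letter.

```python
from itertools import product

TARGET = 3  # points needed to take the pot (assumed from the account above)

def fermat_shares(scores):
    """Fermat's method: enumerate every equally likely completion of the
    maximum possible length and award each one to whoever reaches
    TARGET first -- the step Pascal's tie-splitting skips."""
    players = list(scores)
    # The game must end within this many further rounds: the points still
    # outstanding, minus one for each player who can fall short.
    max_rounds = sum(TARGET - s for s in scores.values()) - (len(players) - 1)
    wins = {p: 0 for p in players}
    for future in product(players, repeat=max_rounds):
        tally = dict(scores)
        for outcome in future:
            tally[outcome] += 1
            if tally[outcome] == TARGET:
                wins[outcome] += 1  # credit the first player to reach three
                break
    total = len(players) ** max_rounds
    return {p: wins[p] / total for p in players}

# Fresh game: seven rounds might be required, 3**7 = 2187 futures,
# and by symmetry each player's share is 1/3.
print(fermat_shares({"H": 0, "T": 0, "D": 0}))

# A hypothetical interrupted position (Harry 2, Tom 1, third player 1):
# at most three more rounds, 27 futures, and the shares are no longer equal.
print(fermat_shares({"H": 2, "T": 1, "D": 1}))
```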

It is always illuminating to watch first-rate minds at work, but it is not obvious how much light Pascal's difficulty throws on the mental revolution needed to grasp what Devlin calls "the very idea of predicting the likelihood of future events." So Pascal's letter really serves the author as a rhetorical hook, not an experimentum crucis. And that's fine. This brief essay cannot and does not try to be a "deep" account of the origins of the concept of probability. (For that, try Hacking.)

Independently of Fermat and Pascal, and at roughly the same time, a London merchant named John Graunt published Natural and Political Observations Made Upon the Bills of Mortality, addressing what seemed to be problems of a different kind. Without benefit of theory, but with considerable ingenuity, Graunt used London's birth and death records to perform what we would now call statistical inference.

For example, deaths attributed to rickets were first recorded in 1634. Had a new disease emerged, or merely a new diagnosis? Graunt, noting that a condition called "liver-grown" (enlarged liver) was much like rickets, reasoned that if rickets diagnoses simply reclassified what had been identified as "liver-grown," the appearance of deaths attributed to rickets would entail a decrease in deaths attributed to liver-grown. In fact, the mortality reported from liver-grown remained steady; rickets was a new disease.

To get that explanation I went back to Graunt's pamphlet. Devlin is an almost unfailingly clear writer, but his summary here was too terse for me to follow. That's understandable. The sweep of mathematical developments to be covered is huge, from the first recognizably modern account of probability (Christiaan Huygens, 1657) to its first mathematically deep results (Jakob Bernoulli's law of large numbers, the first step in putting statistics on a rigorous mathematical footing) to a theory of sampling and measurement error (the bell curve and all that) to the pricing of financial derivatives. And any account must also sketch philosophical terrain that remains hotly contested.

Does probability concern the stable relative frequencies of certain physical events, such as tosses of a coin? (Would that presuppose some physical mechanism, a "propensity," that keeps those frequencies on track? Why do frequencies matter if I'm going to toss a coin--or insure a life--just once?) Or does probability, instead, measure the degree of belief one has, or ought to have, in propositions that may have no statistical nature at all? (Then why does it lead to quantitatively accurate descriptions of the physical world?)

To me--no expert in probability--Devlin's selection seems sound and informative, though his introductory and valedictory rhetoric sometimes overheats. It's a stretch to say that probability theory, which allows us to make rational plans for the future, allows us to "foresee" it. And I dispute the claim that the engineers who build commercial airliners "can calculate with enormous precision the likelihood that a flaw will cause a major crash."

A great deal is known about the reliability of physical devices--rates of manufacturing defects, mean time between failure, etc.--but there exists no quantifiable, scientific account of the reliability of software, which plays an increasing role in the "life-critical" operations of aircraft.

And I'm not sure what to make of the fact that, shortly before September 11, 2001, a system called Site Profiler, built to help military commanders assess terrorist risks, identified the Pentagon as a possible target. That was regarded as interesting, but not taken too seriously. No doubt the program made valid probability calculations about the model it was asked to analyze, but such models are full of guesswork; and without a great deal of evidence for the model's validity, that result should probably be regarded as a fluke.

Devlin acknowledges that but says, without saying enough to convince me, that "from our present perspective . . . the real story is the degree to which the mathematics was able to provide a chillingly accurate assessment of a future risk."

David Guaspari is a writer in Ithaca, New York.