It's Probably True
What are the chances of great minds thinking alike?
Jun 22, 2009, Vol. 14, No. 38 • By DAVID GUASPARI
The Unfinished Game
When Keith Devlin, a mathematician and skilled popularizer, was invited to write about a world-changing mathematical document, he chose a letter from Blaise Pascal to Pierre de Fermat. Written in 1654, it's part of a famous correspondence usually credited as the birth of the modern notion of probability. That, Devlin says, has "made the world modern" by enabling reasoned predictions about a future that cannot be known with certainty. Probability theory has a massive effect on everyday life--Devlin emphasizes the rational management of risk in, among other things, medicine, engineering, and finance--and requires a new way of thinking about the world.
As Ian Hacking stressed in his remarkable book The Emergence of Probability, great scientists of the era such as Galileo and Newton accepted an ancient philosophical distinction that sharply divided knowledge from opinion, as things differing not in degree but in kind. Knowledge concerned universal and necessary truths and their demonstratively certain consequences: Newton offered a proof that the planets must move in elliptical orbits. Opinions concerned things that could be discussed and debated but not demonstrated (or quantified).
So, Hacking argues, what we call probability was not an object of thought that existed in embryo to become fully developed when some social need or technological opportunity encouraged mathematicians to study it more carefully. Rather, it was something new, born into a world that seemed to have no place for it.
Pascal began the correspondence, it is said, to discuss problems posed to him by a nobleman fond of gambling. The letter that Devlin selected concerns an unsolved puzzle (already old in 1654) called "the problem of points" and seems, at first, a surprising choice since it does not offer Pascal's solution. Rather, it shows Pascal struggling to understand Fermat's and not quite succeeding--even though a modern reader, who has always lived in a mental world saturated with probabilistic thinking, may find Fermat's reasoning straightforward.
The letter provides an occasion to discuss Fermat's solution, illustrates how difficult is the birth of a new idea, and makes good on the book's promise to describe "how mathematics is really done."
The Unfinished Game begins with a leisurely explication of Pascal's letter, interpolating brief biographies of Pascal, Fermat, and other participants in probability's prehistory. Its second half sketches how mathematics that arose in the simple and artificial setting of gambling games--whose underlying mechanisms (coins, dice, cards) are easily understood--was later brought to bear on messy real-world situations in which the underlying mechanisms are quite unknown.
Here is a sample problem of points: Harry and Tom bet on tosses of a fair coin. Harry gets a point for each head, Tom for each tail, and the first to reach three wins the pot. If the game is discontinued partway through, how should the pot be split?
Sometimes the answer is obvious: an even split if they quit when the score is tied. But what if Harry leads two to one? The "obvious" case will be misleading if it fixes attention on the current score, but Pascal and Fermat saw past that to the key insight: What matters is not what has happened but what could happen from now on. They develop that insight in different ways, but with sound arguments.
Devlin describes Fermat's solution--superior, he says, because it is simpler and reaches the heart of the matter. Consider all the ways the game could continue. The two remaining tosses could play out in four different ways: H‑H, H‑T, T‑H, T‑T. Harry wins in the first three, and Tom in the last, so the pot should split 3‑to‑1 in favor of Harry.
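Fermat's count can be sketched in a few lines of Python. This is a modern restatement, not anything in the correspondence; the function name and the device of playing out every future to its maximum length are mine, though the latter is equivalent to Fermat's own trick of listing all completions of the game:

```python
from itertools import product

def split_pot(harry_needs, tom_needs):
    """Count, over all equally likely completions of the game,
    how often each player would be first to reach his target."""
    remaining = harry_needs + tom_needs - 1  # tosses that must settle it
    harry_wins = 0
    for future in product("HT", repeat=remaining):
        h = t = 0
        for toss in future:
            if toss == "H":
                h += 1
            else:
                t += 1
            # Award the game to whoever reaches his target first.
            if h == harry_needs:
                harry_wins += 1
                break
            if t == tom_needs:
                break
    return harry_wins, 2 ** remaining - harry_wins

# Harry leads two to one in a first-to-three game, so he needs one
# more point and Tom needs two.
print(split_pot(1, 2))  # -> (3, 1): split the pot 3-to-1 for Harry
```

Running it reproduces Fermat's answer; with both players needing two points it returns an even split, as the symmetry of the game demands.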
Pascal struggles with the fact that, in practice, the game has just three possible futures (H, T‑H, T‑T), since Harry will win and the game will end if the fourth toss is heads. He tests his understanding of Fermat's analysis in terms of four possible futures by trying it on a more complex game in which Tom, Harry, and Dick toss a three‑sided coin and play a game to three.
Seven tosses might be required, so Pascal considers only futures of all seven. But then he goofs by treating every future with precisely three heads and three tails (and one D) as a tie between Harry and Tom--instead of awarding the pot to whoever reached three first. He recognizes that this gives the wrong answer and mistakenly concludes that Fermat's method applies only to two-player games. That so fine a mind should make so "obvious" an error shows how hard it was to grasp what Fermat was doing.
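Pascal's slip can be checked by brute force. The sketch below is modern, and the H/T/D encoding of the three players is mine, not Pascal's; it enumerates the seven-toss futures he treated as ties and shows that each of them in fact has a unique winner:

```python
from itertools import product
from collections import Counter

def first_to_three(future):
    """Return the symbol that appears three times first,
    or None if nobody reaches three."""
    tally = Counter()
    for toss in future:
        tally[toss] += 1
        if tally[toss] == 3:
            return toss
    return None

# The futures Pascal treated as ties: seven tosses ending with
# three heads (Harry), three tails (Tom), and one D (Dick).
ambiguous = [f for f in product("HTD", repeat=7)
             if Counter(f) == Counter({"H": 3, "T": 3, "D": 1})]
print(len(ambiguous))   # 7!/(3!*3!*1!) = 140 such futures

# Judged by who reached three first, none of them is a tie:
winners = Counter(first_to_three(f) for f in ambiguous)
print(winners)          # 70 wins for H, 70 for T, no ties
```

The 70-70 split follows from the symmetry between heads and tails; the point is that counting these 140 futures as ties, rather than assigning each to its actual winner, is exactly the misstep the letter records.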
It is always illuminating to watch first-rate minds at work, but not obvious how much light Pascal's difficulty throws on the mental revolution needed to grasp what Devlin calls "the very idea of predicting the likelihood of future events." So Pascal's letter really serves the author as a rhetorical hook, not an experimentum crucis. And that's fine. This brief essay cannot and does not try to be a "deep" account of the origins of the concept of probability. (For that, try Hacking.)
Independently of Fermat and Pascal, and at roughly the same time, a London merchant named John Graunt published Natural and Political Observations Made Upon the Bills of Mortality, addressing what seemed to be problems of a different kind. Without benefit of theory, but with considerable ingenuity, Graunt used London's birth and death records to perform what we would now call statistical inference.
For example, deaths attributed to rickets were first recorded in 1634. Had a new disease emerged, or merely a new diagnosis? Graunt, noting that a condition called "liver-grown" (enlarged liver) was much like rickets, reasoned that if rickets diagnoses simply reclassified what had been identified as "liver-grown," the appearance of deaths attributed to rickets would entail a decrease in deaths attributed to liver-grown. In fact, the mortality reported from liver‑grown remained steady; rickets was a new disease.
To get that explanation I went back to Graunt's pamphlet. Devlin is an almost unfailingly clear writer, but his summary here was too terse for me to follow. That's understandable. The sweep of mathematical developments to be covered is huge, from the first recognizably modern account of probability (Christiaan Huygens, 1657) to its first mathematically deep results (Jakob Bernoulli's law of large numbers, the first step in putting statistics on a rigorous mathematical footing) to a theory of sampling and measurement error (the bell curve and all that) to the pricing of financial derivatives. And any account must also sketch philosophical terrain that remains hotly contested.
Does probability concern the stable relative frequencies of certain physical events, such as tosses of a coin? (Would that presuppose some physical mechanism, a "propensity," that keeps those frequencies on track? Why do frequencies matter if I'm going to toss a coin--or insure a life--just once?) Or does probability, instead, measure the degree of belief one has, or ought to have, in propositions that may have no statistical nature at all? (Then why does it lead to quantitatively accurate descriptions of the physical world?)
To me--no expert in probability--Devlin's selection seems sound and informative, though his introductory and valedictory rhetoric sometimes overheats. It's a stretch to say that probability theory, which allows us to make rational plans for the future, allows us to "foresee" it. And I dispute the claim that the engineers who build commercial airliners "can calculate with enormous precision the likelihood that a flaw will cause a major crash."
A great deal is known about the reliability of physical devices--rates of manufacturing defects, mean time between failure, etc.--but there exists no quantifiable, scientific account of the reliability of software, which plays an increasing role in the "life-critical" operations of aircraft.
And I'm not sure what to make of the fact that, shortly before September 11, 2001, a system called Site Profiler, built to help military commanders assess terrorist risks, identified the Pentagon as a possible target. That was regarded as interesting, but not taken too seriously. No doubt the program made valid probability calculations about the model it was asked to analyze, but such models are full of guesswork; and without a great deal of evidence for the model's validity, that result should probably be regarded as a fluke.
Devlin acknowledges that but says, without saying enough to convince me, that "from our present perspective . . . the real story is the degree to which the mathematics was able to provide a chillingly accurate assessment of a future risk."
David Guaspari is a writer in Ithaca, New York.