
Numbers of Sides

'Many cheerful facts about the square of the hypotenuse,' and beyond.

Feb 25, 2008, Vol. 13, No. 23 • By DAVID GUASPARI

The Pythagorean Theorem

A 4,000-Year History

by Eli Maor

Princeton, 286 pp., $24.95

The Pythagorean Theorem is perhaps the one mathematical fact an Average Joe might be able to name. It is ancient. Evidence of it can be found on Babylonian clay tablets from 1800 B.C.; versions exist in Indian manuscripts from circa 600 B.C. and in manuscripts from the Han Dynasty. The first rigorous proof is ascribed to Greeks of the school of Pythagoras, in the mid-6th century B.C.

Eli Maor says that the Pythagorean Theorem is "arguably the most frequently used theorem in all of mathematics" and makes that the premise, or McGuffin, for touring a swath of mathematical history. He aims at the general reader, wishing to provide both an intellectual adventure, complete with proofs, and a genial ramble. (An appended chronology notes that soon after "Einstein publishes his general theory of relativity . . . Stanley Jashemski, age nineteen, of Youngstown, Ohio, proposes possibly the shortest known proof of PT.")

He begins, regrettably, with a sin of anachronism--miscasting the original meaning of the Theorem into modern terms. Euclid's famous treatise on geometry presents us with a fact about area: If we draw a square on each side of a right triangle, the area of the square on the hypotenuse is the total of the areas of the squares on the other two sides. Nowadays we are inclined to express this as an equation--a² + b² = c²--in which a, b, and c are numbers representing the lengths of the triangle's sides. Maor treats these as interchangeable formulations, and from the modern point of view they are.
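A small worked instance (the familiar 3-4-5 right triangle, offered here as an illustration rather than drawn from Maor's text) shows the two readings coinciding. The squares drawn on the sides have areas 9, 16, and 25, and

$$9 + 16 = 25, \qquad\text{that is,}\qquad 3^2 + 4^2 = 5^2.$$

Read the first way, this is a statement about areas; read the second, an equation among the numbers measuring the sides.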

But Pythagoras and Euclid would find the modern version unintelligible, for reasons interesting and deep. They distinguished numbers, which are "multitudes" (that can be counted), from lengths, areas, and volumes, which are continuously varying "magnitudes." Multitudes differ essentially from magnitudes. And magnitudes themselves come in different kinds. We may meaningfully compare one line segment to another line segment (is it greater?) but not to a different kind of magnitude, such as a circle or a cube.

It makes sense to total the magnitudes of two squares, but not to total a square with a line. It makes sense to multiply numbers, obtaining another number as a result--three groups of four things amount to 12 things altogether. But it seems merely confused to speak of multiplying one line segment by another, of multiplying by something that is not a multiplicity.

Presented with these careful distinctions, and the rigorous and brilliant Greek science that respected them, a reader might suffer a profitable moment of uncertainty and discomfort, wondering how he could have thought in any other way--uncertain, at least for that moment, how there could be any coherent sense in (or any use for) some mongrel notion of "number" and practice of "algebra" that embraced the counting numbers and magnitudes of all kinds. Yet the massive triumphs of mathematical physics, for one thing, assure us that there can be.

We can't solve that problem here--to begin with, a rigorous mathematical account of the modern notion of number is highly technical--but it is illuminating to consider a simple strategy that holds out hope of dissolving it: In ordinary speech we don't say that the length of a line is "three"--we say that it's "three feet" or "three furlongs" or some such thing. We choose a unit and measure the line as some multiple of the unit--at least, when it comes out exactly.

That suggests a way to unify the distinct quantitative ideas of multitude and magnitude case by case: Given a right triangle, for example, choose a unit of which all three sides are exact multiples. That assigns a number to each side, and those numbers will satisfy a² + b² = c².
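To make the strategy concrete (the figures are illustrative, not Maor's): a right triangle with sides of 2½, 6, and 6½ feet is measured exactly by a half-foot unit, which assigns the sides the numbers 5, 12, and 13, and indeed

$$5^2 + 12^2 = 25 + 144 = 169 = 13^2.$$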

This strategy fails, for an astonishing reason: The innocent assumption that we can always find such a unit is false. For example, there is no unit of which both the sides and the diagonal of a square are exact multiples. The Pythagoreans not only discovered that but proved it. Here shines one particular brilliance of Greek mathematics: that its results are established by proof. And so far as we know, the notion of mathematical proof--of developing an entire body of knowledge by rigorous deduction from a set of first principles--has emerged only once in human history.
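One standard modern reconstruction of that proof (a sketch in present-day notation, not a claim about the Pythagoreans' own presentation): suppose some unit measured the side of a square m times and its diagonal n times, with m and n chosen so as to share no common factor. The Theorem, applied to half the square, gives n² = m² + m², and then

$$n^2 = 2m^2 \;\Rightarrow\; n \text{ is even, say } n = 2k \;\Rightarrow\; 4k^2 = 2m^2 \;\Rightarrow\; m^2 = 2k^2 \;\Rightarrow\; m \text{ is even},$$

so m and n share the factor 2 after all, and no such unit can exist.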

For the Pythagoras cult, this had a tragic aspect. Scholars dispute the precise beliefs of Pythagoras and his followers but agree that they included a mystical conviction that numbers (multitudes) are, in some sense, the fundamental constituents of the world.