One Saturday morning this month, a quarter million kids or more will slump their way into the fluorescent tomb of a high school classroom, slide into the seat of a flimsy polypropylene combo chair-desk, and then, with clammy palms dampening the shafts of perfectly sharpened number two pencils, they will take the SAT. They will carefully mark only one answer for each question, as instructed, and they will make sure to fill the entire circle darkly and completely. They will not make any stray marks on their answer sheet. If they erase, they will do so completely, because incomplete erasures may be scored as intended answers. They will not open their test book until the supervisor tells them to do so, and if they finish before time is called, they will not turn to any other section of the test. And over the next three hours they will determine the course of the rest of their lives.

At least that's what a lot of them will think they're doing. They'll be wrong, of course--dozens of people have gone on to live happy and healthy lives after bombing the SAT--but they won't know it because an oddly large number of powerful forces in American society have combined to elevate the SAT to unlikely heights of influence and to impute to it unimaginable powers. You'll hear the SAT can wreck a person's future, even if only temporarily, or salvage a new future from a misspent past. The SAT can enforce class hierarchies or break them open; it unfairly allocates society's spoils and sorts the population into haves and have-nots, or it can unearth intellectual gifts that our nation's atrocious high schools have managed to keep buried. It is a tool of understanding, a cynical hoax, a triumph of social science, a jackboot on the neck of the disadvantaged. But rarely is it just a test.

Even the College Board, which administers the SAT, and the Educational Testing Service, which designs it each year, are sheepish about using the word. The SAT was originally an acronym for Scholastic Aptitude Test. When critics objected to the word "aptitude," for reasons we'll consider in a moment, SAT came to stand for Scholastic Assessment Test. Marketers soon realized that test and assessment have pretty much the same meaning, making "SAT" a kind of solecism, one of those repetitive redundancies that repeats itself--bad form for a test measuring verbal ability. So they gave up trying to make an acronym altogether. "Assessment" was dropped, and so was "test," and "scholastic" too. Today the SAT is officially just the SAT; the letters don't stand for anything, as if the test-makers were too timid to declare what they're up to.

And who can blame them? Critics of the SAT are eager to remind you that its intellectual genealogy traces back to the intelligence tests that eugenicists, racial theorists, and other creepy types promoted in the early 20th century as a way of purifying the gene pool.

"Racists worked hard to design a test that would confirm their racism, and they succeeded," says Robert Schaeffer of FairTest, an activist organization that has declared war on all standardized tests, especially the SAT. A large number of people in higher education share his disdain, both for the test itself and for the uses to which it is put, usually by themselves. Any gathering of college admissions professionals--deans, school counselors, private coaches--swells before long with a chorus of complaint about the SAT's deficiencies, even though most of them are bound, by habit, custom, or popular expectation, to use the test in their everyday work.

Now they're beginning to rebel, and the hostility grows more ferocious every year. It's fair to say the tide of elite opinion now runs solidly against the use of the SAT in college admissions. Last fall, the National Association for College Admission Counseling (NACAC) released a report calling on its members at last to act on their skepticism by taking steps to decommission the test for use at their schools. When the report was presented at the group's convention last September, the only complaints were that it didn't go far enough in condemning the test. "It's a lousy test," one NACAC member said heatedly on the convention floor. "It's destructive of what all of us here are trying to do."

This spring three more selective and well-known schools--Fairfield University, Connecticut College, and Sewanee: The University of the South--took NACAC's advice, announcing that they would adopt a "test optional" admissions policy, telling applicants they no longer were required to submit SAT scores but were free to submit them if they wished. The schools join dozens of well-regarded peers--Bates, Bowdoin, Hamilton, Holy Cross, and Wake Forest among them--in striking a blow against the SAT, and in being very proud of themselves for doing so.

Wake Forest's president, Nathan O. Hatch, announced his school's SAT policy in a much-discussed op-ed in the Washington Post. "By opening doors even wider to qualified students from all backgrounds and circumstances," he wrote, "we believe we are sending a powerful message of inclusion and advocating for democracy of access to higher education."

Hatch noted that on average, richer students score higher on the SAT than poorer students. He did not note that on average, Asian Americans perform better than whites on standardized tests, whites better than Hispanics, Hispanics better than African Americans, and, at least in math, men better than women. Any such gap, President Hatch said, is conclusive evidence of some crippling defect in the SAT--and provides sufficient reason to eliminate it from college admissions.

Like so many widely shared beliefs in the world of higher education, this argument is seldom challenged, even though it's a relatively novel view. The "achievement gaps" in SAT scores were evident 40 years ago, yet most liberal educators defended standardized tests. In their book The Academic Revolution, published in 1968, the sociologists Christopher Jencks and David Riesman famously (famously for sociologists) expressed what was then still the majority view.

"Those who look askance at testing should not rest their case on the simple notion that tests are 'unfair to the poor,'" they wrote. "Life is unfair to the poor. Tests merely measure the results."

Jencks and Riesman weren't fatalists about this state of affairs; they thought remedial programs in primary and secondary schools might help "close the gap," and "preventive measures" rectifying income inequality would be even more successful. Still, the gaps themselves weren't reason enough to abandon the tests or the university's interest in the aptitude that the SAT measured. Do you fire your doctor because you don't like his diagnosis?

Riesman and Jencks reminded their readers how it was that standardized tests like the SAT became essential to college admissions in the first place. Notwithstanding its ancestral ties to racism and eugenics, the SAT was introduced by progressives to accomplish the same goals that our contemporary progressives now say it impedes: democratizing higher education, uplifting the poor, ending the class spoils system, and making merit rather than accidents of birth the measure of success.

The irony is hard to miss. From the progressives' panacea in the mid-20th century to the progressives' bogeyman in the early 21st, the evolution of the SAT is a story about our shifting notions of merit, democracy, populism, the life of the mind, and what we expect from higher education--an industry into which the country pours many billions of dollars a year. In a way kids are right to be jittery. The SAT is more than a test, and always has been. If it's being condemned today, this is a grisly instance of the revolution devouring its children.

The SAT first became popular in the 1930s, when one side won an argument and the other side lost. The argument was over how college administrators should choose the students who would attend their schools--and who would, by extension, enter the country's leadership class in politics, business, and religion, at a time when fewer than 2 percent of American adults held post-secondary degrees. In the 19th century, those hoping to attend college submitted themselves to interviews with school faculty or took essay exams the faculty concocted. In 1900 a consortium of East Coast colleges formed the College Entrance Examination Board, the forerunner of today's College Board, to regulate the chaos. The board wrote and disseminated "achievement tests" as a way of standardizing admissions from one school to another. The tests assessed knowledge of English grammar and literature, American and ancient history, Latin and classical Greek--the fundamentals of the prep school curriculum, and the things that every educated gentleman was presumed to know. A high score virtually guaranteed college admission.

The system of achievement tests worked well for a while. But before long the bluebloods at Columbia, Harvard, and elsewhere were alarmed to discover that a disproportionate number of high scorers were not People Like Us. Many of them, indeed, were Jews. As Jerome Karabel tells the story in his magisterial history of college admissions, The Chosen, administrators quickly adapted. Personal interviews became common as a way of screening applicants. And the criteria for admissions were mysteriously enlarged. Admissions officers claimed to weigh ineffable qualities like "leadership," "breeding," "character," and "well-roundedness."

Karabel reprints a typology of applicants that Harvard admissions officers developed privately in the 1920s. Among the types:

Cross-country style--steady man who plugs and plugs and plugs, won't quit when most others would

Boondocker--unsophisticated rural background

Taconic--culturally depressed background, low-income

Mr. School--significant extracurricular and perhaps (but not necessarily) athletic participation, plus excellent academic record.

You can guess which types Harvard preferred, no matter how well they did on the achievement tests.

Progressives of the era knew that these ineffable criteria were just a dodge--a high-minded way of keeping the riffraff out, dividing the applicant pool between our kind and everyone else. One of those progressives, James B. Conant, was appointed president of Harvard in 1933. A product of a shabby-genteel Yankee family himself, Conant was the chief theorist and propagandist for the "meritocratic ideology," as Karabel calls it, that became the declared standard for selective college admissions in mid-century America: Access to an elite education should be based on academic ability rather than wealth or family background. Conant's view wasn't really an ideology so much as an ideal--one violated almost as often as it was honored, as today's progressive critics never tire of pointing out.

But still it was an ideal, and even often-ignored ideals have the power to shape events. Conant despised inherited privilege and the stratagems used to sustain it. (A pet cause of his was the 100 percent inheritance tax.) He was a scientist by training, a believer that reality could be quantified. And he was a democrat. He assumed that cognitive ability--the thing that made a man do well in school and, in time, might make him economically productive, a solid citizen, even perhaps a leader--could be identified and measured. He assumed that this ability, unlike economic power, was distributed equally across the population. His duty was to seek it out, and well-wrought tests would help him do it.

But not the tests that were being used in college admission, at Harvard and elsewhere. Tests of knowledge--achievement tests--by their very design worked against the meritocratic ideal, because they favored the members of one class over another. Who but the sons of privilege would do well on tests drawn from the curriculums of prep schools where the sons of privilege were taught? Far more promising, Conant believed, was the test of scholastic aptitude being developed by the College Board. The SAT claimed to measure not a grasp of facts but the acuteness of intelligence. It leveled the advantage that elite high schools gave their students by measuring the capacity to learn rather than learning itself. In time, Conant thought, the SAT could become a means to reward innate talent and break down the barriers to admission that wealth and privilege had put up. A favorite phrase was "diamonds in the rough," used to describe the jewel-like abilities lurking out there in the high schools of the vast Republic, in the intelligent kid hidden away in a bad school, or the bright boy with bad grades.

The SAT was built for mass use. It was based on the multiple-choice tests the Army had administered to draftees in World War I; those tests, in turn, were based on the now-infamous IQ tests developed, with racist intent, a generation before. The Army draftees of 1917 made for a human jambalaya unlike any the country had ever seen. The draft had roped in 2 million farm boys, city boys, math whizzes, boulevardiers, dullards, bookworms, sharpies, poets, roués--every human type, all susceptible to a single test. The Army thought a mass testing program--the largest ever undertaken--would identify their abilities, or the lack of them, and channel the men into the military tasks to which they were best suited. Whatever their actual fairness and accuracy, the tests were judged useful by the officers who relied on them, and they were seized on by businessmen and educators in the era of "scientific management" that followed the war.

By the time Conant took it up, the SAT had been expanded into two sections, verbal and mathematical. The two sections allowed colleges room to choose which kind of student they wanted to attract, word men or numbers men. The tests were refined year to year, and with each revision, said the College Board, the similarities to the earlier IQ tests faded. The first SATs were pitiless with time: 97 minutes to answer 315 questions. The questions were no pillow fight either. One early set of problems laid out an artificial language, complete with grammatical rules and vocabulary, and required the test taker to translate English sentences into it. The time limits were eventually loosened, and "puzzle-solving" problems were replaced by reading comprehension questions--which seemed a purer test of verbal facility, and less susceptible to coachable tricks.

Conant's embrace of the SAT gave it a kind of informal certification among American educators, who even then were in thrall to Harvard College. After World War II, the test became unavoidable. The GI Bill flooded college admissions offices with applicants. The College Board formed the Educational Testing Service (ETS) to develop the test, while the board itself continued to market and administer it. Together they greatly eased the burden on admissions deans. They offered the test nationwide on common dates under uniform, closely monitored conditions and furnished easily understandable scores. The SAT had the reassuring look of a scientific enterprise; ETS hired superb statisticians who produced a gusher of data testifying to the test's reliability and its yearly molting of imperfections. And the test was a bargain, at least for the schools: Then as now, the College Board collected its fee from the kids who were required to take the test, not from the colleges that required them to take it.

Most appealing, though, was the Conant ideal: The SAT was thought to democratize and objectify what would otherwise have been a chaotic and arbitrary process of selection, open to favoritism and corruption. It offered beleaguered admissions officers a way to assess applicants that was not only accurate but fair, untainted by class or wealth. And it had almost no competition. In 1959, test-writers from the University of Iowa created the ACT (American College Testing), which came to be seen as a rival to the SAT. The ACT more closely resembled an achievement test, tied to high school curriculums in Iowa, and for the next 40 years it did little to dent the popularity of the SAT, particularly among private colleges and in the East.

The triumph of the SAT was complete in 1968, when the University of California, with its nine campuses and tens of thousands of students, made it a requirement for admission for most applicants. This solidified the test's place in popular culture. It was a symbol of the American way of success, the level playing field, the belief that prosperity was within the reach of everyone regardless of birth. And more than a symbol: Self-appointed social observers--nice job, by the way--ascribed to the test miraculous powers and mythic importance. The liberal journalist Nicholas Lemann, who wrote a comprehensive history of the SAT, called it "the basic mechanism for sorting the American population." And, he took care to add, he wasn't alone in this view: "It is almost universally taken to be today a means of deciding who would reap America's rich material rewards."

This is an overblown way of describing a real trend. With high school education nearly universal, a college degree became an increasingly important marker of talent and ability, and a degree was hard to come by if you didn't take the SAT. The trend didn't make the test the "gatekeeper" it was often said to be. But the exaggerations served the interests of everyone involved--except the test taker, who felt more acutely than ever the pressure of a one-shot chance at success. The College Board and ETS enjoyed the inflated view because it made them seem indispensable, even as they protested halfheartedly that test scores should be only one factor among many considered by admissions officers. The officers themselves were reassured by the authority and objectivity the tests supposedly provided. A wildly growing test-preparation industry fed off the students' fear of failure. And journalists and conspiracy mongers were delighted to discover, in the College Board-ETS combine, an ominous new cabal of white guys rigging the game of life to their own advantage.

Exposés of the SAT became a commonplace of left-wing journalism. As Riesman and Jencks had anticipated, the attacks came along lines of race and class. "For all its sermonizing about equal opportunity," wrote another liberal journalist, David Owen, in another angry book-length exposé called None of the Above (1985), "ETS is the powerful servant of the privileged." Coming from the left the attacks seemed odd, directed as they were against a test that only a generation earlier had been installed as the quintessential liberal reform. But they nicely illustrated a larger rupture in the country's cultural politics. The old progressivism, with its meritocratic ideal, was being abandoned by the new progressives, who saw the meritocratic ideal as at best a delusion and, at worst and more likely, a scam.

Their evidence was the achievement gap. While the College Board and ETS said they worked hard to ensure that everyone who took the SAT took the same test, not everyone got the same score. And when the scores were grouped by the race, class, or sex of the test taker--as opposed to his hair color, religion, or shoe size--they began to show the pattern mentioned earlier: Asians before whites, whites before Hispanics, Hispanics before blacks, rich before poor, and men before women, except in the sections where women came before men.

You could react to this pattern in one of three ways. Option one is to ask what relevance group averages have in a country, and an educational system, where merit is supposed to attach to individuals. Option two is to note that the data reveal that some test takers--owing to their schools, their family lives, their neighborhoods, the social services they had access to, the expectations of parents and friends--had been less well prepared for college than other test takers and, as a result, had a slimmer chance of doing well at some colleges than at others. Option three is to insist that something is wrong with the test.

The activists chose number three. They wanted to fire the doctor.

From here the story of the SAT becomes the familiar one of an American institution struggling to make itself acceptable to activists and enthusiasts who will never, under any circumstances, find the institution acceptable. It's hard to feel sorry for entities as flush and bureaucratic as ETS and the College Board. Still there's pathos in the strenuous and futile efforts they have made to appease their critics. The case against them has never been strong. Instances of actual bias within the test itself are hard to come by. The most famous example, cited in nearly every extended critique of the SAT, was a so-called analogy question involving, of all things, rowing. The question was part of a test of verbal reasoning, and the form it took required the test taker to relate pairs of words to each other: "a runner is to a marathon as a _____ is to a _____." Five choices were given. The correct answer was "oarsman/regatta." The question's bias against poorer kids is pretty clear: Anyone raised around the yacht club had an automatic advantage in getting the answer right.

The mental image of ETS test designers lounging about in ascots and double-breasted blazers has proved too amusing to resist; critics still like to cite the regatta question as a sign of the obtuseness of ETS and the College Board, even though it was dropped from the test nearly 40 years ago. Indeed, all analogy questions have been eliminated, after studies indicated that test takers from higher socioeconomic brackets routinely performed better on those questions than poorer kids. Since 1970, ETS has built an intricate bureaucratic apparatus to try to cleanse each question--or "item," as a test question is called--of anything offensive or unfair.

Before a test is put together, according to ETS guidelines, four separate reviewers examine every item for efficacy and bias. Then the item goes to a special "sensitivity reviewer" who also scrubs it for any phrasing or inference that might offend members of identity groups. If a sensitivity reviewer objects to an item, the writer responsible for it can appeal the objection, and the case goes to another team of sensitivity reviewers for adjudication. And finally, once the test has been taken, the answers given by all test takers to each question are tabulated to see if any item "tended to cause inordinate differences [in the number of correct answers] between people in different groups." If it did, "it is discarded or revised and reviewed again."

At first, in the 1970s, "different groups" was defined by socioeconomic status, sex, and race, but the list has lengthened over the decades to include ethnicity and much else. Subgroups today include "older people," people with disabilities, and "people who are bisexual, gay, lesbian, or transgendered." The sensitivity guidelines are quite detailed. Word problems on math sections need to be checked for "unnecessarily difficult language" that might trip up a math whiz who's not a native English speaker. Charts and graphs are forbidden because they are difficult to reproduce in Braille. The phrase "hearing impaired," to describe people whose hearing is impaired, is discouraged in favor of "deaf and hard of hearing." Test writers must steer clear of the words "normal" and "abnormal." "Hispanic" should not be used as a noun, and neither should "blind"; "black" can be used only as an adjective. "Penthouse," "polo," and other "words generally associated with wealthier social classes" are likewise off-limits; regatta, too, needless to say, along with any mention of luxuries or pricey financial instruments like "junk bonds." "Elderly" is to be avoided in describing people who are elderly. "America" can't be used to describe the United States. "In general, avoid using we unless the people included in the term are specified. The use of an undefined we implies an underlying assumption of unity that is often counter to reality." Point taken.

Test writers are equally rule bound in their treatment of subject matter. Items cannot deal with military topics, sports, religion, hunting, evolution, or any other material that might be "upsetting." In fact, violence is out altogether, with some exceptions that might upset the vegan community. "For example, it is acceptable to discuss the food chain even though animals are depicted eating other animals."

The guidelines wind ever tighter. The number of mentions of men must be balanced by the number of mentions of women. The guidelines stipulate that "20 percent of the items that mention people represent African American people, Asian American people, Latino American people, and/or Native American people." If the item doesn't allow the test writer to identify people explicitly by race or ethnicity, he should use "place holder names" commonly associated with "various groups": Latisha or Juan or Matsuko. But sometimes a place holder name isn't good enough, for the status of the men or women mentioned in an item must be balanced too: If you mention Albert Einstein in an item, according to the guidelines, you will not achieve gender balance simply by mentioning some anonymous "Emily" or "Imani" in the next item. You need to mention a woman of equivalent status to Einstein. Marie Curie, maybe. Sally Ride. I don't know.

Imagine tiptoeing through this minefield eight hours a day! Oddly the record shows not a single instance when an ETS test writer snapped and started spraying his coworkers with an AK-47. Clearly these are committed professionals, and their good faith is hard to question. But the pressure to satisfy critics is unrelenting. Long before the Americans with Disabilities Act, for instance, test preparers had accommodated people with physical disabilities: kids who couldn't use pencils or keyboards could bring people to fill in the answer sheets for them, for instance, and blind test takers were given tests in Braille or furnished proctors who would read the test aloud. Then the question of "learning disabilities" arrived.

Beginning in the 1990s, activists demanded that students with Attention Deficit Hyperactivity Disorder (ADHD) be granted extra time to complete the test, to place them on a level playing field with other, less distracted test takers. The companies readily agreed, but then were surprised at what the data revealed a few years later: If you give some kids more time to take a test, they will get higher scores than kids who didn't get more time to take the test. So the College Board decided to "flag" these scores as a kind of caveat emptor, to alert admissions officers that the test taker, though claiming a learning disability, might have been given an unfair advantage.

Activists sued for discrimination. By flagging the scores, they said, ETS and the College Board were "stigmatizing" applicants with ADHD. After a half-hearted defense, the companies conceded the point, and flagging was discontinued. Today a kid claiming ADHD can take as much as an extra hour to finish the test.

The generous and seemingly endless concessions drew the most public attention in 1995. SAT sections are graded on a scale of 200 to 800, and average scores had been sinking since the early 1940s. The verbal score had fallen from 501 in 1941 to 425 by 1990; the math score had dropped from 502 to 475 during the same period. One possible reaction to this sorry turn might have been a redoubled effort to improve the quality of secondary education, to raise the scores of kids back to the level of their grandparents. Instead, ETS "recentered" the grading system, so that a 425 on the old scale became a 501 on the new scale. Everyone was automatically smarter. It's not quite like Lake Wobegon, where all the kids are above average. But the recentering guaranteed that at least as many kids are above average today as 70 years ago.

Many of these adjustments were necessary for technical reasons or reasons of fairness, and certainly they're defensible for reasons of public perception. But in making them, the companies also inadvertently reinforced the premise of their critics. Scholastic aptitude was made to seem an arbitrary concept without grounding in anything real, a fiction subject to endless revision--something that suited the needs of one generation but was now entirely outmoded. The companies seemed to concede the point when they agreed, in 1994, to drop the word "aptitude" from the name of their test.

It was a remarkable concession that would have floored earlier progressives like James Conant or Riesman and Jencks. They never entertained the idea that aptitude was, as one critic put it not long ago, a "tool of repression." They might have asked how it was that first-generation Asian immigrant females routinely outperformed native-born white males on a test that native-born white males had supposedly rigged for their own advantage. Those white males weren't as smart as everybody thought.

So if tests no longer measure aptitude, what are they for? The companies have tried to keep their claims modest. Always they have disavowed any grand intention of sorting the American population on the basis of academic ability. Indeed, the only people claiming that the SAT was intended to rank people according to their worth as members of society were SAT critics like the journalists Lemann and Owen, who of course deplored the idea. The grand manifesto of FairTest, the anti-testing activist organization, is titled: "Test Scores Do Not Equal Merit." They don't, of course, but who said they did? Not ETS, not the College Board, and not the dwindling number of disinterested observers who defend the central role of standardized tests in college admissions. The companies have themselves published books and studies attacking what they called the "myth of the single yardstick"--the notion that "there can be one and only one primary ordering of people as 'best qualified.' "

Instead, the College Board says that the SAT does nothing more than measure "developed critical thinking and reasoning skills needed for success in college." To judge whether it succeeds in that task, thousands of statistical and psychometric studies have been done. (The SAT is easily the most pawed-over piece of academic work product in history.) The consensus is that SAT scores do a fairly good job of predicting what kind of grades the student will get in his freshman year--one measure of "success in college." If you consider both his SAT scores and his high school grade point average, you have an even better predictor of how well he'll do in his first year. These findings alone are enough to establish SAT scores as a useful piece of information for admissions officers trying to figure out if an applicant is well suited to their college.

Yet because the achievement gaps persist--no amount of sensitivity tinkering has been able to close them--the calls for downgrading or eliminating the SAT persist. And it is undeniable that wealthier kids and those whose parents went to college get better SAT scores. The law professor Lani Guinier says the SAT should therefore simply be called a "wealth test." Another activist says the test measures nothing but "the size of the student's house." "The only thing that SAT predicts well now is socioeconomic status," one U.C. dean told the L.A. Times.

The problem for SAT critics is that the gaps show up far beyond SAT scores. Rebecca Zwick, an education professor at the University of California at Santa Barbara who collected the quotations above from Guinier and the others, writes flatly that it's "impossible to find a measure of academic achievement that is unrelated to family income." Some reformers have moved 180 degrees from Conant and suggest that scores on "achievement tests"--the kind Conant thought unfairly benefited rich boys--should replace SAT scores as the main criterion for admission. Others suggest using high school grade-point-averages alone. Some suggest using a composite number--compiled from high school grades, personal interviews, writing tests, the difficulty of high school course load, and extent of extracurricular activities--to replace the SAT in a school's deliberations. Test-optional colleges are all using one version or another of these alternatives.

Yet each of these markers correlates with family income as much as, and in some cases more than, the SAT. Kids who get high "aptitude" scores also get high "achievement" scores. While grades are a good predictor of college success for middle- and upper-income kids, their validity fades with kids from lower-income backgrounds. The rankings of kids look much the same whether they're measured by aptitude tests, achievement tests, high school grades, writing tests, or the difficulty of their course loads. By the numbers, the bias toward the well-to-do is hard to budge.

Their frustration has pushed progressive educators to extremities that would have been unthinkable even a generation ago. One statistician, writing in the Harvard Educational Review, has suggested that a "corrective scoring method" be applied to the SAT. Not only do different groups perform differently on the SAT, but groups show differences in the kinds of SAT questions they do well on. So his R-SAT grading system would count only the questions on which lower-scoring groups do comparatively well. Ta-da: "The R-SAT," he wrote, "shows an increase in SAT verbal scores by as much as 200 or 300 points for individual minority test takers."

As Karabel showed, when the test scores didn't work out the way the bluebloods wanted back in the old days, they did the obvious thing: They played down the numbers. They went looking for personal qualities they could use in place of aptitude. And of course they found what they were looking for, in hazy notions like "good breeding," "manliness," "All-Americanness"--considerations that would yield the kind of class the old boys were comfortable with, one with fewer undesirable elements. Nowadays, with standardized tests yielding a disproportionate number of Asians and wealthy whites, progressives resort to an updated version of the old blueblood technique. Only now they're using social science to lend an air of statistical precision.

The marketers at the College Board have noticed the trend. In keeping with their finely honed instinct for survival, the testing companies are trying to lead the parade before they get trampled. Last fall College Board researchers announced that they would try to develop standardized tests to measure "noncognitive skills"--attributes beyond the merely intellectual--that could be linked to success in college, tested for, and quantified without resulting in a scoring gap. And it goes without saying that if the College Board could develop such tests, it would be happy to market them to new generations of college goers. "If You Can't Beat 'Em, Join 'Em," read the headline in the trade publication Inside Higher Ed.

The College Board's effort is based largely on work already done by psychologists at Michigan State University, who have devised a "12-dimension taxonomy" on which to test students. "Knowledge and mastery of general principles" is only one of the 12. The others include "social responsibility," "interpersonal skills," and "appreciation for diversity." Unfortunately, so far none of their computations has been able to predict college success with anywhere near the reliability of SAT scores. Sliced another way, however, the results are quite pleasing. The College Board calculated that if the 12-dimension scores were used in college admissions at a selective college, the percentage of black students and Hispanic students admitted to the school would more than double. On the other hand, the percentage of Asians would drop by one-third. But who's counting?

An even more ambitious effort is known as the "Rainbow Project," developed by a psychologist named Robert Sternberg, formerly of Yale and now the dean of arts and sciences at Tufts University. Sternberg says he doesn't want to do away with the SAT altogether; he admits its predictive value. But he is also candidly trying to find a way to admit more black and Hispanic applicants to selective colleges, and to do it with some kind of quantifiable support. His goal, he says, is "the creation of standardized test measures that reduce the different outcomes between different groups as much as possible in a way that still maintains test validity." It's a kind of reverse engineering: He knows the results he wants, he just needs the right test to give them to him.

Sternberg's method is pretty straightforward. He's taken the tender-hearted and almost-true bit of grandmotherly wisdom "Everyone is good at something" and stretched it to the breaking point: Everyone is good at something that will make him a successful college student. This is the premise of his "triarchic theory of intelligence." Sternberg's thinking is inspired by the well-known work of the Harvard psychologist Howard Gardner, who in 1983 claimed to have identified seven kinds of human intelligence, from bodily-kinesthetic intelligence to intrapersonal intelligence; recently he discovered another intelligence, for a total of eight, though more intelligences may be on the way. Sternberg, more modest, has contented himself with only three--a trio of skills that, when quantified, should be as useful and impressive to college admissions officers as any SAT score.

Sternberg's definitions are highly abstract. Practical intelligence involves "skills used to implement, apply or put into practice ideas in real-world contexts." Creative intelligence involves "skills used to create, invent, discover, imagine, suppose, or hypothesize." Analytical intelligence is closer to more conventional notions of intelligence, and to the aptitude that the SAT has usually been thought to measure. It involves "skills used to analyze, evaluate, judge, or compare and contrast."

To measure his intelligences Sternberg has developed a combination of multiple-choice tests, which resemble the SAT, and "performance measures," which do not. Together the testing session can last four hours. You can see why. After taking the multiple-choice test, the student is handed five New Yorker cartoons and told to write a fresh caption for each. "Trained judges" (Sternberg's phrase) grade the captions on a five-point scale, depending on how original, clever, funny, and "task-appropriate" they are. Then the student is asked to write two stories under such provocative titles as "The Octopus's Sneakers" and "Beyond the Edge." Again trained judges are standing by to rank the responses with a number (one to five).

Then: straight to video. The student watches seven brief vignettes about an everyday problem and chooses one out of six options for how to handle it. His answer is judged, from one to seven, on how well it would solve the problem. Then come two written questionnaires, one measuring "common sense" and another rating reactions to "college life." (Example: How would you deal with a difficult roommate?)

Finally, there's biodata, in which the student grades himself on how hard he studies, how hard he plays, how involved he is in his school. Biodata plays a crucial role in almost all noncognitive tests, as a replacement for, or a supplement to, more conventional assessments of aptitude like the SAT. A typical example comes from a test developed at the University of Maryland. Students are asked to rate how strongly they agree with various statements about themselves. "Once I start something, I finish it." "I want a chance to prove myself academically." "When I believe strongly in something, I act on it." The higher the score, the more desirable the kid.

It's odd--more, it's hard to believe--that noncognitive tests such as these are being floated to rival a test, the SAT, which is routinely deemed defective because it is too subjective, too coachable, too imprecise, too clumsy to administer, and too dependent on cultural conventions. What, after all, could be more subjective than rating the humor in the caption of a New Yorker cartoon, assuming you could find any? What's more coachable than asking a kid whether he finishes what he starts? (If he leaves the question blank, you've got your answer.) It only makes sense when you remember that the point of the tests is not their objectivity or precision but the scores that they elicit, particularly from individuals lashed together by race, sex, or income level. Sternberg says he can claim some success in this regard. "Although the group differences in the tests were not reduced to zero," he writes, "the tests did substantially attenuate group differences relative to other measures such as the SAT." Interest in Sternberg's method among admissions officers has been intense.

Anyone who drifts unprepared into psychometric literature will be surprised to discover the platitudes that rise like air sanitizer from even the most impenetrable studies. Huge stretches of Sternberg's work are virtually unintelligible to a layman ("A chi-squared test for differences between sample variance and population variance suggests that variance for the sample for these items . . ."). And then suddenly you trip over a sentence that might have come from The Uncollected Polonius: "Success in life requires one not only to analyze one's own ideas as well as the ideas of others, but also to generate ideas and persuade other people of their value." If Polonius had a master's in sociology: "A balance of skills is needed to adapt to, shape, and select environments."

But platitudes--truisms--are everywhere in the anti-SAT literature. Truisms lull the reader so reassuringly that you might miss other stuff that isn't true at all. Martha Allman, Wake Forest's admissions director, announced the school's decision to drop its SAT requirement with self-flattering banalities. "After months of discussion and study and reflection," she said, "we decided it was time to stand up on the side of fairness." Meanwhile, the material Wake Forest issued to support its new test-optional policy was a series of statements that are demonstrably untrue: that SATs aren't good predictors of college success, that they're merely an indicator of socioeconomic status rather than aptitude, that they're a barrier to college for "many well-qualified students," that they're crippled with cultural and racial bias, and so on. Each of these is contradicted by mountains of data and common sense.

The banality and misstatement obscure one truth so obvious that hardly anyone mentions it: If test-optional schools like Wake Forest truly want to admit those "well-qualified students" with low SAT scores, they could just choose to admit them. Admissions officers have access to a vast, multimillion-dollar industry of direct mailers and enrollment management consultants that do nothing all day but help schools find the kinds of applicants they want. And the school could admit them without depriving itself, as a matter of policy, of the valuable information that the SAT provides.

Instead the war on the SAT continues and intensifies. But why?

In addition to the obvious political reasons, there are compelling institutional ones as well. The deans may be progressives, but they're also bureaucrats. A test-optional admissions policy boosts department budgets and staff, since the personal interviews and graded essays used in place of test scores require much more manpower. It also gives the school a boost in the infamous college rankings published each year by U.S. News and World Report. When a school no longer requires the SAT, the number of applications typically increases, but the number of available slots stays the same. So the percentage of acceptances drops. The school suddenly looks more selective, pushing it up the U.S. News charts. The incoming class's "average SAT score"--another important measure for U.S. News--rises too, since low scorers usually don't submit their scores, leaving the average to be calculated only from the high-scoring applicants.

Best of all, without SAT scores, a dean's discretion is greatly enlarged. He is released from the tyranny of objective numbers. For the progressive admissions director, aching to make his school a gorgeous mosaic of multiculturalism, the SAT must chafe like a manacle. It offers a datum with which outsiders can second-guess his judgment: Why'd you accept Billy with a 1200 SAT and deny Jane with a 1500? He'll face no more questions like that if he can persuade his school to drop the SAT.

Inevitably, I suppose, the demotion of the SAT and what it represents begins to carry a whiff of the same postmodernism that has overtaken the humanities in most elite colleges. We shouldn't be surprised if it's seeped through the ventilators and under the door jambs into the admissions office next door. An attack on the traditional notion of aptitude is also an attack on one long-standing and widely accepted notion of what higher education is for, as a place where academic excellence is pursued both for its own sake and as a preparation for life. If higher ed is not defined this way it's hard to see what it will be defined by--beyond the whims of school presidents and progressive deans. But maybe that's the whole idea.

Andrew Ferguson is a senior editor at THE WEEKLY STANDARD.