
BAD GRADES, GOOD IDEA

11:00 PM, Feb 9, 1997 • By CHESTER E. FINN JR.

All this despite decades of research and experience demonstrating how weak the link is between what goes into schools and what comes out. We have ample evidence that spending more -- or more equally -- does not mean more learning follows. (Real expenditures per pupil have tripled since the mid-1950s and doubled since the mid-1960s. Few claim that school performance has.) We also have clear signals from parochial and charter schools that great learning can take place in marginal facilities with lean budgets. Yet the school establishment prefers to think otherwise. So do the editors of Education Week, read mostly by educators who, the report card blandly notes, "in virtually every state worry about getting enough money to do the job." Rather than devising rigorous measures of efficiency or productivity, this report is content to signal that more is better. (As for its focus on technology spending, perhaps only a cynic would link that indicator to twenty-some full-page ads by IBM, AT&T, and other vendors of hardware and software.)

Two more grades are conferred chiefly on the basis of inputs and insider opinions. A state's mark for "Teaching Quality" hinges on features cherished by teacher educators and unions: whether all teachers are fully certified and their colleges duly accredited; whether the state belongs to something called the Interstate New Teacher Assessment and Support Consortium; how many weeks of practice teaching it requires; and whether it has an "independent professional-standards board" (usually a union pawn). Less than a third of the grade is tied to factors that laymen are apt to think vital, such as the fraction of high school instructors with degrees in the subjects they teach. (Kentucky and Minnesota lead this list with "B"s. Arizona and Hawaii anchor it with "D"s.)

The measure called "School Climate" is better: Only 35 percent is pure inputs (class size and teacher-student ratios). Other parts pertain to governance, regulatory waivers, safety, student and parent "roles," even (public) school choice. Yet only educators were surveyed. Almost half the grade is based on their "perceptions" of things like "student apathy" and "lack of parent involvement." Nobody asked the consumers what they think about their schools' climate -- or the people working in them. (Vermont leads with "B+"; Mississippi, Florida, California, and Utah bring up the rear with "D-"s.)

The remaining grade is well intended. It purports to gauge states' success in instituting "high content standards in English, math, science, and history for all children and assessments that measure whether students meet the standards." But lacking criteria by which to judge which standards are high, the report-card writers settle for their mere existence. And lacking evidence on which tests are rigorous, they settle for certain types of test. Here great deference is paid to educators' ardor for "performance assessments," and states that employ standardized tests are marked down. This approach yields such bizarre results as a worse grade for Virginia, which has nationally acclaimed standards in place and tests under development, than for North Carolina, whose standards are notoriously low. Iowa, which paces the nation in college-entrance scores most years (and tracks its progress via a private testing program), earns a failing grade (along with Wyoming) in this category because it stoutly refuses to impose uniform standards or state tests.

As if this grading scale were not sufficiently slanted toward establishment preferences, the essays accompanying the state reports also reveal strong preferences for a particular, educator-endorsed strategy of school reform: centralized, uniform, and tightly controlled from above. Establishment leaders dub this approach "systemic reform" -- President Clinton's controversial Goals 2000 program embodies the concept -- and contrast it with the market-style strategies they despise: charter schools, private-contract management, and vouchers.

Never mind that there's no evidence of the "systemic" approach's producing better results. It's the strategy that preserves the old ground rules and power relationships, that maintains control and sops up money, and the report-card writers plainly favor it. Indeed -- incredibly -- they find the prospect of "alternative forms of education . . . to replace public schools as we have known them" as worrisome as erosion of "our democratic system and our economic strength." Lynn Olson, senior editor of Education Week, depicts as one of the great "obstacles" to serious reform the existence of "a vocal and determined group of reformers [who believe] that a better way to improve the schools is through competition."

No wonder Kentucky fares better than Arizona in such a grading scheme. The Bluegrass State hews to the party line, while the Grand Canyon State is striking out on its own. This report card is replete with similar misjudgments. Its central failure is not that it papers over the shortcomings of U.S. education -- it's plenty critical. Rather, its fundamental error is that it turns the clock back thirty years to a time when quality was measured in dollars, payrolls, credentials, and elaborate bureaucratic schemes rather than the actual performance of students, schools, and educators.