11:00 PM, Feb 9, 1997 • By CHESTER E. FINN JR.

YOU DECIDE: Which is doing a better job of public education, Arizona or Kentucky? Similar numbers of children attend school in the two states. About a quarter of them live in single-parent families. Arizona has more minority youngsters, but Kentucky has more below the poverty line. On the National Assessment of Educational Progress in 1992 and 1994, the two states had nearly identical (low) scores. Yet Arizona spends a thousand dollars less per pupil on public education than Kentucky, an overall difference of some $400 million a year.

In any report card on state-level education performance, you might suppose that Arizona would fare better than Kentucky, at least earning a higher grade for efficiency. Arizona might also be lauded for its bold, shake-the-system "charter school" program, which in its second year already enrolls 2 percent of the state's youngsters in these innovative public schools, while a question could be raised about Kentucky's cumbersome, hyper-centralized reform plan, already in Year Six yet still reporting flat test scores in middle and high schools.

So one might suppose. But only if one were naive about the priorities of the education establishment, which on January 16 trundled out a bulky new "report card" on public education -- Quality Counts -- that conferred a grade of "B" on Kentucky, "C-" on Arizona.

This 238-page coffee-table-size tome -- laced with ads from textbook publishers, computer firms, consultants, teacher colleges, and school reform projects (including my own Hudson Institute's) -- was written by the staff of Education Week, the country's premier newspaper about K-12 schooling, with funding from two private foundations. Considerable fanfare attended its release, and elected officials throughout the land can expect to see the pages grading their states waved about by school lobbyists during the legislative sessions now beginning.

Quality Counts has all the trappings of objective social science. Statistics were gathered on "75 specific indicators." "Thousands of pages of data" were reviewed. Education Week's own ample archives were plumbed. Experts were surveyed (me included). Teachers, principals, and superintendents were polled. And on and on.

The result of all this effort is three different kinds of measure for each state: achievement scores in 4th-grade reading (1994) and 8th-grade math (1992); six letter grades; and a several-page essay.

The achievement numbers, though lamentable, are solid, based on the widely respected National Assessment, and the authors deserve credit for resisting pressure to adjust those scores by race. As they rightly note, "We can no longer use the excuse of a student's background to justify low achievement." Indeed, when appraising the products of U.S. schools in hard-hitting language such as "rife with mediocrity," Quality Counts lives up to its title.

The test scores, however, are old news. It's the letter grades that are new, that have caught the eye of U.S. educators -- and that will be dangled in legislative drafting sessions and budget hearings this winter. Moreover, because this is the first of a series of annual report cards, these letter grades are sure to be watched in coming years. Voters and taxpayers may reasonably wonder what is being graded.

The answer is mainly school "inputs," especially money. As if to make amends for its tough stance on pupil achievement, nearly all of the report card's other indicators buttress the school establishment's hoary assumptions and encourage its obsession with funding. Indeed, three of a state's six letter grades are tied directly to dollars:

* "Adequacy of resources" blends current per-pupil spending, its rise over the past decade, and the state's "relative fiscal effort," i.e., how heavily it taxes itself to support public schools. (Straight A's to New Jersey, West Virginia, and New York, a lone "F" to Bill Clinton's Arkansas.)

* "Allocation of resources" melds the portion of the state education budget that goes for instruction, the sums devoted to technology, and a measure of how many school buildings are falling down. (No "A"s here. "B"s to Georgia, Indiana, Tennessee, and Virginia. "F" for Alaska.)

* "Equity of resources" tracks the uniformity of per-pupil spending across the state's school districts. (Hawaii, which is all one district, naturally gets an "A" as does West Virginia. The lowest marks are "D"s for California, Rhode Island, and Texas.)

All this despite decades of research and experience demonstrating how weak the link is between what goes into schools and what comes out. We have ample evidence that spending more -- or more equally -- does not mean more learning follows. (Real expenditures per pupil have tripled since the mid-1950s and doubled since the mid-1960s. Few claim that school performance has.) We also have clear signals from parochial and charter schools that great learning can take place in marginal facilities with lean budgets. Yet the school establishment prefers to think otherwise. So do the editors of Education Week, read mostly by educators who, the report card blandly notes, "in virtually every state worry about getting enough money to do the job." Rather than devising rigorous measures of efficiency or productivity, this report is content to signal that more is better. (As for its focus on technology spending, perhaps only a cynic would link that indicator to twenty-some full-page ads by IBM, AT&T, and other vendors of hardware and software.)

Two more grades are conferred chiefly on the basis of inputs and insider opinions. A state's mark for "Teaching Quality" hinges on features cherished by teacher educators and unions: whether all teachers are fully certified and their colleges duly accredited; whether the state belongs to something called the Interstate New Teacher Assessment and Support Consortium; how many weeks of practice teaching it requires; and whether it has an "independent professional-standards board" (usually a union pawn). Less than a third of the grade is tied to factors that laymen are apt to think vital, such as the fraction of high school instructors with degrees in the subjects they teach. (Kentucky and Minnesota lead this list with "B"s. Arizona and Hawaii anchor it with "D"s.)

The measure called "School Climate" is better: Only 35 percent is pure inputs (class size and teacher-student ratios). Other parts pertain to governance, regulatory waivers, safety, student and parent "roles," even (public) school choice. Yet only educators were surveyed. Almost half the grade is based on their "perceptions" of things like "student apathy" and "lack of parent involvement." Nobody asked the consumers what they think about their schools' climate -- or the people working in them. (Vermont leads with "B+"; Mississippi, Florida, California, and Utah bring up the rear with "D-"s.)

The remaining grade is well intended. It purports to gauge states' success in instituting "high content standards in English, math, science, and history for all children and assessments that measure whether students meet the standards." But lacking criteria by which to judge which standards are high, the report-card writers settle for their mere existence. And lacking evidence on which tests are rigorous, they settle for certain types of test. Here great deference is paid to educators' ardor for "performance assessments," and states that employ standardized tests are marked down. This approach yields such bizarre results as a worse grade for Virginia, which has nationally acclaimed standards in place and tests under development, than for North Carolina, whose standards are notoriously low. Iowa, which paces the nation in college-entrance scores most years (and tracks its progress via a private testing program), earns a failing grade (along with Wyoming) in this category because it stoutly refuses to impose uniform standards or state tests.

As if this grading scale were not sufficiently slanted toward establishment preferences, the essays accompanying the state reports also reveal strong preferences for a particular, educator-endorsed strategy of school reform: centralized, uniform, and tightly controlled from above. Establishment leaders dub this approach "systemic reform" -- President Clinton's controversial Goals 2000 program embodies the concept -- and contrast it with the market-style strategies they despise: charter schools, private-contract management, and vouchers.

Never mind that there's no evidence of the "systemic" approach's producing better results. It's the strategy that preserves the old ground rules and power relationships, that maintains control and sops up money, and the report-card writers plainly favor it. Indeed -- incredibly -- they find the prospect of "alternative forms of education . . . to replace public schools as we have known them" as worrisome as erosion of "our democratic system and our economic strength." Lynn Olson, senior editor of Education Week, depicts as one of the great "obstacles" to serious reform the existence of "a vocal and determined group of reformers [who believe] that a better way to improve the schools is through competition."

No wonder Kentucky fares better than Arizona in such a grading scheme. The Bluegrass State hews to the party line, while the Grand Canyon State is striking out on its own. This report card is replete with similar misjudgments. Its central failure is not that it papers over the shortcomings of U.S. education -- it's plenty critical. Rather, its fundamental error is that it turns the clock back thirty years to a time when quality was measured in dollars, payrolls, credentials, and elaborate bureaucratic schemes rather than the actual performance of students, schools, and educators.