May 22, 2014, marks the 50th anniversary of President Lyndon Baines Johnson’s “Great Society” address, delivered at the spring commencement for the University of Michigan. That speech remains the most ambitious call to date by any president (our current commander in chief included) to use the awesome powers of the American state to effect a far-reaching transformation of the society that state was established to serve. It also stands as the high-water mark for Washington’s confidence in the broad meliorative properties of government social policy, scientifically applied.
No less important, the Great Society pledge, and the fruit this would ultimately bear, profoundly recast the common understanding of the ends of governance in our country. The address heralded fundamental changes—some then already underway, others still only being envisioned—that would decisively expand the scale and scope of government in American life and greatly alter the relationship between that same government and the governed in our country today.
In his oration, LBJ offered a grand vision of what an American welfare state—big, generous, and interventionist—might accomplish. Difficult as this may be for most citizens now alive to recall, the United States in the early 1960s was not yet a modern welfare state: Our only nationwide social program in those days was the Social Security system, which provided benefits for workers’ retirement and disability and for orphaned or abandoned children of workers. Johnson had been gradually unveiling this vision since his declaration of a “War on Poverty” in his first State of the Union address, delivered in January 1964, just weeks after John F. Kennedy’s assassination. In LBJ’s words, “The Great Society rests on abundance and liberty for all. It demands an end to poverty and racial injustice, to which we are totally committed in our time. But that,” he said, “is just the beginning.”
The Great Society proposed to reach even further: to bring about wholesale renewal of our cities, beautification of our natural surroundings, vitalization of our educational system. All this, and much more—and the solutions to the many obstacles encountered in this great endeavor, we were told, would assuredly be found, since this undertaking would “assemble the best thought and the broadest knowledge from all over the world to find those answers for America.”
Memorably, Johnson insisted that the constraints on achieving the goals he outlined were not the availability of the national wealth necessary for the task, nor the uncertainties inherent in such complex human enterprises, but simply our country’s resolve—whether we as a polity possessed sufficient “wisdom” to embark on the venture.
For a lesser politician, the Great Society speech might have amounted to little more than lofty rhetoric. For LBJ, it was an actual blueprint. With Johnson’s consummate legislative skills, honed over six years as Senate majority leader, and with the coming 1964 electoral landslide for his party, the Great Society vision would be swiftly implemented: through civil rights laws, a panoply of new social programs (Medicare, Medicaid, food stamps, and so forth), new federal agencies (the Department of Housing and Urban Development, the Department of Transportation), and a vast array of other federal social projects.
What began under Johnson continued—or, more often, expanded—under all successive presidents. Not even Ronald Reagan managed to reverse the growth of government set in motion by that call for the Great Society. Thus, the American welfare state as we know it today is very largely the outcome of forces Johnson unleashed in the first half-year of his presidency. (The most appreciable addition to this apparatus over the past half-century is arguably Obamacare, the health care guarantees forged into law under the Affordable Care Act of 2010.)
Half a century later, how should we assess the Great Society? Any attempt at a comprehensive assessment would demand vastly more space than this essay affords, given the audacity of the undertaking and the expanse of territory it laid claim to conquer—or, more precisely, to improve. Everywhere Johnson cast his eye, he seemed to find an America in need of improvement: Environmental protection, community development, the arts—all of these and more he flagged in this one short speech as legitimate new areas for federal government involvement under the banner of the Great Society. We will confine our assessment here to that enormous first pillar of the Great Society: “abundance” for all and the “end to poverty” to which Johnson committed us.
The War on Poverty was grounded in a set of presumptions about our economy and society that were widely shared at the time by the country’s opinion leaders and decision-making elites. American prosperity was, in this postwar era, finally here to stay—and continuing economic advancement could be all but taken for granted. Indeed, the helmsmen of our national economy—groups like the President’s Council of Economic Advisers—knew so much about how to manage the workings of the magnificent U.S. macroeconomy that they could seriously talk about fine-tuning its performance.
The problem of poverty amid general affluence, for its part, was mainly a technocratic question—to be answered boldly through straightforward, official redirection of national resources to fill the country’s “income gap.” Some special programs, however, were also required for addressing conditions in pockets of lingering social disadvantage (urban slums, Appalachia, the Mississippi Delta, and other blighted locales). Guided by experts from the academy and elsewhere, these social programs could, with time, systematically convert virtually all of the underprivileged into full participants in the American Dream.
The conceit that possessed the initial troop of Great Society poverty warriors, in short, was that the challenge inherent in the project of eliminating poverty in America was not in essence very different from that of the project for sending a man to the moon. Both tasks could be successfully engineered by a confident government with sufficient resources, know-how, and commitment behind it. This outlook exemplifies what Friedrich Hayek termed “scientism,” pure and simple: misapplication of techniques and theories from the natural sciences to other, patently unsuitable realms.
The scientistic fallacies that animated the original War on Poverty did not long survive their encounters with real, live human beings, as the fates of the Office of Economic Opportunity and other experiments would attest. Nevertheless, official antipoverty programs and policies went on to flourish—at least by the administrative metric of resource expenditures. In 2012, nearly $700 billion in means-tested transfers of money, goods, and services were obtained by recipients of antipoverty benefits. And this does not include the bureaucratic overhead and personnel costs for such programs. At this writing, annual government outlays for U.S. antipoverty programs may have reached, or even exceeded, the trillion-dollar mark.
And programs expressly devised for combating poverty were only one component within the overall schema of social policies intended to redress material want and economic insecurity. For the Great Society also added Medicare to the structure of the American welfare state and arguably prepared the way for more generous, and inventive, outlays from the existing Social Security program. All in all, inflation-adjusted government transfers for social welfare programs soared more than tenfold between 1964 and 2013, and real per capita welfare state transfers were six-plus times higher in 2013 than 50 years earlier. Numerous critics at home and abroad fault the contemporary U.S. social welfare system for what they take to be its punitive austerity. Nevertheless, the share of overall personal income from social welfare transfers jumped from 5.8 percent in 1964 to 17.0 percent in 2013; more than one dollar in six within the overall American household budget thus comes from government entitlement programs, redistributed through social welfare guarantees.
Since 1964, the welfare state has devoted considerable resources to assuring or improving the public’s living standards—something like $20 trillion in inflation-adjusted dollars through antipoverty programs alone, by one calculation. What sort of effect have these programs had on deprivation and its attendant miseries?
If we were to judge the performance of our welfare state solely by the statistical measure invented to gauge national performance in the War on Poverty—the “poverty rate”—we would have to conclude the whole effort has been a miserable and unmitigated failure. The true picture, however, is rather more complex than that same poverty rate is capable of depicting, though not necessarily much more heartening.
According to the official poverty rate, the proportion of our population below the poverty line was dropping rapidly in the years immediately before the War on Poverty was fully underway. In the seven years between 1959 and 1966, according to the Census Bureau, the proportion of our country living in poverty dropped by about a third, from 22.4 to 14.7 percent. Since then, however, the official poverty rate has been essentially stuck. It reached an all-time low of 11.1 percent in 1973, in the Nixon era, then drifted uncertainly back upward. For the year 2012, the most recent such data available, the national poverty rate was 15.0 percent—slightly higher, in other words, than back in 1966.
The official poverty picture looks even worse the more closely one focuses on it. According to those same official numbers, the poverty rate for all families was no lower in 2012 than in 1966. The poverty rate for American children under 18 is higher now than it was then. The poverty rate for the working-age population (18-64) is also higher now than back then. The poverty rate for whites is higher now than it was then. Poverty rates for Hispanic Americans have been tracked only since 1972—but these likewise are higher today than back then. Shocking as this may sound, only a few groups within our society—most importantly, Americans 65 and older and African Americans of all ages—registered any appreciable improvement in poverty rates between 1966 and 2012.
If those official numbers reflected reality in America, all this would be cause for the gravest alarm. After all, the official poverty rate is meant to count the percentage of the population living on incomes below a threshold set back in the early 1960s and adjusted since then only to keep up with inflation. That threshold was meant to provide only a severe and stringent household budget—as stringency was envisioned half a century ago. But of course, America is a vastly richer society today. According to the Bureau of Economic Analysis, real per capita disposable income in our country in 2012 was two and a half times the 1966 level. And according to data compiled by the Federal Reserve, private wealth grew even faster over that same period.
Taken together, these soundings would seem to conjure up the ghastly image of “immiserating growth,” that fatal tendency of modern capitalist systems, at least according to some postwar neo-Marxian theorists. But the proposition that a higher fraction of Americans are stuck in absolute poverty today than nearly half a century ago cannot be taken seriously. It is preposterous on its very face.
Consider that the health of Americans of all ages is markedly better now than then: life expectancy at birth rose by more than eight years between 1966 and 2010 alone and is higher at every age these days—even for centenarians. Americans are not only healthier, but also much more educated—in 1966, nearly a third of adults 25 or older had a grade school education or less, compared to just 5 percent in 2013. And Americans are more likely now to be working in paid jobs: Despite the terrible 2008 economic crash, the percentage of employed adults 20 and older was still higher in 2013 than in 1966 (61 percent versus 57 percent).
The idea that such a population would at the same time suffer a higher incidence of absolute poverty does not even pass the laugh test. This picture is an illusion, a distorted reflection from the statistical variant of a funhouse mirror, and the funhouse mirror in question is the poverty rate itself. The poverty rate is a highly misleading measure of living standards and material deprivation—incorrigibly misleading, in fact.
The central and irresolvable trouble with the official poverty rate is that it presumes an immediate and exact equivalence between income levels and consumption levels—so that any home in any year with a reported income level below the poverty line must perforce also be constrained to sub-poverty-line spending power. In real-world America, by contrast, income is a poor predictor of spending power for lower-income groups at any given point in time—and that predictive power has dramatically worsened over the course of our postwar era.
In 1960-61, according to the BLS Consumer Expenditure Survey, the bottom one-fourth of American homes spent about 12 percent more than their pretax reported incomes each year. By 2011, according to that same survey, those in the lowest quintile were spending nearly 125 percent more than their reported pretax incomes and nearly 120 percent more than their reported posttax, posttransfer incomes.
This growing discrepancy between income and expenditures on the part of the poorer strata in recent decades is by no means impossible to explain. Not least important, households are subject to greater year-to-year earnings swings than in the past and have greater wherewithal (through borrowing, asset drawdowns, and other means) to buffer their consumption when they hit a bad year, or even a couple of bad years. But this phenomenon also means that people reporting ostensibly poverty-level incomes are less and less likely to be consigned to poverty-level living standards, as that standard was originally conceived in the early 1960s. Increasing noncash transfers of means-tested public benefits (including, especially, health care) only further widen the gap between reported income and actual consumption for America’s “poverty population.”
Thus, the actual living conditions of people counted as living “in poverty” in America today bear very little resemblance to those of Americans enumerated as poor in the first official government count attempted in 1965. By 2011, for example, average per capita housing space for people in poverty was higher than the U.S. average for 1980, and crowding (more than one person per room) was less common for the 2011 poor than for the nonpoor in 1970. More than three-quarters of the 2011 poor had access to one or more motor vehicles, whereas nearly three-fifths were without an auto in 1972-73. Refrigerators, dishwashers, washers and dryers, and many other appliances were more common in officially impoverished homes in 2011 than in the typical American home of 1980 or earlier. Microwaves were virtually universal in poor homes in 2011, and DVD players, personal computers, and home Internet access are now typical in them—amenities not even the richest U.S. households could avail themselves of at the start of the War on Poverty. Further, Americans counted as poor today are manifestly healthier, better nourished (or overnourished), and more schooled than their predecessors half a century ago.
To be clear: The poor in America are not well-to-do. They are poorer than the rest of America. This has not changed. What has changed is their standard of living—which has risen markedly since the beginning of the War on Poverty, as have living standards for all the rest of us. Work by economists like Daniel Slesnick at the University of Texas, Bruce Meyer at the University of Chicago, and James X. Sullivan at the University of Notre Dame demonstrates that an ever-smaller share of our country subsists on consumption levels demarcated by our old, official, 1960s-era poverty line.
Consumption-focused assessments of the poverty problem are stunningly different from our official numbers. In a recent research paper, for example, Meyer and Sullivan indicate that such “consumption poverty” afflicted less than 4 percent of the population in 2008. In the wake of the 2008 crash, “consumption poverty” rose—but as of 2010, when postcrash conditions were possibly most dire, just 3.7-4.5 percent of America was subject to it, according to their calculations.
This research underscores a significant point, all too often misunderstood in both policy and intellectual circles today. Poverty in America—the sort of material deprivation people knew back in the 1960s—has been all but eliminated. This should not be a surprise, considering both the many intervening decades of general economic advancement and the tremendous outlays of government antipoverty funds, currently averaging about $9,000 in total expenses and $7,000 in transfer value per year for every person in our nation designated as a recipient in need.
We cannot say the War on Poverty was a necessary condition for the near-complete abolition of 1960s-style poverty, insofar as we cannot know what the rate of progress would have been without those efforts. But we can say that the War on Poverty has proved to be a sufficient condition for achieving this great objective.
So the long War on Poverty has indeed managed to eradicate 1960s-style poverty from our midst, or very nearly so—even if our federal authorities today are not competent to describe this accomplishment (or, seemingly, even to recognize it in the first place). This is an important fact in favor of the War on Poverty—but other important facts must be considered as well, all seemingly weighing on the other side of the ledger. For the institutionalization of antipoverty policy has been attended by the rise and spread of an ominous “tangle of pathologies” in the very society whose ills antipoverty policies were intended to heal. Those pathologies appear to be conjoined with antipoverty policies; in some cases, antipoverty policies may even have helped create them. But irrespective of the causality at work, those pathologies are today very largely financed by antipoverty policies.
The phrase “tangle of pathologies” harks back to the famous Moynihan Report of 1965, which warned of the crisis of the family then gathering for black America. That report was criticized, even viciously denounced, at the time, but in retrospect much of it seems positively prophetic.
The Moynihan argument also assumed that the troubles impending for black America were unique—a consequence of the singular historical burdens that black Americans had endured. That argument was not only plausible at the time, but persuasive. Yet today that same “tangle of pathology” can no longer be described as characteristic of just one group. Quite the contrary: These pathologies are evident throughout all of America today, regardless of race or ethnicity. Three of the most disturbing of these many entangled pathologies are welfare dependency, the flight from work, and family breakdown.
Welfare Dependency. Unlike, say, an old-age pension awarded after a lifetime of work, a bestowal of charity or aid to the indigent is a transaction that establishes a relationship of dependence. As a people who have prized their independence, financial as well as political, Americans throughout history have attempted to avoid dependence on “relief” and other handouts. Recovery from the Great Depression was reflected in the great decline in the numbers of Americans on public aid: In 1951, the commissioner of Social Security was pleased to report that just 3.8 percent of Americans were receiving public aid, down from 11.5 percent as recently as 1940. But with the War on Poverty and its successor programs, such dependency has become routine. The United States today is richer than at any previous juncture—yet, paradoxically, more Americans than ever before are officially judged to be in need. Welfare dependence is at an all-time high and by all indications set to climb in the years ahead.
Perhaps tellingly, the U.S. government did not get around to collecting data and publishing figures on the proportion of the population dependent on need-based benefits on a systematic basis until nearly two decades after the start of the War on Poverty, during the Reagan era. By then (1983), nearly one American in five (18.8 percent) lived in a home taking in one or more means-tested benefits.
By 2012, according to one Census Bureau count, the proportion was almost one in three: 32.3 percent—or “only” 29.4 percent if school lunches were excluded from the tally. This still left more than 90 million Americans applying for and accepting aid from government antipoverty programs. But only 33 million people from America’s “poverty population” were enrolled in those same means-tested programs. In other words, nearly twice as many Americans above the poverty line as below it were getting antipoverty benefits. Evidently, the American welfare state has been defining deprivation upward.
In the 1990s, a bipartisan political consensus enacted “welfare reform”—but it would be a mistake to overestimate the effect of that adjustment on the long-term rise in dependency. That “welfare reform” took aim at just one especially controversial and unpopular program: Aid to Families with Dependent Children (AFDC), a facet of the original Social Security legislation, but one that had been allowed to mutate into a vehicle for financing unwed motherhood and intergenerational dependency.
AFDC’s reach was always limited—in 1983 only 4.2 percent of Americans lived in homes receiving aid from it, according to Census Bureau estimates—and that fraction had been pared down to just 2.0 percent by 2011. On the other hand, most of the other means-tested programs have extended their reach over those same years: public housing, income transfers from AFDC alternatives, food stamps, Medicaid, and more. Since the advent of “welfare reform,” the proportion of the American population relying on at least some entitlement benefit from the government has jumped by another 10 percentage points.
By 2012, according to one Census Bureau count, significant demographic subgroups within the American population were well along the path to means-tested majorities—that is to say, toward the point where more members than not of the groups in question would be claiming benefits from government antipoverty programs. More than 47 percent of all black Americans and fully 48 percent of Hispanic Americans of all ages were reckoned to be taking home means-tested benefits (excluding subsidized school lunches from the tally, here and in the rest of this discussion). More than 60 percent of black and Hispanic children, and nearly 43 percent of all American children, were depending on antipoverty programs for at least some support. Dependency was less pronounced among children of Asian Americans and non-Hispanic whites, but only to a degree—for both those groups, the ratio was close to 30 percent. In all of the aforementioned cases, most of the beneficiaries drawing on government poverty program resources were men, women, and children not officially counted as poor.
In affluent democracies, children are not expected to be self-supporting—nor, necessarily, are their mothers. For men in the prime of life, expectations have always been different. In this sense, the most revealing measure of the spread of dependence is the declining financial independence of working-age American men. Among men 25 to 44 years of age, more than 25 percent lived in homes taking aid from antipoverty programs by 2012. For nonpoor men of those same ages, the ratio was over 20 percent. While the proclivity was lower for working-age men living independently of families, nonetheless nearly 1 in 10 adult American men under 65 living alone were seeking and accepting need-based public aid by 2012.
The reach of dependence is perhaps best highlighted by its inroads into the parts of American society traditionally least ensnared by it. Historically, non-Hispanic whites have had the lowest dependence on public aid of any major racial or ethnic group delineated within official statistics—yet by 2012, nearly 1 in 5 nonpoor Anglo men ages 25-44, and roughly 1 in 11 nonpoor Anglo men under 65 living alone, were on the government benefit rolls.
The Flight from Work. Although a higher fraction of Americans 20 and older are working today than at the start of the War on Poverty (61.2 percent in January 2014 versus 57.2 percent in January 1964), and though labor force participation rates are likewise higher today than 50 years ago, these overall figures mask two distinct tendencies.
On one hand, adult women are much more likely to be working or looking for work today than two generations ago. Labor force participation rates for women 20 and older are fully 20 percentage points higher today than in early 1964 (58.6 percent in January 2014 versus 38.5 percent in January 1964). A lifestyle that includes at least some paid employment has become the norm for American women over the past two generations.
On the other hand, men have been a diminishing presence within the workforce—and not only thanks to the rising share of women who seek to work. The proportion of men 20 and older who are employed has dramatically and almost steadily dropped since the start of the War on Poverty, falling from 80.6 percent in January 1964 to 67.6 percent 50 years later. No less remarkable: The proportion of adult men in the labor force—either working or looking for work—has likewise plunged over those same years, from 84.2 percent then to 71.9 percent today. Put another way: Our country has seen a surge of men making a complete exit from the workforce over the past 50 years. Whereas fewer than 16 percent of men 20 or older neither had work nor were looking for it in early 1964, the corresponding share today is more than 28 percent.
In purely arithmetic terms, the main reason American men today are not working is not unemployment. Rather, it is that they have opted out of the labor market altogether. For every adult man who is between jobs and looking for new work, more than five are neither working nor looking for employment.
Even in what should be the prime of work life, this male flight from work has been apparent. Between early 1964 and early 2014, the proportion of civilian, noninstitutionalized men 25-34 years of age completely out of the labor force nearly quadrupled—from 3.2 percent to 12.6 percent. By the same token, the corresponding share of nonworkers among men 35-44 years of age more than tripled over those same years, from 2.5 percent in January 1964 to 9.0 percent in January 2014.
The withdrawal of progressively greater proportions of men—including relatively young men—from the U.S. workforce seems especially paradoxical when we consider the major improvements in health (as reflected in life expectancy) and educational attainment (as reflected in mean years of schooling) for the cohorts under consideration over those same years. All other things being equal, one might have assumed these changes would make men more capable of working, not less.
It is noteworthy that the male flight from work among prime working-age groups, striking as it has been, did not unfold steadily over the entire postwar period. It began only after the War on Poverty commenced. Between early 1948—when the Bureau of Labor Statistics (BLS) began the current system for tracking workforce data—and early 1964, a stretch of more than a decade and a half, the proportion of nonworking men 25-54 years of age remained essentially unchanged. The same was true for men 35-44 years of age. For men 25-34, the labor force participation rate actually rose, from 96.1 percent in January 1948 to 97.1 percent in January 1964. Only since the War on Poverty began to offer alternatives to work for able-bodied men have we seen a major migration of men in prime working ages out of the time-established path of work.
As long as such data have routinely been collected, labor-force participation rates have been lowest for black Americans and highest for Hispanic Americans; rates for Asian Americans and Anglos have been in between, close to the national average. There may be many reasons for the poor labor force performance of black American men—among them, lower educational levels, the collapse of work opportunities in urban centers, and possibly continuing variants of discrimination as well. But ever since the War on Poverty, the flight from work among African-American men has merely preceded the same flight among Anglos. Although the black American labor force participation rate for men of peak working age (25-54) was sharply lower than that of Anglos in 2013, it was a bit higher in 1973 than the Anglo rate would be 40 years later. The same is true for men in their 20s, 30s, and 40s. The strange and disturbing fact is that a lower share of Anglo men today are working or looking for work than was true of their African-American counterparts four decades earlier—notwithstanding all the disadvantages borne by black men in those earlier years.
Family Breakdown. In the early postwar era, the norm for childbearing and child-rearing was the married two-parent household. Norm and reality were not identical, of course—but for the country as a whole, the gap was not immense. Illegitimacy was on the rise in the early postwar era, but as late as 1963, on the eve of the War on Poverty, more than 93 percent of American babies were coming into the world with two married parents. According to the 1960 census, nearly 88 percent of children under 18 were then living with two parents. That fraction was slightly higher than it had been before World War II, thanks in part to improving survival chances for parents and the correspondingly diminished risk of orphanhood.
Unfortunately, the rise of the new welfare policies inaugurated by the War on Poverty coincided with a marked change in family formation patterns in America. Out-of-wedlock births exploded. Divorce and separation soared. The fraction of children living in two-parent homes commenced a continuing downward spiral. These new patterns are so pervasive, and so politically sensitive, that some today object even to describing the phenomenon as “family breakdown.” But the phenomenon has swept through all of American society over the past 50 years, leaving no ethnic group untouched.
Pre-Great Society statistics on births outside marriage may understate the true extent of nonmarital childbearing, given the stigma that attached to illegitimacy in those days. Be that as it may, for the quarter-century extending from 1940 to 1965, official data recorded a rise in the fraction of births to unmarried women from 3.8 to 7.7 percent. Over the following quarter-century—1965 to 1990—out-of-wedlock births jumped from 7.7 percent of the nationwide total to 28.0 percent. Twenty-two years later (the most recent available data are for 2012), America’s overall out-of-wedlock birth ratio had surpassed 40 percent.
By 2013, nearly 32 percent of America’s children were living in arrangements other than a two-parent home. Moreover, given current trends in cohabitation, divorce, and remarriage, not all children living in two-parent homes nowadays are with both their biological parents—and even where they are, those biological parents are not always married. A Census Bureau study for 2009 reported just under 69 percent of America’s children lived in two-parent homes that year—but only 60 percent were biological offspring of both parents in their home, and only 57 percent were with both married biological parents. The corresponding percentages are presumably lower today.
The two-married-parent family construct has always been frailest among African Americans (though the reasons behind that fragility continue to be debated, sometimes rancorously). The reported illegitimacy ratio for nonwhites gradually rose from 17 percent in 1940 to 22 percent in 1959. In 1960, one in five nonwhite children was living with a lone mother. By 2012, more than 72 percent of black births were outside marriage, and in 2013 more than half of black children were living only with their mother—many more than the 37 percent who were in a two-parent home.
But out-of-wedlock birth ratios and living arrangements for children have been changing in the rest of America as well since the start of the War on Poverty—and radically. Among Hispanic Americans, more than 30 percent of children were in single-parent homes by 2013—and well over half were born outside marriage by 2012. By 2009, fewer than 60 percent of Latino children were living with both biological parents, and fewer than 55 percent lived with biological parents who were married. Corresponding data are not available for 1964, but these figures are much higher than for 1980, when 21 percent were in single-parent homes, and fewer than 25 percent were born outside of marriage.
The collapse of the traditional family structure has been underway among the majority population of non-Hispanic whites as well. For Anglos, there were few signs of impending family breakdown in the generation before the War on Poverty; between 1940 and 1963, the out-of-wedlock birth ratio increased, but only from 2 percent to 3 percent, and in 1960, just 6 percent of white children lived with single mothers. In 2012, the proportion of out-of-wedlock births was 29 percent—nearly 10 times as high as it was just before the War on Poverty. By 2013, more than 18 percent of Anglo children were in single-mother homes—three times the proportion before the War on Poverty—and over one-quarter lived outside two-parent homes. By 2009, less than two-thirds of Anglo children were living with both biological parents, and fewer than five out of eight were with biological parents who were married to each other. Thus, Anglo whites today register illegitimacy ratios markedly higher than those ratios were for African Americans when Moynihan called attention to the crisis in the black family—and proportions of single-parent children look eerily comparable.
The reason the Moynihan Report sounded an alarm about family trends for black America was that a very large body of research already existed in the 1960s concerning the manifold disadvantages conferred on children who grew up in what were then called “broken homes.” Over the intervening decades, a small library of additional studies has accumulated to corroborate and document the tragic range of disadvantages that such children face. This is not to say that children from alternative living arrangements cannot end up thriving—obviously, many do; it is, rather, that their odds of suffering adverse educational, health, behavioral, psychological, and other outcomes are much higher. These disadvantages are starkly evident even after controlling for socioeconomic status, ethnicity, and race.
One of the many risks children of broken homes confront is a much higher chance of becoming a violent offender in our criminal justice system—and, more broadly, a much higher risk of being arrested for crime. Since the launch of the War on Poverty, criminality in America has taken an unprecedented upward turn. Although reported rates of crime victimization—including murder and other violent crimes—have been falling for two decades, the percentage of Americans behind bars has continued to rise (though it appears to have peaked—or at least temporarily paused—since 2009).
As of year-end 2010, more than 5 percent of all black men in their 40s and nearly 7 percent of those in their 30s were in state or federal prisons, with additional numbers incarcerated in local jails awaiting trial or sentencing. For Latinos, the corresponding numbers were more than 2 percent and nearly 3 percent. Among Anglos, slightly more than 1 percent of all men in their 30s were sentenced offenders in state or federal prisons—a lower share than for these others, but a higher proportion than in earlier generations. This huge convict population may be described in many different ways—but one way to describe most of them is as children of the earthquake that shook family structure in the era of expansive antipoverty policies.
Surveying this new American landscape of dependency, voluntary male joblessness, and family decay, an unavoidable question confronts our society: How are these perverse features of our daily life related to the rise of the modern American welfare state? Is it simply a coincidence that welfare dependence, the male flight from work, and accelerated family breakdown all happened to coincide with the sustained domestic policy shift heralded by the Great Society? As philosophers and statisticians are careful to caution, conjuncture does not establish causation. But this broad and important conjuncture is surely thought-provoking and invites both deep reflection and careful examination.
With respect to welfare dependency, cause and effect are least open to debate. In this particular instance, supply has seemingly created its own demand. Much greater proportions of Americans below the poverty line are seeking and accepting means-tested benefits today than in the past, irrespective of ethnicity or family structure. The culture has changed—or has been changed—by the availability of public benefits that can be obtained by, so to speak, pleading poverty. Moreover, a progressively greater share of Americans above the poverty line is becoming accustomed to applying for and obtaining money, resources, or services from government antipoverty programs. The stigma of depending on what used to be called “relief” is no longer as acute and widespread as it was before the War on Poverty: to which many might say, rightly so. “Entitlements” are benefits to which all citizens are in principle legally entitled. But the plain fact is that popular mores concerning the propriety of taking government help for the needy have shifted tremendously over the past 50 years.
Causality is much less clear-cut when it comes to the adult male flight from work and the erosion of the married two-parent family norm. In these two cases, it could be that the new welfare state was simply stepping into a void opened by social trends propelled by other, unrelated factors: among these, an increasing social preference for leisure, decreasing tolerance for the inconveniences demanded by child-rearing and long-term familial commitments, and changes in technology (including birth control technology). Nor is the fracturing of the modern family unique to postwar America. Far from it: As Francis Fukuyama, among others, has pointed out, almost every Western industrial democracy has undergone a similar sort of earthquake within the family since the 1960s. Only one of those societies was also witness to the War on Poverty: namely, ours.
For these and other reasons, the Great Society’s role in modern America’s social pathologies seems fated for endless and inconclusive debate. What is indisputable, however, is that the new American welfare state facilitated these trends by helping to finance them: by providing support for working-age men who are no longer seeking employment and for single women with children who would not be able to maintain independent households without government aid. Regardless of the origins of the flight from work and family breakdown, the War on Poverty and successive welfare policies have made these modern tendencies more feasible as mass phenomena in our country today.
Suffice it to say that none of these troubling mass phenomena was envisioned when the War on Poverty commenced. Just the opposite—President Johnson saw the War on Poverty as a campaign to bring dependency on government handouts to an eventual end, not as a means of perpetuating them for generations to come. He made this very clear three months after his Great Society speech at the signing ceremony for some of his initial War on Poverty legislation, when he announced:
We are not content to accept the endless growth of relief rolls or welfare rolls. . . . Our American answer to poverty is not to make the poor more secure in their poverty but to reach down and to help them lift themselves out of the ruts of poverty and move with the large majority along the high road of hope and prosperity. The days of the dole in our country are numbered.
Held against this ideal, the actual unfolding of America’s domestic antipoverty policies can be seen only as a tragic failure. Dependence on government relief, in its many modern versions, is more widespread today, and possibly also more habitual, than at any time in our history. To make matters much worse, such aid has become integral to financing lifestyles and behavioral patterns plainly destructive to our commonwealth—and on a scale far vaster than could have been imagined in an era before such antipoverty aid was all but unconditionally available.
The Great Society was by no means a wholesale failure. America has two great achievements from the Great Society to celebrate and take pride in. That agenda finally, and decisively, brought an end to the long, hateful stain of legalized racial discrimination within our nation. And it has all but eliminated the sort of material deprivation that tens of millions of Americans in the early 1960s still suffered.
But the Great Society was a project that ended up at war with itself. Modern America has been shaped by the irreconcilable contradiction between its vision of human flourishing, on the one hand, and the particulars of the antipoverty programs that the Johnson administration and subsequent administrations promoted and financed, on the other. The former promised at long last to include all Americans, irrespective of race, as full citizens under the embrace of the exceptional legal and economic arrangements afforded through the American political tradition. The latter subverted that same promise by tacitly encouraging, and overtly subsidizing, an alternative to financial self-reliance, work, and intact family: the very social basis upon which the American experiment was built. Fifty years later, daily life in modern America continues to be shaped by the conflicted legacy of this fateful project.
Nicholas Eberstadt is the Henry Wendt scholar in political economy at the American Enterprise Institute and a senior adviser to the National Bureau of Asian Research. He is the author of numerous monographs and books, most recently A Nation of Takers: America’s Entitlement Epidemic (Templeton Press, 2012).