An NGO in Africa Goes Awry
5:40 PM, Oct 1, 2012 • By ARMIN ROSEN
The MVP introduced control villages in 2007. But evidence of the project’s success was hardly forthcoming. In 2010, Clemens and Demombynes responded to the MVP’s recently published mid-term evaluation report with an article in the Journal of Development Effectiveness that reached some damning conclusions about both the methodology and efficacy of the project. For instance, Clemens and Demombynes determined that “initial estimates of the project’s effects change substantially if more rigorous impact evaluation methods than those used in the project’s mid-term evaluation report are employed”—in other words, the project looks a lot different (and a lot less successful) if you evaluate the villages in a national or even regional context, instead of using the MVP’s preferred method of comparing the villages to the earlier, pre-intervention versions of themselves. Clemens and Demombynes wrote that the initial phase of the project was so badly designed that it was “impossible to make definitive statements about the project’s effects.”
The MVP simply shrugged off the paper’s conclusions. “They did not take our criticisms seriously,” Clemens told me. “They denied the legitimacy of every single point we made, and they changed nothing.” In the MVP’s official response to the paper, Sachs and McArthur wrote, “Economists like Clemens and Demombynes should stop believing that the alleviation of suffering needs to wait for their controlled cluster randomized trials.” For Clemens, the moralistic suggestion that he stands in the way of the alleviation of suffering, and that the MVP was simply too important to adhere to sound social scientific practice, was “baffling,” as well as rankling. “The response was to obfuscate, rather than enlighten,” Clemens said.
Two years later, the MVP is still guilty of bad social science. The Lancet paper is riddled with errors. The paper’s authors committed a computational mistake that inflated the apparent decline in child mortality in the villages. At the same time, the paper’s calculations were based on out-of-date national child mortality figures that underestimated the decline in national child mortality. Somewhat astonishingly, the Lancet error repeated a similar mistake from an MVP-authored paper published in the American Journal of Clinical Nutrition a few months earlier. In that paper, the project team used a misleading statistical analysis to claim that the MVP had decreased stunting at village sites in Ghana. In reality, the decrease was almost identical to national-level trends.
The Lancet controversy resulted in a minor shakeup at the MVP—project coordinator Paul Pronyk was reassigned, and Sachs organized an independent panel of experts to probe the causes of the error. But the episode does not seem to have humbled Sachs. A few days after the Lancet correction was announced, Sachs spoke with the aid website Humanosphere and was asked whether he had “evidence the [MVP’s] approach is working.”
“It depends on what you mean by evidence,” Sachs replied. “Some of my critics say we need to do these ‘randomized controlled trials’ as if what we’re doing is testing a red pill against a blue pill. What we’re doing has nothing to do with anything like that. It cannot be reduced down to such a simple and narrow test. . . . This is not a randomized controlled trial; it’s a learning process.”
In fact, the MVP is not a “learning process”: It’s a major project created in lockstep with the Millennium Development Goals, the United Nations’ signature development initiative. It’s a driver of both limited aid resources and global development policy, and its leader is the world’s most famous and influential development economist. Sachs seems to believe that because the MVP represents such an important and even historic leap in development policy, it should be shielded from the kind of rigorous empirical scrutiny that social science demands. To that end, the MVP keeps all of its raw data secret, far from the prying minds of outside researchers.