A group of political scientists (Brigham, Findley, Matthias, Petrey and Nielson) have written a new paper that has raised the alarm about whether aid or development organizations really care about or are interested in learning from the increasing body of rigorous, RCT-based impact evaluation research.
After reading the study, I’m not convinced it has anything surprising to say, and certainly nothing we should get discouraged about. More to the point, I’m not sure it speaks at all to the issues it purports to — it’s not clear the findings have anything to say about whether and how organizations incorporate impact evaluation research into their own work.
So how did the study work? Researchers sent e-mails to over 1,400 microfinance institutions around the world, asking whether they would be interested in collaborating in the future on a rigorous impact evaluation of their (the microfinance institution's) work. When the researchers signaled that academic findings conclude that microfinance is effective, the institutions in the study were twice as likely to respond (and more likely to respond favorably) as when the researchers signaled that academic findings conclude that microfinance is ineffective. A control group received no signal about academic findings.
The authors conclude that these findings indicate that microfinance institutions suffer from confirmation bias and are generally unwilling to learn from research that has negative implications for the existing approach they take in their work.
So what matters about the study? There are a few things to note. First of all, they have a pretty low response rate (although Blattman notes it’s higher than he would have expected). The vast majority (nearly 92%) of microfinance institutions did not respond at all — the researchers got a response from just 118 of the 1,419.
Second, that implementing organizations should be subject to confirmation bias, or cognitive dissonance, is neither surprising nor particularly interesting — given that most human beings (including human beings who work at microfinance institutions) are.* The more compelling question is, how can we create the conditions to overcome these tendencies? How can we make it more likely that organizations will be open to rigorous evaluation even when it’s possible the results won’t validate their existing approach?
It strikes me their research design has produced a scenario in which we should expect microfinance institutions to be most averse to adopting a learning perspective. The study participants who received the “negative treatment” are primed to think about all of the potential risks and costs of rigorous evaluation, and none of the benefits. Why should anyone embrace null findings in the abstract, without any discussion of what null findings mean for learning?
Finally, the authors frame microfinance organizations’ ability to recalibrate in light of new research purely as a question of will or motivation on the part of project implementers (e.g., pp. 6-7). There are plenty of reasons why this framing may be wrong. Implementing organizations may have the same concerns about RCTs and external validity that some researchers do. And it’s reasonable to expect that organizations may face strategic and structural barriers to changing their approach that go beyond matters of will, capacity, or even imagination.
The authors presume that learning from evaluation research is purely the responsibility of microfinance organizations, when in fact the researchers conducting this research must be partners in this process. If an impact evaluation tells you that a project isn’t working, it doesn’t necessarily tell you how to change the project to make it start working. Researchers and implementing organizations must be partners in learning how to learn from the growing body of rigorous evaluation research.
* The authors’ assertion that this is somehow unknown or not to be expected is either feeble or naive: “Although confirmation bias is a well-documented shortcoming in human decision-making, its presence in non-profit organizations, such as MFIs, is not yet known. One might hope that anti-poverty organizations have developed organizational routines to maximize learning and minimize bias. After all, charitable organizations focus on poverty relief as their primary goal, and any information that might help them achieve that objective ought to be privileged” (p. 12).