A recent paper by Paul Shaffer provides a useful summary of some of the benefits of using mixed methods* in impact evaluation. The author describes four ways in which qualitative methods (by which he generally means talking to participants in the study and asking them what they think) add to quantitative impact analyses.
As I read the paper, for the author these boil down to:
1) Providing some insight into mechanisms. Rigorous quantitative approaches such as RCTs can help establish that an intervention is contributing to an outcome, but don’t necessarily tell you why. In other words, quantitative methods can help give you the “whether”; surveys, focus groups and ethnographic work can help uncover the “why.”
2) Identifying the right counterfactual or comparison group. If you can’t randomize, gaining deeper understanding of the context you’re working in will help you identify the important factors to create appropriate matched comparison groups.
3) Learning participants’ view of an intervention’s impact. The author calls this “conducting counterfactual thought experiments” — really, what this means is finding a way to ask participants what they think they would have done in the absence of the intervention. Self-reporting is problematic and people are subject to all sorts of biases, sure, but doing this simple exercise can help explain null findings. The author provides some important real-world examples.
4) Understanding unintended consequences. An RCT can establish whether or not an intervention had the effects it was designed to… but what about understanding whether there were other unintended effects? This is particularly important in understanding unintended negative outcomes of interventions designed to help people.
* Mixed methods refers to a combination of qualitative and quantitative analysis, or what the author refers to as Q² — implying, I suppose, that a mixed methods approach = qualitative × quantitative rather than qualitative + quantitative, and therefore provides exponential added value.