Our goal at the IPRE Group is to help organizations working in political development, conflict, and climate change understand, measure, and maximize the impact they have on the ground, in the communities they work with. In short, we have taken on the hardest cases, the ones with no established approaches or methodologies for evaluation and learning. No cookie-cutter methods apply. This is great, because it allows us to be innovative and to draw on the broad expertise of our Principals and Partners to tailor the best approach to every situation.
Each of us has spent a good amount of time listening to people who work at social enterprises, foundations, and NGOs, and talking with them about how they think about impact evaluation. What we've found is that people rarely mean the same thing when they use this term, and they too often assume that the questions they really care about are impossible to answer. Questions like: how do you really know whether your programs are making the difference you want them to make?
Many scholars and practitioners have noted that doing smart, rigorous evaluation work in fields like peacebuilding, government accountability, policy-making, and advocacy is especially challenging. Desired impact may be observable only on uncertain or very long time horizons; desired outcomes can be hard to conceptualize and even harder to measure. There is no foolproof diagnostic test to apply, and relying on self-reporting by program participants or funding recipients can be unreliable.
Understanding the effects of programs in dynamic or unstable political environments is incredibly hard. Tough cases always require a tailored approach, which we can provide. At the same time, we think there are five basic components to good impact evaluation. In very simple terms, these are:
1) Comparison. A rigorous evaluation means some element of comparison between people or communities that experienced a program (“received the treatment” in technical terms) and those who did not.
2) Randomization. Randomization helps you avoid certain kinds of bias in your results, and allows you to talk about what you’ve observed “on average.” It ideally occurs at two levels: random selection for inclusion in the research, and random assignment of treatment within this population.
3) Innovation. We think outside the conventional toolbox, and this is a good thing. Our Principals and Partners are using and refining exciting new methods in their research, such as network analysis, the use of technology to measure vote fraud, and behavioral experiments.
4) Local Partnership. There is no substitute for local expertise. The best research is done through strong relationships with key local partners, built through open and equal channels of communication.
5) Organizational Feedback. Good evaluation work links organizational goals with the design of the evaluation study, and links the learning that comes from the study back into the organizational mission. Smart impact evaluation is an essential part of learning and growing as an organization.
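For readers who like to see the mechanics, the first two components above — comparison and two-stage randomization — can be sketched in a few lines of Python. This is purely an illustration on simulated data, not our actual workflow: the population, sample sizes, and outcome function are all hypothetical, and the "program effect" is baked into the simulation so there is something to recover.

```python
import random
import statistics

random.seed(0)

# Hypothetical eligible population, e.g. 1,000 community IDs (illustrative only).
population = list(range(1000))

# Stage 1: random selection for inclusion in the study.
study_sample = random.sample(population, k=200)

# Stage 2: random assignment of treatment within the selected sample.
shuffled = random.sample(study_sample, k=len(study_sample))
treatment = set(shuffled[:100])   # units that "receive the treatment"
control = set(shuffled[100:])     # units that do not

def observed_outcome(unit):
    """Hypothetical outcome measure: a noisy score to which the
    simulated program adds a small average effect (+5)."""
    base = random.gauss(50, 10)
    return base + (5 if unit in treatment else 0)

treated_scores = [observed_outcome(u) for u in treatment]
control_scores = [observed_outcome(u) for u in control]

# Comparison: with random assignment, the difference in average
# outcomes is an unbiased estimate of the average treatment effect.
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"estimated average effect: {effect:.1f}")
```

Because assignment was random, the estimate should land near the simulated effect of 5, with some noise; without randomization, the same difference in means could instead reflect whatever made treated units different to begin with.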
We welcome your thoughts, and hope this blog will become a place of active discussion!