Lots of talk these days about “design thinking” and “evaluation thinking.” What do these concepts actually mean? And what do they tell us about doing good monitoring and evaluation work? Taken together, design thinking and evaluation thinking point towards a process of perspective-taking* that can make any project, policy, or intervention a whole lot stronger and a whole lot more likely to be successful.
For an excellent crash course on design thinking, check out what the folks at the Stanford d.school have to say. For some really fruitful discussion of evaluation thinking (albeit with a shmancier title), check out this article.
So, what’s really important here? If you’re the designer — of a project, policy, or intervention — these approaches suggest that everything should flow from a rock-solid understanding of what your user’s needs really are. Design thinking tells us to start there; evaluation thinking helps us get there. Sounds so basic, but it turns out that we need to be reminded to take ourselves out of the driver’s seat. Really good things can happen when we do.
Design thinking tells you to start with the user** and ask questions like:
– what does the user need? And hey, not what do I think the user needs… but what does the user think she needs?
– why do they need that?
– how will the product, intervention or solution I provide meet that need?
– what will it be like for the user to experience the product, intervention, or solution I am providing?
Design thinking emphasizes rapid prototyping and responsive redesign as an iterative process. Thoughtful work on evaluation thinking helps flesh out how this might work in practice when what you’re talking about is larger interventions in challenging political and economic contexts.
Evaluation thinking picks up there, by asking:
– how will I really know whether (or to what extent, and under what conditions) the intervention I am providing is meeting my user’s needs?
– how am I listening? In what way can the user tell me or signal to me that their needs are or are not getting met?
– when I find this out, how can I use what I’ve learned to recalibrate or refine my approach to better meet my user’s needs?
Bringing together a design and evaluation perspective makes for the strongest monitoring and evaluation practice. If we make a really honest assessment of our user’s needs as our baseline, it can totally change what we think is important to measure in M&E efforts. Adding on evaluation thinking means that we’re equipped to learn the best way to implement really well-designed policies and interventions in complicated environments. As this recent article in SSIR points out, even very well-designed, user-focused policies run into major structural and institutional barriers that leave them languishing in the design dustbin. Good monitoring and evaluation process provides the evidence to understand and adapt to the context in which our projects are implemented.
What does it look like to put this into practice? For an interesting example, check out the story of the Governance Collab at Stanford.
* Whose perspective is most important? Your “user/s,” your audience, or the folks you’re trying to reach through your project. This may or may not be a straightforward question to answer, especially if different folks will experience your project in different ways. Think, for example, of an intervention designed to increase participation by women in community decision-making. Who’s your user? Both men and women, but they’ll experience the project differently. What are your user’s needs, and how do they articulate them? How can a project designed, at a very basic level, to shift gender-based power relationships meet the needs of men and women? It’s a tough question, but it’s even tougher to imagine a project being successful if this question isn’t taken seriously.
** Sometimes design thinking folks talk in terms of “empathy,” imagining yourself in the shoes of your user, but it’s always best to start by actually talking to real people. Sounds simple, and it is. But it happens a lot less often than it should.