DevLearn Digest 16: Good measurement requires hard choices
Hello everyone,
Just one announcement for this newsletter: our wonderful online training courses on market systems development and monitoring, evaluation and learning are now available for booking!
Both courses take place in November and offer interactive, entertaining, and cost-effective introductions to the art of MSD and MEL. We have trained over 2,000 people over the years, so if you are looking to boost your skills (or those of your team), please consider applying now.
Now, on with the newsletter…

Good measurement isn’t easy – it requires hard choices
In almost every programme I’ve worked on, expectations of the MEL system and team were wildly disproportionate to their capacity to deliver.
I blame myself, in part. I’ve written my share of guidance notes, and in trying to make MEL sound friendly and approachable, I always emphasised simplicity and straightforwardness. Results chains breezily include ambitious, hard-to-measure goals like “increased trade, increased income, job creation”, while the guidelines are peppered with jaunty ‘easy tips’ and calls to ‘keep it simple’. The implicit message is that anyone can run a good MEL system.
The guidance isn’t wrong. I’ve worked with MEL systems that really are practical and simple – based on rapid, qualitative feedback, small-scale surveys, and a clear focus on a few indicators. A genuinely practical framework would be very intentional about where to put time and effort, perhaps only measuring a small part of the portfolio to a high standard.
But this is not the direction that most MEL frameworks take. Instead, they emphasise interesting but hard-to-measure concepts such as resilience, empowerment, and systemic change. Donors are interested in aggregating attributable change, requiring complex, assumption-heavy methodologies to be consistently applied. The challenge is compounded by the fact that MSD programmes typically comprise bundles of interventions with very different objectives and approaches, each demanding tailored MEL systems.
Three examples illustrate how measurement can sound simple but prove very difficult in practice:
- Gender-lens investment: There’s a lot of interest in supporting women-led enterprises. But what exactly does that mean? Should we look at the management, ownership, or employees of the business? How about a business which is male-owned but benefits women through its operations? Does it matter if the business has a safeguarding policy, or collects gender-disaggregated data?
- Disaggregated data: It is tempting to treat disaggregated data as a kind of stakeholder management tool, allowing us to incorporate the specific policy interests of multiple groups. The upshot is that programmes often end up with multiple forms of disaggregation. Each adds extra survey questions, more columns to the reporting template, and new categories in the database. It sounds simple, but any MEL specialist will appreciate that this adds a lot of cost and complexity to surveys and database management.
- Measuring agricultural income: A problem many readers will be familiar with! Smallholder farmers often don’t keep records (or won’t share them with nosy enumerators). Data collection will often miss sales (if conducted early in the season) or come too late for farmers to remember their costs (if conducted late in the season). Smallholders often farm multiple crops, potentially across several seasons annually, requiring long and repetitive surveys to cover. And that’s before attribution is even considered.
These are not impossible problems to solve, provided the measurement team has time, space, and appropriate expectations. Existing frameworks help, such as the 2X criteria for assessing gender, the Poverty Probability Index for poverty, the Washington Group questions for disability, and multiple guidance notes on agricultural income. But each takes time to design and implement, and time for respondents to answer. A solid 2X assessment can take 1-2 hours per company. The PPI and Washington Group questions are shorter, but if you include both in every survey, you could easily spend half an hour on demographic data for each farmer you interview.
The problem comes when programmes try to do everything. That’s when harried MEL teams throw together research at the last minute, and feel pressured into making claims that do not withstand serious scrutiny. As always, the problem comes down to incentives – programmes feel uncomfortable disagreeing with their donor, and MEL staff and consultants do not feel able to influence the process. It is always easier to say “Of course, we can measure that”.
In fact, I think prioritisation is the most important challenge facing many MEL teams. Some useful tips are to:
- Set tiers of measurement in your portfolio. For example, allocate 25% of your portfolio to ‘tier 1’, with standardised surveys and income measurement. Allocate another 25% to ‘tier 2’, with rapid surveys and key informant interviews. The final 50% can be in ‘tier 3’, reliant on partner reporting alone.
- Explicitly state what you’re not measuring. A list of ‘topics we will and won’t measure’ can be a useful annual exercise to set priorities and remind stakeholders that you can’t measure everything.
- Estimate measurement costs: Budget for new donor requests, and make clear what the trade-off is for additional reporting requirements. Remember, though, that costs go beyond the pure financial costs of data collection – think about the burden on your partners, your target group, and your management team, who must review and use the information. For many people and organisations, time is as important as money.
- Think about decisions: Really think about what decisions you will make with every piece of data you collect. If it’s not going to be useful for decisions, don’t collect it! Often, we collect data because it might be useful, rather than because it definitely will be.
Thanks for reading this far – and if you find this newsletter useful, drop us an email to let us know your thoughts.
Adam and the DevLearn Team
