18 February 2019

What have we learned from eight impact assessments in market systems development?

Image credit: Chris Lysy at Fresh Spectrum, https://freshspectrum.com/the-you-sir-data-interface/

For many programmes, running an impact assessment is like going to the doctor for an embarrassing medical condition. A patient with such a condition may prefer not to think about the issue until the last moment. Eventually the pain is too much to ignore, so the patient looks for the cheapest and quickest solution available. They get a cursory inspection from the doctor, try to ignore any bad news that comes out of it, and make sure that nobody ever finds out about the health condition again.

I’m sure many of you recognise the analogy! Some programmes that need an impact assessment will put it off for as long as possible. Pressure from donors, impatient head offices, or hapless M&E staff builds, until eventually an assessment is commissioned as quickly as possible and awarded to the cheapest consultant around. Methodologies are vague, negative results are swiftly buried, and nobody ever hears of the impact assessment again.

Fortunately for the sector, not all programmes are like this. We were recently commissioned to support a series of impact assessments for a market systems programme that aimed to create jobs and increase incomes. Our work covered the initial design and implementation of the assessments, as well as the analysis of the data. This blog reflects on that intensive work and draws out four lessons for the future.

Lesson One: Know Yourself!

Good monitoring data is the cornerstone of a good impact assessment. With regular field trips, staff who know the interventions inside-out, and a regular flow of information from the partners, you can do the following:

  • Prioritise. Most programmes have more interventions than they have budget to assess. So how do you choose which are deserving of a full impact assessment? Where should you pay extra for a control group, expand your sample, and ensure that you cover sub-groups? You generally want to spend more money on the most successful interventions – and monitoring data is needed to show you which they are.
  • Sample. In most interventions, we had robust data showing who was benefitting, and where. This was critical for sampling. It enabled us to stratify the sample appropriately, ensuring that it was representative of the overall population (a minimal sampling sketch follows this list). It helped select sub-groups that we needed to pay extra attention to, because the impacts might be different for them. In a few interventions where this data was missing, we ended up getting our sample badly wrong – for example, by selecting a quarter of our sample from a district where the intervention was barely functioning.
  • Design questionnaires. We often wanted to ask qualitative questions with coded answers. For example, we would ask ‘Why did you start using crop protection inputs?’ and ask the enumerator to select from a pre-specified set of answers. Where the programme staff knew the intervention well, we were able to code potential answers to cover almost all eventualities. Where they didn’t, many respondents either didn’t answer properly or selected an ‘other’ response, which took a lot more time and effort in the analysis.
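To make the sampling point concrete, here is a minimal sketch of proportional stratified sampling from a beneficiary register. The file name, column name and sample size are hypothetical, and this is an illustration of the approach rather than the code we actually used.

```python
# Minimal sketch: proportional stratified sampling from monitoring data.
# 'beneficiary_register.csv', the 'district' column and SAMPLE_SIZE are hypothetical.
import pandas as pd

beneficiaries = pd.read_csv("beneficiary_register.csv")  # one row per beneficiary
SAMPLE_SIZE = 400

# Allocate interviews across districts in proportion to where beneficiaries actually are,
# so no district ends up badly over- or under-represented in the sample.
shares = beneficiaries["district"].value_counts(normalize=True)
allocation = (shares * SAMPLE_SIZE).round().astype(int)

sample = pd.concat([
    beneficiaries[beneficiaries["district"] == district].sample(n=n, random_state=42)
    for district, n in allocation.items()
])
print(sample["district"].value_counts())
```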

Lesson Two: Be Realistic!

The most disappointing impact assessment for me was one looking at the impact of a training programme on farmers. Monitoring data suggested significant outreach, and qualitative data from beneficiaries suggested that they appreciated the intervention. Yet the impact assessment found no measurable change in outcome or impact indicators. There were some promising signs that farmer practices had improved, but we were not able to assign a monetary benefit to this. Partly this was because of the inherent difficulty of measuring changes in income from farming. It was also because the intervention provided only a limited amount of training. The trainees, who mostly worked as government-funded extension workers, received just a few hours of training, which was unlikely to make a huge difference to the farmers they worked with. Our expectations of significant impact-level change were ultimately unrealistic. Assessing this more carefully in the design phase could have helped us prioritise other interventions for impact assessment.

Lesson Three: Control Yourself!

A continual debate in the impact assessment planning sessions was whether to use a control group or not. In the event, we used a control group in four cases, and went without one in another four, relying instead on qualitative information to help us establish attribution.

So how did we make this decision? We took a few factors into account:

  • How reliable is qualitative information? We tried to think about how useful qualitative information would be in showing us the key links in the causal chain. If a farmer could be expected to give a fair judgement on causality, we relied on qualitative information rather than a control group.
  • How good is the existing evidence base? Where there was a large existing evidence base, we tried to rely on that rather than use a control group. In practice, unfortunately, we did not find any cases where there was already strong secondary evidence.
  • How much time and money did we have available? Control groups immediately double the sample size, the cost of data collection, and the complexity of the analysis (the rough power calculation after this list gives a sense of the scale). Given the time constraints (discussed below), we had to prioritise.
  • Can a control group be found? We worked in a very diverse country, which made it extremely challenging to find an appropriate control group for any intervention.
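To give a sense of the time-and-money trade-off, here is a rough power calculation using statsmodels. The effect size, power and significance level are illustrative assumptions, not figures from our assessments; the point is simply that testing against a control group means interviewing roughly twice as many people for the same precision.

```python
# Rough sketch: how many interviews does a treatment-vs-control comparison need?
# Effect size (0.3), power (0.8) and alpha (0.05) are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(effect_size=0.3, power=0.8, alpha=0.05)

print(f"Interviews per arm: ~{round(n_per_arm)}")
print(f"Total with a control group: ~{2 * round(n_per_arm)}")
```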

One continually challenging element in the analysis was the differences in baseline results between the control group and the treatment group. This reflected the fact that our control groups were imperfect – we were not able to find sufficiently similar farmers. It was challenging, however, to draw the line between groups that were so dissimilar that they could not be analysed, and those that were slightly different but still allowed for meaningful analysis.
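One common way to handle a fixed baseline gap is to check balance at baseline and then compare changes over time rather than endline levels, in the spirit of a simple difference-in-differences. The sketch below illustrates that idea; the data frame and column names are hypothetical, it rests on the usual parallel-trends assumption, and it is not our actual analysis code.

```python
# Minimal sketch: (a) a baseline balance check, and (b) a simple
# difference-in-differences estimate to net out a fixed baseline gap.
# 'survey_data.csv' and the column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_data.csv")  # one row per farmer
treat = df[df["group"] == "treatment"]
control = df[df["group"] == "control"]

# (a) Balance check: were baseline incomes already different between the groups?
t_stat, p_value = stats.ttest_ind(
    treat["income_baseline"], control["income_baseline"], equal_var=False
)
print(f"Baseline difference p-value: {p_value:.3f}")

# (b) Difference-in-differences: compare the *change* in income in each group,
# which removes a constant baseline gap (under the parallel-trends assumption).
did = (
    (treat["income_endline"].mean() - treat["income_baseline"].mean())
    - (control["income_endline"].mean() - control["income_baseline"].mean())
)
print(f"Difference-in-differences estimate of the income change: {did:.1f}")
```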

Lesson Four: Take Your Time!

There’s an old joke that you can have quick food, cheap food, or good food – but you can only pick two out of the three options.

Impact assessments are the same. You can have good impact assessments, quick ones, or cheap ones, but it is very difficult to get all three. If you try to collect and analyse data quickly, you tend to pay a much higher price for the data collection. Moreover, having limited time to understand the intervention, design the questionnaires and do the analysis increases the risk that you miss an important factor, lowering the quality and potentially meaning you have to collect the data all over again.

With a strict reporting deadline, we had to carry out the study design and data collection quickly, and the analysis was rapid and focused. More time would have improved the quality of the impact assessments and the usefulness of the conclusions. Fortunately, it is not too late: now that the data has been collected, more analysis can be done by the programme team at leisure.

Conclusions

Strong impact assessments are an essential tool for learning about the changes that a programme creates. Done badly, however, they can be a spectacular waste of money and management time. If you really understand your programme, are realistic about what you might achieve, think carefully about control groups, and leave plenty of time, then you should be in a good position to conduct strong impact assessments.