How much is spent on programmes that use the market systems development (MSD) approach? A quick guess (based on the BEAM Exchange programme index) is about a billion dollars over the last five years. Enough money to raise reasonable questions about the effectiveness, value for money, and impact of the approach.

The BEAM Exchange led a response to these questions, putting together an evidence map of evaluations and assessments of MSD programmes which meet quality criteria. They’ve quickly amassed a considerable collection, covering a whole range of sectors, geographies, and modalities. I recently completed a review of this evidence for the BEAM Exchange, with Kevin Conroy, which found some great examples of success in MSD – though the finding comes with a number of caveats.

Part of the challenge comes from the nature of MSD. In common with other approaches rooted in systems thinking, MSD is not easy to evaluate. Technical problems bedevil any attempt to establish attribution of cause to effect, the adaptive nature of interventions means that baselines swiftly become redundant, and no-one really has a clear idea how systemic change should be assessed.

But, more importantly, the Evidence Map is a compilation of individual project evaluations and reviews. These are commissioned by projects that want to show their impact, or donors who want to hold their implementers to account. All valid reasons – but it means they typically focus narrowly on project achievements, rather than on whether and how the approach facilitated this improvement.

What would a more systemic research agenda look like? Firstly, we should unpick what we mean by ‘market systems development’. MSD can mean different things to different people – and includes multiple elements, some controversial, some not. Conducting analysis, considering incentives, and being flexible, for example, seem sufficiently commonsensical that there is no need to research their effectiveness. A more interesting question is whether facilitation is always the right approach. How plausible is the assumption that win-win models can be found, whereby market actors change their behaviour in a way that benefits the poor? How does a facilitative approach compare to investment finance, or to challenge funds?

Secondly, research should be comparative, looking between MSD programmes (and non-MSD programmes) to assess how changes in programme modality affect implementation and effectiveness. If we just look at a single programme, the amount of variation in approaches is limited, which makes it much harder to produce data on how these approaches work or don’t work. Similarly, we need more longitudinal research, working alongside programmes to understand how the approaches change over time.

Thirdly, more research should look beyond the project lens, examining the market system itself or conducting ex-post evaluations. If we think the market system is the key unit of analysis, we should put our research money where our mouth is. How about an intensive study of a market system, tracing back changes over time to understand whether and how development programmes affected it? Take vegetable seeds in Bangladesh, for example, or the tractor market in Nigeria. These are two high-profile cases where a single project aimed to influence change, but is this visible at the market level? A linked approach would be to invest more in ex-post evaluations, which return after a project has finished – preferably 5-10 years later – to see what has been sustained.

There is already plenty of evidence that market systems approaches can work. Moving to an understanding of how the approach influences successes, and what elements should be further emphasised, requires a more strategically chosen research agenda.