Quick reads to trigger your mind
January 2021 - by Hans Posthumus and Aly Miehlbradt.
Let us focus on assessing what changes and why, and use that information for decision making.
Imagine that you find your three-year-old with a proud grin on her face, wielding a marker pen. Her baby sister sits nearby… her face carefully decorated, forehead to chin, with permanent ink! You think: What happened? Why? And then perhaps a few seconds later: What have I learned? What am I going to do?
What happened and why are exactly the questions we need to focus on when we are managing MSD programmes. Then we must determine what we have learned and how we will adapt our programme in response. No need for academic debates around attribution versus contribution.
“Can’t we just say that we contributed, attribution is really hard to do?”
This is the wrong question. What you really want to know is: Did you make a difference? If so, how? Sometimes that is pretty easy to figure out. Sometimes it takes a bit more effort.
An agricultural programme supported one dairy company to improve its cheese-making process by introducing a quality assurance system. Two years later, a few more dairy companies had also installed that system. The programme assessed why and how they did that: one company had bought the same equipment from the same foreign supplier; another company had hired the same consultant to design its system. The programme searched for alternative causes for the change, and found none. There you have your attribution: the change that occurred is due to your intervention.
It can be much more challenging than this. There can be so many direct and indirect factors that, together, caused a change. But if we stop investigating to understand why and how these changes occurred, we haven’t learned, and we don’t know what to do next. So we need to carry out our contribution analyses to understand causality. Contribution analysis helps us to systematically unpack cause and effect by gathering and analysing evidence that both supports and challenges a theory of change.
An export promotion programme reported that their promotional activities contributed to a 5 per cent increase in exports. That’s great, but also meaningless. To what extent was that increase due to their support to a dozen exporters? To what extent was it due to changes in customs procedures? To what extent was it due to changes in market demand by the importing countries? It’s probably due to a combination of the three factors. The programme sought to understand the significance of each of these factors. Many questions guide this investigation. They are not always easy to answer, but they are very important.
“If we can’t prove attribution, we can’t report anything.”
This is more than a missed opportunity; it ignores the programme’s purpose. The push to report only attributable impact numbers can steer some programmes in the wrong direction. That hurts.
A vocational skills development programme supported a few training centres to improve their curricula and teaching methods to meet the demand from the emerging IT sector. Employment rates for graduates were high. But the programme also noted that many graduates, after a few years of employment, founded or joined start-ups - a healthy sign for the IT sector. Was that attributable to their intervention? Probably not. Was it important to understand, important to consider, important to report? Yes, definitely.
Let’s be realistic - systems are complex. Influencing them is an art in itself. Assessing system changes is often not straightforward. And investigating our contribution requires a culture of honest enquiry.
Let’s be pragmatic too. The guidance paper, A Pragmatic Approach to Assessing System Change, builds on what programmes are already doing to assess and understand system change. It provides guidance on how to sensibly assess system changes and how to use that information to adapt strategies and interventions.
It’s not about whether we can attribute changes to our interventions, or whether it’s better to assess our contribution to changes. These are not two different animals. They are part of a continuum. We should aim to assess causality.
But how does a programme report impact that is sometimes attributable and sometimes contributed to?
We need a discussion on what to report, and how, given the different layers of change that programmes aim to influence.
We are curious to hear your suggestions on how donors and programmes should deal with the numbers issue. The floor is all yours.
Defining and assessing market system resilience: demystification is needed
January 2022 - by Hans Posthumus and Nabanita Sen Bekkers.
Einstein said it: “In the midst of every crisis lies great opportunity”. The COVID-19 pandemic has shown us that market system resilience is crucial. Donors and programmes have embarked on developing frameworks to define and assess market system resilience. Great. Frameworks do help. So does using common sense.
Resilience is not a stand-alone topic.
Too often we see vaguely defined system strategies, and programmes that in practice focus on developing a large number of partnerships with companies, without a clear vision or focus on making the system more resilient.
Building resilience is supposed to be part and parcel of your strategy to change systems. Let’s hope that the call by donors to assess market system resilience leads to programmes developing more concrete and practical strategies that help make systems more resilient. Here’s how common sense may help.
Start by answering two simple questions
What are the potential stressors and shocks that may affect the system?
How likely is it that they occur, and how significant might they be?
Answering the first question is best done in a brainstorming setting. We tested this in a workshop and within an hour the group identified a sensible set of stressors and shocks for the system we were targeting. Yes, the first that came to mind was a pandemic. But more stressors and shocks popped up when we were thinking of resilience in the maize system: global price changes for maize; changes in other commodity markets that may affect the maize system (poultry, wheat); and disease outbreaks – just to name a few. It’s not rocket science, but it is system-specific!
Answering the second question might be a bit tricky in practice. How to know if shocks will happen or not? How to know how severe their impact may be?
It’s probably best done by having a good understanding of the system and its context, coupled with lots of reasoning and debating. It certainly should not lead to a mechanical and numerical exercise. Instead, the answers should help you to assess and explore the major ‘gaps’ in the system. Gaps that your strategy should address.
So, the most important step is then to discuss ‘what can we do about it?’
In other words, to translate the ‘resilience gaps’ into ‘intervention areas’ that your system strategy will address. For example: How can the system’s learning ability be improved? How to ensure healthy competition? How to improve coordination among system actors?
For the maize system we looked at various opportunities, including: diversification; improving R&D on disease resilient maize varieties; and improving coordination among public stakeholders to react swiftly to changes. Market system resilience is then embedded in your strategy and not an additional reporting requirement. The trick is, of course, to identify the underlying causes of these gaps, not just the temporary, superficial blockages.
There are emerging frameworks and guidance for defining and assessing market system resilience.
Frameworks aim to help programmes define, assess and increase market system resilience. They draw attention to important aspects of system resilience such as connectivity, collaboration, market powers and learning ability. Yet, as we have seen with other frameworks, the risk is that programmes apply them blindly.
That would be a missed opportunity: resilience is probably one of the most important aspects of market system development. It deserves so much more than ending up as another storybook full of buzz words.
Market system resilience is the responsibility of programme managers. Results measurement experts may help assess changes, but it’s the responsibility of programme managers to address market system resilience for the systems they target. That means they should:
ensure that system strategies target system resilience
develop a diverse range of partnerships that together lead to system changes
address crucial functions and elements of a resilient market system
Please keep your eyes on the system and use your common sense when working on system resilience.