A focus on results in international development cooperation does not necessarily mean better results. On the contrary, it can be counterproductive. Those of us who work with these issues have a duty to inform decision-makers about this, write Janet Vähämäki, Anna Liljelund Hedqvist, Jessica Rothman, Ian Christoplos and Therese Brolin.
We are a group of researchers, consultants and practitioners who attended a two-day conference entitled 'Politics of Evidence' at the Institute of Development Studies (IDS) in Brighton at the end of April. The conference was organized by 'The Big Push Forward', an informal network of practitioners and researchers that aims to stimulate discussion and debate about the increasingly dominant 'results agenda' of recent years, i.e. the growing emphasis on showing measurable results and evidence in international development cooperation. Since these issues are highly topical in Sweden as well, we would like to offer some brief reflections from the conference and link them to the ongoing debate in Sweden.
During the conference, participants discussed whether the increasingly strict requirement to show quantitative results, including the associated tools (logframes, results matrices, value-for-money analyses, etc.), really contributes to better results in development assistance, or whether it can even have the opposite effect and create distorted incentives. Fears were raised, for example, that donors will only fund activities that can be measured with simple methods and avoid funding more complex, long-term development projects. Various actors testified to both positive and negative consequences of the results agenda. Several said that they spend more and more time filling in matrices and collecting data, but that this does not give a better picture of the results they achieve, and that it leaves them less time to pursue their core questions and to try to understand what actually happens. Others said that an increased focus on results has helped organizations become better at focusing their activities on contributing to changes in behaviors and attitudes.
At the conference, strategies and experiences were discussed for avoiding the negative consequences of the results agenda and strengthening the positive ones. One conclusion from the discussion was that no matter where one sits in the governance chain, we all have a responsibility to inform those who govern, and the outside world, when the requirements for reporting results risk crowding out the possibilities of actually achieving results. It was said that the problem is often donors' ignorance of what certain requirements lead to in practice, and that a more open dialogue about experiences of results monitoring and lessons learned from missed results would benefit all parties. It was also pointed out that in a relationship between donor and recipient there is almost always room for negotiation: it is not only the recipient who wants money for worthwhile development projects; the donor, of course, also has an interest in financing something that leads to development. It is therefore in both parties' interest that a balance is struck, so that the reporting and measurement that is done can also be used for dialogue and improvement, rather than becoming a burden on the implementation of the work.
Linked to this, conclusions were drawn about the importance of clarifying the purposes for which certain results information needs to be obtained. It was pointed out that it is of course important to be able to show taxpayers in donor countries what results development assistance has achieved, but that this communication of results does not always have to be done by presenting exact quantifiable figures attributed to donors' money (e.g. x Swedish kronor gave x number of poor people access to clean water). Experiences from UK aid shared during the conference showed, for example, that this type of 'standard indicator' with quantifiable data can lead to choosing simpler, more measurable goals rather than contributing to long-term and hard-to-measure societal changes (e.g. that people gain a better ability to manage their water resources sustainably). Representatives of NGOs in the UK testified, for instance, that the British DFID has withdrawn from major jointly funded programs in developing countries and instead started supporting its own projects, where it is easier to demonstrate the direct effects of its own aid. One recommendation therefore concerned the importance of communication about development assistance also addressing the complexity of the work: that precision in measurable figures is not always possible, and that communication can also take other forms, such as stories from people participating in development projects or qualitative evaluations.
During the conference, a distinction was made between the 'results agenda' and the 'evidence agenda', where the latter refers to all the structures and tools that have been developed to support the need for evidence, i.e. well-founded information on what actually works in the long run. Since, for example, DFID's assessments always require an 'evidence analysis', it systematically draws on a number of different analysis, audit and evaluation institutions in its assessments. The discussion in Sweden has so far focused mostly on the question 'what have we achieved?' rather than 'what works, and why?', i.e. more on obtaining results information for communication and control purposes than for learning and improved analysis.
At the same time, producing evidence has its own difficulties. A British NGO highlighted this from its own work: over a period of time, the organization had carried out four different results analyses using different methods, and all had come to different conclusions about what the organization should invest in in the future. In this particular case, neither the organization nor the donor could ultimately use any of the conclusions. One lesson from this example was the importance of always asking why a particular study or analysis needs to be done, or why a certain type of results information needs to be gathered, and how the conclusions and information can be expected to lead to improvements in the work.
Another important lesson from the discussion in Brighton was that the 'results agenda' must become more about learning and reflecting on what works in practice where development assistance is implemented. The challenge here is to ensure that the search for information to feed into control instruments, and the need to demonstrate results to governments in donor countries, do not take over from the necessary discussion of what works in a particular context. One conclusion we therefore drew is that all of us who are in some way involved in the implementation of development assistance have a responsibility to make the discussion on the results of development assistance something meaningful that contributes to the overall goal of development cooperation: improving living conditions for poor people.
Janet Vähämäki, PhD student SCORE and Department of Business Administration, Stockholm University
Anna Liljelund Hedqvist, results advisor, Indevelop
Jessica Rothman, Evaluation Advisor, Indevelop
Ian Christoplos, evaluator, Indevelop and researcher at the Danish Institute for International Studies, DIIS
Therese Brolin, PhD student, Department of Economics and Society, Department of Cultural Geography, University of Gothenburg