Debate

Program-related evaluations provide insight and opportunities for learning

Evaluations should focus on what works, why, and what lessons can be learned, rather than just checking what Swedish development assistance achieves. When evaluations are not used for more than listing results, opportunities for learning are lost. So write three employees at Indevelop, who in a new report have compiled lessons from 71 evaluations that the organization has conducted over the past two years.

In recent weeks, the question of the effects of development assistance and how best to measure them has been raised again by, among others, Carl Johan von Seth on DN's editorial page and Göran Hydén in a debate article on Biståndsdebatten.se.

The question of well-founded information on the effects of international development cooperation is important and relevant, not least for the countries that receive aid. Being able to demonstrate results and development effects to their citizens is especially important in countries with widespread poverty.

In the ongoing debate, important questions are asked about resources, roles and methods for evaluating development assistance. Often, however, the starting point is one of control, i.e. seeing what has been achieved with Swedish tax funds. We believe greater emphasis should be placed on bringing the discussion closer to the implementation of development assistance and on asking what works, why, and what we can learn. In fact, between 50 and 80 evaluations of Sida-funded projects and programs are carried out annually, sometimes also larger reviews and country strategy evaluations.

We at Indevelop have, in a report, compiled lessons from 71 evaluations of Swedish development assistance initiatives carried out by Indevelop over the past two years, 2011-2013.

The purpose of the report is to compile conclusions and lessons with relevance for Sida and its partners in order to improve the governance and results of Swedish development cooperation.

There is much to be said about the effects of development aid and the governance of Swedish development cooperation. A large part of Swedish development assistance is about strengthening and building the long-term capacity of actors (individuals, organizations and institutions) in developing countries. The aim is not only to solve problems quickly (such as distributing mosquito nets) but to build the countries' own capacity to solve development problems. For those of us who evaluate development assistance, assessing goal fulfillment therefore becomes a major challenge when we look at time-limited programs whose long-term effects only become visible after the program has ended. As important as asking what results have been achieved is asking how individuals and organizations can implement the efforts more efficiently and whether they are doing the right things, not just whether they are doing things right.

An important conclusion from our compilation is that evaluations are perceived as meaningful when the evaluation process is approached as an opportunity to learn and reflect. In many cases, implementing organizations have a good picture of what results they have or have not achieved, even if they are not always good at communicating this. On the other hand, we see that there are often weaknesses, both within implementing organizations and among donors, when it comes to reflecting on what works, what does not, and how to do things differently in order to achieve set goals.

It is important to see both the benefits and the limitations of evaluations. However, we regard using evaluations as nothing more than a tool for listing the measurable results of development assistance initiatives as a missed opportunity for learning. Evaluations can provide well-founded and independent information about the activities and can, together with other forms of information (not least the organizations' own follow-up), give a picture of developments in the complex contexts in which development assistance is often carried out.

To prevent evaluations from gathering dust on bookshelves, an organizational culture should be encouraged in which evaluations, together with the organizations' reporting and follow-up of efforts, form the basis for critical reflection on results achieved, or not achieved, and on what can be learned from this. Used correctly, we see a role here for program-related evaluation.

Anna Liljelund Hedqvist

Jessica Rothman

Ian Christoplos

Indevelop

This is a debate article. The authors are responsible for the analysis and opinions in the text.

Do you also want to write a debate article for Utvecklingsmagasinet? Contact us at opinion@fuf.se
