Debate

We need to talk about the purpose of evaluations

The number of evaluations of development assistance has increased sharply in recent years. Yet donors rarely seem to learn anything from them. An open discussion about the purpose of evaluations is now needed, write three researchers who today publish a report on the subject through the Expert Group for Aid Studies (EBA).

"When will we ever learn?" was the title of a report from the think tank Center for Global Development in Washington DC more than a decade ago. During the years before and after, a number of donors have published studies with similar issues. Here in Sweden, the Institute for Evaluation of International Development Cooperation (SADEV) launched in 2008 the study “Does Sida learn more than for?”, As a follow-up to a similar report from the National Audit Office twenty years earlier.

Learning is one of the main purposes of development assistance evaluation. At the same time, the impression is that the need for learning remains acute, despite the abundant flow of reports and documents. Indeed, many researchers and practitioners argue that the development assistance sector is characterized by excessive document production.

Difficult to learn from evaluations

At the heart of this massive production of reports are the central evaluation units of the public aid agencies. Sida, among others, employs experts in evaluation methodology who try to contribute to learning both within and outside the organization. They try to compile insights from the sprawling landscape of evaluation reports coming in from Sida's decentralized evaluations, partner organizations and other donors, and to communicate these insights to Sida's managers, policy makers and an interested public. Nevertheless, learning from these evaluations remains elusive, both within the organizations and in wider society.

As researchers, we were fascinated by this paradox: the persistent combination of limited learning and a sharply growing volume of evaluation reports and documents. Who writes all these evaluation reports, for whom and how? Who reads them, who uses them, and most importantly - who learns from them?

We explored the relationship between learning and accountability

As we explored these questions, we placed learning, one purpose of evaluation, in direct relation to its second main purpose: accountability. These two - accountability and learning - are often referred to by donors as "the dual purpose" or "the twin objectives" of aid evaluation. In our study, we have explored this relationship in both Swedish and Norwegian development assistance evaluation. The Expert Group for Aid Studies, EBA, funded the study. The result is a report that is being launched in Stockholm today.

In our study, we combined methods from several research fields, especially history, rhetoric and political economy. The relationship between accountability and learning in evaluations was studied at three related levels - in the reports themselves, in the evaluation processes, and in the broader systems. At all three levels, we repeatedly encountered tensions and contradictions between accountability and learning, which in practice led to a number of trade-offs. The two purposes give rise to different questions and different methods, and in practice they require widely differing evaluation processes. Even at the very first stage of an evaluation - when the terms of reference that define the evaluation task are formulated - choices are made that favor one purpose over the other.

Recommendations are poorly substantiated

In evaluations, the purpose of accountability is well met: the reports establish what happened and when, relate results to expectations, and assign praise or blame to the actors involved. Nevertheless, it is difficult to move from description to recommendations. We often found that recommendations were poorly substantiated and that "lessons learned" had limited relevance - if they were articulated at all. It is therefore difficult to see how the reports alone could contribute to learning for anyone other than the actors directly involved.

The evaluation processes are often a more important opportunity for learning than the reports themselves. In interviews with senior evaluators, we were repeatedly told that they often had to act as discussion leaders and interpreters. They would actively involve relevant actors throughout the evaluation process, work with the external consultants to ensure that the report was both relevant and of high quality, and market the final report so that it would actually be used. While the evaluators' commitment is necessary for learning, it can also come into conflict with the formal requirements of independence and distance in the evaluation process. Independence and distance are important for accountability and for external confidence in development assistance. At the same time, external consultants - if they act more as auditors than discussion leaders - can create suspicion among administrators, and thus further impair the opportunities for learning.

We must have an open discussion

In conclusion, learning never takes place in a vacuum but is always part of a wider political and organizational context. While evaluation managers may be most preoccupied with learning, other factors work against the learning purpose in favor of accountability. One is the so-called management response system, that is, the formalized system for management's written response to the conclusions and recommendations of an evaluation, usually including a concrete action plan. Other factors are limited resources for evaluation, as well as reward systems favoring a results-oriented approach and other accountability mechanisms. These are clearly important for creating transparency and maintaining public confidence in development aid, but they do not encourage risk-taking - nor do they encourage open, learning-oriented processes.

Based on our results, we end the study with the following main recommendation: everyone who is involved in and discusses aid needs to adjust their expectations of both aid interventions and aid evaluations, and talk openly about the trade-offs between accountability and learning. This means an open discussion about the purpose of evaluation reports, the role of external consultants, and the overall challenges created by results-based management systems. We hope that our study can contribute to this open discussion - starting this afternoon, at EBA's launch of the study in Stockholm.

Hilde Reinertsen
Kristian Bjørkdahl
Desmond McNeill

This is a debate article. The authors are responsible for the analysis and opinions in the text.

Do you also want to write a debate article for Utvecklingsmagasinet? Contact us at opinion@fuf.se
