There has long been a debate over the relative merits of learning and (usually upwards) accountability within organisational monitoring and evaluation (M&E) systems. I believe that the debate itself is false, for two main reasons. First, the debate centres on concepts discussed in the head offices of Northern institutions, whereas at field level the primary purpose of M&E is almost always basic project or programme management. But second, and perhaps more important, the debate has been couched in unequal terms. Accountability, especially to donors and funders, is often seen as an end in itself, whilst learning is seen as useless unless it translates into improvement, whether within the agency conducting the M&E or within other agencies.
Partly as a result, although many organisations pay lip service to learning, my experience is that most M&E systems, however well designed, slowly but surely gravitate towards accountability, with real learning sidelined or forgotten. This is not necessarily for lack of high ideals or good intentions on the part of those who design and maintain the systems; rather, it reflects the pressures exerted on M&E systems by international structures and systems.
The spin chain
The diagrams below illustrate this, and show a standard typology of a chain of organisations stretching between communities, CBOs, Southern and Northern NGOs, donors, the government and the public. In an ideal world all reporting and communications going up and down the chain would be characterised by honesty and free and transparent exchange of information and analyses. However, at the moment, rightly or wrongly, there is a perception that the public cannot accept the reality of development results, and that we need to ‘market’ results to the public in an appropriate way (see diagram on the left).
But the reality is that we have allowed the fear of public perception to drive us down the spin chain to the point where, in the worst case scenario, dishonesty spreads throughout the whole system – with everybody hiding the truth from each other, and organisations rewarded for the extent to which they are able to ‘market’ their product rather than the extent to which they actually deliver results or learn and improve. This results in the ‘spinning’ of information – rarely descending to deliberate falsification of results, but frequently involving the accentuation of good results, the hiding of bad results, and the use of anecdotal or incomplete evidence to justify claims.
Does this matter? Yes and no. If the purpose of M&E is to maintain the status quo, to keep aid flows flowing, to ensure that resources continue to be devoted to international development – a laudable and worthy goal – then the answer is probably no. But if we are serious about using M&E to enhance learning in order to improve, then the answer is emphatically yes. In this case we need to question seriously how we can make our M&E systems more focused on learning and improvement. Perhaps the first step in this process is to understand the key elements that appear in M&E systems designed to enhance learning and improvement, compared to systems primarily designed or implemented for other purposes.