Praxis Paper 23. Monitoring and Evaluating Capacity Building: Is it really that difficult?
Few doubt the importance of capacity building in the modern era, and few would deny that effective monitoring and evaluation (M&E) is needed to support this work. Nevertheless, the monitoring and evaluation of capacity building is as much a challenge now as it was two decades ago. This paper examines both theory and current practice, and discusses some of the key barriers to progress.
M&E Paper 4. Tracking Progress in Advocacy: Why and How to Monitor and Evaluate Advocacy Projects and Programmes
This paper introduces the scope of, and rationale for, engaging in advocacy work as part of development interventions. It then […]
M&E Paper 3. Developing M&E Systems for Complex Organisations: A Methodology
Almost all development organisations are expected to have systems that enable them to collect, analyse, summarise and use information. However, […]
Praxis Note 49. Just do it: Dealing with the Dilemmas in Monitoring and Evaluating Capacity Building
Monitoring and evaluating capacity building is notoriously difficult. It rarely takes place, partly because stakeholders disagree on fundamental questions of […]
Strengthening Civil Society in Malawi
An external evaluation of the Malawi Programme of INTRAC was commissioned by Cordaid, a long-time donor, and undertaken between November […]
Praxis Paper 21. Participatory Monitoring and Evaluation in Practice: Lessons learnt from Central Asia
This paper records an attempt to develop a fully participatory M&E system, drawing on the experience of a team of INTRAC staff working on a civil society strengthening programme in close collaboration with their partners in the five countries of Central Asia.
Praxis Note 32. Learning and Accountability: A Monitoring & Evaluation Consultant’s Perspective
This note reflects on some of the issues around organisational learning, with a specific focus on how monitoring and evaluation processes can contribute to and support ‘effective’ organisational learning.
Praxis Paper 12. Learning from Capacity Building Practice
This paper provides a reflection on a pilot experience of using the ‘Most Significant Change’ (MSC) methodology to evaluate the […]
Praxis programme: sharing best practice from experience
INTRAC’s Praxis learning programme formally ran between 2003 and 2011 in two phases. It was set up to enable CSOs to […]
OPS 47. Mapping the Terrain: Exploring Participatory Monitoring and Evaluation of Roma Programming in an Enlarged European Union
This paper provides an overview of the monitoring and evaluation strategies adopted at multiple levels of governance by various stakeholders […]
Impact Assessment: a Tool for Evidence-based Programming or for Self-Marketing?
Since the search for ways to measure the impact of social development work gained urgency in the mid-1990s, […]
Evaluating Tsunami Disaster Relief and Rehabilitation
The Indian Ocean tsunami that hit the coastal regions of Sri Lanka on the morning of 26th December 2004 left […]