by Boru Douthwaite
Last week I attended the IRRI-organized workshop to develop an M&E plan for the GRiSP CRP. I went into the meeting fully expecting GRiSP M&E to differ greatly from CPWF M&E, and was happily surprised. In short, the GRiSP M&E proposal mirrors key components of our own system:
- Uses theory of change
- Uses a traffic-light system to evaluate whether projects and the CRP itself are on track
- Relies heavily on use of milestones
- Will use ‘most significant change’ narratives as part of the system
- Will be very cautious about the use of aggregate quantitative indicators, knowing that they can become a ‘rod to beat our own backs’.
My other main take-home message reconfirmed our CPWF experience that M&E is a highly contested area that researchers love to hate. There will always be a tendency to ‘shoot the messenger’ by seeking to undermine an M&E system that provides unwelcome feedback, whether that feedback is valid or not. The challenge we face is distinguishing a legitimate complaint from a self-serving whinge.
The meeting itself
The rationale for the workshop was that developing an M&E system is a Consortium-mandated early step in the implementation of GRiSP, one which may also provide important guidance for other CRPs and for M&E done at the CGIAR system level. It is essential for the CGIAR System that the M&E mechanisms are efficient and transparent, and that they avoid the mistakes of the past. The stated objectives were:
- To explore preliminary results of strategic assessments of the expected economic, poverty, and environmental impacts of GRiSP research in each of its major regions;
- To review experiences to-date and principles of good practice to define an appropriate suite of evaluation and reporting practices for the GRiSP; and,
- To identify appropriate indicators for the research performance of the six Themes of the GRiSP.
The workshop began with Bob Zeigler saying that at all costs we must avoid developing a system that ‘evaluates us to death’. He said that the workshop was not going to develop a one-size-fits-all system for other CRPs, but that it could establish principles to be used in the design of others. Lloyd le Page, the Consortium CEO, attended the workshop, thus giving this aspiration some weight.
Marco Wopereis, the DDG-R for Africa Rice, made a similar point: the CGIAR has been over-evaluated, to the point that evaluation has become a nightmare word. He pleaded for a system that is clear about why we do evaluation and for whom.
David Raitzer made a good presentation that laid out the fundamental concepts and practical challenges in research evaluation. Some of the key evaluation challenges he identified were:
- Serendipity – progress happens through breakthroughs, which cannot be predicted in advance. Plans may change as we do our research.
- Measuring output – there is no accepted standard metric for quantity of production across disciplines or research areas, and quality compounds the problem. Output is discontinuous over time, with lags due to data analysis and publication.
- Risk – only a portion of research yields successful products.
- Outcome and impact assessment can only be relevant to a small portion of the research portfolio, and we don’t know in advance which portion.
- Long and indirect causal chains compared to say, building a bridge. Other influences can act on any step in the chain.
- Determining failure – research has multiple pathways to impact. We cannot conclude that research is not useful by concentrating on its present performance, so it is actually easier to identify success than failure.
- Research evaluation has unique requirements – evaluators must understand biophysical as well as social conditions, and they need access to networks of partners. This is very difficult for independent consultants to achieve.
- Implications – evaluation models used elsewhere have limited applicability.
- Few indicators of long-term outcomes can be reported without dedicated and detailed assessment of causality.
- Our CPWF experience is at odds with the independent-consultant argument (re: Bron MacDonald’s evaluation of PN10).
Achim Dobermann presented a horror story about the M&E expectations placed on scientists and research managers, including CCERs, EPMRs, donor requirements, Medium Term Plans and the Performance Measurement System that used indicators to measure CG Centre performance. He used phrases like “caused outrage”, “indicators took a huge amount of time” and “didn’t make any sense”. This was echoed by others; there was a lot of built-up frustration.