This three-day training event provided a practical introduction to monitoring and evaluation, with a particular focus on language and education programs. As an introduction, it was aimed particularly at those with little or no experience, and it outlined basic tools and techniques that participants could then test further in their own contexts.
Wednesday 25th March:
08:45-09:00 - Sign in and introductions.
09:00-09:30 - Mapping out the training. Where are we going?
09:30-10:30 - Why do M&E? What is it for?
10:30-11:00 - Break
11:00-12:30 - How does Monitoring and Evaluation fit with my program?
12:30-14:00 - Lunch
14:00-15:30 - Principles of good M&E frameworks.
15:30-16:00 - Break
16:00-17:30 - Working with complexity.
17:30-17:45 - Daily debrief
20:00-21:30 - Meet the LEAD Asia team (TBC)
Thursday 26th March:
09:00-10:30 - Introduction to tools and quantitative indicators.
10:30-11:00 - Break
11:00-12:30 - Outcome mapping.
12:30-14:00 - Lunch
14:00-15:30 - Using participatory methods in M&E.
15:30-16:00 - Break
16:00-17:30 - Most significant change.
17:30-17:45 - Daily debrief
Friday 27th March:
09:00-10:30 - Introduction to evaluation.
10:30-11:00 - Break
11:00-12:30 - Practical considerations - designing an M&E system.
12:30-14:00 - Lunch
14:00-15:00 - Review your own current M&E system.
15:00-15:45 - Next steps, key resources.
15:45-16:15 - Closing tea
Day 1: Where are we going?
Follow us in Bangkok as we learn about core principles in Monitoring & Evaluation
- Phil Smith, LEAD Asia's Associate Director for Programs, laid out a 'road map' of the event (25-27 March):
- Starting from the basics: What is M&E? > Indicators & data collection > Participatory M&E > Outcome Mapping for behavioral change > Most Significant Change > How can we commission and use evaluation in our projects?
- Access a full schedule of the event here.
- Sharing Our Expectations of the Event
- One group from the Philippines shared that they were hoping to learn about "M&E in the context of MLE in the Philippines, where classrooms are highly multilingual...and where languages are viewed according to their market value."
- Defining M&E
- Participants shared their emotions about M&E, both negative (it's 'extra work', 'boring', 'a stressful "test"', 'too quantitative and formal', even 'pure agony'!) and positive ('exciting', 'empowering', and an opportunity for 'learning, adapting, adjusting, and celebrating').
- M&E asks 4 simple questions: What happened? > Why? > So what? > Now what?
- Monitoring: "Are we getting any juice?" VS. Evaluation: "Was the juice worth the squeeze?"
- Evaluation typically looks for the 'Big 5' (+ 1): Relevance, Efficiency, Effectiveness, Sustainability, Impact (and Equity).
- If you don't know where you're going, then any road will 'get you there'...
- Matt Wisbey, LEAD Asia Resourcing & Communications Coordinator: "In complex, multi-actor situations, we can look for areas where we're contributing to a change, not just where we're responsible for a change."
- The River of Life tool is a quick, easy tool that allows teams to reflect on their project's past, present, and future.
- How it can be used in M&E: After writing down the project's key activities, teams can compare them with what was planned in their log frames or RBM plans.
- See step-by-step instructions for the River of Life (and other great tools!) here.
- Principles of Good M&E Frameworks
- Foundational principle #1: Does your M&E system cover all levels?
- What are the different levels? 1) Inputs: Our resources, such as HR and finance systems. 2) Outputs: What we are producing. 3) Outcomes: The direct effects of our work. 4) Impact: Are communities changing for the better? 5) Context. 6) Organizational health: How are we working with each other?
- Expected & Unexpected Results
- Remember..."when you're looking for a gorilla, you often miss other unexpected events!"
- Phil shared the story of Pu Tru, a village in northeast Cambodia. By all of his organization's measures, the project was failing, but when villagers were interviewed, they surprisingly attributed many of the positive changes in the community to the project.
- In another case, attendance at the literacy class was declining, and this was seen as negative. But in reality, the literacy class had given students enough skills and confidence to re-enroll in the formal government school, which was positive!
- Think: Are we dealing with systems that are simple, complicated, complex, or chaotic?
- According to researcher Dave Snowden, in simple systems (e.g. measuring our outputs), we should respond with best practice. In complicated systems (e.g. measuring test scores), we respond with good practice or expert analysis. In complex systems (e.g. measuring changing social attitudes in communities), we respond with emergent practice. And in chaotic systems, we respond with novel practice. The difficult part is figuring out which system we are in!
- We in development often work in complicated, complex, or chaotic systems...
- ...so: How complex are the things we are measuring? Are we using the appropriate tools?
Day 2: M&E Tools
An introduction to monitoring tools we can use at different levels in our programs.
- Note: For all presentations, please join the LEAD Community of Practice on Ning. Access resources here.
- Yesterday we looked at the 'Why?' of M&E: principles and theories.
- Today we will look at tools that we can apply more directly in our programs.
- 1. Indicators
- Definition: A specific thing we can measure or observe which tells us about whether our goal is being achieved. These can be direct or proxy (indirect), and should be Specific, Measurable, Attainable, Relevant, and Time-bound (SMART).
- What we need: 1) A data-collection method & plan, 2) A process to analyze the data, and 3) A baseline and a schedule for when to follow up.
- Example: "Proportion of students who, by the end of two grades of primary schooling, demonstrate that they can read and understand the meaning of grade-level text." (USAID)
- What to remember: These are much easier to develop at the lower levels (Activities, Outputs). Once you get to the Outcome level, it's difficult to get specific measures that are meaningful. Indicators deal with simple or complicated systems; after all, we have to already know what we want to measure!
- Asking the right questions is just as important as simply asking questions, as Elaine Vitikainen, SIL MSEAG OrgDev Specialist, shared.
- When collecting data that is subjective (e.g. stories), we can try to triangulate a more objective perspective by asking at least three different sources about the same subject or using at least three different methodologies.
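- To make the indicator machinery above concrete, here is a rough Python sketch of tracking a proportion-type indicator (like the USAID reading example) against a baseline. The function name and all figures are hypothetical illustrations, not data from the training:

```python
# Minimal sketch of tracking a quantitative indicator against a baseline.
# All names and figures here are hypothetical, for illustration only.

def reading_indicator(assessed, proficient):
    """Proportion of assessed students reading grade-level text with understanding."""
    if assessed == 0:
        raise ValueError("no students assessed")
    return proficient / assessed

baseline = reading_indicator(assessed=200, proficient=50)   # measured at project start
midterm  = reading_indicator(assessed=180, proficient=81)   # measured at mid-term follow-up

change = midterm - baseline
print(f"Baseline: {baseline:.0%}, mid-term: {midterm:.0%}, change: {change:+.0%}")
# → Baseline: 25%, mid-term: 45%, change: +20%
```

The baseline and the follow-up schedule are part of the indicator's definition: without them, the mid-term number on its own tells us very little.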
- 2. Using Participatory Methods in M&E
- Definition: Tracking change together with communities. It's a means to help communities reflect on and analyze their challenges.
- Example: Participants practiced using a force-field analysis to monitor progress toward the goal of the CoP event: "M&E Event participants have increased awareness of monitoring and evaluation approaches AND are growing in confidence to learn more so they can use the approaches...."
- This also helped LEAD Asia recognize what was going well and what could be improved: monitoring the CoP event itself!
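- For those who want to experiment, the tallying step of a force-field analysis can be sketched in a few lines of Python. The forces and weights below are invented for illustration, not from the event:

```python
# Sketch of tallying a force-field analysis: driving vs. restraining forces,
# each weighted (say 1-5) by participants. All entries are hypothetical.

driving = {"motivated facilitators": 4, "community interest": 3, "donor support": 2}
restraining = {"staff turnover": 3, "limited materials": 2}

drive = sum(driving.values())
restrain = sum(restraining.values())
print(f"Driving: {drive}, restraining: {restrain}, net: {drive - restrain:+d}")
# → Driving: 9, restraining: 5, net: +4
```

The numbers matter less than the conversation they prompt: repeating the exercise later shows whether forces have strengthened or weakened.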
- 3. Outcome Mapping
- Definition: The Outcome Mapping approach is an alternative to the Logical Framework Approach or to Results-Based Management. It focuses on behavioral changes in those we are trying to influence in our programs. It can help us with 'intermediate outcomes' and behavioral changes, and deals with complex or complicated systems.
- Example: In seeking to improve education for children, we need to influence teachers, parents, local authorities, and school management. In Outcome Mapping, we would track the changes in the behavior of these groups. For instance, we would track whether teachers are using child-friendly methods or whether parents are involved in monitoring the schools.
- See this useful guide to Outcome Mapping for more information.
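- Outcome Mapping tracks graduated 'progress markers' for each group it seeks to influence (its 'boundary partners'). A toy Python sketch of recording which markers have been observed; the partners and markers below are invented examples, not from the training:

```python
# Toy record of Outcome Mapping progress markers per boundary partner.
# Markers are graduated: "expect" (early), "like" (deeper), "love" (transformative).
# All partners and markers below are invented for illustration.

markers = {
    "teachers": {
        "expect": ("attend training sessions", True),
        "like": ("use child-friendly methods in class", True),
        "love": ("mentor other teachers", False),
    },
    "parents": {
        "expect": ("attend school meetings", True),
        "like": ("monitor school activities", False),
        "love": ("advocate for mother-tongue education", False),
    },
}

for partner, levels in markers.items():
    seen = [level for level, (desc, observed) in levels.items() if observed]
    print(f"{partner}: observed markers at levels {seen}")
```

Because the markers describe behavior rather than outputs, they suit the complex, multi-actor situations discussed on Day 1.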
- 4. Story-Based Approaches (Most Significant Change)
- Story-based approaches help us track the wider Context, our Outcomes, and a bit of our Impact. These deal with complex or complicated systems.
- Definition: In Most Significant Change, we collect stories of change from the community and select the story that represents the 'most significant change' in the community. This selection process both provides a means to reduce large amounts of qualitative data and provides an important forum for dialogue about project values.
- Process: 1) Collect stories, 2) Select the story with the most significant change, 3) Document and share the reasons for its significance.
- Example: "Looking back over the last 6 months, what do you think was the most significant change in [ ] as a result of your involvement with [ ] project?"
- See how Xinia Skoropinski, SIL Philippines Associate Director for LEAD, used the Most Significant Change tool to evaluate an MTB-MLE program for Save the Children here.
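- The three MSC steps can also be caricatured in code: collect stories, let a panel select one, then document why. A toy Python sketch with invented stories and votes (the real selection happens through dialogue, not arithmetic):

```python
# Toy sketch of the Most Significant Change selection step: a panel
# discusses collected stories and votes; the chosen story and the reasons
# for choosing it are then documented. All data below is invented.

from collections import Counter

stories = {
    "S1": "A mother began reading to her children in their own language.",
    "S2": "Literacy-class graduates re-enrolled in the government school.",
}
panel_votes = ["S2", "S2", "S1", "S2"]  # one vote per panel member

selected, _ = Counter(panel_votes).most_common(1)[0]
print(f"Selected: {stories[selected]}")
# Step 3: document *why* the panel found this story most significant --
# that discussion is where the dialogue about project values happens.
```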
- That's all for the second day of the LEAD CoP on M&E!
Day 3: Intro to Evaluation
The group hears an overview of evaluation and works on their M&E frameworks.
- Again, please find all resources from this CoP event here on Ning.
- Phil started the group off with a thought-provoking question:
- "Evaluation is the subjective interpretation of data collected during the monitoring of a project." True or false?
- What's involved in an evaluation? Here's an interview with two participants who have been involved with evaluation, Anne Thomas (independent consultant) and Chhejap Lhomi (NELHOS):
- The Evaluator: Anne Thomas has been involved with projects in Papua New Guinea, Laos, and Cambodia, and her focus is on literacy and community development. She does one or two evaluations a year, advising literacy programs. She is familiar with the grassroots level and can speak the language in some cases.
- Q: What is your role in evaluation?
- Anne: Many people have strong negative reactions towards M&E, because the funder may have commissioned an evaluation. So instead we say, "We're going to look at your projects and see how they went and what we should improve for next year's planning." We want our evaluation reports used, so we use a capacity-building, participatory approach.
- We evaluate projects on an operational level (efficiency/effectiveness), a strategic level (relevance/impact/sustainability), and in terms of cross-cutting issues (e.g. gender parity).
- Q: How do you prepare for an evaluation?
- Anne: I go and meet in person the people who want the evaluation. It's very often requested by the funder, and the person who's a team leader might not even know the project very well; they're in the capital city and from the majority culture, and the project is in a remote area.
- Anne is transparent about what she's going to evaluate and discusses what she's going to evaluate in advance with the staff to avoid the issue of "We didn't ask you to evaluate that!"
- Q: What are some challenges you face as an evaluator?
- Anne: The challenge is the people who are removed from the village, who don't speak the local language and say, "This is your subjective opinion. We have a great project." If you have good results, they like your report. As an evaluator, I cannot always give you an A+. If you find things which need to be improved, you might meet resistance. If you have a project with staff who don't speak the local language, that needs to be changed.
- Q: Do you have any recommendations for organizations that are planning for an evaluation?
- Anne: If you don't want to have an evaluation report sitting dusty on the shelf, agree to spend the time and effort to have a participatory evaluation, which is capacity building for the staff, and where it's just a logical next step for the staff to implement the recommendations. It's a mixture of internal and external evaluation; you may need to sell the funder on this.
- The Evaluated: Chhejap Lhomi works for the Nepal Lhomi Society (NELHOS), based in Kathmandu. It was founded by the Lhomi people to serve indigenous communities in Nepal. Their last evaluation was in 2012.
- Q: What preparation was needed? Was the evaluation commissioned by a funder?
- Chhejap: The donor gathered all the stakeholders and conducted the evaluation. We need to know our project plan well. We need to have documentation of data quarterly and yearly, and we need to take care of the logistical arrangements for the evaluators.
- Q: What were some of the challenges you faced?
- Chhejap: It was difficult to find the data, because it was scattered across different folders. The board and the staff need to know why we are doing the evaluation, what its focus is, and what our next plan is.
- Q: Was the evaluation helpful?
- Chhejap: It helped show how we should train our staff, and what should be done on time. Some points were encouraging, other points showed our weaknesses, and this gave us a chance to improve. A challenge was that we had done good work in the field, but we could not show what actually happened.
- Elaine: Remember, the evaluation can be like a small project in itself: there are outcomes that need to be achieved and activities that need to be done and organizational systems that need to be in place. It's not about being judged, it's about working together.
- Impact Evaluations
- Phil: There are four times to do an evaluation: baseline, mid-term, end, and post-evaluation (evaluating sustainability).
- For more information on conducting impact evaluations, especially evaluations relevant for small-scale programs, see 3ieimpact.org and povertyactionlab.org. One helpful paper can be found here. Other sites related to evaluation include the American Evaluation Association and the Canadian Evaluation Society.