The Center for Victims of Torture’s New Tactics in Human Rights program has created a flexible methodology for measuring success in advocacy campaigns as part of our efforts to improve the effectiveness of advocacy initiatives. Our tools enable organizations to assess the impact of their work more systematically.
Given the important role of our community in funding advocacy initiatives around the world, particularly in regions where human rights challenges are most acute, we invited its members to provide input early in the process. Their experience and insights were invaluable in shaping our evaluation framework.
In this dialogue, practitioners shared examples and stories from their own work, as well as thoughts on the types of evaluation approaches that have been most effective.
Questions for this Dialogue
- What evaluation approaches have worked well for the organizations you fund that do advocacy work, particularly in the Global South?
- What information from advocacy evaluation has been most useful to you as a funder?
- What have been the particular challenges of doing evaluation in the regions where you support advocacy?
- What are the obstacles that are still making evaluation difficult in these contexts?
In the context of advocacy campaigns, understanding success is often complex. Over five days, participants in the discussion shared their insights and experiences in measuring success, designing evaluations, and overcoming challenges in the field. Here’s a summary of their key points.
Day 1: Introduction to Measuring Success in Advocacy
The discussion began by exploring how success in advocacy is defined and measured. It was agreed that success could be both long-term and incremental. Many participants noted that advocacy outcomes often involve complex changes that require patience and nuanced measurement tools. A key takeaway was that success is not always immediate and can include outcomes like maintaining momentum or keeping a policy from worsening, not just achieving a specific policy change.
Day 2: The Role of Monitoring and Evaluation (M&E)
Day 2 focused on the role of monitoring and evaluation (M&E) in understanding advocacy success. Participants highlighted that while M&E is essential, it is often challenging to measure the intangible aspects of advocacy, like shifts in public opinion or policy discourse. Several approaches were discussed, including the use of evaluation frameworks and indicators that track both outcomes and processes. The importance of embedding learning into the evaluation process was emphasized, as well as the difficulty of assessing success in environments with limited data access.
Day 3: Understanding Contexts and Tailoring Evaluations
The conversation shifted to the challenges of evaluating advocacy campaigns in specific contexts, particularly in the Global South. Many participants shared that factors like political instability, limited resources, and restricted civic space complicate the measurement of success. Collecting data in such contexts can be difficult: limited access to partners and participants makes it challenging to follow up after a campaign ends. It was noted that success in such environments might simply mean preventing further deterioration of the situation.
Day 4: Lessons Learned and Ongoing Challenges
Day 4 focused on lessons learned from previous campaigns and the ongoing challenges in measuring success. Key lessons included the importance of building time for follow-up and reflection into the project timeline. It was noted that many advocacy campaigns require longer periods for assessment, but donors often focus on shorter timelines. The challenge of capturing incremental success, such as community solidarity or public awareness, was discussed. Another challenge was ensuring that evaluations reflect the risks and limitations in closed or shrinking civic spaces.
Day 5: Designing Advocacy Evaluations
The final day explored best practices for designing advocacy evaluations. A consensus emerged that the process should begin by asking, “What do we want to learn?” rather than focusing solely on methodology. The discussion also highlighted the importance of setting clear objectives for campaigns, which then guide the evaluation process. The concept of learning agendas—ongoing questions that guide evaluation and are revisited throughout the project—was praised. It was emphasized that success should be measured at both macro and micro levels, with incremental successes tracked throughout the campaign.
Conclusion: Best Practices for Measuring Advocacy Success
Participants agreed that measuring success in advocacy requires a flexible and context-sensitive approach. Some key takeaways include:
- Start with clear questions and objectives: Define what success means in the context of each campaign.
- Track incremental successes: Celebrate smaller milestones, not just the end goal.
- Utilize learning agendas: Continuously ask questions to guide evaluation and adapt strategies.
- Adapt to the local context: Be mindful of the challenges in measuring success in environments with restricted resources or civic space.
Ultimately, successful advocacy evaluation depends on integrating these practices into the design, implementation, and reflection stages of a campaign.