Day 5 - April 27 - Designing Advocacy Evaluations

Day 5 - April 27 - Designing Advocacy Evaluations
Happy Friday! Today I am hoping to hear your suggestions for measuring the success of advocacy campaigns.
 

1. If you were designing your own ideal way to measure the success of campaigns, what would that look like?

  • What would your measures of success be?
  • How would you find out if you achieved success?
  • What would you need in order to measure success this way?

2. What do you think is most important for CVT to consider when designing an evaluation approach for activists?

The starting point

Hello from the train to DC. I will check in this afternoon, but I have one immediate reaction. I believe evaluation discussions often start with measurement and methodology. That does not make sense to me. I think we should always start with the questions we want to explore: What do we want to learn? And “success” is too vague to help design a useful evaluation. For example, “did the campaign engage (constituency/target group)?” could be an indicator of success if that is an objective. Then you could develop evaluation questions and a learning agenda, e.g., “What outreach strategies were most effective in engaging (constituency/target group)?” I’d envision a campaign evaluation developed around a set of key overall questions and then a subset of learning agenda questions for each.

The second starting-point component for me would be Julia Coffman’s matrix, which we have mentioned a couple of times and which is in the CEI publications I highlighted on day 1.

Agree

I agree with Jackie. The specifics are important. What does success look like for those who are designing the campaign? What are the goals of the campaign? Is the 'how' of the campaign important? A methodology is selected based on the question(s) and the resources available.

Again, apologies for my brevity. It has been a busy week. That said, this is a great discussion and I appreciate being included. Will the discussion continue in some way? Will we have access to the conversation next week?

Response- Day 5

I agree with Jackie's comments and would add that when designing evaluations for activists, I would consider how the evaluation or evaluative work can support any monitoring that may be ongoing. Monitoring and evaluation are complementary, but they are distinct and can be used for different things; sometimes that gets muddled in our work. As Jackie said, evaluation helps answer the "why" questions that arise from monitoring data.

In terms of measuring success, I would add that incremental success is important to measure. Rather than just waiting for a big policy change, it is important to capture and celebrate the milestones along the way. I think the Coffman matrix, along with logical frameworks, helps us outline those incremental steps.

This may not be completely relevant to what you are looking at, but I thought I'd share it since learning agendas have been raised in a few discussions. I love the idea of learning agendas, and we are trying to use them more to guide our evaluative work. We are utilizing a learning agenda approach to evaluation in a large human rights mechanism that we implement through USAID. The mechanism is essentially an umbrella grant that supports up to 15-20 human rights projects around the world. The projects range from 2 to 5 years and, while not exclusively advocacy based, they do have large advocacy components.

We created a results framework (RF), and all projects funded under the mechanism must contribute to one or more of the objectives and intermediate results laid out in the RF. Each project then creates its own customized strategy, showing which greater goals/objectives it is contributing towards. We also have outcome and some output-level performance indicators that relate to the RF, so each project is collecting a few of these common indicators and we have some comparable indicators across projects. Then we developed learning questions (or evaluation questions) for each of the 4 objectives in the RF. An example of a question is: How well does external pressure from civil society organizations, impact litigation, media outlets, and citizen participation improve accountability and transparency compared to internal reforms within judicial and political institutions? And in which combination and sequence are such initiatives most effective? Every project must include the relevant learning question(s) in its project-level evaluations or address them in an evaluative case study over the course of implementation.

This has helped us capture qualitative and quantitative data and has helped us communicate what these smaller projects are working towards or doing at a bigger level. In addition, as we gather data from the projects on these evaluative questions, we are holding learning events with activists and other CSOs in the field, showing them the data and fact-checking it with them: Does this ring true to you? Is this your experience? If not, what is your experience? The hope is that we can eventually build information about what works in what types of contexts.

Again, this example is more about rolling up learning and success at a macro level, and I'm realizing it's difficult to describe, so if you have more questions or anything doesn't make sense, just let me know.

These are such interesting

These are such interesting examples of how this works in practice! I love the idea of learning questions across projects, but with some leeway to answer the question in different ways. Also, I love the process of checking back with other CSOs to validate and expand the information.