Day 4 - April 26 - Lessons and Challenges


Welcome to Thursday! For today's questions, I would love answers on both. I am sure you all have some thoughts on these based on your own experiences. Thanks!

1. Would you tell us about a lesson you have learned while trying to measure the success of advocacy campaigns?

2. What are some challenges that make it especially difficult to measure success in the particular contexts where you fund work in the Global South?

Response - Day 4

Apologies again for joining late! I figure I'll start backwards and respond to this then go back and do the other days. 

My response to your questions:

1) There are many lessons learned, but a few key ones for me are:

a) Try to build in time/resources for some type of measurement or follow-up at least 6 months after the end of the campaign. We typically deal with public donor grants, which have set timelines for the work to be conducted. We have had some success in explaining to our funders that while we will conduct the campaign for, say, 8 months, we'd like the grant to run for 18 months so that we can do follow-up data collection.

b) Require advocacy implementation teams to develop a strategy at the outset. With more traditional development projects our teams develop logframes or results frameworks. I have not found those to be as useful for advocacy work; however, there are ways to lay out the expected outcomes or results of advocacy work. Too often I hear that it's impossible because advocacy work is not linear, it's unpredictable, and so on, but I think tools like Julia Coffman's advocacy strategy matrix can help teams lay out their expected results, which in turn helps develop a measurement system. I always tell teams that the strategy is a living document that can and should change as you learn more and start implementing, but we need to know what we are trying to achieve and why we are using the tactics we are using in order to understand whether or not they work.

2) Challenges to measuring success -- We have difficulty tracking down many of our partners, participants, etc. after the work is completed. We do a lot of work in environments with closed or closing civic space, so getting people to fill out surveys or talk to you (in person or over the phone) AND give honest information is incredibly challenging. Another challenge is measuring or demonstrating success when, in many cases, success may be that the situation simply didn't get worse. It's difficult to capture that accurately and, in many cases, it's not what donors want to hear. Finally, the long-term nature of change, which others have described in previous days' comments, is an ongoing challenge to measuring success.

-Kelly

Kelly, these are such great

Kelly, these are such great thoughts!

I am so glad that you have been able to communicate the benefit of a longer timeline to funders; the timeline is a complaint we often hear about evaluation and grants. Do you have any tips on how to communicate that?

New Tactics here at CVT has also come across the need for strategic planning and the linking of tactics to goals. This is currently built into their training, and it is the area where evaluation practices are best embedded. It definitely makes sense to have evaluation be part of the planning up front instead of an afterthought (as it can often be in grant-driven evaluation).

I like your outline of challenges as well. Outcomes being long-term means that following up within a grant cycle is not possible. Getting hold of people is of course difficult, especially given the potential risk to the safety of those involved in activism. And yes, the outcome of keeping a situation from getting worse might mean quite a bit to the lives of those affected, but it is VERY difficult for funders to conceive of as an outcome.

Also, Julia Coffman's work has come up many times in our research. Is this the matrix that you are referring to? (here cited in a CEI publication)

http://www.evaluationinnovation.org/sites/default/files/Adocacy%20Strategy%20Framework.pdf

Response

Re: Communicating to funders -- we have had the most success with funders who value M&E and learning, as we appeal to their desire to learn more and to communicate outcomes to their leadership teams (and to Congress, in the case of USG funders).

And, yes, that is the link to the matrix we use. 

Campaign timeframe

Hey, can you give an example or two of campaigns that are implemented in 8 months? My experience has mainly been with longer-term efforts.

And I completely agree with your point b, re: upfront planning. An evaluation of work we supported to advocate for a mandatory helmet-use policy in Vietnam, and an evaluation in a very different context (health care reform here in the US), both concluded that an important contribution to the campaigns' success was the upfront planning phase.

I think this relates to another challenge: advocates are caught up in the day-to-day campaign and do not take time to reflect on what has happened, where they are, and whether adaptations would be useful. Some of what we discussed as good practice for our own portfolio evaluations could apply to campaigns as well. Like many things, I think it needs a point person: someone in the campaign, or embedded in it, to ensure that reflection and learning happen.


Yes, the role of reflection

Yes, the role of reflection, and directing that reflection toward action, is very difficult to embed among activists in a campaign. Because activists are drawn toward immediate action, an evaluator (like the embedded developmental evaluator mentioned in later comments) is often the only person available and in a position to facilitate learning and data collection.

I actually have a question about the question

Measuring the success of a campaign seems the easy part to me, i.e., did the policy change? That is usually documented in a fairly straightforward way. The harder part for me has always been how to document progress along the way. And, related to the outcome: how exactly did your own role contribute, given that advocacy is often a collective effort?

Agree and...

In response to your last two comments: I agree that having someone embedded to ensure knowledge is utilized is great. In my work prior to Freedom House, we did this through a pilot application of Michael Quinn Patton's Developmental Evaluation (DE) approach, which effectively embeds an evaluator into an implementation team. The main challenge is that this can get costly. However, UNICEF is piloting a number of DE activities in difficult environments, so they may be interesting to hear from.

If you can't embed an evaluator, contribution analysis (forgive me if this was mentioned in earlier comments) could be a useful evaluative tool. In fact, I just received a notice that the Modernizing Foreign Assistance Network (MFAN) recently hired evaluators to use this approach. They wrote: "after eight years of working to influence U.S. foreign assistance policy, MFAN commissioned an evaluation to better understand the difference it was making. As part of that evaluation, MFAN sought to examine its contribution to four key policies. To do this, the evaluation team suggested using contribution analysis: a theory-based approach to causal analysis that examines the many factors that influence policy change."

In response to your first comment asking what 8-month advocacy projects may look like -- I was thinking of our "Lifeline: the Embattled CSO Assistance Fund" program, which provides rapid-response advocacy grants to give local CSOs the resources to push back against closures of civic space as they arise. Lifeline advocacy grants are designed to be highly flexible and can support a wide variety of activities, such as: Community Mobilization; Policy and Legal Advocacy; Civil Society Coalition Building; Strategic Litigation; Awareness Raising Campaigns; Advocacy Capacity Building; and Security and Protection Training. However, they tend to be shorter-term grants that focus on a "window of opportunity" to act. I have some examples of how we monitor this (although I'm always looking to improve it!) that I'll add to the relevant section, as I'm assuming that's easier for Kirsten and her team to keep track of.

Thanks!

Thank you Kelly, feel free to

Thank you, Kelly. Feel free to put comments wherever they come up; we will be looking back through the whole transcript of this conversation. I would also love to hear more examples of monitoring and evaluation for the Lifeline project.

Contribution analysis has not been discussed much here yet, and while the theory seems sound and very interesting, I am wondering: do any of you have experience implementing it?

Also, does anyone have examples of someone within a campaign taking on the role of an internal developmental evaluator?

I see your point Jackie, that

I see your point, Jackie, that the term "success" implies that the policy has actually changed, while progress along the way is what needs to be measured. In my question, I meant success to encompass all of those broader steps of progress and other successes along the way. Success, I think, can also be an unintended outcome, as in some examples outlined earlier. While we hope that group solidarity moves the policy agenda along, perhaps the solidarity of two formerly separate groups is a success in itself, and will lead in other positive directions.

Successes, challenges, and setbacks

First, measuring challenges is also important. We measure both so that we can learn from the successes and the setbacks.

While my work to date has focused primarily on our grantmaking work, which is the bulk of our programming, and on our work at a strategy level, the lessons there are applicable. The change we seek is, realistically, only achievable in the long term; it is not linear, and the pathway to change is often unpredictable. We contribute to the change, but there are many other factors to consider. There are incremental successes, but in some cases the incremental setbacks are just as numerous. Incremental progress or regress is most often nuanced and qualitative in nature. We reflect internally, but one challenge for me is the triangulation of perspectives: my access to grantee and/or community-level perceptions of change is limited. These factors all present challenges to measurement.

Sorry for my late and brief contribution!