
Learning from the Literature: A First Step Toward Developing a Sector Driven Evaluation Strategy

We’ve been amazed by the number of people who are eager to talk evaluation with us. We think this interest arises in part because people understand that evaluation has huge potential to make a difference in the sector. Many of us see evaluation as a way to better explain the value of the work we do. At the same time, we know that people are keen to talk about evaluation because they are frustrated. Although the investment in evaluation has been large, the return has often not lived up to expectations. Many evaluation reports sit on shelves or are sent off to government departments or other funders never to be heard from again.

Before we got too far in our work, we felt we needed to begin to unpack this big, complex, and sometimes emotional evaluation discussion. We needed to more clearly understand the expectations that people have for evaluation, the conditions under which evaluation is most likely to be useful, and the mistakes to avoid. This report has helped us to refine our thoughts and focus on some of the big systemic issues of evaluation in the nonprofit sector.

Here are a few of the things we learned.

While the sector talks a lot about evaluation, we found that most of this discussion is centred around methodology, tools, and indicators. We found that there has been less attention paid to the intended purpose and audiences for evaluation results — in other words, who is asking the evaluation questions and why. This turns out to be kind of important. You might think that an evaluation is most likely to be used when it is methodologically rigorous, with a large sample size, all the latest standardized measures, and a big thick report with lots of graphs. However, the research on evaluation use is pretty conclusive. Although these things do matter, they don’t predict use all that well. What matters more are things like these:

  • whether the evaluation has a clear purpose that people see as important;
  • whether the people who are expected to use the evaluation are involved in planning it;
  • whether the people involved have worked to develop trust; and
  • how much time and energy is invested in making sure there is good communication throughout the evaluation process. 

Think a bit about your evaluation experience. Did these factors play a role in how useful the evaluation turned out to be? Frustration tends to arise in situations where people haven’t had input in the evaluation purpose, don’t understand how findings will be used, and have had limited opportunity to reflect critically on what is being learned.

According to the research that we reviewed, these kinds of scenarios — where the potential for frustration is high and the potential for the evaluation to lead to action is low — are most likely to occur when the evaluation has been required by a funder solely for the purpose of holding the nonprofit accountable for its use of grant money. The potential for learning and action is even lower if the process is poorly explained, based on unrealistic expectations, or under-resourced.

When it comes to managing expectations, we found that it is important to make sure that the evaluation approach you use fits well with your context. We noted that the term evaluation is used to cover a wide range of social research activities, undertaken by different stakeholder groups, for differing reasons. For some, evaluation might mean a group of staff getting together at the end of a program cycle to reflect on how it went. For others, evaluation could be a complex, multi-year research project with sites all over the province and access to a large team of academic experts.

Going forward, we’ll use this research to help us develop the products and ideas that will eventually make up the Sector Driven Evaluation Strategy. At each stage of development, we’ll also continue to seek your feedback to help us make this work relevant to the sector. In the meantime, this review will give you a better idea of where we’re headed and why. Happy reading!

Read the Executive Summary
Read the full report
Andrew Taylor and Ben Liadsky

Andrew Taylor thinks evaluation is only useful if it answers questions that matter and enables people to act in new ways. He is co-owner of Taylor Newberry Consulting, a Guelph-based firm that specializes in developing research and evaluation solutions for public sector organizations. He is also ONN's Resident Evaluation Expert. He has helped organizations across Canada develop impact strategies and measurement systems that are evidence-based, manageable, and meaningful.

Ben joined the ONN in 2015 as Evaluation Program Associate. He has more than five years of experience working in the nonprofit sector in a variety of capacities, from project management to fundraising to communications. He holds a Master's Degree in International Studies with a specialization in Global Environmental Policy from the University of Northern British Columbia, where his research focused on the role of local governments and transnational environmental networks in addressing climate change. When not reading away, he can be found on his bike, if you can catch him, that is.

Comments

  1. Larry Gemmel Says: February 5, 2016 at 8:40 am

    Very useful Evaluation Literature Review. I think the Sector Driven Evaluation Strategy is a great idea and I look forward to following this work. I am also interested in the Shared Measurement condition of Collective Impact and would like to follow this work from that perspective as well. Easy to talk about, but hard to do.

    Thank you Andrew, Ben, and ONN!

    • Kate Browning Says: February 24, 2016 at 3:32 pm

      Thank you for your feedback, Larry! There will be more great resources coming from our Evaluation team in upcoming months. Stay tuned.