
Unpacking Nonprofit Evaluation: Who is taking the risks and who is making the decisions?

As part of the Ontario Nonprofit Network’s efforts to develop the Sector Driven Evaluation Strategy, we have been collecting feedback from people on some of the issues of concern to them and what a strategy could do to help make evaluation more useful for the sector. This blog post is the first in a series that we will share with you related to a specific issue that has come up in our research and conversations with others.

One issue we have been thinking about a lot is the relationship between evaluation and accountability. Perhaps you have a new grant from a funder that includes a requirement to evaluate. Maybe you have been contacted by an outside evaluation consultant on behalf of a funder. Evaluation seems to become more stressful and less useful in these kinds of situations where an external agent requires it as part of an accountability process. In our research, we have heard some strong opinions on this relationship. The following comment (as part of this survey) is an indication of that:


What words or sentences come to mind when you think about how evaluations are planned, implemented, and used today in Ontario’s nonprofit sector?

Uncoordinated (between funders), cumbersome, reactive, unhelpful, time-consuming. It should be something that is truly useful for organizations but really just ends up taking time — like you are checking boxes for others’ benefit and catering answers to what the funders want to hear without truly learning from what happened internally or externally.

The goal of this blog is to unpack that frustration a little bit in order to understand it better. Think of employee satisfaction as an analogy. Employees are most satisfied when their role has a good balance of control and responsibility. If you are in a position where you have a lot of responsibility, it is important to have control over the kinds of decisions you need to make in order to succeed. On the other hand, if you have a relatively low level of responsibility and are expected to carry out tasks assigned by someone else, you may feel less need for control or flexibility. Employees tend to be much less satisfied when control and responsibility are out of alignment — when they are on the hook for a project’s success or failure (i.e., they have a high degree of responsibility) but they haven’t been given control over the people, the resources or the information they need.

A similar kind of dynamic can play out when it comes to accountability in evaluation projects. Here are some questions that you might ask yourself to get a better sense of how much responsibility you are being asked to take on in an evaluation project:

  • How much of my time and resources am I expected to devote to the evaluation? Will I be expected to design data collection tools, or to collect and analyze data? How much time is the funder or evaluator devoting to the project? Has the time required to complete this project been estimated? Are evaluation expenses being covered or reimbursed? What is the cost to my organization?
  • How intrusive are the proposed methods? Will my clients or participants be asked to share information?
  • How much risk am I taking on? If the evaluation takes longer than expected or produces controversial results, will I have to deal with that? How much risk is the funder taking on?
  • What kinds of decisions will be made on the basis of this evaluation? By whom?
  • How much uncertainty is there? Are some of the key methods still being sorted out?

Here is another list of questions that will give you some sense of how much control you are likely to have:

  • Is the person or group that has initiated this evaluation making an effort to build trust with me? Do I feel respected and listened to?
  • How much input will I have into how the evaluation is designed? Will it explore questions that are useful to my organization or to the people I serve? If I feel the methodology is too intrusive, for example, would I have an opportunity to get that method changed?
  • How often will we communicate during the evaluation process? Will the people I serve be part of this communication? Will the evaluators be present at these meetings? Will the funder?
  • How much control will I have over how the results are analyzed and interpreted? Who will be making the recommendations? Will I have a chance to contribute?
  • How does my organization stand to benefit? How do the people my organization serves stand to benefit? How will things be better for them as a result of this evaluation? Could this evaluation cause them harm?

If you have high control and high responsibility, the evaluation process has a good chance of being worthwhile for you (although it is probably going to be a lot of work as well!). If you have low responsibility and low control, the process might not help you much. However, it isn’t going to take up much of your time and your risk is low. You may choose to get involved in this kind of evaluation project because you value your relationship with your funder or partner.

The most frustrating evaluation relationships are ones where your role and your level of influence don’t match. These are situations where you take on a lot of responsibility and risk but are not given much input or control around how the project unfolds. Here is a typical example: you are required to complete an evaluation report as a condition of funding, so the risk for your organization is high and your level of responsibility for the evaluation work is also high. However, you have very little input into the evaluation questions and little information about how the evaluation data will be used. You have little choice about whether to participate in the evaluation process. These are the kinds of evaluation relationships you might want to avoid, if you could!

As our work to develop the Sector Driven Evaluation Strategy unfolds, we hope to create resources that you can use to have more constructive conversations about evaluation with your funders or other partners in order to develop better partnerships that lead to useful results. Stay tuned!

Andrew Taylor

Andrew Taylor thinks evaluation is only useful if it answers questions that matter and enables people to act in new ways. He is co-owner of Taylor Newberry Consulting, a Guelph-based firm that specializes in developing research and evaluation solutions for public sector organizations. He is also ONN's Resident Evaluation Expert. He has helped organizations across Canada develop impact strategies and measurement systems that are evidence based, manageable, and meaningful.
