Can a pizza party be evaluation, too? Updating mismatched expectations in nonprofit measurement

Last year, we put out a resource to help clarify the different ways the term evaluation gets used (or misused) in Ontario’s nonprofit sector. Since then, we have heard about some great ways nonprofits are promoting critical reflection and learning that didn’t quite fit into our original analysis. So here’s a resource refresh that reflects what we’ve learned from the sector since.

Pizza, anyone?

One nonprofit organization told us about its end-of-year gathering, where staff and volunteers talk about the issues and opportunities the organization faced and its programming for the year ahead. They took time to reflect on what worked, what didn’t, and why. The “data” used for this process were simply the insights and reflections of the people in the room. This simple process, carried out between bites of pizza, helped people a great deal as they made difficult decisions about what they needed to do differently going forward. A facilitated critical reflection session like this can be considered evaluation, too.

Mismatched expectations

Evaluation is a word that gets used a lot, as we pointed out in Whaddaya mean, evaluation? Different kinds of data gathering approaches with different purposes sometimes get lumped together under the general heading of evaluation. This can lead to miscommunication and unrealistic expectations. Our initial resource focused on four basic approaches to measurement work. Now we’ve added a fifth approach.

Five definitions in our updated table:

1. Facilitated critical reflection (a.k.a. the pizza party approach): Sometimes, to evaluate what has been learned and what can be improved or changed, all that is needed is some dedicated time to chat openly and honestly (e.g. over pizza).

2. Performance measurement is the day-to-day tracking of simple descriptive information.

3. Program evaluation tends to be more intensive, time-limited, and focused on measuring short-term outcomes.

4. Systems evaluation focuses on understanding the cumulative effect of multiple programs or strategies.

5. Applied research tends to be more theory-driven and designed to generate new knowledge from which we can draw general conclusions, rather than practical recommendations for program managers.

We’ve also arranged these approaches in the table in order of increasing complexity and difficulty, from left to right.

This updated table lists six common reasons or purposes for nonprofits to engage in evaluation using the five approaches. It also indicates which approach works best for each purpose.

Evaluation Approaches Resource 2.0 Graphic

If you need a refresher, here’s more information on how the table works:

Performance measurement

Green dot: Represents situations where the approach and expectations are well matched.

Let’s say, for example, your program collects basic data about how many people take part, who those people are, and how satisfied they are, in a general way, with your service. This is a classic example of a good performance measurement approach. If your purpose in gathering this data is to demonstrate to a funder that you are carrying out the program as planned and that things are running smoothly, this approach works really well. You’re in the cell with a green dot in the top-left of our table.

On the other hand, maybe you are doing much more ambitious measurement work. Perhaps you are working with sister agencies in other cities and an academic partner on a study of the long-term impacts of a certain program model. If so, you’re probably down in the bottom-right section of our table, in another cell with a green dot, and you’re good to go! The applied research approach is probably right for you.

So far, so good?

Program evaluation

Yellow dot: Represents the many situations where the approach and expectations don’t match as well.

Imagine a nonprofit with a strong track record of day-to-day performance measurement. Let’s say this nonprofit starts to shift its thinking about the purpose of its measurement work. Perhaps, for example, it would like to get better at measuring program outcomes. It might be tempting to think that this would be an easy shift. A few tweaks to the client survey and off we go. However, this expectation may not be realistic. Getting good at outcome measurement may require the agency to develop or do more thinking about its theory of change. It may need to begin doing pre-test surveys as well as post-tests. It may need to start asking somewhat more intrusive questions of its clients. In short, a change in evaluation purpose or expectations may require a significant change in evaluation approach. In our table, using performance measurement approaches to demonstrate achievement of impact appears in a yellow cell. That means it can work, under the right circumstances, but you should proceed with caution. Shifting to a program evaluation approach (the cell with a green dot in the middle of the table) might be a better way to go.

There are also situations where you might want to shift in the opposite direction, towards a less formal, less resource-intensive approach. Imagine an organization with a fairly long program evaluation survey that asks a lot of questions about outcomes for clients. This survey has been in use for many years. When it was launched, it generated very useful data, but lately it seems to produce the same findings every year, and the organization has stopped paying much attention to it. The staff have learned a lot more than they knew back in the program’s early days, and the program’s context and client base have changed. This program might need to shift to an approach that does a better job of getting up-to-date information on how the program’s implementation has evolved over the years. It might be time to order some pizzas.

Red dot: Represents situations where the approach and expectations are seriously mismatched. Funders in the nonprofit sector can run into these challenges, too.

Imagine a funder that has asked all grant recipients to engage in program evaluation work and report on the outcomes. Let’s assume that the grant recipients have done this measurement work well. That funder may believe that these individual program evaluation reports can be rolled up to demonstrate system impact, or the impact of the funder’s investments on the community as a whole. This may not be a realistic expectation, and it shows up in our table as a cell with a red dot. Unless the funder and its grant recipients have worked together to create a measurement strategy specifically designed to demonstrate system impact (as the Canadian Women’s Foundation does, for example), lots of individual evaluation reports aren’t likely to add up to evidence of systems change.

If you’d like to know more about evaluation definitions and approaches, there’s an app for that.

However, the core point we are making is that no single evaluation project can be all things to all people.

It is important to think about whether your approach matches your expectations and whether everyone involved in your evaluation work has similar expectations.

Check out our updated resource.

Andrew Taylor
