DECODING EVALUATION

by Gayle L. Gifford, ACFRE

Suggest that it’s time for evaluation and you’re likely to send shivers down the spines of your colleagues and watch them quickly flee the room.

Yet, if you asked those same colleagues if they’d like the opportunity for a new learning experience, especially one that helps them become more effective, they’d probably at least entertain a conversation on the topic.

It’s not hard to understand why “evaluation” has gotten such a bad reputation – whether it’s a performance evaluation or a programmatic one. Who decided they were your judge anyway? I know very few people who experience the warm fuzzies when an evaluation is in store for them.

Yet, our nonprofits and staff are living in the age of accountability. It’s difficult to open a professional journal or respond to a request for a proposal without bumping up against the dreaded E-word. So like it or not, E-valuation is heading your way, whether you serve on a board, write grants, run programs, or just raise money for a living.

I’d like to suggest that our nonprofit organizations should embrace evaluation, not run from it. And not just because our funders are pushing it on us.

As a sector that relies upon the continued trust of the public, you’d think that we’d want to know how well our efforts are working. Why aren’t we curious enough to explore whether a different investment of time and money would produce better results? Wouldn’t we benefit from identifying the elements of success so that we could apply them to other situations, or, conversely, from understanding what went wrong so that we can stop doing it?

If we consider evaluation as learning, instead of just judging, I think that we’d all be open to a new relationship with it. So, to get started, let’s strip evaluation down to its rather simple framework.

YOU CAN START WITH FIVE QUESTIONS

First, get rid of that frightening image of evaluation that you have – you know, highly sophisticated statistical programs and endless surveys (there’ll be time for that later) – and replace it with a more benign image. Imagine instead a group of people who look a lot like you sitting around a table discussing their project.

At the project’s start, there was an expectation that it would produce a result of some kind, whether that was raising more money or preventing homelessness. Along the way, actions were taken in an attempt to achieve that result. So, here you are now, asking yourself the question: “So, did it work? Did we get the result we were looking for?”

To answer that, you can ask five simple questions:

1. What? Did we do what we said we would do?
2. Why? Why did we? Or, why didn’t we? What did we learn?
3. So what? Ultimately, what difference did it make? What changed?
4. Now what? What should we do differently?
5. Then what? How can we apply what we’ve learned in the future?

Learning results when a group or an individual honestly seeks to answer these questions.

Let’s look at each question in turn:

Question 1. WHAT?
Did you do what you said you would do? This seems pretty simple. If you said that you would do a with the expectation that b would happen, then did you do a? To what extent?

For example, if you said that you would mail four reminders in order to get a 75% renewal rate from your members, did you send the four mailings or did you only send three? It’s amazing, but frequently, organizations don’t get the results they wanted simply because they didn’t put in the necessary effort.

Question 2. WHY?
So, if you only mailed three renewal notices when you planned to mail four – why? Maybe you simply didn’t have the time because you were diverted by other projects. Maybe you didn’t budget enough money to cover increasing postage rates. Maybe you got such fabulous results with the first three mailings that the fourth wasn’t needed.

It’s natural to overlook the obvious, but you can learn a great deal from assessing why plans did or didn’t happen. Can you plan more realistically next time? Can you develop a contingency for unexpected circumstances? With even more reflection, you can ratchet this up to a higher level of analysis: for example, does this situation happen regularly across departments? If so, what are the organizational or external triggers? Can you affect the way your entire organization responds?

Question 3. SO WHAT?
In other words, what difference did all this effort make? When I took algebra in high school, I had this really nice teacher who used to give me points if I showed the work leading up to the solution to the problem. That way, she had a good sense of my reasoning and understanding of the approach to problem solving.

Only thing…I had this one fatal flaw – I couldn’t add, subtract, multiply or divide very well, so I often got the wrong answer (that was before kids were allowed to use calculators). Now, maybe being off a few degrees didn’t mean a lot to my algebra grade. But it sure would have made a difference if I were designing the re-entry path for the space shuttle.

While effort is important, it’s the results that ultimately matter. You may have done everything that you planned to do and executed it absolutely brilliantly. But if it didn’t produce the difference that you were looking for, then you didn’t raise enough money, or cure cancer, or house the homeless.

This is the toughest, scariest question of all. And it requires the most study and reflection. But it’s worth it. Because this work that we do, it’s not about how we’re graded. It’s about families, kids, the air we breathe, and the communities in which we live.

Few expect you to be perfect. As a colleague of mine observes, much of the work that nonprofits do is still unproven – it hasn’t made it much beyond experimentation or promising practices to get all the way up the ladder to science or evidence-based practice. In some fields and on some issues, the cause-and-effect relationships are well tested. But too often, we’re designing programs based on some combination of external research, our own experience, and our intuition. Knowing this, we owe it to our clients, our communities, and our sector to study, document, and learn from our actions.

Which takes us to Questions 4 and 5: NOW WHAT and THEN WHAT?

How will you apply what you’ve learned? Will you need to head back to the drawing board and look for new approaches? Was there something about your particular execution, your particular audience that produced an unexpected result? If your project worked as you expected, can you reproduce those results? What might you test in other situations?

Ultimately, the learning that comes from asking and answering these questions carries across your organization and back into the community. How will you transmit what you learned throughout your organization? How might this learning be of value to your professional colleagues?

YES, YOU’LL NEED DATA

To answer any evaluation question, you’ve got to figure out what to measure and how to collect the data that informs those measures. Not just any data, but the right data. Then, you’ve got to know how to assess what that data is telling you.

You may very well need outside assistance to structure your project, develop meaningful indicators, and collect and analyze data. That’s where the evaluation professional can be particularly helpful.

For some projects, measurement can be fairly straightforward: e.g. what was the first-year return on investment of your fundraising appeal? But frequently, a good evaluation takes you into more complicated territory. For example: How will you know if the tobacco awareness program you instituted stopped teenagers from smoking? Or if your advocacy program reduced toxins in the river? Even in fundraising, you may need assistance to develop good systems to measure your fundraising activities, especially if you care about donor relationships over the long term.
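To make the straightforward case concrete, here’s a quick illustration using made-up numbers. Suppose an appeal cost $5,000 to produce and mail and brought in $15,000 in gifts during its first year. The first-year return on investment is ($15,000 – $5,000) ÷ $5,000 = 2.0, or $2 of net revenue for every $1 spent. Notice that the harder questions, like teen smoking or toxins in the river, don’t reduce to a single calculation like this; that’s exactly where outside help earns its keep.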

Evaluation is only valuable to us when it is based in an authentic desire to learn. If you remember these five simple questions, it doesn’t have to be so mysterious or intimidating.

Note: You can find more about this approach in the “Guide to Project Evaluation: A Participatory Approach” at the Health Canada website.

This article first appeared in Contributions Magazine.

Gayle L. Gifford, ACFRE and her colleague Jonathan W. Howard at Cause & Effect Inc. help nonprofits, from grassroots groups to international organizations, create strategic change for a more just and peaceful world. With over 30 years of nonprofit experience, Cause & Effect helps nonprofit organizations with strategic planning, board development, fundraising, and communications needs.