It happens. A new person takes a job and improves everything. I’d rather not give my successor such an easy time! I’ll make the improvements myself! One way to do that is to do program evaluations, and we’re starting one at CCCC that you can follow and learn from as it unfolds.
If you aren’t yet doing program reviews in your ministry, following this series in real time should help you get started. The goal of a review is to decide whether to continue, discontinue or modify a program. Some people may question the value of program evaluations because you can just ‘eyeball’ a program and get an idea of its performance. Maybe. But then you are only looking at what is, not at what could be. I don’t want to lead a ministry that is only doing ‘okay’; I want to lead a ministry that is doing the very best possible! I want to get the whole job done, not just part of it. A formal program review forces you to get outside your own observations and take an objective look at the program as it is and as it could be.
This first post will document the organizational context for the review, the rationale for the selection of the program to be reviewed and the high-level research questions that will drive the evaluation.
Organizational Context
Not every organization is ready to do a program evaluation. Employees might be suspicious of ulterior motives. Perhaps previous evaluations were badly done and no one wants to go through that exercise again! There may be little credibility in evaluations if program staff were not involved in designing and conducting the review. If the focus is on assigning blame for a failing program, you have bigger problems than the evaluation to address. Make sure your team knows that taking a hard look at performance is something that is rewarded. No one who is fearful of being blamed and penalized will take an honest look at their own work.
CCCC has had a good experience with program reviews and there are no special concerns or considerations to be addressed prior to launching this evaluation. CCCC began doing formal program evaluations when we added a new standard for our Certified Members that requires program evaluations to be done. After all, what’s good for the goose is good for the gander! If CCCC says it should be done, then we’d better be able to show how to do it.
Having no experience with formal program evaluations, we started with a small-scale review. “Start small and grow with experience” is what I advise first-time program evaluators. In my opinion, some evaluation is always better than no evaluation! The result of the first formal evaluation at CCCC was to terminate the Certified Stewardship Counselor program. The program was muddling along but not going anywhere, and the research question was simply “Do we kill the program or invest in growing it significantly?” Stopping that program gave us the time to develop the highly successful three-day Advancing Stewardship I course.
We next did an operational evaluation of the Community Trust Fund, which was a labour-intensive program. Not only was it not scalable, but we also wondered whether the program was even covering its costs. The result of this evaluation is that the Community Trust Fund today is highly automated, donors can open an account to manage their gifts, and the program is paying for itself. (I still manually sign the cheques that go out, and it gives me great pleasure because while signing I think of the ministries that are receiving these donations.) If you have donors who want to give you publicly-traded securities and you aren’t set up for that, send them to the entry page of the Community Trust Fund’s website. We’ve done other reviews since these first two.
Our positive experiences with program evaluations mean that our employees know a review is risk-free for them, so they have no reason to fear one. Every review, including the one that cancelled a program, led to a positive outcome for all staff because we are more effective and efficient and have made room for new, more promising programs. The point of finding out why something did or did not work is to help us make better use of our resources.
So given this context, I think we are ready to tackle a major program and ask the hard question, “Is this program still worth doing? Is it effective?”
Program Selection
People generally evaluate their programs on a rotational basis over a period of years. Some might be reviewed every year or even a couple of times per year. There are a few factors that will help you decide which program to evaluate:
- If a program is clearly not performing or is causing you trouble, evaluate it.
- Programs that consume a lot of resources, whether money or time, should be evaluated.
- If you simply have questions about a program, if you find yourself wondering if it is worth it, then evaluate it.
- If a program hasn’t been evaluated in a long time, evaluate it.
This year we are reviewing the Annual Conference. Here’s why:
- The conference, along with the Bulletin newsletter, is the original program of CCCC. I think this factor elevates the conference to sacred cow status, and for that reason alone, it should be carefully reviewed.
- The conference accounts for about 14% of our expense budget and 8% of our staff time (we don’t outsource conference planning and management). With salary and overhead plus the direct costs, the total cost of the conference is $250,000, the second-highest investment in a program by far. The only program surpassing it is providing general technical support to our members, which is $676,000. The conference covers its out-of-pocket costs (everything but salary and overhead) but last year contributed only $11,000 towards staff time and overhead, which means the rest of the salary and overhead devoted to it is carried by our general revenues. This is another compelling reason to check that the conference is an effective program and worthy of such support.
- Attendance is nowhere near the level that we would like it to be. Given a membership of over 3,200 ministries, why do only 180 or so send someone to the conference?
- Is the conference up-to-date with all that is going on in the world? Is it still relevant?
For all these reasons, it is a good time to take a serious look at the conference.
Research Questions
Program evaluations must have a focus. Our review of the Certification program focused on the internal operations required to support it with the goal of ensuring we could handle double the number of currently Certified members. Our review of the Advancing Stewardship I program focused on how it was delivered, with a goal of changing the program if necessary and of designing the follow-up program, Advancing Stewardship II, based on what we learned from the first one.
CCCC is in a season of challenging itself on all of its stated and unstated assumptions. We are diligently searching out our hidden assumptions, things we have implicitly accepted as our version of reality, and subjecting them to challenge to verify that they are accurate. While we have a pretty good feel for the conference, we want this review to go back to square one and address why we even have a conference, what function it serves, and whether or not it is really accomplishing anything worthwhile. The research questions are therefore all-encompassing:
- Is the conference program helping CCCC fulfill its mission? It is only fulfilling our mission if it is helping our members fulfill their missions. So we need to examine what the attendees do with what they learn at our conference.
- If it is helping CCCC fulfill its mission, then how can the conference program better meet the needs of ministry workers? How can it attract more of them? We need to dig into what they think of conferences in general, how they learn, and why they attend.
- What assumptions have we made about our members’ needs and about how to put on conferences? Do they stand up to scrutiny? We need to understand what is going on in the life of a Christian worker and how that relates to attending a conference.
- What is the state of conferences today? What are the trends and new developments? What does research show? Are there viable alternatives that could replace a conference altogether?
Sarah Rush, my assistant, will be the lead evaluator with the goal of presenting a final report to the January 30th board meeting.
After selecting the program and defining at a high level what you want to learn about it, you must document all the expectations you have of it, along with all the resources it consumes. So my next post in this series will be about the theory of change behind holding conferences and the resulting logic model.