
Program Evaluation 4 – Research Design


A year ago I wrote about the need to check your assumptions. That is what we are now ready to do in the program evaluation of our annual conference. By the end of this post, you will see how we at CCCC designed the research content and process of our program review.

Deciding What Research Needs to Be Done

Having:

  1. selected the program we want to evaluate,
  2. developed the program rationale (theory of change and logic model), and
  3. completed the literature review,

we next reviewed every part of the program rationale, looking both for assumptions we already knew about and for assumptions that came to light only as we thought more deeply about the program. The goal was to develop a list of questions we want answered: if we can answer them, we will know how well the program is working and how to improve it. We also looked back at the research questions developed at the outset and thought about what we would need to ask in order to answer them. Other ways we developed our questions included:

  • We read the literature review to find decision points (such things as conference length, the plenary/workshop mix, and the use of technology) where we would like to know what our members think about the options. In our case, the literature review did not spark any questions that had not already arisen from our analysis of the program rationale, but it did provide ideas for improvement.
  • We asked the senior team for their questions.
  • We keep a statistical analysis of each conference and have extensive feedback forms, both for the conference as a whole and for each of its components. Our reports show that 25-30% of the participants each year are first-timers, yet overall attendance is not growing at that rate, so we would like to understand better how past attendees decide whether or not to come again (see the sketch after this list). The feedback also showed that most attendees consider ours the best-run conference they attend, so there is only one logistics or conference-administration question to ask.
  • Finally, overall responsibility for the conference rests with me, so I sat back, closed my eyes, and just thought about what I’d like to know more about.
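
As a back-of-the-envelope check on that retention question, here is a minimal sketch of the arithmetic, using hypothetical attendance figures rather than our actual data: if attendance is roughly flat while 25-30% of each year's participants are new, then a similar share of the prior year's attendees must not be returning.

```python
# Back-of-envelope retention arithmetic, with hypothetical numbers.
last_year_attendance = 400      # hypothetical figure
this_year_attendance = 400      # flat year over year
first_timer_rate = 0.28         # within the observed 25-30% range

first_timers = this_year_attendance * first_timer_rate
returners = this_year_attendance - first_timers
non_returners = last_year_attendance - returners

print(f"First-timers this year:     {first_timers:.0f}")
print(f"Returning attendees:        {returners:.0f}")
print(f"Prior-year attendees lost:  {non_returners:.0f}")
print(f"Implied non-return rate:    {non_returners / last_year_attendance:.0%}")
```

That implied churn is exactly why the return decision deserves a closer look.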

Keep in mind that in any program evaluation you essentially want to answer two questions:

  1. Were we effective? You need to compare the actual output to the expected output to discover if you did what you wanted to do. That’s effectiveness. The theory of change helps you answer the effectiveness question: Did we do the right things?
  2. Were we efficient? You need to compare the ratio of actual output to actual inputs and ask if you were good stewards of your resources. That’s efficiency. The logic model helps you answer the efficiency question: Did we do things right?
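
To make the two comparisons concrete, here is a minimal sketch using hypothetical figures (not our actual conference numbers): effectiveness compares actual output to expected output, while efficiency compares actual output to the inputs consumed.

```python
# Minimal sketch of the two core comparisons, with hypothetical figures.

# Effectiveness: actual vs. expected output ("Did we do the right things?")
expected_attendees = 450        # what the plan called for
actual_attendees = 400          # what actually happened
effectiveness = actual_attendees / expected_attendees
print(f"Effectiveness: {effectiveness:.0%} of expected output achieved")

# Efficiency: actual output vs. actual inputs ("Did we do things right?")
actual_cost = 120_000           # total resources consumed, in dollars
cost_per_attendee = actual_cost / actual_attendees
print(f"Efficiency: ${cost_per_attendee:,.0f} per attendee")
```

Tracked over several years, these same two ratios show whether the program is trending in the right direction.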

Specific Questions

We developed specific questions, sorted by category, to guide the design of our surveys and further research.

Assumed Problem and Assumed Causes Questions

We wanted to know how people feel about conferences and the other means they use to stay current in their fields. We wondered how people challenge and stimulate their own thinking.

Assumed Assets and Other Attendee Needs

These questions related to attendees' habits, preferences, and decisions about conference attendance.

Interventions

We wanted to probe what happens pre-conference, during the conference, and post-conference from the attendee's perspective. How does an event fit within a person's larger context?

Short-Term Outcomes

Moving away from the conference itself, we wanted to know what people do with the material afterwards.

Long-Term Outcomes

Stepping back from any particular conference, we asked survey respondents about their employers' overall assessment of the usefulness of attending conferences.

Inputs

Some questions were reserved for further research rather than for the surveys; we asked them to improve how we use our time, the venues we rent, and our handout materials.

Success Criteria

If benchmarks or standards are available for the program you are reviewing, determine what threshold must be crossed in order to consider your program a success. Do this before you see the results: it is very difficult to set a threshold once you know what the results are, because biases will enter the equation!

Ministries might ask questions such as:

  • What recidivism rate is acceptable?
  • What is an acceptable percentage of new believers who are still attending church and being discipled a year later?
  • What percentage of clients should have a job a year from now with at least three months of steady employment?
  • What percentage of our congregation is involved in some form of Christian ministry, whether with our church or with another ministry?

In our case, I could not find any benchmarks for success, so as part of the research we will attempt to create some by asking other associations whose conferences have voluntary attendance about their results. We will also ask ourselves: in light of the number of ministries that benefit from the conference compared to the resources it consumes, is it good stewardship to continue running a conference? This criterion is a bit fuzzy for evaluation purists, but it is good enough for me, given that we are primarily looking for ways to improve the conference rather than making a continue/discontinue decision.

Methodology

We then reviewed each question and decided two things:

  1. Whom should we ask?
  2. How should we ask it?

We decided we would find the answers in the following ways:

  • Ask people who have attended the conference at least once since 2006 (includes members and non-members)
  • Ask members who have not attended the conference since 2005 (the conference is designed for our members so we expect them to come; non-members are not our target for the conference)
  • Ask speakers who have presented in three of the past six years
  • Review possible venues at certain locations
  • Analyze our database for attendance patterns (see the sketch after this list)
  • Talk with other organizations that run conferences
  • Based on the results of all the above inquiries, select a small group of people for one-on-one interviews to delve deeper to get more insight on any remaining questions.
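
For the database analysis, here is a minimal sketch of the kind of attendance-pattern query involved, assuming a hypothetical registrations table with person_id and year columns (our actual schema differs):

```python
import pandas as pd

# Hypothetical registration records; the real data would come from our database.
registrations = pd.DataFrame({
    "person_id": [1, 2, 3, 1, 4, 5, 1, 2, 6],
    "year":      [2009, 2009, 2009, 2010, 2010, 2010, 2011, 2011, 2011],
})

# Flag each person's first conference year.
first_year = registrations.groupby("person_id")["year"].min().rename("first_year")
registrations = registrations.join(first_year, on="person_id")
registrations["first_timer"] = registrations["year"] == registrations["first_year"]

# Attendance and first-timer share per year.
summary = registrations.groupby("year").agg(
    attendance=("person_id", "count"),
    first_timer_share=("first_timer", "mean"),
)
print(summary)
```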

While we chose to base the review on surveys for the most part, there are many research methodologies you could choose from:

  • Verbal data: conduct interviews, either one-on-one or in groups (focus groups)
  • Client surveys: expectations of services, use of services, satisfaction, ratings of quality
  • Outcome surveys: behaviours, beliefs, and conditions that have changed as a result of your service
  • Observational data: watch and see what happens
  • Archival data: check data collected from running the program, your own records, plans, etc.

While benchmarks may be helpful, there are always so many differences between organizations, or even between divisions within the same organization (circumstances, conditions, history, and so on), that the best comparison is really between your current results and your past results. Is your performance improving?

You are now ready to go ahead and do the evaluation.

Series Navigation: << Program Evaluation 3 – Literature Review | Program Evaluation 5 – Wrapping It Up >>
