A guest post by Dr. Jennifer Leo, Postdoctoral Researcher, UNESCO Chair IT Tralee
In a quest to share all of the things I’ve learned over the years, I wanted to write a blog post about the importance of program evaluation (1), including tips about what you ‘should’ be doing. I wanted to tell you why I think it’s so important to organize yourself at the beginning of a new program so that you are ready to gather information and document what happened: what worked really well, what didn’t, and where you can make changes.
Here’s the thing. As I was reading journal articles about conducting program evaluations to prepare myself to impart all of this knowledge and wisdom, I realized that I don’t always practice what I was about to preach. Don’t get me wrong. I think program evaluation (also referred to as monitoring and evaluation) is absolutely critical to include in all new programs, regardless of your field or industry. It is absolutely vital that you think about what you want the participants and everyone else (e.g., staff, family members, volunteers) to get out of the experience. If you don’t set out goals, plan, and consider the potential outcomes and impacts, how else will you be able to celebrate your successes and share the exciting impact stories with others?
Here’s the part that I was forgetting. In my attempt to convince everyone that I know what I’m talking about, I forgot how important it is to involve the stakeholders from the very beginning. I forgot that you need to bring everyone together to make sure that (a) the assessment tools make sense for the specific population, (b) those who are going to hand out the surveys or gather the information understand why we are doing this, and (c) they are prepared to answer any and all questions from participants. Without all of this coming together, the evaluation is not going to be as rich and meaningful as it could be.
Program evaluation is all about people
Program evaluation only works if it gets done. You can plan out a list of things to measure and identify the best assessment tools (for example, surveys about goals, perceived behaviours, and participant demographics), but if you don’t have the support of those working on the front line, the surveys aren’t going to be filled out. And if the surveys aren’t completed or the interviews aren’t conducted, you won’t have any data to work with.
You need buy-in from everyone involved. Program evaluation is all about people, including those who spend time developing great programs and those who spend time delivering them (sometimes these are the same people!). You need to build relationships, gather feedback, and get everyone on board so that you have a team that is dedicated to supporting all stages of the evaluation process. Here’s the thing, though. This takes time. This means
- (a) asking people questions – like the person who will hand out your survey,
- (b) listening – Do they think it’s too long? When do they think would be a good time to get it done? Do they know why you are doing this? and
- (c) maybe even changing your evaluation plans – even if you think you know EXACTLY what should be done!
This is what leads to authentic, meaningful evaluation. Luckily, evaluation is about process and learning, so in my case there’s still time to change and modify my behaviour. I can’t wait until the next time I’m responsible for a program evaluation. I’m already thinking ahead to how I can make it better. And at the end of the day, isn’t that what evaluation is really about?
If you are interested in learning more about program evaluation, including tips and strategies to make it work for you, please leave a comment below. Or if you have any suggestions for future UFIT blog topics, please let us know! We appreciate you taking the time to help us ensure these blog posts are meaningful for you!
(1) For the purpose of this blog post, program evaluation means: “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development” (Patton, 2008, p. 39).
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.