We are now in the final module of our Program Inquiry and Evaluation course. It has been an interesting semester learning about the various forms of evaluation use and program design. When I read Facilitating the Use of Evaluations by Sara Vaca this week, it calmed my nerves to learn that, after five years of evaluating, she is only now starting to go beyond “surviving” evaluation.
In these final steps, I have been instructed to identify data collection methods and analysis strategies, describe an approach to enhance evaluation use, and write a statement that describes how my evaluation plan adheres to the Standards for Program Evaluation. I look forward to my peers’ thoughts on, and challenges to, my evaluation process.
To reiterate my previous posts, I am designing a Program Evaluation Design (PED) for an internship/placement program that is part of a college course. Ironically, since I began this PED, the program’s website has already adjusted its language. Previously, the program claimed that a four-month placement gives students an “opportunity to graduate into a job” in their field. While it no longer makes this claim, I have decided to continue my evaluation as is; my PED questions are therefore as follows (updated November 11, 2018):
- What percentage of students have obtained jobs directly from the placement organization?
- To what extent is students’ success in obtaining a job attributed to the placement program?
- Is there a difference in academic standing and personality traits between students who have obtained a job from their placement and students who have not?
- In what ways are the program’s activities helping it run efficiently?
- In what ways are students satisfied or dissatisfied with different aspects of the program?
- What are the retention rates of the program?
- Has the program seen an increase in enrollment and applications?
Identifying Data Collection
After watching this YouTube video on data collection methods by Sierra Thompson, I recognized that this evaluation would draw on direct sources, archival data, and indirect sources. I believe there is existing data that would prove useful, alongside new data to be obtained from current students and placement organizations. There will be a balance of qualitative and quantitative data to explain the value of the program and the impact it is having. The breakdown is as follows:
Direct sources (interviews, surveys, observations: qualitative and quantitative): This type of data collection will assist in evaluating all of the questions. Surveys would be distributed to current students, alumni, and placement organizations, while interviews would be conducted with current staff and administration associated with the program. Some of the data collected would be quantitative, such as the percentage of students attributing their job to the placement; other data would be qualitative, such as an analysis of which activities and aspects are and are not working well.
Archival data (public records, education records, etc.: quantitative and qualitative): I would primarily use archival data to look at the growth (or decline) of the program and its retention rates. We would also be able to compare the number of applicants with the number of registrants, which could give us a bit of an inside look at how the program is perceived.
Indirect sources (family and friends, other organizations running similar programs: qualitative): I would mostly draw on this type of source for comparison. What are similar programs doing with placements? Are there elements of other programs that are working more effectively? While this would not be the primary data source of my evaluation (especially given the questions), I believe it would provide me, as the evaluator, with more context, allowing me to immerse myself better in the program’s structure and the evaluation.
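To make the quantitative side of this plan concrete, the summary statistics behind several of my PED questions (placement hires, job attribution, retention) could be computed along these lines. This is only a sketch using made-up records and illustrative field names, not real program data:

```python
# Hypothetical survey records; student IDs, field names, and values are
# illustrative only, standing in for the real survey responses.
survey = [
    {"student": "A", "hired_by_placement": True,  "attributes_job_to_program": True},
    {"student": "B", "hired_by_placement": False, "attributes_job_to_program": False},
    {"student": "C", "hired_by_placement": True,  "attributes_job_to_program": False},
    {"student": "D", "hired_by_placement": True,  "attributes_job_to_program": True},
]

def pct(flag, records):
    """Percentage of records where the given boolean field is True."""
    return 100.0 * sum(r[flag] for r in records) / len(records)

# PED question 1: what percentage of students obtained jobs from the placement?
hired_pct = pct("hired_by_placement", survey)

# PED question 2: of those hired, how many attribute the job to the program?
hired = [r for r in survey if r["hired_by_placement"]]
attribution_pct = pct("attributes_job_to_program", hired)

# Archival data: retention rate from enrollment records (invented numbers).
enrolled, completed = 120, 96
retention_pct = 100.0 * completed / enrolled

print(f"Hired via placement: {hired_pct:.1f}%")            # 75.0%
print(f"Attribute job to program: {attribution_pct:.1f}%")  # 66.7%
print(f"Retention rate: {retention_pct:.1f}%")              # 80.0%
```

The qualitative findings (interview themes, satisfaction comments) would of course need separate coding and analysis; this only covers the counting.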
With the data I receive, I would discuss any findings and uncertainties with the stakeholders and users of the program. I would want to ensure that the findings are as accurate as they can possibly be and that they remain relevant to the overall goals of the program evaluation.
Enhancing Evaluation Use
I found our readings in module three fascinating: the ways that evaluation use has developed, and the ongoing debate over whether it is the evaluator’s responsibility to ensure that the evaluation is followed through on. Based on our readings and my personal opinion, I have developed the following statement:
Prior to the commencement of the program evaluation, I will ensure that I have a clear understanding from the stakeholders as to why the evaluation is occurring. After developing that understanding and an agreed-upon design, I will continue to make myself available throughout the process. As an evaluator, I will dedicate myself to learning the program and immersing myself in all aspects related to it (staff, placement, students, administration, etc.). I will provide ongoing reports, organize meetings when needed, and set up a discussion of the findings to help establish priorities as a team. Following the evaluation, I will check in with the organization after six months to see if they require any further assistance. As an evaluator, I want to be committed to the follow-through of a program evaluation as much as I possibly can be.
Commitment to Standards of Practice
As outlined in Better Evaluation, I will make sure I adhere to the Standards for Program Evaluation. The following is a brief statement of how I will adhere to each of the five categories (utility, feasibility, propriety, accuracy, and evaluation accountability):
I believe this program evaluation design adheres to the Standards of Practice by providing a feasible evaluation that would be utilized by the stakeholders and users of the program. After a thorough consultation with interested parties, I would assure them of a quality, unbiased, thorough, and accurate evaluation of the program. I recognize that I am accountable for the evaluation and would provide guidance and assistance as needed. I would work with the group to establish goals for implementation and would communicate all findings in various forms (orally and in writing) to ensure that their contents are clear and understood.
Beyond this, I believe I have a responsibility to conduct this evaluation in a reasonable amount of time and to take responsibility for all of the findings. I will be responsive to the stakeholders and users of the program and will adjust my methods when I find deficiencies. Lastly, I will keep track of all data and findings throughout the program evaluation process.
Thank you for reading along with my PED journey. I look forward to your feedback either here or in the OnQ platform. Happy evaluating!
Thompson, S. (2016, April 2). Data collection methods [Video file]. Retrieved from https://www.youtube.com/watch?v=NJ-gW6adQTc
Vaca, S. (2018, June 2). Facilitating the use of evaluations [Web log post]. Retrieved from https://aea365.org/blog/facilitating-the-use-of-evaluations-by-sara-vaca/comment-page-1/#comment-328614
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.