Best Practices: Evaluating Learning Outcomes

Research vs. Quality Improvement/Program Evaluation

Consider the purpose of the assessment and whether you want to engage in Program Evaluation, Quality Improvement, or Research (this distinction is particularly important because, if the evaluation is research, it will be subject to approval by an Institutional Research Ethics Board). Research is grounded in a desire to further knowledge in a particular field of study (Levin-Rozalis, 2003). It is rooted in theory and designed to test a hypothesis. In contrast, quality improvement and program evaluation tend to have a narrower focus and are generally applicable only to the program being evaluated (Lynn et al., 2007, p. 667). See Dalhousie’s Guidelines on the Scholarship of Learning and Teaching for more information on these distinctions.

Data Collection

Consider how best to gather rigorous, robust data on the student learning experience. Remember that the purpose of evaluation is in part to ascertain the usefulness or success of an activity for future applications (Levin-Rozalis, 2003). For example, consider whether the success of the simulation could be gauged in part by students’ final marks in the course (Frederking, 2005). Alternatively, instructors might keep their own journal record of the simulation exercise as a source of data for the evaluation.

Surveys

Consider creating a quick three-question survey for students to complete. This survey may be helpful for understanding students’ perspectives and strengthening the simulation (Baranowski, 2006). Some questions for this qualitative survey might be: What did you learn from the simulation? How could the simulation be changed to improve learning in future years? What did you like most about the simulation? In addition, consider giving students surveys before and after the simulation to assess knowledge acquisition and/or skill development (McCarthy & Anderson, 2000, p. 280).

Triangulation

Examining quantitative data on students’ performance alongside the survey results may be an effective way to evaluate the simulation.

Additional Resources

Keep up with the Scholarship of Teaching and Learning on effective assessment of simulation-based learning (Axtell et al., 1996, p. 139). See Dalhousie’s Scholarship of Teaching and Learning Resource Guide for more information.