
Transitioning to online questionnaires

Why switch from paper to online evaluations?

“Online delivery offers several advantages over paper-and-pencil administration. Students can respond outside of class at their convenience, freeing class time for other activities (Dommeyer, Baum, & Hanna, 2003; Layne, DeCristoforo, & McGinty, 1999). Response rates to open-ended questions posted online tend to be higher (Johnson, 2003) and written comments lengthier (Hardy, 2003; Johnson, 2003; Layne et al., 1999). Moreover, online directions and procedures can be uniform for all classes, enabling teachers to be less involved in the administration process (Layne et al., 1999).” (Benton & Cashin, 2012, p. 11).

Do scores between online and paper evaluations differ substantially?

“When the same students respond under both formats, the correlations are high between global ratings of the teacher (.84) and course (.86) (Johnson, 2003). Further, no meaningful differences are found in individual item means, number of positive and negative written comments (Venette, Sellnow, & McIntire, 2010), scale means and reliabilities, and the underlying factor structure of the ratings (Leung & Kember, 2005). Similarly, when different students respond to online and paper surveys, no meaningful differences are found in student progress on relevant course objectives, global ratings of the course and teacher, frequency of various teaching methods (Benton et al., 2010a), subscale means (Layne et al., 1999), the proportion of positive and negative written comments (Hardy, 2003), and the underlying factor structure (Layne et al., 1999).” (Benton & Cashin, 2012, p. 11).

Likewise, little meaningful difference can be discerned between online and paper teacher evaluations conducted at the University of Otago. Of 319 course codes that were evaluated in both paper and online formats between 2013 and early 2016, the average proportion of positive responses to the question “Overall, how effective have you found [teacher’s name] in teaching this course?” was 86% for paper and 82% for online. Given that the presence of a teacher while students complete questionnaires tends to lead to higher ratings (Benton & Cashin, 2012), the marginally lower online scores (4 percentage points on average, 3 at the median) may in fact be a truer representation of student opinion.

Will allowing “absentee” students to participate lower my evaluation scores?

Teacher ratings appear to be unrelated to student attendance (Perrett, 2013). Students with, or expecting, higher grade-point averages are more likely to participate in online evaluations (Adams & Umbach, 2012; Layne et al., 1999; Thorpe, 2002). Even if this were not the case, students expecting poor grades are no more likely than those expecting good grades to evaluate a teacher poorly (Avery et al., 2006; Thorpe, 2002).

Do online evaluations have lower response rates than paper?

This is something that you can influence. The key to better response rates appears to be obtaining greater student engagement (Bennett & Nair, 2010).

Communicate with students

Your relationship with students is an important factor in increasing response rates. Time the evaluation carefully, explain why you are doing it, feed the results back to students, and make changes when appropriate (see Harvey, 2003).

Let students know that their opinions matter

Ensure that students feel their opinions make a difference. Sending out an evaluation only at the end of a course makes this difficult. If you ask for student comments part-way through the course, you can tell students what you have learned. They will then experience any changes you make, or you can explain why things should stay the same. Either way, they will feel consulted, involved, and valued (see Leckey & Neill, 2001; Bennett & Nair, 2010).

As part of explaining why evaluations are done, it may be appropriate to briefly explain the role that student evaluations play in academic review processes (e.g. confirmation, progression, and promotion).

Have students complete online questionnaires in class

The online evaluation that HEDC emails to Otago students is user-friendly on small electronic devices. So, if you wish to mimic the ‘captured’ nature of in-class paper-based questionnaires, you can schedule online questionnaires to be completed in class (where online access is available). Be sure to let students know which class they need to bring a device to, and, to ensure that they have received the email link to the online questionnaire, request that the evaluation open at least one day before the specified class.

How much does response rate matter?

Even though overall response rates are currently low for online evaluations at Otago, there is little to no correlation between response rates and the proportion of positive ratings in either the paper (r = 0.17) or the online (r = 0.10) format (see also Benton & Cashin, 2012). So, while online delivery has led to lower response rates, response rates appear to have little impact on the positivity of ratings.
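For those curious how such a figure is arrived at: given each course's response rate and its proportion of positive ratings, the Pearson correlation coefficient measures how strongly the two move together. The following is a minimal sketch in Python; the file name and column names ("evaluations.csv", "response_rate", "positive_proportion") are illustrative assumptions, not HEDC's actual data format.

```python
# Minimal sketch: correlate per-course response rates with the
# proportion of positive ratings. File and column names are
# hypothetical, not an actual HEDC export.
import csv
from statistics import correlation  # Pearson's r; Python 3.10+


def response_rate_vs_positivity(path: str) -> float:
    rates, positives = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rates.append(float(row["response_rate"]))
            positives.append(float(row["positive_proportion"]))
    # Returns a value between -1 and 1; values near 0 indicate
    # that response rate tells you little about how positive a
    # course's ratings will be.
    return correlation(rates, positives)


print(response_rate_vs_positivity("evaluations.csv"))
```

An r of 1 would mean that higher response rates always accompany more positive ratings; values near 0, as observed at Otago, mean the two are essentially unrelated.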

Finally, the committees that process your confirmation, progression, and promotion have been forewarned to expect lower response rates as students get used to the delivery system.

References

Adams, M., & Umbach, P. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53, 576–591.

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic course evaluations: Does an online delivery system influence student evaluations? Journal of Economic Education, 37(1), 21–37.

Bennett, L., & Nair, C. S. (2010). A recipe for effective participation rates for web-based surveys. Assessment & Evaluation in Higher Education, 35(4), 357–366.

Benton, S. L., & Cashin, W. E. (2012). IDEA Paper No. 50: Student ratings of teaching: A summary of research and literature. Manhattan, KS: The IDEA Center.

Harvey, L. (2003). Student feedback. Quality in Higher Education, 9(1), 3–20.

Johnson, T. D. (2003). Online student ratings: Will students respond? New Directions for Teaching and Learning, 2003(96), 49–59.

Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40(2), 221–232.

Leckey, J., & Neill, N. (2001). Quantifying quality: The importance of student feedback. Quality in Higher Education, 7(1), 19–32.

Perrett, J. (2013). Exploring graduate and undergraduate course evaluations administered on paper and online: A case study. Assessment & Evaluation in Higher Education, 38(1), 85–93.

Thorpe, S. W. (2002). Online student evaluation of instruction: An investigation of non-response bias. Paper presented at the 42nd Annual Forum of the Association for Institutional Research, Toronto, Ontario, Canada.