Student access to online evaluations
The online evaluation that the Evaluation service emails to Otago students works well on both laptops and mobile devices. To mimic the ‘captured’ nature of in-class paper-based questionnaires, schedule online questionnaires to be completed in class (where online access is available). Set the date and time that an evaluation opens to coincide with a class time so that the invitation email sits at the top of students’ inboxes.
Student portal of Otago inFORM
In addition to the email invitation sent directly to their @otago.ac.nz account, students can complete questionnaires by logging in to the student portal of Otago inFORM with their University username.
You can present this student portal link to students via email, text message, social media, or in class:
It is also embedded in the Blackboard home page and the student app, and is available as a QR code.
Learn more about ordering and accessing student evaluations (PDF)
Move to online evaluations
Every New Zealand university, the Australian 'Group of Eight' universities, the US Ivy League universities, and many more have made the transition from paper-based to online student evaluations. Studies show that online questionnaires, compared with paper-based ones:
- provide five times as many written comments1, which are also more informative.2,3
- are more inclusive. They reach everyone enrolled, not just those who turn up to class.
- reduce sustainability concerns, and operational costs.
- provide less chance for fraud (e.g., staff completing evaluations on behalf of students).
- are, according to a 2017 study of over 800 non-first-year Otago University students4, preferred two-to-one over paper-based evaluations.2,5,6
Since 2009, when online questionnaires became widely available, staff have voluntarily transitioned away from paper-based evaluations. Online evaluation uptake increased year on year, reaching 25% in 2015. Staff were encouraged to use online evaluations in 2016, and since 2017 they have been the only option.
Online teacher effectiveness ratings
Concerns were raised that online evaluations might amplify the voice of 'absentee' or disaffected students. Instead, online teacher evaluations conducted at Otago between 2014 and 2017 show that students with higher grade point averages are more likely to respond. Others have also found that students who have, or expect, higher grade point averages are more likely to participate in online evaluations7,8,9. It may be that students who do less work for papers regard the evaluation as work too. And even when students who expect poor rather than good grades do respond, they are no more likely to rate a teacher poorly9,10. These findings are consistent with ratings of teacher effectiveness at Otago, which, since records began in 2001, have never been more positive than in the six years from 2014 to 2019.
Interpreting lower response rates
However, without a paper evaluation handed out in class, students can easily avoid completing one. Indeed, the transition to online-only evaluations has been accompanied by a general decline in response rates. Low response rates do not, however, necessarily reflect student apathy. Only 8% of Otago students thought that completing evaluations was unimportant4. 83% thought that evaluations should be run every time a paper is run, and a further 10% said every second time. And 73% were content with the number of evaluations they were receiving, 17% wanted fewer, and 10% wanted more. So, if the vast majority of students value evaluations, why are fewer actually completing them?
One answer is that students are over-surveyed. More plausibly, students are reluctant to keep providing feedback when there is little evidence of it making an impact11. For instance, Otago students commonly said that they didn't complete evaluations because they saw little point in doing so4. A quarter said that they were not told how evaluations were going to be used. Fewer than one in six thought that evaluations were very likely to help improve students' experiences of the paper. And only one in twenty said that evaluation results had been shared with them in the previous twelve months (i.e., closing the loop). Others also suggest that students are less inclined to complete evaluations near the end of semester, with exams looming, when they are stressed and busy12. Believing in the feedback process appears, then, to be insufficient. Students also need to receive evaluations at the right time and be confident that their voice will be heard and make a real difference13.
Departments that have made changes to better engage students have managed to increase response rates. Bennett and Nair (2010) describe how an enhanced communication strategy achieved an average online response rate of 83% in Monash's Education department, well above the university average of 44%11. Similarly, Kuch and Roberts (2019) describe how they guided 273 students at the University of Nevada, Las Vegas to complete evaluations in class on mobile devices, obtaining a 78% response rate14. At Otago, the Radiation department asked students what would work for them. As a result, a one-hour tutorial in a computer lab was dedicated to running all of their second-semester evaluations in 2017. Average response rates jumped from 44% in semester one to 84% in semester two.
Online response rates of at least 70% at Otago
To share the success of sixty Otago staff who obtained online student evaluation response rates of at least 70%, view the summary of the tools they used:
A more detailed report includes scenarios in staff members' own words plus additional examples:
Detailed report (PDF)
Triangulate evaluation evidence
Even if response rates remain stubbornly low, student evaluations retain value. A single insightful comment can inspire a positive change. And as evidence of teaching effectiveness, student evaluations are considered the most valid and consistent tool15,16. Bias exists in student evaluations, peer review, self-reflection, and every other form of evaluation in which humans make subjective judgements. To minimise bias from individual tools, staff triangulate evidence from multiple sources and conduct student evaluations on several occasions. In this way, anomalous results can be placed in context amongst more enduring trends.
- Hardy, N. (2003). ‘Online ratings: Fact and fiction’. New Directions for Teaching and Learning, 96, pp. 31-38.
- Anderson, H. M., J. Cain, and E. Bird. (2005). ‘Online Student Course Evaluations: Review of Literature and a Pilot Study’. American Journal of Pharmaceutical Education, 69(1), pp. 34-42.
- Benton, S. L., and Cashin, W. E. (2012). ‘IDEA Paper No. 50: Student ratings of teaching: A summary of research and literature’. Manhattan, KS: The IDEA Center.
- Study of 855 non-first year Otago University students. (2017). Evaluation Team, Quality Advancement Unit.
- Knight, D., Naidu, V. and Kinash, S. (2012). ‘Achieving high student evaluation of teaching response rates through a culture of academic-student collaboration’. Studies in Learning, Evaluation, Innovation and Development, 9(1), pp. 126-144.
- Morrison, K. (2013). ‘Online and Paper Evaluations of Courses: A Literature Review and Case Study’. Educational Research and Evaluation, 19(7), pp. 585-604.
- Adams, M. and Umbach, P. (2012). ‘Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments’. Research in Higher Education, 53, pp. 576-591.
- Layne, B. H., DeCristoforo, J. R., and McGinty, D. (1999). ‘Electronic versus Traditional Student Ratings of Instruction’. Research in Higher Education, 40(2), pp. 221-232.
- Thorpe, S. W. (2002). ‘Online student evaluation of instruction: An investigation of non-response bias’. Paper presented at the 42nd annual Forum for the Association for Institutional Research, Toronto, Ontario, Canada.
- Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., and Bell, D. (2006). ‘Electronic Course Evaluations: Does an Online Delivery System Influence Student Evaluations?’. Journal of Economic Education, 37(1), pp. 21-37.
- Bennett, L., and Nair, C. S. (2010). ‘A Recipe for Effective Participation Rates for Web-Based Surveys.’ Assessment and Evaluation in Higher Education, 35(4), pp. 357-366.
- Iqbal, I., Lee, J. D., Pearson, M. L., and Albon, S. P. (2016). ‘Student and faculty perceptions of student evaluations of teaching in a Canadian pharmacy school’. Currents in Pharmacy Teaching and Learning, 8(2), pp. 191-199.
- Goodman, J., Anson, R. and Belcheir, M. (2015). ‘The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates’. Assessment & Evaluation in Higher Education, 40(7), pp. 958-970.
- Kuch, F. and Roberts, R. M. (2019). ‘Electronic in-class course evaluations and future directions’. Assessment & Evaluation in Higher Education, 44(5), pp. 726-731.
- Benton, S. L. and Ryalls, K. R. (2016). ‘Challenging misconceptions about student ratings of instruction (IDEA Paper No. 58)’. Manhattan, KS: The IDEA Center.
- Berk, R. A. (2005). ‘Survey of 12 Strategies to Measure Teaching Effectiveness’. International Journal of Teaching and Learning in Higher Education, 17(1), pp. 48-62.