How Many Responses Do We Need? Using Generalizability Analysis to Estimate Minimum Necessary Response Rates for Online Student Evaluations

2015 ◽  
Vol 27 (4) ◽  
pp. 395-403 ◽  
Author(s):  
Margaret W. Gerbase ◽  
Michèle Germond ◽  
Bernard Cerutti ◽  
Nu V. Vu ◽  
Anne Baroffio

2018 ◽  
Vol 44 (1) ◽  
pp. 37-49 ◽  
Author(s):  
Karen Young ◽  
Jeffrey Joines ◽  
Trey Standish ◽  
Victoria Gallagher

Author(s):  
Susan J. Clark ◽  
Christian M. Reiner ◽  
Trav D. Johnson

Many institutions of higher education are considering the possibility of conducting student evaluations of teaching (course-ratings) online. Some campuses have already implemented online evaluation systems that collect, process, and report ratings data electronically. Information on the successes and challenges of these systems is beginning to emerge. This chapter outlines some of the most salient advantages and challenges of online student evaluations of teaching within the context of how they relate to The Personnel Evaluation Standards set forth by the Joint Committee on Standards for Educational Evaluation (JCSEE, 1988). The authors also provide suggestions for successful implementation of online evaluation systems.


2010 ◽  
Vol 34 (4) ◽  
pp. 213-216 ◽  
Author(s):  
John A. McNulty ◽  
Gregory Gruener ◽  
Arcot Chandrasekhar ◽  
Baltazar Espiritu ◽  
Amy Hoyt ◽  
...  

Student evaluations of faculty are important components of the medical curriculum and faculty development. To improve the effectiveness and timeliness of student evaluations of faculty in the physiology course, we investigated whether evaluations submitted during the course differed from those submitted after completion of the course. A secure web-based system was developed to collect student evaluations that included numerical rankings (1–5) of faculty performance and a section for comments. The grades that students received in the course were added to the data, which were sorted according to the time of submission of the evaluations and analyzed by Pearson's correlation and Student's t-test. Only 26% of students elected to submit evaluations before completion of the course, and the average faculty ratings from these evaluations were highly correlated [r(14) = 0.91] with the evaluations submitted after completion of the course. Faculty evaluations were also significantly correlated with those of the previous year [r(14) = 0.88]. Concurrent evaluators provided more comments, which were statistically longer and subjectively scored as more "substantive." Students who submitted their evaluations during the course and who included comments had significantly higher final grades in the course. In conclusion, the numeric ratings that faculty received were not influenced by the timing of student evaluations. However, students who submitted early evaluations tended to be more engaged, as evidenced by their more substantive comments and their better performance on exams. The consistency of faculty evaluations from year to year, and between concurrent and end-of-course submissions, suggests that faculty tend not to make significant adjustments in response to student evaluations.
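The correlation analysis described above (comparing mean faculty ratings from concurrent versus end-of-course evaluations) can be sketched as follows. This is a minimal illustration, not the authors' actual analysis: the ratings below are randomly generated stand-ins for 16 faculty members (consistent with the reported 14 degrees of freedom), and the noise level is an assumption chosen to produce a strong correlation of the kind reported.

```python
import numpy as np

# Hypothetical mean ratings on the 1-5 scale for 16 faculty members.
# Real data would come from the web-based evaluation system.
rng = np.random.default_rng(0)
concurrent = rng.uniform(3.0, 5.0, size=16)

# End-of-course ratings assumed to track the concurrent ones closely,
# with small random deviations (assumption for illustration only).
end_of_course = np.clip(concurrent + rng.normal(0.0, 0.15, size=16), 1.0, 5.0)

# Pearson's r between the two sets of mean ratings.
r = np.corrcoef(concurrent, end_of_course)[0, 1]

# Degrees of freedom for a correlation on n pairs is n - 2,
# which is why the abstract reports the statistic as r(14).
df = len(concurrent) - 2
print(f"r({df}) = {r:.2f}")
```

A high r here would, as in the study, indicate that the timing of submission had little effect on the numeric ratings faculty received; a per-faculty scatter of the two rating sets is a natural follow-up check.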

