Teacher Self-Ratings as a Validity Criterion for Student Evaluations

1987 ◽  
Vol 14 (1) ◽  
pp. 23-25 ◽  
Author(s):  
David R. Drews ◽  
W. Jeffrey Burroughs ◽  
DeeAnn Nokovich

Student ratings were validated against instructor self-ratings by assessing student-faculty agreement concerning day-to-day variability within courses. For 15 days, students and instructors in each of four courses made daily evaluations. Analysis showed that student ratings and instructor self-ratings were significantly correlated in three areas: material covered, instructor performance, and overall impressions of the success of the class. These results are consistent with those of other studies that have argued for the ability of students to provide valid course evaluations. In addition, they avoid some of the interpretive problems of other criterion measures that have been used to validate student evaluations.
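
As a concrete illustration of this design, here is a minimal Python sketch (simulated data, not the authors' ratings; class size and rating scale are assumptions) correlating daily mean student ratings with instructor self-ratings over a 15-day course:

```python
# Illustrative sketch only (not the authors' code): day-to-day
# student-instructor agreement as a correlation between daily mean
# student ratings and the instructor's self-rating.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
days = 15        # the study ran for 15 class days
n_students = 30  # hypothetical class size

# Simulated 1-5 ratings on one dimension (e.g., instructor performance):
instructor_self = rng.uniform(3, 5, size=days)
student = instructor_self + rng.normal(0, 0.5, size=(n_students, days))

daily_student_mean = student.mean(axis=0)
r, p = stats.pearsonr(daily_student_mean, instructor_self)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")
```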

2014 ◽  
Vol 28 (3) ◽  
pp. 189-204 ◽  
Author(s):  
Kristin F. Butcher ◽  
Patrick J. McEwan ◽  
Akila Weerapana

Average grades in colleges and universities have risen markedly since the 1960s. Critics express concern that grade inflation erodes incentives for students to learn; gives students, employers, and graduate schools poor information on absolute and relative abilities; and reflects the quid pro quo of grades for better student evaluations of professors. This paper evaluates an anti-grade-inflation policy that capped most course averages at a B+. The cap was binding for high-grading departments (in the humanities and social sciences) and was not binding for low-grading departments (in economics and sciences), facilitating a difference-in-differences analysis. Professors complied with the policy by reducing compression at the top of the grade distribution. It had little effect on receipt of top honors, but affected receipt of magna cum laude. In departments affected by the cap, the policy expanded racial gaps in grades, reduced enrollments and majors, and lowered student ratings of professors.
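
For readers unfamiliar with the method, a minimal difference-in-differences sketch follows; the file and column names are hypothetical, not the authors' dataset:

```python
# Illustrative difference-in-differences sketch. "course_grades.csv",
# "treated", "post", and "course_gpa" are hypothetical names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("course_grades.csv")
# treated = 1 for high-grading (capped) departments, 0 otherwise
# post    = 1 for semesters after the B+ cap took effect
model = smf.ols("course_gpa ~ treated * post", data=df).fit()

# The treated:post interaction is the difference-in-differences
# estimate of the cap's effect on average course grades.
print(model.params["treated:post"])
```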


1973 ◽  
Vol 36 (2) ◽  
pp. 533-534 ◽  
Author(s):  
Dewitt C. Davison

Sixty-eight college juniors rated themselves and the instructor on the 49 trait adjectives in Bills' Index of Adjustment and Values. They were then asked to rate the instructor's teaching performance on a different questionnaire. The correspondence between the average rating given self and the average given the instructor across the 49 adjectives was taken as an index of assumed-similarity of student to instructor. The 34 students who perceived the instructor as being most superior to themselves on the trait adjectives rated his teaching performance higher than the 34 who perceived him as being more similar to themselves. The findings suggest a halo effect in student ratings of instructor performance.
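
The design can be made concrete with a short simulation; all data below are invented, and the 49 Bills' Index adjectives are simply numbered columns:

```python
# Illustrative sketch of the assumed-similarity design (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_students, n_adjectives = 68, 49

self_ratings = rng.integers(1, 6, size=(n_students, n_adjectives))
instructor_ratings = rng.integers(1, 6, size=(n_students, n_adjectives))
teaching_rating = rng.uniform(1, 5, size=n_students)

# Perceived superiority: mean instructor rating minus mean self-rating
# across the 49 adjectives (high values = instructor seen as superior).
superiority = instructor_ratings.mean(axis=1) - self_ratings.mean(axis=1)

# Median split into the 34 "instructor superior" and 34 "similar"
# students, then compare their ratings of teaching performance.
order = np.argsort(superiority)
similar = teaching_rating[order[:34]]
superior = teaching_rating[order[34:]]
t, p = stats.ttest_ind(superior, similar)
print(f"t = {t:.2f}, p = {p:.3f}")
```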


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e2343 ◽  
Author(s):  
Robert Robinson

Introduction: The educational technology of massive open online courses (MOOCs) has been successfully applied in a wide variety of disciplines and is an intense focus of educational research at this time. Educators are now looking to MOOC technology as a means to improve professional medical education, but very little is known about how medical MOOCs compare with traditional content delivery.

Methods: A retrospective analysis was conducted of course evaluations for the Medicine as a Business elective taken by fourth-year medical students at Southern Illinois University School of Medicine (SIU-SOM) in the 2012–2015 academic years. The course was delivered through small-group flipped-classroom discussions in 2012–2014 and via MOOC technology in 2015. Learner ratings were compared between the two delivery methods using routinely collected course evaluations.

Results: Course enrollment ranged from 6 to 19 students per year across the 2012–2015 academic years. Student evaluations of the course were favorable on a 5-point Likert scale in the areas of effective teaching, accurate course objectives, meeting personal learning objectives, recommending the course to other students, and overall impression. The majority of student ratings (76–95%) were the highest possible choice ("Strongly agree" or "Excellent") on every criterion, regardless of whether the course was delivered in the traditional or the MOOC format. Statistical analysis of the ratings suggests that the Effective Teacher and Overall evaluations did not differ between the two delivery formats.

Discussion: Student ratings of this elective course were highly similar whether it was delivered in a flipped-classroom format or via MOOC technology. The primary advantage of the new format is flexibility of time and place for learners, allowing them to complete the course objectives when convenient. The course evaluations suggest this is a change that is acceptable to the target audience.

Conclusions: This small, single-center observational study suggests that learner evaluations of a fourth-year medical school elective do not differ significantly when the course is delivered by flipped-classroom group discussions or via MOOC technology. Further investigation is required to determine whether this delivery method is an acceptable and effective means of teaching in the medical school environment.
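
As an illustration of how such a comparison might be run, the sketch below applies a Mann-Whitney U test to simulated Likert ratings; this is a common choice for small ordinal samples, though the abstract does not name the paper's exact test:

```python
# Illustrative sketch (ratings simulated, cohort sizes hypothetical):
# comparing 5-point Likert "overall" ratings between delivery formats.
from scipy import stats

flipped = [5, 5, 4, 5, 5, 4, 5, 5, 4, 5]  # hypothetical 2012-2014 ratings
mooc = [5, 4, 5, 5, 5, 5, 4, 5]           # hypothetical 2015 ratings

u, p = stats.mannwhitneyu(flipped, mooc, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```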


2019 ◽  
pp. 82-108 ◽  
Author(s):  
Jason Brennan ◽  
Phillip Magness

In the United States, most universities and colleges ask students to complete course evaluations at the end of each semester. They ask students how much they've learned, how much they studied, whether the instructor seemed well-prepared, and how valuable the class was overall. This chapter examines how colleges routinely make faculty hiring, retention, and promotion decisions on the basis of what they ought to know are invalid tests. It argues that student course evaluations do not track teacher effectiveness. Using them as the basis for determining hires, promotions, tenure, or raises for faculty is roughly on par with reading entrails or tea leaves to make such decisions. The chapter also explains why universities continue to use student course evaluations.


2017 ◽  
Author(s):  
Erin Michelle Buchanan ◽  
Becca Nicole Huber ◽  
Arden Miller ◽  
David W. Stockburger ◽  
Marshall Beauchamp

We analyzed student evaluations for 3,585 classes collected over 20 years to determine their stability and to evaluate the relationship of perceived grading to global evaluations, perceived fairness, and the appropriateness of assignments. Using the class as the unit of analysis, we found only modest evaluation reliability when professors taught the same course in the same semester, and much weaker correlations across different courses. Expected grade and grading-related questions correlated with overall evaluations of courses. Course evaluations also differed on expected grades, grading questions, and overall grades between full-time faculty and other types of instructors. These findings are integrated into a model in which grading-related questions mediate the relationship between expected grade and overall course evaluations, with type of instructor as a moderator.
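
A minimal sketch of the mediation-with-moderation structure follows; the file and column names are hypothetical stand-ins for the study's measures:

```python
# Illustrative moderated-mediation sketch ("evaluations.csv",
# "grading_items", "expected_grade", "fulltime", and "overall_eval"
# are hypothetical names; one row per class).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluations.csv")
# fulltime = 1 for full-time faculty, 0 for other instructor types

# Path a: expected grade -> grading-related ratings, moderated by type.
m_a = smf.ols("grading_items ~ expected_grade * fulltime", data=df).fit()
# Paths b and c': if grading_items absorbs most of expected_grade's
# effect on the overall evaluation, that pattern is consistent with
# mediation.
m_b = smf.ols("overall_eval ~ grading_items + expected_grade * fulltime",
              data=df).fit()
print(m_a.params, m_b.params, sep="\n\n")
```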


Author(s):  
Philip Stark ◽  
Richard Freishtat

Student ratings of teaching have been used, studied, and debated for almost a century. This article examines student ratings of teaching from a statistical perspective. The common practice of relying on averages of student teaching evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned for substantive and statistical reasons: There is strong evidence that student responses to questions of “effectiveness” do not measure teaching effectiveness. Response rates and response variability matter. And comparing averages of categorical responses, even if the categories are represented by numbers, makes little sense. Student ratings of teaching are valuable when they ask the right questions, report response rates and score distributions, and are balanced by a variety of other sources and methods to evaluate teaching.
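
The statistical objection to comparing averages of categorical responses can be seen in a two-line example; the numbers are invented:

```python
# Small numeric illustration of the authors' point that averaging
# categorical ratings can mislead: two instructors with identical
# mean scores but very different rating distributions.
import numpy as np

polarizing = np.array([1, 1, 1, 7, 7, 7])  # half lowest, half highest
uniform = np.array([4, 4, 4, 4, 4, 4])     # everyone in the middle

print(polarizing.mean(), uniform.mean())   # both 4.0
print(polarizing.std(), uniform.std())     # 3.0 vs. 0.0
# Reporting full score distributions and response rates, as the
# authors recommend, distinguishes cases the average conflates.
```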


2006 ◽  
Vol 37 (1) ◽  
pp. 21-37 ◽  
Author(s):  
Rosemary J. Avery ◽  
W. Keith Bryant ◽  
Alan Mathios ◽  
Hyojin Kang ◽  
Duncan Bell
