Peer evaluation of teaching effectiveness

1990 ◽  
Vol 54 (10) ◽  
pp. 626-629
Author(s):
MG Winter ◽
SG Kestner


Author(s):
Bob Uttl

In higher education, anonymous student evaluation of teaching (SET) ratings are used to measure faculty’s teaching effectiveness and to make high-stakes decisions about hiring, firing, promotion, merit pay, and teaching awards. SET have many desirable properties: SET are quick and cheap to collect, SET means and standard deviations give an aura of precision and scientific validity, and SET provide tangible, seemingly objective numbers for both high-stakes decisions and public accountability purposes. Unfortunately, SET as a measure of teaching effectiveness are fatally flawed. First, experts cannot agree on what effective teaching is; they agree only that effective teaching ought to result in learning. Second, SET do not measure faculty’s teaching effectiveness, as students do not learn more from more highly rated professors. Third, SET depend on many teaching-effectiveness-irrelevant factors (TEIFs) not attributable to the professor (e.g., students’ intelligence, students’ prior knowledge, class size, subject). Fourth, SET are influenced by student preference factors (SPFs) whose consideration violates human rights legislation (e.g., ethnicity, accent). Fifth, SET are easily manipulated by chocolates, course easiness, and other incentives. However, student ratings of professors can be used for very limited purposes, such as formative feedback and raising alarms about ineffective teaching practices.


2021 ◽  
pp. e20210015
Author(s):  
Stacey A. Fox-Alvarez ◽  
Laura D. Hostnik ◽  
Bobbi Conner ◽  
J.S. Watson

Peer evaluation of teaching (PET) serves an important role as a component of faculty development in the medical education field. With the emergence of COVID-19, the authors recognized the need for a flexible tool that could be used for a variety of lecture formats, including virtual instruction, and that could provide a framework for consistent and meaningful PET feedback. This teaching tip describes the creation and pilot use of a PET rubric, which includes six fixed core items (lesson structure, content organization, audiovisual facilitation, concept development, enthusiasm, and relevance) and items to be assessed separately for asynchronous lectures (cognitive engagement—asynchronous) and synchronous lectures (cognitive engagement—synchronous, discourse quality, collaborative learning, and check for understanding). The instrument packet comprises the rubric, instructions for use, definitions and examples of each item, plus three training videos for users to compare with the authors’ consensus training scores; these serve as frame-of-reference training. The instrument was piloted among veterinary educators, and feedback was sought in a focus group setting. The instrument was well received, and training and use required a minimal time commitment. Inter-rater reliability within 1 Likert scale point (adjacent agreement) was assessed for each of the training videos, and consistency of scoring was demonstrated between focus group members using percent agreement (0.82, 0.85, 0.88) and between focus group members and the authors’ consensus training scores (all videos: 0.91). This instrument may serve as a helpful resource for institutions looking for a framework for PET. We intend to continually adjust the instrument in response to feedback from wider use.
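For readers unfamiliar with the agreement statistics mentioned above, the following is a minimal sketch of how adjacent agreement (pairs of ratings within 1 Likert scale point) and exact percent agreement can be computed. The function name and the rating values are illustrative assumptions, not the authors' data or code.

```python
def agreement(ratings_a, ratings_b, tolerance=1):
    """Fraction of paired Likert ratings that differ by at most `tolerance` points.

    tolerance=1 gives adjacent agreement; tolerance=0 gives exact percent agreement.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a, "need equal-length, non-empty lists"
    within = sum(1 for a, b in zip(ratings_a, ratings_b) if abs(a - b) <= tolerance)
    return within / len(ratings_a)

# Hypothetical example: two raters scoring the six core rubric items on a 5-point scale.
rater_1 = [4, 3, 5, 4, 2, 4]
rater_2 = [4, 4, 5, 3, 2, 5]

print(agreement(rater_1, rater_2))     # adjacent agreement: 1.0 (all pairs within 1 point)
print(agreement(rater_1, rater_2, 0))  # exact agreement: 0.5 (3 of 6 pairs identical)
```

In practice such pairwise values would be computed for every rater pair (or rater versus consensus score) on each training video and then averaged, which is consistent with the per-video figures reported above.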


2000 ◽  
Vol 8 ◽  
pp. 50
Author(s):  
Robert Sproule

The purpose of the present work is twofold. The first is to outline two arguments that challenge those who advocate the continued, exclusive use of raw SET data in determining "teaching effectiveness" in the "summative" function. The second is to answer this question: "In the face of such challenges, why do university administrators continue to use these data exclusively in the determination of 'teaching effectiveness'?"

