Development, Evaluation, and Utility of a Peer Evaluation Form for Online Teaching

2014 ◽  
Vol 39 (1) ◽  
pp. 22-25
Author(s):  
Carol D. Gaskamp ◽  
Eileen Kintner
2021 ◽  
pp. e20210015
Author(s):  
Stacey A. Fox-Alvarez ◽  
Laura D. Hostnik ◽  
Bobbi Conner ◽  
J.S. Watson

Peer evaluation of teaching (PET) serves an important role as a component of faculty development in the medical education field. With the emergence of COVID-19, the authors recognized the need for a flexible tool that could be used for a variety of lecture formats, including virtual instruction, and that could provide a framework for consistent and meaningful PET feedback. This teaching tip describes the creation and pilot use of a PET rubric, which includes six fixed core items (lesson structure, content organization, audiovisual facilitation, concept development, enthusiasm, and relevance) and items to be assessed separately for asynchronous lectures (cognitive engagement—asynchronous) and synchronous lectures (cognitive engagement—synchronous, discourse quality, collaborative learning, and check for understanding). The instrument packet comprises the rubric; instructions for use; definitions and examples of each item; and three training videos for users to compare with the authors’ consensus training scores, which serve as frame-of-reference training. The instrument was piloted among veterinary educators, and feedback was sought in a focus group setting. The instrument was well received, and training and use required a minimal time commitment. Inter-rater reliability within 1 Likert scale point (adjacent agreement) was assessed for each of the training videos, and consistency of scoring was demonstrated between focus group members using percent agreement (0.82, 0.85, 0.88) and between focus group members and the authors’ consensus training scores (all videos: 0.91). This instrument may serve as a helpful resource for institutions looking for a framework for PET. We intend to continually adjust the instrument in response to feedback from wider use.
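The adjacent-agreement statistic reported above (the proportion of paired ratings that fall within 1 Likert scale point of each other) can be sketched in a few lines. This is a minimal illustration of the general metric, not the authors' analysis code; the function name and the sample scores are invented for the example.

```python
def adjacent_agreement(rater_a, rater_b, tolerance=1):
    """Fraction of paired ratings differing by at most `tolerance` points."""
    if len(rater_a) != len(rater_b):
        raise ValueError("rating lists must be the same length")
    within = sum(abs(a - b) <= tolerance for a, b in zip(rater_a, rater_b))
    return within / len(rater_a)

# Hypothetical example: two raters scoring six rubric items on a 5-point scale
scores_a = [4, 3, 5, 2, 4, 3]
scores_b = [5, 3, 3, 2, 4, 4]
print(adjacent_agreement(scores_a, scores_b))  # 5 of 6 pairs within 1 point
```

Exact percent agreement is the special case `tolerance=0`; the looser 1-point criterion is common for Likert-type rubrics because it tolerates small, ordinally adjacent disagreements.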


2014 ◽  
Vol 18 (3) ◽  
Author(s):  
Katrina A. Meyer ◽  
Vicki S. Murrell

This article presents the results of a national study of 39 higher education institutions that collected information about their evaluation procedures and outcome measures for faculty development for online teaching conducted during 2011-2012. The survey found that over 90% of institutions measured faculty members’ satisfaction with the training and its perceived usefulness, while student GPAs were used by only 30% of institutions. As for how evaluations were conducted, online evaluations were used by 80% of institutions and focus groups by 21%.


2019 ◽  
Vol 5 (1) ◽  
pp. 52-60
Author(s):  
Robin Hailstorks ◽  
Karen E. Stamm ◽  
John C. Norcross ◽  
Rory A. Pfund ◽  
Peggy Christidis

1987 ◽  
Author(s):  
William P. Erchul
