A path analysis of factors often found to be related to student ratings of teaching effectiveness

1979 ◽  
Vol 11 (2) ◽  
pp. 111-123 ◽  
Author(s):  
Stephen A. Stumpf ◽  
Richard D. Freedman ◽  
Joseph C. Aguanno
2010 ◽  
Vol 39 (1) ◽  
Author(s):  
Tanya Beran ◽  
Claudio Violato

Characteristics of university courses and student engagement were examined in relation to student ratings of instruction. The Universal Student Ratings of Instruction instrument was administered to students at the end of every course at a major Canadian university over a three-year period. Using a two-step analytic procedure, a latent variable path model was created. The model showed a moderate fit to the data (Comparative Fit Index = .88), converged in _0 iterations, with a standardized residual mean error of .03, χ²(_49) = _988.59, p < .05. The model indicated that course characteristics such as status and description are not directly related to student ratings. Rather, they are mediated by student engagement, which is measured by student attendance and expected grade. It was concluded that, although the model is statistically adequate, many other factors determine how students rate their instructors.
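As a rough illustration of the fit index reported above: the Comparative Fit Index compares the noncentrality (χ² minus degrees of freedom, floored at zero) of the fitted model to that of a baseline independence model. The sketch below uses hypothetical baseline values, not figures from the study:

```python
def cfi(chi2_model, df_model, chi2_baseline, df_baseline):
    """Comparative Fit Index: 1 minus the ratio of the model's
    noncentrality (chi2 - df, floored at 0) to the baseline's."""
    d_model = max(chi2_model - df_model, 0.0)
    d_baseline = max(chi2_baseline - df_baseline, 0.0)
    return 1.0 - d_model / d_baseline

# Hypothetical baseline chi-square and df, for illustration only
print(round(cfi(988.59, 49, 8000.0, 66), 2))  # → 0.88
```

A CFI near 1 indicates the model reduces misfit substantially relative to the baseline; values around .88, as here, are conventionally read as moderate rather than good fit.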


Author(s):  
Ronald A Berk ◽  
Phyllis L Naumann ◽  
Susan E Appling

Peer observation of classroom and clinical teaching has received increased attention over the past decade in schools of nursing to augment student ratings of teaching effectiveness. One essential ingredient is the scale used to evaluate performance. A five-step systematic procedure for adapting, writing, and building any peer observation scale is described. The differences between the development of a classroom observation scale and an appraisal scale to observe clinical instructors are examined. Psychometric issues peculiar to observation scales are discussed in terms of content validity, eight types of response bias, and interobserver reliability. The applications of the scales in one school of nursing as part of the triangulation of methods with student ratings and the teaching portfolio are illustrated. Copies of the scales are also provided.
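The interobserver reliability mentioned above is commonly quantified with chance-corrected agreement statistics such as Cohen's kappa (the specific statistic used by the authors is not stated in the abstract; this is a generic sketch with made-up ratings):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same category
    chance = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - chance) / (1 - chance)

# Hypothetical scores from two peer observers on a 1-5 scale
a = [3, 4, 4, 5, 3, 2, 4, 5]
b = [3, 4, 3, 5, 3, 2, 4, 4]
print(round(cohens_kappa(a, b), 2))  # → 0.65
```

Kappa of 1 means perfect agreement beyond chance, 0 means agreement no better than chance; this is why raw percent agreement alone overstates observation-scale reliability.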


1984 ◽  
Vol 2 (2) ◽  
pp. 5-30 ◽  
Author(s):  
Penny Wright ◽  
Ray Whittington ◽  
G.E. Whittenburg

Author(s):  
Philip Stark ◽  
Richard Freishtat

Student ratings of teaching have been used, studied, and debated for almost a century. This article examines student ratings of teaching from a statistical perspective. The common practice of relying on averages of student teaching evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned for substantive and statistical reasons: There is strong evidence that student responses to questions of “effectiveness” do not measure teaching effectiveness. Response rates and response variability matter. And comparing averages of categorical responses, even if the categories are represented by numbers, makes little sense. Student ratings of teaching are valuable when they ask the right questions, report response rates and score distributions, and are balanced by a variety of other sources and methods to evaluate teaching.
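The point about averaging categorical responses can be made concrete with a small example (ratings below are invented for illustration): two instructors can have identical average scores while their rating distributions tell very different stories, which is why the authors recommend reporting score distributions rather than means alone.

```python
from collections import Counter
from statistics import mean

# Hypothetical 1-5 ratings for two instructors (illustration only)
polarizing = [1, 1, 1, 5, 5, 5]   # students split between extremes
consistent = [3, 3, 3, 3, 3, 3]   # uniformly middling ratings

# Identical averages hide very different distributions
print(mean(polarizing), mean(consistent))       # → 3 3
print(Counter(polarizing))
print(Counter(consistent))
```

Ranking these two instructors by mean score treats them as equivalent, even though a department would likely want to investigate the polarized case; the full distribution carries the information the average discards.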

