Student Evaluation Of Teacher Performance: Random Pre-Destination

2004 ◽  
Vol 1 (6) ◽  
Author(s):  
Kim L. Chuah ◽  
Cynthia Hill

The student evaluation, a measure of students’ perceptions of teacher performance, has increasingly become the predominant component in assessing teaching effectiveness (Waters et al. 1988), and the widespread movement of outcomes assessment across the country makes this trend likely to continue (McCoy et al. 1994, AACSB 1994, SACS 1995).  Substantial research has been conducted on the reliability and accuracy of student evaluation of teaching quality, and a considerable number of uncontrollable factors have been found to bias evaluation ratings.  This paper identifies one more factor: each student has an “evaluator profile”, which decreases the reliability of the student evaluation.  An “evaluator profile” is a persistent pattern of evaluating behavior that may or may not be consistent with the quality of the characteristic being evaluated.  Each class of students consists of a random sample of different evaluator profiles, so a student evaluation rating of a teacher’s performance is biased up or down depending on the concentration of high or low evaluator profiles present.  This paper further shows through simulation the degree to which student “evaluator profiles” impact the overall student evaluation rating of teacher performance. We find evidence to support the “evaluator profile” conjecture, and these “evaluator profiles” do in fact have the potential to change overall student evaluation ratings substantially.
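The simulation idea above can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual model: it assumes each student carries a fixed rating offset (the "evaluator profile"), so two classes taught identically by the same teacher can still receive visibly different average ratings depending on which profiles happen to be sampled.

```python
import random

random.seed(42)

# Assumed setup: a fixed true teaching quality on a 5-point scale, and a pool
# of students who each carry a persistent personal rating offset (profile).
TRUE_QUALITY = 4.0
NUM_STUDENTS = 1000
CLASS_SIZE = 30
NUM_CLASSES = 500

# Draw each student's persistent evaluator profile once.
profiles = [random.gauss(0.0, 0.7) for _ in range(NUM_STUDENTS)]

def class_rating(profiles, class_size, true_quality):
    """Average rating from a class, i.e. a random sample of evaluator profiles."""
    sample = random.sample(profiles, class_size)
    # Each rating is the true quality plus the student's offset, clipped to [1, 5].
    ratings = [min(5.0, max(1.0, true_quality + p)) for p in sample]
    return sum(ratings) / len(ratings)

ratings = [class_rating(profiles, CLASS_SIZE, TRUE_QUALITY) for _ in range(NUM_CLASSES)]
spread = max(ratings) - min(ratings)
print(f"Same teacher, same quality; rating range across classes: {spread:.2f}")
```

Even though the teacher's underlying quality never changes, the composition of profiles in each class moves the average rating up or down, which is the bias mechanism the abstract describes.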

2015 ◽  
Vol 5 (6) ◽  
pp. 9 ◽  
Author(s):  
Faisal Al-Maamari

It is important to consider whether teacher-, course-, and student-related factors affect student ratings of instructors in Student Evaluation of Teaching (SET) in English Language Teaching (ELT). This paper reports on a statistical analysis of SET in two large EFL programmes at a university in the Sultanate of Oman. I carried out a multiple regression analysis to address two research questions: whether instructor sex, class size, course type and percent participation affect teaching effectiveness scores, and whether response rate can be predicted by instructor sex, class size and course type. The study utilizes a dataset of over 2,000 student ratings obtained from an SET survey covering the period from Fall 2011 through Spring 2014 in these two programmes. Results indicated that the modeled predictors introduced extremely little bias into both teaching quality scores and response rate. Although the effect sizes are extremely small, the results are statistically significant owing to the large sample size (over 2,000 ratings). The findings also suggest that, contrary to claims in some quarters that student ratings are unreliable, students can judge teaching effectiveness and do not allow other teacher-, course- and student-related factors to bias their responses. The study’s significance stems from the fact that it adds to instructional evaluation in ELT, a field characterized by a clear lack of research on SET.
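The regression design described above can be sketched with ordinary least squares on simulated data. This is an illustrative reconstruction, not the study's code or dataset: the predictor names mirror the abstract (instructor sex, class size, course type, percent participation), but the data, codings, and effect sizes are assumptions chosen to reproduce the qualitative finding of tiny effects in a large sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # roughly the number of ratings described in the abstract

# Hypothetical predictors (simulated; dummy codings are assumptions):
sex = rng.integers(0, 2, n)              # 0/1 instructor sex
class_size = rng.integers(10, 60, n)
course_type = rng.integers(0, 2, n)      # 0/1 course type
participation = rng.uniform(20, 100, n)  # percent participation

# Simulate effectiveness scores with only tiny true effects, as the study reports.
score = 4.0 + 0.02 * sex - 0.001 * class_size + rng.normal(0, 0.5, n)
score = np.clip(score, 1, 5)

# OLS fit: design matrix with an intercept column, solved by least squares.
X = np.column_stack([np.ones(n), sex, class_size, course_type, participation])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# R^2: proportion of score variance the predictors explain.
resid = score - X @ beta
r2 = 1 - resid.var() / score.var()
print("coefficients:", np.round(beta, 4))
print("R^2:", round(r2, 4))
```

With a sample this large, even near-zero coefficients can reach statistical significance, which is exactly the distinction the abstract draws between significance and effect size.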


Author(s):  
Bob Uttl

Abstract: In higher education, anonymous student evaluation of teaching (SET) ratings are used to measure faculty’s teaching effectiveness and to make high-stakes decisions about hiring, firing, promotion, merit pay, and teaching awards. SET have many desirable properties: they are quick and cheap to collect, their means and standard deviations give an aura of precision and scientific validity, and they provide tangible, seemingly objective numbers for both high-stakes decisions and public accountability purposes. Unfortunately, SET as a measure of teaching effectiveness are fatally flawed. First, experts cannot agree on what effective teaching is; they agree only that effective teaching ought to result in learning. Second, SET do not measure faculty’s teaching effectiveness, as students do not learn more from more highly rated professors. Third, SET depend on many teaching-effectiveness-irrelevant factors (TEIFs) not attributable to the professor (e.g., students’ intelligence, students’ prior knowledge, class size, subject). Fourth, SET are influenced by student preference factors (SPFs) whose consideration violates human rights legislation (e.g., ethnicity, accent). Fifth, SET are easily manipulated by chocolates, course easiness, and other incentives. However, student ratings of professors can be used for very limited purposes, such as formative feedback and raising alarms about ineffective teaching practices.


2000 ◽  
Vol 8 ◽  
pp. 50 ◽  
Author(s):  
Robert Sproule

The purpose of the present work is twofold. The first is to outline two arguments that challenge those who would advocate a continuation of the exclusive use of raw SET data in the determination of "teaching effectiveness" in the "summative" function. The second purpose is to answer this question: "In the face of such challenges, why do university administrators continue to use these data exclusively in the determination of 'teaching effectiveness'?"


2012 ◽  
Vol 42 (1) ◽  
pp. 129-150 ◽  
Author(s):  
Eduardo de Carvalho Andrade ◽  
Bruno de Paula Rocha

We use a random-effects model to identify the factors that affect student evaluation of teaching (SET) scores. The dataset covers six semesters and 496 undergraduate courses taught by 101 instructors across 89 disciplines. Our empirical findings are: (i) class size negatively affects the SET score; (ii) instructors with more experience are better evaluated, but these gains diminish over time; (iii) participating in training programs designed to improve the quality of teaching did not increase SET scores; (iv) instructors seem to be able to marginally 'buy' a better evaluation by inflating students' grades. Finally, there are noticeable changes in the rankings when we adjust the SET score to eliminate the effects of variables beyond instructors' control, although these changes are not statistically significant.
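The adjustment step described above can be sketched as follows. This is a simplified illustration, not the authors' random-effects estimation: it simulates SET scores partly driven by class size (a factor outside the instructor's control, per finding (i)), removes the estimated class-size effect by OLS, and compares instructor rankings before and after the adjustment. All data and parameter values are assumptions; only the dataset dimensions (496 courses, 101 instructors) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_instructors, n_courses = 101, 496  # dimensions from the abstract

# Assign courses so every instructor teaches at least a few.
instructor = rng.permutation(np.arange(n_courses) % n_instructors)
ability = rng.normal(0, 0.3, n_instructors)   # latent instructor quality (assumed)
class_size = rng.integers(15, 120, n_courses)

# Simulated scores: quality plus a negative class-size effect plus noise.
score = 4.0 + ability[instructor] - 0.005 * class_size + rng.normal(0, 0.2, n_courses)

# Estimate the class-size effect by least squares and remove it from raw scores.
X = np.column_stack([np.ones(n_courses), class_size])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
adjusted = score - beta[1] * (class_size - class_size.mean())

def ranking(values):
    """Instructor ranking (best first) by mean score over their courses."""
    means = [values[instructor == i].mean() for i in range(n_instructors)]
    return np.argsort(means)[::-1]

raw_rank, adj_rank = ranking(score), ranking(adjusted)
changed = int((raw_rank != adj_rank).sum())
print(f"{changed} of {n_instructors} ranking positions changed after adjustment")
```

The adjusted score centers the class-size effect, so instructors who were penalized only for teaching large sections move up relative to the raw ranking, mirroring the ranking shifts the abstract reports.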

