Student Evaluation of College Teaching Effectiveness: A Brief Review

1998 ◽  
Vol 23 (2) ◽  
pp. 191-212 ◽  
Author(s):  
Howard K. Wachtel


Author(s):  
Bob Uttl

In higher education, anonymous student evaluation of teaching (SET) ratings are used to measure faculty’s teaching effectiveness and to make high-stakes decisions about hiring, firing, promotion, merit pay, and teaching awards. SET have many desirable properties: they are quick and cheap to collect, their means and standard deviations give an aura of precision and scientific validity, and they provide tangible, seemingly objective numbers for both high-stakes decisions and public accountability purposes. Unfortunately, SET as a measure of teaching effectiveness are fatally flawed. First, experts cannot agree on what effective teaching is; they agree only that effective teaching ought to result in learning. Second, SET do not measure faculty’s teaching effectiveness, as students do not learn more from more highly rated professors. Third, SET depend on many teaching effectiveness irrelevant factors (TEIFs) not attributable to the professor (e.g., students’ intelligence, students’ prior knowledge, class size, subject). Fourth, SET are influenced by student preference factors (SPFs) whose consideration violates human rights legislation (e.g., ethnicity, accent). Fifth, SET are easily manipulated by chocolates, course easiness, and other incentives. Student ratings of professors can, however, be used for very limited purposes such as formative feedback and raising alarm about ineffective teaching practices.


Author(s):  
Mehrak Rahimi

In this chapter, the impact of using a learning management system (LMS) on pre-service and in-service teachers' evaluation of a teacher educator at a teacher training university was compared. Two groups of students participated in the study and for one semester experienced blended learning, in which the academic activities of the course Materials Evaluation and Syllabus Design were extended online via an LMS. At the end of the course, the two groups' evaluations of the instructor's teaching were compared in two respects: teaching style and student-teacher interaction. The results showed a significant difference between the two groups' evaluations of the educator. Pre-service teachers held more positive attitudes towards the educator's teaching effectiveness and were more satisfied with both the teacher's teaching style and social behavior.


2019 ◽  
Vol 11 (3) ◽  
pp. 604-615
Author(s):  
Mahmoud AlQuraan

Purpose
The purpose of this paper is to investigate the effect of insufficient effort responding (IER) on the construct validity of student evaluations of teaching (SET) in higher education.

Design/methodology/approach
A total of 13,340 SET surveys collected by a major Jordanian university to assess teaching effectiveness were analyzed in this study. A detection method was used to identify IER, and construct (factorial) validity was assessed using confirmatory factor analysis (CFA) and principal component analysis (PCA) before and after removing the detected IER.

Findings
The results show that 2,160 of the 13,340 SET surveys were flagged as insufficient effort responses, or 16.2 percent of the sample. Moreover, the CFA and PCA results show that removing the detected IER statistically enhanced the construct (factorial) validity of the SET survey.

Research limitations/implications
Since IER responses are often ignored by researchers and practitioners in industrial and organizational psychology (Liu et al., 2013), the results of this study strongly suggest that higher education administrations should give the necessary attention to IER responses, as SET results are used in making critical decisions.

Practical implications
The results recommend that universities carefully design online SET surveys and provide students with clear instructions in order to minimize students’ engagement in IER. Moreover, since SET results are used in making critical decisions, higher education administrations should attend to IER by examining the IER rate in their data sets and its consequences for data quality.

Originality/value
A review of the related literature shows that this is the first study to investigate the effect of IER on the construct validity of SET in higher education using an IRT-based detection method.
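The abstract does not spell out the IRT-based detection procedure, but a simpler, commonly used IER screen is the "long-string" index: the longest run of identical consecutive answers on a survey. The sketch below (with illustrative data and a hypothetical `max_run` cutoff, not the study's method or parameters) shows how flagged surveys could be separated out before re-running factor-analytic validity checks.

```python
# Hypothetical illustration of an IER screen using the "long-string" index.
# The study itself used an IRT-based detection method; this simpler proxy
# only demonstrates the workflow of flagging and removing suspect surveys.

def longest_run(responses):
    """Length of the longest run of identical consecutive ratings."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def flag_ier(surveys, max_run=10):
    """Split surveys into (kept, flagged) by the long-string criterion."""
    kept, flagged = [], []
    for s in surveys:
        (flagged if longest_run(s) >= max_run else kept).append(s)
    return kept, flagged

surveys = [
    [4, 5, 3, 4, 5, 4, 3, 4, 5, 4, 3, 5],   # varied ratings: plausible effort
    [3] * 12,                                # straight-lining: flagged as IER
]
kept, flagged = flag_ier(surveys, max_run=10)
```

In practice the retained surveys (`kept`) would then be passed to the CFA/PCA step, mirroring the before/after comparison the study reports.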


2000 ◽  
Vol 8 ◽  
pp. 50 ◽  
Author(s):  
Robert Sproule

The purpose of the present work is twofold. The first is to outline two arguments challenging those who advocate the continued exclusive use of raw SET data to determine "teaching effectiveness" in the "summative" function. The second is to answer the question: "In the face of such challenges, why do university administrators continue to use these data exclusively in the determination of 'teaching effectiveness'?"


2004 ◽  
Vol 1 (6) ◽  
Author(s):  
Kim L. Chuah ◽  
Cynthia Hill

The student evaluation, used to measure students’ perceptions of teacher performance, has increasingly become the predominant component in assessing teaching effectiveness (Waters et al. 1988), and the widespread movement toward outcomes assessment across the country makes this trend likely to continue (McCoy et al. 1994, AACSB 1994, SACS 1995).  Substantial research has been conducted on the reliability and accuracy of student evaluations of teaching quality, and a considerable number of uncontrollable factors have been found to bias evaluation ratings.  This paper identifies one more factor.  Each student has an “evaluator profile”, a persistent pattern of evaluating behavior that may or may not be consistent with the quality of the characteristic being evaluated, and these profiles decrease the reliability of the student evaluation.  Each class of students consists of a random sample of different evaluator profiles, so a student evaluation rating of a teacher’s performance is biased up or down depending on the concentration of high or low evaluator profiles present.  Through simulation, this paper shows the degree to which student “evaluator profiles” affect the overall student evaluation rating of teacher performance. We find evidence to support the “evaluator profile” conjecture and show that these profiles do in fact have the potential to change overall student evaluation ratings substantially.
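As a rough illustration of the conjecture, the following sketch simulates classes of students whose ratings combine a fixed instructor quality with a persistent per-student bias (the "evaluator profile"). All parameter values are illustrative assumptions, not the paper's simulation settings; the point is only that identical instructor quality can yield different class averages depending on the random mix of profiles.

```python
# Hypothetical sketch of the "evaluator profile" conjecture: each student's
# rating is the true instructor quality plus a persistent personal bias,
# clipped to a 1-5 scale. With the same true quality, class averages vary
# with the mix of high- and low-bias evaluators sampled into each class.
import random

def simulate_class_means(true_quality=3.5, class_size=30, n_classes=2000,
                         profile_sd=0.8, seed=1):
    random.seed(seed)
    means = []
    for _ in range(n_classes):
        ratings = []
        for _ in range(class_size):
            bias = random.gauss(0, profile_sd)         # the student's profile
            ratings.append(min(5, max(1, true_quality + bias)))
        means.append(sum(ratings) / class_size)
    return means

means = simulate_class_means()
spread = max(means) - min(means)   # same instructor, different class averages
```

Under these assumed parameters the spread between the luckiest and unluckiest class average is substantial even though the instructor's underlying quality never changes, which is the mechanism the abstract describes.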


2011 ◽  
Vol 5 (3) ◽  
Author(s):  
Gregory C. Potter ◽  
George C. Romeo ◽  
Da-Hsien Bao ◽  
Robert E. Pritchard

This paper investigates the variability of student teaching-effectiveness survey evaluations among recitation sections when lecture/recitation instruction is used and the same instructor both delivers the lecture and teaches all of the corresponding recitation sections.  The study focuses on the variability of students’ responses to each item in the survey instrument, as measured by the item’s standard deviation.  The results indicate that when an instructor teaches multiple sections using lecture/recitation instruction, the meaningful measure of the instructor’s teaching is the average of the student ratings across the recitation sections.

