An Instrument for the Student Evaluation of Teaching Effectiveness in Physical Education Activity Courses
Effects of a Season's Training on the Body Composition of Female College Swimmers

Author(s):  
William W. Colvin ◽  
Elmo S. Roundy

Author(s):  
Bob Uttl

Abstract: In higher education, anonymous student evaluation of teaching (SET) ratings are used to measure faculty's teaching effectiveness and to make high-stakes decisions about hiring, firing, promotion, merit pay, and teaching awards. SET have many desirable properties: they are quick and cheap to collect, their means and standard deviations give an aura of precision and scientific validity, and they provide tangible, seemingly objective numbers for both high-stakes decisions and public accountability purposes. Unfortunately, SET are fatally flawed as a measure of teaching effectiveness. First, experts cannot agree on what effective teaching is; they agree only that effective teaching ought to result in learning. Second, SET do not measure faculty's teaching effectiveness, as students do not learn more from more highly rated professors. Third, SET depend on many teaching-effectiveness-irrelevant factors (TEIFs) not attributable to the professor (e.g., students' intelligence, students' prior knowledge, class size, subject). Fourth, SET are influenced by student preference factors (SPFs) whose consideration violates human rights legislation (e.g., ethnicity, accent). Fifth, SET are easily manipulated by chocolates, course easiness, and other incentives. However, student ratings of professors can be used for very limited purposes, such as formative feedback and raising alarm about ineffective teaching practices.


2000 ◽  
Vol 8 ◽  
pp. 50 ◽  
Author(s):  
Robert Sproule

The purpose of the present work is twofold. The first is to outline two arguments that challenge those who advocate the continued exclusive use of raw SET data to determine "teaching effectiveness" in the "summative" function. The second is to answer the question: in the face of such challenges, why do university administrators continue to rely exclusively on these data to determine "teaching effectiveness"?


2004 ◽  
Vol 1 (6) ◽  
Author(s):  
Kim L. Chuah ◽  
Cynthia Hill

The student evaluation, which measures students' perceptions of teacher performance, has increasingly become the predominant component in assessing teaching effectiveness (Waters et al. 1988), and the widespread movement toward outcomes assessment across the country makes this trend likely to continue (McCoy et al. 1994, AACSB 1994, SACS 1995). Substantial research has been conducted on the reliability and accuracy of student evaluations of teaching quality, and a considerable number of uncontrollable factors have been found to bias evaluation ratings. This paper identifies one more such factor: each student has an “evaluator profile,” which decreases the reliability of the student evaluation. An “evaluator profile” is a persistent pattern of evaluating behavior that may or may not be consistent with the quality of the characteristic being evaluated. Each class of students is a random sample of different evaluator profiles, so a student evaluation rating of a teacher's performance is biased up or down depending on the concentration of high or low evaluator profiles present. The paper further shows, through simulation, the degree to which student “evaluator profiles” affect the overall student evaluation rating of teacher performance. We find evidence supporting the “evaluator profile” conjecture and show that these profiles can, in fact, change overall student evaluation ratings substantially.
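The mechanism described above can be illustrated with a minimal simulation sketch. This is not the authors' simulation: the rating scale, class size, number of sections, and the distribution of profile bias below are all assumptions chosen only to show how random draws of persistent evaluator profiles shift class-mean ratings of a teacher whose underlying quality is held fixed.

```python
# Minimal sketch (assumed parameters, not the paper's simulation): persistent
# "evaluator profiles" sampled into class sections shift the class-mean rating
# of a teacher whose true quality never changes.
import numpy as np

rng = np.random.default_rng(42)

true_quality = 3.5          # teacher's fixed underlying quality (1-5 scale)
n_students = 10_000         # population of potential evaluators
class_size = 30             # students sampled into each class section
n_classes = 1_000           # number of simulated class sections

# Each student carries a persistent bias ("evaluator profile"): some rate
# high, some low, independent of the quality being evaluated.
profile_bias = rng.normal(loc=0.0, scale=0.8, size=n_students)

class_means = np.empty(n_classes)
for i in range(n_classes):
    sampled = rng.choice(n_students, size=class_size, replace=False)
    ratings = true_quality + profile_bias[sampled] + rng.normal(0, 0.3, class_size)
    class_means[i] = np.clip(ratings, 1, 5).mean()

print(f"true quality: {true_quality:.2f}")
print(f"class-mean ratings: mean={class_means.mean():.2f}, "
      f"sd={class_means.std():.2f}, "
      f"range=({class_means.min():.2f}, {class_means.max():.2f})")
```

Under these assumed values, sections of the same teacher spread noticeably around the true quality purely because of which evaluator profiles happen to be enrolled.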


2015 ◽  
Vol 57 (6) ◽  
pp. 623-638 ◽  
Author(s):  
Sanjiv Mittal ◽  
Rajat Gera ◽  
Dharminder Kumar Batra

Purpose – There is a debate in the literature about the generalizability of the structure and the validity of measures of Student Evaluation of Teaching Effectiveness (SET). The debate spans the dimensionality and validity of the construct and the use of the measure for summative and formative purposes of teacher evaluation and feedback. The purpose of this paper is to contribute to the debate on these issues. Specifically, the paper tests the relationship of a teacher's “charisma” trait with a measure of SET consisting of the two dimensions of “lecturer ability” and “module attributes.” The setting is an emerging-market and cross-cultural context, with specific reference to India.
Design/methodology/approach – A two-dimensional scale of SET, originally developed by Shevlin et al. (2000) in their UK study, was empirically tested with Indian students and modified. Data were collected from Indian students pursuing an MBA program at a north Indian university, and statistical testing using exploratory and confirmatory factor analyses was undertaken. The proposed relationship of a teacher's “charisma” trait was tested as a reflective construct comprising the two dimensions of SET, using the software package AMOS version 4.0.
Findings – The results indicate that the measure of SET is influenced by the teacher's “charisma” trait, providing evidence of a halo effect. This raises the issue of the validity of SET as an instrument for measuring teaching effectiveness (TE). The results support the hypothesis that the structure of SET is multidimensional, along with the need to adapt the instrument to diverse cultural and market contexts.
Originality/value – This study contributes to the debate on the validity, structure and use of SET as an instrument for measuring TE in a developing market with cross-cultural implications, such as India.
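The halo effect reported here can be sketched with a toy simulation. This is not the authors' AMOS model or data: the loadings, sample size, and noise levels below are assumptions chosen only to show how a single “charisma” trait that loads on both SET dimensions induces correlations between them even when the underlying teaching- and course-related components are independent.

```python
# Minimal sketch (assumed loadings and sample size, not the paper's model):
# one latent "charisma" trait feeding both SET dimensions produces a halo
# effect -- both observed dimensions correlate with charisma and with each
# other despite independent underlying components.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 500

charisma = rng.normal(size=n_teachers)       # latent trait
ability_true = rng.normal(size=n_teachers)   # teaching-related component
module_true = rng.normal(size=n_teachers)    # course-related component

# Observed SET dimensions: each reflects its own component plus charisma (halo).
lecturer_ability = 0.6 * ability_true + 0.5 * charisma + rng.normal(0, 0.3, n_teachers)
module_attributes = 0.6 * module_true + 0.5 * charisma + rng.normal(0, 0.3, n_teachers)

def corr(x, y):
    """Pearson correlation between two vectors."""
    return np.corrcoef(x, y)[0, 1]

print(f"r(charisma, lecturer ability)          = {corr(charisma, lecturer_ability):.2f}")
print(f"r(charisma, module attributes)         = {corr(charisma, module_attributes):.2f}")
print(f"r(lecturer ability, module attributes) = {corr(lecturer_ability, module_attributes):.2f}")
```

The nonzero correlation between the two dimensions in this sketch comes entirely from the shared charisma term, which is the pattern a confirmatory factor analysis would flag as a halo effect on the SET measure.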

