Confirming the Factor Structure of a Research-Based Mid-Semester Evaluation of College Teaching

2020 ◽  
Vol 38 (7) ◽  
pp. 866-881
Author(s):  
Alice E. Donlan ◽  
Virginia L. Byrne

End-of-semester evaluations provide scalable data for university administrators, but typically do not provide instructors with timely feedback to inform their teaching practices. Midsemester evaluations have the potential to provide instructors with beneficial formative feedback that can contribute to improved teaching practices and student engagement. However, existing research on the construction of valid, reliable midsemester tools is rare, and there are no existing midsemester evaluation scales that were constructed using education research and psychometric analysis. To address this gap, we designed and piloted a midsemester evaluation of teaching with 29 instructors and 1,350 undergraduate students. We found evidence that our Mid-Semester Evaluation of College Teaching (MSECT) is a valid and reliable measure of four constructs of effective teaching: classroom climate, content, teaching practices, and assessment. Furthermore, our factor structure remained consistent across instructor genders, providing evidence that the MSECT may be less susceptible to gender bias than prior student evaluation measures.

Author(s):  
Rentauli Maria Silalahi

Student evaluation of teaching (SET) has been shown to improve teachers’ teaching practices and students’ learning experiences, despite being used mainly for accountability purposes. Indonesian teachers’ perceptions of SET, however, remain largely unexplored. This qualitative study therefore investigated how four Indonesian university teachers perceived SET, how SET affected their teaching practices, and what roles they believed the university should play in implementing SET properly. The participants taught English to undergraduate students at a private Indonesian university. Data were collected through semi-structured interviews and analysed qualitatively. The teachers perceived SET positively, had made conscious changes to improve their teaching practices and students’ learning, and believed the institution had supported teachers in meeting students’ needs, especially during the campus closure caused by the COVID-19 pandemic and the resulting transition to online learning. The institution where the participants taught used SET only for formative, improvement-oriented purposes. Using SET in this way matters because it places less pressure and anxiety on teachers, making them more willing to act on student feedback. By contrast, using SET for accountability purposes may create extra work for teachers and leave them feeling manipulated and distrusted.


Author(s):  
Natalia Manuhutu

This study investigated students’ perceptions of the use of Robert Frost’s poetry in a writing class at the English Literature Department of Musamus University, gathered through a survey. A total of 17 undergraduate students enrolled in the writing class participated. The participants responded to a questionnaire and open-ended questions addressing two focal points: (1) how the students perceived the use of Robert Frost’s poetry in the teaching of writing, and (2) how Frost’s poetry was implemented to improve students’ short-story writing. The results revealed that working with Frost’s poetry made short-story writing easier for the students. Most participants responded positively to the use of Frost’s poetry in learning to write a short story, and they appeared to prefer learning short-story writing through English poetry in writing classes. The concluding discussion suggests that instructors should weigh students’ wants and needs, gauged through student evaluation of teaching, to keep improving the teaching of writing and the use of authentic materials and media in the English Literature Department at Musamus University.


2000 ◽  
Vol 8 ◽  
pp. 50 ◽  
Author(s):  
Robert Sproule

The purpose of the present work is twofold. The first is to outline two arguments that challenge those who would advocate continuing the exclusive use of raw SET data to determine "teaching effectiveness" in the "summative" function. The second is to answer this question: "In the face of such challenges, why do university administrators continue to use these data exclusively in the determination of 'teaching effectiveness'?"


Author(s):  
Bob Uttl ◽  
Victoria C. Violo

In a widely cited and widely discussed study, MacNell et al. (2015) [1] examined SET ratings of one female and one male instructor, each teaching two sections of the same online course, one section under their true gender and the other under a false/opposite gender. MacNell et al. concluded that students rated perceived female instructors more harshly than perceived male instructors, demonstrating gender bias against perceived female instructors. Boring, Ottoboni, and Stark (2016) [2] re-analyzed MacNell et al.’s data and confirmed their conclusions. However, the design of the MacNell et al. study is fundamentally flawed. First, MacNell et al.’s section sample sizes were extremely small, ranging from 8 to 12 students. Second, MacNell et al. included only one female and one male instructor. Third, MacNell et al.’s findings depend on three outliers – three unhappy students (all in perceived-female conditions) who gave their instructors the lowest possible ratings on all or nearly all SET items. We re-analyzed MacNell et al.’s data with and without the three outliers. Our analyses showed that the gender bias against perceived female instructors disappeared. Instead, students rated the actual female instructor higher than the actual male instructor, regardless of perceived gender. MacNell et al.’s study is a real-life demonstration that conclusions based on studies with extremely small samples are unwarranted and uninterpretable.


2021 ◽  
Vol 10 (1) ◽  
pp. 74
Author(s):  
Li Li ◽  
Jingya Zhang

This study explores the perceptions of Chinese undergraduate students taking online courses amid the Covid-19 pandemic. Drawing on semi-structured interviews conducted after students completed the online courses, the study yields three main findings. First, university administrators are expected to communicate more with students to hear their concerns and offer assistance accordingly. Second, instructors should incorporate more interactive activities to promote learning, create a relaxed learning environment, and provide timely feedback to students. Third, undergraduate students should employ appropriate learning strategies, including becoming independent learners, making self-regulated learning plans, managing their time, and practicing self-motivation. Implications for online learning practices are discussed.


Pedagogika ◽  
2021 ◽  
Vol 143 (3) ◽  
pp. 45-67
Author(s):  
Lidon Moliner ◽  
Aida Sanahuja ◽  
Francisco Alegre

The aims of this study were to create and validate a questionnaire designed to assess schoolteachers’ pedagogical beliefs, drawing on responses from 641 schoolteachers and 26 experts, and to analyse the resulting data. A seven-factor structure was defined for the questionnaire, and Cronbach’s alpha was .91. Compared with their older, more experienced, and male counterparts, younger, less experienced, and female teachers, respectively, held more positive beliefs about factors such as classroom climate, the teacher’s role, and the student’s role.


Author(s):  
Milica Maričić ◽  
Aleksandar Đoković ◽  
Veljko Jeremić

Student evaluation of teaching (SET) has steadily but surely become an important assessment tool in higher education. Although SET provides feedback on students’ level of satisfaction with the course and the lecturer, the validity of its results has been questioned. Across numerous studies, one factor believed to distort SET results is the gender of the lecturer. In this paper, Potthoff analysis is employed to further explore whether there is gender bias in SET; this analysis has been used with great success to compare linear regression models between groups. We model the overall impression of the lecturer with independent variables related to teaching, communication skills, and grading, and compare the models between genders. The results reveal that gender bias exists in certain cases in the observed SET. We believe our research may provide additional insight into the topic of gender bias in SET.


Author(s):  
Bob Uttl ◽  
Victoria Violo

In a recent small-sample study, Khazan et al. (2020) examined SET ratings received by one female teaching assistant (TA) who assisted with two sections of the same online course, one section under her true gender and one under a false/opposite gender. Khazan et al. concluded that their study demonstrated gender bias against the female TA even though they found no statistically significant difference in SET ratings between the male and female TA identities (p = .73). To claim gender bias, Khazan et al. set aside their overall findings and focused on the distribution of six negative SET ratings, claiming, without reporting any statistical test results, that (a) female students gave more positive ratings to the male TA than to the female TA, (b) the female TA received five times as many negative ratings as the male TA, and (c) female students gave most of the low scores to the female TA. We conducted the missing statistical tests and found no evidence supporting Khazan et al.’s claims. We also requested Khazan et al.’s data to formally examine them for outliers and to re-analyze them with and without the outliers. Khazan et al. refused. We therefore read the data off their Figure 1 and filled in several values using a brute-force, exhaustive search constrained by the summary statistics Khazan et al. reported. Our re-analysis revealed six outliers and no evidence of gender bias. In fact, when the six outliers were removed, the female TA was rated higher than the male TA, though not significantly so.

