Studying Psychological Aspects of New Methods of Teaching Effectiveness on Higher Education of Iran

2015, Vol 11 (22)
Author(s): Maryam Rahimi Mand, Abbas Abbaspour
2007, Vol 4 (3)
Author(s): Obeua S. Persons

This study identified two important factors, unrelated to an instructor's teaching ability, that can affect an instructor's teaching evaluations. The first factor, which has not been examined in any prior study, is the section effect. This study finds that teaching evaluations differ significantly across sections of the same course taught by the same instructor. This section effect cannot be explained by six student-related variables. The second factor, students' pre-course interest measured at the beginning of a course, is found to be positively related to teaching evaluations. These findings suggest that higher-education administrators may want to consider the section effect and students' pre-course interest when they evaluate an instructor's teaching effectiveness for promotion, tenure, and merit decisions.
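The section effect described above is, in statistical terms, a between-groups difference in mean ratings. As a minimal sketch (not the study's actual method or data), a one-way ANOVA on synthetic 1-5 ratings for three hypothetical sections of the same course shows how such a difference could be tested:

```python
# Illustrative only: one-way ANOVA testing whether mean teaching evaluations
# differ across three sections of the same course taught by the same
# instructor. The ratings below are invented for demonstration.
from scipy import stats

section_a = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4]
section_b = [3.1, 3.5, 2.9, 3.3, 3.6, 3.0]
section_c = [4.0, 4.1, 3.7, 4.3, 3.8, 4.2]

f_stat, p_value = stats.f_oneway(section_a, section_b, section_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Evaluations differ significantly across sections (a 'section effect').")
```

A significant F statistic here would indicate that mean ratings vary across sections beyond what within-section noise explains, which is the pattern the abstract reports.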


2009, Vol 6 (4)
Author(s): Audrey Amrein-Beardsley, Thomas Haladyna

For over 30 years, survey instruments have been used in colleges of higher education to measure instructional effectiveness. Extensive research has been conducted to determine which items best capture this construct. This research study was triggered by a college of education's enthusiastic but failed attempt to create a new and improved instructor survey based on this research. Researchers found that the new instrument was no better than its predecessor: student halo ratings contaminated results, reliability was lower than expected, and the survey results indicated a single dimension, general teaching effectiveness. Two associated variables of considerable interest, course relevance and rigor/demand, were also contaminated by student halo ratings. Based on these findings and the extensive literature on student surveys of teaching effectiveness, we argue that traditional surveys based on conventional items may be valid for evaluating global teaching effectiveness and other summative purposes, but not for the formative, self-diagnostic, and reflective purposes anticipated. New ways of evaluating teaching in higher education are presented and discussed. The article shares insights into theory-based survey development and a plan for validation.


Author(s): Bob Uttl

Abstract: In higher education, anonymous student evaluation of teaching (SET) ratings are used to measure faculty's teaching effectiveness and to make high-stakes decisions about hiring, firing, promotion, merit pay, and teaching awards. SET have many desirable properties: they are quick and cheap to collect, their means and standard deviations give an aura of precision and scientific validity, and they provide tangible, seemingly objective numbers for both high-stakes decisions and public-accountability purposes. Unfortunately, SET as a measure of teaching effectiveness are fatally flawed. First, experts cannot agree on what effective teaching is; they agree only that effective teaching ought to result in learning. Second, SET do not measure faculty's teaching effectiveness, as students do not learn more from more highly rated professors. Third, SET depend on many teaching effectiveness irrelevant factors (TEIFs) not attributable to the professor (e.g., students' intelligence, students' prior knowledge, class size, subject). Fourth, SET are influenced by student preference factors (SPFs) whose consideration violates human rights legislation (e.g., ethnicity, accent). Fifth, SET are easily manipulated by chocolates, course easiness, and other incentives. However, student ratings of professors can be used for very limited purposes, such as formative feedback and raising alarm about ineffective teaching practices.


2018, Vol 6 (1), pp. 516-522
Author(s): Kalina Peycheva, Mariela Deliverska

Regardless of what patients and medical professionals might think, there is no free medicine nowadays. The need to change this pattern is emphasized, and people should become more responsible for their own health. The aim is to find a connection between trust in GPs, prophylactic check-ups, new methods of treatment, and the willingness of patients to pay for the medical services received. Material and Method: A questionnaire was prepared for the purposes of the study. The methods utilized were a direct individual anonymous questionnaire and statistical methods, both descriptive and analytical (Chi-square). The answers were examined and statistically processed according to the age, gender, and education level of the participants. Results: 1. Trust in GPs is very low: only 14.5% believe in their GP. 2. The percentage of those who believe in prophylactic check-ups is high, at 57.9%. 3. The percentage of those who believe in new methods and means of treatment is high, over 80%, with no difference found with respect to the patients' education level. 4. Patients often (86%) pay for treatment by a specialist. 5. People with higher education pay for medical care more readily. Conclusions: 1. The lack of trust in GPs, combined with the strong belief in prophylactic check-ups and in new methods for the diagnosis and treatment of diseases, leads to higher patient expectations of medical services and a readiness to pay for these services. 2. Patients indicate a readiness to pay for medical services, which is part of their readiness to take care of their own health.
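As an illustration of the Chi-square analysis mentioned above, the following sketch tests whether willingness to pay is independent of education level. The 2x2 contingency table uses invented counts, not the study's data:

```python
# Illustrative chi-square test of independence on a synthetic 2x2 table.
# Rows: higher education, secondary education
# Columns: willing to pay, not willing to pay
from scipy.stats import chi2_contingency

observed = [[80, 20],
            [55, 45]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With these made-up counts, the small p-value would suggest that willingness to pay is associated with education level, the direction of finding the abstract reports for people with higher education.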


2020, Vol 52 (9), pp. 1305-1329
Author(s): Corinne Jacqueline Perera, Zamzami Zainuddin, Chua Yan Piaw, Kenny S. L. Cheah, David Asirvatham

Teachers at urban higher education institutions often explore new methods of teaching using innovative techno-pedagogical approaches. This study reports on postgraduate students' perceptions of a blended learning mode of delivery, co-taught by two lecturers concurrently during the "Qualitative Research" elective course offered in the Master of Educational Leadership program at a reputable Malaysian university. A qualitative action research methodology was adopted for this study, with students' comments captured through Padlet. Results indicate that students have very positive perceptions of the experiences gained through blended learning and co-lecturing. The findings of this action research study provide evidence of the meaningful and personalized learning experiences reported by students, gained through the collaborative blended mode of delivery. The results also provide more thoughtful reflections for teachers to draw on students' feedback and possibly adapt their teaching practices to better accommodate students' learning needs.


2019, Vol 11 (3), pp. 604-615
Author(s): Mahmoud AlQuraan

Purpose: The purpose of this paper is to investigate the effect of insufficient effort responding (IER) on the construct validity of student evaluations of teaching (SET) in higher education.
Design/methodology/approach: A total of 13,340 SET surveys collected by a major Jordanian university to assess teaching effectiveness were analyzed in this study. A detection method was used to detect IER, and construct (factorial) validity was assessed using confirmatory factor analysis (CFA) and principal component analysis (PCA) before and after removing the detected IER.
Findings: The results of this study show that 2,160 of the 13,340 SET surveys were flagged as insufficient effort responses, representing 16.2 percent of the sample. Moreover, the results of CFA and PCA show that removing the detected IER statistically enhanced the construct (factorial) validity of the SET survey.
Research limitations/implications: Since IER responses are often ignored by researchers and practitioners in industrial and organizational psychology (Liu et al., 2013), the results of this study strongly suggest that higher education administrations should give the necessary attention to IER responses, as SET results are used in making critical decisions.
Practical implications: The results of the current study recommend that universities carefully design online SET surveys and provide students with clear instructions in order to minimize students' engagement in IER. Moreover, since SET results are used in making critical decisions, higher education administrations should give the necessary attention to IER by examining the IER rate in their data sets and its consequences for data quality.
Originality/value: A review of the related literature shows that this is the first study to investigate the effect of IER on the construct validity of SET in higher education using an IRT-based detection method.
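The paper uses an IRT-based detection method, which is not reproduced here. As a much simpler illustration of the general idea of flagging insufficient effort responding, this sketch applies a "long-string" heuristic, flagging respondents whose longest run of identical consecutive ratings exceeds a threshold, on synthetic survey data:

```python
# Illustrative long-string IER screen on synthetic 1-5 survey responses.
# 50 "careful" respondents with varied ratings, 10 "careless" straight-liners.
import numpy as np

rng = np.random.default_rng(0)
careful = rng.integers(1, 6, size=(50, 10))   # varied ratings, 10 items each
careless = np.full((10, 10), 5)               # straight-lining: all 5s
responses = np.vstack([careful, careless])

def longest_string(row):
    """Length of the longest run of identical consecutive responses."""
    best = run = 1
    for a, b in zip(row, row[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

# Threshold of 8 identical answers in a row is a judgment call, for illustration.
flags = np.array([longest_string(r) for r in responses]) >= 8
print(f"Flagged {flags.sum()} of {len(responses)} respondents as possible IER")
cleaned = responses[~flags]
```

In the study's workflow, factor analyses (CFA/PCA) would then be run on `responses` and on `cleaned` to compare construct validity before and after removal; the detection step itself used an IRT-based method rather than this heuristic.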


2019, Vol 36 (2), pp. 202-216
Author(s): Jiju Antony, Stavros Karamperidis, Frenie Antony, Elizabeth A. Cudney

Purpose: The purpose of this paper is to demonstrate the power of experimental design as a technique to understand and evaluate the most important factors that influence teaching effectiveness for a postgraduate course in a higher education (HE) context.
Design/methodology/approach: The methodology involves the execution of a case study in the form of an experiment in a business school setting. The experiment was carried out with the assistance of over 100 postgraduate students from 26 countries. The data were collected over a two-year period (2015 and 2016) from a postgraduate course offered by the same tutor, for repeatability reasons.
Findings: The key findings of the experiment clearly indicate that students' perceptions of teaching effectiveness based on intuition and guesswork are not identical to the outcomes of a simple designed experiment. Moreover, the results of the experiment provided a greater stimulus for wider application of the technique to other processes across the case study HE sector.
Research limitations/implications: One limitation of the study is that the experiment was conducted for a popular postgraduate course. It would be beneficial to understand the results of the experiment for less popular postgraduate courses at the university in order to drive improvements. Moreover, this research was conducted only for postgraduate courses, and the results may vary for undergraduate courses. It would be an interesting study to understand the differences in the factors between undergraduate and postgraduate teaching effectiveness.
Practical implications: The outcome of this experiment would help everyone involved in teaching to understand the factors and their influence on improving students' satisfaction scores during the delivery of teaching.
Originality/value: This paper shows how experimental design, a technique rooted in manufacturing, can be extended to an HE setting.
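The abstract does not disclose the factors or design used in the study, so as a hedged sketch of the kind of simple designed experiment it describes, the following runs a 2^3 full factorial with three hypothetical teaching factors and invented mean satisfaction scores, then estimates each factor's main effect:

```python
# Illustrative 2^3 full factorial design analysis. Factor names and response
# values are hypothetical, chosen only to demonstrate the technique.
import itertools
import numpy as np

factors = ["visual_aids", "group_exercises", "real_world_examples"]
# Coded levels: -1 (absent) / +1 (present); full factorial = 8 runs
design = np.array(list(itertools.product([-1, 1], repeat=3)))

# Synthetic mean student-satisfaction score (1-5) observed for each run
response = np.array([3.1, 3.4, 3.8, 4.0, 3.2, 3.6, 3.9, 4.3])

# Main effect of each factor: mean response at +1 minus mean response at -1
for i, name in enumerate(factors):
    effect = response[design[:, i] == 1].mean() - response[design[:, i] == -1].mean()
    print(f"{name}: effect = {effect:+.3f}")
```

Ranking the estimated effects shows which factor moves satisfaction most, which is exactly the kind of conclusion a designed experiment supports that intuition and guesswork about teaching effectiveness do not.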

