Validation of scales for measuring factors of teaching quality from the Dynamic Model of Educational Effectiveness

Psihologija ◽  
2021 ◽  
pp. 10-10
Author(s):  
Bojana Bodroza ◽  
Jelena Teodorovic ◽  
Smiljana Josic

Large-scale educational effectiveness research requires valid student questionnaires to assess teaching practices. This research validated eight scales for measuring teaching factors from the Dynamic Model of Educational Effectiveness (DMEE). Parallel versions of the scales for measuring teaching factors in mathematics and biology were constructed and validated in two studies. In the first study, an exploratory factor analysis was conducted on data from 683 students. In the second study, the structure was cross-validated via confirmatory factor analysis (CFA) on a sample of 5,476 students. The multi-group CFA showed acceptable metric invariance for all scales, indicating that the scales have comparable factor loadings across subjects. However, unsatisfactory scalar invariance suggested that the scales could not be used to compare teachers of different subjects. Testing alternative structural relations between the teaching factors did not confirm that the data fit the DMEE model adequately, although the fit parameters were better than those of the alternative theoretical models. For mathematics, external validation showed that the scales correlated with teacher-reported job satisfaction, external control, and self-efficacy. The scales are reliable and valid and could be applied to different school subjects.
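
For readers less familiar with these invariance levels, the standard multi-group CFA constraints can be written schematically as follows; the notation is generic and is not taken from the article.

```latex
% Generic multi-group CFA notation (not taken from the article):
% item i, respondent j, group g (here, the subject taught).
\begin{aligned}
\text{Configural:}\quad & x_{ijg} = \tau_{ig} + \lambda_{ig}\,\eta_{jg} + \varepsilon_{ijg}
  && \text{(same factor pattern in every group)}\\
\text{Metric:}\quad & \lambda_{ig} = \lambda_{i}\ \ \forall g
  && \text{(equal loadings; supported for all scales here)}\\
\text{Scalar:}\quad & \lambda_{ig} = \lambda_{i},\ \tau_{ig} = \tau_{i}\ \ \forall g
  && \text{(equal loadings and intercepts; needed to compare latent means across subjects)}
\end{aligned}
```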

2012 ◽  
Vol 24 (3) ◽  
pp. 18-44 ◽  
Author(s):  
Ahmed Alzahrani ◽  
Bernd Carsten Stahl ◽  
Mary Prior

Governments worldwide spend billions from their allocated IT budgets to deliver convenient electronic services to their citizens. It is therefore important to encourage citizens to use these services so that such investments do not fail. Yet few empirical studies cover the relevant issues of adoption from the perspective of citizens in developing countries. Moreover, a well-validated instrument for capturing citizen adoption of such services is vital, given the vast investment in technology and the potential cost-saving implications. This study integrates elements from the most popular adoption theories, including the technology acceptance model (TAM), innovation diffusion theory (IDT), and the theory of planned behavior (TPB), in conjunction with web trust models. It develops an instrument to measure citizens' acceptance of electronic public services by using confirmatory factor analysis (CFA) within a structural equation modeling framework. Findings from a large-scale sample of citizens in Saudi Arabia indicate that the proposed measurement model fits the data acceptably. Overall, the findings supply a rigorous instrument for measuring citizens' acceptance of e-public services, providing further insights for researchers and offering policy makers a suitable tool with which to study proposed strategies.
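
As a rough illustration of the kind of CFA-based measurement-model check described above, the sketch below fits a hypothesized factor structure and reports global fit indices. The construct and item names are placeholders, and the use of the semopy package is an assumption; the article does not specify its software.

```python
# Illustrative only: a minimal CFA fit check in the spirit of the study's
# measurement-model validation, using the semopy package. The constructs and
# item names below are placeholders, not the instrument from the article.
import pandas as pd
import semopy

MODEL_DESC = """
TrustWeb     =~ tw1 + tw2 + tw3
Usefulness   =~ pu1 + pu2 + pu3
EaseOfUse    =~ eou1 + eou2 + eou3
Intention    =~ iu1 + iu2 + iu3
"""

def check_measurement_model(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the hypothesized measurement model and return global fit statistics."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)               # responses: one column per observed item
    return semopy.calc_stats(model)    # chi-square, CFI, TLI, RMSEA, AIC, ...

# Hypothetical usage (df holds the item-level survey responses):
# print(check_measurement_model(df).T)
```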


2001 ◽  
Vol 17 (1) ◽  
pp. 1-16 ◽  
Author(s):  
Robert J. Sternberg ◽  
J.L. Castejón ◽  
M.D. Prieto ◽  
Jarkko Hautamäki ◽  
Elena L. Grigorenko

Summary: In the current study we compare different theoretical models of the underlying structure of the Sternberg Triarchic Abilities Test (STAT), Level-H, using confirmatory factor analysis on a combined sample of 3,278 school students from the United States, Finland, and Spain. Following a hierarchical confirmatory factor analysis (HCFA) strategy, we compared nested and alternative models specified under different theoretical assumptions: a unidimensional concept of general intelligence, a traditional factorial concept, and a triarchic model. The results illustrate that the second-order factor model based on the triarchic theory of intelligence achieves the best, albeit far from perfect, fit to the empirical data.
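
The nested-model comparisons described above typically rest on chi-square difference (likelihood-ratio) tests between a restricted and a more general model; a minimal sketch of that calculation is shown below. The numeric values in the usage comment are placeholders, not results from the study.

```python
# Minimal sketch of a chi-square difference test between two nested CFA models.
# The fit statistics in the usage example are placeholders, not values reported
# in the article.
from scipy.stats import chi2

def chi_square_difference(chi2_restricted: float, df_restricted: int,
                          chi2_general: float, df_general: int) -> float:
    """Return the p-value of the likelihood-ratio test for nested models.

    The restricted model (e.g., a single general-intelligence factor) must be
    nested within the more general model (e.g., a second-order triarchic model).
    """
    delta_chi2 = chi2_restricted - chi2_general
    delta_df = df_restricted - df_general
    return chi2.sf(delta_chi2, delta_df)

# Hypothetical usage: does the more general model fit significantly better?
# p = chi_square_difference(chi2_restricted=812.4, df_restricted=252,
#                           chi2_general=640.9, df_general=246)
# print(f"Delta-chi2 test p-value: {p:.4f}")
```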


PeerJ ◽  
2015 ◽  
Vol 3 ◽  
pp. e1312 ◽  
Author(s):  
Xiaoqing Tang ◽  
Wenjie Duan ◽  
Ying Wang ◽  
Pengfei Guo

Social anxiety is an emotional disorder common to various populations around the world. The newly developed Self-Beliefs Related to Social Anxiety Scale (SBSA) aims to assess, through 15 items, three kinds of self-beliefs: self-related cognitive factors that have been shown to give rise to social anxiety. This study explored the psychometric characteristics of the SBSA among 978 Chinese participants. An eight-item Negative Self-beliefs Inventory (NSBI) was developed through qualitative and quantitative analyses. Exploratory factor analysis, confirmatory factor analysis, and multi-group confirmatory factor analysis suggested that the NSBI has a clear, meaningful, stable, and invariant three-factor structure consistent with the original SBSA. Further analyses showed that the three subscales and the entire scale exhibited high internal consistency (0.779–0.837), good criterion validity, and good convergent and divergent validity (i.e., negative associations with flourishing and positive associations with anxiety, depression, and stress). These findings indicate that the NSBI is reliable and valid for measuring negative self-beliefs in the Chinese population. A higher total NSBI score indicates more severe negative self-beliefs. Limitations of the present study and implications for research and practice are also discussed. Further studies are needed to evaluate the predictive ability, incremental validity, and potential role of the NSBI in clinical and large-scale populations.
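
The internal-consistency values reported above (0.779–0.837) are Cronbach's alpha coefficients; a minimal sketch of that computation is shown below. The data layout and column names are assumptions for illustration, not details from the article.

```python
# Minimal sketch: Cronbach's alpha for a set of scale items.
# `items` is assumed to be a DataFrame with one column per item and one row per
# respondent; this layout and the column names are assumptions, not from the article.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage for one NSBI subscale (column names are placeholders):
# alpha = cronbach_alpha(df[["nsbi1", "nsbi2", "nsbi3"]])
# print(f"Cronbach's alpha = {alpha:.3f}")
```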


1997 ◽  
Vol 81 (3) ◽  
pp. 963-967 ◽  
Author(s):  
David Watkins ◽  
John Sachs ◽  
Murari Regmi

The Causal Dimension Scale-II is conceptually important for research on attributions because it taps directly the subjects' own views of the dimensions underlying their causal ascriptions. However, this research, based on the responses of 120 Nepalese tertiary students to the Causal Dimension Scale-II for both success and failure outcomes, indicates that the internal consistency reliability of the External Control scale is of doubtful adequacy. The best-fitting model for success outcomes combines the Locus and Personal Control scales, while no adequate fit was found for failure outcomes. These latter findings may be due to cultural differences in causal attributions rather than a deficiency in the scale's structure.


Author(s):  
Ellen Laupper ◽  
Lars Balzer ◽  
Jean-Louis Berger

Abstract Survey-based formats of assessing teaching quality in higher education are widely used and will likely continue to be used by higher education institutions around the world as the global trends contributing to their widespread use further evolve. Although the use of mobile devices for course evaluation continues to grow, some methodological questions about the classic paper-based and web-based modes of evaluation remain unresolved. In the current study, a multi-group confirmatory factor analysis (MGCFA) approach, an accepted method in mixed-mode survey research, was chosen to address some of these methodological issues when comparing the two evaluation modes. By randomly assigning one of the two modes to 33 continuing training courses at a Swiss higher education institution, this study tested whether the two modes of assessing teaching quality yield the same results. The practical implications for course evaluation practice in institutions of higher education, as well as the implications and limitations of the chosen methodological approach, are discussed.
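
As a rough sketch of the first step in such a mode comparison, the code below fits the same measurement model separately in the paper-based and web-based groups (a configural-level check that would precede constrained multi-group models). The semopy package, the single-factor model, and all variable names are assumptions for illustration only; the article does not report its software or item labels.

```python
# Illustrative sketch: fit the same course-evaluation measurement model
# separately for each survey mode before imposing cross-group equality
# constraints. All names below are placeholders, not details from the article.
import pandas as pd
import semopy

MODEL_DESC = """
TeachingQuality =~ item1 + item2 + item3 + item4
"""

def fit_per_mode(responses: pd.DataFrame, mode_col: str = "mode") -> dict:
    """Return fit statistics of the same CFA model fitted in each mode group."""
    results = {}
    for mode, group in responses.groupby(mode_col):
        model = semopy.Model(MODEL_DESC)
        model.fit(group.drop(columns=[mode_col]))
        results[mode] = semopy.calc_stats(model)
    return results

# Hypothetical usage (df has columns item1..item4 plus a "mode" label):
# for mode, stats in fit_per_mode(df).items():
#     print(mode); print(stats.T)
```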


2019 ◽  
Vol 23 (4) ◽  
pp. 556-567
Author(s):  
Eglantina Hysa ◽  
Naqeeb Ur Rehman

Introduction. In recent years, measuring the efficiency and effectiveness of higher education has become a major issue. Most developed countries use national surveys to measure teaching and assessment as key determinants of students' approaches to learning, which have a direct effect on the quality of their learning outcomes. In less developed countries, such national surveys do not exist. This paper aims to propose an original questionnaire for assessing teaching quality. The distinguishing feature of this questionnaire, termed the Instructor Course Evaluation Survey, is that it addresses three main dimensions: Learning Resources, Teaching Effectiveness, and Student Support. Materials and Methods. The paper opted for an analytic study using 3,776 completed questionnaires, a case study of students enrolled in an economics program at a private university in Albania. The design of the Instructor Course Evaluation Survey was supported by a literature review that identified the three main dimensions included in the questionnaire. Reliability was tested with Cronbach's alpha, and the factor structure was examined with confirmatory factor analysis, which helps identify issues of multi-dimensionality in scales. Results. The paper provides empirical insights into the assessment methodology and proposes a new model of it. The findings suggest that Learning Resources, Teaching Effectiveness, and Student Support increase the quality of teaching. Because the research target group comprised students from an economics program, the results may not be generalizable; researchers are therefore encouraged to test the proposed statements further. Discussion and Conclusion. The paper includes implications for the development of a simple and useful questionnaire for assessing the quality of teaching. Although the Instructor Course Evaluation Survey was applied specifically to an economics program, the proposed questionnaire can be applied more broadly. This paper fulfills an identified need for an original and simple questionnaire that different universities and programs can use to measure the quality of teaching.
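
A minimal sketch of the reliability and factor-structure checks described above is given below: Cronbach's alpha per dimension followed by a three-factor confirmatory model. The item counts, column names, and the use of the semopy package are assumptions for illustration; the article does not report its software or item labels.

```python
# Illustrative sketch of the two checks described above for a three-dimension
# instrument: Cronbach's alpha per dimension and a three-factor CFA.
# Item counts, column names, and the semopy package are assumptions, not
# details reported in the article.
import pandas as pd
import semopy

DIMENSIONS = {
    "LearningResources":     ["lr1", "lr2", "lr3"],
    "TeachingEffectiveness": ["te1", "te2", "te3"],
    "StudentSupport":        ["ss1", "ss2", "ss3"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """(k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def evaluate_instrument(responses: pd.DataFrame) -> None:
    # Per-dimension internal consistency.
    for name, cols in DIMENSIONS.items():
        print(f"{name}: alpha = {cronbach_alpha(responses[cols]):.3f}")
    # Three-factor CFA mirroring the hypothesized structure.
    desc = "\n".join(f"{name} =~ {' + '.join(cols)}" for name, cols in DIMENSIONS.items())
    model = semopy.Model(desc)
    model.fit(responses)
    print(semopy.calc_stats(model).T)   # chi-square, CFI, TLI, RMSEA, ...
```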

