2014 ◽  
Vol 28 (3) ◽  
pp. 83-92 ◽  
Author(s):  
Franziska Pfitzner-Eden ◽  
Felicitas Thiel ◽  
Jenny Horsley

Teacher self-efficacy (TSE) is an important construct in the prediction of positive student and teacher outcomes. However, problems with its measurement have persisted, often because TSE is confounded with other constructs. This research introduces an adapted TSE instrument for preservice teachers that is closely aligned with self-efficacy experts' recommendations for measuring self-efficacy and based on a widely used measure of TSE. We provide initial evidence of construct validity for this instrument. Participants were 851 preservice teachers in three samples from Germany and New Zealand. Results of the multiple-group confirmatory factor analyses showed a uniform three-factor solution for all samples, metric measurement invariance, and a consistent, moderate correlation between TSE and a measure of general self-efficacy across all samples. Despite this study's limitations, there is initial evidence that this measure allows for a valid three-dimensional assessment of TSE in preservice teachers.
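The "consistent and moderate correlation across all samples" finding can be illustrated with a minimal sketch. This is synthetic data only, not the study's data; the three sample sizes and the true correlation of about .50 are assumptions chosen for illustration, and the study's actual analysis used multiple-group confirmatory factor analysis, not raw Pearson correlations:

```python
import numpy as np

# Synthetic illustration only (not the study's data): simulate three
# samples in which a TSE score correlates moderately with a general
# self-efficacy (GSE) score, mirroring a consistent, moderate
# correlation across samples.
rng = np.random.default_rng(7)
correlations = []
for n in (300, 275, 276):                        # hypothetical split of the 851 participants
    tse = rng.normal(0.0, 1.0, n)
    gse = 0.5 * tse + rng.normal(0.0, 0.87, n)   # implied true r of roughly .50 ("moderate")
    correlations.append(np.corrcoef(tse, gse)[0, 1])

print([round(r, 2) for r in correlations])
```

Each sample's observed correlation lands near the same moderate value, which is the pattern the abstract reports as evidence of consistency across groups.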


2019 ◽  
Author(s):  
Amanda Goodwin ◽  
Yaacov Petscher ◽  
Jamie Tock

Various models have highlighted the complexity of language. Building on foundational ideas regarding three key aspects of language, our study contributes to the literature by 1) exploring broader conceptions of morphology, vocabulary, and syntax, 2) operationalizing this theoretical model into a gamified, standardized, computer-adaptive assessment of language for fifth- to eighth-grade students entitled Monster, PI, and 3) uncovering further evidence regarding the relationship between language and standardized reading comprehension via this assessment. Multiple-group item response theory (IRT) analyses across grades showed that morphology was best fit by a bifactor model with task-specific factors along with a global factor related to each skill. Vocabulary was best fit by a bifactor model that identifies performance overall and on specific words. Syntax, though, was best fit by a unidimensional model. Next, Monster, PI produced reliable scores, suggesting that language can be assessed efficiently and precisely for students via this model. Lastly, performance on Monster, PI explained more than 50% of the variance in standardized reading, suggesting that operationalizing language via Monster, PI can provide meaningful insight into the relationship between language and reading comprehension. Specifically, considering just a subset of a construct, such as identification of units of meaning, explained significantly less variance in reading comprehension, highlighting the importance of considering these broader constructs. Implications indicate that future work should consider a model of language in which component areas are treated broadly and contributions to reading comprehension are explored via general performance on components as well as skill-level performance.
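The "more than 50% of variance explained" result is a statement about R². A minimal sketch of that quantity, on synthetic data only (the variable names, effect size, and sample size are assumptions for illustration, not values from the study):

```python
import numpy as np

# Synthetic illustration only (not the study's data): simulate a
# language composite that explains roughly half the variance in a
# reading outcome, mirroring a ">50% of variance explained" finding.
rng = np.random.default_rng(42)
n = 500
language = rng.normal(0.0, 1.0, n)                   # hypothetical language composite score
reading = 0.75 * language + rng.normal(0.0, 0.7, n)  # hypothetical reading comprehension score

r = np.corrcoef(language, reading)[0, 1]
r_squared = r ** 2                                   # proportion of variance explained
print(f"R^2 = {r_squared:.2f}")
```

With these assumed parameters the population R² is about .53; narrowing the predictor to a single subskill (a weaker correlate) would shrink R², which is the abstract's point about subsets of a construct explaining significantly less variance.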


2020 ◽  
Vol 14 (15) ◽  
pp. 2453-2461
Author(s):  
Xuguang Zhang ◽  
Huangda Lin ◽  
Mingkai Chen ◽  
Bin Kang ◽  
Lei Wang
