A Mixture Group Bifactor Model for Binary Responses
2014 ◽ Vol 21 (3) ◽ pp. 375-395
Author(s): Sun-Joo Cho, Allan S. Cohen, Seock-Ho Kim

2019
Author(s): Amanda Goodwin, Yaacov Petscher, Jamie Tock

Various models have highlighted the complexity of language. Building on foundational ideas regarding three key aspects of language, our study contributes to the literature by (1) exploring broader conceptions of morphology, vocabulary, and syntax, (2) operationalizing this theoretical model into a gamified, standardized, computer-adaptive assessment of language for fifth- to eighth-grade students entitled Monster, PI, and (3) uncovering further evidence regarding the relationship between language and standardized reading comprehension via this assessment. Multiple-group item response theory (IRT) analyses across grades showed that morphology was best fit by a bifactor model with task-specific factors along with a global factor related to each skill. Vocabulary was best fit by a bifactor model that identifies performance overall and on specific words. Syntax, though, was best fit by a unidimensional model. Next, Monster, PI produced reliable scores, suggesting language can be assessed efficiently and precisely for students via this model. Lastly, performance on Monster, PI explained more than 50% of the variance in standardized reading, suggesting that operationalizing language via Monster, PI can provide a meaningful understanding of the relationship between language and reading comprehension. Specifically, considering just a subset of a construct, like identification of units of meaning, explained significantly less variance in reading comprehension, highlighting the importance of considering these broader constructs. Implications indicate that future work should consider a model of language in which component areas are considered broadly and contributions to reading comprehension are explored via general performance on components as well as skill-level performance.
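The bifactor structure described above (one global factor plus orthogonal task-specific factors) can be sketched numerically. The loadings below are hypothetical illustrations, not estimates from the study:

```python
import numpy as np

# Hypothetical standardized loadings for 6 items: one general factor
# plus two orthogonal task-specific factors (3 items each).
general = np.array([0.7, 0.6, 0.8, 0.5, 0.6, 0.7])
specific = np.array([
    [0.4, 0.3, 0.5, 0.0, 0.0, 0.0],   # specific (task) factor 1
    [0.0, 0.0, 0.0, 0.4, 0.5, 0.3],   # specific (task) factor 2
]).T

Lam = np.column_stack([general, specific])   # 6 x 3 loading matrix
theta = 1.0 - (Lam ** 2).sum(axis=1)         # unique variances
Sigma = Lam @ Lam.T + np.diag(theta)         # model-implied correlation matrix

# Items on different specific factors correlate only through the
# general factor, e.g. Sigma[0, 3] = 0.7 * 0.5 = 0.35.
assert np.allclose(np.diag(Sigma), 1.0)      # standardized metric
```

The key design feature is that each item loads on the general factor and on exactly one specific factor, with factors constrained to be orthogonal.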


2021 ◽ pp. 106907272110022
Author(s): Marijana Matijaš, Darja Maslić Seršić

Career adaptability is an important resource for dealing with career transitions such as the transition from university to work. Previous research emphasized the importance of focusing on the specific career adapt-abilities instead of only on general career adaptability. The aim of this research was to investigate whether career adaptability can be conceptualized as a bifactor model and whether the general and specific dimensions of career adaptability are related to graduates' job-search self-efficacy. In an online cross-sectional study, 667 graduates completed the Career Adapt-Abilities Scale and the Job Search Skill and Confidence Scale. Confirmatory factor analysis showed that the bifactor model of career adaptability had a good fit, with the general factor explaining most of the items' variance. The SEM analysis revealed that general career adaptability and the specific factor of confidence correlated positively with job-search and interview-performance self-efficacy. Control correlated only with interview-performance self-efficacy. Neither concern nor curiosity showed a significant relationship with job-search or interview-performance self-efficacy.


2020 ◽ Vol 98 (Supplement_4) ◽ pp. 469-470
Author(s): MaryGrace Erickson, Danielle Marks, Elizabeth Karcher, Michel Wattiaux

Efforts to improve the quality of teaching and learning in animal science are forestalled by the lack of psychometric scales validated in our disciplinary context. Researchers have reliably used instruments validated outside of animal science, but this approach has questionable validity. The objective of our research was to adapt and validate scales to measure the motivational variables individual interest (II) and situational interest (SI) in introductory animal sciences students. A total of 254 introductory course students in two consecutive semesters rated their interest in animal sciences on unidimensional II (8-item) and 3-factor SI (11-item) scales previously validated for psychology undergraduates. After adapting the instruments with wording specific to animal sciences, we conducted a series of confirmatory factor analyses. First, we discovered and removed two problematic items from the unidimensional II scale, offered theory-based explanations for differential item functioning in animal sciences students, and validated a revised II scale (λ = 0.74-0.94, CFI = 0.995, RMSEA = 0.027). Next, we confirmed the validity and reliability of the SI scale and its three subscales (λ = 0.83-0.96, CFI = 0.979, RMSEA = 0.048). Finally, to explore the dimensionality of SI in our population, we fitted a bifactor model and computed ancillary indices. Results supported the reliability and empirical validity of the bifactor model as an alternative conceptualization of SI (CFI = 0.986, RMSEA = 0.044) and indicated that the SI scale is mostly unidimensional (ωH = 0.923). This suggests that total SI scores can be used as a composite measure but that subscale scores are substantially contaminated by the general SI factor and should not be interpreted as unique. We present the finalized scales, recommendations for their use in animal sciences classrooms, and suggestions for future research.
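The ancillary index reported here, omega hierarchical (ωH), is the proportion of total score variance attributable to the general factor alone. A minimal sketch of the computation, using hypothetical loadings rather than the study's estimates:

```python
import numpy as np

# Hypothetical bifactor loadings for a 6-item scale: one general factor
# and two orthogonal 3-item specific factors (not the study's estimates).
lam_g = np.array([0.8, 0.7, 0.8, 0.7, 0.8, 0.7])      # general-factor loadings
lam_s = [np.array([0.3, 0.4, 0.3]),                   # specific factor 1
         np.array([0.4, 0.3, 0.4])]                   # specific factor 2

theta = 1 - lam_g**2 - np.concatenate(lam_s)**2       # unique variances

gen_var = lam_g.sum() ** 2                            # (sum of general loadings)^2
spec_var = sum(s.sum() ** 2 for s in lam_s)           # same, per specific factor
total_var = gen_var + spec_var + theta.sum()          # total score variance

omega_total = (gen_var + spec_var) / total_var        # all common factors
omega_h = gen_var / total_var                         # general factor only
```

A high ωH relative to omega total, as in the abstract's 0.923, is what licenses interpreting the total score as essentially unidimensional.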


2017 ◽ Vol 78 (5) ◽ pp. 717-736
Author(s): Samuel Green, Yanyun Yang

Bifactor models are commonly used to assess whether psychological and educational constructs underlie a set of measures. We consider empirical underidentification problems that are encountered when fitting particular types of bifactor models to certain types of data sets. The objective of the article was fourfold: (a) to allow readers to gain a better general understanding of issues surrounding empirical identification, (b) to offer insights into empirical underidentification with bifactor models, (c) to inform methodologists who explore bifactor models about empirical underidentification with these models, and (d) to propose strategies for structural equation model users to deal with underidentification problems that can emerge when applying bifactor models.
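A first sanity check related to the identification issues discussed here is the counting rule: the free parameters must not outnumber the p(p+1)/2 unique elements of the covariance matrix. Passing this check is necessary but not sufficient, since empirical underidentification can occur even with positive degrees of freedom. A sketch with illustrative item counts:

```python
# Degrees of freedom for a bifactor model with p items, one general
# factor, and specific factors covering n_1, ..., n_k items, assuming
# orthogonal factors with variances fixed to 1. Positive df is necessary
# but NOT sufficient for identification: empirical underidentification
# can still arise, e.g. with two-indicator specific factors.
def bifactor_df(p: int, specific_sizes: list[int]) -> int:
    moments = p * (p + 1) // 2              # unique (co)variance elements
    loadings = p + sum(specific_sizes)      # general + specific loadings
    uniquenesses = p                        # one residual variance per item
    return moments - (loadings + uniquenesses)

# 9 items, three 3-item specific factors: 45 - (9 + 9 + 9)
print(bifactor_df(9, [3, 3, 3]))  # prints 18
```

The arithmetic makes clear why small specific factors are risky: shrinking a specific factor removes moments faster than it removes parameters.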


2016 ◽ Vol 28 (8) ◽ pp. 987-1000
Author(s): Francisco J. Abad, Miguel A. Sorrel, Francisco J. Román, Roberto Colom

2016 ◽ Vol 33 (1) ◽ pp. 131-147
Author(s): Jennifer L. Tackett, Heather Krieger, Clayton Neighbors, Dipali Rinker, Lindsey Rodriguez, ...
