Item Factor Analysis
Recently Published Documents

TOTAL DOCUMENTS: 77 (FIVE YEARS: 14)
H-INDEX: 18 (FIVE YEARS: 1)

Methodology, 2021, Vol. 17(4), pp. 296-306
Author(s): Urbano Lorenzo-Seva, Pere J. Ferrando

Kaiser’s single-variable measure of sampling adequacy (MSA) is a very useful index for debugging inappropriate items before a factor analysis (FA) solution is fitted to an item-pool dataset for item selection purposes. For reasons discussed in the article, however, MSA is hardly used nowadays in this context. In our view, this is unfortunate. In the present proposal, we first discuss the foundation and rationale of MSA from a ‘modern’ FA view, as well as its usefulness in the item selection process. Second, we embed the index within a robust approach and propose improvements in the preliminary item selection process. Third, we implement the proposal in different statistical programs. Finally, we illustrate its use and advantages with an empirical example in personality measurement.
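The statistical programs the authors implement are not reproduced here. Purely for orientation, below is a minimal NumPy sketch of Kaiser's classical per-item MSA, computing the anti-image (partial) correlations from the inverse of the correlation matrix; it shows only the classical index, not the robust extension the article proposes, and the function name msa_per_item is ours.

```python
import numpy as np

def msa_per_item(R):
    """Kaiser's measure of sampling adequacy (MSA) for each item.

    R : (p, p) correlation matrix of the item pool.
    Anti-image (partial) correlations come from R's inverse:
    q_jk = -s_jk / sqrt(s_jj * s_kk), with S = inv(R).
    """
    S = np.linalg.inv(R)
    d = np.sqrt(np.diag(S))
    Q = -S / np.outer(d, d)        # anti-image (partial) correlations
    np.fill_diagonal(Q, 0.0)
    R0 = np.array(R, dtype=float, copy=True)
    np.fill_diagonal(R0, 0.0)
    r2 = (R0 ** 2).sum(axis=0)     # squared correlations of item j with the rest
    q2 = (Q ** 2).sum(axis=0)      # squared partial correlations
    return r2 / (r2 + q2)
```

Items whose MSA falls below the conventional .50 rule of thumb are the usual candidates for removal before the FA solution is fitted.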


2021 ◽  
Author(s):  
Christopher John Urban ◽  
Daniel J. Bauer

We investigate novel parameter estimation and goodness-of-fit (GOF) assessment methods for large-scale confirmatory item factor analysis (IFA) with many respondents, items, and latent factors. For parameter estimation, we extend Urban and Bauer's (2021) deep learning algorithm for exploratory IFA to the confirmatory setting by showing how to handle user-defined constraints on loadings and factor correlations. For GOF assessment, we explore new simulation-based tests and indices. In particular, we consider extensions of the classifier two-sample test (C2ST), a method that tests whether a machine learning classifier can distinguish between observed data and synthetic data sampled from a fitted IFA model. The C2ST provides a flexible framework that integrates overall model fit, piece-wise fit, and person fit. Proposed extensions include a C2ST-based test of approximate fit, in which the user specifies what percentage of observed data can be distinguished from synthetic data, as well as a C2ST-based relative fit index similar in spirit to the relative fit indices used in structural equation modeling. Via simulation studies, we first show that the confirmatory extension of Urban and Bauer's (2021) algorithm produces more accurate parameter estimates as the sample size increases and obtains estimates comparable to those of a state-of-the-art confirmatory IFA estimation procedure in less time. We next show that the C2ST-based test of approximate fit controls the empirical type I error rate and detects when the number of latent factors is misspecified. Finally, we empirically investigate how the sampling distribution of the C2ST-based relative fit index depends on the sample size.
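The abstract does not say which classifier the authors use, and their deep learning implementation is not shown here. The following is a generic sketch of the basic C2ST idea using a scikit-learn logistic regression, relying on the standard result that held-out accuracy is approximately N(0.5, 1/(4n)) when the fitted model truly generated the data; the function name c2st and all parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def c2st(observed, synthetic, seed=0):
    """Classifier two-sample test: can a classifier tell observed
    response patterns apart from patterns simulated from the fitted
    IFA model?  Near-chance accuracy is consistent with good fit."""
    X = np.vstack([observed, synthetic])
    y = np.r_[np.ones(len(observed)), np.zeros(len(synthetic))]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    # Under H0 (model generated the data), held-out accuracy is
    # approximately N(0.5, 1/(4 * n_test)); one-sided p-value.
    z = (acc - 0.5) / np.sqrt(0.25 / len(y_te))
    return acc, norm.sf(z)
```

For the approximate-fit extension described above, the chance-level null value 0.5 would be replaced by the user-specified tolerable accuracy.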


2021, Vol. 12
Author(s): María Dolores Nieto, Luis Eduardo Garrido, Agustín Martínez-Molina, Francisco José Abad

The item wording (or keying) effect consists of logically inconsistent answers to positively and negatively worded items that tap into similar (but polar opposite) content. Previous research has shown that this effect can be successfully modeled through the random intercept item factor analysis (RIIFA) model, as evidenced by improvements in model fit over models that contain only substantive factors. However, little is known about the capability of this model to recover the uncontaminated person scores. To address this issue, the study analyzes the performance of the RIIFA approach across three types of wording effects proposed in the literature: carelessness, item verification difficulty, and acquiescence. In the context of unidimensional substantive models, four independent variables were manipulated using Monte Carlo methods: type of wording effect, amount of wording effect, sample size, and test length. The results corroborated previous findings by showing that the RIIFA models were consistently able to account for the variance in the data, attaining an excellent fit regardless of the amount of bias. Conversely, the models without the RIIFA factor produced an increasingly poor fit with greater amounts of wording effects. Surprisingly, however, the RIIFA models were not able to estimate the uncontaminated person scores better than the substantive unidimensional models for any type of wording effect. The simulation results were then corroborated with an empirical dataset examining the relationship of learning strategies and personality with grade point average in undergraduate studies. The apparently paradoxical findings regarding model fit and the recovery of the person scores are explained in light of the properties of the factor models examined.
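For reference, the RIIFA specification (due to Maydeu-Olivares and Coffman, 2006) adds to the ordinary factor model a person-specific random intercept whose loading is fixed to 1 for every item. A sketch for the unidimensional substantive case, with notation chosen here for illustration:

```latex
% RIIFA for person i and item j (reverse-keyed items recoded first).
% The random intercept gamma_i absorbs response-style variance common
% to all items, separately from the substantive trait theta_i.
x_{ij} = \mu_j + \lambda_j \theta_i + \gamma_i + \varepsilon_{ij},
\qquad \gamma_i \sim \mathcal{N}(0, \sigma_\gamma^2),
\quad \operatorname{Cov}(\theta_i, \gamma_i) = 0 .
```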


2021, pp. 016502542110055
Author(s): Marcus Waldman, Dana Charles McCoy, Jonathan Seiden, Jorge Cuartas, Günther Fink, ...

The Caregiver Reported Early Development Instruments (CREDI) are assessment tools for measuring the development of children under age three in global contexts. The present study describes the construction and psychometric properties of the motor, cognitive, language, and socio-emotional subscales of the CREDI’s long form. Multidimensional item factor analysis was employed, allowing indicators of child development to load simultaneously onto multiple factors representing distinct developmental domains. A total of 14,113 caregiver reports representing 17 low-, middle-, and high-income countries were analyzed. Criterion-related validity of the constructed subscales was tested in a subset of participants using data from previously established instruments, anthropometric data, and a measure of child stimulation. We also report internal-consistency and test–retest reliability statistics. Results from our analysis suggest that the CREDI subscales display adequate reliability for population-level measurement, as well as evidence of validity.
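The CREDI scoring model itself is not given in the abstract. Purely as an illustration of what loading onto multiple factors means in a multidimensional IFA, here is a minimal sketch of a multidimensional 2-PL item response function (all names are ours); a row of the slope matrix with several nonzero entries is an indicator that draws on several developmental domains at once.

```python
import numpy as np

def mirt_prob(theta, A, b):
    """Multidimensional 2-PL item response function.

    theta : (n, k) latent scores for n children on k developmental
            domains (e.g., motor, cognitive, language, socio-emotional).
    A     : (p, k) slope matrix; a row with several nonzero entries is
            an indicator loading on several domains simultaneously.
    b     : (p,) item intercepts.
    Returns an (n, p) matrix of endorsement probabilities.
    """
    return 1.0 / (1.0 + np.exp(-(theta @ A.T + b)))
```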


2021
Author(s): María Dolores Nieto, Luis Eduardo Garrido, Francisco José Abad, Agustín Martínez-Molina

The item wording (or keying) effect consists of logically inconsistent answers to positively and negatively worded items that tap into similar (but polar opposite) content. Previous research has shown that this effect can be successfully modeled through the random intercept item factor analysis (RIIFA) model, as evidenced by the improvements in model fit in comparison to models that contain only substantive factors. However, little is known about the capability of this model to recover the uncontaminated person scores. To address this issue, the current study analyzed the performance of the RIIFA approach across three types of wording effects proposed in the literature: carelessness, item verification difficulty, and acquiescence. In the context of unidimensional substantive models, four independent variables were manipulated using Monte Carlo methods: type of wording effect, amount of wording effect, sample size, and test length. The results corroborated previous findings by showing that the RIIFA models were consistently able to account for the variance in the data, attaining excellent fit regardless of the amount of bias. Conversely, the models without the RIIFA factor produced an increasingly poor fit with greater amounts of wording effects. Surprisingly, however, the RIIFA models were not able to estimate the uncontaminated person scores better than the substantive unidimensional models for any type of wording effect. The simulation results were then corroborated with an empirical dataset examining the relationship of learning strategies and personality with grade point average in undergraduate studies. The apparently paradoxical findings regarding model fit and the recovery of the person scores are explained in light of the properties of the factor models examined.


2020, Vol. 24(1)
Author(s): Bahrul Hayat, Muhammad Dwirifqi Kharisma Putra, Bambang Suryadi

The Rasch model has a long history of application in the social and behavioral sciences, including educational measurement. Under certain circumstances, the Rasch model can be viewed as a special case of item response theory (IRT), while IRT models are in turn equivalent to item factor analysis (IFA) models, a special case of structural equation models (SEM), although another 'tradition' regards Rasch measurement models as part of neither. In this study, a simulation with generated data was conducted to explain the interrelationships among the Rasch model as a constrained version of the 2-parameter logistic (2-PL) IRT model, the Rasch model as an item factor analysis model, and the Rasch measurement model, fitted with the Mplus, IRTPRO, and WINSTEPS programs, each of which comes from its own 'tradition'. The results indicate that the Rasch model as IFA (a special case of SEM) and the Rasch measurement model are mathematically equal, but because of differing philosophical perspectives people may vary in their understanding of this equivalence. Given these findings, it is hoped that the confusion and misunderstanding among the three traditions can be overcome.
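The equivalence the study examines can be made concrete. Under the normal-ogive link, a standardized IFA loading and threshold map one-to-one onto 2-PL discrimination and difficulty (Takane & de Leeuw, 1987), and the Rasch model is the special case in which all discriminations, equivalently all standardized loadings, are equal. A minimal sketch of that conversion, with names chosen here for illustration:

```python
import numpy as np

def ifa_to_irt(loading, threshold, D=1.702):
    """Map standardized normal-ogive IFA parameters to 2-PL IRT
    parameters (Takane & de Leeuw, 1987).

    loading   : standardized factor loading(s), |loading| < 1.
    threshold : item threshold(s) in the same metric.
    D         : 1.702 rescales the probit metric to the logistic
                metric conventionally used by IRT software.
    """
    loading = np.asarray(loading, dtype=float)
    threshold = np.asarray(threshold, dtype=float)
    a = D * loading / np.sqrt(1.0 - loading ** 2)
    b = threshold / loading
    return a, b

# The Rasch model is the 2-PL with a common slope (a_j equal for all
# items), i.e., equal standardized loadings in the IFA parameterization.
```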

