Multiple Group Item Response Theory Applications Using Stata irt Package

2021 ◽ Vol 19 (3) ◽ pp. 190-198
Author(s): Xiaying Zheng ◽ Ji Seung Yang
Assessment ◽ 2019 ◽ Vol 27 (7) ◽ pp. 1416-1428
Author(s): Hao Luo ◽ Björn Andersson ◽ Jennifer Y. M. Tang ◽ Gloria H. Y. Wong

The traditional application of the Montreal Cognitive Assessment (MoCA) uses total scores to define cognitive impairment levels, without considering variations in item properties across populations. Item response theory (IRT) analysis offers a potential way to minimize the effect of important confounding factors such as education. This research applies IRT to investigate the characteristics of MoCA items in a randomly selected, culturally homogeneous sample of 1,873 older persons with diverse educational backgrounds. Any formal education was used as the grouping variable to estimate multiple-group IRT models. Results showed that item characteristics differed between people with and without formal education: item functioning of the Cube, Clock Number, and Clock Hand items was superior in people without formal education. This analysis provides evidence that item properties vary with education, calling for more sophisticated IRT-based modelling that incorporates the effect of education.
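To illustrate why group-specific item properties matter, the sketch below compares a two-parameter logistic (2PL) item characteristic curve and its Fisher information between two education groups. The parameter values are hypothetical placeholders chosen only to show the mechanics; they are not estimates from the study.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve: probability of a
    correct response given ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = icc_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)

# Hypothetical parameters for one MoCA item, estimated separately in the
# two education groups (illustrative values only, not the paper's results).
params = {
    "formal education":    {"a": 1.2, "b": -0.5},
    "no formal education": {"a": 1.6, "b":  0.3},
}

for group, p in params.items():
    info = item_information(theta, p["a"], p["b"])
    print(f"{group}: peak information {info.max():.2f} "
          f"at theta = {theta[info.argmax()]:.2f}")
```

Because 2PL item information peaks at theta = b with height a²/4, a group-specific shift in either parameter changes where on the ability scale, and how sharply, the item discriminates.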


2020
Author(s): E. Damiano D'Urso ◽ Kim De Roover ◽ Jeroen K. Vermunt ◽ Jesper Tijmstra

In the social sciences, the study of group differences concerning latent constructs is ubiquitous. These constructs are generally measured by means of scales composed of ordinal items. To compare constructs across groups, one crucial requirement is that they are measured equivalently or, in technical jargon, that measurement invariance (MI) holds across the groups. This study compared the performance of multiple-group categorical confirmatory factor analysis (MG-CCFA) and multiple-group item response theory (MG-IRT) in testing MI with ordinal data. A simulation study compared the true-positive rate (TPR) and false-positive rate (FPR) of the two approaches, both at the scale and at the item level, under an invariance and a non-invariance scenario. The results showed that the TPR of MG-CCFA- and MG-IRT-based approaches depends mostly on scale length: for long scales, the likelihood ratio test (LRT) approach for MG-IRT outperformed the other approaches, whereas for short scales MG-CCFA was generally preferable. In addition, the performance of MG-CCFA's fit measures, such as RMSEA and CFI, depended largely on scale length, especially when MI was tested at the item level. General caution is recommended when using these measures, particularly when MI is tested for each item individually. A decision flowchart based on the simulation results is provided, summarizing which approach performed best in which setting.
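A minimal sketch of the item-level likelihood ratio test named above: the LRT compares a model that constrains an item's parameters to be equal across groups against one that frees them, and refers the statistic to a chi-square distribution. The log-likelihood values and degrees of freedom below are hypothetical placeholders, not output from the study.

```python
from scipy.stats import chi2

def lrt(loglik_constrained, loglik_free, df_diff):
    """Likelihood ratio test comparing a model with an item's parameters
    constrained equal across groups against a model where they are freely
    estimated in each group."""
    stat = 2.0 * (loglik_free - loglik_constrained)
    p_value = chi2.sf(stat, df_diff)
    return stat, p_value

# Hypothetical log-likelihoods from two fitted MG-IRT models; freeing a
# 2PL item's a and b parameters in a second group adds 2 parameters.
stat, p = lrt(loglik_constrained=-10234.7, loglik_free=-10229.1, df_diff=2)
print(f"LRT statistic = {stat:.2f}, p = {p:.4f}")  # flag the item if p < .05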


Author(s): E. Damiano D’Urso ◽ Kim De Roover ◽ Jeroen K. Vermunt ◽ Jesper Tijmstra

In the social sciences, the study of group differences concerning latent constructs is ubiquitous. These constructs are generally measured by means of scales composed of ordinal items. To compare constructs across groups, one crucial requirement is that they are measured equivalently or, in technical jargon, that measurement invariance (MI) holds across the groups. This study compared the performance of scale- and item-level approaches based on multiple-group categorical confirmatory factor analysis (MG-CCFA) and multiple-group item response theory (MG-IRT) in testing MI with ordinal data. In general, the simulation results showed that MG-CCFA-based approaches outperformed MG-IRT-based approaches when testing MI at the scale level, whereas at the item level the best-performing approach depends on the tested parameter (i.e., loadings or thresholds). When testing loadings equivalence, the likelihood ratio test provided the best trade-off between true-positive rate and false-positive rate, whereas when testing thresholds equivalence, the χ² test outperformed the other testing strategies. In addition, the performance of MG-CCFA's fit measures, such as RMSEA and CFI, depended largely on the length of the scale, especially when MI was tested at the item level. General caution is recommended when using these measures, especially when MI is tested for each item individually.
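The true-positive/false-positive trade-off discussed above can be made concrete with a small helper that scores test decisions against the simulated truth across items. The decision vectors below are illustrative only, not the paper's results.

```python
import numpy as np

def tpr_fpr(flagged, truly_noninvariant):
    """True/false positive rates for invariance-test decisions across
    simulated items: 'flagged' marks items the test rejected,
    'truly_noninvariant' marks items simulated as non-invariant."""
    flagged = np.asarray(flagged, dtype=bool)
    truth = np.asarray(truly_noninvariant, dtype=bool)
    tpr = flagged[truth].mean() if truth.any() else np.nan
    fpr = flagged[~truth].mean() if (~truth).any() else np.nan
    return tpr, fpr

# Hypothetical decisions for 10 items, 2 of which were simulated as
# non-invariant (illustrative data only).
flagged = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
truth   = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
tpr, fpr = tpr_fpr(flagged, truth)
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")  # TPR = 1.00, FPR = 0.12
```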


2016 ◽ Vol 32 (6) ◽ pp. 1843
Author(s): Nico Martins ◽ Hester Nienaber

The goal of the current study was to assess the Employee Engagement Instrument (EEI) from an item response theory (IRT) perspective, with a specific focus on measurement invariance across annual turnover groups. The sample comprised 4,099 respondents from all business sectors in South Africa. This article describes the logic and procedures used to test for factorial invariance across groups in the context of construct validation. The procedures included testing for configural and metric invariance in the framework of multiple-group confirmatory factor analysis (CFA). The results confirmed the factor-analytic structure and model fit for some of the individual scales of the EEI, and the measurement invariance of the EEI as a function of annual turnover was confirmed. However, the results indicated that the EEI needs to be refined for future research.
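The configural-versus-metric comparison described above is commonly carried out as a chi-square difference test between nested multi-group CFA models, sometimes supplemented by the change in CFI (a common rule of thumb flags |ΔCFI| > .01). A minimal sketch with hypothetical fit statistics, not values from this article:

```python
from scipy.stats import chi2

def nested_model_test(chisq_configural, df_configural,
                      chisq_metric, df_metric,
                      cfi_configural, cfi_metric):
    """Chi-square difference test between nested multi-group CFA models
    (configural vs. metric invariance), plus the change in CFI often used
    as a practical criterion."""
    d_chisq = chisq_metric - chisq_configural
    d_df = df_metric - df_configural
    p = chi2.sf(d_chisq, d_df)
    d_cfi = cfi_configural - cfi_metric
    return d_chisq, d_df, p, d_cfi

# Hypothetical fit statistics for the two nested models (illustrative only).
d_chisq, d_df, p, d_cfi = nested_model_test(
    chisq_configural=412.3, df_configural=202,
    chisq_metric=431.8, df_metric=214,
    cfi_configural=0.957, cfi_metric=0.955)
print(f"d_chisq({d_df}) = {d_chisq:.1f}, p = {p:.3f}, dCFI = {d_cfi:.3f}")
```

A non-significant Δχ² (and a ΔCFI within the conventional bound) would support imposing the equal-loadings constraints of the metric model.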


2001 ◽ Vol 46 (6) ◽ pp. 629-632
Author(s): Robert J. Mislevy
