Applying Item Response Theory Analysis to the Montreal Cognitive Assessment in a Low-Education Older Population

Assessment
2019
Vol 27 (7)
pp. 1416-1428
Author(s):
Hao Luo
Björn Andersson
Jennifer Y. M. Tang
Gloria H. Y. Wong

The traditional application of the Montreal Cognitive Assessment (MoCA) uses total scores to define levels of cognitive impairment, without considering variation in item properties across populations. Item response theory (IRT) analysis offers a potential way to minimize the effect of important confounding factors such as education. This study applied IRT to investigate the characteristics of MoCA items in a randomly selected, culturally homogeneous sample of 1,873 older persons with diverse educational backgrounds. Receipt of any formal education was used as the grouping variable in estimating multiple-group IRT models. Results showed that item characteristics differed between people with and without formal education: the Cube, Clock Number, and Clock Hand items functioned better in people without formal education. This analysis provides evidence that item properties vary with education, calling for more sophisticated IRT-based modelling that incorporates the effect of education.
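The multiple-group models described above let an item's discrimination and difficulty differ between education groups. Below is a minimal sketch in Python of how such group-specific 2PL parameters translate into item information curves; the parameter values are illustrative assumptions, not the study's estimates.

```python
# Minimal sketch: group-specific 2PL item parameters and the item
# information they imply. Parameter values are hypothetical.
import numpy as np

def icc_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = icc_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)

# Hypothetical discrimination (a) and difficulty (b) for one item,
# allowed to differ between the two education groups.
params = {"formal education": (1.2, 0.0), "no formal education": (1.8, -0.5)}

for group, (a, b) in params.items():
    info = item_information(theta, a, b)
    peak = theta[info.argmax()]
    print(f"{group}: peak information {info.max():.2f} at theta = {peak:.2f}")
```

A higher discrimination in one group concentrates the item's information at a different ability level, which is why a common cut-off score can behave differently across groups.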

2011
Vol 24 (4)
pp. 651-658
Author(s):
Chia-Fen Tsai
Wei-Ju Lee
Shuu-Jiun Wang
Ben-Chang Shia
Ziad Nasreddine
...

Background: The Montreal Cognitive Assessment (MoCA) is an instrument for screening mild cognitive impairment (MCI). This study examined the psychometric properties and validity of the Taiwan version of the MoCA (MoCA-T) in an elderly outpatient population.

Methods: Participants completed the MoCA-T, the Mini-Mental State Examination (MMSE), and the Chinese Version Verbal Learning Test. Alzheimer's disease (AD) was diagnosed according to the NINCDS-ADRDA criteria, and MCI according to the criteria proposed by Petersen et al. (2001).

Results: Data were collected from 207 participants (115 males/92 females; mean age 77.3 ± 7.5 years). Ninety-eight participants were diagnosed with AD, 71 with MCI, and 38 were normal controls. The area under the receiver operating characteristic curve (AUC) for predicting AD was 0.98 (95% confidence interval [CI] = 0.97–1.00) for the MMSE and 0.99 (95% CI = 0.98–1.00) for the MoCA-T. The AUC for predicting MCI was 0.81 (95% CI = 0.72–0.89) using the MMSE and 0.91 (95% CI = 0.86–1.00) using the MoCA-T. At the optimal cut-off score of 23/24, the MoCA-T had a sensitivity of 92% and a specificity of 78% for MCI. Item response theory analysis indicated that the level of information provided by each subtest of the MoCA-T was consistent; the frontal and language subscales provided higher discriminating power than the other subscales in detecting MCI.

Conclusion: Compared with the MMSE, the MoCA-T has better psychometric properties for detecting MCI. The utility of the MoCA-T is optimal in mild to moderate cognitive dysfunction.
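The diagnostic accuracy figures above follow a standard ROC workflow. Below is a minimal sketch of that workflow on simulated scores (none of the data come from the study), using scikit-learn for the AUC and the reported 23/24 cut-off for sensitivity and specificity.

```python
# Minimal sketch of ROC analysis for a screening cut-off.
# Scores are simulated, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
mci = np.clip(rng.normal(20, 3, 71).round(), 0, 30)       # simulated MCI scores
controls = np.clip(rng.normal(26, 2, 38).round(), 0, 30)  # simulated controls

scores = np.concatenate([mci, controls])
is_mci = np.concatenate([np.ones(71), np.zeros(38)])

# Lower MoCA-T scores indicate impairment, so rank by the negated score.
auc = roc_auc_score(is_mci, -scores)

cutoff = 23  # "23/24": scores <= 23 are flagged as impaired
flagged = scores <= cutoff
sensitivity = flagged[is_mci == 1].mean()
specificity = (~flagged)[is_mci == 0].mean()
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```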


2014
Vol 10
pp. P560-P560
Author(s):
Andreana Benitez
Liana Apostolova
Alden L. Gross
John Ringman
Po-Haong Lu

2020
Author(s):
E. Damiano D'Urso
Kim De Roover
Jeroen K. Vermunt
Jesper Tijmstra

In the social sciences, the study of group differences in latent constructs is ubiquitous. These constructs are generally measured by means of scales composed of ordinal items. To compare constructs across groups, one crucial requirement is that they are measured equivalently or, in technical jargon, that measurement invariance (MI) holds across the groups. This study compared the performance of multiple-group categorical confirmatory factor analysis (MG-CCFA) and multiple-group item response theory (MG-IRT) in testing measurement invariance with ordinal data. A simulation study compared the true-positive rate (TPR) and false-positive rate (FPR) of the two approaches, at both the scale and the item level, under an invariance and a non-invariance scenario. The results showed that the performance of the MG-CCFA- and MG-IRT-based approaches, in terms of TPR, depends mostly on scale length: for long scales, the likelihood ratio test (LRT) approach for MG-IRT outperformed the other approaches, whereas for short scales MG-CCFA seemed generally preferable. In addition, the performance of MG-CCFA's fit measures, such as RMSEA and CFI, appeared to depend largely on scale length, especially when MI was tested at the item level. General caution is recommended when using these measures, especially when MI is tested for each item individually. A decision flowchart based on the simulation results is provided to summarize the findings and indicate which approach performed best in which setting.
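The LRT approach referred to above compares a model whose parameters are constrained equal across groups against one where they are free; twice the log-likelihood difference is referred to a χ² distribution. A minimal sketch of that comparison, with hypothetical fitted log-likelihoods:

```python
# Minimal sketch of the likelihood ratio test for measurement invariance.
# The log-likelihoods and degrees of freedom below are hypothetical.
from scipy.stats import chi2

ll_free = -4821.3         # parameters free across groups
ll_constrained = -4836.9  # parameters constrained equal across groups
df_difference = 10        # number of equality constraints imposed

lrt_statistic = 2 * (ll_free - ll_constrained)
p_value = chi2.sf(lrt_statistic, df_difference)
print(f"LRT = {lrt_statistic:.1f}, df = {df_difference}, p = {p_value:.4f}")
# A small p-value rejects invariance: the equality constraints
# significantly worsen model fit.
```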


Author(s):  
E. Damiano D’Urso
Kim De Roover
Jeroen K. Vermunt
Jesper Tijmstra

In the social sciences, the study of group differences in latent constructs is ubiquitous. These constructs are generally measured by means of scales composed of ordinal items. To compare constructs across groups, one crucial requirement is that they are measured equivalently or, in technical jargon, that measurement invariance (MI) holds across the groups. This study compared the performance of scale- and item-level approaches based on multiple-group categorical confirmatory factor analysis (MG-CCFA) and multiple-group item response theory (MG-IRT) in testing MI with ordinal data. In general, the simulation results showed that MG-CCFA-based approaches outperformed MG-IRT-based approaches when testing MI at the scale level, whereas at the item level the best-performing approach depends on the tested parameter (i.e., loadings or thresholds): when testing loadings equivalence, the likelihood ratio test provided the best trade-off between true-positive and false-positive rates, whereas when testing thresholds equivalence, the χ² test outperformed the other testing strategies. In addition, the performance of MG-CCFA's fit measures, such as RMSEA and CFI, seemed to depend largely on the length of the scale, especially when MI was tested at the item level. General caution is recommended when using these measures, especially when MI is tested for each item individually.
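The RMSEA and CFI fit measures whose behaviour the abstract cautions about are simple functions of the model and baseline χ² statistics. A minimal sketch using the standard formulas, with hypothetical input values:

```python
# Minimal sketch of the RMSEA and CFI formulas; inputs are hypothetical.
import math

def rmsea(chisq, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def cfi(chisq_m, df_m, chisq_b, df_b):
    """Comparative fit index: tested model vs. baseline model."""
    d_m = max(chisq_m - df_m, 0.0)
    d_b = max(chisq_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

print(f"RMSEA = {rmsea(chisq=152.4, df=120, n=500):.3f}")        # ~0.023
print(f"CFI = {cfi(152.4, 120, chisq_b=2450.0, df_b=136):.3f}")  # ~0.986
```

Because both indices depend on the model's degrees of freedom, their behaviour shifts with scale length, consistent with the caution expressed above.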

