Cognitive screening instruments for dementia: comparing metrics of test limitation

2021 ◽  
Vol 15 (4) ◽  
pp. 458-463
Author(s):  
Andrew J. Larner

Cognitive screening instruments (CSIs) for dementia and mild cognitive impairment are usually characterized in terms of measures of discrimination such as sensitivity, specificity, and likelihood ratios, but these CSIs also have limitations. Objective: The aim of this study was to calculate various measures of test limitation for commonly used CSIs, namely the misclassification rate (MR), the net harm/net benefit ratio (H/B), and the likelihood to be diagnosed or misdiagnosed (LDM). Methods: Data from several previously reported pragmatic test accuracy studies of CSIs (Mini-Mental State Examination, Montreal Cognitive Assessment, Mini-Addenbrooke's Cognitive Examination, Six-item Cognitive Impairment Test, informant Ascertain Dementia 8, Test Your Memory test, and Free-Cog) undertaken in a single clinic were reanalyzed to calculate and compare MR, H/B, and LDM for each test. Results: Some CSIs with very high sensitivity but low specificity for dementia fared poorly on measures of limitation, with high MRs, low H/B, and low LDM; some had likelihoods favoring misdiagnosis over diagnosis. Tests with a better balance of sensitivity and specificity fared better on measures of limitation. Conclusions: When deciding which CSI to administer, measures of test limitation as well as measures of test discrimination should be considered. Identification of CSIs with high MR, low H/B, and low LDM may have implications for their use in clinical practice.
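The abstract does not give formulas for these limitation measures, so the sketch below is illustrative only: it computes the misclassification rate from a 2x2 confusion matrix with hypothetical counts, and derives the LDM under one assumed "numbers needed" formulation (NND = 1/Youden index, NNM = 1/(1 - accuracy), LDM = NNM/NND). The net harm/net benefit ratio is omitted because its definition is not stated here.

```python
# Sketch: limitation measures from a 2x2 confusion matrix (illustrative counts).
# Assumed formulations: NND = 1 / Youden index, NNM = 1 / (1 - accuracy),
# LDM = NNM / NND. These definitions are not given in the abstract itself.

def limitation_measures(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    mr = (fp + fn) / n                 # misclassification rate
    nnd = 1 / (sens + spec - 1)        # number needed to diagnose (assumed formula)
    nnm = 1 / (1 - acc)                # number needed to misdiagnose (assumed formula)
    ldm = nnm / nnd                    # likelihood to be diagnosed or misdiagnosed (assumed)
    return {"sensitivity": sens, "specificity": spec, "MR": mr,
            "NND": nnd, "NNM": nnm, "LDM": ldm}

# Hypothetical counts for a highly sensitive but poorly specific screener:
print(limitation_measures(tp=42, fp=120, fn=3, tn=95))
```

With these invented counts the LDM falls below 1, that is, misdiagnosis is more likely than correct diagnosis, which is the pattern the abstract reports for highly sensitive but poorly specific tests.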

2020 ◽  
Author(s):  
Andrew J Larner

Cognitive screening instruments (CSIs) for dementia and mild cognitive impairment are usually characterised in terms of measures of discrimination such as sensitivity, specificity, and likelihood ratios. However, CSIs also have limitations. Several metrics exist which may be used to denote test limitations, but they are seldom examined. Data from several pragmatic test accuracy studies of CSIs were interrogated to calculate various measures of limitation, namely: misclassification rate; net harm to net benefit ratio; and the likelihood to be diagnosed or misdiagnosed. Intra- and inter-test performance for measures of discrimination and limitation were compared. The study found that some tests with very high sensitivity but low specificity for dementia fared poorly on measures of limitation, with high misclassification rates, low net harm to net benefit ratios, and low likelihoods to be diagnosed or misdiagnosed; some had likelihoods favouring misdiagnosis over diagnosis. Tests with a better balance of sensitivity and specificity fared better on measures of limitation. When choosing which CSIs to administer, measures of test limitation should be considered as well as measures of test discrimination. Although high test sensitivity may be desirable to avoid false negatives, false positives also have a cost. Identification of tests having a high misclassification rate, a low net harm to net benefit ratio, and a low likelihood to be diagnosed or misdiagnosed may have implications for their use in clinical practice.
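For comparison with the limitation measures sketched above, here is a companion sketch of the standard measures of discrimination named in this abstract (sensitivity, specificity, and positive/negative likelihood ratios); the counts are again hypothetical.

```python
# Sketch: standard measures of discrimination from the same 2x2 layout
# (hypothetical counts; not data from the studies described).

def discrimination_measures(tp, fp, fn, tn):
    sens = tp / (tp + fn)              # sensitivity (true positive rate)
    spec = tn / (tn + fp)              # specificity (true negative rate)
    lr_pos = sens / (1 - spec)         # positive likelihood ratio
    lr_neg = (1 - sens) / spec         # negative likelihood ratio
    return {"sensitivity": sens, "specificity": spec, "LR+": lr_pos, "LR-": lr_neg}

print(discrimination_measures(tp=42, fp=120, fn=3, tn=95))
```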


2017 ◽  
Vol 29 (6) ◽  
pp. 931-937 ◽  
Author(s):  
A.J. Larner

Background: The Mini-Addenbrooke's Cognitive Examination (MACE) is a new brief cognitive screening instrument for dementia and mild cognitive impairment (MCI). Historical data suggest that MACE may be comparable to the Montreal Cognitive Assessment (MoCA), a well-established cognitive screening instrument, in secondary care settings, but no head-to-head study has been reported hitherto. Methods: A pragmatic diagnostic accuracy study of MACE and MoCA was undertaken in consecutive patients referred over the course of one year to a neurology-led Cognitive Function Clinic, comparing their performance for the diagnosis of dementia and MCI using various test metrics. Results: In a cohort of 260 patients with dementia and MCI prevalence of 17% and 29%, respectively, both MACE and MoCA were quick and easy to use and acceptable to patients. Both tests had high sensitivity (>0.9) and large effect sizes (Cohen's d) for diagnosis of both dementia and MCI but low specificity and positive predictive values. Area under the receiver operating characteristic curve was excellent for dementia diagnosis (both >0.9) but less good for MCI (MoCA good and MACE fair). In contrast, weighted comparison suggested test equivalence for dementia diagnosis but with a slight net benefit for MACE for MCI diagnosis. Conclusions: MACE is an acceptable and accurate test for the assessment of cognitive problems, with performance comparable to MoCA. MACE appears to be a viable alternative to MoCA for testing patients with cognitive complaints in a secondary care setting.
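The combination of high sensitivity with low positive predictive values reported here is what Bayes' theorem predicts at the stated dementia and MCI prevalences; a brief sketch using illustrative sensitivity and specificity values (not the study's actual figures) makes the dependence on prevalence explicit.

```python
# Sketch: how prevalence drives predictive values for a fixed sensitivity/specificity.
# The sensitivity and specificity below are illustrative, not the study's results.

def predictive_values(sens, spec, prevalence):
    ppv = (sens * prevalence) / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = (spec * (1 - prevalence)) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

for prev in (0.17, 0.29, 0.50):
    ppv, npv = predictive_values(sens=0.92, spec=0.60, prevalence=prev)
    print(f"prevalence {prev:.2f}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

Even with sensitivity above 0.9, PPV remains modest at 17% and 29% prevalence when specificity is low, as the Results describe.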


2015 ◽  
Vol 39 (3-4) ◽  
pp. 167-175 ◽  
Author(s):  
Andrew J. Larner

Background/Aims: The optimal method of establishing test cutoffs or cutpoints for cognitive screening instruments (CSIs) is uncertain. Of the available methods, two base cutoffs on either the maximal test accuracy or the maximal Youden index. The aim of this study was to compare the effects of using these alternative methods of establishing cutoffs. Methods: Datasets from three pragmatic diagnostic accuracy studies which examined the Mini-Mental State Examination (MMSE), the Addenbrooke's Cognitive Examination-Revised (ACE-R), the Montreal Cognitive Assessment (MoCA), and the Test Your Memory (TYM) test were analysed to calculate test sensitivity and specificity using cutoffs based on either maximal test accuracy or the maximal Youden index. Results: For ACE-R, MoCA, and TYM, optimal cutoffs for dementia diagnosis differed from those in index studies when defined using either the maximal accuracy or the maximal Youden index method. Optimal cutoffs were higher for MMSE, MoCA, and TYM when using the maximal Youden index method and consequently more sensitive. Conclusion: Revision of the cutoffs for CSIs established in index studies may be required to optimise performance in pragmatic diagnostic test accuracy studies which more closely resemble clinical practice.
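The two cutoff-definition rules compared in this study can be made concrete with a short sketch that sweeps candidate cutoffs over hypothetical score distributions and selects one cutoff by maximal overall accuracy and another by maximal Youden index (sensitivity + specificity - 1). The scores are invented; only the selection logic is the point.

```python
# Sketch: defining a cutoff by maximal test accuracy vs. maximal Youden index.
# Scores are hypothetical; a score at or below the cutoff counts as test-positive.
cases    = [16, 17, 18, 20, 23]                                            # reference-standard positive
controls = [21, 22, 24, 24, 25, 25, 26, 26, 27, 27, 28, 28, 29, 29, 30]   # reference-standard negative

def sens_spec_acc(cutoff):
    tp = sum(s <= cutoff for s in cases)
    fp = sum(s <= cutoff for s in controls)
    sens = tp / len(cases)
    spec = (len(controls) - fp) / len(controls)
    acc = (tp + (len(controls) - fp)) / (len(cases) + len(controls))
    return sens, spec, acc

cutoffs = range(min(cases), max(controls) + 1)
by_accuracy = max(cutoffs, key=lambda c: sens_spec_acc(c)[2])
by_youden   = max(cutoffs, key=lambda c: sens_spec_acc(c)[0] + sens_spec_acc(c)[1] - 1)
print("cutoff by maximal accuracy:", by_accuracy, sens_spec_acc(by_accuracy))
print("cutoff by maximal Youden index:", by_youden, sens_spec_acc(by_youden))
```

With these invented scores the Youden-based cutoff comes out higher and more sensitive than the accuracy-based one, the same direction of difference the study reports for the MMSE, MoCA, and TYM.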


2019 ◽  
Vol 9 (5) ◽  
pp. 277-281 ◽  
Author(s):  
Andrew J Larner

Aim: To examine four different accuracy metrics for the assessment of commonly used cognitive screening instruments: correct classification accuracy, area under the receiver operating characteristic curve, F measure (F) or F1 score, and the Matthews correlation coefficient (MCC). Methods: Raw data were extracted from test accuracy studies of the Mini-Mental State Examination, Montreal Cognitive Assessment, Mini-Addenbrooke's Cognitive Examination, Six-item Cognitive Impairment Test, informant AD8, and Free-Cog, and used to calculate the accuracy measures. Results: Each metric resulted in a similar ordering of the screening instruments for the diagnosis of both dementia and mild cognitive impairment. Area under the receiver operating characteristic curve gave the highest (most optimistic) and MCC the lowest (most pessimistic) accuracy value for each test examined, with correct classification accuracy and F falling between. Conclusion: All the accuracy measures examined have potential shortcomings. None can be recommended as the definitive unitary outcome measure for test accuracy studies. However, MCC has theoretical advantages and might be more widely adopted.
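Apart from the area under the ROC curve, which requires the full distribution of test scores, the metrics compared here can be computed directly from a 2x2 confusion matrix; a minimal sketch with hypothetical counts follows.

```python
import math

# Sketch: correct classification accuracy, F1 score, and Matthews correlation
# coefficient from a 2x2 confusion matrix (hypothetical counts).
def accuracy_metrics(tp, fp, fn, tn):
    acc = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": acc, "F1": f1, "MCC": mcc}

print(accuracy_metrics(tp=42, fp=120, fn=3, tn=95))
```

With counts of this shape (high sensitivity, poor specificity), MCC comes out lowest of the three, consistent with the ordering described in the Results.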


Diagnostics ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. 93 ◽  
Author(s):  
Rónán O’Caoimh ◽  
D. William Molloy

Short but accurate cognitive screening instruments are required in busy clinical practice. Although widely used, the diagnostic accuracy of the standardised Mini-Mental State Examination (SMMSE) in different dementia subtypes remains poorly characterised. We compared the SMMSE to the Quick Mild Cognitive Impairment (Qmci) screen in patients (n = 3020) pooled from three memory clinic databases in Canada, including those with mild cognitive impairment (MCI) and Alzheimer's, vascular, mixed, frontotemporal, Lewy body and Parkinson's dementia, with and without co-morbid depression. Caregivers (n = 875) without cognitive symptoms were included as normal controls. The median age of patients was 77 years (interquartile range ±9 years). Both instruments accurately differentiated cognitive impairment (MCI or dementia) from controls. The SMMSE most accurately differentiated Alzheimer's (AUC 0.94) and Lewy body dementia (AUC 0.94) and least accurately identified MCI (AUC 0.73), vascular dementia (AUC 0.74), and Parkinson's dementia (AUC 0.81). The Qmci had statistically similar or greater accuracy in distinguishing all dementia subtypes, particularly MCI (AUC 0.85). Co-morbid depression affected accuracy in those with MCI. The SMMSE and Qmci have good to excellent accuracy in established dementia. The SMMSE is less suitable in MCI, vascular dementia, and Parkinson's dementia, where alternatives including the Qmci screen may be used. The influence of co-morbid depression on scores merits further investigation.
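The AUC values quoted here summarise how well each instrument's scores rank impaired patients below (i.e. worse than) controls; the rank-based (Mann-Whitney) calculation of AUC can be sketched as follows, using invented scores.

```python
# Sketch: AUC as the probability that a randomly chosen impaired individual
# scores worse (here: lower) than a randomly chosen control, ties counted as 0.5.
# Scores are invented for illustration.
impaired = [14, 16, 18, 19, 21, 23, 24]
controls = [20, 22, 24, 25, 26, 27, 28, 29]

def auc_lower_is_worse(impaired_scores, control_scores):
    pairs = [(i, c) for i in impaired_scores for c in control_scores]
    wins = sum(1.0 if i < c else 0.5 if i == c else 0.0 for i, c in pairs)
    return wins / len(pairs)

print(f"AUC = {auc_lower_is_worse(impaired, controls):.2f}")
```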


Diagnostics ◽  
2019 ◽  
Vol 9 (2) ◽  
pp. 58 ◽  
Author(s):  
Besa Ziso ◽  
Andrew J. Larner

Many cognitive screening instruments are available to assess patients with cognitive symptoms in whom a diagnosis of dementia or mild cognitive impairment (MCI) is being considered. Most are quantitative scales with specified cut-off values. In contrast, the cognitive disorders examination or Codex is a two-step decision tree which incorporates components from the Mini-Mental State Examination (MMSE) (three-word recall, spatial orientation) along with a simplified clock drawing test to produce categorical outcomes defining the probability of dementia diagnosis and, by implication, directing the clinician's response (reassurance, monitoring, further investigation, immediate treatment). Codex has been shown to have high sensitivity and specificity for dementia diagnosis but is less sensitive for the diagnosis of MCI. We examined minor modifications to the Codex decision tree to try to improve its sensitivity for the diagnosis of MCI, based on data extracted from studies of two other cognitive screening instruments, the Montreal Cognitive Assessment and Free-Cog, which are more stringent than the MMSE in their tests of delayed recall. Neither modification proved of diagnostic value for MCI. Possible explanations for this failure are considered.
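A two-step decision tree of this kind is straightforward to express in code. The sketch below mirrors the structure described in this abstract (three-word recall and clock drawing first, spatial orientation as the tie-breaker for discordant results), but the thresholds, category labels, and suggested clinician responses are illustrative assumptions, not the published Codex specification.

```python
# Sketch of a Codex-like two-step decision tree. Thresholds, categories, and
# suggested responses are illustrative assumptions, not the published instrument.
def codex_like(recall_words: int, clock_normal: bool, spatial_orientation: int) -> str:
    """recall_words: 0-3 words recalled; spatial_orientation: 0-5 MMSE-style items correct."""
    recall_normal = recall_words == 3
    # Step 1: three-word recall and simplified clock drawing test.
    if recall_normal and clock_normal:
        return "very low probability of dementia (reassure)"
    if not recall_normal and not clock_normal:
        return "very high probability of dementia (investigate / treat)"
    # Step 2: spatial orientation decides discordant cases (cutoff assumed here).
    if spatial_orientation >= 4:
        return "low probability of dementia (monitor)"
    return "high probability of dementia (further investigation)"

print(codex_like(recall_words=2, clock_normal=True, spatial_orientation=3))
```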


2021 ◽  
Vol 11 (11) ◽  
pp. 1473
Author(s):  
Andrew J. Larner

Diagnostic and screening tests may have risks, such as misdiagnosis, as well as the potential benefits of correct diagnosis. Effective communication of this risk to both clinicians and patients can be problematic. The purpose of this study was to develop a metric called the "efficiency index" (EI), defined as the ratio of test accuracy to inaccuracy, to evaluate screening tests for dementia. This measure was compared with the previously described "likelihood to be diagnosed or misdiagnosed" (LDM), which is also based on "numbers needed" metrics. Datasets from prospective pragmatic test accuracy studies examining four brief cognitive screening instruments (Mini-Mental State Examination; Montreal Cognitive Assessment; Mini-Addenbrooke's Cognitive Examination (MACE); and Free-Cog) were analysed to calculate values for EI and LDM, and to examine their variation with test cut-off (for MACE) and with dementia prevalence. EI values were also calculated using a modification of McGee's heuristic for the simplification of likelihood ratios to estimate percentage change in diagnostic probability. The findings indicate that EI is easier to calculate than LDM and, unlike LDM, may be classified either qualitatively or quantitatively in a manner similar to likelihood ratios. EI shows the utility or inutility of diagnostic and screening tests, illustrating the inevitable trade-off between diagnosis and misdiagnosis. It may be a useful metric to communicate risk in a way that is easily intelligible for both clinicians and patients.
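Taking the abstract's own definition of EI (the ratio of test accuracy to inaccuracy) and the same assumed "numbers needed" formulation of the LDM used in the earlier sketch, the two metrics can be computed side by side; the counts are hypothetical.

```python
# Sketch: efficiency index (EI) vs. LDM from a 2x2 confusion matrix.
# EI follows the abstract's definition (accuracy / inaccuracy); the LDM
# formulation (NNM / NND) is assumed, as in the earlier sketch.
def ei_and_ldm(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ei = acc / (1 - acc)
    ldm = (1 / (1 - acc)) / (1 / (sens + spec - 1))
    return {"accuracy": acc, "EI": ei, "LDM": ldm}

print(ei_and_ldm(tp=42, fp=30, fn=8, tn=180))
```

An EI above 1 means correct classifications outnumber incorrect ones; the paper's point is that such values can then be banded qualitatively or quantitatively, much as likelihood ratios are.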


2021 ◽  
Author(s):  
Andrew J Larner

Diagnostic and screening tests may have risks, such as misdiagnosis, as well as the potential benefits of correct diagnosis. Effective communication of this risk to both clinicians and patients can be problematic. The purpose of this study was to develop a metric called the efficiency index (EI), defined as the ratio of test accuracy to inaccuracy, or the number needed to misdiagnose divided by the number needed to diagnose. This was compared with the previously described likelihood to be diagnosed or misdiagnosed (LDM), which is also based on "numbers needed" metrics. Datasets from prospective pragmatic test accuracy studies examining four brief cognitive screening instruments (Mini-Mental State Examination; Montreal Cognitive Assessment; Mini-Addenbrooke's Cognitive Examination (MACE); and Free-Cog) were analysed to calculate values for EI and LDM, and to examine their variation with test cut-off for MACE. EI values were also calculated using a modification of McGee's heuristic for the simplification of likelihood ratios to estimate percentage change in diagnostic probability. The findings indicate that EI is easier to calculate than LDM and, unlike LDM, may be classified either qualitatively or quantitatively in a manner similar to likelihood ratios. EI shows the utility or inutility of diagnostic and screening tests, illustrating the inevitable trade-off between diagnosis and misdiagnosis. It may be a useful metric to communicate risk in a way that is easily intelligible for both clinicians and patients.
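The McGee heuristic referred to here approximates the change in diagnostic probability implied by a likelihood ratio without a full Bayesian calculation. The sketch below implements the original approximation as generally quoted (roughly 0.19 x ln(LR), valid for pretest probabilities of about 10-90%); the constant is an assumption here, and the paper's specific modification for EI is not reproduced.

```python
import math

# Sketch: McGee-style approximation of the change in diagnostic probability
# implied by a likelihood ratio, roughly valid for pretest probabilities of 10-90%.
# The 0.19 constant is the commonly quoted value; treat it as an assumption here.
def approx_probability_change(likelihood_ratio: float) -> float:
    return 0.19 * math.log(likelihood_ratio)   # change expressed as a proportion

for lr in (2, 5, 10, 0.5, 0.2, 0.1):
    print(f"LR {lr:>4}: approx. change in probability {approx_probability_change(lr):+.0%}")
```

The heuristic's familiar rounded values (about +15%, +30%, +45% for LRs of 2, 5, and 10, and the mirror-image decreases for LRs of 0.5, 0.2, and 0.1) fall out of this approximation.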


2022 ◽  
Vol 12 (1) ◽  
pp. 37
Author(s):  
Jie Wang ◽  
Zhuo Wang ◽  
Ning Liu ◽  
Caiyan Liu ◽  
Chenhui Mao ◽  
...  

Background: The Mini-Mental State Examination (MMSE) is the most widely used tool in cognitive screening. Some individuals with normal MMSE scores nevertheless have extensive cognitive impairment, and systematic neuropsychological assessment should be performed in these patients. This study aimed to optimize the systematic neuropsychological test battery (NTB) by machine learning and to develop new classification models for distinguishing mild cognitive impairment (MCI) and dementia among individuals with MMSE ≥ 26. Methods: A total of 375 participants with MMSE ≥ 26 were assigned a diagnosis of cognitively unimpaired (CU) (n = 67), MCI (n = 174), or dementia (n = 134). We compared the performance of five machine learning algorithms, namely logistic regression, decision tree, SVM, XGBoost, and random forest (RF), in identifying MCI and dementia. Results: RF performed best in identifying MCI and dementia. The six neuropsychological subtests with the highest feature importance were selected to form a simplified NTB, cutting the test time in half. The AUC of the RF model was 0.89 for distinguishing MCI from CU, and 0.84 for distinguishing dementia from non-dementia. Conclusions: This simplified cognitive assessment model can be useful for the diagnosis of MCI and dementia in patients with normal MMSE scores. It not only optimizes the content of cognitive evaluation but also improves diagnosis and reduces missed diagnoses.
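A minimal sketch of the modelling approach described, random-forest classification followed by selection of the highest-importance subtests to form a shortened battery, is given below. It uses synthetic data and the scikit-learn API; the subtest names, effect sizes, and hyperparameters are placeholders, not the study's actual battery or settings.

```python
# Sketch (synthetic data): random-forest classification of MCI vs. cognitively
# unimpaired from neuropsychological subtest scores, then selection of the
# highest-importance subtests to form a shortened battery. Subtest names,
# simulated effects, and hyperparameters are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
subtests = ["delayed_recall", "naming", "trail_making", "fluency",
            "clock_drawing", "digit_span", "similarities", "copy_figure"]
n = 375
y = rng.integers(0, 2, size=n)          # 1 = MCI, 0 = cognitively unimpaired (simulated)
X = rng.normal(size=(n, len(subtests)))
X[:, 0] -= 1.2 * y                      # make two subtests informative in the simulation
X[:, 3] -= 0.8 * y

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC (all subtests):", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 2))

# Keep the six subtests with the highest importance, mimicking the simplified NTB idea.
top6 = np.argsort(rf.feature_importances_)[::-1][:6]
print("selected subtests:", [subtests[i] for i in top6])
rf_short = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[:, top6], y_tr)
print("AUC (shortened battery):",
      round(roc_auc_score(y_te, rf_short.predict_proba(X_te[:, top6])[:, 1]), 2))
```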


2021 ◽  
Vol 18 ◽  
Author(s):  
Che-Sheng Chu ◽  
I-Chen Lee ◽  
Chuan-Cheng Hung ◽  
I-Ching Lee ◽  
Chi-Fa Hung ◽  
...  

Background: The aim of this study was to establish the validity and reliability of the Computerized Brief Cognitive Screening Test (CBCog) for early detection of cognitive impairment. Method: One hundred and sixty participants, including community-dwelling and out-patient volunteers (both men and women) aged ≥ 65 years, were enrolled in the study. All participants were screened using the CBCog and the Mini-Mental State Examination (MMSE). The internal consistency of the CBCog was analyzed using Cronbach's α. Areas under the curves (AUCs) from receiver operating characteristic analyses were used to test the predictive accuracy of the CBCog in detecting mild cognitive impairment (MCI) in order to set an appropriate cutoff point. Results: CBCog scores were positively correlated with MMSE scores in patients with MCI-related dementia (r = 0.678, P < 0.001). The internal consistency of the CBCog (Cronbach's α) was 0.706. The CBCog with a cutoff point of 19/20 had a sensitivity of 97.5% and a specificity of 53.7% for the diagnosis of MCI in participants with an education level of ≥ 6 years. The AUC of the CBCog for discriminating normal control elderly from patients with MCI (AUC = 0.827, P < 0.001) was larger than that of the MMSE (AUC = 0.819, P < 0.001). Conclusion: The CBCog demonstrated sufficient validity and reliability for the evaluation of mild cognitive impairment, especially in highly educated elderly people.
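The two psychometric quantities central to this validation, Cronbach's α for internal consistency and sensitivity/specificity at a candidate cutoff, can be sketched as follows. The item-level data and the cutoff are simulated; only the formulas (the standard Cronbach's α and the usual 2x2 definitions) are intended to carry over.

```python
# Sketch: Cronbach's alpha for internal consistency and sensitivity/specificity
# at a candidate total-score cutoff. Item scores, impairment labels, and the
# cutoff are simulated for illustration.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = participants, columns = test items."""
    k = item_scores.shape[1]
    item_var_sum = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=160)                                     # latent cognition (simulated)
items = (ability[:, None] + rng.normal(size=(160, 10)) > 0).astype(float)
print("Cronbach's alpha:", round(cronbach_alpha(items), 2))

# Sensitivity and specificity at a cutoff (total score <= cutoff = screen positive).
totals = items.sum(axis=1)
impaired = ability < -0.5                                          # simulated reference standard
cutoff = 4
sens = np.mean(totals[impaired] <= cutoff)
spec = np.mean(totals[~impaired] > cutoff)
print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```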

