Communicating Risk: Developing an “Efficiency Index” for Dementia Screening Tests

2021 ◽  
Vol 11 (11) ◽  
pp. 1473
Author(s):  
Andrew J. Larner

Diagnostic and screening tests may have risks such as misdiagnosis, as well as the potential benefits of correct diagnosis. Effective communication of this risk to both clinicians and patients can be problematic. The purpose of this study was to develop a metric called the “efficiency index” (EI), defined as the ratio of test accuracy to test inaccuracy, or the number needed to misdiagnose divided by the number needed to diagnose, to evaluate screening tests for dementia. This measure was compared with a previously described “likelihood to be diagnosed or misdiagnosed” (LDM), also based on “numbers needed” metrics. Datasets from prospective pragmatic test accuracy studies examining four brief cognitive screening instruments (Mini-Mental State Examination; Montreal Cognitive Assessment; Mini-Addenbrooke’s Cognitive Examination (MACE); and Free-Cog) were analysed to calculate values for EI and LDM, and to examine their variation with test cut-off for MACE and with dementia prevalence. EI values were also calculated using a modification of McGee’s heuristic for the simplification of likelihood ratios to estimate percentage change in diagnostic probability. The findings indicate that EI is easier to calculate than LDM and, unlike LDM, may be classified either qualitatively or quantitatively in a manner similar to likelihood ratios. EI shows the utility or inutility of diagnostic and screening tests, illustrating the inevitable trade-off between diagnosis and misdiagnosis. It may be a useful metric for communicating risk in a way that is easily intelligible to both clinicians and patients.
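The arithmetic behind EI is simple enough to sketch directly. The following is a minimal illustration, not the paper's code: it takes the abstract's definition of EI literally as the ratio of test accuracy to test inaccuracy, computed from a 2×2 confusion matrix; the function name and counts are invented.

```python
# Efficiency index (EI) sketch: ratio of test accuracy to test inaccuracy,
# per the definition quoted in the abstract. Counts are invented.

def efficiency_index(tp, fp, fn, tn):
    """EI = accuracy / inaccuracy from a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total      # proportion correctly classified
    inaccuracy = (fp + fn) / total    # proportion misclassified
    return accuracy / inaccuracy

# Hypothetical screening results: 80 true positives, 30 false positives,
# 20 false negatives, 70 true negatives.
print(efficiency_index(tp=80, fp=30, fn=20, tn=70))  # 3.0
```

With these invented counts, 150 of 200 classifications are correct and 50 are wrong, so EI = 0.75/0.25 = 3: correct classifications outnumber misclassifications three to one.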


2021 ◽  
Vol 15 (4) ◽  
pp. 458-463
Author(s):  
Andrew J. Larner

ABSTRACT Cognitive screening instruments (CSIs) for dementia and mild cognitive impairment are usually characterized in terms of measures of discrimination such as sensitivity, specificity, and likelihood ratios, but these CSIs also have limitations. Objective: The aim of this study was to calculate various measures of test limitation for commonly used CSIs, namely the misclassification rate (MR), the net harm/net benefit ratio (H/B), and the likelihood to be diagnosed or misdiagnosed (LDM). Methods: Data from several previously reported pragmatic test accuracy studies of CSIs (Mini-Mental State Examination, Montreal Cognitive Assessment, Mini-Addenbrooke’s Cognitive Examination, Six-item Cognitive Impairment Test, informant Ascertain Dementia 8, Test Your Memory test, and Free-Cog) undertaken in a single clinic were reanalyzed to calculate and compare MR, H/B, and LDM for each test. Results: Some CSIs with very high sensitivity but low specificity for dementia fared poorly on measures of limitation, with high MRs, low H/B, and low LDM; some had likelihoods favoring misdiagnosis over diagnosis. Tests with a better balance of sensitivity and specificity fared better on measures of limitation. Conclusions: When deciding which CSI to administer, measures of test limitation as well as measures of test discrimination should be considered. Identification of CSIs with high MR, low H/B, and low LDM may have implications for their use in clinical practice.
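For readers who want to experiment with these limitation measures, a hedged sketch follows. The misclassification rate is standard; NND and NNM use the widely published "numbers needed" formulas (NND = 1/Youden index, NNM = 1/(1 − accuracy)); expressing LDM as the ratio NNM/NND is an assumption made for this illustration, not a definition taken from the paper, and the counts are invented.

```python
# Limitation-metric sketch for a 2x2 screening result. MR is standard;
# NND and NNM are the common "numbers needed" formulas; LDM as NNM/NND
# is an assumption for illustration only.

def limitation_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    accuracy = (tp + tn) / total
    return {
        "MR": (fp + fn) / total,          # misclassification rate
        "NND": 1 / (sens + spec - 1),     # number needed to diagnose
        "NNM": 1 / (1 - accuracy),        # number needed to misdiagnose
        "LDM": (sens + spec - 1) / (1 - accuracy),  # assumed NNM/NND ratio
    }

# Hypothetical counts for a very sensitive but unspecific screen
# (sensitivity 0.95, specificity 0.40).
print(limitation_metrics(tp=95, fp=60, fn=5, tn=40))
```

Note how the high misclassification rate (0.325 here) exposes the cost of the low specificity despite the near-perfect sensitivity.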


2018 ◽  
Vol 12 (4) ◽  
pp. 394-401 ◽  
Author(s):  
Cláudia M. Memória ◽  
Henrique C.S. Muela ◽  
Natália C. Moraes ◽  
Valéria A. Costa-Hong ◽  
Michel F. Machado ◽  
...  

ABSTRACT Attention is a complex, foundational function involved in several cognitive processes and of great interest to neuropsychology. The Test of Variables of Attention (T.O.V.A.) is a computerized continuous performance test that evaluates attention components such as response time to a stimulus and errors due to inattention and impulsivity. Objective: 1) To evaluate the applicability of the T.O.V.A. in Brazilian adults; 2) to analyze differences in performance between genders, age ranges, and levels of education; 3) to examine the association between T.O.V.A. variables and other attention and cognitive screening tests. Methods: The T.O.V.A. was applied to 63 healthy adults (24 to 78 years of age) who also underwent the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), Digit Span and Digit Symbol (Wechsler Adult Intelligence Scale, WAIS-III), and the Trail Making Test. Results: The T.O.V.A. was little influenced by age or education but was influenced by gender. Correlations between some T.O.V.A. variables and the Digit Symbol and Trail Making tests were weak (r-values between 0.2 and 0.4) but significant (p<0.05). There was no correlation with the Digit Span test. Conclusion: The T.O.V.A. showed good applicability and proved adequate for evaluating attentional processes in adults.


2020 ◽  
Vol 17 (5) ◽  
pp. 460-471
Author(s):  
Emma Elliott ◽  
Claire Green ◽  
David J. Llewellyn ◽  
Terence J. Quinn

Background: Telephone-based cognitive assessments may be preferable to in-person testing in terms of test burden and economic and opportunity cost. Objective: We sought to determine the accuracy of telephone-based screening for the identification of dementia or Mild Cognitive Impairment (MCI). Methods: Five multidisciplinary databases were searched. Two researchers independently screened articles and extracted data. Eligible studies compared any multi-domain telephone-based assessment of cognition to a face-to-face diagnostic evaluation. Where data allowed, we pooled test accuracy metrics using the bivariate approach. Results: From 11,732 titles, 34 papers were included, describing 15 different tests. There was variation in test scoring and in the quality of included studies. Pooled analyses of accuracy for dementia: Telephone Interview for Cognitive Status (TICS) (<31/41) sensitivity: 0.92, specificity: 0.66 (6 studies); TICS-modified (<28/50) sensitivity: 0.91, specificity: 0.91 (3 studies). For MCI: TICS-modified (<33/50) sensitivity: 0.82, specificity: 0.87 (3 studies); Telephone-Montreal Cognitive Assessment (<18/22) sensitivity: 0.98, specificity: 0.69 (2 studies). Conclusion: There is limited diagnostic accuracy evidence for the many telephone cognitive screens that exist. The TICS and TICS-m have the greatest supporting evidence; their test accuracy profiles make them suitable as initial cognitive screens where face-to-face assessment is not possible.
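The pooled figures above ultimately derive from 2×2 counts at a stated cutoff. As a minimal sketch of that step (the scores, diagnoses, and function name are invented; a score below the cutoff counts as screen-positive, as in the TICS <31/41 convention):

```python
# Sensitivity and specificity at a screening cutoff, where scores below
# the cutoff are screen-positive. Data are invented for illustration.

def sens_spec(scores, has_dementia, cutoff):
    tp = sum(s < cutoff and d for s, d in zip(scores, has_dementia))
    fn = sum(s >= cutoff and d for s, d in zip(scores, has_dementia))
    fp = sum(s < cutoff and not d for s, d in zip(scores, has_dementia))
    tn = sum(s >= cutoff and not d for s, d in zip(scores, has_dementia))
    return tp / (tp + fn), tn / (tn + fp)

scores = [25, 28, 33, 33, 29, 38, 24, 36, 30, 40]  # hypothetical test scores
dx =     [1,  1,  1,  0,  0,  0,  1,  0,  1,  0]   # 1 = dementia on reference standard
sens, spec = sens_spec(scores, dx, cutoff=31)
print(sens, spec)  # 0.8 0.8
```

Here one case scores above the cutoff (a false negative) and one control scores below it (a false positive), giving sensitivity and specificity of 0.8 each.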


2015 ◽  
Vol 39 (3-4) ◽  
pp. 167-175 ◽  
Author(s):  
Andrew J. Larner

Background/Aims: The optimal method of establishing test cutoffs or cutpoints for cognitive screening instruments (CSIs) is uncertain. Of the available methods, two base the cutoff on either maximal test accuracy or the maximal Youden index. The aim of this study was to compare the effects of using these alternative methods of establishing cutoffs. Methods: Datasets from three pragmatic diagnostic accuracy studies which examined the Mini-Mental State Examination (MMSE), the Addenbrooke's Cognitive Examination-Revised (ACE-R), the Montreal Cognitive Assessment (MoCA), and the Test Your Memory (TYM) test were analysed to calculate test sensitivity and specificity using cutoffs based on either maximal test accuracy or the maximal Youden index. Results: For the ACE-R, MoCA, and TYM, optimal cutoffs for dementia diagnosis differed from those in index studies when defined using either the maximal accuracy or the maximal Youden index method. Optimal cutoffs were higher for the MMSE, MoCA, and TYM when using the maximal Youden index method and consequently more sensitive. Conclusion: Revision of the cutoffs for CSIs established in index studies may be required to optimise performance in pragmatic diagnostic test accuracy studies, which more closely resemble clinical practice.
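The two cutoff-selection rules compared in this study can be sketched as a scan over candidate cutoffs, keeping whichever maximises overall accuracy and whichever maximises Youden's J (sensitivity + specificity − 1). All data and names below are invented for illustration:

```python
# Compare cutoff selection by maximal accuracy vs maximal Youden index.
# A score below the cutoff counts as screen-positive. Invented data.

def pick_cutoffs(scores, dx):
    best_acc_cut, best_acc = None, -1.0
    best_j_cut, best_j = None, -1.0
    for cutoff in sorted(set(scores)):
        tp = sum(s < cutoff and d for s, d in zip(scores, dx))
        fn = sum(s >= cutoff and d for s, d in zip(scores, dx))
        fp = sum(s < cutoff and not d for s, d in zip(scores, dx))
        tn = sum(s >= cutoff and not d for s, d in zip(scores, dx))
        acc = (tp + tn) / len(scores)
        j = tp / (tp + fn) + tn / (tn + fp) - 1   # Youden's J
        if acc > best_acc:
            best_acc_cut, best_acc = cutoff, acc
        if j > best_j:
            best_j_cut, best_j = cutoff, j
    return best_acc_cut, best_j_cut

scores = [22, 27, 26, 28, 30, 32, 34, 36]  # hypothetical CSI scores
dx =     [1,  1,  0,  0,  0,  0,  0,  0]   # 1 = dementia
print(pick_cutoffs(scores, dx))  # (26, 28)
```

With this toy sample the Youden-based cutoff (28) is higher, and hence more sensitive, than the accuracy-based cutoff (26), the same direction of difference the study reports for the MMSE, MoCA, and TYM.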


Stroke ◽  
2014 ◽  
Vol 45 (10) ◽  
pp. 3008-3018 ◽  
Author(s):  
Rosalind Lees ◽  
Johann Selvarajah ◽  
Candida Fenton ◽  
Sarah T. Pendlebury ◽  
Peter Langhorne ◽  
...  

2020 ◽  
Author(s):  
Andrew J Larner

Abstract Cognitive screening instruments (CSIs) for dementia and mild cognitive impairment are usually characterised in terms of measures of discrimination such as sensitivity, specificity, and likelihood ratios. However, CSIs also have limitations. Several metrics exist which may be used to denote test limitations but they are seldom examined. Data from several pragmatic test accuracy studies of CSIs were interrogated to calculate various measures of limitation, namely: misclassification rate; net harm to net benefit ratio; and the likelihood to be diagnosed or misdiagnosed. Intra- and inter-test performance for measures of discrimination and limitation were compared. The study found that some tests with very high sensitivity but low specificity for dementia fared poorly on measures of limitation, with high misclassification rates, low net harm to net benefit ratios, and low likelihoods to be diagnosed or misdiagnosed; some had likelihoods favouring misdiagnosis over diagnosis. Tests with a better balance of sensitivity and specificity fared better on measures of limitation. When choosing which CSIs to administer, measures of test limitation should be considered as well as measures of test discrimination. Although high test sensitivity may be desirable to avoid false negatives, false positives also have a cost. Identification of tests having high misclassification rate, low net harm to net benefit ratio, and low likelihood to be diagnosed or misdiagnosed, may have implications for their use in clinical practice.

