The Dépistage Cognitif de Québec: A New Clinician’s Tool for Early Recognition of Atypical Dementia

2018 ◽  
Vol 46 (5-6) ◽  
pp. 310-321 ◽  
Author(s):  
Leila Sellami ◽  
Synthia Meilleur-Durand ◽  
Anne-Marie Chouinard ◽  
David Bergeron ◽  
Louis Verret ◽  
...  

Introduction: Early recognition of atypical dementia remains challenging, partly because of a lack of cognitive screening instruments tailored precisely for this purpose. Methods: We assessed the validity and reliability of the Dépistage Cognitif de Québec (DCQ; www.dcqtest.org), a newly developed cognitive screening test, for detecting atypical dementia using a multicenter cohort of 628 participants. Sensitivity and specificity were compared to the Montreal Cognitive Assessment (MoCA). A predictive diagnostic algorithm for atypical dementia was determined using classification tree analysis. Results: The DCQ showed excellent psychometric properties. It was significantly more accurate than the MoCA at detecting atypical dementia. All correlations between DCQ indexes and standard neuropsychological measures were significant. A statistical model distinguished typical from atypical dementia with a predictive power of 79%. Discussion: The DCQ detects atypical dementia better than standard cognitive screening tests do. Expanding the clinician’s tool kit with the DCQ could reduce missed/delayed identification of atypical dementia and accelerate therapeutic intervention.


2017 ◽  
Vol 29 (11) ◽  
pp. 1771-1784 ◽  
Author(s):  
Annie Pye ◽  
Anna Pavlina Charalambous ◽  
Iracema Leroi ◽  
Chrysoulla Thodi ◽  
Piers Dawes

ABSTRACT Background: Cognitive screening tests frequently rely on items being correctly heard or seen. We aimed to identify, describe, and evaluate the adaptation, validity, and availability of cognitive screening and assessment tools for dementia which have been developed or adapted for adults with acquired hearing and/or vision impairment. Method: Electronic databases were searched using subject terms “hearing disorders” OR “vision disorders” AND “cognitive assessment,” supplemented by exploring reference lists of included papers and via consultation with health professionals to identify additional literature. Results: 1,551 papers were identified, of which 13 met inclusion criteria. Four papers related to tests adapted for hearing impairment; 11 papers related to tests adapted for vision impairment. Frequently adapted tests were the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). Adaptations for hearing impairment involved deleting hearing-dependent items or creating written versions of them. Adaptations for vision impairment involved deleting vision-dependent items or creating spoken/tactile versions of visual tasks. No study reported validity of the test in relation to detection of dementia in people with hearing/vision impairment. Item deletion had a negative impact on the psychometric properties of the test. Conclusions: While attempts have been made to adapt cognitive tests for people with acquired hearing and/or vision impairment, the primary limitation of these adaptations is that their validity in accurately detecting dementia among those with acquired hearing or vision impairment is yet to be established. It is likely that the sensitivity and specificity of the adapted versions are poorer than the original, especially if the adaptation involved item deletion. One solution would involve item substitution in an alternative sensory modality followed by re-validation of the adapted test.


Diagnostics ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. 95 ◽  
Author(s):  
Emma Elliott ◽  
Bogna A. Drozdowska ◽  
Martin Taylor-Rowan ◽  
Robert C. Shaw ◽  
Gillian Cuthbertson ◽  
...  

Full completion of cognitive screening tests can be problematic in the context of a stroke. Our aim was to examine the completion of various brief cognitive screens and explore reasons for untestability. Data were collected from consecutive stroke admissions (May 2016–August 2018). The cognitive assessment was attempted during the first week of admission. Patients were classified as partially untestable (≥1 test item incomplete) or fully untestable (assessment not attempted and/or no questions answered). We assessed univariate and multivariate associations of test completion with: age (years), sex, stroke severity (National Institutes of Health Stroke Scale (NIHSS)), stroke classification, pre-morbid disability (modified Rankin Scale (mRS)), previous stroke and previous dementia diagnosis. Of 703 patients admitted (mean age: 69.4), 119 (17%) were classified as fully untestable and 58 (8%) were partially untestable. The 4A-test had 100% completion and the clock-draw task had the lowest completion (533/703, 76%). Fully untestable status was independently associated with a higher NIHSS score (odds ratio (OR): 1.18, 95% CI: 1.11–1.26), higher pre-morbid mRS (OR: 1.28, 95% CI: 1.02–1.60) and pre-stroke dementia (OR: 3.35, 95% CI: 1.53–7.32). Overall, a quarter of patients were classified as untestable on the cognitive assessment, with test incompletion related to both stroke and non-stroke factors. Clinicians and researchers would benefit from guidance on how to make the best use of incomplete test data.
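The odds ratios above come from multivariable models, but the arithmetic behind a crude OR and its Wald confidence interval can be shown from a simple 2×2 table. A minimal sketch in Python; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & untestable,   b = exposed & testable,
    c = unexposed & untestable, d = unexposed & testable."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the study's data)
or_, lo, hi = odds_ratio_ci(20, 30, 99, 554)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Adjusted ORs like those reported in the abstract additionally control for the other covariates via logistic regression, so they generally differ from the crude value.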


2020 ◽  
pp. 089198872091551
Author(s):  
Shanna L. Burke ◽  
Adrienne Grudzien ◽  
Aaron Burgess ◽  
Miriam J. Rodriguez ◽  
Yesenia Rivera ◽  
...  

Increasing rates of dementia spectrum disorders among Spanish-speaking geriatric populations necessitate the development of culturally appropriate cognitive screening tests that can identify neurodegenerative disorders in their earliest stages, when emerging disease-modifying treatments are most likely to be effective. This scoping review identified 26 brief (<20-minute) Spanish-language cognitive screening tools by searching academic databases using a combination of search terms. Results suggest that the Mini-Mental State Examination and Montreal Cognitive Assessment are less valid than other screeners. Instruments such as the 7-Minute Screen and Mini-Cog evidence higher classification rates for dementia, while the Phototest detected mild cognitive impairment at higher rates and more consistently than other screeners. Different sensitivity and specificity outcomes and cut-offs were observed when the same cognitive screener was evaluated in different countries. The results indicate that it is imperative to increase nation-specific validation and normative data for these instruments to best serve diverse populations.


2019 ◽  
Vol 35 (2) ◽  
pp. 176-187
Author(s):  
Ashita S Gurnani ◽  
Shayne S-H Lin ◽  
Brandon E Gavett

Abstract Objective The Colorado Cognitive Assessment (CoCA) was designed to improve upon existing screening tests in a number of ways, including enhanced psychometric properties and minimization of bias across diverse groups. This paper describes the initial validation study of the CoCA, which seeks to describe the test; demonstrate its construct validity and its measurement invariance across age, education, sex, and mood symptoms; and compare it to the Montreal Cognitive Assessment (MoCA). Method Participants included 151 older adults (MAge = 71.21, SD = 8.05) who were administered the CoCA, MoCA, Judgment test from the Neuropsychological Assessment Battery (NAB), 15-item version of the Geriatric Depression Scale (GDS-15), and 10-item version of the Geriatric Anxiety Scale (GAS-10). Results A single-factor confirmatory factor analysis model of the CoCA fit the data well, CFI = 0.955; RMSEA = 0.033. The CoCA factor score reliability was .84, compared to .74 for the MoCA. The CoCA had stronger disattenuated correlations with the MoCA (r = .79) and NAB Judgment (r = .47) and weaker correlations with the GDS-15 (r = −.36) and GAS-10 (r = −.15), supporting its construct validity. Finally, when analyzed using multiple-indicators, multiple-causes (MIMIC) modeling, the CoCA showed no evidence of measurement noninvariance, unlike the MoCA. Conclusions These results provide initial evidence to suggest that the CoCA is a valid cognitive screening tool that offers numerous advantages over the MoCA, including superior psychometric properties and measurement invariance. Additional validation and normative studies are warranted.
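The disattenuated correlations reported above apply Spearman's classic correction for attenuation, r_true = r_observed / sqrt(rel_x × rel_y), using each test's reliability. A minimal sketch; the observed correlation of .62 is a hypothetical input chosen only to illustrate the formula, not a value from the paper:

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between true scores by dividing the observed correlation by the
    geometric mean of the two measures' reliabilities."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Reliabilities reported in the abstract: CoCA .84, MoCA .74.
# An observed correlation of .62 (hypothetical) disattenuates to ~.79.
print(round(disattenuate(0.62, 0.84, 0.74), 2))
```

Because both reliabilities are below 1, the corrected correlation is always at least as large as the observed one.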


2015 ◽  
Vol 27 (6) ◽  
pp. 981-989 ◽  
Author(s):  
G. Cheung ◽  
A. Clugston ◽  
M. Croucher ◽  
D. Malone ◽  
E. Mau ◽  
...  

ABSTRACT Background: With the ubiquitous Mini-Mental State Exam now under copyright, attention is turning to alternative cognitive screening tests. The aim of the present study was to investigate three common cognitive screening tools: the Montreal Cognitive Assessment (MoCA), the Rowland Universal Dementia Assessment Scale (RUDAS), and the recently revised Addenbrooke's Cognitive Examination Version III (ACE-III). Methods: The ACE-III, MoCA, and RUDAS were administered in random order to a sample of 37 participants with diagnosed mild dementia and 47 comparison participants without dementia. The diagnostic accuracy of the three tests was assessed. Results: All the tests showed good overall accuracy as assessed by area under the ROC curve: 0.89 (95% CI = 0.80–0.95) for the ACE-III, 0.84 (0.75–0.91) for the MoCA, and 0.86 (0.77–0.93) for the RUDAS. The three tests were strongly correlated: r(84) = 0.85 (0.78–0.90) between the ACE-III and MoCA, 0.70 (0.57–0.80) between the ACE-III and RUDAS, and 0.65 (0.50–0.76) between the MoCA and RUDAS. The data-derived optimal cut-off points were lower than the published recommendations for the ACE-III (optimal cut-point ≤76, sensitivity = 81.1%, specificity = 85.1%) and the MoCA (≤20, sensitivity = 78.4%, specificity = 83.0%), but similar for the RUDAS (≤22, sensitivity = 78.4%, specificity = 85.1%). Conclusions: All three tools discriminated well overall between cases of mild dementia and controls. To inform interpretation of these tests in clinical settings, it would be useful for future research to develop more inclusive and potentially age-stratified local norms.
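Data-derived optimal cut-points like those above are typically chosen by maximizing Youden's J (sensitivity + specificity − 1) across candidate thresholds. A minimal sketch; the scores below are hypothetical ACE-III-style values, not the study's data:

```python
def best_cutpoint(cases, controls):
    """Return (cut-off, sensitivity, specificity, J) maximizing Youden's
    J = sensitivity + specificity - 1, scoring <= cut-off as positive."""
    best = None
    for cut in sorted(set(cases + controls)):
        sens = sum(s <= cut for s in cases) / len(cases)      # true positives
        spec = sum(s > cut for s in controls) / len(controls)  # true negatives
        j = sens + spec - 1
        if best is None or j > best[3]:
            best = (cut, sens, spec, j)
    return best

# Hypothetical test scores for illustration (not the study's data)
cases = [60, 68, 72, 74, 75, 76, 78, 80]
controls = [77, 80, 82, 85, 88, 90, 92, 95]
cut, sens, spec, j = best_cutpoint(cases, controls)
print(cut, round(sens, 2), round(spec, 2))
```

Youden's J weights sensitivity and specificity equally; clinical screening sometimes favours sensitivity instead, which would shift the chosen cut-off upward.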


2021 ◽  
Vol 10 (18) ◽  
pp. 4269
Author(s):  
Laura C. Jones ◽  
Catherine Dion ◽  
Philip A. Efron ◽  
Catherine C. Price

Sepsis disproportionately affects people over the age of 65, and with an exponentially increasing older population, sepsis poses additional risks for cognitive decline. This review summarizes the published literature with respect to (1) authorship qualifications; (2) the cognitive domains most often assessed; (3) timelines for cognitive assessment; (4) the control group and analysis approach; and (5) sociodemographic reporting. Using key terms, a PubMed database review from January 2000 to January 2021 identified 3,050 articles; 234 qualified for full-text review and 18 were ultimately retained for summary. More than half (61%) included an author with expertise in cognitive assessment. Seven (39%) relied on cognitive screening tools for assessment, with the remainder using a combination of standard neuropsychological measures. The cognitive domains typically assessed were declarative memory, attention and working memory, processing speed, and executive function. Analytically, 35% reported on education, and 17% included baseline (pre-sepsis) data. Eight (44%) included a non-sepsis peer group. No study considered sex or race/diversity in the statistical model, and only five studies reported on race/ethnicity, with Caucasians making up the majority (74%). Of the articles using neuropsychological measures, researchers report acute cognitive impairment with improvement over time for sepsis survivors. The findings suggest avenues for future study designs.


2021 ◽  
Author(s):  
Andrew J Larner

Diagnostic and screening tests carry risks such as misdiagnosis, as well as the potential benefits of correct diagnosis. Effective communication of this risk to both clinicians and patients can be problematic. The purpose of this study was to develop a metric called the efficiency index (EI), defined as the ratio of test accuracy to inaccuracy, or the number needed to misdiagnose divided by the number needed to diagnose. This was compared with a previously described metric, the likelihood to be diagnosed or misdiagnosed (LDM), also based on number-needed metrics. Datasets from prospective pragmatic test accuracy studies examining four brief cognitive screening instruments (Mini-Mental State Examination; Montreal Cognitive Assessment; Mini-Addenbrooke's Cognitive Examination, MACE; and Free-Cog) were analysed to calculate values for EI and LDM, and to examine their variation with test cut-off for the MACE. EI values were also calculated using a modification of McGee's heuristic for the simplification of likelihood ratios to estimate the percentage change in diagnostic probability. The findings indicate that EI is easier to calculate than LDM and, unlike LDM, may be classified either qualitatively or quantitatively in a manner similar to likelihood ratios. EI shows the utility or inutility of diagnostic and screening tests, illustrating the inevitable trade-off between diagnosis and misdiagnosis. It may be a useful metric for communicating risk in a way that is easily intelligible to both clinicians and patients.
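Taking the abstract's definition at face value (EI = the ratio of test accuracy to inaccuracy, i.e. correct to incorrect classifications), EI is straightforward to compute from a confusion matrix. A minimal sketch; the counts below are hypothetical, not drawn from the studies analysed:

```python
def efficiency_index(tp, fp, fn, tn):
    """Efficiency index per the abstract's definition: the ratio of
    test accuracy to inaccuracy, equivalently (TP + TN) / (FP + FN)."""
    total = tp + fp + fn + tn
    acc = (tp + tn) / total        # proportion correctly classified
    return acc / (1 - acc)         # correct : incorrect

# Hypothetical confusion-matrix counts for illustration
print(round(efficiency_index(tp=40, fp=15, fn=10, tn=135), 2))
```

An EI of 1 would mean the test misclassifies as often as it classifies correctly; larger values indicate proportionally fewer misdiagnoses.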

