Clinical judgement of GPs for the diagnosis of dementia: a diagnostic test accuracy study

BJGP Open ◽  
2021 ◽  
pp. BJGPO.2021.0058
Author(s):  
Samuel Thomas Creavin ◽  
Judy Haworth ◽  
Mark Fish ◽  
Sarah Cullum ◽  
Anthony Bayer ◽  
...  

Background: GPs often report using clinical judgement to diagnose dementia. Aim: To investigate the accuracy of GPs' clinical judgement for the diagnosis of dementia. Design & setting: Diagnostic test accuracy study, recruiting from 21 practices around Bristol. Method: The clinical judgement of the treating GP (index test) was based on the information immediately available at their initial consultation with a person aged over 70 years who had cognitive symptoms. The reference standard was an assessment by a specialist clinician, based on a standardised clinical examination and made according to ICD-10 criteria for dementia. Results: 240 people were recruited, with a median age of 80 years (IQR 75 to 84 years), of whom 126 (53%) were men and 132 (55%) had dementia. The median duration of symptoms was 24 months (IQR 12 to 36 months) and the median ACE-III score was 75 (IQR 65 to 87). GP clinical judgement had sensitivity 56% (95% CI 47% to 65%) and specificity 89% (95% CI 81% to 94%). The positive likelihood ratio was higher in people aged 70–79 years (6.5, 95% CI 2.9 to 15) than in people aged ≥80 years (3.6, 95% CI 1.7 to 7.6), and in women (10.4, 95% CI 3.4 to 31.7) than in men (3.2, 95% CI 1.7 to 6.2), whereas the negative likelihood ratio was similar in all groups. Conclusion: A GP clinical judgement of dementia is specific, but confirmatory testing is needed to exclude dementia in symptomatic people whom GPs judge as not having dementia.
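The headline figures above can be turned into overall likelihood ratios with the standard formulas LR+ = sensitivity/(1 − specificity) and LR− = (1 − sensitivity)/specificity. The short Python sketch below is illustrative only, not taken from the paper, and uses the reported point estimates while ignoring the confidence intervals.

    # Illustrative sketch: likelihood ratios from the reported point estimates.
    def likelihood_ratios(sensitivity, specificity):
        lr_pos = sensitivity / (1 - specificity)   # LR+ = sens / (1 - spec)
        lr_neg = (1 - sensitivity) / specificity   # LR- = (1 - sens) / spec
        return lr_pos, lr_neg

    # Reported point estimates: sensitivity 56%, specificity 89%.
    lr_pos, lr_neg = likelihood_ratios(0.56, 0.89)
    print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")  # roughly LR+ 5.1, LR- 0.49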

2020 ◽  
Author(s):  
Sam Creavin ◽  
Judy Haworth ◽  
Mark Fish ◽  
Sarah Cullum ◽  
Antony Bayer ◽  
...  

Background: The accuracy of General Practitioners' (GPs') clinical judgement for diagnosing dementia is uncertain. Aim: To investigate the accuracy of GPs' clinical judgement for the diagnosis of dementia. Design and Setting: Diagnostic test accuracy study, recruiting from 21 practices around Bristol. Method: The clinical judgement of the treating GP (index test) was based on the information immediately available at their initial consultation with a person aged over 70 years who had symptoms of possible dementia. The reference standard was an assessment by a specialist clinician, based on a standardised clinical examination and made according to ICD-10 criteria for dementia. Results: 240 people were recruited, with a median age of 80 years (IQR 75 to 84 years), of whom 126 (53%) were men and 132 (55%) had dementia. The median duration of symptoms was 24 months (IQR 12 to 36 months) and the median ACE-III score was 75 (IQR 65 to 87). GP clinical judgement had sensitivity 56% (95% CI 47% to 65%) and specificity 89% (95% CI 81% to 94%). The positive likelihood ratio was higher in people aged 70–79 years (6.5, 95% CI 2.9 to 15) than in people aged ≥80 years (3.6, 95% CI 1.7 to 7.6), and in women (10.4, 95% CI 3.4 to 31.7) than in men (3.2, 95% CI 1.7 to 6.2), whereas the negative likelihood ratio was similar in all groups. Conclusion: GP judgement is more likely to under-identify than to over-identify dementia.


2018 ◽  
Vol 45 (5-6) ◽  
pp. 300-307 ◽  
Author(s):  
John C. Williamson ◽  
Andrew J. Larner

Background/Aims: The Mini-Addenbrooke’s Cognitive Examination (MACE) is a relatively new short cognitive screening instrument for the detection of patients with dementia and mild cognitive impairment (MCI). Few studies of the MACE have been reported hitherto. The aim of this study was to undertake a pragmatic diagnostic test accuracy study of MACE in a large cohort of patients seen in a dedicated cognitive disorders clinic. Methods: MACE was administered to consecutive patients referred to a neurology-led Cognitive Function Clinic over the course of 3 years to assess its performance for the diagnosis of dementia and MCI using various test metrics. Results: In a cohort of 599 patients, the prevalence of dementia and MCI by criterion diagnosis was 0.17 and 0.29, respectively. MACE had a high sensitivity (> 0.9) and negative predictive values (> 0.8) with large effect sizes (Cohen’s d > 1) for the diagnosis of both dementia and MCI but a low specificity (< 0.5) and positive predictive values (≤0.5). Conclusion: MACE is an acceptable test for the assessment of cognitive complaints in a secondary care setting with good metrics for identifying cases of both dementia and MCI.
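Because predictive values depend on prevalence as well as on sensitivity and specificity, a test such as MACE can combine high sensitivity and NPV with low specificity and PPV at the prevalences reported here. The Python sketch below is a minimal illustration using assumed inputs chosen to sit within the reported ranges (sensitivity > 0.9, specificity < 0.5), not the study's actual 2x2 data.

    # Minimal illustration: predictive values from sensitivity, specificity and prevalence.
    def predictive_values(sensitivity, specificity, prevalence):
        tp = sensitivity * prevalence              # true positives (per unit cohort)
        fp = (1 - specificity) * (1 - prevalence)  # false positives
        fn = (1 - sensitivity) * prevalence        # false negatives
        tn = specificity * (1 - prevalence)        # true negatives
        return tp / (tp + fp), tn / (tn + fn)      # (PPV, NPV)

    # Assumed example values: sensitivity 0.90, specificity 0.45, dementia prevalence 0.17.
    ppv, npv = predictive_values(0.90, 0.45, 0.17)
    print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")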


Diagnosis ◽  
2018 ◽  
Vol 5 (4) ◽  
pp. 205-214 ◽  
Author(s):  
Matthew L. Rubinstein ◽  
Colleen S. Kraft ◽  
J. Scott Parrott

Background: Diagnostic test accuracy (DTA) systematic reviews (SRs) characterize a test's potential for diagnostic quality and safety. However, interpreting DTA measures in the context of SRs is challenging. Further, some evidence grading methods (e.g. the Centers for Disease Control and Prevention, Division of Laboratory Systems Laboratory Medicine Best Practices method) require determination of qualitative effect size ratings as a contributor to practice recommendations. This paper describes a recently developed effect size rating approach for assessing a DTA evidence base. Methods: A likelihood ratio scatter matrix plots positive and negative likelihood ratio pairings for DTA studies. Pairings are graphed as single point estimates with confidence intervals, positioned in one of four quadrants derived from established thresholds for test clinical validity. These quadrants support defensible judgments on "substantial", "moderate", or "minimal" effect size ratings for each plotted study. The approach is flexible in relation to a priori determinations of the relative clinical importance of false positive and false negative test results. Results and conclusions: This qualitative effect size rating approach was operationalized in a recent SR that assessed the effectiveness of test practices for the diagnosis of Clostridium difficile. The relevance of this approach to other methods of grading evidence, and to efforts to measure diagnostic quality and safety, is described. Limitations of the approach arise from understanding that a diagnostic test is not an isolated element in the diagnostic process, but provides information in clinical context towards diagnostic quality and safety.
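A minimal sketch of the quadrant idea follows; the LR+ and LR− thresholds used here are placeholders for illustration, since the review's actual thresholds for test clinical validity are not given in the abstract.

    # Sketch of classifying a study's (LR+, LR-) pair into one of four quadrants.
    # The thresholds are illustrative assumptions, not the review's.
    def lr_quadrant(lr_pos, lr_neg, pos_threshold=10.0, neg_threshold=0.1):
        rules_in = lr_pos >= pos_threshold    # strong evidence for ruling in disease
        rules_out = lr_neg <= neg_threshold   # strong evidence for ruling out disease
        if rules_in and rules_out:
            return "informative for both ruling in and ruling out"
        if rules_in:
            return "informative mainly for ruling in"
        if rules_out:
            return "informative mainly for ruling out"
        return "limited informativeness"

    print(lr_quadrant(lr_pos=15.0, lr_neg=0.2))  # -> informative mainly for ruling in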


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Pakpoom Subsoontorn ◽  
Manupat Lohitnavy ◽  
Chuenjid Kongkaew

Many recent studies have reported coronavirus point-of-care tests (POCTs) based on isothermal amplification. However, the performance of these tests has not been systematically evaluated. The Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy was used as a guideline for conducting this systematic review. We searched peer-reviewed and preprint articles in PubMed, BioRxiv and MedRxiv up to 28 September 2020 to identify studies that provide data to calculate sensitivity, specificity and diagnostic odds ratio (DOR). Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) was applied for assessing the quality of included studies, and Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA) was followed for reporting. We included 81 studies from 65 research articles on POCTs for SARS, MERS and COVID-19. Most studies had a high risk of patient selection and index test bias but low risk in other domains. Diagnostic specificities were high (>0.95) for included studies, while sensitivities varied depending on the type of assay and sample used. Most studies (n = 51) used reverse transcription loop-mediated isothermal amplification (RT-LAMP) to diagnose coronaviruses. RT-LAMP of RNA purified from COVID-19 patient samples had a pooled sensitivity of 0.94 (95% CI: 0.90–0.96). RT-LAMP of crude samples had a substantially lower sensitivity of 0.78 (95% CI: 0.65–0.87). Abbott ID Now performance was similar to RT-LAMP of crude samples. Diagnostic performances of CRISPR-based assays and of RT-LAMP on purified RNA were similar. Other diagnostic platforms, including reverse transcription recombinase-assisted amplification (RT-RAA) and SAMBA-II, also offered high sensitivity (>0.95). Future studies should focus on the use of unbiased patient cohorts, double-blinded index tests, and detection assays that do not require RNA extraction.
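The diagnostic odds ratio (DOR) pooled in reviews like this one can be written directly in terms of sensitivity and specificity. The sketch below is illustrative only; the specificity value is an assumption chosen within the ">0.95" range reported above.

    # Illustrative sketch: diagnostic odds ratio from sensitivity and specificity.
    def diagnostic_odds_ratio(sensitivity, specificity):
        lr_pos = sensitivity / (1 - specificity)
        lr_neg = (1 - sensitivity) / specificity
        return lr_pos / lr_neg  # equals (sens/(1-sens)) * (spec/(1-spec))

    # Pooled sensitivity 0.94 for RT-LAMP on purified RNA; specificity assumed at 0.98.
    print(f"DOR ~ {diagnostic_odds_ratio(0.94, 0.98):.0f}")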


BMJ Open ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. e034348 ◽  
Author(s):  
Marie Barais ◽  
Emilie Fossard ◽  
Antoine Dany ◽  
Tristan Montier ◽  
Erik Stolper ◽  
...  

Objectives: Dyspnoea and chest pain are symptoms shared by multiple pathologies, ranging from the benign to life-threatening diseases. A Gut Feelings Questionnaire (GFQ) has been validated to measure the general practitioner's (GP's) sense of alarm or sense of reassurance. The aim of the study was to estimate the diagnostic test accuracy of GPs' sense of alarm when confronted with dyspnoea and chest pain. Design and settings: Prospective observational study in general practice. Participants: Patients aged between 18 and 80 years, consulting their GP for dyspnoea and/or chest pain, were considered for enrolment. The GPs had to complete the GFQ immediately after the consultation. Primary outcome measures: Life-threatening and non-life-threatening diseases had previously been defined according to the pathologies or symptoms in the International Classification of Primary Care (ICPC)-2 classification. The index test was the sense of alarm and the reference standard was the final diagnosis at 4 weeks. Results: 25 GPs filled in 235 GFQ questionnaires. The positive likelihood ratio for the sense of alarm was 2.12 (95% CI 1.49 to 2.82); the negative likelihood ratio was 0.55 (95% CI 0.37 to 0.77). Conclusions: Where the physician experienced a sense of alarm when a patient consulted him/her for dyspnoea and/or chest pain, the post-test odds that this patient had, in fact, a life-threatening disease were about twice as high as the pre-test odds. Trial registration number: NCT02932982.
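The conclusion's "post-test odds about twice the pre-test odds" is Bayes' rule in odds form: post-test odds = pre-test odds × LR+. The sketch below assumes a pre-test probability of 0.20 purely for illustration; the study does not report one.

    # Illustrative sketch: updating a pre-test probability with the reported LR+ of 2.12.
    def post_test_probability(pre_test_probability, likelihood_ratio):
        pre_odds = pre_test_probability / (1 - pre_test_probability)
        post_odds = pre_odds * likelihood_ratio   # Bayes' rule in odds form
        return post_odds / (1 + post_odds)

    p = post_test_probability(0.20, 2.12)  # assumed pre-test probability of 0.20
    print(f"Post-test probability = {p:.2f}")  # ~0.35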


2019 ◽  
Vol 65 (2) ◽  
pp. 291-301 ◽  
Author(s):  
Jean-Paul Salameh ◽  
Matthew D F McInnes ◽  
David Moher ◽  
Brett D Thombs ◽  
Trevor A McGrath ◽  
...  

Background: We evaluated the completeness of reporting of diagnostic test accuracy (DTA) systematic reviews using the recently developed Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-DTA guidelines. Methods: MEDLINE® was searched for DTA systematic reviews published October 2017 to January 2018. The search time span was modulated to reach the desired sample size of 100 systematic reviews. Reporting on a per-item basis using PRISMA-DTA was evaluated. Results: One hundred reviews were included. Mean reported items were 18.6 of 26 (71%; SD = 1.9) for PRISMA-DTA and 5.5 of 11 (50%; SD = 1.2) for PRISMA-DTA for abstracts. Items in the results were frequently reported. Items related to protocol registration, characteristics of included studies, results synthesis, and definitions used in data extraction were infrequently reported. Infrequently reported items from PRISMA-DTA for abstracts included funding information, strengths and limitations, characteristics of included studies, and assessment of applicability. Reporting completeness was higher in higher impact factor journals (18.9 vs 18.1 items; P = 0.04), in studies that cited PRISMA (18.9 vs 17.7 items; P = 0.003), and in those that used supplementary material (19.1 vs 18.0 items; P = 0.004). Variability in reporting was associated with author country (P = 0.04) but not with journal (P = 0.6), abstract word count limitations (P = 0.9), PRISMA adoption (P = 0.2), structured abstracts (P = 0.2), study design (P = 0.8), subspecialty area (P = 0.09), or index test (P = 0.5). Abstracts with a higher word count were more informative (R = 0.4; P < 0.001). No association with word count was observed for full-text reports (R = −0.03; P = 0.06). Conclusions: Recently published reports of DTA systematic reviews are not fully informative when evaluated against the PRISMA-DTA guidelines. These results should guide knowledge translation strategies, both at the journal level (e.g. PRISMA-DTA adoption, increased abstract word counts, and use of supplementary material) and at the author level (PRISMA-DTA citation awareness).

