The relationship between time to diagnose and diagnostic accuracy among internal medicine residents: a randomized experiment

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
J. Staal ◽  
J. Alsma ◽  
S. Mamede ◽  
A. P. J. Olson ◽  
G. Prins-van Gilst ◽  
...  

Abstract Background Diagnostic errors have been attributed to cognitive biases (reasoning shortcuts), which are thought to result from fast reasoning. Suggested solutions include slowing down the reasoning process. However, slower reasoning is not necessarily more accurate than faster reasoning. In this study, we examined the relationship between time to diagnose and diagnostic accuracy. Methods We conducted a multi-center within-subjects experiment in which we prospectively induced availability bias (using Mamede et al.’s methodology) in 117 internal medicine residents. Subsequently, residents diagnosed cases that resembled the bias cases but had a different correct diagnosis. We determined whether residents were correct, incorrect due to bias (i.e. they provided the diagnosis induced by availability bias), or incorrect due to other causes (i.e. they provided another incorrect diagnosis), and compared time to diagnose. Results We did not successfully induce bias: no significant effect of availability bias was found. Therefore, we compared correct diagnoses to all incorrect diagnoses. Residents reached correct diagnoses faster than incorrect diagnoses (115 s vs. 129 s, p < .001). Exploratory analyses of cases where bias was induced showed a trend for time to diagnose on biased diagnoses to be more similar to that on correct diagnoses (115 s vs. 115 s, p = .971) than to that on other errors (115 s vs. 136 s, p = .082). Conclusions We showed that correct diagnoses were made faster than incorrect diagnoses, even within subjects. Errors due to availability bias may be different: exploratory analyses suggest a trend that biased diagnoses were made faster than other incorrect diagnoses. The hypothesis that fast reasoning leads to diagnostic errors should be revisited, and more research into the characteristics of cognitive biases is important because they may differ from other causes of diagnostic errors.

JAMA ◽  
2010 ◽  
Vol 304 (11) ◽  
pp. 1198 ◽  
Author(s):  
Sílvia Mamede ◽  
Tamara van Gog ◽  
Kees van den Berge ◽  
Remy M. J. P. Rikers ◽  
Jan L. C. M. van Saase ◽  
...  

Diagnosis ◽  
2019 ◽  
Vol 6 (2) ◽  
pp. 115-119 ◽  
Author(s):  
Shwetha Iyer ◽  
Erin Goss ◽  
Casey Browder ◽  
Gerald Paccione ◽  
Julia Arnsten

Abstract Background Errors in medicine are common and often tied to diagnosis. Educating physicians about the science of cognitive decision-making, especially during medical school and residency when trainees are still forming clinical habits, may enhance awareness of individual cognitive biases and has the potential to reduce diagnostic errors and improve patient safety. Methods The authors aimed to develop, implement and evaluate a clinical reasoning curriculum for Internal Medicine residents. The authors developed and delivered a clinical reasoning curriculum to 47 PGY2 residents in an Internal Medicine Residency Program at a large urban hospital. The clinical reasoning curriculum consists of six to seven sessions with the specific aims of: (1) educating residents on cognitive steps and reasoning strategies used in clinical reasoning; (2) acknowledging the pitfalls of clinical reasoning and learning how cognitive biases can lead to clinical errors; (3) expanding differential diagnostic ability and developing illness scripts that incorporate discrete clinical prediction rules; and (4) providing opportunities for residents to reflect on their own clinical reasoning (also known as metacognition). Results Forty-seven PGY2 residents participated in the curriculum (2013–2016). Self-assessed comfort in recognizing and applying clinical reasoning skills increased in 15 of 15 domains (p < 0.05 for each). Resident mean scores on the knowledge assessment improved from 58% pre-curriculum to 81% post-curriculum (p = 0.002). Conclusions A case vignette-based clinical reasoning curriculum can effectively increase residents’ knowledge of clinical reasoning concepts and improve residents’ self-assessed comfort in recognizing and applying clinical reasoning skills.


Author(s):  
Corey Chartan ◽  
Hardeep Singh ◽  
Parthasarathy Krishnamurthy ◽  
Moushumi Sur ◽  
Ashley Meyer ◽  
...  

Abstract Objective To investigate effects of a cognitive intervention based on isolation of red flags (I-RED) on diagnostic accuracy of ‘do-not-miss diagnoses.’ Design A 2 × 2 randomized case vignette-based experiment with manipulation of I-RED strategy between subjects and case complexity within subjects. Setting Two university-based residency programs. Participants One hundred and nine pediatric residents from all levels of training. Interventions Participants were randomly assigned to the I-RED vs. control group, and within each group, they were further randomized to the order in which they saw simple and complex cases. The I-RED strategy involved an instruction to look for a constellation of symptoms, signs, clinical data or circumstances that should heighten suspicion for a serious condition. Main Outcome Measures The primary outcome was diagnostic accuracy, scored as 1 if any of the three differentials given by participants included the correct diagnosis, and 0 if not. We analyzed effects of the I-RED strategy on diagnostic accuracy using logistic regression. Results The I-RED strategy did not yield statistically higher diagnostic accuracy compared to controls (62% vs. 48%, respectively; odds ratio = 2.07 [95% confidence interval, 0.78–5.5], P = 0.14), although participants reported higher decision confidence compared to controls (7.00 vs. 5.77 on a scale of 1 to 10, P < 0.02) in simple but not complex cases. The I-RED strategy significantly shortened time to decision (460 vs. 657 s, P < 0.001) and increased the number of red flags generated (3.04 vs. 2.09, P < 0.001). Conclusions A cognitive strategy of prompting red flag isolation prior to differential diagnosis did not improve diagnostic accuracy of ‘do-not-miss diagnoses.’ Given the paucity of evidence-based solutions to reduce diagnostic error and the intervention’s potential effect on confidence, findings warrant additional exploration.


CJEM ◽  
2018 ◽  
Vol 20 (S1) ◽  
pp. S106-S106
Author(s):  
J. Sherbino ◽  
S. Monteiro ◽  
J. Ilgen ◽  
E. Hayden ◽  
E. Howey ◽  
...  

Introduction: Cognitive bias is often cited as an explanation for diagnostic errors. Of the numerous cognitive biases currently discussed in the literature, availability bias, defined as when the current case reminds you of a recent similar example, is the most well known. Despite the ubiquity of cognitive biases in the medical and popular literature, there is surprisingly little evidence to substantiate these claims. The present study sought to measure the influence of availability bias and identify contributing factors that may increase susceptibility to the influence of a recent similar case. Methods: To investigate the role of prior examples and category priming on diagnostic error at different levels of expertise, we devised a two-phase experiment. The experimental intervention was in a validation phase preceding the test, where participants were asked to verify a diagnosis that was either (i) representative of Diagnosis A and similar to a test case, (ii) representative of Diagnosis A and dissimilar to a test case, or (iii) representative of Diagnosis B and similar to a test case. The test phase consisted of 8 written cases, each with two approximately equally likely diagnoses (A or B). Each participant verified 2 cases from each condition, for a total of 6. They then diagnosed all 8 test cases; the remaining 2 test cases had no prior example. All cases were counterbalanced across conditions. Comparison between conditions (i), (ii) and no prior showed the effect of a prior exemplar; comparison between (iii) and no prior showed the effect of category priming. Because cases were designed so that both Diagnosis A and B were likely, overall accuracy was measured as the sum of the proportions of cases in which either was selected. Subjects were emergency medicine staff (n = 40), residents (n = 39) and medical students (n = 32) from McMaster University, the University of Washington, and Harvard Medical School.
Results: Overall, staff had an accuracy (A + B) of 98%, residents 98% and students 85% (F = 35.6, p < .0001). For residents and staff there was no effect of condition (all mean accuracies 97% to 100%); for students there was a clear effect of category priming, with accuracy of 84% for (i), 87% for (ii) and 94% for (iii), but only 73% for the no-prime condition (interaction F = 3.54, p < .002). Conclusion: Although prior research has shown substantial biasing effects of availability, primarily in cases requiring visual diagnosis, the present study has shown such effects only for novices (medical students). Possible explanations need to be explored. Nevertheless, our study shows that with increasing expertise, availability may not be a source of error.


2007 ◽  
Vol 82 (6) ◽  
pp. 587-592 ◽  
Author(s):  
Colin P. West ◽  
Jefrey L. Huntington ◽  
Mashele M. Huschka ◽  
Paul J. Novotny ◽  
Jeff A. Sloan ◽  
...  

2020 ◽  
Vol 29 (7) ◽  
pp. 550-559 ◽  
Author(s):  
Sílvia Mamede ◽  
Marco Antonio de Carvalho-Filho ◽  
Rosa Malena Delbone de Faria ◽  
Daniel Franci ◽  
Maria do Patrocinio Tenorio Nunes ◽  
...  

Background Diagnostic errors have often been attributed to biases in physicians’ reasoning. Interventions to ‘immunise’ physicians against bias have focused on improving reasoning processes and have largely failed. Objective To investigate the effect of increasing physicians’ relevant knowledge on their susceptibility to availability bias. Design, settings and participants Three-phase multicentre randomised experiment with second-year internal medicine residents from eight teaching hospitals in Brazil. Interventions Immunisation: Physicians diagnosed one of two sets of vignettes (either diseases associated with chronic diarrhoea or with jaundice) and compared/contrasted alternative diagnoses with feedback. Biasing phase (1 week later): Physicians were biased towards either inflammatory bowel disease or viral hepatitis. Diagnostic performance test: All physicians diagnosed three vignettes resembling inflammatory bowel disease and three resembling hepatitis (however, all with different diagnoses). Physicians who had increased their knowledge of either chronic diarrhoea or jaundice 1 week earlier were expected to resist the bias attempt. Main outcome measurements Diagnostic accuracy, measured by test score (range 0–1), computed for subjected-to-bias and not-subjected-to-bias vignettes diagnosed by immunised and not-immunised physicians. Results Ninety-one residents participated in the experiment. Diagnostic accuracy differed on subjected-to-bias vignettes, with immunised physicians performing better than non-immunised physicians (0.40 vs 0.24; difference in accuracy 0.16 (95% CI 0.05 to 0.27); p=0.004), but not on not-subjected-to-bias vignettes (0.36 vs 0.41; difference −0.05 (95% CI −0.17 to 0.08); p=0.45). Bias only hampered non-immunised physicians, who performed worse on subjected-to-bias than not-subjected-to-bias vignettes (difference −0.17 (95% CI −0.28 to −0.05); p=0.005); immunised physicians’ accuracy did not differ (p=0.56). Conclusions An intervention directed at increasing knowledge of clinical findings that discriminate between similar-looking diseases decreased physicians’ susceptibility to availability bias, reducing diagnostic errors, in a simulated setting. Future research needs to examine the degree to which the intervention benefits other disease clusters and performance in clinical practice. Trial registration number 68745917.1.1001.0068.


Author(s):  
Sílvia Mamede ◽  
Marco Goeijenbier ◽  
Stephanie C. E. Schuit ◽  
Marco Antonio de Carvalho Filho ◽  
Justine Staal ◽  
...  

Abstract Background Bias in reasoning rather than knowledge gaps has been identified as the origin of most diagnostic errors. However, the role of knowledge in counteracting bias is unclear. Objective To examine whether knowledge of discriminating features (findings that discriminate between look-alike diseases) predicts susceptibility to bias. Design Three-phase randomized experiment. Phase 1 (bias-inducing): Participants were exposed to a set of clinical cases (either hepatitis-IBD or AMI-encephalopathy). Phase 2 (diagnosis): All participants diagnosed the same cases; 4 resembled hepatitis-IBD and 4 AMI-encephalopathy (but all had different diagnoses). Availability bias was expected in the 4 cases similar to those encountered in phase 1. Phase 3 (knowledge evaluation): For each disease, participants decided (max. 2 s) which of 24 findings was associated with the disease. Accuracy of decisions on discriminating features, taken as a measure of knowledge, was expected to predict susceptibility to bias. Participants Internal medicine residents at Erasmus MC, Netherlands. Main Measures The frequency with which higher-knowledge and lower-knowledge physicians gave biased diagnoses based on phase 1 exposure (range 0–4). Time to diagnose was also measured. Key Results Sixty-two physicians participated. Higher-knowledge physicians yielded to availability bias less often than lower-knowledge physicians (0.35 vs 0.97; p = 0.001; difference, 0.62 [95% CI, 0.28–0.95]). Whereas lower-knowledge physicians tended to make more of these errors on subjected-to-bias than on not-subjected-to-bias cases (p = 0.06; difference, 0.35 [CI, −0.02 to 0.73]), higher-knowledge physicians resisted the bias (p = 0.28). Both groups spent more time diagnosing subjected-to-bias than not-subjected-to-bias cases (p = 0.04), without differences between groups. Conclusions Knowledge of features that discriminate between look-alike diseases reduced susceptibility to bias in a simulated setting. Reflecting further may be required to overcome bias, but succeeding depends on having the appropriate knowledge. Future research should examine whether the findings apply to real practice and to more experienced physicians.


Diagnosis ◽  
2018 ◽  
Vol 5 (4) ◽  
pp. 257-266
Author(s):  
Mark L. Graber ◽  
Dan Berg ◽  
Welcome Jerde ◽  
Phillip Kibort ◽  
Andrew P.J. Olson ◽  
...  

Abstract This is a case report involving diagnostic errors that resulted in the death of a 15-year-old girl, with commentaries on the case from her parents and the involved providers. Julia Berg presented with fatigue, fevers, sore throat and right-sided flank pain. Based on a computed tomography (CT) scan that identified an abnormal-appearing gall bladder, and markedly elevated bilirubin and “liver function tests”, she was hospitalized and ultimately underwent surgery for suspected cholecystitis and/or cholangitis. Julia died of unexplained post-operative complications. Her autopsy, and additional testing, suggested that the correct diagnosis was Epstein-Barr virus infection with acalculous cholecystitis. The correct diagnosis might have been considered had more attention been paid to her presenting symptoms and a striking degree of lymphocytosis that was repeatedly demonstrated. The case illustrates how cognitive “biases” can contribute to harm from diagnostic error. The case has profoundly impacted the involved healthcare organization, and Julia’s parents have become leaders in helping advance awareness and education about diagnostic error and its prevention.


Author(s):  
Josepha Kuhn ◽  
Pieter van den Berg ◽  
Silvia Mamede ◽  
Laura Zwaan ◽  
Patrick Bindels ◽  
...  

Abstract When physicians do not estimate their diagnostic accuracy correctly, i.e. show inaccurate diagnostic calibration, diagnostic errors or overtesting can occur. A previous study showed that physicians’ diagnostic calibration for easy cases improved after they received feedback on their previous diagnoses. We investigated whether diagnostic calibration would also improve from this feedback when cases were more difficult. Sixty-nine general-practice residents were randomly assigned to one of two conditions. In the feedback condition, they diagnosed a case, rated their confidence in their diagnosis, their invested mental effort, and case complexity, and then were shown the correct diagnosis (feedback). This was repeated for 12 cases. Participants in the control condition did the same without receiving feedback. We analysed calibration in terms of (1) absolute accuracy (the absolute difference between diagnostic accuracy and confidence), and (2) bias (confidence minus diagnostic accuracy). There was no difference between the conditions in the measurements of calibration (absolute accuracy, p = .204; bias, p = .176). Post-hoc analyses showed that on correctly diagnosed cases (on which participants are either accurate or underconfident), calibration in the feedback condition was less accurate than in the control condition, p = .013. This study shows that feedback on diagnostic performance did not improve physicians’ calibration for more difficult cases. One explanation could be that participants were confronted with their mistakes and thereafter lowered their confidence ratings even if cases were diagnosed correctly. This shows how difficult it is to improve diagnostic calibration, which is important to prevent diagnostic errors or maltreatment.
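The two calibration measures this abstract defines (absolute accuracy as the absolute difference between confidence and accuracy, and bias as their signed difference) can be illustrated with a short sketch. The snippet below is illustrative only, not the authors' analysis code: the function names and case data are invented, per-case accuracy is assumed to be scored 0/1, and confidence is assumed rescaled to a 0–1 range.

```python
def absolute_accuracy(accuracies, confidences):
    """Mean absolute difference between confidence and accuracy per case."""
    return sum(abs(c - a) for a, c in zip(accuracies, confidences)) / len(accuracies)

def bias(accuracies, confidences):
    """Mean signed difference; positive values indicate overconfidence."""
    return sum(c - a for a, c in zip(accuracies, confidences)) / len(accuracies)

# Invented example: three cases, two diagnosed correctly (1) and one not (0),
# with confidence ratings rescaled to 0-1.
acc = [1, 0, 1]
conf = [0.9, 0.6, 0.4]
print(round(absolute_accuracy(acc, conf), 3))  # 0.433
print(round(bias(acc, conf), 3))               # -0.033
```

Note that the two measures capture different failure modes: a physician can have a large absolute miscalibration while showing near-zero bias, if overconfident and underconfident cases cancel out in the signed average.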
