Payment Innovations To Improve Diagnostic Accuracy And Reduce Diagnostic Error

2018 ◽  
Vol 37 (11) ◽  
pp. 1828-1835 ◽  
Author(s):  
Robert Berenson ◽  
Hardeep Singh


Diagnosis ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Maria R. Dahm ◽  
Carmel Crock

Abstract Objectives To investigate from a linguistic perspective how clinicians deliver diagnoses to patients, and how these statements relate to diagnostic accuracy. Methods To identify temporal and discursive features in diagnostic statements, we analysed 16 video-recorded interactions collected during a practice high-stakes exam for internationally trained clinicians (25% female, n=4) seeking accreditation to practise in Australia. We recorded time spent on history-taking, examination, diagnosis and management. We extracted and deductively analysed types of diagnostic statements informed by the literature. Results Half of the participants arrived at the correct diagnosis, while the other half misdiagnosed the patient. On average, clinicians who made a diagnostic error spent 30 s less on history-taking and 30 s more on delivering the diagnosis than clinicians who diagnosed correctly. Most diagnostic statements were evidentialised, describing specific observations (n=24) or alluding to diagnostic processes (n=7); the remainder expressed personal knowledge or judgement (n=8), generalisations (n=6) and assertions (n=4). Clinicians who misdiagnosed provided more specific observations (n=14) than those who diagnosed correctly (n=9). Conclusions Interactions involving a diagnostic error had shorter history-taking periods, longer diagnostic statements and featured more evidence. Time spent on history-taking and diagnosis, and the use of evidentialised diagnostic statements, may be indicators of diagnostic accuracy.


Author(s):  
Corey Chartan ◽  
Hardeep Singh ◽  
Parthasarathy Krishnamurthy ◽  
Moushumi Sur ◽  
Ashley Meyer ◽  
...  

Abstract Objective To investigate effects of a cognitive intervention based on isolation of red flags (I-RED) on diagnostic accuracy of ‘do-not-miss diagnoses.’ Design A 2 × 2 randomized case vignette-based experiment with manipulation of I-RED strategy between subjects and case complexity within subjects. Setting Two university-based residency programs. Participants One-hundred and nine pediatric residents from all levels of training. Interventions Participants were randomly assigned to the I-RED vs. control group, and within each group, they were further randomized to the order in which they saw simple and complex cases. The I-RED strategy involved an instruction to look for a constellation of symptoms, signs, clinical data or circumstances that should heighten suspicion for a serious condition. Main Outcome Measures Primary outcome was diagnostic accuracy, scored as 1 if any of the three differentials given by participants included the correct diagnosis, and 0 if not. We analyzed effects of I-RED strategy on diagnostic accuracy using logistic regression. Results I-RED strategy did not yield statistically higher diagnostic accuracy compared to controls (62 vs. 48%, respectively; odds ratio = 2.07 [95% confidence interval, 0.78–5.5], P = 0.14), although participants reported higher decision confidence compared to controls (7.00 vs. 5.77 on a scale of 1 to 10, P < 0.02) in simple but not complex cases. I-RED strategy significantly shortened time to decision (460 vs. 657 s, P < 0.001) and increased the number of red flags generated (3.04 vs. 2.09, P < 0.001). Conclusions A cognitive strategy of prompting red flag isolation prior to differential diagnosis did not improve diagnostic accuracy of ‘do-not-miss diagnoses.’ Given the paucity of evidence-based solutions to reduce diagnostic error and the intervention’s potential effect on confidence, findings warrant additional exploration.
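The odds ratio reported in this abstract can be illustrated by computing one from a 2×2 accuracy table. The sketch below is a minimal illustration only, using hypothetical counts (not the study's raw data), and pairs the odds ratio with a standard Wald 95% confidence interval.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = intervention correct, b = intervention incorrect,
    c = control correct,      d = control incorrect."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of the sum of reciprocal cell counts.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration (not the study's data):
or_, lo, hi = odds_ratio_ci(31, 19, 24, 26)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval that spans 1.0, as in the abstract's 0.78–5.5, is what underlies the non-significant P value despite the point estimate favouring the intervention.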


Diagnosis ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Ava L. Liberman ◽  
Natalie T. Cheng ◽  
Benjamin W. Friedman ◽  
Maya T. Gerstein ◽  
Khadean Moncrieffe ◽  
...  

Abstract Objectives We sought to understand the knowledge, attitudes, and beliefs of emergency medicine (EM) physicians towards non-specific neurological conditions and the use of clinical decision support (CDS) to improve diagnostic accuracy. Methods We conducted semi-structured interviews of EM physicians at four emergency departments (EDs) affiliated with a single US healthcare system. Interviews were conducted until thematic saturation was achieved. Conventional content analysis was used to identify themes related to EM physicians’ perspectives on acute diagnostic neurology; directed content analysis was used to explore views regarding CDS. Each interview transcript was independently coded by two researchers using an iteratively refined codebook with consensus-based resolution of coding differences. Results We identified two domains regarding diagnostic safety: (1) challenges unique to neurological complaints and (2) challenges in EM more broadly. Themes relevant to neurology included: (1) knowledge gaps and uncertainty, (2) skepticism about neurology, (3) comfort with basic as opposed to detailed neurological examination, and (4) comfort with non-neurological diseases. Themes relevant to diagnostic decision making in the ED included: (1) cognitive biases, (2) ED system/environmental issues, (3) patient barriers, (4) comfort with diagnostic uncertainty, and (5) concerns regarding diagnostic error identification and measurement. Most participating EM physicians were enthusiastic about the potential for well-designed CDS to improve diagnostic accuracy for non-specific neurological complaints. Conclusions Physicians identified diagnostic challenges unique to neurological diseases as well as issues related more generally to diagnostic accuracy in EM. These physician-reported issues should be accounted for when designing interventions to improve ED diagnostic accuracy.


2020 ◽  
Vol 96 (1140) ◽  
pp. 581-583 ◽  
Author(s):  
Taro Shimizu

Reducing diagnostic error is a major issue in medical care, and various strategies have been proposed to prevent it. The most prevalent factor in diagnostic error is cognitive error by physicians; reducing cognitive error should therefore lead to a substantial reduction in diagnostic error. That said, few studies have described new strategies to increase diagnostic accuracy that focus on the cognitive processes of physicians. The current study describes new diagnostic strategies using cognitive forcing: horizontal tracing is a strategy to identify comorbidities reliably, while vertical tracing identifies an underlying condition.


Author(s):  
Martin Caliendo ◽  
Joanna Abraham

Diagnostic error accounts for up to 17 percent of all adverse patient outcomes. Cognitive errors, in particular faulty information synthesis, account for the majority of these diagnostic errors. Reflective practice has been reported as a strategy to improve diagnostic accuracy. The theoretical foundation for using reflective practice to decrease diagnostic error is well developed; however, empirical support is lacking and inconsistent. To address this gap, we conducted an integrative review to critically evaluate the evidence supporting interventions that train clinicians in reflective practice to improve the diagnostic accuracy of their decision making. We discuss our findings on the analytical, theoretical and methodological foundations of current evaluation studies of training in reflective practice, in addition to identifying gaps in knowledge that will guide potential areas for future research.


2011 ◽  
Vol 140 (8) ◽  
pp. 1515-1524 ◽  
Author(s):  
F. LEWIS ◽  
M. J. SANCHEZ-VAZQUEZ ◽  
P. R. TORGERSON

SUMMARY Identification of covariates associated with disease is a key part of epidemiological research. Yet, while adjustment for imperfect diagnostic accuracy is well established when estimating disease prevalence, similar adjustment when estimating covariate effects is far less common, although of important practical relevance given the sensitivity of such analyses to misclassification error. Case-study data exploring evidence for seasonal differences in Salmonella prevalence using serological testing are presented; in addition, simulated data with known properties are analysed. It is demonstrated that: (i) adjusting for misclassification error in models comprising continuous covariates can have a very substantial impact on the conclusions which can be drawn from any analyses; and (ii) incorporating prior knowledge through Bayesian estimation can provide potentially more informative assessments of covariates while removing the assumption of perfect diagnostic accuracy. The method presented is widely applicable and easily generalised to many types of epidemiological studies.
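The simplest adjustment of apparent prevalence for imperfect diagnostic accuracy is the classical Rogan–Gladen estimator, which corrects for known test sensitivity and specificity. The sketch below illustrates only that basic correction, not the Bayesian covariate model the paper develops, and the survey numbers are hypothetical.

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Correct an apparent (test-positive) prevalence for imperfect
    diagnostic accuracy: true = (apparent + sp - 1) / (se + sp - 1).
    The result is clamped to [0, 1], since sampling noise can push
    the raw estimate outside that range."""
    true_prev = (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(true_prev, 0.0), 1.0)

# Hypothetical serological survey: 30% of samples test positive with a
# test that is 90% sensitive and 95% specific.
print(rogan_gladen(0.30, 0.90, 0.95))  # ~0.294
```

The correction matters most near the extremes: when apparent prevalence approaches the false-positive rate, the adjusted estimate can drop to zero, which is one reason misclassification-naive covariate analyses can mislead.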


Author(s):  
Matthew Meyer ◽  
Julia Keith-Rokosh ◽  
Hasini Reddy ◽  
Joseph Megyesi ◽  
Robert R. Hammond

Objective: The goal of this study was to optimize intraoperative neuropathology consultations by studying trends and sources of diagnostic error. We hypothesized that errors in intraoperative diagnoses would have sampling, technical, and interpretive sources. The study also audited diagnostic strengths, weaknesses and trends associated with increasing experience. We hypothesized that errors would decline and that the accuracy of “qualified” diagnoses would improve with experience. Methods: The pathologist's first 100 cases (P1), second 100 (P2), and most recent 100 (P3, after ten years in practice) formed the data set. Intraoperative diagnoses were scored as correct, minor error or major error using the final diagnosis as the gold standard. Incorrect diagnoses were re-examined by two reviewers to identify sources of error. Results: Among the 300 cases there were 22 errors, with 11 in P1, 9 in P2 and 2 in P3. Sampling contributed to 17 errors (77%), technical factors to 7 (32%) and interpretive factors to 16 (73%). Improvement in diagnostic accuracy between P1 and P2 (p=0.8143), or between P2 and P3 (p=0.0582), did not reach significance. However, significant improvement was found between P1 and P3 (p=0.0184). Conclusion: The present study was a practical and informative audit for the pathologist and trainees. It reaffirmed the accuracy of intraoperative neuropathology diagnoses and informed our understanding of sources of error. Most errors were due to a combination of sampling, technical and interpretive factors. A significant improvement in diagnostic proficiency was observed with increasing experience.


2020 ◽  
Vol 73 (10) ◽  
pp. 681-685
Author(s):  
David Nigel Poller ◽  
Massimo Bongiovanni ◽  
Beatrix Cochand-Priollet ◽  
Sarah J Johnson ◽  
Miguel Perez-Machado

This review article summarises systems for categorisation of diagnostic errors in pathology and cytology with regard to diagnostic accuracy and the published information on human factors (HFs) in pathology to date. A 12-point event-based checklist for errors of diagnostic accuracy in histopathology and cytopathology is proposed, derived from Dupont’s ‘Dirty Dozen’ HF checklist as used in the aerospace industry for aircraft maintenance. This HF checklist comprises 12 HFs: (1) Failure of communication. (2) Complacency. (3) Lack of knowledge. (4) Distractions. (5) Lack of teamwork. (6) Fatigue. (7) Lack of resources. (8) Pressure. (9) Lack of assertiveness. (10) Stress. (11) Norms. (12) Lack of awareness. The accompanying article explains practical examples of how each of these 12 HFs may cause errors in diagnostic accuracy in pathology. This checklist could be used as a template for analysis of accuracy and risk of diagnostic error in pathology, either retrospectively ‘after the event’ or prospectively at the time of diagnosis. There is a need for further evaluation and validation of this proposed 12-point HF checklist and similar systems for categorisation of diagnostic errors and diagnostic accuracy in pathology based on HF principles.


Diagnosis ◽  
2018 ◽  
Vol 5 (2) ◽  
pp. 63-69 ◽  
Author(s):  
Melissa Sundberg ◽  
Catherine O. Perron ◽  
Amir Kimia ◽  
Assaf Landschaft ◽  
Lise E. Nigrovic ◽  
...  

Abstract Background: Diagnostic error can lead to increased morbidity, mortality, healthcare utilization and cost. The 2015 National Academy of Medicine report “Improving Diagnosis in Healthcare” called for improving diagnostic accuracy by developing innovative electronic approaches to reduce medical errors, including missed or delayed diagnosis. The objective of this article was to develop a process to detect potential diagnostic discrepancy between pediatric emergency and inpatient discharge diagnosis using a computer-based tool facilitating expert review. Methods: Using a literature search and expert opinion, we identified 10 pediatric diagnoses with potential for serious consequences if missed or delayed. We then developed and applied a computerized tool to identify linked emergency department (ED) encounters and hospitalizations with these discharge diagnoses. The tool identified discordance between ED and hospital discharge diagnoses. Cases identified as discordant were manually reviewed by pediatric emergency medicine experts to confirm discordance. Results: Our computerized tool identified 55,233 ED encounters for hospitalized children over a 5-year period, of which 2161 (3.9%) had one of the 10 selected high-risk diagnoses. After expert record review, we identified 67 (3.1%) cases with discordance between ED and hospital discharge diagnoses. The most common discordant diagnoses were Kawasaki disease and pancreatitis. Conclusions: We successfully developed and applied a semi-automated process to screen a large volume of hospital encounters to identify discordant diagnoses for selected pediatric medical conditions. This process may be valuable for informing and improving ED diagnostic accuracy.
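The screening step described in this abstract can be sketched as a simple filter over linked encounters: flag cases where a high-risk hospital discharge diagnosis differs from the ED diagnosis, then hand those to expert reviewers. The record layout, diagnosis strings and high-risk list below are hypothetical stand-ins for illustration, not the authors' actual tool.

```python
# High-risk discharge diagnoses to screen (hypothetical subset).
HIGH_RISK = {"kawasaki disease", "pancreatitis", "intussusception"}

def flag_discordant(encounters):
    """Given linked ED/inpatient encounters, return IDs of those where
    the hospital discharge diagnosis is high-risk and the ED diagnosis
    differs -- candidates for manual expert review, not confirmed errors."""
    flagged = []
    for enc in encounters:
        ed_dx = enc["ed_dx"].strip().lower()
        hosp_dx = enc["discharge_dx"].strip().lower()
        if hosp_dx in HIGH_RISK and ed_dx != hosp_dx:
            flagged.append(enc["encounter_id"])
    return flagged

linked = [
    {"encounter_id": 1, "ed_dx": "Viral illness", "discharge_dx": "Kawasaki disease"},
    {"encounter_id": 2, "ed_dx": "Pancreatitis", "discharge_dx": "Pancreatitis"},
    {"encounter_id": 3, "ed_dx": "Gastroenteritis", "discharge_dx": "Pancreatitis"},
]
print(flag_discordant(linked))  # [1, 3]
```

Keeping the automated step deliberately over-inclusive and routing every flagged case to expert review mirrors the semi-automated design the authors report: the computer narrows 55,233 encounters to a reviewable set, and clinicians confirm true discordance.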


Diagnosis ◽  
2018 ◽  
Vol 5 (1) ◽  
pp. 21-28
Author(s):  
Deborah DiNardo ◽  
Sarah Tilstra ◽  
Melissa McNeil ◽  
William Follansbee ◽  
Shanta Zimmer ◽  
...  

Abstract Background: While there is some experimental evidence to support the use of cognitive forcing strategies to reduce diagnostic error in residents, the potential usability of such strategies in the clinical setting has not been explored. We sought to test the effect of a clinical reasoning tool on diagnostic accuracy and to obtain feedback on its usability and acceptability. Methods: We conducted a randomized behavioral experiment testing the effect of this tool on diagnostic accuracy on written cases among post-graduate year 3 (PGY-3) residents at a single internal medicine residency program in 2014. Residents completed written clinical cases in a proctored setting with and without prompts to use the tool. The tool encouraged reflection on concordant and discordant aspects of each case. We used random effects regression to assess the effect of the tool on diagnostic accuracy on the independent case sets, controlling for case complexity. We then conducted audiotaped structured focus group debriefing sessions and reviewed the tapes for facilitators of and barriers to use of the tool. Results: Of 51 eligible PGY-3 residents, 34 (67%) participated in the study. Average diagnostic accuracy increased from 52% to 60% with the tool, a difference that just met the test for statistical significance in adjusted analyses (p=0.05). Residents reported that the tool was generally acceptable and understandable, but participants did not recognize its utility for simple cases, suggesting the presence of overconfidence bias. Conclusions: A clinical reasoning tool improved residents' diagnostic accuracy on written cases. Overconfidence bias is a potential barrier to its use in the clinical setting.

