Improving Diagnostic Error Detection and Analysis: The First Step on a Long Path to Diagnostic Error Prevention

Author(s):  
Lawrence Lurvey ◽  
Michael H Kanter
Diagnosis ◽  
2014 ◽  
Vol 1 (1) ◽  
pp. 75-78 ◽


Author(s):  
James Phillips

Abstract The question of diagnostic error in psychiatry involves two intertwined issues: diagnosis and error detection. You cannot detect diagnostic error unless you have a reliable, valid method of making diagnoses. Since the diagnostic process is less certain in psychiatry than in general medicine, error detection is correspondingly less confident. Psychiatric diagnostic categories are developed without laboratory tests and other biomarkers. These limitations dramatically weaken the validity of psychiatric diagnoses and render error detection an uncertain undertaking, with no gold standard such as laboratory findings and tissue analysis, as in most of general medicine. With these limitations in mind, I review the methods that are available for error detection in psychiatry.


2003 ◽  
Vol 127 (11) ◽  
pp. 1489-1492 ◽  
Author(s):  
Martin J. Trotter ◽  
Andrea K. Bruecks

Abstract Context.—Slide review has been advocated as a means to reduce diagnostic error in surgical pathology and is considered an important component of a total quality assurance program. Blinded review is an unbiased method of error detection, and this approach may be used to determine the diagnostic discrepancy rates in surgical pathology. Objective.—To determine the diagnostic discrepancy rate for skin biopsies reported by general pathologists. Design.—Five hundred eighty-nine biopsies from 500 consecutive cases submitted by primary care physicians and reported by general pathologists were examined by rapid-screen, blinded review by 2 dermatopathologists, and the original diagnosis was compared with the review interpretation. Results.—Agreement was observed in 551 (93.5%) of 589 biopsies. Blinded review of these skin biopsies by experienced dermatopathologists had a sensitivity of 100% (all lesions originally reported were detected during review). False-negative errors were the most common discrepancy, but false positives, threshold discrepancies, and differences in type or grade were also observed. Only 1.4% of biopsies had discrepancies that were of potential clinical importance. Conclusions.—Blinded review demonstrates that general pathologists reporting skin biopsies submitted by primary care physicians have a low diagnostic error rate. The method detects both false-negative and false-positive cases and identifies problematic areas that may be targeted in continuing education activities. Blinded review is a useful component of a dermatopathology quality improvement program.


2017 ◽  
Vol 44 (4) ◽  
pp. 1212-1223 ◽  
Author(s):  
Michelle Passarge ◽  
Michael K. Fix ◽  
Peter Manser ◽  
Marco F. M. Stampanoni ◽  
Jeffrey V. Siebers

Diagnosis ◽  
2015 ◽  
Vol 2 (1) ◽  
pp. 3-19 ◽  
Author(s):  
Edna C. Shenvi ◽  
Robert El-Kareh

Abstract Diagnostic errors are common and costly, but difficult to detect. “Trigger” tools have promise to facilitate detection, but have not been applied specifically for inpatient diagnostic error. We performed a scoping review to collate all individual “trigger” criteria that have been developed or validated that may indicate that an inpatient diagnostic error has occurred. We searched three databases and screened 8568 titles and abstracts to ultimately include 33 articles. We also developed a conceptual framework of diagnostic error outcomes using real clinical scenarios, and used it to categorize the extracted criteria. Of the multiple criteria we found related to inpatient diagnostic error and amenable to automated detection, the most common were death, transfer to a higher level of care, arrest or “code”, and prolonged length of hospital stay. Several others, such as abrupt stoppage of multiple medications or change in procedure, may also be useful. Validation for general adverse event detection was done in 15 studies, but only one performed validation for diagnostic error specifically. Automated detection was used in only two studies. These criteria may be useful for developing diagnostic error detection tools.
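Trigger criteria such as those above lend themselves to automated screening of admission records. A minimal sketch of that idea follows; the record field names and the length-of-stay threshold are illustrative assumptions, not values taken from the review:

```python
# Hypothetical trigger-based screen over admission records.
# Field names ("died", "transferred_to_icu", etc.) and the 14-day
# threshold are assumptions for illustration only.
TRIGGERS = {
    "death": lambda r: r.get("died", False),
    "icu_transfer": lambda r: r.get("transferred_to_icu", False),
    "code_event": lambda r: r.get("arrest_code", False),
    "prolonged_stay": lambda r: r.get("length_of_stay_days", 0) > 14,
}

def screen(record):
    """Return the names of all triggers a single admission record fires."""
    return [name for name, rule in TRIGGERS.items() if rule(record)]

record = {"length_of_stay_days": 21, "transferred_to_icu": True}
print(screen(record))  # fires icu_transfer and prolonged_stay
```

Records firing one or more triggers would then be routed to manual review, since a trigger flags a possible diagnostic error rather than confirming one.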


2006 ◽  
Vol 130 (5) ◽  
pp. 626-629 ◽  
Author(s):  
Andrew A. Renshaw

Abstract Context.—Both gynecologic cytology and surgical pathology use similar methods to measure diagnostic error, but differences exist between how these methods have been applied in the 2 fields. Objective.—To compare the application of methods of error detection in gynecologic cytology and surgical pathology. Data Sources.—Review of the literature. Conclusions.—There are several different approaches to measuring error, all of which have limitations. Measuring error using reproducibility as the gold standard is a common method to determine error. While error rates in gynecologic cytology are well characterized and methods for objectively assessing error in the legal setting have been developed, meaningful methods to measure error rates in clinical practice are not commonly used and little is known about the error rates in this setting. In contrast, in surgical pathology the error rates are not as well characterized and methods for assessing error in the legal setting are not as well defined, but methods to measure error in actual clinical practice have been characterized and preliminary data from these methods are now available concerning the error rates in this setting.


2020 ◽  
pp. jclinpath-2020-206991
Author(s):  
Murali Varma ◽  
W Glenn McCluggage ◽  
Varsha Shah ◽  
Daniel M Berney

It is established good practice for histopathologists to obtain a second opinion in difficult cases. However, it is becoming more common for histology material to be reviewed either at the time of reporting (double-reporting) or as part of the preparation for multidisciplinary team meetings. Routine histological review does not provide ‘value for money’ and could even increase the risk of diagnostic error. The focus should be on error prevention as opposed to error detection. If pathologists get it right the first time, then there would be less need for ‘double checking’. Increased subspecialisation could increase diagnostic confidence and reduce error rates. Double-reporting and retrospective review should be limited to selected cases. We describe a protocol for clearly recording the process and outcome of such reviews.


2005 ◽  
Vol 129 (10) ◽  
pp. 1237-1245 ◽  
Author(s):  
Richard J. Zarbo ◽  
Frederick A. Meier ◽  
Stephen S. Raab

Abstract Objectives.—To define the magnitude of error occurring in anatomic pathology, to propose a scheme to classify such errors so their influence on clinical outcomes can be evaluated, and to identify quality assurance procedures able to reduce the frequency of errors. Design.—(a) Peer-reviewed literature search via PubMed for studies from single institutions and multi-institutional College of American Pathologists Q-Probes studies of anatomic pathology error detection and prevention practices; (b) structured evaluation of defects in surgical pathology reports uncovered in the Department of Pathology and Laboratory Medicine of the Henry Ford Health System in 2001–2003, using a newly validated error taxonomy scheme; and (c) comparative review of anatomic pathology quality assurance procedures proposed to reduce error. Results.—Marked differences in both definitions of error and pathology practice make comparison of error detection and prevention procedures among publications from individual institutions impossible. Q-Probes studies further suggest that observer redundancy reduces diagnostic variation and interpretive error, which ranges from 1.2 to 50 errors per 1000 cases; however, it is unclear which forms of such redundancy are the most efficient in uncovering diagnostic error. The proposed error taxonomy tested has shown a very good interobserver agreement of 91.4% (κ = 0.8780; 95% confidence limit, 0.8416–0.9144), when applied to amended reports, and suggests a distribution of errors among identification, specimen, interpretation, and reporting variables. Conclusions.—Presently, there are no standardized tools for defining error in anatomic pathology, so it cannot be reliably measured nor can its clinical impact be assessed. The authors propose a standardized error classification that would permit measurement of error frequencies, clinical impact of errors, and the effect of error reduction and prevention efforts. 
In particular, the value of double-reading, case conferences, and consultations (the traditional triad of error control in anatomic pathology) awaits objective assessment.
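The interobserver agreement reported above (κ = 0.8780) is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance: κ = (p_o − p_e)/(1 − p_e). A minimal sketch of the computation; the agreement-table counts below are hypothetical, not the study's data:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square inter-rater agreement table.

    table[i][j] = number of cases rater 1 placed in category i
    and rater 2 placed in category j.
    """
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # Chance agreement: product of the two raters' marginal proportions.
    p_e = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table for two reviewers classifying reports
# as error / no error (counts are illustrative only).
table = [[40, 5],
         [5, 50]]
print(round(cohens_kappa(table), 3))
```

Note that κ is always below raw agreement when chance agreement is nonzero, which is why the study reports both the 91.4% agreement and the lower κ of 0.8780.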

