P2-352: COMMUNICATING UNCERTAINTY WHEN DISCLOSING DIAGNOSTIC TEST RESULT: THE ABIDE-CLINICAL ENCOUNTER STUDY

2006 ◽ Vol 14 (7S_Part_15) ◽ pp. P823-P823
Author(s): Leonie N.C. Visser, Sophie Pelt, Marij A. Hillen, Femke H. Bouwman, Wiesje M. Van der Flier, ...

2005 ◽ Vol 44 (01) ◽ pp. 124-126
Author(s): W. Lehmacher, M. Hellmich

Summary. Objectives: Bayes’ rule formalizes how the pre-test probability of having a condition of interest is changed by a diagnostic test result to yield the post-test probability of having the condition. To simplify this calculation, a geometric solution in the form of a ruler is presented. Methods: Using odds and the likelihood ratio of a test result in favor of having the condition of interest, Bayes’ rule can be expressed succinctly as “the post-test odds equals the pre-test odds times the likelihood ratio”. Taking logarithms of both sides yields an additive equation. Results: The additive log-odds equation can easily be solved geometrically. We propose a ruler made of two scales that are adjusted laterally. A different, widely used solution in the form of a nomogram was published by Fagan [2]. Conclusions: While use of the nomogram seems more obvious, the ruler may be easier to operate in clinical practice since no straight edge is needed for precise reading. Moreover, the ruler yields more intuitive results because it shows the change in probability due to a given test result on the same scale.
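To make the odds form concrete, here is a minimal sketch (in Python, not from the paper) of the calculation the ruler solves geometrically; the function names and the example numbers (a 10% pre-test probability and a likelihood ratio of 8) are assumptions chosen purely for illustration.

```python
import math

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' rule in odds form: post-test odds = pre-test odds * likelihood ratio."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

def post_test_log_odds(pre_test_prob, likelihood_ratio):
    """Additive form used by the ruler: log(post-test odds) = log(pre-test odds) + log(LR)."""
    return math.log10(pre_test_prob / (1.0 - pre_test_prob)) + math.log10(likelihood_ratio)

# Illustrative values (assumed, not from the paper): 10% pre-test probability, LR+ = 8.
print(round(post_test_probability(0.10, 8.0), 2))  # ~0.47
print(round(post_test_log_odds(0.10, 8.0), 2))     # log10 of the post-test odds, ~-0.05
```

Because the log form is additive, sliding one scale of the ruler by log(LR) performs the same update the code above computes numerically.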


2016 ◽ Vol 55 (10) ◽ pp. 1379-1382
Author(s): Toshihide Izumida, Hidenao Sakata, Masahiko Nakamura, Yumiko Hayashibara, Noriko Inasaki, ...

2020 ◽ Vol 30 (8) ◽ pp. 1287-1300
Author(s): Melissa Miao, Maria R. Dahm, Julie Li, Judith Thomas, Andrew Georgiou

We sought (a) an inductive understanding of patient and clinician perspectives on, and experiences of, the communication of diagnostic test information and (b) a normative understanding of the management of uncertainty that occurs during the clinical encounter in emergency care. Between 2016 and 2018, 58 interviews were conducted with patients and with nursing, medical, and managerial staff. Interview data were analyzed sequentially, first through an inductive thematic analysis and then through a normative theory of uncertainty management. Themes of “Ideals,” “Service Efficiency,” and “Managing Uncertainty” were inductively identified as influencing the communication of diagnostic test information. A normative theory of uncertainty management highlighted (a) how these themes reflected the interaction’s sociocultural context, encapsulated the various criteria by which clinicians and patients evaluated the appropriateness and effectiveness of their communication, and represented competing goals during the clinical encounter, and (b) how systemic tensions between themes accounted for whether the communication of diagnostic test information occurred, was deferred, or was avoided.


2020 ◽ Vol 20 (1)
Author(s): Breanna Wright, Alyse Lennox, Mark L. Graber, Peter Bragge

Abstract. Background: Communication failures involving test results contribute to patient harm and sentinel events. This article aims to synthesise review evidence, practice insights and patient perspectives addressing problems encountered in the communication of diagnostic test results. Methods: The rapid review identified ten systematic reviews and four narrative reviews. Five practitioner interviews identified insights into interventions and implementation, and a citizen panel with 15 participants explored the patient viewpoint. Results: The rapid review provided support for the role of technology in ensuring effective communication; behavioural interventions such as audit and feedback could be effective in changing clinician behaviour; and point-of-care tests (bedside testing) eliminate the communication breakdown problem altogether. The practice interviews highlighted transparency and clarifying the lines of responsibility as central to improving test result communication. Enabling better information sharing, implementing adequate planning and utilising technology were also identified in the practice interviews as viable strategies to improve test result communication. The citizen panel highlighted technology as critical to improving communication of test results to both health professionals and patients. Patients also highlighted the importance of having different ways of accessing test results, which is particularly pertinent when ensuring suitability for vulnerable populations. Conclusions: This paper draws together multiple perspectives on the problem of failures in diagnostic test result communication to inform appropriate interventions. Across the three studies, technology was identified as the most feasible option for closing the loop on test result communication. However, clear, consistent communication and more streamlined processes also emerged as key elements. Review registration: The protocol for the rapid review was registered with PROSPERO (CRD42018093316).


2003 ◽ Vol 42 (03) ◽ pp. 260-264
Author(s): W. A. Benish

Summary. Objectives: This paper demonstrates that diagnostic test performance can be quantified as the average amount of information the test result (R) provides about the disease state (D). Methods: A fundamental concept of information theory, mutual information, is directly applicable to this problem. This statistic quantifies the amount of information that one random variable contains about another random variable. Prior to performing a diagnostic test, R and D are random variables. Hence, their mutual information, I(D;R), is the amount of information that R provides about D. Results: I(D;R) is a function of both 1) the pretest probabilities of the disease state and 2) the set of conditional probabilities relating each possible test result to each possible disease state. The area under the receiver operating characteristic curve (AUC) is a popular measure of diagnostic test performance which, in contrast to I(D;R), is independent of the pretest probabilities; it is a function of only the set of conditional probabilities. The AUC is not a measure of diagnostic information. Conclusions: Because I(D;R) depends on the pretest probabilities, knowledge of the setting in which a diagnostic test is employed is a necessary condition for quantifying the amount of information it provides. Advantages of I(D;R) over the AUC are that it can be calculated without invoking an arbitrary curve-fitting routine, it is applicable to situations in which multiple diagnoses are under consideration, and it quantifies test performance in meaningful units (bits of information).
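As a rough illustration of the statistic described above, the sketch below computes I(D;R) in bits from pretest probabilities and conditional probabilities. The two-state, two-result example values (20% pretest probability, sensitivity 0.9, specificity 0.8) are assumptions made up for illustration and do not come from the paper.

```python
import math

def mutual_information_bits(pretest_probs, conditional_probs):
    """I(D;R) in bits, given P(D=d) and P(R=r | D=d)."""
    # Joint distribution P(d, r) = P(d) * P(r | d)
    joint = {(d, r): pretest_probs[d] * p
             for d, result_dist in conditional_probs.items()
             for r, p in result_dist.items()}
    # Marginal distribution of the test result, P(r)
    marginal_r = {}
    for (d, r), p in joint.items():
        marginal_r[r] = marginal_r.get(r, 0.0) + p
    # I(D;R) = sum over d, r of P(d,r) * log2( P(d,r) / (P(d) * P(r)) )
    return sum(p * math.log2(p / (pretest_probs[d] * marginal_r[r]))
               for (d, r), p in joint.items() if p > 0)

# Made-up two-state, two-result case: 20% pretest probability,
# sensitivity 0.9, specificity 0.8.
pretest = {"disease": 0.2, "no disease": 0.8}
conditional = {"disease":    {"positive": 0.9, "negative": 0.1},
               "no disease": {"positive": 0.2, "negative": 0.8}}
print(round(mutual_information_bits(pretest, conditional), 3))  # ~0.25 bits
```

Changing only the pretest probabilities in this example changes I(D;R), which is the dependence on clinical setting that the abstract contrasts with the AUC.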


BMJ Open ◽ 2018 ◽ Vol 8 (2) ◽ pp. e019241
Author(s): Bonnie Armstrong, Julia Spaniol, Nav Persaud

Objective: Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an ‘experience format’) would promote more accurate PPV and NPV estimates compared with a numerical format. Design: Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. Setting: The study was completed online, via Qualtrics (Provo, Utah, USA). Participants: 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael’s Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Results: Estimation accuracy was quantified by the mean absolute error (MAE; absolute difference between estimate and true predictive value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) compared with the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%; d=0.697, P<0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%; d=0.303, P=0.015). Conclusions: Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice.
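For context, the predictive values participants were asked to estimate follow from Bayes’ theorem applied to sensitivity, specificity and prevalence. Below is a minimal sketch; the test characteristics and the 10% prevalence are assumptions chosen purely for illustration and are not the parameters used in the study.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV from test characteristics and pre-test probability (Bayes' theorem)."""
    tp = sensitivity * prevalence                  # true positive mass
    fp = (1 - specificity) * (1 - prevalence)      # false positive mass
    fn = (1 - sensitivity) * prevalence            # false negative mass
    tn = specificity * (1 - prevalence)            # true negative mass
    ppv = tp / (tp + fp)   # P(disease | positive result)
    npv = tn / (tn + fn)   # P(no disease | negative result)
    return ppv, npv

# Illustrative "low specificity" test at 10% prevalence (assumed values):
ppv, npv = predictive_values(sensitivity=0.95, specificity=0.60, prevalence=0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV ~0.21, NPV ~0.99
```

Even with high sensitivity, the PPV in this example is only about 0.21 at 10% prevalence, which illustrates the kind of base-rate effect that numerical formats reportedly make hard to estimate.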

