Effectiveness of Automatic Diagnostic Test Result Feedback on Outpatient Laboratory and Radiology Testing in Veterans

Medical Care ◽  
1996 ◽  
Vol 34 (8) ◽  
pp. 857-861 ◽  
Author(s):  
DONALD R. HOLLEMAN ◽  
DAVID L. SIMEL

2005 ◽  
Vol 44 (01) ◽  
pp. 124-126 ◽  
Author(s):  
W. Lehmacher ◽  
M. Hellmich

Summary Objectives: Bayes' rule formalizes how the pre-test probability of having a condition of interest is changed by a diagnostic test result to yield the post-test probability of having the condition. To simplify this calculation, a geometric solution in the form of a ruler is presented. Methods: Using odds and the likelihood ratio of a test result in favor of having the condition of interest, Bayes' rule can be expressed succinctly as "the post-test odds equal the pre-test odds times the likelihood ratio". Taking logarithms of both sides yields an additive equation. Results: The additive log-odds equation can easily be solved geometrically. We propose a ruler made of two scales that are adjusted laterally against each other. A different, widely used solution in the form of a nomogram was published by Fagan [2]. Conclusions: Whilst use of the nomogram seems more obvious, the ruler may be easier to operate in clinical practice since no straight edge is needed for precise reading. Moreover, the ruler yields more intuitive results because it shows the change in probability due to a given test result on the same scale.
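The ruler performs exactly this log-odds addition. As a rough illustrative sketch (the pre-test probability and likelihood ratio below are invented values, not from the paper), the same computation in code:

```python
import math

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Apply Bayes' rule in odds form: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    # Additive form, as realized by the ruler's two log scales:
    # log(post-test odds) = log(pre-test odds) + log(LR)
    post_odds = math.exp(math.log(pre_odds) + math.log(likelihood_ratio))
    return post_odds / (1.0 + post_odds)

# Illustrative numbers only: a 30% pre-test probability combined with a
# positive result whose likelihood ratio is 5 yields roughly 68%.
print(post_test_probability(0.30, 5.0))  # ~0.682
```

Sliding one log scale against the other adds the two logarithms mechanically, which is why no straight edge is needed.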


2006 ◽  
Vol 14 (7S_Part_15) ◽  
pp. P823-P823
Author(s):  
Leonie N.C. Visser ◽  
Sophie Pelt ◽  
Marij A. Hillen ◽  
Femke H. Bouwman ◽  
Wiesje M. Van der Flier ◽  
...  

2016 ◽  
Vol 55 (10) ◽  
pp. 1379-1382
Author(s):  
Toshihide Izumida ◽  
Hidenao Sakata ◽  
Masahiko Nakamura ◽  
Yumiko Hayashibara ◽  
Noriko Inasaki ◽  
...  

2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Breanna Wright ◽  
Alyse Lennox ◽  
Mark L. Graber ◽  
Peter Bragge

Abstract Background: Communication failures involving test results contribute to patient harm and sentinel events. This article aims to synthesise review evidence, practice insights and patient perspectives addressing problems encountered in the communication of diagnostic test results. Methods: The rapid review identified ten systematic reviews and four narrative reviews. Five practitioner interviews identified insights into interventions and implementation, and a citizen panel with 15 participants explored the patient viewpoint. Results: The rapid review provided support for the role of technology in ensuring effective communication; behavioural interventions such as audit and feedback could be effective in changing clinician behaviour; and point-of-care tests (bedside testing) eliminate the communication breakdown problem altogether. The practice interviews highlighted transparency and clarifying the lines of responsibility as central to improving test result communication. Enabling better information sharing, implementing adequate planning and utilising technology were also identified in the practice interviews as viable strategies to improve test result communication. The citizen panel highlighted technology as critical to improving communication of test results to both health professionals and patients. Patients also highlighted the importance of having different ways of accessing test results, which is particularly pertinent when ensuring suitability for vulnerable populations. Conclusions: This paper draws together multiple perspectives on the problem of failures in diagnostic test result communication to inform appropriate interventions. Across the three studies, technology was identified as the most feasible option for closing the loop on test result communication. However, clear, consistent communication and more streamlined processes also emerged as key elements. Review registration: The protocol for the rapid review was registered with PROSPERO (CRD42018093316).


2003 ◽  
Vol 42 (03) ◽  
pp. 260-264 ◽  
Author(s):  
W. A. Benish

Summary Objectives: This paper demonstrates that diagnostic test performance can be quantified as the average amount of information the test result (R) provides about the disease state (D). Methods: A fundamental concept of information theory, mutual information, is directly applicable to this problem. This statistic quantifies the amount of information that one random variable contains about another random variable. Prior to performing a diagnostic test, R and D are random variables. Hence, their mutual information, I(D;R), is the amount of information that R provides about D. Results: I(D;R) is a function of both 1) the pretest probabilities of the disease state and 2) the set of conditional probabilities relating each possible test result to each possible disease state. The area under the receiver operating characteristic curve (AUC) is a popular measure of diagnostic test performance which, in contrast to I(D;R), is independent of the pretest probabilities; it is a function of only the set of conditional probabilities. The AUC is not a measure of diagnostic information. Conclusions: Because I(D;R) depends upon pretest probabilities, knowledge of the setting in which a diagnostic test is employed is a necessary condition for quantifying the amount of information it provides. Advantages of I(D;R) over the AUC are that it can be calculated without invoking an arbitrary curve-fitting routine, that it is applicable to situations in which multiple diagnoses are under consideration, and that it quantifies test performance in meaningful units (bits of information).
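As an illustrative sketch (the prevalence and test operating characteristics below are invented, not taken from the paper), I(D;R) can be computed directly from the pretest probabilities and the conditional probabilities of each result given each disease state:

```python
import math

def mutual_information(p_disease, p_result_given_disease):
    """I(D;R) in bits, from pretest probabilities p(d) and conditionals p(r|d).

    p_disease: list of pretest probabilities, one per disease state.
    p_result_given_disease: p_result_given_disease[d][r] = p(r | d).
    """
    n_results = len(p_result_given_disease[0])
    # Marginal probability of each test result: p(r) = sum_d p(d) * p(r|d)
    p_result = [sum(p_d * cond[r] for p_d, cond in zip(p_disease, p_result_given_disease))
                for r in range(n_results)]
    info = 0.0
    for p_d, cond in zip(p_disease, p_result_given_disease):
        for r in range(n_results):
            joint = p_d * cond[r]  # p(d, r)
            if joint > 0:
                info += joint * math.log2(joint / (p_d * p_result[r]))
    return info

# Binary example (invented numbers): 10% pretest probability of disease,
# test sensitivity 0.90 and specificity 0.80.
# Rows: disease absent, disease present; columns: negative, positive result.
print(mutual_information([0.9, 0.1], [[0.8, 0.2], [0.1, 0.9]]))  # ~0.145 bits
```

Changing only the pretest probabilities changes I(D;R) while leaving the AUC untouched, which is the paper's central contrast between the two measures.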


BMJ Open ◽  
2018 ◽  
Vol 8 (2) ◽  
pp. e019241 ◽  
Author(s):  
Bonnie Armstrong ◽  
Julia Spaniol ◽  
Nav Persaud

Objective: Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more accurate PPV and NPV estimates compared with a numerical format. Design: Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. Setting: The study was completed online, via Qualtrics (Provo, Utah, USA). Participants: 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Results: Estimation accuracy was quantified by the mean absolute error (MAE; the absolute difference between the estimate and the true predictive value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) than in the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%; d=0.697, P<0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%; d=0.303, P=0.015). Conclusions: Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice.
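The quantities participants had to estimate follow directly from Bayes' theorem. A minimal sketch (the sensitivity, specificity and prevalence values below are invented for illustration, not taken from the study's materials):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a diagnostic test via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Invented profiles echoing the three test types in the study design:
for name, sens, spec in [("gold standard", 0.99, 0.99),
                         ("low sensitivity", 0.60, 0.99),
                         ("low specificity", 0.99, 0.60)]:
    ppv, npv = predictive_values(sens, spec, prevalence=0.10)
    print(f"{name}: PPV={ppv:.2f}, NPV={npv:.2f}")
```

At 10% prevalence the low-specificity test's PPV falls to roughly 0.22, exactly the kind of counterintuitive value that clinicians working from numerical summaries tend to overestimate.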


2021 ◽  
Author(s):  
Adrian Mironas ◽  
David Jarrom ◽  
Evan Campbell ◽  
Jennifer Washington ◽  
Sabine Ettinger ◽  
...  

Abstract As COVID-19 testing is rolled out increasingly widely, the use of a range of alternative testing methods will be beneficial in ensuring testing systems are resilient and adaptable to different clinical and public health scenarios. Here, we compare and discuss the diagnostic performance of a range of molecular assays designed to detect SARS-CoV-2 infection in people with suspected COVID-19. Using findings from a systematic review of 103 studies, we categorised COVID-19 molecular assays into 12 test classes, covering point-of-care tests, various alternative RT-PCR protocols, and alternative methods such as isothermal amplification. We carried out meta-analyses to estimate the diagnostic accuracy and clinical utility of each test class. We also estimated the positive and negative predictive values of all diagnostic test classes across a range of prevalence rates. Using previously validated RT-PCR assays as a reference standard, 11 of the 12 classes showed a summary sensitivity estimate of at least 92% and a specificity estimate of at least 99%. Several diagnostic test classes were estimated to have positive predictive values of 100% throughout the investigated prevalence spectrum, whilst estimated negative predictive values were more variable and sensitive to disease prevalence. We also report the results of clinical utility models that can be used to determine the information gained from a positive and a negative test result in each class, and whether each test is more suitable for confirmation or exclusion of disease. Our analysis suggests that several tests are suitable alternatives to standard RT-PCR, and we discuss scenarios in which these could be most beneficial, such as where time to test result is critical or where resources are constrained. However, we also highlight methodological concerns with the design and conduct of many included studies, as well as likely publication bias for some test classes. Our results should be interpreted with these shortcomings in mind. Furthermore, our conclusions on test performance are limited to use in symptomatic populations: we did not identify sufficient suitable data to allow analysis of testing in asymptomatic populations.
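One common way to express the information gained from a positive and a negative test result is through likelihood ratios; this is an illustrative sketch, not necessarily the review's own clinical utility model, and it uses the abstract's summary floor estimates (sensitivity 92%, specificity 99%) rather than any specific test class:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a diagnostic test.

    LR+ = sens / (1 - spec): how much a positive result raises the odds of disease.
    LR- = (1 - sens) / spec: how much a negative result lowers them.
    """
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Summary floor estimates reported in the review: sens >= 92%, spec >= 99%.
lr_pos, lr_neg = likelihood_ratios(0.92, 0.99)
print(f"LR+ = {lr_pos:.0f}, LR- = {lr_neg:.3f}")
# LR+ = 92, LR- = 0.081: by a common rule of thumb, LR+ > 10 strongly
# supports confirmation of disease and LR- < 0.1 strongly supports exclusion.
```

Likelihood ratios are prevalence-independent, whereas predictive values are not, which is why the abstract reports NPV varying across the prevalence spectrum even for tests with strong summary accuracy.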

