The two-step Fagan's nomogram: ad hoc interpretation of a diagnostic test result without calculation

2013 · Vol 18 (4) · pp. 125-128 · Author(s): Charles G B Caraguel, Raphaël Vanderstichel

2005 · Vol 44 (01) · pp. 124-126 · Author(s): W. Lehmacher, M. Hellmich

Summary
Objectives: Bayes' rule formalizes how the pre-test probability of having a condition of interest is changed by a diagnostic test result to yield the post-test probability of having the condition. To simplify this calculation, a geometric solution in the form of a ruler is presented.
Methods: Using odds and the likelihood ratio of a test result in favor of having the condition of interest, Bayes' rule can be expressed succinctly as "the post-test odds equals the pre-test odds times the likelihood ratio". Taking logarithms of both sides yields an additive equation.
Results: The additive log-odds equation can easily be solved geometrically. We propose a ruler made of two scales that are adjusted laterally. A different, widely used solution in the form of a nomogram was published by Fagan [2].
Conclusions: Whilst use of the nomogram seems more obvious, the ruler may be easier to operate in clinical practice since no straight edge is needed for precise reading. Moreover, the ruler yields more intuitive results because it shows the change in probability due to a given test result on the same scale.
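For readers who want the arithmetic behind the ruler, the odds form of Bayes' rule is short enough to compute directly. A minimal sketch in Python; the function names and the worked example (10% pre-test probability, LR+ = 8) are ours, chosen for illustration, not taken from the paper:

```python
import math

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)   # probability -> odds
    post_odds = pre_odds * likelihood_ratio          # multiply by the likelihood ratio
    return post_odds / (1 + post_odds)               # odds -> probability

def post_test_log_odds(pre_test_prob: float, likelihood_ratio: float) -> float:
    """The additive form the ruler solves geometrically:
    log(post-test odds) = log(pre-test odds) + log(LR)."""
    pre_log_odds = math.log(pre_test_prob / (1 - pre_test_prob))
    return pre_log_odds + math.log(likelihood_ratio)

# Example: 10% pre-test probability, positive result with LR+ = 8
print(round(post_test_probability(0.10, 8), 3))  # 0.471
```

The logarithmic scales perform the addition in the second function by lateral displacement, much as a slide rule multiplies.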


2006 · Vol 14 (7S_Part_15) · pp. P823-P823 · Author(s): Leonie N.C. Visser, Sophie Pelt, Marij A. Hillen, Femke H. Bouwman, Wiesje M. Van der Flier, ...

2016 · Vol 55 (10) · pp. 1379-1382 · Author(s): Toshihide Izumida, Hidenao Sakata, Masahiko Nakamura, Yumiko Hayashibara, Noriko Inasaki, ...

2020 · Vol 20 (1) · Author(s): Breanna Wright, Alyse Lennox, Mark L. Graber, Peter Bragge

Abstract
Background: Communication failures involving test results contribute to patient harm and sentinel events. This article aims to synthesise review evidence, practice insights and patient perspectives addressing problems encountered in the communication of diagnostic test results.
Methods: The rapid review identified ten systematic reviews and four narrative reviews. Five practitioner interviews provided insights into interventions and implementation, and a citizen panel with 15 participants explored the patient viewpoint.
Results: The rapid review provided support for the role of technology in ensuring effective communication; behavioural interventions such as audit and feedback can be effective in changing clinician behaviour; and point-of-care (bedside) tests eliminate the communication breakdown problem altogether. The practice interviews highlighted transparency and clarifying the lines of responsibility as central to improving test result communication. Enabling better information sharing, implementing adequate planning and utilising technology were also identified in the practice interviews as viable strategies. The citizen panel highlighted technology as critical to improving communication of test results to both health professionals and patients. Patients also highlighted the importance of having different ways of accessing test results, which is particularly pertinent for vulnerable populations.
Conclusions: This paper draws together multiple perspectives on failures in diagnostic test result communication to inform appropriate interventions. Across the three studies, technology was identified as the most feasible option for closing the loop on test result communication. However, clear, consistent communication and more streamlined processes also emerged as key elements.
Review registration: The protocol for the rapid review was registered with PROSPERO (CRD42018093316).


2003 · Vol 42 (03) · pp. 260-264 · Author(s): W. A. Benish

Summary
Objectives: This paper demonstrates that diagnostic test performance can be quantified as the average amount of information the test result (R) provides about the disease state (D).
Methods: A fundamental concept of information theory, mutual information, is directly applicable to this problem. This statistic quantifies the amount of information that one random variable contains about another random variable. Prior to performing a diagnostic test, R and D are random variables. Hence, their mutual information, I(D;R), is the amount of information that R provides about D.
Results: I(D;R) is a function of both 1) the pretest probabilities of the disease state and 2) the set of conditional probabilities relating each possible test result to each possible disease state. The area under the receiver operating characteristic curve (AUC) is a popular measure of diagnostic test performance which, in contrast to I(D;R), is independent of the pretest probabilities; it is a function of only the set of conditional probabilities. The AUC is not a measure of diagnostic information.
Conclusions: Because I(D;R) is dependent upon pretest probabilities, knowledge of the setting in which a diagnostic test is employed is a necessary condition for quantifying the amount of information it provides. Advantages of I(D;R) over the AUC are that it can be calculated without invoking an arbitrary curve-fitting routine, it is applicable to situations in which multiple diagnoses are under consideration, and it quantifies test performance in meaningful units (bits of information).
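As a concrete illustration of the statistic, the sketch below computes I(D;R) from the pretest probabilities P(D) and the conditional probabilities P(R|D); the prevalence, sensitivity and specificity in the example are our own assumptions, not values from the paper:

```python
import numpy as np

def mutual_information(p_d: np.ndarray, p_r_given_d: np.ndarray) -> float:
    """I(D;R) in bits: p_d[i] = P(D=i), p_r_given_d[i, j] = P(R=j | D=i)."""
    p_joint = p_d[:, None] * p_r_given_d              # P(D=i, R=j)
    p_r = p_joint.sum(axis=0)                         # marginal P(R=j)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_joint * np.log2(p_joint / (p_d[:, None] * p_r[None, :]))
    return float(np.nansum(terms))                    # 0 * log 0 treated as 0

# Example: 20% prevalence; test with sensitivity 0.9 and specificity 0.8
p_d = np.array([0.8, 0.2])                            # [healthy, diseased]
p_r_given_d = np.array([[0.8, 0.2],                   # healthy:  P(neg), P(pos)
                        [0.1, 0.9]])                  # diseased: P(neg), P(pos)
print(mutual_information(p_d, p_r_given_d))           # ~0.25 bits
```

Because p_d enters the calculation directly, rerunning the example with a different prevalence changes I(D;R); this is precisely the pretest-probability dependence the paper contrasts with the AUC.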


2019 · Vol 29 (4) · pp. 1227-1242 · Author(s): Zelalem F Negeri, Joseph Beyene

Bivariate random-effects models are currently widely used to synthesize pairs of test sensitivity and specificity across studies. Inferences drawn from these models may be distorted in the presence of outlying or influential studies. Currently, subjective methods such as inspection of forest plots are used to identify outlying studies in meta-analyses of diagnostic test accuracy. We propose objective methods, grounded in solid statistical reasoning, for identifying outlying and/or influential studies. The proposed methods have been validated in a simulation study and illustrated on two published meta-analysis datasets. Our methods outperform the currently used ad hoc methods and avoid their subjectivity. The proposed methods can be used as a sensitivity analysis tool alongside current bivariate random-effects models, or as a preliminary analysis tool for robust models that accommodate outlying and/or influential studies in meta-analyses of diagnostic test accuracy.
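The abstract does not spell out the proposed statistics, so the sketch below is only a hypothetical illustration of the general idea of an objective outlier screen: flag studies whose logit-transformed (sensitivity, specificity) pair lies far from the bulk by Mahalanobis distance against a chi-square cutoff. It is not the authors' method:

```python
import numpy as np
from scipy import stats

def flag_outlying_studies(sens, spec, alpha: float = 0.05):
    """Flag studies whose (logit sens, logit spec) pair is extreme under a
    Mahalanobis-distance screen with a chi-square(2) cutoff."""
    logit = lambda p: np.log(p / (1 - p))
    x = np.column_stack([logit(np.asarray(sens)), logit(np.asarray(spec))])
    centered = x - x.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(x, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)  # squared distances
    return d2 > stats.chi2.ppf(1 - alpha, df=2)                 # True = flagged

# Ten studies; the last has an unusually low sensitivity
sens = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.87, 0.94, 0.90, 0.55]
spec = [0.80, 0.83, 0.78, 0.81, 0.79, 0.82, 0.84, 0.77, 0.80, 0.81]
print(flag_outlying_studies(sens, spec))
```

A screen like this replaces eyeballing a forest plot with a reproducible rule, which is the spirit of the objective methods the paper advocates.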


BMJ Open · 2018 · Vol 8 (2) · e019241 · Author(s): Bonnie Armstrong, Julia Spaniol, Nav Persaud

Objective: Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more accurate PPV and NPV estimates compared with a numerical format.
Design: Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design.
Setting: The study was completed online, via Qualtrics (Provo, Utah, USA).
Participants: 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency.
Results: Estimation accuracy was quantified by the mean absolute error (MAE; the absolute difference between the estimate and the true predictive value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) than in the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%; d=0.697, P<0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%; d=0.303, P=0.015).
Conclusions: Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice.
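The predictive values the participants estimated follow directly from Bayes' rule given sensitivity, specificity and prevalence. A minimal sketch (our own illustration, not the study's materials) of why PPV is easy to overestimate when prevalence is low:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """PPV = P(disease | positive) and NPV = P(no disease | negative), via Bayes."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# A test with 90% sensitivity and 90% specificity at 10% prevalence
ppv, npv = predictive_values(0.9, 0.9, 0.1)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")     # PPV = 0.50, NPV = 0.99
```

Even this accurate test yields a PPV of only 50% at 10% prevalence, the kind of counterintuitive result the study's clinicians tended to misjudge in the numerical format.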

