Preferred reporting items for journal and conference abstracts of systematic reviews and meta-analyses of diagnostic test accuracy studies (PRISMA-DTA for Abstracts): checklist, explanation, and elaboration

BMJ ◽  
2021 ◽  
pp. n265
Author(s):  
Jérémie F Cohen ◽  
Jonathan J Deeks ◽  
Lotty Hooft ◽  
Jean-Paul Salameh ◽  
Daniël A Korevaar ◽  
...  
Radiology ◽  
2018 ◽  
Vol 289 (2) ◽  
pp. 313-314 ◽  
Author(s):  
Robert A. Frank ◽  
Patrick M. Bossuyt ◽  
Matthew D. F. McInnes

2017 ◽  
Vol 6 (1) ◽  
Author(s):  
Trevor A. McGrath ◽  
Mostafa Alabousi ◽  
Becky Skidmore ◽  
Daniël A. Korevaar ◽  
Patrick M. M. Bossuyt ◽  
...  

2019 ◽  
Vol 8 (1) ◽  
Author(s):  
Christopher R. Norman ◽  
Mariska M. G. Leeflang ◽  
Raphaël Porcher ◽  
Aurélie Névéol

Abstract

Background: The large and increasing number of new studies published each year is making literature identification in systematic reviews ever more time-consuming and costly. Technological assistance has been suggested as an alternative to conventional, manual study identification to mitigate the cost, but previous literature has mainly evaluated methods in terms of recall (search sensitivity) and workload reduction. There is a need to also evaluate whether screening prioritization methods lead to the same results and conclusions as exhaustive manual screening. In this study, we examined the impact of one screening prioritization method based on active learning on sensitivity and specificity estimates in systematic reviews of diagnostic test accuracy.

Methods: We simulated the screening process in 48 Cochrane reviews of diagnostic test accuracy and re-ran 400 meta-analyses based on at least 3 studies. We compared screening prioritization (with technological assistance) and screening in randomized order (standard practice without technological assistance). We examined whether the screening could have been stopped before identifying all relevant studies while still producing reliable summary estimates. For all meta-analyses, we also examined the relationship between the number of relevant studies and the reliability of the final estimates.

Results: The main meta-analysis in each systematic review could have been performed after screening an average of 30% of the candidate articles (range 0.07% to 100%). No systematic review would have required screening more than 2,308 studies, whereas manual screening would have required screening up to 43,363 studies. Despite an average recall of 70%, the estimation error would have been 1.3% on average, compared with the average 2% estimation error expected when replicating summary estimate calculations.

Conclusion: Screening prioritization coupled with stopping criteria in diagnostic test accuracy reviews can reliably detect when the screening process has identified enough studies to perform the main meta-analysis with an accuracy within pre-specified tolerance limits. However, many of the systematic reviews did not identify enough studies for the meta-analyses to be accurate within a 2% limit even with exhaustive manual screening, i.e., under current practice.
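The screening prioritization evaluated above lends itself to a small simulation. The Python sketch below ranks a candidate pool so that relevant articles surface early and applies a stopping rule that halts screening after a fixed run of consecutive irrelevant articles. The abstract does not specify the study's actual ranking model or stopping criterion, so both the fabricated scores and the `patience` parameter are illustrative assumptions, not the authors' method.

```python
import random

def screen(order, relevant, patience=50):
    """Screen articles in the given order; stop after `patience`
    consecutive irrelevant articles (hypothetical stopping rule)."""
    found, dry_run = [], 0
    for i, article in enumerate(order, start=1):
        if article in relevant:
            found.append(article)
            dry_run = 0
        else:
            dry_run += 1
        if dry_run >= patience:
            return found, i  # studies identified, articles screened
    return found, len(order)

random.seed(0)
candidates = list(range(10_000))               # candidate articles from the search
relevant = set(random.sample(candidates, 30))  # 30 truly relevant studies

# Screening prioritization: an active-learning model would score each
# article; here we fake the ranking by boosting relevant articles.
scores = {a: random.random() + (1.0 if a in relevant else 0.0) for a in candidates}
prioritized = sorted(candidates, key=lambda a: scores[a], reverse=True)

# Randomized order: standard practice without technological assistance.
randomized = random.sample(candidates, len(candidates))

for name, order in [("prioritized", prioritized), ("randomized", randomized)]:
    found, screened = screen(order, relevant)
    recall = len(found) / len(relevant)
    print(f"{name}: screened {screened} of {len(candidates)}, recall {recall:.0%}")
```

Running the sketch shows the asymmetry the study exploits: with prioritization, the stopping rule ends screening shortly after the last relevant study is found, whereas in randomized order the same rule fires far too early and misses most relevant studies.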


Diagnosis ◽  
2018 ◽  
Vol 5 (4) ◽  
pp. 205-214 ◽  
Author(s):  
Matthew L. Rubinstein ◽  
Colleen S. Kraft ◽  
J. Scott Parrott

Abstract

Background: Diagnostic test accuracy (DTA) systematic reviews (SRs) characterize a test’s potential for diagnostic quality and safety. However, interpreting DTA measures in the context of SRs is challenging. Further, some evidence grading methods (e.g. the Centers for Disease Control and Prevention, Division of Laboratory Systems Laboratory Medicine Best Practices method) require determination of qualitative effect size ratings as a contributor to practice recommendations. This paper describes a recently developed effect size rating approach for assessing a DTA evidence base.

Methods: A likelihood ratio scatter matrix plots positive and negative likelihood ratio pairings for DTA studies. Pairings are graphed as single point estimates with confidence intervals, positioned in one of four quadrants derived from established thresholds for test clinical validity. These quadrants support defensible judgments on “substantial”, “moderate”, or “minimal” effect size ratings for each plotted study. The approach is flexible in relation to a priori determinations of the relative clinical importance of false positive and false negative test results.

Results and conclusions: This qualitative effect size rating approach was operationalized in a recent SR that assessed the effectiveness of test practices for the diagnosis of Clostridium difficile. The relevance of this approach to other methods of grading evidence, and to efforts to measure diagnostic quality and safety, is described. Limitations of the approach arise from the understanding that a diagnostic test is not an isolated element in the diagnostic process, but provides information in clinical context towards diagnostic quality and safety.
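To make the quadrant logic concrete, here is a minimal Python sketch that derives a study’s likelihood ratio pairing from sensitivity and specificity and maps it to a qualitative rating. The cut-offs used (LR+ ≥ 10, LR− ≤ 0.1) are widely cited rules of thumb for strong diagnostic evidence, not necessarily the exact thresholds of the LMBP method, and the example test characteristics are invented.

```python
def likelihood_ratios(sensitivity, specificity):
    """Point estimates of LR+ and LR- from sensitivity and specificity.
    Assumes 0 < specificity < 1 and 0 < sensitivity <= 1."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def effect_size_rating(lr_pos, lr_neg, pos_cut=10.0, neg_cut=0.1):
    """Map an (LR+, LR-) pairing to one of four quadrants and a
    qualitative rating. Cut-offs are assumed rules of thumb."""
    strong_pos = lr_pos >= pos_cut   # strongly rules in disease
    strong_neg = lr_neg <= neg_cut   # strongly rules out disease
    if strong_pos and strong_neg:
        return "substantial"   # quadrant: informative in both directions
    if strong_pos or strong_neg:
        return "moderate"      # quadrant: informative in one direction
    return "minimal"           # quadrant: limited clinical validity

# Example: a hypothetical assay with 95% sensitivity and 98% specificity.
lr_pos, lr_neg = likelihood_ratios(0.95, 0.98)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f} ->",
      effect_size_rating(lr_pos, lr_neg))
```

In the paper’s approach each pairing is plotted with its confidence interval, and a priori weighting of false positive versus false negative results can shift where the quadrant boundaries sit; the sketch reproduces only the point-estimate classification.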


Author(s):  
Jared Campbell ◽  
Miloslav Klugar ◽  
Sandrine Ding ◽  
Dennis Carmody ◽  
Sasja Hakonsen ◽  
...  
