Assessing treatment effects and publication bias across different specialties in medicine: a large empirical study of the Cochrane Database of Systematic Reviews

2020 ◽  
Author(s):  
Simon Schwab ◽  
Giuachin Kreiliger ◽  
Leonhard Held

Publication bias is a persistent problem in meta-analyses for evidence-based medicine. As a consequence, small studies with large treatment effects are more likely to be reported than studies with a null result, which causes funnel-plot asymmetry. Here, we investigated treatment effects from 57,186 studies from 1922 to 2019, comprising overall 99,129 meta-analyses and 5,557 large meta-analyses from the Cochrane Database of Systematic Reviews. Altogether 19% (95% CI 18% to 20%) of the meta-analyses demonstrated evidence for asymmetry, but only 3.9% (95% CI 3.4% to 4.4%) showed evidence for publication bias after further assessment of funnel plots. Adjusting treatment effects resulted in overall less evidence for efficacy, and treatment effects in some medical specialties or published in prestigious journals were more likely to be statistically significant. These results suggest that asymmetry arising from exaggerated effects in small studies causes greater concern than publication bias.
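The asymmetry assessment described above rests on regression tests of the funnel plot. As a minimal sketch of one such test, the following illustrates Egger's regression of the standardized effect on precision; the effect sizes and standard errors are made-up illustrative values, not data from the study, which additionally inspected funnel plots before judging publication bias.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry.
# The log odds ratios and standard errors below are illustrative values only.
import numpy as np
import statsmodels.api as sm

log_or = np.array([0.80, 0.55, 0.42, 0.30, 0.25, 0.18, 0.12, 0.10])  # study effects
se = np.array([0.45, 0.40, 0.32, 0.25, 0.20, 0.15, 0.12, 0.10])      # standard errors

# Regress the standardized effect (effect / SE) on precision (1 / SE);
# an intercept far from zero suggests small-study effects (asymmetry).
z = log_or / se
precision = 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()

print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```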

BMJ Open ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. e045942
Author(s):  
Simon Schwab ◽  
Giuachin Kreiliger ◽  
Leonhard Held

Objectives To assess the prevalence of statistically significant treatment effects, adverse events and small-study effects (when small studies report more extreme results than large studies) and publication bias (over-reporting of statistically significant results) across medical specialties. Design Large meta-epidemiological study of treatment effects from the Cochrane Database of Systematic Reviews. Methods We investigated outcomes from 57 162 studies from 1922 to 2019, and overall 98 966 meta-analyses and 5534 large meta-analyses (≥10 studies). Egger's and Harbord's tests to detect small-study effects, limit meta-analysis and Copas selection models to bias-adjust effect estimates, and generalised linear mixed models were used to analyse one of the largest collections of evidence in medicine. Results Medical specialties showed differences in the prevalence of statistically significant results of efficacy and safety outcomes. Treatment effects from primary studies published in high-ranking journals were more likely to be statistically significant (OR=1.52; 95% CI 1.32 to 1.75), while randomised controlled trials were less likely to report a statistically significant effect (OR=0.90; 95% CI 0.86 to 0.94). Altogether 19% (95% CI 18% to 20%) of the large meta-analyses showed evidence for small-study effects, but only 3.9% (95% CI 3.4% to 4.4%) showed evidence for publication bias after further assessment of funnel plots. Adjusting treatment effects resulted in overall less evidence for efficacy. Conclusions These results suggest that reporting of large treatment effects from small studies may cause greater concern than publication bias. Incentives should be created so that studies of the highest quality become more visible than studies that report more extreme results.
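For orientation, the pooled treatment effects that such meta-analyses summarise are typically obtained by inverse-variance weighting. Below is a minimal sketch of a standard DerSimonian-Laird random-effects pool on the log odds-ratio scale; the numbers are hypothetical, and the bias-adjustment methods named above (limit meta-analysis and Copas selection models) are more involved and not shown.

```python
# Minimal sketch of a DerSimonian-Laird random-effects meta-analysis.
# Per-study log odds ratios and variances below are hypothetical.
import numpy as np

log_or = np.array([0.65, 0.40, 0.35, 0.20, 0.15])  # per-study log odds ratios
var = np.array([0.20, 0.12, 0.10, 0.05, 0.04])     # per-study variances

# Fixed-effect weights and Cochran's Q
w = 1.0 / var
theta_fe = np.sum(w * log_or) / np.sum(w)
Q = np.sum(w * (log_or - theta_fe) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2
k = len(log_or)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled estimate and 95% CI, back-transformed to the OR scale
w_re = 1.0 / (var + tau2)
theta_re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = np.exp([theta_re - 1.96 * se_re, theta_re + 1.96 * se_re])
print(f"Pooled OR = {np.exp(theta_re):.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```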


Neurosurgery ◽  
2020 ◽  
Vol 87 (3) ◽  
pp. 435-441 ◽  
Author(s):  
Victor M Lu ◽  
Christopher S Graffeo ◽  
Avital Perry ◽  
Michael J Link ◽  
Fredric B Meyer ◽  
...  

Abstract Systematic reviews and meta-analyses in the neurosurgical literature have surged in popularity over the last decade. Our concern is that, without a renewed effort to critically interpret and appraise these studies as high or low quality, we run the risk of the quality and value of evidence-based medicine in neurosurgery being misinterpreted. Correspondingly, we have outlined 4 major domains to target in interpreting neurosurgical systematic reviews and meta-analyses, summarized as 4 pearls based on the lessons learned by a collaboration of clinicians and academics. The domains of (1) heterogeneity, (2) modeling, (3) certainty, and (4) bias in neurosurgical systematic reviews and meta-analyses were identified as aspects in which the authors’ approaches have changed over time to improve robustness and transparency. Examples of how and why these pearls were adapted are provided in the areas of cranial neuralgia, spine, pediatrics, and neuro-oncology to demonstrate how neurosurgical readers and writers may improve their interpretation of these domains. Incorporating these pearls into practice will empower neurosurgical academics to interpret systematic reviews and meta-analyses effectively, enhancing the quality of our evidence-based medicine literature while maintaining a critical focus on the needs of individual patients in neurosurgery.
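The first pearl, heterogeneity, is conventionally quantified with Cochran's Q and the I² statistic. The following is a minimal sketch of that calculation, using hypothetical effect sizes and variances rather than data from any review discussed in the article.

```python
# Minimal sketch of Cochran's Q and the I^2 heterogeneity statistic.
# Effect estimates and variances below are hypothetical.
import numpy as np
from scipy import stats

effects = np.array([0.50, 0.35, 0.10, 0.60, 0.05])    # per-study effect estimates
variances = np.array([0.04, 0.03, 0.05, 0.06, 0.02])  # per-study variances

w = 1.0 / variances
pooled = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled) ** 2)
df = len(effects) - 1

i2 = max(0.0, (Q - df) / Q) * 100   # % of variability beyond chance
p_het = stats.chi2.sf(Q, df)        # p-value of the heterogeneity test
print(f"Q = {Q:.2f} (p = {p_het:.3f}), I^2 = {i2:.0f}%")
```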


2012 ◽  
Vol 21 (2) ◽  
pp. 151-153 ◽  
Author(s):  
A. Cipriani ◽  
C. Barbui ◽  
C. Rizzo ◽  
G. Salanti

Standard meta-analyses are an effective tool in evidence-based medicine, but one of their main drawbacks is that they can compare only two alternative treatments at a time. Moreover, if no trials exist that directly compare two interventions, it is not possible to estimate their relative efficacy. Multiple-treatments meta-analysis is a meta-analytical technique that allows the incorporation of evidence from both direct and indirect comparisons across a network of trials of different interventions, in order to estimate summary treatment effects as comprehensively and precisely as possible.
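The indirect part of such a network rests on comparisons through a common comparator. As a minimal sketch, the following applies the Bucher adjusted indirect comparison on the log odds-ratio scale; treatments A, B, and C and all numbers are hypothetical.

```python
# Minimal sketch of an adjusted indirect comparison (Bucher method), the basic
# building block that multiple-treatments (network) meta-analysis generalises.
# Treatments A, B, C and all values below are hypothetical.
import numpy as np

# Direct comparisons on the log odds-ratio scale against a common comparator C.
d_AC, se_AC = -0.40, 0.15   # log OR of A vs C, with its standard error
d_BC, se_BC = -0.10, 0.18   # log OR of B vs C, with its standard error

# Indirect estimate of A vs B via the common comparator C.
d_AB = d_AC - d_BC
se_AB = np.sqrt(se_AC ** 2 + se_BC ** 2)

ci = np.exp([d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB])
print(f"Indirect OR (A vs B) = {np.exp(d_AB):.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```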


2008 ◽  
Vol 5;12 (5;9) ◽  
pp. 819-850
Author(s):  
Laxmaiah Manchikanti

Observational studies provide an important source of information when randomized controlled trials (RCTs) cannot or should not be undertaken, provided that the data are analyzed and interpreted with special attention to bias. Evidence-based medicine (EBM) stresses the examination of evidence from clinical research and has been described as a shift in the medical paradigm away from intuition, unsystematic clinical experience, and pathophysiologic rationale. While the concept of a hierarchy of evidence places randomized trials at the top for guiding therapy, much of medical research is observational. The reporting of observational research is often insufficiently detailed and clear, which hampers the assessment of a study's strengths and weaknesses and the generalizability of its results. Thus, in recent years, progress and innovation in health care have increasingly been measured by systematic reviews and meta-analyses. A systematic review is defined as “the application of scientific strategies that limit bias by the systematic assembly, critical appraisal, and synthesis of all relevant studies on a specific topic.” Meta-analysis is usually the final step in a systematic review. Systematic reviews and meta-analyses are labor intensive, requiring expertise in both the subject matter and review methodology; they must also follow the rules of EBM, which suggest that a formal set of rules must complement medical training and common sense if clinicians are to integrate the results of clinical research effectively. While expertise in review methods is important, expertise in the subject matter and technical components is also crucial. Even though systematic reviews and meta-analyses, specifically of RCTs, have proliferated, their quality is highly variable and, consequently, the conclusions reached from the same studies are often quite divergent. Numerous deficiencies have been described in the methodologic assessment of the quality of individual articles. Consequently, observational studies can provide an important complementary source of information, provided that the data are analyzed and interpreted in the context of the confounding bias to which they are prone. Appropriate systematic reviews of observational studies, in conjunction with RCTs, may provide the basis for eliminating a dangerous discrepancy between the experts and the evidence. Steps in conducting systematic reviews of observational studies include planning, conducting, reporting, and disseminating the results. MOOSE (Meta-analysis Of Observational Studies in Epidemiology), a proposal for reporting, contains specifications covering the background, search strategy, methods, results, discussion, and conclusion. Use of the MOOSE checklist should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision-makers. This manuscript describes systematic reviews and meta-analyses of observational studies. Authors frequently combine RCTs and observational studies in one systematic review; they should then also follow the reporting standards of the Quality of Reporting of Meta-analyses (QUOROM) statement, which likewise provides a checklist. A combined approach of QUOROM and MOOSE will improve the reporting of systematic reviews and lead to progress and innovation in health care.
Key words: Observational studies, evidence-based medicine, systematic reviews, meta-analysis, randomized trials, case-control studies, cross-sectional studies, cohort studies, confounding bias, QUOROM, MOOSE


BMJ ◽  
2020 ◽  
pp. l6802 ◽  
Author(s):  
Helene Moustgaard ◽  
Gemma L Clayton ◽  
Hayley E Jones ◽  
Isabelle Boutron ◽  
Lars Jørgensen ◽  
...  

Abstract Objectives To study the impact of blinding on estimated treatment effects, and their variation between trials; differentiating between blinding of patients, healthcare providers, and observers; detection bias and performance bias; and types of outcome (the MetaBLIND study). Design Meta-epidemiological study. Data source Cochrane Database of Systematic Reviews (2013-14). Eligibility criteria for selecting studies Meta-analyses with both blinded and non-blinded trials on any topic. Review methods Blinding status was retrieved from trial publications and authors, and results were retrieved automatically from the Cochrane Database of Systematic Reviews. Bayesian hierarchical models estimated the average ratio of odds ratios (ROR) and the increases in heterogeneity between trials for non-blinded trials (or trials of unclear status) versus blinded trials. Secondary analyses adjusted for adequacy of concealment of allocation, attrition, and trial size, and explored the association between outcome subjectivity (high, moderate, low) and average bias. An ROR lower than 1 indicated exaggerated effect estimates in trials without blinding. Results The study included 142 meta-analyses (1153 trials). The ROR for lack of blinding of patients was 0.91 (95% credible interval 0.61 to 1.34) in 18 meta-analyses with patient reported outcomes, and 0.98 (0.69 to 1.39) in 14 meta-analyses with outcomes reported by blinded observers. The ROR for lack of blinding of healthcare providers was 1.01 (0.84 to 1.19) in 29 meta-analyses with healthcare provider decision outcomes (eg, readmissions), and 0.97 (0.64 to 1.45) in 13 meta-analyses with outcomes reported by blinded patients or observers. The ROR for lack of blinding of observers was 1.01 (0.86 to 1.18) in 46 meta-analyses with subjective observer reported outcomes, with no clear impact of degree of subjectivity. Information was insufficient to determine whether lack of blinding was associated with increased heterogeneity between trials. The ROR for trials not reported as double blind versus those that were double blind was 1.02 (0.90 to 1.13) in 74 meta-analyses. Conclusion No evidence was found for an average difference in estimated treatment effect between trials with and without blinded patients, healthcare providers, or outcome assessors. These results could reflect that blinding is less important than often believed, or they could reflect limitations of meta-epidemiological studies, such as residual confounding or imprecision. At this stage, replication of this study is suggested, and blinding should remain a methodological safeguard in trials.
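As a simple orientation on the ROR scale used above, the following non-Bayesian sketch forms the ratio of a pooled odds ratio from non-blinded trials to one from blinded trials within a single meta-analysis; the MetaBLIND analysis instead pooled such ratios across meta-analyses with Bayesian hierarchical models, and all numbers here are hypothetical.

```python
# Minimal, non-Bayesian sketch of a ratio of odds ratios (ROR): pooled OR in
# non-blinded trials divided by pooled OR in blinded trials of one meta-analysis.
# The pooled ORs and standard errors of the log ORs below are hypothetical.
import numpy as np

or_nonblind, se_nonblind = 0.65, 0.12   # pooled OR and SE(log OR), non-blinded trials
or_blind, se_blind = 0.78, 0.10         # pooled OR and SE(log OR), blinded trials

log_ror = np.log(or_nonblind) - np.log(or_blind)
se_ror = np.sqrt(se_nonblind ** 2 + se_blind ** 2)

ci = np.exp([log_ror - 1.96 * se_ror, log_ror + 1.96 * se_ror])
print(f"ROR = {np.exp(log_ror):.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
# An ROR below 1 would indicate exaggerated effect estimates in the non-blinded trials.
```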


PLoS ONE ◽  
2019 ◽  
Vol 14 (12) ◽  
pp. e0226305
Author(s):  
David A. Groneberg ◽  
Stefan Rolle ◽  
Michael H. K. Bendels ◽  
Doris Klingelhöfer ◽  
Norman Schöffel ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-7 ◽  
Author(s):  
Yi Yang ◽  
Yao Ma ◽  
Lingmin Chen ◽  
Yuqi Liu ◽  
Yonggang Zhang

Objective. The objective of this study was to analyze the 100 top-cited systematic reviews/meta-analyses on diabetic research. Methods. The Science Citation Index Expanded database was searched to identify top-cited studies on diabetic research up to March 4th, 2020. Studies were analyzed using the following characteristics: citation number, publication year, country and institution of origin, authorship, topics, and journals. Results. The 100 top-cited diabetic systematic reviews/meta-analyses were published in 43 different journals, with Diabetes Care having the highest number (n=17), followed by The Journal of the American Medical Association (n=14) and Lancet (n=9). The majority of studies were published in the 2000s. The number of citations ranged from 301 to 2197. The highest number of contributions was from the USA, followed by England and Australia. The leading institution was Harvard University. The most frequent topic was risk factors (n=33), followed by comorbidity (n=27). Conclusions. The 100 top-cited systematic reviews/meta-analyses on diabetic research identify impactful authors, journals, institutions, and countries. They also provide important references for evidence-based medicine in diabetes and serve as a guide to the features of a citable paper in this field.


2015 ◽  
Vol 34 (20) ◽  
pp. 2781-2793 ◽  
Author(s):  
Michal Kicinski ◽  
David A. Springate ◽  
Evangelos Kontopantelis
