Assessing treatment effects and publication bias across different specialties in medicine: a meta-epidemiological study

BMJ Open ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. e045942
Author(s):  
Simon Schwab ◽  
Giuachin Kreiliger ◽  
Leonhard Held

Objectives To assess the prevalence of statistically significant treatment effects, adverse events and small-study effects (when small studies report more extreme results than large studies) and publication bias (over-reporting of statistically significant results) across medical specialties.
Design Large meta-epidemiological study of treatment effects from the Cochrane Database of Systematic Reviews.
Methods We investigated outcomes from 57 162 studies from 1922 to 2019, and overall 98 966 meta-analyses and 5534 large meta-analyses (≥10 studies). Egger's and Harbord's tests to detect small-study effects, limit meta-analysis and Copas selection models to bias-adjust effect estimates, and generalised linear mixed models were used to analyse one of the largest collections of evidence in medicine.
Results Medical specialties showed differences in the prevalence of statistically significant results of efficacy and safety outcomes. Treatment effects from primary studies published in high-ranking journals were more likely to be statistically significant (OR=1.52; 95% CI 1.32 to 1.75), while randomised controlled trials were less likely to report a statistically significant effect (OR=0.90; 95% CI 0.86 to 0.94). Altogether 19% (95% CI 18% to 20%) of the large meta-analyses showed evidence for small-study effects, but only 3.9% (95% CI 3.4% to 4.4%) showed evidence for publication bias after further assessment of funnel plots. Adjusting treatment effects resulted in overall less evidence for efficacy.
Conclusions These results suggest that reporting of large treatment effects from small studies may cause greater concern than publication bias. Incentives should be created so that studies of the highest quality become more visible than studies that report more extreme results.
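The small-study-effects checks cited above rest on Egger-style regression: standardized effects are regressed on precision, and an intercept far from zero signals funnel plot asymmetry. A minimal sketch of that test, not the authors' actual pipeline; the function name and the toy data below are illustrative:

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test for funnel plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision
    (1 / SE); an intercept that differs from zero suggests
    small-study effects. Returns (intercept, two-sided p value).
    """
    y = np.asarray(effects, float) / np.asarray(ses, float)  # standardized effects
    x = 1.0 / np.asarray(ses, float)                         # precision
    n = len(y)
    slope, intercept, *_ = stats.linregress(x, y)
    resid = y - (intercept + slope * x)
    s2 = np.sum(resid ** 2) / (n - 2)                        # residual variance
    se_int = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / np.sum((x - x.mean()) ** 2)))
    t = intercept / se_int
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return intercept, p
```

Harbord's test, also cited, replaces this regression with a score-based variant better suited to binary outcomes.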

2020 ◽  
Author(s):  
Simon Schwab ◽  
Giuachin Kreiliger ◽  
Leonhard Held

Publication bias is a persisting problem in meta-analyses for evidence-based medicine. As a consequence, small studies with large treatment effects are more likely to be reported than studies with a null result, which causes funnel plot asymmetry. Here, we investigated treatment effects from 57,186 studies from 1922 to 2019, and overall 99,129 meta-analyses and 5,557 large meta-analyses from the Cochrane Database of Systematic Reviews. Altogether 19% (95%-CI from 18% to 20%) of the meta-analyses demonstrated evidence for asymmetry, but only 3.9% (95%-CI from 3.4% to 4.4%) showed evidence for publication bias after further assessment of funnel plots. Adjusting treatment effects resulted in overall less evidence for efficacy, and treatment effects in some medical specialties or published in prestigious journals were more likely to be statistically significant. These results suggest that asymmetry from exaggerated effects in small studies causes greater concern than publication bias.


2014 ◽  
Vol 18 (4) ◽  
pp. 1031-1044 ◽  
Author(s):  
Spyridon N. Papageorgiou ◽  
Moschos A. Papadopoulos ◽  
Athanasios E. Athanasiou

BMJ ◽  
2020 ◽  
pp. l6802 ◽  
Author(s):  
Helene Moustgaard ◽  
Gemma L Clayton ◽  
Hayley E Jones ◽  
Isabelle Boutron ◽  
Lars Jørgensen ◽  
...  

Abstract
Objectives To study the impact of blinding on estimated treatment effects and their variation between trials, differentiating between blinding of patients, healthcare providers, and observers; detection bias and performance bias; and types of outcome (the MetaBLIND study).
Design Meta-epidemiological study.
Data source Cochrane Database of Systematic Reviews (2013-14).
Eligibility criteria for selecting studies Meta-analyses with both blinded and non-blinded trials on any topic.
Review methods Blinding status was retrieved from trial publications and authors, and results were retrieved automatically from the Cochrane Database of Systematic Reviews. Bayesian hierarchical models estimated the average ratio of odds ratios (ROR) and the increase in heterogeneity between trials for non-blinded trials (or trials of unclear status) versus blinded trials. Secondary analyses adjusted for adequacy of concealment of allocation, attrition, and trial size, and explored the association between outcome subjectivity (high, moderate, low) and average bias. An ROR lower than 1 indicates exaggerated effect estimates in trials without blinding.
Results The study included 142 meta-analyses (1153 trials). The ROR for lack of blinding of patients was 0.91 (95% credible interval 0.61 to 1.34) in 18 meta-analyses with patient reported outcomes, and 0.98 (0.69 to 1.39) in 14 meta-analyses with outcomes reported by blinded observers. The ROR for lack of blinding of healthcare providers was 1.01 (0.84 to 1.19) in 29 meta-analyses with healthcare provider decision outcomes (eg, readmissions), and 0.97 (0.64 to 1.45) in 13 meta-analyses with outcomes reported by blinded patients or observers. The ROR for lack of blinding of observers was 1.01 (0.86 to 1.18) in 46 meta-analyses with subjective observer reported outcomes, with no clear impact of degree of subjectivity. Information was insufficient to determine whether lack of blinding was associated with increased heterogeneity between trials. The ROR for trials not reported as double blind versus those that were double blind was 1.02 (0.90 to 1.13) in 74 meta-analyses.
Conclusion No evidence was found for an average difference in estimated treatment effect between trials with and without blinded patients, healthcare providers, or outcome assessors. These results could reflect that blinding is less important than often believed, or they could reflect limitations of meta-epidemiological studies, such as residual confounding or imprecision. At this stage, replication of this study is suggested, and blinding should remain a methodological safeguard in trials.


2018 ◽  
Author(s):  
Michele B. Nuijten ◽  
Marcel A. L. M. van Assen ◽  
Hilde Augusteijn ◽  
Elise Anne Victoire Crompvoets ◽  
Jelte M. Wicherts

In this meta-study, we analyzed 2,442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of .26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We conclude that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.


2020 ◽  
Vol 8 (4) ◽  
pp. 36
Author(s):  
Michèle B. Nuijten ◽  
Marcel A. L. M. van Assen ◽  
Hilde E. M. Augusteijn ◽  
Elise A. V. Crompvoets ◽  
Jelte M. Wicherts

In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.
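The power figures above follow from standard power analysis for a correlation test. As a rough check, the Fisher-z approximation below gives roughly 11% power for a small effect (r = 0.1) at the median sample size of 60; the paper's medians are computed across studies with varying sample sizes, so the other values need not match exactly. The function name is ours:

```python
import math

def correlation_power(r, n, z_alpha=1.96):
    """Approximate power of a two-sided test of H0: rho = 0 for a
    Pearson correlation of size r with n observations, via the
    Fisher z transformation (normal approximation)."""
    z_r = 0.5 * math.log((1 + r) / (1 - r))      # Fisher z of the effect
    se = 1.0 / math.sqrt(n - 3)                  # standard error of Fisher z
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    shift = z_r / se
    return phi(shift - z_alpha) + phi(-shift - z_alpha)
```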


Author(s):  
Chongliang Luo ◽  
Arielle K Marks-Anglin ◽  
Rui Duan ◽  
Lifeng Lin ◽  
Chuan Hong ◽  
...  

In meta-analyses, small-study effects (SSE) refer to the phenomenon that smaller studies show different, often larger, treatment effects than larger studies, which may lead to incorrect, commonly optimistic estimates of treatment effects. Visualization tools such as funnel plots have been widely used to investigate SSE in univariate meta-analyses. The trim and fill procedure is a non-parametric method to identify and adjust for SSE and is widely used in practice due to its simplicity. However, most visualization tools and SSE bias correction methods have focused on univariate outcomes. For a meta-analysis with multiple outcomes, the estimated number of trimmed studies by trim and fill may differ between outcomes, leading to inconsistent conclusions. In this paper, we propose a bivariate trim and fill procedure to account for SSE in a bivariate meta-analysis. Based on a recently developed visualization tool for bivariate meta-analysis, known as the galaxy plot, we develop a sensible data-driven imputation algorithm for SSE bias correction. The method relies on the symmetry of the galaxy plot and assumes that some studies are suppressed based on a linear combination of outcomes. The studies are projected along a particular direction and the univariate trim and fill method is used to estimate the number of trimmed studies. Compared to the univariate method, the proposed method yields consistent conclusions about SSE and trimmed studies. The proposed approach is validated using simulated data and is applied to a meta-analysis of efficacy and safety of antidepressant drugs.
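The univariate building block this paper extends is Duval and Tweedie's trim and fill. A compact sketch of its L0 estimator for the number of suppressed left-side studies is below (toy data; the bivariate method first projects each study's outcome pair onto a chosen direction and then applies exactly this univariate step):

```python
import numpy as np

def trim_and_fill_l0(effects, ses, max_iter=20):
    """Univariate trim-and-fill (Duval & Tweedie's L0 estimator).

    Iteratively estimates the number of studies suppressed on the
    left side of the funnel, assuming the most extreme right-side
    studies are the asymmetric ones.
    """
    y = np.asarray(effects, float)
    w = 1.0 / np.asarray(ses, float) ** 2   # inverse-variance weights
    n = len(y)
    k0 = 0
    for _ in range(max_iter):
        # centre at the fixed-effect mean of the trimmed set
        order = np.argsort(y)
        keep = order[: n - k0]
        mu = np.sum(w[keep] * y[keep]) / np.sum(w[keep])
        d = y - mu
        ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks of |deviations|
        t_n = ranks[d > 0].sum()                       # Wilcoxon-type statistic
        k0_new = max(0, int(round((4 * t_n - n * (n + 1)) / (2 * n - 1))))
        if k0_new == k0:
            break
        k0 = k0_new
    return k0
```

The "fill" step, omitted here, would mirror the k0 trimmed studies about the adjusted centre and re-estimate the pooled effect.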


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2021 ◽  
Vol 5 (1) ◽  
pp. e001129
Author(s):  
Bill Stevenson ◽  
Wubshet Tesfaye ◽  
Julia Christenson ◽  
Cynthia Mathew ◽  
Solomon Abrha ◽  
...  

Background Head lice infestation is a major public health problem around the globe. Its treatment is challenging due to product failures resulting from rapidly emerging resistance to existing treatments, incorrect treatment applications and misdiagnosis. Various head lice treatments with different mechanisms of action have been developed and explored over the years, with limited reports on systematic assessments of their efficacy and safety. This work aims to present robust evidence summarising the interventions used for head lice.
Method This is a systematic review and network meta-analysis which will be reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses statement for network meta-analyses. Selected databases, including PubMed, Embase, MEDLINE, Web of Science, CINAHL and the Cochrane Central Register of Controlled Trials, will be systematically searched for randomised controlled trials exploring head lice treatments. Searches will be limited to trials published in English from database inception until 2021. Grey literature will be identified through Open Grey, AHRQ, Grey Literature Report, Grey Matters, ClinicalTrials.gov, the WHO International Clinical Trials Registry and the International Standard Randomised Controlled Trials Number registry. Additional studies will be sought from the reference lists of included studies. Study screening, selection, data extraction and assessment of methodological quality will be undertaken by two independent reviewers, with disagreements resolved via a third reviewer. The primary outcome measure is the relative risk of cure at 7 and 14 days post-initial treatment. Secondary outcome measures may include adverse drug events, ovicidal activity, treatment compliance and acceptability, and reinfestation. Information from direct and indirect evidence will be used to generate the effect sizes (relative risk) to compare the efficacy and safety of individual head lice treatments against a common comparator (placebo and/or permethrin). Risk of bias assessment will be undertaken by two independent reviewers using the Cochrane Risk of Bias tool, and the certainty of evidence will be assessed using the Grading of Recommendations, Assessment, Development and Evaluations guideline for network meta-analysis. All quantitative analyses will be conducted using STATA V.16.
Discussion The evidence generated from this systematic review and meta-analysis is intended for use in evidence-driven treatment of head lice infestations and will be instrumental in informing health professionals, public health practitioners and policy-makers.
PROSPERO registration number CRD42017073375.
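The indirect evidence mentioned in this protocol is typically combined via Bucher's adjusted indirect comparison: on the log scale, the effect of A versus B through a common comparator C is the difference of the two direct effects, with their variances adding. A minimal sketch under that formulation (function name and numbers are illustrative, not part of the protocol):

```python
import math

def bucher_indirect(rr_ac, se_log_ac, rr_bc, se_log_bc, z=1.96):
    """Bucher adjusted indirect comparison of treatments A and B
    through a common comparator C, on the log relative-risk scale.
    se_log_* are standard errors of the log relative risks.
    Returns the indirect RR of A vs B and its 95% CI."""
    log_rr = math.log(rr_ac) - math.log(rr_bc)        # indirect log RR of A vs B
    se = math.sqrt(se_log_ac ** 2 + se_log_bc ** 2)   # variances add
    ci = (math.exp(log_rr - z * se), math.exp(log_rr + z * se))
    return math.exp(log_rr), ci
```

A full network meta-analysis generalises this idea, pooling all direct and indirect paths simultaneously.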


2012 ◽  
Vol 9 (5) ◽  
pp. 610-620 ◽  
Author(s):  
Thomas A Trikalinos ◽  
Ingram Olkin

Background Many comparative studies report results at multiple time points. Such data are correlated because they pertain to the same patients, but they are typically meta-analyzed as separate quantitative syntheses at each time point, ignoring the correlations between time points.
Purpose To develop a meta-analytic approach that estimates treatment effects at successive time points and takes account of the stochastic dependencies of those effects.
Methods We present both fixed and random effects methods for multivariate meta-analysis of effect sizes reported at multiple time points. We provide formulas for calculating the covariance (and correlations) of the effect sizes at successive time points for four common metrics (log odds ratio, log risk ratio, risk difference, and arcsine difference) based on data reported in the primary studies. We work through an example of a meta-analysis of 17 randomized trials of radiotherapy and chemotherapy versus radiotherapy alone for the postoperative treatment of patients with malignant gliomas, where in each trial survival is assessed at 6, 12, 18, and 24 months post randomization. We also provide software code for the main analyses described in the article.
Results We discuss the estimation of fixed and random effects models and explore five options for the structure of the covariance matrix of the random effects. In the example, we compare separate (univariate) meta-analyses at each of the four time points with joint analyses across all four time points using the proposed methods. Although results of univariate and multivariate analyses are generally similar in the example, there are small differences in the magnitude of the effect sizes and the corresponding standard errors. We also discuss conditional multivariate analyses, where one compares treatment effects at later time points given observed data at earlier time points.
Limitations Simulation and empirical studies are needed to clarify the gains of multivariate analyses compared with separate meta-analyses under a variety of conditions.
Conclusions Data reported at multiple time points are multivariate in nature and are efficiently analyzed using multivariate methods. The latter are an attractive alternative or complement to performing separate meta-analyses.
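The fixed-effect version of the joint analysis described here is generalized least squares: each study contributes an effect vector (one entry per time point) together with a within-study covariance matrix, and pooling weights each study by the inverse of that matrix. A minimal sketch under that standard formulation, not the authors' own code:

```python
import numpy as np

def multivariate_fixed_effect(effects, covs):
    """GLS fixed-effect multivariate meta-analysis.

    effects: list of length-p effect vectors, one per study
    covs:    list of p x p within-study covariance matrices
    Returns the pooled effect vector and its covariance matrix.
    """
    p = len(effects[0])
    wsum = np.zeros((p, p))   # sum of inverse covariance matrices
    wy = np.zeros(p)          # weighted sum of effect vectors
    for y, S in zip(effects, covs):
        w = np.linalg.inv(S)
        wsum += w
        wy += w @ np.asarray(y, float)
    pooled_cov = np.linalg.inv(wsum)
    return pooled_cov @ wy, pooled_cov
```

Ignoring the off-diagonal entries of each study's covariance matrix recovers the separate univariate analyses the paper compares against.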

