Robust metrics and sensitivity analyses for meta-analyses of heterogeneous effects

2020 ◽  
Author(s):  
Maya B Mathur ◽  
Tyler VanderWeele

We recently suggested new statistical metrics for routine reporting in random-effects meta-analyses to convey evidence strength for scientifically meaningful effects under effect heterogeneity. First, given a chosen threshold of meaningful effect size, we suggested reporting the estimated proportion of true effect sizes above this threshold. Second, we suggested reporting the proportion of effect sizes below a second, possibly symmetric, threshold in the opposite direction from the estimated mean. Our previous methods applied when the true effects are approximately normal, when the number of studies is relatively large, and when the proportion is between approximately 0.15 and 0.85. Here, we additionally describe robust methods for point estimation and inference that perform well under considerably more general conditions, as we validate in an extensive simulation study. The methods are implemented in the R package MetaUtility (function prop_stronger). Finally, we describe how the robust methods can be applied to conduct sensitivity analyses for unmeasured confounding in meta-analyses.
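
For instance, the robust ("calibrated") estimate and its confidence interval might be obtained as follows. This is a sketch only: the argument names follow our reading of the MetaUtility documentation and may differ across package versions (see ?prop_stronger).

```r
# Hedged sketch: robust ("calibrated") point estimate and CI for the
# proportion of true effects above a threshold. Argument names follow the
# MetaUtility documentation and may vary by version.
library(MetaUtility)

# dat: one row per study, with point estimates in column "yi"
# and variances in column "vi"
prop_stronger(q = 0.2,                        # threshold of meaningful effect size
              tail = "above",                 # proportion of true effects above q
              estimate.method = "calibrated", # robust estimation
              ci.method = "calibrated",       # bootstrapped CI
              dat = dat,
              yi.name = "yi", vi.name = "vi")
```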

2018 ◽  
Author(s):  
Maya B Mathur ◽  
Tyler VanderWeele

We provide two simple metrics that could be reported routinely in random-effects meta-analyses to convey evidence strength for scientifically meaningful effects under effect heterogeneity (i.e., a nonzero estimated variance of the true effect distribution). First, given a chosen threshold of meaningful effect size, meta-analyses could report the estimated proportion of true effect sizes above this threshold. Second, meta-analyses could estimate the proportion of effect sizes below a second, possibly symmetric, threshold in the opposite direction from the estimated mean. These metrics could help identify whether: (1) there are few effects of scientifically meaningful size despite a "statistically significant" pooled point estimate; (2) there are some large effects despite an apparently null point estimate; or (3) strong effects in the direction opposite the pooled estimate also occur regularly (and thus potential effect modifiers should be examined). These metrics should be presented with confidence intervals, which can be obtained analytically or, under weaker assumptions, using bias-corrected and accelerated (BCa) bootstrapping. Additionally, these metrics inform relative comparisons of evidence strength across related meta-analyses. We illustrate with applied examples and provide an R package to compute the metrics and confidence intervals.
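
Under the paper's normal model for the true effects, the first metric is simply an upper-tail probability of the estimated true-effect distribution; a minimal sketch in R, where all numeric values are hypothetical:

```r
# Parametric metrics under the normal model for the true effects:
# P(theta > q) = 1 - pnorm((q - mu)/tau). All values below are hypothetical.
mu  <- 0.35  # pooled point estimate from a random-effects model
tau <- 0.25  # estimated SD of the true effects (sqrt of tau^2)
q   <- 0.20  # chosen threshold for a scientifically meaningful effect

p_above <- 1 - pnorm((q - mu) / tau)  # proportion of true effects above q
p_below <- pnorm((-q - mu) / tau)     # proportion below the symmetric
                                      # threshold in the opposite direction
```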


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, meta-analyses in ecology and evolution routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution and found that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. In addition to showing overall mean effects and CIs from meta-analyses/regressions, orchard plots also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall, and therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of the underlying effect sizes also allows the user to spot any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchaRd, which includes several functions for visualizing meta-analytic data using forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated for the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes regardless of the field of study.
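
As an illustration of how such a plot might be produced — a sketch only: the variable names group_var and study_ID are hypothetical, and the orchard_plot() arguments follow our reading of the orchaRd documentation and may differ across versions:

```r
# Hedged sketch of an orchard plot; variable names (group_var, study_ID)
# are hypothetical and orchard_plot() arguments may differ by version.
library(metafor)
library(orchaRd)

# Multilevel meta-regression with a categorical moderator and a
# study-level random effect
res <- rma.mv(yi, vi, mods = ~ group_var - 1,
              random = ~ 1 | study_ID, data = dat)

# Mean effects with CIs, 95% prediction intervals, and precision-scaled
# individual effect sizes, per moderator level
orchard_plot(res, mod = "group_var", group = "study_ID",
             xlab = "Standardised mean difference")
```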


2021 ◽  
Author(s):  
Maya B Mathur ◽  
Tyler VanderWeele

Meta-analyses contribute critically to cumulative science, but they can produce misleading conclusions if their constituent primary studies are biased, for example by unmeasured confounding in nonrandomized studies. We provide practical guidance on how meta-analysts can address confounding and other biases that affect studies' internal validity, focusing primarily on sensitivity analyses that help quantify how biased the meta-analysis estimates might be. We review a number of sensitivity analysis methods to do so, especially recent developments that are straightforward to implement and interpret and that use somewhat less stringent statistical assumptions than earlier methods. We give recommendations for how these methods could be applied in practice and illustrate using a previously published meta-analysis. Sensitivity analyses can provide informative quantitative summaries of evidence strength, and we suggest reporting them routinely in meta-analyses of potentially biased studies. This recommendation in no way diminishes the importance of defining study eligibility criteria that reduce bias and of characterizing studies’ risks of bias qualitatively.
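
One implementation of these sensitivity analyses is the confounded_meta() function in the authors' EValue R package. The sketch below is hedged: the argument set (including the method option) has changed across package versions, and all numeric inputs are hypothetical.

```r
# Hedged sketch using EValue::confounded_meta(); argument names follow our
# reading of the package documentation and may differ across versions.
# All numeric inputs are hypothetical.
library(EValue)

confounded_meta(method = "parametric",
                q    = log(1.1),  # threshold: true RR > 1.1 (log scale)
                r    = 0.10,      # at most 10% of true effects may exceed q
                muB  = log(1.5),  # hypothesized mean bias factor (log scale)
                sigB = 0,         # bias assumed homogeneous across studies
                yr   = 0.30,      # pooled log-RR from the meta-analysis
                vyr  = 0.01,      # estimated variance of the pooled estimate
                t2   = 0.09,      # heterogeneity estimate (tau^2)
                vt2  = 0.001)     # estimated variance of tau^2
```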


2021 ◽  
Author(s):  
Loretta Gasparini ◽  
Sho Tsuji ◽  
Christina Bergmann

Meta-analyses provide researchers with an overview of the body of evidence on a topic, with quantified estimates of effect sizes and of the role of moderators, weighting studies according to their precision. We provide a guide for conducting a transparent and reproducible meta-analysis in the field of developmental psychology within the framework of the MetaLab platform, in 10 steps: 1) Choose a topic for your meta-analysis, 2) Formulate your research question and specify inclusion criteria, 3) Preregister and carefully document all stages of your meta-analysis, 4) Conduct the literature search, 5) Collect and screen records, 6) Extract data from eligible studies, 7) Read the data into analysis software and compute effect sizes, 8) Create meta-analytic models to assess the strength of the effect and investigate possible moderators, 9) Visualize your data, 10) Write up and promote your meta-analysis. Meta-analyses can inform future studies through power calculations, by identifying robust methods, and by exposing research gaps. By adding a new meta-analysis to MetaLab, datasets across multiple topics of developmental psychology can be synthesized, and each dataset can be maintained as a living, community-augmented meta-analysis to which researchers add new data, allowing for a cumulative approach to evidence synthesis.
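
As a minimal sketch of steps 7 and 8, using the metafor package (MetaLab provides its own scripts; the file name and column names here are hypothetical placeholders for the extracted data):

```r
# Steps 7-8, sketched with metafor; file and column names are hypothetical.
library(metafor)

dat <- read.csv("extracted_data.csv")  # step 7: read in the coded studies

# Compute standardized mean differences (Hedges' g) from extracted
# means, SDs, and sample sizes
dat <- escalc(measure = "SMD",
              m1i = mean_exp,  sd1i = sd_exp,  n1i = n_exp,
              m2i = mean_ctrl, sd2i = sd_ctrl, n2i = n_ctrl,
              data = dat)

# Step 8: random-effects model, then a moderator (meta-regression) model
res     <- rma(yi, vi, data = dat, method = "REML")
res_mod <- rma(yi, vi, mods = ~ mean_age, data = dat)  # hypothetical moderator
summary(res)
```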


2019 ◽  
Vol 22 (4) ◽  
pp. 153-160 ◽  
Author(s):  
Sara Balduzzi ◽  
Gerta Rücker ◽  
Guido Schwarzer

Objective: Meta-analysis is of fundamental importance for obtaining an unbiased assessment of the available evidence. In general, the use of meta-analysis has been increasing over the last three decades, with mental health as a major research topic. It is therefore essential to understand its methodology well and to interpret its results correctly. In this publication, we describe how to perform a meta-analysis with the freely available statistical software environment R, using a working example taken from the field of mental health. Methods: The R package meta is used to conduct a standard meta-analysis. Sensitivity analyses for missing binary outcome data and potential selection bias are conducted with the R package metasens. All essential R commands are provided and clearly described to conduct and report analyses. Results: The working example considers a binary outcome: we show how to conduct fixed effect and random effects meta-analyses and a subgroup analysis, produce forest and funnel plots, and test and adjust for funnel plot asymmetry. All these steps work similarly for other outcome types. Conclusions: R is a powerful and flexible tool for conducting meta-analyses. This publication gives a brief glimpse into the topic and provides directions to more advanced meta-analysis methods available in R.
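
A condensed sketch of that workflow for a binary outcome (function and argument names follow the meta and metasens documentation and may differ across package versions):

```r
# Condensed sketch of the workflow; argument names may vary by meta version.
library(meta)
library(metasens)

# dat: one row per trial, with event counts and sample sizes per arm
m <- metabin(event.e, n.e, event.c, n.c, data = dat, studlab = study,
             sm = "OR", method = "MH")  # fixed effect and random effects models

forest(m)                            # forest plot
funnel(m)                            # funnel plot
metabias(m, method.bias = "linreg")  # Egger-type test for asymmetry
trimfill(m)                          # adjust for asymmetry (trim-and-fill)
limitmeta(m)                         # metasens: limit meta-analysis adjustment
# metasens also offers metamiss() for missing binary outcome data
# and copas() for modeling selection bias
```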


2019 ◽  
Author(s):  
Maya B Mathur ◽  
Tyler VanderWeele

We propose sensitivity analyses for publication bias in meta-analyses. We consider a publication process such that "statistically significant" results are more likely to be published than negative or "nonsignificant" results by an unknown ratio, eta. Our proposed methods also accommodate some plausible forms of selection based on a study's standard error. Using inverse-probability weighting and robust estimation that accommodates non-normal population effects, small meta-analyses, and clustering, we develop sensitivity analyses that enable statements such as: "For publication bias to shift the observed point estimate to the null, 'significant' results would need to be at least 30-fold more likely to be published than negative or 'nonsignificant' results." Comparable statements can be made regarding shifting to a chosen non-null value or shifting the confidence interval. To aid interpretation, we describe empirical benchmarks for plausible values of eta across disciplines. We show that a worst-case meta-analytic point estimate for maximal publication bias under the selection model can be obtained simply by conducting a standard meta-analysis of only the negative and "nonsignificant" studies; this method sometimes indicates that no amount of such publication bias could "explain away" the results. We illustrate the proposed methods using real-life meta-analyses and provide an R package, PublicationBias.
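
The worst-case bound has a simple form that can be reproduced with standard tools: meta-analyze only the negative and "nonsignificant" studies. A sketch with metafor follows (the PublicationBias package automates this along with the eta-based corrections); dat is assumed to hold point estimates yi and variances vi:

```r
# Worst-case point estimate under maximal publication bias: a standard
# meta-analysis restricted to the negative and "nonsignificant"
# ("nonaffirmative") studies. dat is assumed to have columns yi and vi.
library(metafor)

p <- 2 * (1 - pnorm(abs(dat$yi) / sqrt(dat$vi)))  # two-sided p-values
affirmative <- (p < 0.05) & (dat$yi > 0)          # "significant" and positive

worst <- rma(yi, vi, data = dat[!affirmative, ])  # nonaffirmative studies only
worst$beta  # if still away from the null, no eta can "explain away" the result
```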


1998 ◽  
Vol 172 (3) ◽  
pp. 227-231 ◽  
Author(s):  
Joanna Moncrieff ◽  
Simon Wessely ◽  
Rebecca Hardy

Background: Unblinding effects may introduce bias into clinical trials. The use of active placebos to mimic the side-effects of medication may therefore produce more rigorous evidence on the efficacy of antidepressants. Method: Trials comparing antidepressants with active placebos were located. A standard measure of effect was calculated for each trial and weighted pooled estimates were obtained. Heterogeneity was examined and sensitivity analyses performed. A subgroup analysis of in-patient and out-patient trials was conducted. Results: Only two of the nine studies examined produced effect sizes showing a consistent, significant difference in favour of the active drug. Combining all studies produced pooled effect size estimates of between 0.41 (0.27–0.56) and 0.46 (0.31–0.60), with high heterogeneity due to one strongly positive trial. Sensitivity analyses excluding this and one other trial reduced the pooled effect to between 0.21 (0.03–0.38) and 0.27 (0.10–0.45). Conclusions: Meta-analysis is very sensitive to decisions about exclusions. Previous general meta-analyses have found combined effect sizes in the range 0.4–0.8. The more conservative estimates produced here suggest that unblinding effects may inflate the apparent efficacy of antidepressants in trials using inert placebos.
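
The conclusion that meta-analysis is very sensitive to exclusion decisions can be probed systematically with a leave-one-out analysis; a sketch with metafor, assuming the trials' standardized effects and variances are held in dat:

```r
# Leave-one-out sensitivity analysis: refit the random-effects model
# omitting each trial in turn. dat is assumed to hold columns yi and vi.
library(metafor)

res <- rma(yi, vi, data = dat)
leave1out(res)  # pooled estimate, CI, and heterogeneity without each trial
```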


2020 ◽  
Vol 46 (2-3) ◽  
pp. 343-354 ◽  
Author(s):  
Timothy R Levine ◽  
René Weber

We examined the interplay between how communication researchers use meta-analyses to make claims and the prevalence, causes, and implications of unresolved heterogeneous findings. Heterogeneous findings can result from substantive moderators, methodological artifacts, and combined construct invalidity. An informal content analysis of meta-analyses published in four elite communication journals revealed that unresolved between-study effect heterogeneity was ubiquitous. Communication researchers mainly focus on computing mean effect sizes, to the exclusion of how effect sizes in primary studies are distributed and of what might be driving effect size distributions. We offer four recommendations for future meta-analyses. Researchers are advised to be more diligent and sophisticated in testing for heterogeneity. We encourage greater description of how effects are distributed, coupled with greater reliance on graphical displays. We counsel greater recognition of combined construct invalidity and advocate for content expertise. Finally, we endorse greater awareness of, and improved tests for, publication bias and questionable research practices.
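
The first two recommendations — testing for heterogeneity and describing how effects are distributed — map onto standard random-effects outputs; a sketch with metafor, assuming effect sizes yi and variances vi in dat:

```r
# Quantifying and describing heterogeneity, sketched with metafor.
library(metafor)

res <- rma(yi, vi, data = dat)
c(Q = res$QE, p = res$QEp)       # Cochran's Q test for heterogeneity
c(I2 = res$I2, tau2 = res$tau2)  # share and amount of between-study variance
predict(res)   # includes the prediction interval for the true effect
               # in a new study, i.e., how effects are distributed
confint(res)   # confidence intervals for tau^2 and I^2
```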


Cephalalgia ◽  
2015 ◽  
Vol 36 (5) ◽  
pp. 474-492 ◽  
Author(s):  
Kerstin Luedtke ◽  
Angie Allers ◽  
Laura H Schulte ◽  
Arne May

Aim: We aimed to conduct a systematic review evaluating the effectiveness of interventions used by physiotherapists on the intensity, frequency and duration of migraine, tension-type headache (TTH) and cervicogenic headache (CGH). Methods: We performed a systematic search of electronic databases and a hand search for controlled trials. A risk of bias analysis was conducted using the Cochrane risk of bias tool (RoB). Meta-analyses present the combined mean effects; sensitivity analyses evaluate the influence of methodological quality. Results: Of 77 eligible trials, 26 were included in the RoB assessment and 20 in the meta-analyses. Nineteen of the 26 trials had a high RoB in more than one domain. Meta-analyses of all trials indicated a reduction in pain intensity for TTH (p < 0.0001; mean reduction −1.11 on a 0–10 visual analog scale (VAS); 95% CI −1.64 to −0.57) and CGH (p = 0.0002; mean reduction −2.52 on a 0–10 VAS; 95% CI −3.86 to −1.19), a reduction in CGH frequency (p < 0.00001; mean reduction −1.34 days per month; 95% CI −1.40 to −1.28), and reductions in the duration of migraine (p = 0.0001; mean reduction −22.39 hours without relief; 95% CI −33.90 to −10.88) and CGH (p < 0.00001; mean reduction −1.68 hours per day; 95% CI −2.09 to −1.26). Excluding high-RoB trials increased the effect sizes and additionally yielded statistically significant reductions in migraine pain intensity (p < 0.00001; mean reduction −1.94 on a 0–10 VAS; 95% CI −2.61 to −1.27) and frequency (p < 0.00001; mean reduction −9.07 days per month; 95% CI −9.52 to −8.62). Discussion: Results suggest a statistically significant reduction in the intensity, frequency and duration of migraine, TTH and CGH. The pain reduction and the reduction in CGH frequency do not reach clinically relevant effect sizes. Small sample sizes, inadequate use of headache classification, and other methodological shortcomings reduce confidence in these results. Methodologically sound randomized controlled trials with adequate sample sizes are required to determine whether, and which, physiotherapy approaches are effective. According to Grading of Recommendations Assessment, Development and Evaluation (GRADE), the current level of evidence is low.

