Visual Inference for the Funnel Plot in Meta-Analysis

2019 ◽  
Vol 227 (1) ◽  
pp. 83-89 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Abstract. The funnel plot is widely used in meta-analyses to assess potential publication bias. However, experimental evidence suggests that informal, merely visual inspection of funnel plots is frequently prone to incorrect conclusions, and formal statistical tests (Egger regression and others) focus entirely on funnel plot asymmetry. We suggest routinely using the visual inference framework with funnel plots, including for didactic purposes. In this framework, the type I error is controlled by design, while the explorative, holistic, and open nature of visual graph inspection is preserved. Specifically, the funnel plot of the actually observed data is presented simultaneously, in a lineup, with null funnel plots showing data simulated under the null hypothesis. Only when the real-data funnel plot can be identified among all the funnel plots presented might funnel plot-based conclusions be warranted. Software to implement visual funnel plot inference is provided via a tailored R function.
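The authors provide their own tailored R function for this lineup protocol; the code below is only a minimal, hypothetical sketch of the general idea, using the metafor package, in which the observed funnel plot is hidden among panels of data simulated under the fitted random-effects model. The function name funnel_lineup and the simulation scheme are illustrative assumptions, not the published implementation.

# Minimal sketch (not the authors' tailored function): the observed funnel plot
# is hidden, in a lineup, among panels simulated under the fitted RE model.
library(metafor)

funnel_lineup <- function(yi, vi, n_panels = 20, seed = 1) {  # illustrative name
  set.seed(seed)
  k   <- length(yi)
  fit <- rma(yi, vi, method = "REML")     # random-effects fit to the real data
  pos <- sample(n_panels, 1)              # random position of the real plot

  op <- par(mfrow = c(4, 5), mar = c(2, 2, 1, 1))
  for (i in seq_len(n_panels)) {
    if (i == pos) {
      funnel(fit)                         # the observed data
    } else {
      # null data: same sampling variances, effects drawn from the fitted model
      yi_null <- rnorm(k, mean = coef(fit), sd = sqrt(vi + fit$tau2))
      funnel(rma(yi_null, vi, method = "REML"))
    }
  }
  par(op)
  invisible(pos)                          # reveal the position of the real plot afterwards
}

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)             # example data shipped with metafor
pos <- funnel_lineup(dat$yi, dat$vi)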

2020 ◽  
Vol 228 (1) ◽  
pp. 43-49 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Abstract. Currently, dedicated graphical displays to depict study-level statistical power in the context of meta-analysis are unavailable. Here, we introduce the sunset (power-enhanced) funnel plot to visualize this relevant information for assessing the credibility, or evidential value, of a set of studies. The sunset funnel plot highlights the statistical power of primary studies to detect an underlying true effect of interest in the well-known funnel display with color-coded power regions and a second power axis. This graphical display allows meta-analysts to incorporate power considerations into classic funnel plot assessments of small-study effects. Nominally significant, but low-powered, studies might be seen as less credible and as more likely to be affected by selective reporting. We exemplify the application of the sunset funnel plot with two published meta-analyses from medicine and psychology. Software to create this variation of the funnel plot is provided via a tailored R function. In conclusion, the sunset (power-enhanced) funnel plot is a novel and useful graphical display to critically examine and to present study-level power in the context of meta-analysis.
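The authors distribute a tailored R function for this display; as an illustration of the underlying computation only, the following hypothetical sketch derives each study's two-sided power against an assumed true effect from its standard error and colors a basic funnel-style plot accordingly. The color cut-points and the choice of the pooled estimate as the assumed true effect are assumptions for illustration, not the published display.

library(metafor)

# Two-sided power of a Wald z test to detect an assumed true effect delta,
# given a study's standard error se:
study_power <- function(se, delta, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2)
  pnorm(delta / se - z) + pnorm(-delta / se - z)
}

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
fit <- rma(yi, vi, data = dat, method = "REML")
pow <- study_power(sqrt(dat$vi), delta = coef(fit))  # power against the pooled effect

# Funnel-style scatter with points coloured by study-level power:
cols <- as.character(cut(pow, c(0, 1/3, 2/3, 1),
                         labels = c("red", "orange", "darkgreen")))
plot(dat$yi, sqrt(dat$vi), ylim = rev(range(sqrt(dat$vi))), pch = 19, col = cols,
     xlab = "Effect size (log RR)", ylab = "Standard error")
abline(v = coef(fit), lty = 2)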


2021 ◽  
Author(s):  
Megha Joshi ◽  
James E Pustejovsky ◽  
S. Natasha Beretvas

The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some meta-analyses also include multiple studies conducted by the same lab or investigator, creating another potential source of dependence. An increasingly popular method to handle dependence is robust variance estimation (RVE), but this method can result in inflated Type I error rates when the number of studies is small. Small-sample correction methods for RVE have been shown to control Type I error rates adequately but may be overly conservative, especially for tests of multiple-contrast hypotheses. We evaluated an alternative method for handling dependence, cluster wild bootstrapping, which has been examined in the econometrics literature but not in the context of meta-analysis. Results from two simulation studies indicate that cluster wild bootstrapping maintains adequate Type I error rates and provides more power than extant small-sample correction methods, particularly for multiple-contrast hypothesis tests. We recommend using cluster wild bootstrapping to conduct hypothesis tests for meta-analyses with a small number of studies. We have also created an R package that implements such tests.
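The authors' R package implements these tests; the sketch below is only a schematic, hypothetical illustration of cluster wild bootstrapping for a single numeric moderator, using cluster-level Rademacher weights on null-model residuals and a simple CR0 cluster-robust t statistic. It deliberately ignores the working covariance structure used in full RVE implementations, and the function name cwb_test and all arguments are assumptions for illustration.

# Hypothetical sketch of cluster wild bootstrapping for a meta-regression test
# of a single moderator. yi: effect sizes, vi: sampling variances,
# x: numeric moderator, study: cluster id.
cwb_test <- function(yi, vi, x, study, B = 1999, seed = 1) {
  set.seed(seed)
  w <- 1 / vi

  # CR0 cluster-robust t statistic for the slope in a weighted regression
  robust_t <- function(y) {
    fit <- lm(y ~ x, weights = w)
    X   <- model.matrix(fit) * sqrt(w)
    u   <- residuals(fit) * sqrt(w)
    bread <- solve(crossprod(X))
    meat  <- Reduce(`+`, lapply(split(seq_along(y), study), function(idx) {
      s <- crossprod(X[idx, , drop = FALSE], u[idx])
      tcrossprod(s)
    }))
    V <- bread %*% meat %*% bread
    coef(fit)[2] / sqrt(V[2, 2])
  }

  t_obs <- robust_t(yi)

  # Residuals from the null model (moderator omitted), sign-flipped per cluster
  fit0 <- lm(yi ~ 1, weights = w)
  e0   <- residuals(fit0)
  mu0  <- fitted(fit0)
  t_boot <- replicate(B, {
    eta <- sample(c(-1, 1), length(unique(study)), replace = TRUE)
    robust_t(mu0 + e0 * eta[match(study, unique(study))])
  })

  mean(abs(t_boot) >= abs(t_obs))   # two-sided bootstrap p-value
}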


2021 ◽  
Vol 18 (1) ◽  
Author(s):  
Lawrence M. Paul

Abstract. Background: The use of meta-analysis to aggregate the results of multiple studies has increased dramatically over the last 40 years. For homogeneous meta-analysis, the Mantel–Haenszel technique has typically been utilized. In such meta-analyses, the effect size across the contributing studies of the meta-analysis differs only by statistical error. If homogeneity cannot be assumed or established, the most popular technique developed to date is the inverse-variance DerSimonian and Laird (DL) technique (DerSimonian and Laird, in Control Clin Trials 7(3):177–88, 1986). However, both of these techniques are based on large-sample, asymptotic assumptions; at best, they are approximations, especially when the number of cases observed in any cell of the corresponding contingency tables is small. Results: This research develops an exact, non-parametric test for evaluating statistical significance, and a related method for estimating effect size, in the meta-analysis of k 2 × 2 tables for any level of heterogeneity, as an alternative to the asymptotic techniques. Monte Carlo simulations show that even for large values of heterogeneity, the Enhanced Bernoulli Technique (EBT) is far superior to the DL technique at maintaining the pre-specified level of Type I error. A fully tested implementation in the R statistical language is freely available from the author. In addition, a second related exact test for estimating the effect size was developed and is also freely available. Conclusions: This research has developed two exact tests for the meta-analysis of dichotomous, categorical data. The EBT was strongly superior to the DL technique in maintaining a pre-specified level of Type I error, even at extremely high levels of heterogeneity; as shown, the DL technique exhibited many large violations of this level. Given the various biases towards finding statistical significance prevalent in epidemiology today, a strong focus on maintaining a pre-specified level of Type I error would seem critical. In addition, a related exact method for estimating the effect size was developed.
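The EBT implementation is available from the author and is not reproduced here. Purely as a point of reference for the comparison described, the following is a minimal sketch of the standard DerSimonian–Laird random-effects analysis of k 2 × 2 tables in R with the metafor package; the use of log odds ratios and of metafor's built-in dat.bcg data is an illustrative choice.

library(metafor)

dat <- escalc(measure = "OR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)                 # 2 x 2 tables -> log odds ratios
fit <- rma(yi, vi, data = dat, method = "DL") # DL moment estimator of tau^2
summary(fit)                                  # pooled log OR, tau^2, Cochran's Q
predict(fit, transf = exp)                    # pooled OR with 95% CI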


2014 ◽  
Vol 53 (01) ◽  
pp. 54-61 ◽  
Author(s):  
M. Preuß ◽  
A. Ziegler

Summary. Background: The random-effects (RE) model is the standard choice for meta-analysis in the presence of heterogeneity, and the standard RE method is the DerSimonian and Laird (DSL) approach, where the degree of heterogeneity is estimated using a moment estimator. The DSL approach does not take into account the variability of the estimated heterogeneity variance in the estimation of Cochran's Q. Biggerstaff and Jackson derived the exact cumulative distribution function (CDF) of Q to account for the variability of the τ² estimate. Objectives: The first objective is to show that the explicit numerical computation of the density function of Cochran's Q is not required. The second objective is to develop an R package that makes it easy to calculate both the classical RE method and the new exact RE method. Methods: The novel approach was validated in extensive simulation studies. The different approaches used in the simulation studies, including the exact weights RE meta-analysis and the I² and τ² estimates together with their confidence intervals, were implemented in the R package metaxa. Results: The comparison with the classical DSL method showed that the exact weights RE meta-analysis kept the nominal type I error level better and that it had greater power in the case of many small studies and a single large study. The Hedges RE approach had inflated type I error levels. Another advantage of the exact weights RE meta-analysis is that an exact confidence interval for τ² is readily available. The exact weights RE approach had greater power in the case of few studies, while the restricted maximum likelihood (REML) approach was superior in the case of a large number of studies. Differences between the exact weights RE meta-analysis, REML, and the DSL approach were also observed in re-analyses of real data sets, where the conclusions of these methods differed. Conclusions: The simplification does not require the calculation of the density of Cochran's Q, but only the calculation of the cumulative distribution function, whereas the previous approach required the computation of both the density and the cumulative distribution function. It thus reduces computation time, improves numerical stability, and reduces the approximation error in meta-analysis. The different approaches, including the exact weights RE meta-analysis and the I² and τ² estimates together with their confidence intervals, are available in the R package metaxa, which can be used in applications.
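The exact-weights method itself, which works with the CDF of Cochran's Q, is implemented in the authors' metaxa package and is not reproduced here. As background for the quantities discussed, the following minimal R sketch computes Cochran's Q, the DerSimonian–Laird moment estimator of τ², and I² from effect sizes and their sampling variances; the helper name is an illustrative assumption, not the package API.

# Cochran's Q, the DSL tau^2 estimator, and I^2 from effect sizes yi and
# sampling variances vi (illustrative helper):
cochran_q_dl <- function(yi, vi) {
  wi    <- 1 / vi                           # inverse-variance (fixed-effect) weights
  mu_fe <- sum(wi * yi) / sum(wi)           # fixed-effect pooled estimate
  Q     <- sum(wi * (yi - mu_fe)^2)         # Cochran's Q
  k     <- length(yi)
  tau2  <- max(0, (Q - (k - 1)) / (sum(wi) - sum(wi^2) / sum(wi)))  # DSL estimator
  I2    <- max(0, 100 * (Q - (k - 1)) / Q)  # I^2 in percent
  c(Q = Q, tau2 = tau2, I2 = I2)
}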


2015 ◽  
Vol 26 (3) ◽  
pp. 1500-1518 ◽  
Author(s):  
Annamaria Guolo ◽  
Cristiano Varin

This paper investigates the impact of the number of studies on meta-analysis and meta-regression within the random-effects model framework. It is frequently neglected that reliable inference in random-effects models requires a substantial number of studies to be included in the meta-analysis. Several authors warn about the risk of inaccurate results from the traditional DerSimonian and Laird approach, especially in the common case of meta-analyses involving a limited number of studies. This paper presents a selection of likelihood and non-likelihood methods for inference in meta-analysis that have been proposed to overcome the limitations of the DerSimonian and Laird procedure, with a focus on the effect of the number of studies. The applicability and the performance of the methods are investigated in terms of Type I error rates and empirical power to detect effects, under scenarios of practical interest. Simulation studies and applications to real meta-analyses highlight that it is not possible to identify an approach that is uniformly superior to the alternatives. The overall recommendation is to avoid the DerSimonian and Laird method when the number of studies in the meta-analysis is modest and to prefer a more comprehensive procedure that compares alternative inferential approaches. R code for meta-analysis according to all of the inferential methods examined in the paper is provided.
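As one concrete illustration of comparing alternative inferential approaches (not a reproduction of the paper's own code), the following sketch contrasts the classic DerSimonian–Laird fit with a REML fit using the Knapp–Hartung adjustment in the metafor package; metafor's built-in dat.bcg data are used purely for illustration.

library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

fit_dl   <- rma(yi, vi, data = dat, method = "DL")                   # DL, Wald z test
fit_knha <- rma(yi, vi, data = dat, method = "REML", test = "knha")  # REML, Knapp-Hartung

# Point estimates and 95% confidence intervals under the two approaches:
rbind(DL   = c(est = unname(coef(fit_dl)),   lo = fit_dl$ci.lb,   hi = fit_dl$ci.ub),
      KNHA = c(est = unname(coef(fit_knha)), lo = fit_knha$ci.lb, hi = fit_knha$ci.ub))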


METRON ◽  
2021 ◽  
Author(s):  
Dankmar Böhning ◽  
Heinz Holling ◽  
Walailuck Böhning ◽  
Patarawan Sangnawakij

Abstract. In many meta-analyses, the variable of interest is frequently a count outcome reported in an intervention and a control group. Single- or double-zero studies are often observed in this type of data. In this setting, the well-known Cochran's Q statistic for testing homogeneity becomes undefined. In this paper, we propose two statistics for testing homogeneity of the risk ratio, particularly for application in the case of rare events in meta-analysis. The first is a chi-square-type statistic, constructed from the conditional probability of the number of events in the treatment group given the total number of events. The second is a likelihood ratio statistic, derived from logistic regression models allowing fixed and random effects for the risk ratio. Both proposed statistics are well defined even in the situation of single-zero studies. In a simulation study, the proposed tests perform better than the traditional test in terms of type I error and power under common and rare event situations. However, as the performance of the two newly proposed tests is still unsatisfactory in the very rare events setting, we suggest a bootstrap approach that does not rely on asymptotic distributional theory; this approach is shown to perform well in terms of type I error. Furthermore, a number of empirical meta-analyses are used to illustrate the methods.
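The following is only a rough, hypothetical sketch of the conditional idea and of a parametric bootstrap, not the authors' proposed statistics. Assuming a common risk ratio θ and rare events, a study's treatment-arm event count given its total event count is approximately Binomial(m_i, p_i) with p_i = n_Ti·θ / (n_Ti·θ + n_Ci); a chi-square-type discrepancy can then be bootstrapped instead of compared with an asymptotic reference distribution. All names and simplifications (for instance, θ is not re-estimated within bootstrap replicates) are assumptions for illustration.

# Hypothetical sketch: conditional chi-square-type discrepancy plus a parametric
# bootstrap. xT, xC = event counts, nT, nC = arm sizes, one entry per study.
# (Assumes at least one event in each arm across studies so theta_mh is finite.)
homogeneity_boot <- function(xT, xC, nT, nC, B = 2000, seed = 1) {
  set.seed(seed)
  N        <- nT + nC
  theta_mh <- sum(xT * nC / N) / sum(xC * nT / N)  # Mantel-Haenszel common RR
  m        <- xT + xC                              # total events per study
  p        <- nT * theta_mh / (nT * theta_mh + nC) # P(event in treatment arm | common RR)

  chisq_stat <- function(x) {
    keep <- m > 0                                  # double-zero studies contribute nothing
    sum((x[keep] - m[keep] * p[keep])^2 / (m[keep] * p[keep] * (1 - p[keep])))
  }

  t_obs  <- chisq_stat(xT)
  # Parametric bootstrap under homogeneity; theta is kept fixed for simplicity.
  t_boot <- replicate(B, chisq_stat(rbinom(length(m), m, p)))
  c(statistic = t_obs, p_boot = mean(t_boot >= t_obs))
}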


Methodology ◽  
2020 ◽  
Vol 16 (4) ◽  
pp. 299-315
Author(s):  
Belén Fernández-Castilla ◽  
Lies Declercq ◽  
Laleh Jamshidi ◽  
Susan Natasha Beretvas ◽  
Patrick Onghena ◽  
...  

Meta-analytic datasets can be large, especially when primary studies report multiple effect sizes. Visualization of meta-analytic data is therefore useful to summarize the data and to understand the information reported in primary studies. The gold-standard figures in meta-analysis are forest and funnel plots; however, neither of these plots can yet account for the existence of multiple effect sizes within primary studies. This manuscript describes extensions of the forest plot, the funnel plot, and the caterpillar plot that adapt them to three-level meta-analyses. For forest plots, we propose plotting the study-specific effects and their precision, and adding further confidence intervals that reflect the sampling variance of individual effect sizes. For caterpillar plots and funnel plots, we recommend plotting individual effect sizes and averaged study-level effect sizes in two separate graphs. For the funnel plot, plotting separate graphs may improve the detection of publication bias and of selective outcome reporting bias.
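A minimal, hypothetical sketch of the two-graph idea for dependent effect sizes, assuming a simple simulated data structure with one row per effect size (columns yi, vi, study): a three-level model is fitted with metafor's rma.mv, and individual and study-averaged effects are then shown in separate funnel-style plots. The simple inverse-variance aggregation within studies is a simplification chosen for illustration, not the manuscript's recommended procedure.

library(metafor)
set.seed(1)

# Simulated data: one row per effect size, nested within studies
n_es   <- sample(1:4, 10, replace = TRUE)
dat    <- data.frame(study = rep(1:10, times = n_es))
dat$es <- seq_len(nrow(dat))
dat$vi <- runif(nrow(dat), 0.01, 0.10)
dat$yi <- rnorm(nrow(dat), 0.3, 0.2)

# Three-level model: sampling variance + within-study + between-study heterogeneity
fit <- rma.mv(yi, vi, random = ~ 1 | study / es, data = dat)

# Study-averaged effects (inverse-variance aggregation within studies;
# ignores within-study correlation and is used here only for plotting)
agg <- do.call(rbind, lapply(split(dat, dat$study), function(d) {
  w <- 1 / d$vi
  data.frame(yi = sum(w * d$yi) / sum(w), vi = 1 / sum(w))
}))

op <- par(mfrow = c(1, 2))
plot(dat$yi, sqrt(dat$vi), ylim = rev(range(sqrt(dat$vi))), pch = 19,
     xlab = "Effect size", ylab = "Standard error", main = "Individual effect sizes")
plot(agg$yi, sqrt(agg$vi), ylim = rev(range(sqrt(agg$vi))), pch = 19,
     xlab = "Effect size", ylab = "Standard error", main = "Study-averaged effects")
par(op)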

