Evaluation of the Normality Assumption in Meta-Analyses

2019 ◽  
Vol 189 (3) ◽  
pp. 235-242 ◽  
Author(s):  
Chia-Chun Wang ◽  
Wen-Chung Lee

Random-effects meta-analysis is one of the mainstream methods for research synthesis. The heterogeneity in meta-analyses is usually assumed to follow a normal distribution. This is actually a strong assumption, but one that often receives little attention and is used without justification. Although methods for assessing the normality assumption are readily available, they cannot be used directly because the included studies have different within-study standard errors. Here we present a standardization framework for evaluation of the normality assumption and examine its performance in random-effects meta-analyses with simulation studies and real examples. We use both a formal statistical test and a quantile-quantile plot for visualization. Simulation studies show that our normality test has well-controlled type I error rates and reasonable power. We also illustrate the real-world significance of examining the normality assumption with examples. Investigating the normality assumption can provide valuable information for further analysis or clinical application. We recommend routine examination of the normality assumption with the proposed framework in future meta-analyses.
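
The sketch below illustrates the general idea in R, using the metafor package and its bundled dat.bcg example data: each estimate is standardized by its total standard error (within-study plus between-study) before a Q-Q plot and a Shapiro-Wilk test are applied. This is a simplified illustration of the standardization idea, not the authors' exact framework.

```r
# Minimal sketch of a normality check in a random-effects meta-analysis;
# the standardization below is a simple version of the idea, not the
# authors' exact framework.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)              # example data shipped with metafor
res <- rma(yi, vi, data = dat)             # random-effects fit (REML)

# Standardize each estimate by its total standard error (within + between)
z <- (dat$yi - coef(res)) / sqrt(dat$vi + res$tau2)

qqnorm(z); qqline(z)                       # visual check
shapiro.test(as.numeric(z))                # formal normality test
```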

2021 ◽  
Author(s):  
Megha Joshi ◽  
James E Pustejovsky ◽  
S. Natasha Beretvas

The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some meta-analyses also include multiple studies conducted by the same lab or investigator, creating another potential source of dependence. An increasingly popular method to handle dependence is robust variance estimation (RVE), but this method can result in inflated Type I error rates when the number of studies is small. Small-sample correction methods for RVE have been shown to control Type I error rates adequately but may be overly conservative, especially for tests of multiple-contrast hypotheses. We evaluated an alternative method for handling dependence, cluster wild bootstrapping, which has been examined in the econometrics literature but not in the context of meta-analysis. Results from two simulation studies indicate that cluster wild bootstrapping maintains adequate Type I error rates and provides more power than extant small-sample correction methods, particularly for multiple-contrast hypothesis tests. We recommend using cluster wild bootstrapping to conduct hypothesis tests for meta-analyses with a small number of studies. We have also created an R package that implements such tests.
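
As an illustration of the resampling idea, the following base-R sketch runs a cluster wild bootstrap test of a single moderator. The data frame `dat` with columns `yi`, `x`, and `study` is hypothetical, and ordinary least squares stands in for the weighted, robust meta-regression tests implemented in the authors' package.

```r
# Hypothetical sketch of a cluster wild bootstrap test of H0: beta_x = 0.
# `dat` (columns yi, x, study) is an assumed data frame; OLS stands in
# for the robust meta-regression machinery of the authors' package.
set.seed(2021)

fit_null <- lm(yi ~ 1, data = dat)                   # model with H0 imposed
fit_full <- lm(yi ~ x, data = dat)
t_obs <- summary(fit_full)$coefficients["x", "t value"]

res0 <- resid(fit_null)
fit0 <- fitted(fit_null)
clus <- unique(dat$study)

t_boot <- replicate(1999, {
  # One Rademacher weight per cluster, shared by all its effect sizes
  w <- setNames(sample(c(-1, 1), length(clus), replace = TRUE), clus)
  y_star <- fit0 + res0 * w[as.character(dat$study)]
  summary(lm(y_star ~ x, data = dat))$coefficients["x", "t value"]
})

p_value <- mean(abs(t_boot) >= abs(t_obs))           # two-sided p-value
```

Imposing the null when generating the bootstrap samples (the "restricted" wild bootstrap) is the usual choice in the econometrics literature, since it tends to give better Type I error control than resampling from the unrestricted fit.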


2015 ◽  
Vol 26 (3) ◽  
pp. 1500-1518 ◽  
Author(s):  
Annamaria Guolo ◽  
Cristiano Varin

This paper investigates the impact of the number of studies on meta-analysis and meta-regression within the random-effects model framework. It is frequently neglected that inference in random-effects models requires a substantial number of studies to be included in the meta-analysis to guarantee reliable conclusions. Several authors have warned about the risk of inaccurate results from the traditional DerSimonian and Laird approach, especially in the common case of meta-analyses involving a limited number of studies. This paper presents a selection of likelihood and non-likelihood methods for inference in meta-analysis that have been proposed to overcome the limitations of the DerSimonian and Laird procedure, with a focus on the effect of the number of studies. The applicability and performance of the methods are investigated in terms of Type I error rates and empirical power to detect effects, under scenarios of practical interest. Simulation studies and applications to real meta-analyses highlight that no approach is uniformly superior to the alternatives. The overall recommendation is to avoid the DerSimonian and Laird method when the number of studies is modest and to prefer a more comprehensive procedure that compares alternative inferential approaches. R code for meta-analysis according to all of the inferential methods examined in the paper is provided.
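
A minimal illustration of such a comparison in R, using the metafor package (one of several implementations) and its bundled dat.bcg data: the DerSimonian-Laird fit with the default Wald-type z test is placed next to a REML fit with the Knapp-Hartung adjustment, two of the approaches commonly contrasted when the number of studies is small.

```r
# Sketch comparing inferential approaches on one data set with metafor;
# dat.bcg is the package's example data, not a meta-analysis from the paper.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

res_dl <- rma(yi, vi, data = dat, method = "DL")    # DerSimonian-Laird, Wald z
res_hk <- rma(yi, vi, data = dat, method = "REML",
              test = "knha")                        # REML + Knapp-Hartung t

# With few studies the two confidence intervals can differ noticeably
rbind(DL        = c(coef(res_dl), res_dl$ci.lb, res_dl$ci.ub),
      REML_KnHa = c(coef(res_hk), res_hk$ci.lb, res_hk$ci.ub))
```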


2021 ◽  
pp. 096228022110028
Author(s):  
Zhen Meng ◽  
Qinglong Yang ◽  
Qizhai Li ◽  
Baoxue Zhang

For the nonparametric Behrens-Fisher problem, a directional-sum test is proposed based on a division-combination strategy. A one-layer wild bootstrap procedure is given to calculate its statistical significance. We conduct simulation studies with data generated from lognormal, t, and Laplace distributions to show that the proposed test controls the type I error rate properly and is more powerful than the existing rank-sum and maximum-type tests under most of the considered scenarios. Applications to a dietary intervention trial further demonstrate the performance of the proposed test.
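
The sketch below shows the flavor of a one-layer wild bootstrap significance calculation in R for a generic two-sample statistic. The difference-of-medians statistic and the lognormal example data are placeholders; the directional-sum statistic itself is defined in the paper and is not reproduced here.

```r
# One-layer wild bootstrap p-value for a generic two-sample statistic.
# The statistic and data below are placeholders, not the paper's
# directional-sum test.
set.seed(1)
x <- rlnorm(30)                 # group 1
y <- rlnorm(40, sdlog = 1.5)    # group 2, different spread (Behrens-Fisher)

stat <- function(a, b) median(a) - median(b)   # placeholder statistic
t_obs <- stat(x, y)

t_boot <- replicate(1999, {
  # Rademacher multipliers on group-centered data impose a symmetric
  # null while preserving each group's own spread
  xs <- (x - median(x)) * sample(c(-1, 1), length(x), replace = TRUE)
  ys <- (y - median(y)) * sample(c(-1, 1), length(y), replace = TRUE)
  stat(xs, ys)
})

p_value <- (1 + sum(abs(t_boot) >= abs(t_obs))) / (1 + length(t_boot))
```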


2001 ◽  
Vol 40 (02) ◽  
pp. 148-155 ◽  
Author(s):  
A. Koch ◽  
N. Victor ◽  
S. Ziegler

The random-effects model is often used in meta-analyses. A corresponding significance test based on a normal approximation has been established. Its type I error rate is derived in this article through theoretical considerations and computer simulations. The test can be conservative as well as unacceptably anti-conservative; the anti-conservatism increases as the number of patients increases and the number of studies decreases. A modification is proposed that maintains the nominal level asymptotically as the number of patients approaches infinity. Simulations show that the modified test is often conservative, but its conservatism is small in the situations where the standard test is highly anti-conservative.
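
A minimal R simulation along these lines, under assumed values for the number of studies, patients per study, and between-study variance (chosen to mimic the "many patients, few studies" setting, not the paper's exact scenarios), estimates the empirical type I error rate of the standard normal-approximation test with DerSimonian-Laird weights.

```r
# Simulation sketch of the standard test's type I error under the null;
# k, n, and tau2 are assumed values, not the paper's exact settings.
set.seed(2001)

sim_once <- function(k = 5, n = 1000, tau2 = 0.1) {
  vi <- rep(4 / n, k)                              # small within-study variances
  yi <- rnorm(k, mean = 0, sd = sqrt(tau2 + vi))   # true effect is zero
  wi <- 1 / vi                                     # fixed-effect weights
  q  <- sum(wi * (yi - sum(wi * yi) / sum(wi))^2)  # Cochran's Q
  t2 <- max(0, (q - (k - 1)) / (sum(wi) - sum(wi^2) / sum(wi)))  # DL tau^2
  w  <- 1 / (vi + t2)                              # random-effects weights
  z  <- (sum(w * yi) / sum(w)) / sqrt(1 / sum(w))  # normal-approximation test
  abs(z) > qnorm(0.975)                            # reject at nominal 5%?
}

mean(replicate(10000, sim_once()))                 # empirical type I error rate
```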


2001 ◽  
Vol 26 (1) ◽  
pp. 105-132 ◽  
Author(s):  
Douglas A. Powell ◽  
William D. Schafer

The robustness literature for the structural equation model was synthesized following the method of Harwell, which employs meta-analysis as developed by Hedges and Vevea. The study focused on explaining empirical Type I error rates for six principal classes of estimators: two that assume multivariate normality (maximum likelihood and generalized least squares), elliptical estimators, two distribution-free estimators (asymptotic and others), and latent projection. Generally, the chi-square tests for overall model fit were found to be sensitive to non-normality and to model size for all estimators (with the possible exception of the elliptical estimators with respect to model size and the latent projection techniques with respect to non-normality). The asymptotic distribution-free (ADF) and latent projection techniques were also found to be sensitive to sample size. Distribution-free methods other than ADF showed, in general, much less sensitivity to all factors considered.


2019 ◽  
Vol 14 (2) ◽  
pp. 399-425 ◽  
Author(s):  
Haolun Shi ◽  
Guosheng Yin
