On seeking moderator variables in the meta-analysis of correlational data: A Monte Carlo investigation of statistical power and resistance to Type I error.

1986 ◽  
Vol 71 (2) ◽  
pp. 302-310 ◽  
Author(s):  
Paul R. Sackett ◽  
Michael M. Harris ◽  
John M. Orr

2013 ◽  
Vol 18 (4) ◽  
pp. 553-571 ◽  
Author(s):  
Georgina Guilera ◽  
Juana Gómez-Benito ◽  
Maria Dolores Hidalgo ◽  
Julio Sánchez-Meca

1993 ◽  
Vol 30 (2) ◽  
pp. 246-255 ◽  
Author(s):  
Murali Chandrashekaran ◽  
Beth A. Walker

To enhance the utility of meta-analysis as an integrative tool for marketing research, heteroscedastic MLE (HMLE), a maximum-likelihood-based estimation procedure, is proposed as a method that overcomes heteroscedasticity, a problem known to impair OLS estimates and threaten the validity of meta-analytic findings. The results of a Monte Carlo simulation experiment reveal that, under a wide range of heteroscedastic conditions, HMLE is more efficient and powerful than OLS and achieves these performance advantages without inflating Type I error. Further, the relative advantage of HMLE grows as heteroscedasticity becomes more severe. An empirical analysis of a meta-analytic dataset in marketing confirmed and extended these findings, illustrating how the enhanced efficiency and power of HMLE improve the ability to detect moderator variables and demonstrating how the theoretical generalizations that emerge from a meta-analysis depend on the choice of analytic procedure.
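The abstract does not spell out the estimation details, but the core contrast it draws, OLS that ignores unequal error variances versus a likelihood that models them, is easy to sketch. The Python fragment below is a minimal illustration with known per-study sampling variances, not the authors' HMLE procedure; every name and value in it is hypothetical.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Hypothetical setup: k studies, effect sizes y with known, unequal
# sampling variances v (driven by unequal sample sizes), one moderator x.
k = 40
x = rng.uniform(0, 1, k)
v = 1.0 / rng.integers(20, 200, k)            # per-study sampling variance
y = 0.2 + 0.5 * x + rng.normal(0, np.sqrt(v))

# OLS ignores the unequal variances.
X = np.column_stack([np.ones(k), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Heteroscedastic ML: maximize the normal likelihood with the per-study
# variances in place; with v treated as known this coincides with
# weighted least squares.
def negloglik(beta):
    mu = X @ beta
    return -stats.norm.logpdf(y, loc=mu, scale=np.sqrt(v)).sum()

beta_ml = optimize.minimize(negloglik, x0=beta_ols, method="Nelder-Mead").x
print("OLS:", beta_ols, "  heteroscedastic ML:", beta_ml)
```

Across repeated draws, the ML estimates vary less around the true coefficients than the OLS estimates, which is the efficiency gain the abstract describes.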


2020 ◽  
Author(s):  
Han Du ◽  
Ge Jiang ◽  
Zijun Ke

Meta-analysis combines pertinent information from existing studies to provide an overall estimate of population parameters/effect sizes, as well as to quantify and explain the differences between studies. However, testing for between-study heterogeneity is one of the most troublesome topics in meta-analysis research. Additionally, no methods have been proposed to test whether the size of the heterogeneity is larger than a specific level. The existing methods, such as the Q test and likelihood ratio (LR) tests, are criticized for their failure to control the Type I error rate and/or to attain enough statistical power. Although better reference-distribution approximations have been proposed in the literature, their expressions are complicated and their application is limited. In this article, we propose bootstrap-based heterogeneity tests that combine the restricted maximum likelihood (REML) ratio test or the Q test with bootstrap procedures, denoted B-REML-LRT and B-Q, respectively. Simulation studies were conducted to examine and compare the performance of the proposed methods with the regular LR tests, the regular Q test, and the improved Q test in both random-effects and mixed-effects meta-analysis. Based on the results for Type I error rates and statistical power, B-Q is recommended. An R package, boot.heterogeneity, is provided to facilitate the implementation of the proposed method.
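As a rough illustration of the B-Q idea (the reference implementation is the authors' boot.heterogeneity R package), the sketch below computes Cochran's Q and a parametric-bootstrap p-value under the homogeneity null. The data, function names, and bootstrap settings are all hypothetical.

```python
import numpy as np

def cochran_q(y, v):
    """Cochran's Q: weighted squared deviations from the pooled mean."""
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2), mu

def bootstrap_q_test(y, v, n_boot=2000, seed=0):
    """Parametric bootstrap of Q under H0: tau^2 = 0 (a sketch of B-Q,
    not the package's implementation)."""
    rng = np.random.default_rng(seed)
    q_obs, mu = cochran_q(y, v)
    q_boot = np.empty(n_boot)
    for b in range(n_boot):
        y_b = rng.normal(mu, np.sqrt(v))      # simulate under homogeneity
        q_boot[b], _ = cochran_q(y_b, v)
    return q_obs, np.mean(q_boot >= q_obs)    # bootstrap p-value

# Hypothetical data: 8 study effect sizes and sampling variances.
y = np.array([0.12, 0.35, -0.05, 0.40, 0.22, 0.55, 0.10, 0.30])
v = np.array([0.04, 0.02, 0.05, 0.03, 0.04, 0.02, 0.06, 0.03])
q, p = bootstrap_q_test(y, v)
print(f"Q = {q:.2f}, bootstrap p = {p:.3f}")
```

Because the reference distribution is simulated under the null rather than approximated by a chi-square with k-1 degrees of freedom, the bootstrap version can hold the Type I error rate closer to nominal when the number of studies is small.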


2017 ◽  
Author(s):  
Hyemin Han ◽  
Andrea L. Glenn

Abstract. In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being so stringent as to omit true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ across areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, which has led to the suggestion that less stringent corrections allowing for more sensitivity may be beneficial, at the cost of more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in SPM12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxel-wise thresholding with family-wise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too many regions or encompassing too many additional ones) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction.
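The four correction methods compared above are the ones implemented in SPM12; the sketch below is not that implementation, only a minimal Python illustration of the stringency trade-off between Bonferroni (family-wise error) and Benjamini-Hochberg (false discovery rate) thresholds on a hypothetical vector of voxel p-values.

```python
import numpy as np

def bonferroni(p, alpha=0.05):
    """Reject where p < alpha / m: strict family-wise error control."""
    return p < alpha / p.size

def benjamini_hochberg(p, alpha=0.05):
    """BH step-up procedure: controls the false discovery rate."""
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()        # largest i with p_(i) <= i*alpha/m
        reject[order[: k + 1]] = True
    return reject

# Hypothetical 'voxel' p-values: a few true effects among many nulls.
rng = np.random.default_rng(2)
p = np.concatenate([rng.uniform(0, 0.002, 20),   # signal
                    rng.uniform(0, 1, 980)])     # noise
print("Bonferroni rejections:", bonferroni(p).sum())
print("BH-FDR rejections:    ", benjamini_hochberg(p).sum())
```

BH typically recovers more of the planted signal at the price of a controlled proportion of false discoveries, which is exactly the sensitivity-versus-stringency tension the abstract describes.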


2015 ◽  
Vol 14 (Suppl 5) ◽  
pp. CIN.S27718 ◽  
Author(s):  
Putri W. Novianti ◽  
Ingeborg Van Der Tweel ◽  
Victor L. Jong ◽  
Kit C. B. Roes ◽  
Marinus J. C. Eijkemans

Most of the discoveries from gene expression data are driven by a study claiming an optimal subset of genes that play a key role in a specific disease. Meta-analysis of the available datasets can help in obtaining concordant results so that a real-life application may be more successful. Sequential meta-analysis (SMA) is an approach for combining studies in chronological order while preserving the Type I error rate and pre-specifying the statistical power to detect a given effect size. We focus on the application of SMA to find gene expression signatures across experiments in acute myeloid leukemia. SMA of seven raw datasets is used to evaluate whether the accumulated samples provide enough evidence or whether more experiments should be initiated. We found 313 differentially expressed genes, based on the cumulative information of the experiments. SMA offers an alternative to existing methods for generating a gene list by evaluating the adequacy of the cumulative information.
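SMA derives its stopping boundaries from alpha-spending methods; the fragment below is only a schematic of the underlying monitoring idea, tracking a cumulative fixed-effect z-statistic against a simplified O'Brien-Fleming-type boundary. The data and the boundary formula are illustrative assumptions, not the procedure used in the paper.

```python
import numpy as np

# Hypothetical chronological stream of study effects (say, log
# fold-changes for one gene) with their sampling variances.
y = np.array([0.30, 0.10, 0.25, 0.18, 0.22])
v = np.array([0.020, 0.015, 0.010, 0.012, 0.008])

w = 1.0 / v
info = np.cumsum(w)                           # accumulated information
z = np.cumsum(w * y) / np.sqrt(info)          # cumulative fixed-effect z

# Simplified O'Brien-Fleming-type boundary: very strict early on,
# relaxing toward the fixed-sample critical value as information accrues.
frac = info / info[-1]                        # information fraction
boundary = 1.96 / np.sqrt(frac)

for i, (zi, bi) in enumerate(zip(z, boundary), start=1):
    verdict = "evidence sufficient" if abs(zi) > bi else "continue"
    print(f"after study {i}: z = {zi:5.2f}, boundary = {bi:5.2f} -> {verdict}")
```

Applied gene by gene with appropriate multiplicity handling, this kind of monitoring is what lets SMA say whether the accumulated samples already suffice or whether further experiments are warranted.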


2021 ◽  
Author(s):  
Josue E. Rodriguez ◽  
Donald Ray Williams ◽  
Paul-Christian Bürkner

Categorical moderators are often included in mixed-effects meta-analysis to explain heterogeneity in effect sizes. Tests of moderator effects assume a constant between-study variance across all levels of the moderator. Although this assumption rarely receives serious thought, upholding it when it does not hold can have drastic ramifications. We propose that researchers should instead assume unequal between-study variances by default. To achieve this, we suggest using a mixed-effects location-scale model (MELSM) to allow group-specific estimates of the between-study variances. In two extensive simulation studies, we show that in terms of Type I error and statistical power, nearly nothing is lost by using the MELSM for moderator tests, but there can be serious costs when a mixed-effects model with equal variances is used. Most notably, in scenarios with balanced sample sizes or equal between-study variances, the Type I error and power rates are nearly identical between the mixed-effects model and the MELSM. On the other hand, with imbalanced sample sizes and unequal variances, the Type I error rate under the mixed-effects model can be grossly inflated or overly conservative, whereas the MELSM controls the Type I error rate well across all scenarios. With respect to power, the MELSM had comparable or higher power than the mixed-effects model in all conditions where the latter produced valid (i.e., not inflated) Type I error rates. Altogether, our results strongly support assuming unequal between-study variances as the default strategy when testing categorical moderators.
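The MELSM itself is typically fit in a Bayesian framework; as a simpler frequentist stand-in for its key idea, group-specific between-study variances, the sketch below estimates a separate DerSimonian-Laird tau^2 per moderator level instead of assuming one pooled value. All data are hypothetical.

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Method-of-moments between-study variance for one group."""
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

# Hypothetical effect sizes, sampling variances, and a two-level moderator.
y = np.array([0.10, 0.25, 0.05, 0.18, 0.60, 0.90, 0.20, 0.75])
v = np.array([0.03, 0.02, 0.04, 0.03, 0.05, 0.04, 0.06, 0.05])
g = np.array([0, 0, 0, 0, 1, 1, 1, 1])

for level in np.unique(g):
    yk, vk = y[g == level], v[g == level]
    tau2 = dersimonian_laird_tau2(yk, vk)
    w = 1.0 / (vk + tau2)                     # group-specific RE weights
    mu = np.sum(w * yk) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    print(f"group {level}: tau^2 = {tau2:.3f}, mean = {mu:.3f} (SE {se:.3f})")
```

A Wald test of the difference between the group means, using these group-specific standard errors, then plays the role of the moderator test without forcing a common tau^2 on both levels.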


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
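A widely used member of this family of detection tools is Egger's regression test for funnel-plot asymmetry; whether or not it is among the six methods evaluated here, it illustrates the genre. The sketch below implements it on hypothetical data in which small studies skew positive, the signature pattern publication bias leaves in a funnel plot.

```python
import numpy as np
from scipy import stats

def egger_test(y, se):
    """Egger's test: regress the standardized effect (y/se) on precision
    (1/se) and test whether the intercept differs from zero."""
    z, prec = y / se, 1.0 / se
    X = np.column_stack([np.ones_like(prec), prec])
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
    resid = z - X @ beta
    df = len(y) - 2
    cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])          # t-statistic for the intercept
    return beta[0], t, 2 * stats.t.sf(abs(t), df)

# Hypothetical effects and standard errors, ordered from small (noisy)
# to large (precise) studies.
y  = np.array([0.80, 0.65, 0.45, 0.30, 0.25, 0.20, 0.15, 0.12])
se = np.array([0.40, 0.35, 0.25, 0.15, 0.12, 0.10, 0.08, 0.07])
b0, t, p = egger_test(y, se)
print(f"Egger intercept = {b0:.2f}, t = {t:.2f}, p = {p:.3f}")
```

As the simulations above warn, a significant intercept is suggestive rather than conclusive: under heterogeneity or severe selection, this and related tests can miss bias entirely.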

