Estimating Effect Size under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model

1996 ◽  
Vol 21 (4) ◽  
pp. 299-332 ◽  
Author(s):  
Larry V. Hedges ◽  
Jack L. Vevea

When there is publication bias, studies yielding large p values, and hence small effect estimates, are less likely to be published, which leads to biased estimates of effects in meta-analysis. We investigate a selection model based on one-tailed p values in the context of a random effects model. The procedure both models the selection process and corrects for the consequences of selection on estimates of the mean and variance of effect parameters. A test of the statistical significance of selection is also provided. The small sample properties of the method are evaluated by means of simulations, and the asymptotic theory is found to be reasonably accurate under correct model specification and plausible conditions. The method substantially reduces bias due to selection when model specification is correct, but the variance of estimates is increased; thus mean squared error is reduced only when selection produces substantial bias. The robustness of the method to violations of assumptions about the form of the distribution of the random effects is also investigated via simulation, and the model-corrected estimates of the mean effect are generally found to be much less biased than the uncorrected estimates. The significance test for selection bias, however, is found to be highly nonrobust, rejecting at up to 10 times the nominal rate when there is no selection but the distribution of the effects is incorrectly specified.
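
To make the selection-model machinery concrete, here is a minimal Python sketch of a one-step weight-function model of the kind described above. It is an illustration under simplifying assumptions, not the authors' implementation: each observed effect y_i with sampling variance v_i is taken to follow N(mu, tau^2 + v_i), studies whose one-tailed p-value falls below alpha receive relative publication weight 1, and all other studies receive weight omega. The function name, the alpha = .05 cutpoint, and the starting values are illustrative choices.

```python
import numpy as np
from scipy import stats, optimize

def neg_loglik(params, y, v, alpha=0.05):
    """Negative log-likelihood of a random-effects model with a one-step
    selection (weight) function on one-tailed p-values."""
    mu, log_tau2, log_omega = params
    tau2, omega = np.exp(log_tau2), np.exp(log_omega)
    sd = np.sqrt(tau2 + v)                      # marginal SD of each estimate
    se = np.sqrt(v)
    # a study is "significant" when its one-tailed p < alpha, i.e. y_i > y_crit_i
    y_crit = stats.norm.ppf(1 - alpha) * se
    w = np.where(y > y_crit, 1.0, omega)        # relative publication weights
    # probability that a draw from N(mu, tau2 + v_i) would clear the cutpoint
    p_sig = stats.norm.sf((y_crit - mu) / sd)
    denom = p_sig + omega * (1.0 - p_sig)       # normalizing constant of the selected density
    loglik = np.log(w) + stats.norm.logpdf(y, loc=mu, scale=sd) - np.log(denom)
    return -np.sum(loglik)

# usage with hypothetical data:
# y = np.array([0.42, 0.31, 0.55, 0.12, 0.48]); v = np.array([0.02, 0.03, 0.01, 0.05, 0.02])
# fit = optimize.minimize(neg_loglik, x0=[0.0, np.log(0.05), 0.0], args=(y, v),
#                         method="Nelder-Mead")
# mu_hat, tau2_hat, omega_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
```

Refitting with omega fixed at 1 and comparing the two maximized likelihoods gives a likelihood-ratio test of selection, mirroring the significance test for selection described in the abstract.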


2018 ◽  
Author(s):  
Robbie Cornelis Maria van Aert ◽  
Marcel A. L. M. van Assen

Publication bias is a major threat to the validity of a meta-analysis, resulting in overestimated effect sizes. P-uniform is a meta-analysis method that corrects estimates for publication bias but overestimates the average effect size if heterogeneity in true effect sizes (i.e., between-study variance) is present. We propose an extension and improvement of p-uniform called p-uniform*. P-uniform* improves upon p-uniform in three important ways: it (i) entails a more efficient estimator, (ii) eliminates the overestimation of effect size in the presence of between-study variance in true effect sizes, and (iii) enables estimating and testing for the presence of between-study variance. We compared the statistical properties of p-uniform* with those of p-uniform, the selection model approach of Hedges (1992), and the random-effects model. The statistical properties of p-uniform* and the selection model approach were comparable, and both generally outperformed p-uniform and the random-effects model when publication bias was present. We demonstrate that p-uniform* and the selection model approach estimate the average effect size and between-study variance rather well with ten or more studies in the meta-analysis when publication bias is not extreme. P-uniform* generally provides more accurate estimates of the between-study variance in meta-analyses containing many studies (e.g., 60 or more) and when publication bias is present. However, neither method performs well if the meta-analysis includes only statistically significant studies; p-uniform performed best in this case, but only when the between-study variance was zero or small. We offer recommendations for applied researchers, and provide an R package and an easy-to-use web application for applying p-uniform*.
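
The uniformity idea behind p-uniform can be sketched in a few lines of Python. The fragment below is my own illustration of the simple fixed-effect variant, not the authors' p-uniform* method or their R package: for the statistically significant studies only, the conditional probabilities q_i = P(Y > y_i | Y > y_crit) computed at the true effect are uniformly distributed, so the effect estimate is the value of delta at which these conditional probabilities average 0.5. The function names and the root-finding bracket are assumptions.

```python
import numpy as np
from scipy import stats, optimize

def conditional_probs(delta, y, v, alpha=0.05):
    """Conditional probabilities q_i = P(Y > y_i | Y > y_crit) under Y ~ N(delta, v_i);
    y must contain only the statistically significant effect estimates."""
    se = np.sqrt(v)
    y_crit = stats.norm.ppf(1 - alpha) * se     # one-tailed significance threshold
    # work on the log scale to avoid underflow far out in the tail
    log_q = (stats.norm.logsf((y - delta) / se)
             - stats.norm.logsf((y_crit - delta) / se))
    return np.exp(log_q)

def estimate_delta(y, v, alpha=0.05):
    """Choose delta so the conditional probabilities average 0.5, i.e. look uniform."""
    f = lambda d: np.mean(conditional_probs(d, y, v, alpha)) - 0.5
    return optimize.brentq(f, -10.0, 10.0)      # bracket may need widening for large effects
```

Because each q_i increases in delta, the bracketed root is unique; p-uniform* generalizes this idea by adding a between-study variance parameter, which the sketch above deliberately omits.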


2016 ◽  
Vol 27 (9) ◽  
pp. 2722-2741 ◽  
Author(s):  
Qiaohao Zhu ◽  
KC Carriere

Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection and correction of publication bias in meta-analysis focuses on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations in which publication bias may be induced: by (1) small effect size or (2) large p-value. We consider both fixed- and random-effects models and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under the fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias under various situations.
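
As a concrete illustration of the truncation idea, the Python sketch below fits a left-truncated normal by maximum likelihood. It rests on simplifying assumptions of my own (a single common effect and a known lower cutoff c on the effect-size scale) rather than the paper's exact estimators, which also cover p-value-based truncation and random effects.

```python
import numpy as np
from scipy import stats, optimize

def fit_truncated_normal(y, c):
    """MLE of (mu, sigma) for effect estimates y assumed drawn from a normal
    distribution truncated from below at a known cutoff c (all y >= c)."""
    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        # truncated-normal log-density: normal log-density minus log P(Y >= c)
        ll = stats.norm.logpdf(y, mu, sigma) - stats.norm.logsf(c, mu, sigma)
        return -np.sum(ll)
    res = optimize.minimize(neg_loglik, x0=[np.mean(y), np.log(np.std(y))],
                            method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    # estimated share of studies suppressed by truncation (severity of bias)
    trunc_prop = stats.norm.cdf(c, mu_hat, sigma_hat)
    return mu_hat, sigma_hat, trunc_prop
```

Here mu_hat plays the role of the bias-corrected overall effect, and trunc_prop estimates the proportion of studies lost to selection.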


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
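
For readers unfamiliar with the class of tools being evaluated, the Python sketch below implements a generic regression-based small-study-effects test in the style of Egger's test. It is a textbook illustration of my own, not the authors' simulation code, and it does not reproduce the six methods compared in the paper.

```python
import numpy as np
from scipy import stats

def egger_test(y, se):
    """Regress the standardized effect y_i/se_i on precision 1/se_i and test
    whether the intercept differs from zero (a signal of small-study effects)."""
    z = y / se
    precision = 1.0 / se
    n = len(y)
    X = np.column_stack([np.ones(n), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    s2 = resid @ resid / (n - 2)                       # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)                  # OLS covariance matrix
    t_stat = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return beta[0], p_value                            # intercept and its two-sided p
```

Applying a test like this to many simulated literatures with known selection, as the paper does for each of the six methods, is what allows the Type I error rate and power to be estimated.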


2018 ◽  
Author(s):  
Prathiba Natesan ◽  
Smita Mehta

Single case experimental designs (SCEDs) have become an indispensable methodology where randomized controlled trials may be impossible or even inappropriate. However, the nature of SCED data presents challenges for both visual and statistical analyses. Small sample sizes, autocorrelations, data types, and design types render many parametric statistical analyses and maximum likelihood approaches ineffective. The presence of autocorrelation decreases interrater reliability in visual analysis. The purpose of the present study is to demonstrate a newly developed model, the Bayesian unknown change-point (BUCP) model, which overcomes all of the above-mentioned data-analytic challenges. This is the first study to formulate and demonstrate the rate ratio effect size for autocorrelated data, which has remained an open question in SCED research until now. This expository study also compares and contrasts the results from the BUCP model with visual analysis, and the rate ratio effect size with the nonoverlap of all pairs (NAP) effect size. Data from a comprehensive behavioral intervention are used for the demonstration.
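
To make the change-point idea concrete, here is a minimal Python sketch of a Bayesian unknown change-point analysis for a single-case count series. It is a deliberately simplified illustration, not the authors' BUCP model: it assumes independent Poisson counts with conjugate Gamma priors and ignores autocorrelation, which the BUCP model explicitly accommodates. The function names, default priors, and the plug-in rate ratio are assumptions.

```python
import numpy as np
from scipy.special import gammaln

def log_marglik_poisson(x, a=1.0, b=1.0):
    """Log marginal likelihood of a Poisson count segment under a Gamma(a, b) rate prior."""
    x = np.asarray(x)
    S, m = x.sum(), len(x)
    return (-gammaln(x + 1.0).sum() + a * np.log(b) - gammaln(a)
            + gammaln(a + S) - (a + S) * np.log(b + m))

def changepoint_sketch(counts, a=1.0, b=1.0):
    """Posterior over an unknown change-point k (phase A = counts[:k], phase B = counts[k:])
    and a plug-in rate ratio effect size comparing the two phases."""
    counts = np.asarray(counts)
    n = len(counts)
    ks = np.arange(1, n)                                  # candidate change-points
    log_post = np.array([log_marglik_poisson(counts[:k], a, b)
                         + log_marglik_poisson(counts[k:], a, b) for k in ks])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                                    # discrete posterior over k
    k_hat = int(ks[np.argmax(post)])
    rate_a = (a + counts[:k_hat].sum()) / (b + k_hat)     # posterior mean rate, phase A
    rate_b = (a + counts[k_hat:].sum()) / (b + n - k_hat) # posterior mean rate, phase B
    return k_hat, post, rate_b / rate_a                   # plug-in rate ratio
```

Here counts could be session-by-session frequencies of a target behavior; the rate ratio rate_b / rate_a is then read as the multiplicative change in the behavior rate after the estimated change-point.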

