Heterogeneity in direct replications in psychology and its association with effect size

2020 ◽  
Author(s):  
Anton Olsson-Collentine ◽  
Marcel A. L. M. van Assen ◽  
Jelte M. Wicherts

We examined the evidence for heterogeneity (of effect sizes) when only minor changes to sample population and settings were made between studies, and explored the association between heterogeneity and average effect size in a sample of 68 meta-analyses from thirteen pre-registered multi-lab direct replication projects in social and cognitive psychology. Amongst the many examined effects, examples include the Stroop effect, the “verbal overshadowing” effect, and various priming effects such as “anchoring” effects. We found limited heterogeneity; 48/68 (71%) meta-analyses had non-significant heterogeneity, and most (49/68; 72%) were most likely to have zero to small heterogeneity. Power to detect small heterogeneity (as defined by Higgins, 2003) was low for all projects (mean 43%), but good to excellent for medium and large heterogeneity. Our findings thus show little evidence of widespread heterogeneity in direct replication studies in social and cognitive psychology, suggesting that minor changes in sample population and settings are unlikely to affect research outcomes in these fields of psychology. We also found strong correlations between observed average effect sizes (standardized mean differences and log odds ratios) and heterogeneity in our sample. Our results suggest that heterogeneity and moderation of effects are unlikely for a zero average true effect size, but increasingly likely for larger average true effect sizes.
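
As a rough illustration of the quantities involved, the sketch below (not the authors' analysis code; the lab-level numbers are invented) computes Cochran's Q, the DerSimonian-Laird estimate of the between-study variance tau², and I², the index whose 25%/50%/75% benchmarks for small/medium/large heterogeneity come from Higgins et al. (2003).

```python
# Minimal sketch (not the authors' analysis pipeline; lab-level numbers are
# invented): quantify heterogeneity across labs with Cochran's Q, the
# DerSimonian-Laird estimate of tau^2, and I^2 (Higgins et al., 2003).
import numpy as np
from scipy import stats

def heterogeneity(yi, sei):
    """yi: per-lab effect sizes (e.g., SMDs); sei: their standard errors."""
    yi, sei = np.asarray(yi, float), np.asarray(sei, float)
    wi = 1.0 / sei**2                       # inverse-variance (fixed-effect) weights
    mu_fe = np.sum(wi * yi) / np.sum(wi)    # fixed-effect pooled estimate
    Q = np.sum(wi * (yi - mu_fe) ** 2)      # Cochran's Q statistic
    df = len(yi) - 1
    c = np.sum(wi) - np.sum(wi**2) / np.sum(wi)
    tau2 = max(0.0, (Q - df) / c)           # DerSimonian-Laird between-study variance
    i2 = 100 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0   # percent of variability due to heterogeneity
    return {"Q": Q, "df": df, "p": stats.chi2.sf(Q, df), "tau2": tau2, "I2": i2}

# Hypothetical lab-level results for one replicated effect:
print(heterogeneity(yi=[0.21, 0.35, 0.10, 0.28], sei=[0.11, 0.09, 0.12, 0.10]))
```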

2018 ◽  
Author(s):  
Robbie Cornelis Maria van Aert

Ever more scientific research is published nowadays, calling for statistical methods that enable researchers to get an overview of the literature in a particular research field. For that purpose, meta-analysis methods were developed that can be used for statistically combining the effect sizes from independent primary studies on the same topic. My dissertation focuses on two issues that are crucial when conducting a meta-analysis: publication bias and heterogeneity in primary studies’ true effect sizes. Accurate estimation of both the meta-analytic effect size and the between-study variance in true effect size is crucial since the results of meta-analyses are often used for policy making. Publication bias refers to situations where publication of a primary study depends on its results, and it distorts the results of a meta-analysis. We developed new meta-analysis methods, p-uniform and p-uniform*, which estimate effect sizes corrected for publication bias and also test for publication bias. Although the methods perform well in many conditions, these and the other existing methods are shown not to perform well when researchers use questionable research practices. Additionally, when publication bias is absent or limited, traditional methods that do not correct for publication bias outperform p-uniform and p-uniform*. Surprisingly, our pre-registered study of a large-scale data set consisting of 83 meta-analyses and 499 systematic reviews published in the fields of psychology and medicine found no strong evidence for the presence of publication bias. We also developed two methods for meta-analyzing a statistically significant published original study and a replication of that study, which reflects a situation often encountered by researchers. One is a frequentist method, whereas the other is a Bayesian method. Both methods are shown to perform better than traditional meta-analytic methods that do not take the statistical significance of the original study into account. Analytical studies of both methods also show that the original study is sometimes better discarded for optimal estimation of the true effect size. We further developed a program for determining the required sample size in a replication, analogous to power analysis in null hypothesis testing. Computing the required sample size with this method revealed that large sample sizes (approximately 650 participants) are required to be able to distinguish a zero from a small true effect. Finally, in the last two chapters we derived a new multi-step estimator for the between-study variance in primary studies’ true effect sizes, and examined the statistical properties of two methods (the Q-profile and generalized Q-statistic methods) for computing the confidence interval of the between-study variance in true effect size. We proved that the multi-step estimator converges to the Paule-Mandel estimator, which is nowadays one of the recommended methods for estimating the between-study variance in true effect sizes. Two Monte Carlo simulation studies showed that the coverage probabilities of the Q-profile and generalized Q-statistic methods can be substantially below the nominal coverage rate if the assumptions underlying the random-effects meta-analysis model are violated.
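
To make the between-study variance estimation concrete, here is a minimal sketch of the Paule-Mandel estimator mentioned above: it chooses tau² so that the generalized Q statistic, computed with random-effects weights 1/(vᵢ + tau²), equals its degrees of freedom. This is an illustrative reimplementation, not code from the dissertation, and the example data are hypothetical.

```python
# Minimal sketch of the Paule-Mandel estimator of the between-study variance
# tau^2 (an illustrative reimplementation, not code from the dissertation):
# choose tau^2 so that the generalized Q statistic, computed with
# random-effects weights 1/(vi + tau^2), equals its degrees of freedom k - 1.
import numpy as np
from scipy import optimize

def generalized_q(tau2, yi, vi):
    wi = 1.0 / (vi + tau2)                  # random-effects weights
    mu = np.sum(wi * yi) / np.sum(wi)       # weighted average effect
    return np.sum(wi * (yi - mu) ** 2)

def paule_mandel(yi, vi, upper=100.0):
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    df = len(yi) - 1
    f = lambda tau2: generalized_q(tau2, yi, vi) - df
    if f(0.0) <= 0:                         # Q already at or below df: truncate at zero
        return 0.0
    return optimize.brentq(f, 0.0, upper)   # root where Q(tau^2) = k - 1 (upper bound assumed large enough)

# Hypothetical primary-study effect sizes and sampling variances:
print(round(paule_mandel(yi=[0.4, 0.1, 0.6, 0.3, 0.2], vi=[0.04, 0.06, 0.05, 0.03, 0.07]), 4))
```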


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
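
For readers unfamiliar with this kind of simulation, the sketch below shows the general shape of such an evaluation under stated assumptions: it generates literatures of standardized mean differences, censors non-significant results with some probability to mimic publication bias, and records how often Egger's regression test flags bias. Egger's test stands in for the six evaluated tools, which the abstract does not name; all parameter values and helper names are illustrative, not the authors' simulation design.

```python
# Minimal sketch of this kind of simulation (invented parameters, not the
# authors' design): generate literatures of two-group studies, censor
# non-significant results with some probability to mimic publication bias,
# and record how often Egger's regression test flags small-study effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_study(delta, tau, n):
    theta = rng.normal(delta, tau)                  # study-specific true effect
    se = np.sqrt(4 / n)                             # approximate SE of an SMD with total N = n
    d = rng.normal(theta, se)                       # observed standardized mean difference
    p = 2 * stats.norm.sf(abs(d / se))
    return d, se, p

def simulate_literature(k, delta, tau, pub_prob_nonsig):
    d, se = [], []
    while len(d) < k:
        di, sei, p = simulate_study(delta, tau, n=rng.integers(20, 200))
        if p < .05 or rng.random() < pub_prob_nonsig:   # publication-bias censoring step
            d.append(di); se.append(sei)
    return np.array(d), np.array(se)

def egger_test(d, se, alpha=.05):
    X = np.column_stack([np.ones_like(se), 1 / se])     # regress d/se on 1/se
    y = d / se
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - 2
    cov = (res[0] / dof) * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])                    # the intercept indicates small-study effects
    return 2 * stats.t.sf(abs(t), dof) < alpha

hits = [egger_test(*simulate_literature(k=30, delta=0.2, tau=0.0, pub_prob_nonsig=0.2))
        for _ in range(500)]
print("rejection rate under publication bias:", np.mean(hits))
```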


2019 ◽  
Vol 3 ◽  
Author(s):  
Niclas Kuper ◽  
Antonia Bott

Moral licensing describes the phenomenon that displaying moral behavior can lead to subsequent immoral behavior. This is usually explained by the idea that an initial moral act affirms the moral self-image and hence licenses subsequent immoral acts. Previous meta-analyses on moral licensing indicate significant overall effects of d > .30. However, several large replication studies have either not found the effect or reported a substantially smaller effect size. The present article investigated whether this can be attributed to publication bias. Datasets from two previous meta-analyses on moral licensing were compared and, when necessary, modified. The larger dataset was used for the present analyses. Using PET-PEESE and a three-parameter selection model (3-PSM), we found some evidence for publication bias. The adjusted effect sizes were reduced to d = -0.05, p = .64 and d = 0.18, p = .002, respectively. While the first estimate could be an underestimation, we also found indications that the second estimate might exaggerate the true effect size. It is concluded that both the evidence for and the size of moral licensing effects have likely been inflated by publication bias. Furthermore, our findings indicate that culture moderates the moral licensing effect. Recommendations for future meta-analytic and empirical work are given. Subsequent studies on moral licensing should be adequately powered and ideally pre-registered.
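
For orientation, PET-PEESE adjusts for small-study effects by regressing observed effect sizes on their standard errors (PET) or sampling variances (PEESE) with inverse-variance weights and taking the intercept as the bias-adjusted estimate. The sketch below is a minimal illustration with invented data, not the analysis code used in the article.

```python
# Minimal sketch of PET-PEESE with invented data (not the article's analysis
# code): weighted least squares of observed effect sizes on their standard
# errors (PET) or sampling variances (PEESE); the intercept estimates the
# effect at SE = 0. In the full procedure, the PEESE estimate is used only
# when the PET intercept differs significantly from zero.
import numpy as np

def wls_intercept(yi, xi, vi):
    yi, xi, vi = (np.asarray(a, float) for a in (yi, xi, vi))
    w = np.diag(1.0 / vi)                          # inverse-variance weights
    X = np.column_stack([np.ones_like(xi), xi])
    beta = np.linalg.solve(X.T @ w @ X, X.T @ w @ yi)
    return beta[0]

def pet_peese(yi, sei):
    vi = np.asarray(sei, float) ** 2
    pet = wls_intercept(yi, sei, vi)               # PET: predictor = standard error
    peese = wls_intercept(yi, vi, vi)              # PEESE: predictor = sampling variance
    return pet, peese

# Hypothetical standardized mean differences and standard errors:
pet, peese = pet_peese(yi=[0.45, 0.30, 0.55, 0.20, 0.15],
                       sei=[0.20, 0.15, 0.25, 0.10, 0.08])
print(f"PET estimate: {pet:.2f}, PEESE estimate: {peese:.2f}")
```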


2020 ◽  
Author(s):  
Robbie Cornelis Maria van Aert ◽  
Joris Mulder

Meta-analysis methods are used to synthesize results of multiple studies on the same topic. The most frequently used statistical model in meta-analysis is the random-effects model, containing parameters for the average effect, the between-study variance in the primary studies' true effect sizes, and random effects for the study-specific effects. We propose Bayesian hypothesis testing and estimation methods using the marginalized random-effects meta-analysis (MAREMA) model, in which the study-specific true effects are regarded as nuisance parameters that are integrated out of the model. A flat prior distribution is placed on the overall effect size in the case of estimation, and a proper unit-information prior for the overall effect size is proposed in the case of hypothesis testing. For the between-study variance in true effect size, a proper uniform prior is placed on the proportion of total variance that can be attributed to between-study variability. Bayes factors that allow testing point and one-sided hypotheses are used for hypothesis testing. The proposed methodology has several attractive properties. First, the MAREMA model encompasses models with a zero, negative, and positive between-study variance, which enables testing a zero between-study variance because zero is not a boundary of the parameter space. Second, the methodology is suitable for default Bayesian meta-analyses as it requires no prior information about the unknown parameters. Third, the methodology can even be used in the extreme case when only two studies are available, because Bayes factors are not based on large-sample theory. We illustrate the developed methods by applying them to two meta-analyses and introduce easy-to-use software in the R package BFpack to compute the proposed Bayes factors.
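
The core of the MAREMA model is the marginal likelihood obtained after integrating out the study-specific effects: each observed effect size is then normal with mean equal to the overall effect and variance equal to its sampling variance plus the between-study variance. The sketch below illustrates only this marginalization with invented numbers; the actual Bayes factor computations are provided by the R package BFpack, as noted above.

```python
# Minimal sketch of the marginalization behind the MAREMA model (invented
# numbers; the actual Bayes factors are computed with the R package BFpack):
# after integrating out the study-specific effects, each observed effect size
# y_i is normal with mean mu and variance v_i + tau2. Because tau2 may even be
# slightly negative as long as v_i + tau2 > 0, tau2 = 0 is not a boundary value.
import numpy as np
from scipy import stats

def marema_loglik(mu, tau2, yi, vi):
    """Marginal log-likelihood with study-specific effects integrated out."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    return np.sum(stats.norm.logpdf(yi, loc=mu, scale=np.sqrt(vi + tau2)))

# Hypothetical data: the marginal likelihood is defined even with only two studies.
print(marema_loglik(mu=0.2, tau2=0.01, yi=[0.30, 0.10], vi=[0.02, 0.03]))
```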


2018 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Publication biases and questionable research practices are assumed to be two of the main causes of the low replication rates observed in the social sciences. These problems not only increase the proportion of false positives in the literature but can also lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect and correct such bias in meta-analytic results. We present an evaluation of the performance of six of these tools in detecting bias. To assess the Type I error rate and the statistical power of these tools, we simulated a large variety of literatures that differed with regard to underlying true effect size, heterogeneity, number of available primary studies, and variation of sample sizes in these primary studies. Furthermore, simulated primary studies were subjected to different degrees of publication bias. Our results show that the power of the detection methods follows a complex pattern. Across all simulated conditions, no method consistently outperformed all others. Hence, choosing an optimal method would require knowledge about parameters (e.g., true effect size, heterogeneity) that meta-analysts cannot have. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.


2018 ◽  
Author(s):  
Niclas Kuper ◽  
Antonia Bott

Moral licensing describes the phenomenon that displaying moral behavior can lead to subsequent immoral behavior. This is usually explained by the idea that an initial moral act affirms the moral self-image and hence licenses subsequent immoral acts. Previous meta-analyses on moral licensing indicate significant overall effects of d > .30. However, several large replication studies have either not found the effect or reported a substantially smaller effect size. The present article investigated whether this can be attributed to publication bias. Datasets from two previous meta-analyses on moral licensing were compared and, when necessary, modified. The larger dataset was used for the present analyses. Using PET-PEESE and a three-parameter selection model (3-PSM), we found some evidence for publication bias. The adjusted effect sizes were reduced to d = -.05, p = .64 and d = .18, p = .002, respectively. While the first estimate could be an underestimation, we also found indications that the second estimate might exaggerate the true effect size. It is concluded that both the evidence for and the size of moral licensing effects have likely been inflated by publication bias. Furthermore, our findings indicate that culture moderates the moral licensing effect. Recommendations for future meta-analytic and empirical work are given. Subsequent studies on moral licensing should be adequately powered and ideally pre-registered.


2018 ◽  
Author(s):  
Michele B. Nuijten ◽  
Marcel A. L. M. van Assen ◽  
Hilde Augusteijn ◽  
Elise Anne Victoire Crompvoets ◽  
Jelte M. Wicherts

In this meta-study, we analyzed 2,442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of .26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We conclude that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.
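
As a rough check on what such power figures mean, the sketch below approximates the power of a two-sided test of a Pearson correlation via the Fisher z-transformation at the median sample size of 60 for small, medium, and large effects (r = .1, .3, .5). The reported medians aggregate over studies with many different sample sizes, so these numbers are only in the same ballpark, not a reproduction of the meta-study's results.

```python
# Minimal sketch (not the meta-study's computation): approximate power of a
# two-sided test of a Pearson correlation via the Fisher z-transformation,
# evaluated at the median sample size of 60 for Cohen's small/medium/large
# benchmarks (r = .1, .3, .5).
import numpy as np
from scipy import stats

def power_correlation(r, n, alpha=0.05):
    """Approximate power to detect a true correlation r with sample size n."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    ncp = np.arctanh(r) * np.sqrt(n - 3)       # noncentrality on the Fisher z scale
    return stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)

for label, r in [("small", 0.1), ("medium", 0.3), ("large", 0.5)]:
    print(f"{label} effect (r = {r}): power ≈ {power_correlation(r, n=60):.2f}")
```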


2019 ◽  
Vol 103 (4) ◽  
pp. 273-280
Author(s):  
Thomas R. Guskey

School leaders today are making important decisions regarding education innovations based on published average effect sizes, even though few understand exactly how effect sizes are calculated or what they mean. This article explains how average effect sizes are determined in meta-analyses and the importance of including measures of variability with any average effect size. By considering the variation in effect sizes among studies of the same innovation, education leaders can make better decisions about innovations and greatly increase the likelihood of achieving optimal results from implementation.
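
To illustrate the point about variability, the sketch below (hypothetical numbers, not from the article) contrasts two innovations with the same average effect size but different between-study variability: the approximate 95% prediction interval for the effect a new school might experience differs dramatically.

```python
# Hypothetical illustration (numbers invented, not from the article): two
# innovations with the same average effect size but different between-study
# variability yield very different 95% prediction intervals for the effect a
# given school might actually see.
import numpy as np
from scipy import stats

def prediction_interval(mu, se_mu, tau2, k, level=0.95):
    """Approximate random-effects prediction interval (t-distribution with k - 2 df)."""
    t = stats.t.ppf(1 - (1 - level) / 2, df=k - 2)
    half = t * np.sqrt(tau2 + se_mu ** 2)
    return mu - half, mu + half

for label, tau2 in [("low variability", 0.01), ("high variability", 0.16)]:
    lo, hi = prediction_interval(mu=0.40, se_mu=0.05, tau2=tau2, k=20)
    print(f"{label}: average d = 0.40, 95% prediction interval = ({lo:.2f}, {hi:.2f})")
```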


2020 ◽  
Vol 8 (4) ◽  
pp. 36
Author(s):  
Michèle B. Nuijten ◽  
Marcel A. L. M. van Assen ◽  
Hilde E. M. Augusteijn ◽  
Elise A. V. Crompvoets ◽  
Jelte M. Wicherts

In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.


2018 ◽  
Vol 226 (1) ◽  
pp. 56-80 ◽  
Author(s):  
Rolf Ulrich ◽  
Jeff Miller ◽  
Edgar Erdfelder

Publication bias hampers the estimation of true effect sizes. Specifically, effect sizes are systematically overestimated when studies report only significant results. In this paper, we show how this overestimation depends on the true effect size and on the sample size. Furthermore, we review and follow up on methods originally suggested by Hedges (1984), Iyengar and Greenhouse (1988), and Rust, Lehmann, and Farley (1990) that allow the estimation of the true effect size from published test statistics (e.g., from the t-values of reported significant results). Moreover, we adapted these methods to allow meta-analysts to estimate the percentage of researchers who consign undesired results in a research domain to the file drawer. We also apply the same logic to the case in which significant results tend to be underreported. We demonstrate the application of these procedures for conventional one-sample and two-sample t-tests. Finally, we provide R and MATLAB versions of a computer program to estimate the true unbiased effect size and the prevalence of publication bias in the literature.
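
The sketch below illustrates the underlying idea in the simplest one-sample, one-sided case (it is not the authors' R/MATLAB program, and all values are invented): published significant t-values follow a noncentral t-distribution truncated at the critical value, so the true standardized effect can be recovered by maximizing the truncated likelihood, whereas the naive average of significant results is inflated.

```python
# Minimal sketch of the underlying idea (not the authors' R/MATLAB program;
# all values invented), for a one-sample, one-sided test: published significant
# t-values follow a noncentral t-distribution truncated at the critical value,
# so the true standardized effect can be recovered by maximizing the truncated
# likelihood, whereas the naive average of significant results is inflated.
import numpy as np
from scipy import stats, optimize

def truncated_nct_mle(t_obs, n, alpha=0.05):
    """Estimate the true standardized effect from published significant t-values."""
    df = n - 1
    t_crit = stats.t.ppf(1 - alpha, df)

    def neg_loglik(delta):
        ncp = delta * np.sqrt(n)
        dens = stats.nct.logpdf(t_obs, df, ncp)
        pub_prob = stats.nct.sf(t_crit, df, ncp)    # probability of passing the significance filter
        return -(np.sum(dens) - len(t_obs) * np.log(pub_prob))

    return optimize.minimize_scalar(neg_loglik, bounds=(0.0, 2.0), method="bounded").x

# Simulate a literature in which only significant one-sample t-tests (n = 25) get published:
rng = np.random.default_rng(7)
n, delta_true = 25, 0.2
t_all = stats.nct.rvs(df=n - 1, nc=delta_true * np.sqrt(n), size=5000, random_state=rng)
t_sig = t_all[t_all > stats.t.ppf(0.95, n - 1)]
print("naive estimate of delta:", round(np.mean(t_sig) / np.sqrt(n), 3))
print("truncation-corrected estimate:", round(truncated_nct_mle(t_sig, n), 3))
```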

