The role of meta-analysis and preregistration in assessing the evidence for cleansing effects

2021 ◽  
Vol 44 ◽  
Author(s):  
Robert M. Ross ◽  
Robbie C. M. van Aert ◽  
Olmo R. van den Akker ◽  
Michiel van Elk

Abstract Lee and Schwarz interpret meta-analytic research and replication studies as providing evidence for the robustness of cleansing effects. We argue that the currently available evidence is unconvincing because (a) publication bias and the opportunistic use of researcher degrees of freedom appear to have inflated meta-analytic effect size estimates, and (b) preregistered replications failed to find any evidence of cleansing effects.


2017 ◽  
Vol 22 (4) ◽  
pp. 347-377 ◽  
Author(s):  
Arlin J. Benjamin ◽  
Sven Kepes ◽  
Brad J. Bushman

A landmark 1967 study showed that simply seeing a gun can increase aggression—called the “weapons effect.” Since 1967, many other studies have attempted to replicate and explain the weapons effect. This meta-analysis integrates the findings of weapons effect studies conducted from 1967 to 2017 and uses the General Aggression Model (GAM) to explain the weapons effect. It includes 151 effect-size estimates from 78 independent studies involving 7,668 participants. As predicted by the GAM, our naïve meta-analytic results indicate that the mere presence of weapons increased aggressive thoughts, hostile appraisals, and aggression, suggesting a cognitive route from weapons to aggression. Weapons did not significantly increase angry feelings. Yet, a comprehensive sensitivity analysis indicated that not all naïve mean estimates were robust to the presence of publication bias. In general, these results suggest that the published literature tends to overestimate the weapons effect for some outcomes and moderators.


2021 ◽  
Author(s):  
Anton Olsson-Collentine ◽  
Robbie Cornelis Maria van Aert ◽  
Marjan Bakker ◽  
Jelte M. Wicherts

There are arbitrary decisions to be made (i.e., researcher degrees of freedom) in the execution and reporting of most research. These decisions allow for many possible outcomes from a single study. Selective reporting of results from this ‘multiverse’ of outcomes, whether intentional (p-hacking) or not, can lead to inflated effect size estimates and false positive results in the literature. In this study, we examine and illustrate the consequences of researcher degrees of freedom in primary research, both for primary outcomes and for subsequent meta-analyses. We used a set of 10 preregistered multi-lab direct replication projects from psychology (Registered Replication Reports) with a total of 14 primary outcome variables, 236 labs, and 37,602 participants. By exploiting researcher degrees of freedom in each project, we were able to compute between 3,840 and 2,621,440 outcomes per lab. We show that researcher degrees of freedom in primary research can cause substantial variability in effect size estimates, which we denote the Underlying Multiverse Variability (UMV). In our data, the median UMV across labs was 0.1 standard deviations (interquartile range = 0.09–0.15). In one extreme case, the effect size estimate could change by d = 1.27, evidence that p-hacking can in some (rare) cases provide support for almost any conclusion. We also show that researcher degrees of freedom in primary research provide another source of uncertainty in meta-analysis beyond those usually estimated. This would not be a large concern for meta-analysis if researchers made all arbitrary decisions at random. However, emulating selective reporting of lab results inflated meta-analytic average effect size estimates in our data by 0.1 to 0.48 standard deviations, depending to a large degree on the number of possible outcomes at the lab level (i.e., multiverse size).
Our results illustrate the importance of making research decisions transparent (e.g., through preregistration and multiverse analysis), evaluating studies for selective reporting, and whenever feasible making raw data available.
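The "multiverse" idea above can be made concrete with a toy sketch: enumerate a handful of defensible analysis choices for one simulated lab (here, outlier cut-offs and an optional transformation — both hypothetical illustrations, not the authors' actual pipeline) and record the effect size each analysis path yields. The spread of those estimates is that lab's UMV.

```python
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 60)    # simulated lab data, two conditions
treatment = rng.normal(0.2, 1.0, 60)

def cohens_d(a, b):
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

effects = []
for z_cut in (2.0, 2.5, 3.0, np.inf):         # hypothetical outlier-exclusion rules
    for transform in (lambda x: x, np.tanh):  # raw scores vs. a bounded transform
        a = transform(treatment[np.abs(treatment - treatment.mean())
                                < z_cut * treatment.std(ddof=1)])
        b = transform(control[np.abs(control - control.mean())
                              < z_cut * control.std(ddof=1)])
        effects.append(cohens_d(a, b))

effects = np.array(effects)
umv = effects.max() - effects.min()  # spread of this lab's multiverse of outcomes
print(f"{effects.size} outcomes, d from {effects.min():.2f} to {effects.max():.2f}, "
      f"UMV = {umv:.2f}")
```

Selectively reporting only the largest of these outcomes is exactly the kind of choice that, aggregated across labs, inflates a meta-analytic mean.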


2020 ◽  
Author(s):  
Malte Friese ◽  
Julius Frankenbach

Science depends on trustworthy evidence. A biased scientific record is therefore of questionable value: it impedes scientific progress, and the public receives advice based on unreliable evidence, which can have far-reaching detrimental consequences. Meta-analysis is a valid and reliable technique for summarizing research evidence. However, meta-analytic effect size estimates may themselves be biased, threatening the validity and usefulness of meta-analyses for promoting scientific progress. Here, we offer a large-scale simulation study to elucidate how p-hacking and publication bias distort meta-analytic effect size estimates under a broad array of circumstances reflecting those found across a variety of research areas. The results revealed that, first, very high levels of publication bias can severely distort the cumulative evidence. Second, p-hacking and publication bias interact: at relatively high and low levels of publication bias, p-hacking does comparatively little harm, but at medium levels of publication bias, p-hacking can contribute considerably to bias, especially when the true effects are very small or approach zero. Third, p-hacking can severely increase the rate of false positives. A key implication is that, in addition to preventing p-hacking, policies in research institutions, funding agencies, and scientific journals need to make the prevention of publication bias a top priority to ensure a trustworthy base of evidence.
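A minimal simulation, far simpler than the large-scale study described above but in the same spirit, shows the mechanism: each simulated study reports the most favourable of several analysis variants (p-hacking), nonsignificant results are published only occasionally (publication bias), and the mean of the published estimates drifts above the true effect. All parameter values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
true_d, n, k_hack, pub_prob_ns = 0.1, 30, 3, 0.1
se = np.sqrt(2 / n)  # approximate standard error of d with two groups of size n

published = []
for _ in range(5000):
    draws = rng.normal(true_d, se, k_hack)  # k_hack analysis variants per study
    d_hat = draws.max()                     # p-hacking: report the best variant
    if d_hat / se > 1.96 or rng.random() < pub_prob_ns:
        published.append(d_hat)             # significant, or a lucky null result

meta_mean = float(np.mean(published))       # equal-weight mean (all SEs identical)
print(f"true d = {true_d}, studies published = {len(published)}, "
      f"meta-analytic mean = {meta_mean:.2f}")
```

Varying `pub_prob_ns` and `k_hack` reproduces the interaction the abstract describes: selection and p-hacking each inflate the mean, and their joint effect depends on how strong each is.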


2015 ◽  
Vol 19 (2) ◽  
pp. 172-182 ◽  
Author(s):  
Michèle B. Nuijten ◽  
Marcel A. L. M. van Assen ◽  
Coosje L. S. Veldkamp ◽  
Jelte M. Wicherts

Replication is often viewed as the demarcation between science and nonscience. However, contrary to this commonly held view, we show that under the current (selective) publication system replications may increase bias in effect size estimates. Specifically, we examine the effect of replication on bias in the estimated population effect size as a function of publication bias and the studies’ sample size or power. We show analytically that incorporating the results of published replication studies will in general not lead to less bias in the estimated population effect size. We therefore conclude that mere replication will not solve the problem of overestimation of effect sizes. We discuss the implications of our findings for interpreting the results of published and unpublished studies, and for conducting and interpreting meta-analyses. We also discuss solutions to the problem of overestimation of effect sizes, such as discarding and not publishing small studies with low power, and implementing practices that completely eliminate publication bias (e.g., study registration).
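The dependence of overestimation on sample size (and hence power) can be illustrated with a sketch under an extreme assumption that only significant results are published; the effect size and sample sizes below are illustrative, not the paper's analytical results.

```python
import numpy as np

rng = np.random.default_rng(2)
true_d = 0.2
mean_published = {}
for n in (20, 80, 320):                      # per-group sample sizes
    se = np.sqrt(2 / n)                      # approximate SE of Cohen's d
    d_hat = rng.normal(true_d, se, 200_000)  # sampling distribution of estimates
    sig = d_hat[d_hat / se > 1.96]           # extreme publication bias: p < .05 only
    mean_published[n] = float(sig.mean())
    print(f"n = {n:3d}: power ≈ {sig.size / d_hat.size:.2f}, "
          f"mean published d = {mean_published[n]:.2f} (true d = {true_d})")
```

Small, underpowered studies must overshoot the true effect to reach significance, so their published estimates are the most inflated — which is why replicating with equally small studies does not remove the bias.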


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2021 ◽  
pp. 152483802110216 ◽  
Author(s):  
Brooke N. Lombardi ◽  
Todd M. Jensen ◽  
Anna B. Parisi ◽  
Melissa Jenkins ◽  
Sarah E. Bledsoe

Background: The association between a lifetime history of sexual victimization and the well-being of women during the perinatal period has received increasing attention. However, research investigating this relationship has yet to be systematically reviewed or quantitatively synthesized. Aim: This systematic review and meta-analysis aims to calculate the pooled effect size estimate of the statistical association between a lifetime history of sexual victimization and perinatal depression (PND). Method: Four bibliographic databases were systematically searched, and reference harvesting was conducted to identify peer-reviewed articles that empirically examined associations between a lifetime history of sexual victimization and PND. A random effects model was used to ascertain an overall pooled effect size estimate in the form of an odds ratio and corresponding 95% confidence interval (CI). Subgroup analyses were also conducted to assess whether particular study features and sample characteristics (e.g., race and ethnicity) influenced the magnitude of effect size estimates. Results: This review included 36 studies, with 45 effect size estimates available for meta-analysis. Women with a lifetime history of sexual victimization had 51% greater odds of experiencing PND relative to women with no history of sexual victimization (OR = 1.51, 95% CI [1.35, 1.67]). Effect size estimates varied considerably according to the PND instrument used in each study and the racial/ethnic composition of each sample. Conclusion: Findings provide compelling evidence for an association between a lifetime history of sexual victimization and PND. Future research should focus on screening practices and interventions that identify and support survivors of sexual victimization during the perinatal period.
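As a hedged sketch of the pooling step described in the Method, the following computes a DerSimonian–Laird random-effects estimate on log odds ratios and back-transforms to an OR with a 95% CI. The study ORs and standard errors are invented for illustration; they are not the review's data.

```python
import numpy as np

or_est = np.array([1.4, 1.8, 1.2, 1.6, 1.5])       # hypothetical study odds ratios
se_log = np.array([0.15, 0.20, 0.10, 0.25, 0.18])  # hypothetical SEs of log(OR)

y = np.log(or_est)
w = 1 / se_log**2                        # fixed-effect (inverse-variance) weights
fixed = (w * y).sum() / w.sum()
Q = (w * (y - fixed)**2).sum()           # Cochran's Q heterogeneity statistic
c = w.sum() - (w**2).sum() / w.sum()
tau2 = max(0.0, (Q - (y.size - 1)) / c)  # DL estimate of between-study variance

w_re = 1 / (se_log**2 + tau2)            # random-effects weights
pooled = (w_re * y).sum() / w_re.sum()
se_pooled = np.sqrt(1 / w_re.sum())
lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI [{lo:.2f}, {hi:.2f}], tau^2 = {tau2:.3f}")
```

The subgroup analyses mentioned above amount to running this same pooling separately within subsets of studies (e.g., by PND instrument) and comparing the resulting estimates.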


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Liansheng Larry Tang ◽  
Michael Caudy ◽  
Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, but they may yield different or sometimes discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to properly interpret the results from multiple meta-analyses, especially when their results are conflicting. In this paper, we first introduce a method to synthesize the meta-analytic results when multiple meta-analyses use the same type of summary effect estimates. When meta-analyses use different types of effect sizes, the meta-analysis results cannot be directly combined. We propose a two-step frequentist procedure to first convert the effect size estimates to the same metric and then summarize them with a weighted mean estimate. Our proposed method offers several advantages over existing methods by Hemming et al. (2012). First, different types of summary effect sizes are considered. Second, our method provides the same overall effect size as conducting a meta-analysis on all individual studies from multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
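The two-step procedure can be sketched as follows, assuming one meta-analysis reports a Cohen's d and another a log odds ratio: convert d to log(OR) via the standard approximation log(OR) ≈ d·π/√3, then combine with an inverse-variance weighted mean. All numbers are hypothetical, and this is a simplified illustration of the idea, not the authors' implementation.

```python
import numpy as np

C = np.pi / np.sqrt(3)          # d -> log(OR) conversion factor (logistic approx.)

# Meta-analysis 1 reports Cohen's d; meta-analysis 2 reports a log odds ratio.
d1, se_d1 = 0.30, 0.08          # hypothetical summary from meta-analysis 1
log_or1, se1 = d1 * C, se_d1 * C
log_or2, se2 = 0.45, 0.12       # hypothetical summary from meta-analysis 2

# Step 2: inverse-variance weighted mean on the common (log OR) metric.
y = np.array([log_or1, log_or2])
w = np.array([1 / se1**2, 1 / se2**2])
combined = (w * y).sum() / w.sum()
se_comb = np.sqrt(1 / w.sum())
print(f"combined log OR = {combined:.2f} (SE {se_comb:.2f}), "
      f"OR = {np.exp(combined):.2f}")
```

Note the weighted mean always lands between the two converted estimates and has a smaller standard error than either input, which is the point of synthesizing rather than choosing between discordant meta-analyses.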

