A new method to explore inferential risks associated with each study in a meta-analysis: An approach based on Design Analysis

2021 ◽  
Author(s):  
Francesca Giorgi

In the last ten years, scientific research has experienced an unprecedented "credibility crisis": when conducting replication studies, researchers often could not reproduce the results of the original studies. Effect sizes were frequently not as strong as in the original studies, and sometimes no effect was found at all. An important side effect of the replicability crisis, however, is that it increased awareness of problematic issues in the published literature and promoted the development of new practices intended to guarantee rigour, transparency, and reproducibility. The aim of the current work is to propose a new method to explore the inferential risks associated with each study in a meta-analysis. Specifically, this method is based on Design Analysis, a power analysis approach developed by Gelman and Carlin (2014), which makes it possible to analyse two other types of error that are not commonly considered: the Type M (Magnitude) error and the Type S (Sign) error, concerning the magnitude and direction of the effects, respectively. We chose the Design Analysis approach because it puts more emphasis on the estimate of the effect size and can be a valuable tool for researchers to make more conscious and informed decisions.
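For illustration, below is a minimal R sketch of a Design Analysis, closely following the retrodesign() function published by Gelman and Carlin (2014): given a plausible true effect size and the standard error of a study, it returns the power, the Type S error rate, and the exaggeration ratio (Type M error). The example values are hypothetical.

```r
# Minimal sketch of Gelman and Carlin's (2014) design analysis ("retrodesign").
# A:     hypothesized true effect size (on the scale of the estimate)
# s:     standard error of the estimate
# alpha: significance level; df: degrees of freedom (Inf gives the normal case)
retrodesign <- function(A, s, alpha = 0.05, df = Inf, n.sims = 10000) {
  z <- qt(1 - alpha / 2, df)
  p.hi <- 1 - pt(z - A / s, df)        # P(estimate significantly positive)
  p.lo <- pt(-z - A / s, df)           # P(estimate significantly negative)
  power <- p.hi + p.lo
  typeS <- p.lo / power                # P(wrong sign | significant)
  estimate <- A + s * rt(n.sims, df)   # simulate replication estimates
  significant <- abs(estimate) > s * z
  exaggeration <- mean(abs(estimate[significant])) / A   # Type M error
  list(power = power, typeS = typeS, exaggeration = exaggeration)
}

# Example: a small hypothesized true effect (0.1) estimated with standard error 0.35
retrodesign(A = 0.1, s = 0.35)
```

For small, noisily estimated effects such as this one, statistically significant results can have a non-trivial probability of carrying the wrong sign and tend to overestimate the true effect considerably.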

2021 ◽  
Vol 44 ◽  
Author(s):  
Robert M. Ross ◽  
Robbie C. M. van Aert ◽  
Olmo R. van den Akker ◽  
Michiel van Elk

Lee and Schwarz interpret meta-analytic research and replication studies as providing evidence for the robustness of cleansing effects. We argue that the currently available evidence is unconvincing because (a) publication bias and the opportunistic use of researcher degrees of freedom appear to have inflated meta-analytic effect size estimates, and (b) preregistered replications failed to find any evidence of cleansing effects.


Psychology ◽  
2019 ◽  
Author(s):  
David B. Flora

Simply put, effect size (ES) is the magnitude or strength of association between or among variables. Effect sizes (ESs) are commonly represented numerically (i.e., as parameters for population ESs and statistics for sample estimates of population ESs) but also may be communicated graphically. Although the word “effect” may imply that an ES quantifies the strength of a causal association (“cause and effect”), ESs are used more broadly to represent any empirical association between variables. Effect sizes serve three general purposes: research results reporting, power analysis, and meta-analysis. Even under the same research design, an ES that is appropriate for one of these purposes may not be ideal for another. Effect size can be conveyed graphically or numerically using either unstandardized metrics, which are interpreted relative to the original scales of the variables involved (e.g., the difference between two means or an unstandardized regression slope), or standardized metrics, which are interpreted in relative terms (e.g., Cohen’s d or multiple R2). Whereas unstandardized ESs and graphs illustrating ES are typically most effective for research reporting, that is, communicating the original findings of an empirical study, many standardized ES measures have been developed for use in power analysis and especially meta-analysis. Although the concept of ES is clearly fundamental to data analysis, ES reporting has been advocated as an essential complement to null hypothesis significance testing (NHST), or even as a replacement for NHST. A null hypothesis significance test involves making a dichotomous judgment about whether to reject a hypothesis that a true population effect equals zero. Even in the context of a traditional NHST paradigm, ES is a critical concept because of its central role in power analysis.
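As a concrete illustration of the distinction between unstandardized and standardized effect sizes, here is a short R sketch with simulated (hypothetical) two-group data that computes both the raw mean difference and Cohen's d:

```r
# Hypothetical two-group example: unstandardized vs. standardized effect size.
set.seed(1)
group_a <- rnorm(50, mean = 102, sd = 15)   # simulated scores, group A
group_b <- rnorm(50, mean = 95,  sd = 15)   # simulated scores, group B

# Unstandardized effect size: raw mean difference, in the scale's original units
mean_diff <- mean(group_a) - mean(group_b)

# Standardized effect size: Cohen's d, the mean difference in pooled-SD units
pooled_sd <- sqrt(((length(group_a) - 1) * var(group_a) +
                   (length(group_b) - 1) * var(group_b)) /
                  (length(group_a) + length(group_b) - 2))
cohens_d <- mean_diff / pooled_sd

c(mean_difference = mean_diff, cohens_d = cohens_d)
```

The raw mean difference is read in the original units of the measurement scale, whereas Cohen's d expresses the same difference in pooled standard deviation units, which is what makes it usable across studies in power analysis and meta-analysis.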


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Jia-Jin Wei ◽  
En-Xuan Lin ◽  
Jian-Dong Shi ◽  
Ke Yang ◽  
Zong-Liang Hu ◽  
...  

Background: Meta-analysis is a statistical method for synthesizing evidence from a number of independent studies, including clinical studies with binary outcomes. In practice, when there are zero events in one or both groups, statistical problems can arise in the subsequent analysis.

Methods: In this paper, taking the relative risk as the effect size, we conduct a comparative study of four continuity correction methods and a state-of-the-art method that requires no continuity correction, the generalized linear mixed model (GLMM). To further advance the literature, we also introduce a new continuity correction method for estimating the relative risk.

Results: In simulation studies, the new method performs well in terms of mean squared error when there are few studies, whereas the GLMM performs best when the number of studies is large. In addition, a reanalysis of recent coronavirus disease 2019 (COVID-19) data shows that double-zero-event studies affect the estimate of the mean effect size.

Conclusions: We recommend the new method for handling zero-event studies when a meta-analysis contains few studies, and the GLMM when the number of studies is large. Double-zero-event studies may be informative, so we suggest not excluding them.
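For context, the sketch below shows the conventional constant (0.5) continuity correction for the log relative risk of a single study; it is not the new correction method proposed in the paper, whose details are not given in the abstract.

```r
# Conventional constant continuity correction for the log relative risk.
# e1, n1: events and sample size in the treatment group
# e2, n2: events and sample size in the control group
# When either group has zero events, cc is added to every cell of the 2x2 table,
# so each group total increases by 2 * cc.
log_rr_cc <- function(e1, n1, e2, n2, cc = 0.5) {
  if (e1 == 0 || e2 == 0) {
    e1 <- e1 + cc; e2 <- e2 + cc
    n1 <- n1 + 2 * cc; n2 <- n2 + 2 * cc
  }
  yi <- log((e1 / n1) / (e2 / n2))          # log relative risk
  vi <- 1 / e1 - 1 / n1 + 1 / e2 - 1 / n2   # approximate sampling variance
  c(yi = yi, vi = vi)
}

# Example: a single-zero-event study with 0/50 events vs. 3/50 events
log_rr_cc(e1 = 0, n1 = 50, e2 = 3, n2 = 50)
```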


2021 ◽  
Author(s):  
Robbie Cornelis Maria van Aert ◽  
Jelte M. Wicherts

Outcome reporting bias (ORB) refers to the bias that arises when researchers selectively report outcomes based on their statistical significance. ORB leads to inflated average effect size estimates in a meta-analysis if, for each study, only the outcome with the largest effect size is reported. We propose a new method (CORB) to correct for ORB that includes an estimate of the variability of the outcomes' effect size as a moderator in a meta-regression model. This estimate of the variability of the outcomes' effect size can be computed by assuming a correlation among the outcomes. Results of a Monte Carlo simulation study showed that, without any correction for ORB, effect size in meta-analyses may be severely overestimated. The CORB method accurately estimates effect size even when the overestimation caused by ORB is largest. Applying the new method to a meta-analysis on the effect of playing violent video games on aggressive cognition showed that the average effect size estimate decreased when correcting for ORB. We recommend routinely applying methods to correct for ORB in any meta-analysis, and we provide annotated R code and functions to help researchers apply the CORB method.
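As a rough illustration of the idea behind CORB (not the authors' own implementation, which is available in their annotated R code), the sketch below fits a meta-regression with a placeholder moderator, var_outcomes, standing in for the estimated variability of the outcomes' effect sizes; the model intercept, i.e., the expected effect when that variability is zero, serves as the bias-corrected estimate.

```r
# Requires the metafor package.
library(metafor)

set.seed(42)
k <- 30                                   # number of studies
vi <- runif(k, 0.01, 0.10)                # sampling variances
var_outcomes <- runif(k, 0, 0.05)         # placeholder moderator: estimated
                                          # variability of the outcomes' effect sizes
# Simulate observed effects whose overestimation grows with the moderator
yi <- rnorm(k, mean = 0.20 + 2 * var_outcomes, sd = sqrt(vi))

fit <- rma(yi = yi, vi = vi, mods = ~ var_outcomes)
summary(fit)   # the intercept serves as the ORB-corrected effect size estimate
```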


2020 ◽  
Author(s):  
Molly Lewis ◽  
Maya B Mathur ◽  
Tyler VanderWeele ◽  
Michael C. Frank

What is the best way to estimate the size of important effects? Should we aggregate across disparate findings using statistical meta-analysis, or instead run large, multi-lab replications (MLR)? A recent paper by Kvarven, Strømland, and Johannesson (2020) compared effect size estimates derived from these two different methods for 15 different psychological phenomena. The authors report that, for the same phenomenon, the meta-analytic estimate tends to be about three times larger than the MLR estimate. These results pose an important puzzle: What is the relationship between these two estimates? Kvarven et al. suggest that their results undermine the value of meta-analysis. In contrast, we argue that both meta-analysis and MLR are informative, and that the discrepancy between estimates obtained via the two methods is in fact still unexplained. Informed by re-analyses of Kvarven et al.’s data and by other empirical evidence, we discuss possible sources of this discrepancy and argue that understanding the relationship between estimates obtained from these two methods is an important puzzle for future meta-scientific research.


2018 ◽  
Vol 4 (1) ◽  
Author(s):  
Eva Specker ◽  
Helmut Leder

The present study is a pre-registered replication of a study by Specker et al. (2018) testing the hypothesis that the brightness of colors is associated with positivity. Our results showed an implicit association between brightness and positivity in both Study 1 and Study 2; however, an explicit association between brightness and positivity was found only in Study 2, thereby replicating three of the four original effects. To investigate these effects in more detail, we present a meta-analysis of both the original and the replication study, which indicated a large effect of 1.31 [1.12, 1.51]. In addition, we used meta-analysis to assess potential moderators of the effect, in particular stimulus type (chromatic vs. achromatic) and measure type (implicit vs. explicit). The effect is stronger when measured implicitly than when measured explicitly, and stronger when achromatic stimuli are used. In sum, we take these findings to indicate that there is a strong and replicable association between brightness and positivity. These findings offer researchers interested in this effect concrete guidance for designing future studies, including effect size estimates for power analysis and choices of stimuli and measures.
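As an illustration of how such a meta-analytic estimate can feed into power analysis, the sketch below uses base R's power.t.test(), assuming, purely for illustration, that the reported 1.31 is a standardized mean difference for a two-group comparison (the abstract does not name the effect size metric or the intended design):

```r
# Sample size needed per group for 80% power at alpha = .05, assuming the
# meta-analytic point estimate (1.31) is a standardized mean difference.
power.t.test(delta = 1.31, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")

# Planning for the lower bound of the interval (1.12) gives a more
# conservative (larger) sample size.
power.t.test(delta = 1.12, sd = 1, sig.level = 0.05, power = 0.80)
```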


