Publication Bias and Evidential Value in Speech, Language, and Hearing Research

2021 ◽  
Author(s):  
Jean Alexander ◽  
James A Green

Purpose: This research examined the evidential value of research in Speech, Language, and Hearing (SLH), and the extent to which there is publication bias in reported findings. We also looked at the prevalence of good research practices, including those that work to minimize publication bias. Method: We extracted statistical results from 51 articles reported in four meta-analyses. These were then analyzed with two recent tests for evidential value and publication bias: the p-curve and the z-curve. These articles were also coded for pre-registration, data access statements, and whether they were replication studies. Results: P-curves were right-skewed, indicating evidential value and ruling out selective reporting as the sole reason for the significant findings. The z-curve similarly found evidential value but detected a relative absence of null results, suggesting there is some publication bias. No studies were pre-registered, no studies had a data access statement, and no studies were full replication studies (3 studies were partial replications). Conclusions: Findings indicate SLH research has evidential value. This means that decision-makers and clinicians can continue to rely on the SLH research evidence base to influence service and clinical decisions. However, the presence of publication bias means that meta-analytic estimates of effectiveness may be exaggerated. Thus, we encourage SLH researchers to engage in study pre-registration, make their data accessible, conduct replication studies, and document null findings.
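The p-curve test used above rests on a simple idea: if studies track a real effect, their statistically significant p-values should pile up near zero (right skew), whereas under selective reporting alone they would be uniform between 0 and .05. A minimal sketch of the binomial version of this check follows; the p-values are invented for illustration, and the full p-curve method adds a continuous Stouffer-style test.

```python
# Minimal p-curve right-skew check (binomial version), assuming the significant
# (p < .05) p-values have already been extracted from the primary studies.
# The values below are invented for illustration only.
from scipy import stats

significant_p_values = [0.001, 0.004, 0.012, 0.019, 0.024, 0.031, 0.038, 0.042]

# Under the null of no true effect, significant p-values are uniform on (0, .05),
# so about half should fall below .025; evidential value shows up as an excess
# of very small p-values.
low = sum(p < 0.025 for p in significant_p_values)
result = stats.binomtest(low, n=len(significant_p_values), p=0.5, alternative="greater")
print(f"{low}/{len(significant_p_values)} significant p-values fall below .025; "
      f"binomial test p = {result.pvalue:.3f}")
```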

2020 ◽  
Vol 228 (1) ◽  
pp. 43-49 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Abstract. Currently, dedicated graphical displays to depict study-level statistical power in the context of meta-analysis are unavailable. Here, we introduce the sunset (power-enhanced) funnel plot to visualize this information for assessing the credibility, or evidential value, of a set of studies. The sunset funnel plot highlights the statistical power of primary studies to detect an underlying true effect of interest in the well-known funnel display, with color-coded power regions and a second power axis. This graphical display allows meta-analysts to incorporate power considerations into classic funnel plot assessments of small-study effects. Nominally significant but low-powered studies might be seen as less credible and as more likely to be affected by selective reporting. We exemplify the application of the sunset funnel plot with two published meta-analyses from medicine and psychology. Software to create this variation of the funnel plot is provided via a tailored R function. In conclusion, the sunset (power-enhanced) funnel plot is a novel and useful graphical display for critically examining and presenting study-level power in the context of meta-analysis.
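The study-level power behind the color coding is the standard power of a two-sided z-test at alpha = .05 for an assumed underlying true effect, given each study's standard error. The authors supply a tailored R function for the full display; the sketch below is only a rough Python illustration of that power calculation and the funnel layout, with invented effect sizes and standard errors.

```python
# Rough sketch of a power-enhanced funnel display, assuming two-sided z-tests at
# alpha = .05 and an assumed true effect; all numbers are illustrative.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

true_effect = 0.30                                   # assumed underlying effect of interest
effects = np.array([0.55, 0.42, 0.31, 0.25, 0.18, 0.40])
std_errors = np.array([0.25, 0.20, 0.15, 0.10, 0.08, 0.22])

z_crit = norm.ppf(0.975)
power = (1 - norm.cdf(z_crit - true_effect / std_errors)
         + norm.cdf(-z_crit - true_effect / std_errors))

# Classic funnel layout (effect size vs. standard error, inverted y-axis),
# with points colored by their power to detect the assumed true effect.
sc = plt.scatter(effects, std_errors, c=power, cmap="RdYlGn", vmin=0, vmax=1)
plt.gca().invert_yaxis()
plt.colorbar(sc, label="Power to detect the assumed true effect")
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.show()
```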


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the results of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2019 ◽  
Vol 3 ◽  
Author(s):  
Niclas Kuper ◽  
Antonia Bott

Moral licensing describes the phenomenon that displaying moral behavior can lead to subsequent immoral behavior. This is usually explained by the idea that an initial moral act affirms the moral self-image and hence licenses subsequent immoral acts. Previous meta-analyses on moral licensing indicate significant overall effects of d > .30. However, several large replication studies have either not found the effect or reported a substantially smaller effect size. The present article investigated whether this can be attributed to publication bias. Datasets from two previous meta-analyses on moral licensing were compared and, when necessary, modified. The larger dataset was used for the present analyses. Using PET-PEESE and a three-parameter selection model (3-PSM), we found some evidence for publication bias. The adjusted effect sizes were reduced to d = -0.05, p = .64 and d = 0.18, p = .002, respectively. While the first estimate could be an underestimation, we also found indications that the second estimate might exaggerate the true effect size. It is concluded that both the evidence for and the size of moral licensing effects have likely been inflated by publication bias. Furthermore, our findings indicate that culture moderates the moral licensing effect. Recommendations for future meta-analytic and empirical work are given. Subsequent studies on moral licensing should be adequately powered and ideally pre-registered.
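PET-PEESE, one of the two adjustment methods used above, estimates a bias-corrected effect as the intercept of a weighted regression of effect sizes on their standard errors (PET) or sampling variances (PEESE), i.e., the predicted effect for a hypothetical study with zero sampling error. A compact sketch with invented data follows; the 3-PSM analysis is a separate selection-model approach not shown here.

```python
# Compact PET-PEESE sketch, assuming study-level effect sizes (d) and standard
# errors are available; the numbers below are invented for illustration only.
import numpy as np
import statsmodels.api as sm

d = np.array([0.55, 0.40, 0.32, 0.28, 0.15, 0.48, 0.36])
se = np.array([0.28, 0.22, 0.18, 0.14, 0.10, 0.25, 0.20])
weights = 1 / se**2

# PET: weighted regression of effect size on the standard error.
pet = sm.WLS(d, sm.add_constant(se), weights=weights).fit()
# PEESE: weighted regression of effect size on the sampling variance.
peese = sm.WLS(d, sm.add_constant(se**2), weights=weights).fit()

# Conditional rule: keep the PET intercept unless it is significantly greater
# than zero (roughly, a one-tailed test at .05), in which case use PEESE.
use_peese = pet.params[0] > 0 and pet.pvalues[0] / 2 < 0.05
adjusted = peese.params[0] if use_peese else pet.params[0]
print(f"PET intercept = {pet.params[0]:.3f}, PEESE intercept = {peese.params[0]:.3f}, "
      f"adjusted estimate = {adjusted:.3f}")
```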


2019 ◽  
Vol 3 (Supplement_1) ◽  
pp. S849-S850
Author(s):  
Christopher Brydges ◽  
Laura Gaeta

Abstract A recently published systematic review (Hein et al., 2019) found that consumption of blueberries could improve memory, executive function, and psychomotor function in healthy children and adults, as well as in adults with mild cognitive impairment. However, attention to questionable research practices (QRPs; such as selective reporting of results and/or performing analyses on data until statistical significance is achieved) has grown in recent years. The purpose of this study was to examine the results of the studies included in the review for potential publication bias and/or QRPs. A p-curve analysis and the test of insufficient variance (TIVA) were conducted on the 22 reported p values to test for evidential value of the published research and for publication bias and QRPs, respectively. The p-curve analyses revealed that the studies did not contain any evidential value for the effect of blueberries on cognitive ability, and the TIVAs suggested that there was evidence of publication bias and/or QRPs in the studies. Although these findings do not indicate that there is no relationship between blueberries and cognitive ability, more high-quality research that is pre-registered and appropriately powered is needed to determine whether a relationship exists at all and, if so, the strength of the evidence supporting it.
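TIVA, mentioned above, rests on the observation that independent test statistics should vary: converted to z-scores, a set of honestly reported results has a variance of at least 1, so a much smaller observed variance points to selective reporting or QRPs. A minimal sketch under that assumption, with invented p-values rather than the 22 from the review:

```python
# Minimal Test of Insufficient Variance (TIVA) sketch; the two-sided p-values
# below are invented for illustration only.
import numpy as np
from scipy import stats

p_values = np.array([0.045, 0.032, 0.041, 0.028, 0.049, 0.038, 0.022, 0.044])

# Convert each p-value to a z-score; sampling variability alone implies Var(z) >= 1.
z = stats.norm.ppf(1 - p_values / 2)
var_z = np.var(z, ddof=1)

# If selective reporting compresses results to just below .05, the observed
# variance shrinks; compare (k - 1) * Var(z) to a chi-square left tail.
k = len(z)
tiva_p = stats.chi2.cdf((k - 1) * var_z, df=k - 1)
print(f"Var(z) = {var_z:.3f}, TIVA p = {tiva_p:.3f}")
```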


2020 ◽  
Author(s):  
Arielle Marks-Anglin ◽  
Yong Chen

Publication bias is a well-known threat to the validity of meta-analyses and, more broadly, the reproducibility of scientific findings. When policies and recommendations are predicated on an incomplete evidence base, it undermines the goals of evidence-based decision-making. Great strides have been made in the last fifty years to understand and address this problem, including calls for mandatory trial registration and the development of statistical methods to detect and correct for publication bias. We offer a historical account of seminal contributions by the evidence synthesis community, with an emphasis on the parallel development of graph-based and selection model approaches. We also draw attention to current innovations and opportunities for future methodological work.


2015 ◽  
Vol 19 (2) ◽  
pp. 172-182 ◽  
Author(s):  
Michèle B. Nuijten ◽  
Marcel A. L. M. van Assen ◽  
Coosje L. S. Veldkamp ◽  
Jelte M. Wicherts

Replication is often viewed as the demarcation between science and nonscience. However, contrary to this commonly held view, we show that in the current (selective) publication system, replications may increase bias in effect size estimates. Specifically, we examine the effect of replication on bias in the estimated population effect size as a function of publication bias and the studies' sample size or power. We show analytically that incorporating the results of published replication studies will in general not lead to less bias in the estimated population effect size. We therefore conclude that mere replication will not solve the problem of overestimation of effect sizes. We discuss the implications of our findings for interpreting results of published and unpublished studies, and for conducting and interpreting meta-analyses. We also discuss solutions to the problem of overestimation of effect sizes, such as discarding and not publishing small studies with low power, and implementing practices that completely eliminate publication bias (e.g., study registration).
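A toy simulation (not the paper's analytical derivation) conveys the core intuition: when replications pass through the same significance filter as original studies, pooling the published results stays biased. All settings below are assumed for illustration.

```python
# Toy simulation: original and replication studies of a low-powered design are
# published only when significant, so the mean published estimate stays inflated.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_d, n_per_group, n_sims = 0.2, 20, 20_000
se = np.sqrt(2 / n_per_group)        # approximate standard error of Cohen's d
z_crit = norm.ppf(0.975)

published = []
for _ in range(n_sims):
    original = rng.normal(true_d, se)
    replication = rng.normal(true_d, se)
    # Publication bias: only statistically significant results enter the literature.
    published.extend(est for est in (original, replication) if abs(est) / se > z_crit)

print(f"True effect = {true_d:.2f}; mean published estimate = {np.mean(published):.2f}")
```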


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Many researchers rely on meta-analysis to summarize research evidence. However, recent replication projects in the behavioral sciences suggest that effect sizes of original studies are overestimated, and this overestimation is typically attributed to publication bias and selective reporting of scientific results. As the validity of meta-analyses depends on the primary studies, there is a concern that systematic overestimation of effect sizes may translate into biased meta-analytic effect sizes. We compare the results of meta-analyses to large-scale pre-registered replications in psychology carried out at multiple labs. The multiple labs replications provide relatively precisely estimated effect sizes, which do not suffer from publication bias or selective reporting. A literature search identified 17 meta-analyses, spanning more than 1,200 effect sizes and more than 370,000 participants, on the same topics as the multiple labs replications. We find that the meta-analytic effect sizes are significantly different from the replication effect sizes for 12 out of the 17 meta-replication pairs. These differences are systematic: on average, meta-analytic effect sizes are about three times as large as the replication effect sizes.
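One concrete way to read such a comparison is to compute a standard random-effects summary (e.g., DerSimonian-Laird) from the published studies and set it against the pre-registered replication estimate of the same effect. The sketch below uses invented numbers, not any of the 17 pairs from the paper.

```python
# DerSimonian-Laird random-effects summary versus an assumed replication estimate;
# all effect sizes and variances are invented for illustration only.
import numpy as np

d = np.array([0.52, 0.61, 0.35, 0.48, 0.44])     # published effect sizes (Cohen's d)
v = np.array([0.04, 0.05, 0.03, 0.06, 0.04])     # within-study sampling variances

# Heterogeneity (tau^2) via the DerSimonian-Laird moment estimator.
w = 1 / v
fixed_mean = np.sum(w * d) / np.sum(w)
q = np.sum(w * (d - fixed_mean) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(d) - 1)) / c)

# Random-effects summary effect.
w_re = 1 / (v + tau2)
meta_d = np.sum(w_re * d) / np.sum(w_re)

replication_d = 0.16                              # assumed multi-lab replication estimate
print(f"Meta-analytic d = {meta_d:.2f}, replication d = {replication_d:.2f}, "
      f"ratio = {meta_d / replication_d:.1f}")
```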


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
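To make the kind of evaluation described above concrete, the sketch below simulates literatures with and without publication bias and scores a single detection method, Egger's regression test, by its rejection rate (the paper evaluates six methods across far more conditions). All settings are assumed for illustration.

```python
# Simulate published literatures and score Egger's regression test for small-study
# effects; true effect, selection strength, and study counts are assumed values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def simulate_literature(k, true_d, biased):
    """Draw k published effect sizes; with bias, nonsignificant results rarely survive."""
    effects, ses = [], []
    while len(effects) < k:
        n = rng.integers(20, 200)                # per-group sample size varies across studies
        se = np.sqrt(2 / n)
        est = rng.normal(true_d, se)
        significant = abs(est) / se > 1.96
        if significant or not biased or rng.random() < 0.10:
            effects.append(est)
            ses.append(se)
    return np.array(effects), np.array(ses)

def egger_pvalue(d, se):
    """Egger's test: regress d/se on 1/se; an intercept away from zero flags asymmetry."""
    fit = stats.linregress(1 / se, d / se)
    t = fit.intercept / fit.intercept_stderr
    return 2 * stats.t.sf(abs(t), df=len(d) - 2)

def rejection_rate(true_d, biased, n_literatures=500, k=20):
    hits = sum(egger_pvalue(*simulate_literature(k, true_d, biased)) < 0.05
               for _ in range(n_literatures))
    return hits / n_literatures

print(f"Type I error (no bias, d = 0): {rejection_rate(0.0, biased=False):.2f}")
print(f"Power (strong bias, d = 0):    {rejection_rate(0.0, biased=True):.2f}")
```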


2021 ◽  
pp. 152483802098556
Author(s):  
Mark A. Wood ◽  
Stuart Ross ◽  
Diana Johns

In the last decade, an array of smartphone apps has been designed to prevent crime, violence, and abuse. The evidence base of these apps has, however, yet to be analyzed systematically. To rectify this, the aims of this review were (1) to establish the extent, range, and nature of research into smartphone apps with a primary crime prevention function; (2) to locate gaps in the primary crime prevention app literature; and (3) to develop a typology of primary crime prevention apps. Employing a scoping review methodology and following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, studies were identified via Web of Science, EBSCOhost, and Google Scholar. We included English-language research published between 2008 and 2020 that examined smartphone applications designed explicitly for primary crime prevention. Sixty-one publications met our criteria for review, out of an initial sample of 151 identified. Our review identified six types of crime prevention app examined in these publications: self-surveillance apps, decision aid apps, child-tracking apps, educational apps, crime-mapping/alert apps, and crime reporting apps. The findings of our review indicate that most of these forms of primary crime prevention apps have yet to be rigorously evaluated and that many are not evidence-based in their design. Consequently, our review indicates that recent enthusiasm over primary crime prevention apps is not supported by an adequate evidence base.

