Meta-Analytic Methods to Detect Publication Bias in Behavior Science Research

Author(s):  
Art Dowdy ◽  
Donald A. Hantula ◽  
Jason C. Travers ◽  
Matt Tincani


2019 ◽  
Vol 2 (2) ◽  
pp. 115-144 ◽  
Author(s):  
Evan C. Carter ◽  
Felix D. Schönbrodt ◽  
Will M. Gervais ◽  
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, it is not clear which methods work best for data typically seen in psychology. Here, we present a comprehensive simulation study in which we examined how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We simulated several levels of questionable research practices, publication bias, and heterogeneity, and used study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all the others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and conduct large-scale, preregistered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.
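The recommended sensitivity analysis can be sketched in a few lines: apply several estimators that make different assumptions about bias to the same set of effect sizes and compare the results. The Python sketch below is illustrative only; the effect sizes are invented, and the three estimators shown (fixed effect, DerSimonian-Laird random effects, and PET) are common choices rather than the specific set evaluated in the article, whose own simulation code is available at https://osf.io/rf3ys.

import numpy as np

def fixed_effect(y, v):
    # Inverse-variance weighted (fixed-effect) mean.
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w)

def random_effects_dl(y, v):
    # DerSimonian-Laird random-effects mean.
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_star = 1.0 / (v + tau2)
    return np.sum(w_star * y) / np.sum(w_star)

def pet_intercept(y, v):
    # PET: weighted regression of effect size on its standard error;
    # the intercept estimates the effect of a hypothetical study with SE = 0.
    se = np.sqrt(v)
    X = np.column_stack([np.ones_like(se), se])
    sw = np.sqrt(1.0 / v)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]

# Hypothetical effect sizes (Cohen's d) and sampling variances.
y = np.array([0.51, 0.42, 0.35, 0.28, 0.60, 0.12, 0.45, 0.22])
v = np.array([0.09, 0.05, 0.03, 0.02, 0.12, 0.01, 0.06, 0.02])

for name, est in [("Fixed effect", fixed_effect),
                  ("Random effects (DL)", random_effects_dl),
                  ("PET", pet_intercept)]:
    print(f"{name:>20}: d = {est(y, v):.3f}")

If the estimates diverge substantially, the conclusion should be reported as conditional on which bias scenario is judged most plausible, which is the sensitivity-analysis practice the abstract recommends.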


2017 ◽  
Author(s):  
Evan C Carter ◽  
Felix D. Schönbrodt ◽  
Will M Gervais ◽  
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, much of this work has not been tailored specifically to psychology, so it is not clear which methods work best for data typically seen in our field. Here, we present a comprehensive simulation study to examine how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We created such scenarios by simulating several levels of questionable research practices, publication bias, and heterogeneity, and by using study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change based on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and to conduct large-scale, pre-registered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.
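As a toy illustration of why such corrections are needed, the Python sketch below simulates a heterogeneous literature, suppresses most nonsignificant results, and compares the naive inverse-variance mean of the "published" studies with the true effect. All settings (true effect, heterogeneity, sample-size range, 10% publication rate for nonsignificant results) are assumptions chosen for illustration and are far simpler than the simulation design described above.

import numpy as np

rng = np.random.default_rng(1)
true_delta, tau, k = 0.2, 0.1, 5000    # true mean effect, heterogeneity, number of studies
n = rng.integers(20, 200, size=k)      # per-group sample sizes

theta = rng.normal(true_delta, tau, size=k)   # study-level true effects
v = 2.0 / n + theta ** 2 / (4.0 * n)          # approximate variance of Cohen's d
d = rng.normal(theta, np.sqrt(v))             # observed effect sizes
significant = d / np.sqrt(v) > 1.96           # "positive and significant" studies

# Publication bias: significant studies always published,
# nonsignificant ones published only 10% of the time (assumed selection rule).
published = significant | (rng.random(k) < 0.10)

naive_all = np.average(d, weights=1 / v)
naive_published = np.average(d[published], weights=1 / v[published])
print(f"true effect:              {true_delta:.2f}")
print(f"meta-analysis, all:       {naive_all:.2f}")
print(f"meta-analysis, published: {naive_published:.2f}")  # noticeably inflated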


2021 ◽  
Author(s):  
Eric R. Louderback ◽  
Sally M Gainsbury ◽  
Robert Heirene ◽  
Karen Amichia ◽  
Alessandra Grossman ◽  
...  

The replication crisis has stimulated researchers around the world to adopt open science research practices intended to reduce publication bias and improve research quality. Open science practices include study pre-registration, open data, open publication, and avoiding methods that can lead to publication bias and low replication rates. Although the gambling studies field uses research methods similar to those of behavioral research fields that have struggled with replication, we know little about the uptake of open science research practices in gambling-focused research. We conducted a scoping review of 500 recent (1/1/2016 – 12/1/2019) studies focused on gambling and problem gambling to examine the use of open science and transparent research practices. Our results showed that although 54.6% (95% CI: [50.2, 58.9]) of studies used at least one of nine open science practices, the prevalence of each individual practice was low: 1.6% for pre-registration (95% CI: [0.8, 3.1]), 3.2% for open data (95% CI: [2.0, 5.1]), 0% for open notebook, 35.2% for open access (95% CI: [31.1, 39.5]), 7.8% for open materials (95% CI: [5.8, 10.5]), 1.4% for open code (95% CI: [0.7, 2.9]), and 15.0% for preprint posting (95% CI: [12.1, 18.4]). In all, 6.4% (95% CI: [4.6, 8.9]) of studies reported a power analysis and 2.4% (95% CI: [1.4, 4.2]) were replication studies. Exploratory analyses showed that studies that used any open science practice, and open access in particular, had higher citation counts. We suggest several practical ways to enhance the uptake of open science principles and practices both within gambling studies and in science more broadly.
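The confidence intervals quoted above are for simple proportions out of 500 coded studies. The abstract does not state which interval method was used; the Wilson score interval in the short Python sketch below (with study counts inferred from the reported percentages) reproduces the quoted bounds to within rounding.

from math import sqrt

def wilson_ci(successes, n, z=1.96):
    # 95% Wilson score interval for a binomial proportion.
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(8, 500)    # 8/500 = 1.6% pre-registered (count inferred from 1.6%)
print(f"1.6%  -> 95% CI [{100 * lo:.1f}, {100 * hi:.1f}]")   # ~[0.8, 3.1]

lo, hi = wilson_ci(273, 500)  # 273/500 = 54.6% used at least one practice
print(f"54.6% -> 95% CI [{100 * lo:.1f}, {100 * hi:.1f}]")   # ~[50.2, 58.9]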


ASHA Leader ◽  
2013 ◽  
Vol 18 (4) ◽  
pp. 62-62

Apply for Audiology/Hearing Science Research Travel Award


2012 ◽  
Vol 22 (1) ◽  
pp. 14-20
Author(s):  
Donald Finan ◽  
Stephen M. Tasko

The history of speech-language pathology as a profession encompasses a tradition of knowledge generation. In recent years, the quantity of speech science research and the presence of speech scientists within the domain of the American Speech-Language-Hearing Association (ASHA) have diminished, even as ASHA membership and the size of the ASHA Convention have grown dramatically. The professional discipline of speech science has become increasingly fragmented, yet speech science coursework is an integral part of the mandated curriculum. Establishing an active, vibrant community structure will serve to aid researchers, educators, and clinicians as they work in the common area of speech science.


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
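The evaluation logic described above (estimate the Type I error rate on unbiased simulated literatures and power on biased ones) can be sketched compactly. The Python example below does this for a single detection method, an Egger-type regression test; the simulation settings, selection rule, and parameter values are invented for illustration and are much cruder than the conditions examined in the article.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def simulate_meta(k, delta, tau, biased):
    # Simulate k published two-group studies; under bias, nonsignificant
    # results are published with only 10% probability (assumed rule).
    d, v = [], []
    while len(d) < k:
        n = rng.integers(20, 200)
        theta = rng.normal(delta, tau)
        var = 2.0 / n + theta ** 2 / (4.0 * n)
        eff = rng.normal(theta, np.sqrt(var))
        sig = eff / np.sqrt(var) > 1.96
        if (not biased) or sig or rng.random() < 0.10:
            d.append(eff); v.append(var)
    return np.array(d), np.array(v)

def egger_p(d, v):
    # Egger's regression test: regress the standardized effect on precision
    # and test whether the intercept differs from zero.
    se = np.sqrt(v)
    y, X = d / se, np.column_stack([np.ones_like(se), 1.0 / se])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - 2
    sigma2 = res[0] / dof
    se_b0 = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return 2 * stats.t.sf(abs(beta[0] / se_b0), dof)

reps, k = 500, 30
for label, biased in [("rejection rate, no bias", False), ("rejection rate, bias", True)]:
    rate = np.mean([egger_p(*simulate_meta(k, 0.2, 0.1, biased)) < 0.05
                    for _ in range(reps)])
    print(f"{label}: {rate:.2f}")

Varying the true effect, heterogeneity, number of studies, and selection rule in such a loop is the basic machinery by which error rates and power of detection methods are mapped out.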

