Testing for Questionable Research Practices in a Meta-Analysis: An Example from Experimental Parapsychology

PLoS ONE ◽  
2016 ◽  
Vol 11 (5) ◽  
pp. e0153049 ◽  
Author(s):  
Dick J. Bierman ◽  
James P. Spottiswoode ◽  
Aron Bijl


2019 ◽  
Vol 2 (2) ◽  
pp. 115-144 ◽  
Author(s):  
Evan C. Carter ◽  
Felix D. Schönbrodt ◽  
Will M. Gervais ◽  
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, it is not clear which methods work best for data typically seen in psychology. Here, we present a comprehensive simulation study in which we examined how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We simulated several levels of questionable research practices, publication bias, and heterogeneity, and used study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all the others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and conduct large-scale, preregistered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.
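The inflation mechanism this abstract describes can be sketched in a few lines: simulate many two-group studies with a small true effect, let only statistically significant positive results be "published," and average the surviving effect sizes. The effect size, per-group sample size, and the hard p < .05 filter below are illustrative assumptions, not the simulation design actually used in the paper.

```python
import numpy as np

# Illustrative sketch of how publication bias inflates a naive
# meta-analytic estimate; all parameters here are assumptions,
# not the paper's simulation design.
rng = np.random.default_rng(42)
true_d = 0.2        # small true standardized effect
n_per_group = 30    # per-group sample size
n_studies = 2000    # studies attempted

published = []
for _ in range(n_studies):
    g1 = rng.normal(true_d, 1.0, n_per_group)
    g2 = rng.normal(0.0, 1.0, n_per_group)
    pooled_sd = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
    d = (g1.mean() - g2.mean()) / pooled_sd
    # Approximate standard error of Cohen's d for equal group sizes
    se = np.sqrt(2 / n_per_group + d**2 / (4 * n_per_group))
    # Hard publication filter: only significant positive results survive
    if d / se > 1.96:
        published.append(d)

naive_estimate = float(np.mean(published))
print(f"true d = {true_d}, naive mean of published d = {naive_estimate:.2f}")
```

Because only estimates large enough to clear the significance threshold survive, the naive average of published effects sits well above the true effect, which is exactly the overestimation the correction methods in the paper try to undo.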


2017 ◽  
Author(s):  
Evan C Carter ◽  
Felix D. Schönbrodt ◽  
Will M Gervais ◽  
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, much of this work has not been tailored specifically to psychology, so it is not clear which methods work best for data typically seen in our field. Here, we present a comprehensive simulation study to examine how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We created such scenarios by simulating several levels of questionable research practices, publication bias, heterogeneity, and using study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change based on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts on improving the primary literature and conducting large-scale, pre-registered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.


2018 ◽  
Author(s):  
Dick Bierman ◽  
Jacob Jolij

We have tested the feasibility of a method to prevent the occurrence of so-called Questionable Research Practices (QRPs). Apart from embedded pre-registration, the major aspect of the system is real-time uploading of data to a secure server. We outline the method, discuss the treatment of drop-outs, compare the approach to the Born-open data method, and report on our preliminary experiences. We also discuss extending the data-integrity system from a secure server to blockchain technology.


2019 ◽  
Author(s):  
Rens van de Schoot ◽  
Elian Griffioen ◽  
Sonja Désirée Winter

The trial-and-roulette method is a popular way to elicit experts’ beliefs about a statistical parameter. However, most studies examining the validity of this method use only ‘perfect’ elicitation results. In practice, it is sometimes hard to obtain such neat results. In our project on predicting fraud and questionable research practices among PhD candidates, we ran into issues with imperfect elicitation results. The goal of the current chapter is to provide an overview of the solutions we used for dealing with these imperfect results, so that others can benefit from our experience. We present information about the nature of our project, the reasons for the imperfect results, and how we resolved them, supported by annotated R syntax.
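As a toy illustration of the trial-and-roulette idea (not the chapter's actual elicitation or fitting procedure), chips an expert places on value bins can be normalized into a discrete probability distribution and summarized by simple moment matching; the bin positions and chip counts below are invented:

```python
import numpy as np

# Hypothetical elicitation: an expert distributes 20 chips over
# five bins for a parameter assumed to lie in [0, 1].
bin_midpoints = np.array([0.05, 0.15, 0.25, 0.35, 0.45])
chips = np.array([1, 4, 8, 5, 2])  # made-up chip counts

probs = chips / chips.sum()                  # chips -> probabilities
mean = float(np.sum(probs * bin_midpoints))  # elicited mean
sd = float(np.sqrt(np.sum(probs * (bin_midpoints - mean) ** 2)))
print(f"elicited prior: mean={mean:.3f}, sd={sd:.3f}")
```

The resulting mean and standard deviation could then seed a parametric prior (e.g. a normal or beta distribution); imperfect results of the kind the chapter discusses arise when, say, chips land outside the plausible range or experts use very few chips, making such summaries unstable.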


2018 ◽  
Vol 10 (6) ◽  
pp. 783-791 ◽  
Author(s):  
Stefan Janke ◽  
Martin Daumiller ◽  
Selma Carolin Rudert

Questionable research practices (QRPs) are a strongly debated topic in the scientific community. Hypotheses about the relationship between individual differences and QRPs are plentiful but have rarely been empirically tested. Here, we investigate whether researchers’ personal motivation (expressed by achievement goals) is associated with self-reported engagement in QRPs within a sample of 217 psychology researchers. Appearance approach goals (striving for skill demonstration) positively predicted engagement in QRPs, while learning approach goals (striving for skill development) were a negative predictor. These effects remained stable when also considering Machiavellianism, narcissism, and psychopathy in a latent multiple regression model. Additional moderation analyses revealed that the more researchers favored publishing over scientific rigor, the stronger the association between appearance approach goals and engagement in QRPs. The findings provide initial insights into the nature of the relationship between personal motivation and scientific malpractice.

