Psychological science from 'publish or perish' to 'trust but verify'

Author(s):  
Anđela Keljanović

At a time when social psychologists believed they could be proud of their discipline, the devastating news broke that Diederik Stapel had committed large-scale scientific fraud. This event coincided with the start of the discussion on trust in psychological findings. It was soon followed by a report of a series of nine studies that failed to replicate the 'professor' priming study. These replication results were astounding given earlier reports of successful replications. In response to the crisis of confidence in the field's findings, the Open Science Collaboration subsequently replicated 100 correlational and experimental studies published in 2008 in Psychological Science, Journal of Personality and Social Psychology, and Journal of Experimental Psychology: Learning, Memory, and Cognition. Whereas 97% of the original studies reported significant effects, only 36% of the replications did. However, these findings have in turn been called into question by analyses based on the Bayes factor. In addition to fraud, questionable research practices, encouraged by publication bias and producing false positives, undermine confidence in the validity of psychological research findings. Perhaps the most costly error is the false positive: erroneously rejecting the null hypothesis. Yet had Stapel (2011) confirmed the null hypothesis, had Bargh (1996) found that priming participants did not affect their walking speed, or had Dijksterhuis and van Knippenberg (1998) reported that participants primed with the word 'professor' did not improve their performance on a knowledge task, no one would have been interested in their findings. Null findings are interesting only if they contradict the main hypothesis derived from a theory or contradict a number of previous studies. Because good experimental research is usually conducted to test theories, researchers can never be sure whether they have chosen the optimal operationalization of a given construct. And since researchers can never be certain that they have properly operationalized the theoretical constructs under evaluation, or that they have successfully controlled the third variables that may be responsible for their results, a theory can never be proven true.

2012
Vol 7 (6)
pp. 543-554
Author(s):  
Marjan Bakker ◽  
Annette van Dijk ◽  
Jelte M. Wicherts

If science were a game, a dominant rule would probably be to collect results that are statistically significant. Several reviews of the psychological literature have shown that around 96% of papers involving the use of null hypothesis significance testing report significant outcomes for their main results, but that typical studies are insufficiently powered to justify such a track record. We explain this paradox by showing that the use of several small, underpowered samples often represents a more efficient research strategy (in terms of finding p < .05) than the use of one larger (more powerful) sample. Publication bias combined with this 'most efficient' strategy leads to inflated effects and high rates of false positives, especially when researchers also resort to questionable research practices, such as adding participants after intermediate testing. We provide simulations that highlight the severity of such biases in meta-analyses. We consider 13 meta-analyses covering 281 primary studies in various fields of psychology and find indications of bias and/or an excess of significant results in seven. These results highlight the need for sufficiently powerful replications and for changes in journal policies.
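To make the 'efficiency' argument concrete, the following is a minimal simulation sketch in Python (not the authors' code); the true effect size of d = 0.2, the group sizes, and the five-study split are illustrative assumptions. It compares the probability of obtaining at least one p < .05 when the same 200 participants are spent on one large study versus five small ones, and shows how publishing only the significant small studies inflates the estimated effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2012)
TRUE_D = 0.2        # assumed true standardized effect size (illustrative)
N_SIM = 5_000       # number of simulated "research programs"

def one_study(n_per_group):
    """Run one simulated two-group study; return its p-value and observed Cohen's d."""
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(TRUE_D, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    return p, (treatment.mean() - control.mean()) / pooled_sd

hits_large = hits_small = 0
published_d = []
for _ in range(N_SIM):
    # Strategy A: one large study, 100 participants per group (200 in total).
    p, _ = one_study(100)
    hits_large += p < .05
    # Strategy B: the same 200 participants split into five small studies.
    small = [one_study(20) for _ in range(5)]
    significant_ds = [d for p, d in small if p < .05]
    hits_small += len(significant_ds) > 0
    published_d.extend(significant_ds)   # only significant results get "published"

print(f"P(at least one p < .05), one large study:    {hits_large / N_SIM:.2f}")
print(f"P(at least one p < .05), five small studies: {hits_small / N_SIM:.2f}")
print(f"Mean published d under publication bias:     {np.mean(published_d):.2f} "
      f"(true d = {TRUE_D})")
```

Under these illustrative settings the five-small-studies strategy yields a significant result more often than the single large study, while the 'published' effects are substantially larger than the true effect; it is this inflation that then propagates into meta-analyses.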


2021
Author(s):  
Ryuju Hasegawa ◽  
Kanae Tada ◽  
Fumiya Yonemitsu ◽  
Ayumi Ikeda ◽  
Yuki Yamada ◽  
...  

In the midst of the current reproducibility crisis in psychology, pre-registration is considered a remedy to increase the reliability of psychological research. However, as pre-registration is an unconventional practice for most psychological researchers, they find it difficult to introduce pre-registration into their studies. In order to promote pre-registration, this article provides a detailed and practical step-by-step tutorial for beginners on pre-registration with the Open Science Framework. Furthermore, a typical example of a beginner's pre-registration, along with its revisions, is provided as supplementary material. Finally, we discuss various issues related to pre-registration, such as transparent research, registered reports, preprints, and open science education. We hope that this article will contribute to the improvement of reproducible psychological science in Japan.


2021
Author(s):  
Bradley David McAuliff ◽  
Melanie B. Fessinger ◽  
Anthony Perillo ◽  
Jennifer Torkildson Perillo

As the field of psychology and law begins to embrace more transparent and accessible science, many questions arise about what open science actually is and how to do it. In this chapter, we contextualize this reform by examining fundamental concerns about psychological research—irreproducibility and replication failures, false-positive errors, and questionable research practices—that threaten its validity and credibility. Next, we turn to psychology’s response by reviewing the concept of open science and explaining how to implement specific practices—preregistration, registered reports, open materials/data/code, and open access publishing—designed to make research more transparent and accessible. We conclude by weighing the implications of open science for the field of psychology and law, specifically with respect to how we conduct and evaluate research, as well as how we train the next generation of psychological scientists and share scientific findings in applied settings.


2021
Author(s):  
Michael Bosnjak ◽  
Christian Fiebach ◽  
David Thomas Mellor ◽  
Stefanie Mueller ◽  
Daryl Brian O'Connor ◽  
...  

Recent years have seen dramatic changes in research practices in psychological science. In particular, preregistration of study plans prior to conducting a study has been identified as an important tool to help increase the transparency of science and to improve the robustness of psychological research findings. This article presents the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template produced by a Joint Psychological Societies Preregistration Task Force consisting of the American Psychological Association (APA), British Psychological Society (BPS) and German Psychological Society (DGPs), supported by the Center for Open Science (COS) and the Leibniz Institute for Psychology (ZPID). The goal of the Task Force was to provide the psychological community with a consensus template for the preregistration of quantitative research in psychology, one with wide coverage and the ability, if necessary, to adapt to specific journals, disciplines and researcher needs. This article covers the structure and use of the PRP-QUANT template, while outlining and discussing the benefits of its use for researchers, authors, funders and other relevant stakeholders. We hope that by introducing this template and by demonstrating the support of preregistration by major academic psychological societies, we will facilitate an increase in preregistration practices and thereby also the further advancement of transparency and knowledge-sharing in the psychological sciences.


2020
Author(s):  
Soufian Azouaghe ◽  
Adeyemi Adetula ◽  
Patrick S. Forscher ◽  
Dana Basnight-Brown ◽  
Nihal Ouherrou ◽  
...  

The quality of scientific research is assessed not only by its positive impact on socio-economic development and human well-being, but also by its contribution to the development of valid and reliable scientific knowledge. Thus, researchers, regardless of their scientific discipline, are expected to adopt research practices based on transparency and rigor. However, the history of science and the scientific literature teach us that a portion of scientific results is not reliably reproducible (Ioannidis, 2005). This is what is commonly known as the "replication crisis", which concerns the natural sciences as well as the social sciences, of which psychology is no exception. Firstly, we aim to address some aspects of the replication crisis and Questionable Research Practices (QRPs). Secondly, we discuss how we can involve more labs in Africa in the global research process, especially the Psychological Science Accelerator (PSA). To these ends, we will develop a tutorial for labs in Africa highlighting open science practices. In addition, we emphasize that it is essential to identify African labs' needs, the factors that hinder their participation in the PSA, and the support needed from the Western world. Finally, we discuss how to make psychological science more participatory and inclusive.


2019
Author(s):  
Roger W. Strong ◽  
George Alvarez

The replication crisis has brought about an increased focus on improving the reproducibility of psychological research (Open Science Collaboration, 2015). Although some failed replications reflect false-positives in original research findings, many are likely the result of low statistical power, which can cause failed replications even when an effect is real, no questionable research practices are used, and an experiment’s methodology is repeated perfectly. The present paper describes a simulation method (bootstrap resampling) that can be used in combination with pilot data or synthetic data to produce highly powered experimental designs. Unlike other commonly used power analysis approaches (e.g., G*Power), bootstrap resampling can be adapted to any experimental design to account for various factors that influence statistical power, including sample size, number of trials per condition, and participant exclusion criteria. Ignoring some of these factors (e.g., by using G*Power) can overestimate the power of a study or replication, increasing the likelihood that your findings will not replicate. By demonstrating how these factors influence the consistency of experimental findings, this paper provides examples of how simulation can be used to improve statistical power and reproducibility. Further, we provide a MATLAB toolbox that can be used to implement these simulation-based methods on existing pilot data (https://harvard-visionlab.github.io/power-sim).
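As a rough illustration of the bootstrap idea (the authors' own toolbox is in MATLAB at the URL above; this is an independent Python sketch, and the pilot values are invented for the example), one can resample participants from pilot data with replacement and count how often the test of interest reaches significance at each candidate sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pilot data: each value is one participant's mean condition difference
# (e.g., in ms). In practice these would come from your own pilot experiment.
pilot_effects = np.array([12., 35., -8., 22., 15., 40., 5., 18., 30., -3., 25., 10.])

def bootstrap_power(pilot, n_participants, n_boot=5_000, alpha=.05):
    """Estimate power by resampling participants (with replacement) from the pilot
    data and testing the resampled mean effect against zero with a one-sample t-test."""
    hits = 0
    for _ in range(n_boot):
        sample = rng.choice(pilot, size=n_participants, replace=True)
        _, p = stats.ttest_1samp(sample, 0.0)
        hits += p < alpha
    return hits / n_boot

# Estimated power as a function of sample size, given this (assumed) pilot effect.
for n in (12, 24, 48, 96):
    print(f"n = {n:3d}: estimated power = {bootstrap_power(pilot_effects, n):.2f}")
```

A fuller version, in the spirit of the paper, would also resample trials within participants and apply the planned exclusion criteria to each resample, so that the number of trials per condition and data cleaning choices are reflected in the power estimate.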


2019
Author(s):  
Farid Anvari ◽  
Daniel Lakens

Replication failures of past findings in several scientific disciplines, including psychology, medicine, and experimental economics, have created a ‘crisis of confidence’ among scientists. Psychological science has been at the forefront of tackling these issues, with discussions about replication failures and scientific self-criticism of questionable research practices (QRPs) increasingly taking place in public forums. How this replicability crisis affects the public’s trust is a question yet to be answered by research. Whereas some researchers believe that the public’s trust will be positively impacted or maintained, others believe it will be diminished. Because it is our field of expertise, we focus on trust in psychological science. We performed a study testing how public trust in past and future psychological research is affected by informing people about i) replication failures, ii) replication failures and criticisms of QRPs, and iii) replication failures, criticisms of QRPs, and proposed reforms. Results from a mostly European sample (N = 1129) showed that, compared to a control group, trust in past research was reduced when people were informed about these aspects of the replication crisis, whereas trust in future research was maintained, except when people were also informed about proposed reforms. Potential explanations are discussed.


2020
Author(s):  
Daryl Brian O'Connor

There has been much talk of psychological science undergoing a renaissance, with recent years marked by dramatic changes in research practices and in the publishing landscape. This article briefly summarises a number of ways in which psychological science can improve its rigour, lessen the use of questionable research practices and reduce publication bias. The importance of pre-registration as a useful tool to increase the transparency of science and improve the robustness of our evidence base, especially in COVID-19 times, is presented. In particular, the case for increased adoption of Registered Reports, the article format that allows peer review of research studies before the results are known, is outlined. Finally, the article argues that the scientific architecture and the academic reward structure need to change, with a move towards “slow science” and away from the “publish or perish” culture.


2021
Author(s):  
Chelsea Moran ◽  
Alexandra Richard ◽  
Kate Wilson ◽  
Rosemary Twomey ◽  
Adina Coroiu

Background: Questionable research practices (QRPs) have been identified as a driving force of the replication crisis in the field of psychological science. The aim of this study was to assess the frequency of QRP use among psychology students in Canadian universities and to better understand the reasons and motivations for QRP use.

Method: Participants were psychology students attending Canadian universities, recruited via online advertising and university email invitations to complete a bilingual survey. Respondents were asked how often they and others engaged in seven QRPs. They were also asked to estimate the proportion of psychology research impacted by each QRP and how acceptable they found each QRP. Data were collected through Likert-scale survey items and open-ended text responses between May 2020 and January 2021, and were analyzed using descriptive statistics and thematic analysis.

Results: 425 psychology students completed the survey. The sample consisted of 40% undergraduate students, 59% graduate students, and 1% post-doctoral fellows. Overall, 64% of participants reported using at least one QRP, while 79% reported having observed others engaging in at least one QRP. The most frequently reported QRPs were p-hacking (46%), not submitting null results for publication (31%), excluding outcome measures (30%), and hypothesizing after results are known (27%). These QRPs were also the most frequently observed in others, estimated to be the most prevalent in the field, and rated as the most acceptable. Qualitative findings show that students reported that pressure to publish motivated their QRP use, with some reporting that certain QRPs are justifiable in some cases (e.g., in exploratory research). Students also reported that QRPs contribute to the replication crisis and to publication bias, and offered several alternatives and solutions to engaging in QRPs, such as gaining familiarity with open science practices.

Conclusions: Most Canadian psychology students in this sample reported using QRPs, which is unsurprising given that they observe such practices in their research environment and estimate that they are prevalent. At the same time, most students believe that QRPs are not acceptable. The results of this study highlight the need to examine the pedagogical standards and cultural norms in academia that may promote or normalize QRPs in psychological science, in order to improve the quality and replicability of research in this field.


2019
Vol 227 (4)
pp. 261-279
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
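By way of illustration only (this is not the authors' simulation code, and every parameter value below is an assumption), the following Python sketch shows the general logic of such an evaluation for one classic tool, Egger's regression test: simulate many literatures with and without publication bias, apply the test to each, and record how often it raises a flag.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_meta(k=30, true_d=0.0, biased=True):
    """Simulate one meta-analysis of k two-group studies (illustrative values only)."""
    effects, ses = [], []
    while len(effects) < k:
        n = rng.integers(15, 80)              # per-group sample size of a primary study
        se = np.sqrt(2 / n)                   # approximate SE of Cohen's d
        d = rng.normal(true_d, se)            # observed effect of that study
        significant = abs(d / se) > 1.96
        # Publication bias: non-significant studies are published only 10% of the time.
        if (not biased) or significant or rng.random() < 0.10:
            effects.append(d)
            ses.append(se)
    return np.array(effects), np.array(ses)

def egger_pvalue(effects, ses):
    """Egger's regression: regress z = d/se on precision = 1/se; an intercept far from
    zero indicates funnel-plot asymmetry, a possible sign of publication bias."""
    z, precision = effects / ses, 1 / ses
    X = np.column_stack([np.ones_like(precision), precision])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ coef
    sigma2 = (resid ** 2).sum() / (len(z) - 2)
    se_intercept = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    t = coef[0] / se_intercept
    return 2 * stats.t.sf(abs(t), df=len(z) - 2)

for biased in (False, True):
    flags = sum(egger_pvalue(*simulate_meta(biased=biased)) < .05 for _ in range(1_000))
    label = "with publication bias   " if biased else "without publication bias"
    print(f"{label}: Egger's test flags {flags / 1_000:.2%} of simulated meta-analyses")
```

Extending such a grid to heterogeneous true effects, different numbers and sizes of primary studies, and different publication probabilities is the kind of variation the authors explored; the unbiased condition estimates the Type I error rate, the biased condition the power of the detection method.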

