“I know it's bad but I have been pressured into it”: Questionable research practices among psychology students in Canada

2021 ◽  
Author(s):  
Chelsea Moran ◽  
Alexandra Richard ◽  
Kate Wilson ◽  
Rosemary Twomey ◽  
Adina Coroiu

Background: Questionable research practices (QRPs) have been identified as a driving force of the replication crisis in the field of psychological science. The aim of this study was to assess the frequency of QRP use among psychology students in Canadian universities and to better understand the reasons and motivations for QRP use. Method: Participants were psychology students attending Canadian universities, recruited via online advertising and university email invitations to complete a bilingual survey. Respondents were asked how often they and others engaged in seven QRPs. They were also asked to estimate the proportion of psychology research impacted by each QRP and how acceptable they found each QRP. Data were collected through Likert-scale survey items and open-ended text responses between May 2020 and January 2021, and were analyzed using descriptive statistics and thematic analysis. Results: 425 psychology students completed the survey. The sample consisted of 40% undergraduate students, 59% graduate students, and 1% post-doctoral fellows. Overall, 64% of participants reported using at least one QRP, while 79% reported having observed others engaging in at least one QRP. The most frequently reported QRPs were p-hacking (46%), not submitting null results for publication (31%), excluding outcome measures (30%), and hypothesizing after results are known (27%). These QRPs were also the most frequently observed in others, estimated to be the most prevalent in the field, and rated as the most acceptable. Qualitative findings showed that pressures to publish motivated students' QRP use, with some reporting that certain QRPs are justifiable in some cases (e.g., in exploratory research). Students also reported that QRPs contribute to the replication crisis and to publication bias, and offered several alternatives and solutions to engaging in QRPs, such as gaining familiarity with open science practices.
Conclusions: Most Canadian psychology students in this sample reported using QRPs, which is unsurprising given that they observe such practices in their research environment and estimate that they are prevalent. Nevertheless, most students believe that QRPs are not acceptable. The results of this study highlight the need to examine the pedagogical standards and cultural norms in academia that may promote or normalize QRPs in psychological science, in order to improve the quality and replicability of research in this field.

2020 ◽  
Author(s):  
Soufian Azouaghe ◽  
Adeyemi Adetula ◽  
Patrick S. Forscher ◽  
Dana Basnight-Brown ◽  
Nihal Ouherrou ◽  
...  

The quality of scientific research is assessed not only by its positive impact on socio-economic development and human well-being, but also by its contribution to the development of valid and reliable scientific knowledge. Researchers, regardless of their scientific discipline, are therefore expected to adopt research practices based on transparency and rigor. However, the history of science and the scientific literature teach us that a portion of scientific results is not systematically reproducible (Ioannidis, 2005). This is what is commonly known as the "replication crisis", which concerns the natural sciences as well as the social sciences, and psychology is no exception. Firstly, we aim to address some aspects of the replication crisis and Questionable Research Practices (QRPs). Secondly, we discuss how we can involve more labs in Africa in the global research process, especially the Psychological Science Accelerator (PSA). To these ends, we will develop a tutorial for labs in Africa that highlights open science practices. In addition, we emphasize that it is essential to identify African labs' needs, the factors that hinder their participation in the PSA, and the support needed from the Western world. Finally, we discuss how to make psychological science more participatory and inclusive.


2019 ◽  
Author(s):  
Simon Dennis ◽  
Paul Michael Garrett ◽  
Hyungwook Yim ◽  
Jihun Hamm ◽  
Adam F Osth ◽  
...  

Pervasive internet and sensor technologies promise to revolutionize psychological science. However, the data collected using these technologies is often very personal - indeed the value of the data is often directly related to how personal it is. At the same time, driven by the replication crisis, there is a sustained push to publish data to open repositories. These movements are in fundamental conflict. In this paper, we propose a way to navigate this issue. We argue that there are significant advantages to be gained by ceding the ownership of data to the participants who generate it. Then we provide desiderata for a privacy-preserving platform. In particular, we suggest that researchers should use an interface to perform experiments and run analyses rather than observing the stimuli themselves. We argue that this method not only improves privacy but will also encourage greater compliance with good research practices than is possible with open repositories.


2017 ◽  
Vol 12 (4) ◽  
pp. 660-664 ◽  
Author(s):  
Scott O. Lilienfeld

The past several years have been a time for soul searching in psychology, as we have gradually come to grips with the reality that some of our cherished findings are less robust than we had assumed. Nevertheless, the replication crisis highlights the operation of psychological science at its best, as it reflects our growing humility. At the same time, institutional variables, especially the growing emphasis on external funding as an expectation or de facto requirement for faculty tenure and promotion, pose largely unappreciated hazards for psychological science, including (a) incentives for engaging in questionable research practices, (b) a single-minded focus on programmatic research, (c) intellectual hyperspecialization, (d) disincentives for conducting direct replications, (e) stifling of creativity and intellectual risk taking, (f) researchers promising more than they can deliver, and (g) diminished time for thinking deeply. Preregistration should assist with (a), but will do little about (b) through (g). Psychology is beginning to right the ship, but it will need to confront the increasingly deleterious impact of the grant culture on scientific inquiry.


2021 ◽  
Author(s):  
Jennifer L Beaudry ◽  
Matt N Williams ◽  
Michael Carl Philipp ◽  
Emily Jane Kothe

Background: Understanding students’ naive conceptions about how science works and the norms that guide scientific best practice is important so that teachers can adapt their teaching to students’ existing understandings. Objective: To describe what incoming undergraduate students of psychology believe about reproducibility and open science practices in psychology. Method: International online survey of participants who were about to start their first course in psychology at a university (N = 239). Results: When asked about how research should be done, most students endorsed most (but not all) of ten open science practices. When asked to estimate the proportion of published psychological studies that apply each of the same ten open science practices, participants’ estimates tended to average near 50%. Only 18% of participants had heard of the term “replication crisis.” Conclusion: Despite considerable media attention on the replication crisis, few incoming psychology students are familiar with the term. Incoming students nevertheless appear to be sympathetic toward most open science practices, although they may overestimate the prevalence of these practices in psychology. Teaching Implications: Teaching materials aimed at incoming psychology students should not assume pre-existing knowledge about open science or replicability.


2021 ◽  
Author(s):  
Jason Chin ◽  
Justin T. Pickett ◽  
Simine Vazire ◽  
Alex O. Holcombe

2021 ◽  
Vol 37 (4) ◽  
pp. 1-6
Author(s):  
Jason M. Lodge ◽  
Linda Corrin ◽  
Gwo-Jen Hwang ◽  
Kate Thompson

Over the last decade, a spate of issues has emerged in empirical research spanning diverse fields such as biology, medicine, economics, and psychological science. Broadly labelled the ‘replication crisis’, these issues cast substantial doubt on the robustness of peer-reviewed quantitative research across many disciplines, and have already led to fundamental shifts in how research is conducted in several fields, particularly psychological science. In this editorial, we delve into the replication crisis and what it means for educational technology research. We address two key areas: the extent to which the replication crisis applies to educational technology research, and suggestions for responses by our community.


2021 ◽  
Author(s):  
P. Priscilla Lui ◽  
Monica C. Skewes ◽  
Sarah Gobrial ◽  
David Rollock

To answer questions about human psychology, psychological science needs to yield credible findings. Because of their goals of understanding people’s lived experiences and advocating for the needs of Native communities, Indigenous scholars tend to use community-based participatory research (CBPR) or to approach science from a constructivist framework. The primary goal of mainstream psychological science is to uncover generalizable facts about human functioning. Working from a postpositivist framework, mainstream psychological scholars tend to assume that researcher biases can be identified and objective science achieved. Recently, many psychological findings have failed to replicate in new samples. This replication crisis raised concerns about the validity of psychological science. Mainstream open science has been promoted as a solution to the replication crisis; the open science movement encourages researchers to emphasize transparency and accountability to the broad research community. The notion of transparency aligns with the principles of CBPR, a research approach common in Indigenous research. Yet open science practices are not widely adopted in Indigenous research, and mainstream open science does not emphasize researchers’ accountability to the communities that their science is intended to serve. We examined Indigenous researchers’ awareness of, and concerns about, mainstream open science. Participants endorsed the value of transparency with participants and their communities. They were also concerned about being disadvantaged and about the possible negative impact of data sharing on Native communities. We suggest that there is value in connecting mainstream open science and Indigenous research to advance science that empowers people and makes a positive community impact.


2019 ◽  
Vol 6 (12) ◽  
pp. 190738 ◽  
Author(s):  
Jerome Olsen ◽  
Johanna Mosen ◽  
Martin Voracek ◽  
Erich Kirchler

The replicability of research findings has recently been disputed across multiple scientific disciplines. As a constructive reaction, the research culture in psychology is undergoing fundamental changes, but investigations of the research practices that prompted these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated the utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
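The consistency checks that Statcheck performs amount to recomputing the p-value implied by a reported test statistic and flagging mismatches. A minimal sketch of that idea, using a z statistic for simplicity (Statcheck itself handles t, F, r, χ², and z tests, and reasons about rounding rather than the fixed tolerance assumed here):

```python
import math

def two_tailed_p_from_z(z):
    """Recompute the two-tailed p-value implied by a reported z statistic,
    using the standard normal CDF expressed via the error function."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def is_consistent(z, reported_p, tol=0.0005):
    """Flag whether a reported p-value matches its test statistic.
    The fixed tolerance is a simplification; Statcheck instead checks
    whether the reported (rounded) p is compatible with the statistic."""
    return abs(two_tailed_p_from_z(z) - reported_p) <= tol

# "z = 1.96, p = .05" passes the check; "z = 1.96, p = .03" is flagged.
```

A "decision error" in the abstract's sense would be a flagged case where the recomputed p falls on the other side of the significance threshold than the reported one.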


2019 ◽  
Author(s):  
Dustin Fife ◽  
Joseph Lee Rodgers

In light of the “replication crisis,” some (e.g., Nelson, Simmons, & Simonsohn, 2018) advocate for greater policing and transparency in research methods. Others (Baumeister, 2016; Finkel, Eastwick, & Reis, 2017; Goldin-Meadow, 2016; Levenson, 2017) argue against rigid requirements that may inadvertently restrict discovery. We embrace both positions and argue that proper understanding and implementation of the well-established paradigm of Exploratory Data Analysis (EDA; Tukey, 1977) is necessary to push beyond the replication crisis. Unfortunately, many researchers do not realize EDA exists (Goldin-Meadow, 2016), fail to understand the philosophy and proper tools of exploration (Baumeister, 2016), or reject EDA as unscientific (Lindsay, 2015). This mistreatment of EDA is unfortunate, and is usually based on a misunderstanding of its nature and goal. We develop an expanded typology that situates EDA, confirmatory data analysis (CDA), and rough CDA in the same framework as fishing, p-hacking, and HARKing, and argue that most, if not all, questionable research practices (QRPs) would be resolved by understanding and implementing the EDA/CDA gradient. We argue that most psychological research is “rough CDA,” which has often and inadvertently used the wrong tools. We conclude with guidelines about how these typologies can be integrated into a cumulative research program, which is necessary to move beyond the replication crisis.


2020 ◽  
Author(s):  
Dwight Kravitz ◽  
Stephen Mitroff

Large-scale replication failures have shaken confidence in the social sciences, psychology in particular. Most researchers acknowledge the problem, yet there is widespread debate about the causes and solutions. Using “big data,” the current project demonstrates that unintended consequences of three common questionable research practices (retaining pilot data, adding data after checking for significance, and not publishing null findings) can explain the lion’s share of the replication failures. A massive dataset was randomized to create a true null effect between two conditions, and then these three practices were applied. They produced false discovery rates far greater than 5% (the generally accepted rate), and were strong enough to obscure, or even reverse, the direction of real effects. These demonstrations suggest that much of the replication crisis might be explained by simple, misguided experimental choices. This approach also produces empirically-based corrections to account for these practices when they are unavoidable, providing a viable path forward.
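The inflation produced by one of these practices, adding data after checking for significance, is easy to reproduce in miniature. The sketch below is illustrative only, not the authors' large-scale procedure; the sample sizes, number of looks, and normal-approximation test are assumptions. Both groups are drawn from the same distribution, so the true effect is null and every "discovery" is false:

```python
import math
import random

def p_two_sample(a, b):
    """Two-tailed p-value for a difference in group means,
    normal approximation (adequate for the group sizes used here)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def one_experiment(rng, optional_stopping):
    """Simulate a null experiment; return True on a (false) discovery."""
    a = [rng.gauss(0, 1) for _ in range(30)]
    b = [rng.gauss(0, 1) for _ in range(30)]
    if not optional_stopping:
        return p_two_sample(a, b) < 0.05
    # Peek after every batch of 10 per group; stop at the first p < .05.
    for _ in range(8):
        if p_two_sample(a, b) < 0.05:
            return True
        a += [rng.gauss(0, 1) for _ in range(10)]
        b += [rng.gauss(0, 1) for _ in range(10)]
    return p_two_sample(a, b) < 0.05

rng = random.Random(1)
n_sims = 2000
fixed = sum(one_experiment(rng, False) for _ in range(n_sims)) / n_sims
peeking = sum(one_experiment(rng, True) for _ in range(n_sims)) / n_sims
print(f"false discovery rate, fixed n:           {fixed:.3f}")
print(f"false discovery rate, optional stopping: {peeking:.3f}")
```

With a fixed sample size, the false discovery rate stays near the nominal 5%; repeated peeking pushes it well above that, in line with the abstract's point that these practices alone can produce spurious "effects."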

