Questionable Research Practices: How to Recognize and Avoid Them

2020 ◽  
Vol 32 (4) ◽  
pp. 183-190
Author(s):  
William Newton Suter

This article focuses on questionable research practices (QRPs) that bias findings and conclusions. QRPs cast doubt on the credibility of research findings in home health and in nursing science in general, and they undermine the integrity of all researchers to the extent that they are permitted to persist. Each QRP is defined via bundles of specific research behaviors with unifying labels that include, among others, deceptive mirages and phantom sharpshooters. These questionable behaviors are described in ways that enhance research understanding and enable careful home health nurse researchers to avoid QRPs by applying higher standards of scientific rigor. QRPs impede scientific progress by generating false conclusions. They threaten the validity and dependability of scientific research and confuse other researchers who practice rigorous science and maintain integrity. QRPs also clog the literature with studies that cannot be replicated. When researchers engage in QRPs at the expense of rigor, overall trust in the scientific knowledge base erodes.

2018 ◽  
Vol 10 (6) ◽  
pp. 783-791 ◽  
Author(s):  
Stefan Janke ◽  
Martin Daumiller ◽  
Selma Carolin Rudert

Questionable research practices (QRPs) are a strongly debated topic in the scientific community. Hypotheses about the relationship between individual differences and QRPs are plentiful but have rarely been empirically tested. Here, we investigate whether researchers’ personal motivation (expressed by achievement goals) is associated with self-reported engagement in QRPs within a sample of 217 psychology researchers. Appearance approach goals (striving for skill demonstration) positively predicted engagement in QRPs, while learning approach goals (striving for skill development) were a negative predictor. These effects remained stable when also considering Machiavellianism, narcissism, and psychopathy in a latent multiple regression model. Additional moderation analyses revealed that the more researchers favored publishing over scientific rigor, the stronger the association between appearance approach goals and engagement in QRPs. The findings deliver first insights into the nature of the relationship between personal motivation and scientific malpractice.


Author(s):  
Noémie Aubert Bonn ◽  
Wim Pinxten

Abstract

Background: Research misconduct and questionable research practices have been the subject of increasing attention in the past few years. Despite the rich body of research available, however, few empirical works provide the perspectives of non-researcher stakeholders.

Methods: To capture some of the forgotten voices, we conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed career, to inquire about the topics of success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to grasp the views of interacting and complementary actors in a system setting.

Results: Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on the problems that affect the quality and integrity of science. We first discovered that perspectives on misconduct, including the core reasons for condemning misconduct, differed between individuals and actor groups. Beyond misconduct, interviewees also identified numerous problems that affect the integrity of research. Issues related to personalities and attitudes, lack of knowledge of good practices, and research climate were mentioned. Elements that were described as essential for success (in the associate paper) were often thought to accentuate the problems of research climates by disrupting research cultures and research environments. Even though everyone agreed that current research climates need to be addressed, no one felt responsible or capable of initiating change. Instead, respondents revealed a circle of blame and mistrust between actor groups.

Conclusions: Our findings resonate with recent debates and suggest a few action points that might help advance the discussion. First, we must tackle how research is assessed. Second, approaches to promote better science should be revisited: not only should they directly address the impact of climates on research practices, but they should also redefine their objective to empower and support researchers rather than to capitalize on their compliance. Finally, inter-actor dialogues and shared decision making are crucial to building joint objectives for change.

Trial registration: osf.io/33v3m


2020 ◽  
Vol 7 (4) ◽  
pp. 181351 ◽  
Author(s):  
Sarahanne M. Field ◽  
E.-J. Wagenmakers ◽  
Henk A. L. Kiers ◽  
Rink Hoekstra ◽  
Anja F. Ernst ◽  
...  

The crisis of confidence has undermined the trust that researchers place in the findings of their peers. In order to increase trust in research, initiatives such as preregistration have been suggested, which aim to prevent various questionable research practices. As it stands, however, no empirical evidence exists that preregistration does increase perceptions of trust. The picture may be complicated by a researcher's familiarity with the author of the study, regardless of the preregistration status of the research. This registered report presents an empirical assessment of the extent to which preregistration increases the trust of 209 active academics in the reported outcomes, and how familiarity with another researcher influences that trust. Contrary to our expectations, we report ambiguous Bayes factors and conclude that we do not have strong evidence towards answering our research questions. Our findings are presented along with evidence that our manipulations were ineffective for many participants, leading to the exclusion of 68% of complete datasets, and an underpowered design as a consequence. We discuss other limitations and confounds which may explain why the findings of the study deviate from a previously conducted pilot study. We reflect on the benefits of using the registered report submission format in light of our results. The OSF page for this registered report and its pilot can be found here: http://dx.doi.org/10.17605/OSF.IO/B3K75.


2021 ◽  
Author(s):  
Robert Schulz ◽  
Georg Langen ◽  
Robert Prill ◽  
Michael Cassel ◽  
Tracey Weissgerber

Introduction: While transparent reporting of clinical trials is essential to assess the risk of bias and translate research findings into clinical practice, earlier studies have shown that deficiencies are common. This study examined current clinical trial reporting and transparent research practices in sports medicine and orthopedics. Methods: The sample included clinical trials published in the top 25% of sports medicine and orthopedics journals over eight months. Two independent reviewers assessed pre-registration, open data, and criteria related to scientific rigor, the study sample, and data analysis. Results: The sample included 163 clinical trials from 27 journals. While the majority of trials mentioned rigor criteria, essential details were often missing. Sixty percent (confidence interval [CI] 53-68%) of trials reported sample size calculations, but only 32% (CI 25-39%) justified the expected effect size. Few trials indicated the blinding status of all main stakeholders (4%; CI 1-7%). Only 18% (CI 12-24%) included information on randomization type, method, and concealed allocation. Most trials reported participants' sex/gender (95%; CI 92-98%) and information on inclusion and exclusion criteria (78%; CI 72-84%). Only 20% (CI 14-26%) of trials were pre-registered. No trials deposited data in open repositories. Conclusions: These results will aid the sports medicine and orthopedics community in developing tailored interventions to improve reporting. While authors typically mention blinding, randomization, and other factors, essential details are often missing. Greater acceptance of open science practices, like pre-registration and open data, is needed. These practices have been widely encouraged; we discuss systemic interventions that may improve clinical trial reporting. Registration: https://doi.org/10.17605/OSF.IO/9648H
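The proportion confidence intervals reported above can be reproduced with a standard normal-approximation (Wald) interval. A minimal sketch follows; note that the count of 98 trials is inferred from 60% of 163, and the Wald form is an assumption, since the abstract does not state which interval method was used:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# 60% of the 163 trials (~98) reported a sample size calculation
lo, hi = wald_ci(98, 163)
print(f"{lo:.0%}-{hi:.0%}")  # → 53%-68%, matching the reported interval
```

The same recomputation applied to the other reported proportions (e.g. 32% of 163) recovers the remaining intervals to within rounding.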


2019 ◽  
Vol 6 (12) ◽  
pp. 190738 ◽  
Author(s):  
Jerome Olsen ◽  
Johanna Mosen ◽  
Martin Voracek ◽  
Erich Kirchler

The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
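Statcheck (an R package) works by recomputing p-values from the reported test statistics and degrees of freedom and flagging reports that disagree. A minimal Python analogue of that idea, restricted to z statistics (the actual tool also handles t, F, r, and χ² tests, and accounts for one-tailed reporting):

```python
import math

def p_from_z(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def is_consistent(z, reported_p, tol=0.0005):
    """Flag a reported p-value that disagrees with the recomputed one
    by more than rounding to three decimals could explain."""
    return abs(p_from_z(z) - reported_p) <= tol

print(is_consistent(1.96, 0.05))  # → True: z = 1.96 matches p = .05
print(is_consistent(1.96, 0.01))  # → False: inconsistently reported p
```

A "decision error" in the abstract's sense is the stricter case where the recomputed p crosses the .05 threshold that the reported p did not.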


Author(s):  
Ana Marija Ljubenković ◽  
Ana Borovečki ◽  
Marko Ćurković ◽  
Bjørn Hofmann ◽  
Søren Holm

This cross-sectional study evaluates the knowledge, attitudes, experiences, and behavior of final-year medical students, PhD students, and supervisors at the School of Medicine of the University of Zagreb in relation to research misconduct, questionable research practices, and the research environment. The overall response rate was 36.4% (68%–100% for the paper survey and 8%–15% for the online surveys). The analysis reveals statistically significant differences in attitude scores between PhD students and supervisors, the latter having attitudes more in concordance with accepted norms. The results overall show a nonnegligible incidence of self-reported misconduct and questionable research practices, as well as some problematic attitudes towards misconduct and questionable research practices. The incidence of problematic authorship practices was particularly high. The research environment was evaluated as being mostly supportive of research integrity.


Author(s):  
Holly L. Storkel ◽  
Frederick J. Gallun

Purpose: This editorial introduces the new registered reports article type for the Journal of Speech, Language, and Hearing Research. The goal of registered reports is to create a structural solution to address issues of publication bias toward results that are unexpected and sensational, questionable research practices that are used to produce novel results, and a peer-review process that occurs at the end of the research process when changes in fundamental design are difficult or impossible to implement. Conclusion: Registered reports can be a positive addition to scientific publications by addressing issues of publication bias, questionable research practices, and the late influence of peer review. This article type does so by requiring reviewers and authors to agree in advance that the experimental design is solid, the questions are interesting, and the results will be publishable regardless of the outcome. This procedure ensures that replication studies and null results make it into the published literature and that authors are not incentivized to alter their analyses based on the results that they obtain. Registered reports represent an ongoing commitment to research integrity and finding structural solutions to structural problems inherent in a research and publishing landscape in which publications are such a high-stakes aspect of individual and institutional success.


2018 ◽  
Author(s):  
Christopher Brydges

Objectives: Research has found evidence of publication bias, questionable research practices (QRPs), and low statistical power in published psychological journal articles. Isaacowitz's (2018) editorial in the Journals of Gerontology Series B, Psychological Sciences called for investigation of these issues in gerontological research. The current study presents meta-research findings based on published research to explore if there is evidence of these practices in gerontological research. Method: 14,481 test statistics and p values were extracted from articles published in eight top gerontological psychology journals since 2000. Frequentist and Bayesian caliper tests were used to test for publication bias and QRPs (specifically, p-hacking and incorrect rounding of p values). A z-curve analysis was used to estimate average statistical power across studies. Results: Strong evidence of publication bias was observed, and average statistical power was approximately .70 – below the recommended .80 level. Evidence of p-hacking was mixed. Evidence of incorrect rounding of p values was inconclusive. Discussion: Gerontological research is not immune to publication bias, QRPs, and low statistical power. Researchers, journals, institutions, and funding bodies are encouraged to adopt open and transparent research practices, and to use Registered Reports as an alternative article type, to minimize publication bias and QRPs and increase statistical power.
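A frequentist caliper test of the kind mentioned above compares how many reported p-values fall in a narrow band just below the .05 threshold against an equally wide band just above it; with no publication bias the split should be roughly 50/50, so an excess below the threshold is suspicious. A minimal stdlib sketch with hypothetical counts (the paper's actual band widths and its Bayesian variant are not reproduced here):

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def caliper_test(just_below, just_above):
    """One-sided exact binomial test for an excess of p-values
    just below .05 relative to the band just above it."""
    return binom_sf(just_below, just_below + just_above)

# hypothetical counts in the [.045, .05) and (.05, .055] bands
print(round(caliper_test(30, 15), 3))  # small p => evidence of bias
```

Balanced counts in the two bands yield a large p-value and no evidence of bias; the test's power depends heavily on how narrow the caliper bands are.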


2018 ◽  
Vol 29 (2) ◽  
pp. 174-187 ◽  
Author(s):  
Dennis Tourish ◽  
Russell Craig

This article analyses 131 articles that have been retracted from peer-reviewed journals in business and management studies. We also draw from six in-depth interviews: three with journal editors involved in retractions, two with coauthors of papers retracted because a fellow author committed research fraud, and one with a former academic found guilty of research fraud. Our aim is to promote debate about the causes and consequences of research misconduct and to suggest possible remedies. Drawing on corruption theory, we suggest that a range of institutional, environmental, and behavioral factors interacts to provide incentives that sustain research misconduct. We explore the research practices that have prompted retractions. We contend that some widely used but questionable research practices should be challenged so as to promote stronger commitment to research integrity and to deter misconduct. To this end, we propose eleven recommendations for action by authors, editors, publishers, and the broader scientific community.

