Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis

2021
Vol 27 (4)
Author(s):
Yu Xie
Kai Wang
Yan Kong

2019
Author(s):
Shelby Rauh
Trevor Torgerson
Austin L. Johnson
Jonathan Pollard
Daniel Tritz
...  

Abstract
Background: The objective of this study was to evaluate the nature and extent of reproducible and transparent research practices in neurology research.
Methods: The NLM catalog was used to identify MEDLINE-indexed neurology journals. A PubMed search of these journals was conducted to retrieve publications over a 5-year period from 2014 to 2018. A random sample of publications was extracted. Two authors conducted data extraction in a blinded, duplicate fashion using a pilot-tested Google form. This form prompted data extractors to determine whether publications provided access to items such as study materials, raw data, analysis scripts, and protocols. In addition, we determined whether the publication was included in a replication study or systematic review, was preregistered, had a conflict of interest declaration, specified funding sources, and was open access.
Results: Our search identified 223,932 publications meeting the inclusion criteria, from which 300 were randomly sampled. Only 290 articles were accessible, yielding 202 publications with empirical data for analysis. Our results indicate that 8.99% provided access to materials, 9.41% provided access to raw data, 0.50% provided access to analysis scripts, 0.99% linked the protocol, and 3.47% were preregistered. A third of sampled publications lacked funding or conflict of interest statements. No publications from our sample were included in replication studies, but a fifth were cited in a systematic review or meta-analysis.
Conclusions: Current research in the field of neurology does not consistently provide the information needed for reproducibility. Poor research reporting can both affect patient care and increase research waste. Collaborative intervention by authors, peer reviewers, journals, and funding sources is needed to mitigate this problem.


Author(s):  
Noémie Aubert Bonn ◽  
Wim Pinxten

Abstract
Background: Research misconduct and questionable research practices have been the subject of increasing attention in the past few years. Yet despite the rich body of research available, few empirical works provide the perspectives of non-researcher stakeholders.
Methods: To capture some of these forgotten voices, we conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors and publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed career, inquiring into the topics of success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to grasp the views of interacting and complementary actors in a system setting.
Results: Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on the problems that affect the quality and integrity of science. We first discovered that perspectives on misconduct, including the core reasons for condemning it, differed between individuals and actor groups. Beyond misconduct, interviewees also identified numerous problems that affect the integrity of research, including issues related to personalities and attitudes, lack of knowledge of good practices, and research climate. Elements that were described as essential for success (in the associate paper) were often thought to accentuate the problems of research climates by disrupting research cultures and research environments. Even though everyone agreed that current research climates need to be addressed, no one felt responsible for, or capable of, initiating change. Instead, respondents revealed a circle of blame and mistrust between actor groups.
Conclusions: Our findings resonate with recent debates and suggest a few action points which might help advance the discussion. First, we must tackle how research is assessed. Second, approaches to promote better science should be revisited: not only should they directly address the impact of climates on research practices, but they should also redefine their objective to empower and support researchers rather than capitalize on their compliance. Finally, inter-actor dialogue and shared decision making are crucial to building joint objectives for change.
Trial registration: osf.io/33v3m


PLoS ONE
2016
Vol 11 (5)
pp. e0153049
Author(s):
Dick J. Bierman
James P. Spottiswoode
Aron Bijl

2019
Vol 2 (2)
pp. 115-144
Author(s):
Evan C. Carter
Felix D. Schönbrodt
Will M. Gervais
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, it is not clear which methods work best for data typically seen in psychology. Here, we present a comprehensive simulation study in which we examined how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We simulated several levels of questionable research practices, publication bias, and heterogeneity, and used study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all the others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and conduct large-scale, preregistered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.
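The inflation mechanism this abstract describes can be illustrated with a toy simulation. This is not the authors' code; the design (two-group studies under a true null effect, directional selection of significant results, and the 2% publication rate for nonsignificant studies) is an arbitrary assumption chosen only to make the bias visible:

```python
import numpy as np

def simulate_meta(true_d=0.0, n_studies=500, n_per_group=30,
                  p_publish_nonsig=0.02, seed=7):
    """Simulate two-group studies under a null effect, then apply
    directional publication bias: significant positive effects are
    always 'published'; the rest only with a small probability."""
    rng = np.random.default_rng(seed)
    ds, published = [], []
    for _ in range(n_studies):
        g1 = rng.normal(true_d, 1.0, n_per_group)
        g2 = rng.normal(0.0, 1.0, n_per_group)
        sp = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
        d = (g1.mean() - g2.mean()) / sp            # Cohen's d
        se = np.sqrt(2 / n_per_group + d**2 / (4 * n_per_group))
        ds.append(d)
        published.append(d / se > 1.96 or rng.random() < p_publish_nonsig)
    ds, published = np.array(ds), np.array(published)
    # naive (unweighted) mean effect: all studies vs. published-only
    return ds.mean(), ds[published].mean()

full, biased = simulate_meta()
# 'full' sits near the true effect of zero; 'biased' is inflated upward
```

A naive meta-analytic average over the published subset overestimates the true (null) effect, which is exactly why the abstract recommends sensitivity analyses across bias-correcting methods rather than trusting any single estimator.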


Author(s):  
Ana Marija Ljubenković
Ana Borovečki
Marko Ćurković
Bjørn Hofmann
Søren Holm

This cross-sectional study evaluates the knowledge, attitudes, experiences, and behavior of final-year medical students, PhD students, and supervisors at the School of Medicine of the University of Zagreb in relation to research misconduct, questionable research practices, and the research environment. The overall response rate was 36.4% (68%–100% for the paper survey and 8%–15% for the online surveys). The analysis reveals statistically significant differences in attitude scores between PhD students and supervisors, with the latter holding attitudes more in concordance with accepted norms. The results overall show a non-negligible incidence of self-reported misconduct and questionable research practices, as well as some problematic attitudes towards both. The incidence of problematic authorship practices was particularly high. The research environment was evaluated as being mostly supportive of research integrity.


2018
Vol 29 (2)
pp. 174-187
Author(s):
Dennis Tourish
Russell Craig

This article analyses 131 articles that have been retracted from peer-reviewed journals in business and management studies. We also draw from six in-depth interviews: three with journal editors involved in retractions, two with coauthors of papers retracted because a fellow author committed research fraud, and one with a former academic found guilty of research fraud. Our aim is to promote debate about the causes and consequences of research misconduct and to suggest possible remedies. Drawing on corruption theory, we suggest that a range of institutional, environmental, and behavioral factors interact to provide incentives that sustain research misconduct. We explore the research practices that have prompted retractions. We contend that some widely used but questionable research practices should be challenged so as to promote a stronger commitment to research integrity and to deter misconduct. To this end, we propose eleven recommendations for action by authors, editors, publishers, and the broader scientific community.


2017
Author(s):
Evan C. Carter
Felix D. Schönbrodt
Will M. Gervais
Joseph Hilgard

Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, much of this work has not been tailored specifically to psychology, so it is not clear which methods work best for data typically seen in our field. Here, we present a comprehensive simulation study to examine how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We created such scenarios by simulating several levels of questionable research practices, publication bias, heterogeneity, and using study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change based on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts on improving the primary literature and conducting large-scale, pre-registered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/.


2016
Vol 0 (0)
pp. 1-10
Author(s):
Chris H.J. Hartgerink
Jelte M. Wicherts

Abstract This article discusses the responsible conduct of research, questionable research practices, and research misconduct. Responsible conduct of research is often defined in terms of a set of abstract, normative principles, professional standards, and ethics in doing research. In order to accommodate the normative principles of scientific research, the professional standards, and a researcher’s moral principles, transparent research practices can serve as a framework for responsible conduct of research. We suggest a “prune-and-add” project structure to enhance transparency and, by extension, responsible conduct of research. Questionable research practices are defined as practices that are detrimental to the research process. The prevalence of questionable research practices remains largely unknown, and reproducibility of findings has been shown to be problematic. Questionable practices are discouraged by transparent practices because practices that arise from them will become more apparent to scientific peers. Most effective might be preregistrations of research design, hypotheses, and analyses, which reduce particularism of results by providing an a priori research scheme. Research misconduct has been defined as fabrication, falsification, and plagiarism (FFP), which is clearly the worst type of research practice. Despite it being clearly wrong, it can be approached from a scientific and legal perspective. The legal perspective sees research misconduct as a form of white-collar crime. The scientific perspective seeks to answer the following question: “Were results invalidated because of the misconduct?” We review how misconduct is typically detected, how its detection can be improved, and how prevalent it might be. Institutions could facilitate detection of data fabrication and falsification by implementing data auditing. Nonetheless, the effect of misconduct is pervasive: many retracted articles are still cited after the retraction has been issued. 
Main points
- Researchers systematically evaluate their own conduct as more responsible than that of their colleagues, but not as responsible as they would like.
- Transparent practices, facilitated by the Open Science Framework, help embody scientific norms that promote responsible conduct.
- Questionable research practices harm the research process and run counter to generally accepted scientific norms, but are hard to detect.
- Research misconduct requires active scrutiny by the research community, because editors and peer reviewers do not pay adequate attention to detecting it.
- Tips are given on how to improve your detection of potential problems.


2021
Author(s):
Gowri Gopalakrishna
Gerben ter Riet
Maarten J.L.F. Cruyff
Gerko Vink
Ineke Stoop
...  

Background: The prevalence of research misconduct and questionable research practices (QRPs), and their associations with a range of explanatory factors, have not been studied sufficiently among academic researchers.
Methods: The National Survey on Research Integrity was aimed at all disciplinary fields and academic ranks in the Netherlands. The survey enquired about engagement in fabrication, falsification, and 11 QRPs over the previous three years, and about 12 explanatory factor scales. We ensured strict identity protection and used a randomized response method for questions on research misconduct.
Results: 6,813 respondents completed the survey. Prevalence of fabrication was 4.3% (95% CI: 2.9, 5.7) and of falsification 4.2% (95% CI: 2.8, 5.6). Prevalence of QRPs ranged from 0.6% (95% CI: 0.5, 0.9) to 17.5% (95% CI: 16.4, 18.7), with 51.3% (95% CI: 50.1, 52.5) of respondents engaging frequently in ≥ 1 QRP. Being a PhD candidate or junior researcher increased the odds of frequently engaging in ≥ 1 QRP, as did being male. Scientific norm subscription (odds ratio (OR) 0.79; 95% CI: 0.63, 1.00) and perceived likelihood of detection by reviewers (OR 0.62; 95% CI: 0.44, 0.88) were associated with lower odds of research misconduct. Publication pressure was associated with higher odds of engaging frequently in ≥ 1 QRP (OR 1.22; 95% CI: 1.14, 1.30).
Conclusions: We found a higher prevalence of misconduct than earlier surveys. Our results suggest that greater emphasis on scientific norm subscription, strengthening reviewers in their role as gatekeepers of research quality, and curbing the "publish or perish" incentive system can promote research integrity.
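The randomized response method used for the misconduct questions can be sketched in its forced-response variant, a common design for sensitive questions. Whether this survey used exactly these probabilities is not stated; the dice-style 1/12 forced-"yes", 1/12 forced-"no" split and the 4% true prevalence below are illustrative assumptions only:

```python
import random

def rr_estimate(observed_yes_rate, p_truth=10/12, p_forced_yes=1/12):
    """Invert the forced-response design:
    E[yes rate] = p_forced_yes + p_truth * true prevalence."""
    return (observed_yes_rate - p_forced_yes) / p_truth

# Simulate 100,000 respondents with an assumed true prevalence of 4%.
random.seed(0)
true_prev, n, yes = 0.04, 100_000, 0
for _ in range(n):
    roll = random.random()
    if roll < 1/12:        # randomizer forces a 'yes' (shields honest 'yes' answers)
        yes += 1
    elif roll < 2/12:      # randomizer forces a 'no'
        pass
    else:                  # respondent answers truthfully
        yes += random.random() < true_prev
estimate = rr_estimate(yes / n)   # lands near the assumed 0.04
```

Because any individual "yes" may have been forced by the randomizer, no single answer is incriminating, yet the aggregate prevalence remains estimable, which is why such designs are thought to elicit more honest reporting of misconduct.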

