Scientific fraud: Recently Published Documents

Total documents: 172 (last five years: 17)
H-index: 12 (last five years: 0)

2021 ◽  
Author(s):  
Julia G. Bottesini ◽  
Mijke Rhemtulla ◽  
Simine Vazire

What research practices should be considered acceptable? Historically, scientists have set the standards for what constitutes acceptable research practice. However, there is value in considering non-scientists’ perspectives, including those of research participants. We surveyed 1,873 participants from MTurk and university subject pools after their participation in one of eight minimal-risk studies. We asked participants how they would feel if common research practices were applied to their data: p-hacking/cherry-picking results, selective reporting of studies, Hypothesizing After Results are Known (HARKing), committing fraud, conducting direct replications, sharing data, sharing methods, and open access publishing. An overwhelming majority of psychology research participants considered questionable research practices (e.g., p-hacking, HARKing) unacceptable (68.3–81.3%) and were supportive of practices to increase transparency and replicability (71.4–80.1%). A surprising number of participants expressed positive or neutral views toward scientific fraud, raising concerns about the quality of our data. We grapple with this concern and interpret our results in light of the limitations of our study. Despite the ambiguity in our results, we argue that there is evidence (from our study and others’) that researchers may be violating participants’ expectations and should be transparent with participants about how their data will be used.


2021 ◽  
Vol 3 (3) ◽  
pp. 1-9
Author(s):  
Ahmed Nasr M Ghanem

Scientific fraud arises either from Fraudulent Research in Science (FRS) or from Fraudulent Science (FS) induced by unintentional error. Although the two are entirely different, both have a deleterious effect on patient care.


Author(s):  
Mads P. Sørensen ◽  
Tine Ravn ◽  
Ana Marušić ◽  
Andrea Reyes Elizondo ◽  
Panagiotis Kavouras ◽  
...  

Abstract
The widespread problems with scientific fraud, questionable research practices, and the reliability of scientific results have led to an increased focus on research integrity (RI). International organisations and networks have been established, declarations have been issued, and codes of conduct have been formulated. The abstract principles of these documents are now also being translated into concrete topic areas that Research Performing Organisations (RPOs) and Research Funding Organisations (RFOs) should focus on. However, so far we know very little about disciplinary differences in the need for RI support from RPOs and RFOs. This paper attempts to fill that knowledge gap. It reports on a comprehensive focus group study, comprising 30 focus group interviews carried out in eight different countries across Europe, which addressed the following research question: “Which RI topics would researchers and stakeholders from the four main areas of research (humanities, social science, natural science incl. technical science, and medical science incl. biomedicine) prioritise for RPOs and RFOs?” The paper presents the results of these focus group interviews and gives an overview of the priorities of the four main areas of research. It ends with six policy recommendations and a reflection on how the results of the study can be used in RPOs and RFOs.


Author(s):  
Emmanuel J. Genot ◽  
Erik J. Olsson

Fake news can originate from scientific fraud. An article can be retracted upon discovery of fraud. A case study shows, however, that such fake science can remain visible in Google even after the article is retracted. The authors hypothesize that the explanation lies in the popularity-based logic governing Google’s foundational PageRank algorithm, in conjunction with the “law of retraction”: a retraction notice is typically taken to be less interesting, and therefore less popular with internet users, than the original content retracted. This chapter presents an empirical study drawing on records of articles retracted due to fraud (fabrication of data) in the Retraction Watch public database. It finds that both Google Search and Google Scholar more often than not rank a link to the original article higher than a link indicating that the article has been retracted. Thus, both Google Search and Google Scholar risk disseminating fake science through their ranking algorithms.
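The popularity-based mechanism described above can be illustrated with a toy PageRank computation. The link graph below is entirely hypothetical (the page names and link counts are invented for illustration, not taken from the study): many pages link to the original article, while only one links to the retraction notice.

```python
# Toy illustration (hypothetical link graph, not data from the study)
# of the "law of retraction": under a popularity-based ranking such as
# PageRank, a widely linked original article outranks its sparsely
# linked retraction notice.

def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank over a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Five blogs link to the (retracted) article; a single news page links
# to the retraction notice. All page names are invented.
graph = {
    "original_article": [],
    "retraction_notice": ["original_article"],
    "blog1": ["original_article"],
    "blog2": ["original_article"],
    "blog3": ["original_article"],
    "blog4": ["original_article"],
    "blog5": ["original_article"],
    "news1": ["retraction_notice"],
}
ranks = pagerank(graph)
assert ranks["original_article"] > ranks["retraction_notice"]
```

Because rank flows along links, the heavily cited article accumulates far more score than the notice announcing its retraction, mirroring the ranking pattern the study reports.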


2021 ◽  
Vol 89 ◽  
pp. 117-129
Author(s):  
Liam Kofi Bright

Abstract
It's natural to think of scientists as truth seekers, people driven by an intense curiosity to understand the natural world. Yet this picture of scientists and scientific inquiry sits uncomfortably with the reality and prevalence of scientific fraud. If one wants to get at the truth about nature, why lie? Won't that just set inquiry back, as people pursue false leads? To understand why fraud occurs, and what can be done about it, we need to understand the social structures scientists work within, and how some of the institutions that enable science to be such a successful endeavour, all things considered, also abet and encourage fraud.


Author(s):  
Anđela Keljanović

At a time when social psychologists believed they could be proud of their discipline, the devastating news broke that Diederik Stapel had committed major scientific fraud. This event coincided with the start of a discussion on trust in psychological findings. It was soon followed by a report of a series of nine studies that failed to replicate the 'professor' priming study. These replication failures were astounding given earlier reports of successful replications. In response to this crisis of confidence in the field's findings, the Open Science Collaboration subsequently replicated 100 correlational and experimental studies published in 2008 in Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition. Whereas 97% of the original studies reported a statistically significant positive effect, only 36% of the replications did. However, even these findings have been called into question by calculating Bayes factors. In addition to fraud, questionable research practices stemming from publication bias, which produce false positives, undermine confidence in the validity of psychological research findings. Perhaps the most costly kind of false-positive finding is the erroneous rejection of the null hypothesis. Yet had Stapel (2011) merely confirmed the null hypothesis, had Bargh (1996) found that priming did not affect walking speed, or had Dijksterhuis and van Knippenberg (1998) reported that participants primed with the word 'professor' did not improve their performance on a task, no one would have been interested in their findings. Null findings are interesting only if they contradict the main hypothesis derived from a theory or contradict a series of previous studies. Because good experimental research is usually conducted in order to test theories, researchers can never be sure whether they have chosen the optimal operationalization of a given construct.
As researchers can never be sure that they have properly operationalized the theoretical constructs they are evaluating, or that they have successfully controlled the third variables that may be responsible for their results, a theory can never be proven true.
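How questionable research practices inflate false positives can be made concrete with a small simulation of one such practice, optional stopping (a form of p-hacking). All parameter values below are assumptions chosen for illustration, not figures from the article:

```python
# Minimal simulation (illustrative; not data from the article) of how
# optional stopping -- testing repeatedly and stopping at the first
# "significant" result -- inflates the false-positive rate above the
# nominal 5%, even though the true effect is exactly zero.
import math
import random

random.seed(0)

def peeking_experiment(start_n=10, step=10, max_n=100, z_crit=1.96):
    """Draw data under a true null (mean 0, sd 1), run a z-test after
    every batch, and stop as soon as the result looks 'significant'."""
    data = [random.gauss(0, 1) for _ in range(start_n)]
    while len(data) <= max_n:
        n = len(data)
        z = (sum(data) / n) * math.sqrt(n)  # sigma known to be 1
        if abs(z) > z_crit:
            return True  # a false positive: the null is true
        data.extend(random.gauss(0, 1) for _ in range(step))
    return False

trials = 2000
rate = sum(peeking_experiment() for _ in range(trials)) / trials
# A single fixed-n test would reject about 5% of the time; with ten
# opportunities to peek, the rejection rate is markedly higher.
assert rate > 0.05
```

With ten looks at the data, the simulated rejection rate climbs well above the nominal 5%, which is one mechanism by which publication-bias-driven practices fill the literature with false positives.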


2020 ◽  
Author(s):  
Chang Qi ◽  
Jian Zhang ◽  
Peng Luo

Abstract
Scientific fraud through image duplication and manipulation in western blot images is a growing problem. Currently, problematic western blot images are detected mainly by checking for repeated bands or through visual inspection. However, the completeness of these methods in detecting problematic images has not been demonstrated. Here we show that Generative Adversarial Nets (GANs) can generate realistic western blot images that are indistinguishable from real western blots. The overall accuracy of researchers in identifying synthetic western blot images was 0.52, almost equal to blind guessing (0.5). We found that GANs can generate western blot images with bands of the expected lengths, widths, and angles in desired positions, and that these images can fool researchers. In our case study, the accuracy of detecting synthetic western blot images was related to the number of years researchers had performed studies involving western blots, but there was no apparent difference in accuracy among researchers with different academic degrees. Our results demonstrate that GANs can generate fake western blot images that fool existing problematic-image detection methods. Therefore, more information is needed to ensure that the western blots appearing in scientific articles are real. We argue that every western blot image should be uploaded along with a unique identifier generated by the laboratory machine, and that these images should be peer reviewed along with the corresponding submitted articles, which may reduce the incidence of scientific fraud.
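As a back-of-the-envelope check on the reported figure, an identification accuracy of 0.52 can be compared with blind guessing (0.5) using an exact two-sided binomial test. The number of judgements below (n = 200) is an assumed value for illustration; the abstract does not report the actual sample size:

```python
# Hedged sketch: is an observed accuracy of 0.52 distinguishable from
# blind guessing (0.5)? The sample size n = 200 is an assumption for
# illustration, not a figure from the study.
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    pk = comb(n, k) * p**k * (1 - p)**(n - k)
    total = 0.0
    for i in range(n + 1):
        pi = comb(n, i) * p**i * (1 - p)**(n - i)
        if pi <= pk + 1e-12:
            total += pi
    return total

n = 200                # assumed number of image judgements
k = round(0.52 * n)    # 104 correct answers
p_value = binom_two_sided_p(k, n)
# At this (assumed) sample size, 0.52 accuracy is statistically
# indistinguishable from chance, consistent with the abstract's claim.
assert p_value > 0.05
```

Even at a few hundred judgements, 52% correct is well within the noise around chance, which supports the abstract's characterisation of the images as indistinguishable from real blots.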


2020 ◽  
Vol 32 (1) ◽  
pp. 94-121
Author(s):  
Bruce B. Svare

Cases of scientific fraud and research misconduct in general have escalated in Western higher education over the last 20 years. These practices include forgery, distortion of facts, plagiarism, the outright faking of research results, and thriving black markets for positive peer reviews and ghost-written papers. More recently, the same abuses have found their way into Asian higher education, with some high-profile and widely covered cases in India, South Korea, China and Japan. Reports of misconduct are now reaching alarming proportions in Asia, and the negative consequences for individuals, institutions, governments and society at large are incalculable. The incentives for academic scientists in Asia are approaching and even surpassing those ordinarily seen in the West. Cash payments for publishing articles in high-impact journals can double or even triple yearly salaries in some cases. Combining this environment with the simultaneous pressure to obtain oftentimes scarce research funding has produced a culture of unethical behaviour worldwide. This article assesses three important issues regarding scientific fraud and research misconduct: distorted incentives for research and an overreliance upon metrics; damage to the integrity of higher education and public trust; and improving research environments so as to deter unethical behaviour. This is especially crucial for emerging Asian countries, in particular those of the Association of Southeast Asian Nations (ASEAN), whose scientific infrastructure is less developed but which nonetheless have the potential to become major players in the development of psychology as well as Science, Technology, Engineering and Mathematics (STEM) research and training.

