Research on illicit cultural artefacts

2021 ◽  
Author(s):  
Mehreen Sheikh

This paper examines whether we can have confidence in the scientific integrity of a research effort that could potentially be part of the illicit trade in cultural artefacts. As an example, I use the research on the ancient clay tablets from the Schøyen Collection. A closer study of the research product reveals questionable research practices, and this issue is then placed in a wider context. After highlighting the importance of the research community as a social institution in shaping the norms and values of its members, and its influence on what counts as desirable research, I explore how these expectations and guidelines affect research conducted on illicit cultural artefacts.

2018 ◽  
Author(s):  
Anthony R. Artino ◽  
Erik W. Driessen ◽  
Lauren A. Maggio

Abstract

Purpose: To maintain scientific integrity and engender public confidence, research must be conducted responsibly. Whereas scientific misconduct, like data fabrication, is clearly irresponsible and unethical, other behaviors, often referred to as questionable research practices (QRPs), exploit the ethical shades of gray that color acceptable practice. This study aimed to measure the frequency of self-reported QRPs in a diverse, international sample of health professions education (HPE) researchers.

Method: In 2017, the authors conducted an anonymous, cross-sectional survey study. The web-based survey contained 43 QRP items that asked respondents to rate how often they had engaged in various forms of scientific misconduct. The items were adapted from two previously published surveys.

Results: In total, 590 HPE researchers took the survey. The mean age was 46 years (SD = 11.6), and the majority of participants were from the United States (26.4%), Europe (23.2%), and Canada (15.3%). The three most frequently reported QRPs were adding authors to a paper who did not qualify for authorship (60.6%), citing articles that were not read (49.5%), and selectively citing papers to please editors or reviewers (49.4%). Additionally, respondents reported misrepresenting a participant's words (6.7%), plagiarizing (5.5%), inappropriately modifying results (5.3%), deleting data without disclosure (3.4%), and fabricating data (2.4%). Overall, 533 (90.3%) respondents reported at least one QRP.

Conclusions: Notwithstanding the methodological limitations of survey research, these findings indicate that a substantial proportion of HPE researchers report a range of QRPs. In light of these results, reforms are needed to improve the credibility and integrity of the HPE research enterprise.

"Researchers should practice research responsibly. Unfortunately, some do not." –Nicholas H. Steneck, 2006


2018 ◽  
Author(s):  
Hannah Fraser ◽  
Timothy H. Parker ◽  
Shinichi Nakagawa ◽  
Ashley Barnett ◽  
Fiona Fidler

We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable Research Practices (QRPs), including cherry picking statistically significant results, p hacking, and hypothesising after the results are known (HARKing). We also asked them to estimate the proportion of their colleagues that use each of these QRPs. Several of the QRPs were prevalent within the ecology and evolution research community. Across the two groups, we found 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking); 42% had collected more data after inspecting whether results were statistically significant (a form of p hacking) and 51% had reported an unexpected finding as though it had been hypothesised from the start (HARKing). Such practices have been directly implicated in the low rates of reproducible results uncovered by recent large scale replication studies in psychology and other disciplines. The rates of QRPs found in this study are comparable with the rates seen in psychology, indicating that the reproducibility problems discovered in psychology are also likely to be present in ecology and evolution.


2018 ◽  
Author(s):  
Dick Bierman ◽  
Jacob Jolij

We have tested the feasibility of a method to prevent the occurrence of so-called Questionable Research Practices (QRPs). Apart from embedded pre-registration, the major aspect of the system is real-time uploading of data to a secure server. We outline the method, discuss the treatment of drop-outs, compare it to the Born-open data method, and report on our preliminary experiences. We also discuss extending the data-integrity system from a secure server to the use of blockchain technology.
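As an illustration only (not taken from the paper above), the blockchain-style data-integrity idea can be sketched by hash-chaining uploaded records: each record's hash incorporates the previous hash, so any later alteration of earlier data invalidates every subsequent hash. The record fields and function names here are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a data record together with the previous hash,
    forming a tamper-evident chain (the core idea behind
    blockchain-based data integrity)."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list) -> list:
    """Return the chained hashes for a sequence of records,
    as a server might compute them at upload time."""
    hashes, prev = [], ""
    for rec in records:
        prev = record_hash(rec, prev)
        hashes.append(prev)
    return hashes

# Two hypothetical trial records uploaded in real time.
records = [{"pid": 1, "score": 7}, {"pid": 2, "score": 5}]
chain = build_chain(records)

# Modifying an earlier record after the fact changes all later hashes,
# so undisclosed data edits become detectable.
tampered = build_chain([{"pid": 1, "score": 8}, {"pid": 2, "score": 5}])
assert tampered[1] != chain[1]
```

In a real system the server, not the researcher, would compute and retain the chain, which is what makes post-hoc alteration detectable.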


2019 ◽  
Author(s):  
Rens van de Schoot ◽  
Elian Griffioen ◽  
Sonja Désirée Winter

The trial-and-roulette method is a popular method for eliciting experts' beliefs about a statistical parameter. However, most studies examining the validity of this method use only 'perfect' elicitation results. In practice, it is sometimes hard to obtain such neat elicitation results. In our project on predicting fraud and questionable research practices among PhD candidates, we ran into issues with imperfect elicitation results. The goal of the current chapter is to provide an overview of the solutions we used for dealing with these imperfect results, so that others can benefit from our experience. We present information about the nature of our project, the reasons for the imperfect results, and how we resolved them, supported by annotated R syntax.


2021 ◽  
Author(s):  
Jesse Fox ◽  
Katy E Pearce ◽  
Adrienne L Massanari ◽  
Julius Matthew Riles ◽  
Łukasz Szulc ◽  
...  

The open science (OS) movement has advocated for increased transparency in certain aspects of research. Communication is taking its first steps toward OS as some journals have adopted OS guidelines codified by another discipline. We find this pursuit troubling, as OS prioritizes openness while insufficiently addressing essential ethical principles: respect for persons, beneficence, and justice. Some recommended open science practices increase the potential for harm for marginalized participants, communities, and researchers. We elaborate how OS can serve as a marginalizing force within academia and the research community, as it overlooks the needs of marginalized scholars and excludes some forms of scholarship. We challenge the current instantiation of OS and propose a divergent agenda for the future of Communication research centered on ethical, inclusive research practices.

