Reflexive Behaviour: How Publication Pressure Affects Research Quality in Astronomy

Publications ◽ 2021 ◽ Vol 9 (4) ◽ pp. 52
Author(s): Julia Heuritsch

Reflexive metrics is a branch of science studies that explores how the demand for accountability and performance measurement in science has shaped research culture in recent decades. Hypercompetition and publication pressure are part of this neoliberal culture. How do scientists respond to these pressures? Studies on research integrity and organisational culture suggest that people who feel treated unfairly by their institution are more likely to engage in deviant behaviour, such as scientific misconduct. Building on reflexive metrics, combined with studies on the influence of organisational culture on research integrity, this study reflects on the research behaviour of astronomers through two questions: (1) To what extent is research (mis-)behaviour reflexive, i.e., dependent on perceptions of publication pressure and of distributive and organisational justice? (2) What impact does scientific misconduct have on research quality? To perform this reflection, we conducted a comprehensive survey of academic and non-academic astronomers worldwide and received 3509 responses. We found that publication pressure explains 10% of the variance in the occurrence of misconduct, and between 7% and 13% of the variance in perceptions of distributive and organisational justice as well as in overcommitment to work. Our results on the perceived impact of scientific misconduct on research quality show that the epistemic harm of questionable research practices should not be underestimated, which suggests a need for policy change. In particular, paying less attention to metrics (such as publication rate) in the allocation of grants, telescope time and institutional rewards would foster better scientific conduct and, hence, research quality.
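As an aside for readers unfamiliar with the "variance explained" figures quoted in this and later abstracts: such figures are typically the R² of a regression model. The sketch below is a minimal illustration of how R² is obtained from an ordinary least squares fit; all variable names and data are invented and are not drawn from the survey described above.

```python
import numpy as np

# Invented data for illustration: one predictor (a standardised
# publication-pressure score) and one outcome (a misconduct score).
rng = np.random.default_rng(0)
n = 500
pressure = rng.normal(0.0, 1.0, n)
misconduct = 0.3 * pressure + rng.normal(0.0, 1.0, n)

# Ordinary least squares fit: misconduct ~ b0 + b1 * pressure
X = np.column_stack([np.ones(n), pressure])
b0, b1 = np.linalg.lstsq(X, misconduct, rcond=None)[0]

# R^2 = 1 - SS_res / SS_tot is the share of the outcome's variance that
# the predictor accounts for; "publication pressure explains 10% of the
# variance" corresponds to R^2 = 0.10.
resid = misconduct - (b0 + b1 * pressure)
r_squared = 1.0 - resid.var() / misconduct.var()
print(f"b1 = {b1:.3f}, R^2 = {r_squared:.3f}")
```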

2018
Author(s): Lauren A. Maggio ◽ Ting Dong ◽ Erik W. Driessen ◽ Anthony R. Artino

Abstract Introduction: Engaging in scientific misconduct and questionable research practices (QRPs) is a noted problem across fields, including health professions education (HPE). To mitigate these practices, other disciplines have enacted strategies based on researcher characteristics and practice factors. Thus, to inform HPE, this article seeks to determine which researcher characteristics and practice factors, if any, might explain the frequency of irresponsible research practices. Method: In 2017, a cross-sectional survey of HPE researchers was conducted. The survey included 66 items derived from two published QRP surveys and a publication pressure scale adapted from the literature. The study outcome was the self-reported misconduct frequency score, a weighted mean score for each respondent across all misconduct and QRP items. Statistical analysis included descriptive statistics, correlation analysis, and multiple linear regression analysis. Results and Discussion: In total, 590 researchers took the survey. Results from the regression analysis indicated that researcher age had a negative association with the misconduct frequency score (b = −.01, t = −2.91, p < .05), suggesting that older researchers tended to have lower misconduct frequency scores. Publication pressure (b = .20, t = 7.82, p < .001) and number of publications (b = .001, t = 3.27, p < .01) had positive associations with the misconduct frequency score: the greater the publication pressure or the more publications a researcher reported, the higher the score. Overall, the explanatory variables accounted for 21% of the variance in the misconduct frequency score, and publication pressure was the strongest predictor. These findings provide an evidence base from which HPE might tailor strategies to address scientific misconduct and QRPs.
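The outcome in this study is a weighted mean score per respondent over all misconduct and QRP items, regressed on researcher characteristics. The sketch below mirrors the structure of that kind of analysis, not its data or results: the item responses, severity weights, and explanatory variables are invented placeholders, and statsmodels is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

# Invented responses for 590 hypothetical researchers.
rng = np.random.default_rng(1)
n = 590
age = rng.uniform(25, 70, n)
pressure = rng.normal(0.0, 1.0, n)         # publication-pressure scale score
n_pubs = rng.poisson(30, n).astype(float)  # self-reported publication count

# A weighted mean "misconduct frequency score": each item's frequency
# rating is weighted (here by placeholder severity weights) and averaged.
items = rng.integers(0, 5, size=(n, 3)).astype(float)  # 3 stand-in items
weights = np.array([1.0, 2.0, 3.0])                    # stand-in weights
score = items @ weights / weights.sum()

# Multiple linear regression of the score on the explanatory variables.
X = sm.add_constant(np.column_stack([age, pressure, n_pubs]))
fit = sm.OLS(score, X).fit()
print(fit.params)    # fitted intercept and slopes (analogues of the reported b values)
print(fit.rsquared)  # proportion of variance explained (21% in the study itself)
```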


2021
Author(s): Gowri Gopalakrishna ◽ Gerben ter Riet ◽ Maarten J.L.F. Cruyff ◽ Gerko Vink ◽ Ineke Stoop ◽ ...

Background: The prevalence of research misconduct and questionable research practices (QRPs), and their associations with a range of explanatory factors, have not been studied sufficiently among academic researchers. Methods: The National Survey on Research Integrity was aimed at all disciplinary fields and academic ranks in the Netherlands. The survey enquired about engagement in fabrication, falsification and 11 QRPs over the previous three years, and about 12 explanatory factor scales. We ensured strict identity protection and used a randomized response method for questions on research misconduct. Results: 6,813 respondents completed the survey. Prevalence of fabrication was 4.3% (95% CI: 2.9, 5.7) and of falsification 4.2% (95% CI: 2.8, 5.6). Prevalence of QRPs ranged from 0.6% (95% CI: 0.5, 0.9) to 17.5% (95% CI: 16.4, 18.7), with 51.3% (95% CI: 50.1, 52.5) of respondents engaging frequently in ≥ 1 QRP. Being a PhD candidate or junior researcher increased the odds of frequently engaging in ≥ 1 QRP, as did being male. Scientific norm subscription (odds ratio (OR) 0.79; 95% CI: 0.63, 1.00) and perceived likelihood of detection by reviewers (OR 0.62; 95% CI: 0.44, 0.88) were associated with lower odds of research misconduct. Publication pressure was associated with higher odds of engaging frequently in ≥ 1 QRP (OR 1.22; 95% CI: 1.14, 1.30). Conclusions: We found a higher prevalence of misconduct than earlier surveys. Our results suggest that greater emphasis on scientific norm subscription, strengthening reviewers in their role as gatekeepers of research quality, and curbing the “publish or perish” incentive system can promote research integrity.
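The randomized response method mentioned here lets respondents answer sensitive questions with plausible deniability while still permitting a population-level prevalence estimate. A minimal sketch of the classic Warner design follows; the design parameter, sample, and prevalence below are invented for illustration, and the survey's actual randomization scheme may differ.

```python
import numpy as np

def warner_estimate(yes_prop: float, p: float) -> float:
    """Estimate prevalence pi under Warner's randomized response design.

    Each respondent answers the sensitive question with probability p and
    its negation with probability 1 - p, so that
        P(yes) = p * pi + (1 - p) * (1 - pi).
    Solving for pi gives the estimator below (requires p != 0.5).
    """
    return (yes_prop - (1.0 - p)) / (2.0 * p - 1.0)

# Simulated check with invented numbers: true prevalence 4.3%, p = 0.8.
rng = np.random.default_rng(2)
n, true_pi, p = 6813, 0.043, 0.8
engaged = rng.random(n) < true_pi        # who truly engaged (never observed)
asked_direct = rng.random(n) < p         # which question each respondent got
answers = np.where(asked_direct, engaged, ~engaged)  # only this is observed
print(f"estimated prevalence: {warner_estimate(answers.mean(), p):.3f}")
```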


2021 ◽ Vol 6 (1)
Author(s): Noémie Aubert Bonn ◽ Wim Pinxten

Abstract Background: Research misconduct and questionable research practices have been the subject of increasing attention in the past few years. But despite the rich body of research available, few empirical works also include the perspectives of non-researcher stakeholders. Methods: We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed career, to inquire into the topics of success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to grasp the views of interacting and complementary actors in a system setting. Results: Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on the problems that affect research integrity and research culture. We first found that different actors have different perspectives on these problems. Problems were linked either to personalities and attitudes or to the climates in which researchers operate. Elements that were described as essential for success (in the associate paper) were often thought to accentuate the problems of research climates by disrupting research culture and research integrity. Even though all participants agreed that current research climates need to be addressed, participants generally felt neither responsible nor capable of initiating change. Instead, respondents revealed a circle of blame and mistrust between actor groups. Conclusions: Our findings resonate with recent debates and suggest a few action points which might help advance the discussion. First, the research integrity debate must revisit and tackle the way in which researchers are assessed. Second, approaches to promote better science need to address the impact that research climates have on research integrity and research culture rather than capitalize on individual researchers’ compliance. Finally, inter-actor dialogue and shared decision making must be given priority to ensure that the perspectives of the full research system are captured. Understanding the relations and interdependencies between these perspectives is key to addressing the problems of science. Study registration: https://osf.io/33v3m


2021 ◽ Vol 6 (1)
Author(s): Noémie Aubert Bonn ◽ Wim Pinxten

Abstract Background: Success shapes the lives and careers of scientists. But success in science is difficult to define, let alone to translate into indicators that can be used for assessment. In the past few years, several groups have expressed their dissatisfaction with the indicators currently used for assessing researchers. But given the lack of agreement on what should constitute success in science, most proposals remain unresolved. This paper aims to complement our understanding of success in science and to document areas of tension and conflict in research assessments. Methods: We conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed career, to inquire into the topics of success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to grasp the views of interacting and complementary actors in a system setting. Results: Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on what defines and determines success in science. Respondents depicted success as a multi-factorial, context-dependent, and mutable construct. Success appeared to be an interaction between characteristics of the researcher (Who), research outputs (What), processes (How), and luck. Interviewees noted that current research assessments overvalued outputs but largely ignored the processes deemed essential for research quality and integrity. Interviewees suggested that science needs a diversity of indicators that are transparent, robust, and valid, and that allow a balanced and diverse view of success; that assessment of scientists should not blindly depend on metrics but also value human input; and that quality should be valued over quantity. Conclusions: The objective of research assessments may be to encourage good researchers, to benefit society, or simply to advance science. Yet we show that current assessments fall short on each of these objectives. Open and transparent inter-actor dialogue is needed to understand what research assessments aim for and how they can best achieve their objective. Study registration: osf.io/33v3m


Author(s): Noémie Aubert Bonn ◽ Wim Pinxten

Abstract Background: Research misconduct and questionable research practices have been the subject of increasing attention in the past few years. But despite the rich body of research available, few empirical works provide the perspectives of non-researcher stakeholders. Methods: To capture some of these forgotten voices, we conducted semi-structured interviews and focus groups with policy makers, funders, institution leaders, editors or publishers, research integrity office members, research integrity community members, laboratory technicians, researchers, research students, and former researchers who changed career, to inquire into the topics of success, integrity, and responsibilities in science. We used the Flemish biomedical landscape as a baseline to grasp the views of interacting and complementary actors in a system setting. Results: Given the breadth of our results, we divided our findings into a two-paper series, with the current paper focusing on the problems that affect the quality and integrity of science. We first discovered that perspectives on misconduct, including the core reasons for condemning misconduct, differed between individuals and actor groups. Beyond misconduct, interviewees also identified numerous problems that affect the integrity of research: issues related to personalities and attitudes, lack of knowledge of good practices, and research climate. Elements that were described as essential for success (in the associate paper) were often thought to accentuate the problems of research climates by disrupting research cultures and research environments. Even though everyone agreed that current research climates need to be addressed, no one felt responsible or capable of initiating change. Instead, respondents revealed a circle of blame and mistrust between actor groups. Conclusions: Our findings resonate with recent debates and suggest a few action points which might help advance the discussion. First, we must tackle how research is assessed. Second, approaches to promote better science should be revisited: not only should they directly address the impact of climates on research practices, but they should also redefine their objective to empower and support researchers rather than capitalize on their compliance. Finally, inter-actor dialogue and shared decision making are crucial to building joint objectives for change. Trial registration: osf.io/33v3m


2020
Author(s): Jonathan Plucker ◽ Matthew C. Makel

Replicability and the importance of enhanced research rigor are foundational issues across the social sciences, and educational psychology is no exception. Yet strategies for increasing research quality are not widespread in the field, including the use of replication studies. In this manuscript, we examine the nature and scope of replication problems in educational psychology research and how these issues threaten research integrity and transparency. We also examine strategies to mitigate these problems in educational psychology. Finally, we discuss several ongoing challenges that contribute to replication problems and need additional attention from researchers.


2018
Author(s): Anthony R. Artino ◽ Erik W. Driessen ◽ Lauren A. Maggio

Abstract Purpose: To maintain scientific integrity and engender public confidence, research must be conducted responsibly. Whereas scientific misconduct, like data fabrication, is clearly irresponsible and unethical, other behaviors—often referred to as questionable research practices (QRPs)—exploit the ethical shades of gray that color acceptable practice. This study aimed to measure the frequency of self-reported QRPs in a diverse, international sample of health professions education (HPE) researchers. Method: In 2017, the authors conducted an anonymous, cross-sectional survey study. The web-based survey contained 43 QRP items that asked respondents to rate how often they had engaged in various forms of scientific misconduct. The items were adapted from two previously published surveys. Results: In total, 590 HPE researchers took the survey. The mean age was 46 years (SD = 11.6), and the majority of participants were from the United States (26.4%), Europe (23.2%), and Canada (15.3%). The three most frequently reported QRPs were adding authors to a paper who did not qualify for authorship (60.6%), citing articles that were not read (49.5%), and selectively citing papers to please editors or reviewers (49.4%). Additionally, respondents reported misrepresenting a participant’s words (6.7%), plagiarizing (5.5%), inappropriately modifying results (5.3%), deleting data without disclosure (3.4%), and fabricating data (2.4%). Overall, 533 (90.3%) respondents reported at least one QRP. Conclusions: Notwithstanding the methodological limitations of survey research, these findings indicate that a substantial proportion of HPE researchers report a range of QRPs. In light of these results, reforms are needed to improve the credibility and integrity of the HPE research enterprise. “Researchers should practice research responsibly. Unfortunately, some do not.” –Nicholas H. Steneck, 2006


2021 ◽ pp. 152-172
Author(s): R. Barker Bausell

The “mass” replications of multiple studies, some employing dozens of investigators distributed among myriad sites, are unique to the reproducibility movement. The most impressive of these initiatives was the one employed by the Open Science Collaboration directed by Brian Nosek, who recruited 270 investigators to participate in the replication of 100 psychological experiments via a very carefully structured, prespecified protocol that avoided questionable research practices. Just before this Herculean effort, two huge biotech firms (Amgen and Bayer HealthCare) conducted 53 and 67 preclinical replications, respectively, of promising published studies to ascertain which results were worth pursuing for commercial applications. Amazingly, in less than a 10-year period, a number of other diverse multistudy replications were also conducted, involving hundreds of effects. Among these were the three “Many Labs” multistudy replications based on the Open Science model (but also designed to ascertain whether potential confounders of the approach itself existed, such as differences in participant types, settings, and timing), replications of social science studies published in Science and Nature, experimental economics studies, and even self-reported replications ascertained from a survey. Somewhat surprisingly, the overall successful replication percentage for this diverse collection of 811 studies was 46%, mirroring the modeling results discussed in Chapter 3 and supporting John Ioannidis’s pejorative and often-quoted conclusion that most scientific results are incorrect.
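The 46% figure is a pooled rate: successes summed across all projects divided by total attempts, so larger projects weigh more than smaller ones. A minimal sketch of that arithmetic follows; only the attempt counts of 100, 53, and 67 for the three named projects appear in the text above, and the per-project success counts below are invented placeholders.

```python
# Pooled replication success rate across heterogeneous projects.
# Attempt counts come from the text; success counts are INVENTED
# placeholders for illustration only.
projects = {
    "Open Science Collaboration":   (100, 39),  # (attempts, successes) - assumed
    "Amgen preclinical":            (53, 6),    # assumed
    "Bayer HealthCare preclinical": (67, 16),   # assumed
}

attempts = sum(a for a, _ in projects.values())
successes = sum(s for _, s in projects.values())

# Each project contributes in proportion to its size; this is how a single
# figure like "46% of 811 studies" is formed from many separate efforts.
print(f"pooled success rate: {successes / attempts:.1%} over {attempts} attempts")
```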

