Research Integrity and Peer Review
Latest Publications


TOTAL DOCUMENTS: 112 (FIVE YEARS: 60)
H-INDEX: 15 (FIVE YEARS: 4)
Published by Springer (BioMed Central Ltd.)
ISSN: 2058-8615

2021, Vol 6 (1)
Author(s): Mohammad Hosseini, Shiva Sharifzad

Abstract
Background: The current paper follows up on the results of an exploratory quantitative analysis that compared the publication and citation records of men and women researchers affiliated with the Faculty of Computing and Engineering at Dublin City University (DCU) in Ireland. Quantitative analysis of publications between 2013 and 2018 showed that women researchers had fewer publications, received fewer citations per person, and participated less often in international collaborations. Given the significance of publications for pursuing an academic career, we used qualitative methods to understand these differences and explore factors that, according to women researchers, have contributed to this disparity.
Methods: Sixteen women researchers from DCU’s Faculty of Computing and Engineering were interviewed using a semi-structured questionnaire. Once interviews were transcribed and anonymised, they were coded by both authors in two rounds using an inductive approach.
Results: Interviewed women believed that their opportunities for research engagement and research funding, collaborations, publications and promotions are negatively impacted by gender roles, implicit gender biases, their own high professional standards, family responsibilities, nationality and negative perceptions of their expertise and accomplishments.
Conclusions: Our study has found that women in DCU’s Faculty of Computing and Engineering face challenges that, according to those interviewed, negatively affect their engagement in various research activities, and, therefore, have contributed to their lower publication record. We suggest that while affirmative programmes aiming to correct disparities are necessary, they are more likely to improve organisational culture if they are implemented in parallel with bottom-up initiatives that engage all parties, including men researchers and non-academic partners, to inform and sensitise them about the significance of gender equity.


2021, Vol 6 (1)
Author(s): Evan Mayo-Wilson, Meredith L. Phillips, Avonne E. Connor, Kelly J. Vander Ley, Kevin Naaman, ...

Abstract
Background: The Patient-Centered Outcomes Research Institute (PCORI) is obligated to peer review and to post publicly “Final Research Reports” of all funded projects. PCORI peer review emphasizes adherence to PCORI’s Methodology Standards and principles of ethical scientific communication. During the peer review process, reviewers and editors seek to ensure that results are presented objectively and interpreted appropriately, e.g., free of spin.
Methods: Two independent raters assessed PCORI peer review feedback sent to authors. We calculated the proportion of reports in which spin was identified during peer review, and the types of spin identified. We included reports submitted by April 2018 with at least one associated journal article. The same raters then assessed whether authors addressed reviewers’ comments about spin. The raters also assessed whether spin identified during PCORI peer review was present in related journal articles.
Results: We included 64 PCORI-funded projects. Peer reviewers or editors identified spin in 55/64 (86%) submitted research reports. Types of spin included reporting bias (46/55; 84%), inappropriate interpretation (40/55; 73%), inappropriate extrapolation of results (15/55; 27%), and inappropriate attribution of causality (5/55; 9%). Authors addressed comments about spin related to 47/55 (85%) of the reports. Of 110 associated journal articles, PCORI comments about spin were potentially applicable to 44/110 (40%) articles, of which 27/44 (61%) contained the same spin that was identified in the PCORI research report. The proportion of articles with spin was similar for articles accepted before and after PCORI peer review (63% vs 58%).
Discussion: Just as spin is common in journal articles and press releases, we found that most reports submitted to PCORI included spin. While most spin was mitigated during the funder’s peer review process, we found no evidence that review of PCORI reports influenced spin in journal articles. Funders could explore interventions aimed at reducing spin in published articles of studies they support.
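
The spin-type percentages above are simple ratios of the counts reported in the abstract. For illustration only, a minimal Python sketch that reproduces them:

    # Counts taken from the abstract: number of reports (out of the 55 with any
    # spin) in which each type of spin was identified.
    spin_counts = {
        "reporting bias": 46,
        "inappropriate interpretation": 40,
        "inappropriate extrapolation of results": 15,
        "inappropriate attribution of causality": 5,
    }
    reports_with_spin = 55

    for spin_type, count in spin_counts.items():
        # e.g. 46/55 -> 84%
        print(f"{spin_type}: {count}/{reports_with_spin} = {count / reports_with_spin:.0%}")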


2021, Vol 6 (1)
Author(s): Veli-Matti Karhulahti, Hans-Joachim Backe

Abstract
Background: Open peer review practices are increasing in medicine and the life sciences, but in the social sciences and humanities (SSH) they are still rare. We aimed to map out how editors of respected SSH journals perceive open peer review, how they balance policy, ethics, and pragmatism in the review processes they oversee, and how they view their own power in the process.
Methods: We conducted 12 pre-registered semi-structured interviews with editors of respected SSH journals. Interviews consisted of 21 questions and lasted an average of 67 min. Interviews were transcribed, descriptively coded, and organized into code families.
Results: SSH editors saw the benefits of anonymized peer review as outweighing those of open peer review. They considered anonymized peer review the “gold standard” that authors and editors are expected to follow to respect institutional policies; moreover, anonymized review was also perceived as ethically superior because of the protection it provides, and as more pragmatic because it eases the search for reviewers. Finally, editors acknowledged their power in the publication process and reported strategies for keeping their work as unbiased as possible.
Conclusions: Editors of SSH journals preferred the benefits of anonymized peer review over those of open peer review and acknowledged the power they hold in a publication process in which authors are almost completely disclosed to editorial bodies. We recommend that journals communicate the transparency elements of their manuscript review processes by listing all bodies that contributed to the decision at every review stage.


2021, Vol 6 (1)
Author(s): Balazs Aczel, Barnabas Szaszi, Alex O. Holcombe

Abstract
Background: The amount and value of researchers’ peer review work is critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered.
Methods: Using publicly available data, we provide an estimate of researchers’ time and the salary-based contribution to the journal peer review system.
Results: We found that the total time reviewers globally worked on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based reviewers, close to 400 million USD.
Conclusions: By design, our results are very likely to be underestimates, as they reflect only a portion of the total number of journals worldwide. The numbers highlight the enormous amount of work and time that researchers provide to the publication system, and the importance of considering alternative ways of structuring, and paying for, peer review. We foster this process by discussing some alternative models that aim to boost the benefits of peer review, thus improving its cost-benefit ratio.
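
The estimate described above is essentially a back-of-envelope multiplication: number of reviews × hours per review for time, and hours × hourly salary for monetary value. A minimal Python sketch of that calculation; the inputs below are illustrative placeholders, not the figures used by the authors (their data are publicly available):

    def review_time_and_value(n_reviews, hours_per_review, hourly_salary_usd):
        """Return total hours, a calendar-year equivalent, and salary-based value in USD."""
        total_hours = n_reviews * hours_per_review
        total_years = total_hours / (24 * 365)       # one possible hours-to-years conversion
        value_usd = total_hours * hourly_salary_usd  # salary-based valuation of that time
        return total_hours, total_years, value_usd

    # Placeholder inputs: 20 million reviews, 6 hours each, valued at an assumed 50 USD/hour.
    hours, years, usd = review_time_and_value(20_000_000, 6, 50)
    print(f"{hours:,.0f} hours ≈ {years:,.0f} years; salary-based value ≈ {usd / 1e9:.1f} billion USD")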


2021, Vol 6 (1)
Author(s): Jan-Ole Hesselberg, Knut Inge Fostervold, Pål Ulleberg, Ida Svege

Abstract
Background: Vast sums are distributed based on grant peer review, but studies show that interrater reliability is often low. In this study, we tested the effect of receiving two short individual feedback reports compared to one short general feedback report on the agreement between reviewers.
Methods: A total of 42 reviewers at the Norwegian Foundation Dam were randomly assigned to receive either a general feedback report or an individual feedback report. The general feedback group received one report before the start of the reviews that contained general information about the previous call in which the reviewers participated. In the individual feedback group, the reviewers received two reports, one before the review period (based on the previous call) and one during the period (based on the current call). In the individual feedback group, the reviewers were presented with detailed information on their scoring compared with the review committee as a whole, both before and during the review period. The main outcomes were the proportion of agreement in the eligibility assessment and the average difference in scores between pairs of reviewers assessing the same proposal. The outcomes were measured in 2017 and after the feedback was provided in 2018.
Results: A total of 2398 paired reviews were included in the analysis. There was a significant difference between the two groups in the proportion of absolute agreement on whether the proposal was eligible for the funding programme, with the general feedback group demonstrating a higher rate of agreement. There was no difference between the two groups in terms of the average score difference. However, the agreement regarding the proposal score remained critically low for both groups.
Conclusions: We did not observe changes in proposal score agreement between 2017 and 2018 in reviewers receiving different feedback. The low levels of agreement remain a major concern in grant peer review, and research to identify contributing factors as well as the development and testing of interventions to increase agreement rates are still needed.
Trial registration: The study was preregistered at OSF.io/n4fq3.
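
The two main outcomes named above are straightforward to compute from paired reviews. A minimal Python sketch, using a hypothetical handful of reviewer pairs rather than the study's data:

    # Each tuple: (reviewer A says eligible?, reviewer B says eligible?, A's score, B's score).
    # These example pairs are hypothetical.
    pairs = [
        (True,  True,  4.5, 5.0),
        (True,  False, 3.0, 5.5),
        (False, False, 2.0, 2.5),
    ]

    # Proportion of pairs in absolute agreement on eligibility.
    eligibility_agreement = sum(a == b for a, b, _, _ in pairs) / len(pairs)

    # Average absolute difference in proposal scores within pairs.
    mean_score_diff = sum(abs(sa - sb) for _, _, sa, sb in pairs) / len(pairs)

    print(f"Agreement on eligibility: {eligibility_agreement:.2f}")
    print(f"Mean score difference:    {mean_score_diff:.2f}")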


2021, Vol 6 (1)
Author(s): Joanna Diong, Cynthia M. Kroeger, Katherine J. Reynolds, Adrian Barnett, Lisa A. Bero

Abstract
Background: Australian health and medical research funders support substantial research efforts, and incentives within grant funding schemes influence researcher behaviour. We aimed to determine to what extent Australian health and medical funders incentivise responsible research practices.
Methods: We conducted an audit of instructions from research grant and fellowship schemes. Eight national research grants and fellowships were purposively sampled to select schemes that awarded the largest amount of funds. The funding scheme instructions were assessed against 9 criteria to determine to what extent they incentivised these responsible research and reporting practices: (1) publicly register study protocols before starting data collection, (2) register analysis protocols before starting data analysis, (3) make study data openly available, (4) make analysis code openly available, (5) make research materials openly available, (6) discourage use of publication metrics, (7) conduct quality research (e.g. adhere to reporting guidelines), (8) collaborate with a statistician, and (9) adhere to other responsible research practices. Each criterion was answered using one of the following responses: “Instructed”, “Encouraged”, or “No mention”.
Results: Across the 8 schemes from 5 funders, applicants were instructed or encouraged to address a median of 4 (range 0 to 5) of the 9 criteria. Three criteria received no mention in any scheme (register analysis protocols, make analysis code open, collaborate with a statistician). Importantly, most incentives did not seem strong as applicants were only instructed to register study protocols, discourage use of publication metrics and conduct quality research. Other criteria were encouraged but were not required.
Conclusions: Funders could strengthen the incentives for responsible research practices by requiring grant and fellowship applicants to implement these practices in their proposals. Administering institutions could be required to implement these practices to be eligible for funding. Strongly rewarding researchers for implementing robust research practices could lead to sustained improvements in the quality of health and medical research.
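
The headline result (a median of 4 of the 9 criteria addressed per scheme) is a simple tally over a ratings table. A minimal Python sketch with a hypothetical ratings table, not the study's data:

    from statistics import median

    # Hypothetical ratings: each scheme gets one of "Instructed", "Encouraged", or
    # "No mention" for each of the 9 criteria (in the order numbered in the abstract).
    ratings = {
        "Scheme A": ["Instructed", "No mention", "Encouraged", "No mention", "No mention",
                     "Instructed", "Instructed", "No mention", "Encouraged"],
        "Scheme B": ["Instructed", "No mention", "No mention", "No mention", "No mention",
                     "Encouraged", "Instructed", "No mention", "No mention"],
    }

    # Number of criteria per scheme that are instructed or encouraged.
    addressed = {scheme: sum(r != "No mention" for r in rs) for scheme, rs in ratings.items()}
    print(addressed, "median:", median(addressed.values()))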


2021, Vol 6 (1)
Author(s): Evan Mayo-Wilson, Sean Grant, Lauren Supplee, Sina Kianersi, Afsah Amin, ...

Abstract
Background: The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.
Methods: We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal’s policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.
Discussion: The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation. Research materials are available on the Open Science Framework: https://osf.io/txyr3/.
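
As a rough illustration of the metric named above: the TOP Factor scores how far a journal's policies implement each modular TOP standard (levels 0–3) and sums those levels. The standard names and ratings below are placeholders, and the exact scoring rules are an assumption here; the TRUST rating instruments at https://osf.io/txyr3/ are the authoritative source:

    # Hypothetical policy ratings (0 = not implemented ... 3 = strictest level),
    # one entry per modular TOP standard; names and values are illustrative only.
    journal_policy_levels = {
        "Data citation": 1,
        "Data transparency": 2,
        "Analysis code transparency": 2,
        "Materials transparency": 1,
        "Design and analysis reporting": 2,
        "Study preregistration": 1,
        "Analysis plan preregistration": 0,
        "Replication": 1,
    }

    # Sum of implementation levels across standards.
    top_factor = sum(journal_policy_levels.values())
    print("TOP Factor (from policy ratings):", top_factor)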


2021, Vol 6 (1)
Author(s): Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C. Gøtzsche

Abstract
Background: Healthcare professionals are exposed to advertisements for prescription drugs in medical journals. Such advertisements may increase prescriptions of new drugs at the expense of older treatments even when the new drugs have no added benefits, are more harmful, and are more expensive. The publication of medical advertisements therefore raises ethical questions related to editorial integrity.
Methods: We conducted a descriptive cross-sectional study of all medical advertisements published in the Journal of the Danish Medical Association in 2015. Drugs advertised 6 times or more were compared with older comparators on: (1) comparative evidence of added benefit; (2) cost per Defined Daily Dose; (3) regulatory safety announcements; and (4) completed and ongoing post-marketing studies 3 years after advertising.
Results: We found 158 medical advertisements for 35 prescription drugs published in 24 issues during 2015, with a median of 7 advertisements per issue (range 0 to 11). Four drug groups and 5 single drugs were advertised 6 times or more, for a total of 10 indications, and we made 14 comparisons with older treatments. We found: (1) ‘no added benefit’ in 4 (29%) of 14 comparisons, ‘uncertain benefits’ in 7 (50%), and ‘no evidence’ in 3 (21%) comparisons; in no comparison did we find evidence of ‘substantial added benefit’ for the new drug; (2) advertised drugs were 2 to 196 times (median 6) more expensive per Defined Daily Dose; (3) 11 safety announcements were issued for five advertised drugs, compared to one announcement for one comparator drug; (4) 20 post-marketing studies (7 completed, 13 ongoing) were requested for the advertised drugs versus 10 studies (4 completed, 6 ongoing) for the comparator drugs, and 7 studies (2 completed, 5 ongoing) assessed both an advertised and a comparator drug at 3-year follow-up.
Conclusions and relevance: In this cross-sectional study of medical advertisements published in the Journal of the Danish Medical Association during 2015, the most advertised drugs did not have documented substantial added benefits over older treatments, whereas they were substantially more expensive. From January 2021, the Journal of the Danish Medical Association no longer publishes medical advertisements.
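
The cost comparison above reduces to a ratio of prices per Defined Daily Dose (DDD). A minimal Python sketch; the prices are hypothetical placeholders, and only the resulting summary (ratios between 2 and 196, median 6) mirrors what the abstract reports:

    from statistics import median

    # Hypothetical (advertised drug DDD cost, comparator DDD cost) pairs, same currency.
    comparisons = [(30.0, 5.0), (12.0, 6.0), (98.0, 0.5)]

    # How many times more expensive the advertised drug is per DDD.
    ratios = [advertised / comparator for advertised, comparator in comparisons]
    print("cost ratios:", [round(r, 1) for r in ratios], "median:", round(median(ratios), 1))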


2021, Vol 6 (1)
Author(s): Tamarinde Haven, Joeri Tijdink, Brian Martinson, Lex Bouter, Frans Oort

Abstract
Background: Concerns about research misbehavior in academic science have sparked interest in the factors that may explain it. Often three clusters of factors are distinguished: individual factors, climate factors and publication factors. Our research question was: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors?
Methods: From May 2017 until July 2017, we conducted a survey study among academic researchers in Amsterdam. The survey included three measurement instruments whose individual results we have reported previously; here, we integrate these findings.
Results: One thousand two hundred ninety-eight researchers completed the survey (response rate: 17%). Results showed that individual, climate and publication factors combined explained 34% of the variance in perceived frequency of research misbehavior. Individual factors explained 7%, climate factors explained 22% and publication factors 16%.
Conclusions: Our results suggest that perceptions of the research climate play a substantial role in explaining variance in research misbehavior. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.
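
The percentages of explained variance reported above come from regression models in which clusters of factors predict perceived frequency of research misbehavior. A minimal sketch of that kind of calculation on simulated data (the data and the resulting R² values below are illustrative, not the study's):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    individual = rng.normal(size=(n, 3))    # stand-ins for individual factors
    climate = rng.normal(size=(n, 3))       # stand-ins for research climate factors
    publication = rng.normal(size=(n, 3))   # stand-ins for publication factors
    y = climate @ np.array([0.4, 0.3, 0.2]) + publication @ np.array([0.3, 0.2, 0.1]) \
        + rng.normal(size=n)                # simulated outcome

    def r_squared(X, y):
        """Proportion of variance in y explained by a linear model on X."""
        X1 = np.column_stack([np.ones(len(y)), X])    # add intercept column
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid.var() / y.var()

    blocks = {
        "individual": individual,
        "climate": climate,
        "publication": publication,
        "combined": np.hstack([individual, climate, publication]),
    }
    for name, X in blocks.items():
        print(f"{name:11s} R^2 = {r_squared(X, y):.2f}")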

