Re-evaluation of solutions to the problem of unprofessionalism in peer review

2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Travis G. Gerwing ◽  
Alyssa M. Allen Gerwing ◽  
Chi-Yeung Choi ◽  
Stephanie Avery-Gomm ◽  
Jeff C. Clements ◽  
...  

Abstract: Our recent paper (10.1186/s41073-020-00096-x) reported that 43% of reviewer comment sets (n=1491) shared with authors contained at least one unprofessional comment or an incomplete, inaccurate, or unsubstantiated critique (IIUC). Publication of this work sparked an online (i.e., Twitter, Instagram, Facebook, and Reddit) conversation about professionalism in peer review. We collected and analyzed these social media comments (96 comments from July 24th to September 3rd, 2020), as they offered real-time responses to our work and provided insight into the views held by commenters and potential peer reviewers that would be difficult to quantify using existing empirical tools. Overall, 75% of comments were positive: 59% were supportive and 16% shared similar personal experiences. However, a subset of negative comments emerged (22% of comments were negative, and 6% were unsubstantiated critiques of our methodology) that offered potential insight into why unprofessional comments are made during the peer-review process. These comments were classified into three main themes: (1) forced niceness will adversely impact the peer-review process and allow publication of poor-quality science (5% of online comments); (2) comments can be dismissed as inoffensive to others simply because the reader did not find them personally offensive (6%); and (3) authors bring unprofessional comments upon themselves by submitting substandard work (5%). Here, we argue against these themes as justifications for directing unprofessional comments at authors during the peer-review process. We argue that it is possible to be both critical and professional, and that no author deserves to be the recipient of demeaning ad hominem attacks, regardless of supposed provocation. Suggesting otherwise only serves to propagate a toxic culture within peer review. While we previously postulated that establishing a peer-reviewer code of conduct could help improve the peer-review system, we now posit that priority should be given to repairing the negative cultural zeitgeist that exists in peer review.

Author(s):  
Ann Blair Kennedy, LMT, BCTMB, DrPH

Peer review is a mainstay of scientific publishing and, while peer reviewers and scientists report satisfaction with the process, peer review has not been without criticism. Within this editorial, the peer review process at the IJTMB is defined and explained. Further, seven steps are identified by the editors as a way to improve the efficiency of the peer review and publication process. Those seven steps are: 1) Ask authors to submit possible reviewers; 2) Ask reviewers to update profiles; 3) Ask reviewers to “refer a friend”; 4) Thank reviewers regularly; 5) Ask published authors to review for the Journal; 6) Reduce the length of time to accept a peer review invitation; and 7) Reduce the requested time to complete a peer review. We believe these small requests and changes can have a big effect on the quality of reviews and the speed with which manuscripts are published. This editorial also presents instructions for completing peer review profiles. Finally, we more formally recognize and thank our peer reviewers from 2018–2020.


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 683 ◽  
Author(s):  
Marco Giordan ◽  
Attila Csikasz-Nagy ◽  
Andrew M. Collings ◽  
Federico Vaggi

Background: Publishing in scientific journals is one of the most important ways in which scientists disseminate research to their peers and to the wider public. Pre-publication peer review underpins this process, but peer review is subject to various criticisms and is under pressure from growth in the number of scientific publications. Methods: Here we examine an element of the editorial process at eLife, in which the Reviewing Editor usually serves as one of the referees, to see what effect this has on decision times, decision type, and the number of citations. We analysed a dataset of 8,905 research submissions to eLife since June 2012, of which 2,750 were sent for peer review, using R and Python to perform the statistical analysis. Results: The Reviewing Editor serving as one of the peer reviewers results in faster decision times on average, with the time to final decision ten days faster for accepted submissions (n=1,405) and five days faster for papers that were rejected after peer review (n=1,099). There was no effect on whether submissions were accepted or rejected, and a very small (but significant) effect on citation rates for published articles where the Reviewing Editor served as one of the peer reviewers. Conclusions: An important aspect of eLife's peer-review process is shown to be effective, given that decision times are faster when the Reviewing Editor serves as a reviewer. Other journals hoping to improve decision times could consider adopting a similar approach.
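The abstract names R and Python but does not include the analysis itself. As a minimal sketch of the kind of decision-time comparison described, assuming a hypothetical CSV export with editor_as_reviewer and days_to_final_decision columns (not eLife's actual code or data layout):

```python
# Hedged sketch (hypothetical file and column names): compare decision times
# for submissions where the Reviewing Editor did vs. did not serve as one of
# the peer reviewers.
import pandas as pd
from scipy import stats

df = pd.read_csv("elife_submissions.csv")  # hypothetical export of the dataset

# Assume a boolean column flagging whether the Reviewing Editor reviewed.
as_reviewer = df.loc[df["editor_as_reviewer"], "days_to_final_decision"]
not_reviewer = df.loc[~df["editor_as_reviewer"], "days_to_final_decision"]

print(f"median, editor as reviewer:  {as_reviewer.median():.1f} days")
print(f"median, editor not reviewer: {not_reviewer.median():.1f} days")

# Decision times are typically right-skewed, so a rank-based test is safer
# here than a t-test.
u, p = stats.mannwhitneyu(as_reviewer, not_reviewer, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3g}")
```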


2019 ◽  
Author(s):  
Malte Elson ◽  
Markus Huff ◽  
Sonja Utz

Peer review has become the gold standard in scientific publishing as a selection method and a refinement scheme for research reports. However, despite its pervasiveness and conferred importance, relatively little empirical research has been conducted to document its effectiveness. Further, there is evidence that factors other than a submission’s merits can substantially influence peer reviewers’ evaluations. We report the results of a metascientific field experiment on the effect of the originality of a study and the statistical significance of its primary outcome on reviewers’ evaluations. The general aim of this experiment, which was carried out in the peer-review process for a conference, was to demonstrate the feasibility and value of metascientific experiments on the peer-review process and thereby encourage research that will lead to understanding its mechanisms and determinants, effectively contextualizing it in psychological theories of various biases, and developing practical procedures to increase its utility.
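The experiment crosses two factors: a study's originality and the statistical significance of its primary outcome. As an illustration only, not the authors' actual analysis, such a 2x2 design could be probed with a two-way ANOVA on reviewers' evaluations; the file and column names below are hypothetical:

```python
# Illustrative sketch (hypothetical data layout): a 2x2 between-subjects
# design crossing a submission's originality with the statistical
# significance of its primary outcome, with reviewers' numeric evaluations
# as the dependent variable.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

ratings = pd.read_csv("reviewer_ratings.csv")  # hypothetical file
# expected columns: rating (numeric), original ("yes"/"no"),
#                   significant ("yes"/"no")

model = smf.ols("rating ~ C(original) * C(significant)", data=ratings).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects plus their interaction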


2019 ◽  
Author(s):  
Damian Pattinson

In recent years, funders have increased their support for early sharing of biomedical research through the use of preprints. For most, such as the cOAlition S group of funders (ASAPbio 2019) and the Gates Foundation, this takes the form of active encouragement, while for others, it is mandated. But despite this encouragement, few authors routinely deposit their work as a preprint before submitting to a journal. Some journals have started offering authors the option of posting their work early, at the point at which it is submitted for review. These include PLOS, which offers a link to bioRxiv; the Cell journals, which offer SSRN posting through ‘Sneak Peek’; and Nature Communications, which offers posting to any preprint server and a link from the journal page called ‘Under Consideration’. Uptake has ranged from 3% for the Nature pilot to 18% for PLOS (The Official PLOS Blog 2018). In order to encourage more researchers to post their work early, we have been offering authors who submit to BMC Series titles the opportunity to post their work as a preprint on Research Square, a new platform that lets authors share and improve their research. To encourage participation, authors who opt in are offered greater control and transparency over the peer review process. First, they are given a detailed peer review timeline that updates in real time whenever an event occurs on their manuscript (reviewer invited, reviewer accepts, etc.). Second, they are encouraged to share their preprint with colleagues, who are able to post comments on the paper; these comments are sent to the editor to inform the decision. Third, authors can suggest potential peer reviewers; these recommendations are also passed on to the editor to vet and invite. Together, these incentives have had a positive impact on authors choosing to post a preprint. Among the journals that offer this service, the average opt-in rate is 40%. This translates to over 3,000 manuscripts (as of July 2019) posted to Research Square since the launch of the service in October 2018. In this talk I will demonstrate the functionality of Research Square and provide demographic and discipline data on which areas are most and least likely to post.


2020 ◽  
Vol 125 (1) ◽  
pp. 115-133 ◽  
Author(s):  
Maciej J. Mrowinski ◽  
Agata Fronczak ◽  
Piotr Fronczak ◽  
Olgica Nedic ◽  
Aleksandar Dekanski

Abstract: In this paper, we provide insight into the editorial process as seen from the perspective of journal editors. We study a dataset obtained from the Journal of the Serbian Chemical Society, which contains information about submitted and rejected manuscripts, in order to find differences between local (Serbian) and external (non-Serbian) submissions. We show that external submissions (mainly from India, Iran and China) constitute the majority of all submissions, while local submissions are in the minority. Most submissions are rejected for technical reasons (e.g. wrong manuscript formatting or problems with images), and many users resubmit the same paper without making the necessary corrections. Manuscripts with just one author are less likely to pass the technical check, which can be attributed to missing metadata. Articles from local authors are better prepared and require fewer resubmissions on average before they are accepted for peer review. The peer review process for local submissions takes less time than for external papers, and local submissions are more likely to be accepted for publication. Also, while there are more men than women among external users, this trend is reversed for local users. In the combined group of local and external users, articles submitted by women are more likely to be published than articles submitted by men.
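As a hedged sketch of one comparison reported above (whether local and external submissions differ in passing the initial technical check), assuming a hypothetical export of the editorial records with origin and passed_technical_check fields:

```python
# Hedged sketch (hypothetical export and field names): do local and external
# submissions differ in how often they pass the initial technical check?
import pandas as pd
from scipy.stats import chi2_contingency

subs = pd.read_csv("jscs_submissions.csv")  # hypothetical editorial records
# expected columns: origin ("local"/"external"),
#                   passed_technical_check (True/False)

table = pd.crosstab(subs["origin"], subs["passed_technical_check"])
print(table)

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2(df={dof}) = {chi2:.2f}, p = {p:.3g}")
```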


2020 ◽  
Author(s):  
Bård Smedsrød ◽  
Erik Lieungh

In this episode, Bård Smedsrød, professor at UiT The Arctic University of Norway, gives us insight into peer review. How does the system work today, and what is problematic about it? Smedsrød also offers some solutions and encourages universities to be much more involved in the peer review process. The host of this episode is Erik Lieungh. You can also read Bård's latest paper on peer reviewing: Peer reviewing: a private affair between the individual researcher and the publishing houses, or responsibility of the university? This episode was first published 2 November 2018.


2017 ◽  
Vol 33 (1) ◽  
pp. 129-144 ◽  
Author(s):  
Jay C. Thibodeau ◽  
L. Tyler Williams ◽  
Annie L. Witte

Abstract: In the new research frontier of data availability, this study develops guidelines to aid accounting academicians as they seek to evidence data integrity proactively in the peer-review process. To that end, we explore data integrity issues associated with two emerging data streams that are gaining prominence in the accounting literature: online labor markets and social media sources. We provide rich detail surrounding academic thought about these data platforms through interview data collected from a sample of former senior journal editors and survey data collected from a sample of peer reviewers. We then propound a set of best practice considerations that are designed to mitigate the perceived risks identified by our assessment.


2018 ◽  
Vol 115 (12) ◽  
pp. 2952-2957 ◽  
Author(s):  
Elizabeth L. Pier ◽  
Markus Brauer ◽  
Amarette Filut ◽  
Anna Kaatz ◽  
Joshua Raclaw ◽  
...  

Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers’ evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers’ ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers “translated” a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.
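One standard way to quantify the (lack of) agreement described above is an intraclass correlation across reviewers rating the same applications. A minimal sketch, assuming a hypothetical long-format file and, for simplicity, a fully crossed design (the study's actual partially nested design and analysis may differ):

```python
# Minimal sketch (hypothetical file and column names; assumes every reviewer
# rated every application): quantify inter-reviewer agreement with an
# intraclass correlation coefficient.
import pandas as pd
import pingouin as pg

scores = pd.read_csv("mock_review_scores.csv")  # hypothetical file
# expected columns: application, reviewer, rating

icc = pg.intraclass_corr(data=scores, targets="application",
                         raters="reviewer", ratings="rating")
print(icc[["Type", "ICC", "CI95%"]])  # ICC near zero indicates no agreement
```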

