Reviewers' Decision to Sign Reviews is Related to Their Recommendation

2021 ◽  
Vol 5 ◽  
Author(s):  
Nino Van Sambeek ◽  
Daniel Lakens

Surveys indicate that researchers generally have a positive attitude towards open peer review when this consists of making reviews available alongside published articles. Researchers are more negative about revealing the identity of reviewers. They worry reviewers will be less likely to express criticism if their identity is known to authors. Experiments suggest that reviewers are somewhat less likely to recommend rejection when they are told their identity will be communicated to authors than when they will remain anonymous. One recent study revealed that reviewers in five journals who voluntarily signed their reviews gave more positive recommendations than those who did not sign. We replicate and extend this finding by analyzing 12,010 open reviews in PeerJ and 4,188 reviews in Royal Society Open Science, where reviewers can voluntarily sign their reviews. These results, based on behavioral data from real peer reviews across a wide range of scientific disciplines, demonstrate convincingly that reviewers’ decision to sign is related to their recommendation: the proportion of signed reviews was higher for more positive recommendations than for more negative ones. We also share all 23,649 text-mined reviews as the raw data underlying our results, which can be reused by researchers interested in peer review.
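As a rough illustration of the analysis described above (not the authors' actual code), the proportion of signed reviews per recommendation category can be computed by grouping the text-mined reviews on the recommendation and averaging a signed indicator. The column names below are illustrative assumptions.

```python
# Hypothetical sketch: share of signed reviews per recommendation category.
# The schema ("recommendation", "signed") is an illustrative assumption,
# not the authors' actual data format.
import pandas as pd

reviews = pd.DataFrame({
    "recommendation": ["accept", "accept", "major revision", "reject", "reject"],
    "signed":         [True,     False,    True,             False,    False],
})

signed_share = (
    reviews.groupby("recommendation")["signed"]
    .mean()                      # mean of a boolean column = proportion signed
    .sort_values(ascending=False)
)
print(signed_share)
```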


Publications ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 65 ◽  
Author(s):  
Marcel Knöchelmann

Open science refers both to the practices and norms of more open and transparent communication and research in scientific disciplines and to the discourse on these practices and norms. There is no such discourse dedicated to the humanities. Though the humanities appear less coherent as a cluster of scholarship than the sciences, they do share unique characteristics which lead to distinct scholarly communication and research practices. A discourse on making these practices more open and transparent needs to take account of these characteristics. The prevalent scientific perspective in the discourse on more open practices does not do so, which confirms that the discourse’s name, open science, indeed excludes the humanities; talking about open science in the humanities is therefore incoherent. In this paper, I argue that there needs to be a dedicated discourse for more open research and communication practices in the humanities, one that integrates several elements currently fragmented into smaller, unconnected discourses (such as on open access, preprints, or peer review). I discuss three essential elements of open science (preprints, open peer review practices, and liberal open licences) in the realm of the humanities to demonstrate why a dedicated open humanities discourse is required.


2020 ◽  
Vol 125 (2) ◽  
pp. 1033-1051
Author(s):  
Dietmar Wolfram ◽  
Peiling Wang ◽  
Adam Hembree ◽  
Hyoungjoo Park

Open peer review (OPR), where review reports and reviewers’ identities are published alongside the articles, represents one of the last aspects of the open science movement to be widely embraced, although its adoption has been growing since the turn of the century. This study provides the first comprehensive investigation of OPR adoption, its early adopters and the implementation approaches used. Current bibliographic databases do not systematically index OPR journals, nor do the OPR journals clearly state their policies on open identities and open reports. Using various methods, we identified 617 OPR journals that published at least one article with open identities or open reports as of 2019 and analyzed their wide-ranging implementations to derive emerging OPR practices. The findings suggest that: (1) there has been a steady growth in OPR adoption since 2001, when 38 journals initially adopted OPR, with more rapid growth since 2017; (2) OPR adoption is most prevalent in medical and scientific disciplines (79.9%); (3) five publishers are responsible for 81% of the identified OPR journals; (4) early adopter publishers have implemented OPR in different ways, resulting in different levels of transparency. Across the variations in OPR implementations, two important factors define the degree of transparency: open identities and open reports. Open identities may include reviewer names and affiliation as well as credentials; open reports may include timestamped review histories consisting of referee reports and author rebuttals or a letter from the editor integrating reviewers’ comments. When and where open reports can be accessed are also important factors indicating the OPR transparency level. Publishers of optional OPR journals should add metric data in their annual status reports.
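As a minimal illustration of the two factors the study says define the degree of transparency, the sketch below maps an implementation's open-identities and open-reports choices to a descriptive label; the labels are illustrative shorthand, not the paper's taxonomy.

```python
# Illustrative sketch only: classifying an OPR implementation by the two
# transparency factors highlighted in the study (open identities, open reports).
# The labels are illustrative shorthand, not the paper's categories.
def opr_transparency(open_identities: bool, open_reports: bool) -> str:
    if open_identities and open_reports:
        return "fully open: signed reviewers and published review history"
    if open_reports:
        return "open reports only: anonymous reviewers, published history"
    if open_identities:
        return "open identities only: signed reviews, history not published"
    return "closed: conventional anonymous review"

print(opr_transparency(open_identities=False, open_reports=True))
```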


2019 ◽  
Author(s):  
Rodolfo Jaffé

The exploitation of scientists by traditional academic publishers is widespread, as they monopolize the right to distribute scientific papers, strip authors of their own articles’ copyrights, and charge them if they wish to read papers from their peers. It is then up to scientists to free themselves (and their papers) from the tyranny of academic publishers by refusing to perform free peer review for them and by publishing open access when possible. Starved of peer reviewers, academic publishers would have nothing to publish, while subscription fees are doomed to disappear in an age of open science. This system would also create incentives to perform peer review: #Pay4Reviews


2020 ◽  
Author(s):  
Courtney K. Soderberg ◽  
Timothy M. Errington ◽  
Brian A. Nosek

Preprints increase accessibility and can speed scholarly communication if researchers view them as credible enough to read and use. Preprint services, though, do not provide the heuristic cues of a journal’s reputation, selection, and peer review processes that are often used as a guide for deciding what to read. We conducted a survey of 3,759 researchers across a wide range of disciplines to determine the importance of different cues for assessing the credibility of individual preprints and preprint services. We found that cues related to information about open science content and independent verification of author claims were rated as highly important for judging preprint credibility. As of early 2020, very few preprint services display any of these cues. By adding such cues, services may be able to help researchers better assess the credibility of preprints, enabling scholars to more confidently use preprints, thereby accelerating scientific communication and discovery.


2017 ◽  
Vol 1 (4) ◽  
pp. 60-80 ◽  
Author(s):  
Peiling Wang ◽  
Sukjin You ◽  
Rath Manasa ◽  
Dietmar Wolfram

Purpose: To understand how authors and reviewers are accepting and embracing Open Peer Review (OPR), one of the newest innovations in the Open Science movement.
Design/methodology/approach: This research collected and analyzed data from the Open Access journal PeerJ over its first three years (2013–2016). Web data were scraped, cleaned, and structured using several Web tools and programs. The structured data were imported into a relational database. Data analyses were conducted using analytical tools as well as programs developed by the researchers.
Findings: PeerJ, which supports optional OPR, has a broad international representation of authors and referees. Approximately 73.89% of articles provide full review histories. Of the articles with published review histories, 17.61% had identities of all reviewers and 52.57% had at least one signed reviewer. In total, 43.23% of all reviews were signed. The observed proportions of signed reviews have been relatively stable over the period since the Journal’s inception.
Research limitations: This research is constrained by the availability of the peer review history data. Some peer reviews were not available when the authors opted out of publishing their review histories. The anonymity of reviewers made it impossible to give an accurate count of reviewers who contributed to the review process.
Practical implications: These findings shed light on the current characteristics of OPR. Given the policy that authors are encouraged to make their articles’ review history public and referees are encouraged to sign their review reports, the three years of PeerJ review data demonstrate that there is still some reluctance by authors to make their reviews public and by reviewers to identify themselves.
Originality/value: This is the first study to closely examine PeerJ as an example of an OPR model journal. As Open Science moves further towards open research, OPR is a final and critical component. Research in this area must identify the best policies and paths towards a transparent and open peer review process for scientific communication.
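A minimal sketch, assuming a hypothetical schema, of the kind of pipeline the abstract describes: structured review-history records loaded into a relational database and queried for the share of signed reviews. Table and column names are not the study's.

```python
# Hypothetical sketch of a relational-database step: store structured review
# records, then query the share of signed reviews among published histories.
# The schema is an illustrative assumption, not the study's actual database.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE reviews (
        article_id   TEXT,
        reviewer     TEXT,     -- NULL when the reviewer stayed anonymous
        history_open INTEGER   -- 1 if the review history was published
    )
""")
con.executemany(
    "INSERT INTO reviews VALUES (?, ?, ?)",
    [("a1", "R. Smith", 1), ("a1", None, 1), ("a2", None, 1), ("a3", None, 0)],
)

signed, total = con.execute(
    "SELECT SUM(reviewer IS NOT NULL), COUNT(*) FROM reviews WHERE history_open = 1"
).fetchone()
print(f"signed share among published reviews: {signed / total:.2%}")
```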


2018 ◽  
Vol 36 (1) ◽  
pp. 38-67 ◽  
Author(s):  
Ashley Rose Mehlenbacher

The research article is a staple genre in the economy of scientific research, and although research articles have received considerable treatment in genre scholarship, little attention has been given to the important development of Registered Reports. Registered Reports are an emerging, hybrid genre that proceeds through a two-stage model of peer review. This article charts the emergence of Registered Reports and explores how this new form intervenes in the evolution of the research article genre by replacing the central topoi of novelty with methodological rigor. Specifically, I investigate this discursive and publishing phenomenon by describing current conversations about challenges in replicating research studies, the rhetorical exigence those conversations create, and how Registered Reports respond to this exigence. Then, to better understand this emerging form, I present an empirical study of the genre itself by closely examining four articles published under the Registered Report model from the journal Royal Society Open Science and then investigating the genre hybridity by examining 32 protocols (Stage 1 Registered Reports) and 77 completed reports (Stage 2 Registered Reports) from a range of journals in the life and psychological sciences. Findings from this study suggest Registered Reports mark a notable intervention in the research article genre for the life and psychological sciences, centering the reporting of science in serious methodological debates.


2021 ◽  
pp. 35-45
Author(s):  
Benjamin Michael Marshall

Across many scientific disciplines, direct replication efforts and meta-analyses have fuelled concerns about the replicability of findings. Ecology and evolution are similarly affected. Investigations into the causes of this lack of replicability have implicated a suite of research practices linked to incentives in the current publishing system. Other fields have taken great strides to counter incentives that can reward obfuscation, chiefly by championing transparency. But how prominent are pro-transparency (open science) policies in herpetology journals? We use the recently developed Transparency and Openness Promotion (TOP) Factor to assess the transparency promotion of 19 herpetology journals and compare their TOP scores to broader science. We find promotion of transparent practices currently lacking in many herpetological journals, and we encourage authors, students, editors, and publishers to redouble efforts to bring open science practices to herpetology by changing journal policy, peer review, and personal practice. We promote an array of options, developed and tested in other fields, that have been demonstrated to counter publication bias, boost research uptake, and enable more transparent science, thereby enriching herpetological research.
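For readers unfamiliar with the metric, a TOP-style journal score can be thought of as a sum of per-standard policy ratings. The sketch below aggregates such ratings; the standards shown and the 0–3 scale are simplifying assumptions, not the official TOP Factor instrument.

```python
# Hedged sketch: summing per-standard policy ratings into a TOP-style score.
# Standards and the 0-3 scale are simplifying assumptions for illustration.
journal_policies = {
    "Herpetology Journal A": {"data transparency": 1, "code transparency": 0,
                              "preregistration": 0, "replication": 1},
    "Herpetology Journal B": {"data transparency": 2, "code transparency": 1,
                              "preregistration": 1, "replication": 2},
}

for journal, standards in journal_policies.items():
    total = sum(standards.values())   # aggregate score = sum over standards
    print(f"{journal}: TOP-style score = {total}")
```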


1982 ◽  
Vol 5 (2) ◽  
pp. 187-195 ◽  
Author(s):  
Douglas P. Peters ◽  
Stephen J. Ceci

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables. The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices. With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

