What is open peer review? A systematic review

F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 588 ◽  
Author(s):  
Tony Ross-Hellauer

Background: “Open peer review” (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and implementations. The literature reflects this, with a myriad of overlapping and often contradictory definitions. While the term is used by some to refer to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only “invited experts” are able to comment. For still others, it includes a variety of combinations of these and other novel methods. Methods: Recognising the absence of a consensus view on what open peer review is, this article undertakes a systematic review of definitions of “open peer review” or “open review”, to create a corpus of 122 definitions. These definitions are then systematically analysed to build a coherent typology of the many different innovations in peer review signified by the term, and hence provide the precise technical definition currently lacking. Results: This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Quantifying definitions in this way allows us to accurately portray exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers a total of 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature. Conclusions: Based on this work, I propose a pragmatic definition of open peer review as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the ethos of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.


BMJ Open ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. e035604
Author(s):  
Cecilia Superchi ◽  
Darko Hren ◽  
David Blanco ◽  
Roser Rius ◽  
Alessandro Recchioni ◽  
...  

Objective: To develop a tool to assess the quality of peer-review reports in biomedical research. Methods: We conducted an online survey intended for biomedical editors and authors. The survey aimed to (1) determine if participants endorse the proposed definition of peer-review report quality; (2) identify the most important items to include in the final version of the tool and (3) identify any missing items. Participants rated on a 5-point scale whether an item should be included in the tool and were also invited to comment on the importance and wording of each item. Principal component analysis was performed to examine item redundancy, and a general inductive approach was used for qualitative data analysis. Results: A total of 446 biomedical editors and authors participated in the survey. Participants were mainly male (65.9%), middle-aged (mean=50.3, SD=13) and held PhD degrees (56.4%). The majority of participants (84%) agreed with the definition of peer-review report quality we proposed. The 20 initial items included in the survey questionnaire were generally highly rated, with mean scores ranging from 3.38 (SD=1.13) to 4.60 (SD=0.69) (scale 1–5). Participants suggested 13 items that were not included in the initial list of items. A steering committee composed of five members with different expertise discussed the selection of items to include in the final version of the tool. The final checklist includes 14 items encompassed in five domains (Importance of the study, Robustness of the study methods, Interpretation and discussion of the study results, Reporting and transparency of the manuscript, Characteristics of peer reviewer’s comments). Conclusion: The Assessment of Review reports with a Checklist Available to eDItors and Authors tool could be used regularly by editors to evaluate reviewers’ work, and also as an outcome when evaluating interventions to improve the peer-review process.
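As an illustration of the principal component analysis step mentioned above, the following is a minimal Python sketch of how item redundancy might be examined from survey ratings; the file name, column layout and use of scikit-learn are assumptions for illustration, not the authors' actual code.

```python
# Hypothetical sketch: flag potentially redundant survey items via PCA.
# Items that load heavily on the same component may measure overlapping constructs.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Assumed layout: one row per respondent, one column per item, ratings on a 1-5 scale.
ratings = pd.read_csv("item_ratings.csv")  # hypothetical file

X = StandardScaler().fit_transform(ratings.values)
pca = PCA().fit(X)

# Loadings: correlations between the original items and the principal components.
loadings = pd.DataFrame(
    pca.components_.T * np.sqrt(pca.explained_variance_),
    index=ratings.columns,
    columns=[f"PC{i + 1}" for i in range(pca.n_components_)],
)
print(pca.explained_variance_ratio_.round(2))  # variance explained per component
print(loadings.iloc[:, :3].round(2))           # loadings on the first few components
```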


2017 ◽  
Vol 1 (4) ◽  
pp. 60-80 ◽  
Author(s):  
Peiling Wang ◽  
Sukjin You ◽  
Rath Manasa ◽  
Dietmar Wolfram

Purpose: To understand how authors and reviewers are accepting and embracing Open Peer Review (OPR), one of the newest innovations in the Open Science movement. Design/methodology/approach: This research collected and analyzed data from the Open Access journal PeerJ over its first three years (2013–2016). Web data were scraped, cleaned, and structured using several Web tools and programs. The structured data were imported into a relational database. Data analyses were conducted using analytical tools as well as programs developed by the researchers. Findings: PeerJ, which supports optional OPR, has a broad international representation of authors and referees. Approximately 73.89% of articles provide full review histories. Of the articles with published review histories, 17.61% had identities of all reviewers and 52.57% had at least one signed reviewer. In total, 43.23% of all reviews were signed. The observed proportions of signed reviews have been relatively stable over the period since the journal’s inception. Research limitations: This research is constrained by the availability of the peer review history data. Some peer reviews were not available when the authors opted out of publishing their review histories. The anonymity of reviewers made it impossible to give an accurate count of reviewers who contributed to the review process. Practical implications: These findings shed light on the current characteristics of OPR. Given the policy that authors are encouraged to make their articles’ review history public and referees are encouraged to sign their review reports, the three years of PeerJ review data demonstrate that there is still some reluctance by authors to make their reviews public and by reviewers to identify themselves. Originality/value: This is the first study to closely examine PeerJ as an example of an OPR model journal. As Open Science moves further towards open research, OPR is a final and critical component. Research in this area must identify the best policies and paths towards a transparent and open peer review process for scientific communication.
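The abstract above reports proportions derived from scraped PeerJ review histories. The sketch below shows, under an assumed (hypothetical) table layout, how such proportions could be computed once the data sit in a relational database; it is not the researchers' actual pipeline.

```python
# Hypothetical sketch: derive OPR proportions from a relational store of scraped PeerJ data.
import sqlite3
import pandas as pd

conn = sqlite3.connect("peerj_reviews.db")  # hypothetical database
articles = pd.read_sql("SELECT article_id, history_public FROM articles", conn)
reviews = pd.read_sql("SELECT article_id, reviewer_signed FROM reviews", conn)

# Share of articles that published their full review history.
pct_public = articles["history_public"].mean() * 100

# Among articles with public histories: all reviewers signed / at least one signed.
public_ids = articles.loc[articles["history_public"] == 1, "article_id"]
pub_reviews = reviews[reviews["article_id"].isin(public_ids)]
per_article = pub_reviews.groupby("article_id")["reviewer_signed"].agg(["mean", "max"])
pct_all_signed = (per_article["mean"] == 1).mean() * 100
pct_any_signed = (per_article["max"] == 1).mean() * 100

# Share of all published reviews that were signed.
pct_signed_reviews = pub_reviews["reviewer_signed"].mean() * 100

print(pct_public, pct_all_signed, pct_any_signed, pct_signed_reviews)
```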


Author(s):  
Lonni Besançon ◽  
Niklas Rönnberg ◽  
Jonas Löwgren ◽  
Jonathan P. Tennant ◽  
Matthew Cooper

We present a discussion and analysis of the benefits and limitations of open and non-anonymized peer review, based on results from the literature and on responses to a survey about the reviewing process of alt.chi, a more or less open review track within the CHI conference, the predominant conference in the field of human-computer interaction (HCI). This track is currently the only implementation of an open peer-review process in HCI, although, with the recent increase in interest in open science practices, open review is now being considered and used in other fields. We collected 30 responses from alt.chi authors and reviewers and found that, while the benefits are quite clear and the system is generally well liked by alt.chi participants, they are reluctant to see it used in other venues. This concurs with a number of recent studies that suggest a divergence between support for a more open review process and its practical implementation. The data and scripts are available on https://osf.io/vuw7h/, and the figures and follow-up work on http://tiny.cc/OpenReviews.


2020 ◽  
Author(s):  
Cezary Bolek ◽  
Dejan Marolov ◽  
Monika Bolek ◽  
Jovan Shopovski

This research article compares review reports in which the identity of the reviewers is revealed to the authors of the papers with those in which the reviewers decided to remain anonymous. The review reports were gathered as part of the peer review process of the European Scientific Journal (ESJ). This journal maintains a single-blind peer review procedure with optional open review: reviewers know the names of the authors, but not vice versa, and when submitting their review reports they can opt to reveal their identity to the authors. 343 review reports from members of the ESJ editorial board, gathered between May and July 2019, were analyzed. The data analysis was performed in the Python programming language using the NumPy, Pandas, and SciPy packages. Half of the reviewers chose the open option and revealed their names to the authors of the papers; the other half remained anonymous. The results show that female reviewers decided to remain anonymous more often than their male colleagues. However, there was no significant difference in the review reports on the basis of gender or country of institutional affiliation of the reviewers. Revealing identity made no significant difference in the reviewers’ point appraisals in the review reports. However, the majority of reviewers who recommended rejection in their review reports were not willing to reveal their identities, and reviewers who revealed their identity were more likely to recommend acceptance without revision or with minor revision.
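The abstract states that the analysis relied on Python with NumPy, Pandas and SciPy. The following is a minimal sketch of how such comparisons (identity disclosure versus gender, point appraisal, and recommendation) might be run with those packages; the file and column names are assumptions, not the authors' script.

```python
# Hypothetical sketch: compare signed and anonymous review reports with pandas and SciPy.
import pandas as pd
from scipy import stats

# Assumed layout: one row per review report.
reviews = pd.read_csv("esj_review_reports.csv")  # hypothetical file
# Assumed columns: 'identity_revealed' (0/1), 'gender' ('F'/'M'),
# 'recommendation' ('accept'/'minor'/'major'/'reject'), 'points' (numeric appraisal).

# 1. Is revealing identity associated with reviewer gender?
gender_table = pd.crosstab(reviews["gender"], reviews["identity_revealed"])
chi2_g, p_gender, dof, _ = stats.chi2_contingency(gender_table)

# 2. Do signed and anonymous reports differ in point appraisal?
signed = reviews.loc[reviews["identity_revealed"] == 1, "points"]
anonymous = reviews.loc[reviews["identity_revealed"] == 0, "points"]
u_stat, p_points = stats.mannwhitneyu(signed, anonymous, alternative="two-sided")

# 3. Is the recommendation associated with willingness to sign?
rec_table = pd.crosstab(reviews["recommendation"], reviews["identity_revealed"])
chi2_r, p_rec, _, _ = stats.chi2_contingency(rec_table)

print(f"gender vs. disclosure:         p = {p_gender:.3f}")
print(f"points, signed vs. anonymous:  p = {p_points:.3f}")
print(f"recommendation vs. disclosure: p = {p_rec:.3f}")
```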


2018 ◽  
Author(s):  
Cody Fullerton

For years, the gold standard in academic publishing has been the peer-review process, and for the most part, peer review remains a safeguard against authors publishing intentionally biased, misleading, and inaccurate information. Its purpose is to hold researchers accountable to the publishing standards of their field, including proper methodology, accurate literature reviews, etc. This presentation will establish the core tenets of peer review, discuss whether certain types of publications should be able to qualify as peer-reviewed, offer possible solutions, and discuss how this affects a librarian's reference interactions.


2020 ◽  
Vol 125 (2) ◽  
pp. 1033-1051
Author(s):  
Dietmar Wolfram ◽  
Peiling Wang ◽  
Adam Hembree ◽  
Hyoungjoo Park

Open peer review (OPR), where review reports and reviewers’ identities are published alongside the articles, represents one of the last aspects of the open science movement to be widely embraced, although its adoption has been growing since the turn of the century. This study provides the first comprehensive investigation of OPR adoption, its early adopters and the implementation approaches used. Current bibliographic databases do not systematically index OPR journals, nor do the OPR journals clearly state their policies on open identities and open reports. Using various methods, we identified 617 OPR journals that published at least one article with open identities or open reports as of 2019 and analyzed their wide-ranging implementations to derive emerging OPR practices. The findings suggest that: (1) there has been a steady growth in OPR adoption since 2001, when 38 journals initially adopted OPR, with more rapid growth since 2017; (2) OPR adoption is most prevalent in medical and scientific disciplines (79.9%); (3) five publishers are responsible for 81% of the identified OPR journals; (4) early adopter publishers have implemented OPR in different ways, resulting in different levels of transparency. Across the variations in OPR implementations, two important factors define the degree of transparency: open identities and open reports. Open identities may include reviewer names and affiliations as well as credentials; open reports may include timestamped review histories consisting of referee reports and author rebuttals, or a letter from the editor integrating reviewers’ comments. When and where open reports can be accessed are also important factors indicating the OPR transparency level. Publishers of optional OPR journals should add metric data to their annual status reports.


2018 ◽  
Vol 30 (2) ◽  
pp. 209-218 ◽  
Author(s):  
Paula CABEZAS Del FIERRO ◽  
Omar SABAJ MERUANE ◽  
Germán VARAS ESPINOZA ◽  
Valeria GONZÁLEZ HERRERA

The value of scientific knowledge is highly dependent on the quality of the process used to produce it, namely, the quality of the peer-review process. This process is a pivotal part of science, as it works both to legitimize and to improve the work of the scientific community. In this context, the present study investigated the relationship between review time, length, and feedback quality of review reports in the peer-review process of research articles. For this purpose, the review times of 313 referee reports from three Chilean international journals were recorded. Feedback quality was determined by estimating the ratio of direct requests to the total number of comments in each report. The number of words was used to describe the average report length in the sample. Results showed that average time and length varied little across review reports, irrespective of their quality. Low-quality reports tended to take longer to reach the editor, so neither longer review time nor greater length was associated with better feedback quality. This suggests that referees mostly describe, criticize, or praise the content of the article instead of making useful and direct comments to help authors improve their manuscripts.
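To make the quality measure concrete: each report's feedback quality is the ratio of direct requests to total comments. The sketch below, with assumed column names rather than the study's actual data or code, shows how that ratio and its rank correlation with review time and report length could be computed.

```python
# Hypothetical sketch: per-report feedback quality and its relation to time and length.
import pandas as pd
from scipy import stats

reports = pd.read_csv("referee_reports.csv")  # hypothetical file
# Assumed columns: 'direct_requests', 'total_comments', 'review_days', 'word_count'.

reports["quality"] = reports["direct_requests"] / reports["total_comments"]

rho_time, p_time = stats.spearmanr(reports["quality"], reports["review_days"])
rho_len, p_len = stats.spearmanr(reports["quality"], reports["word_count"])

print(f"quality vs. review time:   rho = {rho_time:.2f} (p = {p_time:.3f})")
print(f"quality vs. report length: rho = {rho_len:.2f} (p = {p_len:.3f})")
```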

