Opening and Reusing Transparent Peer Reviews with Automatic Article Annotation

Publications ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 13 ◽  
Author(s):  
Afshin Sadeghi ◽  
Sarven Capadisli ◽  
Johannes Wilm ◽  
Christoph Lange ◽  
Philipp Mayr

An increasing number of scientific publications are created in open and transparent peer review models: a submission is published first, and then reviewers are invited, or a submission is reviewed in a closed environment but then these reviews are published with the final article, or combinations of these. Reasons for open peer review include giving better credit to reviewers, and enabling readers to better appraise the quality of a publication. In most cases, the full, unstructured text of an open review is published next to the full, unstructured text of the article reviewed. This approach prevents human readers from getting a quick impression of the quality of parts of an article, and it does not easily support secondary exploitation, e.g., for scientometrics on reviews. While document formats have been proposed for publishing structured articles including reviews, integrated tool support for entire open peer review workflows resulting in such documents is still scarce. We present AR-Annotator, the Automatic Article and Review Annotator which employs a semantic information model of an article and its reviews, using semantic markup and unique identifiers for all entities of interest. The fine-grained article structure is not only exposed to authors and reviewers but also preserved in the published version. We publish articles and their reviews in a Linked Data representation and thus maximise their reusability by third party applications. We demonstrate this reusability by running quality-related queries against the structured representation of articles and their reviews.
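As a rough illustration of the kind of quality-related query described above, the sketch below uses Python's rdflib to run a SPARQL query over a Linked Data graph of articles and their reviews. The vocabulary (the ex: prefix, ex:reviews, ex:concernsSection, ex:rating, ex:comment) and the file name are assumptions made for illustration; they are not the AR-Annotator schema.

```python
# Minimal sketch, assuming a hypothetical review vocabulary (not the
# AR-Annotator model): find article sections that received low review ratings.
from rdflib import Graph

g = Graph()
g.parse("article-with-reviews.ttl", format="turtle")  # assumed local Linked Data export

LOW_RATED_SECTIONS = """
PREFIX ex: <http://example.org/review#>
SELECT ?section ?rating ?comment WHERE {
    ?review  ex:reviews         ?article ;
             ex:concernsSection ?section ;
             ex:rating          ?rating ;
             ex:comment         ?comment .
    FILTER (?rating < 3)
}
ORDER BY ?rating
"""

for row in g.query(LOW_RATED_SECTIONS):
    print(f"{row.section}: rated {row.rating} -- {row.comment}")
```

A third-party tool could run such queries across many published articles to aggregate review ratings per section type, which is the sort of secondary exploitation (e.g., scientometrics on reviews) the abstract mentions.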

2010 ◽  
Vol 96 (1) ◽  
pp. 20-29
Author(s):  
Jerry C. Calvanese

ABSTRACT Study Objective: The purpose of this study was to obtain data on various characteristics of peer reviews. These reviews were performed for the Nevada State Board of Medical Examiners (NSBME) to assess physician licensees' negligence and/or incompetence. It was hoped that these data could help identify and define certain characteristics of peer reviews. Methods: This study examined two years of data collected on peer reviews. The complaints were initially screened by a medical reviewer and/or a committee composed of Board members to assess the need for a peer review. Data were then collected from the peer reviews performed, including costs, specialty of the peer reviewer, location of the peer reviewer, and timeliness of the peer reviews. Results: During the two-year study, 102 peer reviews were evaluated. Sixty-nine percent of the peer-reviewed complaints originated from civil malpractice cases and 15% originated from complaints made by patients. Eighty percent of the complaint physicians were located in Clark County and 12% were located in Washoe County. Sixty-one percent of the physicians who performed the peer reviews were located in Washoe County and 24% were located in Clark County. Twelve percent of the complaint physicians had been in practice in the state for 5 years or less, 40% from 6 to 10 years, 20% from 11 to 15 years, 16% from 16 to 20 years, and 13% for 21 years or more. Forty-seven percent of the complaint physicians had three or fewer total complaints filed with the Board, 10% had four to six complaints, 17% had 7 to 10 complaints, and 26% had 11 or more complaints. The overall quality of peer reviews was judged to be good or excellent in 96% of the reviews. Malpractice was found in 42% of the reviews ordered by the medical reviewer and in 15% of those ordered by the Investigative Committees; overall, 38% of peer reviews found malpractice. The average total cost of a peer review was $791. In 47% of the peer reviews requested, materials were sent from the Board to the peer reviewer within 60 days of the original request, while in 33% of cases it took more than 120 days. In 48% of the reviews, the total time for the peer review to be completed by the peer reviewer was less than 60 days; 27% of the peer reviews took more than 120 days to be returned. Conclusion: Further data are needed to draw meaningful conclusions from certain peer review characteristics reported in this study. However, useful data were obtained regarding timeliness in sending out peer review materials, total times for the peer reviews, and costs.


1970 ◽  
Vol 3 ◽  
pp. 175-184
Author(s):  
Julie Walker

Increasing the visibility of a journal is key to increasing its quality. The International Network for the Availability of Scientific Publications (INASP) works with journal editors in the global South to publish their journals online and to increase the efficiency of the peer review process. Editors are trained in using the Open Journals System software and in online journal management and strategy so that they have the tools and knowledge needed to initiate a 'virtuous cycle' in which visibility leads to an increase in the number and quality of submissions and, in turn, to increased citations and impact. To maximise this increase in quality, it must be supported by strong editorial office processes and management. This article describes some of the issues faced and strategies employed by the editors INASP works with, placing particular emphasis on Nepal Journals Online.
Keywords: INASP; Open Journals System; Journals Online Projects; Nepal Journals Online; journal visibility; peer review
DOI: 10.3126/dsaj.v3i0.2786
Dhaulagiri Journal of Sociology and Anthropology Vol. 3, 2009, pp. 175-184


1999 ◽  
Vol 21 (2) ◽  
pp. 95-107 ◽  
Author(s):  
Michael J. Calegari ◽  
Gregory G. Geisler ◽  
Ernest R. Larkins

Extant literature suggests that the process of constructing a teaching portfolio can identify areas to improve, motivate positive changes, and elevate the importance of teaching in academe. This study describes the experience of the tax faculty at a public university in using teaching portfolios and peer reviews to improve the quality of the first two tax courses. The type of teaching portfolio used in this project consists of a course syllabus and a reflective statement that documents the rationale for all components of a course (i.e., lectures, projects, exams, writing assignments, presentations, etc.). The peer review aspect involves written feedback from a colleague on this teaching portfolio. Though research publications are usually subject to extensive peer review, teaching generally is not. Like research, however, teaching can be evaluated and ultimately improved through peer review. Thus, this study can provide valuable guidance to tax professors attempting to improve their courses.


Author(s):  
V. N. Gureyev ◽  
N. A. Mazov

The paper summarizes the authors' experience as peer reviewers of more than 100 manuscripts submitted to twelve Russian and foreign academic journals on Library and Information Science over the last seven years. The prepared peer reviews were used to compile a list of the most common critical and special comments for each manuscript, which were then structured for the analyses conducted. Typical issues accompanying the peer-review process are shown. Significant differences between the outcomes of peer review in Russian and foreign journals are detected: although the initial quality of newly submitted manuscripts is approximately equal, the final published versions in foreign journals addressed all critical and the majority of minor reviewers' comments, while in Russian journals more than one third of final versions were published with critical gaps remaining. We conclude that interest in high-quality peer review is low among both authors and editors-in-chief of Russian journals. Despite the limitations of the samples, these findings can be useful when evaluating the current peer-review system in Russian academic journals on Library and Information Science.


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 253
Author(s):  
Daniel Nüst ◽  
Stephen J. Eglen

The traditional scientific paper falls short of effectively communicating computational research.  To help improve this situation, we propose a system by which the computational workflows underlying research articles are checked. The CODECHECK system uses open infrastructure and tools and can be integrated into review and publication processes in multiple ways. We describe these integrations along multiple dimensions (importance, who, openness, when). In collaboration with academic publishers and conferences, we demonstrate CODECHECK with 25 reproductions of diverse scientific publications. These CODECHECKs show that asking for reproducible workflows during a collaborative review can effectively improve executability. While CODECHECK has clear limitations, it may represent a building block in Open Science and publishing ecosystems for improving the reproducibility, appreciation, and, potentially, the quality of non-textual research artefacts. The CODECHECK website can be accessed here: https://codecheck.org.uk/.
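To make the idea of checking a computational workflow concrete, here is a minimal sketch of what a reproduction check in the spirit of CODECHECK might script: re-run the declared entry point of the authors' analysis and verify that the output files listed in a manifest were actually regenerated. The entry point, file names, and manifest structure below are assumptions for illustration, not the project's actual CODECHECK configuration or tooling.

```python
# Illustrative reproduction check (assumed manifest and entry point, not the
# real CODECHECK configuration): re-run the workflow, then confirm that every
# output the paper declares was regenerated.
import subprocess
from pathlib import Path

MANIFEST = ["figures/figure1.png", "results/table2.csv"]  # hypothetical declared outputs
ENTRY_POINT = ["python", "analysis/run_all.py"]           # hypothetical workflow entry point

def run_check(workdir: str) -> bool:
    """Re-run the workflow in workdir and report whether all declared outputs exist."""
    subprocess.run(ENTRY_POINT, cwd=workdir, check=True)
    missing = [f for f in MANIFEST if not (Path(workdir) / f).exists()]
    for f in missing:
        print(f"MISSING declared output: {f}")
    return not missing

if __name__ == "__main__":
    print("Check passed" if run_check(".") else "Check failed")
```

A codechecker would typically go one step further and attach the regenerated figures and tables to a signed check report, but even this file-level comparison captures the core question the abstract raises: can the declared workflow be executed at all?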


2016 ◽  
Vol 113 (30) ◽  
pp. 8414-8419 ◽  
Author(s):  
Stefano Balietti ◽  
Robert L. Goldstone ◽  
Dirk Helbing

To investigate the effect of competitive incentives under peer review, we designed a novel experimental setup called the Art Exhibition Game. We present experimental evidence of how competition introduces both positive and negative effects when creative artifacts are evaluated and selected by peer review. Competition proved to be a double-edged sword: on the one hand, it fosters innovation and product diversity, but on the other hand, it also leads to more unfair reviews and to a lower level of agreement between reviewers. Moreover, an external validation of the quality of peer reviews during the laboratory experiment, based on 23,627 online evaluations on Amazon Mechanical Turk, shows that competition does not significantly increase the level of creativity. Furthermore, the higher rejection rate under competitive conditions does not improve the average quality of published contributions, because more high-quality work is also rejected. Overall, our results could explain why many ground-breaking studies in science end up in lower-tier journals. Differences and similarities between the Art Exhibition Game and scholarly peer review are discussed and the implications for the design of new incentive systems for scientists are explained.


2020 ◽  
Vol 13 (1) ◽  
pp. 1-27 ◽  
Author(s):  
Tine Köhler ◽  
M. Gloria González-Morales ◽  
George C. Banks ◽  
Ernest H. O’Boyle ◽  
Joseph A. Allen ◽  
...  

Peer review is a critical component of a robust science in industrial and organizational (I-O) psychology. Peer review extends beyond academic publishing to organizations, university departments, grant agencies, classrooms, and many other work contexts. Reviewers are responsible for judging the quality of research conducted and submitted for evaluation. Furthermore, they are responsible for treating authors and their work with respect, in a supportive and developmental manner. Given its central role in our profession, it is curious that we have no formalized review guidelines or standards and that most of us never receive formal training in peer reviewing. To address this gap, we propose a competency framework for peer review. The purpose of the framework is to define excellent peer reviewing and to give reviewers guidance on which types of behaviors lead to good peer reviews. By defining these competencies, we create clarity around expectations for peer review, standards for good reviews, and opportunities for training the behaviors required to deliver them. We further discuss how the competency framework can be used to improve peer reviewing and suggest additional steps by which stakeholders can get involved in fostering high-quality peer reviewing.


2018 ◽  
Author(s):  
Timothy H. Parker ◽  
Simon C Griffith ◽  
Judith Lee Bronstein ◽  
Fiona Fidler ◽  
Susan Adlai Foster ◽  
...  

Peer review is widely considered fundamental to maintaining the rigor of science, but it is an imperfect process. In that context, it is noteworthy that formal standards or guidelines for peer reviews themselves are rarely discussed in many disciplines, including ecology and evolutionary biology. Some may argue that a dearth of explicit guidelines is not a problem. After all, a tremendous amount of effective peer reviewing happens every day. However, there are reasons to expect that well-constructed guidelines in the form of checklists could be useful for improving certain aspects of peer review, such as promoting transparency of reviewed manuscripts, and that such checklists might be widely and enthusiastically adopted by many reviewers. Although some journals already provide checklists to reviewers, most of these checklists are quite limited in scope and do not substantially improve the rigor of the review process. There are also guidelines that seek to explain the general process of peer review. Instead, we propose a short list of important questions that reviewers can use to help authors produce more transparent and reliable manuscripts. We want to empower excellent peer review because it helps promote the production of high-quality scientific publications.


2018 ◽  
Vol 21 (4) ◽  
pp. 126-135
Author(s):  
Stephen James Walsh ◽  
Jie Yan ◽  
Vincent Mangematin ◽  
Maggie Qiuzhu Mei

How were paper bastions added to the walls of academic citadels? By mapping the evolution of the coauthorship network in 180 management journals from 1991 to 2009, we identify an elite league of business schools that retained dominance despite the research community's significant growth. The elite universities maintain their prominence through a reinforcing loop involving the peer review process and third-party ranking bodies, even as the perceived quality of the papers published, measured by their share of overall citations, declined. Leading U.S. universities dominate top journal publications, while new local poles of management research have emerged among European and Asian universities.
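As a rough sketch of the network-mapping step described above (not the authors' actual pipeline), the following Python snippet builds a coauthorship graph with networkx from per-paper author records and ranks institutions by the centrality of their authors. The record structure, field names, and example data are assumptions for illustration.

```python
# Illustrative sketch, assuming per-paper records of (author, affiliation)
# pairs: build a weighted coauthorship network and rank institutions by the
# aggregated degree centrality of their authors.
from itertools import combinations
from collections import defaultdict
import networkx as nx

# Hypothetical input records, one dict per paper.
papers = [
    {"authors": [("A. Smith", "School X"), ("B. Chen", "School Y")]},
    {"authors": [("A. Smith", "School X"), ("C. Kumar", "School Z")]},
]

G = nx.Graph()
for paper in papers:
    names = [name for name, _ in paper["authors"]]
    for u, v in combinations(names, 2):  # one coauthorship tie per author pair
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Aggregate author-level centrality up to the institution level.
centrality = nx.degree_centrality(G)
by_school = defaultdict(float)
for paper in papers:
    for author, school in paper["authors"]:
        by_school[school] += centrality.get(author, 0.0)

for school, score in sorted(by_school.items(), key=lambda kv: -kv[1]):
    print(f"{school}: {score:.3f}")
```

Tracking such institution-level scores in yearly snapshots of the network is one simple way to observe whether an "elite league" of schools retains its dominance as the community grows.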



