CODECHECK: an Open Science initiative for the independent execution of computations underlying research articles during peer review to improve reproducibility

F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 253
Author(s):  
Daniel Nüst ◽  
Stephen J. Eglen

The traditional scientific paper falls short of effectively communicating computational research.  To help improve this situation, we propose a system by which the computational workflows underlying research articles are checked. The CODECHECK system uses open infrastructure and tools and can be integrated into review and publication processes in multiple ways. We describe these integrations along multiple dimensions (importance, who, openness, when). In collaboration with academic publishers and conferences, we demonstrate CODECHECK with 25 reproductions of diverse scientific publications. These CODECHECKs show that asking for reproducible workflows during a collaborative review can effectively improve executability. While CODECHECK has clear limitations, it may represent a building block in Open Science and publishing ecosystems for improving the reproducibility, appreciation, and, potentially, the quality of non-textual research artefacts. The CODECHECK website can be accessed here: https://codecheck.org.uk/.
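The abstract does not spell out the check procedure itself, but the core idea can be sketched in a few lines of Python: delete the outputs a paper claims to produce, re-run the authors' workflow, and verify that the outputs reappear. The manifest contents, repository layout, and run command below are illustrative assumptions, not the official CODECHECK specification or tooling.

```python
"""Minimal sketch of an independent workflow check in the spirit of CODECHECK.

Everything here is illustrative: the manifest, repository layout, and run
command are assumptions, not the project's official tooling.
"""
import subprocess
from pathlib import Path

# Hypothetical manifest: output files the authors claim their workflow produces.
MANIFEST = ["figures/fig1.png", "results/table2.csv"]

def check_workflow(repo: Path, command: list[str]) -> dict[str, bool]:
    """Delete the claimed outputs, re-run the workflow, report what reappeared."""
    for name in MANIFEST:
        (repo / name).unlink(missing_ok=True)  # a successful run must regenerate these
    subprocess.run(command, cwd=repo, check=True)  # raises if the workflow itself fails
    return {name: (repo / name).exists() for name in MANIFEST}

if __name__ == "__main__":
    # Placeholder paths; a real check would first obtain the paper's repository.
    report = check_workflow(Path("paper-repo"), ["python", "run_analysis.py"])
    for name, ok in report.items():
        print(("OK  " if ok else "FAIL") + " " + name)
```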


2009 ◽
Vol 3 ◽  
pp. 175-184
Author(s):  
Julie Walker

Increasing the visibility of a journal is the key to increasing quality. The International Network for the Availability of Scientific Publications (INASP) works with journal editors in the global South to publish their journals online and to increase the efficiency of the peer review process. Editors are trained in using the Open Journals System software and in online journal management and strategy, so that they have the tools and knowledge needed to initiate a 'virtuous cycle' in which visibility leads to an increase in the number and quality of submissions and, in turn, to increased citations and impact. To maximise this increase in quality, it must be supported by strong editorial office processes and management. This article describes some of the issues and strategies faced by the editors INASP works with, placing particular emphasis on Nepal Journals Online.

Keywords: INASP; Open Journals System; Journals Online Projects; Nepal Journals Online; journal visibility; peer review

DOI: 10.3126/dsaj.v3i0.2786 (Dhaulagiri Journal of Sociology and Anthropology, Vol. 3, 2009, pp. 175-184)


2018 ◽  
Vol 30 (2) ◽  
pp. 209-218 ◽  
Author(s):  
Paula CABEZAS Del FIERRO ◽  
Omar SABAJ MERUANE ◽  
Germán VARAS ESPINOZA ◽  
Valeria GONZÁLEZ HERRERA

The value of scientific knowledge depends heavily on the quality of the process used to produce it, namely the peer-review process. This process is a pivotal part of science, as it works both to legitimize and to improve the work of the scientific community. In this context, the present study investigated the relationship between review time, length, and feedback quality of review reports in the peer review of research articles. For this purpose, the review times of 313 referee reports from three Chilean international journals were recorded. Feedback quality was estimated as the rate of direct requests over the total number of comments in each report, and the number of words was used to describe average report length in the sample. Results showed that average time and length varied little across review reports, irrespective of their quality. Low-quality reports tended to take longer to reach the editor, and neither time nor length was related to feedback quality. This suggests that referees mostly describe, criticize, or praise the content of the article instead of making useful, direct comments that help authors improve their manuscripts.
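The quality measure described above is a simple ratio, which a short sketch makes concrete. The data structure and example values below are invented for illustration; the study's actual coding scheme is only summarised in the abstract.

```python
# Illustrative computation of the feedback-quality measure described above:
# the share of a report's comments that are direct requests for changes.
from dataclasses import dataclass

@dataclass
class ReviewReport:
    review_days: int      # time the report took to reach the editor
    word_count: int       # report length in words
    total_comments: int
    direct_requests: int  # comments explicitly asking the authors to change something

    @property
    def feedback_quality(self) -> float:
        """Rate of direct requests over all comments (0 = no actionable feedback)."""
        return self.direct_requests / self.total_comments if self.total_comments else 0.0

# Invented example values, for illustration only.
reports = [ReviewReport(34, 412, 10, 7), ReviewReport(61, 398, 12, 2)]
for r in reports:
    print(f"{r.review_days} days, {r.word_count} words, quality = {r.feedback_quality:.2f}")
```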


Publications ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 13 ◽  
Author(s):  
Afshin Sadeghi ◽  
Sarven Capadisli ◽  
Johannes Wilm ◽  
Christoph Lange ◽  
Philipp Mayr

An increasing number of scientific publications are created under open and transparent peer review models: a submission is published first and reviewers are invited afterwards, or a submission is reviewed in a closed environment but the reviews are then published with the final article, or some combination of these. Reasons for open peer review include giving better credit to reviewers and enabling readers to better appraise the quality of a publication. In most cases, the full, unstructured text of an open review is published next to the full, unstructured text of the article reviewed. This approach prevents human readers from getting a quick impression of the quality of parts of an article, and it does not easily support secondary exploitation, e.g., scientometrics on reviews. While document formats have been proposed for publishing structured articles including reviews, integrated tool support for entire open peer review workflows resulting in such documents is still scarce. We present AR-Annotator, the Automatic Article and Review Annotator, which employs a semantic information model of an article and its reviews, using semantic markup and unique identifiers for all entities of interest. The fine-grained article structure is not only exposed to authors and reviewers but also preserved in the published version. We publish articles and their reviews in a Linked Data representation and thus maximise their reusability by third-party applications. We demonstrate this reusability by running quality-related queries against the structured representation of articles and their reviews.
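As one illustration of the kind of secondary exploitation the authors describe, a reader could count reviewer comments per article section from the published Linked Data. The vocabulary (`ex:targetsSection`) and file name below are hypothetical stand-ins, not the actual AR-Annotator model; only the general rdflib/SPARQL pattern is assumed.

```python
# Sketch of a quality-related query over a published article-plus-reviews graph.
# The vocabulary and file are hypothetical; AR-Annotator's actual model may differ.
from rdflib import Graph

g = Graph()
g.parse("article-with-reviews.ttl", format="turtle")  # hypothetical Linked Data export

# Which sections of the article attracted the most reviewer comments?
query = """
PREFIX ex: <http://example.org/review-model#>
SELECT ?section (COUNT(?comment) AS ?n)
WHERE { ?comment ex:targetsSection ?section . }
GROUP BY ?section
ORDER BY DESC(?n)
"""
for section, n in g.query(query):
    print(section, n)
```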


2018 ◽  
Vol 1 ◽  
Author(s):  
Pavel Stoev

There are three key challenges that journal publishers need to address nowadays: increasing machine-readability and semantic enrichment of the published content to allow text and data mining, aggregation and re-use; adopting open science principles to expand from publication of mainly research articles to all research objects throughout the research cycle; and facilitating all of this for authors, reviewers and editors through novel and user-friendly technological solutions.

ARPHA stands for Authoring, Reviewing, Publishing, Hosting and Archiving, all in one place. ARPHA is the first publishing platform to support the full life cycle of a manuscript within a single online collaborative environment. The platform consists of two interconnected but independently functioning journal publishing workflows:

- ARPHA-XML: entirely XML- and web-based collaborative authoring, peer review and publication workflow;
- ARPHA-DOC: document-based submission (PDF or text files), peer review and publication workflow.

A full list of services provided by ARPHA is available at: http://arphahub.com/about/services

Furthermore, Pensoft has been heavily investing in the technological advancement of its journals. The most significant technologies implemented by Pensoft in recent years, as demonstrated by the journal Subterranean Biology, are:

- Automatic registration of reviews at Publons - helps reviewers and editors get recognition for every review they make for the journal;
- Dimensions - a powerful citation tracker that ranks given research within its field;
- Scopus CiteScore Metrics - an interactive tool providing information on the journal's performance;
- Export of published figures and supplementary materials to the Biodiversity Literature Repository at ZENODO - increases the visibility and traceability of article and sub-article elements;
- Hypothes.is - a tool allowing annotations on selected texts from the published article.


Author(s):  
Daniel Noesgaard

The work required to collect, clean and publish biodiversity datasets is significant, and those who do it deserve recognition for their efforts. Researchers publish studies using open biodiversity data available from GBIF—the Global Biodiversity Information Facility—at a rate of about two papers a day. These studies cover areas such as macroecology, evolution, climate change, and invasive alien species, relying on data sharing by hundreds of publishing institutions and the curatorial work of thousands of individual contributors. With more than 90 per cent of these datasets licensed under Creative Commons Attribution licenses (CC BY and CC BY-NC), data users are required to credit the dataset providers. For GBIF, it is crucial to link these scientific uses to the underlying data as one means of demonstrating the value and impact of open science, while seeking to ensure attribution of individual, organizational and national contributions to the global pool of open data about biodiversity. Every single authenticated download of occurrence records from GBIF.org is issued a unique Digital Object Identifier (DOI). These DOIs each resolve to a landing page that contains:

- details of the search parameters used to generate the download;
- a quantitative map of the underlying datasets that contributed to the download;
- a simple citation to be included in works that rely on the data.

When used properly by authors and deposited correctly by journals in the article metadata, the DOI citation establishes a direct link between a scientific paper and the underlying data. Crossref, the main DOI Registration Agency for academic literature, exposes such links in Event Data, which can be consumed programmatically to report direct use of individual datasets. GBIF also records these links, permanently preserving the download archives while exposing a citation count on download landing pages, which is also summarized on the landing pages of each contributing dataset and publisher. The citation counts can be expanded to produce lists of all papers unambiguously linked to use of specific datasets. In 2018, just 15 per cent of papers based on GBIF-mediated data used DOIs to cite or acknowledge the datasets used in the studies. To promote crediting of data publishers and digital recognition of data sharing, the GBIF Secretariat has been reaching out systematically to authors and publishers since April 2018 whenever a paper fails to include a proper data citation. While publishing lags may hinder immediate effects, preliminary findings suggest that uptake is improving, as the number of papers with DOI data citations during the first part of 2019 is up more than 60 per cent compared to 2018. Focusing on the value of linking scientific publications and data, this presentation will explore the potential for establishing automatic linkage through DOI metadata while demonstrating efforts to improve metrics of data use and attribution of data providers through outreach campaigns to authors and journal publishers.
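The programmatic consumption mentioned above might look like the following sketch, which asks Crossref Event Data for events whose object is a given GBIF download DOI. The endpoint and parameter names follow the public Event Data API as documented; the DOI is a placeholder, not a real download, and the response handling is defensive in case the message shape differs.

```python
# Sketch: look up events that reference a GBIF download DOI via Crossref Event Data.
# The DOI below is a placeholder; real GBIF download DOIs use the 10.15468/dl.* prefix.
import requests

def events_for_doi(doi: str) -> list[dict]:
    """Fetch Event Data events whose object is the given DOI."""
    resp = requests.get(
        "https://api.eventdata.crossref.org/v1/events",
        params={"obj-id": f"https://doi.org/{doi}", "rows": 100},
        timeout=30,
    )
    resp.raise_for_status()
    # Defensive access: returns an empty list if the message shape differs.
    return resp.json().get("message", {}).get("events", [])

for ev in events_for_doi("10.15468/dl.example"):  # placeholder download DOI
    print(ev.get("subj_id"), ev.get("relation_type_id"))
```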


2018 ◽  
Vol 36 (1) ◽  
pp. 38-67 ◽  
Author(s):  
Ashley Rose Mehlenbacher

The research article is a staple genre in the economy of scientific research, and although research articles have received considerable treatment in genre scholarship, little attention has been given to the important development of Registered Reports. Registered Reports are an emerging, hybrid genre that proceeds through a two-stage model of peer review. This article charts the emergence of Registered Reports and explores how this new form intervenes in the evolution of the research article genre by replacing the central topos of novelty with methodological rigor. Specifically, I investigate this discursive and publishing phenomenon by describing current conversations about challenges in replicating research studies, the rhetorical exigence those conversations create, and how Registered Reports respond to this exigence. Then, to better understand this emerging form, I present an empirical study of the genre itself, first closely examining four articles published under the Registered Report model in the journal Royal Society Open Science, and then investigating the genre's hybridity by examining 32 protocols (Stage 1 Registered Reports) and 77 completed reports (Stage 2 Registered Reports) from a range of journals in the life and psychological sciences. Findings from this study suggest that Registered Reports mark a notable intervention in the research article genre for the life and psychological sciences, centering the reporting of science in serious methodological debates.


2020 ◽  
Vol 9 (3) ◽  
pp. 12
Author(s):  
Sandro Serpa ◽  
Maria José Sá ◽  
Ana Isabel Santos ◽  
Carlos Miguel Ferreira

The academic editor has been, and still is, the gatekeeper of peer-reviewed scientific publications, as the person who ultimately decides whether or not a manuscript is published. At a time of profound transformation in scientific publishing (digital publishing, open access, preprints, open peer review, and more) and of shifting expectations, inside and outside academia, towards academic publication, this perspective paper aims to add to the discussion of the (re)formulation of the academic editor's role, considering that, amid this panoply of changes, he or she remains, and will continue to be, the ultimate guardian of the scientific quality of what is published.


2020 ◽  
Author(s):  
Stephen Eglen ◽  
Erik Lieungh

In this episode, we talk about code and the benefits of making your code available during peer review and having it independently checked. Our guest is Dr. Stephen Eglen from the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge. Together with Dr. Daniel Nüst of the University of Münster, he has created CODECHECK, an Open Science initiative to facilitate the sharing of computer programs and results presented in scientific publications. The host of this episode is Erik Lieungh. This episode was first published 20 January 2020.

