Editorial: The publication of geoscientific model developments v1.2

2019
Vol 12 (6)
pp. 2215-2225
Author(s):
GMD Executive Editors

Abstract. Version 1.1 of the editorial of Geoscientific Model Development (GMD), published in 2015 (GMD Executive Editors, 2015), introduced clarifications to the policy on publication of source code and input data for papers published in the journal. Three years of working with this policy have revealed that its requirements, and the narrowness of its exceptions, need to be stated more precisely. Furthermore, the previous policy was not specific about the requirements for suitable archival locations. Best practice in code and data archiving continues to develop and is far from universal among scientists. As a result, many manuscripts require improvements to their code and data availability practice during the peer-review process. New researchers continually start their professional lives, and it remains the case that not all authors fully appreciate why code and data publication is necessary. This editorial provides an opportunity to explain this in the context of GMD. The changes in the code and data policy are summarised as follows:
– The requirement for authors to publish source code, unless this is impossible for reasons beyond their control, is clarified.
– The minimum requirements are strengthened: all model code must be made accessible during the review process to the editor and to potentially anonymous reviewers. Source code that can be made public must be made public, and embargoes are not permitted. Identical requirements apply to input data and to model evaluation data sets in model experiment descriptions.
– The scope of the code and data that must be published is described. In accordance with Copernicus' own data policy, we now specifically and strongly encourage that all code and data used in any analyses be made available. This is particularly relevant for some model evaluation papers, where editors may now strongly request that this material be made available.
– The requirements for suitable archival locations are specified, along with the recommendation that Zenodo is often a good choice; one possible deposit workflow is sketched below.
In addition, since the last editorial, an "Author contributions" section must now be included in all manuscripts.
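
Since the policy recommends Zenodo as an archival location, the following is a minimal sketch, under our own assumptions, of one way to deposit a frozen code snapshot there via Zenodo's public REST API. The access token, archive name, and metadata values are placeholders, not details from the editorial; for experimentation, the sandbox endpoint (https://sandbox.zenodo.org/api) is a safer target.

```python
# Minimal sketch of depositing a frozen code snapshot on Zenodo via its REST
# API. Token, file name, and metadata are illustrative placeholders.
import os

import requests

BASE = "https://zenodo.org/api"
params = {"access_token": os.environ["ZENODO_TOKEN"]}  # placeholder token

# 1. Create an empty deposition.
dep = requests.post(f"{BASE}/deposit/depositions", params=params, json={}).json()

# 2. Upload the code archive into the deposition's file bucket.
with open("model-v1.2.tar.gz", "rb") as fp:  # placeholder archive name
    requests.put(f"{dep['links']['bucket']}/model-v1.2.tar.gz",
                 data=fp, params=params)

# 3. Attach minimal metadata; Zenodo mints a citable DOI on publication.
metadata = {"metadata": {
    "upload_type": "software",
    "title": "Example model source code (v1.2)",  # placeholder title
    "creators": [{"name": "Doe, Jane"}],          # placeholder author
    "description": "Frozen source snapshot archived for peer review.",
}}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}",
             params=params, json=metadata)

# 4. Publish, which freezes the files and issues the DOI.
requests.post(f"{BASE}/deposit/depositions/{dep['id']}/actions/publish",
              params=params)
```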

2019
Vol 6 (1)
pp. 205395171983625
Author(s):  
Dan Sholler ◽  
Karthik Ram ◽  
Carl Boettiger ◽  
Daniel S Katz

To improve the quality and efficiency of research, groups within the scientific community seek to exploit the value of data sharing. Funders, institutions, and specialist organizations are developing and implementing strategies to encourage or mandate data sharing within and across disciplines, with varying degrees of success. Academic journals in ecology and evolution have adopted several types of public data archiving policies requiring authors to make data underlying scholarly manuscripts freely available. The effort to increase data sharing in the sciences is one part of a broader "data revolution" that has prompted discussion about a paradigm shift in scientific research. Yet anecdotes from the community and studies evaluating data availability suggest that these policies have not had the desired effect on either the quantity or the quality of available datasets. We conducted a qualitative, interview-based study with journal editorial staff and other stakeholders in the academic publishing process to examine how journals enforce data archiving policies. We specifically sought to establish whom editors and other stakeholders perceive as responsible for ensuring data completeness and quality in the peer-review process. Our analysis revealed little consensus on how data archiving policies should be enforced and on who should hold authors accountable for dataset submissions. Themes in interviewee responses included hopefulness that reviewers would take the initiative to review datasets and trust in authors to ensure the completeness and quality of their datasets. We highlight problematic aspects of these thematic responses and offer potential starting points for improving the public data archiving process.


2017
Vol 33 (1)
pp. 129-144
Author(s):  
Jay C. Thibodeau ◽  
L. Tyler Williams ◽  
Annie L. Witte

Abstract. In the new research frontier of data availability, this study develops guidelines to aid accounting academicians as they seek to evidence data integrity proactively in the peer-review process. To that end, we explore data integrity issues associated with two emerging data streams that are gaining prominence in the accounting literature: online labor markets and social media sources. We provide rich detail surrounding academic thought about these data platforms through interview data collected from a sample of former senior journal editors and survey data collected from a sample of peer reviewers. We then propound a set of best practice considerations that are designed to mitigate the perceived risks identified by our assessment.


Publications
2019
Vol 7 (3)
pp. 59
Author(s):  
J. Israel Martínez-López ◽  
Samantha Barrón-González ◽  
Alejandro Martínez López

A large number of Information and Communication Technology (ICT) tools surround scholarly activity. The central place of the peer-review process in publication has fostered a crowded market of technological tools in several formats. Despite this abundance, many tools go unexploited or underused because the academic community is unaware of them. In this study, we explored the availability and characteristics of tools that assist the peer-review process. The aim was to provide a more comprehensive understanding of the tools available at this time, and to hint at new trends for further developments. An examination of the literature informed the creation of a novel taxonomy of the types of software available in the market. This new classification is divided into nine categories as follows: (I) Identification and social media, (II) Academic search engines, (III) Journal-abstract matchmakers, (IV) Collaborative text editors, (V) Data visualization and analysis tools, (VI) Reference management, (VII) Proofreading and plagiarism detection, (VIII) Data archiving, and (IX) Scientometrics and Altmetrics. Considering these categories and their defining traits, a curated list of 220 software tools was compiled using a crowdsourced database (AlternativeTo) to identify relevant programs and the ongoing trends and perspectives of tools developed and used by scholars.
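
To make the classification concrete, here is a small sketch encoding the nine categories as a lookup structure. The example tools listed are placeholders chosen by us for illustration; they are not drawn from the study's curated list of 220.

```python
# Sketch of the paper's nine-category taxonomy encoded as a mapping.
# Example tools are illustrative placeholders, not entries from the study.
from typing import Optional

PEER_REVIEW_TOOL_TAXONOMY = {
    "I. Identification and social media": ["ORCID", "ResearchGate"],
    "II. Academic search engines": ["Google Scholar", "Scopus"],
    "III. Journal-abstract matchmakers": ["Elsevier JournalFinder"],
    "IV. Collaborative text editors": ["Overleaf", "Google Docs"],
    "V. Data visualization and analysis tools": ["Tableau", "RStudio"],
    "VI. Reference management": ["Zotero", "Mendeley"],
    "VII. Proofreading and plagiarism detection": ["Grammarly", "iThenticate"],
    "VIII. Data archiving": ["Zenodo", "Dryad"],
    "IX. Scientometrics and Altmetrics": ["Altmetric", "Publish or Perish"],
}

def categorize(tool: str) -> Optional[str]:
    """Return the taxonomy category a known tool belongs to, or None."""
    for category, tools in PEER_REVIEW_TOOL_TAXONOMY.items():
        if tool in tools:
            return category
    return None

# Example: categorize("Zotero") returns "VI. Reference management".
```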


Editorial: The publication of geoscientific model developments v1.1

2015
Vol 8 (10)
pp. 3487-3495
Author(s):
GMD Executive Editors

Abstract. Version 1.0 of the editorial of the EGU (European Geosciences Union) journal, Geoscientific Model Development (GMD), was published in 2013. In that editorial an assessment was made of the progress the journal had made since it started, and some revisions to the editorial policy were introduced. After 2 years of experience with this revised editorial policy, there are a few required updates, refinements and clarifications, so here we present version 1.1 of the editorial. The most significant amendments relate to the peer-review criteria as presented in the Framework for GMD manuscript types, which is published as an appendix to this paper and also available on the GMD manuscript types webpage. We also slightly refine and update the Publication guide and introduce a self-contained code and data policy. The changes are summarised as follows:
– All manuscript types are now required to include code or data availability paragraphs, and model code must always be made available (in the case of copyright or other legal issues, to the editor at a minimum).
– The role of evaluation in GMD papers is clarified, and a separate evaluation paper type is introduced. Model descriptions must already be published or in peer review when separate evaluation papers are submitted.
– Observationally derived data should normally be published in a data journal rather than in GMD. Syntheses of data which were specifically designed for tasks such as model boundary conditions or direct evaluation of model output may, however, be published in GMD.
– GMD publishes a broad range of different kinds of models, and this fact is now more explicitly acknowledged.
– The main changes to the Publication guide are the addition of guidelines for editors when assessing papers at the initial review stage. Before sending papers for peer review, editors are required to make sure that papers comply with the Framework for GMD paper types and to carefully consider the topic of plagiarism.
– A new appendix, the GMD code and data policy, is included.
Version 1.1 of the manuscript types and Publication guide are included in the appendices with changed sentences marked in bold font.


2008
Vol 13 (1)
pp. 1-12
Author(s):  
Christopher R. Brigham ◽  
Robert D. Rondinelli ◽  
Elizabeth Genovese ◽  
Craig Uejo ◽  
Marjorie Eskay-Auerbach

Abstract. The AMA Guides to the Evaluation of Permanent Impairment (AMA Guides), Sixth Edition, was published in December 2007 and is the result of efforts to enhance the relevance of impairment ratings, improve internal consistency, promote precision, and simplify the rating process. The revision process was designed to address shortcomings and issues in previous editions and featured an open, well-defined, and tiered peer-review process. The principles underlying the AMA Guides have not changed, but the sixth edition uses a modified conceptual framework based on the International Classification of Functioning, Disability, and Health (ICF), a comprehensive model of disablement developed by the World Health Organization. The ICF classifies domains that describe body functions and structures, activities, and participation; because an individual's functioning and disability occur in a context, the ICF includes a list of environmental factors to consider. The ICF classification uses five impairment classes that, in the sixth edition, were developed into diagnosis-based grids for each organ system. The grids use commonly accepted consensus-based criteria to classify most diagnoses into five classes of impairment severity (normal to very severe). A figure presents the structure of a typical diagnosis-based grid, which includes ranges of impairment ratings and greater clarity about choosing a discrete numerical value that reflects the impairment.
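
To illustrate the grid structure described above, here is a toy encoding of the five severity classes as a lookup table. The class labels follow the abstract (normal through very severe), but the numeric rating ranges are invented placeholders and do not reproduce any actual AMA Guides values.

```python
# Toy encoding of a diagnosis-based grid: five severity classes mapped to
# (label, rating range). All numeric ranges are invented placeholders.
IMPAIRMENT_GRID = {
    0: ("normal", (0, 0)),
    1: ("mild", (1, 13)),           # placeholder range
    2: ("moderate", (14, 25)),      # placeholder range
    3: ("severe", (26, 49)),        # placeholder range
    4: ("very severe", (50, 100)),  # placeholder range
}

def rating_range(impairment_class: int) -> tuple:
    """Return the (min, max) impairment rating for a severity class."""
    label, bounds = IMPAIRMENT_GRID[impairment_class]
    return bounds

# Example: rating_range(2) returns (14, 25) under the placeholder ranges.
```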


2008
Author(s):  
Kenya Malcolm ◽  
Allison Groenendyk ◽  
Mary Cwik ◽  
Alisa Beyer

2018
Author(s):  
Cody Fullerton

For years, the gold standard in academic publishing has been the peer-review process, and for the most part, peer review remains a safeguard against authors publishing intentionally biased, misleading, or inaccurate information. Its purpose is to hold researchers accountable to the publishing standards of their field, including proper methodology, accurate literature reviews, etc. This presentation will establish the core tenets of peer review, discuss whether certain types of publications should qualify as such, offer possible solutions, and discuss how this affects a librarian's reference interactions.

