Code of Practice for Research Data Usage Metrics Release 1

2018 ◽  

Author(s):  
Martin Fenner ◽  
Daniella Lowenberg ◽  
Matt Jones ◽  
Paul Needham ◽  
Dave Vieglais ◽  
...  

The Code of Practice for Research Data Usage Metrics standardizes the generation and distribution of usage metrics for research data, enabling for the first time the consistent and credible reporting of research data usage. This is the first release of the Code of Practice, and its recommendations are aligned as closely as possible with the COUNTER Code of Practice Release 5, which standardizes usage metrics for many scholarly resources, including journals and books. With the Code of Practice for Research Data Usage Metrics, data repositories and platform providers can report usage metrics following common best practices and using a standard report format. This is an essential step towards establishing usage metrics as a critical component in our understanding of how publicly available research data are being reused. It complements ongoing work on establishing best practices and services for data citation.
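The standard report format referred to above is a COUNTER-style JSON dataset report. A minimal sketch of assembling one is shown below; the field names, metric types and DOI are simplified illustrations of the report structure, not an authoritative rendering of the Code of Practice:

```python
import json
from datetime import date

# Sketch of a COUNTER-style dataset usage report: a header identifying
# the reporting repository, plus per-dataset performance entries that
# count views ("investigations") and downloads ("requests") per period.
# All field names and values here are simplified, illustrative examples.
def make_dataset_report(platform: str, datasets: list) -> dict:
    return {
        "report-header": {
            "report-name": "dataset report",
            "created": date.today().isoformat(),
            "created-by": platform,
        },
        "report-datasets": datasets,
    }

usage = [
    {
        # Hypothetical DOI used purely for illustration
        "dataset-id": [{"type": "doi", "value": "10.1234/example"}],
        "performance": [
            {
                "period": {"begin-date": "2018-01-01", "end-date": "2018-01-31"},
                "instance": [
                    {"metric-type": "total-dataset-investigations", "count": 3},
                    {"metric-type": "unique-dataset-requests", "count": 2},
                ],
            }
        ],
    }
]

report = make_dataset_report("example-repository", usage)
print(report["report-header"]["report-name"])  # → dataset report
print(json.dumps(report, indent=2)[:40])
```

The value of such a shared shape is that any consumer can aggregate usage across repositories without per-platform parsing logic.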


2018 ◽  
Author(s):  
Martin Fenner

There is a need for the consistent and credible reporting of research data usage. Such usage metrics are required as an important component in understanding how publicly available research data are being reused. To address this need, ...


Author(s):  
Gail M. Thornton ◽  
Ali Shiri

Introduction: Open health data provides healthcare professionals, biomedical researchers and the general public with access to health data which has the potential to improve healthcare delivery and policy. The challenge for data providers is to create and implement appropriate metadata, or structured data about the data, to ensure that data are easy to discover, access and re-use. The goal of this study is to identify, evaluate and compare Canadian open health data repositories for their searching, browsing and navigation functionalities, the richness of their metadata description practices, and their metadata-based filtering mechanisms.

Methods: Metadata-based search and browsing were evaluated, in addition to the number and nature of metadata elements. Canadian open health data repositories across national, provincial and institutional levels were evaluated. Data collected using verbatim text recording were evaluated using an analytical framework based on the 2019 Dataverse North Metadata Best Practices guide and the 2019 Data Citation Implementation Project roadmap.

Results: All six repositories required filtering to access “open health data”. All six repositories included subject facets for filtering, and title and description on the results list. Inconsistencies suggest that improvements should address advanced search, health-specific search terms, records for all repositories and links to related publications.

Discussion: Consistent use of title and description suggests that an interoperable interface is possible. Records indicate the need for explicit, easy-to-find mechanisms to access metadata in repositories. The analytical framework represents first-draft guidelines for metadata creation and implementation to improve organization, discoverability and access to Canadian open health data.


2020 ◽  
Author(s):  
Graham Smith ◽  
Andrew Hufton

<p>Researchers are increasingly expected by funders and journals to make their data available for reuse as a condition of publication. At Springer Nature, we feel that publishers must support researchers in meeting these additional requirements, and must recognise the distinct opportunities data holds as a research output. Here, we outline some of the varied ways that Springer Nature supports research data sharing and report on key outcomes.</p><p>Our staff and journals are closely involved with community-led efforts, like the Enabling FAIR Data initiative and the COPDESS 2014 Statement of Commitment <sup>1-4</sup>. The Enabling FAIR Data initiative, which was endorsed in January 2019 by <em>Nature</em> and <em>Scientific Data</em>, and by <em>Nature Geoscience</em> in January 2020, establishes a clear expectation that Earth and environmental sciences data should be deposited in FAIR<sup>5</sup> Data-aligned community repositories, when available (and in general purpose repositories otherwise). In support of this endorsement, <em>Nature</em> and <em>Nature Geoscience</em> require authors to share and deposit their Earth and environmental science data, and <em>Scientific Data</em> has committed to progressively updating its list of recommended data repositories to help authors comply with this mandate.</p><p>In addition, we offer a range of research data services, with various levels of support available to researchers in terms of data curation, expert guidance on repositories and linking research data and publications.</p><p>We appreciate that researchers face potentially challenging requirements in terms of the ‘what’, ‘where’ and ‘how’ of sharing research data. This can be particularly difficult for researchers to negotiate given the huge diversity of policies across different journals. We have therefore developed a series of standardised data policies, which have now been adopted by more than 1,600 Springer Nature journals. 
</p><p>We believe that these initiatives make important strides in challenging the current replication crisis and addressing the economic<sup>6</sup> and societal consequences of data unavailability. They also offer an opportunity to drive change in how academic credit is measured, through the recognition of a wider range of research outputs than articles and their citations alone. As a signatory of the San Francisco Declaration on Research Assessment<sup>7</sup>, Nature Research is committed to improving the methods of evaluating scholarly research. Research data in this context offers new mechanisms to measure the impact of all research outputs. To this end, Springer Nature supports the publication of peer-reviewed data papers through journals like <em>Scientific Data</em>. Analysis of citation patterns demonstrates that data papers can be well cited, and offer a viable way for researchers to receive credit for data sharing through traditional citation metrics. Springer Nature is also working hard to improve support for direct data citation. In 2018 a data citation roadmap developed by the Publishers Early Adopters Expert Group was published in <em>Scientific Data</em><sup>8</sup>, outlining practical steps for publishers to work with data citations and the associated benefits in transparency and credit for researchers. 
Using examples from this roadmap, its implementation and supporting services, we outline how a FAIR-led data approach from publishers can help researchers in the Earth and environmental sciences to capitalise on new expectations around data sharing.</p><p>__</p><ol><li>https://doi.org/10.1038/d41586-019-00075-3</li> <li>https://doi.org/10.1038/s41561-019-0506-4</li> <li>https://copdess.org/enabling-fair-data-project/commitment-statement-in-the-earth-space-and-environmental-sciences/</li> <li>https://copdess.org/statement-of-commitment/</li> <li>https://www.force11.org/group/fairgroup/fairprinciples</li> <li>https://op.europa.eu/en/publication-detail/-/publication/d375368c-1a0a-11e9-8d04-01aa75ed71a1</li> <li>https://sfdora.org/read/</li> <li>https://doi.org/10.1038/sdata.2018.259</li> </ol>


2018 ◽  
Vol 12 (2) ◽  
pp. 274-285 ◽  
Author(s):  
Dan Fowler ◽  
Jo Barratt ◽  
Paul Walsh

There is significant friction in the acquisition, sharing, and reuse of research data. It is estimated that eighty percent of data analysis is invested in the cleaning and mapping of data (Dasu and Johnson, 2003). This friction prevents researchers not well versed in data preparation techniques from reusing an ever-increasing amount of data available within research data repositories. Frictionless Data is an ongoing project at Open Knowledge International focused on removing this friction. We are doing this by developing a set of tools, specifications, and best practices for describing, publishing, and validating data. The heart of this project is the “Data Package”, a containerization format for data based on existing practices for publishing open source software. This paper will report on current progress toward that goal.
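The Data Package format described above centres on a `datapackage.json` descriptor that bundles metadata, file locations and table schemas. A minimal sketch in Python follows; the dataset name, paths and field names are illustrative examples, not taken from the paper, and the check function only mirrors the spirit of the specification rather than implementing full validation:

```python
import json

# A minimal Data Package descriptor: package-level metadata plus a list
# of data resources, each described by a file path and a table schema.
# The dataset name, paths and fields below are illustrative examples.
descriptor = {
    "name": "example-measurements",
    "title": "Example measurement data",
    "licenses": [{"name": "CC0-1.0"}],
    "resources": [
        {
            "name": "measurements",
            "path": "data/measurements.csv",
            "schema": {
                "fields": [
                    {"name": "site_id", "type": "string"},
                    {"name": "measured_at", "type": "date"},
                    {"name": "value", "type": "number"},
                ]
            },
        }
    ],
}

def check_descriptor(pkg: dict) -> list:
    """Light sanity checks: a package needs a name and at least one
    resource, and each resource needs a path (or inline data) so the
    data it describes can actually be located."""
    problems = []
    if not pkg.get("name"):
        problems.append("package has no name")
    resources = pkg.get("resources", [])
    if not resources:
        problems.append("package has no resources")
    for res in resources:
        if "path" not in res and "data" not in res:
            problems.append(f"resource {res.get('name')!r} has no path or data")
    return problems

print(check_descriptor(descriptor))  # → []
print(json.dumps(descriptor, indent=2)[:40])
```

Because the descriptor travels alongside the data files, a consumer can discover column names and types before parsing a single row, which is exactly the friction the project aims to remove.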


2021 ◽  
Author(s):  
Steven L Goldstein ◽  
Kerstin Lehnert ◽  
Albrecht W Hofmann

<p>The ultimate goal of research data management is to achieve the long-term utility and impact of data acquired by research projects. Proper data management ensures that all researchers can validate and replicate findings, and reuse data in the quest for new discoveries. Research data need to be open, consistently and comprehensively documented for meaningful evaluation and reuse following domain-specific guidelines, and available for reuse via public data repositories that make them Findable, persistently Accessible, Interoperable, and Reusable (FAIR).</p><p>In the early 2000s, the development of geochemical databases such as GEOROC and PetDB underscored that the reporting and documenting practices of geochemical data in the scientific literature were inconsistent and incomplete. The original data could often not be recovered from the publications, and essential information about samples, analytical procedures, data reduction, and data uncertainties was missing, thus limiting meaningful reuse of the data and reproducibility of the scientific findings. To prevent such poor scientific practice from damaging the health of the entire discipline, we launched the Editors Roundtable in 2007, an initiative to bring together editors, publishers, and database providers to implement consistent publication practices for geochemical data. Recognizing that mainstream scientific journals were the most effective agents to rectify problems in data reporting and implement best practices, members of the Editors Roundtable created and signed a policy statement that laid out ‘Requirements for the Publication of Geochemical Data’ (Goldstein et al. 2014, http://dx.doi.org/10.1594/IEDA/100426). This presentation will examine the impact of this initial policy statement, assess the current status of best practices for geochemical data management, and explore what actions are still needed. 
</p><p>While the Editors Roundtable policy statement led to improved data reporting practices in some journals, and provided the basis for data submission policies and guidelines of the EarthChem Library (ECL), data reporting practices overall remained inconsistent and inadequate. Only with the formation of the Coalition for Publishing Data in the Earth and Space Sciences (COPDESS, www.copdess.org), which extended the Editors Roundtable to include publishers and data facilities across the entire Earth and Space Sciences, along with the subsequent AGU project ‘Enabling FAIR Data’, has the implementation of new requirements by publishers, funders, and data repositories progressed and led to significant compliance with the FAIR Data Principles. Submission of geochemical data to open and FAIR repositories has increased substantially. Nevertheless, standard guidelines for documenting geochemical data and standard protocols for exchanging geochemical data among distributed data systems still need to be defined, and structures to govern such standards need to be identified by the global geochemistry community. Professional societies such as the Geochemical Society, the European Association of Geochemistry, and the International Association of GeoChemistry can and should take a leading role in this process.</p>


2017 ◽  
Vol 41 (3) ◽  
pp. 428-435 ◽  
Author(s):  
David Stuart

Purpose: The purpose of this paper is to highlight the problem of establishing metrics for the impact of research data when norms of behaviour have not yet become established.

Design/methodology/approach: The paper considers existing research into data citation and explores the citation of data journals.

Findings: The paper finds that the diversity of data and its citation precludes the drawing of any simple conclusions about how to measure the impact of data, and that an overemphasis on metrics before norms of behaviour have become established may adversely affect the data ecosystem.

Originality/value: The paper considers multiple different types of data citation, including, for the first time, the citation of data journals.


2017 ◽  
Author(s):  
Sarala M. Wimalaratne ◽  
Nick Juty ◽  
John Kunze ◽  
Greg Janée ◽  
Julie A. McMurry ◽  
...  

Most biomedical data repositories issue locally unique accession numbers, but do not provide globally unique, machine-resolvable, persistent identifiers for their datasets, as required by publishers wishing to implement data citation in accordance with widely accepted principles. Local accessions may, however, be prefixed with a namespace identifier, providing global uniqueness. Such “compact identifiers” have been widely used in biomedical informatics to support global resource identification with local identifier assignment.

We report here on our project to provide robust support for machine-resolvable, persistent compact identifiers in biomedical data citation, by harmonizing the Identifiers.org and N2T.net (Name-To-Thing) meta-resolvers and extending their capabilities. Identifiers.org services, hosted at the European Molecular Biology Laboratory – European Bioinformatics Institute (EMBL-EBI), and N2T.net services, hosted at the California Digital Library (CDL), can now resolve any given identifier from over 600 source databases to its original source on the Web, using a common registry of prefix-based redirection rules.

We believe these services will be of significant help to publishers and others implementing persistent, machine-resolvable citation of research data.
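The compact-identifier scheme described above joins a registered namespace prefix to a local accession, and a meta-resolver redirects the combined form to the source database. A minimal sketch of that resolution step is below; the prefix registry here is a tiny illustrative subset, not the real Identifiers.org registry, and the function only builds the resolver URL rather than performing the HTTP redirect:

```python
# Compact identifiers take the form "prefix:accession". Meta-resolvers
# expose a uniform entry point, e.g. https://identifiers.org/<prefix>:<accession>
# or https://n2t.net/<prefix>:<accession>, and redirect to the source database.
# KNOWN_PREFIXES is an illustrative subset, not the actual registry.
KNOWN_PREFIXES = {"pdb", "go", "taxonomy", "uniprot"}

def resolver_url(compact_id: str, resolver: str = "https://identifiers.org") -> str:
    """Build a meta-resolver URL for a compact identifier,
    checking that the namespace prefix is registered."""
    prefix, sep, accession = compact_id.partition(":")
    if not sep or not accession:
        raise ValueError(f"not a compact identifier: {compact_id!r}")
    if prefix.lower() not in KNOWN_PREFIXES:
        raise ValueError(f"unknown prefix: {prefix!r}")
    return f"{resolver}/{prefix.lower()}:{accession}"

print(resolver_url("pdb:2gc4"))                       # → https://identifiers.org/pdb:2gc4
print(resolver_url("GO:0006915", "https://n2t.net"))  # → https://n2t.net/go:0006915
```

Because both resolvers share one registry of prefix rules, the same compact identifier resolves identically through either entry point, which is what makes the citations persistent and machine-actionable.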


2009 ◽  
pp. 97-112
Author(s):  
Z. V. Karamysheva

The paper is dedicated to the famous geobotanist and botanical geographer A. A. Yunatov and his research in Mongolia. Yunatov's scientific activities and his role as an organizer of science are analyzed. His personal contributions to the study of the vegetation of Mongolia are the following: the vegetation cover of Mongolia was described in detail for the first time, zonal and altitudinal regularities of its distribution were revealed, and the scheme of botanical-geographic regionalization and the first medium-scale vegetation map were compiled. The author's research data were published in Russia, Mongolia and China.


2021 ◽  
pp. 016555152199863
Author(s):  
Ismael Vázquez ◽  
María Novo-Lourés ◽  
Reyes Pavón ◽  
Rosalía Laza ◽  
José Ramón Méndez ◽  
...  

Current research has evolved in such a way that scientists must not only adequately describe the algorithms they introduce and the results of their application, but also ensure the possibility of reproducing the results and comparing them with those obtained through other approaches. In this context, public data sets (sometimes shared through repositories) are among the most important elements for the development of experimental protocols and test benches. This study has analysed a significant number of CS/ML (Computer Science/Machine Learning) research data repositories and data sets and detected some limitations that hamper their utility. In particular, we identify and discuss the following needed functionalities for repositories: (1) building customised data sets for specific research tasks, (2) facilitating the comparison of different techniques using dissimilar pre-processing methods, (3) ensuring the availability of software applications to reproduce the pre-processing steps without using the repository functionalities and (4) providing protection mechanisms for licencing issues and user rights. To demonstrate the introduced functionality, we created the STRep (Spam Text Repository) web application, which implements our recommendations adapted to the field of spam text repositories. In addition, we launched an instance of STRep at the URL https://rdata.4spam.group to facilitate understanding of this study.

