Towards Machine-Readable (Meta) Data and the FAIR Value for Artificial Intelligence Exploration of COVID-19 and Cancer Research Data

2021 · Vol 4
Author(s): Maria Luiza M. Campos, Eugênio Silva, Renato Cerceau, Sérgio Manuel Serra da Cruz, Fabricio A. B. Silva, ...

2019 · Vol 46 (8) · pp. 622-638
Author(s): Joachim Schöpfel, Dominic Farace, Hélène Prost, Antonella Zane

Data papers have been defined as scholarly journal publications whose primary purpose is to describe research data. Our survey provides more insights into the environment of data papers, i.e., disciplines, publishers, and business models, and into their structure, length, formats, metadata, and licensing. Data papers are a product of the emerging ecosystem of data-driven open science. They contribute to the FAIR principles for research data management. However, the boundaries with other categories of academic publishing are partly blurred. Data papers are (or can be) generated automatically and are potentially machine-readable. Data papers are essentially information, i.e., descriptions of data, but they also contribute in part to the generation of knowledge and data in their own right. As part of the new ecosystem of open and data-driven science, data papers and data journals are an interesting and relevant object for assessing and understanding the transition of the traditional system of academic publishing.
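The survey's point that data papers can be generated automatically and made machine-readable can be illustrated with a minimal sketch: a dataset description expressed as a structured record whose fields loosely follow DataCite-style metadata. All field names and values below are illustrative assumptions, not a schema mandated by any journal or standard.

```python
import json

# A minimal, machine-readable description of a dataset, as a data paper
# might expose it. Field names loosely follow DataCite-style metadata;
# the identifier and all values are illustrative placeholders.
data_paper_record = {
    "title": "Example dataset description",
    "creators": ["Doe, Jane", "Roe, Richard"],
    "publicationYear": 2019,
    "resourceType": "Dataset",
    "identifier": {"type": "DOI", "value": "10.0000/example"},
    "rights": "CC-BY-4.0",    # licensing, one of the surveyed aspects
    "formats": ["text/csv"],  # data formats, another surveyed aspect
    "description": "Describes the research data rather than new findings.",
}

def is_fair_ready(record):
    """Check that the fields needed for findability and reuse are present."""
    required = {"title", "creators", "identifier", "rights", "formats"}
    return required.issubset(record)

print(json.dumps(data_paper_record, indent=2))
print("FAIR-ready:", is_fair_ready(data_paper_record))
```

Because the record is plain structured data, it can be both generated and consumed by machines, which is exactly what makes automatically produced data papers plausible.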


2021
Author(s): Richard Wessels, Thijmen Kok, Hans van Melick, Martyn Drury

<p>Publishing research data in a Findable, Accessible, Interoperable, and Reusable (FAIR) manner is increasingly valued and is nowadays often required by publishers and funders. Because experimental research data provide the backbone for scientific publications, it is important to publish these data as FAIRly as possible to enable reuse and citation, thereby increasing the impact of research.</p><p>The structural geology group at Utrecht University is collaborating with the EarthCube-funded StraboSpot initiative to develop (meta)data schemas, templates, and workflows that support researchers in collecting and publishing petrological and microstructural data. These data will be made available in a FAIR manner through the EPOS (European Plate Observing System) data publication chain (https://epos-msl.uu.nl/).</p><p>The data workflow under development currently includes: a) collecting structural field (meta)data compliant with the StraboSpot protocols, b) creating thin sections oriented in three dimensions by applying a notch system (Tikoff et al., 2019), c) scanning and digitizing thin sections using a high-resolution scanner, d) automated mineralogy through EDS on a SEM, and e) high-resolution geochemistry using a microprobe. The purpose of this workflow is to be able to track geochemical and structural measurements and observations throughout the analytical process.</p><p>This workflow is applied to samples from the Cap de Creus region in northeast Spain. Located in the axial zone of the Pyrenees, the Precambrian metasediments underwent HT-LP greenschist- to amphibolite-facies metamorphism, are intruded by pegmatitic bodies, and are transected by greenschist-facies shear zones. Cap de Creus is a natural laboratory for studying the deformation history of the Pyrenees, and samples from the region are ideal for testing and refining the data workflow. In particular, the geochemical data collected under this workflow are used as input for modelling the bulk rock composition using Perple_X.</p><p>In the near future the workflow will be complemented by adding unique identifiers to the collected samples using the IGSN (International Geo Sample Number), and by incorporating a StraboSpot-developed application for microscopy-based image correlation. This workflow will be refined and included in the broader correlative microscopy workflow that will be applied in the upcoming EXCITE project, an H2020-funded European collaboration of electron and X-ray microscopy facilities and researchers aimed at structural and chemical imaging of earth materials.</p>
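As one illustration of how such a workflow could keep measurements traceable end to end, the sketch below logs each analytical step (field data, oriented thin section, scan, SEM-EDS, microprobe) against a sample record carrying a persistent identifier. The field names, the placeholder IGSN value, and the step labels are assumptions for illustration, not the actual StraboSpot or EPOS schema.

```python
import json

# Illustrative sample record for a workflow like the one described above.
# "igsn" holds a placeholder International Geo Sample Number; all field
# names and values are assumptions, not a real schema.
sample = {
    "igsn": "IEXMP0001",
    "locality": "Cap de Creus, NE Spain",
    "lithology": "metasediment",
    "steps": [],
}

def log_step(record, step, instrument=None):
    """Append an analytical step so every measurement stays traceable."""
    record["steps"].append({"step": step, "instrument": instrument})
    return record

log_step(sample, "field_metadata", "StraboSpot")
log_step(sample, "oriented_thin_section")          # notch system
log_step(sample, "high_resolution_scan", "slide scanner")
log_step(sample, "automated_mineralogy", "SEM-EDS")
log_step(sample, "geochemistry", "electron microprobe")

print(json.dumps(sample, indent=2))
```

Keeping the step log on the sample record means any downstream result (e.g. the Perple_X input) can be traced back through the instruments and preparation steps that produced it.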


Author(s): Samuel L. Volchenboum, Suzanne M. Cox, Allison Heath, Adam Resnick, Susan L. Cohn, ...

The falling costs and increasing fidelity of high-throughput biomedical research data have led to a renaissance in cancer surveillance and treatment. Yet the volume, velocity, and complexity of these data have outstripped the capacity of the growing number of researchers collecting and analyzing them. By centralizing the data, processing power, and tools, there is a valuable opportunity to share resources and thus increase the efficiency, power, and impact of research. Herein, we describe current data commons and how they operate in the oncology landscape, including an overview of the International Neuroblastoma Risk Group data commons as a paradigm case. We outline the practical steps and considerations in building data commons. Finally, we discuss the unique opportunities and benefits of creating a data commons within the context of pediatric cancer research, highlighting the particular advantages for clinical oncology, and suggest next steps.


2012 · Vol 3 (1)
Author(s): Nell Sedransk, Linda J. Young, Cliff Spiegelman

Making published scientific research data publicly available can benefit scientists and policy makers only if there is sufficient information for these data to be intelligible. The necessary meta-data therefore go beyond scientific and technological detail to cover the statistical approach and methodologies applied to these data. The statistical principles that give integrity to researchers' analyses and interpretations of their data require documentation. This is true when the intent is to verify or validate published research findings; it is equally true when the intent is to use the scientific data in conjunction with other data or new experimental data to explore complex questions; and it is profoundly important when scientific results and interpretations are taken outside the world of science to establish a basis for policy, for legal precedent, or for decision-making. When research draws on already public databases, e.g., a large federal statistical database or a large scientific database, the selection of data for analysis, whether by subsampling or by aggregation, is specific to that research, so this (statistical) methodology is a crucial part of the meta-data. Examples illustrate the role of statistical meta-data in the use and reuse of these public datasets and the impact on public policy and precedent.
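A minimal sketch of what such statistical meta-data might look like for a subsample drawn from a public database: recording the inclusion criterion, sampling method, and random seed makes the selection exactly reproducible by anyone reusing the data. The source name, field names, and example records are illustrative assumptions.

```python
import random

# Toy stand-in for a large public database (illustrative records only).
population = [{"id": i, "year": 2000 + i % 12} for i in range(1000)]

# Statistical meta-data documenting how the analysis subsample was drawn.
# All field names and values are assumptions for illustration.
selection_metadata = {
    "source": "example public database",
    "inclusion_criterion": "year >= 2008",
    "method": "simple random sample without replacement",
    "sample_size": 50,
    "random_seed": 42,
}

eligible = [r for r in population if r["year"] >= 2008]
rng = random.Random(selection_metadata["random_seed"])
subsample = rng.sample(eligible, selection_metadata["sample_size"])

# With the documented seed and criterion, the same subsample can be
# re-drawn exactly, which is what makes the selection verifiable.
redraw = random.Random(42).sample(eligible, 50)
assert [r["id"] for r in subsample] == [r["id"] for r in redraw]
```

Without the seed and the stated criterion, a reader could neither verify the published analysis nor safely combine the subsample with other data, which is the abstract's central point.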

