Methods for Open and Reproducible Materials Science

2019 ◽  
Author(s):  
Sara L Wilson ◽  
Micah Altman ◽  
Rafael Jaramillo

Data stewardship in experimental materials science is increasingly complex and important. Progress in data science and the inverse design of materials gives reason for optimism that advances can be made if appropriate data resources are made available. Data stewardship also plays a critical role in maintaining broad support for research in the face of well-publicized replication failures (in other fields) and frequently changing attitudes, norms, and sponsor requirements for open science. Present-day data management practices and attitudes in materials science are not well understood. In this article, we collect information on the practices of a selection of materials scientists at two leading universities, using a semi-structured interview instrument. An analysis of these interviews reveals that although data management is universally seen as important, data management practices vary widely. Based on this analysis, we conjecture that broad adoption of basic file-level data sharing at the time of manuscript submission would benefit the field without imposing substantial burdens on researchers. More comprehensive solutions for lifecycle open research in materials science will have to overcome substantial differences in attitudes and practices.

2016 ◽  
Vol 11 (1) ◽  
pp. 156 ◽  
Author(s):  
Wei Jeng ◽  
Liz Lyon

We report on a case study that examines the social science community’s capability and institutional support for data management. Fourteen researchers were invited to an in-depth qualitative survey between June 2014 and October 2015. We modify and adopt the Community Capability Model Framework (CCMF) profile tool to ask these scholars to self-assess their current data practices and whether their academic environment provides enough supportive infrastructure for data-related activities. The exemplar disciplines in this report are anthropology, political science, and library and information science. Our findings deepen our understanding of these social science disciplines and identify capabilities that are well developed and those that are poorly developed. The participants reported that their institutions have made relatively slow progress on economic support and data science training courses, but acknowledged that they are well informed and trained in protecting participants’ privacy. This result confirms observations in the prior literature that social scientists are concerned with ethical perspectives but lack technical training and support. The results also demonstrate intra- and inter-disciplinary commonalities and differences in researcher perceptions of data-intensive capability, and highlight potential opportunities for the development and delivery of new and impactful research data management support services for social science researchers and faculty.


2021 ◽  
Vol 16 (1) ◽  
pp. 20
Author(s):  
Hagen Peukert

After a century of theorising about and applying management practices, we are entering a new stage in management science: digital management. The management of digital data is becoming embedded in traditional management functions and, at the same time, continues to create viable solutions and conceptualisations in its established fields, e.g. research data management. One can thus observe bilateral synergies and mutual enrichment of traditional and data management practices across fields. This paper addresses a case in point, in which new and old management practices amalgamate to meet a steadily increasing, at times sharply rising, demand for data curation services in academic institutions. The idea of modularisation, as known from software engineering, is applied to data curation workflows so that economies of scale and scope can be exploited. While scaling refers to both management science and data science, optimising is understood in the traditional managerial sense, that is, with respect to the cost function. By means of a situation analysis describing how data curation services spread from one department to the entire institution, together with an analysis of the factors of influence, a method of modularisation is outlined that converges to an optimal state of curation workflows.
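As an illustration of the modularisation idea described in this abstract (not part of the original article), the following Python sketch models curation steps as interchangeable modules that can be composed into workflows and costed; the step names and per-step cost figures are purely hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

# A curation module: a named, reusable processing step with an assumed unit cost.
@dataclass
class CurationModule:
    name: str
    cost_per_dataset: float          # hypothetical cost figure
    run: Callable[[dict], dict]      # transforms a dataset's metadata record

def check_file_formats(record: dict) -> dict:
    record["formats_checked"] = True
    return record

def assign_persistent_id(record: dict) -> dict:
    record["pid"] = f"hdl:00000/{record['name']}"   # placeholder identifier
    return record

# Workflows are ordered lists of modules; reusing the same modules across
# departments is what creates economies of scale and scope.
def run_workflow(modules: List[CurationModule], record: dict) -> dict:
    for module in modules:
        record = module.run(record)
    return record

def workflow_cost(modules: List[CurationModule], n_datasets: int) -> float:
    return n_datasets * sum(m.cost_per_dataset for m in modules)

if __name__ == "__main__":
    workflow = [
        CurationModule("format check", 2.0, check_file_formats),
        CurationModule("PID assignment", 1.0, assign_persistent_id),
    ]
    print(run_workflow(workflow, {"name": "dataset-001"}))
    print("estimated cost:", workflow_cost(workflow, n_datasets=50))
```

In this sketch, optimisation in the managerial sense would amount to choosing which modules to standardise and reuse so that the total cost function is minimised across departments.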


2019 ◽  
Vol 25 (2) ◽  
pp. 132-150
Author(s):  
Isaac C. Oti ◽  
Nasir G. Gharaibeh

The amount of data being maintained by state departments of transportation (DOTs) and local transportation agencies is increasing steadily. Although these data provide opportunities to facilitate decision making at transportation agencies, managing large and diverse data involves challenges. This article assesses the maturity of three data management practices (stewardship, storage and warehousing, and integration) for 16 transportation data groups, based on a survey of 43 DOTs in the United States. The assessment results show that data management practices at the monitoring and operations phases of the transportation infrastructure life cycle are likely more mature than those at other phases. Inventory data, in particular, have the most mature data management practices. At the other end, real estate data and travel modeling data have the least mature data management practices. A comparison of the practices indicates that data stewardship is more mature than data integration and storage and warehousing.
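A hypothetical sketch (not taken from the article) of how survey-based maturity ratings per data group and practice might be aggregated into the kind of comparison reported above; the rating scale, group names, and numbers are assumptions.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: (data_group, practice, maturity rating on a 1-5 scale)
responses = [
    ("inventory",       "stewardship",             4),
    ("inventory",       "storage_and_warehousing", 3),
    ("real_estate",     "stewardship",             2),
    ("travel_modeling", "integration",             1),
]

# Average maturity per practice across all data groups and respondents.
by_practice = defaultdict(list)
for group, practice, rating in responses:
    by_practice[practice].append(rating)

for practice, ratings in by_practice.items():
    print(f"{practice}: mean maturity {mean(ratings):.1f}")
```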


2020 ◽  
Vol 2 (1-2) ◽  
pp. 238-245 ◽  
Author(s):  
Luana Sales ◽  
Patrícia Henning ◽  
Viviane Veiga ◽  
Maira Murrieta Costa ◽  
Luís Fernando Sayão ◽  
...  

The FAIR principles, an acronym for Findable, Accessible, Interoperable and Reusable, are recognised worldwide as key elements of good practice in all data management processes. To understand how the Brazilian scientific community is adhering to these principles, this article reports on Brazilian adherence to the GO FAIR initiative through the creation of the GO FAIR Brazil Office and the way its implementation networks are created. To contextualise this, we provide a brief presentation of open data policies in Brazilian research and government, and finally, we describe the model adopted for the GO FAIR Brazil implementation networks. The Brazilian Institute of Information in Science and Technology is responsible for the GO FAIR Brazil Office, which operates in all fields of knowledge and supports thematic implementation networks. Today, GO FAIR Brazil-Health is the first implementation network in operation; it works across all health domains and serves as a model for other fields, such as agriculture, nuclear energy, and digital humanities, which are in the process of negotiating adherence. This report demonstrates the strong interest and effort of the Brazilian scientific communities in implementing the FAIR principles in their research data management practices.
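To make the FAIR principles more concrete (this example is not part of the article), the sketch below shows a minimal, machine-readable dataset description touching on findability (identifier, keywords), accessibility (access URL), interoperability (standard format and vocabulary), and reusability (licence, provenance); every field value is hypothetical.

```python
import json

# A minimal, hypothetical metadata record illustrating the four FAIR aspects.
dataset_record = {
    # Findable: a globally unique, persistent identifier and rich keywords
    "identifier": "https://doi.org/10.0000/example-dataset",   # placeholder DOI
    "title": "Example health surveillance dataset",
    "keywords": ["public health", "surveillance", "Brazil"],
    # Accessible: a resolvable access point over a standard protocol (HTTPS)
    "access_url": "https://repository.example.org/datasets/123",
    # Interoperable: a community-standard, open format and vocabulary
    "format": "text/csv",
    "vocabulary": "schema.org/Dataset",
    # Reusable: clear licence and provenance information
    "license": "CC-BY-4.0",
    "provenance": "Collected 2019-2020 by the example research group",
}

print(json.dumps(dataset_record, indent=2, ensure_ascii=False))
```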


2017 ◽  
Author(s):  
Philippa C. Griffin ◽  
Jyoti Khadake ◽  
Kate S. LeMay ◽  
Suzanna E. Lewis ◽  
Sandra Orchard ◽  
...  

Throughout history, the life sciences have been revolutionised by technological advances; in our era this is manifested by advances in instrumentation for data generation, and consequently researchers now routinely handle large amounts of heterogeneous data in digital formats. The simultaneous transitions towards biology as a data science and towards a ‘life cycle’ view of research data pose new challenges. Researchers face a bewildering landscape of data management requirements, recommendations and regulations, without necessarily being able to access data management training or possessing a clear understanding of practical approaches that can assist in data management in their particular research domain. Here we provide an overview of best practice data life cycle approaches for researchers in the life sciences/bioinformatics space with a particular focus on ‘omics’ datasets and computer-based data processing and analysis. We discuss the different stages of the data life cycle and provide practical suggestions for useful tools and resources to improve data management practices.
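As a small, practical illustration of one stage of the data life cycle discussed above (not drawn from the article), the following sketch records file checksums and basic metadata before archiving, so that raw ‘omics’ files can later be verified; the directory path and manifest fields are assumptions.

```python
import hashlib
import json
from datetime import date
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Compute a SHA-256 checksum so archived files can be verified later."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path) -> list[dict]:
    """Minimal per-file metadata captured at the archiving stage."""
    return [
        {
            "file": str(p.relative_to(data_dir)),
            "sha256": sha256sum(p),
            "size_bytes": p.stat().st_size,
            "recorded": date.today().isoformat(),
        }
        for p in sorted(data_dir.rglob("*")) if p.is_file()
    ]

if __name__ == "__main__":
    manifest = build_manifest(Path("raw_sequencing_runs"))   # hypothetical directory
    Path("MANIFEST.json").write_text(json.dumps(manifest, indent=2))
```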


2019 ◽  
Author(s):  
Heather Andrews ◽  
Marta Teperek ◽  
Jasper van Dijck ◽  
Kees den Heijer ◽  
Robbert Eggermont ◽  
...  

The Data Stewardship project is a new initiative of the Delft University of Technology (TU Delft) in the Netherlands. Its aim is to create mature working practices and policies regarding research data management across all TU Delft faculties. The novelty of this project lies in having a dedicated person, the so-called ‘Data Steward’, embedded in each faculty to approach research data management from a more discipline-specific perspective. It is within this framework that a research data management survey was carried out at the faculties that had a Data Steward in place by July 2018. The goal was to get an overview of general data management practices and to use the results as a benchmark for the project. The response rate ranged from 11% to 37%, depending on the faculty. Overall, the results show similar trends across faculties and indicate a lack of awareness of different data management topics such as automatic data backups, data ownership, the relevance of data management plans, the FAIR data principles, and the use of research data repositories. The results also show great interest in data management: roughly 80% or more of the respondents in each faculty claimed to be interested in data management training and wished to see a summary of the survey results. Thus, the survey helped identify the topics the Data Stewardship project is currently focusing on through awareness campaigns and training at both university and faculty levels.


2020 ◽  
Vol 36 (3) ◽  
pp. 281-299
Author(s):  
Stefka Tzanova

In this paper we study the changes in academic library services inspired by the Open Science movement, and especially the changes prompted by Open Data as a founding part of Open Science. We argue that academic libraries face even bigger challenges in accommodating and supporting Open Big Data, composed of existing raw data sets and new massive sets generated by data-driven research. Ensuring the veracity of Open Big Data is a complex problem dominated by data science. For academic libraries, that challenge triggers not only the expansion of traditional library services, but also the adoption of a set of new roles and responsibilities. These include, but are not limited to, developing models to support Research Data Management, providing Data Management Plan assistance, expanding the qualifications of library personnel toward data science literacy, and integrating library services into the research and educational process by taking part in research grants, among others. We outline several approaches taken by some academic libraries and by the libraries of the City University of New York (CUNY) to meet the needs imposed by research and education with Open Big Data – from changes in libraries’ administrative structure, changes in personnel qualifications and duties, and leading interdisciplinary advisory groups, to active collaboration in principal projects.


2020 ◽  
Author(s):  
Massimo Cocco ◽  
Daniele Bailo ◽  
Keith G. Jeffery ◽  
Rossana Paciello ◽  
Valerio Vinciarelli ◽  
...  

Interoperability has long been an objective for research infrastructures dealing with research data, to foster open access and open science. More recently, the FAIR principles (Findability, Accessibility, Interoperability and Reusability) have been proposed. The FAIR principles are now reference criteria for promoting and evaluating the openness of scientific data, and FAIRness is considered a necessary target for research infrastructures in different scientific domains at the European and global level.

Solid Earth RIs have long been committed to engaging the scientific communities involved in data collection, standardization and quality management, as well as to providing metadata and services for qualification, storage and accessibility. They are working to adopt the FAIR principles, thus addressing the onerous task of turning these principles into practices. To make the FAIR principles a reality in terms of service provision for data stewardship, some RI implementers in EPOS have proposed a FAIR-adoption process that leverages a four-stage roadmap, reorganizing the FAIR principles to better fit the mindset of scientists and RI implementers. The roadmap treats the FAIR principles as requirements in the software development life cycle and reorganizes them into data, metadata, access services and use services. Both the implementation and the assessment of the level of FAIRness, by means of a questionnaire and metrics, are thereby made simpler and closer to scientists’ day-to-day work.

FAIR data and service management is demanding: it requires resources and skills and, more importantly, sustainable IT resources. For this reason, FAIR data management is challenging for many Research Infrastructures and data providers trying to turn the FAIR principles into reality through viable and sustainable practices. FAIR data management also includes implementing services to access data as well as to visualize, process, analyse and model them in order to generate new scientific products and discoveries.

FAIR data management is challenging for Earth scientists because it depends on their perception of finding, accessing and using data and scientific products: in other words, the perception of data sharing. The sustainability of FAIR data and service management is not limited to financial sustainability and funding; it also includes legal, governance and technical issues that concern the scientific communities.

In this contribution, we present and discuss some of the main challenges that need to be tackled urgently in order to run and operate FAIR data services in the long term, as also envisaged by the European Open Science Cloud initiative: a) sustainability of the IT solutions and resources that support practices for FAIR data management (i.e., PID usage and preservation, including the costs of operating the associated IT services); b) reusability, which on the one hand requires clear and tested methods for managing heterogeneous metadata and provenance, and on the other hand can be considered a frontier research field; c) FAIR service provision, which raises many open questions about applying the FAIR principles to services for data stewardship and to services that create data products from FAIR raw data, for which it is not yet clear how the FAIR compliance of the resulting data products can be guaranteed.
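A hypothetical sketch (not taken from the EPOS roadmap itself) of how questionnaire answers grouped into the four categories mentioned above – data, metadata, access services and use services – might be turned into simple FAIRness metrics; the questionnaire items and scoring rule are assumptions.

```python
# Hypothetical FAIRness self-assessment: each category holds yes/no answers
# to questionnaire items; the score is the fraction answered "yes".
questionnaire = {
    "data":            {"uses open formats": True,  "has licence attached": False},
    "metadata":        {"has persistent identifier": True, "uses standard schema": True},
    "access_services": {"data retrievable via standard protocol": True},
    "use_services":    {"provenance recorded for derived products": False},
}

def category_scores(answers: dict) -> dict:
    """Per-category FAIRness score in [0, 1]."""
    return {
        category: sum(items.values()) / len(items)
        for category, items in answers.items()
    }

scores = category_scores(questionnaire)
for category, score in scores.items():
    print(f"{category}: {score:.0%}")
print(f"overall: {sum(scores.values()) / len(scores):.0%}")
```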


F1000Research ◽  
2018 ◽  
Vol 6 ◽  
pp. 1618 ◽  
Author(s):  
Philippa C. Griffin ◽  
Jyoti Khadake ◽  
Kate S. LeMay ◽  
Suzanna E. Lewis ◽  
Sandra Orchard ◽  
...  

Throughout history, the life sciences have been revolutionised by technological advances; in our era this is manifested by advances in instrumentation for data generation, and consequently researchers now routinely handle large amounts of heterogeneous data in digital formats. The simultaneous transitions towards biology as a data science and towards a ‘life cycle’ view of research data pose new challenges. Researchers face a bewildering landscape of data management requirements, recommendations and regulations, without necessarily being able to access data management training or possessing a clear understanding of practical approaches that can assist in data management in their particular research domain. Here we provide an overview of best practice data life cycle approaches for researchers in the life sciences/bioinformatics space with a particular focus on ‘omics’ datasets and computer-based data processing and analysis. We discuss the different stages of the data life cycle and provide practical suggestions for useful tools and resources to improve data management practices.

