High-quality science requires high-quality open data infrastructure

2018 ◽  
Vol 5 (1) ◽  
Author(s):  
Susanna-Assunta Sansone ◽  
Patricia Cruse ◽  
Mark Thorley


2021 ◽  
Author(s):  
Damien Graux ◽  
Sina Mahmoodi

The growing web of data warrants better data management strategies. Data silos are single points of failure, and they face availability problems that lead to broken links. Furthermore, the dynamic nature of some datasets increases the need for a versioning scheme. In this work, we propose a novel architecture for a linked open data infrastructure, built on open decentralized technologies. IPFS is used for storage and retrieval of data, and the public Ethereum blockchain is used for naming, versioning and storing metadata of datasets. We furthermore exploit two mechanisms for maintaining a collection of relevant, high-quality datasets in a distributed manner in which participants are incentivized. The platform is shown to have a low barrier to entry and to be censorship-resistant, and it benefits from the fault tolerance of its underlying technologies. Finally, we validate the approach by implementing our solution.
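The naming and versioning scheme can be pictured as a registry that maps a human-readable dataset name to an append-only history of IPFS content identifiers (CIDs) plus metadata. Below is a minimal Python sketch of that data structure, an illustration of the idea only rather than the authors' Ethereum contract; the dataset name and CID are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DatasetVersion:
    cid: str        # IPFS content identifier of one dataset snapshot
    metadata: dict  # descriptive metadata stored alongside the CID

@dataclass
class DatasetRegistry:
    # In-memory stand-in for the on-chain name/version registry.
    records: Dict[str, List[DatasetVersion]] = field(default_factory=dict)

    def publish(self, name: str, cid: str, metadata: dict) -> int:
        # Append a new immutable version under a human-readable name.
        versions = self.records.setdefault(name, [])
        versions.append(DatasetVersion(cid=cid, metadata=metadata))
        return len(versions)  # 1-based version number

    def resolve(self, name: str, version: int = 0) -> DatasetVersion:
        # Resolve a name to a CID; version 0 means "latest".
        history = self.records[name]
        return history[-1] if version == 0 else history[version - 1]

registry = DatasetRegistry()
registry.publish("example-lod-dataset",                      # hypothetical name
                 cid="QmExampleCidOfFirstSnapshot",          # hypothetical CID
                 metadata={"license": "CC0", "triples": 120000})
print(registry.resolve("example-lod-dataset").cid)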


2015 ◽  
Vol 28 (2) ◽  
pp. 30-35 ◽  
Author(s):  
Juan Bicarregui ◽  
Brian Matthews ◽  
Frank Schluenzen

Author(s):  
T. Kliment ◽  
V. Cetl ◽  
H. Tomič ◽  
J. Lisiak ◽  
M. Kliment

Nowadays, the availability of authoritative geospatial features of various data themes is becoming wider at global, regional and national levels. The reason is the existence of legislative frameworks for public sector information and related spatial data infrastructure implementations, and the emergence of support for initiatives such as open data and big data, which ensure that online geospatial information is made available to the digital single market, entrepreneurs and public bodies at both the national and local level. However, authoritative reference spatial data linking the geographic representation of properties to their owners are still missing at an appropriate level of quantity and quality, even though these data represent a fundamental input for local governments with regard to the register of buildings used for property tax calculations, the identification of illegal buildings, etc. We propose a methodology to improve this situation by applying the principles of participatory GIS and volunteered geographic information (VGI) to collect observations, update authoritative datasets, and verify newly developed datasets of building areas used to calculate the property tax rates issued to their owners. The case study was performed within the district of the City of Požega in eastern Croatia in the summer of 2015 and resulted in a total of 16,072 updated and newly identified objects, made available online for quality verification by citizens using open-source geospatial technologies.
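Since the verified building footprints feed directly into property tax calculation, the core computation is simply footprint area multiplied by a municipal rate. A minimal Python sketch follows; the polygon coordinates and the rate per square metre are invented for illustration, not taken from the case study.

def polygon_area(vertices):
    # Planar area of a simple polygon via the shoelace formula; coordinates in metres.
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical footprint of one updated building object (projected coordinates, metres).
footprint = [(0.0, 0.0), (12.0, 0.0), (12.0, 8.0), (0.0, 8.0)]
rate_per_m2 = 1.5  # assumed tax rate in currency units per square metre

area_m2 = polygon_area(footprint)   # 96.0 m2
annual_tax = area_m2 * rate_per_m2  # 144.0 currency units
print(f"area = {area_m2:.1f} m2, tax = {annual_tax:.2f}")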


2020 ◽  
Vol 6 ◽  
Author(s):  
Christoph Steinbeck ◽  
Oliver Koepler ◽  
Felix Bach ◽  
Sonja Herres-Pawlis ◽  
Nicole Jung ◽  
...  

The vision of NFDI4Chem is the digitalisation of all key steps in chemical research to support scientists in their efforts to collect, store, process, analyse, disclose and re-use research data. Measures to promote Open Science and Research Data Management (RDM) in agreement with the FAIR data principles are fundamental aims of NFDI4Chem, serving the chemistry community with a holistic concept for access to research data. To this end, the overarching objective is the development and maintenance of a national research data infrastructure for the research domain of chemistry in Germany, and to enable innovative and easy-to-use services and novel scientific approaches based on the re-use of research data. NFDI4Chem intends to represent all disciplines of chemistry in academia and aims to collaborate closely with thematically related consortia. In the initial phase, NFDI4Chem focuses on data related to molecules and reactions, including data for their experimental and theoretical characterisation. This overarching goal is achieved by working towards a number of key objectives:

Key Objective 1: Establish a virtual environment of federated repositories for storing, disclosing, searching and re-using research data across distributed data sources. Connect existing data repositories and, based on a requirements analysis, establish domain-specific research data repositories for the national research community, and link them to international repositories.

Key Objective 2: Initiate international community processes to establish minimum information (MI) standards for data and machine-readable metadata as well as open data standards in key areas of chemistry. Identify and recommend open data standards in order to support the FAIR principles for research data and, where standards are lacking, develop them.

Key Objective 3: Foster cultural and digital change towards Smart Laboratory Environments by promoting the use of digital tools in all stages of research, and promote subsequent Research Data Management (RDM) at all levels of academia, beginning in undergraduate curricula.

Key Objective 4: Engage with the chemistry community in Germany through a wide range of measures to create awareness of, and foster the adoption of, FAIR data management. Initiate processes to integrate RDM and data science into curricula, and offer a wide range of training opportunities for researchers.

Key Objective 5: Explore synergies with other consortia and promote cross-cutting development within the NFDI.

Key Objective 6: Provide a legally reliable framework of policies and guidelines for FAIR and open RDM.
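The machine-readable, standard-compliant metadata called for in Key Objective 2 can be illustrated with a minimal record for a deposited molecular dataset. The field names below are an assumption loosely modelled on common repository metadata, not an NFDI4Chem or MI-standard schema, and the DOI is a placeholder.

import json

record = {
    "identifier": "doi:10.xxxx/example-dataset",   # placeholder DOI, not a real deposit
    "title": "Example NMR dataset for a small molecule",
    "creators": ["Doe, Jane"],
    "license": "CC-BY-4.0",
    "molecule": {
        "name": "water",
        "inchi": "InChI=1S/H2O/h1H2",  # open, machine-readable structure identifier
    },
    "measurement": {"technique": "1H NMR", "instrument": "400 MHz spectrometer"},
}

print(json.dumps(record, indent=2))  # serialised form a repository could index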


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Beata Orlecka-Sikora ◽  
Stanisław Lasocki ◽  
Joanna Kocot ◽  
Tomasz Szepieniec ◽  
Jean Robert Grasso ◽  
...  

2020 ◽  
Author(s):  
Mohan Ramamurthy

The geoscience disciplines are either gathering or generating data in ever-increasing volumes. To ensure that the science community and society reap the utmost benefits in research and societal applications from such rich and diverse data resources, there is a growing interest in broad-scale, open data sharing to foster myriad scientific endeavors. However, open access to data is not sufficient; research outputs must be reusable and reproducible to accelerate scientific discovery and catalyze innovation.

As part of its mission, Unidata, a geoscience cyberinfrastructure facility, has been developing and deploying data infrastructure and data-proximate scientific workflows and analysis tools using cloud computing technologies for accessing, analyzing, and visualizing geoscience data.

Specifically, Unidata has developed techniques that combine robust access to well-documented datasets with easy-to-use tools, using workflow technologies. In addition to fostering the adoption of technologies like pre-configured virtual machines through Docker containers and Jupyter notebooks, other computational and analytic methods are enabled via “Software as a Service” and “Data as a Service” techniques with the deployment of the Cloud IDV, AWIPS servers, and the THREDDS Data Server in the cloud. The collective impact of these services and tools is to enable scientists to use the Unidata Science Gateway capabilities not only to conduct their research but also to share and collaborate with other researchers, advancing the intertwined goals of Reproducibility of Science and Open Science and, in the process, truly enabling “Science as a Service”.

Unidata has implemented the aforementioned services on the Unidata Science Gateway (http://science-gateway.unidata.ucar.edu), which is hosted on the Jetstream cloud, a cloud-computing facility funded by the U.S. National Science Foundation. The aim is to give geoscientists an ecosystem that includes data, tools, models, workflows, and workspaces for collaboration and sharing of resources.

In this presentation, we will discuss our work to date in developing the Unidata Science Gateway and the services hosted therein, as well as our future directions in response to the increasing expectation from funders and scientific communities that research outputs will be Open and FAIR (Findable, Accessible, Interoperable, Reusable). In particular, we will discuss how Unidata is advancing data and software transparency, open science, and reproducible research. We will share our experiences of how the geoscience and information science communities are using the data, tools and services provided through the Unidata Science Gateway to advance research and education in the geosciences.
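As one concrete illustration of the data-proximate access a THREDDS Data Server enables, a remote dataset can be opened lazily over OPeNDAP with xarray. The catalogue URL and variable name below are placeholders, not a specific Unidata endpoint.

import xarray as xr

# Hypothetical OPeNDAP endpoint exposed by a THREDDS Data Server;
# replace with a real dataset URL taken from a TDS catalogue.
url = "https://tds.example.edu/thredds/dodsC/grib/sample-forecast"

# xarray opens the dataset lazily: only metadata is fetched up front,
# and data values are transferred when a subset is actually computed.
ds = xr.open_dataset(url)
subset = ds["temperature"].sel(time="2020-01-01")  # assumes a 'temperature' variable
print(subset.mean().values)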


Author(s):  
Anatoliy Lyashchenko ◽  
Yuriy Karpinskyi ◽  
Yevheniy Havryliuk ◽  
Andriy Cherin

Interoperability is one of the key characteristics of the national geospatial data infrastructure (NSDI), on which the effectiveness of the interaction between holders, producers and users of geospatial data in the network of geoportals depends. The article substantiates the methods and means of achieving a high level of interoperability of the components of the NSDI of Ukraine on the basis of ensuring the consistency of geospatial data supplied by different data producers, the standardization of metadata, and the interfaces of geoinformation services. It is established that the foundations of the legislative and organizational levels of interoperability are defined in the Law of Ukraine "On the National Geospatial Data Infrastructure" and in the "Procedure for the Operation of the NSDI". To ensure the interoperability of the components of the NSDI of Ukraine at the semantic and technical levels, it is necessary to develop a set of technical regulations that define common requirements for the composition and structure of metadata, the interfaces and functions of geographic information services, the compatibility of geospatial data sets, classification systems, the coding and unique identification of geospatial objects, and open data exchange formats. These technical regulations should be based on consistent and comprehensive implementation of the methodology of the basic international standards of the ISO 19100 Geographic Information/Geomatics series, whose effectiveness has been confirmed by the successful implementation of NSDIs in many countries of the world.
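One concrete expression of such standardised service interfaces is the OGC Web Map Service (WMS) GetCapabilities request, through which a geoportal advertises its layers and metadata in machine-readable form. In the sketch below the geoportal URL is hypothetical; the query parameters and XML namespace follow the standard WMS 1.3.0 interface.

import requests
import xml.etree.ElementTree as ET

endpoint = "https://geoportal.example.ua/wms"  # hypothetical NSDI geoportal endpoint
params = {"SERVICE": "WMS", "REQUEST": "GetCapabilities", "VERSION": "1.3.0"}

response = requests.get(endpoint, params=params, timeout=30)
root = ET.fromstring(response.content)

# List the layer names the service advertises (namespace handling kept minimal).
ns = {"wms": "http://www.opengis.net/wms"}
for layer in root.findall(".//wms:Layer/wms:Name", ns):
    print(layer.text)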


Author(s):  
Mariya Dimitrova ◽  
Raïssa Meyer ◽  
Pier Luigi Buttigieg ◽  
Teodor Georgiev ◽  
Georgi Zhelezov ◽  
...  

Data papers have emerged as a powerful instrument for open data publishing, for obtaining credit, and for establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and re-usability. We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivise researchers to publish high-quality datasets, we created a workflow for the streamlined import of omics metadata directly into a data paper manuscript. An omics data paper template was designed by defining key article sections which encourage the description of omics datasets and methodologies. The workflow was based on REpresentational State Transfer (REST) services and XPath to extract information from the European Nucleotide Archive, ArrayExpress and BioSamples databases, which follow community-agreed standards. The workflow for automatic import of standard-compliant metadata into an omics data paper manuscript facilitates the authoring process and demonstrates the importance and potential of creating machine-readable and standard-compliant metadata. The omics data paper structure, together with the workflow to import omics metadata, improves the data publishing landscape by providing a novel mechanism for creating high-quality, enhanced metadata records and for peer reviewing and publishing them. It constitutes a powerful addition for the distribution, visibility, reproducibility and re-usability of scientific data. We hope that streamlined metadata re-use for scholarly publishing encourages authors to improve the quality of their metadata to achieve a truly FAIR data world.
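The metadata retrieval step can be pictured as a REST call to an archive followed by XPath extraction of the fields that pre-fill the manuscript template. The sketch below assumes the ENA browser XML endpoint and uses an invented accession; the element names are illustrative, and the actual workflow targets further fields and the ArrayExpress and BioSamples databases as well.

import requests
from lxml import etree

accession = "PRJEB00000"  # hypothetical study accession
url = f"https://www.ebi.ac.uk/ena/browser/api/xml/{accession}"  # assumed endpoint layout

xml = etree.fromstring(requests.get(url, timeout=30).content)

# Pull the metadata fields that would pre-fill the data paper template.
title = xml.xpath("string(//PROJECT/TITLE)")
description = xml.xpath("string(//PROJECT/DESCRIPTION)")

print({"Title": title, "Description": description})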


ABI-Technik ◽  
2020 ◽  
Vol 40 (2) ◽  
pp. 158-168
Author(s):  
Aline Le Provost ◽  
Yann Nicolas

Authority data has always been at the core of library catalogues. Today, authority data is reference data on a wider scale. The former authorities of the “Sudoc” union catalogue have evolved into “IdRef”, a read/write platform of open data and services which seeks to become a national supplier of reliable identifiers for French universities. To support their dissemination and comply with high quality standards, Paprika and Qualinka have been added to our toolbox to expedite the large-scale and secure linking of scientific publications to IdRef authorities.


Author(s):  
Lorenzo Amato ◽  
Dimitri Dello Buono ◽  
Francesco Izzi ◽  
Giuseppe La Scaleia ◽  
Donato Maio

H.E.L.P. is an early warning dashboard system built for the prevention, mitigation and assessment of disasters, be they earthquakes, fires, or meteorological systems. It was built to be easily manageable, customizable and accessible to all users, to facilitate humanitarian and governmental response. In essence, it is an emergency preparedness web tool which can be used for decision making, enabling a better level of mitigation and response at any level.

Risks and disasters are not events under our control; rather, they are situations that we can manage better with a framework based on preparedness. Earlier and more precise monitoring of hazards allows a faster response to manage and mitigate a disaster’s impact on society, the economy and the environment.

This is exactly what HELP offers: it plays a main role in the cycle of early warning and risk (Preparedness, Risk, Mitigation, and Resilience). It provides information in real time on events and hazards, making it possible to analyze the situation and find a solution that protects the most lives and has the least economic impact. As a tool, it also provides the opportunity to respond to a hazard with resilience in mind; this means that not only does HELP prepare for and mitigate events, it can also be used to implement better organizational methods for future events, thus minimizing overall risk and providing people with the means to take better care of themselves, lessening the effects of future hazards each and every time. HELP is a tool in a framework created to support governments in their efforts to protect their people, building their response efficiency and resilience.

HELP (under the name E.W.A.R.E., Early Warning and Awareness of Risks and Emergencies) was born when WFP (the World Food Programme) and IMAA-CNR (the Institute of Methodologies for Environmental Analysis of the National Research Council of Italy) entered into a cooperation agreement concerning the development of a geospatial data infrastructure system for the Palestinian Civil Defense, with the aim of building an enhanced preparedness capacity in Palestine.

HELP follows a simple and flexible but very effective logic to perform early warning: watch open data sources on risk themes (NASA satellite data, weather forecasts, worldwide seismic networks, etc.); apply (programmable) “intelligence” to detect critical situations, exceeded thresholds, the population potentially affected by events, etc.; highlight critical elements on the map; and send alerts to emergency managers.
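The watch / detect / highlight / alert logic listed above can be sketched as a simple threshold loop. The feeds, thresholds and alert channel below are invented placeholders, not HELP's actual configuration.

from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Observation:
    source: str     # e.g. a seismic network or a satellite feed
    kind: str       # hazard theme: "earthquake", "fire", "weather"
    value: float    # measured magnitude or intensity
    location: str

# Hypothetical thresholds per hazard theme (the programmable "intelligence" step).
THRESHOLDS = {"earthquake": 4.5, "fire": 0.8, "weather": 100.0}

def detect_critical(observations: Iterable[Observation]) -> List[Observation]:
    # Keep only observations that exceed their theme's threshold.
    return [o for o in observations if o.value >= THRESHOLDS.get(o.kind, float("inf"))]

def send_alert(obs: Observation) -> None:
    # Placeholder alert channel; a real system would notify emergency managers.
    print(f"ALERT [{obs.kind}] {obs.value} at {obs.location} (source: {obs.source})")

feed = [
    Observation("seismic-network", "earthquake", 5.1, "region A"),
    Observation("satellite-fire-index", "fire", 0.3, "region B"),
]

for event in detect_critical(feed):
    send_alert(event)  # highlighting on the map would accompany this step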

