The Principles of Data Reuse in Research Infrastructures

Author(s):  
Nikolay Skvortsov

The principles known by the FAIR abbreviation have been applied to different kinds of data management technologies to support data reuse. In particular, they are important for research and development in research infrastructures, but they are applied in significantly different ways. These principles are recognized as promising because, according to them, data in the context of reuse should be readable and actionable by both humans and machines. This paper presents a review of solutions for data interoperability and reuse in research infrastructures. It shows that conceptual modeling based on formal domain specifications still has good potential for data reuse in research infrastructures: it makes it possible to relate data, methods, and other resources semantically, to classify and identify them in the domain, and to integrate data and verify the correctness of their reuse. Infrastructures based on formal domain modeling can make heterogeneous data management and research significantly more effective and automated.
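To make the idea of relating data and methods through formal domain specifications more concrete, here is a minimal sketch (not taken from the paper) using RDF as the specification language; the namespace, class, and property names are hypothetical.

```python
# Minimal sketch: relate a dataset and a method through a formal domain
# specification, then query it in a machine-actionable way.
# The namespace, class names, and property names are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

DOM = Namespace("http://example.org/domain#")   # hypothetical domain ontology
g = Graph()
g.bind("dom", DOM)

# Domain concepts: a measurement type and a method that consumes it.
g.add((DOM.SeaSurfaceTemperature, RDF.type, RDFS.Class))
g.add((DOM.TrendAnalysis, RDF.type, DOM.Method))
g.add((DOM.TrendAnalysis, DOM.acceptsInput, DOM.SeaSurfaceTemperature))

# A concrete dataset classified under the domain concept, with FAIR-style
# identification and access metadata (values are placeholders).
g.add((DOM.dataset42, RDF.type, DOM.SeaSurfaceTemperature))
g.add((DOM.dataset42, DOM.identifier, Literal("example-persistent-identifier")))
g.add((DOM.dataset42, DOM.accessURL, Literal("https://example.org/data/42")))

# Machine-actionable reuse: find datasets that can be fed to a given method.
q = """
PREFIX dom: <http://example.org/domain#>
SELECT ?dataset WHERE {
  dom:TrendAnalysis dom:acceptsInput ?concept .
  ?dataset a ?concept .
}
"""
for row in g.query(q):
    print(row.dataset)
```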

2007, Vol 40 (4), pp. 2070
Author(s):  
S. Vassilopoulou ◽  
K. Chousianitis ◽  
V. Sakkas ◽  
B. Damiata ◽  
E. Lagios

The present study is concerned with the management of multi-thematic geo-data of Cephallonia Island related to crustal deformation. A large amount of heterogeneous data (vector, raster, and ASCII files) involving geology, tectonics, topography, geomorphology, and DGPS measurements was compiled. Crustal deformation was studied using a GPS network consisting of 23 stations. The network was installed and measured in October 2001, re-measured during September 2003 following the Lefkas earthquake of August 2003 (Mw=6.2), and measured again in July 2006. Through appropriate spatial analysis, a large number of thematic and synthetic layers and maps were produced. In parallel, a GIS database was organized to allow conclusions on specific questions to be drawn easily.


2015, Vol 10 (1), pp. 260-267
Author(s):  
Kevin Read ◽  
Jessica Athens ◽  
Ian Lamb ◽  
Joey Nicholson ◽  
Sushan Chin ◽  
...  

The Department of Population Health (DPH) at an academic medical center identified a need to facilitate research using large, externally funded datasets. Barriers identified included difficulty in accessing and working with the datasets, and a lack of knowledge about institutional licenses. A need to facilitate sharing and reuse of datasets generated by researchers at the institution (internal datasets) was also recognized. The library partnered with a researcher in the DPH to create a catalog of external datasets that provided detailed metadata and access instructions. The catalog listed researchers at the medical center and the main campus with expertise in using these external datasets, in order to facilitate research and cross-campus collaboration. Data description standards were reviewed to create a set of metadata facilitating access both to the externally generated datasets and to the internally generated datasets that would constitute the next phase of the catalog's development. Interviews with a range of investigators at the institution identified DPH researchers as the group most interested in data sharing; therefore, targeted outreach to this group was undertaken. Initial outreach resulted in additional external datasets being described, new local experts volunteering, proposals for additional functionality, and interest from researchers in including their internal datasets in the catalog. Despite limited outreach, the catalog has had ~250 unique page views in the three months since it went live. The establishment of the catalog also led to partnerships with the medical center’s data management core and the main university library. The Data Catalog in its present state serves a direct user need from the Department of Population Health to describe large, externally funded datasets. The library will use this initial strong community of users to expand the catalog to include internally generated research datasets. Future expansion plans include working with DataCore and the main university library.
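As an illustration of the kind of record such a catalog might hold, here is a minimal sketch; the field names and the example entry are illustrative assumptions, not the catalog's actual schema.

```python
# Illustrative sketch of a dataset catalog record and a naive keyword lookup.
# Field names and the sample entry are assumptions, not the real catalog schema.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    title: str
    source: str                  # funder or producing agency
    description: str
    access_instructions: str     # license notes, request process, download steps
    local_experts: list[str] = field(default_factory=list)
    internal: bool = False       # False = externally funded, True = generated in-house

catalog = [
    DatasetRecord(
        title="Example national health survey",
        source="External agency",
        description="Annual survey on health status and access to care.",
        access_instructions="Public use files; see institutional license notes.",
        local_experts=["Researcher A (DPH)"],
    ),
]

def find_by_keyword(records, keyword):
    """Naive full-text match over title and description."""
    kw = keyword.lower()
    return [r for r in records if kw in r.title.lower() or kw in r.description.lower()]

print([r.title for r in find_by_keyword(catalog, "health")])
```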


Author(s):  
Katarina Grolinger ◽  
Emna Mezghani ◽  
Miriam A. M. Capretz ◽  
Ernesto Exposito

Decision-making in disaster management requires information gathering, sharing, and integration through collaboration on a global scale and across governments, industries, and communities. Large volumes of heterogeneous data are available; however, current data management solutions offer few or no integration capabilities and limited potential for collaboration. At the same time, recent advances in NoSQL, cloud computing, and Big Data open the door for new solutions in disaster data management. This chapter presents a Knowledge as a Service (KaaS) framework for disaster cloud data management (Disaster-CDM), with the objectives of facilitating information gathering and sharing, storing large amounts of disaster-related data, facilitating search, and supporting interoperability and integration. In the Disaster-CDM approach, NoSQL data stores provide storage reliability and scalability, while a service-oriented architecture achieves flexibility and extensibility. The contribution of Disaster-CDM is demonstrated through its integration capabilities, using full-text search and querying services as examples.
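A toy sketch of the service split described above follows: an in-memory dictionary stands in for a NoSQL document store behind a storage service, with a separate full-text search service on top. The class and method names are illustrative, not the Disaster-CDM API.

```python
# Toy sketch of a storage service over a document store plus a separate
# full-text search service. Names are illustrative, not the Disaster-CDM API.
import uuid

class DocumentStoreService:
    """Storage service: persists heterogeneous disaster-related documents."""
    def __init__(self):
        self._docs = {}                      # stand-in for a NoSQL collection

    def put(self, doc: dict) -> str:
        doc_id = str(uuid.uuid4())
        self._docs[doc_id] = doc
        return doc_id

    def get(self, doc_id: str) -> dict:
        return self._docs[doc_id]

    def all(self):
        return self._docs.items()

class FullTextSearchService:
    """Search service: queries documents by keyword across all text fields."""
    def __init__(self, store: DocumentStoreService):
        self._store = store

    def search(self, keyword: str):
        kw = keyword.lower()
        return [
            doc_id for doc_id, doc in self._store.all()
            if any(kw in str(v).lower() for v in doc.values())
        ]

store = DocumentStoreService()
store.put({"type": "situation_report", "text": "Bridge on Route 9 flooded."})
store.put({"type": "building_plan", "text": "Hospital floor plan, east wing."})

search = FullTextSearchService(store)
print(search.search("flooded"))   # -> [id of the situation report]
```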


Atoms, 2020, Vol 8 (4), pp. 76
Author(s):  
Damien Albert ◽  
Bobby K. Antony ◽  
Yaye Awa Ba ◽  
Yuri L. Babikov ◽  
Philippe Bollard ◽  
...  

This paper presents an overview of the current status of the Virtual Atomic and Molecular Data Centre (VAMDC) e-infrastructure, including the current status of the VAMDC-connected (or to be connected) databases, updates on the latest technological development within the infrastructure and a presentation of some application tools that make use of the VAMDC e-infrastructure. We analyse the past 10 years of VAMDC development and operation, and assess their impact both on the field of atomic and molecular (A&M) physics itself and on heterogeneous data management in international cooperation. The highly sophisticated VAMDC infrastructure and the related databases developed over this long term are a perfect resource of sustainable data for future applications in many fields of research. However, we also discuss the current limitations that prevent VAMDC from becoming the main publishing platform and the main source of A&M data for user communities, and present possible solutions under investigation by the consortium. Several user application examples are presented, illustrating the benefits of VAMDC in current research applications, which often need the A&M data from more than one database. Finally, we present our vision for the future of VAMDC.
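As a rough illustration of the usage pattern mentioned above, the sketch below sends the same query to several A&M database nodes over HTTP and collects the results; the node URLs and query parameters are illustrative assumptions rather than the exact VAMDC-TAP interface.

```python
# Generic sketch: send one query to several database nodes and gather results.
# Node URLs and query parameters are assumptions, not the exact VAMDC-TAP spec.
import requests

NODES = [                                   # hypothetical node endpoints
    "https://node-a.example.org/tap/sync",
    "https://node-b.example.org/tap/sync",
]
QUERY = "SELECT * WHERE AtomSymbol = 'Fe'"  # illustrative query string

def fetch_all(nodes, query, timeout=30):
    results = {}
    for node in nodes:
        try:
            resp = requests.get(node, params={"QUERY": query}, timeout=timeout)
            resp.raise_for_status()
            results[node] = resp.text        # e.g. an XML document with A&M data
        except requests.RequestException as exc:
            results[node] = f"request failed: {exc}"
    return results

for node, payload in fetch_all(NODES, QUERY).items():
    print(node, "->", payload[:80])
```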


2014, Vol 111, pp. S120
Author(s):  
O. Diesenbacher ◽  
M. Memelink ◽  
F. Sedlmayer ◽  
H. Deutschmann ◽  
P. Steininger

2021
Author(s):  
AISDL

The Internet of Things (IoT) infrastructure forms a gigantic network of interconnected and interacting devices. This infrastructure involves a new generation of service delivery models, more advanced data management and policy schemes, sophisticated data analytics tools, and effective decision-making applications. IoT technology brings automation to a new level, in which nodes can communicate and make autonomous decisions without human intervention. IoT-enabled solutions generate and process enormous volumes of heterogeneous data exchanged among billions of nodes, which results in Big Data congestion, data management and storage issues, and various inefficiencies. Fog computing aims to solve these data management issues by placing intelligent computational components and storage closer to the data sources.
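A minimal sketch of that last idea follows: a fog node close to the sensors aggregates raw readings locally and forwards only compact summaries upstream, instead of streaming every reading to the cloud. All names are illustrative.

```python
# Minimal sketch of fog-style pre-aggregation near the data sources.
# Raw readings are buffered locally; only summaries go upstream to the cloud.
from statistics import mean

class FogNode:
    def __init__(self, window_size: int, cloud_uplink):
        self.window_size = window_size
        self.cloud_uplink = cloud_uplink    # callable taking a summary dict
        self._buffer = []

    def on_reading(self, sensor_id: str, value: float):
        """Called for every raw sensor reading; summaries are sent upstream."""
        self._buffer.append((sensor_id, value))
        if len(self._buffer) >= self.window_size:
            values = [v for _, v in self._buffer]
            summary = {
                "count": len(values),
                "mean": mean(values),
                "min": min(values),
                "max": max(values),
            }
            self.cloud_uplink(summary)      # one message instead of many
            self._buffer.clear()

node = FogNode(window_size=5, cloud_uplink=lambda s: print("to cloud:", s))
for i in range(10):
    node.on_reading("temp-sensor-1", 20.0 + i * 0.1)
```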


2018, Vol 1, pp. 1-5
Author(s):  
Dariusz Gotlib ◽  
Robert Olszewski

Nowadays almost every map is a component of an information system. The design and production of maps requires the use of specific rules for modeling information systems: conceptual, application, and data modeling. While analyzing the various stages of cartographic modeling, the authors ask at what stage of this process a map actually comes into being. Can we say that the “life of the map” begins even before someone defines its form of presentation? This question is particularly important at a time when the number of new geoinformation products is growing exponentially. In analyzing the theory of cartography and the relations of the discipline to other fields of knowledge, an attempt has been made to define a few properties of cartographic modeling which distinguish this process from other methods of spatial modeling. Assuming that the map is a model of reality (created in the process of cartographic modeling supported by domain modeling), the article proposes an analogy between the process of cartographic modeling and the scheme of conceptual modeling presented in the ISO 19101 standard.

