data interoperability
Recently Published Documents

TOTAL DOCUMENTS: 380 (FIVE YEARS: 155)
H-INDEX: 16 (FIVE YEARS: 6)

2022 ◽  
Vol 16 (1) ◽  
pp. 10
Author(s):  
Bradley Wade Bishop ◽  
Carolyn F Hank ◽  
Joel T Webster

   This paper assesses data consumers' perspectives on the interoperable and re-usable aspects of the FAIR Data Principles. Taking a domain-specific informatics approach, ten oceanographers were asked to think of a recent search for data and describe their process of discovery, evaluation, and use. The interview schedule, derived from the FAIR Data Principles, included questions about the interoperability and re-usability of data. Through this critical incident technique, findings on data interoperability and re-usability give data curators valuable insights into how real-world users access, evaluate, and use data. Results from this study show that oceanographers use tools that make re-use simple, and that interoperability is seamless within the systems they use. The processes employed by oceanographers present a good baseline for other domains adopting the FAIR Data Principles.


2021 ◽  
Vol 11 (24) ◽  
pp. 11978
Author(s):  
Gonçalo Amaro ◽  
Filipe Moutinho ◽  
Rogério Campos-Rebelo ◽  
Julius Köpke ◽  
Pedro Maló

As service-oriented architectures are a common solution for large distributed systems, interoperability between these often heterogeneous systems can be a challenge due to differing syntax and semantics of the exchanged messages, or even different data interchange formats. This paper addresses the data interchange format and data interoperability issues between XML-based and JSON-based systems. It proposes novel annotation mechanisms to add semantic annotations and complementary data values to JSON Schemas, enabling an interoperability approach for JSON-based systems that, until now, was only possible for XML-based systems. A set of algorithms supporting the translation from JSON Schema to XML Schema, JSON to XML, and XML to JSON is also proposed. These algorithms were implemented in an existing prototype tool, which now supports interoperability between these systems through semantic compatibility verification and the automatic generation of translators.
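
To make the translation concrete, the following minimal sketch converts a parsed JSON document into XML using only the Python standard library. The recursive strategy and the assumed wrapper tag for array items are illustrative; they are not the paper's actual algorithms.

```python
# A minimal sketch of one direction of the translation described above:
# JSON to XML. Tag choices (e.g. "item" for array members) are
# illustrative assumptions, not the authors' mapping rules.
import json
import xml.etree.ElementTree as ET

def json_to_xml(value, tag="root"):
    """Recursively map a parsed JSON value onto an XML element tree."""
    elem = ET.Element(tag)
    if isinstance(value, dict):
        for key, child in value.items():
            elem.append(json_to_xml(child, key))   # object keys become tags
    elif isinstance(value, list):
        for child in value:
            elem.append(json_to_xml(child, "item"))  # assumed wrapper tag
    else:
        elem.text = "" if value is None else str(value)
    return elem

doc = json.loads('{"sensor": {"id": 42, "values": [1.5, 2.0]}}')
print(ET.tostring(json_to_xml(doc, "message"), encoding="unicode"))
```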


2021 ◽  
Vol 9 (1) ◽  
pp. 27
Author(s):  
Roos Bakker ◽  
Romy van Drie ◽  
Cornelis Bouter ◽  
Sander van Leeuwen ◽  
Lorijn van Rooijen ◽  
...  

Modern greenhouses have systems that continuously measure the properties of the greenhouse and its crops. These measurements cannot be queried together without linking the relevant data. In this paper, we introduce the Common Greenhouse Ontology, a standard for sharing data on greenhouses and their measurable components. The ontology was created with domain experts and incorporates the existing SOSA (Sensor, Observation, Sample, and Actuator) and OM (Ontology of units of Measure) ontologies. It was evaluated using competency questions and SPARQL queries. The results of the evaluation show that the Common Greenhouse Ontology is an innovative solution for data interoperability and standardization, and an enabler of advanced data science techniques over larger databases.
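
As an illustration of the evaluation method the abstract mentions, the sketch below runs a competency-question-style SPARQL query with rdflib. The namespace, class, and property names are placeholders, not published Common Greenhouse Ontology terms.

```python
# A hedged sketch of answering a competency question with SPARQL.
# All IRIs below are placeholders, not real Common Greenhouse
# Ontology terms.
from rdflib import Graph, Literal, Namespace, RDF

CGO = Namespace("http://example.org/cgo#")  # placeholder namespace

g = Graph()
g.add((CGO.sensor1, RDF.type, CGO.TemperatureSensor))
g.add((CGO.sensor1, CGO.hasValue, Literal(21.5)))

# Competency question: "Which temperature sensors report a value?"
query = """
PREFIX cgo: <http://example.org/cgo#>
SELECT ?sensor ?value WHERE {
    ?sensor a cgo:TemperatureSensor ;
            cgo:hasValue ?value .
}
"""
for sensor, value in g.query(query):
    print(sensor, value)
```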


2021 ◽  
Vol 12 (5) ◽  
Author(s):  
Angelo Augusto Frozza ◽  
Eduardo Dias Defreyn ◽  
Ronaldo Dos Santos Mello

Although NoSQL databases do not require a schema a priori, being aware of the database schema is essential for activities like data integration, data validation, or data interoperability. This paper presents a process for the extraction of columnar NoSQL database schemas. We adopt JSON as a canonical format for data representation, and we validate the proposed process through a prototype tool that is able to extract schemas from the HBase columnar NoSQL database system. HBase was chosen as a case study because it is one of the most popular columnar NoSQL solutions. When compared to related work, we innovate by proposing a simple solution for the inference of column data types for columnar NoSQL databases that store only byte arrays as column values, and a resulting schema that follows the JSON Schema format.
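
The following sketch illustrates the general idea of inferring a column's JSON type from a raw byte-array value and emitting a JSON Schema fragment. The heuristics are assumptions for illustration, not the authors' exact inference rules.

```python
# A minimal sketch of column-type inference for a store like HBase,
# which keeps only byte arrays as column values. The guessing order
# (integer, number, boolean, string) is an illustrative assumption.
def infer_json_type(raw: bytes) -> str:
    text = raw.decode("utf-8", errors="replace")
    for caster, name in ((int, "integer"), (float, "number")):
        try:
            caster(text)
            return name
        except ValueError:
            pass
    if text.lower() in ("true", "false"):
        return "boolean"
    return "string"

# Build a JSON Schema fragment for a few hypothetical columns.
columns = {"info:age": b"42", "info:name": b"Alice", "info:score": b"3.14"}
schema = {
    "type": "object",
    "properties": {col: {"type": infer_json_type(v)} for col, v in columns.items()},
}
print(schema)
```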


2021 ◽  
Author(s):  
Ester Alba ◽  
Mar Gaitán ◽  
Arabella León ◽  
Dunia Mladenic ◽  
Janez Branek

The cultural heritage domain in general, and silk textiles in particular, is characterized by large, rich, and heterogeneous data sets. Silk heritage vocabulary comes from multiple sources that have been mixed up across time and space, which has led specialized organizations to use differing terminology to describe their artefacts. This makes data interoperability between independent catalogues very difficult. To address these issues, SILKNOW created a multilingual thesaurus related to silk textiles. It was compiled by experts in textile terminology and art historians, and computationally implemented by experts in text mining, multi-/cross-linguality, and semantic extraction from text. This paper presents the rationale behind the realization of this thesaurus.
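
For a concrete picture of what a multilingual thesaurus entry can look like, here is a minimal sketch using SKOS, the usual W3C vocabulary for thesauri, with rdflib. The concept, labels, and namespace are hypothetical, not actual SILKNOW terms.

```python
# A minimal SKOS sketch of a multilingual thesaurus entry. The
# namespace, concept, and labels are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import SKOS

SILK = Namespace("http://example.org/silk-thesaurus#")  # placeholder

g = Graph()
g.add((SILK.damask, RDF.type, SKOS.Concept))
g.add((SILK.damask, SKOS.prefLabel, Literal("damask", lang="en")))
g.add((SILK.damask, SKOS.prefLabel, Literal("damasco", lang="es")))
g.add((SILK.damask, SKOS.broader, SILK.weavingTechnique))

print(g.serialize(format="turtle"))
```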


2021 ◽  
Vol 79 (6) ◽  
pp. 897-901
Author(s):  
John R Srigley ◽  
Meagan Judge ◽  
Tim Helliwell ◽  
George G Birdsong ◽  
David W Ellis

2021 ◽  
Author(s):  
Nikolay Skvortsov

The principles known by the FAIR abbreviation (Findable, Accessible, Interoperable, Reusable) have been applied across different kinds of data management technologies to support data reuse. They are particularly important for investigation and development in research infrastructures, yet are applied there in significantly different ways. These principles are recognized as promising because, according to them, data in the context of reuse should be readable and actionable by both humans and machines. This paper reviews solutions for data interoperability and reuse in research infrastructures. It shows that conceptual modeling based on formal domain specifications still has good potential for data reuse in research infrastructures: it makes it possible to relate data, methods, and other resources semantically; to classify and identify them in the domain; and to integrate data and verify the correctness of its reuse. Infrastructures based on formal domain modeling can make heterogeneous data management and research significantly more effective and automated.


2021 ◽  
Vol 7 (4) ◽  
pp. 70
Author(s):  
David Jones ◽  
Jianyin Shao ◽  
Heidi Wallis ◽  
Cody Johansen ◽  
Kim Hart ◽  
...  

As newborn screening programs transition from paper-based data exchange toward automated, electronic methods, significant data exchange challenges must be overcome. This article outlines a data model that maps newborn screening data elements associated with patient demographic information, birthing facilities, laboratories, result reporting, and follow-up care to the LOINC, SNOMED CT, ICD-10-CM, and HL7 healthcare standards. The described framework lays the foundation for the implementation of standardized electronic data exchange across newborn screening programs, leading to greater data interoperability. The use of this model can accelerate the implementation of electronic data exchange between healthcare providers and newborn screening programs, which would ultimately improve health outcomes for all newborns and standardize data exchange across programs.
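
A hedged sketch of what such a mapping could look like in code follows; the field names and placeholder codes are illustrative assumptions, not the article's actual data model.

```python
# A hedged sketch of mapping a newborn screening data element to
# multiple code systems. The codes below are placeholders, not real
# LOINC or SNOMED CT codes; OBX is the HL7 v2 observation segment.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    loinc: str        # laboratory observation code (placeholder)
    snomed_ct: str    # clinical concept code (placeholder)
    hl7_segment: str  # where the value travels in an HL7 v2 message

element = DataElement(
    name="birth_weight",
    loinc="XXXX-X",        # placeholder LOINC code
    snomed_ct="XXXXXXX",   # placeholder SNOMED CT code
    hl7_segment="OBX",     # observation/result segment
)
print(f"{element.name}: LOINC {element.loinc}, sent in {element.hl7_segment}")
```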


2021 ◽  
Author(s):  
Subhashis Das ◽  
Pamela Hussey

The global pandemic of the past two years has reset societal agendas by exposing both strengths and weaknesses across all sectors. In global health delivery in particular, the pressure on healthcare facilities to scale and meet service demands has revealed the need for some national services and organisations to modernise their processes and infrastructures. Core to modernisation is infrastructure for sharing information, specifically standardised structural approaches to both operational procedures and terminology services. Data sharing problems (i.e., interoperability) are a main obstacle when patients move across healthcare facilities or travel across borders in cases where emergency treatment is needed. Experts in healthcare service delivery suggest that the best way to manage individual care is at home, using remote patient monitoring, which ultimately reduces the cost burden for both the citizen and the service provider. Core to this practice will be advancing the digitalisation of healthcare, underpinned by safe integration of, and access to, relevant and timely information. To tackle the data interoperability issue and provide a quality-driven, continuous flow of information across different healthcare information systems, semantic terminology needs to be preserved intact. In this paper we propose and present ContSonto, a formal ontology for continuity of care based on ISO 13940:2015 ContSys and the W3C Semantic Web standard OWL (Web Ontology Language). ContSonto has several benefits, including semantic interoperability, data harmonization, and data linking. It can be used as a base model for data integration across different healthcare information models, generating a knowledge graph to support shared care and decision making.
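
As a minimal sketch of the kind of OWL modelling described, the snippet below declares a class and an object property with rdflib. The IRIs and names are hypothetical and are not taken from ContSonto or ISO 13940.

```python
# A minimal sketch of declaring OWL classes and an object property
# for a continuity-of-care ontology. All IRIs are placeholders, not
# ContSonto or ISO 13940 terms.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

CS = Namespace("http://example.org/contsonto#")  # placeholder namespace

g = Graph()
g.add((CS.HealthcareActivity, RDF.type, OWL.Class))
g.add((CS.CareEpisode, RDF.type, OWL.Class))
g.add((CS.partOfEpisode, RDF.type, OWL.ObjectProperty))
g.add((CS.partOfEpisode, RDFS.domain, CS.HealthcareActivity))
g.add((CS.partOfEpisode, RDFS.range, CS.CareEpisode))

print(g.serialize(format="turtle"))
```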


2021 ◽  
Vol 5 ◽  
Author(s):  
Medha Devare ◽  
Céline Aubert ◽  
Omar Eduardo Benites Alfaro ◽  
Ivan Omar Perez Masias ◽  
Marie-Angélique Laporte

Agricultural research has traditionally been driven by linear, hypothesis-testing approaches. With the advent of powerful data science capabilities, predictive, empirical approaches are possible that operate over large data pools to discern patterns. Such data pools need to contain well-described, machine-interpretable, and openly available data (represented by high-scoring Findable, Accessible, Interoperable, and Reusable, or FAIR, resources). CGIAR's Platform for Big Data in Agriculture has developed several solutions to help researchers generate open and FAIR outputs, determine their FAIRness in quantitative terms, and create high-value data products drawing on these outputs. By accelerating the speed and efficiency of research, these approaches facilitate innovation, allowing the agricultural sector to respond agilely to farmer challenges. In this paper, we describe the Agronomy Field Information Management System, or AgroFIMS, a web-based, open-source tool that helps generate data that is "born FAIRer" by addressing data interoperability, enabling aggregation and easier derivation of value from data. Although the choice of license to determine accessibility is at the discretion of the user, AgroFIMS provides consistent and rich metadata, helping users more easily comply with institutional, funder, and publisher FAIR mandates. The tool enables the creation of fieldbooks through a user-friendly interface that allows the entry of metadata tied to the Dublin Core standard schema, and of trial details via picklists or autocomplete based on semantic standards like the Agronomy Ontology (AgrO). Choices are organized by the field operations or measurements of relevance to an agronomist, with specific terms drawn from ontologies. Once users have stepped through the required fields and the modules describing their trial management practices and measurement parameters, they can download the fieldbook to use as a standalone Excel-driven file, or employ it via the free Android-based KDSmart, Fieldbook, or ODK applications for digital data collection. Collected data can be imported back into AgroFIMS for statistical analysis and reporting. Development plans for 2021 include new features such as the ability to clone fieldbooks and to create agronomic questionnaires. AgroFIMS will also allow FAIR data to be archived after collection and analysis, from a database to repository platforms for wider sharing.
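
A small sketch of what "metadata tied to the Dublin Core standard schema" might look like for a fieldbook follows; the specific terms and values are assumptions for illustration, not AgroFIMS's actual output format.

```python
# A hedged sketch of Dublin Core-style fieldbook metadata. The chosen
# terms, values, and the placeholder AgrO IRI are illustrative
# assumptions, not AgroFIMS's real export schema.
fieldbook_metadata = {
    "dc:title": "Maize nitrogen response trial, 2021",
    "dc:creator": "A. Agronomist",
    "dc:date": "2021-06-01",
    "dc:description": "Fertilizer application and yield measurements",
    "dc:language": "en",
}

# Measurement parameters would be tied to semantic standards such as
# the Agronomy Ontology (AgrO), e.g. as term IRIs.
measurements = {
    "grain_yield": "http://example.org/agro/placeholder-term",  # placeholder IRI
}
print(fieldbook_metadata["dc:title"])
```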

