Data Integration Using Attentive Multi-View Graph Auto-Encoders

Author(s):  
S. Baskaran ◽  
P. Panchavarnam

Thematic integration plays a role in similarity judgments of pairs of objects that are taxonomically unrelated, such as soup and spoon. We hypothesized that integration serves as an even more central mechanism in the similarity evaluation of abstract objects, owing to their temporality, their high variability, and their relational nature. One remedy is to leverage information from other sources - such as text data - both to train visual models and to constrain their predictions. We present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data and semantic information gleaned from unannotated text. We show that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be used to make predictions about thousands of image labels not seen during training. We model the integration using multi-view graph auto-encoders, and add an attentive mechanism to determine the weight of each view with respect to the corresponding tasks and features for better interpretability. Our model has a flexible design that supports both semi-supervised and unsupervised settings. Experimental results demonstrated significant improvements in predictive accuracy. Case studies also showed better model capacity for imputing node features and greater interpretability.
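The attentive view-weighting described above can be sketched in a few lines. This is a minimal plain-Python illustration under our own assumptions - hypothetical two-dimensional view embeddings and raw attention scores standing in for learned parameters - not the paper's implementation:

```python
import math

def softmax(scores):
    """Normalize raw scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_fusion(view_embeddings, view_scores):
    """Combine per-view node embeddings into one representation,
    weighting each view by its softmax-normalized attention score."""
    weights = softmax(view_scores)
    dim = len(view_embeddings[0])
    fused = [0.0] * dim
    for w, emb in zip(weights, view_embeddings):
        for i, x in enumerate(emb):
            fused[i] += w * x
    return fused, weights

# Two hypothetical views of the same node (e.g. a text view and an
# image view); the scores [2.0, 0.0] are illustrative placeholders.
views = [[1.0, 0.0], [0.0, 1.0]]
fused, weights = attentive_fusion(views, [2.0, 0.0])
# The per-view weights remain inspectable after fusion, which is the
# interpretability property the attentive mechanism aims at.
```

The fused vector is a convex combination of the views, so the weights can be read directly as each view's contribution.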


2015 ◽  
Vol 6 (1) ◽  
pp. 17-42
Author(s):  
Maria Vargas-Vera ◽  
Miklos Nagy

Ontology mapping as a semantic data integration approach has evolved from traditional data integration solutions. The core problems and open issues of the early data integration approaches also apply to ontology mapping in the Semantic Web community. Therefore, in this review the authors present the related literature, starting from the traditional data integration approaches, in order to highlight how data integration has evolved from those early approaches. Once the roots of semantic data integration have been presented, the authors introduce the state of the art in ontology mapping systems, covering both the early approaches and the systems that can be compared through the Ontology Alignment Evaluation Initiative (OAEI).


Author(s):  
Uwe Weissflog

Abstract This paper provides an overview of methods and ideas to achieve data integration in CIM. It describes a dictionary approach allowing participating applications to define their common constructs gradually as an additional service across application systems. Because of the importance of product definition data, the role of PDES/STEP as part of this dictionary approach is also described. The technical concepts of the dictionary, such as schema mapping, semantic data model, user methods and the required additions within participating applications are explained. Problems related to data integrity, data redundancy, performance and binding of dissimilar software components are discussed as well as the deficiencies related to today’s data modelling capabilities. The added value an active dictionary can provide to a CIM environment consisting of established applications in heterogeneous environments, where migration into one standardized homogeneous set of CIM applications is not likely, is also explained.


Author(s):  
D. Vinasco-Alvarez ◽  
J. Samuel ◽  
S. Servigne ◽  
G. Gesquière

Abstract. To enrich urban digital twins and better understand city evolution, the integration of heterogeneous, spatio-temporal data has become a large area of research in the enrichment of 3D and 4D (3D + Time) semantic city models. These models, which can represent the 3D geospatial data of a city and their evolving semantic relations, may require data-driven integration approaches to provide temporal and concurrent views of the urban landscape. However, data integration often requires the transformation or conversion of data into a single shared data format, which can be prone to semantic data loss. To combat this, the paper proposes a model-centric, ontology-based data integration approach toward limiting semantic data loss when transforming 4D semantic urban data to semantic graph formats. By integrating the underlying conceptual models of urban data standards, a unified spatio-temporal data model can be created as a network of ontologies. Transformation tools can use this model to map datasets to interoperable semantic graph formats of 4D city models. This paper first illustrates how this approach facilitates the integration of rich 3D geospatial, spatio-temporal urban data and semantic web standards with a focus on limiting semantic data loss. Second, it demonstrates how semantic graphs based on these models can be implemented for spatial and temporal queries toward 4D semantic city model enrichment.
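As a schematic sketch of the ontology-based mapping step described above, the toy code below transforms one source record into graph triples via a field-to-term mapping, so that only fields for which the unified model has no term are lost. All names (the mapping, the `unified:` terms, the record fields) are hypothetical illustrations, not vocabulary from the paper:

```python
# Hypothetical mapping from one source dataset's fields to terms of a
# unified ontology network; every identifier here is illustrative.
SOURCE_TO_UNIFIED = {
    "building_id": "unified:identifier",
    "height_m":    "unified:height",
    "built_year":  "unified:yearOfConstruction",
}

def record_to_triples(subject_iri, record, mapping):
    """Transform one source record into graph triples. Fields are
    dropped only when the unified model has no term for them, so the
    mapping, not the target format, decides what survives."""
    triples = []
    for field, value in record.items():
        predicate = mapping.get(field)
        if predicate is not None:
            triples.append((subject_iri, predicate, value))
    return triples

triples = record_to_triples(
    "city:Building42",
    {"building_id": "B42", "height_m": 17.5, "built_year": 1898},
    SOURCE_TO_UNIFIED,
)
```

In a real pipeline the mapping would be derived from the integrated conceptual models rather than written by hand, but the design point is the same: transformation is driven by the unified model, which is what limits semantic loss.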


2020 ◽  
Vol 39 (2) ◽  
pp. 2249-2261
Author(s):  
Antonio Hernández-Illera ◽  
Miguel A. Martínez-Prieto ◽  
Javier D. Fernández ◽  
Antonio Fariña

RDF self-indexes compress an RDF collection while providing efficient access to the data without prior decompression (via the so-called SPARQL triple patterns). HDT is one of the reference solutions in this scenario, with several applications that lower the barrier to both publication and consumption of Big Semantic Data. However, the simple design of HDT takes a compromise position between compression effectiveness and retrieval speed. In particular, it supports scan and subject-based queries, but it requires additional indexes to resolve predicate- and object-based SPARQL triple patterns. A recent variant, HDT++, improves HDT compression ratios, but it does not retain the original HDT retrieval capabilities. In this article, we extend HDT++ with additional indexes to support full SPARQL triple pattern resolution with a lower memory footprint than the original indexed HDT (called HDT-FoQ). Our evaluation shows that the resulting structure, iHDT++, requires 70-85% of the original HDT-FoQ space (and as little as 48-72% for an HDT Community variant). In addition, iHDT++ shows significant performance improvements (up to one order of magnitude) for most triple pattern queries, remaining competitive with state-of-the-art RDF self-indexes.
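To see why subject-ordered storage alone serves subject-bound patterns but needs extra indexes for predicate- and object-bound ones, the toy sketch below resolves SPARQL-style triple patterns (with `None` as a wildcard) over per-component lookup tables. It only illustrates the access-pattern problem; it is not the actual HDT, HDT-FoQ, or iHDT++ data structures:

```python
from collections import defaultdict

class TripleIndex:
    """Toy triple store: a subject-ordered (SPO) list gives cheap scans
    and subject-bound lookups, while the predicate and object tables
    play the role of the additional indexes discussed above."""
    def __init__(self, triples):
        self.triples = sorted(triples)   # SPO order, as in base HDT
        self.by_p = defaultdict(list)    # extra predicate index
        self.by_o = defaultdict(list)    # extra object index
        for t in self.triples:
            _, p, o = t
            self.by_p[p].append(t)
            self.by_o[o].append(t)

    def query(self, s=None, p=None, o=None):
        """Resolve one triple pattern; None acts as a wildcard."""
        if p is not None and s is None:
            candidates = self.by_p[p]    # predicate-bound pattern
        elif o is not None and s is None:
            candidates = self.by_o[o]    # object-bound pattern
        else:
            candidates = self.triples    # scan / subject-bound pattern
        return [t for t in candidates
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleIndex([
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:bob",   "ex:knows", "ex:carol"),
    ("ex:alice", "ex:name",  "Alice"),
])
knows = store.query(p="ex:knows")   # served by the extra index
alice = store.query(s="ex:alice")   # served by the base SPO order
```

The space/speed trade-off in the article is precisely about how compactly those extra `by_p`/`by_o` structures can be represented without giving up this full pattern coverage.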

