graph databases
Recently Published Documents


TOTAL DOCUMENTS

438
(FIVE YEARS 119)

H-INDEX

25
(FIVE YEARS 2)

2022 ◽  
pp. 101089
Author(s):  
Ciro M. Medeiros ◽  
Martin A. Musicante ◽  
Umberto S. Costa

2021 ◽  
Vol 7 (1 | 2) ◽  
pp. 157-190
Author(s):  
Maria Di Maro ◽  
Antonio Origlia ◽  
Francesco Cutugno

2021 ◽  
Author(s):  
Riddho Ridwanul Haque ◽  
Chowdhury Farhan Ahmed ◽  
Md. Samiullah ◽  
Carson K. Leung

Author(s):  
A.-H. Hor ◽  
G. Sohn

Abstract. The semantic integration of BIM Industry Foundation Classes (IFC) and the GIS City Geography Markup Language (CityGML) is a milestone for many applications that span both domains of knowledge. In this paper, we propose a system architecture and an implementation of Extraction, Transformation and Loading (ETL) workflows that bring BIM and GIS models into an RDF graph database. These workflows were built from functional components and ontological frameworks supporting the RDF SPARQL and graph-database Cypher query languages. The paper seeks a full understanding of whether an RDF graph database is suitable for a BIM-GIS integrated information model: it assesses the translation workflows and evaluates performance metrics of a BIM-GIS integrated data model managed in an RDF graph database. The process requires designing and developing several workflow pipelines with semantic tools to bring the data and its structure into an appropriate format, and it demonstrates the potential of RDF graph databases to integrate, manage and analyze information and relationships from both GIS and BIM models. The study also introduces graph-model occupancy indexes for nodes, attributes and relationships to measure query outputs and give insight into the data richness and performance of the resulting semantically integrated BIM-GIS model.
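The occupancy-index idea from this abstract can be pictured with a small sketch. The definition below (populated elements divided by schema capacity) and all counts are assumptions for illustration; the paper's actual ETL relies on RDF/SPARQL and Cypher tooling not shown here:

```python
# Hypothetical sketch of a "graph-model occupancy index": the fraction of
# nodes, attributes and relationships actually populated after an ETL run,
# relative to the integrated BIM-GIS schema's capacity. All numbers are toy.

def occupancy_index(populated: int, capacity: int) -> float:
    """Ratio of populated elements to the schema's total capacity."""
    return populated / capacity if capacity else 0.0

# Toy counts after loading a BIM model and a GIS model into one graph.
schema_capacity = {"nodes": 120, "attributes": 480, "relationships": 300}
populated = {"nodes": 90, "attributes": 310, "relationships": 240}

indexes = {kind: occupancy_index(populated[kind], schema_capacity[kind])
           for kind in schema_capacity}
print(indexes)  # {'nodes': 0.75, 'attributes': 0.645..., 'relationships': 0.8}
```

A low index for one element kind would suggest that part of the integrated model carries little data, which is the sort of richness insight the abstract describes.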


2021 ◽  
Author(s):  
Telmo Henrique Valverde da Silva ◽  
Ronaldo dos Santos Mello

Several application domains, such as supply chains and social networks, hold highly connected data. In this context, NoSQL graph databases have emerged as a promising solution, since relationships are first-class citizens in their data model. Nevertheless, a traditional database design methodology initially defines a conceptual schema of the domain data, and the Enhanced Entity-Relationship (EER) model is a common tool for this. This paper presents a rule-based process for converting an EER schema into Neo4j schema constraints, Neo4j being the most representative NoSQL graph database management system with an expressive data model. Unlike related work, our conversion process deals with all EER model concepts and generates rules that enforce schema constraints through a set of Cypher instructions ready to run against a Neo4j database instance; this matters because Neo4j is a schemaless system in which a schema cannot be created a priori. We also present an experimental evaluation that demonstrates the viability of our process in terms of performance.
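The flavour of such a rule-based conversion can be sketched by generating constraint-enforcing Cypher from an entity description. The entity, its attributes, and the two rules below are invented for illustration and are not the authors' actual rule set (note also that existence constraints require Neo4j Enterprise Edition):

```python
# Minimal sketch: emit Cypher statements that enforce schema constraints for
# one hypothetical EER entity type, in the spirit of the rule-based
# EER-to-Neo4j conversion described above.

def entity_constraints(label: str, key: str, required: list[str]) -> list[str]:
    """Generate Cypher constraint statements for one entity type:
    a uniqueness constraint on its key, existence constraints elsewhere."""
    stmts = [f"CREATE CONSTRAINT {label.lower()}_{key}_unique IF NOT EXISTS "
             f"FOR (n:{label}) REQUIRE n.{key} IS UNIQUE"]
    for attr in required:
        stmts.append(f"CREATE CONSTRAINT {label.lower()}_{attr}_exists "
                     f"IF NOT EXISTS FOR (n:{label}) "
                     f"REQUIRE n.{attr} IS NOT NULL")
    return stmts

cypher = entity_constraints("Supplier", "supplierId", ["name"])
for stmt in cypher:
    print(stmt)
```

Each emitted statement could then be run against a Neo4j instance, which is how constraints are imposed on an otherwise schemaless database.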


2021 ◽  
Vol 17 (3) ◽  
Author(s):  
Jakub Michaliszyn ◽  
Jan Otop ◽  
Piotr Wieczorek

We propose a new approach to querying graph databases. Our approach balances the competing goals of expressive power, language clarity and computational complexity. A distinctive feature of our approach is the ability to express properties of minimal (e.g., shortest) and maximal (e.g., most valuable) paths satisfying given criteria. To express complex properties in a modular way, we introduce labelling-generating ontologies. The resulting formalism is computationally attractive: queries can be answered in nondeterministic logarithmic space in the size of the database.
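The kind of minimal-path property this formalism targets can be illustrated with a plain breadth-first search over an edge-labelled graph. The graph data is invented, and this is ordinary BFS, not the authors' query language:

```python
from collections import deque

# Illustrative only: one shortest path in a small edge-labelled graph,
# the sort of minimal-path property the proposed formalism can express.
graph = {
    "a": [("b", "knows"), ("c", "likes")],
    "b": [("d", "knows")],
    "c": [("d", "knows")],
    "d": [],
}

def shortest_path(src: str, dst: str) -> list[str]:
    """Breadth-first search; returns one shortest node sequence, or []."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt, _label in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(shortest_path("a", "d"))  # ['a', 'b', 'd']
```

BFS itself runs in linear time; the point of the paper's complexity result is that even queries layering ontology-generated labels on top of such path conditions stay in nondeterministic logarithmic space.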


2021 ◽  
Vol 11 (17) ◽  
pp. 7782
Author(s):  
Itziar Urbieta ◽  
Marcos Nieto ◽  
Mikel García ◽  
Oihana Otaegui

Modern Artificial Intelligence (AI) methods can produce large quantities of accurate and richly described data in domains such as surveillance or automation. As a result, the need has arisen to organize data at large scale in a semantic structure for long-term maintenance and consumption. Ontologies and graph databases have gained popularity as mechanisms to satisfy this need. Ontologies provide the means to formally structure the descriptive and semantic relations of a domain. Graph databases allow efficient and well-adapted storage, manipulation and consumption of these linked data resources. However, at present there is no universally defined strategy for building AI-oriented ontologies for the automotive sector. One of the key challenges is the lack of a global standardized vocabulary. Most private initiatives and large open datasets for Advanced Driver Assistance Systems (ADASs) and Autonomous Driving (AD) development include their own definitions of terms, with incompatible taxonomies and structures, resulting in a well-known lack of interoperability. This paper presents the Automotive Global Ontology (AGO) as a Knowledge Organization System (KOS) using a graph database (Neo4j). Two different use cases for the AGO domain ontology are presented to showcase its capabilities in terms of semantic labeling and scenario-based testing. The ontology and related material have been made public for subsequent use by the industry and academic communities.
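The basic inference a Knowledge Organization System supports, transitive subclass reasoning over a taxonomy, can be sketched as follows. The class names are invented for illustration and are not AGO's actual vocabulary:

```python
# Hypothetical mini-taxonomy in the spirit of an automotive ontology;
# each class maps to its direct superclass (class names are invented).
subclass_of = {
    "Car": "Vehicle",
    "Truck": "Vehicle",
    "Vehicle": "PhysicalObject",
    "Pedestrian": "PhysicalObject",
}

def is_a(cls: str, ancestor: str) -> bool:
    """Transitive subclass check: walk up the hierarchy to the root."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass_of.get(cls)
    return False

print(is_a("Car", "PhysicalObject"))  # True: Car -> Vehicle -> PhysicalObject
print(is_a("Pedestrian", "Vehicle"))  # False
```

In a graph database such as Neo4j, the same walk would be a variable-length relationship traversal, which is why graph stores suit this kind of linked vocabulary.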


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Daniel Hofer ◽  
Markus Jäger ◽  
Aya Khaled Youssef Sayed Mohamed ◽  
Josef Küng

Purpose
Log files are a crucial source of information for computer-security experts. The time domain is especially important because, in most cases, timestamps are the only links between events caused by attackers, faulty systems or simple errors and their corresponding log entries. To store and analyze this log information in graph databases, a suitable model is needed for storing and connecting timestamps and their events. This paper aims to find and evaluate different approaches to storing timestamps in graph databases, along with their individual benefits and drawbacks.

Design/methodology/approach
We analyze three approaches to representing and storing timestamp information in graph databases. To check the models, we set up four questions typical of log-file analysis and tested each model against them. During the evaluation, we used performance and other properties as metrics for how well each model represents the log files' timestamp information. In the last part, we try to improve one promising-looking model.

Findings
We conclude that the simplest model, the one using the fewest graph-database-specific concepts, is also the one yielding the simplest and fastest queries.

Research limitations/implications
Only one graph database was studied, and improvements to the query engine might change future results.

Originality/value
We address the issue of storing timestamps in graph databases in a meaningful, practical and efficient way. The results can serve as a pattern for similar scenarios and applications.
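The finding that the plainest model wins can be illustrated with that simplest variant: a timestamp stored directly as an event property, so a time-range query is a single filter. The log messages and the query are invented for illustration:

```python
from datetime import datetime, timezone

# Simplest timestamp model from the comparison above: each log event is a
# node-like record with a plain epoch-seconds property; a range query is
# one numeric filter, with no extra time-tree nodes to traverse.
events = [
    {"msg": "login failed",
     "ts": datetime(2021, 3, 1, 10, 0, tzinfo=timezone.utc).timestamp()},
    {"msg": "sudo used",
     "ts": datetime(2021, 3, 1, 10, 5, tzinfo=timezone.utc).timestamp()},
    {"msg": "reboot",
     "ts": datetime(2021, 3, 2, 9, 0, tzinfo=timezone.utc).timestamp()},
]

def in_range(start: float, end: float) -> list[str]:
    """Return messages of events whose timestamp lies in [start, end)."""
    return [e["msg"] for e in events if start <= e["ts"] < end]

day1 = datetime(2021, 3, 1, tzinfo=timezone.utc).timestamp()
day2 = datetime(2021, 3, 2, tzinfo=timezone.utc).timestamp()
print(in_range(day1, day2))  # ['login failed', 'sudo used']
```

Richer models (e.g., decomposing timestamps into linked year/month/day nodes) add graph structure that every query must then navigate, which is consistent with the paper's conclusion that the flat property is simplest and fastest.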


2021 ◽  
Vol 11 (16) ◽  
pp. 7434
Author(s):  
Adam Niewiadomski ◽  
Agnieszka Duraj ◽  
Monika Bartczak

Datasets frequently contain uncertain data that, if not interpreted with care, may negatively affect information analysis. Such rare, strange, or imperfect data, here called "outliers" or "exceptions", can be ignored in further processing or, on the other hand, handled by dedicated algorithms that decide whether they contain valuable, though very rare, information. There are different definitions of and methods for handling outliers; here, we are interested in particular in those based on linguistic quantification and fuzzy logic. In this paper, for the first time, we apply outlier definitions and recognition methods based on fuzzy sets and linguistically quantified statements to find outliers in non-relational, here graph-oriented, databases. These methods are proposed and exemplified to identify objects that are outliers (e.g., to exclude them from processing). The novelty of this paper is the set of definitions and recognition algorithms for outliers using fuzzy logic and linguistic quantification when traditional quantitative and/or measurable information is inaccessible, as frequently happens given the graph nature of the datasets considered.
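The flavour of a linguistically quantified outlier test can be sketched with Zadeh-style relative quantifiers: an object is an outlier candidate when the statement "most objects are similar to it" has a low truth degree. The quantifier shape, similarity measure and data below are all invented for illustration, not the paper's definitions:

```python
# Illustrative sketch of a linguistically quantified statement
# "most objects are similar to x": if its truth degree is low,
# x is an outlier candidate. Everything here is toy.

def most(r: float) -> float:
    """Fuzzy relative quantifier 'most' (piecewise-linear, Zadeh-style)."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5

def similarity(a: float, b: float) -> float:
    """Toy similarity in [0, 1] for scalar-valued objects."""
    return max(0.0, 1.0 - abs(a - b))

data = [1.0, 1.1, 0.9, 1.05, 5.0]

def outlier_degree(x: float) -> float:
    """1 - truth('most objects are similar to x')."""
    avg_sim = sum(similarity(x, y) for y in data) / len(data)
    return 1.0 - most(avg_sim)

print(outlier_degree(5.0), outlier_degree(1.0))  # high vs. low: 5.0 stands out
```

In the graph setting the paper addresses, the scalar similarity would be replaced by a structural one (e.g., over node attributes or neighbourhoods), but the quantified statement keeps the same shape.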

