Ontological Re-engineering of Medical Databases

Author(s):  
Guntis Bārzdiņš ◽  
Sergejs Rikačovs ◽  
Marta Veilande ◽  
Mārtiņš Zviedris

This paper describes data export from multiple medical relational databases into a single shared Medical Data Warehouse (an RDF database structured according to an integrated OWL ontology). The exported data is conveniently accessible via SPARQL or via the graphical query language ViziQuer, which is based on a UML profile for OWL. The approach is illustrated on one of the Latvian medical databases, the Injury Register.
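The core idea, rows of a medical register re-expressed as RDF triples that can be matched with SPARQL-style graph patterns, can be sketched in a few lines. This is a self-contained toy stand-in, not the paper's system: the vocabulary (`:Injury`, `:cause`) is invented, and a real deployment would use an RDF store with a SPARQL engine rather than this hand-rolled matcher.

```python
# Toy RDF graph: rows from a relational injury register become
# subject-predicate-object triples. Vocabulary is hypothetical.
triples = [
    (":i1", "rdf:type", ":Injury"),
    (":i1", ":cause", "fall"),
    (":i2", "rdf:type", ":Injury"),
    (":i2", ":cause", "burn"),
]

def select_causes(graph):
    """Analogue of the SPARQL query:
    SELECT ?injury ?cause WHERE { ?injury a :Injury ; :cause ?cause }"""
    injuries = {s for (s, p, o) in graph if p == "rdf:type" and o == ":Injury"}
    return [(s, o) for (s, p, o) in graph if p == ":cause" and s in injuries]

print(select_causes(triples))  # [(':i1', 'fall'), (':i2', 'burn')]
```

The point of the approach is visible even in the toy: the query is phrased in the ontology's vocabulary, not in terms of the normalized tables the data originally lived in.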

2009 ◽  
pp. 2360-2383
Author(s):  
Guntis Barzdins ◽  
Janis Barzdins ◽  
Karlis Cerans

This chapter introduces the UML profile for OWL as an essential instrument for bridging the gap between legacy relational databases and OWL ontologies. We address one of the long-standing relational database design problems: the initial conceptual model (a semantically clear domain conceptualization ontology) gets "lost" during conversion into the normalized database schema. The problem is that this "loss" makes the database inaccessible for direct query by domain experts familiar with the conceptual model only. It can be avoided by exporting the database into RDF according to the original conceptual model (OWL ontology) and formulating semantically clear queries in SPARQL over the RDF database. Through a detailed example we show how the UML/OWL profile facilitates this new and promising approach.
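The export step the chapter describes, re-expressing normalized rows in the vocabulary of the original conceptual model, can be sketched as a simple mapping. This is a hedged illustration, not the chapter's actual tooling: the table, class, and property names are invented.

```python
def rows_to_triples(rows, class_iri, prop_map, iri_prefix):
    """Map each row (a dict) to RDF-style triples: one rdf:type triple
    plus one triple per mapped column. prop_map maps column names to
    ontology property IRIs from the conceptual model."""
    triples = []
    for row in rows:
        subject = f"{iri_prefix}{row['id']}"
        triples.append((subject, "rdf:type", class_iri))
        for column, prop in prop_map.items():
            triples.append((subject, prop, row[column]))
    return triples

# A normalized "person" table mapped back to the conceptual vocabulary.
rows = [{"id": 1, "full_name": "Ann"}, {"id": 2, "full_name": "Bob"}]
print(rows_to_triples(rows, ":Person", {"full_name": ":name"}, ":person/"))
```

Once rows are lifted into triples like these, SPARQL queries can be written against `:Person` and `:name` without knowing how the underlying schema was normalized.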


1993 ◽  
Vol 11 (2) ◽  
pp. 171-202 ◽  
Author(s):  
Gary H. Sockut ◽  
Luanne M. Burns ◽  
Ashok Malhotra ◽  
Kyu-Young Whang

AI ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 720-737
Author(s):  
Fadi H. Hazboun ◽  
Majdi Owda ◽  
Amani Yousef Owda

Structured Query Language (SQL) is commonly used in Relational Database Management Systems (RDBMS) and is currently one of the most popular data definition and manipulation languages. Its core functionality is implemented, with only minor variations, across all RDBMS products, and it is an effective tool for managing and querying data in relational databases. This paper describes a method to automate the conversion of a Natural Language Query (NLQ) into SQL over Online Analytical Processing (OLAP) cube data warehouse objects. To obtain or manipulate data in a relational database, the user must be familiar with SQL and must write a valid SQL statement; users who are not familiar with SQL are therefore unable to obtain relevant data from relational databases. To address this, we propose a Natural Language Processing (NLP) model that converts an NLQ into an SQL query, allowing novice users to obtain the required data without knowing any complicated SQL details. The model is also capable of handling complex queries using the OLAP cube technique, which allows data to be pre-calculated and stored in a multi-dimensional, ready-to-use format. A multi-dimensional cube (hypercube) is connected to the NLP interface, thereby eliminating long-running data queries and enabling self-service business intelligence. The study demonstrated how hypercube technology increases system response speed and the ability to process very complex query sentences, and the system achieved strong performance in NLP accuracy when generating different query sentences. Using OLAP hypercube technology, the study achieved distinguished results compared to previous studies in the speed of the model's response to NLQ analysis, the generation of complex SQL statements, and the dynamic display of results.
As future work, it is recommended to use higher-dimensional (n-D) cubes instead of 4-D cubes, to enable ingesting as much data as possible in a single object and to facilitate the execution of query statements that may be too complex for query interfaces running over a data warehouse.
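The NLQ-to-SQL idea can be illustrated with a deliberately tiny sketch: keywords in the question are matched against a lexicon of cube dimensions and measures, and an SQL statement is assembled from the matches. This is not the authors' NLP model (which handles far more complex sentences); the lexicon, table, and column names are hypothetical.

```python
# Hypothetical lexicon mapping natural-language words to cube elements.
LEXICON = {
    "sales": ("measure", "SUM(sales_amount)"),
    "region": ("dimension", "region"),
    "year": ("dimension", "year"),
}

def nlq_to_sql(question, table="sales_cube"):
    """Assemble an SQL statement from lexicon matches: dimensions go into
    SELECT and GROUP BY, measures are aggregated."""
    words = question.lower().replace("?", "").split()
    measures, dims = [], []
    for w in words:
        kind, sql = LEXICON.get(w, (None, None))
        if kind == "measure":
            measures.append(sql)
        elif kind == "dimension":
            dims.append(sql)
    stmt = f"SELECT {', '.join(dims + measures)} FROM {table}"
    if dims:
        stmt += " GROUP BY " + ", ".join(dims)
    return stmt

print(nlq_to_sql("total sales by region"))
# SELECT region, SUM(sales_amount) FROM sales_cube GROUP BY region
```

Real systems replace the keyword lookup with full NLP (tokenization, entity linking, syntactic analysis), but the output stage, composing SELECT/GROUP BY from recognized cube elements, has this shape.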


Author(s):  
Anderson Chaves Carniel ◽  
Aried de Aguiar Sa ◽  
Vinicius Henrique Porto Brisighello ◽  
Marcela Xavier Ribeiro ◽  
Renato Bueno ◽  
...  

2018 ◽  
Vol 14 (3) ◽  
pp. 44-68 ◽  
Author(s):  
Fatma Abdelhedi ◽  
Amal Ait Brahim ◽  
Gilles Zurfluh

Nowadays, most organizations need to improve their decision-making process using Big Data. To achieve this, they have to store Big Data, perform analyses, and transform the results into useful and valuable information, which raises new challenges in designing and creating data warehouses. Traditionally, creating a data warehouse followed a well-governed process based on relational databases. Big Data challenged this traditional approach, primarily due to the changing nature of the data; as a result, using NoSQL databases has become a necessity. In this article, the authors show how to create a data warehouse on NoSQL systems. They propose the Object2NoSQL process, which generates column-oriented physical models starting from a UML conceptual model. To ensure efficient automatic transformation, they propose a logical model with a sufficient degree of independence to enable its mapping to one or more column-oriented platforms. The authors validate their approach through experiments on a case study in the health care field.
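The kind of transformation Object2NoSQL performs can be sketched in simplified form: a UML-style class description is mapped to a column-oriented physical model, here rendered as an HBase/Cassandra-like table with column families grouping each class's attributes and links. All names are invented for illustration; the real process involves a full conceptual-to-logical-to-physical chain.

```python
def uml_to_column_model(classes):
    """classes: {class_name: {"attributes": [...], "links": [...]}}.
    Produces one table per class: a row key plus column families
    separating plain attributes from links to other classes."""
    model = {}
    for name, spec in classes.items():
        model[name] = {
            "row_key": f"{name.lower()}_id",
            "column_families": {
                "attrs": spec.get("attributes", []),
                "links": spec.get("links", []),
            },
        }
    return model

conceptual = {
    "Patient": {"attributes": ["name", "birth_date"], "links": ["treated_by"]},
}
print(uml_to_column_model(conceptual))
```

Keeping the intermediate (logical) representation platform-neutral, as the article argues, is what lets one conceptual model target several column-oriented stores.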


Author(s):  
Dr. C. K. Gomathy

Abstract: Apache Sqoop is mainly used to transfer large volumes of data efficiently between Apache Hadoop and relational databases. It supports tasks such as ETL (extract, transform, load) processing from an enterprise data warehouse into Hadoop for efficient execution at a much lower cost. First, we import a table from a MySQL database using the command-line application Sqoop. Because new rows may be inserted and existing rows updated, the import would otherwise have to be executed again each time; our project removes this need by using a Sqoop job, which bundles the complete set of import commands. After the import, we retrieve the data from Hive using Java JDBC and convert it to JSON format, an organized and easily accessible representation, using the GSON library. Keywords: Sqoop, JSON, GSON, Maven, JDBC
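The pipeline's final step, serializing rows retrieved from Hive into JSON (done with Java and GSON in the original), can be illustrated with Python's standard-library `json` module as a stand-in. The column names and rows here are invented sample data.

```python
import json

def rows_to_json(columns, rows):
    """Turn JDBC-style result rows (tuples) into a JSON array of objects,
    pairing each tuple with the result set's column names."""
    return json.dumps([dict(zip(columns, row)) for row in rows], indent=2)

columns = ["id", "name"]
rows = [(1, "widget"), (2, "gadget")]
print(rows_to_json(columns, rows))
```

The same shape applies in Java: iterate the JDBC `ResultSet`, build a map per row, and hand the list of maps to GSON's serializer.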


2020 ◽  
Vol 4 (3) ◽  
p. 577
Author(s):  
Vania V Estrela

Background: A database (DB) that stores indexed information about drug delivery, testing, and their temporal behavior is paramount in new Biomedical Cyber-Physical Systems (BCPSs). The term Database as a Service (DBaaS) means that a corporation delivers the hardware, software, and other infrastructure required by companies to operate their databases according to their demands, instead of keeping an internal data warehouse. Methods: BCPS attributes are presented and discussed. One needs to retrieve detailed knowledge reliably to make adequate healthcare treatment decisions. Furthermore, these DBs store, organize, manipulate, and retrieve the necessary data from an ocean of Big Data (BD)-associated processes. Both Structured Query Language (SQL) and NoSQL DBs exist. Results: This work investigates how to retrieve biomedical-related knowledge reliably to make adequate healthcare treatment decisions, and how Biomedical DBaaSs store, organize, manipulate, and retrieve the necessary data from BD-associated processes. Conclusion: A NoSQL DB allows more flexibility with changes while the BCPSs are running, permitting queries and data handling according to context and situation. A DBaaS must be adaptive and permit DB management across a wide variety of distinct sources, modalities, and dimensionalities, alongside conventional data handling.

