relational database systems
Recently Published Documents

TOTAL DOCUMENTS: 205 (five years: 18)
H-INDEX: 22 (five years: 0)


2021 ◽  
Vol 9 (4) ◽  
pp. 1-16
Author(s):  
Anh Hong Le ◽  
To Van Khanh ◽  
Truong Ninh Thuan

Most modern relational database systems use triggers to implement automatic tasks in response to specific events occurring inside or outside the system. A database trigger is a human-readable block of code without any formal semantics, so designers can usually only check that a trigger behaves correctly after it has executed, or by manual inspection. In this article, the authors introduce a new method for modeling and verifying database trigger systems with the Event-B formal method at the design phase. First, the authors exploit the similar mechanism of triggers and Event-B events to propose a set of rules that translate a database trigger system into Event-B constructs. Then, the authors show how to verify data-constraint preservation properties and detect infinite loops of trigger execution with RODIN/Event-B. The authors also illustrate the proposed method with a case study. Finally, a tool named Trigger2B, which partly supports automating the modeling process, is presented.
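The paper's actual translation rules are not reproduced in the abstract; the following is only a minimal sketch of the trigger/event analogy it exploits, with a made-up trigger representation and Event-B skeleton (the field names and guard/action labels are assumptions, not the authors' rule set):

```python
# Illustrative sketch only: the trigger fields and the Event-B skeleton
# below are simplified assumptions, not the paper's translation rules.

def trigger_to_eventb(name, condition, action):
    """Map one database trigger to an Event-B event skeleton.

    The trigger's firing condition becomes the event guard (WHEN) and
    its body becomes the event action (THEN), mirroring the similar
    event/guard/action mechanism the authors build on.
    """
    return (
        f"EVENT {name}\n"
        f"WHEN\n  grd1: {condition}\n"
        f"THEN\n  act1: {action}\n"
        f"END"
    )

event = trigger_to_eventb(
    "audit_salary",
    "new_salary /= old_salary",      # guard: trigger fires on salary change
    "log := log \\/ {employee_id}",  # action: record the changed row
)
print(event)
```

Once triggers are expressed as events like this, RODIN's provers can discharge invariant-preservation obligations per event, which is what makes constraint checking and loop detection mechanical.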


Technology has changed a lot in this digital era. Earlier we had landline phones; now we have smartphones, laptops, and tablets that make our lives smarter. We used bulky desktops to process large amounts of data and stored it on floppies and hard disks; now we can store data in the cloud. With these advances we generate enormous amounts of data: each smartphone user, for example, generates approximately 40 exabytes of data every month in the form of texts, emails, phone calls, videos, photos, searches, music, and so on. Multiplied by five billion smartphone users, this is a vast amount, and traditional computing systems cannot handle it. Few people realize how much data they generate each minute. The challenging part is that this data is not present in a structured manner and is huge in size. Data is being generated in millions of ways, which is one of the biggest factors in the evolution of Big Data. As data grew exponentially, people began storing it in relational database systems, but with the advancement of the internet and digitalization these became insufficient. Big Data emerged to overcome this, providing a new set of tools and technologies for storing large amounts of unstructured data. Industry influencers, academics, and other prominent stakeholders agree that Big Data has become a big game-changer in most industries. The primary goals for most organizations are enhancing the customer experience, reducing costs, better-targeted marketing, and making existing processes more efficient. In this paper, we look into the various applications that Big Data offers to industries, the industry-specific challenges these industries face, and how Big Data solves these challenges.


2021 ◽  
Vol 12 (2) ◽  
Author(s):  
Raphael Marins ◽  
Rafael Pereira de Oliveira ◽  
Edward Hermann Haeusler ◽  
Sérgio Lifschitz ◽  
Daniel Schwabe ◽  
...  

This paper presents the Outer-Tuning framework, which aims to support the (semi-)automatic tuning of relational database systems through a domain-specific ontology. Ontologies have shown themselves to be increasingly promising, adding semantics and standardizing the different terms used in a domain. Our framework thus seeks to explain and make explicit the reasoning behind tuning heuristics, while enabling the evaluation of new ontology-inferred methods. In this paper we focus on the main aspects of Outer-Tuning's component-based architecture. We also give an overview of the tool in practice. Finally, we present two useful extensions: support for new DBMSs and a way to deploy the tool in a Docker container.
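Outer-Tuning's ontology and inference machinery are not shown in the abstract; as a toy illustration of the kind of tuning heuristic such a framework makes explicit, here is a hypothetical rule (entirely an assumption, not the paper's ontology) that recommends an index for columns filtered without one:

```python
# Hypothetical heuristic, not Outer-Tuning's actual ontology rules:
# recommend an index when a query filters on a column that has none.

def recommend_indexes(filtered_columns, indexed_columns):
    """Return CREATE INDEX suggestions for unindexed filter columns."""
    suggestions = []
    for table, column in filtered_columns:
        if (table, column) not in indexed_columns:
            suggestions.append(
                f"CREATE INDEX idx_{table}_{column} ON {table}({column})"
            )
    return suggestions

hints = recommend_indexes(
    filtered_columns=[("orders", "customer_id"), ("orders", "id")],
    indexed_columns={("orders", "id")},  # primary key already indexed
)
print(hints)
```

Encoding such rules declaratively, as the ontology approach does, is what lets the framework explain *why* each recommendation was inferred rather than emitting it as a black-box hint.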


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Kagiso Ndlovu ◽  
Richard E. Scott ◽  
Maurice Mars

Abstract

Background: Significant investments have been made towards the implementation of mHealth applications and eRecord systems globally. However, fragmentation of these technologies remains a big challenge, often unresolved in developing countries. In particular, evidence shows little consideration for linking mHealth applications and eRecord systems. Botswana is a typical developing country in sub-Saharan Africa that has explored mHealth applications, but the solutions are not interoperable with existing eRecord systems. This paper describes Botswana’s eRecord systems interoperability landscape and provides guidance for linking mHealth applications to eRecord systems, both for Botswana and for developing countries using Botswana as an exemplar.

Methods: A survey and interviews of health ICT workers and a review of the Botswana National eHealth Strategy were completed. Perceived interoperability benefits, opportunities and challenges were charted and analysed, and future guidance derived.

Results: Survey and interview responses showed the need for interoperable mHealth applications and eRecord systems within the health sector of Botswana and within the context of the National eHealth Strategy. However, the current Strategy does not address linking mHealth applications to eRecord systems. Across Botswana’s health sectors, global interoperability standards and Application Programming Interfaces are widely used, with some level of interoperability within, but not between, public and private facilities. Further, a mix of open source and commercial eRecord systems utilising relational database systems and similar data formats are supported. Challenges for linking mHealth applications and eRecord systems in Botswana were identified and categorised into themes, which led to the development of guidance to enhance the National eHealth Strategy.

Conclusion: Interoperability between mHealth applications and eRecord systems is needed and is feasible. Opportunities and challenges for linking mHealth applications to eRecord systems were identified, and future guidance stemming from this insight presented. Findings will aid Botswana, and other developing countries, in resolving the pervasive disconnect between mHealth applications and eRecord systems.


2021 ◽  
Vol 3 (2) ◽  
pp. 114-120
Author(s):  
Muhammad Yunus ◽  
M. Rodi Taufik Akbar

Relational database systems to date are only able to handle data that is definite (crisp), deterministic, and precise. In real conditions, however, vague data is often needed for the decision-making process. For decision making involving fuzzy variables over crisp data in a database, queries can apply the concept of fuzzification to the stored data. Every educational institution, especially a university, offers several types of scholarships to its students, and to obtain a scholarship a student must meet all the stated requirements. This study discusses the application of the Fuzzy Tahani algorithm to recommend recipients of the Academic Achievement Improvement (PPA) scholarship at Bumigora University, Mataram. Data on PPA scholarship recipients from 2014 were used, comprising 64 applicants and a quota of 15 recipients. The number of applicants for this scholarship grows every year, while processing and selection are still done semi-manually, so the results fall short of expectations, especially in terms of transparency and distribution. Several variables must be computed for each PPA scholarship applicant: Grade Point Average (GPA), parents' income, number of parents' dependents, and number of diplomas. Trials conducted in this study show that the system reaches an accuracy of 73.3%. This value was obtained by comparing the results of the semi-manual selection of PPA scholarship recipients with the selection produced by the system using the Fuzzy Tahani algorithm.
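The study's actual membership curves and variables are not given in the abstract; the following sketch shows the Tahani-style idea of fuzzifying crisp columns and ranking by fire strength, with made-up membership breakpoints (the GPA and income ramps below are assumptions for illustration only):

```python
# Sketch of a Tahani-style fuzzy query over crisp records, with
# assumed membership functions; the study's actual curves differ.

def mu_high_gpa(gpa):
    """Membership of 'high GPA' (assumed ramp from 2.75 to 3.5)."""
    if gpa <= 2.75:
        return 0.0
    if gpa >= 3.5:
        return 1.0
    return (gpa - 2.75) / (3.5 - 2.75)

def mu_low_income(income):
    """Membership of 'low parental income' (assumed ramp, millions of Rp)."""
    if income <= 1.0:
        return 1.0
    if income >= 3.0:
        return 0.0
    return (3.0 - income) / (3.0 - 1.0)

def fire_strength(student):
    # Tahani queries combine conditions with min (fuzzy AND).
    return min(mu_high_gpa(student["gpa"]), mu_low_income(student["income"]))

applicants = [
    {"name": "A", "gpa": 3.6, "income": 0.8},
    {"name": "B", "gpa": 3.0, "income": 2.5},
]
ranked = sorted(applicants, key=fire_strength, reverse=True)
print([a["name"] for a in ranked])  # applicant A ranks first
```

In a real deployment the membership degrees would be computed per row inside an SQL query (or a view) and the top 15 rows by fire strength would fill the quota.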


2021 ◽  
Vol 19 ◽  
pp. 151-158
Author(s):  
Piotr Rymarski ◽  
Grzegorz Kozieł

Most of today's web applications run on relational database systems, communicating with them through statements written in Structured Query Language (SQL). This paper presents the most popular relational database management systems and describes common ways to optimize SQL queries. Using a research environment based on a fragment of the imdb.com database, implemented on the OracleDB, MySQL, Microsoft SQL Server, and PostgreSQL engines, a number of test scenarios were performed. The aim was to check how SQL query performance changes under syntax modifications that preserve the result, as well as the impact of database organization, indexing, and the advanced efficiency mechanisms delivered in the systems used. The tests were carried out using a proprietary application written in Java using the Hibernate framework.
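The paper's Java/Hibernate test application is not shown here; as a minimal illustration of the kind of result-preserving syntax rewrite such tests compare, here is an IN-subquery versus an equivalent EXISTS form (sqlite3 stands in for the engines actually benchmarked, and the tiny schema is invented):

```python
# Toy illustration of a result-preserving SQL rewrite (IN vs EXISTS);
# sqlite3 is a stand-in for the engines actually benchmarked.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE movie (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE rating (movie_id INTEGER, score REAL);
    INSERT INTO movie VALUES (1, 'Alpha'), (2, 'Beta');
    INSERT INTO rating VALUES (1, 8.5);
""")

# Two syntactically different but semantically equivalent queries:
q_in = "SELECT title FROM movie WHERE id IN (SELECT movie_id FROM rating)"
q_exists = ("SELECT title FROM movie m WHERE EXISTS "
            "(SELECT 1 FROM rating r WHERE r.movie_id = m.id)")

rows_in = conn.execute(q_in).fetchall()
rows_exists = conn.execute(q_exists).fetchall()
print(rows_in == rows_exists)  # same result; plans and costs may differ
```

The interesting measurement is not the result set, which is identical by construction, but how each engine's optimizer plans the two forms; that is exactly the per-engine variation the paper's scenarios probe.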


2020 ◽  
Vol 13 (12) ◽  
pp. 1891-1904
Author(s):  
Michael Freitag ◽  
Maximilian Bandle ◽  
Tobias Schmidt ◽  
Alfons Kemper ◽  
Thomas Neumann

Author(s):  
Shivangi Kanchan ◽  
Parmeet Kaur ◽  
Pranjal Apoorva

Aim: To evaluate the performance of relational and NoSQL databases in terms of execution time and memory consumption during operations involving structured data.

Objective: To outline the criteria that decision makers should consider when choosing the database best suited to an application.

Methods: Extensive experiments were performed on MySQL, MongoDB, Cassandra, and Redis using data for an IMDB movie schema prorated into four datasets of 1000, 10000, 25000, and 50000 records. The experiments involved the typical database operations of insertion, deletion, update, and read of records, with and without indexing, as well as aggregation operations. The databases' performance was evaluated by measuring the time taken for operations and computing memory usage.

Results:
* Redis provides the best performance for write, update, and delete operations in terms of elapsed time and memory usage, whereas MongoDB gives the worst performance as the size of the data increases, due to its locking mechanism.
* For read operations, Redis provides better latency than Cassandra and MongoDB; MySQL shows the worst performance due to its relational architecture. On the other hand, MongoDB shows the best performance among all the databases in terms of efficient memory usage.
* Indexing improves the performance of any database only for covered queries.
* Redis and MongoDB give good performance for range-based queries and for fetching complete data in terms of elapsed time, whereas MySQL gives the worst performance.
* MySQL provides better performance for aggregate functions; NoSQL is not suitable for complex queries and aggregate functions.

Conclusion: Extensive empirical analysis shows that NoSQL outperforms SQL-based systems in terms of basic read and write operations. However, SQL-based systems are better when queries on the dataset mainly involve aggregation operations.
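The study's benchmark harness is not reproduced in the abstract; the following is a generic timing sketch in the same spirit, where a plain dict stands in for a database connection (the real experiments would call the MySQL, MongoDB, Cassandra, or Redis client drivers at these points):

```python
# Generic timing harness in the spirit of the experiments; the dict is
# a stand-in for a database client, not one of the drivers actually used.
import time

def time_operation(op, repeats=1000):
    """Return elapsed seconds for `repeats` calls of `op(i)`."""
    start = time.perf_counter()
    for i in range(repeats):
        op(i)
    return time.perf_counter() - start

store = {}  # stand-in for a database connection
def insert(i):
    store[i] = {"id": i, "title": f"movie-{i}"}
def read(i):
    return store.get(i)

t_insert = time_operation(insert)
t_read = time_operation(read)
print(f"insert: {t_insert:.6f}s, read: {t_read:.6f}s")
```

Memory usage, the study's second metric, would be sampled separately (e.g. from the server process) since client-side timing says nothing about the engine's storage overhead.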


Author(s):  
Fredi Edgardo Palominos ◽  
Felisa Córdova ◽  
Claudia Durán ◽  
Bryan Nuñez

OLAP and multidimensional database technology have contributed significantly to speeding up, and building confidence in, the effectiveness of methodologies based on the use of management indicators in decision-making, industry, production, and services. Although a wide variety of tools follow the OLAP approach, many implementations are built on relational database systems (R-OLAP), so all interrogation actions are performed through queries that must be reformulated in the SQL language. This translation has several consequences, because SQL is based on a mixture of relational algebra and tuple relational calculus, which conceptually follows the logic of the relational data model and differs greatly from the needs of multidimensional databases. This paper presents a multidimensional query language that allows multidimensional queries to be expressed directly over R-OLAP databases. The language is implemented through a middleware responsible for mapping the queries, hiding the translation in a software layer not noticeable to the end user. Currently, progress has been made on the definition of a language in which a key statement, called aggregate, executes the typical multidimensional operators, which represent an important part of the most frequent operations on this type of database.
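The paper's aggregate statement syntax and its middleware are not given in the abstract; the following is a hypothetical sketch of the mapping idea, translating an invented multidimensional aggregate call into the GROUP BY query an R-OLAP store understands (the function, parameter names, and fact-table schema are all assumptions):

```python
# Hypothetical surface form of an `aggregate` statement; the paper's
# actual language and middleware are not reproduced here.

def translate_aggregate(measure, func, dimensions, fact_table):
    """Map a multidimensional aggregate to SQL for an R-OLAP backend."""
    dims = ", ".join(dimensions)
    return (f"SELECT {dims}, {func}({measure}) "
            f"FROM {fact_table} GROUP BY {dims}")

sql = translate_aggregate("amount", "SUM", ["region", "year"], "sales_fact")
print(sql)
```

Hiding this rewriting in middleware is the design choice the paper argues for: the analyst states the cube operation in multidimensional terms, and only the mapping layer speaks relational-calculus-flavoured SQL.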

