Extended tuple constraint type as a complex integrity constraint type in XML data model - definition and enforcement

2018 ◽  
Vol 15 (3) ◽  
pp. 821-843
Author(s):  
Jovana Vidakovic ◽  
Sonja Ristic ◽  
Slavica Kordic ◽  
Ivan Lukovic

A database management system (DBMS) is based on a data model whose concepts are used to express a database schema. Each data model has a specific set of integrity constraint types. Some integrity constraint types, such as the key constraint, unique constraint and foreign key constraint, are supported by most DBMSs. Other, more complex constraint types are difficult to express and enforce and are mostly disregarded by actual DBMSs, so users have to manage them with custom procedures or triggers. The eXtensible Markup Language (XML) has become the universal format for representing and exchanging data. Very often XML data are generated from relational databases and exported to a target application or another database. In this context, integrity constraints play an essential role in preserving the original semantics of the data. Integrity constraints have been extensively studied in the relational data model. The mechanisms provided by XML schema languages rely on a simple form of constraints that is sufficient neither for expressing the semantic constraints commonly found in databases nor for expressing more complex constraints induced by the business rules of the system under study. In this paper we present a classification of constraint types in the relational data model, discuss possible declarative mechanisms for their specification and enforcement in the XML data model, and illustrate our approach to the definition and enforcement of complex constraint types in the XML data model on the example of the extended tuple constraint type.
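Constraints that relate a tuple's values to values elsewhere in the document are exactly the kind that XML schema languages cannot declare, so they end up checked procedurally. The following is a minimal sketch of such a check over XML data using Python's standard library; the document shape, element names, and the salary-cap rule are illustrative assumptions, not taken from the paper.

```python
import xml.etree.ElementTree as ET

DOC = """<employees>
  <employee><name>Ana</name><role>manager</role><salary>5000</salary></employee>
  <employee><name>Ivan</name><role>clerk</role><salary>7000</salary></employee>
</employees>"""

def check_tuple_constraint(xml_text):
    """Illustrative extended tuple constraint: a clerk's salary must not
    exceed the highest manager salary anywhere in the document (the
    condition reaches beyond the clerk's own tuple)."""
    root = ET.fromstring(xml_text)
    manager_salaries = [float(e.findtext("salary"))
                        for e in root.iter("employee")
                        if e.findtext("role") == "manager"]
    cap = max(manager_salaries) if manager_salaries else float("inf")
    violations = []
    for e in root.iter("employee"):
        if e.findtext("role") == "clerk" and float(e.findtext("salary")) > cap:
            violations.append(e.findtext("name"))
    return violations

print(check_tuple_constraint(DOC))  # ['Ivan']
```

A declarative mechanism, as the paper advocates, would express the rule once and have the system generate such checks, rather than hand-coding them per constraint.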

Author(s):  
Antonio Badia

Data warehouses (DW) first appeared in industry in the mid 1980s. When their impact on businesses and database practices became clear, a flurry of research took place in academia in the late 1980s and 1990s. However, the concept of a DW still remains rooted in its practical origins. This entry describes the basic concepts behind a DW while keeping the discussion at an intuitive level. The entry is meant as an overview to complement more focused and detailed entries, and it assumes only familiarity with the relational data model and relational databases.


Author(s):  
Devendra K. Tayal ◽  
P. C. Saxena

In this paper we discuss an important integrity constraint called the multivalued dependency (mvd), which arises as a consequence of the first normal form, in the framework of a newly proposed model called the fuzzy multivalued relational data model. The fuzzy multivalued relational data model proposed in this paper accommodates a wider class of ambiguities by representing the domain of an attribute as a "set of fuzzy subsets". We show that our model is able to represent multiple types of impreciseness occurring in the real world. To compute the equality of two fuzzy sets/values (which occur as tuple values), we use the concept of fuzzy functions. The main objective of this paper is thus to extend mvds to the context of the fuzzy multivalued relational model so that a wider class of impreciseness can be captured. Since mvds may not exist in isolation, a complete axiomatization for a set of fuzzy functional dependencies (ffds) and mvds in a fuzzy multivalued relational schema is provided, and the role of fmvds in obtaining a lossless join decomposition is discussed. We also provide a set of sound inference rules for the fmvds and derive the conditions under which these inference rules are complete. We further derive the conditions for obtaining a lossless join decomposition of a fuzzy multivalued relational schema in the presence of fmvds. Finally, we extend ABU's algorithm to find the lossless join decomposition in the context of fuzzy multivalued relational databases. We apply all of the fmvd concepts developed here to a real-world application, a "Technical Institute", and demonstrate how they capture multiple types of impreciseness.
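Computing the equality of two fuzzy subsets is the primitive on which such dependency checks rest. A minimal sketch of one common conformance measure follows; the measure (minimum over the domain of 1 − |μ_A(x) − μ_B(x)|) and the example values are illustrative assumptions, and the paper's fuzzy functions may differ in detail.

```python
def fuzzy_equality(a, b, domain):
    """Degree to which two fuzzy subsets of `domain` are equal:
    the minimum over the domain of 1 - |mu_A(x) - mu_B(x)|.
    Fuzzy subsets are dicts mapping domain elements to membership
    degrees in [0, 1]; missing elements have degree 0."""
    return min(1.0 - abs(a.get(x, 0.0) - b.get(x, 0.0)) for x in domain)

# Hypothetical tuple values: two "set of fuzzy subsets" skill attributes.
skills1 = {"java": 0.9, "sql": 0.6}
skills2 = {"java": 0.8, "sql": 0.6}
domain = {"java", "sql", "c"}

# Identical memberships yield 1.0; divergence lowers the degree.
print(fuzzy_equality(skills1, skills2, domain))
```

Two tuple values would then be treated as "equal enough" for a dependency check when this degree meets a chosen threshold.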


2011 ◽  
Vol 8 (1) ◽  
pp. 27-40 ◽  
Author(s):  
Srdjan Skrbic ◽  
Milos Rackovic ◽  
Aleksandar Takaci

In this paper we examine the possibilities of extending the relational data model with mechanisms that can handle imprecise, uncertain and inconsistent attribute values using fuzzy logic and fuzzy sets. We present a fuzzy relational data model, used for fuzzy knowledge representation in relational databases, that guarantees a model in third normal form. We also describe a CASE tool for fuzzy database model development, which is apparently the first implementation of such a CASE tool. In this sense, this paper presents a leap forward towards the specification of a methodology for developing fuzzy relational database applications.
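One way a fuzzy attribute value can live in a third-normal-form schema is to factor it into its own table, one row per (entity, domain element, membership degree). The sketch below, using Python's built-in sqlite3, is an illustrative assumption about such a layout, not the paper's actual schema or CASE-tool output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
-- The fuzzy attribute 'age' factored into its own 3NF table:
CREATE TABLE person_age_fuzzy (
    person_id  INTEGER REFERENCES person(id),
    age_label  TEXT,   -- element of the linguistic domain
    membership REAL,   -- degree in [0, 1]
    PRIMARY KEY (person_id, age_label)
);
""")
conn.execute("INSERT INTO person VALUES (1, 'Ana')")
conn.executemany(
    "INSERT INTO person_age_fuzzy VALUES (1, ?, ?)",
    [("young", 0.7), ("middle-aged", 0.3)],
)

# Read back Ana's fuzzy age, strongest membership first.
rows = conn.execute(
    "SELECT age_label, membership FROM person_age_fuzzy "
    "WHERE person_id = 1 ORDER BY membership DESC"
).fetchall()
print(rows)  # [('young', 0.7), ('middle-aged', 0.3)]
```

The point of the factoring is that the base tables stay in normal form while the fuzzy semantics are recovered by joining.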


Author(s):  
Mohammed Ragheb Hakawati ◽  
Yasmin Yacob ◽  
Rafikha Aliana A. Raof ◽  
Mustafa M.Khalifa Jabiry ◽  
Eiad Syaf Alhudiani

Data cleaning, an essential phase for enhancing overall data quality, has been applied for decades to different data models, with the majority of work handling relational datasets as the most dominant data model. However, the XML data model is, besides the relational data model, among the models most commonly used for storing, retrieving, and querying valuable data. In this paper, we introduce a model for detecting and repairing XML data inconsistencies using a set of conditional dependencies. Inconsistencies are detected by joining the existing data source with a set of pattern tableaus expressing the conditional dependencies, and the offending values are then updated to match the proper patterns using a set of SQL statements. This research constitutes the final phase of a cleaning model introduced for XML datasets: first mapping the XML document to a set of related tables, then discovering a set of conditional dependencies (functional and inclusion), and finally applying the proposed algorithms as the closing step of quality enhancement.
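The detect-then-repair loop with one pattern tableau row can be sketched with SQL over a tiny mapped table. The table, the classic zip-to-city conditional functional dependency, and the data below are illustrative assumptions, not the paper's dataset; sqlite3 stands in for the target relational engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT, zip TEXT, city TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)", [
    ("Ann", "07974", "Murray Hill"),
    ("Bob", "07974", "Summit"),      # violates the pattern below
])

# One pattern tableau row of a conditional functional dependency:
# zip = '07974'  ==>  city = 'Murray Hill'
pattern = ("07974", "Murray Hill")

# Detection: join the data against the pattern's condition.
violations = conn.execute(
    "SELECT name FROM customer WHERE zip = ? AND city <> ?", pattern
).fetchall()

# Repair: an UPDATE pulls violating tuples toward the pattern constant.
conn.execute(
    "UPDATE customer SET city = ? WHERE zip = ? AND city <> ?",
    (pattern[1], pattern[0], pattern[1]),
)
remaining = conn.execute(
    "SELECT COUNT(*) FROM customer WHERE zip = ? AND city <> ?", pattern
).fetchone()[0]
print(violations, remaining)  # [('Bob',)] 0
```

In the full model the tableau would hold many such rows, discovered in the earlier dependency-mining phase, and each would drive one detect/repair pair of statements.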


Author(s):  
Shyue-Liang Wang ◽  
Tzung-Pei Hong ◽  
Wen-Yang Lin
We present here a method of using analogical reasoning to infer approximate answers for null queries on similarity-based fuzzy relational databases. Null queries are queries that elicit a null answer from a database. Analogical reasoning assumes that if two situations are known to be similar in some respects, it is likely that they will be similar in others. The application of analogical reasoning to infer approximate answers for null queries using fuzzy functional dependencies and a fuzzy equality relation on the possibility-based fuzzy relational database has been studied. However, the problem of inferring approximate answers has not been fully explored on the similarity-based fuzzy relational data model. In this work, we introduce the concept of approximate dependency and define a similarity measure on the similarity-based fuzzy model, as extensions of the fuzzy functional dependency and the fuzzy equality relation, respectively. Under the framework of reasoning by analogy, our method provides a flexible query answering mechanism for null queries on the similarity-based fuzzy relational data model.
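The analogy step can be sketched as: when a query matches no tuple exactly, score every tuple by its minimum attribute-wise similarity to the query and answer with the target value of the best-scoring tuple. The similarity relation, relation contents, and scoring rule below are illustrative assumptions, not the paper's definitions.

```python
def similarity(a, b, sim_table):
    """Similarity of two attribute values under an explicit,
    symmetric similarity relation (1.0 when identical)."""
    if a == b:
        return 1.0
    return sim_table.get((a, b), sim_table.get((b, a), 0.0))

def approximate_answer(query, rows, target, sim_table):
    """Answer exactly when possible; otherwise (a null query)
    answer by analogy with the most similar tuple."""
    exact = [r[target] for r in rows
             if all(r[k] == v for k, v in query.items())]
    if exact:
        return exact[0]
    def score(row):
        return min(similarity(v, row[k], sim_table)
                   for k, v in query.items())
    return max(rows, key=score)[target]

rows = [
    {"degree": "MSc", "field": "databases", "salary": 60},
    {"degree": "PhD", "field": "networks",  "salary": 80},
]
sims = {("databases", "networks"): 0.4, ("MSc", "PhD"): 0.7}

# Null query: nobody holds a PhD in databases; analogy picks the
# closest tuple (the MSc in databases) and returns its salary.
print(approximate_answer({"degree": "PhD", "field": "databases"},
                         rows, "salary", sims))  # 60
```

An approximate dependency would additionally require that tuples similar on the determining attributes have similar target values, which is what licenses trusting the analogical answer.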


2003 ◽  
Vol 11 (3) ◽  
pp. 225 ◽  
Author(s):  
Mario Pranjić ◽  
Nenad Jukić ◽  
Krešimir Fertalj

Author(s):  
Bálint Molnár ◽  
András Béleczki ◽  
Bence Sarkadi-Nagy

Data structures, and especially the relationships among data entities, have changed in the last couple of years. Network-like graph representations of the data model are becoming more and more common nowadays, since they are more suitable for depicting such data than the well-established relational data model. Graphs can describe large and complex networks, such as social networks, but they are also capable of storing rich information about complex data, which was previously mostly a trait of the relational data model. This can also be achieved with the knowledge representation tool called "hypergraphs". To utilize the possibilities of this model, we need a practical way to store and process hypergraphs. In this paper, we propose a way to store the hypergraph model in the SAP HANA in-memory database system, which has a "Graph Core" engine alongside the relational data model. Graph Core offers many graph algorithms by default; however, it can neither store nor work with hypergraphs, nor are any of these algorithms specifically tailored to hypergraphs. Hence, in this paper, besides a case study of two information systems, we also propose pseudo-code-level algorithms that accommodate hypergraph semantics in order to process our IS model.
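A standard way to fit a hypergraph onto an ordinary graph engine is a bipartite incidence encoding: each hyperedge becomes an extra vertex linked to all of its member vertices. The sketch below shows that encoding in plain Python; it is an illustrative assumption about the general technique, not SAP HANA Graph Core API code, and the example data is invented.

```python
class Hypergraph:
    """A hypergraph kept as an incidence map: hyperedge id -> member
    vertices. On a property-graph engine, each key would become an
    extra 'hyperedge' vertex with plain edges to its members."""

    def __init__(self):
        self.incidence = {}

    def add_edge(self, edge_id, vertices):
        self.incidence[edge_id] = set(vertices)

    def edges_of(self, vertex):
        """All hyperedges containing the vertex."""
        return {e for e, vs in self.incidence.items() if vertex in vs}

    def neighbors(self, vertex):
        """Vertices sharing at least one hyperedge with the vertex."""
        out = set()
        for e in self.edges_of(vertex):
            out |= self.incidence[e]
        out.discard(vertex)
        return out

h = Hypergraph()
# A hyperedge joins any number of vertices at once, e.g. a whole team.
h.add_edge("project_A", {"alice", "bob", "carol"})
h.add_edge("project_B", {"carol", "dave"})
print(sorted(h.neighbors("carol")))  # ['alice', 'bob', 'dave']
```

Ordinary graph algorithms then run on the bipartite form, which is why pseudo-code-level adaptations are needed to make their results respect hyperedge semantics.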

