Deriving Triggers from Integrity Constraint Specifications in the Database Management Systems

10.14311/286 ◽  
2001 ◽  
Vol 41 (6) ◽  
Author(s):  
M. Badawy ◽  
K. Richta

Supporting integrity constraints is essential for database systems. Integrity constraints ensure that the data in a database complies with rules established to keep the information accurate and acceptable. Triggers provide a powerful and flexible means of realizing effective constraint-enforcing mechanisms. Implementing triggers based on constraint specifications follows a few simple rules that are essentially independent of any particular commercial database system. This paper presents these rules, which can be used to derive triggers from constraint specifications, and also compares the advantages of constraints and triggers.
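As a hedged illustration of such a derivation rule (the table, column, and predicate below are invented for the example, and SQLite stands in for a commercial system), a CHECK-style constraint specification can be turned into BEFORE INSERT/UPDATE triggers that abort whenever the negated predicate holds on the NEW row:

```python
import sqlite3

def triggers_from_check(table: str, predicate: str, name: str) -> list[str]:
    # Derivation rule: one trigger per mutating event; each trigger
    # fires when the negated constraint predicate holds on the NEW row.
    return [
        f"CREATE TRIGGER {name}_{ev} BEFORE {ev} ON {table} "
        f"WHEN NOT ({predicate}) "
        f"BEGIN SELECT RAISE(ABORT, 'constraint {name} violated'); END;"
        for ev in ("INSERT", "UPDATE")
    ]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, salary REAL)")
for ddl in triggers_from_check("emp", "NEW.salary >= 0", "emp_salary_nonneg"):
    conn.execute(ddl)

conn.execute("INSERT INTO emp VALUES (1, 1000.0)")    # satisfies the constraint
try:
    conn.execute("INSERT INTO emp VALUES (2, -5.0)")  # rejected by the derived trigger
except sqlite3.IntegrityError as exc:
    rejected = str(exc)
```

Because the generator only templates the table name and predicate, the same rule works unchanged on any system that supports row-level triggers with a WHEN clause.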

Author(s):  
Andreas M. Weiner ◽  
Theo Härder

Since the very beginning of query processing in database systems, cost-based query optimization has been the essential strategy for effectively answering complex queries on large documents. XML documents can be efficiently stored and processed using native XML database management systems. Even though such systems can choose from a huge repertoire of join operators (e.g., Structural Joins and Holistic Twig Joins) and various index access operators to efficiently evaluate queries on XML documents, the development of full-fledged XML query optimizers is still in its infancy. In particular, the evaluation of complex XQuery expressions using these operators is not well understood and needs further research. The extensible, rule-based, cost-based XML query optimization framework proposed in this chapter serves as a testbed for exploring whether and how well-known concepts from relational query optimization (e.g., join reordering) can be reused, and which new techniques can significantly speed up query execution. Using the best practices and an appropriate cost model developed with this framework, it can be turned into a robust cost-based XML query optimizer in the future.
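To make the operator repertoire concrete, here is a hedged Python sketch (an illustration of the general technique, not code from the framework) of a structural join over (start, end) region labels, where node A is an ancestor of node D iff A.start < D.start and D.end < A.end:

```python
def structural_join(ancestors, descendants):
    """Single-pass structural join over two lists of (start, end) region
    labels, each sorted by start. Returns all (ancestor, descendant)
    pairs, relying on tree intervals being properly nested or disjoint."""
    out, active, i = [], [], 0
    for d in descendants:
        # Admit every candidate ancestor that starts before this descendant.
        while i < len(ancestors) and ancestors[i][0] < d[0]:
            active.append(ancestors[i])
            i += 1
        # Keep only ancestors whose interval still encloses d; in a tree,
        # an interval ending earlier cannot enclose any later descendant.
        active = [a for a in active if a[1] > d[1]]
        out.extend((a, d) for a in active)
    return out

# Toy document: book(1,10) > chapter(2,5) > section(3,4), chapter(6,9) > section(7,8)
chapters = [(2, 5), (6, 9)]
sections = [(3, 4), (7, 8)]
pairs = structural_join(chapters, sections)  # matches the twig chapter//section
```

An optimizer choosing between this merge-style operator, a holistic twig join, and an index access is exactly the cost-based decision the framework is designed to explore.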


2008 ◽  
Vol 8 (2) ◽  
pp. 129-165 ◽  
Author(s):  
G. TERRACINA ◽  
N. LEONE ◽  
V. LIO ◽  
C. PANETTA

Abstract: This article considers the problem of reasoning on massive amounts of (possibly distributed) data. Presently, existing proposals show some limitations: (i) the quantity of data that can be handled at once is limited, because reasoning is generally carried out in main memory; (ii) the interaction with external (and independent) Database Management Systems is not trivial and, in several cases, not allowed at all; and (iii) the efficiency of present implementations is still not sufficient for their utilization in complex reasoning tasks involving massive amounts of data. This article provides a contribution in this setting; it presents a new system, called DLVDB, which aims to solve these problems. Moreover, it reports the results of a thorough experimental analysis we have carried out to compare our system with several state-of-the-art systems (both logic-based and database systems) on some classical deductive problems; the other tested systems are LDL++, XSB, Smodels, and three top-level commercial Database Management Systems. DLVDB significantly outperforms even the commercial database systems on recursive queries.
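The kind of recursive query the experiments stress can be illustrated (with an invented parent/ancestor example, not one of the paper's benchmarks) by the classical Datalog ancestor program and its SQL counterpart, evaluated here with SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (p TEXT, c TEXT)")
conn.executemany("INSERT INTO parent VALUES (?, ?)",
                 [("a", "b"), ("b", "c"), ("c", "d")])

# Datalog: ancestor(X, Y) :- parent(X, Y).
#          ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
ancestors = conn.execute("""
    WITH RECURSIVE ancestor(x, y) AS (
        SELECT p, c FROM parent
        UNION
        SELECT parent.p, ancestor.y
        FROM parent JOIN ancestor ON parent.c = ancestor.x
    )
    SELECT x, y FROM ancestor ORDER BY x, y
""").fetchall()
```

A deductive system evaluates the two Datalog rules directly; a relational engine needs the recursive common table expression, and it is on exactly such fixpoint computations over large relations that the reported performance gaps appear.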


2009 ◽  
pp. 28-34
Author(s):  
Wenbing Zhao

The subject of highly available database systems has been studied for more than two decades, and there exist many alternative solutions (Agrawal, El Abbadi, & Steinke, 1997; Kemme, & Alonso, 2000; Patino-Martinez, Jimenez-Peris, Kemme, & Alonso, 2005). In this article, we provide an overview of two of the most popular database high availability strategies, namely database replication and database clustering. The emphasis is given to those that have been adopted and implemented by major database management systems (Davies & Fisk, 2006; Ault & Tumma, 2003).


Information ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 576
Author(s):  
Vitalii Yesin ◽  
Mikolaj Karpinski ◽  
Maryna Yesina ◽  
Vladyslav Vilihura ◽  
Kornel Warwas

The objective of the article is to reveal an approach to hiding the code of stored programs kept in a database. The essence of the approach is the combined use of two methods: a random permutation of the code symbols of a given stored program, located across several rows of an attribute of a database system table, and substitution, whereby each character obtained after the permutation may be replaced with another character randomly selected from the Unicode standard. A legitimate user with the appropriate privileges gains access to the source code of the stored program through the ability to quickly perform the inverse (unmasking) transformation and rewrite the program code into the database. All other users and attackers without knowledge of certain information can only read the codes of stored programs, masked in a format-preserving way. The proposed solution is more efficient than the existing methods of hiding stored-program code provided by the developers of some modern database management systems (DBMSs), since an attacker would need far greater computational and time resources to disclose the source code of the stored programs.
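A much-simplified, hypothetical sketch of the two masking steps might look as follows in Python (it ignores the row-splitting and format preservation of the actual scheme, and the keyed generator standing in for the secret information is invented for the example):

```python
import random

CODESPACE = 0x110000  # size of the Unicode code-point space

def mask(code: str, key: int) -> str:
    rng = random.Random(key)
    # Step 1: random permutation of the stored program's characters.
    perm = list(range(len(code)))
    rng.shuffle(perm)
    permuted = [code[i] for i in perm]
    # Step 2: substitute each character with one shifted by a random
    # Unicode code point drawn from the same keyed stream.
    return "".join(chr((ord(ch) + rng.randrange(CODESPACE)) % CODESPACE)
                   for ch in permuted)

def unmask(masked: str, key: int) -> str:
    rng = random.Random(key)
    perm = list(range(len(masked)))
    rng.shuffle(perm)             # replay step 1 to recover the permutation
    chars = [chr((ord(ch) - rng.randrange(CODESPACE)) % CODESPACE)
             for ch in masked]    # replay step 2 to undo the substitution
    out = [""] * len(masked)
    for pos, src in enumerate(perm):
        out[src] = chars[pos]     # put each character back in its place
    return "".join(out)
```

A holder of the key inverts both steps in linear time; without it, an attacker faces the product of the permutation space and the per-character substitution space.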


Author(s):  
Shivankur Thapliyal

Abstract: In today's exceptional information age, day-to-day transactions involving huge, sensitive data sets, on the order of petabytes (2^50 bytes) and yottabytes (2^80 bytes), are increasing at enormous speed in cloud data storage environments. Cloud storage is one of the most capable and reliable platforms for storing large data sets at both the enterprise and local levels, because the cloud lets users fetch or restore data from any geographical location by logging in with their credentials. However, managing such large data sets remains very complex with respect to maintenance, consistency, and data security, because keeping them fully consistent and intact is a genuinely demanding task. In this paper we therefore propose a distributed database management system for a cloud interface that preserves data security and fully upholds the CIA triad of information security (Confidentiality, Integrity, Availability).
We also improve on the mechanisms of traditional distributed database management systems. Traditionally, to preserve information and enable recovery after failures, data belonging to the same person may be stored at different locations. In the proposed architecture, all records belonging to the same person are instead stored in a single database rather than scattered across several, while the location of the data changes over time: the contents of one database are periodically moved to another database, with the security properties preserved throughout. The model is also capable of running distributed database management systems based on the older, traditional methodology. A detailed description of these models and of the communication infrastructure among the different clouds appears in the following sections of this paper. Keywords: Cloud based Distributed Database system model, Distributed system, Distributed Database model of CLOUD, Cloud Distributed Database, CLOUD based database systems


2013 ◽  
Vol 10 (1) ◽  
pp. 283-320 ◽  
Author(s):  
Slavica Aleksic ◽  
Sonja Ristic ◽  
Ivan Lukovic ◽  
Milan Celikovic

The inverse referential integrity constraints (IRICs) are a specialization of non-key-based inclusion dependencies (INDs). Key-based INDs (referential integrity constraints) may be fully enforced by most current relational database management systems (RDBMSs). On the contrary, non-key-based INDs are completely disregarded by current RDBMSs, obliging users to manage them via custom procedures and/or triggers. In this paper we present an approach to the automated implementation of IRICs, integrated in the SQL Generator tool that we developed as a part of the IIS


2020 ◽  
Vol 5 (12) ◽  
pp. 76-81
Author(s):  
HyunChul Joh

Popularity and market share are important indices for software users and vendors, since more popular systems tend to offer better user experiences and environments. Periodic fluctuations in popularity and market share can be vital factors when estimating potential risk in target systems. Meanwhile, software vulnerabilities in major relational database management systems are detected every now and then. Today, almost every organization depends on such database systems to store and retrieve all kinds of information, for reasons of security, effectiveness, etc. These organizations have to manage and evaluate the level of risk created by software vulnerabilities so that they can avoid potential losses before the security defects damage their reputations. Here, we examine seasonal fluctuations in software security risk in four major database systems, namely MySQL, MariaDB, Oracle Database, and Microsoft SQL Server.


Author(s):  
Genoveva Vargas-Solar

Database management systems (DBMS) are becoming part of environments composed of large-scale, distributed, heterogeneous networks of autonomous, loosely coupled components. In particular, federated database management systems (FDBMS) can be seen as networks that integrate a number of pre-existing autonomous DBMS, which can be homogeneous or heterogeneous. They can use different underlying data models, data definition and manipulation facilities, transaction management, and concurrency control mechanisms. DBMS in the federation can be integrated by a mediator providing a unified view of data: a global schema, a global query language, a global catalogue, and a global transaction manager. The underlying transaction model considers, in general, a set of transactions synchronized by a global transaction. Synchronization is achieved using protocols such as the Two-Phase Commit protocol. FDBMS applications are built upon this global environment, and they interact with the mediator to execute global database operations (i.e., operations that can concern various DBMS in the federation).
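The global synchronization step can be sketched as follows (a minimal, in-process model of Two-Phase Commit with invented class names, not a real FDBMS mediator):

```python
class Participant:
    """A local DBMS taking part in a global transaction."""
    def __init__(self, name: str, can_commit: bool = True):
        self.name, self.can_commit, self.state = name, can_commit, "init"

    def prepare(self) -> bool:          # phase 1: vote yes/no on commit
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):                   # phase 2, after a unanimous yes
        self.state = "committed"

    def abort(self):                    # phase 2, after any no vote
        self.state = "aborted"

def two_phase_commit(participants) -> str:
    # Phase 1 (voting): the coordinator collects votes; any "no" aborts.
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()                  # Phase 2: global commit
        return "committed"
    for p in participants:
        p.abort()                       # Phase 2: global abort
    return "aborted"
```

A production coordinator would additionally log each decision durably before phase 2, so that the global outcome can be recovered after a crash; that bookkeeping is omitted here.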


2021 ◽  
Vol 50 (3) ◽  
pp. 29-31
Author(s):  
Marianne Winslett ◽  
Vanessa Braganholo

Welcome to this installment of the ACM SIGMOD Record's series of interviews with distinguished members of the database community. I'm Marianne Winslett, and today I have here with me Joy Arulraj, who won the 2019 ACM SIGMOD Jim Gray Dissertation Award for his thesis entitled The Design and Implementation of Non-volatile Memory Database Management Systems. Joy is now an Assistant Professor at Georgia Tech, and his PhD is from Carnegie Mellon University, where he worked with Andy Pavlo, who won this same award in his time. So, Joy, welcome!

