Joy Arulraj Speaks Out on Non-Volatile Memory Database Systems

2021 ◽  
Vol 50 (3) ◽  
pp. 29-31
Author(s):  
Marianne Winslett ◽  
Vanessa Braganholo

Welcome to this installment of the ACM SIGMOD Record's series of interviews with distinguished members of the database community. I'm Marianne Winslett, and today I have here with me Joy Arulraj, who won the 2019 ACM SIGMOD Jim Gray Dissertation Award for his thesis entitled The Design and Implementation of Non-volatile Memory Database Management Systems. Joy is now an Assistant Professor at Georgia Tech, and his PhD is from Carnegie Mellon University, where he worked with Andy Pavlo, who won this same award in his time. So, Joy, welcome!

Author(s):  
Andreas M. Weiner ◽  
Theo Härder

Since the very beginning of query processing in database systems, cost-based query optimization has been the essential strategy for effectively answering complex queries on large documents. XML documents can be efficiently stored and processed using native XML database management systems. Even though such systems can choose from a huge repertoire of join operators (e.g., Structural Joins and Holistic Twig Joins) and various index access operators to efficiently evaluate queries on XML documents, the development of full-fledged XML query optimizers is still in its infancy. In particular, the evaluation of complex XQuery expressions using these operators is not well understood and needs further research. The extensible, rule-based, and cost-based XML query optimization framework proposed in this chapter serves as a testbed for exploring how and whether well-known concepts from relational query optimization (e.g., join reordering) can be reused, and which new techniques can make a significant contribution to speeding up query execution. Using the best practices and an appropriate cost model developed with this framework, it can be turned into a robust cost-based XML query optimizer in the future.
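
As a rough illustration of one operator family mentioned above (not the chapter's framework or implementation), the following minimal sketch shows a merge-based structural join over XML nodes encoded as (start, end) intervals, where an ancestor contains a descendant iff its interval encloses the descendant's. All names here are illustrative assumptions.

```python
# Minimal sketch of a merge-based structural join over interval-encoded XML nodes.
# Node A is an ancestor of node D iff A.start < D.start and D.end < A.end.
# Illustrative only; not the optimizer framework described in the abstract above.

def structural_join(ancestors, descendants):
    """Both inputs must be sorted by their start position."""
    results = []
    stack = []          # ancestors whose interval may still contain later descendants
    a, d = 0, 0
    while a < len(ancestors) and d < len(descendants):
        a_start, a_end = ancestors[a]
        d_start, d_end = descendants[d]
        if a_start < d_start:
            stack.append((a_start, a_end))   # open this ancestor's interval
            a += 1
        else:
            # drop ancestors whose interval closed before this descendant starts,
            # then pair the descendant with every ancestor still open
            stack = [(s, e) for (s, e) in stack if e > d_start]
            results.extend(((s, e), (d_start, d_end)) for (s, e) in stack)
            d += 1
    # remaining descendants can still match ancestors already on the stack
    for d_start, d_end in descendants[d:]:
        stack = [(s, e) for (s, e) in stack if e > d_start]
        results.extend(((s, e), (d_start, d_end)) for (s, e) in stack)
    return results

# Example: //a//b over a tiny document
ancestors = [(1, 10), (3, 6)]        # <a> elements
descendants = [(4, 5), (8, 9)]       # <b> elements
print(structural_join(ancestors, descendants))
# [((1, 10), (4, 5)), ((3, 6), (4, 5)), ((1, 10), (8, 9))]
```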


2008 ◽  
Vol 8 (2) ◽  
pp. 129-165 ◽  
Author(s):  
G. TERRACINA ◽  
N. LEONE ◽  
V. LIO ◽  
C. PANETTA

Abstract: This article considers the problem of reasoning on massive amounts of (possibly distributed) data. Existing proposals show some limitations: (i) the quantity of data that can be handled simultaneously is limited, because reasoning is generally carried out in main memory; (ii) the interaction with external (and independent) Database Management Systems is not trivial and, in several cases, not allowed at all; and (iii) the efficiency of present implementations is still not sufficient for their use in complex reasoning tasks involving massive amounts of data. This article provides a contribution in this setting; it presents a new system, called DLVDB, which aims to solve these problems. Moreover, it reports the results of a thorough experimental analysis carried out to compare our system with several state-of-the-art systems (both logic-based and database systems) on some classical deductive problems; the other tested systems are LDL++, XSB, Smodels, and three top-level commercial Database Management Systems. DLVDB significantly outperforms even the commercial database systems on recursive queries.
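
DLVDB's own evaluation strategy pushes work into an external DBMS and is not reproduced here; purely as an illustration of the kind of recursive query such deductive systems are compared on (e.g., transitive closure), a minimal in-memory semi-naive fixpoint might look like the following sketch.

```python
# Illustrative sketch of semi-naive bottom-up evaluation of a recursive rule:
#   reach(X, Y) :- edge(X, Y).
#   reach(X, Y) :- reach(X, Z), edge(Z, Y).
# A toy in-memory example only; it is NOT DLVDB, which evaluates such programs
# inside an external database system rather than in main memory.

def transitive_closure(edges):
    reach = set(edges)          # facts derived so far
    delta = set(edges)          # facts derived in the last iteration
    while delta:
        new_facts = {(x, y2) for (x, y) in delta
                             for (y1, y2) in edges if y == y1} - reach
        reach |= new_facts
        delta = new_facts       # only join the newly derived facts next round
    return reach

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```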


2009 ◽  
pp. 28-34
Author(s):  
Wenbing Zhao

The subject of highly available database systems has been studied for more than two decades, and many alternative solutions exist (Agrawal, El Abbadi, & Steinke, 1997; Kemme & Alonso, 2000; Patino-Martinez, Jimenez-Peris, Kemme, & Alonso, 2005). In this article, we provide an overview of two of the most popular database high availability strategies, namely database replication and database clustering. Emphasis is given to those that have been adopted and implemented by major database management systems (Davies & Fisk, 2006; Ault & Tumma, 2003).


Author(s):  
Shivankur Thapliyal

Abstract: In today's exceptional information age, day-to-day transactions involve huge sensitive data sets, on the order of petabytes (PB, 2^50 bytes) and yottabytes (YB, 2^80 bytes), that grow at enormous speed in cloud data storage environments. Cloud storage is one of the most capable and reliable platforms for storing large data sets at both the enterprise and the local level, because the cloud provides online access: data can be fetched or restored from any geographical location by logging in with the corresponding credentials. However, maintaining and spreading such large data sets becomes very complex with respect to consistency and data security, because keeping them fully consistent and intact is a genuinely demanding task. In this paper, we therefore propose a distributed database management system for a cloud interface that also preserves data security and fully upholds the CIA triad of information security (Confidentiality, Integrity, Availability/Authenticity). We also revise the mechanisms of traditional distributed database management systems: traditionally, to preserve information and enable recovery after failures, data belonging to the same person may be stored at different locations. In the newly proposed distributed database architecture, all records belonging to one person are stored in a single database rather than scattered across several databases, but the location of that data can change over time, i.e., the contents of one database may be moved to another database while the security features are preserved. The model is also capable of running older, traditional distributed database management systems. A detailed description of these models and of the communication infrastructure among different clouds is given in the subsequent sections of this paper. Keywords: Cloud based Distributed Database system model, Distributed system, Distributed Database model of CLOUD, Cloud Distributed Database, CLOUD based database systems
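
The paper's actual protocol is described in its later sections; purely to illustrate the placement idea summarized above (all records of one person kept in a single database whose location may migrate), a minimal sketch under assumed, hypothetical names might look like this. It is not the paper's model and omits the security machinery.

```python
# Hypothetical sketch of the placement idea described above: every record that
# belongs to one person lives in a single database node, and that node can be
# changed by migrating the person's whole record set. Names are illustrative only.

class CloudDirectory:
    def __init__(self):
        self.location = {}            # person_id -> database node holding all their records
        self.databases = {}           # node name -> {person_id: [records]}

    def add_node(self, node):
        self.databases[node] = {}

    def insert(self, person_id, record, preferred_node):
        # all records of the same person are kept together on one node
        node = self.location.setdefault(person_id, preferred_node)
        self.databases[node].setdefault(person_id, []).append(record)

    def migrate(self, person_id, target_node):
        # move the person's whole record set to another database node
        source = self.location[person_id]
        if source == target_node:
            return
        records = self.databases[source].pop(person_id, [])
        self.databases[target_node][person_id] = records
        self.location[person_id] = target_node

    def lookup(self, person_id):
        return self.databases[self.location[person_id]][person_id]

d = CloudDirectory()
d.add_node("cloud-A"); d.add_node("cloud-B")
d.insert("alice", {"txn": 1}, "cloud-A")
d.insert("alice", {"txn": 2}, "cloud-B")   # still routed to cloud-A with her first record
d.migrate("alice", "cloud-B")
print(d.lookup("alice"))                   # both records now served from cloud-B
```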


2020 ◽  
Vol 5 (12) ◽  
pp. 76-81
Author(s):  
HyunChul Joh

Popularity and market share are important indexes for software users and vendors, since more popular systems tend to provide better user experiences and environments. Periodic fluctuations in popularity and market share can therefore be vital factors when estimating the potential risk in target systems. Meanwhile, software vulnerabilities in major relational database management systems are detected every now and then. Today, almost every organization depends on such database systems to store and retrieve all kinds of information, for reasons of security, effectiveness, etc. They have to manage and evaluate the level of risk created by software vulnerabilities so that they can avoid potential losses before the security defects damage their reputations. Here, we examine seasonal fluctuations from the viewpoint of software security risk in four major database systems, namely MySQL, MariaDB, Oracle Database, and Microsoft SQL Server.
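
As a hedged illustration of how such seasonal fluctuations can be inspected (the paper's own data and method are not reproduced here), one can group vulnerability disclosure dates by calendar month and compare per-month averages, as in the following sketch with made-up dates.

```python
# Illustrative sketch: group vulnerability disclosure dates by calendar month to
# expose seasonal fluctuations. The dates below are invented; a real analysis
# would use CVE publication dates for MySQL, MariaDB, Oracle Database, and SQL Server.

from collections import Counter
from datetime import date

disclosures = [
    date(2018, 1, 15), date(2018, 1, 22), date(2018, 7, 3),
    date(2019, 1, 9),  date(2019, 4, 30), date(2019, 7, 18),
    date(2020, 1, 5),  date(2020, 7, 21), date(2020, 10, 2),
]

per_month = Counter(d.month for d in disclosures)
years = len({d.year for d in disclosures})

for month in range(1, 13):
    avg = per_month.get(month, 0) / years
    print(f"month {month:2d}: {avg:.2f} disclosures/year on average")
```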


10.14311/286 ◽  
2001 ◽  
Vol 41 (6) ◽  
Author(s):  
M. Badawy ◽  
K. Richta

Supporting integrity constraints is essential for database systems. Integrity constraints are used to ensure that the data in a database complies with rules that have been set to establish accurate and acceptable information for a database. Triggers provide a very powerful and flexible means to realize effective constraint-enforcing mechanisms. Implementing triggers based on constraint specifications follows some simple rules that are basically independent of any particular commercial database system. This paper gives these rules, which can be used to derive triggers from constraint specifications. A comparison of the advantages of constraints and triggers is also given.
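
The derivation rules themselves are given in the paper; as a rough, hypothetical illustration of the general idea (mapping a declarative constraint specification to a compensating trigger), a generator might look like the sketch below. The emitted SQL targets no particular commercial system, and all names are assumptions for illustration.

```python
# Hypothetical sketch of deriving a trigger from a simple constraint specification.
# Rule illustrated: "column must satisfy a predicate" becomes a BEFORE INSERT OR
# UPDATE trigger that rejects violating rows. The SQL dialect is generic and not
# tied to any specific commercial DBMS.

def constraint_to_trigger(table, column, predicate, name):
    return f"""
CREATE TRIGGER {name}
BEFORE INSERT OR UPDATE ON {table}
FOR EACH ROW
BEGIN
  IF NOT (NEW.{column} {predicate}) THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Constraint {name} violated on {table}.{column}';
  END IF;
END;
""".strip()

# Example: salaries must be non-negative
print(constraint_to_trigger("employee", "salary", ">= 0", "chk_salary_nonneg"))
```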

