Developing Quality Complex Database Systems

Latest Publications

Total documents: 16 (five years: 0)
H-index: 2 (five years: 0)
Published by: IGI Global
ISBN: 9781878289889, 9781930708822

Author(s):
Cheryl L. Dunn,
Severin V. Grabiski

In the past several years, huge investments have been made in enterprise resource planning (ERP) systems and related applications. While the integrated databases and data warehouses in such systems provide value, more value could be realized if the databases could more semantically reflect the underlying reality of the organization. Inter-enterprise commerce can be facilitated by ontologically based systems with common semantics (Geerts and McCarthy, 2000; Haugen and McCarthy, 2000) instead of reliance on electronic data interchange (EDI) standards. This chapter presents a normative semantic model for enterprise information systems that has its roots in transaction processing information systems. Empirical research on semantically modeled information systems is reviewed, and an example company’s semantic model is provided as a proof of concept. This model is used as the basis for a discussion of its application to ERP systems and to inter-organizational systems. Future trends and research directions are also discussed.
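The chapter's own model is not reproduced here, but the cited McCarthy line of work centers on the REA (Resources-Events-Agents) pattern for transaction processing systems. The sketch below is a minimal, hypothetical relational rendering of one REA-style economic exchange; all table and column names are invented for illustration, with SQLite standing in for the DBMS.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Resources, events and agents kept as separate relations.
CREATE TABLE resource (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE agent    (id INTEGER PRIMARY KEY, name TEXT, role TEXT);
CREATE TABLE event    (id INTEGER PRIMARY KEY, kind TEXT,
                       resource_id INTEGER REFERENCES resource(id),
                       agent_id    INTEGER REFERENCES agent(id),
                       amount REAL);
-- Duality: each economic exchange pairs a 'give' event with a 'take' event.
CREATE TABLE duality  (give_event INTEGER REFERENCES event(id),
                       take_event INTEGER REFERENCES event(id));
""")
conn.execute("INSERT INTO resource VALUES (1, 'widget stock')")
conn.execute("INSERT INTO agent VALUES (1, 'Acme Corp', 'customer')")
conn.executemany("INSERT INTO event VALUES (?, ?, 1, 1, ?)",
                 [(1, 'sale', 100.0), (2, 'cash_receipt', 100.0)])
conn.execute("INSERT INTO duality VALUES (1, 2)")

# The exchange can be read back with its economic meaning intact:
# which event settled the sale?
kinds = [r[0] for r in conn.execute(
    "SELECT e.kind FROM duality d JOIN event e ON e.id = d.take_event")]
print(kinds)  # ['cash_receipt']
```

The point of such a schema is that the tables mirror economic phenomena (exchanges between agents over resources) rather than artifacts of a particular application, which is what makes common semantics across enterprises conceivable.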


Author(s):  
Ido Millet

Relational databases and the current SQL standard are poorly suited to the retrieval of hierarchical data. After demonstrating the problem, this chapter describes how two approaches to data denormalization can facilitate hierarchical data retrieval. Both approaches solve the problem of data retrieval but, as expected, come at the cost of difficult and potentially inconsistent data updates. The chapter then describes how these update-related shortcomings can be addressed via back-end (trigger) logic. Using a proper combination of denormalized data structure and back-end logic, we can have the best of both worlds: easy data retrieval and simple, consistent data updates.
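The chapter's two denormalization approaches are not spelled out above; as one common example of the pattern it describes, the sketch below stores a materialized-path column alongside the parent pointer, with a trigger as the back-end logic that keeps the redundant path consistent. All names are invented, and SQLite stands in for the DBMS.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    parent_id INTEGER REFERENCES category(id),
    path      TEXT  -- denormalized materialized path, e.g. '/1/2/3/'
);

-- Back-end logic: derive the path from the parent on insert, so
-- applications cannot store an inconsistent value.
CREATE TRIGGER category_path AFTER INSERT ON category
BEGIN
    UPDATE category
    SET path = COALESCE((SELECT path FROM category WHERE id = NEW.parent_id),
                        '/') || NEW.id || '/'
    WHERE id = NEW.id;
END;
""")

rows = [(1, "Electronics", None), (2, "Computers", 1),
        (3, "Laptops", 2), (4, "Furniture", None)]
conn.executemany(
    "INSERT INTO category (id, name, parent_id) VALUES (?, ?, ?)", rows)

# All descendants of 'Electronics' in one simple query -- no recursive
# self-joins, just a prefix match on the denormalized column.
descendants = [r[0] for r in conn.execute(
    "SELECT name FROM category WHERE path LIKE '/1/%' AND id != 1 ORDER BY id")]
print(descendants)  # ['Computers', 'Laptops']
```

This illustrates the trade-off the abstract names: retrieval collapses to a single prefix query, while the update side is delegated to trigger logic instead of application code.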


Author(s):  
Ronald Maier

This chapter presents a concept for the evaluation of data modeling which is based on existing theoretical approaches and on three empirical studies conducted or supervised by the author. The main results of these studies suggest extending existing approaches to the evaluation of data models: it is necessary to focus more on organizational issues of data modeling and on process rather than product quality, to consider different application scenarios of data modeling, and to distinguish the enterprise-wide evaluation of data modeling from the evaluation of single projects that use data modeling. The evaluation concept presented here focuses on the evaluation of single data-modeling projects and consists of recommendations for the evaluation procedure, the persons involved, the instruments, the design of important organizational dimensions, and some concrete measures of process and product quality.


Author(s):  
Suk-Chung Yoon

The contribution of our approach is a framework for processing and answering queries flexibly by applying data mining techniques. In addition, we suggest strategies to reduce the computational complexity of generating advanced query answers. We believe that our approach significantly enhances the user-machine interface of conventional databases with additional features. This chapter is structured as follows. The next section introduces motivating examples that show the advantages of advanced query processing. Following that, we survey related work on intelligent query processing. Then we present our approach to processing different types of queries using data mining techniques. The final section discusses our conclusions and possible extensions of this work for future research.
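The chapter's actual techniques are not detailed above; purely as an illustration of the general idea of flexible query answering, the hypothetical sketch below relaxes a query that returns no exact match by answering with items from the same mined cluster. The hard-coded price ranges stand in for a concept hierarchy that a real system would discover by mining; all data and names are invented.

```python
# Cars as (name, price); a fixed 1-D clustering stands in for a
# mined concept hierarchy over prices.
cars = [("A", 9000), ("B", 9500), ("C", 21000), ("D", 22000), ("E", 45000)]

def cluster(price):
    # Hypothetical mined price concepts: 'economy', 'mid', 'luxury'.
    if price < 15000:
        return "economy"
    if price < 30000:
        return "mid"
    return "luxury"

def flexible_query(target_price):
    """Return exact matches, or a neighborhood answer if there are none."""
    exact = [c for c in cars if c[1] == target_price]
    if exact:
        return exact
    # Relaxation: instead of an empty result, answer with all cars in
    # the same mined cluster as the requested price.
    return [c for c in cars if cluster(c[1]) == cluster(target_price)]

print(flexible_query(10000))  # no exact match -> [('A', 9000), ('B', 9500)]
print(flexible_query(21000))  # exact match    -> [('C', 21000)]
```

A conventional database would simply return an empty set for the first query; the mined clustering is what lets the system substitute a useful approximate answer.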


Author(s):  
Clare Atkins

An important contributor to the success of any complex database development is the comprehensive and accurate capture and recording of the users’ information requirements. Indeed, both the technical and economic success of the system under development is likely to rest largely on the quality of the data structure design and the information requirement analysis on which it is based. The data models, which represent the results of the analysis and design activities necessary to achieve this quality outcome, are therefore critical components of the database development process. Nevertheless, research suggests that this modeling is not always done well and in some cases is not done at all (e.g., Hitchman, 1995). However, implicit in the creation of a database is the design of a data model, and thus the only optional feature is the level of formality that has been followed in its development (Simsion, 1994). Since the publication of Chen’s (1976) original description of an Entity-relationship (E-R) model, a significant amount of academic research into data modeling has concentrated on providing ever richer, more complex and more formal models with which to better represent reality (Hirschheim, Klein & Lyytinen, 1995). In addition, researchers and practitioners have also recognized the importance of data models as a means of communication. However, little attention has been given to examining the appropriateness of various modeling techniques to the very different requirements of the analysis and design activities that they support, although matching tools to activities would seem to be an essential prerequisite for success. The INTECoM framework, described in this chapter, was developed to emphasize and better serve the differing nature of these activities, and also to improve access for all users to both the process and the outcome of data modeling. 
The framework was initially instantiated with two widely used data modeling techniques, the NIAM-CSDP (Natural Language Information Analysis-Conceptual Schema Design Procedure) and the Entity-Relationship (E-R) approach. This instantiation was chosen primarily because the two techniques represent significantly different ways of working (Bronts, Brouwer, Martens & Proper, 1995) towards the construction of a relational database. This is not to suggest that other instantiations are not possible or desirable, particularly where the target DBMS is of a different paradigm.


Author(s):
Manoj K. Singh,
Mahesh S. Raisinghani

The concept and philosophy behind supply chain management is to integrate and optimize business processes across all partners in the entire production chain. Since these are not simple chains but complex networks, data mining can facilitate tuning them to the needs of the market. Data mining is a set of techniques used to uncover previously obscure or unknown patterns and relationships in very large databases. It provides better information for achieving competitive advantage, increases operating efficiency, reduces operating costs, and offers flexibility by letting users pull the data they need instead of having the system push it. However, making sense of all this data is an enormous technological and logistical challenge. This chapter explains the key concepts of data mining, its methodology, and its application to the supply chain management of complex networks.


Author(s):
Laura C. Rivero,
Jorge H. Doorn,
Viviana E. Ferraggine

The evaluation of conceptual schemas of existing databases may result in the discovery of inclusion dependencies. An inclusion dependency is the requirement that the values of certain attributes in one table be a subset of the values of attributes in another table. When the referenced attributes form a key of their table, the inclusion dependency is key-based. Key-based inclusion dependencies are fully enforced by most current database systems. On the contrary, if the referenced attributes are not the key of the relation, the inclusion dependency is non-key-based. This kind of inclusion dependency is completely disregarded by current systems, obliging users to manage it via special-case code or triggers. This implies an excessive effort to maintain integrity and develop applications, among other inconveniences. The goal of this chapter is to give a heuristic for redesigning the conceptual schema, based on the identification of hidden business rules and the conversion of non-key-based inclusion dependencies into key-based ones.
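As a hypothetical illustration of why non-key-based inclusion dependencies force special-case code: a FOREIGN KEY cannot reference a non-unique column, so the subset requirement has to be enforced by a trigger instead. The sketch below, in Python with SQLite and entirely invented names, shows the kind of trigger the chapter's heuristic would let designers avoid.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE warehouse (
    id   INTEGER PRIMARY KEY,
    city TEXT NOT NULL           -- NOT unique: several warehouses per city
);
CREATE TABLE shipment (
    id        INTEGER PRIMARY KEY,
    dest_city TEXT NOT NULL      -- must be a subset of warehouse.city
);

-- warehouse.city is not a key, so a FOREIGN KEY cannot enforce the
-- inclusion dependency; a trigger has to do it by hand.
CREATE TRIGGER shipment_dest_check BEFORE INSERT ON shipment
WHEN NOT EXISTS (SELECT 1 FROM warehouse WHERE city = NEW.dest_city)
BEGIN
    SELECT RAISE(ABORT, 'dest_city violates inclusion dependency');
END;
""")

conn.executemany("INSERT INTO warehouse (city) VALUES (?)",
                 [("Rosario",), ("Rosario",), ("Tandil",)])
conn.execute("INSERT INTO shipment (dest_city) VALUES ('Tandil')")  # accepted
try:
    conn.execute("INSERT INTO shipment (dest_city) VALUES ('Salta')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # no warehouse in Salta: the trigger aborts the insert
print(rejected)  # True
```

Converting such a dependency into a key-based one (for instance, by factoring cities into their own table with `city` as key) would let the plain FOREIGN KEY mechanism take over from this hand-written logic.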


Author(s):
Esperenza Marcos,
Paloma Caceres

In spite of the fact that relational databases still hold first place in the market, object-oriented databases are becoming more widely accepted every day. Relational databases are suitable for traditional applications supporting management tasks such as payroll or library management. Recently, as a result of hardware improvements, more sophisticated applications have emerged. Engineering applications, such as CAD/CAM (Computer Aided Design/Computer Aided Manufacturing), CASE (Computer Aided Software Engineering) or CIM (Computer Integrated Manufacturing), office automation systems, and multimedia systems such as GIS (Geographic Information Systems) or medical information systems, can be characterized as consisting of complex objects related by complex interrelationships. Representing such objects and relationships in the relational model implies that the objects must be decomposed into a large number of tuples. Thus, a considerable number of joins is necessary to retrieve an object and, when tables are too deeply nested, performance is dramatically reduced (Bertino and Marcos, 2000).
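To make the decomposition cost concrete, the hypothetical sketch below flattens a tiny CAD-style assembly into relational tuples and shows that reassembling even one object already takes a join per nesting level. SQLite and all table and column names are invented stand-ins.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- One 'complex object' (an assembly) decomposed across three tables.
CREATE TABLE assembly (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE part     (id INTEGER PRIMARY KEY, assembly_id INTEGER, name TEXT);
CREATE TABLE vertex   (id INTEGER PRIMARY KEY, part_id INTEGER, x REAL, y REAL);
""")
conn.execute("INSERT INTO assembly VALUES (1, 'bracket')")
conn.executemany("INSERT INTO part VALUES (?, 1, ?)", [(1, "arm"), (2, "base")])
conn.executemany("INSERT INTO vertex VALUES (?, ?, ?, ?)",
                 [(1, 1, 0.0, 0.0), (2, 1, 1.0, 0.0), (3, 2, 0.0, 1.0)])

# Rebuilding ONE object requires a two-way join here; each extra level
# of nesting in the object graph adds another join.
rows = conn.execute("""
    SELECT p.name, v.x, v.y
    FROM assembly a
    JOIN part p   ON p.assembly_id = a.id
    JOIN vertex v ON v.part_id = p.id
    WHERE a.name = 'bracket'
    ORDER BY v.id
""").fetchall()
print(len(rows))  # 3 vertex rows reassembled from three tables
```

An object-oriented database would store and fetch the bracket as a single unit, which is the performance argument the abstract makes for deeply nested engineering data.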


Author(s):  
John A. Hoxmeier

Databases are a critical element of virtually all conventional and e-business applications. How does an organization know if the information derived from the database is any good? To ensure a quality database application, should the emphasis during model development be on the application of quality assurance metrics (designing it right)? A large number of database applications fail or are unusable. It is evident that a quality process does not necessarily lead to a usable database product. A database application can also be well-formed with high data quality but lack semantic or cognitive fidelity (the right design). This chapter expands on the growing body of literature in the area of data quality by proposing additions to a hierarchy of database quality dimensions that include model and behavioral factors in addition to process and data factors.


Author(s):
Eduardo Fernandez-Medina Paton,
Mario G. Piattini

Rapid technological advances in communications, transport, banking, manufacturing, medicine and other fields are creating more sophisticated information requirements in organizations worldwide. As a result, large quantities of data must be handled, while a high level of security must be maintained in order to ensure information needs are met. The alarming growth in electronic crime is forcing organizations to look at how information systems can maintain security while meeting the technological needs of real-time systems in a global market. It is important, therefore, that security requirements are taken into account in information systems analysis and design.

