Intelligent Databases
Latest Publications


TOTAL DOCUMENTS: 11 (FIVE YEARS: 0)

H-INDEX: 2 (FIVE YEARS: 0)

Published By IGI Global

ISBN: 9781599041209, 9781599041223

2011, pp. 197-237
Author(s): J. Gerard Wolff

This chapter describes some of the kinds of “intelligence” that may be exhibited by an intelligent database system based on the SP theory of computing and cognition. The chapter complements an earlier paper on the SP theory as the basis for an intelligent database system (Wolff, forthcoming b) but it does not depend on a reading of that earlier paper. The chapter introduces the SP theory and its main attractions as the basis for an intelligent database system: that it uses a simple but versatile format for diverse kinds of knowledge, that it integrates and simplifies a range of AI functions, and that it supports established database models when that is required. Then with examples and discussion, the chapter illustrates aspects of “intelligence” in the system: pattern recognition and information retrieval, several forms of probabilistic reasoning, the analysis and production of natural language, and the unsupervised learning of new knowledge.
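For readers unfamiliar with the SP framework, the following Python sketch illustrates, very loosely, how pattern recognition and retrieval can be framed as finding the stored pattern that best aligns with a new one. The scoring function, the example patterns, and all names are illustrative assumptions, not the actual SP machinery.

```python
# A minimal, illustrative sketch (not the actual SP machinery): knowledge is
# stored as flat sequences of symbols ("patterns"), and recognition/retrieval
# is framed as finding the stored pattern that best aligns with a new pattern.
from difflib import SequenceMatcher

# Hypothetical "Old" patterns playing the role of stored knowledge.
OLD_PATTERNS = [
    ["animal", "bird", "wings", "feathers", "can-fly"],
    ["animal", "mammal", "fur", "four-legs"],
    ["vehicle", "car", "wheels", "engine"],
]

def alignment_score(new, old):
    """Crude stand-in for SP multiple alignment: matching-block ratio."""
    return SequenceMatcher(None, new, old).ratio()

def recognise(new_pattern, old_patterns=OLD_PATTERNS, top_k=2):
    """Return the top_k stored patterns that best align with the new pattern."""
    ranked = sorted(old_patterns,
                    key=lambda old: alignment_score(new_pattern, old),
                    reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    # A partial description of an entity; retrieval fills in the rest.
    print(recognise(["animal", "wings", "can-fly"]))
```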


2011, pp. 137-166
Author(s): Gian Piero Zarri

In this chapter, we first highlight the ubiquity and importance of so-called ‘narrative’ information, showing that the usual ontological tools are unable to offer complete and reliable solutions for representing and exploiting this type of information. We then supply some details about NKRL (Narrative Knowledge Representation Language), a fully implemented knowledge representation and inferencing environment created especially for the ‘intelligent’ exploitation of narrative knowledge. The main innovation of NKRL consists in associating an ‘ontology of events’ with the traditional ontologies of concepts: a new sort of hierarchical organization in which the nodes correspond to n-ary structures that formally represent generic classes of elementary events such as ‘move a physical object’, ‘be present in a place’, or ‘send/receive a message’. More complex, second-order tools based on the ‘reification’ principle allow the encoding of ‘connectivity phenomena’ such as causality, goal, indirect speech, coordination, and subordination, which, in narrative information, link elementary events together. The chapter includes a description of the inference techniques proper to NKRL and some information about the latest developments of this language.
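As a rough illustration of the two ingredients mentioned above, the sketch below encodes two elementary events as n-ary role-filler structures and a reified link between them. The class names, role labels, relation name, and data are assumptions made for illustration; this is not NKRL's actual syntax.

```python
# Illustrative sketch only: a toy encoding of n-ary event structures and a
# reified link between them, loosely in the spirit of NKRL.
from dataclasses import dataclass, field

@dataclass
class Event:
    """An instance of a generic event class with named role fillers."""
    event_class: str            # e.g. "MOVE:TransferOfPhysicalObject"
    roles: dict = field(default_factory=dict)

@dataclass
class Link:
    """Second-order ('reified') structure connecting elementary events."""
    relation: str               # e.g. "CAUSE", "GOAL", "COORDINATION"
    source: Event
    target: Event

# Two elementary events ...
shipment = Event("MOVE:TransferOfPhysicalObject",
                 {"AGENT": "acme_corp", "OBJECT": "pump_unit_3",
                  "DESTINATION": "warehouse_7", "DATE": "2006-03-14"})
notification = Event("SEND:Message",
                     {"AGENT": "acme_corp", "RECIPIENT": "customer_12",
                      "TOPIC": "delivery notice"})

# ... and a connectivity link stating that the shipment caused the notification.
delivery_story = [shipment, notification, Link("CAUSE", shipment, notification)]
print(delivery_story)
```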


2011, pp. 117-136
Author(s): S.A. Oke

This work demonstrates the application of decision trees, a data mining tool, in manufacturing systems. Data mining has the capability for classification, prediction, estimation, and pattern recognition using manufacturing databases. Databases of manufacturing systems contain significant information for decision making, which can be revealed with the application of appropriate data mining techniques. Decision trees are employed for identifying valuable information in manufacturing databases. In practice, industrial managers would be able to make better use of manufacturing data with little or no extra investment in data manipulation costs. The work shows that it is valuable for managers to mine data for better and more effective decision making. This work is therefore novel in that it provides the first proper documentation of this line of research.
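A minimal sketch of the idea, assuming a hypothetical quality-control setting with made-up features and labels; it uses scikit-learn's decision tree classifier and is only an illustration of how such rules could be learned and inspected.

```python
# A minimal sketch, assuming a manufacturing quality-control setting with
# hypothetical features; requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical records: (spindle speed rpm, feed rate mm/min, tool age hours).
X = [
    [1200, 300, 10], [1500, 350, 80], [1100, 280, 15],
    [1600, 400, 95], [1300, 320, 20], [1550, 390, 70],
]
# Target: whether the produced part passed inspection.
y = ["pass", "fail", "pass", "fail", "pass", "fail"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules can be inspected by plant managers directly.
print(export_text(clf, feature_names=["rpm", "feed_rate", "tool_age_h"]))
print(clf.predict([[1400, 330, 60]]))   # classify a new, unseen part
```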


2011, pp. 44-60
Author(s): Tzung-Pei Hong, Ching-Yao Wang

Developing an efficient mining algorithm that can incrementally maintain discovered information as a database grows is quite important in the field of data mining. In the past, we proposed an incremental mining algorithm for the maintenance of association rules as new transactions were inserted. Deletion of records in databases is, however, commonly seen in real-world applications. In this chapter, we first review the maintenance of association rules under data insertion and then extend it to solve the data deletion issue. The concept of pre-large itemsets is used to reduce the need for rescanning the original database and to save maintenance costs. A novel algorithm is proposed to maintain discovered association rules under the deletion of records. The proposed algorithm does not need to rescan the original database until a certain number of records have been deleted. If the database is large, then the number of deleted records allowed will be large too. Therefore, as the database grows, our proposed approach becomes increasingly efficient. This characteristic is especially useful for real-world applications.
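The sketch below illustrates the pre-large concept with an assumed upper support threshold S_u and lower threshold S_l, and derives an illustrative bound on how many records may be deleted before a rescan is needed. The thresholds, data, and the exact form of the bound are assumptions for illustration; the chapter's algorithm involves richer bookkeeping.

```python
# A minimal sketch of the pre-large idea, assuming an upper support threshold
# S_U and a lower threshold S_L; treat this only as an illustration of the concept.
import math

S_U, S_L = 0.30, 0.20          # upper and lower support thresholds (assumed)

def classify(count, db_size):
    """Large / pre-large / small status of an itemset in a database."""
    support = count / db_size
    if support >= S_U:
        return "large"
    if support >= S_L:
        return "pre-large"      # kept as a buffer against future changes
    return "small"

def safe_deletions(db_size):
    """Illustrative bound: an itemset that is small (below S_L) in the original
    database of d transactions cannot reach S_U after t deletions while its own
    count is unchanged, as long as S_L*d < S_U*(d - t), i.e. t < d*(S_U - S_L)/S_U.
    Until roughly that many records are deleted, itemsets that were small before
    need no rescan of the original database."""
    return math.floor(db_size * (S_U - S_L) / S_U)

print(classify(25, 100))        # pre-large
print(safe_deletions(100_000))  # the bound grows with the database size
```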


2011, pp. 1-28
Author(s): Ioannis N. Kouris, Christos H. Makris, Athanasios K. Tsakalidis

Most data mining algorithms and approaches, especially those focusing on the task of association rule mining, have assumed all items to be positively correlated and have looked only at the items that finally remained in a shopping basket. Very few works have considered negative correlations between items, and those that have are based on the absence of items from transactions rather than on their actual removal. In this chapter we examine mining that takes valuable information from rejected items into consideration and propose various alternatives for taking these items into account efficiently. Finally, we provide experimental evidence on the existence and significance of these items.
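The following is a simplified illustration of the general idea, not the authors' exact algorithms: transactions record both the items finally purchased and the items put in the basket but later removed, and the two kinds of occurrences are counted separately. All data and names are hypothetical.

```python
# Simplified illustration: count purchased and rejected occurrences separately
# so that rejections can influence the mined rules.
from collections import Counter

# Each transaction: (purchased items, rejected items) - hypothetical data.
transactions = [
    ({"bread", "milk"},        {"beer"}),
    ({"bread", "butter"},      {"milk"}),
    ({"milk", "beer"},         set()),
    ({"bread", "milk", "jam"}, {"beer"}),
]

purchased, rejected = Counter(), Counter()
for bought, removed in transactions:
    purchased.update(bought)
    rejected.update(removed)

n = len(transactions)
for item in sorted(set(purchased) | set(rejected)):
    print(f"{item:8s} support={purchased[item]/n:.2f} "
          f"rejection_rate={rejected[item]/n:.2f}")
```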


2011, pp. 286-309
Author(s): Hassina Bounif

Information systems, including their core databases, need to meet changing user requirements and adhere to evolving business strategies. Traditional database evolution techniques focus on reacting to change in order to perform schema evolution operations smoothly and to propagate the corresponding updates to the data as effectively as possible. Adopting such an a posteriori solution to change generates high costs in human resources and financial support. We advocate an alternative solution: a predictive approach to database evolution. In this approach, we anticipate future changes during the standard requirements analysis phase of schema development. Our approach enables potential future requirements to be planned for alongside the standard ones, which determine what data is to be stored and what access is required. This preparation contributes significantly to the ability of the database schema to adapt to future changes and to estimate their relative costs.


2011, pp. 238-285
Author(s): Davide Martinenghi, Henning Christiansen, Hendrik Decker

Integrity constraints are a key tool for characterizing the well-formedness and semantics of the information contained in databases. In this regard, it is essential that intelligent database management systems provide their users with automatic support to effectively and efficiently maintain the semantic correctness of data with respect to the given integrity constraints. This chapter gives an overview of the field of efficient integrity checking and maintenance for relational as well as deductive databases. It covers both theoretical and practical aspects of integrity control, including integrity maintenance via active rules. New lines of research are outlined, particularly with regard to two topics where a strong impact for future developments can be expected: integrity in XML document collections and in distributed databases. Both pose a number of new and highly relevant research challenges to the database community.
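The sketch below illustrates the general flavour of incremental integrity checking, not any particular method from the chapter: constraints are registered per relation, and on an insertion only the constraints mentioning the affected relation are re-checked against the new tuple and the current state, instead of re-validating the whole database. The toy in-memory database and constraint set are assumptions.

```python
# A simplified sketch of incremental integrity checking on a toy in-memory database.
db = {"employee": [], "department": [{"dno": 1, "budget": 100_000}]}

# Each constraint: (relations it mentions, check(new_tuple, db) -> bool).
# Deletions from "department" would need a symmetric check; omitted for brevity.
constraints = [
    ({"employee"}, lambda t, db: t["salary"] >= 0),           # domain constraint
    ({"employee"}, lambda t, db: any(d["dno"] == t["dno"]     # foreign key: dept exists
                                     for d in db["department"])),
]

def insert(relation, new_tuple):
    """Re-evaluate only the constraints relevant to the updated relation."""
    relevant = [chk for rels, chk in constraints if relation in rels]
    if all(chk(new_tuple, db) for chk in relevant):
        db[relation].append(new_tuple)
        return True
    return False                                              # update rejected

print(insert("employee", {"name": "Ada", "salary": 5000, "dno": 1}))   # True
print(insert("employee", {"name": "Bob", "salary": 5000, "dno": 9}))   # False
```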


2011, pp. 94-116
Author(s): Marcus Costa Sampaio, Cláudio de Souza Baptita, André Gomes de Sousa, Fabiana Ferreira do Nascimento

This chapter introduces spatial dimensions and measures as a means of enhancing decision support systems with spatial capabilities. In one way or another, spatially related data has been used for a long time; however, spatial dimensions have not been fully exploited. A data model is presented that tightly integrates data warehouses and geographical information systems, thus characterizing a spatial data warehouse (SDW); more precisely, the focus is on a formalization of SDW concepts, on a spatial-aware data cube using object-relational technology, and on issues underlying an SDW, especially regarding spatial data aggregation operations. Finally, the MapWarehouse prototype is presented to validate the proposed ideas. The authors believe that an SDW allows for the efficient processing of queries that jointly use spatial and numerical temporal data (e.g., temporal series from summarized spatial and numerical measures).
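As a toy illustration of joint spatial and numerical aggregation (not the MapWarehouse model itself), the sketch below rolls up a small fact table: the numeric measure is summed while the spatial measure, represented here as bounding boxes, is aggregated by union. The fact table, dimension levels, and box representation are assumptions.

```python
# Toy roll-up over a fact table with one numeric and one spatial measure.
from collections import defaultdict

# Facts: (region, year, sales, bounding box (xmin, ymin, xmax, ymax)) - assumed data.
facts = [
    ("north", 2004, 120.0, (0, 5, 2, 8)),
    ("north", 2004,  80.0, (1, 6, 4, 9)),
    ("south", 2004, 200.0, (3, 0, 6, 2)),
]

def bbox_union(a, b):
    """Smallest box enclosing both boxes: a simple spatial aggregation."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

rollup = defaultdict(lambda: (0.0, None))
for region, year, sales, bbox in facts:
    total, box = rollup[(region, year)]
    rollup[(region, year)] = (total + sales,
                              bbox if box is None else bbox_union(box, bbox))

for key, (total, box) in rollup.items():
    print(key, "total sales:", total, "covered area bbox:", box)
```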


2011, pp. 167-196
Author(s): Z. M. Ma

Fuzzy set theory has been extensively applied to extend various data models and has resulted in numerous contributions, mainly with respect to the popular relational model or some related form of it. To satisfy the need to model complex objects with imprecision and uncertainty, much recent research has concentrated on fuzzy semantic (conceptual) and object-oriented data models. This chapter reviews fuzzy database modeling technologies, including fuzzy conceptual data models and database models. Concerning fuzzy database models, fuzzy relational databases, fuzzy nested relational databases, and fuzzy object-oriented databases are discussed, respectively.
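A minimal sketch of the flavour of a fuzzy relational database follows: each tuple carries a membership degree, a query uses a fuzzy predicate defined by a membership function, and the result degree combines the two with a t-norm (here: min). The relation, the shape of the membership function, and the threshold are assumptions for illustration.

```python
# Fuzzy relation: (name, age, degree of membership in the relation).
employees = [("Ann", 24, 1.0), ("Bob", 31, 0.8), ("Carl", 45, 1.0)]

def young(age):
    """Membership function for the fuzzy predicate 'young' (assumed shape)."""
    if age <= 25:
        return 1.0
    if age >= 40:
        return 0.0
    return (40 - age) / 15          # linear decrease between 25 and 40

def select_young(relation, threshold=0.5):
    """Fuzzy selection: keep tuples whose combined degree meets the threshold."""
    result = []
    for name, age, mu in relation:
        degree = min(mu, young(age))    # t-norm combination
        if degree >= threshold:
            result.append((name, age, round(degree, 2)))
    return result

print(select_young(employees))      # [('Ann', 24, 1.0), ('Bob', 31, 0.6)]
```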


2011, pp. 61-93
Author(s): Rosa Meo, Giuseppe Psaila

Inductive databases have been proposed as general-purpose databases to support the KDD process. Unfortunately, the heterogeneity of the discovered patterns and of the different conceptual tools used to extract them from source data makes their integration in a single framework difficult. In this chapter, we explore the feasibility of using XML as the unifying framework for inductive databases and propose a new model, XML for data mining (XDM). We show the basic features of the model, based on the concepts of data item (source data and patterns) and statement (used to manage data and derive patterns). We make use of XML namespaces (to allow the effective coexistence and extensibility of data mining operators) and of XML Schema, by means of which we can define the schema, the state, and the integrity constraints of an inductive database.
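The sketch below conveys the general idea of source data, a mining statement, and a derived pattern coexisting in one XML document, with a namespace separating the mining operator's vocabulary. The element names and the namespace URI are assumptions made for illustration and do not reproduce the published XDM schema.

```python
# A sketch of the general idea, not the published XDM schema.
import xml.etree.ElementTree as ET

MINE = "http://example.org/xdm-sketch"          # hypothetical namespace URI
ET.register_namespace("mine", MINE)

root = ET.Element("inductive-db")

# Data item holding (a fragment of) the source data.
data = ET.SubElement(root, "data-item", {"name": "transactions"})
ET.SubElement(data, "transaction", {"id": "t1", "items": "bread milk"})
ET.SubElement(data, "transaction", {"id": "t2", "items": "bread butter"})

# Statement describing how patterns are derived from the data item.
stmt = ET.SubElement(root, f"{{{MINE}}}statement",
                     {"operator": "association-rules", "min-support": "0.5"})
stmt.set("input", "transactions")

# Data item holding a pattern produced by executing the statement.
patterns = ET.SubElement(root, "data-item", {"name": "rules"})
ET.SubElement(patterns, f"{{{MINE}}}rule",
              {"body": "bread", "head": "milk", "support": "0.5"})

print(ET.tostring(root, encoding="unicode"))
```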

