Global Trends in Intelligent Computing Research and Development - Advances in Computational Intelligence and Robotics
Latest Publications

Total documents: 18 (five years: 0)
H-index: 3 (five years: 0)
Published by IGI Global
ISBN: 9781466649361, 9781466649378

Author(s): Durga Prasad Roy, Baisakhi Chakraborty

Case-Based Reasoning (CBR) arose out of research into cognitive science, most prominently that of Roger Schank and his students at Yale University, during the period 1977–1993. CBR may be defined as a model of reasoning that incorporates problem solving, understanding, and learning, and integrates all of them with memory processes. It focuses on the human approach to problem solving: how people learn new skills and generate solutions to new situations from past experience. Just as humans intelligently adapt their experience when learning, CBR replicates this process by treating prior experiences as a set of old cases and the problems to be solved as new cases. To arrive at a conclusion, it uses four processes: retrieve, reuse, revise, and retain. These processes involve basic tasks such as clustering and classification of cases, case selection and generation, case indexing and learning, measuring case similarity, case retrieval and inference, reasoning, rule adaptation, and mining to generate solutions. This chapter provides the basic idea of case-based reasoning and a few typical applications. It will be useful to researchers in computer science, electrical engineering, system science, and information technology, as well as to researchers and practitioners in industry and R&D laboratories working in fields such as system design, control, pattern recognition, data mining, vision, and machine intelligence.
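As a concrete illustration of the retrieve-reuse-revise-retain cycle, the following minimal sketch uses an assumed numeric case representation and an inverse-distance similarity; it is not the chapter's implementation, just one pass through the four processes.

```python
# Minimal sketch of the retrieve-reuse-revise-retain cycle described above.
# The case structure, similarity measure, and adaptation rule are illustrative
# assumptions, not the chapter's implementation.

def similarity(a, b):
    """Inverse-distance similarity between two numeric feature vectors."""
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5)

def cbr_solve(case_base, new_problem, revise=lambda solution, problem: solution):
    # Retrieve: find the most similar stored case.
    best = max(case_base, key=lambda c: similarity(c["problem"], new_problem))
    # Reuse: adopt its solution as a starting point.
    proposed = best["solution"]
    # Revise: adapt the proposed solution to the new situation (domain-specific).
    solution = revise(proposed, new_problem)
    # Retain: store the newly solved case for future reasoning.
    case_base.append({"problem": new_problem, "solution": solution})
    return solution

cases = [{"problem": (1.0, 2.0), "solution": "A"},
         {"problem": (5.0, 1.0), "solution": "B"}]
print(cbr_solve(cases, (1.2, 1.8)))   # retrieves the nearest case -> "A"
```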


Author(s): B. K. Tripathy

Several models have been introduced to capture impreciseness in data. Fuzzy sets, introduced by Zadeh, and rough sets, introduced by Pawlak, are two of the most popular such models. In addition, the notion of intuitionistic fuzzy sets introduced by Atanassov, and the hybrid models obtained from it, have been very fruitful from the application point of view. Fuzzy logic and the approximate reasoning built on it are more realistic, as they are closer to human reasoning. Equality of sets in crisp mathematics is too restrictive from the application point of view. Extending this concept, Novotny and Pawlak introduced three types of approximate equalities using rough sets. These notions turn out to be restrictive in the sense that they again boil down to equality of sets, and the lower approximate equality is artificial. Keeping these points in view, Tripathy introduced three other types of approximate equalities in several papers and further generalised them to cover approximate equalities of fuzzy sets and intuitionistic fuzzy sets. The study has also been extended to generalisations of basic rough sets, such as covering-based rough sets and multigranular rough sets. In this chapter, the author provides a comprehensive study of all these forms of approximate equalities and illustrates their applicability through several examples. In addition, some problems are posed for future work.
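For readers unfamiliar with the underlying machinery, the sketch below illustrates Pawlak's lower and upper approximations and the bottom/top approximate equalities built on them; the universe, partition, and sets are toy values chosen purely for illustration.

```python
# A minimal sketch of Pawlak's lower/upper approximations and the bottom/top
# approximate equalities mentioned above. The partition and the sets X, Y are
# toy examples, not taken from the chapter.

def approximations(partition, target):
    """Return (lower, upper) approximations of `target` w.r.t. a partition of the universe."""
    lower, upper = set(), set()
    for block in partition:        # each block is one equivalence class
        if block <= target:
            lower |= block         # wholly contained -> certainly in the set
        if block & target:
            upper |= block         # overlapping -> possibly in the set
    return lower, upper

partition = [{1, 2}, {3, 4}, {5}]
X, Y = {1, 2, 3}, {1, 2, 4}
lx, ux = approximations(partition, X)
ly, uy = approximations(partition, Y)
print("bottom equal:", lx == ly)   # equal lower approximations
print("top equal:   ", ux == uy)   # equal upper approximations
```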


Author(s): Harihar Kalia, Satchidananda Dehuri, Ashish Ghosh

Knowledge Discovery in Databases (KDD) is the process of automatically searching for patterns in large volumes of data by using specific data mining techniques. Classification, association, and associative classification (the integration of classification and association) rule mining are popular rule mining techniques in KDD for harvesting knowledge in the form of rules. Classical rule mining techniques based on crisp sets suffer from the “sharp boundary problem” when mining rules from numerical data. Fuzzy rule mining approaches eliminate this problem and generate more human-understandable rules. Several quality measures are used to quantify the quality of the discovered rules; however, most of these objectives/criteria conflict with one another. Thus, fuzzy rule mining problems are modeled as multi-objective rather than single-objective optimization problems. Because of their ability to find diverse trade-off solutions for several objectives in a single run, multi-objective genetic algorithms are widely employed in rule mining. In this chapter, the authors discuss multi-objective genetic-fuzzy approaches to rule mining along with their advantages and disadvantages. In addition, some popular applications of these approaches are discussed.
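The multi-objective view can be made concrete with a small sketch: each candidate fuzzy rule is scored on conflicting criteria (the confidence and comprehensibility values below are hypothetical), and only the Pareto-optimal trade-off rules, the kind of set a multi-objective GA searches for, are retained.

```python
# A minimal sketch of the multi-objective formulation described above: rules are
# scored on conflicting objectives and the non-dominated (Pareto-optimal) ones kept.
# The rules and their scores are hypothetical examples.

def dominates(a, b):
    """True if score vector `a` is at least as good as `b` everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(rules):
    """Keep rules not dominated by any other rule: the trade-off set a MOGA evolves toward."""
    return [r for r in rules
            if not any(dominates(o["scores"], r["scores"]) for o in rules)]

candidate_rules = [  # scores = (confidence, comprehensibility), both to be maximized
    {"rule": "IF temp IS high THEN demand IS low", "scores": (0.92, 0.40)},
    {"rule": "IF temp IS high AND day IS weekend THEN demand IS low", "scores": (0.90, 0.30)},
    {"rule": "IF temp IS low THEN demand IS high", "scores": (0.70, 0.90)},
]
print([r["rule"] for r in pareto_front(candidate_rules)])  # the second rule is dominated
```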


Author(s): Natthakan Iam-On, Tossapon Boongoen

A need has long been identified for a more effective methodology to understand, prevent, and cure cancer. Microarray technology provides a basis for achieving this goal, with cluster analysis of gene expression data leading to the discrimination of patients, identification of possible tumor subtypes, and individualized treatment. Recently, soft subspace clustering was introduced as an accurate alternative to conventional techniques. It has proven effective for high-dimensional data, especially microarray gene expression data. In this review, the basis of weighted dimensional spaces and different approaches to soft subspace clustering are described. Since most of the models are parameterized, the application of consensus clustering has been identified as a new research direction capable of turning the difficulty of parameter selection into an advantage that increases diversity within an ensemble.
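The weighted dimensional space can be sketched as follows: each cluster carries its own attribute weights, distances are computed in that weighted space, and weights grow on dimensions where the cluster is compact. The entropy-style update and the parameter gamma below are illustrative assumptions, not any specific algorithm from the review.

```python
# A minimal sketch of the soft subspace idea: cluster-specific attribute weights.
import numpy as np

def weighted_assign(X, centers, weights):
    # distance of every point to every cluster, each dimension scaled by that cluster's weight
    d = np.array([[np.sum(w * (x - c) ** 2) for c, w in zip(centers, weights)] for x in X])
    return d.argmin(axis=1)

def update_weights(X, centers, labels, gamma=1.0):
    # dimensions with small within-cluster dispersion receive larger weights
    new_w = []
    for k, c in enumerate(centers):
        members = X[labels == k]
        disp = ((members - c) ** 2).sum(axis=0) if len(members) else np.zeros(X.shape[1])
        w = np.exp(-disp / gamma)
        new_w.append(w / w.sum())
    return np.array(new_w)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4)) * np.array([0.3, 2.0, 2.0, 2.0])  # dimension 0 is compact
X[:10, 0] += 4                                                  # clusters separate along it
centers = np.array([X[:10].mean(axis=0), X[10:].mean(axis=0)])
weights = np.full((2, 4), 0.25)
labels = weighted_assign(X, centers, weights)
print(update_weights(X, centers, labels))   # weight concentrates on the compact dimension
```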


Author(s): Manish Joshi, Pawan Lingras, Gajendra Wani, Peng Zhang

This chapter exemplifies how clustering can be a versatile tool in real-life applications. Optimal inventory prediction is one of the important issues faced by owners of retail chain stores. Researchers have made several attempts to develop a generic forecasting model that accurately predicts inventory for all products. Regression analysis, neural networks, exponential smoothing, and Autoregressive Integrated Moving Average (ARIMA) are some of the widely used time series prediction techniques in inventory management. However, such generic models have limitations. The authors propose an approach that uses time series clustering and time series prediction techniques to forecast future demand for each product in an inventory management system. A stability and seasonality analysis of the time series is proposed to identify groups of products (local groups) exhibiting similar sales patterns. The details of the experimental techniques and the results for obtaining optimal inventory predictions are shared in this chapter.
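A minimal sketch of the overall idea follows: crude stability/seasonality features describe each product's sales series (these features could then be clustered into local groups), and a simple exponential smoothing forecaster stands in for the prediction techniques compared in the chapter. The products, features, and smoothing constant are illustrative.

```python
# A minimal sketch: feature-based grouping of sales series plus per-product forecasting.
import numpy as np

def series_features(s):
    # crude stability/seasonality descriptors: coefficient of variation and lag-12 autocorrelation
    s = np.asarray(s, dtype=float)
    cv = s.std() / (s.mean() + 1e-9)
    lag = 12 if len(s) > 12 else 1
    ac = np.corrcoef(s[:-lag], s[lag:])[0, 1]
    return np.array([cv, ac])          # these features could feed a clustering step (local groups)

def ses_forecast(s, alpha=0.3):
    # simple exponential smoothing: next-period demand estimate
    level = s[0]
    for x in s[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

sales = {"tea": np.sin(np.arange(36)) * 5 + 50,      # seasonal-looking demand
         "salt": 20 + 0.1 * np.arange(36)}           # stable, slowly trending demand
for product, history in sales.items():
    print(product, series_features(history).round(2), round(ses_forecast(history), 1))
```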


Author(s): J. Abdul Jaleel, Anish Benny, David K. Daniel

The control of pH is of great importance in chemical processes, biotechnological industries, and many other areas. High-performance and robust control of pH neutralization is difficult to achieve because of the nonlinear and time-varying process characteristics: the process gain varies by orders of magnitude over a small range of pH. This chapter applies adaptive and neural control techniques to the pH neutralization process for a strong acid-strong base system. The simulation results are analyzed to show that an adaptive controller can be tuned effectively and that a properly trained neural network controller may outperform an adaptive controller.
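The steep titration curve behind the varying process gain can be seen in a short simulation. The plain proportional controller below, with a hypothetical fixed gain, overshoots near the equivalence point; this is precisely the difficulty that motivates the adaptive and neural controllers studied in the chapter, which are not implemented here.

```python
# A minimal sketch of why pH neutralization is hard to control: for a strong
# acid-strong base mixture the titration curve is extremely steep near pH 7,
# so a fixed proportional gain that works at low pH overshoots near neutrality.
import math

def mixture_pH(acid_mol, base_mol, volume_l):
    """pH of a strong acid-strong base mixture (complete dissociation assumed)."""
    excess = (acid_mol - base_mol) / volume_l       # net H+ concentration
    if excess > 0:
        return -math.log10(excess)
    if excess < 0:
        return 14 + math.log10(-excess)             # excess OH-
    return 7.0

acid, base, volume = 0.010, 0.0, 1.0                # moles, moles, litres
setpoint, kp = 7.0, 0.0005                          # hypothetical fixed proportional gain
for step in range(10):
    ph = mixture_pH(acid, base, volume)
    base += max(kp * (setpoint - ph), 0.0)          # add base in proportion to the error
    print(step, round(ph, 2))                       # note the jump past pH 7: fixed gain fails
```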


Author(s): Kedar Nath Das

Real-coded Genetic Algorithms (GAs) are among the most effective and popular techniques for solving continuous optimization problems. In the recent past, researchers have used Laplace Crossover (LX) and Power Mutation (PM) in the GA cycle (namely LX-PM) to efficiently solve both constrained and unconstrained optimization problems. In this chapter, a local search technique, Quadratic Approximation (QA), is discussed. QA is hybridized with LX-PM in order to improve its efficiency and efficacy. The resulting hybrid system is named H-LX-PM. The superiority of H-LX-PM over LX-PM is validated on a test bed of 22 unconstrained and 15 constrained benchmark problems. In the later part of the chapter, a few applications of GAs to network optimization are highlighted as scope for future research.
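A minimal sketch of the two named operators follows, using one commonly cited formulation of Laplace crossover and power mutation; the parameters a, b, p and the variable bounds are illustrative choices, not the chapter's settings.

```python
# A minimal sketch of Laplace crossover (LX) and power mutation (PM) for
# real-coded GAs, in one common formulation; parameter values are illustrative.
import math, random

def laplace_crossover(x1, x2, a=0.0, b=0.5):
    u = 1.0 - random.random()                       # u in (0, 1] so log(u) is defined
    r = random.random()
    beta = a + b * math.log(u) if r <= 0.5 else a - b * math.log(u)  # Laplace-distributed spread
    spread = abs(x1 - x2)
    return x1 + beta * spread, x2 + beta * spread   # two offspring placed around the parents

def power_mutation(x, lower, upper, p=2.0):
    s = random.random() ** p                        # power-distributed step size
    t = (x - lower) / (upper - lower)               # relative position within the bounds
    if random.random() > t:
        return x - s * (x - lower)                  # perturb toward the lower bound
    return x + s * (upper - x)                      # perturb toward the upper bound

random.seed(1)
print(laplace_crossover(1.0, 3.0))
print(power_mutation(2.0, 0.0, 5.0))
```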


Author(s): Ch. Aswani Kumar, Prem Kumar Singh

Introduced by Rudolf Wille in the early 1980s, Formal Concept Analysis (FCA) is a mathematical framework that offers conceptual data analysis and knowledge discovery. FCA analyzes data represented in the form of a formal context, which describes the relationship between a particular set of objects and a particular set of attributes. From the formal context, FCA produces hierarchically ordered clusters called formal concepts and a basis of attribute dependencies called attribute implications. All the concepts of a formal context form a complete lattice, called the concept lattice, that reflects the relationships of generalization and specialization among concepts. Several algorithms have been proposed in the literature to extract the formal concepts from a given context. The objective of this chapter is to analyze, demonstrate, and compare a few standard algorithms that extract formal concepts. For each algorithm, the analysis considers the functionality, output, complexity, delay time, exploration type, and data structures involved.
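To make the objects of study concrete, the sketch below builds a toy formal context and extracts all of its formal concepts (extent, intent pairs) by brute force; the context is illustrative and the enumeration is far less efficient than the algorithms analyzed in the chapter.

```python
# A minimal sketch: a toy formal context and brute-force extraction of its formal concepts.
from itertools import combinations

context = {                      # object -> set of attributes it has
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
all_attributes = set(a for attrs in context.values() for a in attrs)

def common_attributes(objs):
    """Attributes shared by all objects in `objs` (all attributes if `objs` is empty)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(all_attributes)

def common_objects(attrs):
    """Objects possessing every attribute in `attrs`."""
    return {o for o, a in context.items() if attrs <= a}

concepts = set()
objects = list(context)
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = frozenset(common_attributes(set(objs)))
        extent = frozenset(common_objects(intent))   # closure: all objects sharing that intent
        concepts.add((extent, intent))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(set(extent) or "{}", set(intent) or "{}")  # each line is one formal concept
```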


Author(s): Alexandre de Castro

In a seminal paper published in the early 1980s, titled “Information Technology and the Science of Information,” Bertram C. Brookes theorized that a Shannon-Hartley-like logarithmic measure could be applied to both information and the recipient's knowledge structure in order to satisfy his “Fundamental Equation of Information Science.” To date, this idea has remained almost forgotten, but in what follows the author introduces a novel quantitative approach showing that a Shannon-Hartley-like logarithmic model can represent a feasible solution for the cognitive process of retention of information described by Brookes. He also shows that if, and only if, the amount of information approaches 1 bit, the “Fundamental Equation” can be considered an equality stricto sensu, as Brookes required.
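For reference, Brookes' Fundamental Equation is usually written as in the first line below; the second line is only an illustrative Shannon-Hartley-style logarithmic form of the kind alluded to, not the exact measure derived in the chapter.

```latex
% Brookes' Fundamental Equation: an increment of information \Delta I changes
% a knowledge structure K[S] into a modified structure K[S + \Delta S].
K[S] + \Delta I = K[S + \Delta S]

% Illustrative Shannon-Hartley-style measure (assumed form, not the chapter's result):
% the perceived information increment grows logarithmically with the stimulus.
\Delta I \;\sim\; \log_2\!\left(1 + \frac{\Delta S}{S}\right)
```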


Author(s): Sasanko Sekhar Gantayat, B. K. Tripathy

The concept of a list is very important in functional programming and in data structures in computer science. The classical definition of lists was redefined by Jena, Tripathy, and Ghosh (2001) using the notion of position functions, an extension of the count function of multisets and of the characteristic function of sets. Several concepts related to lists have been defined from this new angle, and their properties proved in subsequent articles. In this chapter, the authors focus on crisp lists and present all the concepts and properties developed so far. Recently, a functional approach to the realization of relational databases and of operations on them has been proposed; accordingly, a list theory-based relational database model using the position function approach is designed to illustrate how query processing can be realized for some of the relational algebraic operations. The authors also develop a list-theoretic relational algebra (LRA) and use it to analyze Petri nets.
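A minimal sketch of the position-function view of lists: a list is represented by a function from elements to the set of positions at which they occur, which generalizes the characteristic function of a set and the count function of a multiset. The helper names are illustrative and do not reproduce the chapter's LRA.

```python
# A minimal sketch of representing a list by its position function.

def position_function(lst):
    """Map each element to the set of positions where it occurs in the list."""
    pos = {}
    for i, x in enumerate(lst):
        pos.setdefault(x, set()).add(i)
    return pos

def count_function(pos):
    """Multiset view: element -> number of occurrences."""
    return {x: len(p) for x, p in pos.items()}

def characteristic_function(pos):
    """Set view: element -> 1 if it occurs at all."""
    return {x: 1 for x in pos}

record_list = ["alice", "bob", "alice", "carol"]
pf = position_function(record_list)
print(pf)                           # {'alice': {0, 2}, 'bob': {1}, 'carol': {3}}
print(count_function(pf))           # the underlying multiset
print(characteristic_function(pf))  # the underlying set
```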

