Diagnostic Test Approaches to Machine Learning and Commonsense Reasoning Systems
Published by IGI Global
ISBN: 9781466619005, 9781466619012
Total documents: 11 · H-index: 2

Author(s):  
Alexander Yakovlev

Today is the era of transnational corporations and large companies: they deliver the bulk of profits to their shareholders and owners, and they are the main sponsors of scientific and technological progress. However, extensive growth is no longer possible for environmental, marketing, resource, and many other reasons, so the main field of competition between companies becomes the fight for the client, an individualized approach to each customer, and maximum cost reduction. At the same time, the series of scandals that erupted in the early 2000s around such major corporations as Enron Corporation, WorldCom, Tyco International, Adelphia, and Peregrine Systems showed that the system of corporate governance, on which the welfare of hundreds of thousands of people depends, requires serious improvement in transparency and openness. In this regard, the U.S. adopted the Sarbanes-Oxley Act of 2002, under which company management is legally obliged to prove that its decisions are based on reliable, relevant, credible, and accurate information (Davenport & Harris, 2010).


Author(s):
Kunjal Mankad, Priti Srinivas Sajja

The chapter focuses on Genetic-Fuzzy Rule-Based Systems, a soft-computing approach for dealing with uncertainty and imprecision that can evolve over time in different domains. It has been observed that major professional domains such as education and technology, human resources, and psychology still lack intelligent decision support systems of a self-evolving nature. The chapter proposes a novel framework implementing the Theory of Multiple Intelligences from education to identify students’ technical and managerial skills. A detailed methodology for the proposed system architecture, including the design of rule bases for technical and managerial skills, the encoding strategy, the fitness function, and the crossover and mutation operations for evolving populations, is presented. The outcomes and supporting experimental results are also presented to justify the significance of the proposed framework. The chapter concludes by discussing advantages and future scope in different domains.
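To make the evolutionary loop concrete, here is a minimal, self-contained sketch of a genetic-fuzzy tuner in Python. The triangular membership functions, truncation selection, and all data are illustrative assumptions; the chapter's actual encoding strategy, fitness function, and operators are specific to its framework.

```python
# Illustrative genetic-fuzzy sketch (not the chapter's implementation).
# A chromosome holds the peaks of three triangular membership functions
# for one skill score; fitness is agreement with expert-labelled cases.
import random

random.seed(42)

def triangular(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(score, chromosome):
    """Fire the low/medium/high fuzzy rules and return the winner."""
    labels = ("low", "medium", "high")
    degrees = [triangular(score, *params) for params in chromosome]
    return labels[max(range(3), key=degrees.__getitem__)]

def fitness(chromosome, cases):
    """Fraction of expert-labelled (score, label) cases matched."""
    return sum(classify(s, chromosome) == lbl for s, lbl in cases) / len(cases)

def crossover(p1, p2):
    cut = random.randrange(1, 3)                    # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.2):
    out = []
    for (a, b, c) in chrom:
        if random.random() < rate:                  # jitter the peak only
            b = min(max(a + 1e-6, b + random.uniform(-5, 5)), c - 1e-6)
        out.append((a, b, c))
    return out

def random_chromosome():
    # (a, b, c) triangles with guaranteed a < b < c.
    return [(0, random.uniform(5, 55), 60),
            (20, random.uniform(25, 80), 85),
            (50, random.uniform(55, 95), 100)]

# Hypothetical training cases: (test score, expert label).
cases = [(15, "low"), (30, "low"), (50, "medium"), (60, "medium"),
         (80, "high"), (95, "high")]

population = [random_chromosome() for _ in range(20)]
for generation in range(30):
    population.sort(key=lambda ch: fitness(ch, cases), reverse=True)
    elite = population[:10]                         # truncation selection
    population = elite + [mutate(crossover(random.choice(elite),
                                           random.choice(elite)))
                          for _ in range(10)]

best = max(population, key=lambda ch: fitness(ch, cases))
print("best fitness:", fitness(best, cases))
```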


Author(s):
Krassimir Markov, Koen Vanhoof, Iliya Mitov, Benoit Depaire, Krassimira Ivanova, ...

Multi-layer Pyramidal Growing Networks (MPGN) are memory structures based on multidimensional numbered information spaces (Markov, 2004), which permit association links (bonds) to be created and information to be hierarchically systematized and classified simultaneously with its input into memory. This approach is a successor to the main ideas of Growing Pyramidal Networks (Gladun, 2003), such as the hierarchical structuring of memory, which naturally reflects the structure of composite instances and genus-species bonds and is convenient for performing various operations of associative search. Recognition is based on a reduced search in the multi-dimensional information space hierarchies. In this chapter, the authors show the advantages of using growing numbered memory structuring via MPGN in the field of class association rule mining. The proposed approach was implemented in association rule classifiers and has shown reliable results.
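The following toy sketch conveys only the general flavor of inclusion-ordered, hierarchically structured memory nodes; it is not the authors' MPGN and omits the numbered-information-space machinery. All names and data are hypothetical.

```python
# Toy hierarchical memory in the spirit of pyramidal networks (sketch
# only; exponential in instance size, unlike the reduced MPGN search).
# Concept nodes group instances by shared attribute-value sets; the
# genus-species bond is plain set inclusion between those sets.
from itertools import combinations
from collections import defaultdict

def build_pyramid(instances):
    """Map every attribute-value combination to the instances it covers."""
    nodes = defaultdict(set)
    for idx, avs in enumerate(instances):
        for r in range(1, len(avs) + 1):
            for combo in combinations(sorted(avs), r):
                nodes[frozenset(combo)].add(idx)
    return nodes

def classify(nodes, labels, query):
    """Find the most specific node the query satisfies and vote with
    the labels of the instances it covers."""
    best, best_size = None, 0
    q = set(query)
    for desc, covered in nodes.items():
        if desc <= q and len(desc) > best_size:
            best, best_size = covered, len(desc)
    if best is None:
        return None
    votes = defaultdict(int)
    for idx in best:
        votes[labels[idx]] += 1
    return max(votes, key=votes.get)

# Hypothetical training data: attribute-value pairs plus class labels.
instances = [{("color", "red"), ("shape", "round")},
             {("color", "red"), ("shape", "square")},
             {("color", "blue"), ("shape", "round")}]
labels = ["A", "A", "B"]

nodes = build_pyramid(instances)
print(classify(nodes, labels, {("color", "red"), ("shape", "round")}))  # A
```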


Author(s):
Boris Kulik, Alexander Fridman, Alexander Zuenko

This chapter examines the application potential of n-tuple algebra (NTA), developed by the authors as a theoretical generalization of the structures and methods applied in intelligent systems. NTA supports the formalization of a wide set of logical problems (abductive and modified conclusions, modelling graphs, semantic networks, expert rules, etc.). The chapter focuses mostly on the implementation of logical inference and defeasible reasoning by means of NTA. Logical inference procedures in NTA can include, besides the known methods of logical calculi, new algebraic methods for checking the correctness of a consequence or for finding corollaries of a given axiom system. These inference methods take into account (beyond the feasibility of certain substitutions) the inner structure of the knowledge being processed, thus solving standard logical analysis tasks faster. The matrix properties of NTA objects allow the complexity of intellectual procedures to be decreased. As for making databases more intelligent, NTA can be considered an extension of relational algebra to knowledge processing.
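As a minimal illustration of the algebraic flavor, and assuming the common reading of a C-n-tuple as a componentwise Cartesian product of attribute subsets, intersection and membership can be computed without ever expanding the product. The function names and data below are illustrative, not the authors' implementation.

```python
# Sketch of one NTA-style primitive: componentwise operations on
# C-n-tuples (tuples of attribute-domain subsets).

def c_tuple_intersection(t1, t2):
    """Intersect two C-n-tuples componentwise; an empty component
    means the whole intersection is the empty relation."""
    result = [a & b for a, b in zip(t1, t2)]
    return None if any(not comp for comp in result) else result

def c_tuple_contains(t, elem):
    """Membership test without expanding the Cartesian product."""
    return all(x in comp for x, comp in zip(elem, t))

# Two constraints over attributes (X, Y) with small finite domains.
t1 = [{1, 2, 3}, {"a", "b"}]        # X in {1,2,3} and Y in {a,b}
t2 = [{2, 3, 4}, {"b", "c"}]        # X in {2,3,4} and Y in {b,c}

print(c_tuple_intersection(t1, t2))     # [{2, 3}, {'b'}]
print(c_tuple_contains(t1, (2, "b")))   # True
```

Collections of such tuples form the matrix-like objects whose structure, as the abstract notes, keeps the cost of analysis well below naive enumeration of the underlying relations.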


Author(s):  
Xenia Naidenova

An analytical survey of some efficient current approaches to mining all kinds of logical rules is presented, including implicative and functional dependencies as well as association and classification rules. The interconnections between these approaches are analyzed. It is demonstrated that all the approaches are equivalent with respect to using the same key concepts around frequent itemsets (maximally redundant or closed itemsets, generators, non-redundant or minimal generators, classification tests) and the same procedures for constructing their lattice structure. The main current tendencies in the development of these approaches are considered.
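The shared key concepts are easy to state operationally: the closure of an itemset is the intersection of all transactions containing it, an itemset is closed when it equals its closure, and a minimal generator is a minimal itemset with the same closure. A toy demonstration (data invented):

```python
# Closed itemsets and minimal generators on a four-transaction toy set.
from itertools import combinations

transactions = [frozenset("abc"), frozenset("abd"),
                frozenset("ab"), frozenset("cd")]

def closure(itemset):
    """Largest itemset with the same support as `itemset`: the
    intersection of all transactions that contain it."""
    covering = [t for t in transactions if itemset <= t]
    if not covering:
        return None
    out = covering[0]
    for t in covering[1:]:
        out = out & t
    return out

x = frozenset("a")
print(sorted(closure(x)))           # ['a', 'b']: 'a' never occurs without 'b'

# Minimal generators of a closed set C: minimal X with closure(X) == C.
c = frozenset("ab")
gens = [frozenset(s) for r in range(1, len(c) + 1)
        for s in combinations(sorted(c), r)
        if closure(frozenset(s)) == c]
minimal = [g for g in gens if not any(h < g for h in gens)]
print([sorted(g) for g in minimal])  # [['a'], ['b']]
```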


Author(s):  
Tatiana V. Sambukova

The work is devoted to two interconnected key problems of Data Mining: the discretization of numerical attributes and the inference of pattern recognition rules (decision rules) from a training set of examples by machine learning methods. The discretization method is based on a learning procedure that extracts intervals of attribute values whose bounds are chosen so that the distributions of the attribute’s values inside these intervals differ as much as possible between the two classes of samples given by an expert; the number of intervals is limited to at most three. Interval data analysis made it possible to describe the functional state of persons in healthy condition, depending on the absence or presence in their lives of episodes of secondary immune deficiency, more fully than traditional statistical methods of comparing data-set distributions. Interval data analysis makes it possible (1) to keep the discretization procedure transparent and controllable by an expert, (2) to evaluate the information gain of attributes with respect to distinguishing the given classes of persons before any machine learning procedure is run, and (3) to decrease the computational complexity of machine learning dramatically.
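A hedged sketch of the discretization idea follows: pick at most two cut points (hence at most three intervals) so that the two classes' value distributions differ as much as possible across the intervals. The scoring criterion used here (total variation distance between the classes' interval histograms) is an assumption standing in for the chapter's actual statistic, and all data are invented.

```python
# Exhaustive search for up to two cut points maximizing the difference
# between two classes' interval distributions (illustrative sketch).
from itertools import combinations

def interval_histogram(values, cuts):
    """Fraction of values falling into each interval defined by cuts."""
    counts = [0] * (len(cuts) + 1)
    for v in values:
        counts[sum(v > c for c in cuts)] += 1
    return [c / len(values) for c in counts]

def score(class0, class1, cuts):
    h0 = interval_histogram(class0, cuts)
    h1 = interval_histogram(class1, cuts)
    return sum(abs(a - b) for a, b in zip(h0, h1))

def best_cuts(class0, class1, max_intervals=3):
    candidates = sorted(set(class0) | set(class1))[:-1]
    best = (0.0, ())
    for k in range(1, max_intervals):            # one or two cut points
        for cuts in combinations(candidates, k):
            best = max(best, (score(class0, class1, cuts), cuts))
    return best

# Hypothetical measurements for healthy vs. immunodeficient groups.
healthy = [4.1, 4.5, 5.0, 5.2, 6.0]
deficient = [2.0, 2.5, 3.0, 5.1, 6.5]
print(best_cuts(healthy, deficient))             # (score, chosen cuts)
```

Because the cut points are explicit values of the original attribute, an expert can inspect, veto, or adjust them before any rule learning begins, which is exactly the transparency property claimed above.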


Author(s):
Nadezhda Kiselyova, Andrey Stolyarenko, Vladimir Ryazanov, Oleg Sen’ko, Alexandr Dokukin

A review of applications of machine learning methods in inorganic chemistry and materials science is presented. The possibility of searching for classification regularities in large arrays of chemical information with the use of precedent-based recognition methods is discussed. A system for the computer-assisted design of inorganic compounds has been developed; it integrates a complex of databases on the properties of inorganic substances and materials, a data analysis subsystem based on machine learning (including symbolic pattern recognition methods), a knowledge base, a predictions base, and a managing subsystem. In many instances, the developed system makes it possible to predict new inorganic compounds and estimate various of their properties without experimental synthesis. The results of applying this information-analytical system to the computer-assisted design of inorganic compounds promising in the search for new materials for electronics are presented.
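One illustrative reading of precedent-based recognition is nearest-neighbor voting in a descriptor space: a candidate compound is classified by its most similar already-studied precedents. The descriptors and data below are invented for the sketch and do not come from the system's databases.

```python
# Toy precedent-based prediction: does a candidate compound form the
# target structure? Descriptors and labels are hypothetical.
import math

# (electronegativity difference, radius ratio) -> forms target structure?
precedents = [((1.8, 0.54), True), ((2.1, 0.41), True),
              ((0.4, 0.92), False), ((0.7, 0.80), False)]

def predict(query, k=3):
    """Majority vote among the k nearest precedents."""
    nearest = sorted(precedents, key=lambda p: math.dist(query, p[0]))[:k]
    votes = [label for _, label in nearest]
    return votes.count(True) > k // 2

print(predict((1.9, 0.50)))   # True: closest to the "forms" precedents
```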


Author(s):  
Xenia Naidenova

The concept of a good classification test is used in this chapter as a dual element of interconnected algebraic lattices, and the operations of lattice generation are interpreted as human mental acts. Inferring chains of dual lattice elements ordered by the inclusion relation lies at the foundation of generating good classification tests. The concept of an inductive transition from one element of a chain to its nearest element in the lattice is defined, and special reasoning rules for realizing inductive transitions are formulated. The concepts of admissible and essential values (objects) are introduced; searching for admissible and essential values (objects) as part of reasoning is based on inductive diagnostic rules. Next, the chapter discusses the relations between constructing good tests and Formal Concept Analysis (FCA). The inference of good classification tests is decomposed into two kinds of subtasks that accord with human mental acts; this decomposition allows incremental inductive-deductive inferences to be modeled. The problems of creating an integrative inductive-deductive model of commonsense reasoning are discussed in the last section of the chapter.
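The Galois connection underlying this lattice view is easy to sketch: one map sends a set of objects to their common attributes, the dual map sends a set of attributes to the objects possessing them, and a test for a class is an attribute set whose covered objects all belong to that class. A toy formal context, purely for illustration:

```python
# The two dual Galois maps of FCA, plus a naive classification-test check.
objects = {                      # object -> its attributes
    "o1": {"a", "b"},
    "o2": {"a", "b", "c"},
    "o3": {"b", "c"},
}
positive = {"o1", "o2"}          # target class

def common_attrs(objs):
    """Attributes shared by all objects in objs (one Galois map)."""
    sets = [objects[o] for o in objs]
    return set.intersection(*sets) if sets else set()

def covering_objs(attrs):
    """Objects possessing every attribute in attrs (the dual map)."""
    return {o for o, a in objects.items() if attrs <= a}

def is_test(attrs):
    """A test for the class covers positive objects only."""
    cov = covering_objs(attrs)
    return bool(cov) and cov <= positive

print(common_attrs({"o1", "o2"}))   # {'a', 'b'}
print(covering_objs({"a"}))         # {'o1', 'o2'}
print(is_test({"a"}))               # True: 'a' occurs only inside the class
print(is_test({"b"}))               # False: 'b' also covers o3
```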


Author(s):  
Arkadij Zakrevskij

Systems of many Boolean equations in many variables are considered; such systems have many practical applications in logic design and diagnostics, pattern recognition, artificial intelligence, et cetera. Special attention is paid to systems of linear equations, which play an important role in information security problems. A compact matrix representation is suggested for such systems. A series of original methods and algorithms for solving them is surveyed in this chapter, together with information on their program implementation and experimental estimates of their efficiency.
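For the linear case, the standard baseline is Gaussian elimination over GF(2), where addition is XOR. The sketch below is generic textbook code, not one of the surveyed original algorithms, and the example system is invented.

```python
# Solve A x = b over GF(2) by Gaussian elimination on an augmented
# 0/1 matrix; returns one solution (free variables set to 0) or None.

def solve_gf2(A, b):
    m, n = len(A), len(A[0])
    rows = [A[i][:] + [b[i]] for i in range(m)]   # augmented matrix
    pivot_col = {}                                # pivot row -> its column
    r = 0
    for c in range(n):
        # Find a row with a 1 in column c at or below row r.
        pr = next((i for i in range(r, m) if rows[i][c]), None)
        if pr is None:
            continue
        rows[r], rows[pr] = rows[pr], rows[r]
        for i in range(m):                        # eliminate column c elsewhere
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivot_col[r] = c
        r += 1
    for i in range(r, m):                         # 0 = 1 means inconsistent
        if rows[i][n]:
            return None
    x = [0] * n
    for i, c in pivot_col.items():
        x[c] = rows[i][n]                         # free variables default to 0
    return x

# x1 ^ x2 = 1,  x2 ^ x3 = 0,  x1 ^ x3 = 1
A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
b = [1, 0, 1]
print(solve_gf2(A, b))   # [1, 0, 0]
```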


Author(s):
Dmitry I. Ignatov, Jonas Poelmans

Recommender systems are becoming an inseparable part of many modern Internet web sites and web shops. The quality of the recommendations made may significantly influence the user’s browsing experience and the revenues of web site owners. Developers can choose among a variety of recommender algorithms; unfortunately, no general scheme exists for evaluating their recall and precision. In this chapter, the authors propose a cross-validation-based method for diagnosing the strengths and weaknesses of recommender algorithms. The method not only splits the initial data into training and test subsets, but also splits the attribute set into a hidden and a visible part. Experiments were performed on a user-based and an item-based recommender algorithm, both applied to the MovieLens dataset; the authors found that classical user-based methods perform better in terms of recall and precision.
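The protocol can be sketched in a few lines: split users into training and test sets, then split each test user's items into a visible part fed to the recommender and a hidden part used as ground truth for recall and precision. The popularity-based recommender below is only a stand-in, not the chapter's user- or item-based algorithms, and the data are synthetic.

```python
# Two-way split evaluation for a recommender: train/test users, and
# visible/hidden items for each test user (illustrative sketch).
import random
from collections import Counter

random.seed(0)

# Hypothetical interaction data: user -> set of liked items.
data = {u: {f"i{random.randrange(20)}" for _ in range(8)} for u in range(50)}

users = list(data)
random.shuffle(users)
train, test = users[:40], users[40:]

popularity = Counter(i for u in train for i in data[u])

def recommend(visible, k=5):
    """Top-k popular items the user has not already seen."""
    return [i for i, _ in popularity.most_common() if i not in visible][:k]

precisions, recalls = [], []
for u in test:
    items = sorted(data[u])
    half = len(items) // 2
    visible, hidden = set(items[:half]), set(items[half:])
    recs = recommend(visible)
    if not hidden or not recs:
        continue
    hits = len(set(recs) & hidden)
    precisions.append(hits / len(recs))
    recalls.append(hits / len(hidden))

print(f"precision={sum(precisions)/len(precisions):.2f} "
      f"recall={sum(recalls)/len(recalls):.2f}")
```

Hiding part of each test user's profile, rather than whole users only, is what lets the scheme measure how well an algorithm reconstructs known-but-withheld preferences.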

