Developments in Natural Intelligence Research and Knowledge Engineering
Latest Publications


TOTAL DOCUMENTS: 22 (five years: 0)
H-INDEX: 1 (five years: 0)

Published By IGI Global
ISBN: 9781466617438, 9781466617445

Author(s): Kai Hu, Yingxu Wang, Yousheng Tian

Autonomous on-line knowledge discovery and acquisition play an important role in cognitive informatics, cognitive computing, knowledge engineering, and computational intelligence. Building on the latest advances in cognitive informatics and denotational mathematics, this paper develops a web knowledge discovery engine for web document restructuring and comprehension, which decodes on-line knowledge expressed in informal documents into cognitive knowledge represented by concept algebra and concept networks. A visualized concept network explorer and a semantic analyzer are implemented to capture and refine queries based on concept algebra, and a graphical interface built on concept and semantic models refines users’ queries. To enable autonomous information restructuring by machines, a two-level knowledge base that mimics human lexical/syntactical and semantic cognition is introduced. The information restructuring model provides a foundation for automatic concept indexing and knowledge extraction from web documents. The web knowledge discovery engine extends machine learning capability from imperative and adaptive information processing to autonomous and cognitive knowledge processing of unstructured documents in natural languages.
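As a rough illustration of the kind of structure involved (a toy graph, not concept algebra itself; all concept names, attributes, and relation types below are hypothetical), a concept network can be sketched as concepts linked by typed relations over a lexical level that maps surface words to concepts:

```python
# Minimal sketch of a two-level knowledge base: a lexical level mapping
# surface words to concepts, and a semantic level linking concepts by
# typed relations. All names and relations here are illustrative only.

class Concept:
    def __init__(self, name, attributes=None):
        self.name = name
        self.attributes = set(attributes or [])
        self.relations = {}          # relation type -> set of target Concepts

    def relate(self, relation, other):
        self.relations.setdefault(relation, set()).add(other)

class ConceptNetwork:
    def __init__(self):
        self.lexicon = {}            # surface word -> Concept (lexical level)
        self.concepts = {}           # name -> Concept (semantic level)

    def add(self, name, attributes=None, words=()):
        c = self.concepts.setdefault(name, Concept(name, attributes))
        for w in words:
            self.lexicon[w] = c
        return c

    def query(self, word):
        """Resolve a surface word to a concept and its related concepts."""
        c = self.lexicon.get(word)
        if c is None:
            return None
        return {rel: {t.name for t in targets}
                for rel, targets in c.relations.items()}

net = ConceptNetwork()
dog = net.add("dog", {"animate", "domestic"}, words=("dog", "dogs"))
animal = net.add("animal", {"animate"}, words=("animal",))
dog.relate("is-a", animal)
print(net.query("dogs"))             # {'is-a': {'animal'}}
```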


Author(s): Tadeusz Wibig

Standard experimental data analysis is based mainly on conventional, deterministic inference. Modern physics problems have become so complex that new analysis ideas are highly welcome in the field. In this paper, the author analyzes a problem from contemporary high-energy physics: estimating parameters of an observed complex phenomenon. The article compares the performance of natural and artificial neural networks with a standard statistical method of data analysis and minimization. The general concept of the relations between CI and standard (external) classical and modern informatics was studied using Natural Neural Networks (NNN), Artificial Neural Networks (ANN), and the MINUIT minimization package from CERN. Following the idea of Autonomic Computing, the natural networks were the brains of high school students involved in the Roland Maze Project. Some preliminary results of the comparison are given and discussed.
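For readers unfamiliar with the statistical baseline, the sketch below shows least-squares parameter estimation of the kind MINUIT automates; scipy.optimize.minimize stands in for MINUIT's MIGRAD here, and the exponential model and synthetic data are invented for illustration, not taken from the paper:

```python
# Sketch of the statistical baseline: chi-square minimization of model
# parameters, with scipy.optimize.minimize standing in for MINUIT/MIGRAD.
# The model and synthetic data are invented for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
true_a, true_b = 2.5, 1.2
y = true_a * np.exp(-x / true_b) + rng.normal(0.0, 0.05, x.size)

def chi2(params):
    a, b = params
    residuals = y - a * np.exp(-x / b)
    return np.sum(residuals**2 / 0.05**2)   # chi-square with sigma = 0.05

result = minimize(chi2, x0=[1.0, 1.0], method="Nelder-Mead")
print("estimated a, b:", result.x)           # close to (2.5, 1.2)
```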


Author(s): Yingxu Wang, George Baciu, Yiyu Yao, Witold Kinsner, Keith Chan, ...

Cognitive informatics is a transdisciplinary enquiry of computer science, information sciences, cognitive science, and intelligence science that investigates the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing. Cognitive computing is an emerging paradigm of intelligent computing methodologies and systems based on cognitive informatics that implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain. This article presents a set of collective perspectives on cognitive informatics and cognitive computing, as well as their applications in abstract intelligence, computational intelligence, computational linguistics, knowledge representation, symbiotic computing, granular computing, semantic computing, machine learning, and social computing.


Author(s): Ke-Jia Chen, Jean-Paul A. Barthès

We consider Personal Assistant (PA) agents as cognitive agents capable of helping users handle tasks at their workplace. A PA must communicate with the user using casual language, sub-contract the requested tasks, and present the results in a timely fashion. This leads to fairly complex cognitive agents. In addition, such an agent should learn from previous tasks or exchanges, which further increases its complexity. Learning requires a memory, which raises two questions: Is it possible to design and build a generic model of memory? If so, is it worth the trouble? The article addresses these questions by presenting the design and implementation of a memory for PA agents, using a case-based approach, which results in an improved agent model called MemoPA.
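A minimal sketch of a case-based agent memory of the general kind described follows; the case structure and the Jaccard similarity measure are illustrative assumptions, not MemoPA's actual design:

```python
# Minimal sketch of a case-based memory for a PA agent: store past task
# cases and retrieve the most similar one for a new request. The case
# structure and similarity measure are illustrative assumptions only.

class Case:
    def __init__(self, request, solution):
        self.features = set(request.lower().split())
        self.request = request
        self.solution = solution

class CaseMemory:
    def __init__(self):
        self.cases = []

    def remember(self, request, solution):
        self.cases.append(Case(request, solution))

    def recall(self, request):
        """Return the stored solution whose request is most similar
        (Jaccard similarity over word sets) to the new request."""
        query = set(request.lower().split())
        def jaccard(c):
            return len(query & c.features) / len(query | c.features)
        best = max(self.cases, key=jaccard, default=None)
        return best.solution if best else None

memory = CaseMemory()
memory.remember("book a meeting room for Monday", "room_booking_task")
memory.remember("translate this report to French", "translation_task")
print(memory.recall("book a room for Tuesday"))  # room_booking_task
```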


Author(s): Abdesslem Layeb, Djamel-Eddine Saidouni

In this work, the authors focus on quantum evolutionary hybridization and its contribution to solving the binary decision diagram (BDD) variable-ordering problem. The problem is formulated in terms of a quantum representation, and an evolutionary dynamic borrowing quantum operators is defined. The sifting search strategy is used to increase the efficiency of the exploration process, and experiments on a wide range of data sets show the effectiveness of the proposed framework and its ability to achieve good-quality solutions. The proposed approach is distinguished by a reduced population size and a reasonable number of iterations to find the best order, thanks to the principles of quantum computing and to the sifting strategy.
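The following is a minimal sketch of the quantum-inspired evolutionary mechanism such approaches rely on, not the authors' exact operators: each individual is a vector of qubit angles that is measured into a binary string, and the angles are rotated toward the best string found. The toy one-max fitness stands in for the real objective, BDD size under a candidate variable order refined by sifting:

```python
# Sketch of a quantum-inspired evolutionary step: qubit individuals are
# measured into binary strings, evaluated, and rotated toward the best
# string found. Fitness (counting ones) is a toy stand-in; the paper's
# actual objective is BDD size under a variable order, refined by sifting.
import numpy as np

rng = np.random.default_rng(1)
n_bits, pop_size, generations = 12, 4, 30
theta = np.full((pop_size, n_bits), np.pi / 4)   # equal superposition

def measure(angles):
    # P(bit = 1) = sin^2(theta), the standard Q-bit observation rule
    return (rng.random(angles.shape) < np.sin(angles) ** 2).astype(int)

def fitness(bits):
    return bits.sum(axis=1)                      # toy objective: max ones

best_bits, best_fit = None, -1
for _ in range(generations):
    pop = measure(theta)
    fits = fitness(pop)
    if fits.max() > best_fit:
        best_fit, best_bits = fits.max(), pop[fits.argmax()].copy()
    # rotate each qubit a small step toward the best solution's bit value
    delta = 0.05 * np.where(best_bits == 1, 1.0, -1.0)
    theta = np.clip(theta + delta, 0.05, np.pi / 2 - 0.05)

print(best_fit, best_bits)                       # converges toward all ones
```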


Author(s): Jiayu Zhou, Shi Wang, Cungen Cao

Chinese information processing is a critical step toward cognitive linguistic applications such as machine translation. Lexical hyponymy, which exists in some Eastern languages such as Chinese, is a kind of hyponymy that can be inferred directly from the lexical composition of concepts, and it is of great importance in ontology learning. A key problem, however, is that lexical hyponymy is so commonsense that it cannot be discovered by existing acquisition methods. In this paper, we systematically define the lexical hyponymy relationship and its linguistic features, and we propose a computational approach that semi-automatically learns hierarchical lexical hyponymy relations from a large-scale concept set, rather than analyzing the lexical structures of individual concepts. Our novel approach discovers lexical hyponymy relations by examining statistical features in a Common Suffix Tree. Experimental results show that our approach correctly discovers most lexical hyponymy relations in a given large-scale concept set.
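To illustrate the suffix intuition behind the Common Suffix Tree (with example terms of our own; the paper's statistical criteria are more involved), note that the head of a Chinese compound is typically its suffix, so a concept whose suffix is itself a concept is a hyponym candidate:

```python
# Sketch of the suffix intuition behind a Common Suffix Tree: in Chinese,
# the head of a compound is usually its suffix, so terms sharing a suffix
# that is itself a concept are hyponym candidates of that suffix concept.
# The example terms are illustrative; the paper uses statistical features
# over a full Common Suffix Tree rather than this direct check.
from collections import defaultdict

concepts = {"树", "苹果树", "梨树", "汽车", "火车", "车"}

candidates = defaultdict(list)
for term in concepts:
    # try every proper suffix of the term, longest first
    for i in range(1, len(term)):
        suffix = term[i:]
        if suffix in concepts:
            candidates[suffix].append(term)
            break

for hypernym, hyponyms in candidates.items():
    for h in hyponyms:
        print(f"{h} is-a {hypernym}")
# e.g. 苹果树 is-a 树, 梨树 is-a 树, 汽车 is-a 车, 火车 is-a 车
```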


Author(s): Guilong Liu, William Zhu

Rough set theory is an important technique for knowledge discovery in databases. Classical rough set theory, proposed by Pawlak, is based on equivalence relations, but many interesting and meaningful extensions have been made based on binary relations and coverings, respectively. This paper compares covering rough sets with rough sets based on binary relations, focusing on the conditions under which a covering rough set can be generated by a binary relation, and a binary-relation-based rough set by a covering.
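As a concrete reference point for the comparison, the sketch below shows the standard approximations induced by a binary relation R through successor neighborhoods R(x) = {y : (x, y) ∈ R}; the universe, relation, and target set are toy choices of ours:

```python
# Standard rough-set construction used on both sides of the comparison:
# given a binary relation R on a universe U, the successor neighborhood
# R(x) = {y : (x, y) in R} induces lower/upper approximations of X.
# The universe, relation, and target set below are toy examples.

U = {1, 2, 3, 4}
R = {(1, 1), (1, 2), (2, 2), (3, 3), (3, 4), (4, 4)}
X = {1, 2, 3}

def neighborhood(x):
    return {y for (a, y) in R if a == x}

lower = {x for x in U if neighborhood(x) <= X}   # R(x) is a subset of X
upper = {x for x in U if neighborhood(x) & X}    # R(x) meets X

print("lower:", lower)   # {1, 2}
print("upper:", upper)   # {1, 2, 3}; boundary region is {3}
```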


Author(s): J. Anitha, C. Kezi Selva Vijila, D. Jude Hemanth

Fuzzy approaches are among the most widely used artificial intelligence techniques in the field of ophthalmology. These techniques classify abnormal retinal images into different categories, which assists in treatment planning. The main characteristic that makes fuzzy techniques highly popular is their accuracy. However, the accuracy of fuzzy logic techniques depends on expert knowledge, which in turn relies on the input samples; insignificant input samples may reduce the accuracy and hence the efficiency of the fuzzy technique. In this work, the application of a Genetic Algorithm (GA) for optimizing the input samples is explored in the context of abnormal retinal image classification. Abnormal retinal images from four different classes are used, a comprehensive feature set is extracted from these images, and classification is performed both with a conventional fuzzy classifier and with the GA-optimized fuzzy classifier. Experimental results show higher accuracy for the GA-based classifier than for the conventional fuzzy classifier.
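A minimal sketch of the general idea follows, assuming a GA over bit masks that select training samples and a simple nearest-mean classifier on synthetic data; the paper's retinal features, fuzzy classifier, and GA details differ:

```python
# Sketch of GA-based input-sample selection: a bit mask over training
# samples is evolved to maximize validation accuracy of a simple
# nearest-mean classifier. Synthetic 2-class data stands in for the
# paper's retinal features; its fuzzy classifier and GA details differ.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
Xv = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(3, 1, (10, 2))])
yv = np.array([0] * 10 + [1] * 10)

def accuracy(mask):
    keep = mask.astype(bool)
    if not keep.any() or len(set(y[keep])) < 2:
        return 0.0                               # need both classes kept
    means = np.array([X[keep & (y == c)].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((Xv[:, None, :] - means) ** 2).sum(-1), axis=1)
    return (pred == yv).mean()

pop = rng.integers(0, 2, (20, len(y)))           # 20 random sample masks
for _ in range(40):
    fits = np.array([accuracy(m) for m in pop])
    parents = pop[np.argsort(fits)[-10:]]        # keep the 10 fittest
    cut = rng.integers(1, len(y))                # one-point crossover
    children = np.vstack([np.concatenate([parents[i % 10][:cut],
                                          parents[(i + 1) % 10][cut:]])
                          for i in range(10)])
    flip = rng.random(children.shape) < 0.02     # bit-flip mutation
    pop = np.vstack([parents, np.where(flip, 1 - children, children)])

print("best validation accuracy:", max(accuracy(m) for m in pop))
```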


Author(s): Yasuo Kudo, Tetsuya Murai

This paper focuses on rough set theory, which provides mathematical foundations for the set-theoretical approximation of concepts and for reasoning about data. It also presents the concept of relative reducts, one of the most important notions for rule generation based on rough set theory. From the viewpoint of approximation, the authors introduce an evaluation criterion for relative reducts using the roughness of the partitions constructed from them. The proposed criterion evaluates each relative reduct by the average coverage of the decision rules based on it, which corresponds to evaluating the roughness of the partition constructed from the relative reduct.
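To make the criterion concrete, the sketch below scores a reduct by the average coverage of the decision rules it generates, where a rule's coverage is the fraction of its decision class that the rule matches; the decision table and attribute names are invented for illustration:

```python
# Toy illustration of the proposed criterion: evaluate a relative reduct
# by the average coverage of the decision rules it generates. The decision
# table and attribute names are invented, not from the paper.
from collections import defaultdict

# objects: (condition attribute values, decision)
table = [({"a": 0, "b": 0}, "yes"),
         ({"a": 0, "b": 1}, "yes"),
         ({"a": 1, "b": 0}, "no"),
         ({"a": 1, "b": 1}, "no")]

def average_coverage(reduct):
    # group objects into equivalence classes by their reduct values
    blocks = defaultdict(list)
    for cond, dec in table:
        blocks[tuple(cond[a] for a in reduct)].append(dec)
    class_size = defaultdict(int)
    for _, dec in table:
        class_size[dec] += 1
    # one rule per (block, decision) pair; coverage = matched / class size
    coverages = [decs.count(d) / class_size[d]
                 for decs in blocks.values() for d in set(decs)]
    return sum(coverages) / len(coverages)

print(average_coverage(["a"]))        # 1.0: each rule covers a whole class
print(average_coverage(["a", "b"]))   # 0.5: finer partition, lower coverage
```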


Author(s): Witold Kinsner, Warren Grieder

This paper describes how the selection of parameters for the variance fractal dimension (VFD) multiscale time-domain algorithm can amplify the fractal dimension trajectory obtained for a natural-speech waveform in the presence of ambient noise. The technique is based on the variance fractal dimension trajectory (VFDT) algorithm, which is used not only to detect the external boundaries of an utterance but also its internal pauses representing unvoiced speech. The VFDT algorithm can also amplify internal features of phonemes. This fractal feature amplification is achieved by selecting the time increments in a dyadic manner rather than as a unit-distance sequence. The amplified trajectories for different phonemes are more distinct, thus providing a better characterization of the individual segments in the speech signal, and the approach is superior to other energy-based boundary-detection techniques. Observations are based on extensive experimental results on speech utterances digitized at 44.1 kilosamples per second, with 16 bits in each sample.
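A minimal sketch of the core computation, with illustrative window and lag choices: within each window the variance of amplitude increments is measured at dyadic lags dt = 2^k, the Hurst exponent H is half the slope of log-variance versus log-lag, and the waveform's variance fractal dimension is D = 2 - H:

```python
# Sketch of a variance fractal dimension trajectory with dyadic lags:
# within each window, Var[x(t + dt) - x(t)] ~ dt^(2H) is fit over lags
# dt = 2^k, and D = 2 - H. Window and lag choices here are illustrative.
import numpy as np

def vfd(window, ks=(0, 1, 2, 3, 4)):
    log_dt, log_var = [], []
    for k in ks:
        dt = 2 ** k                         # dyadic time increment
        inc = window[dt:] - window[:-dt]
        log_dt.append(np.log(dt))
        log_var.append(np.log(inc.var() + 1e-12))
    slope = np.polyfit(log_dt, log_var, 1)[0]
    H = slope / 2.0
    return 2.0 - H                          # waveform: D = E + 1 - H, E = 1

def vfd_trajectory(signal, win=1024, hop=256):
    return np.array([vfd(signal[i:i + win])
                     for i in range(0, len(signal) - win + 1, hop)])

# toy example: white noise (rough, D near 2) vs. a slow sine (smooth)
rng = np.random.default_rng(3)
fs = 44100
t = np.arange(fs) / fs
noise, tone = rng.normal(0, 1, fs), np.sin(2 * np.pi * 220 * t)
print(vfd_trajectory(noise)[:3])    # values near 2
print(vfd_trajectory(tone)[:3])     # noticeably lower values
```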

