Semantic Manipulations and Formal Ontology for Machine Learning based on Concept Algebra

Author(s):  
Yingxu Wang ◽  
Yousheng Tian ◽  
Kendal Hu

Towards the formalization of ontological methodologies for dynamic machine learning and semantic analyses, a new form of denotational mathematics known as concept algebra is introduced. Concept Algebra (CA) is a denotational mathematical structure for formal knowledge representation and manipulation in machine learning and cognitive computing. CA provides a rigorous knowledge modeling and processing tool, which extends the informal, static, and application-specific ontological technologies to a formal, dynamic, and general mathematical means. An operational semantics for the calculus of CA is formally elaborated using a set of computational processes in real-time process algebra (RTPA). A case study is presented on how machines, cognitive robots, and software agents may mimic the key ability of human beings to autonomously manipulate knowledge in generic learning using CA. This work demonstrates the expressive power and a wide range of applications of CA for both humans and machines in cognitive computing, semantic computing, machine learning, and computational intelligence.


Author(s):  
Luis-Felipe Rodríguez ◽  
Félix Ramos ◽  
Yingxu Wang

Emotions are among the important subconscious mechanisms that influence human behavior, attention, and decision making. The emotion process helps determine how humans perceive their internal status and needs in order to form the consciousness of an individual. Emotions have been studied from multidisciplinary perspectives, covering a wide range of empirical and psychological topics such as understanding emotional processes, creating cognitive and computational models of emotions, and applying them in computational intelligence. This paper presents a comprehensive survey of cognitive and computational models of emotions resulting from multidisciplinary studies. It explores how cognitive models serve as the theoretical basis of computational models of emotions. The mechanisms underlying affective behaviors are examined as important elements in the design of these computational models. A comparative analysis of current approaches is elaborated based on recent advances towards a coherent cognitive computational model of emotions, which leads to machine-simulated emotions for cognitive robots and autonomous agent systems in cognitive informatics and cognitive computing.


2010 ◽  
Vol 04 (03) ◽  
pp. 331-356 ◽  
Author(s):  
YINGXU WANG

Computing with words (CWW) is an intelligent computing methodology for processing words, linguistic variables, and their semantics, which mimics the natural-language-based reasoning mechanisms of human beings in soft computing, semantic computing, and cognitive computing. The central objects in CWW techniques are words and linguistic variables, which may be formally modeled by abstract concepts: the basic cognitive units used to identify and model a concrete entity in the real world or an abstract object in the perceived world. Therefore, concepts are the most fundamental linguistic entities that carry certain meanings in expression, thinking, reasoning, and system modeling, and they may be formally modeled as abstract and dynamic mathematical structures in denotational mathematics. This paper presents a formal theory for concept and knowledge manipulation in CWW known as concept algebra. The mathematical models of abstract and concrete concepts are developed based on the object-attribute-relation (OAR) theory. The formal methodology for manipulating knowledge as a concept network is described. Case studies demonstrate that concept algebra provides a generic and formal means of knowledge manipulation, capable of dealing with complex knowledge and its algebraic operations in CWW.
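As a rough illustrative sketch only (not Wang's formal definitions of concept algebra), the Python snippet below models a concept as an OAR-style structure and shows one simple algebraic manipulation: a conjunction that unites the objects of two concepts and keeps their shared attributes. The class, field names, and operation are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    """Toy OAR-style concept: a set of objects, a set of attributes,
    and a set of named relations to other concepts (not formalized here)."""
    name: str
    objects: frozenset = field(default_factory=frozenset)
    attributes: frozenset = field(default_factory=frozenset)
    relations: frozenset = field(default_factory=frozenset)

    def conjoin(self, other: "Concept") -> "Concept":
        """Illustrative 'conjunction': build a broader concept by uniting
        objects and relations and intersecting the shared attributes."""
        return Concept(
            name=f"{self.name}+{other.name}",
            objects=self.objects | other.objects,
            attributes=self.attributes & other.attributes,
            relations=self.relations | other.relations,
        )

pen = Concept("pen", frozenset({"ballpoint"}), frozenset({"writing_tool", "portable"}))
pencil = Concept("pencil", frozenset({"HB_pencil"}), frozenset({"writing_tool", "erasable", "portable"}))
print(pen.conjoin(pencil).attributes)  # shared attributes: writing_tool, portable
```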


Author(s):  
Yingxu Wang

Inference, as the basic mechanism of thought, is an ability gifted to human beings; it is a cognitive process that creates rational causations between a pair of cause and effect based on empirical arguments, formal reasoning, and/or statistical norms. It is recognized that a coherent theory and mathematical means are needed for dealing with formal causal inferences. Presented here is a novel denotational mathematical means for formal inferences known as Inference Algebra (IA), structured as a set of algebraic operators on a set of formal causations. The taxonomy and framework of formal causal inferences in IA are explored in three categories: a) logical inferences; b) analytic inferences; and c) hybrid inferences. IA introduces the calculus of the discrete causal differential and formal models of causations. IA enables artificial intelligence and computational intelligence systems to mimic human inference abilities through cognitive computing. A wide range of applications of IA are identified and demonstrated in cognitive informatics and computational intelligence towards novel theories and technologies for machine-enabled inference and reasoning. This work is presented in two parts: the inference operators of IA, as well as their extensions and applications, are presented in this paper, while the structure of formal inference, the framework of IA, and the mathematical models of formal causations have been published in the first part of the paper in IJCINI 5(4).
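The paper's formal operators are not reproduced here, but as a loose, hypothetical illustration of chaining formal causations, the toy Python snippet below represents a causation as a weighted cause-effect pair and composes two causations transitively. The representation and the product composition rule are assumptions for illustration only, not IA's actual operators.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Causation:
    """Toy formal causation: cause -> effect with a strength in [0, 1]."""
    cause: str
    effect: str
    strength: float

def chain(c1: Causation, c2: Causation) -> Causation:
    """Illustrative transitive composition: if A causes B and B causes C,
    infer that A causes C with the product of the two strengths."""
    assert c1.effect == c2.cause, "causations do not chain"
    return Causation(c1.cause, c2.effect, c1.strength * c2.strength)

rain_wet = Causation("rain", "wet_road", 0.9)
wet_slip = Causation("wet_road", "slippery", 0.8)
print(chain(rain_wet, wet_slip))  # chained causation 'rain' -> 'slippery', strength ≈ 0.72
```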


2018 ◽  
Author(s):  
Sherif Tawfik ◽  
Olexandr Isayev ◽  
Catherine Stampfl ◽  
Joseph Shapter ◽  
David Winkler ◽  
...  

Materials constructed from different van der Waals two-dimensional (2D) heterostructures offer a wide range of benefits, but these systems have been little studied because of their experimental and computational complexity, and because of the very large number of possible combinations of 2D building blocks. The simulation of the interface between two different 2D materials is computationally challenging due to the lattice mismatch problem, which sometimes necessitates the creation of very large simulation cells for performing density-functional theory (DFT) calculations. Here we use a combination of DFT, linear regression, and machine learning techniques to rapidly determine the interlayer distance between two different 2D materials stacked in a bilayer heterostructure, as well as the band gap of the bilayer. Our work provides an excellent proof of concept by quickly and accurately predicting a structural property (the interlayer distance) and an electronic property (the band gap) for a large number of hybrid 2D materials. This work paves the way for rapid computational screening of the vast parameter space of van der Waals heterostructures to identify new hybrid materials with useful and interesting properties.
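As a hedged sketch of this kind of workflow (not the authors' actual pipeline, descriptors, or dataset), the Python example below trains a random forest regressor to predict a bilayer's interlayer distance from simple per-layer descriptors. The feature names and the synthetic target are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical descriptors for each bilayer: lattice constants (Å) and
# monolayer band gaps (eV) of the two constituent 2D materials.
n = 500
X = np.column_stack([
    rng.uniform(2.5, 4.5, n),   # lattice constant, layer 1
    rng.uniform(2.5, 4.5, n),   # lattice constant, layer 2
    rng.uniform(0.0, 3.0, n),   # band gap, layer 1
    rng.uniform(0.0, 3.0, n),   # band gap, layer 2
])
# Synthetic target standing in for a DFT-computed interlayer distance (Å).
y = 3.0 + 0.1 * (X[:, 0] + X[:, 1]) + 0.05 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE (Å):", mean_absolute_error(y_test, model.predict(X_test)))
```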


2020 ◽  
Author(s):  
Sina Faizollahzadeh Ardabili ◽  
Amir Mosavi ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
Annamaria R. Varkonyi-Koczy ◽  
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two models showed promising results (i.e., the multi-layered perceptron, MLP, and the adaptive network-based fuzzy inference system, ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. The paper further suggests that real novelty in outbreak prediction can be realized by integrating machine learning and SEIR models.
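As a minimal sketch of one of the model families discussed (an MLP regressor fitted to an outbreak time series), the Python example below fits scikit-learn's MLPRegressor to a synthetic logistic-growth curve and extrapolates one week ahead. The synthetic data and hyperparameters are assumptions for illustration, not those used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic cumulative-case curve (logistic growth), standing in for real data.
days = np.arange(60).reshape(-1, 1)
cases = 10000 / (1 + np.exp(-0.2 * (days.ravel() - 30)))

# Small MLP on the (scaled) day index; illustrative, not a tuned model.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
model.fit(days, cases)

future = np.arange(60, 67).reshape(-1, 1)
print(model.predict(future).round())  # crude one-week-ahead extrapolation
```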


2020 ◽  
Vol 15 ◽  
Author(s):  
Shuwen Zhang ◽  
Qiang Su ◽  
Qin Chen

Abstract: Major animal diseases pose a great threat to animal husbandry and human beings. With the deepening of globalization and the abundance of data resources, the prediction and analysis of animal diseases using big data are becoming more and more important. The focus of machine learning is to make computers learn from data and use the learned experience to analyze and predict. This paper first introduces the animal epidemic situation and machine learning, and then briefly introduces the application of machine learning in animal disease analysis and prediction. Machine learning is mainly divided into supervised learning and unsupervised learning. Supervised learning includes support vector machines, naive Bayes, decision trees, random forests, logistic regression, artificial neural networks, deep learning, and AdaBoost. Unsupervised learning includes the expectation-maximization algorithm, principal component analysis, hierarchical clustering, and maximum entropy (MaxEnt). Through the discussion in this paper, readers can gain a clearer concept of machine learning and understand its application prospects in animal diseases.
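As a small, hypothetical example of the supervised-learning setting the paper surveys, the Python snippet below trains a logistic regression classifier to flag outbreak risk from a few herd-level features. The feature set and the synthetic labels are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

# Hypothetical herd-level features: herd size, vaccination rate,
# average temperature (°C), and number of nearby reported cases.
n = 1000
X = np.column_stack([
    rng.integers(50, 2000, n),
    rng.uniform(0.0, 1.0, n),
    rng.uniform(-5.0, 35.0, n),
    rng.poisson(2.0, n),
])
# Synthetic outbreak label: risk rises with nearby cases and low vaccination.
risk = 0.8 * X[:, 3] - 3.0 * X[:, 1] + rng.normal(size=n)
y = (risk > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```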


2021 ◽  
Vol 15 ◽  
Author(s):  
Alhassan Alkuhlani ◽  
Walaa Gad ◽  
Mohamed Roushdy ◽  
Abdel-Badeeh M. Salem

Background: Glycosylation is one of the most common post-translational modifications (PTMs) in organism cells. It plays important roles in several biological processes, including cell-cell interaction, protein folding, antigen recognition, and immune response. In addition, glycosylation is associated with many human diseases such as cancer, diabetes, and coronaviruses. The experimental techniques for identifying glycosylation sites are time-consuming, require extensive laboratory work, and are expensive. Therefore, computational intelligence techniques are becoming very important for glycosylation site prediction. Objective: This paper is a theoretical discussion of the technical aspects of applying biotechnological and computational intelligence methods (e.g., artificial intelligence and machine learning) to digital bioinformatics research and intelligent biocomputing. Computational intelligence techniques have shown efficient results for predicting N-linked, O-linked, and C-linked glycosylation sites. In the last two decades, many studies have been conducted on glycosylation site prediction using these techniques. In this paper, we analyze and compare a wide range of intelligent techniques from these studies across multiple aspects. The current challenges and difficulties facing software developers and knowledge engineers in predicting glycosylation sites are also included. Method: The comparison between these different studies covers many criteria, including databases, feature extraction and selection, machine learning classification methods, evaluation measures, and performance results. Results and conclusions: Many challenges and problems are presented. Consequently, more efforts are needed to obtain more accurate prediction models for the three basic types of glycosylation sites.
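As an illustrative sketch of a typical prediction pipeline of this kind (not any specific study's method), the Python code below encodes a fixed-length sequence window around a candidate site by amino-acid composition and trains a support vector machine. The window size, the encoding, and the synthetic labels are assumptions; real studies use curated databases and richer features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(2)

def encode_window(window: str) -> np.ndarray:
    """Amino-acid composition of a sequence window around a candidate site."""
    counts = np.array([window.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(window), 1)

# Synthetic 21-residue windows with made-up labels (1 = glycosylated site).
windows = ["".join(rng.choice(list(AMINO_ACIDS), 21)) for _ in range(400)]
labels = rng.integers(0, 2, 400)

X = np.vstack([encode_window(w) for w in windows])
clf = SVC(kernel="rbf", C=1.0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```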


Author(s):  
_______ Archana ◽  
Charu Datta ◽  
Pratibha Tiwari

Degradation of the environment is one of the most serious challenges facing mankind in today's world. Mankind has been facing a wide range of problems arising from the degradation of the environment. Not only the areas under human inhabitation, but also the areas of the planet without human population, have been suffering from these problems. As the population increases day by day, the amenities are not improved at the same pace. With the advancement of science and technology, the needs of human beings have been changing rapidly, and as a result, different types of environmental problems have been arising. Environmental degradation is a wide-reaching problem, and its likely influence on the health of the human population is great. It may be defined as the deterioration of the environment through the depletion of resources such as air, water, and soil, the destruction of ecosystems, and the extinction of wildlife. Environmental degradation has occurred due to recent socio-economic, institutional, and technological activities. Poverty still remains at the root of several environmental problems, and awareness needs to be created among people about the ill effects of environmental pollution. From the whole research it is clear that all factors of environmental degradation may be reduced by framing new laws on environmental degradation, adopting environment-friendly policies, controlling all the ways and means of noise, air, soil, and water pollution, growing more and more trees, and adopting a proper sanitation policy.

