A Core Method for the Weak Completion Semantics with Skeptical Abduction

2018 ◽  
Vol 63 ◽  
pp. 51-86 ◽  
Author(s):  
Emmanuelle-Anna Dietz Saldanha ◽  
Steffen Hölldobler ◽  
Carroline Dewi Puspa Kencana Ramli ◽  
Luis Palacios Medinacelli

The Weak Completion Semantics is a novel cognitive theory which has been successfully applied to the suppression task, the selection task, syllogistic reasoning, the belief bias effect, spatial reasoning, and reasoning with conditionals. It is based on logic programming with skeptical abduction. Each weakly completed program admits a least model under the three-valued Łukasiewicz logic, which can be computed as the least fixed point of an appropriate semantic operator. The semantic operator can be represented by a three-layer feed-forward network using the Core Method. Its least fixed point is the unique stable state of a recursive network which is obtained from the three-layer feed-forward core by mapping the activation of the output layer back to the input layer. The recursive network is embedded into a novel network to compute skeptical abduction. This paper presents a fully connectionist realization of the Weak Completion Semantics.
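
The following is a minimal, illustrative sketch (not taken from the paper) of the core-method idea for a propositional program: each atom is represented by two binary units ("true" and "false"), each clause by two hidden units (body satisfied / body violated), a feed-forward pass computes one application of the semantic operator, and feeding the output back to the input iterates it to the unique stable state, i.e. the least fixed point. The threshold-unit encoding and the example program are simplifying assumptions, not the paper's exact translation.

```python
import numpy as np

def build_core(program, atoms):
    """Translate a propositional program into a three-layer threshold network.

    Layout: input/output units 0..n-1 mean "atom is true",
    units n..2n-1 mean "atom is false"; two hidden units per clause.
    """
    n, m = len(atoms), len(program)
    idx = {a: i for i, a in enumerate(atoms)}
    W_in = np.zeros((2 * m, 2 * n))      # input -> hidden weights
    th_h = np.zeros(2 * m)               # hidden thresholds
    W_out = np.zeros((2 * n, 2 * m))     # hidden -> output weights
    th_o = np.ones(2 * n)                # undefined atoms never fire (stay unknown)
    heads = {}
    for c, (head, body) in enumerate(program):
        heads.setdefault(head, []).append(c)
        for lit in body:
            neg = lit.startswith("not ")
            a = idx[lit[4:] if neg else lit]
            W_in[c, (a + n) if neg else a] = 1.0        # literal is true
            W_in[m + c, a if neg else (a + n)] = 1.0    # literal is false
        th_h[c] = len(body)              # AND: the whole body must be true
        th_h[m + c] = 1.0                # OR: one false literal falsifies the body
    for head, cs in heads.items():
        a = idx[head]
        for c in cs:
            W_out[a, c] = 1.0            # head true if some body is true
            W_out[a + n, m + c] = 1.0    # head false if every body is false
        th_o[a] = 1.0
        th_o[a + n] = len(cs)
    return W_in, th_h, W_out, th_o

def stable_state(program, atoms):
    """Iterate the recurrent network from the all-unknown state."""
    W_in, th_h, W_out, th_o = build_core(program, atoms)
    n = len(atoms)
    t_i, f_i = atoms.index("T"), atoms.index("F")
    x = np.zeros(2 * n)
    while True:
        x[t_i], x[f_i + n] = 1.0, 1.0    # clamp the truth constants
        hidden = (W_in @ x >= th_h - 1e-9).astype(float)
        y = (W_out @ hidden >= th_o - 1e-9).astype(float)
        y[t_i], y[f_i + n] = 1.0, 1.0
        if np.array_equal(y, x):
            return {a: ("true" if y[i] else "false" if y[i + n] else "unknown")
                    for i, a in enumerate(atoms)}
        x = y                            # recurrent step: output back to input

# Hypothetical program:  p <- T,   q <- p and not ab,   ab <- F
atoms = ["p", "q", "ab", "T", "F"]
program = [("p", ["T"]), ("q", ["p", "not ab"]), ("ab", ["F"])]
print(stable_state(program, atoms))      # p and q come out true, ab false
```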

Author(s):  
Emmanuelle-Anna Dietz Saldanha ◽  
Steffen Hölldobler ◽  
Carroline Dewi Puspa Kencana Ramli ◽  
Luis Palacios Medinacelli

The Weak Completion Semantics is a novel cognitive theory which has been successfully applied to several tasks, including the suppression task, the selection task, and syllogistic reasoning. It is based on logic programming with skeptical abduction. Each weakly completed program admits a least model under the three-valued Łukasiewicz logic which can be computed as the least fixed point of an appropriate semantic operator. The operator can be represented by a three-layer feed-forward network using the Core Method. Its least fixed point is the unique stable state of a recursive network which is obtained from the three-layer feed-forward core by mapping the activation of the output layer back to the input layer. The recursive network is embedded into a novel network to compute skeptical abduction. This extended abstract outlines a fully connectionist realization of the Weak Completion Semantics.


10.29007/pr47 ◽  
2018 ◽  
Author(s):  
Emmanuelle-Anna Dietz Saldanha ◽  
Steffen Hölldobler ◽  
Sibylle Schwarz ◽  
Lim Yohanes Stefanus

The weak completion semantics is an integrated and computational cognitive theory which is based on normal logic programs, three-valued Łukasiewicz logic, weak completion, and skeptical abduction. It has been successfully applied to several tasks, including the suppression task, the selection task, and human syllogistic reasoning. In order to solve ethical decision problems such as trolley problems, we need to extend the weak completion semantics to deal with actions and causality. To this end we consider normal logic programs and a set E of equations as in the fluent calculus. We formally show that normal logic programs with equality admit a least E-model under the weak completion semantics and that this E-model can be computed as the least fixed point of an associated semantic operator. We show that the operator is not continuous in general, but is continuous if the logic program is a propositional, a finite-ground, or a finite datalog program and the Herbrand E-universe is finite. Finally, we show that the weak completion semantics with equality can solve a variety of ethical decision problems, such as the bystander case, the footbridge case, and the loop case, by computing the least E-model and reasoning with respect to this E-model. The reasoning process involves counterfactuals, which are necessary to model the different ethical dilemmas.
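
As a rough illustration of the fixed-point construction (ignoring the equational extension and E-models), the sketch below iterates a three-valued semantic operator on a small propositional program until it stops changing, i.e. until its least fixed point is reached. The clause representation, the atom names, and the example program are assumptions made for this sketch, not the paper's encoding of the trolley problems.

```python
# Minimal propositional sketch: iterate a three-valued semantic operator until
# it stops changing (its least fixed point).  Clauses are (head, body) pairs;
# "not a" is default negation; "T" and "F" are the truth constants that may
# appear in bodies under the weak completion.

TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def lit_value(lit, interp):
    if lit == "T":
        return TRUE
    if lit == "F":
        return FALSE
    neg = lit.startswith("not ")
    val = interp.get(lit[4:] if neg else lit, UNKNOWN)
    return {TRUE: FALSE, FALSE: TRUE, UNKNOWN: UNKNOWN}[val] if neg else val

def body_value(body, interp):
    vals = [lit_value(l, interp) for l in body]
    if FALSE in vals:
        return FALSE
    return UNKNOWN if UNKNOWN in vals else TRUE

def phi(program, interp):
    """One application of the semantic operator to an interpretation."""
    new = {}
    for atom in {h for h, _ in program}:
        bodies = [body_value(b, interp) for h, b in program if h == atom]
        if TRUE in bodies:
            new[atom] = TRUE               # some defining clause has a true body
        elif all(v == FALSE for v in bodies):
            new[atom] = FALSE              # every defining clause has a false body
        else:
            new[atom] = UNKNOWN
    return new

def least_model(program):
    """Iterate phi from the empty (all-unknown) interpretation."""
    interp = {}
    while (new := phi(program, interp)) != interp:
        interp = new
    return interp

# Hypothetical example:  diverted <- pull and not ab,   ab <- F,   pull <- T
program = [("diverted", ["pull", "not ab"]), ("ab", ["F"]), ("pull", ["T"])]
print(least_model(program))   # -> pull true, ab false, diverted true
```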


2015 ◽  
Vol 793 ◽  
pp. 483-488
Author(s):  
N. Aminudin ◽  
Marayati Marsadek ◽  
N.M. Ramli ◽  
T.K.A. Rahman ◽  
N.M.M. Razali ◽  
...  

The computation of a security risk index to identify the system's condition is one of the major concerns in power system analysis. The traditional method for this assessment is highly time-consuming and infeasible for direct on-line implementation. Thus, this paper presents the application of a Multi-Layer Feed-Forward Network (MLFFN) to predict the voltage collapse risk index due to line outage occurrence. The proposed ANN model considers the loads at the load buses as well as the weather conditions at the transmission lines as inputs. To assess the effectiveness of the proposed method, the results are compared with the Generalized Regression Neural Network (GRNN) method. The results reveal that MLFFN shows a significant improvement over GRNN in terms of producing the least error.
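
A minimal sketch of such a comparison, under stated assumptions: the bus-load and weather features and the risk index below are synthetic stand-ins (the paper's data are not reproduced here), the MLFFN is a scikit-learn MLPRegressor, and the GRNN is implemented as the kernel-weighted average it reduces to.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))                  # stand-in for bus loads + weather features
y = X @ rng.uniform(size=6) + 0.1 * rng.normal(size=500)   # synthetic risk index

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Multi-layer feed-forward network (sizes chosen arbitrarily for the sketch)
mlffn = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
mlffn.fit(X_tr, y_tr)

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """GRNN output: Gaussian-kernel-weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

mae_mlffn = np.abs(mlffn.predict(X_te) - y_te).mean()
mae_grnn = np.abs(grnn_predict(X_tr, y_tr, X_te) - y_te).mean()
print(f"MLFFN MAE: {mae_mlffn:.4f}   GRNN MAE: {mae_grnn:.4f}")
```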


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Yasir Hassan Ali ◽  
Roslan Abd Rahman ◽  
Raja Ishak Raja Hamzah

The thickness of the lubricant oil film can contribute to reduced gear tooth wear and surface failure. The purpose of this research is to use artificial neural network (ANN) computational modelling to correlate spur gear data from acoustic emissions, lubricant temperature, and specific film thickness (λ). The approach uses an algorithm to monitor the oil film thickness and to detect which lubrication regime the gearbox is running in: hydrodynamic, elastohydrodynamic, or boundary. This monitoring can aid the identification of fault development. Feed-forward and recurrent Elman neural network algorithms were used to develop ANN models, which were subjected to a training, testing, and validation process. The Levenberg-Marquardt back-propagation algorithm was applied to reduce errors. Log-sigmoid and purelin were identified as suitable transfer functions for the hidden and output nodes, respectively. The methods used in this paper show that the ANN produces accurate predictions and that the feed-forward network's performance is superior to that of the Elman neural network.
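
A hedged sketch of such a feed-forward model, using synthetic measurements rather than the paper's acoustic-emission data: one hidden layer of log-sigmoid units with a linear (purelin-style) output, trained here with L-BFGS because the Levenberg-Marquardt optimizer used in the paper is not available in scikit-learn. The regime thresholds are a common rule of thumb and vary by source.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
ae_rms = rng.uniform(0.1, 5.0, 400)             # hypothetical acoustic emission RMS
temp = rng.uniform(30.0, 90.0, 400)             # hypothetical oil temperature (deg C)
# Synthetic specific film thickness: decreases with temperature and AE level.
lam = 4.0 * np.exp(-0.02 * temp) / (0.5 + ae_rms) + 0.05 * rng.normal(size=400)

X = np.column_stack([ae_rms, temp])
model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                     solver="lbfgs", max_iter=5000, random_state=1)
model.fit(X, lam)                                # log-sigmoid hidden, identity output

def regime(l):
    """Rough rule of thumb; the exact thresholds vary by source."""
    if l < 1.0:
        return "boundary"
    if l < 3.0:
        return "elastohydrodynamic (mixed)"
    return "hydrodynamic (full film)"

pred = model.predict([[1.2, 60.0]])[0]
print(f"predicted lambda = {pred:.2f} -> {regime(pred)}")
```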


Author(s):  
Tanujit Chakraborty

Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. On the other hand, deep learning methods have boosted the capacity of machine learning algorithms and are now being used for non-trivial applications in various applied domains. But training a fully-connected deep feed-forward network by gradient-descent backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. In this paper, we propose near-optimal neural regression trees, intending to make them much faster than deep feed-forward networks and to remove the need to specify the number of hidden units in the hidden layers of the neural network in advance. The key idea is to construct a decision tree and then simulate the decision tree with a neural network. This work aims to build a mathematical formulation of neural trees and gain the complementary benefits of both sparse optimal decision trees and neural trees. We propose near-optimal sparse neural trees (NSNT), which are shown to be asymptotically consistent and robust. Additionally, the proposed NSNT model obtains a fast rate of convergence which is near-optimal up to a logarithmic factor. We comprehensively benchmark the proposed method on a sample of 80 datasets (40 classification datasets and 40 regression datasets) from the UCI machine learning repository. We establish that the proposed method is likely to outperform the current state-of-the-art methods (random forest, XGBoost, optimal classification tree, and near-optimal nonlinear trees) for the majority of the datasets.
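
The key idea of fitting a tree and then simulating it with a network can be sketched as follows. This is not the NSNT algorithm itself; the data, tree depth, and sharpness constant are illustrative assumptions. The first layer holds one sharp-sigmoid unit per split, the second layer one soft-AND unit per leaf, and the output sums the leaf values weighted by the leaf gates.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 3))
y = np.where(X[:, 0] > 0, 2.0, -1.0) + 0.5 * X[:, 1]

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_
internal = [i for i in range(t.node_count) if t.children_left[i] != -1]
leaves = [i for i in range(t.node_count) if t.children_left[i] == -1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def leaf_paths():
    """Map each leaf to the list of (split node, went_left) decisions on its path."""
    out = {}
    def walk(node, path):
        if t.children_left[node] == -1:
            out[node] = path
            return
        walk(t.children_left[node], path + [(node, True)])
        walk(t.children_right[node], path + [(node, False)])
    walk(0, [])
    return out

paths = leaf_paths()
BETA = 50.0                                    # sharpness of the soft splits

def network_predict(x):
    # Layer 1: soft indicator that each split sends x to the LEFT child
    # (scikit-learn goes left when x[feature] <= threshold).
    left = {i: sigmoid(BETA * (t.threshold[i] - x[t.feature[i]])) for i in internal}
    # Layer 2: soft AND over the decisions on the path to each leaf.
    y_hat = 0.0
    for leaf in leaves:
        acts = [left[i] if went_left else 1.0 - left[i]
                for i, went_left in paths[leaf]]
        gate = sigmoid(BETA * (sum(acts) - (len(acts) - 0.5)))
        y_hat += gate * t.value[leaf][0, 0]    # output layer: weighted sum of leaf values
    return y_hat

x0 = np.array([0.3, -0.2, 0.1])
print("tree:", tree.predict([x0])[0], "  network:", network_predict(x0))
```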


2014 ◽  
Vol 14 (4-5) ◽  
pp. 633-648 ◽  
Author(s):  
LUÍS MONIZ PEREIRA ◽  
EMMANUELLE-ANNA DIETZ ◽  
STEFFEN HÖLLDOBLER

The belief bias effect is a phenomenon which occurs when we think that we judge an argument based on our reasoning, but are actually influenced by our beliefs and prior knowledge. Evans, Barston and Pollard carried out a psychological syllogistic reasoning task to demonstrate this effect. Participants were asked whether they would accept or reject a given syllogism. We discuss one specific case which is commonly assumed to be believable but which is actually not logically valid. By introducing abnormalities, abduction and background knowledge, we adequately model this case under the weak completion semantics. Our formalization reveals new questions about possible extensions in abductive reasoning. For instance, observations and their explanations might include some relevant prior abductive contextual information concerning some side-effect or leading to a contestable or refutable side-effect. A weaker notion indicates the support of some relevant consequences by a prior abductive context. Yet another definition describes jointly supported relevant consequences, which captures the idea of two observations containing mutually supportive side-effects. Though motivated by and exemplified with the running psychology application, the various new general abductive context definitions are introduced here and given a declarative semantics for the first time, and have a much wider scope of application. Inspection points, a concept introduced by Pereira and Pinto, allow us to express these definitions syntactically and intertwine them into an operational semantics.

