A Core Method for the Weak Completion Semantics with Skeptical Abduction (Extended Abstract)

Author(s):  
Emmanuelle-Anna Dietz Saldanha ◽  
Steffen Hölldobler ◽  
Carroline Dewi Puspa Kencana Ramli ◽  
Luis Palacios Medinacelli

The Weak Completion Semantics is a novel cognitive theory which has been successfully applied, among others, to the suppression task, the selection task and syllogistic reasoning. It is based on logic programming with skeptical abduction. Each weakly completed program admits a least model under the three-valued Łukasiewicz logic, which can be computed as the least fixed point of an appropriate semantic operator. The operator can be represented by a three-layer feed-forward network using the Core method. Its least fixed point is the unique stable state of a recursive network which is obtained from the three-layer feed-forward core by mapping the activation of the output layer back to the input layer. The recursive network is embedded into a novel network to compute skeptical abduction. This extended abstract outlines a fully connectionist realization of the Weak Completion Semantics.
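
As a rough illustration of the fixed-point computation described above, the following sketch iterates a simple semantic operator from the empty interpretation until it stabilizes. The rule and literal encoding as well as the example program are assumptions made for illustration; this is not the authors' implementation.

```python
# A minimal sketch (assumed encoding) of the semantic operator underlying the
# Weak Completion Semantics under three-valued Lukasiewicz logic.
# A program is a list of rules (head, body); a body is a list of literals
# (atom, positive). Facts are (head, []) (body "true"); assumptions are
# (head, None) (body "false").

def eval_body(body, true_atoms, false_atoms):
    """Return 'true', 'false' or 'unknown' for a conjunction of literals."""
    if body is None:                      # body is the constant 'false'
        return 'false'
    values = []
    for atom, positive in body:
        if atom in true_atoms:
            v = 'true' if positive else 'false'
        elif atom in false_atoms:
            v = 'false' if positive else 'true'
        else:
            v = 'unknown'
        values.append(v)
    if all(v == 'true' for v in values):
        return 'true'
    if any(v == 'false' for v in values):
        return 'false'
    return 'unknown'

def semantic_operator(program, interpretation):
    """One application of the operator: (I_true, I_false) -> (J_true, J_false)."""
    true_atoms, false_atoms = interpretation
    heads = {h for h, _ in program}
    j_true, j_false = set(), set()
    for head in heads:
        bodies = [b for h, b in program if h == head]
        if any(eval_body(b, true_atoms, false_atoms) == 'true' for b in bodies):
            j_true.add(head)
        elif all(eval_body(b, true_atoms, false_atoms) == 'false' for b in bodies):
            j_false.add(head)
    return j_true, j_false

def least_model(program):
    """Iterate from the empty interpretation until the least fixed point is reached."""
    interpretation = (set(), set())
    while True:
        nxt = semantic_operator(program, interpretation)
        if nxt == interpretation:
            return interpretation
        interpretation = nxt

# Illustrative program: "she studies if she has a pass and it is not an exam day",
# the fact "she has a pass", and the assumption "exam day is false".
program = [('studies', [('pass', True), ('exam', False)]),
           ('pass', []),        # fact: pass <- true
           ('exam', None)]      # assumption: exam <- false
print(least_model(program))     # -> studies true, pass true, exam false
```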

2018 ◽  
Vol 63 ◽  
pp. 51-86 ◽  
Author(s):  
Emmanuelle-Anna Dietz Saldanha ◽  
Steffen Hölldobler ◽  
Carroline Dewi Puspa Kencana Ramli ◽  
Luis Palacios Medinacelli

The Weak Completion Semantics is a novel cognitive theory which has been successfully applied to the suppression task, the selection task, syllogistic reasoning, the belief bias effect, spatial reasoning as well as reasoning with conditionals. It is based on logic programming with skeptical abduction. Each program admits a least model under the three-valued Łukasiewicz logic, which can be computed as the least fixed point of an appropriate semantic operator. The semantic operator can be represented by a three-layer feed-forward network using the Core method. Its least fixed point is the unique stable state of a recursive network which is obtained from the three-layer feed-forward core by mapping the activation of the output layer back to the input layer. The recursive network is embedded into a novel network to compute skeptical abduction. This paper presents a fully connectionist realization of the Weak Completion Semantics.
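
To illustrate the Core-method construction and the recursive network described above, the sketch below hardcodes a three-layer threshold-unit network for one tiny program (e <- b AND not a, the fact b, and the assumption that a is false) and feeds the output layer back to the input layer until the state stabilizes. The unit layout, weights and thresholds are an assumed encoding chosen for illustration, not taken from the paper.

```python
# A minimal numeric sketch (assumed encoding) of a three-layer feed-forward core
# for the program:  e <- b AND not a,   b <- true,   a <- false.
# Each atom A gets two units, A+ ("A is true") and A- ("A is false").
import numpy as np

step = lambda x: (x >= 0).astype(float)          # binary threshold units

# input layer order: [e+, e-, b+, b-, a+, a-]
# hidden units: h0 = body of e is true (b+ AND a-), h1 = body of e is false (b- OR a+),
#               h2 = body of b is true (fact), h3 = body of a is false (assumption)
W_in = np.array([[0, 0, 1, 0, 0, 1],             # h0: needs b+ and a-
                 [0, 0, 0, 1, 1, 0],             # h1: needs b- or a+
                 [0, 0, 0, 0, 0, 0],             # h2: always on
                 [0, 0, 0, 0, 0, 0]])            # h3: always on
b_hid = np.array([-2, -1, 0, 0])                 # thresholds (AND=2, OR=1, const=0)

# output layer order: [e+, e-, b+, b-, a+, a-]
W_out = np.array([[1, 0, 0, 0],                  # e+ <- h0
                  [0, 1, 0, 0],                  # e- <- h1 (only rule for e)
                  [0, 0, 1, 0],                  # b+ <- h2
                  [0, 0, 0, 0],                  # b- : never
                  [0, 0, 0, 0],                  # a+ : never
                  [0, 0, 0, 1]])                 # a- <- h3
b_out = np.array([-1, -1, -1, -1, -1, -1])

state = np.zeros(6)                              # start from the empty interpretation
while True:
    hidden = step(W_in @ state + b_hid)
    output = step(W_out @ hidden + b_out)
    if np.array_equal(output, state):            # unique stable state reached
        break
    state = output                               # feed the output back to the input layer

print(state)   # [1. 0. 1. 0. 0. 1.] -> e true, b true, a false (the least fixed point)
```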


10.29007/pr47 ◽  
2018 ◽  
Author(s):  
Emmanuelle-Anna Dietz Saldanha ◽  
Steffen Hölldobler ◽  
Sibylle Schwarz ◽  
Lim Yohanes Stefanus

The weak completion semantics is an integrated and computational cognitive theory which is based on normal logic programs, three-valued Łukasiewicz logic, weak completion, and skeptical abduction. It has been successfully applied, among others, to the suppression task, the selection task, and to human syllogistic reasoning. In order to solve ethical decision problems such as trolley problems, we need to extend the weak completion semantics to deal with actions and causality. To this end we consider normal logic programs and a set E of equations as in the fluent calculus. We formally show that normal logic programs with equality admit a least E-model under the weak completion semantics and that this E-model can be computed as the least fixed point of an associated semantic operator. We show that the operator is not continuous in general, but is continuous if the logic program is a propositional, a finite-ground, or a finite datalog program and the Herbrand E-universe is finite. Finally, we show that the weak completion semantics with equality can solve a variety of ethical decision problems such as the bystander case, the footbridge case, and the loop case by computing the least E-model and reasoning with respect to this E-model. The reasoning process involves counterfactuals, which are necessary to model the different ethical dilemmas.
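
The following sketch illustrates, in a very simplified way, what reasoning modulo a set E of equations can look like when fluent states are treated as multisets (so that the order and grouping of fluents is irrelevant) and actions update states in the fluent-calculus style. The bystander-case encoding, fluent names and effect lists are hypothetical and chosen only for illustration; they are not the paper's formalization.

```python
# A minimal sketch (illustrative only) of fluent-calculus-style states modulo an
# equational theory E (associativity, commutativity, unit) by representing a
# state as a multiset of fluents.
from collections import Counter

def e_equal(state1, state2):
    """Two fluent terms are E-equal iff they contain the same fluents as multisets."""
    return Counter(state1) == Counter(state2)

def update(state, effects_minus, effects_plus):
    """Fluent-calculus-style state update: remove negative effects, add positive ones."""
    result = Counter(state)
    result.subtract(Counter(effects_minus))
    result += Counter(effects_plus)          # Counter '+' drops non-positive counts
    return result

# Hypothetical bystander case: the trolley is on the main track where five people
# stand; diverting it sends it to the side track where one person stands.
initial = ['on(main)', 'alive(five)', 'alive(one)']

do_nothing = update(initial, ['alive(five)'], ['dead(five)'])
divert     = update(update(initial, ['on(main)'], ['on(side)']),
                    ['alive(one)'], ['dead(one)'])

print(sorted(do_nothing.elements()))   # ['alive(one)', 'dead(five)', 'on(main)']
print(sorted(divert.elements()))       # ['alive(five)', 'dead(one)', 'on(side)']
print(e_equal(['alive(one)', 'on(main)'], ['on(main)', 'alive(one)']))   # True
```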


2015 ◽  
Vol 793 ◽  
pp. 483-488
Author(s):  
N. Aminudin ◽  
Marayati Marsadek ◽  
N.M. Ramli ◽  
T.K.A. Rahman ◽  
N.M.M. Razali ◽  
...  

The computation of a security risk index for identifying the system's condition is one of the major concerns in power system analysis. The traditional method for this assessment is highly time-consuming and infeasible for direct on-line implementation. Thus, this paper presents the application of a Multi-Layer Feed-Forward Network (MLFFN) to predict the voltage collapse risk index due to line outage occurrences. The proposed ANN model considers the load at the load buses as well as the weather conditions along the transmission lines as inputs. To assess the effectiveness of the proposed method, the results are compared with the Generalized Regression Neural Network (GRNN) method. The results reveal that MLFFN shows a significant improvement over GRNN, achieving the lowest prediction error.
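
The sketch below shows the kind of multi-layer feed-forward regressor described above, mapping bus loads and per-line weather conditions to a risk index. The data, feature layout and target relation are synthetic placeholders, not the paper's system model.

```python
# A minimal sketch (synthetic data, assumed feature layout) of an MLFFN predicting
# a voltage collapse risk index from bus loads and weather conditions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_samples, n_buses, n_lines = 500, 6, 4

bus_loads = rng.uniform(0.5, 1.5, size=(n_samples, n_buses))   # per-unit bus loads
weather   = rng.uniform(0.0, 1.0, size=(n_samples, n_lines))   # per-line weather severity
X = np.hstack([bus_loads, weather])
# placeholder target: risk grows with total load and with bad weather (illustrative only)
y = 0.6 * bus_loads.sum(axis=1) / n_buses + 0.4 * weather.max(axis=1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

mlffn = MLPRegressor(hidden_layer_sizes=(20, 10), activation='relu',
                     max_iter=2000, random_state=0)
mlffn.fit(X_train, y_train)

print("MLFFN MAE:", mean_absolute_error(y_test, mlffn.predict(X_test)))
```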


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Yasir Hassan Ali ◽  
Roslan Abd Rahman ◽  
Raja Ishak Raja Hamzah

The thickness of an oil film lubricant can contribute to less gear tooth wear and surface failure. The purpose of this research is to use artificial neural network (ANN) computational modelling to correlate spur gear data from acoustic emissions, lubricant temperature, and specific film thickness (λ). The approach uses an algorithm to monitor the oil film thickness and to detect which lubrication regime the gearbox is running in: hydrodynamic, elastohydrodynamic, or boundary. This monitoring can aid the identification of fault development. Feed-forward and recurrent Elman neural network algorithms were used to develop the ANN models, which were subjected to a training, testing, and validation process. The Levenberg-Marquardt back-propagation algorithm was applied to reduce errors. Log-sigmoid and Purelin were identified as suitable transfer functions for the hidden and output nodes. The methods used in this paper show accurate predictions from the ANN, and the performance of the feed-forward network is superior to that of the Elman neural network.
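
The sketch below shows a feed-forward network of the kind described above, with a log-sigmoid hidden layer and a linear (Purelin-like) output, on synthetic data. scikit-learn has no Levenberg-Marquardt trainer, so L-BFGS stands in here; the input relation and the regime thresholds are illustrative assumptions, not the paper's measurements.

```python
# A minimal sketch (synthetic data) of a feed-forward network mapping acoustic
# emission RMS and lubricant temperature to the specific film thickness lambda.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 300
ae_rms = rng.uniform(0.1, 1.0, n)            # acoustic emission RMS (arbitrary units)
oil_temp = rng.uniform(40.0, 90.0, n)        # lubricant temperature in degC
# placeholder relation: lambda falls as temperature and AE activity rise
lam = 3.0 - 0.02 * (oil_temp - 40.0) - 1.0 * ae_rms + rng.normal(0, 0.05, n)

X = np.column_stack([ae_rms, oil_temp])
model = MLPRegressor(hidden_layer_sizes=(10,), activation='logistic',   # log-sigmoid hidden
                     solver='lbfgs', max_iter=5000, random_state=1)     # linear output
model.fit(X, lam)

pred = model.predict([[0.4, 65.0]])[0]
# illustrative regime thresholds on lambda
regime = ('boundary' if pred < 1 else
          'elastohydrodynamic' if pred < 3 else 'hydrodynamic')
print(f"predicted lambda = {pred:.2f} -> {regime} regime")
```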


Agriculture ◽  
2020 ◽  
Vol 10 (7) ◽  
pp. 266 ◽  
Author(s):  
Ehsan Moradi ◽  
Jesús Rodrigo-Comino ◽  
Enric Terol ◽  
Gaspar Mora-Navarro ◽  
Alexandre Marco da Silva ◽  
...  

Agricultural activities induce micro-topographical changes, soil compaction and structural changes due to soil cultivation, which directly affect ecosystem services. However, little is known about how these soil structural changes occur during and after the planting of orchards, and which key factors and processes play a major role in soil compaction due to cultivation works. This study evaluates the improved stock unearthing method (ISUM) as a low-cost and precise alternative to the tedious and costly traditional core sampling method for characterizing the changes in soil compaction in a representative persimmon orchard in Eastern Spain. To achieve this goal, undisturbed soil samples were first collected in the field with metallic core rings (in January 2016 and 2019) at different soil depths between 45 paired trees, and topographic variations were determined following the protocol established by ISUM (January 2019). Our results show that soil bulk density (Bd) increases with depth and in the inter-row area, due to the effect of tractor passes and human trampling. The bulk density values of the top surface layers (0–12 cm) showed the lowest soil accumulation, but the highest temporal and spatial variability. Soil consolidation within three years after planting was 12 mm when calculated using the core samples, whereas when calculated with ISUM it was 14 mm. The quality of the results with ISUM was better than with the traditional core method, due to the higher number of sampling points. ISUM is a promising method to measure soil compaction, but it is restricted to land where soil erosion does not take place, or where soil erosion is measured to establish a balance of soil redistribution. Another positive contribution of ISUM is that it requires 24 h of technician work to acquire the data, whereas the core method requires 272 h. Our research is the first approach to use ISUM to quantify soil compaction and will contribute to applying innovative and low-cost monitoring methods to agricultural land and conserving ecosystem services.
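
For readers unfamiliar with the two measurements being compared, the sketch below shows the basic arithmetic with hypothetical numbers: bulk density as oven-dry mass over core-ring volume, and consolidation as the mean lowering of the soil surface between two ISUM surveys. The ring dimensions and elevation readings are invented for illustration.

```python
# A minimal sketch (hypothetical numbers) of the quantities behind the two methods.
import math

def bulk_density(dry_mass_g, ring_diameter_cm, ring_height_cm):
    """Soil bulk density in g/cm^3 = oven-dry mass / core ring volume."""
    volume_cm3 = math.pi * (ring_diameter_cm / 2) ** 2 * ring_height_cm
    return dry_mass_g / volume_cm3

def consolidation_mm(levels_before_cm, levels_after_cm):
    """Mean lowering of the soil surface between two surveys, in millimetres."""
    mean_before = sum(levels_before_cm) / len(levels_before_cm)
    mean_after = sum(levels_after_cm) / len(levels_after_cm)
    return (mean_before - mean_after) * 10.0

print(round(bulk_density(145.0, 5.0, 5.0), 2))                               # ~1.48 g/cm^3
print(round(consolidation_mm([10.2, 10.4, 10.3], [10.1, 10.2, 10.2]), 1))    # ~1.3 mm
```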


Author(s):  
Tanujit Chakraborty

Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. On the other hand, deep learning methods have boosted the capacity of machine learning algorithms and are now being used for non-trivial applications in various applied domains. But training a fully-connected deep feed-forward network by gradient-descent backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. In this paper, we propose near-optimal neural regression trees, intended to be much faster than deep feed-forward networks while not requiring the number of hidden units in the hidden layers of the neural network to be specified in advance. The key idea is to construct a decision tree and then simulate the decision tree with a neural network. This work aims to build a mathematical formulation of neural trees and gain the complementary benefits of both sparse optimal decision trees and neural trees. We propose near-optimal sparse neural trees (NSNT), which are shown to be asymptotically consistent and robust in nature. Additionally, the proposed NSNT model obtains a fast rate of convergence which is near-optimal up to a logarithmic factor. We comprehensively benchmark the proposed method on a sample of 80 datasets (40 classification datasets and 40 regression datasets) from the UCI machine learning repository. We establish that the proposed method is likely to outperform the current state-of-the-art methods (random forest, XGBoost, optimal classification tree, and near-optimal nonlinear trees) for the majority of the datasets.
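
The key idea, fitting a decision tree and then simulating it with a network, can be illustrated as follows: a first hidden layer encodes the splits, a second layer encodes the root-to-leaf paths, and the output layer reads off each leaf's class. The sketch below is an illustrative construction on the Iris data, not the paper's NSNT model.

```python
# A minimal sketch (illustrative construction) of simulating a fitted decision
# tree with a two-hidden-layer network of indicator units.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

internal = [i for i in range(t.node_count) if t.children_left[i] != -1]
leaves = [i for i in range(t.node_count) if t.children_left[i] == -1]

def split_layer(X):
    """First layer: one indicator per internal node, 1 if x[feature] <= threshold."""
    return np.stack([(X[:, t.feature[i]] <= t.threshold[i]).astype(float)
                     for i in internal], axis=1)

def path(leaf):
    """Return the (internal node, went_left) decisions leading to the leaf."""
    decisions, node = [], leaf
    while node != 0:
        parent = next(p for p in internal
                      if t.children_left[p] == node or t.children_right[p] == node)
        decisions.append((parent, t.children_left[parent] == node))
        node = parent
    return decisions

def leaf_layer(S):
    """Second layer: one indicator per leaf, 1 iff every split on its path is satisfied."""
    out = np.zeros((S.shape[0], len(leaves)))
    for j, leaf in enumerate(leaves):
        ok = np.ones(S.shape[0], dtype=bool)
        for node, went_left in path(leaf):
            s = S[:, internal.index(node)].astype(bool)
            ok &= s if went_left else ~s
        out[:, j] = ok
    return out

def network_predict(X):
    """Output layer: each leaf votes for its majority class."""
    L = leaf_layer(split_layer(X))
    leaf_classes = np.array([np.argmax(t.value[leaf]) for leaf in leaves])
    return leaf_classes[np.argmax(L, axis=1)]

# The hand-built network reproduces the tree's predictions exactly.
print(np.array_equal(network_predict(X), tree.predict(X)))   # True
```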


IEEE Software ◽  
1992 ◽  
Vol 9 (5) ◽  
pp. 22-33 ◽  
Author(s):  
S. Faulk ◽  
J. Brackett ◽  
P. Ward ◽  
J. Kirby
