Stable and Supported Semantics in Continuous Vector Spaces

Author(s):  
Yaniv Aspis ◽  
Krysia Broda ◽  
Alessandra Russo ◽  
Jorge Lobo

We introduce a novel approach for the computation of stable and supported models of normal logic programs in continuous vector spaces by a gradient-based search method. Specifically, the application of the immediate consequence operator of a program reduct can be computed in a vector space. To do this, Herbrand interpretations of a propositional program are embedded as 0-1 vectors in $\mathbb{R}^N$ and program reducts are represented as matrices in $\mathbb{R}^{N \times N}$. Using these representations we prove that the underlying semantics of a normal logic program is captured through matrix multiplication and a differentiable operation. As supported and stable models of a normal logic program can now be seen as fixed points in a continuous space, non-monotonic deduction can be performed using an optimisation process such as Newton's method. We report the results of several experiments using synthetically generated programs that demonstrate the feasibility of the approach and highlight how different parameter values can affect the behaviour of the system.
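To make the encoding concrete, here is a minimal numpy sketch, under illustrative assumptions, of the core representation described in the abstract: interpretations as 0-1 vectors, a (definite) program reduct as a matrix, and the immediate consequence operator as matrix multiplication followed by a threshold. The paper replaces the hard threshold with a differentiable operation and searches for fixed points with a gradient-based method such as Newton's; the naive iteration and the particular program below are not from the paper.

```python
import numpy as np

# Atoms of a toy propositional program; an interpretation is a 0-1 vector.
atoms = ["p", "q", "r"]

# Illustrative reduct: p :- q.   q :- r.   r.
# Row i encodes the body of the rule for atom i; the extra last column
# is a bias that lets facts fire unconditionally.
M = np.array([
    [0.0, 1.0, 0.0, 0.0],   # p :- q
    [0.0, 0.0, 1.0, 0.0],   # q :- r
    [0.0, 0.0, 0.0, 1.0],   # r.   (fact)
])

def t_p(v):
    """One application of the immediate consequence operator T_P.
    With single-atom bodies a threshold of 1 suffices; longer bodies
    would need a per-rule threshold equal to the body size."""
    x = np.append(v, 1.0)               # append the bias entry
    return (M @ x >= 1.0).astype(float)

# Iterate to a fixed point; fixed points of T_P are supported models.
v = np.zeros(len(atoms))
while not np.array_equal(t_p(v), v):
    v = t_p(v)

print(dict(zip(atoms, v)))  # {'p': 1.0, 'q': 1.0, 'r': 1.0}
```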

2019 ◽  
Vol 19 (5-6) ◽  
pp. 941-956
Author(s):  
JOÃO ALCÂNTARA ◽  
SAMY SÁ ◽  
JUAN ACOSTA-GUADARRAMA

Abstract Dialectical Frameworks (ADFs) are argumentation frameworks where each node is associated with an acceptance condition. This allows us to model different types of dependencies, such as supports and attacks. Previous studies provided a translation from Normal Logic Programs (NLPs) to ADFs and proved that the stable model semantics of a normal logic program is equivalent to that of the corresponding ADF. However, these studies did not identify a semantics for ADFs equivalent to a three-valued semantics (such as partial stable models and well-founded models) for NLPs. In this work, we focus on a fragment of ADFs, called Attacking Dialectical Frameworks (ADF+s), and provide a translation from NLPs to ADF+s robust enough to guarantee the equivalence between the partial stable, well-founded, regular, and stable model semantics for NLPs and, respectively, the complete, grounded, preferred, and stable model semantics for ADF+s. In addition, we define a new semantics for ADF+s, called L-stable, and show it is equivalent to the L-stable semantics for NLPs.
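As a concrete illustration of acceptance conditions, the following sketch checks a two-valued model of a toy ADF; the Python encoding (conditions as predicates over the set of accepted parents) is an assumption for illustration, not the translation defined in the paper.

```python
# Toy ADF: each node has parents and an acceptance condition, i.e. a
# predicate over the set of its parents that are currently accepted.
# Here b supports a, and c attacks b.
adf = {
    "a": (["b"], lambda acc: "b" in acc),      # a accepted iff b is
    "b": (["c"], lambda acc: "c" not in acc),  # b accepted iff c is not
    "c": ([],    lambda acc: True),            # c accepted unconditionally
}

def is_two_valued_model(accepted):
    """A set of nodes is a two-valued model iff every node's status
    agrees with its acceptance condition evaluated on that set."""
    return all(
        (node in accepted) == cond(accepted & set(parents))
        for node, (parents, cond) in adf.items()
    )

print(is_two_valued_model({"c"}))       # True: c in, so b out, so a out
print(is_two_valued_model({"a", "c"}))  # False: a needs b to be accepted
```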


Author(s):  
Toshiko Wakaki ◽  
Ken Satoh ◽  
Katsumi Nitta ◽  
...  

Handling dynamic preferences correctly is an essential requirement in legal reasoning. In this paper, we present a method that enables us to handle a class of dynamic preferences in the framework of circumscription and to consistently compute its metalevel and object-level reasoning by expressing them in an extended logic program. This is achieved on the basis of policy axioms and priority axioms, which permit us to describe a circumscription policy by axioms and mediate between metalevel and object-level reasoning. Not only the preference information among rules and metarules but also the relations between dynamic preferences and priority axioms in circumscription are represented by a normal logic program. Thus, priorities can be derived from the preferences dynamically, which allows us to compute the object-level circumscriptive theory using logic programming based on Wakaki and Satoh's method.


Author(s):  
Martin Caminada ◽  
Claudia Schulz

In this work, we explain how Assumption-Based Argumentation (ABA) is subsumed by Logic Programming (LP). The translation from ABA to LP (with a few restrictions on the ABA framework) results in a normal logic program whose semantics coincides with the semantics of the underlying ABA framework. Although the precise technicalities are beyond the scope of this extended abstract (they can be found in the associated full paper), we provide a number of examples to illustrate the general idea.


2007 ◽  
Vol 29 ◽  
pp. 353-389 ◽  
Author(s):  
T. C. Son ◽  
E. Pontelli ◽  
P. H. Tu

In this paper, we present two alternative approaches to defining answer sets for logic programs with arbitrary types of abstract constraint atoms (c-atoms). These approaches generalize the fixpoint-based and the level-mapping-based answer set semantics of normal logic programs to the case of logic programs with arbitrary types of c-atoms. The results are four different answer set definitions, which are equivalent when applied to normal logic programs. The standard fixpoint-based semantics of logic programs is generalized in two directions, called answer set by reduct and answer set by complement. These definitions, which differ from each other in the treatment of negation-as-failure (naf) atoms, make use of an immediate consequence operator to perform answer set checking, whose definition relies on the notion of conditional satisfaction of c-atoms w.r.t. a pair of interpretations. The other two definitions, called strongly and weakly well-supported models, are generalizations of the notion of well-supported models of normal logic programs to the case of programs with c-atoms. As in the case of the fixpoint-based semantics, the difference between these two definitions is rooted in the treatment of naf atoms. We prove that answer sets by reduct (resp. by complement) are equivalent to weakly (resp. strongly) well-supported models of a program, thus generalizing the theorem on the correspondence between stable models and well-supported models of a normal logic program to the class of programs with c-atoms. We show that the newly defined semantics coincide with previously introduced semantics for logic programs with monotone c-atoms, and that they extend the original answer set semantics of normal logic programs. We also study some properties of answer sets of programs with c-atoms, and relate our definitions to several semantics for logic programs with aggregates presented in the literature.
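The notion of conditional satisfaction can be sketched concretely. Below is a minimal Python version of the definition as it is usually stated in this line of work: a pair of interpretations (R, S) conditionally satisfies a c-atom (D, C) if every set between R∩D and S∩D belongs to C; the exact formulation in the paper may differ in details.

```python
from itertools import combinations

def subsets_between(lower, upper):
    """Yield all sets I with lower ⊆ I ⊆ upper."""
    extra = sorted(upper - lower)
    for r in range(len(extra) + 1):
        for combo in combinations(extra, r):
            yield lower | set(combo)

def conditionally_satisfies(R, S, domain, admissible):
    """(R, S) conditionally satisfies the c-atom (domain, admissible)
    iff every I with R∩domain ⊆ I ⊆ S∩domain is admissible."""
    lo, hi = R & domain, S & domain
    return lo <= hi and all(I in admissible for I in subsets_between(lo, hi))

# Example c-atom: "at least one of {p, q}".
domain = {"p", "q"}
admissible = [{"p"}, {"q"}, {"p", "q"}]
print(conditionally_satisfies({"p"}, {"p", "q"}, domain, admissible))  # True
print(conditionally_satisfies(set(), {"p", "q"}, domain, admissible))  # False
```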


2017 ◽  
Vol 56 (5) ◽  
pp. 959-972 ◽  
Author(s):  
Christian Krogh ◽  
Mathias H. Jungersen ◽  
Erik Lund ◽  
Esben Lindgaard

2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Fatmawati ◽  
Muhammad Altaf Khan ◽  
Cicik Alfiniyah ◽  
Ebraheem Alzahrani

In this work, we study dengue dynamics with the fractal-fractional Caputo–Fabrizio operator. We employ real statistical data on dengue infection cases in East Java, Indonesia, from 2018 and parameterize the dengue model. The estimated basic reproduction number for this dataset is $\mathcal{R}_{0} \approx 2.2020$. We briefly show the stability results of the model for the case when the basic reproduction number is $\mathcal{R}_{0} < 1$. We apply the fractal-fractional operator in the framework of Caputo–Fabrizio to the model and present its numerical solution using a novel approach. The parameter values estimated for the model are used in the comparison with the fractal-fractional operator, and we suggest that the fractal-fractional operator provides the best fit to real cases of dengue infection when varying the values of both operators' orders. We provide further graphical illustrations of the model variables for various fractal and fractional orders.


2016 ◽  
Author(s):  
Ting Xu ◽  
Alexander Opitz ◽  
R. Cameron Craddock ◽  
Margaret Wright ◽  
Xi-Nian Zuo ◽  
...  

Resting-state fMRI (R-fMRI) is a powerful in-vivo tool for examining the functional architecture of the human brain. Recent studies have demonstrated the ability to characterize transitions between functionally distinct cortical areas through the mapping of gradients in intrinsic functional connectivity (iFC) profiles. To date, this novel approach has primarily been applied to iFC profiles averaged across groups of individuals, or in one case, a single individual scanned multiple times. Here, we used a publicly available R-fMRI dataset, in which 30 healthy participants were scanned 10 times (10 minutes per session), to investigate differences in full-brain transition profiles (i.e., gradient maps, edge maps) across individuals, and their reliability. 10-minute R-fMRI scans were sufficient to achieve high accuracies in efforts to “fingerprint” individuals based upon full-brain transition profiles. Regarding test-retest reliability, the image-wise intraclass correlation coefficient (ICC) was moderate, and vertex-level ICC varied by region; longer scan durations uniformly yielded higher reliability. Initial application of gradient-based methodologies to a recently published dataset obtained from twins suggested that inter-individual variation in areal profiles might have genetic and familial origins. Overall, these results illustrate the utility of gradient-based iFC approaches for studying inter-individual variation in brain function.
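The abstract does not spell out which ICC variant was used; for test-retest designs a common choice is ICC(2,1) of Shrout and Fleiss (two-way random effects, absolute agreement), sketched here with numpy on purely synthetic data.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement (Shrout & Fleiss). `data` is subjects x sessions."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)   # subjects
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)   # sessions
    ss_total = np.sum((data - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Illustrative only: 30 "subjects" x 10 "sessions" of a synthetic measure
# with a stable per-subject trait plus session noise.
rng = np.random.default_rng(0)
trait = rng.normal(0.0, 1.0, size=(30, 1))
noise = rng.normal(0.0, 1.0, size=(30, 10))
print(round(icc_2_1(trait + noise), 3))   # moderate reliability, ~0.5
```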


2020 ◽  
Vol 34 (05) ◽  
pp. 8376-8383
Author(s):  
Dayiheng Liu ◽  
Jie Fu ◽  
Yidan Zhang ◽  
Chris Pal ◽  
Jiancheng Lv

Typical methods for unsupervised text style transfer often rely on two key ingredients: 1) seeking the explicit disentanglement of the content and the attributes, and 2) troublesome adversarial learning. In this paper, we show that neither of these components is indispensable. We propose a new framework that utilizes the gradients to revise the sentence in a continuous space during inference to achieve text style transfer. Our method consists of three key components: a variational auto-encoder (VAE), some attribute predictors (one for each attribute), and a content predictor. The VAE and the two types of predictors enable us to perform gradient-based optimization in the continuous space, which is mapped from sentences in a discrete space, to find the representation of a target sentence with the desired attributes and preserved content. Moreover, the proposed method naturally has the ability to simultaneously manipulate multiple fine-grained attributes, such as sentence length and the presence of specific words, when performing text style transfer tasks. Compared with previous adversarial learning based methods, the proposed method is more interpretable, controllable and easier to train. Extensive experimental studies on three popular text style transfer tasks show that the proposed method significantly outperforms five state-of-the-art methods.
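The revision step can be sketched in a few lines of PyTorch: with the encoder and the predictors frozen, gradient descent moves the latent code toward the target attribute while a penalty keeps it close to the original code as a content proxy. The module shapes, the loss weights, and the untrained stand-in networks below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

d = 64  # latent dimensionality (illustrative)

# Untrained stand-ins for the paper's pretrained, frozen components.
encoder = nn.Linear(300, d)               # sentence features -> latent z
attr_predictor = nn.Linear(d, 1)          # z -> attribute logit
for p in list(encoder.parameters()) + list(attr_predictor.parameters()):
    p.requires_grad_(False)               # only the latent code is optimized

x = torch.randn(1, 300)                   # features of the source sentence
z0 = encoder(x)                           # original latent code
z = z0.clone().requires_grad_(True)

opt = torch.optim.Adam([z], lr=0.1)
target = torch.ones(1, 1)                 # desired attribute value
bce = nn.BCEWithLogitsLoss()

for _ in range(200):
    opt.zero_grad()
    # Push the attribute toward the target; penalize drift from z0.
    loss = bce(attr_predictor(z), target) + 0.1 * (z - z0).pow(2).sum()
    loss.backward()
    opt.step()

# In the full method, z would now be decoded by the VAE's decoder into
# the revised sentence; multiple predictors allow multiple attributes.
```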


Author(s):  
D.T.V. Dharmajee Rao ◽  
K.V. Ramana

Deep Neural Network training algorithms consume long training times, especially when the number of hidden layers and nodes is large. Matrix multiplication is the key operation carried out at every node of each layer, hundreds of thousands of times, during the training of a Deep Neural Network. Blocking is a well-proven optimization technique for improving the performance of matrix multiplication. Blocked matrix multiplication algorithms can easily be parallelized to accelerate performance further. This paper proposes a novel approach of implementing parallel blocked matrix multiplication algorithms to reduce the long training time. The proposed approach was implemented using the parallel programming model OpenMP with the collapse() clause for the multiplication of the input and weight matrices of the Backpropagation and Boltzmann Machine algorithms for training Deep Neural Networks, and was tested on a multi-core processor system. Experimental results showed that the proposed approach achieved approximately a two-fold speedup over the classic algorithms.
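The blocking pattern itself is simple; the sketch below shows the tiled loop structure in Python (sequential, with numpy for the per-tile products). The paper's implementation parallelizes the block loops in C/C++ OpenMP with the collapse() clause; the block size here is an illustrative choice.

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Cache-blocked (tiled) matrix multiplication C = A @ B."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i0 in range(0, n, block):        # these two block loops are the
        for j0 in range(0, m, block):    # ones a collapse(2) would fuse
            for k0 in range(0, k, block):
                C[i0:i0 + block, j0:j0 + block] += (
                    A[i0:i0 + block, k0:k0 + block]
                    @ B[k0:k0 + block, j0:j0 + block]
                )
    return C

A = np.random.rand(200, 300)
B = np.random.rand(300, 150)
assert np.allclose(blocked_matmul(A, B), A @ B)
```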

