A Formal Framework for Synthesis and Verification of Logic Programs

Author(s): Alessandro Avellone, Mauro Ferrari, Camillo Fiorentini

2016, Vol. 16 (5-6), pp. 933-949
Author(s): Alexander Vandenbroucke, Maciej Piróg, Benoit Desouter, Tom Schrijvers

Tabling is a powerful resolution mechanism for logic programs that captures their least fixed point semantics more faithfully than plain Prolog. In many tabling applications, we are not interested in the set of all answers to a goal, but only require an aggregation of those answers. Several works have studied efficient techniques, such as lattice-based answer subsumption and mode-directed tabling, to do so for various forms of aggregation. While much attention has been paid to expressivity and efficient implementation of the different approaches, soundness has not been considered. This paper shows that the different implementations indeed fail to produce least fixed points for some programs. As a remedy, we provide a formal framework that generalises the existing approaches and we establish a soundness criterion that explains for which programs the approach is sound.
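
As a hedged illustration of the aggregation idea described in this abstract (not the paper's formal framework; all names are invented for the example), the following Python sketch computes aggregated answers for a shortest-distance program by naive fixed-point iteration, keeping only the minimum answer per goal in the spirit of lattice-based answer subsumption.

```python
# Minimal sketch of lattice-based answer aggregation via naive fixed-point
# iteration (illustrative only; not the paper's framework).
# Program, in Prolog-like pseudocode:
#   dist(Source, 0).
#   dist(V, D) :- dist(U, D0), edge(U, V, W), D is D0 + W.
# Answer subsumption keeps, for each vertex V, only the minimum D,
# i.e. answers are aggregated over the lattice (numbers, min).

INF = float("inf")

def shortest_distances(edges, source):
    """edges: iterable of (u, v, weight); returns {vertex: minimum distance}."""
    vertices = {source} | {u for u, _, _ in edges} | {v for _, v, _ in edges}
    table = {v: INF for v in vertices}   # answer table: one aggregated answer per goal
    table[source] = 0
    changed = True
    while changed:                       # iterate until a fixed point is reached
        changed = False
        for u, v, w in edges:
            candidate = table[u] + w
            if candidate < table[v]:     # 'min' plays the role of the aggregation
                table[v] = candidate
                changed = True
    return table

if __name__ == "__main__":
    edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5)]
    print(shortest_distances(edges, "a"))   # {'a': 0, 'b': 1, 'c': 3}
```

Here the iteration reaches the intended least fixed point because keeping only the minimum interacts well with the monotone iteration; the abstract's point is precisely that, for other aggregations, the existing implementations can fail to do so.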


1990
Author(s): Chitta Baral, Jorge Lobo, Jack Minker

1987, Vol. 10 (1), pp. 1-33
Author(s): Egon Börger, Ulrich Löwen

We survey and give new results on logical characterizations of complexity classes in terms of the computational complexity of decision problems for various classes of logical formulas. There are two main approaches to obtaining such results: the first yields logical descriptions of complexity classes by semantic restrictions (e.g., to finite structures) together with syntactic enrichment of the logic by new expressive means (e.g., fixed-point operators). The second characterizes complexity classes by (the decision problems of) classes of formulas determined by purely syntactic restrictions on the formation of formulas.
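
Two classical results of the first kind, quoted here only as standard illustrations (they are not claims specific to this survey), show how restricting to finite structures and enriching the logic yields exact characterizations:

```latex
% Standard examples of the first approach: semantic restriction to finite
% structures plus syntactic enrichment (second-order quantifiers, fixed points).
\[
  \mathrm{NP} = \exists\mathrm{SO}
  \qquad\text{(Fagin: existential second-order logic over finite structures)}
\]
\[
  \mathrm{P} = \mathrm{FO}(\mathrm{LFP})
  \qquad\text{(Immerman--Vardi: first-order logic with a least fixed-point operator over ordered finite structures)}
\]
```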


Studia Logica, 2021
Author(s): Vincenzo Crupi, Andrea Iacona

This paper develops a probabilistic analysis of conditionals which hinges on a quantitative measure of evidential support. In order to spell out the suggested interpretation of ‘if’, we will compare it with two more familiar interpretations, the suppositional interpretation and the strict interpretation, within a formal framework which rests on fairly uncontroversial assumptions. As it will emerge, each of the three interpretations considered exhibits specific logical features that deserve separate consideration.
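
For a rough sense of how such an analysis can be made quantitative, here is a small Python sketch over an explicit probability distribution on worlds. The support measure used (the difference P(C|A) − P(C)) is a generic stand-in chosen for illustration, not necessarily the measure adopted in the paper, and the toy probabilities are invented.

```python
# Illustrative sketch only: compares a suppositional reading of "if A then C"
# (conditional probability) with a generic evidential-support reading
# (here the difference P(C|A) - P(C), a stand-in measure, not the paper's).

# Worlds assign truth values to the atoms A and C; probabilities are illustrative.
worlds = {
    (True, True): 0.35,   # A and C
    (True, False): 0.05,  # A and not C
    (False, True): 0.30,  # not A and C
    (False, False): 0.30, # not A and not C
}

def prob(event):
    """Probability of the set of worlds satisfying `event` (a predicate on (a, c))."""
    return sum(p for (a, c), p in worlds.items() if event(a, c))

p_A = prob(lambda a, c: a)
p_C = prob(lambda a, c: c)
p_C_given_A = prob(lambda a, c: a and c) / p_A

suppositional = p_C_given_A        # how probable is C on the supposition of A?
support = p_C_given_A - p_C        # positive iff A raises the probability of C

print(f"P(C|A) = {p_C_given_A:.3f}  (suppositional degree of assertability)")
print(f"P(C|A) - P(C) = {support:.3f}  (positive iff A supports C)")
```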


Semantic Web, 2020, pp. 1-21
Author(s): Manuel Atencia, Jérôme David, Jérôme Euzenat

Both keys and their generalisation, link keys, may be used to perform data interlinking, i.e. finding identical resources in different RDF datasets. However, the precise relationship between keys and link keys has not been fully determined yet. A common formal framework encompassing both keys and link keys is necessary to ensure the correctness of data interlinking tools based on them, and to determine their scope and possible overlap. In this paper, we provide a semantics for keys and link keys within description logics. We determine under which conditions they can legitimately be used to generate links. We provide conditions under which link keys are logically equivalent to keys. In particular, we show that data interlinking with keys and ontology alignments can be reduced to data interlinking with link keys, but not the other way around.
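
To make the data-interlinking use of link keys concrete, here is a hedged Python sketch (toy data, property names, and IRIs are invented, and the semantics is deliberately simplified compared to the paper's description-logic treatment): a link key pairs properties of two datasets, and two resources are linked when they share values on every paired property.

```python
# Illustrative sketch of data interlinking with a link key (toy example;
# names and data are invented, semantics simplified).

# Each dataset maps a resource IRI to its property values.
dataset1 = {
    "ex1:alice": {"foaf:mbox": {"mailto:alice@example.org"}, "ex:ssn": {"123"}},
    "ex1:bob":   {"foaf:mbox": {"mailto:bob@example.org"},   "ex:ssn": {"456"}},
}
dataset2 = {
    "ex2:a-person": {"vcard:email": {"mailto:alice@example.org"}},
    "ex2:someone":  {"vcard:email": {"mailto:carol@example.org"}},
}

# A link key is a set of property pairs, one property per dataset.
link_key = {("foaf:mbox", "vcard:email")}

def generate_links(ds1, ds2, key):
    """Link r1 and r2 when, for every (p, q) in the key, they share some value."""
    links = []
    for r1, props1 in ds1.items():
        for r2, props2 in ds2.items():
            if all(props1.get(p, set()) & props2.get(q, set()) for p, q in key):
                links.append((r1, r2))
    return links

print(generate_links(dataset1, dataset2, link_key))
# [('ex1:alice', 'ex2:a-person')]
```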


2021
Author(s): Ken Takashima, Daiki Miyahara, Takaaki Mizuki, Hideaki Sone

In 1989, den Boer presented the first card-based protocol, called the “five-card trick,” that securely computes the AND function using a deck of physical cards via a series of actions such as shuffling and turning over cards. This protocol enables a couple to confirm their mutual love without revealing their individual feelings. During such a secure computation protocol, it is important to keep any information about the inputs secret. Almost all existing card-based protocols are secure under the assumption that all players participating in a protocol are semi-honest or covert, i.e., they do not deviate from the protocol if there is a chance that they will be caught when cheating. In this paper, we consider a more malicious attack in which a player as an active adversary can reveal cards illegally without any hesitation. Against such an actively revealing card attack, we define the t-secureness, meaning that no information about the inputs leaks even if at most t cards are revealed illegally. We then actually design t-secure AND protocols. Thus, our contribution is the construction of the first formal framework to handle actively revealing card attacks as well as their countermeasures.
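
As a hedged illustration of the baseline protocol mentioned in this abstract (one common presentation of the five-card trick, under semi-honest players; encoding conventions vary across the literature), the following Python simulation encodes each bit as a pair of cards, applies a random cut, and reads the AND of the two inputs off the revealed sequence.

```python
# Simulation of one common presentation of den Boer's five-card trick.
# Encoding conventions differ across the literature; this sketch only
# illustrates the mechanics, assuming semi-honest players.

import random

def commit(bit):
    """Encode a bit as a two-card commitment: 0 -> club,heart; 1 -> heart,club."""
    return ["C", "H"] if bit == 0 else ["H", "C"]

def five_card_trick(a, b, rng=random):
    # Layout: commitment to (not a), a single heart, commitment to b.
    sequence = commit(1 - a) + ["H"] + commit(b)
    # Random cut: a cyclic rotation by a secret offset (hides everything
    # except cyclic adjacency).
    offset = rng.randrange(5)
    sequence = sequence[offset:] + sequence[:offset]
    # Reveal all cards: a AND b = 1 iff the three hearts are cyclically consecutive.
    hearts = {i for i, card in enumerate(sequence) if card == "H"}
    consecutive = any(
        all((start + k) % 5 in hearts for k in range(3)) for start in range(5)
    )
    return 1 if consecutive else 0

# Exhaustive check that the revealed pattern computes AND for all inputs.
assert all(five_card_trick(a, b) == (a & b) for a in (0, 1) for b in (0, 1))
print("five-card trick computes AND for all inputs")
```

The random cut leaks nothing beyond whether the three hearts end up cyclically adjacent, which is exactly the AND of the inputs; the paper's concern is what happens when a malicious player turns cards over outside these prescribed steps.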


1990, Vol. 13 (4), pp. 465-483
Author(s): V.S. Subrahmanian

Large logic programs are normally designed by teams of individuals, each of whom designs a subprogram. While each of these subprograms may have consistent completions, the logic program obtained by taking the union of these subprograms may not. However, the resulting program still serves a useful purpose, for a (possibly) very large subset of it still has a consistent completion. We argue that “small” inconsistencies may cause a logic program to have no models (in the traditional sense), even though it still serves some useful purpose. A semantics is developed in this paper for general logic programs which ascribes a very reasonable meaning to general logic programs irrespective of whether they have consistent (in the classical logic sense) completions.
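 
A tiny worked example of the phenomenon described above (constructed here for illustration, not taken from the paper): the subprograms {p ← q} and {q ← ¬p} each have a consistent Clark completion, but their union completes to p ↔ q and q ↔ ¬p, which together force p ↔ ¬p and so admit no classical model. The check can be mechanised in a few lines of Python:

```python
# Illustration (not from the paper): two subprograms with consistent Clark
# completions whose union has no classical model.
from itertools import product

def consistent(completion):
    """completion: list of constraints, each a function (p, q) -> bool."""
    return any(all(c(p, q) for c in completion)
               for p, q in product([False, True], repeat=2))

# Subprogram P1 = { p :- q }      completion: p <-> q,     q <-> false
P1 = [lambda p, q: p == q, lambda p, q: not q]
# Subprogram P2 = { q :- \+ p }   completion: q <-> not p, p <-> false
P2 = [lambda p, q: q == (not p), lambda p, q: not p]
# Union P1 + P2                   completion: p <-> q,     q <-> not p
UNION = [lambda p, q: p == q, lambda p, q: q == (not p)]

print(consistent(P1), consistent(P2), consistent(UNION))  # True True False
```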


2021, Vol. 1846 (1), pp. 012035
Author(s): Yuanxiu Liao, Mingrui Yan, Xinqiao Li

2002, Vol. 37 (3), pp. 63-74
Author(s): Lunjin Lu
