Logical Decision Problems and Complexity of Logic Programs

1987 ◽  
Vol 10 (1) ◽  
pp. 1-33
Author(s):  
Egon Börger ◽  
Ulrich Löwen

We survey and give new results on logical characterizations of complexity classes in terms of the computational complexity of decision problems for various classes of logical formulas. There are two main approaches to obtaining such results: the first yields logical descriptions of complexity classes through semantic restrictions (e.g., to finite structures) combined with syntactic enrichment of the logic by new expressive means (e.g., fixed-point operators); the second characterizes complexity classes by (the decision problems of) classes of formulas determined by purely syntactic restrictions on the formation of formulas.
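Two classical results illustrate the first (descriptive) approach; they are stated here only as standard background, not as claims specific to this survey:

```latex
% Standard background results illustrating the first approach (not results of
% this particular survey):

% Fagin's theorem: a class of finite structures is NP-decidable iff it is
% definable in existential second-order logic.
\[
  \mathrm{NP} \;=\; \exists\mathrm{SO}
\]

% Immerman--Vardi theorem: on ordered finite structures, the polynomial-time
% decidable classes are exactly those definable in first-order logic extended
% with a least fixed-point operator.
\[
  \mathrm{P} \;=\; \mathrm{FO(LFP)} \qquad \text{on ordered finite structures}
\]
```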

1995 ◽  
Vol 60 (2) ◽  
pp. 517-527 ◽  
Author(s):  
Martin Grohe

The notion of logical reducibilities is derived from the idea of interpretations between theories. It was used by Lovász and Gács [LG77] and Immerman [Imm87] to give complete problems for certain complexity classes and hence to establish new connections between logical definability and computational complexity. However, the notion is also interesting in a purely logical context; for example, it is helpful for establishing nonexpressibility results. We say that a class of τ-structures is a complete problem for a logic under L-reductions if it is definable in that logic (over the vocabulary τ) and if every class definable in the logic can be "translated" into it by L-formulae (cf. §4). We prove the following theorem: 1.1. Theorem. There are complete problems for partial fixed-point logic and for inductive fixed-point logic under quantifier-free reductions. The main step of the proof is to establish a new normal form for fixed-point formulae (which might be of some interest in itself). To obtain this normal form we use theorems of Abiteboul and Vianu [AV91a] that show the equivalence between the fixed-point logics we consider and certain extensions of the database query language Datalog. In [Dah87] Dahlhaus gave a complete problem for least fixed-point logic. Since least fixed-point logic equals inductive fixed-point logic by a well-known result of Gurevich and Shelah [GS86], this already proves one part of our theorem. However, our class gives a natural description of the fixed-point process of an inductive fixed-point formula and hence sheds light on completely different aspects of the logic than Dahlhaus's construction, which is strongly based on the features of least fixed-point formulae.
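For orientation, the completeness notion used above can be stated in a standard way; the following is a hedged reformulation (the paper's precise definition, with its choice of reduction formulas, is given in its §4):

```latex
% Hedged, standard-style formulation (details may differ from the paper's definition):
% a class K of \tau-structures is complete for a logic L under quantifier-free
% reductions if (1) K is definable in L over \tau, and (2) for every class K'
% of \sigma-structures definable in L there is a quantifier-free interpretation
% I such that, for all \sigma-structures \mathfrak{A},
\[
  \mathfrak{A} \in K' \iff I(\mathfrak{A}) \in K .
\]
```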


10.29007/pr47 ◽  
2018 ◽  
Author(s):  
Emmanuelle-Anna Dietz Saldanha ◽  
Steffen Hölldobler ◽  
Sibylle Schwarz ◽  
Lim Yohanes Stefanus

The weak completion semantics is an integrated and computational cognitive theory which is based on normal logic programs, three-valued Łukasiewicz logic, weak completion, and skeptical abduction. It has been successfully applied, among others, to the suppression task, the selection task, and to human syllogistic reasoning. In order to solve ethical decision problems such as trolley problems, we need to extend the weak completion semantics to deal with actions and causality. To this end we consider normal logic programs and a set E of equations as in the fluent calculus. We formally show that normal logic programs with equality admit a least E-model under the weak completion semantics and that this E-model can be computed as the least fixed point of an associated semantic operator. We show that the operator is not continuous in general, but is continuous if the logic program is a propositional, a finite-ground, or a finite datalog program and the Herbrand E-universe is finite. Finally, we show that the weak completion semantics with equality can solve a variety of ethical decision problems, such as the bystander case, the footbridge case, and the loop case, by computing the least E-model and reasoning with respect to this E-model. The reasoning process involves counterfactuals, which are necessary to model the different ethical dilemmas.
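To illustrate the fixed-point computation mentioned above in the simplest setting, here is a minimal sketch, assuming a ground propositional program and no equational theory E (so it shows only a plain weak-completion-style consequence operator, not the paper's E-extension):

```python
# Minimal sketch (propositional, no equational theory E): compute the least
# model of a ground normal logic program under a weak-completion-style
# three-valued operator by fixed-point iteration.

def phi(program, true_atoms, false_atoms):
    """One application of the semantic operator.

    program: dict mapping each head atom to a list of clause bodies, where a
    body is a list of literals such as 'p' or 'not p' (an empty body encodes
    the fact  head <- T).  Returns the new pair (true, false) of atom sets.
    """
    def lit_true(lit):
        return lit[4:] in false_atoms if lit.startswith('not ') else lit in true_atoms

    def lit_false(lit):
        return lit[4:] in true_atoms if lit.startswith('not ') else lit in false_atoms

    new_true, new_false = set(), set()
    for head, bodies in program.items():
        if any(all(lit_true(l) for l in body) for body in bodies):
            new_true.add(head)            # some body is already true
        elif all(any(lit_false(l) for l in body) for body in bodies):
            new_false.add(head)           # every body of every clause is false
    return new_true, new_false

def least_model(program):
    """Iterate phi starting from the empty interpretation until it stabilizes."""
    true_atoms, false_atoms = set(), set()
    while True:
        t, f = phi(program, true_atoms, false_atoms)
        if (t, f) == (true_atoms, false_atoms):
            return t, f
        true_atoms, false_atoms = t, f

# Example: p <- T.   q <- p.   r <- not s.   (s is undefined, so r stays unknown)
prog = {'p': [[]], 'q': [['p']], 'r': [['not s']]}
print(least_model(prog))   # ({'p', 'q'}, set())
```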


2010 ◽  
Vol 20 (1) ◽  
pp. 75-103 ◽  
Author(s):  
ADAM ANTONIK ◽  
MICHAEL HUTH ◽  
KIM G. LARSEN ◽  
ULRIK NYMAN ◽  
ANDRZEJ WĄSOWSKI

Modal and mixed transition systems are specification formalisms that allow the mixing of over- and under-approximation. We discuss three fundamental decision problems for such specifications: (i) whether a set of specifications has a common implementation; (ii) whether an individual specification has an implementation; and (iii) whether all implementations of an individual specification are implementations of another one. For each of these decision problems we investigate the worst-case computational complexity for the modal and mixed cases. We show that the first decision problem is EXPTIME-complete for both modal and mixed specifications. We prove that the second decision problem is EXPTIME-complete for mixed specifications (it is known to be trivial for modal ones). The third decision problem is also shown to be EXPTIME-complete for mixed specifications.
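To make "implementation" concrete, the following is a hedged sketch, assuming must-transitions are a subset of may-transitions, of checking whether an ordinary labelled transition system implements (modally refines) a single modal specification. It is our own simplified illustration and does not address the EXPTIME-complete problems above, which concern sets of specifications and quantification over all implementations.

```python
# Hedged sketch: check whether an implementation (plain LTS) modally refines a
# modal transition system. 'must' transitions are required, 'may' transitions
# are allowed (must is assumed to be a subset of may).

def refines(impl_trans, impl_init, spec_may, spec_must, spec_init):
    """impl_trans: set of (state, action, state) triples of the implementation.
    spec_may / spec_must: sets of (state, action, state) triples of the spec.
    Returns True iff the implementation is a modal refinement of the spec."""
    impl_states = {s for (s, _, t) in impl_trans} | {t for (s, _, t) in impl_trans} | {impl_init}
    spec_states = {s for (s, _, t) in spec_may} | {t for (s, _, t) in spec_may} | {spec_init}

    # Start from the full relation and prune pairs that violate the refinement
    # conditions (a greatest-fixed-point computation).
    rel = {(i, m) for i in impl_states for m in spec_states}
    changed = True
    while changed:
        changed = False
        for (i, m) in list(rel):
            # 1. every implementation step must be allowed by a may-transition
            ok_may = all(any((m, a, m2) in spec_may and (i2, m2) in rel
                             for m2 in spec_states)
                         for (s, a, i2) in impl_trans if s == i)
            # 2. every must-transition of the spec must be matched by the implementation
            ok_must = all(any((i, a, i2) in impl_trans and (i2, m2) in rel
                              for i2 in impl_states)
                          for (s, a, m2) in spec_must if s == m)
            if not (ok_may and ok_must):
                rel.discard((i, m))
                changed = True
    return (impl_init, spec_init) in rel

# Tiny example: the spec requires an 'a' step and allows an optional 'b' afterwards.
spec_must = {('s0', 'a', 's1')}
spec_may = spec_must | {('s1', 'b', 's1')}
impl = {('i0', 'a', 'i1')}   # takes 'a', never does the optional 'b'
print(refines(impl, 'i0', spec_may, spec_must, 's0'))                 # True
print(refines({('i0', 'b', 'i1')}, 'i0', spec_may, spec_must, 's0'))  # False: 'b' is not allowed at s0 and the required 'a' is missing
```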


2010 ◽  
Vol 20 (04) ◽  
pp. 489-524 ◽  
Author(s):  
JEAN-CAMILLE BIRGET

We study the monoid generalization M_{k,1} of the Thompson–Higman groups, and we characterize the [Formula: see text]- and the [Formula: see text]-order of M_{k,1}. Although M_{k,1} has only one nonzero [Formula: see text]-class and k-1 nonzero [Formula: see text]-classes, the [Formula: see text]- and the [Formula: see text]-order are complicated; in particular, [Formula: see text] is dense (even within an [Formula: see text]-class), and [Formula: see text] is dense (even within an [Formula: see text]-class). We study the computational complexity of the [Formula: see text]- and the [Formula: see text]-order. When inputs are given by words over a finite generating set of M_{k,1}, the [Formula: see text]- and the [Formula: see text]-order decision problems are in P. However, over a "circuit-like" generating set the [Formula: see text]-order decision problem of M_{k,1} is [Formula: see text]-complete, whereas the [Formula: see text]-order decision problem is coNP-complete. Similarly, for acyclic circuits the surjectiveness problem is [Formula: see text]-complete, whereas the injectiveness problem is coNP-complete.
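The final sentence contrasts surjectiveness and injectiveness of functions computed by acyclic circuits. As a hedged, brute-force illustration of why the two questions have a different logical shape (not the paper's constructions; here the "circuit" is just a Python callable): non-injectiveness is witnessed by a single colliding pair of inputs, whereas surjectiveness quantifies over all outputs and then asks for a preimage.

```python
# Toy illustration (assumption: the 'circuit' is a Python callable on n-bit
# inputs producing m-bit outputs). Brute force only; the complexity results in
# the paper concern circuits as syntactic objects, not truth tables.
from itertools import product

def is_injective(f, n):
    """False iff two distinct n-bit inputs collide; a single counterexample
    suffices, which is why injectiveness has a coNP flavor."""
    seen = {}
    for x in product((0, 1), repeat=n):
        y = f(x)
        if y in seen and seen[y] != x:
            return False
        seen[y] = x
    return True

def is_surjective(f, n, m):
    """Every m-bit output must have some n-bit preimage: a 'for all ... there
    exists ...' condition, matching the second level of the hierarchy."""
    images = {f(x) for x in product((0, 1), repeat=n)}
    return all(y in images for y in product((0, 1), repeat=m))

# Example: addition of two 2-bit numbers, output as a 3-bit number.
def adder(x):
    a = x[0] * 2 + x[1]
    b = x[2] * 2 + x[3]
    s = a + b
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)

print(is_injective(adder, 4))      # False: e.g. 0+1 and 1+0 collide
print(is_surjective(adder, 4, 3))  # False: the sum never equals 7
```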


Author(s):  
Nico Potyka

Bipolar abstract argumentation frameworks allow modeling decision problems by defining pro and contra arguments and their relationships. In some popular bipolar frameworks, there is an inherent tendency to favor either attack or support relationships. However, for some applications it seems sensible to treat attack and support equally; roughly speaking, turning an attack edge into a support edge should simply invert its meaning. We look at a recently introduced bipolar argumentation semantics and two novel alternatives and discuss their semantic and computational properties. Interestingly, the two novel semantics correspond to stable semantics if no support relations are present and maintain the computational complexity of stable semantics in general bipolar frameworks.
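For reference, the stable semantics that the two novel semantics collapse to (when no support edges are present) can be enumerated by brute force; this is a minimal sketch of the attack-only baseline, not of the bipolar semantics studied in the paper.

```python
# Minimal sketch: enumerate stable extensions of a Dung-style argumentation
# framework (arguments + attack relation only; no support edges).
from itertools import combinations

def stable_extensions(arguments, attacks):
    """A set S is stable iff it is conflict-free and attacks every argument outside S."""
    args = list(arguments)
    extensions = []
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            s = set(subset)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s) for b in set(args) - s)
            if conflict_free and attacks_rest:
                extensions.append(s)
    return extensions

# Example: a and b attack each other, and both attack c.
A = {'a', 'b', 'c'}
R = {('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c')}
print(stable_extensions(A, R))   # e.g. [{'a'}, {'b'}]
```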


1998 ◽  
Vol 9 ◽  
pp. 1-36 ◽  
Author(s):  
M. L. Littman ◽  
J. Goldsmith ◽  
M. Mundhenk

We examine the computational complexity of testing and finding small plans in probabilistic planning domains with both flat and propositional representations. The complexity of plan evaluation and existence varies with the plan type sought; we examine totally ordered plans, acyclic plans, and looping plans, and partially ordered plans under three natural definitions of plan value. We show that problems of interest are complete for a variety of complexity classes: PL, P, NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. In the process of proving that certain planning problems are complete for NP^PP, we introduce a new basic NP^PP-complete problem, E-MAJSAT, which generalizes the standard Boolean satisfiability problem to computations involving probabilistic quantities; our results suggest that the development of good heuristics for E-MAJSAT could be important for the creation of efficient algorithms for a wide variety of problems.
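As described, E-MAJSAT asks whether the choice variables can be fixed so that a majority of assignments to the remaining (chance) variables satisfy the formula. The brute-force checker below is a hedged sketch of that definition (exponential, for illustration only; the formula and variable names are invented, and the majority threshold is taken here as strictly more than half).

```python
# Brute-force E-MAJSAT sketch: does there exist an assignment to the choice
# variables such that strictly more than half of the assignments to the chance
# variables satisfy the formula?  (Exponential; illustrates the definition only.)
from itertools import product

def e_majsat(formula, choice_vars, chance_vars):
    for choice in product((False, True), repeat=len(choice_vars)):
        fixed = dict(zip(choice_vars, choice))
        satisfied = 0
        for chance in product((False, True), repeat=len(chance_vars)):
            env = {**fixed, **dict(zip(chance_vars, chance))}
            if formula(env):
                satisfied += 1
        if 2 * satisfied > 2 ** len(chance_vars):
            return True
    return False

# Hypothetical example: (x1 or y1) and (x1 or not y2)
phi = lambda v: (v['x1'] or v['y1']) and (v['x1'] or not v['y2'])
print(e_majsat(phi, ['x1'], ['y1', 'y2']))   # True: with x1 = True, all 4 chance assignments satisfy phi
```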


2021 ◽  
Author(s):  
Jozo J Dujmovic ◽  
Daniel Tomasevich

Computing the COVID-19 vaccination priority is an urgent and ubiquitous decision problem. In this paper we propose a solution to this problem using the LSP evaluation method. Our goal is to develop a justifiable and explainable quantitative criterion for computing a vaccination priority degree for each individual in a population. Performing vaccination in order of decreasing vaccination priority produces the maximum positive medical, social, and ethical effect for the whole population. The presented method can be expanded and refined using additional medical and social conditions. In addition, the same methodology is suitable for solving other similar medical priority decision problems, such as priorities for organ transplants.
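LSP criteria aggregate attribute suitability degrees with graded conjunction and disjunction operators, commonly realized as weighted power means. The sketch below is a deliberately simplified, hypothetical aggregator: the attributes, weights, and exponent are invented for illustration and are not the criterion proposed in the paper.

```python
# Hypothetical, simplified LSP-style aggregation: attribute scores in [0, 1]
# are combined with a weighted power mean whose exponent r tunes the degree of
# simultaneity (andness). All attributes and parameters below are invented.

def weighted_power_mean(scores, weights, r):
    if r == 0:  # limit case: weighted geometric mean
        result = 1.0
        for s, w in zip(scores, weights):
            result *= s ** w
        return result
    return sum(w * s ** r for s, w in zip(scores, weights)) ** (1.0 / r)

def vaccination_priority(age_risk, exposure_risk, health_risk):
    """Toy criterion: all three suitability degrees should be present to some
    extent (r < 1 models partial conjunction); higher output = higher priority."""
    return weighted_power_mean(
        [age_risk, exposure_risk, health_risk],
        [0.4, 0.3, 0.3],   # invented weights
        r=0.5,             # invented andness level
    )

print(round(vaccination_priority(0.9, 0.6, 0.7), 3))   # a degree in (0, 1), here about 0.745
```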


2017 ◽  
Vol 668 ◽  
pp. 27-42 ◽  
Author(s):  
Angelos Charalambidis ◽  
Panos Rondogiannis ◽  
Ioanna Symeonidou

Author(s):  
Raymond Greenlaw ◽  
H. James Hoover ◽  
Walter L. Ruzzo

The goal of this chapter is to provide the formal basis for many key concepts that are used throughout the book. These include the notions of problem, definitions of important complexity classes, reducibility, and completeness, among others. Thus far, we have used the term "problem" somewhat vaguely. In order to compare the difficulty of various problems we need to make this concept precise. Problems typically come in two flavors: search problems and decision problems. Consider the following search problem, to find the value of the maximum flow in a network.

Example 3.1.1 Maximum Flow Value (MaxFlow-V)
Given: A directed graph G = (V, E) with each edge e labeled by an integer capacity c(e) ≥ 0, and two distinguished vertices, s and t.
Problem: Compute the value of the maximum flow from source s to sink t in G.

The problem requires us to compute a number, namely the value of the maximum flow. Note, in this case we are actually computing a function. Now consider a variant of this problem.

Example 3.1.2 Maximum Flow Bit (MaxFlow-B)
Given: A directed graph G = (V, E) with each edge e labeled by an integer capacity c(e) ≥ 0, two distinguished vertices, s and t, and an integer i.
Problem: Is the ith bit of the value of the maximum flow from source s to sink t in G a 1?

This is a decision problem version of the flow problem. Rather than asking for the computation of some value, the problem is asking for a "yes" or "no" answer to a specific question. Yet the decision problem MaxFlow-B is equivalent to the search problem MaxFlow-V in the sense that if one can be solved efficiently in parallel, so can the other. Why is this? First consider how solving an instance of MaxFlow-B can be reduced to solving an instance of MaxFlow-V. Suppose that you are asked a question for MaxFlow-B, that is, "Is bit i of the maximum flow a 1?" It is easy to answer this question by solving MaxFlow-V and then looking at bit i of the flow.
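As a hedged sketch of the reduction described in the last sentences (not code from the book): solve MaxFlow-V once, then answer MaxFlow-B by inspecting bit i of the resulting value. The max-flow routine below is a simple Edmonds-Karp implementation included only to make the example self-contained.

```python
# Sketch: reduce MaxFlow-B ("is bit i of the max-flow value a 1?") to MaxFlow-V
# ("compute the max-flow value"). The Edmonds-Karp routine exists only to make
# the example runnable; it is not part of the reduction itself.
from collections import deque

def max_flow_value(capacity, s, t):
    """capacity: dict mapping (u, v) to a nonnegative integer edge capacity."""
    residual = dict(capacity)
    nodes = {u for u, _ in capacity} | {v for _, v in capacity}
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck and update residual capacities along the path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[(u, v)] for u, v in path)
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] = residual.get((v, u), 0) + bottleneck
        flow += bottleneck

def maxflow_bit(capacity, s, t, i):
    """MaxFlow-B reduced to MaxFlow-V: compute the value, then test bit i."""
    return ((max_flow_value(capacity, s, t) >> i) & 1) == 1

# Small example: two disjoint s-t paths with capacities 3 and 2; max flow = 5 = 101 in binary.
cap = {('s', 'a'): 3, ('a', 't'): 3, ('s', 'b'): 2, ('b', 't'): 2}
print(max_flow_value(cap, 's', 't'))   # 5
print(maxflow_bit(cap, 's', 't', 0))   # True  (bit 0 of 5 is 1)
print(maxflow_bit(cap, 's', 't', 1))   # False (bit 1 of 5 is 0)
```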

