Decidability and Complexity of Action-Based Temporal Planning over Dense Time

2020
Vol 34 (06)
pp. 9859-9866
Author(s):
Nicola Gigante
Andrea Micheli
Angelo Montanari
Enrico Scala

This paper studies the computational complexity of temporal planning, as represented by PDDL 2.1, interpreted over dense time. When time is considered discrete, the problem is known to be EXPSPACE-complete. However, the official PDDL 2.1 semantics, and many implementations, interpret time as a dense domain. This work provides several results about the complexity of the problem, studying a few interesting cases: whether a minimum separation ϵ between mutually exclusive events is required, in contrast to the separation being merely required to be non-zero, and whether or not actions are allowed to overlap already running instances of themselves. We prove the problem to be PSPACE-complete when self-overlap is forbidden, whereas, when it is allowed, the problem becomes EXPSPACE-complete with ϵ-separation and undecidable with non-zero separation. These results clarify the computational consequences of different choices in the definition of the PDDL 2.1 semantics, which had previously been left vague.
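The distinction between the two separation regimes can be made concrete with a small Python sketch (illustrative only; the event and action names are hypothetical):

```python
# Minimal sketch (all names hypothetical) of the two separation regimes:
# events are (timestamp, action) pairs and `mutex` says whether two
# actions are mutually exclusive.

def epsilon_separated(events, mutex, eps):
    """True iff every pair of mutually exclusive events is at least eps apart."""
    for i, (t1, a1) in enumerate(events):
        for t2, a2 in events[i + 1:]:
            if mutex(a1, a2) and abs(t1 - t2) < eps:
                return False
    return True

def nonzero_separated(events, mutex):
    """The weaker requirement: mutually exclusive events merely never coincide."""
    return all(t1 != t2
               for i, (t1, a1) in enumerate(events)
               for t2, a2 in events[i + 1:]
               if mutex(a1, a2))

# Over dense time this schedule satisfies non-zero separation for any pair,
# yet violates eps-separation for eps = 0.5.
schedule = [(0.0, "open-valve"), (0.1, "close-valve")]
print(nonzero_separated(schedule, lambda a, b: a != b))        # True
print(epsilon_separated(schedule, lambda a, b: a != b, 0.5))   # False
```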

Author(s):
Vladimir Mic
Pavel Zezula

This chapter focuses on data searching, which is nowadays mostly based on similarity. Similarity search is challenging due to its computational complexity, and also due to the fact that similarity is subjective and context dependent. The authors assume the metric space model of similarity, defined by a domain of objects and a metric function that measures the dissimilarity of object pairs. The volume of contemporary data is large, and the time efficiency of similarity query executions is essential. This chapter investigates transformations of metric space to Hamming space that decrease the memory and computational complexity of the search. Various challenges of similarity search with sketches in the Hamming space are addressed, including the definition of the sketching transformation and efficient search algorithms that exploit sketches to speed up searching. The indexing of the Hamming space and a heuristic that facilitates the selection of a suitable sketching technique for a given application are also considered.
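As an illustration of the sketching idea, here is a minimal Python sketch of one possible pipeline, assuming Euclidean vectors and random-hyperplane sketches (one specific sketching transformation; the chapter treats the general metric-space setting):

```python
# Sketch-based filtering: map vectors to short bit strings, filter candidates
# by cheap Hamming distance, then refine with the exact metric.
import numpy as np

rng = np.random.default_rng(0)

def make_sketcher(dim, bits):
    planes = rng.standard_normal((bits, dim))
    return lambda x: (planes @ x > 0)           # bit vector in Hamming space

data = rng.standard_normal((10_000, 64))
sketch = make_sketcher(64, 128)
sketches = np.array([sketch(x) for x in data])  # precomputed index

def search(query, k=10, candidates=200):
    qs = sketch(query)
    dists = np.count_nonzero(sketches != qs, axis=1)    # Hamming distances
    cand = np.argsort(dists)[:candidates]               # cheap filtering step
    exact = np.linalg.norm(data[cand] - query, axis=1)  # refine with the metric
    return cand[np.argsort(exact)[:k]]

print(search(rng.standard_normal(64)))
```

The design point this illustrates is the one raised in the abstract: Hamming operations on 128-bit sketches are far cheaper than the original metric, so most of the dataset is discarded before any exact distance is computed.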


2003
Vol 12 (04)
pp. 539-562
Author(s):
TAMÁS ROSKA

The CNN Universal Machine is generalized as the latest step in computational architectures: a Universal Machine on Flows. Computational complexity and computer complexity issues are studied in different architectural settings. Three mathematical machines are considered: the universal machine on integers (UMZ), the universal machine on reals (UMR), and the universal machine on flows (UMF). The three machines induce different kinds of computational difficulties: combinatorial, algebraic, and dynamic, respectively. After a broader overview of computational complexity issues, it is shown, following the reasoning related to the UMR, that in many cases size is not the most important parameter of computational complexity. Emerging new computing and computer architectures, as well as their physical implementations, suggest a new look at computational and computer complexities. The new analog-and-logic (analogic) cellular array computer paradigm, based on the CNN Universal Machine, and its physical implementation in CMOS and optical technologies, experimentally demonstrate the relevance of accuracy and problem parameters to computational complexity. We also introduce a rigorous definition of computational complexity for the UMF and identify an NP class of problems. It is also shown that the choice of spatial-temporal elementary instructions, and the consideration of area and power dissipation, inherently influence computational complexity and computer complexity, respectively. Comments on the relevance of the UMF to biology are presented in relation to complexity theory. It is shown that algorithms using spatial-temporal continuous elementary instructions (α-recursive functions) represent not only a new world in computing, but also a more general type of logic inference.
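As a toy illustration of a spatial-temporal elementary instruction, the following Python sketch integrates the classic Chua-Yang CNN cell dynamics on a small grid. The template values are a commonly cited edge-detection pair; the whole example is an assumption-laden simplification, not the paper's machinery:

```python
# Toy CNN "flow": Euler integration of x' = -x + A*f(x) + B*u + z on a grid,
# run until near-equilibrium, as one spatio-temporal elementary instruction.
import numpy as np

def f(x):  # standard piecewise-linear CNN output function
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def conv3(img, tmpl):
    out = np.zeros_like(img)
    p = np.pad(img, 1)
    for i in range(3):
        for j in range(3):
            out += tmpl[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def cnn_run(u, A, B, z, dt=0.05, steps=400):
    x = np.zeros_like(u)
    for _ in range(steps):                       # the flow, discretized
        x += dt * (-x + conv3(f(x), A) + conv3(u, B) + z)
    return f(x)

# A commonly cited edge-detection template pair (assumed values).
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = -np.ones((3, 3)); B[1, 1] = 8.0
u = -np.ones((16, 16)); u[4:12, 4:12] = 1.0      # +1 square on a -1 background
print(cnn_run(u, A, B, z=-1.0).round(0))         # +1 survives only at edges
```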


Author(s):
CHANGSONG QI
JIGUI SUN

The model net proposed in this paper is a kind of directed graph used to represent and analyze the static structure of a modelbase. After the formal definition of the model net is given, a construction algorithm is introduced. Then, two simplification algorithms are put forward to show how this approach can reduce the computational complexity of model composition for a specific decision problem. Subsequently, a model composition algorithm based on the simplification algorithms is worked out. This algorithm is capable of finding all the candidate composite models for a specific decision problem. Finally, several advantages of the model net are discussed briefly.
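A hypothetical Python sketch of the underlying idea (the data names and model inventory are invented for illustration): a modelbase as a directed graph whose arcs connect required input data items to produced outputs, with candidate composite models enumerated by graph search:

```python
# Modelbase as a graph: model -> (required inputs, produced outputs).
MODELS = {
    "forecast":  ({"history"}, {"demand"}),
    "lp_plan":   ({"demand", "capacity"}, {"plan"}),
    "heuristic": ({"history", "capacity"}, {"plan"}),
}

def candidate_compositions(available, goal, chain=()):
    """Yield every sequence of models turning `available` data into `goal`."""
    if goal <= available:
        yield chain
        return
    for name, (needs, gives) in MODELS.items():
        if name not in chain and needs <= available:
            yield from candidate_compositions(available | gives, goal,
                                              chain + (name,))

# Prints every candidate composite model for the decision problem,
# e.g. ('heuristic',) and ('forecast', 'lp_plan').
for c in candidate_compositions({"history", "capacity"}, {"plan"}):
    print(c)
```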


2011
Vol 21 (6)
pp. 1339-1362
Author(s):
SEBASTIAN DANICIC
ROBERT M. HIERONS
MICHAEL R. LAURENCE

Given a program, a quotient can be obtained from it by deleting zero or more statements. The field of program slicing is concerned with computing a quotient of a program that preserves part of the behaviour of the original program. All program slicing algorithms take account of the structural properties of a program, such as control dependence and data dependence, rather than the semantics of its functions and predicates, and thus work, in effect, with program schemas. The dynamic slicing criterion of Korel and Laski requires only that program behaviour is preserved in cases where the original program follows a particular path, and that the slice/quotient follows this path. In this paper we formalise Korel and Laski's definition of a dynamic slice as applied to linear schemas, and also formulate a less restrictive definition in which the path through the original program need not be preserved by the slice. The less restrictive definition has the benefit of leading to smaller slices. For both definitions, we compute complexity bounds for the problems of establishing whether a given slice of a linear schema is a dynamic slice and whether a linear schema has a non-trivial dynamic slice, and prove that the latter problem is NP-hard in both cases. We also give an example to prove that minimal dynamic slices (whether or not they preserve the original path) need not be unique.
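A toy Python rendering of the dynamic-slicing check may help fix intuitions. It uses a straight-line mini-language, so path preservation is trivial; it is not the paper's linear-schema formalism:

```python
# A quotient deletes statements; the check asks whether, on a given input,
# the quotient still computes the same value for the variable of interest.
def run(stmts, env):
    try:
        for s in stmts:
            exec(s, {}, env)       # each statement is a simple assignment
        return env
    except NameError:              # a deleted definition was still needed
        return None

program = ["a = x + 1", "b = 2 * x", "y = a + 3"]

def is_dynamic_slice(keep, x, var):
    """Does the quotient `keep` preserve `var` on input x?"""
    full = run(program, {"x": x})
    quot = run([s for i, s in enumerate(program) if i in keep], {"x": x})
    return quot is not None and quot.get(var) == full.get(var)

print(is_dynamic_slice({0, 2}, x=5, var="y"))  # True: 'b = 2 * x' is irrelevant to y
print(is_dynamic_slice({2},    x=5, var="y"))  # False: y needs the deleted 'a = x + 1'
```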


1987
Vol 52 (1)
pp. 1-43
Author(s):
Larry Stockmeyer

One of the more significant achievements of twentieth century mathematics, especially from the viewpoints of logic and computer science, was the work of Church, Gödel and Turing in the 1930s which provided a precise and robust definition of what it means for a problem to be computationally solvable, or decidable, and which showed that there are undecidable problems which arise naturally in logic and computer science. Indeed, when one is faced with a new computational problem, one of the first questions to be answered is whether the problem is decidable or undecidable. A problem is usually defined to be decidable if and only if it can be solved by some Turing machine, and the class of decidable problems defined in this way remains unchanged if “Turing machine” is replaced by any of a variety of other formal models of computation. The division of all problems into two classes, decidable or undecidable, is very coarse, and refinements have been made on both sides of the boundary. On the undecidable side, work in recursive function theory, using tools such as effective reducibility, has exposed much additional structure such as degrees of unsolvability. The main purpose of this survey article is to describe a branch of computational complexity theory which attempts to expose more structure within the decidable side of the boundary. Motivated in part by practical considerations, the additional structure is obtained by placing upper bounds on the amounts of computational resources which are needed to solve the problem. Two common measures of the computational resources used by an algorithm are time, the number of steps executed by the algorithm, and space, the amount of memory used by the algorithm.
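A standard textbook pair of programs illustrates the two resource measures; both decide the same problem while trading time for space (illustrative Python, not from the survey):

```python
# Both functions decide "does the list contain a duplicate?".
def has_dup_small_space(xs):     # ~n^2 comparisons, O(1) extra memory
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_dup_fast(xs):            # ~n steps, but O(n) extra memory for the set
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

print(has_dup_small_space([3, 1, 4, 1]), has_dup_fast([3, 1, 4, 1]))  # True True
```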


10.29007/t77g
2018
Author(s):
Daniel Leivant

We use notions originating in computational complexity to provide insight into the analogies between computational complexity and higher recursion theory. We consider alternating Turing machines, but with a modified, global definition of acceptance. We show that a language is accepted by such a machine iff it is Π^1_1. Moreover, total alternating machines, which either accept or reject each input, accept precisely the hyperarithmetical (Δ^1_1) languages. Also, by bounding the permissible number of alternations, we obtain a characterization of the levels of the arithmetical hierarchy. The novelty of these characterizations lies primarily in the use of finite computing devices, with finitary, discrete computation steps. We thereby elucidate the correspondence between the polynomial-time and arithmetical hierarchies, as well as that between the computably enumerable, the inductive (Π^1_1), and the PSPACE languages.
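Alternation itself is easy to demonstrate on a finite toy, far below the Π^1_1 setting of the paper: evaluating a quantified Boolean formula, where each switch between existential and universal quantifiers is one alternation (illustrative Python):

```python
def qbf(prefix, matrix, env=None):
    """Evaluate a quantified Boolean formula; existential nodes need one
    accepting branch, universal nodes need all branches accepting."""
    env = env or {}
    if not prefix:
        return matrix(env)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (qbf(rest, matrix, {**env, v: b}) for b in (False, True))
    return any(branches) if q == "E" else all(branches)

# Exists x . Forall y . (x or y) and (x or not y): one alternation, true via x = 1.
print(qbf([("E", "x"), ("A", "y")],
          lambda e: (e["x"] or e["y"]) and (e["x"] or not e["y"])))  # True
```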


2017
Vol 58
pp. 431-451
Author(s):
Gadi Aleksandrowicz
Hana Chockler
Joseph Y. Halpern
Alexander Ivrii

Halpern and Pearl introduced a definition of actual causality; Eiter and Lukasiewicz showed that computing whether X = x is a cause of Y = y is NP-complete in binary models (where all variables can take on only two values) and Σ^P_2-complete in general models. In the final version of their paper, Halpern and Pearl slightly modified the definition of actual cause, in order to deal with problems pointed out by Hopkins and Pearl. As we show, this modification has a nontrivial impact on the complexity of computing whether X = x is a cause of Y = y. To characterize the complexity, a new family D_k^P, k = 1, 2, 3, …, of complexity classes is introduced, which generalises the class DP introduced by Papadimitriou and Yannakakis (DP is just D_1^P). We show that the complexity of computing causality under the updated definition is D_2^P-complete. Chockler and Halpern extended the definition of causality by introducing notions of responsibility and blame, and characterized the complexity of determining the degree of responsibility and blame using the original definition of causality. Here, we completely characterize the complexity using the updated definition of causality. In contrast to the results on causality, we show that moving to the updated definition does not result in a difference in the complexity of computing responsibility and blame.
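A brute-force Python sketch of the contingent-counterfactual core of Halpern-Pearl causality, on the classic rock-throwing example; it freezes a contingency set at its actual values and is a simplification, since the original and updated definitions differ in further clauses not modelled here:

```python
from itertools import combinations

# Structural equations of the rock-throwing example (Suzy hits first).
EQS = {
    "SH": lambda v: v["ST"],                    # Suzy's rock hits iff she throws
    "BH": lambda v: v["BT"] and not v["SH"],    # Billy's hits only if Suzy's missed
    "BS": lambda v: v["SH"] or v["BH"],         # the bottle shatters
}
CONTEXT = {"ST": True, "BT": True}              # both throw

def solve(forced):
    """Evaluate the (acyclic) model, respecting any forced/frozen values."""
    vals, changed = dict(forced), True
    while changed:
        changed = False
        for var, f in EQS.items():
            if var not in vals:
                try:
                    vals[var], changed = f(vals), True
                except KeyError:                # a parent is not computed yet
                    pass
    return vals

def is_cause(x_var, effect):
    """Flip x_var; search for a contingency, frozen at actual values,
    under which the effect flips too."""
    actual = solve(CONTEXT)
    rest = [v for v in EQS if v not in (x_var, effect)]
    for k in range(len(rest) + 1):
        for w in combinations(rest, k):
            forced = {**CONTEXT, x_var: not actual[x_var],
                      **{v: actual[v] for v in w}}
            if solve(forced)[effect] != actual[effect]:
                return True
    return False

print(is_cause("ST", "BS"))  # True: freezing BH at False restores but-for dependence
print(is_cause("BT", "BS"))  # False: Billy's throw is preempted
```

The exhaustive search over contingency sets is what makes even this simplified test exponential, which is in keeping with the hardness results the abstract reports.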


2015
Vol 31 (2)
pp. 259-274
Author(s):
Ronen Gradwohl
Eran Shmaya

We propose to strengthen Popper’s notion of falsifiability by adding the requirement that when an observation is inconsistent with a theory, there must be a ‘short proof’ of this inconsistency. We model the concept of a short proof using tools from computational complexity, and provide some examples of economic theories that are falsifiable in the usual sense but not with this additional requirement. We consider several variants of the definition of ‘short proof’ and several assumptions about the difficulty of computation, and study their differing implications for the falsifiability of theories.
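The ‘short proof’ requirement admits a toy rendering in Python (an assumed example, not from the paper): the theory claims an observed sequence is non-decreasing, and a certificate of inconsistency is a single index that a constant-time verifier checks:

```python
def verify_inconsistency(observation, certificate):
    """O(1) check that `certificate` really refutes the theory on this data."""
    i = certificate
    return 0 <= i < len(observation) - 1 and observation[i] > observation[i + 1]

obs = [1, 3, 2, 5]
print(verify_inconsistency(obs, 1))   # True: obs[1] > obs[2] refutes the theory
print(verify_inconsistency(obs, 0))   # False: this index proves nothing
```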


HOW
2020
pp. 107-124
Author(s):
Zhila Gharaveisi
Adel Dastgoshadeh

This study aims at exploring L2 researchers’ perspectives on research ethics in Iran. A total of ten teacher researchers were selected from a larger group of researchers based on the criteria of academic degree and familiarity with research principles. They were interviewed about different aspects of research ethics, and their responses were audio-recorded and transcribed by the researcher. The themes that emerged from the responses showed plagiarism, data management, participant rights, and authorship rights to be the topics most frequently discussed by the respondents. Furthermore, the extent of the participants’ self-expressed adherence to ethical considerations in research varied, ranging from minimal adherence to an acceptable degree of adherence and commitment to research ethics. In addition, the results showed that not all participants had a clear understanding and definition of the four major themes that emerged from the results.

