Nonderived environment blocking and input-oriented computation

2021 ◽  
Vol 3 (2) ◽  
pp. 129-153
Author(s):  
Jane Chandlee

Abstract: This paper presents a computational account of nonderived environment blocking (NDEB), arguing that the challenges NDEB has posed for phonological theory do not stem from any inherent complexity of the patterns themselves. Specifically, it makes use of input strictly local (ISL) functions, which are among the most restrictive (i.e., lowest computational complexity) classes of functions in the subregular hierarchy (Heinz 2018), and shows that NDEB is ISL provided the derived and nonderived environments correspond to unique substrings in the input structure. Using three classic examples of NDEB from Finnish, Polish, and Turkish, it is shown that the distinction between derived and nonderived sequences is fully determined by the input structure and can be drawn without serial derivation or intermediate representations. This result reveals that such cases of NDEB are computationally unexceptional and lends support to proposals in rule- and constraint-based theories that make use of its input-oriented nature.
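To make the ISL idea concrete, consider Finnish-style assibilation (/t/ becomes [s] before /i/, but only across a morpheme boundary): if the boundary is part of the input string, the derived environment "t+i" and the nonderived environment "ti" are distinct substrings, so a function reading a fixed-width window of the input can rewrite one and ignore the other. Below is a minimal sketch, not from the paper; the boundary symbol '+', the window size, and the toy forms are assumptions for illustration:

```python
def isl_assibilation(s: str) -> str:
    """Sketch of an input strictly local (ISL) map: /t/ -> [s] before /i/,
    but only across a morpheme boundary '+'. Each output symbol depends
    only on a fixed-width window of the INPUT (here, 3 symbols), which is
    the defining property of ISL functions; no serial derivation or
    intermediate representation is involved."""
    out = []
    for k, ch in enumerate(s):
        # Derived environment: 't' immediately followed by '+i' in the input.
        if ch == 't' and s[k + 1:k + 3] == '+i':
            out.append('s')
        else:
            out.append(ch)
    return ''.join(out)

# The derived t+i is rewritten; the morpheme-internal (nonderived) 'ti' in
# the root is untouched, i.e., the blocking falls out of input substrings.
assert isl_assibilation('tilat+i') == 'tilas+i'
```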

1987 ◽  
Vol 10 (1) ◽  
pp. 1-33
Author(s):  
Egon Börger ◽  
Ulrich Löwen

We survey and give new results on logical characterizations of complexity classes in terms of the computational complexity of the decision problems of various classes of logical formulas. There are two main approaches to obtaining such results. The first yields logical descriptions of complexity classes through semantic restrictions (e.g., to finite structures) together with syntactic enrichment of the logic by new expressive means (such as fixed-point operators). The second characterizes complexity classes by (the decision problems of) classes of formulas determined by purely syntactic restrictions on the formation of formulas.
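A textbook instance of the first approach, added here for orientation: by Fagin's theorem, existential second-order logic over finite structures captures NP; for example, 3-colorability of a finite graph (V, E) is defined by the sentence

```latex
% 3-colorability as an existential second-order sentence (Fagin's theorem:
% existential second-order logic over finite structures = NP).
\exists R\,\exists G\,\exists B\;\Bigl(
  \forall x\,\bigl(R(x)\lor G(x)\lor B(x)\bigr)\;\land\;
  \forall x\,\forall y\,\Bigl(E(x,y)\rightarrow
    \neg\bigl(R(x)\land R(y)\bigr)\land
    \neg\bigl(G(x)\land G(y)\bigr)\land
    \neg\bigl(B(x)\land B(y)\bigr)\Bigr)\Bigr)
```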


1998 ◽  
Vol 9 ◽  
pp. 1-36 ◽  
Author(s):  
M. L. Littman ◽  
J. Goldsmith ◽  
M. Mundhenk

We examine the computational complexity of testing and finding small plans in probabilistic planning domains with both flat and propositional representations. The complexity of plan evaluation and existence varies with the plan type sought; we examine totally ordered plans, acyclic plans, and looping plans, as well as partially ordered plans under three natural definitions of plan value. We show that problems of interest are complete for a variety of complexity classes: PL, P, NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. In the process of proving that certain planning problems are complete for NP^PP, we introduce a new basic NP^PP-complete problem, E-MAJSAT, which generalizes the standard Boolean satisfiability problem to computations involving probabilistic quantities; our results suggest that the development of good heuristics for E-MAJSAT could be important for the creation of efficient algorithms for a wide variety of problems.
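For concreteness, E-MAJSAT can be stated as: given a Boolean formula over choice variables x and chance variables y, is there an assignment to x under which a majority of assignments to y satisfy the formula? A brute-force sketch follows (illustrative only; the exact threshold convention for "majority" varies by presentation, and the double enumeration is exponential by design, consistent with NP^PP-completeness):

```python
from itertools import product

def e_majsat(phi, n_choice: int, n_chance: int) -> bool:
    """Brute-force E-MAJSAT: existentially guess the choice bits (the NP
    part), then count satisfying chance assignments (the PP part).
    `phi` is any predicate over (choice_bits, chance_bits)."""
    for choice in product((0, 1), repeat=n_choice):
        satisfied = sum(
            phi(choice, chance) for chance in product((0, 1), repeat=n_chance)
        )
        if 2 * satisfied > 2 ** n_chance:  # strict majority
            return True
    return False

# phi = x1 AND (y1 OR y2): setting x1 = 1 satisfies phi for 3 of the 4
# chance assignments, a majority, so this instance is a yes.
assert e_majsat(lambda x, y: bool(x[0] and (y[0] or y[1])), 1, 2)
```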


Algorithms ◽  
2020 ◽  
Vol 13 (5) ◽  
pp. 122
Author(s):  
Arne Meier

In this paper, we study the relationship of parameterized enumeration complexity classes defined by Creignou et al. (MFCS 2013). Specifically, we introduce two hierarchies (IncFPTa and CapIncFPTa) of enumeration complexity classes for incremental fpt-time in terms of exponent slices and show how they interleave. Furthermore, we define several parameterized function classes and, in particular, introduce the parameterized counterpart of TFNP, the class of nondeterministic multivalued functions with values that are polynomially verifiable and guaranteed to exist, known from Megiddo and Papadimitriou (TCS 1991). We show that the collapse of this class TF(para-NP), the restriction of the function variant of NP to total functions, to F(FPT), the function variant of FPT, is equivalent to OutputFPT coinciding with IncFPT. In addition, this collapse is shown to be equivalent to TFNP = FP, and also to P = NP ∩ coNP. Finally, we show that these collapses are equivalent to the collapse of IncP and OutputP in the classical setting. These results are the first direct connections of collapses in parameterized enumeration complexity to collapses in classical enumeration complexity, parameterized function complexity, classical function complexity, and computational complexity theory.
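Condensing the abstract's chain of collapse equivalences into one line (the notation is condensed here, not the paper's own display):

```latex
\mathrm{TF}(\text{para-}\mathrm{NP}) = \mathrm{F}(\mathrm{FPT})
\;\Longleftrightarrow\; \mathrm{OutputFPT} = \mathrm{IncFPT}
\;\Longleftrightarrow\; \mathrm{TFNP} = \mathrm{FP}
\;\Longleftrightarrow\; \mathrm{P} = \mathrm{NP} \cap \mathrm{coNP}
\;\Longleftrightarrow\; \mathrm{IncP} = \mathrm{OutputP}
```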


1995 ◽  
Vol 60 (2) ◽  
pp. 517-527 ◽  
Author(s):  
Martin Grohe

The notion of logical reducibility is derived from the idea of interpretations between theories. It was used by Lovász and Gács [LG77] and Immerman [Imm87] to give complete problems for certain complexity classes and hence to establish new connections between logical definability and computational complexity. However, the notion is also interesting in a purely logical context; for example, it is helpful for establishing nonexpressibility results.

We say that a class K of τ-structures is a complete problem for a logic L under ℓ-reductions if K is definable in L[τ] and if every class definable in L can be "translated" into K by ℓ-formulae (cf. §4).

We prove the following theorem:

1.1. Theorem. There are complete problems for partial fixed-point logic and for inductive fixed-point logic under quantifier-free reductions.

The main step of the proof is to establish a new normal form for fixed-point formulae (which might be of some interest in itself). To obtain this normal form we use theorems of Abiteboul and Vianu [AV91a] that show the equivalence between the fixed-point logics we consider and certain extensions of the database query language Datalog.

In [Dah87] Dahlhaus gave a complete problem for least fixed-point logic. Since least fixed-point logic equals inductive fixed-point logic by a well-known result of Gurevich and Shelah [GS86], this already proves one part of our theorem. However, our class gives a natural description of the fixed-point process of an inductive fixed-point formula and hence sheds light on quite different aspects of the logic than Dahlhaus's construction, which is strongly based on the features of least fixed-point formulae.
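For readers new to fixed-point logics, a standard example of such a definition (illustrative background, not taken from the paper): the transitive closure of the edge relation E of a graph, which is expressible with a least fixed point but not in first-order logic alone:

```latex
% Transitive closure via a least fixed point: R is built up in stages,
% starting from E and closing under composition with E.
\mathrm{TC}(x,y) \;\equiv\;
\Bigl[\mathrm{lfp}_{R,x,y}\;
  E(x,y) \,\lor\, \exists z\,\bigl(E(x,z)\land R(z,y)\bigr)
\Bigr](x,y)
```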


2021 ◽  
Vol 43 (suppl 1) ◽  
Author(s):  
Daniel Jost Brod

Recent years have seen a flurry of activity in the fields of quantum computing and quantum complexity theory, which aim to understand the computational capabilities of quantum systems by applying the toolbox of computational complexity theory. This paper explores the conceptually rich and technologically useful connection between the dynamics of free quantum particles and complexity theory. I review results on the computational power of two simple quantum systems, built out of noninteracting bosons (linear optics) or noninteracting fermions. These rudimentary quantum computers display radically different capabilities: while free fermions are easy to simulate on a classical computer, and therefore devoid of nontrivial computational power, a free-boson computer can perform tasks expected to be classically intractable. To build the argument for these results, I introduce concepts from computational complexity theory. I describe some complexity classes, starting with P and NP and building up to the less common #P and the polynomial hierarchy, and the relations between them. I identify how probabilities in free-bosonic and free-fermionic systems fit within this classification, which then underpins their difference in computational power. This paper is aimed at graduate or advanced undergraduate students with a physics background, hopefully serving as a soft introduction to this exciting and rapidly evolving field.
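The linear-algebra core of the boson/fermion contrast is worth seeing side by side: free-fermion amplitudes reduce to determinants, computable in polynomial time, while free-boson (linear-optics) amplitudes reduce to matrix permanents, which are #P-hard. A sketch of the two computations (Ryser's exponential-time formula for the permanent against cubic-time elimination for the determinant; illustrative code, not tied to the paper):

```python
from itertools import combinations

def permanent(a):
    """Permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2).
    No polynomial algorithm is known (the permanent is #P-hard); this is
    the hardness behind free-boson sampling."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

def determinant(a):
    """Determinant via Gaussian elimination with partial pivoting, O(n^3);
    this easiness is why free fermions are classically simulable."""
    n = len(a)
    m = [[float(x) for x in row] for row in a]
    det = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            det = -det
        det *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return det

assert permanent([[1, 2], [3, 4]]) == 10                # 1*4 + 2*3
assert abs(determinant([[1, 2], [3, 4]]) + 2.0) < 1e-9  # 1*4 - 2*3
```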


2017 ◽  
Vol 58 ◽  
pp. 431-451 ◽  
Author(s):  
Gadi Aleksandrowicz ◽  
Hana Chockler ◽  
Joseph Y. Halpern ◽  
Alexander Ivrii

Halpern and Pearl introduced a definition of actual causality; Eiter and Lukasiewicz showed that computing whether X = x is a cause of Y = y is NP-complete in binary models (where all variables can take on only two values) and Σ^P_2-complete in general models. In the final version of their paper, Halpern and Pearl slightly modified the definition of actual cause, in order to deal with problems pointed out by Hopkins and Pearl. As we show, this modification has a nontrivial impact on the complexity of computing whether X = x is a cause of Y = y. To characterize the complexity, a new family D_k^P, k = 1, 2, 3, ..., of complexity classes is introduced, which generalizes the class DP introduced by Papadimitriou and Yannakakis (DP is just D_1^P). We show that the complexity of computing causality under the updated definition is D_2^P-complete. Chockler and Halpern extended the definition of causality by introducing notions of responsibility and blame, and characterized the complexity of determining the degree of responsibility and blame using the original definition of causality. Here, we completely characterize the complexity using the updated definition of causality. In contrast to the results on causality, we show that moving to the updated definition does not result in a difference in the complexity of computing responsibility and blame.
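Spelled out, following the abstract's description (with D_1^P = DP; the general definition is my reading of the construction):

```latex
\mathrm{D}^{\mathrm{P}} \;=\; \bigl\{\, L_1 \cap L_2 \;:\; L_1 \in \mathrm{NP},\ L_2 \in \mathrm{coNP} \,\bigr\},
\qquad
\mathrm{D}_k^{\mathrm{P}} \;=\; \bigl\{\, L_1 \cap L_2 \;:\; L_1 \in \Sigma_k^{\mathrm{P}},\ L_2 \in \Pi_k^{\mathrm{P}} \,\bigr\}.
```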


2019 ◽  
Vol 29 (02) ◽  
pp. 245-262
Author(s):  
Olga Kharlampovich ◽  
Alina Vdovina

Agol, Hass and Thurston showed that the problem of determining a bound on the genus of a knot in a 3-manifold is NP-complete. This shows that (unless P = NP) the genus problem has high computational complexity even for knots in a 3-manifold. We initiate the study of classes of knots where the genus problem, and even the equivalence problem, have very low computational complexity. We show that the genus problem for alternating knots with n crossings has linear time complexity and is in Logspace(n). Alternating knots with some additional combinatorial structure will be referred to as standard. As expected, almost all alternating knots of a given genus are standard. We show that the genus problem for these knots belongs to the TC^0 circuit complexity class. We also show that the equivalence problem for such knots with n crossings has time complexity n log(n) and is in the Logspace(n) and TC^0 complexity classes.
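One way to see where a linear-time genus bound can come from (an illustrative sketch, not the authors' algorithm): by the Crowell–Murasugi theorem, Seifert's algorithm applied to an alternating diagram realizes the minimal genus, so for an alternating knot the genus is determined by two quantities countable in a single pass over the diagram:

```python
def alternating_knot_genus(crossings: int, seifert_circles: int) -> int:
    """Genus of an alternating knot from an alternating diagram, using
    g = (c - s + 1) / 2 (Crowell-Murasugi: Seifert's algorithm on an
    alternating diagram realizes the minimal genus). Counting crossings c
    and Seifert circles s is one pass over the diagram, hence the
    linear-time flavor of the genus problem for alternating knots."""
    g2 = crossings - seifert_circles + 1
    if g2 < 0 or g2 % 2 != 0:
        raise ValueError("counts do not come from a knot diagram")
    return g2 // 2

# Trefoil: the standard alternating diagram has 3 crossings and 2 Seifert
# circles, giving genus 1; the figure-eight knot (4 crossings, 3 circles)
# also has genus 1.
assert alternating_knot_genus(3, 2) == 1
assert alternating_knot_genus(4, 3) == 1
```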


2001 ◽  
Vol 11 (1) ◽  
pp. 1-1
Author(s):  
Daniel Leivant ◽  
Bob Constable

This issue of the Journal of Functional Programming is dedicated to work presented at the Workshop on Implicit Computational Complexity in Programming Languages, affiliated with the 1998 meeting of the International Conference on Functional Programming in Baltimore.

Several machine-independent approaches to computational complexity have been developed in recent years; they establish a correspondence linking computational complexity to conceptual and structural measures of complexity of declarative programs and of formulas, proofs and models of formal theories. Examples include descriptive complexity of finite models, restrictions on induction in arithmetic and related first order theories, complexity of set-existence principles in higher order logic, and specifications in linear logic. We refer to these approaches collectively as Implicit Computational Complexity. This line of research provides a framework for a streamlined incorporation of computational complexity into areas such as formal methods in software development, programming language theory, and database theory.

A fruitful thread in implicit computational complexity is based on exploring the computational complexity consequences of introducing various syntactic control mechanisms in functional programming, including restrictions (akin to static typing) on scoping, data re-use (via linear modalities), and iteration (via ramification of data). These forms of control, separately and in combination, can certify bounds on the time and space resources used by programs. In fact, all results in this area establish that each restriction considered yields precisely a major computational complexity class. The complexity classes thus obtained range from very restricted ones, such as NC and Alternating logarithmic time, through the central classes Poly-Time and Poly-Space, to broad classes such as the Elementary and the Primitive Recursive functions.

Considerable effort has been invested in recent years to relax as much as possible the structural restrictions considered, allowing for more flexible programming and proof styles, while still guaranteeing the same resource bounds. Notably, more flexible control forms have been developed for certifying that functional programs execute in Poly-Time.

The 1998 workshop covered both the theoretical foundations of the field and steps toward using its results in various implemented systems, for example in controlling the computational complexity of programs extracted from constructive proofs. The five papers included in this issue nicely represent this dual concern of theory and practice. As they are going to print, we should note that the field of Implicit Computational Complexity continues to thrive: successful workshops dedicated to it were affiliated with both the LICS'99 and LICS'00 conferences. Special issues, of Information and Computation dedicated to the former, and of Theoretical Computer Science to the latter, are in preparation.
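As a toy illustration of the "ramification of data" idea (a sketch in the spirit of Bellantoni–Cook safe recursion, not drawn from the papers in this issue): arguments are split into normal and safe positions, recursion is permitted only on normal arguments, and recursive results may flow only into safe positions, which caps the growth of any definable function at polynomial:

```python
def pad(x: str, w: str = "") -> str:
    """pad(x ; w), written in a safe-recursion discipline: x is the
    NORMAL argument (the only one recursed on) and w is the SAFE argument
    (where the recursive result accumulates). Output length is 2|x| + |w|,
    i.e., polynomial. A function producing 2^|x| bits is not definable in
    this discipline, since it would require recursing on a safe value,
    which is exactly what ramification forbids."""
    if not x:
        return w
    return pad(x[1:], w + "00")  # recursive result used only in a safe slot

assert pad("101") == "000000"  # output has 2 * |x| bits
```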


1995 ◽  
Vol 60 (1) ◽  
pp. 103-121 ◽  
Author(s):  
Aleksandar Ignjatović

Abstract: In this paper we characterize the well-known computational complexity classes of the polynomial time hierarchy as classes of provably recursive functions (with graphs of suitable bounded complexity) of some second order theories with weak comprehension axiom schemas but without any induction schemas (Theorem 6). We also find a natural relationship between our theories and the theories of bounded arithmetic (Lemmas 4 and 5). Our proofs use a technique which enables us to "speed up" induction without increasing the bounded complexity of the induction formulas. This technique is also used to obtain an interpretability result for the theories of bounded arithmetic (Theorem 4).
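For orientation, the classical bounded-arithmetic benchmark against which such characterizations are usually measured (Buss's theorem, stated here as background rather than as a claim of the paper): for each i ≥ 1,

```latex
% Buss: the \Sigma^b_i-definable functions of S^i_2 are exactly the
% functions computable in polynomial time with a \Sigma^p_{i-1} oracle.
f \text{ is } \Sigma^b_i\text{-definable in } S^i_2
\;\Longleftrightarrow\;
f \in \mathrm{FP}^{\Sigma^{p}_{i-1}}
```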

