Succinctness of Descriptions of Context-Free, Regular, and Finite Languages

1978 ◽  
Vol 7 (84) ◽  
Author(s):  
Erik Meineche Schmidt

This thesis analyzes the descriptional power of finite automata, regular expressions, pushdown automata, and certain generalized models of macro grammars. For finite automata and pushdown automata the emphasis is on ambiguity. It is shown that ambiguous nondeterminism allows more succinct definitions than unambiguous nondeterminism, which in turn allows more succinct definitions than determinism. The succinctness gain is nonrecursive for pushdown automata and nonpolynomial for finite automata.

The succinctness of regular expressions and macro grammars is measured in terms of complexity theory. It is shown that the inequivalence problem for OI macro grammars generating finite languages is hard for nondeterministic double exponential time, and that the "nonemptiness of complement" problem for unambiguous regular expressions is in NP. This implies that unambiguous regular expressions are "easier" than general regular expressions (unless NP equals PSPACE).
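
The nonpolynomial gap between nondeterministic and deterministic finite automata can be made concrete with a standard textbook witness (not taken from the thesis): the language of words whose n-th symbol from the right is 1. The sketch below builds the (n+1)-state NFA and counts the states produced by the subset construction.

```python
# A standard textbook illustration (not taken from the thesis) of the
# deterministic-vs-nondeterministic succinctness gap for finite automata:
# L_n = { w in {0,1}* : the n-th symbol from the right is 1 } is accepted by
# an NFA with n + 1 states, while every DFA for it needs 2**n states.

def nfa_for(n):
    """Transitions of an (n+1)-state NFA for L_n.
    State 0 is initial and loops; on a 1 it may guess that this symbol is the
    n-th from the end; states 1..n then just count the remaining symbols."""
    delta = {(0, '0'): {0}, (0, '1'): {0, 1}}
    for i in range(1, n):
        delta[(i, '0')] = {i + 1}
        delta[(i, '1')] = {i + 1}
    return delta

def reachable_subsets(delta, start):
    """Subset construction: return all subsets reachable from the start set."""
    start = frozenset(start)
    seen, todo = {start}, [start]
    while todo:
        subset = todo.pop()
        for symbol in '01':
            nxt = frozenset(q for s in subset for q in delta.get((s, symbol), ()))
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

for n in range(1, 7):
    dfa_states = len(reachable_subsets(nfa_for(n), {0}))
    print(f"n={n}: NFA states {n + 1}, reachable DFA states {dfa_states} (= 2**{n})")
```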

Author(s):  
Bruno Guillon ◽  
Giovanni Pighizzini ◽  
Luca Prigioniero

Non-self-embedding grammars are a restriction of context-free grammars which cannot describe recursive structures and, hence, characterize only the class of regular languages. A double exponential size gap from non-self-embedding grammars to deterministic finite automata is known. The same size gap is also known from constant-height pushdown automata and 1-limited automata to deterministic finite automata. Constant-height pushdown automata and 1-limited automata are compared with non-self-embedding grammars. It is proved that non-self-embedding grammars and constant-height pushdown automata are polynomially related in size. Furthermore, a polynomial-size simulation by 1-limited automata is presented. However, the converse transformation is proved to have an exponential cost. Finally, a different simulation shows that the conversion of deterministic constant-height pushdown automata into deterministic 1-limited automata also has polynomial cost.
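
A rough counting argument (a standard observation, not the paper's construction, and possibly looser than its actual bounds) explains why a constant stack height keeps such pushdown automata within the regular languages and why the blow-up to finite automata is at most exponential:

```latex
% Hedged back-of-the-envelope count: a pushdown automaton with state set Q,
% stack alphabet \Gamma and stack height bounded by a constant h has only
% finitely many configurations, so each configuration can serve as a state
% of an equivalent NFA.
\[
  \#\text{NFA states} \;\le\; |Q| \cdot \sum_{i=0}^{h} |\Gamma|^{i}
  \;=\; |Q| \cdot \frac{|\Gamma|^{h+1}-1}{|\Gamma|-1},
\]
% i.e. exponential in the size of the constant-height PDA; a further subset
% construction then yields a DFA of double exponential size, matching the
% gap mentioned in the abstract.
```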


2014 ◽  
Vol 25 (07) ◽  
pp. 897-916 ◽  
Author(s):  
GIOVANNI PIGHIZZINI ◽  
ANDREA PISONI

Limited automata are one-tape Turing machines that are allowed to rewrite the content of any tape cell only during the first d visits, for a fixed constant d. In the case d = 1, namely, when rewriting is possible only during the first visit to a cell, these models have the same power as finite state automata. We prove state upper and lower bounds for the conversion of 1-limited automata into finite state automata. In particular, we prove a double exponential state gap between nondeterministic 1-limited automata and one-way deterministic finite automata. The gap reduces to a single exponential in the case of deterministic 1-limited automata. This also implies an exponential state gap between nondeterministic and deterministic 1-limited automata. Another consequence is that 1-limited automata can have fewer states than equivalent two-way nondeterministic finite automata. We show that this is true even in the case of a one-letter input alphabet. For each d ≥ 2, d-limited automata are known to characterize the class of context-free languages. Using the Chomsky-Schützenberger representation for context-free languages, we present a new conversion from context-free languages into 2-limited automata.
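
The defining restriction is easy to picture as tape bookkeeping. The minimal sketch below (not from the paper; the transition function and acceptance are left abstract) only enforces the "rewrite during the first d visits" rule; with d = 1 a cell can be changed only on its very first visit, which is the case the paper relates to finite automata.

```python
# A minimal sketch of the d-limited restriction: a one-tape machine may
# overwrite a cell only during its first d visits. Only the bookkeeping that
# enforces the restriction is modeled here.

class LimitedTape:
    def __init__(self, word, d):
        self.cells = list(word)        # tape contents
        self.visits = [0] * len(word)  # how often each cell has been visited
        self.d = d

    def read(self, pos):
        """Visiting a cell increments its visit counter."""
        self.visits[pos] += 1
        return self.cells[pos]

    def write(self, pos, symbol):
        """Rewriting is legal only while the cell is within its first d visits."""
        if self.visits[pos] > self.d:
            raise ValueError("cell %d is frozen after %d visits" % (pos, self.d))
        self.cells[pos] = symbol
```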


1991 ◽  
Vol 02 (03) ◽  
pp. 183-205 ◽  
Author(s):  
Dung T. Huynh

In this paper, we investigate the complexity of computing the detector, constructor, and lexicographic constructor functions for a given language. The following classes of languages are considered: (1) context-free languages, (2) regular sets, (3) languages accepted by one-way nondeterministic auxiliary pushdown automata, (4) languages accepted by one-way nondeterministic logspace-bounded Turing machines, (5) two-way deterministic pushdown automaton languages, (6) languages accepted by uniform families of constant-depth polynomial-size Boolean circuits, and (7) languages accepted by multihead finite automata. We show that for the classes (1)–(4) efficient detectors, constructors, and lexicographic constructors exist, whereas for (5)–(7) polynomial-time computable detectors, constructors, and lexicographic constructors exist iff there are no sparse sets in NP − P (or, equivalently, E = NE). Our results provide a sharp boundary between classes of languages that have efficient detectors and constructors and classes of languages for which efficient detectors and constructors do not appear to exist.
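
For the regular sets (class (2)) the positive result is plausible from a simple dynamic program. The sketch below assumes the usual reading of the terms (detector: is there a word of a given length in L?; lexicographic constructor: output the lexicographically least such word); the function names and the toy DFA are illustrative, not taken from the paper.

```python
# Hedged sketch: an efficient lexicographic constructor for a language given
# by a (total) DFA, under the assumed definitions stated above.

def lex_constructor(states, delta, start, accepting, alphabet, n):
    """Return the lexicographically smallest word of length n accepted by the
    DFA (states, delta, start, accepting), or None if no such word exists."""
    # alive[k] = states from which some word of length exactly k is accepted
    alive = [set(accepting)]
    for _ in range(n):
        prev = alive[-1]
        alive.append({q for q in states
                      if any(delta[q][a] in prev for a in alphabet)})
    if start not in alive[n]:
        return None                       # detector answer: no word of length n
    word, q = [], start
    for k in range(n, 0, -1):             # greedily pick the smallest viable symbol
        for a in sorted(alphabet):
            if delta[q][a] in alive[k - 1]:
                word.append(a)
                q = delta[q][a]
                break
    return ''.join(word)

# Example: words over {a, b} with an even number of b's.
states = {0, 1}
delta = {0: {'a': 0, 'b': 1}, 1: {'a': 1, 'b': 0}}
print(lex_constructor(states, delta, 0, {0}, {'a', 'b'}, 3))  # -> 'aaa'
```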


2016 ◽  
Vol 27 (02) ◽  
pp. 187-214 ◽  
Author(s):  
Martin Kutrib ◽  
Andreas Malcher ◽  
Matthias Wendlandt

We consider the model of deterministic set automata, which are basically deterministic finite automata equipped with a set as an additional storage medium. The basic operations on the set are the insertion of elements, the removal of elements, and the test whether an element is in the set. We investigate the computational power of deterministic set automata and compare the accepted language class with the context-free languages and with classes of languages accepted by queue automata. As a result, incomparability with all classes considered is obtained. Furthermore, we examine the closure properties under several operations. Then we show that deterministic set automata may be an interesting model from a practical point of view by proving that their regularity problem as well as the problems of emptiness, finiteness, infiniteness, and universality are decidable. Finally, the descriptional complexity of deterministic and nondeterministic set automata is investigated. A conversion procedure that turns a deterministic set automaton accepting a regular language into a deterministic finite automaton is developed, which leads to a double exponential upper bound. This bound is proved to be tight in the order of magnitude by also presenting a double exponential lower bound. In contrast to these recursive bounds, we obtain non-recursive trade-offs when nondeterministic set automata are considered.
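
The flavor of the storage operations named in the abstract (insert and membership test; removal is not exercised here) can be seen in a one-pass "declare before use" check. This is only an illustration of what a set storage buys over a stack or a queue, not the paper's formal model or acceptance mechanics.

```python
# Hedged illustration of the set-storage discipline: insert elements before a
# separator, test membership after it, all in a single left-to-right pass.

def declarations_before_uses(tokens):
    """tokens: sequence of strings containing a single '#' separator.
    Accept iff every token after '#' was inserted before '#'."""
    storage = set()
    seen_separator = False
    for t in tokens:
        if t == '#':
            seen_separator = True
        elif not seen_separator:
            storage.add(t)          # insert operation
        elif t not in storage:      # membership test
            return False
    return seen_separator

print(declarations_before_uses(['x', 'y', '#', 'y', 'x', 'y']))  # True
print(declarations_before_uses(['x', '#', 'y']))                 # False
```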


2009 ◽  
Vol 20 (04) ◽  
pp. 629-645 ◽  
Author(s):  
GIOVANNI PIGHIZZINI

The simulation of deterministic pushdown automata defined over a one-letter alphabet by finite state automata is investigated from a descriptional complexity point of view. We show that each unary deterministic pushdown automaton of size s can be simulated by a deterministic finite automaton with a number of states that is exponential in s. We prove that this simulation is tight. Furthermore, its cost cannot be reduced even if it is performed by a two-way nondeterministic automaton. We also prove that there are unary languages for which deterministic pushdown automata cannot be exponentially more succinct than finite automata. In order to establish this result, we investigate the conversion of deterministic pushdown automata into context-free grammars. We prove that in the unary case the number of variables in the resulting grammar is strictly smaller than the number of variables needed in the case of nonunary alphabets.
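
A toy illustration (not the paper's witness languages or bounds) of how a small unary pushdown device can force an exponentially larger finite automaton: a pushdown used as an n-bit binary counter, with O(n) control, accepts the unary language of multiples of 2**n, whose minimal DFA is a single cycle of 2**n states.

```python
# Hedged toy example: an n-bit counter kept on a stack (LSB on top) is
# incremented once per input symbol a; the word is accepted iff the counter
# returns to 0, i.e. iff its length is a multiple of 2**n.

def accepts(m, n):
    stack = [0] * n                  # counter initialized to 0
    for _ in range(m):               # one increment per input symbol
        carried = 0
        while carried < n and stack[-1] == 1:   # pop trailing 1s
            stack.pop()
            carried += 1
        if carried < n:
            stack.pop()
            stack.append(1)          # flip the first 0 to 1
        stack.extend([0] * carried)  # push the popped positions back as 0s
    return all(bit == 0 for bit in stack)

n = 4
print([m for m in range(40) if accepts(m, n)])  # multiples of 16: [0, 16, 32]
```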


Author(s):  
Holger Bock Axelsen ◽  
Martin Kutrib ◽  
Andreas Malcher ◽  
Matthias Wendlandt

It is well known that reversible finite automata do not accept all regular languages, that reversible pushdown automata do not accept all deterministic context-free languages, and that reversible queue automata are less powerful than deterministic real-time queue automata. It is of significant interest from both a practical and a theoretical point of view to close these gaps. Here we extend these reversible models with a preprocessing unit, which is basically a reversible, injective, and length-preserving finite state transducer. It turns out that preprocessing the input using such weak devices increases the computational power of reversible deterministic finite automata to the acceptance of all regular languages, whereas for reversible pushdown automata the accepted family of languages lies strictly between the reversible deterministic context-free languages and the real-time deterministic context-free languages. For reversible queue automata the preprocessing of the input leads to machines that are stronger than real-time reversible queue automata, but less powerful than real-time deterministic (irreversible) queue automata. Moreover, it is shown that the computational power of all three types of machines is not changed by allowing the preprocessing finite state transducer to work irreversibly. Finally, we examine the closure properties of the family of languages accepted by such machines.
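
What an injective, length-preserving finite state transduction looks like can be seen from a two-state example (XOR of each bit with its predecessor). This is only an illustration of the kind of preprocessing unit described above; it is not a transducer used in the paper, and no claim is made here about its reversibility properties.

```python
# Hedged sketch of an injective, length-preserving transduction computed by a
# two-state deterministic finite state transducer.

def differentiate(word):
    """Output each bit XORed with the previous input bit (state = last bit)."""
    state, out = 0, []
    for c in word:
        b = int(c)
        out.append(str(b ^ state))   # emitted symbol
        state = b                    # next state = last input bit
    return ''.join(out)

def integrate(word):
    """Inverse mapping, witnessing injectivity of the transduction."""
    acc, out = 0, []
    for c in word:
        acc ^= int(c)
        out.append(str(acc))
    return ''.join(out)

w = '0110101'
assert integrate(differentiate(w)) == w and len(differentiate(w)) == len(w)
print(differentiate(w))  # '0101111'
```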


2021 ◽  
Vol 180 (1-2) ◽  
pp. 103-122
Author(s):  
Giovanni Pighizzini ◽  
Luca Prigioniero

Non-self-embedding grammars are a subclass of context-free grammars which only generate regular languages. The size costs of the conversion of non-self-embedding grammars into equivalent finite automata are studied by proving optimal bounds for the number of states of nondeterministic and deterministic automata equivalent to given non-self-embedding grammars. In particular, each non-self-embedding grammar of size s can be converted into an equivalent nondeterministic automaton whose size is exponential in s and into an equivalent deterministic automaton whose size is double exponential in s. These costs are shown to be optimal. Moreover, they do not change if the larger class of quasi-non-self-embedding grammars, which still generate only regular languages, is considered. In the case of letter-bounded languages, the cost of the conversion of non-self-embedding grammars and quasi-non-self-embedding grammars into deterministic automata reduces to an exponential of a polynomial in s.
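
The property being excluded has a textbook illustration (not taken from the paper): a variable is self-embedded if it derives a copy of itself with nonempty context on both sides, and it is exactly this pattern that enables nonregular languages.

```latex
% A variable A is self-embedded if A \Rightarrow^{*} uAv with u and v both
% nonempty. The left grammar is non-self-embedding and generates the regular
% language a^{*}b; the right grammar is self-embedding and generates the
% nonregular language \{a^{n}b^{n} : n \ge 0\}.
\[
  S \rightarrow aS \mid b
  \qquad\text{versus}\qquad
  S \rightarrow aSb \mid \varepsilon
\]
```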


2021 ◽  
Vol 58 (4) ◽  
pp. 335-356
Author(s):  
Sebastian Jakobi ◽  
Katja Meckel ◽  
Carlo Mereghetti ◽  
Beatrice Palano

We consider the notion of a constant length queue automaton—i.e., a traditional queue automaton with a built-in constant limit on the length of its queue—as a formalism for representing regular languages. We show that the descriptional power of constant length queue automata greatly outperforms that of traditional finite state automata, of constant height pushdown automata, and of straight line programs for regular expressions, by providing optimal exponential and double-exponential size gaps. Moreover, we prove that constant height pushdown automata can be simulated by constant length queue automata paying only a linear size increase, and that removing nondeterminism in constant length queue automata requires an optimal exponential size blow-up, compared with the optimal double-exponential cost of determinizing constant height pushdown automata. Finally, we investigate the size cost of implementing Boolean language operations on deterministic and nondeterministic constant length queue automata.
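
As with constant-height pushdown automata, the bounded queue keeps the configuration space finite, which is why such machines describe only regular languages and why the blow-up to finite automata is at most exponential. The sketch below only enumerates configurations under an assumed formalization (state set Q, queue alphabet G, queue length at most k); it does not reproduce the paper's exact definition or its tight bounds.

```python
# Hedged sketch: every configuration (state, queue content of length <= k) of a
# constant length queue automaton can serve as a state of a finite automaton.

from itertools import product

def configurations(states, queue_alphabet, k):
    """Enumerate all (state, queue content) pairs with |queue| <= k."""
    for q in states:
        for length in range(k + 1):
            for content in product(queue_alphabet, repeat=length):
                yield (q, content)

Q, G, k = {'p', 'q'}, {'a', 'b'}, 3
configs = list(configurations(Q, G, k))
print(len(configs))  # 2 * (1 + 2 + 4 + 8) = 30 finite-automaton states suffice
```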


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 329
Author(s):  
Tomoyuki Morimae ◽  
Suguru Tamaki

It is known that several sub-universal quantum computing models, such as the IQP model, the Boson sampling model, the one-clean qubit model, and the random circuit model, cannot be classically simulated in polynomial time under certain conjectures in classical complexity theory. Recently, these results have been improved to "fine-grained" versions where even exponential-time classical simulations are excluded assuming certain classical fine-grained complexity conjectures. All these fine-grained results are, however, about the hardness of strong simulations or multiplicative-error sampling. It was open whether any fine-grained quantum supremacy result could be shown for a more realistic setup, namely, additive-error sampling. In this paper, we show additive-error fine-grained quantum supremacy (under certain complexity assumptions). As examples, we consider the IQP model, a mixture of the IQP model and log-depth Boolean circuits, and Clifford+T circuits. Similar results should hold for other sub-universal models.


2010 ◽  
Vol 7 (2) ◽  
pp. 331-357 ◽  
Author(s):  
Tomás Flouri ◽  
Jan Janousek ◽  
Bořivoj Melichar

Subtree matching is an important problem in Computer Science on which a number of tasks, such as mechanical theorem proving, term rewriting, symbolic computation, and nonprocedural programming languages, are based. A systematic approach to the construction of subtree pattern matchers by deterministic pushdown automata, which read subject trees in prefix and postfix notation, is presented. The method is analogous to the construction of string pattern matchers: for a given pattern, a nondeterministic pushdown automaton is created and then determinised. In addition, it is shown that the size of the resulting deterministic pushdown automata directly corresponds to the size of the existing string pattern matchers based on finite automata.
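
The key observation behind reading trees in prefix (preorder) notation is that every subtree of the subject then appears as a contiguous substring, so exact subtree matching reduces to string matching over the linearized tree. The naive search below only illustrates this encoding; it is not the determinised pushdown matcher constructed in the paper.

```python
# Hedged illustration: linearize trees in prefix notation and find exact
# subtree occurrences by plain substring comparison on the encodings.

def prefix_notation(tree):
    """tree = (label, [children]); return the preorder list of labels."""
    label, children = tree
    out = [label]
    for child in children:
        out.extend(prefix_notation(child))
    return out

def subtree_occurrences(subject, pattern):
    """Positions in the subject's prefix notation where the pattern occurs."""
    s, p = prefix_notation(subject), prefix_notation(pattern)
    return [i for i in range(len(s) - len(p) + 1) if s[i:i + len(p)] == p]

# subject: f(g(a, a), a); pattern: g(a, a)
subject = ('f', [('g', [('a', []), ('a', [])]), ('a', [])])
pattern = ('g', [('a', []), ('a', [])])
print(subtree_occurrences(subject, pattern))  # [1]
```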

