Investigations on Automata and Languages Over a Unary Alphabet

2015 ◽  
Vol 26 (07) ◽  
pp. 827-850 ◽  
Author(s):  
Giovanni Pighizzini

The investigation of automata and languages defined over a one-letter alphabet shows interesting differences with respect to the case of alphabets with at least two letters. Probably the oldest example emphasizing one of these differences is the collapse of the classes of regular and context-free languages in the unary case (Ginsburg and Rice, 1962). Many differences have been proved concerning the state costs of the simulations between different variants of unary finite state automata (Chrobak, 1986; Mereghetti and Pighizzini, 2001). We present an overview of these results. Because of important connections with fundamental questions in space complexity, we give emphasis to unary two-way automata. Furthermore, we discuss unary versions of other computational models, such as probabilistic automata, one-way and two-way pushdown automata, even extended with auxiliary workspace, and multi-head automata.
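To make the Ginsburg-Rice collapse concrete: over a unary alphabet, whether a grammar generates a^n depends only on n, and the set of generated lengths is ultimately periodic, hence regular. The following minimal sketch (the toy grammar, the bound N, and all names are illustrative assumptions, not taken from the survey) computes the derivable lengths of a small unary grammar by a fixpoint iteration and exhibits this periodicity.

```python
# A minimal sketch, not taken from the survey: for a unary CFG the membership
# of a^n depends only on n, so the derivable lengths of each nonterminal can
# be computed by a fixpoint over lengths up to a bound N.  The toy grammar and
# the bound are illustrative assumptions.

N = 40

# Toy grammar over {a}: S -> S a a a | a, i.e. the lengths congruent to 1 mod 3.
grammar = {
    "S": [("S", "a", "a", "a"), ("a",)],
}

def lengths_of(prod, derivable):
    """All total lengths obtainable from one production, capped at N."""
    sums = {0}
    for sym in prod:
        options = {1} if sym == "a" else derivable[sym]
        sums = {s + o for s in sums for o in options if s + o <= N}
    return sums

derivable = {nt: set() for nt in grammar}
changed = True
while changed:
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            new = lengths_of(prod, derivable)
            if not new <= derivable[nt]:
                derivable[nt] |= new
                changed = True

print(sorted(derivable["S"]))   # [1, 4, 7, 10, ...]: ultimately periodic, hence regular
```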

2009 ◽  
Vol 20 (04) ◽  
pp. 629-645 ◽  
Author(s):  
GIOVANNI PIGHIZZINI

The simulation of deterministic pushdown automata defined over a one-letter alphabet by finite state automata is investigated from a descriptional complexity point of view. We show that each unary deterministic pushdown automaton of size s can be simulated by a deterministic finite automaton with a number of states that is exponential in s. We prove that this simulation is tight. Furthermore, its cost cannot be reduced even if it is performed by a two-way nondeterministic automaton. We also prove that there are unary languages for which deterministic pushdown automata cannot be exponentially more succinct than finite automata. In order to state this result, we investigate the conversion of deterministic pushdown automata into context-free grammars. We prove that in the unary case the number of variables in the resulting grammar is strictly smaller than the number of variables needed in the case of nonunary alphabets.
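A structural fact worth recalling here (standard for unary regular languages, not specific to this paper): a unary regular language is ultimately periodic, so its minimal DFA is a "lasso" consisting of a tail of t states followed by a cycle of p states; an exponential number of DFA states therefore means that the preperiod or the period of the accepted language is exponential in the size of the pushdown automaton. The sketch below, with purely illustrative names, builds such a lasso DFA from a preperiod, a period, and a membership predicate.

```python
# A minimal sketch, not from the paper: a unary regular language is ultimately
# periodic, so its minimal DFA is a "lasso" with a tail of `tail` states and a
# cycle of `period` states.  The helper below builds that DFA from a membership
# predicate; all names are illustrative.

def unary_lasso_dfa(tail, period, member):
    """State i is reached after reading a^i; states tail..tail+period-1 form the cycle.
    Assumes member(n) is ultimately periodic with this preperiod and period."""
    n_states = tail + period
    delta = {i: (i + 1 if i + 1 < n_states else tail) for i in range(n_states)}
    finals = {i for i in range(n_states) if member(i)}
    return delta, 0, finals          # transition map, start state, final states

# Example: multiples of 8 need a cycle of 8 states (tail 0, period 8).
delta, start, finals = unary_lasso_dfa(0, 8, lambda n: n % 8 == 0)
print(len(delta), sorted(finals))    # 8 [0]
```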


2014 ◽  
Vol 25 (07) ◽  
pp. 897-916 ◽  
Author(s):  
GIOVANNI PIGHIZZINI ◽  
ANDREA PISONI

Limited automata are one-tape Turing machines that are allowed to rewrite the content of any tape cell only during the first d visits, for a fixed constant d. In the case d = 1, namely when rewriting is possible only during the first visit to a cell, these models have the same power as finite state automata. We prove state upper and lower bounds for the conversion of 1-limited automata into finite state automata. In particular, we prove a double exponential state gap between nondeterministic 1-limited automata and one-way deterministic finite automata. The gap reduces to a single exponential in the case of deterministic 1-limited automata. This also implies an exponential state gap between nondeterministic and deterministic 1-limited automata. Another consequence is that 1-limited automata can have fewer states than equivalent two-way nondeterministic finite automata. We show that this is true even if we restrict to the case of a one-letter input alphabet. For each d ≥ 2, d-limited automata are known to characterize the class of context-free languages. Using the Chomsky-Schützenberger representation for context-free languages, we present a new conversion from context-free languages into 2-limited automata.


Author(s):  
Holger Bock Axelsen ◽  
Martin Kutrib ◽  
Andreas Malcher ◽  
Matthias Wendlandt

It is well known that reversible finite automata do not accept all regular languages, that reversible pushdown automata do not accept all deterministic context-free languages, and that reversible queue automata are less powerful than deterministic real-time queue automata. It is of significant interest from both a practical and theoretical point of view to close these gaps. We here extend these reversible models by a preprocessing unit which is basically a reversible injective and length-preserving finite state transducer. It turns out that preprocessing the input using such weak devices increases the computational power of reversible deterministic finite automata to the acceptance of all regular languages, whereas for reversible pushdown automata the accepted family of languages lies strictly in between the reversible deterministic context-free languages and the real-time deterministic context-free languages. For reversible queue automata the preprocessing of the input leads to machines that are stronger than real-time reversible queue automata, but less powerful than real-time deterministic (irreversible) queue automata. Moreover, it is shown that the computational power of all three types of machines is not changed by allowing the preprocessing finite state transducer to work irreversibly. Finally, we examine the closure properties of the family of languages accepted by such machines.
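As a small illustration of the underlying notion (under the common definition of reversibility for deterministic finite automata, namely that every input symbol induces an injective map on the states, so computations can be retraced backwards deterministically; the check and the example automata below are illustrative, not from the paper):

```python
# A minimal sketch, not from the paper: a DFA is reversible (under the common
# definition) when no two states reach the same state on the same input symbol,
# i.e., every letter induces a partial injection on the state set.  The example
# automata are illustrative assumptions.

def is_reversible(states, alphabet, delta):
    """delta: dict mapping (state, symbol) -> state (possibly partial)."""
    for a in alphabet:
        targets = [delta[q, a] for q in states if (q, a) in delta]
        if len(targets) != len(set(targets)):   # some state has two a-predecessors
            return False
    return True

# The two-state parity automaton for "even number of 1s" is reversible...
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(is_reversible({"even", "odd"}, {"0", "1"}, delta))        # True

# ...whereas an automaton in which two states enter the same sink on "a" is not.
delta2 = {("p", "a"): "sink", ("q", "a"): "sink", ("sink", "a"): "sink"}
print(is_reversible({"p", "q", "sink"}, {"a"}, delta2))         # False
```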


Author(s):  
H. BUNKE ◽  
J. DVORAK

In this paper, we discuss how rule-based expert system shells can be used in pattern recognition for the implementation of algorithms which are procedural rather than rule-based in nature, and which have traditionally been implemented in procedural programming languages. Particular examples include finite state automata, context-free parsing, string matching, graph matching and discrete relaxation.
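A finite state automaton, for instance, maps naturally onto condition-action rules: the working memory holds configurations (state, input position), and each transition becomes a rule that fires on a matching configuration and asserts its successor. The following minimal sketch of a forward-chaining encoding is purely illustrative; it is not the expert system shell or rule syntax used by the authors.

```python
# A minimal sketch, not from the paper: a finite automaton encoded as
# condition-action rules and executed by a tiny forward-chaining engine.
# The fact base holds assertions of the form (state, position); a rule fires
# when the current input symbol matches, asserting the successor configuration.

def run_rule_engine(rules, start_state, accepting, word):
    facts = {(start_state, 0)}                       # initial working memory
    agenda = [(start_state, 0)]
    while agenda:
        state, pos = agenda.pop()
        if pos < len(word):
            for (src, symbol, dst) in rules:         # match phase
                if src == state and symbol == word[pos]:
                    fact = (dst, pos + 1)            # act phase
                    if fact not in facts:
                        facts.add(fact)
                        agenda.append(fact)
    return any(state in accepting and pos == len(word) for state, pos in facts)

# Rules for a toy automaton accepting words over {0, 1} with an even number of 1s.
rules = [("even", "0", "even"), ("even", "1", "odd"),
         ("odd", "0", "odd"), ("odd", "1", "even")]

print(run_rule_engine(rules, "even", {"even"}, "10110"))   # False (odd number of 1s)
print(run_rule_engine(rules, "even", {"even"}, "1001"))    # True  (two 1s)
```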


2008 ◽  
Vol 19 (03) ◽  
pp. 581-595 ◽  
Author(s):  
YO-SUB HAN ◽  
KAI SALOMAA

We investigate the state complexity of union and intersection for finite languages. Note that the problem of obtaining tight bounds for both operations was open. First we compute upper bounds using structural properties of minimal deterministic finite-state automata for finite languages. Then, we show that the upper bounds are tight if we have a variable-sized alphabet that can depend on the size of the input deterministic finite-state automata. In addition, we prove that the upper bounds are unreachable for any fixed-sized alphabet.
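For context, the generic product construction gives the coarse m·n upper bound for both union and intersection of DFAs with m and n states; the tight bounds for finite languages established in the paper are finer and alphabet-dependent. A minimal sketch, with an illustrative DFA encoding:

```python
# A minimal sketch, not from the paper: the standard product construction,
# which yields the coarse m*n state upper bound for union and intersection.
from itertools import product

def product_dfa(d1, d2, combine):
    """d = (states, alphabet, delta, start, finals); combine picks the final pairs."""
    states1, sigma, delta1, s1, f1 = d1
    states2, _,     delta2, s2, f2 = d2
    states = set(product(states1, states2))
    delta = {((p, q), a): (delta1[p, a], delta2[q, a])
             for (p, q) in states for a in sigma}
    finals = {(p, q) for (p, q) in states if combine(p in f1, q in f2)}
    return states, sigma, delta, (s1, s2), finals

# union:        combine = lambda x, y: x or y
# intersection: combine = lambda x, y: x and y

# Example: DFAs over {a} for "even length" (2 states) and "length % 3 == 0"
# (3 states); their union automaton has 2 * 3 = 6 states.
even = ({0, 1}, {"a"}, {(0, "a"): 1, (1, "a"): 0}, 0, {0})
mod3 = ({0, 1, 2}, {"a"}, {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}, 0, {0})
states, _, _, _, finals = product_dfa(even, mod3, lambda x, y: x or y)
print(len(states), len(finals))   # 6 4
```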


1975 ◽  
Vol 4 (54) ◽  
Author(s):  
Marvin Solomon

This paper analyzes some of the consequences of allowing the definition of parameterized data types in programming languages. A typical use of such types is:

type queue(x) = struct (x, ref (queue(x))),
intqueue = queue(int).

It is shown that the addition of parameters permits the definition of new types not definable without parameters. In particular, the types definable with parameters are closely related to the deterministic context-free languages, whereas the author has previously shown that the types definable without parameters are characterized by the regular (i.e. finite state) languages.

An important consequence of this fact is that the type equivalence problem, which is easily solvable in the absence of parameters, becomes equivalent to the (currently open) equivalence problem for deterministic pushdown automata.
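To see where the pushdown-like behaviour comes from, one can symbolically unfold the abstract's own example: each unfolding of queue(x) re-applies the definition to its argument, so nested instantiations stack up. The sketch below uses an illustrative tuple representation of type expressions; it is not the formalism of the paper.

```python
# A minimal sketch, not from the paper: symbolically unfolding a parameterized
# type definition.  The tuple encoding of type expressions is an illustrative
# assumption; 'struct' and 'ref' are treated as plain constructors.

DEFS = {  # type name -> (formal parameter, body)
    "queue": ("x", ("struct", "x", ("ref", ("queue", "x")))),
}

def substitute(t, var, arg):
    if t == var:
        return arg
    if isinstance(t, str):
        return t
    return tuple(substitute(s, var, arg) for s in t)

def unfold(t, depth):
    """Expand applications of defined parameterized types up to a given depth."""
    if depth == 0 or isinstance(t, str):
        return t
    if t[0] in DEFS:                      # application of a defined type
        name, arg = t
        param, body = DEFS[name]
        return unfold(substitute(body, param, arg), depth - 1)
    return tuple(unfold(s, depth) for s in t)   # struct/ref constructors

print(unfold(("queue", "int"), 2))
# ('struct', 'int', ('ref', ('struct', 'int', ('ref', ('queue', 'int')))))
```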


Author(s):  
Ernesto Rodrigues ◽  
Heitor Silvério Lopes

Grammatical Inference (also known as grammar induction) is the problem of learning a grammar for a language from a set of examples. In a broad sense, some data is presented to the learner, which should return a grammar capable of explaining the input data to some extent. The grammar inferred from the data can then be used to classify unseen data or to provide some suitable model for it. The classical formalization of Grammatical Inference (GI) is known as Language Identification in the Limit (Gold, 1967). Here, there are a finite set S+ of strings known to belong to the language L (the positive examples) and another finite set S- of strings not belonging to L (the negative examples). The language L is said to be identifiable in the limit if there exists a procedure to find a grammar G such that S+ ⊆ L(G), S- ∩ L(G) = ∅ and, in the limit, for sufficiently large S+ and S-, L = L(G). The disjoint sets S+ and S- are given to provide clues for the inference of the production rules P of the unknown grammar G used to generate the language L. Grammatical inference has applications in fields as diverse as speech and natural language processing, gene analysis, pattern recognition, image processing, sequence prediction, information retrieval, cryptography, and many more. An excellent source for a state-of-the-art overview of the subject is (de la Higuera, 2005). Traditionally, most work in GI has focused on the inference of regular grammars, trying to induce finite-state automata, which can be learned efficiently. For context-free languages, some recent approaches have shown only limited success (Starckie, Costie & Zaanen, 2004), because the search space of possible grammars is infinite. The parenthesis and palindrome languages are common test cases for the effectiveness of grammatical inference methods. Both languages are context-free; the parenthesis language is deterministic, but the palindrome language is nondeterministic (de la Higuera, 2005).
The use of evolutionary methods for context-free grammatical inference is not new, but only a few attempts have been successful. Wyard (1991) used a Genetic Algorithm (GA) to infer grammars for the language of correctly balanced and nested parentheses with success, but failed on the language of sentences containing the same number of a's and b's (the a^n b^n language). In another attempt (Wyard, 1994), he obtained positive results on the inference of two classes of context-free grammars: the class of n-symbol palindromes with 2 ≤ n ≤ 4 and a class of small natural language grammars. Sen and Janakiraman (1992) applied a GA using a pushdown automaton and successfully learned the a^n b^n language and the parentheses-balancing problem, but their approach does not scale well. Huijsen (1994) applied a GA to infer context-free grammars for the parentheses-balancing problem, the language of equal numbers of a's and b's, and the even-length 2-symbol palindromes. Huijsen used a "marker-based" encoding scheme, which has the main advantage of allowing variable-length chromosomes. The inference of regular grammars was successful, but the inference of context-free grammars failed. The results obtained in these earlier attempts at applying GAs to context-free grammatical inference were thus limited. The first attempt to use Genetic Programming (GP) for grammatical inference used a pushdown automaton (Dunay, 1994) and successfully learned the parenthesis language, but failed for the a^n b^n language.
Korkmaz and Ucoluk (2001) also presented a GP approach using a prototype theory, which provides a way to recognize similarity between the grammars in the population. With this representation it is possible to recognize the so-called building blocks, but the results are preliminary. Javed and his colleagues (2004) proposed a GP approach with grammar-specific heuristic operators and non-random construction of the initial grammar population. Their approach succeeded in inducing small context-free grammars. More recently, Rodrigues and Lopes (2006) proposed a hybrid GP approach that uses a confusion matrix to compute the fitness. They also proposed a local search mechanism that uses information obtained from sentence parsing to generate a set of useful productions. The system was successfully applied to the parenthesis and palindrome languages.
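As an illustration of the kind of fitness evaluation such GA/GP systems rely on, the sketch below scores a candidate grammar in Chomsky normal form by how many positive examples it accepts and how many negative examples it rejects, using CKY for membership. The grammar encoding, the equal weighting, and the example grammar are assumptions of this sketch, not the confusion-matrix fitness or representation of Rodrigues and Lopes.

```python
# A minimal sketch, not taken from the cited systems: fitness of a candidate
# grammar in Chomsky normal form against positive examples S+ and negative
# examples S-, with CKY membership testing.  Encoding and weights are
# illustrative assumptions.

def cky_accepts(grammar, start, word):
    """grammar: dict nonterminal -> list of productions, each ("a",) or (B, C)."""
    n = len(word)
    if n == 0:
        return False
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][1] = {A for A, prods in grammar.items() if (ch,) in prods}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for A, prods in grammar.items():
                    for prod in prods:
                        if len(prod) == 2 and prod[0] in table[i][split] \
                           and prod[1] in table[i + split][span - split]:
                            table[i][span].add(A)
    return start in table[0][n]

def fitness(grammar, start, positives, negatives):
    good_pos = sum(cky_accepts(grammar, start, w) for w in positives)
    good_neg = sum(not cky_accepts(grammar, start, w) for w in negatives)
    return (good_pos + good_neg) / (len(positives) + len(negatives))

# Example: a candidate grammar for the a^n b^n language scores perfectly here.
g = {"S": [("A", "T"), ("A", "B")], "T": [("S", "B")],
     "A": [("a",)], "B": [("b",)]}
print(fitness(g, "S", ["ab", "aabb", "aaabbb"], ["", "a", "ba", "aab"]))  # 1.0
```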

