Efficient Construction of the Equation Automaton

Algorithms ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 238
Author(s):  
Faissal Ouardi ◽  
Zineb Lotfi ◽  
Bilal Elghadyry

This paper describes a fast algorithm for directly constructing the equation automaton from the well-known Thompson automaton associated with a regular expression. Allauzen and Mohri have presented a unified construction of small automata and gave a construction of the equation automaton with time and space complexity in O(m log m + m²), where m denotes the number of Thompson automaton transitions. It is based on two classical automata operations, namely epsilon-removal and Hopcroft’s algorithm for deterministic finite automaton (DFA) minimization. Using the notion of c-continuation, Ziadi et al. presented a fast computation of the equation automaton in O(m²) time. In this paper, we design an output-sensitive algorithm combining the advantages of the previous algorithms and show that its computational complexity can be reduced to O(m × |Q_{≡e}|), where |Q_{≡e}| denotes the number of states of the equation automaton, by means of epsilon-removal and Bubenzer’s minimization algorithm for acyclic deterministic finite automata (ADFA).
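
Both constructions above begin by eliminating epsilon-transitions from the Thompson automaton. As a rough illustration of that step, here is a minimal Python sketch of epsilon-removal, assuming an NFA encoded as dictionaries of transitions; the encoding and the names `eps_closure` and `remove_epsilon` are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of epsilon-removal, the first step shared by the
# constructions discussed above. Representation is an assumption.

def eps_closure(states, eps):
    """All states reachable from `states` via epsilon-moves alone."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return frozenset(seen)

def remove_epsilon(states, alphabet, delta, eps, start, finals):
    """Return an equivalent NFA without epsilon-transitions.

    delta: dict mapping (state, symbol) -> set of states
    eps:   dict mapping state -> set of states (epsilon moves)
    """
    new_delta, new_finals = {}, set()
    for q in states:
        cl = eps_closure({q}, eps)
        if cl & finals:                       # q reaches a final state by eps
            new_finals.add(q)
        for a in alphabet:
            targets = set()
            for p in cl:
                targets |= delta.get((p, a), set())
            if targets:
                new_delta[(q, a)] = eps_closure(targets, eps)
    return states, alphabet, new_delta, start, new_finals
```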

2013 ◽  
Vol 24 (07) ◽  
pp. 1083-1097 ◽  
Author(s):  
MARKUS HOLZER ◽  
SEBASTIAN JAKOBI

We introduce E-equivalence, a straightforward generalization of almost-equivalence. While almost-equivalence asks for ordinary equivalence up to a finite number of exceptions, in E-equivalence these exceptions or errors must belong to a (regular) set E. We study the computational complexity of deterministic finite automaton (DFA) minimization problems and their variants w.r.t. almost- and E-equivalence. We show that there is a significant difference in the complexity of problems related to almost-equivalence and those related to E-equivalence. Moreover, since hyper-minimal and E-minimal automata are not necessarily unique (up to isomorphism, as minimal DFAs are), we also consider the problem of counting the number of these minimal automata.
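
To make almost-equivalence concrete: two DFAs are almost-equivalent exactly when their symmetric difference is a finite language, which is decidable via a product construction and a cycle check. The Python sketch below assumes complete DFAs encoded as transition dictionaries; all names are illustrative assumptions, and this is a plausibility sketch rather than the authors' procedure.

```python
# Hedged sketch: deciding almost-equivalence of two complete DFAs by testing
# whether their symmetric difference is a finite language.

def almost_equivalent(A, B, alphabet):
    """A, B: triples (delta, start, finals) of complete DFAs over `alphabet`,
    where delta maps (state, symbol) -> state."""
    (dA, sA, fA), (dB, sB, fB) = A, B

    def succ(p, a):                      # product transition
        return (dA[(p[0], a)], dB[(p[1], a)])

    def accepting(p):                    # exactly one component accepts:
        return (p[0] in fA) != (p[1] in fB)  # a symmetric-difference witness

    # 1. All reachable product states.
    start = (sA, sB)
    reach, stack = {start}, [start]
    while stack:
        p = stack.pop()
        for a in alphabet:
            q = succ(p, a)
            if q not in reach:
                reach.add(q)
                stack.append(q)

    # 2. Reachable states from which a witness is still reachable.
    useful = {p for p in reach if accepting(p)}
    changed = True
    while changed:
        changed = False
        for p in reach - useful:
            if any(succ(p, a) in useful for a in alphabet):
                useful.add(p)
                changed = True

    # 3. The difference language is infinite iff the useful subgraph
    #    contains a cycle; detect one with a colouring DFS.
    color = dict.fromkeys(useful, 0)     # 0 = new, 1 = on stack, 2 = done
    def has_cycle(p):
        color[p] = 1
        for a in alphabet:
            q = succ(p, a)
            if q in color and (color[q] == 1 or
                               (color[q] == 0 and has_cycle(q))):
                return True
        color[p] = 2
        return False

    return not any(color[p] == 0 and has_cycle(p) for p in useful)
```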


2016 ◽  
Vol 27 (02) ◽  
pp. 161-185
Author(s):  
Markus Holzer ◽  
Sebastian Jakobi

We compare deterministic finite automata (DFAs) and biautomata under the following two aspects: structural similarities between minimal and hyper-minimal automata, and computational complexity of the minimization and hyper-minimization problem. Concerning classical minimality, the known results such as isomorphism between minimal DFAs, and NL-completeness of the DFA minimization problem carry over to the biautomaton case. But surprisingly this is not the case for hyper-minimization: the similarity between almost-equivalent hyper-minimal biautomata is not as strong as it is between almost-equivalent hyper-minimal DFAs. Moreover, while hyper-minimization is NL-complete for DFAs, we prove that this problem turns out to be computationally intractable, i.e., NP-complete, for biautomata.


2012 ◽  
Vol 23 (06) ◽  
pp. 1207-1225 ◽  
Author(s):  
ANDREAS MALETTI ◽  
DANIEL QUERNHEIM

Hyper-minimization of deterministic finite automata (DFAs) is a recently introduced state reduction technique that allows a finite change in the recognized language. A generalization of this lossy compression method to the weighted setting over semifields is presented, which allows the recognized weighted language to differ for finitely many input strings. First, the structure of hyper-minimal deterministic weighted finite automata is characterized in a similar way as in classical weighted minimization and unweighted hyper-minimization. Second, an efficient hyper-minimization algorithm, which runs in time [Formula: see text], is derived from this characterization. Third, the closure properties of canonical regular languages, i.e., languages recognized by hyper-minimal DFAs, are investigated. Finally, some recent results in the area of hyper-minimization are recalled.
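
The characterization rests on splitting the states into a finite-use "preamble" and an infinitely-used "kernel": kernel states must keep their exact behaviour, while preamble states may be merged with almost-equivalent ones. Here is a minimal Python sketch of computing the kernel of an unweighted DFA, under an assumed dictionary encoding that is not taken from the paper.

```python
# Hedged sketch: computing the kernel of a DFA, i.e. the states reached by
# infinitely many strings, whose behaviour hyper-minimization preserves
# exactly (preamble states may be merged with almost-equivalent ones).

def kernel_states(delta, start, alphabet):
    """delta: dict (state, symbol) -> state of a complete DFA."""

    def forward_closure(srcs):
        seen, stack = set(srcs), list(srcs)
        while stack:
            q = stack.pop()
            for a in alphabet:
                r = delta[(q, a)]
                if r not in seen:
                    seen.add(r)
                    stack.append(r)
        return seen

    reachable = forward_closure({start})
    # A reachable state lies on a cycle iff it can reach itself again.
    on_cycle = {q for q in reachable
                if q in forward_closure({delta[(q, a)] for a in alphabet})}
    # Kernel: everything on or after a reachable cycle.
    return forward_closure(on_cycle)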


1977 ◽  
Vol 6 (82) ◽  
Author(s):  
Erik Meineche Schmidt

The gain in succinctness of descriptions of regular languages when nondeterministic (unambiguous) finite automata are used rather than unambiguous (deterministic) finite automata is not bounded by any polynomial.

The problem of deciding whether an unambiguous regular expression does not generate all words over its terminal alphabet is in NP.
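
The NFA-versus-DFA instance of this gap can be checked mechanically on the classic witness family L_n = "the n-th symbol from the end is a", where an NFA has n + 1 states but the minimal DFA needs 2^n. The following Python sketch counts the reachable subsets of the subset construction; the family is the standard textbook example, not one drawn from Schmidt's paper.

```python
# Hedged illustration of the exponential (hence super-polynomial) gap on the
# classic family L_n: the NFA has n+1 states, the minimal DFA 2**n.

def subset_construction(n):
    # NFA states 0..n: state 0 loops on a,b; on 'a' it also moves to 1;
    # states 1..n-1 move to the next state on a,b; state n is accepting.
    def step(S, sym):
        T = set()
        for q in S:
            if q == 0:
                T.add(0)
                if sym == 'a':
                    T.add(1)
            elif q < n:
                T.add(q + 1)
        return frozenset(T)

    start = frozenset({0})
    seen, stack = {start}, [start]
    while stack:
        S = stack.pop()
        for sym in 'ab':
            T = step(S, sym)
            if T not in seen:
                seen.add(T)
                stack.append(T)
    return len(seen)

print(subset_construction(10))  # 1024 = 2**10 reachable subset states
```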


2009 ◽  
Vol 20 (04) ◽  
pp. 563-580 ◽  
Author(s):  
MARKUS HOLZER ◽  
MARTIN KUTRIB

Nondeterministic finite automata (NFAs) were introduced in [68], where their equivalence to deterministic finite automata was shown. Over the last 50 years, a vast literature documenting the importance of finite automata as an enormously valuable concept has been developed. In the present paper, we tour a fragment of this literature. Mostly, we discuss recent developments on NFA-related problems such as (i) simulation of and by several types of finite automata, (ii) minimization and approximation, (iii) size estimation of minimal NFAs, and (iv) state complexity of language operations. We thus come across descriptional and computational complexity issues of nondeterministic finite automata. We do not prove these results; rather, we draw attention to the big picture and some of the main ideas involved.
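
On item (i), the simulation direction that avoids the exponential subset construction is worth making concrete: an NFA can be run directly by tracking its set of active states, deciding membership in time polynomial in the automaton and the word. A minimal sketch with an assumed dictionary encoding, not tied to any particular construction surveyed in the paper:

```python
# Hedged sketch: direct NFA membership by tracking the active state set,
# avoiding determinization entirely.

def nfa_accepts(delta, start, finals, word):
    """delta: dict (state, symbol) -> iterable of successor states."""
    current = {start}
    for sym in word:
        current = {r for q in current for r in delta.get((q, sym), ())}
        if not current:          # dead: no run can continue
            return False
    return bool(current & finals)
```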


2005 ◽  
Vol 16 (03) ◽  
pp. 441-451 ◽  
Author(s):  
J.-M. CHAMPARNAUD ◽  
F. COULON ◽  
T. PARANTHOËN

Finite automata determinization is a critical operation for numerous practical applications such as regular expression search. Algorithms have to deal with the possible blow-up of determinization. Solutions exist to control the space and time complexity, such as so-called "on the fly" determinization. Another solution consists in performing brute-force determinization, which is robust and technically fast, although a priori its space complexity constitutes a weakness. However, one can reduce this complexity by performing a partial brute-force determinization. This paper provides optimizations that consist in detecting classes of unreachable states and transitions of the subset automaton, which leads on average to an exponential reduction of the complexity of brute-force and partial brute-force determinization.
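
A minimal sketch of the "on the fly" idea mentioned above: subset states are created lazily and memoized only when an input actually visits them, so the full subset automaton is never materialized. The class and names below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: lazy ("on the fly") determinization with memoized
# subset transitions.

class LazyDFA:
    def __init__(self, delta, start, finals):
        """delta: dict (state, symbol) -> iterable of NFA successors."""
        self.delta, self.finals = delta, finals
        self.start = frozenset({start})
        self.cache = {}                  # (subset, symbol) -> subset

    def step(self, S, sym):
        key = (S, sym)
        if key not in self.cache:        # build this transition only once
            self.cache[key] = frozenset(
                r for q in S for r in self.delta.get((q, sym), ()))
        return self.cache[key]

    def accepts(self, word):
        S = self.start
        for sym in word:
            S = self.step(S, sym)
        return bool(S & self.finals)
```

Repeated searches reuse the cached subset transitions, whereas brute-force determinization would pay for every subset state up front, reachable or not.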


2013 ◽  
Vol 487 ◽  
pp. 17-22 ◽  
Author(s):  
Manuel Vázquez de Parga ◽  
Pedro García ◽  
Damián López

Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 338
Author(s):  
Cezar Câmpeanu

Deterministic Finite Cover Automata (DFCAs) are compact representations of finite languages. Deterministic finite automata with “do not care” symbols and multiple-entry deterministic finite automata are both compact representations of regular languages. This paper studies the benefits of combining these representations to obtain even more compact representations of finite languages. We extend DFCAs either by allowing “do not care” symbols or by considering multiple-entry DFCAs. For each of the two models, we study the existence of minimization or simplification algorithms and their computational complexity, the state complexity of these representations compared with other representations of the same language, and the bounds on state complexity when converting between representations. Minimization for both models proves to be NP-hard. A method is presented to transform minimization algorithms for deterministic automata into simplification algorithms applicable to these extended models. DFCAs with “do not care” symbols prove to have state complexity comparable to that of nondeterministic finite cover automata. Furthermore, for multiple-entry DFCAs, we obtain a tight estimate of the state complexity of the transformation into an equivalent DFCA.
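
For context, a DFCA for a finite language L of words no longer than a level l may also accept words longer than l; membership therefore combines the automaton with the length bound, which is what allows the DFCA to be smaller than the minimal DFA for L. A minimal Python sketch under an assumed encoding, not code from the paper:

```python
# Hedged sketch: membership via a cover automaton. The DFCA may accept
# extra words longer than `level`; the length filter restores exactness.

def dfca_member(delta, start, finals, level, word):
    """w is in L  iff  len(w) <= level and the cover automaton accepts w."""
    if len(word) > level:
        return False
    q = start
    for sym in word:
        q = delta[(q, sym)]
    return q in finals
```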


2020 ◽  
Vol 9 (3) ◽  
pp. 1238-1250
Author(s):  
Dmitry V. Pashchenko ◽  
Dmitry A. Trokoz ◽  
Alexey I. Martyshkin ◽  
Mihail P. Sinev ◽  
Boris L. Svistunov

The paper proposes an algorithm whose purpose is to search for a substring of characters in a string. Its principle of operation is based on the theory of nondeterministic finite automata and vector-symbolic (hyperdimensional) architecture. When repeatedly searching for different substrings in a target string, it provides computational complexity that is linear in the length of the searched string, measured in the number of operations on hyperdimensional vectors. None of the existing algorithms has such a low level of computational complexity. The disadvantage of the proposed algorithm is that existing hardware implementations of computing systems require a large number of machine instructions to perform operations on hyperdimensional vectors, which reduces the gain from this algorithm. Despite this, a future hardware implementation that executes operations on hyperdimensional vectors in one cycle would allow the proposed algorithm to be applied in practice.
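
The vector-symbolic procedure itself is not reproduced here; as a grounded point of comparison, the classic Shift-And algorithm below is a bit-parallel simulation of the same substring-matching NFA, with machine-word bitmasks playing the role that hyperdimensional vectors play in the proposed algorithm. A standard, runnable Python version:

```python
# Classic Shift-And: bit-parallel simulation of the substring-matching NFA.
# Shown for comparison only; it is not the paper's vector-symbolic method.

def shift_and(text, pattern):
    """Return start indices of all occurrences of pattern in text."""
    m = len(pattern)
    # Bit i of mask[c] is set iff pattern[i] == c.
    mask = {}
    for i, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << i)
    state, hits = 0, []      # bit i set: pattern[:i+1] matches a text suffix
    accept = 1 << (m - 1)
    for j, c in enumerate(text):
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & accept:
            hits.append(j - m + 1)
    return hits

print(shift_and("abracadabra", "abra"))  # [0, 7]
```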

