BRUTE FORCE DETERMINIZATION OF NFAs BY MEANS OF STATE COVERS

2005 ◽  
Vol 16 (03) ◽  
pp. 441-451 ◽  
Author(s):  
J.-M. CHAMPARNAUD ◽  
F. COULON ◽  
T. PARANTHOËN

Finite automata determinization is a critical operation for numerous practical applications such as regular expression search. Algorithms have to deal with the possible blow-up of determinization. Solutions exist to control the space and time complexity, such as the so-called "on-the-fly" determinization. Another solution consists in performing brute force determinization, which is robust and technically fast, although a priori its space complexity constitutes a weakness. However, one can reduce this complexity by performing a partial brute force determinization. This paper provides optimizations that consist in detecting classes of unreachable states and transitions of the subset automaton, which leads on average to an exponential reduction of the complexity of brute force and partial brute force determinization.
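
To make the contrast concrete, here is a minimal Python sketch (a toy NFA of our own, not the paper's construction) comparing brute force determinization, which tabulates transitions for all 2^|Q| subsets, with on-the-fly subset construction, which only ever builds the reachable subsets that the paper's optimizations aim to characterize:

```python
from itertools import chain, combinations

# Toy NFA: transitions as dict[(state, symbol)] -> set of states.
# All names here are illustrative; the paper's own construction differs in detail.
STATES = {0, 1, 2}
ALPHABET = {"a", "b"}
DELTA = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}

def step(subset, symbol):
    """Image of a subset of NFA states under one input symbol."""
    return frozenset(chain.from_iterable(DELTA.get((q, symbol), ()) for q in subset))

def brute_force_determinize():
    """Enumerate all 2^|Q| subsets up front (the brute force variant)."""
    all_subsets = [frozenset(s) for r in range(len(STATES) + 1)
                   for s in combinations(STATES, r)]
    return {(s, a): step(s, a) for s in all_subsets for a in ALPHABET}

def on_the_fly_determinize(initial):
    """Explore only subsets reachable from the initial subset."""
    start = frozenset(initial)
    seen, stack, trans = {start}, [start], {}
    while stack:
        s = stack.pop()
        for a in ALPHABET:
            t = step(s, a)
            trans[(s, a)] = t
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return trans, seen

brute = brute_force_determinize()
lazy, reachable = on_the_fly_determinize({0})
print(len(brute), "transitions built brute force;",
      len(lazy), "for the", len(reachable), "reachable subsets")
```

On this toy NFA only 3 of the 8 subsets are reachable, which is exactly the kind of gap that detecting classes of unreachable subset states exploits.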

2020 ◽  
Vol 30 (6) ◽  
pp. 1239-1255
Author(s):  
Merlin Carl

Abstract We consider notions of space complexity introduced by Winter [21, 22]. We answer several open questions about these notions, among them whether low space complexity implies low time complexity (it does not) and whether one of the equalities $\mathrm{P}=\mathrm{PSPACE}$, $\mathrm{P}_{+}=\mathrm{PSPACE}_{+}$ and $\mathrm{P}_{++}=\mathrm{PSPACE}_{++}$ holds for ITTMs (all three are false). We also show various separation results between space complexity classes for ITTMs. This considerably expands our earlier observations on the topic in Section 7.2.2 of Carl (2019, Ordinal Computability: An Introduction to Infinitary Machines), which appear here as Lemma 6 up to Corollary 9.


Author(s):  
Subandijo Subandijo

The efficiency, or running time, of an algorithm is usually expressed through its time complexity or space complexity as a function of the input. It is common to estimate complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large inputs. The brute-force approach is the easiest way to gauge the performance of an algorithm; however, it is not recommended, since it does not sufficiently explain the algorithm's efficiency. Asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. The big-O notation is used to express these estimates.
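
As a hedged illustration of why asymptotic estimates matter more than raw measurements, the following Python sketch (the task and both functions are invented for this example) solves the same problem with an O(n²) brute-force scan and an O(n log n) alternative; big-O captures the difference in growth rate regardless of implementation details:

```python
# Two solutions to the same task -- "does any pair of numbers sum to target?" --
# with different asymptotic costs. The point is the growth rate, not the clock.

def has_pair_brute_force(nums, target):
    """O(n^2): check every pair explicitly."""
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_sorted(nums, target):
    """O(n log n): sort once, then sweep with two pointers in O(n)."""
    s = sorted(nums)
    lo, hi = 0, len(s) - 1
    while lo < hi:
        total = s[lo] + s[hi]
        if total == target:
            return True
        if total < target:
            lo += 1
        else:
            hi -= 1
    return False

data = list(range(10_000))
assert has_pair_brute_force(data, 19_997) == has_pair_sorted(data, 19_997)
```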


2001 ◽  
Vol 11 (06) ◽  
pp. 707-735 ◽  
Author(s):  
J.-M. CHAMPARNAUD ◽  
D. ZIADI

Two classical non-deterministic automata recognize the language denoted by a regular expression: the position automaton, which is deduced from the position sets defined by Glushkov and McNaughton–Yamada, and the equation automaton, which can be computed via Mirkin's prebases or Antimirov's partial derivatives. Let |E| be the size of the expression and ‖E‖ be its alphabetic width, i.e. the number of symbol occurrences. The number of states in the equation automaton is less than or equal to the number of states in the position automaton, which is equal to ‖E‖+1. On the other hand, the worst-case time complexity of Antimirov's algorithm is O(‖E‖³·|E|²), while it is only O(‖E‖·|E|) for the most efficient implementations yielding the position automaton (Brüggemann-Klein; Chang and Paige; Champarnaud et al.). We present an O(|E|²) space and time algorithm to compute the equation automaton. It is based on the notion of canonical derivative, which makes it possible to efficiently handle sets of word derivatives. As a by-product, canonical derivatives also lead to a new O(|E|²) space and time algorithm to construct the position automaton.
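
A minimal Python sketch of the position (Glushkov) automaton construction may help fix ideas; the AST classes below are illustrative, cover only concatenation, star and symbols, and omit union for brevity:

```python
# Glushkov construction over a tiny, hand-rolled regex AST. Node classes and
# helper names are illustrative, not taken from the paper.

class Sym:                      # a single, already-linearized position
    def __init__(self, pos, ch): self.pos, self.ch = pos, ch
    nullable = False
    def first(self): return {self.pos}
    def last(self): return {self.pos}
    def follow(self, f): pass

class Cat:                      # concatenation E1 . E2
    def __init__(self, l, r): self.l, self.r = l, r
    @property
    def nullable(self): return self.l.nullable and self.r.nullable
    def first(self):
        return self.l.first() | self.r.first() if self.l.nullable else self.l.first()
    def last(self):
        return self.l.last() | self.r.last() if self.r.nullable else self.r.last()
    def follow(self, f):
        self.l.follow(f); self.r.follow(f)
        for p in self.l.last(): f[p] |= self.r.first()

class Star:                     # Kleene star E*
    def __init__(self, e): self.e = e
    nullable = True
    def first(self): return self.e.first()
    def last(self): return self.e.last()
    def follow(self, f):
        self.e.follow(f)
        for p in self.e.last(): f[p] |= self.e.first()

# (ab)*a : positions 1:a 2:b 3:a, so alphabetic width 3 and 4 states.
a1, b2, a3 = Sym(1, "a"), Sym(2, "b"), Sym(3, "a")
expr = Cat(Star(Cat(a1, b2)), a3)
follow = {1: set(), 2: set(), 3: set()}
expr.follow(follow)
print("first:", expr.first(), "last:", expr.last(), "follow:", follow)
# Transitions enter position p on p's symbol; states are {0} plus the positions.
```

The ‖E‖+1 states are the start state plus one state per position, which is why the position automaton's size is tied to the alphabetic width.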


2020 ◽  
Vol 12 (4) ◽  
pp. 1
Author(s):  
Yaozhi Jiang

The P vs. NP problem is a very important research direction in computational complexity theory. In this paper the author, from an engineer's viewpoint, establishes a universal multiple-tape Turing machine and a k-homogeneous multiple-tape Turing machine, and from them obtains a unified mathematical model for algorithm trees. From this unified model, the author concludes that for an NP problem processed serially, parallel processing can sometimes yield P = NP in time complexity; but this introduces another non-determinism, a non-deterministic space complexity, i.e., under serial processing P ≠ NP in space complexity. This result excludes the case of NP problems for which a faster algorithm exists to replace the brute-force algorithm. Hence the author argues that, under parallel processing, time complexity depends on space complexity, and vice versa, within the P vs. NP problem; this trade-off is presented as a natural property of the problem, supporting the conclusion that "P ≠ NP".
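
The time-for-space trade-off the author appeals to can be illustrated schematically (this toy Python sketch is our own, not the paper's Turing-machine model): a brute-force search over 2^n candidates can be sharded across 2^k workers, dividing time per worker while multiplying the resources held simultaneously:

```python
from itertools import product

# Schematic only: brute-force search over all 2^n candidate assignments.
# Run serially it costs O(2^n) time and O(n) space; sharded across 2^k
# workers each shard takes O(2^(n-k)) time, but the worker pool itself
# now occupies space exponential in k.

def satisfies(assignment):
    """Toy predicate standing in for an NP verifier: (x0 or x1) and not x2."""
    return (assignment[0] or assignment[1]) and not assignment[2]

def brute_force_serial(n):
    return [a for a in product([False, True], repeat=n) if satisfies(a)]

def shard(n, k, worker_id):
    """The slice of the search space assigned to one of 2^k parallel workers."""
    prefix = [bool(worker_id >> i & 1) for i in range(k)]
    return [tuple(prefix) + rest
            for rest in product([False, True], repeat=n - k)
            if satisfies(tuple(prefix) + rest)]

n, k = 4, 2
parallel = [sol for w in range(2 ** k) for sol in shard(n, k, w)]
assert sorted(parallel) == sorted(brute_force_serial(n))
```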


2016 ◽  
Vol 42 (2) ◽  
pp. 207-243
Author(s):  
Daniel Gildea ◽  
Giorgio Satta

The complexity of parsing with synchronous context-free grammars is polynomial in the sentence length for a fixed grammar, but the degree of the polynomial depends on the grammar. Specifically, the degree depends on the length of rules, the permutations represented by the rules, and the parsing strategy adopted to decompose the recognition of a rule into smaller steps. We address the problem of finding the best parsing strategy for a rule, in terms of space and time complexity. We show that it is NP-hard to find the binary strategy with the lowest space complexity. We also show that any algorithm for finding the strategy with the lowest time complexity would imply improved approximation algorithms for finding the treewidth of general graphs.
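
The following Python sketch (permutation and strategies are invented examples, not from the paper) shows the quantity a parsing strategy is judged by: as the children of a synchronous rule are combined one at a time, the partial analysis covers some set of positions on each side, and the maximum number of contiguous spans it must track governs the exponent of the parsing complexity:

```python
# Measuring a parsing strategy's cost: combine the positions of a synchronous
# rule one at a time and track the number of contiguous spans ("fan-out") the
# partial analysis covers on each side.

def spans(positions):
    """Number of maximal runs of consecutive integers in a set of positions."""
    ps = sorted(positions)
    return sum(1 for i, p in enumerate(ps) if i == 0 or p != ps[i - 1] + 1)

def strategy_cost(perm, order):
    """Max total spans (source side + target side) over the partial analyses
    obtained by adding the rule's children in the given order."""
    src, tgt, worst = set(), set(), 0
    for child in order:
        src.add(child)                  # position on the source side
        tgt.add(perm[child])            # its image on the target side
        worst = max(worst, spans(src) + spans(tgt))
    return worst

perm = {0: 0, 1: 2, 2: 1, 3: 3}         # the rule's permutation
for order in [(0, 1, 2, 3), (1, 2, 0, 3)]:
    print(order, "-> max fan-out", strategy_cost(perm, order))
```

Here the order (1, 2, 0, 3) tracks at most 2 spans while (0, 1, 2, 3) needs 3, which is why finding the lowest-cost strategy is the optimization problem the paper proves hard.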


Algorithms ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 238
Author(s):  
Faissal Ouardi ◽  
Zineb Lotfi ◽  
Bilal Elghadyry

This paper describes a fast algorithm for constructing the equation automaton directly from the well-known Thompson automaton associated with a regular expression. Allauzen and Mohri presented a unified construction of small automata and gave a construction of the equation automaton with time and space complexity in O(m log m + m²), where m denotes the number of transitions of the Thompson automaton. It is based on two classical automata operations, namely epsilon-removal and Hopcroft's algorithm for Deterministic Finite Automata (DFA) minimization. Using the notion of c-continuation, Ziadi et al. presented a fast computation of the equation automaton in O(m²) time complexity. In this paper, we design an output-sensitive algorithm combining the advantages of the previous algorithms and show that its computational complexity can be reduced to O(m·|Q≡e|), where |Q≡e| denotes the number of states of the equation automaton, by means of epsilon-removal and Bubenzer's minimization algorithm for Acyclic Deterministic Finite Automata (ADFA).
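
For orientation, here is a hedged Python sketch of the epsilon-removal step on a hand-made Thompson-style fragment (the paper's actual pipeline also recomputes accepting states and is followed by ADFA minimization, both omitted here):

```python
# Epsilon-removal on a toy Thompson-style NFA fragment. The states and
# transitions are hand-made for this illustration, not the paper's pipeline.

def epsilon_closure(state, eps):
    """All states reachable from `state` through epsilon transitions alone."""
    seen, stack = {state}, [state]
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def remove_epsilons(states, delta, eps):
    """Replace q --eps*--> p --a--> r chains by direct q --a--> r transitions."""
    new_delta = {}
    for q in states:
        for p in epsilon_closure(q, eps):
            for (src, a), targets in delta.items():
                if src == p:
                    new_delta.setdefault((q, a), set()).update(targets)
    return new_delta

# Toy fragment for a(b|eps): epsilon edges fan out from state 1.
states = {0, 1, 2, 3}
delta = {(0, "a"): {1}, (2, "b"): {3}}
eps = {1: {2, 3}}
print(remove_epsilons(states, delta, eps))
# -> {(0, 'a'): {1}, (1, 'b'): {3}, (2, 'b'): {3}}
```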


Author(s):  
Vinoth Kumar K

The vast majority of security applications in today's systems depend on deep packet inspection. In recent years, regular expression matching has been used as an important operator: it examines whether or not the packet's payload can be matched against a group of predefined regular expressions. Regular expressions are parsed using deterministic finite automata (DFA) representations. However, representing regular expression sets as DFAs requires a large amount of memory, an excessive amount of time, and an excessive amount of per-flow state, limiting their practical application. This chapter explores network intrusion detection systems.
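
A small Python sketch (both toy DFAs are hand-written for this illustration) shows the root of the memory problem: combining pattern DFAs multiplies state counts, so a set of patterns can blow up even when each individual DFA is small:

```python
# Product construction over two toy pattern DFAs: one tracks "seen an 'a'",
# the other counts input length mod 3. Reachable pair-states form the
# combined DFA, and in the worst case their number is the product of sizes.

def product_dfa(d1, d2, alphabet):
    """Run two DFAs in lockstep; reachable pair-states form the product DFA."""
    start = (d1["start"], d2["start"])
    seen, stack, trans = {start}, [start], {}
    while stack:
        (p, q) = stack.pop()
        for a in alphabet:
            t = (d1["delta"][(p, a)], d2["delta"][(q, a)])
            trans[((p, q), a)] = t
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen, trans

alphabet = {"a", "b"}
d1 = {"start": 0, "delta": {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 1}}
d2 = {"start": 0, "delta": {(s, a): (s + 1) % 3 for s in range(3) for a in alphabet}}
states, _ = product_dfa(d1, d2, alphabet)
print(len(states), "product states from DFAs of size 2 and 3")  # up to 2*3 = 6
```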


Kant-Studien ◽  
2020 ◽  
Vol 111 (3) ◽  
pp. 331-385
Author(s):  
Christian Martin

Abstract According to a widespread view, the essentials of Kant's critical conception of space and time as set forth in the Transcendental Aesthetic can already be found in his 1770 Inaugural Dissertation. Contrary to this assumption, the present article shows that Kant's later arguments for the a priori intuitive character of our original representations of space and time differ crucially from those contained in the Dissertation. This article highlights profound differences between Kant's transcendental and his pre-critical conception of pure sensibility by systematically comparing the topic, method and argumentation of the First Critique with those of the Inaugural Dissertation. It thus contributes to a better understanding of the Transcendental Aesthetic itself, allowing one to distinguish its peculiar transcendental mode of argumentation from considerations made by the pre-critical Kant, with which it can easily be conflated.


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Li Li ◽  
Yanping Zhou

Abstract In this work, we consider the density-dependent incompressible inviscid Boussinesq equations in $\mathbb{R}^{N}$ ($N\geq 2$). By using the basic energy method, we first give a priori estimates of smooth solutions and then obtain a blow-up criterion. This shows that the maximum norm of the gradient of the velocity field controls the breakdown of smooth solutions of the density-dependent inviscid Boussinesq equations. Our result extends the known blow-up criteria.
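
A schematic LaTeX rendering of the kind of criterion described, in the spirit of Beale–Kato–Majda-type results (the precise function spaces and norms are those of the paper, which the sketch below does not reproduce):

```latex
% Schematic blow-up criterion: a smooth solution on [0, T*) can be continued
% past T* unless the gradient of the velocity accumulates in the maximum norm.
\[
  \text{if } \int_0^{T^*} \|\nabla u(\cdot,t)\|_{L^\infty(\mathbb{R}^N)} \, dt
  < \infty,
  \quad \text{then the smooth solution extends beyond } T^* .
\]
```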


Author(s):  
Amandine Aftalion ◽  
Manuel del Pino ◽  
René Letelier

We consider the problem Δu = λf(u) in Ω, with u(x) → +∞ as x → ∂Ω. Here, Ω is a bounded smooth domain in ℝ^N, N ≥ 1, and λ is a positive parameter. In this paper, we are interested in analysing the role of the sign changes of the function f in the number of solutions of this problem. As a consequence of our main result, we find that if Ω is star-shaped and f behaves like f(u) = u(u−a)(u−1) with ½ < a < 1, then there is a solution greater than 1 for all λ, and there exists λ0 > 0 such that, for λ < λ0, there is no positive solution that crosses 1, while for λ > λ0 there are at least two solutions that cross 1. The proof is based on a priori estimates, the construction of barriers, and topological-degree arguments.
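
For readability, the problem and the model nonlinearity can be displayed in LaTeX (this restates the abstract's inline formulas, nothing more):

```latex
% The boundary blow-up problem from the abstract, with the model
% sign-changing nonlinearity considered in the main result:
\[
  \Delta u = \lambda f(u) \ \text{in } \Omega, \qquad
  u(x) \to +\infty \ \text{as } x \to \partial\Omega,
\]
\[
  f(u) = u\,(u-a)\,(u-1), \qquad \tfrac12 < a < 1, \quad \lambda > 0 .
\]
```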

