Linear pattern matching of compressed terms and polynomial rewriting

2018 ◽  
Vol 28 (8) ◽  
pp. 1415-1450 ◽  
Author(s):  
MANFRED SCHMIDT-SCHAUß

We consider term rewriting under sharing in the form of compression by singleton tree grammars (STGs), which is more general than term DAGs. Algorithms for the subtasks of rewriting are analysed: finding a redex for rewriting by locating a position for a match, performing a rewrite step by constructing the compressed result, and executing a sequence of rewrite steps. The first main result is that locating a match of a linear term s in another term t can be performed in polynomial time if s, t are both STG-compressed. This generalizes results on matching of STG-compressed terms, on matching of straight-line-program-compressed strings with character variables where every variable occurs at most once, and on fully compressed matching of strings. Also, for the case where s is directed-acyclic-graph (DAG)-compressed, it is shown that submatching can be performed in polynomial time. The general case of compressed submatching can be computed in non-deterministic polynomial time, and an algorithm is described that may be exponential in the worst case; its complexity is n^{O(k)}, where k is the number of variables with double occurrences in s and n is the size of the input. The second main result is that, given an oracle for the redex position, a sequence of m parallel or single-step rewriting steps under STG-compression can be performed in polynomial time. This generalizes results on DAG-compressed rewriting sequences. Combining these results implies that for an STG-compressed term rewrite system with left-linear rules, m parallel or single-step term rewrite steps can be performed in time polynomial in the input size n and in m.
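
For readers unfamiliar with singleton tree grammars, the following minimal sketch (illustrative only, not taken from the paper; the helper names build_stg and expand are hypothetical) shows the idea behind the compression: every nonterminal has exactly one production, so the grammar denotes a single term, and a grammar of size linear in the depth can denote a term with exponentially many positions.

```python
# Illustrative sketch of STG compression (hypothetical helper names):
# each nonterminal has exactly one production, so the grammar denotes a
# single term that may be exponentially larger than the grammar itself.

def build_stg(depth):
    """Grammar A0 -> a and A{i} -> f(A{i-1}, A{i-1}) for i = 1..depth."""
    rules = {"A0": ("a",)}
    for i in range(1, depth + 1):
        rules[f"A{i}"] = ("f", f"A{i-1}", f"A{i-1}")
    return rules

def expand(rules, nt):
    """Decompress a nonterminal into the explicit (possibly huge) term."""
    prod = rules[nt]
    if len(prod) == 1:                       # constant symbol
        return prod[0]
    head, *args = prod
    return f"{head}({','.join(expand(rules, a) for a in args)})"

g = build_stg(4)                   # grammar has only 5 rules ...
print(expand(g, "A4").count("a"))  # ... but the term has 2**4 = 16 leaves
```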

1994 ◽  
Vol 04 (01n02) ◽  
pp. 171-180
Author(s):  
R. RAMESH

Term rewriting is a popular computational paradigm for symbolic computations such as formula manipulation, theorem proving and implementations of nonprocedural programming languages. In rewriting, the most demanding operation is the repeated simplification of terms by pattern matching them against rewrite rules. We describe a parallel architecture, R2M, for accelerating this operation. R2M can operate either as a stand-alone processor using its own memory or as a backend device attached to a host using the host's main memory. R2M uses only a fixed number (independent of input size) of processing units and fixed-capacity auxiliary memory units, yet it is capable of handling variable-size rewrite rules that change during simplification. This is made possible by a simple and reconfigurable interconnection present in R2M. Finally, R2M uses a hybrid scheme that combines the ease and efficiency of parallel pattern matching on the tree representation of terms with the naturalness of their dag representation for replacements.


2015 ◽  
Vol 07 (03) ◽  
pp. 1550032 ◽  
Author(s):  
Abdullah N. Arslan ◽  
Betsy George ◽  
Kirsten Stor

The pattern matching with wildcards and length constraints problem is an interesting problem in the literature whose computational complexity is still open. There are polynomial-time exact algorithms for its special cases, as well as heuristic and online algorithms that do not guarantee an optimal solution to the original problem. We consider two special cases of the problem for which we develop offline solutions. We give an algorithm for one case with provably better worst-case time complexity than existing algorithms. We present the first exact algorithm for the second case; this algorithm uses integer linear programming (ILP) and runs in polynomial time under certain conditions.
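
To make the problem shape concrete, here is a naive backtracking matcher (purely illustrative; it is not one of the algorithms from the paper and ignores efficiency): a pattern is a list of exact segments separated by wildcard gaps, each gap constrained to a length range.

```python
# Naive matcher for "segments separated by length-constrained wildcard gaps".
# segments = [s0, s1, ...]; gaps = [(lo, hi), ...] between consecutive
# segments, so len(gaps) == len(segments) - 1.

def matches(text, segments, gaps):
    def go(pos, i):
        seg = segments[i]
        if not text.startswith(seg, pos):
            return False
        end = pos + len(seg)
        if i == len(segments) - 1:
            return True
        lo, hi = gaps[i]
        # Try every admissible gap length before the next segment.
        return any(go(end + g, i + 1) for g in range(lo, hi + 1))
    return any(go(p, 0) for p in range(len(text)))

# "ab", then a gap of 1..3 arbitrary characters, then "cd".
assert matches("xxabyycdzz", ["ab", "cd"], [(1, 3)])
```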


1992 ◽  
Vol 21 (400) ◽  
Author(s):  
Dexter Kozen

Let T_Sigma be the set of ground terms over a finite ranked alphabet Sigma. We define partial automata on T_Sigma and prove that the finitely generated congruences on T_Sigma are in one-to-one correspondence (up to isomorphism) with the finite partial automata on Sigma with no inaccessible and no inessential states. We give an application in term rewriting: every ground term rewrite system has a canonical equivalent system that can be constructed in polynomial time.
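
A small sketch of the correspondence the abstract refers to, under the assumption that ground terms are encoded as nested tuples and the partial automaton as a transition map (the encoding and names are illustrative, not Kozen's notation): two ground terms are congruent exactly when the automaton drives them to the same state, and the transitions missing from the map are what make the automaton partial.

```python
# Deterministic bottom-up (partial) tree automaton as a dict from
# (symbol, tuple of child states) to a state.

def run(delta, term):
    head, *args = term
    child_states = tuple(run(delta, a) for a in args)
    key = (head, child_states)
    if key not in delta:
        raise ValueError(f"no transition for {key}")   # partial automaton
    return delta[key]

# A congruence identifying the constants a and b: both reach state q0,
# hence f(a) and f(b) reach the same state as well.
delta = {
    ("a", ()): "q0",
    ("b", ()): "q0",
    ("f", ("q0",)): "q1",
}
assert run(delta, ("f", ("a",))) == run(delta, ("f", ("b",)))
```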


2002 ◽  
Vol 13 (06) ◽  
pp. 873-887
Author(s):  
NADIA NEDJAH ◽  
LUIZA DE MACEDO MOURELLE

We compile pattern matching for overlapping patterns in term rewriting systems into a minimal tree matching automaton. The use of directed acyclic graphs that share all isomorphic subautomata allows us to reduce space requirements, since these subautomata are duplicated in the tree automaton. We design an efficient method to identify such subautomata and avoid duplicating their construction while generating the dag automaton. We compute some bounds on the size of the automata, thereby improving on previously known equivalent bounds for the tree automaton.
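
The sharing of isomorphic subautomata can be pictured with hash-consing, i.e. keyed memoisation on a node's contents; the sketch below is a generic illustration of that idea, not the paper's actual construction algorithm.

```python
# Generic hash-consing sketch (not the paper's construction): structurally
# identical subautomata are built once and shared, so the tree-shaped
# automaton collapses into a dag.

_pool = {}

def node(label, children=()):
    """Return a shared node; equal (label, children) keys map to one object."""
    key = (label, children)
    if key not in _pool:
        _pool[key] = (label, children)
    return _pool[key]

# Two isomorphic subautomata become a single shared object in the dag.
left = node("state-g", (node("accept"),))
right = node("state-g", (node("accept"),))
assert left is right
```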


2018 ◽  
Vol 29 (02) ◽  
pp. 315-329 ◽  
Author(s):  
Timothy Ng ◽  
David Rappaport ◽  
Kai Salomaa

The neighbourhood of a language L with respect to an additive distance consists of all strings that have distance at most the given radius from some string of L. We show that the worst-case deterministic state complexity of a radius-r neighbourhood of a language recognized by an n-state nondeterministic finite automaton A is [Formula: see text]. In the case where A is deterministic we get the same lower bound for the state complexity of the neighbourhood if we use an additive quasi-distance. The lower bound constructions use an alphabet of size linear in n. We show that the worst-case state complexity of the set of strings that contain a substring within distance r from a string recognized by A is [Formula: see text].
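
As a concrete and purely illustrative instance, Levenshtein edit distance is a standard additive distance; the brute-force routine below enumerates the radius-r neighbourhood of a finite language restricted to short strings, just to make the neighbourhood operation tangible (it has nothing to do with the paper's automata constructions).

```python
# Brute-force illustration only: E(L, r) = { w : d(w, x) <= r for some x in L },
# with d the Levenshtein edit distance, a standard additive distance.
from itertools import product

def edit_distance(u, v):
    dp = list(range(len(v) + 1))                     # distances to prefixes of v
    for i, a in enumerate(u, 1):
        prev, dp[0] = dp[0], i
        for j, b in enumerate(v, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete a
                                     dp[j - 1] + 1,      # insert b
                                     prev + (a != b))    # substitute / copy
    return dp[-1]

def neighbourhood(language, radius, alphabet, max_len):
    """All strings of length <= max_len within distance `radius` of the language."""
    return {w for n in range(max_len + 1)
              for w in map("".join, product(alphabet, repeat=n))
              if any(edit_distance(w, x) <= radius for x in language)}

print(sorted(neighbourhood({"ab"}, 1, "ab", 3)))
```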


Author(s):  
Marko Samer ◽  
Stefan Szeider

Parameterized complexity is a theoretical framework that considers, in addition to the overall input size, the effect on computational complexity of a secondary measurement, the parameter. This two-dimensional viewpoint allows a fine-grained complexity analysis that takes structural properties of problem instances into account. The central notion is "fixed-parameter tractability", which refers to solvability in polynomial time for each fixed value of the parameter such that the order of the polynomial time bound is independent of the parameter. This chapter presents the main concepts and recent results on the parameterized complexity of the satisfiability problem and outlines fundamental algorithmic ideas that arise in this context. Among the parameters considered are the size of backdoor sets with respect to various tractable base classes and the treewidth of graph representations of satisfiability instances.
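
The textbook example below (not taken from the chapter) makes the running-time shape of fixed-parameter tractability concrete: the classic bounded search tree for Vertex Cover decides "is there a cover of size at most k?" in roughly O(2^k * |E|) steps, i.e. f(k) times a polynomial whose degree does not depend on the parameter k.

```python
# Classic bounded search tree for Vertex Cover: O(2**k * |E|) time,
# i.e. f(k) * poly(input size) with the polynomial degree independent of k.
def vertex_cover_at_most_k(edges, k):
    """True iff the graph given by `edges` has a vertex cover of size <= k."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # Any cover must contain u or v to cover the edge (u, v); branch on both.
    return (vertex_cover_at_most_k([e for e in edges if u not in e], k - 1)
            or vertex_cover_at_most_k([e for e in edges if v not in e], k - 1))

triangle = [(1, 2), (2, 3), (1, 3)]
assert not vertex_cover_at_most_k(triangle, 1)   # a triangle needs 2 vertices
assert vertex_cover_at_most_k(triangle, 2)
```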

