rewrite rules
Recently Published Documents


TOTAL DOCUMENTS: 140 (last five years: 10)
H-INDEX: 19 (last five years: 1)

2021, Vol 5 (OOPSLA), pp. 1-28
Author(s): Chandrakana Nandi, Max Willsey, Amy Zhu, Yisu Remy Wang, Brett Saiki, ...

Many compilers, synthesizers, and theorem provers rely on rewrite rules to simplify expressions or prove equivalences. Developing rewrite rules can be difficult: rules may be subtly incorrect, profitable rules are easy to miss, and rulesets must be rechecked or extended whenever semantics are tweaked. Large rulesets can also be challenging to apply: redundant rules slow down rule-based search and frustrate debugging. This paper explores how equality saturation, a promising technique that uses e-graphs to apply rewrite rules, can also be used to infer rewrite rules. E-graphs can compactly represent the exponentially large sets of enumerated terms and potential rewrite rules. We show that equality saturation efficiently shrinks both sets, leading to faster synthesis of smaller, more general rulesets. We prototyped these strategies in a tool dubbed Ruler. Compared to a similar tool built on CVC4, Ruler synthesizes 5.8× smaller rulesets 25× faster, without compromising on proving power. In an end-to-end case study, we show that Ruler-synthesized rules perform as well as those crafted by domain experts and address a longstanding issue in a popular open source tool.
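A useful intuition for how enumerated terms become candidate rules is the "characteristic vector" trick: terms that evaluate identically on shared sample inputs are candidate left- and right-hand sides of a rewrite rule. The sketch below illustrates only that candidate-generation step in plain Python; the tiny term language, sample inputs, and function names are invented here, and Ruler itself works over e-graphs (via the egg library) and verifies candidates before accepting them.

```python
# Toy sketch of candidate-rule generation via "characteristic vectors":
# enumerate small terms over {x, 0, 1, +, *}, evaluate each on shared sample
# inputs, and group terms whose outputs coincide. Each group yields candidate
# rules pointing at its smallest member. Agreement on a handful of samples does
# not prove equivalence, so a real tool must still verify these candidates.
from itertools import product

SAMPLES = [-2, -1, 0, 1, 3, 7]  # sample values for the single variable x

def enumerate_terms(depth):
    """All terms up to the given depth, represented as nested tuples."""
    if depth == 0:
        return ["x", 0, 1]
    smaller = enumerate_terms(depth - 1)
    terms = list(smaller)
    for op in ("+", "*"):
        terms += [(op, a, b) for a, b in product(smaller, repeat=2)]
    return terms

def evaluate(term, x):
    if term == "x":
        return x
    if isinstance(term, int):
        return term
    op, a, b = term
    return evaluate(a, x) + evaluate(b, x) if op == "+" else evaluate(a, x) * evaluate(b, x)

def size(term):
    return 1 if not isinstance(term, tuple) else 1 + size(term[1]) + size(term[2])

def candidate_rules(depth=1):
    groups = {}
    for term in enumerate_terms(depth):
        cvec = tuple(evaluate(term, x) for x in SAMPLES)   # characteristic vector
        groups.setdefault(cvec, []).append(term)
    rules = []
    for terms in groups.values():
        smallest = min(terms, key=size)                    # pick a representative
        rules += [(t, smallest) for t in terms if t != smallest]
    return rules

if __name__ == "__main__":
    for lhs, rhs in candidate_rules():
        print(lhs, "=>", rhs)   # e.g. ('+', 'x', 0) => x  and  ('*', 1, 'x') => x
```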


2021, Vol 15 (1), pp. 46-58
Author(s): Xuanhe Zhou, Guoliang Li, Chengliang Chai, Jianhua Feng

Query rewrite transforms a SQL query into an equivalent one with higher performance. However, SQL rewrite is an NP-hard problem, and existing approaches adopt heuristics to rewrite the queries. These heuristics have two main limitations. First, the order of applying different rewrite rules significantly affects the query performance; however, the search space of all possible rewrite orders grows exponentially with the number of query operators and rules, so it is rather hard to find the optimal rewrite order. Existing methods apply a pre-defined order to rewrite queries and can fall into a local optimum. Second, different rewrite rules have different benefits for different queries, and existing methods work on single plans but cannot effectively estimate the benefit of rewriting a query. To address these challenges, we propose a policy-tree-based query rewrite framework, where the root is the input query and each node is a query rewritten from its parent. We aim to explore the nodes of the policy tree to find the optimal rewritten query. We use Monte Carlo Tree Search to explore the policy tree, navigating it efficiently toward the optimal node. Moreover, we propose a learning-based model to estimate the expected performance improvement of each rewritten query, which guides the tree search more accurately. We also propose a parallel algorithm that explores the tree in parallel to improve performance. Experimental results show that our method significantly outperforms existing approaches.
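To make the search concrete, here is a minimal UCT-style Monte Carlo Tree Search over a policy tree of rewritten queries: each node applies one rewrite rule to its parent, and the reward is the cost reduction under a toy cost model. The rules, the cost model, and the use of random rollouts are all stand-ins invented for this sketch; the paper replaces rollouts with a learned estimator of the performance benefit.

```python
# Minimal MCTS over a "policy tree": each child applies one rewrite rule to its
# parent query. Queries are reduced to a single toy cost number; rule names and
# their cost effects are made up purely for illustration.
import math, random

RULES = {
    "push_down_filter": lambda cost: cost * 0.6,
    "remove_redundant_join": lambda cost: cost - 40 if cost > 200 else cost,
    "unnest_subquery": lambda cost: cost * 0.8 if cost < 300 else cost * 0.95,
}
MAX_DEPTH = 4

class Node:
    def __init__(self, cost, depth=0, parent=None, rule=None):
        self.cost, self.depth, self.parent, self.rule = cost, depth, parent, rule
        self.children, self.visits, self.total_reward = [], 0, 0.0

    def expand(self):
        for name, apply_rule in RULES.items():
            self.children.append(Node(apply_rule(self.cost), self.depth + 1, self, name))

    def ucb_child(self, c=1.4):
        return max(self.children, key=lambda n: float("inf") if n.visits == 0
                   else n.total_reward / n.visits + c * math.sqrt(math.log(self.visits) / n.visits))

def rollout(cost, depth):
    """Random playout to MAX_DEPTH; reward is the toy cost reduction achieved.
    (The paper uses a learned benefit estimator instead of random rollouts.)"""
    start = cost
    while depth < MAX_DEPTH:
        cost = random.choice(list(RULES.values()))(cost)
        depth += 1
    return start - cost

def mcts(root_cost, iterations=500):
    root = Node(root_cost)
    for _ in range(iterations):
        node = root
        while node.children:                        # 1. selection
            node = node.ucb_child()
        if node.visits > 0 and node.depth < MAX_DEPTH:
            node.expand()                           # 2. expansion
            node = node.children[0]
        reward = rollout(node.cost, node.depth)     # 3. simulation
        while node:                                 # 4. backpropagation
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    best = max(root.children, key=lambda n: n.visits)
    return best.rule, best.cost

if __name__ == "__main__":
    print(mcts(root_cost=500.0))   # most-visited first rewrite and its toy cost
```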


2021, Vol 5 (ICFP), pp. 1-29
Author(s): John M. Li, Andrew W. Appel

An efficient optimizing compiler can perform many cascading rewrites in a single pass, using auxiliary data structures such as variable binding maps, delayed substitutions, and occurrence counts. Such optimizers often perform transformations according to relatively simple rewrite rules, but the subtle interactions between the data structures needed for efficiency make them tricky to write and trickier to prove correct. We present a system for semi-automatically deriving both an efficient program transformation and its correctness proof from a list of rewrite rules and specifications of the auxiliary data structures it requires. Dependent types ensure that the holes left behind by our system (for the user to fill in) are filled in correctly, allowing the user low-level control over the implementation without having to worry about getting it wrong. We implemented our system in Coq (though it could be implemented in other logics as well), and used it to write optimization passes that perform uncurrying, inlining, dead code elimination, and static evaluation of case expressions and record projections. The generated implementations are sometimes faster, and at most 40% slower, than hand-written counterparts on a small set of benchmarks; in some cases, they require significantly less code to write and prove correct.
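As a concrete (if much humbler) picture of the kind of pass being derived, the sketch below performs cascading rewrites in a single traversal of a toy expression language, using two of the auxiliary structures the abstract mentions: an occurrence-count map and an environment acting as a delayed substitution. The AST encoding and all names are invented for this illustration; the paper's system derives such passes, together with their correctness proofs, in Coq.

```python
# Expressions: ("const", n) | ("var", x) | ("add", e1, e2) | ("let", x, bound, body)

def count_occurrences(expr, counts):
    """Occurrence-count map: number of uses of each let-bound variable."""
    kind = expr[0]
    if kind == "var":
        counts[expr[1]] = counts.get(expr[1], 0) + 1
    elif kind == "add":
        count_occurrences(expr[1], counts)
        count_occurrences(expr[2], counts)
    elif kind == "let":
        count_occurrences(expr[2], counts)
        count_occurrences(expr[3], counts)
    return counts

def shrink(expr, env, counts):
    """Single pass: constant-fold, inline single-use or constant bindings via the
    delayed substitution `env`, and drop dead bindings. Note that `counts` is
    computed once up front; keeping such data consistent while rewrites cascade
    is exactly the kind of subtle interaction the paper's proofs must handle."""
    kind = expr[0]
    if kind == "const":
        return expr
    if kind == "var":
        return env.get(expr[1], expr)
    if kind == "add":
        a, b = shrink(expr[1], env, counts), shrink(expr[2], env, counts)
        if a[0] == "const" and b[0] == "const":
            return ("const", a[1] + b[1])                        # constant folding
        return ("add", a, b)
    if kind == "let":
        name, bound, body = expr[1], expr[2], expr[3]
        bound = shrink(bound, env, counts)
        uses = counts.get(name, 0)
        if uses == 0:
            return shrink(body, env, counts)                     # dead-code elimination
        if uses == 1 or bound[0] == "const":
            return shrink(body, {**env, name: bound}, counts)    # inlining
        return ("let", name, bound, shrink(body, env, counts))
    raise ValueError(f"unknown expression: {expr!r}")

if __name__ == "__main__":
    program = ("let", "x", ("const", 1),
               ("let", "y", ("add", ("var", "x"), ("const", 2)),
                ("add", ("var", "y"), ("var", "y"))))
    counts = count_occurrences(program, {})
    print(shrink(program, {}, counts))   # inlining and folding cascade to ('const', 6)
```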


Quantum, 2021, Vol 5, pp. 421
Author(s): Miriam Backens, Hector Miller-Bakewell, Giovanni de Felice, Leo Lobski, John van de Wetering

Translations between the quantum circuit model and the measurement-based one-way model are useful for verification and optimisation of quantum computations. They make crucial use of a property known as gflow. While gflow is defined for one-way computations allowing measurements in three different planes of the Bloch sphere, most research so far has focused on computations containing only measurements in the XY-plane. Here, we give the first circuit-extraction algorithm to work for one-way computations containing measurements in all three planes and having gflow. The algorithm is efficient and the resulting circuits do not contain ancillae. One-way computations are represented using the ZX-calculus, hence the algorithm also represents the most general known procedure for extracting circuits from ZX-diagrams. In developing this algorithm, we generalise several concepts and results previously known for computations containing only XY-plane measurements. We bring together several known rewrite rules for measurement patterns and formalise them in a unified notation using the ZX-calculus. These rules are used to simplify measurement patterns by reducing the number of qubits while preserving both the semantics and the existence of gflow. The results can be applied to circuit optimisation by translating circuits to patterns and back again.
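The simplify-then-extract pipeline the abstract describes can be experimented with in PyZX, an open-source Python library for the ZX-calculus co-developed by one of the authors. The snippet below is only a sketch of that workflow: the function names follow PyZX's documented API as I recall it, and the random circuit generator is just a stand-in for a real computation, so check both against the release you install.

```python
# Sketch of the translate -> simplify -> extract workflow using PyZX
# (pip install pyzx). Verify the exact API against your installed version.
import pyzx as zx

# A random CNOT+H+phase circuit as a stand-in for a real computation: 4 qubits, depth 40.
circuit = zx.generate.CNOT_HAD_PHASE_circuit(4, 40)

graph = circuit.to_graph()          # circuit -> ZX-diagram
zx.full_reduce(graph)               # ZX-calculus rewrite rules; preserves the existence of gflow
extracted = zx.extract_circuit(graph.copy())   # gflow-based, ancilla-free circuit extraction

# Check that the rewrites preserved the semantics (feasible only for small circuits).
print(zx.compare_tensors(circuit.to_tensor(), extracted.to_tensor()))
print("gates before:", len(circuit.gates), "after:", len(extracted.gates))
```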


2021
Author(s): Dávid J. Németh, Dániel Horpácsi, Máté Tejfel

Many development environments offer refactorings aimed at improving non-functional properties of software, but we have no guarantees that these transformations indeed preserve the observable behavior of the source code they are applied to. An existing domain-specific language makes it possible to formalize automatically verifiable refactorings by instantiating predefined transformation schemes with conditional term rewrite rules. We present a proposal for adapting this language from the functional to the object-oriented programming paradigm, using Java instead of Erlang as the representative language. The behavior-preserving property of the discussed refactorings is characterized by a multilayered definition of equivalence for Java programs, including the conformity relation of class hierarchies. Based on the decomposition of a complex refactoring rule, we show how new transformation schemes can be identified, along with the modifications and extensions of the description language required to accommodate them. Finally, we formally define the chosen base refactoring as a composition of scheme instances.
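For readers unfamiliar with conditional term rewrite rules, the sketch below shows the basic mechanism on a toy AST: a rule is a pattern, a side condition over the matched variables, and a replacement template, and it fires only where the pattern matches and the condition holds. The example rule (rewriting `c.size() == 0` to `c.isEmpty()` when `c` has a collection type) is a Java-flavoured illustration invented here, not one of the paper's formalized refactorings, and the type table stands in for the static information a real refactoring tool would query.

```python
# Minimal conditional term rewriting over tuple-encoded terms.
# Pattern variables are strings starting with "?".

def match(pattern, term, subst):
    """Extend `subst` so that the pattern equals the term, or return None."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def instantiate(template, subst):
    if isinstance(template, str) and template.startswith("?"):
        return subst[template]
    if isinstance(template, tuple):
        return tuple(instantiate(t, subst) for t in template)
    return template

def rewrite_once(term, rule):
    """Apply `rule` top-down at the first position where it matches and its
    side condition holds; otherwise return the term unchanged."""
    pattern, condition, replacement = rule
    subst = match(pattern, term, {})
    if subst is not None and condition(subst):
        return instantiate(replacement, subst)
    if isinstance(term, tuple):
        return tuple(rewrite_once(t, rule) for t in term)
    return term

# Toy stand-in for static type information a real refactoring tool would query.
TYPES = {"items": "java.util.List", "total": "int"}

# c.size() == 0  ==>  c.isEmpty(), but only if c has a collection type.
SIZE_TO_ISEMPTY = (
    ("==", ("call", ("var", "?c"), "size"), ("const", 0)),
    lambda s: TYPES.get(s["?c"], "").startswith("java.util."),
    ("call", ("var", "?c"), "isEmpty"),
)

if __name__ == "__main__":
    print(rewrite_once(("==", ("call", ("var", "items"), "size"), ("const", 0)), SIZE_TO_ISEMPTY))
    # -> ('call', ('var', 'items'), 'isEmpty')
    print(rewrite_once(("==", ("call", ("var", "total"), "size"), ("const", 0)), SIZE_TO_ISEMPTY))
    # -> unchanged: the side condition rejects the int-typed variable
```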


Author(s): N. D. Gilbert, E. A. McDougall

Abstract: Presentations of groups by rewriting systems (that is, by monoid presentations) have been fruitfully studied by encoding the rewriting system in a 2-complex—the Squier complex—whose fundamental groupoid then describes the derivation of consequences of the rewrite rules. We describe a reduced form of the Squier complex, investigate the structure of its fundamental groupoid, and show that key properties of the presentation are still encoded in the reduced form.
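As a concrete anchor for "derivation of consequences of the rewrite rules": given a monoid presentation as string rewrite rules, a derivation is a chain of rule applications connecting two words, and these one-step applications are the edges of the Squier complex (its 2-cells, not modelled here, record independent applications that can be interchanged). The presentation and the breadth-first search below are a toy illustration of my own, not the paper's reduced construction.

```python
# A monoid presentation as string rewrite rules, and derivations between words.
from collections import deque

RULES = [("ba", "ab")]   # one relation, ba = ab: the free commutative monoid on {a, b}

def neighbours(word):
    """All words one rule application away, applying relations in either direction."""
    for lhs, rhs in RULES:
        for pattern, replacement in ((lhs, rhs), (rhs, lhs)):
            start = 0
            while (i := word.find(pattern, start)) != -1:
                yield word[:i] + replacement + word[i + len(pattern):]
                start = i + 1

def derivation(source, target, max_steps=10):
    """Breadth-first search for a chain of rewrites (a path in the Squier complex)."""
    queue, seen = deque([(source, [source])]), {source}
    while queue:
        word, path = queue.popleft()
        if word == target:
            return path
        if len(path) > max_steps:
            continue
        for nxt in neighbours(word):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    print(derivation("bba", "abb"))   # ['bba', 'bab', 'abb']
```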


2020, Vol 30 (6), pp. 710-735
Author(s): Yunus Kutz, Manfred Schmidt-Schauß

Abstract: We consider matching, rewriting, critical pairs and the Knuth–Bendix confluence test on rewrite rules in a nominal setting extended by atom-variables. We utilize atom-variables instead of atoms to formulate rewrite rules on constrained expressions, which is an improvement in expressiveness over previous approaches. Nominal unification and nominal matching are correspondingly extended. Rewriting is performed using nominal matching, and computing critical pairs is done using nominal unification. We determine the complexity of several problems in a quantified freshness logic. In particular, we show that nominal matching is $\Pi_2^p$-complete. We prove that the adapted Knuth–Bendix confluence test is applicable to a nominal rewrite system with atom-variables, and thus that there is a decidable test for whether confluence of the ground instance of the abstract rewrite system holds. We apply the nominal Knuth–Bendix confluence criterion to the theory of monads and compute a convergent nominal rewrite system modulo alpha-equivalence.
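For orientation, the Knuth–Bendix test in its plain, non-nominal form checks local confluence by computing critical pairs (the two results of rewriting an overlap of two left-hand sides) and verifying that each pair reduces to a common normal form. The sketch below does this for an ordinary string rewriting system; the rules are made up, only suffix/prefix overlaps are considered, and none of the nominal machinery (binders, freshness constraints, atom-variables) that the paper adds is modelled.

```python
# Critical pairs and a joinability check for a toy string rewriting system.
RULES = [("aa", "a"), ("bb", "b"), ("ab", "ba")]   # made-up example system

def reduce_once(word):
    for lhs, rhs in RULES:
        i = word.find(lhs)
        if i != -1:
            return word[:i] + rhs + word[i + len(lhs):]
    return None

def normal_form(word):
    while (nxt := reduce_once(word)) is not None:
        word = nxt
    return word

def critical_pairs():
    """Peaks arising from suffix/prefix overlaps of two left-hand sides."""
    pairs = []
    for l1, r1 in RULES:
        for l2, r2 in RULES:
            for k in range(1, min(len(l1), len(l2)) + 1):
                if (l1, r1) == (l2, r2) and k == len(l1):
                    continue                        # trivial self-overlap
                if l1[len(l1) - k:] == l2[:k]:
                    # the word l1 + l2[k:] is reducible in two different ways
                    pairs.append((r1 + l2[k:], l1[:len(l1) - k] + r2))
    return pairs

if __name__ == "__main__":
    for left, right in critical_pairs():
        status = "joinable" if normal_form(left) == normal_form(right) else "NOT joinable"
        print(f"{left!r} vs {right!r}: {status}")   # all joinable => locally confluent
```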


Author(s): Pius ten Hacken

The scope of classical generative morphology is not clearly determined. All three components need clarification. The boundaries of what counts as generative linguistics are not unambiguously set, but it can be assumed that all generative work in linguistics is inspired by the work of Noam Chomsky. Morphology was a much more prominent component of linguistic theory in earlier approaches, but the subject field of course also had to be accounted for in generative linguistics. The label classical can be seen as restricting the scope both to the more mainstream approaches and to a period that ends before the present. Here, the early 1990s will be taken as the time when classical theorizing gave way to contemporary generative morphology. In the earliest presentations of generative linguistics, there was no lexicon. The introduction of the lexicon made many of the ideas formulated before obsolete. Chomsky's Lexicalist Hypothesis provided the basis for a new start of research in morphology. Two contrasting elaborations appeared in the early 1970s. Halle proposed a model based on the combination of morphemes, Jackendoff one based on the representation and analysis of full words. Against this background, a number of characteristic issues were discussed in the 1970s and 1980s. One such issue was the form of rules. Here there was a shift from transformations to rewrite rules. This shift can be seen particularly well in the discussion of verbal compounds, e.g., truck driver. The question of whether and how morphology should be distinguished from syntax generated a lot of discussion. Another broad question was the degree to which rules of morphology should be thought of as operating in separate components. This can be observed in the issue of the distinction between inflection and derivation and in level ordering. The latter was a proposal to divide affixes into classes with different phonological and other effects on the base they attach to. A side effect of level ordering was the appearance of bracketing paradoxes, where, for instance, generative grammarian has a phonological constituent grammarian but a semantic constituent generative grammar. Another aspect of rule application which can be construed as a difference between morphology and syntax is productivity. In general, syntactic rules are more productive and morphological rules display blocking effects, where, for instance, unpossible is blocked by the existence of impossible. Being classical, much of the discussion in this period serves as a shared background for the emergence and discussion of current generative approaches in morphology. The transition to these theories started in the 1990s, although some of them appeared only in the early 2000s.


2020, Vol 16 (4), pp. 1-25
Author(s): Larisa Stoltzfus, Bastian Hagedorn, Michel Steuwer, Sergei Gorlatch, Christophe Dubach

2019, Vol 9 (1), pp. 33-51
Author(s): Alejandro Serrano, Jurriaan Hage

Abstract: Domain-specific languages (DSLs) permeate current programming practices. An important kind of DSL comprises those developed and integrated within a host language, which we call embedded or internal DSLs. Unfortunately, embedded DSLs usually fall short on domain-specific error diagnosis; that is, they do not give DSL authors control over how errors are reported to the programmer. As a consequence, implementation details of the DSL leak through in error messages, and programmers need to understand the internals of the DSL implementation to fix their code in a productive way. This paper addresses the challenge of building a compiler with integrated support for domain-specific error diagnosis. We assume that the type system is described using a constraint-based approach, and that constraint solving is specified using rewrite rules. Domain information can then be injected at constraint-gathering time via type rules, during constraint solving via specialized rules and axioms, and finally at blaming and reparation time via transformations. Furthermore, we define error contexts as a way to control the order in which solving and blaming proceed. We engineer domain-specific error diagnosis in such a way that the compiler can also reuse the techniques for improving general error diagnosis.
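To make the pipeline tangible, here is a very small model of it in Python: type checking gathers equality constraints, a solver rewrites the constraint set step by step, and each constraint carries a blame message supplied by the (hypothetical) DSL author, so that a failure is reported in the DSL's vocabulary rather than in terms of solver internals. Every name and the tiny type language are invented for this sketch; the paper's framework is far richer (type rules, specialized solving axioms, error contexts, reparation).

```python
# Constraint-based typing with domain-specific blame messages (toy model).
# Types: lowercase strings are type variables, capitalised strings are constants,
# tuples are type constructors. Occurs check omitted for brevity.

class DslTypeError(Exception):
    pass

def is_var(t):
    return isinstance(t, str) and t.islower()

def apply_subst(t, subst):
    if is_var(t):
        return apply_subst(subst[t], subst) if t in subst else t
    if isinstance(t, tuple):
        return tuple(apply_subst(x, subst) for x in t)
    return t

def solve(constraints):
    """Rewrite the constraint set into a substitution, or blame a constraint."""
    subst = {}
    while constraints:
        left, right, blame = constraints.pop()
        left, right = apply_subst(left, subst), apply_subst(right, subst)
        if left == right:
            continue                                    # trivial constraint: drop it
        if is_var(left):
            subst[left] = right                         # bind a type variable
        elif is_var(right):
            subst[right] = left
        elif isinstance(left, tuple) and isinstance(right, tuple) and len(left) == len(right):
            constraints.extend((a, b, blame) for a, b in zip(left, right))   # decompose
        else:
            raise DslTypeError(f"{blame}: cannot match {left} with {right}")
    return subst

# Constraints gathered from a made-up query DSL: `filter` expects a predicate
# over the row type, so the DSL author attaches a domain-specific message.
constraints = [
    ("p", ("Fun", "row", "Bool"), "the second argument of `filter` must be a predicate"),
    ("p", ("Fun", ("Row", "Int"), "Int"), "predicate type inferred from the call site"),
]

try:
    print(solve(list(constraints)))
except DslTypeError as err:
    print("type error:", err)   # reported in the DSL's vocabulary, not the solver's
```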

