Deriving efficient program transformations from rewrite rules

2021 ◽  
Vol 5 (ICFP) ◽  
pp. 1-29
Author(s):  
John M. Li ◽  
Andrew W. Appel

An efficient optimizing compiler can perform many cascading rewrites in a single pass, using auxiliary data structures such as variable binding maps, delayed substitutions, and occurrence counts. Such optimizers often perform transformations according to relatively simple rewrite rules, but the subtle interactions between the data structures needed for efficiency make them tricky to write and trickier to prove correct. We present a system for semi-automatically deriving both an efficient program transformation and its correctness proof from a list of rewrite rules and specifications of the auxiliary data structures it requires. Dependent types ensure that the holes left behind by our system (for the user to fill in) are filled in correctly, allowing the user low-level control over the implementation without having to worry about getting it wrong. We implemented our system in Coq (though it could be implemented in other logics as well), and used it to write optimization passes that perform uncurrying, inlining, dead code elimination, and static evaluation of case expressions and record projections. The generated implementations are sometimes faster, and at most 40% slower, than hand-written counterparts on a small set of benchmarks; in some cases, they require significantly less code to write and prove correct.
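To make the flavour of such interacting structures concrete, here is a minimal sketch in Python (not the authors' Coq system; the expression encoding, count_uses, and shrink are invented for illustration) of a single rewriting pass that consults an occurrence-count map and a constant-binding environment while applying cascading rules:

from collections import Counter

# Expressions: ("num", n) | ("var", name) | ("add", e1, e2) | ("let", name, bound, body)

def count_uses(e, counts):
    tag = e[0]
    if tag == "var":
        counts[e[1]] += 1
    elif tag == "add":
        count_uses(e[1], counts)
        count_uses(e[2], counts)
    elif tag == "let":
        count_uses(e[2], counts)
        count_uses(e[3], counts)

def shrink(e, env, counts):
    tag = e[0]
    if tag == "num":
        return e
    if tag == "var":
        # Rule: replace a variable bound to a known constant.
        return env.get(e[1], e)
    if tag == "add":
        a, b = shrink(e[1], env, counts), shrink(e[2], env, counts)
        # Rule: constant folding.
        if a[0] == "num" and b[0] == "num":
            return ("num", a[1] + b[1])
        return ("add", a, b)
    if tag == "let":
        name, bound = e[1], shrink(e[2], env, counts)
        if counts[name] == 0:
            # Rule: dead-binding elimination (safe here because expressions are pure).
            return shrink(e[3], env, counts)
        if bound[0] == "num":
            # Rule: constant propagation; the binding itself is then dropped.
            return shrink(e[3], dict(env, **{name: bound}), counts)
        return ("let", name, bound, shrink(e[3], env, counts))

prog = ("let", "x", ("num", 1), ("add", ("var", "x"), ("num", 2)))
counts = Counter()
count_uses(prog, counts)
print(shrink(prog, {}, counts))  # ('num', 3): several rules fire in one traversal

Even in this toy, the auxiliary structures interact: constant propagation makes a binding dead and enables folding, all within the same traversal.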

2006 ◽  
Vol 31 (7) ◽  
pp. 573-584 ◽  
Author(s):  
Bodo Billerbeck ◽  
Justin Zobel

Author(s):  
Giuseppe Sindoni

A hypertext view is a hypertext containing data from an underlying database. The materialization of such hypertexts, that is, the actual storage of their pages in the site server, is often a valid option. Suitable auxiliary data structures and algorithms must be designed to guarantee consistency between the structures and contents of each heterogeneous component where base data is stored and those of the derived hypertext view.


Author(s):  
Gabriel Zachmann

Many companies have started to investigate Virtual Reality as a tool for evaluating digital mock-ups. One of the key functions needed for interactive evaluation is real-time collision detection. An algorithm for exact collision detection is presented which can handle arbitrary non-convex polyhedra efficiently. The approach attains its speed by a hierarchical adaptive space subdivision scheme, the BoxTree, and an associated divide-and-conquer traversal algorithm, which exploits the very special geometry of boxes. The traversal algorithm is generic, so it can be endowed with other semantics operating on polyhedra, e.g., distance computations. The algorithm is fairly simple to implement and it is described in great detail in an “ftp-able” appendix to facilitate easy implementation. Pre-computation of auxiliary data structures is very simple and fast. The efficiency of the approach is shown by timing results and two real-world digital mock-up scenarios.
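As a rough illustration of the divide-and-conquer traversal (a generic axis-aligned box hierarchy sketched in Python, not the paper's BoxTree; Node, boxes_overlap, collide, and volume are invented names):

# Illustrative sketch, not the paper's BoxTree: each node holds an axis-aligned
# box (lo, hi corners) and either child nodes or a list of leaf primitives.
class Node:
    def __init__(self, lo, hi, children=(), tris=()):
        self.lo, self.hi = lo, hi
        self.children = list(children)
        self.tris = list(tris)

def boxes_overlap(a, b):
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

def volume(n):
    return (n.hi[0] - n.lo[0]) * (n.hi[1] - n.lo[1]) * (n.hi[2] - n.lo[2])

def collide(a, b, exact_test):
    """Descend both hierarchies, pruning disjoint box pairs, and call an exact
    primitive-vs-primitive test only at leaf/leaf pairs."""
    if not boxes_overlap(a, b):
        return False
    if not a.children and not b.children:
        return any(exact_test(t, u) for t in a.tris for u in b.tris)
    # Descend into the node with the larger box to keep the recursion balanced.
    if a.children and (not b.children or volume(a) >= volume(b)):
        return any(collide(c, b, exact_test) for c in a.children)
    return any(collide(a, c, exact_test) for c in b.children)

leaf1 = Node((0, 0, 0), (1, 1, 1), tris=["t1"])
leaf2 = Node((0.5, 0, 0), (1.5, 1, 1), tris=["t2"])
print(collide(leaf1, leaf2, exact_test=lambda t, u: True))  # True: the boxes overlap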


Entropy ◽  
2019 ◽  
Vol 21 (1) ◽  
pp. 40 ◽  
Author(s):  
Chao-Jen Tsai ◽  
Huan-Chih Wang ◽  
Ja-Ling Wu

In this work, three techniques for enhancing various chaos-based joint compression and encryption (JCAE) schemes are proposed. They respectively improve the execution time, compression ratio, and estimation accuracy of three different chaos-based JCAE schemes. The first uses auxiliary data structures to significantly accelerate an existing chaos-based JCAE scheme. The second solves the problem of huge multidimensional lookup table overheads by sieving out a small number of important sub-tables. The third increases the accuracy of the frequency distribution estimates used for compressing streaming data by weighting symbols in the plaintext stream according to their positions in the stream. Finally, two modified JCAE schemes leveraging the above three techniques are obtained, one applicable to static files and the other working for streaming data. Experimental results show that the proposed schemes do run faster and generate smaller files than existing JCAE schemes, verifying the effectiveness of the three newly proposed techniques.
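The third technique can be pictured with a small sketch; the exponential position weighting below is an assumption chosen for illustration, not necessarily the weighting used in the paper:

# Illustrative sketch of position-weighted frequency estimation for a symbol
# stream: recent symbols count more than old ones. The decay factor is an
# assumption for illustration, not the paper's exact weighting.
from collections import defaultdict

def weighted_frequencies(stream, decay=0.99):
    weights = defaultdict(float)
    w = 1.0
    for symbol in reversed(stream):   # the most recent symbol gets weight 1.0
        weights[symbol] += w
        w *= decay                    # older symbols are progressively discounted
    total = sum(weights.values())
    return {s: v / total for s, v in weights.items()}

print(weighted_frequencies("aaabbbbbaa"))  # the trailing 'a's pull up 'a's estimate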


1992 ◽  
Vol 2 (1) ◽  
pp. 81-126 ◽  
Author(s):  
James M. Boyle ◽  
Terence J. Harmer

One can have all the advantages of functional programming – correctness, clarity, simplicity, and flexibility – without any sacrifice in performance, even for a scientifically significant computation on a supercomputer. Therefore, why use Fortran? We demonstrate parity – equality of speed and storage use – between a program generated automatically from a functional specification and a program written by hand in the procedural style. To our knowledge, this demonstration of parity is the first for a program that solves a scientifically significant problem – quasi-linear hyperbolic partial differential equations – on a scientifically interesting supercomputer – the CRAY X-MP. We use pure Lisp, including higher-order functions, to express the functional specification for the PDE solver. We designed this specification for maximal clarity and flexibility, rather than for efficiency. Nevertheless, we obtain a highly efficient program to solve the PDEs: automated program transformations put back the missing efficiency as they produce an executable Fortran program from the specification. The generated Fortran program vectorizes on the CRAY X-MP and runs about 4% faster than a handwritten Fortran program for the same problem. We describe the problem and the specification, and some of the problem-domain-specific and hardware-specific transformations that we use to obtain the high-efficiency program.
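The kind of efficiency-restoring rewrite involved can be suggested with a toy example, written in Python rather than Lisp or Fortran and not taken from the paper: a pointwise grid update stated with a comprehension, and the explicit indexed loop a mechanical transformation would produce.

# Toy illustration (not from the paper): a "specification-style" pointwise grid
# update and the loop form a mechanical rewrite would produce. The real system
# maps a pure Lisp specification to vectorizable Fortran.

# Specification style: clear, builds the result as a whole.
def step_spec(u, flux):
    return [ui - (flux(i + 1) - flux(i)) for i, ui in enumerate(u)]

# "Transformed" style: an explicit indexed loop over a preallocated array,
# the shape a Fortran back end could vectorize.
def step_transformed(u, flux):
    out = [0.0] * len(u)
    for i in range(len(u)):
        out[i] = u[i] - (flux(i + 1) - flux(i))
    return out

u = [1.0, 2.0, 3.0]
flux = lambda i: 0.5 * i
assert step_spec(u, flux) == step_transformed(u, flux)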


2002 ◽  
Vol 13 (03) ◽  
pp. 431-443 ◽  
Author(s):  
MARCUS HUTTER

An algorithm M is described that solves any well-defined problem p as quickly as the fastest algorithm computing a solution to p, save for a factor of 5 and low-order additive terms. M optimally distributes resources between the execution of provably correct p-solving programs and an enumeration of all proofs, including relevant proofs of program correctness and of time bounds on program runtimes. M avoids Blum's speed-up theorem by ignoring programs without correctness proofs. M has broader applicability and can be faster than Levin's universal search, the fastest method for inverting functions save for a large multiplicative constant. An extension of Kolmogorov complexity and two novel natural measures of function complexity are used to show that the most efficient program computing some function f is also among the shortest programs provably computing f.
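The resource-sharing idea can be sketched very loosely in Python; the toy below interleaves candidate solvers in the spirit of Levin-style universal search, which the abstract contrasts with M, and is not Hutter's construction or its constants (interleaved_search, the power-of-two sharing, and the example solvers are all invented for illustration):

# Schematic toy, not Hutter's algorithm M: run several candidate solvers in
# interleaved time slices, giving earlier candidates larger shares of the
# steps, and return the first answer that passes an external verifier.
def interleaved_search(candidates, verify, max_steps=10_000):
    gens = [c() for c in candidates]   # each generator yields None while working, else an answer
    steps = 0
    while steps < max_steps:
        for i, g in enumerate(gens):
            slices = max(1, 2 ** (len(gens) - i - 1))   # earlier candidates get more slices
            for _ in range(slices):
                steps += 1
                try:
                    out = next(g)
                except StopIteration:
                    break
                if out is not None and verify(out):
                    return out
    return None

# Usage: two "programs" looking for a number whose square is 144.
slow = lambda: (n if n * n == 144 else None for n in range(1_000_000))
fast = lambda: (n if n * n == 144 else None for n in [10, 11, 12])
print(interleaved_search([slow, fast], verify=lambda n: n * n == 144))  # 12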


Author(s):  
Bojie Shen ◽  
Muhammad Aamir Cheema ◽  
Daniel Harabor ◽  
Peter J. Stuckey

We consider optimal and anytime algorithms for the Euclidean Shortest Path Problem (ESPP) in two dimensions. Our approach leverages ideas from two recent works: Polyanya, a mesh-based ESPP planner which we use to represent and reason about the environment, and Compressed Path Databases, a speedup technique for pathfinding on grids and spatial networks, which we exploit to compute fast candidate paths. In a range of experiments and empirical comparisons we show that: (i) the auxiliary data structures required by the new method are cheap to build and store; (ii) for optimal search, the new algorithm is faster than a range of recent ESPP planners, with speedups ranging from several factors to over one order of magnitude; (iii) for anytime search, where feasible solutions are needed fast, we report even better runtimes.
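The path-database idea the method borrows can be sketched in its simplest, uncompressed form (a toy Python version on a small unweighted graph; real Compressed Path Databases compress the first-move rows, and the graph, build_first_move_table, and extract_path below are invented for illustration):

# Toy, uncompressed illustration of a path database: for every (source, target)
# pair, precompute the first move on a shortest path; extracting a candidate
# path is then just repeated table lookup.
from collections import deque

def build_first_move_table(graph):                 # graph: node -> iterable of neighbours
    table = {}
    for target in graph:                           # BFS from each target over incoming edges
        dist, first = {target: 0}, {}
        q = deque([target])
        while q:
            v = q.popleft()
            for u in graph:
                if v in graph[u] and u not in dist:  # edge u -> v
                    dist[u] = dist[v] + 1
                    first[u] = v                     # from u, step toward v
                    q.append(u)
        table[target] = first
    return table

def extract_path(table, source, target):
    path = [source]
    while path[-1] != target:
        path.append(table[target][path[-1]])
    return path

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
table = build_first_move_table(graph)
print(extract_path(table, "A", "D"))               # ['A', 'B', 'C', 'D']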


Author(s):  
C. M. Sperberg-McQueen

In building up subroutine libraries for XSLT and XQuery, it is sometimes useful to re-implement standard algorithms in the new language. Such re-implementation can be challenging, because standard algorithms are often described in imperative terms; before being reimplemented in XSLT or XQuery, the algorithm must first be re-understood in a declarative and functional way. Some of the challenges which arise in this process can be illustrated by the example of Earley parsing. Earley’s algorithm can parse an input string against any context-free grammar in Backus-Naur Form. Unlike recursive-descent or table-driven LALR(1) parsers it is not limited to “well-behaved” grammars. Unlike other general context-free parsing algorithms such as CYK, it does not devote time and space to operations which can be seen in advance to have no possible use in a full parse. Earley’s procedural description involves successive changes to a small set of data structures representing sets of Earley items; these procedural changes cannot be translated directly into a functional language lacking assignment. But Earley’s data-structure updates can be understood as defining relations among Earley items, and the algorithm as a whole can be interpreted as calculating the smallest set of Earley items which contains a given starter item and is closed over a small number of relations on items. Re-thinking the Earley algorithm in this way not only makes it easier to implement it in XSLT and XQuery, but helps make it clear why the parser is both complete (it will always find a parse if there is one) and correct (any parse it finds will be a real parse).
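A minimal sketch of that fixed-point reading, in Python rather than XSLT or XQuery (earley_recognize and the item encoding are invented for illustration, and no attention is paid to efficiency):

# Toy sketch of Earley recognition as a fixed point: an item is
# (lhs, rhs, dot, start, end), and the recognizer computes the smallest item
# set containing the start items and closed under predict / scan / complete.
def earley_recognize(grammar, start, tokens):
    # grammar: dict mapping a nonterminal to a list of right-hand sides (tuples)
    items = {(start, rhs, 0, 0, 0) for rhs in grammar[start]}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        new = set()
        for (lhs, rhs, dot, i, j) in items:
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:      # predict: start the nonterminal after the dot
                    new |= {(sym, r, 0, j, j) for r in grammar[sym]}
                elif j < len(tokens) and tokens[j] == sym:   # scan a terminal
                    new.add((lhs, rhs, dot + 1, i, j + 1))
            else:                       # complete: advance items waiting on this nonterminal
                new |= {(l2, r2, d2 + 1, i2, j)
                        for (l2, r2, d2, i2, j2) in items
                        if j2 == i and d2 < len(r2) and r2[d2] == lhs}
        if not new <= items:
            items |= new
            changed = True
    return any(lhs == start and dot == len(rhs) and i == 0 and j == len(tokens)
               for (lhs, rhs, dot, i, j) in items)

grammar = {"S": [("S", "+", "S"), ("n",)]}
print(earley_recognize(grammar, "S", ["n", "+", "n"]))   # True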


2019 ◽  
Vol 16 (2) ◽  
pp. 409-442 ◽  
Author(s):  
A.L. Nicolini ◽  
C.M. Lorenzetti ◽  
A.G. Maguitman ◽  
C.I. Chesñevar

P2P networks have become a commonly used way of disseminating content on the Internet. In this context, constructing efficient and distributed P2P routing algorithms for complex environments that include a huge number of distributed nodes with different computing and network capabilities is a major challenge. In recent years, query routing algorithms have evolved by taking into account different features (provenance, nodes' history, topic similarity, etc.). Such features are usually stored in auxiliary data structures (tables, matrices, etc.), which provide an extra knowledge engineering layer on top of the network, resulting in added semantic value for specifying algorithms for efficient query routing. This article examines the main existing algorithms for query routing in unstructured P2P networks in which semantic aspects play a major role. A general comparative analysis is included, associated with a taxonomy of P2P networks based on their degree of decentralization and the different approaches adopted to exploit the available semantic aspects.

