Higher-Order Rewriting and Partial Evaluation

1997 ◽  
Vol 4 (46) ◽  
Author(s):  
Olivier Danvy ◽  
Kristoffer H. Rose

We demonstrate the usefulness of higher-order rewriting techniques for specializing programs, i.e., for partial evaluation. More precisely, we demonstrate how casting program specializers as combinatory reduction systems (CRSs) makes it possible to formalize the corresponding program transformations as meta-reductions, i.e., reductions in the internal "substitution calculus." For partial-evaluation problems, this means that instead of having to prove on a case-by-case basis that one's "two-level functions" operate properly, one can concisely formalize them as a combinatory reduction system and obtain as a corollary that static reduction does not go wrong and yields a well-formed residual program.

We have found that the CRS substitution calculus provides adequate expressive power to formalize partial evaluation: it provides sufficient termination strength while avoiding the need for additional restrictions, such as types, that would complicate the description unnecessarily (for our purpose). We also review the benefits and penalties entailed by more expressive higher-order formalisms. In addition, partial evaluation provides a number of examples of higher-order rewriting where being higher order is a central (rather than an occasional or merely exotic) property. We illustrate this by demonstrating how standard but non-trivial partial-evaluation examples are handled with higher-order rewriting.
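To make the notion of meta-reduction concrete, here is a minimal Haskell sketch (our illustration, not the paper's CRS formalism; all names are ours) in which static redexes of a two-level language are reduced by the metalanguage's own substitution, while dynamic constructs merely assemble residual code:

```haskell
-- Residual (dynamic) terms: what the specializer emits.
data Res = Var String | Lam String Res | App Res Res
  deriving Show

-- Two-level values: a static function is a host-level function, so
-- substitution is inherited from the metalanguage; a dynamic value is
-- a piece of residual code.
data Val = SFun (Val -> Val) | Dyn Res

-- Static application reduces now (meta-reduction); dynamic application
-- residualizes.
appS :: Val -> Val -> Val
appS (SFun f) v = f v                    -- reduced by host substitution
appS (Dyn f)  v = Dyn (App f (reify v))  -- residual code construction

-- Read a value back into residual syntax (fresh-name generation elided).
reify :: Val -> Res
reify (Dyn r)  = r
reify (SFun f) = Lam "x" (reify (f (Dyn (Var "x"))))

-- A static identity applied to a dynamic variable specializes away:
-- no beta-redex is left in the residual program.
example :: Res
example = reify (appS (SFun id) (Dyn (Var "y")))   -- ==> Var "y"
```

Static reduction happening entirely at the meta level, via the host's substitution, is the behaviour that the paper formalizes with the CRS substitution calculus.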

1991 ◽  
Vol 1 (1) ◽  
pp. 21-69 ◽  
Author(s):  
Carsten K. Gomard ◽  
Neil D. Jones

This article describes theoretical and practical aspects of an implemented self-applicable partial evaluator for the untyped lambda-calculus with constants and a fixed-point operator. To the best of our knowledge, it is the first partial evaluator that is simultaneously higher-order, non-trivial, and self-applicable.

Partial evaluation produces a residual program from a source program and some of its input data. When given the remaining input data, the residual program yields the same result that the source program would when given all its input data. Our partial evaluator produces a residual lambda-expression given a source lambda-expression and the values of some of its free variables. By self-application, the partial evaluator can be used to compile and to generate stand-alone compilers from a denotational or interpretive specification of a programming language.

An essential component in our self-applicable partial evaluator is the use of explicit binding time information. We use this to annotate the source program, marking as residual the parts for which residual code is to be generated and marking as eliminable the parts that can be evaluated using only the data that is known during partial evaluation. We give a simple criterion, well-annotatedness, that can be used to check that the partial evaluator can handle the annotated higher-order programs without committing errors.

Our partial evaluator is simple, is implemented in a side-effect-free subset of Scheme, and has been used to compile and to generate compilers and a compiler generator. In this article we examine two machine-generated compilers and find that their structures are surprisingly natural.
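As a concrete reading of the eliminable/residual distinction, here is a small Haskell sketch of a binding-time-annotated lambda-calculus and its specializer (our reconstruction in the spirit of the article, not its actual Scheme implementation; all constructor names are ours). Violations of well-annotatedness surface exactly at the error branches:

```haskell
-- Source terms carry binding-time annotations: eliminable constructs
-- (LamE/AppE) are reduced at specialization time, residual constructs
-- (LamR/AppR) generate code, and Lift injects a static value into code.
data Ann  = Cst Int | VarA String
          | LamE String Ann | AppE Ann Ann     -- eliminable
          | LamR String Ann | AppR Ann Ann     -- residual
          | Lift Ann

data Expr = CstE Int | VarE String | LamE' String Expr | AppE' Expr Expr
  deriving Show

data Value = IntV Int | Code Expr | Clo String Ann Env
type Env   = [(String, Value)]

-- The specializer: eliminable parts are evaluated using the known data,
-- residual parts are rebuilt as code.
pe :: Env -> Ann -> Value
pe _   (Cst n)    = IntV n
pe env (VarA x)   = maybe (Code (VarE x)) id (lookup x env)
pe env (LamE x b) = Clo x b env
pe env (AppE f a) = case pe env f of
                      Clo x b env' -> pe ((x, pe env a) : env') b
                      _            -> error "not well-annotated"
pe env (LamR x b) = Code (LamE' x (asCode (pe ((x, Code (VarE x)) : env) b)))
pe env (AppR f a) = Code (AppE' (asCode (pe env f)) (asCode (pe env a)))
pe env (Lift e)   = case pe env e of
                      IntV n -> Code (CstE n)
                      _      -> error "not well-annotated"

asCode :: Value -> Expr
asCode (Code e) = e
asCode _        = error "not well-annotated"

-- Example: an eliminable application disappears during specialization.
-- pe [] (AppE (LamE "x" (Lift (VarA "x"))) (Cst 7)) ==> Code (CstE 7)
```

Well-annotatedness, in this reading, is precisely the guarantee that the error branches above are unreachable.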


1991 ◽  
Vol 1 (4) ◽  
pp. 459-494 ◽  
Author(s):  
Hanne Riis Nielson ◽  
Flemming Nielson

Traditional functional languages do not have an explicit distinction between binding times. It arises implicitly, however, as one typically instantiates a higher-order function with the arguments that are known, whereas the unknown arguments remain to be taken as parameters. The distinction between ‘known’ and ‘unknown’ is closely related to the distinction between binding times like ‘compile-time’ and ‘run-time’. This leads to the use of a combination of polymorphic type inference and binding time analysis for obtaining the required information about which arguments are known.

Following the current trend in the implementation of functional languages, we then transform the run-time level of the program (not the compile-time level) into categorical combinators. At this stage we have a natural distinction between two kinds of program transformations: partial evaluation, which involves the compile-time level of our notation, and algebraic transformations (i.e., the application of algebraic laws), which involves the run-time level of our notation.

By reinterpreting the combinators in suitable ways, we obtain specifications of abstract interpretations (or data flow analyses). In particular, the use of combinators makes it possible to use a general framework for specifying both forward and backward analyses. The results of these analyses will be used to enable program transformations that are not applicable under all circumstances.

Finally, the combinators can also be reinterpreted to specify code generation for various (abstract) machines. To improve the efficiency of the code generated, one may apply abstract interpretations to collect extra information for guiding the code generation. Peephole optimizations may be used to improve the code at the lowest level.
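The reinterpretation idea admits a compact Haskell sketch (ours, not the authors' notation): one combinator syntax, several semantics. Here a standard evaluator sits beside a trivial step-count "analysis" that stands in for a genuine abstract interpretation:

```haskell
-- A small categorical combinator language.
data Comb = Ident | Comb :.: Comb | Fst | Snd | Pair Comb Comb
          | ConstI Int | Add

data V = I Int | P V V deriving Show

-- Standard interpretation: combinators as functions on values.
eval :: Comb -> V -> V
eval Ident v      = v
eval (g :.: f) v  = eval g (eval f v)
eval Fst (P a _)  = a
eval Snd (P _ b)  = b
eval (Pair f g) v = P (eval f v) (eval g v)
eval (ConstI n) _ = I n
eval Add (P (I a) (I b)) = I (a + b)
eval _ _          = error "ill-typed combinator term"

-- Reinterpretation: the same syntax under a different semantics,
-- here a step count, illustrating how one framework hosts many analyses.
cost :: Comb -> Int
cost (g :.: f)  = 1 + cost g + cost f
cost (Pair f g) = 1 + cost f + cost g
cost _          = 1

-- Example: (\(x,y) -> x + y) as a combinator term.
addXY :: Comb
addXY = Add :.: Pair Fst Snd
-- eval addXY (P (I 2) (I 3)) ==> I 5;  cost addXY ==> 5
```

A real analysis would interpret the combinators over an abstract domain rather than a counter, and a code generator would interpret them as instruction sequences, but the structure of the dispatch is the same in each case.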


2003 ◽  
Vol 10 (2) ◽  
Author(s):  
Olivier Danvy ◽  
Pablo E. Martínez López

A partial evaluator is said to be Jones-optimal if the result of specializing a self-interpreter with respect to a source program is textually identical to the source program, modulo renaming. Jones optimality has already been obtained if the self-interpreter is untyped. If the self-interpreter is typed, however, residual programs are cluttered with type tags. To obtain the original source program, these tags must be removed.

A number of sophisticated solutions have already been proposed. We observe, however, that with a simple representation shift, ordinary partial evaluation is already Jones-optimal, modulo an encoding. The representation shift amounts to reading the type tags as constructors for higher-order abstract syntax. We substantiate our observation by considering a typed self-interpreter whose input syntax is higher-order. Specializing this interpreter with respect to a source program yields a residual program that is textually identical to the source program, modulo renaming.
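The representation shift is easy to picture in a typed metalanguage. Below is a Haskell sketch (ours; the paper works in its own setting) of a typed higher-order abstract syntax whose interpreter needs no universal value type, and therefore no type tags:

```haskell
{-# LANGUAGE GADTs #-}

-- Object-language syntax in typed higher-order abstract syntax (HOAS):
-- object-level binding is borrowed from the metalanguage, and object-level
-- types are tracked by the metalanguage's type system.
data Exp a where
  Val :: a -> Exp a                       -- embedded host value
  Lam :: (Exp a -> Exp b) -> Exp (a -> b) -- binder as a host function
  App :: Exp (a -> b) -> Exp a -> Exp b

-- Because terms are typed, evaluation needs no tags: 'eval' maps a term
-- of object type a straight to a host value of type a.
eval :: Exp a -> a
eval (Val x)   = x
eval (Lam f)   = \x -> eval (f (Val x))
eval (App f a) = eval f (eval a)

-- Example: the identity applied to 42 evaluates with no tagging overhead.
example :: Int
example = eval (App (Lam (\x -> x)) (Val 42))   -- ==> 42
```

Roughly, specializing such a tag-free interpreter with respect to a HOAS-encoded program leaves nothing to residualize but the program itself, which is the intuition behind the Jones-optimality observation above.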


2021 ◽  
Vol 178 (1-2) ◽  
pp. 1-30 ◽  
Author(s):  
Florian Bruse ◽  
Martin Lange ◽  
Etienne Lozes

Higher-Order Fixpoint Logic (HFL) is a modal specification language whose expressive power reaches far beyond that of Monadic Second-Order Logic, achieved through the incorporation of a typed λ-calculus into the modal μ-calculus. Its model-checking problem on finite transition systems is decidable, albeit of high complexity, namely k-EXPTIME-complete for formulas that use functions of type order at most k > 0. In this paper we present a fragment with a presumably easier model-checking problem. We show that so-called tail-recursive formulas of type order k can be model checked in (k − 1)-EXPSPACE, and also give matching lower bounds. This yields generic results for the complexity of bisimulation-invariant non-regular properties, as these can typically be defined in HFL.
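For reference, the complexity classes involved can be made precise with the usual tower-of-exponentials notation (standard definitions, not specific to this paper):

```latex
\exp_0(n) = n, \qquad \exp_{k+1}(n) = 2^{\exp_k(n)},
\qquad
k\text{-EXPTIME} = \bigcup_{c \ge 1} \mathrm{TIME}\bigl(\exp_k(n^c)\bigr),
\qquad
(k-1)\text{-EXPSPACE} = \bigcup_{c \ge 1} \mathrm{SPACE}\bigl(\exp_{k-1}(n^c)\bigr).
```

The tail-recursive fragment thus trades one exponential of time for space; for instance, order-1 tail-recursive formulas can be model checked in polynomial space.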


2006 ◽  
Vol 16 (4-5) ◽  
pp. 375-414 ◽  
Author(s):  
MATTHIAS BLUME ◽  
DAVID McALLESTER

Even in statically typed languages it is useful to have certain invariants checked dynamically. Findler and Felleisen gave an algorithm for dynamically checking expressive higher-order types called contracts. They did not, however, give a semantics of contracts. The lack of a semantics makes it impossible to define and prove soundness and completeness of the checking algorithm. (Given a semantics, a sound checker never reports violations that do not exist under that semantics; a complete checker is – in principle – able to find violations when violations exist.) Ideally, a semantics should capture what programmers intuitively feel is the meaning of a contract or otherwise clearly point out where intuition does not match reality. In this paper we give an interpretation of contracts for which we prove the Findler-Felleisen algorithm sound and (under reasonable assumptions) complete. While our semantics mostly matches intuition, it also exposes a problem with predicate contracts where an arguably more intuitive interpretation than ours would render the checking algorithm unsound. In our semantics we have to make use of a notion of safety (which we define in the paper) to avoid unsoundness. We are able to eliminate the “leakage” of safety into the semantics by changing the language, replacing the original version of unrestricted predicate contracts with a restricted form. The corresponding loss in expressive power can be recovered by making safety explicit as a contract. This can be done either in an ad hoc fashion or by including general recursive contracts. The addition of recursive contracts has far-reaching implications, deeply affecting the formulation of our model and requiring different techniques for proving soundness.
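The deferred-checking-with-blame idea is compact enough to sketch. The following Haskell illustration (ours, not the paper's formalization) shows the core of the Findler-Felleisen algorithm: flat predicate contracts are checked immediately, while a function contract defers checking to each call and swaps blame on the argument side:

```haskell
{-# LANGUAGE GADTs #-}

-- Contracts over values of type a: a flat predicate, or a function
-- contract built from a domain contract and a range contract.
data Contract a where
  Flat  :: (a -> Bool) -> Contract a
  (:->) :: Contract a -> Contract b -> Contract (a -> b)

-- 'assert pos neg c v' monitors v against c, blaming pos when the value
-- breaks the contract and neg when its context does.
assert :: String -> String -> Contract a -> a -> a
assert pos _   (Flat p) v
  | p v       = v
  | otherwise = error ("contract violation, blame " ++ pos)
assert pos neg (dom :-> rng) f =
  \x -> assert pos neg rng (f (assert neg pos dom x))  -- blame swap on the domain

-- Example: a positive -> positive contract on the successor function.
positive :: Contract Int
positive = Flat (> 0)

safeInc :: Int -> Int
safeInc = assert "server" "client" (positive :-> positive) (+ 1)
-- safeInc 3    ==> 4
-- safeInc (-1) ==> error: contract violation, blame client
```

The paper's soundness question is, roughly, whether such monitoring can ever blame a party whose code in fact respects the contract; the safety notion discussed above is what rules this out for predicate contracts.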

