A Simple Solution to Type Specialization

1998 ◽  
Vol 5 (1) ◽  
Author(s):  
Olivier Danvy

Partial evaluation specializes terms, but traditionally this specialization does not apply to the types of these terms. As a result, specializing, e.g., an interpreter written in a typed language, which requires a "universal" type to encode expressible values, yields residual programs with type tags all over. Neil Jones has stated that getting rid of these type tags was an open problem, despite possible solutions such as Torben Mogensen's "constructor specialization." To solve this problem, John Hughes has proposed a new paradigm for partial evaluation, "Type Specialization," based on type inference rather than on symbolic interpretation. Type Specialization is very elegant in principle, but it also appears non-trivial in practice. Stating the problem in terms of types instead of in terms of type encodings suggests a very simple type-directed solution: use a projection from the universal type to the specific type of the residual program. Standard partial evaluation then yields a residual program without type tags, simply and efficiently.
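To make the type-directed idea concrete, here is a minimal Haskell sketch (all names are ours, not the paper's): a universal type with tags, and a projection, indexed by a type representation, that strips the tags from values of known type.

```haskell
{-# LANGUAGE GADTs #-}

-- Universal type of expressible values, with type tags.
data Univ = VInt Int | VFun (Univ -> Univ)

-- Representations of residual-program types.
data Rep a where
  RInt :: Rep Int
  RFun :: Rep a -> Rep b -> Rep (a -> b)

-- Project a universal value down to a specific type: the
-- type-directed "untagging" step applied after ordinary specialization.
project :: Rep a -> Univ -> a
project RInt       (VInt n) = n
project (RFun a b) (VFun f) = project b . f . embed a
project _          _        = error "type mismatch"

-- Embed a typed value back into the universal type, needed at
-- negative (argument) positions.
embed :: Rep a -> a -> Univ
embed RInt       n = VInt n
embed (RFun a b) f = VFun (embed b . f . project a)
```

At function types the projection needs the converse embedding at argument positions, which is why project and embed are defined by mutual induction on the type representation.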

1995 ◽  
Vol 5 (2) ◽  
pp. 201-224 ◽  
Author(s):  
Tobias Nipkow ◽  
Christian Prehofer

We study the type inference problem for a system with type classes as in the functional programming language Haskell. Type classes are an extension of ML-style polymorphism with overloading. We generalize Milner's work on polymorphism by introducing a separate context constraining the type variables in a typing judgement. This leads to simple type inference systems and algorithms which closely resemble those for ML. In particular, we present a new unification algorithm which is an extension of syntactic unification with constraint solving. The existence of principal types follows from an analysis of this unification algorithm.
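As a small illustration of such constrained judgements (our example, not from the paper), the type below carries a class-constraint context on its type variable, and inference must solve such constraints alongside ordinary unification.

```haskell
-- The context "Eq a" is the separate constraint set on the type
-- variable a; inference for such systems, as studied in the paper,
-- extends syntactic unification with constraint solving.
member :: Eq a => a -> [a] -> Bool
member _ []       = False
member x (y : ys) = x == y || member x ys
```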


1997 ◽  
Vol 4 (46) ◽  
Author(s):  
Olivier Danvy ◽  
Kristoffer H. Rose

We demonstrate the usefulness of higher-order rewriting techniques for specializing programs, i.e., for partial evaluation. More precisely, we demonstrate how casting program specializers as combinatory reduction systems (CRSs) makes it possible to formalize the corresponding program transformations as meta-reductions, i.e., reductions in the internal "substitution calculus." For partial-evaluation problems, this means that instead of having to prove on a case-by-case basis that one's "two-level functions" operate properly, one can concisely formalize them as a combinatory reduction system and obtain as a corollary that static reduction does not go wrong and yields a well-formed residual program.

We have found that the CRS substitution calculus provides adequate expressive power to formalize partial evaluation: it provides sufficient termination strength while avoiding the need for additional restrictions, such as types, that would complicate the description unnecessarily (for our purpose). We also review the benefits and penalties entailed by more expressive higher-order formalisms. In addition, partial evaluation provides a number of examples of higher-order rewriting where being higher-order is a central (rather than an occasional or merely exotic) property. We illustrate this by demonstrating how standard but non-trivial partial-evaluation examples are handled with higher-order rewriting.
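For intuition, here is a tiny two-level function in Haskell (our own rendering, not the paper's CRS notation): the static exponent is consumed at specialization time, while the dynamic base becomes residual syntax.

```haskell
-- Residual-program syntax (hypothetical constructors).
data Exp = Var String | Lit Int | Mul Exp Exp
  deriving Show

-- A two-level power function: n is static, x is dynamic.
-- Static reduction unrolls the recursion, leaving only residual code.
power :: Int -> Exp -> Exp
power 0 _ = Lit 1
power n x = Mul x (power (n - 1) x)

-- power 3 (Var "x")
--   ==> Mul (Var "x") (Mul (Var "x") (Mul (Var "x") (Lit 1)))
```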


1991 ◽  
Vol 1 (1) ◽  
pp. 21-69 ◽  
Author(s):  
Carsten K. Gomard ◽  
Neil D. Jones

This article describes theoretical and practical aspects of an implemented self-applicable partial evaluator for the untyped lambda-calculus with constants and a fixed point operator. To the best of our knowledge, it is the first partial evaluator that is simultaneously higher-order, non-trivial, and self-applicable.

Partial evaluation produces a residual program from a source program and some of its input data. When given the remaining input data, the residual program yields the same result that the source program would when given all its input data. Our partial evaluator produces a residual lambda-expression given a source lambda-expression and the values of some of its free variables. By self-application, the partial evaluator can be used to compile and to generate stand-alone compilers from a denotational or interpretive specification of a programming language.

An essential component in our self-applicable partial evaluator is the use of explicit binding time information. We use this to annotate the source program, marking as residual the parts for which residual code is to be generated and marking as eliminable the parts that can be evaluated using only the data that is known during partial evaluation. We give a simple criterion, well-annotatedness, that can be used to check that the partial evaluator can handle the annotated higher-order programs without committing errors.

Our partial evaluator is simple, is implemented in a side-effect free subset of Scheme, and has been used to compile and to generate compilers and a compiler generator. In this article we examine two machine-generated compilers and find that their structures are surprisingly natural.
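A minimal sketch of such annotations in Haskell (constructor names are ours, not the paper's): each construct is marked either eliminable, to be reduced during partial evaluation, or residual, to be rebuilt as code.

```haskell
-- Two-level lambda-terms with explicit binding times.
data Ann
  = Var String
  | LamE String Ann   -- eliminable lambda: beta-reduced statically
  | AppE Ann Ann      -- eliminable application
  | LamR String Ann   -- residual lambda: emitted into the residual program
  | AppR Ann Ann      -- residual application

-- Well-annotatedness guarantees, e.g., that the operator of every AppE
-- reduces to a LamE, so static reduction cannot "go wrong".
```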


2003 ◽  
Vol 10 (20) ◽  
Author(s):  
Mads Sig Ager ◽  
Olivier Danvy ◽  
Henning Korsholm Rohde

We show how to obtain all of Knuth, Morris, and Pratt's linear-time string matcher by partial evaluation of a quadratic-time string matcher with respect to a pattern string. Although it has been known for 15 years how to obtain this linear matcher by partial evaluation of a quadratic one, how to obtain it in linear time has remained an open problem.

Obtaining a linear matcher by partial evaluation of a quadratic one is achieved by performing its backtracking at specialization time and memoizing its results. We show (1) how to rewrite the source matcher such that its static intermediate computations can be shared at specialization time and (2) how to extend the memoization capabilities of a partial evaluator to static functions. Such an extended partial evaluator, if its memoization is implemented efficiently, specializes the rewritten source matcher in linear time.

Supersedes BRICS-RS-03-11 and is superseded by BRICS-RS-04-40.
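The starting point can be rendered in Haskell as the following naive matcher (our rendition, not the paper's): it restarts one position to the right after every failed attempt, and it is this backtracking that specialization carries out and memoizes statically.

```haskell
import Data.List (isPrefixOf)

-- Quadratic-time matcher: try the pattern at every text position.
-- Specializing 'match pat' to a static pattern, with the backtracking
-- performed and memoized at specialization time, yields a KMP-style
-- linear-time matcher.
match :: String -> String -> Bool
match pat = go
  where
    go txt = pat `isPrefixOf` txt || (not (null txt) && go (tail txt))
```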


1996 ◽  
Vol 3 (13) ◽  
Author(s):  
Olivier Danvy ◽  
René Vestergaard

We illustrate a simple and effective solution to semantics-based compiling. Our solution is based on type-directed partial evaluation, where

- our compiler generator is expressed in a few lines, and is efficient;
- its input is a well-typed, purely functional definitional interpreter in the manner of denotational semantics;
- the output of the generated compiler is three-address code, in the fashion and efficiency of the Dragon Book;
- the generated compiler processes several hundred lines of source code per second.

The source language considered in this case study is imperative, block-structured, higher-order, call-by-value, allows subtyping, and obeys stack discipline. It is bigger than what is usually reported in the literature on semantics-based compiling and partial evaluation.

Our compiling technique uses the first Futamura projection, i.e., we compile a program by specializing a definitional interpreter with respect to that program. Our definitional interpreter is completely straightforward, stack-based, and in direct style. In particular, it requires no clever staging technique (currying, continuations, binding-time improvements, etc.), nor does it rely on any other framework (attribute grammars, annotations, etc.) than the typed lambda-calculus. In particular, it uses no other program analysis than traditional type inference. The overall simplicity and effectiveness of the approach has encouraged us to write this paper, to illustrate this genuine solution to denotational semantics-directed compilation, in the spirit of Scott and Strachey.
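Schematically, the first Futamura projection reads as follows (a hedged Haskell schema with invented names and placeholder definitions; 'specialize' stands for the type-directed partial evaluator):

```haskell
type Source = String   -- a source program (schematic)
type Input  = [Int]    -- its run-time input (schematic)
type Output = [Int]

-- The definitional interpreter and the partial evaluator, left
-- abstract here; the paper instantiates both concretely.
interpret :: Source -> Input -> Output
interpret = undefined

specialize :: (Source -> Input -> Output) -> Source -> (Input -> Output)
specialize = undefined

-- First Futamura projection: compiling a program p amounts to
-- specializing the interpreter with respect to p.
compile :: Source -> (Input -> Output)
compile p = specialize interpret p
```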


1998 ◽  
Vol 5 (2) ◽  
Author(s):  
Olivier Danvy

Lambda-lifting and lambda-dropping respectively transform a block-structured functional program into recursive equations and vice versa. Lambda-lifting has been known since the early 1980s, whereas lambda-dropping is more recent. Both are split into an analysis and a transformation. Published work, however, has concentrated only on the analysis part. We focus here on the transformation part, and more precisely on its formal correctness, which is an open problem. One of our two main theorems suggests defining extensional versions of lambda-lifting and lambda-dropping, which we visualize both using ML and using type-directed partial evaluation.

See revised version BRICS-RS-99-21.
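A small before/after example (ours) of the two transformations: lambda-lifting makes the free variable of a local function an explicit parameter and floats the function to the top level; lambda-dropping goes the other way.

```haskell
-- Block-structured: the local function 'step' closes over k.
scale :: Int -> [Int] -> [Int]
scale k = map step
  where
    step x = k * x

-- Lambda-lifted: 'step' becomes a top-level equation with k passed
-- explicitly; lambda-dropping reverses this.
stepL :: Int -> Int -> Int
stepL k x = k * x

scaleL :: Int -> [Int] -> [Int]
scaleL k = map (stepL k)
```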


1992 ◽  
Vol 21 (386) ◽  
Author(s):  
Jens Palsberg ◽  
Michael I. Schwartzbach

We present a polyvariant closure, safety, and binding time analysis for the untyped lambda calculus. The innovation is to analyze each abstraction afresh at all syntactic application points. This is achieved by a semantics-preserving program transformation followed by a novel monovariant analysis, expressed using type constraints. The constraints are solved in cubic time by a single fixed-point computation.

Safety analysis is aimed at determining if a term will cause an error during evaluation. We have recently proved that the monovariant safety analysis accepts strictly more terms than simple type inference. This paper demonstrates that the polyvariant transformation makes even more terms acceptable, even some without higher-order polymorphic types. Furthermore, polyvariant binding time analysis can improve the partial evaluators that base a polyvariant specialization on only monovariant binding time analysis.
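The spirit of the polyvariant transformation can be seen in a two-line Haskell example (ours, an analogy rather than the paper's construction): a lambda-bound function receives a single monotype under simple, monovariant inference, but duplicating the abstraction per application point lets each copy be analyzed afresh.

```haskell
-- Rejected by simple (monovariant) inference: the lambda-bound f gets
-- one monotype, so it cannot be applied at both Int and Bool.
-- bad = (\f -> (f (3 :: Int), f True)) (\x -> x)   -- type error

-- After the (conceptual) polyvariant transformation: one copy of the
-- abstraction per syntactic application point, each analyzed afresh.
good :: (Int, Bool)
good = (\f g -> (f (3 :: Int), g True)) (\x -> x) (\x -> x)
```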

