Mœbius: metaprogramming using contextual types: the stage where System F can pattern match on itself

2022 ◽  
Vol 6 (POPL) ◽  
pp. 1-27
Author(s):  
Junyoung Jang ◽  
Samuel Gélineau ◽  
Stefan Monnier ◽  
Brigitte Pientka

We describe the foundation of the metaprogramming language, Mœbius, which supports the generation of polymorphic code and, more importantly, the analysis of polymorphic code via pattern matching. Mœbius has two main ingredients: 1) we exploit contextual modal types to describe open code together with the context in which it is meaningful. In Mœbius, open code can depend on type and term variables (level 0) whose values are supplied at a later stage, as well as code variables (level 1) that stand for code templates supplied at a later stage. This leads to a multi-level modal lambda-calculus that supports System-F style polymorphism and forms the basis for polymorphic code generation. 2) we extend the multi-level modal lambda-calculus to support pattern matching on code. As pattern matching on polymorphic code may refine polymorphic type variables, we extend our type-theoretic foundation to generate and track typing constraints that arise. We also give an operational semantics and prove type preservation. Our multi-level modal foundation for Mœbius provides the appropriate abstractions for both generating and pattern matching on open code without committing to a concrete representation of variable binding and contexts. Hence, our work is a step towards building a general type-theoretic foundation for multi-staged metaprogramming that, on the one hand, enforces strong type guarantees and, on the other hand, makes it easy to generate and manipulate code. This will allow us to exploit the full potential of metaprogramming without sacrificing the reliability of and trust in the code we are producing and running.
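
A minimal OCaml sketch may help fix intuitions for "analysis of code via pattern matching" (this is our illustration, not Mœbius's calculus or syntax: the code type and the simplify rewrite are hypothetical, and a first-order tree stands in for contextual, typed open code):

```ocaml
(* A toy model of open code: a first-order syntax tree whose free
   variables come from an ambient context. Moebius instead types such
   code with contextual modal types, tracking its context and level. *)
type code =
  | Var of string            (* a variable supplied at a later stage *)
  | Lam of string * code
  | App of code * code
  | Lit of int
  | Add of code * code

(* Analysis of code by pattern matching: rewrite e + 0 to e.
   In Moebius the analogous match is typed, and matching on
   polymorphic code may refine type variables via constraints. *)
let rec simplify (c : code) : code =
  match c with
  | Add (e, Lit 0) | Add (Lit 0, e) -> simplify e
  | Add (e1, e2) -> Add (simplify e1, simplify e2)
  | App (e1, e2) -> App (simplify e1, simplify e2)
  | Lam (x, e) -> Lam (x, simplify e)
  | Var _ | Lit _ -> c
```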

1997 ◽  
Vol 4 (51) ◽  
Author(s):  
James McKinna ◽  
Robert Pollack

"This paper is about our hobby." That is the first sentence of [MP93], the first report on our formal development of lambda calculus and type theory, written in autumn 1992. We have continued to pursue this hobby on and off ever since, and have developed a substantial body of formal knowledge, including Church-Rosser and standardization<br />theorems for beta reduction, and the basic theory of<br />Pure Type Systems (PTS) leading to the strengthening theorem and type checking algorithms for PTS. Some of this work is reported in [MP93, vBJMP94, Pol94b, Pol95]. In the present paper we survey this work, including some new proofs, and point out what we feel has been learned about the general issues of formalizing mathematics. On the technical side, we describe an abstract, and simplified, proof of standardization for beta reduction, not previously published, that does<br />not mention redex positions or residuals. On the general issues, we emphasize the search for formal definitions that are convenient for formal proof and convincingly represent the intended informal concepts. The LEGO Proof Development System [LP92] was used to check the work in an implementation of the Extended Calculus of Constructions<br />(ECC) with inductive types [Luo94]. LEGO is a refinement style<br />proof checker, publicly available by ftp and WWW, with a User's Manual [LP92] and a large collection of examples. Section 1.3 contains information on accessing the formal development described in this paper. Other interesting examples formalized in LEGO include program specification and data refinement [Luo91], strong normalization of System F [Alt93], synthetic domain theory [Reu95, Reu96], and operational semantics for imperative programs [Sch97].


2008 ◽  
Vol 18 (3) ◽  
pp. 501-553 ◽  
Author(s):  
DAVID SABEL ◽  
MANFRED SCHMIDT-SCHAUSS

We present a higher-order call-by-need lambda calculus enriched with constructors, case expressions, recursive letrec expressions, a seq operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal-order reduction. This strategy can be made fair by adding resources for book-keeping. As equational theory, we use contextual equivalence (that is, terms are equal if, when plugged into any program context, their termination behaviour is the same), in which we use a combination of may- and must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations with respect to normal-order reduction are the same as for fair normal-order reduction. We develop a number of proof tools for proving correctness of program transformations. In particular, we prove a context lemma for both may- and must-convergence that restricts the number of contexts that need to be examined for proving contextual equivalence. Combining this with so-called complete sets of commuting and forking diagrams, we show that all the deterministic reduction rules and some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal-order reduction. The structure of the ordering ≤c is also analysed, and we show that Ω is not a least element and that ≤c already implies contextual equivalence with respect to may-convergence.
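
For orientation, the term language can be pictured as a call-by-need core extended with seq and amb; a rough OCaml rendering of the grammar (the constructor names are ours, for illustration only):

```ocaml
(* Rough grammar of the calculus: call-by-need lambda calculus with
   constructors, case, recursive letrec, seq, and the locally
   bottom-avoiding non-deterministic choice amb. *)
type expr =
  | Var of string
  | Lam of string * expr
  | App of expr * expr
  | Con of string * expr list                           (* constructor application  *)
  | Case of expr * (string * string list * expr) list   (* scrutinee and branches   *)
  | Letrec of (string * expr) list * expr               (* recursive bindings       *)
  | Seq of expr * expr                                  (* force left, then right   *)
  | Amb of expr * expr                                  (* locally bottom-avoiding
                                                           non-deterministic choice *)
```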


2003 ◽  
Vol 10 (25) ◽  
Author(s):  
Dariusz Biernacki ◽  
Olivier Danvy

Starting from a continuation-based interpreter for a simple logic programming language, propositional Prolog with cut, we derive the corresponding logic engine in the form of an abstract machine. The derivation originates in previous work (our article at PPDP 2003), where it was applied to the lambda-calculus. The key transformation here is Reynolds's defunctionalization, which transforms a tail-recursive, continuation-passing interpreter into a transition system, i.e., an abstract machine. Similar denotational and operational semantics were studied by de Bruin and de Vink in previous work (their article at TAPSOFT 1989), and we compare their study with our derivation. Additionally, we present a direct-style interpreter of propositional Prolog expressed with control operators for delimited continuations.

Superseded by BRICS-RS-04-5.
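
Reynolds's defunctionalization can be seen in miniature (an OCaml sketch of the transformation in general, not of the Prolog interpreter itself): each continuation lambda of a CPS interpreter becomes a constructor of a first-order datatype, and continuation application becomes an apply function, yielding a transition system.

```ocaml
type exp = Lit of int | Add of exp * exp

(* Higher-order: a tail-recursive, continuation-passing evaluator. *)
let rec eval_cps (e : exp) (k : int -> int) : int =
  match e with
  | Lit n -> k n
  | Add (e1, e2) ->
      eval_cps e1 (fun v1 -> eval_cps e2 (fun v2 -> k (v1 + v2)))

(* Defunctionalized: the two continuation lambdas above become the
   constructors AddR and AddL; eval/apply form an abstract machine. *)
type cont =
  | Halt
  | AddR of exp * cont      (* right operand still to evaluate *)
  | AddL of int * cont      (* left operand already evaluated  *)

let rec eval (e : exp) (k : cont) : int =
  match e with
  | Lit n -> apply k n
  | Add (e1, e2) -> eval e1 (AddR (e2, k))

and apply (k : cont) (v : int) : int =
  match k with
  | Halt -> v
  | AddR (e2, k) -> eval e2 (AddL (v, k))
  | AddL (v1, k) -> apply k (v1 + v)

let _ = eval (Add (Lit 1, Add (Lit 2, Lit 3))) Halt   (* evaluates to 6 *)
```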


1996 ◽  
Vol 6 (3) ◽  
pp. 393-418 ◽  
Author(s):  
Robert Harper ◽  
Mark Lillibridge

We study the operational semantics of an extension of Girard's System Fω with two control operators: an abort operation that abandons the current control context, and a callcc operation that captures the current control context. Two classes of operational semantics are considered, each with a call-by-value and a call-by-name variant, differing in their treatment of polymorphic abstraction and instantiation. Under the standard semantics, polymorphic abstractions are values and polymorphic instantiation is a significant computation step; under the ML-like semantics evaluation proceeds beneath polymorphic abstractions and polymorphic instantiation is computationally insignificant. Compositional, type-preserving continuation-passing style (CPS) transformation algorithms are given for the standard semantics, resulting in terms on which all four evaluation strategies coincide. This has as a corollary the soundness and termination of well-typed programs under the standard evaluation strategies. In contrast, such results are obtained for the call-by-value ML-like strategy only for a restricted sub-language in which constructor abstractions are limited to values. The ML-like call-by-name semantics is indistinguishable from the standard call-by-name semantics when attention is limited to complete programs.
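
Under the standard call-by-value semantics, where polymorphic abstractions are values and instantiation is a real step, a CPS transformation treats Λ analogously to λ. Schematically (a textbook-style presentation of such a transform, not the paper's exact definitions):

```latex
\begin{align*}
\overline{x}                 &= \lambda k.\; k\,x\\
\overline{\lambda x.\,e}     &= \lambda k.\; k\,(\lambda x.\,\overline{e})\\
\overline{e_1\,e_2}          &= \lambda k.\; \overline{e_1}\,(\lambda f.\;
                                 \overline{e_2}\,(\lambda v.\; f\,v\,k))\\
\overline{\Lambda\alpha.\,e} &= \lambda k.\; k\,(\Lambda\alpha.\,\overline{e})\\
\overline{e\,[A]}            &= \lambda k.\; \overline{e}\,(\lambda f.\; f\,[A]\,k)
\end{align*}
```

Under the ML-like call-by-value semantics the Λ-clause is not available in this form, which is why the corresponding results hold only for the sub-language restricting constructor abstractions to values.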


2018 ◽  
Vol 28 (9) ◽  
pp. 1606-1638 ◽  
Author(s):  
ANDREW CAVE ◽  
BRIGITTE PIENTKA

Proofs with logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe two case studies using the proof environment Beluga: first, we explain the mechanization of the weak normalization proof for the simply typed lambda-calculus; second, we outline how to mechanize the completeness proof of algorithmic equality for simply typed lambda-terms, where we reason about logically equivalent terms. The development of these proofs in Beluga relies on three key ingredients: (1) we encode lambda-terms together with their typing rules, operational semantics, and algorithmic and declarative equality using higher-order abstract syntax (HOAS), thereby avoiding the need to manipulate and deal with binders, renaming and substitutions; (2) we take advantage of Beluga's support for representing derivations that depend on assumptions and first-class contexts to directly state inductive properties such as logical relations and inductive proofs; (3) we exploit Beluga's rich equational theory for simultaneous substitutions; as a consequence, users do not need to establish and subsequently use substitution properties, and proofs are not cluttered with references to them. We believe these examples demonstrate that Beluga provides the right level of abstractions and primitives to mechanize challenging proofs using HOAS encodings. It may also serve as a valuable benchmark for other proof environments.
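
The first ingredient, HOAS, reuses the meta-language's binders for object-language binders. A minimal OCaml rendering of the flavor (Beluga's LF-based encodings are far richer; this sketch is ours):

```ocaml
(* HOAS: the object-level lambda reuses host-language functions as
   binders, so renaming and substitution come from the host for free. *)
type tm =
  | Lam of (tm -> tm)
  | App of tm * tm

(* One beta step at the root: substitution is just host application. *)
let beta (t : tm) : tm option =
  match t with
  | App (Lam f, a) -> Some (f a)
  | _ -> None

(* Example: (\x. x) (\y. y) steps to \y. y. *)
let _ = beta (App (Lam (fun x -> x), Lam (fun y -> y)))
```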


2021 ◽  
Vol 5 (ICFP) ◽  
pp. 1-30
Author(s):  
Taro Sekiyama ◽  
Takeshi Tsukada

Transformation of programs into continuation-passing style (CPS) reveals the notion of continuations, enabling many applications such as control operators and intermediate representations in compilers. Although type preservation makes CPS transformation more beneficial, achieving type-preserving CPS transformation for implicit polymorphism with call-by-value (CBV) semantics is known to be challenging. We identify the difficulty as a problem we call scope intrusion. To address this problem, we propose a new CPS target language, Λopen, that supports two additional constructs for polymorphism: one only binds and the other only generalizes type variables. Unfortunately, their unrestricted use makes Λopen unsafe due to undesired generalization of type variables. We thus equip Λopen with affine types to allow only the type-safe generalization. We then define a CPS transformation from Curry-style CBV System F to type-safe Λopen and prove that the transformation is meaning- and type-preserving. We also study parametricity of Λopen, since it is a fundamental property of polymorphic languages and plays a key role in applications of CPS transformation. To establish parametricity, we construct a parametric, step-indexed Kripke logical relation for Λopen and prove that it satisfies the Fundamental Property as well as soundness with respect to contextual equivalence.
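
The shape of the problem can be glimpsed in the CPS image of a polymorphic value (a rough rendering on our part, not the paper's definitions). A source term e of type ∀α. α → α should, under a standard CBV CPS typing, become a computation along the lines of

```latex
\overline{e} \;:\; \bigl(\,(\forall\alpha.\;\alpha \to (\alpha \to R) \to R)\to R\,\bigr)\to R,
```

where the quantifier over α must now also cover the continuation argument of type α → R. With Curry-style (implicit) polymorphism there is no Λ to mark where α is bound versus where it is generalized, so the scope of α leaks into continuations; this, roughly, is the scope intrusion that Λopen's separate binding and generalization constructs are designed to untangle.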


1992 ◽  
Vol 21 (389) ◽  
Author(s):  
Jens Palsberg ◽  
Michael I. Schwartzbach

Safety analysis is an algorithm for determining if a term in the untyped lambda calculus with constants is safe, i.e., if it does not cause an error during evaluation. This ambition is also shared by algorithms for type inference. Safety analysis and type inference are based on rather different perspectives, however: safety analysis is based on closure analysis, whereas type inference attempts to assign a type to all subterms.

In this paper we prove that safety analysis is sound, relative to both a strict and a lazy operational semantics, and superior to type inference, in the sense that it accepts strictly more safe lambda terms.

The latter result may indicate the relative potentials of static program analyses based on closure analysis and on type inference, respectively.
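
The gap witnessed by "strictly more" can be illustrated with a classic term (our example, in the spirit of the result): (λx. x x)(λy. y) evaluates without error to λy. y, so it is safe, yet the self-application x x has no simple type, so type inference rejects it. Encoded over an untyped-term datatype in OCaml:

```ocaml
(* Untyped lambda terms with integer constants. *)
type term =
  | Var of string
  | Lam of string * term
  | App of term * term
  | Const of int

(* (fun x -> x x) (fun y -> y): safe (it runs without error), and a
   closure-analysis-based safety analysis can accept it, but simple
   type inference fails on the self-application x x. *)
let safe_but_untypable =
  App (Lam ("x", App (Var "x", Var "x")),
       Lam ("y", Var "y"))
```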


2002 ◽  
Vol 9 (49) ◽  
Author(s):  
Mikkel Nygaard ◽  
Glynn Winskel

A small but powerful language for higher-order nondeterministic processes is introduced. Its roots in a linear domain theory for concurrency are sketched, though for the most part it lends itself to a more operational account. The language can be viewed as an extension of the lambda calculus with a "prefixed sum", in which types express the form of computation path of which a process is capable. Its operational semantics, bisimulation, congruence properties and expressive power are explored; in particular, it is shown how it can directly encode process languages such as CCS, CCS with process passing, and mobile ambients with public names.


2004 ◽  
Vol 11 (26) ◽  
Author(s):  
Olivier Danvy ◽  
Lasse R. Nielsen

The evaluation function of a reduction semantics (i.e., a small-step operational semantics with an explicit representation of the reduction context) is canonically defined as the transitive closure of (1) decomposing a term into a reduction context and a redex, (2) contracting this redex, and (3) plugging the contractum in the context. Directly implementing this evaluation function therefore yields an interpreter with a worst-case overhead, for each step, that is linear in the size of the input term.

We present sufficient conditions over the constituents of a reduction semantics to circumvent this overhead, by replacing the composition of (3) plugging and (1) decomposing by a single "refocus" function mapping a contractum and a context into a new context and a new redex, if any. We also show how to construct such a refocus function, we prove the correctness of this construction, and we analyze the complexity of the resulting refocus function.

The refocused evaluation function of a reduction semantics implements the transitive closure of the refocus function, i.e., a "pre-abstract machine." Fusing the refocus function with the trampoline function computing the transitive closure gives a state-transition function, i.e., an abstract machine. This abstract machine cleanly separates the transitions implementing the congruence rules of the reduction semantics from the transitions implementing its reduction rules. The construction of a refocus function therefore shows how to mechanically obtain an abstract machine from a reduction semantics, which was previously done on a case-by-case basis.

We illustrate refocusing by mechanically constructing Felleisen et al.'s CK machine from a call-by-value reduction semantics of the lambda-calculus, and by constructing a substitution-based version of Krivine's machine from a call-by-name reduction semantics of the lambda-calculus. We also mechanically construct three one-pass CPS transformers from three quadratic context-based CPS transformers for the lambda-calculus.
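
The decompose/contract/plug loop and the refocused short-cut can be made concrete on a tiny language (an OCaml sketch of the general recipe, not the paper's development; decompose, plug, and the two evaluators are our names):

```ocaml
type term = Lit of int | Add of term * term

(* Reduction contexts for left-to-right evaluation of Add. *)
type ctx =
  | Hole
  | AddL of ctx * term   (* evaluating the left operand; right pending *)
  | AddR of int * ctx    (* left operand already a value               *)

(* (3) Plug a term back into a context. *)
let rec plug (c : ctx) (t : term) : term =
  match c with
  | Hole -> t
  | AddL (c, r) -> plug c (Add (t, r))
  | AddR (n, c) -> plug c (Add (Lit n, t))

(* (1) Decompose into a value, or a redex together with its context. *)
type decomposition = Value of int | Redex of int * int * ctx

let rec decompose (t : term) (c : ctx) : decomposition =
  match t with
  | Add (l, r) -> decompose l (AddL (c, r))
  | Lit n ->
      (match c with
       | Hole -> Value n
       | AddL (c, r) -> decompose r (AddR (n, c))
       | AddR (m, c) -> Redex (m, n, c))

(* Naive evaluation: after (2) contracting, plug and re-decompose from
   the root -- the per-step overhead the paper sets out to remove. *)
let rec eval_naive (t : term) : int =
  match decompose t Hole with
  | Value n -> n
  | Redex (m, n, c) -> eval_naive (plug c (Lit (m + n)))

(* Refocused evaluation: refocus = decompose from the contractum in
   its context; plugging and re-decomposition are never performed. *)
let rec eval_refocused (t : term) (c : ctx) : int =
  match decompose t c with
  | Value n -> n
  | Redex (m, n, c) -> eval_refocused (Lit (m + n)) c

let _ = eval_refocused (Add (Lit 1, Add (Lit 2, Lit 3))) Hole  (* = 6 *)
```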

