A call-by-need lambda calculus with locally bottom-avoiding choice: context lemma and correctness of transformations

2008 ◽  
Vol 18 (3) ◽  
pp. 501-553 ◽  
Author(s):  
DAVID SABEL ◽  
MANFRED SCHMIDT-SCHAUSS

We present a higher-order call-by-need lambda calculus enriched with constructors, case expressions, recursive letrec expressions, a seq operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal-order reduction. This strategy can be made fair by adding resources for book-keeping. As the equational theory, we use contextual equivalence (that is, terms are equal if, when plugged into any program context, their termination behaviour is the same), based on a combination of may- and must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations with respect to normal-order reduction are the same as for fair normal-order reduction. We develop a number of proof tools for proving correctness of program transformations. In particular, we prove a context lemma for both may- and must-convergence that restricts the number of contexts that need to be examined for proving contextual equivalence. Combining this with so-called complete sets of commuting and forking diagrams, we show that all the deterministic reduction rules and some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal-order reduction. The structure of the ordering ≤c is also analysed: we show that Ω is not a least element and that ≤c already implies contextual equivalence with respect to may-convergence.
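To make the bottom-avoiding reading of amb concrete, here is a minimal Haskell sketch, not the authors' calculus: the toy syntax below omits constructors, case and letrec, and amb is modelled as a race between step-indexed computations. All names (Expr, Approx, ambRun) are illustrative.

```haskell
import Control.Applicative ((<|>))

-- Toy syntax in the spirit of the paper's language (constructors, case
-- and letrec omitted for brevity).
data Expr
  = Var String
  | Lam String Expr
  | App Expr Expr
  | Seq Expr Expr      -- evaluate the left argument first, then the right
  | Amb Expr Expr      -- locally bottom-avoiding choice
  deriving Show

-- A step-indexed computation: Just v if it converges within n steps.
type Approx a = Int -> Maybe a

-- The essence of bottom-avoidance as a race: the choice converges
-- whenever at least one argument converges (given enough steps).
ambRun :: Approx a -> Approx a -> Approx a
ambRun f g n = f n <|> g n
```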

2018 ◽  
Vol 28 (9) ◽  
pp. 1606-1638 ◽  
Author(s):  
ANDREW CAVE ◽  
BRIGITTE PIENTKA

Proofs with logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe two case studies using the proof environment Beluga: first, we explain the mechanization of the weak normalization proof for the simply typed lambda-calculus; second, we outline how to mechanize the completeness proof of algorithmic equality for simply typed lambda-terms where we reason about logically equivalent terms. The development of these proofs in Beluga relies on three key ingredients: (1) we encode lambda-terms together with their typing rules, operational semantics, algorithmic and declarative equality using higher-order abstract syntax (HOAS), thereby avoiding the need to deal with binders, renaming and substitutions; (2) we take advantage of Beluga's support for representing derivations that depend on assumptions and first-class contexts to directly state inductive properties such as logical relations and inductive proofs; (3) we exploit Beluga's rich equational theory for simultaneous substitutions; as a consequence, users do not need to establish and subsequently use substitution properties, and proofs are not cluttered with references to them. We believe these examples demonstrate that Beluga provides the right level of abstractions and primitives to mechanize challenging proofs using HOAS encodings. It may also serve as a valuable benchmark for other proof environments.
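As a hedged illustration of the HOAS idea the authors rely on (Beluga itself is based on the logical framework LF and contextual types; the Haskell fragment below is only an analogy): binding is represented by a meta-level function, so renaming and substitution come for free.

```haskell
-- HOAS: object-level binding is a meta-level function, so alpha-renaming
-- and substitution are inherited from the host language.
data Tm = Lam (Tm -> Tm) | App Tm Tm

-- Weak head normalization: beta-reduction is just function application.
whnf :: Tm -> Tm
whnf (App t u) = case whnf t of
  Lam f -> whnf (f u)        -- substitution for free
  t'    -> App t' u
whnf t = t

-- e.g. whnf (App (Lam id) (Lam id)) reduces to the identity.
```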


Author(s):  
ÁLVARO GARCÍA-PÉREZ ◽  
PABLO NOGUEIRA

We exploit the idea of proving properties of an abstract machine by using a corresponding semantic artefact better suited to their proof. The abstract machine is an improved version of Pierre Crégut’s full-reducing Krivine machine KN. The original version works with closed terms of the pure lambda calculus with de Bruijn indices. The improved version reduces in similar fashion but works on closures where terms may be open. The corresponding semantic artefact is a structural operational semantics of a calculus of closures whose reduction relation is purposely a reduction strategy. As shown in previous work, improved KN and the structural operational semantics ‘correspond’, i.e. both artefacts realise the same reduction strategy. In this paper, we prove in the calculus of closures that the reduction strategy simulates in lockstep (at every reduction step) the complete and standard normal-order strategy (i.e. leftmost reduction to normal form) of the pure lambda calculus. The simulation is witnessed by a substitution function from closures of the closure calculus to pure terms of the pure lambda calculus. Thus, KN also simulates normal-order in lockstep by the correspondence. This result is stronger than the known proof that KN is complete, for in the pure lambda calculus there are complete but non-standard strategies. The lockstep simulation proof consists of straightforward structural inductions, thanks to three properties of the closure calculus we call ‘index alignment’, ‘parameters-as-levels’ and ‘balanced derivations’. The first two come from KN. Thanks to these properties, a proof in a calculus of closures involving de Bruijn indices and de Bruijn levels is unproblematic. There is no lexical adjustment at binding lookup, on-the-fly alpha-conversion or recursive traversals of the term to deal with bound and free variables, as in other calculi. This paper contributes to Biernacka and Danvy’s framework for environment machines a full-reducing open-terms closure calculus, its corresponding abstract machine, and a lockstep simulation proof via a substitution function.
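For readers unfamiliar with the machine, here is the standard weak call-by-name Krivine machine over de Bruijn terms and closures: a minimal Haskell sketch of the core that Crégut's KN extends to full reduction on open terms. Names are illustrative, not the paper's.

```haskell
-- De Bruijn terms and the standard weak call-by-name Krivine machine.
data Term    = Ix Int | Lam Term | App Term Term deriving Show
data Closure = Closure Term [Closure] deriving Show   -- term + environment
type Stack   = [Closure]

-- Run the machine to a weak head normal form (Nothing on a dangling index).
krivine :: Closure -> Stack -> Maybe Closure
krivine (Closure (Ix n) env) s = case drop n env of
  (c:_) -> krivine c s                          -- fetch the variable's closure
  []    -> Nothing                              -- open term: unbound index
krivine (Closure (App t u) env) s =
  krivine (Closure t env) (Closure u env : s)   -- push the argument closure
krivine (Closure (Lam t) env) (c:s) =
  krivine (Closure t (c:env)) s                 -- pop it into the environment
krivine c [] = Just c                           -- an abstraction with no arguments

-- e.g. krivine (Closure (App (Lam (Ix 0)) (Lam (Ix 0))) []) []
--        == Just (Closure (Lam (Ix 0)) [])
```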


2005 ◽  
Vol 12 (12) ◽  
Author(s):  
Malgorzata Biernacka ◽  
Olivier Danvy ◽  
Kristian Støvring

We formalize two proofs of weak head normalization for the simply typed lambda-calculus in first-order minimal logic: one for normal-order reduction, and one for applicative-order reduction in the object language. Subsequently we use Kreisel's modified realizability to extract evaluation algorithms from the proofs, following Berger; the proofs are based on Tait-style reducibility predicates, and hence the extracted algorithms are instances of (weak head) normalization by evaluation, as already identified by Coquand and Dybjer.
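The following is a minimal Haskell sketch of the normalization-by-evaluation shape that such extracted algorithms take, assuming closed, well-typed input terms; it is illustrative only, not the extracted algorithm itself.

```haskell
-- Terms with de Bruijn indices; Val is the semantic domain.
data Tm  = Var Int | Lam Tm | App Tm Tm deriving Show
data Val = VLam (Val -> Val)

-- Evaluate a closed term in an environment of values.
eval :: [Val] -> Tm -> Val
eval env (Var i)   = env !! i
eval env (Lam t)   = VLam (\v -> eval (v : env) t)
eval env (App t u) = let VLam f = eval env t in f (eval env u)

-- For closed terms of the pure calculus, every weak head normal form is
-- an abstraction, so forcing the evaluator to a VLam witnesses weak head
-- normalization.
reachesWhnf :: Tm -> Bool
reachesWhnf t = case eval [] t of VLam _ -> True
```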


2018 ◽  
Vol 29 (8) ◽  
pp. 1309-1343 ◽  
Author(s):  
ALBERTO MOMIGLIANO ◽  
BRIGITTE PIENTKA ◽  
DAVID THIBODEAU

Bisimulation proofs play a central role in programming languages in establishing rich properties such as contextual equivalence. They are also challenging to mechanize, since they require a combination of inductive and coinductive reasoning on open terms. In this paper, we describe mechanizing the property that similarity in the call-by-name lambda calculus is a pre-congruence using Howe’s method in the Beluga formal reasoning system. The development relies on three key ingredients: (1) we give a higher-order abstract syntax (HOAS) encoding of lambda terms together with their operational semantics as intrinsically typed terms, thereby not only avoiding the need to deal with binders, renaming and substitutions, but also keeping all typing invariants implicit; (2) we take advantage of Beluga’s support for representing open terms using built-in contexts and simultaneous substitutions: this allows us to state central definitions such as open simulation directly, without resorting to the usual inductive closure operation, and to encode very elegantly notoriously painful proofs such as the substitutivity of the Howe relation; (3) we exploit the possibility of reasoning by coinduction in Beluga’s reasoning logic. The end result is succinct and elegant, thanks to the high-level abstractions and primitives Beluga provides. We believe that this mechanization is a significant example that illustrates Beluga’s strength at mechanizing challenging (co)inductive proofs using HOAS encodings.
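Ingredient (1), intrinsically typed terms, can be illustrated in Haskell with a GADT indexed by a context and a type (Beluga achieves the same with LF-style HOAS; this fragment is only an analogy): ill-typed terms are simply unrepresentable, so typing invariants need no separate proof.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}
import Data.Kind (Type)

data Ty  = Base | Arr Ty Ty          -- object-level types
data Ctx = Nil | Cons Ctx Ty         -- typing contexts

-- De Bruijn indices, indexed by the context they point into.
data Ix :: Ctx -> Ty -> Type where
  Here  :: Ix ('Cons g a) a
  There :: Ix g a -> Ix ('Cons g b) a

-- Intrinsically typed terms: only well-typed syntax is representable.
data Tm :: Ctx -> Ty -> Type where
  Var :: Ix g a -> Tm g a
  Lam :: Tm ('Cons g a) b -> Tm g ('Arr a b)
  App :: Tm g ('Arr a b) -> Tm g a -> Tm g b

-- e.g. the identity function at Base:
idBase :: Tm 'Nil ('Arr 'Base 'Base)
idBase = Lam (Var Here)
```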


2003 ◽  
Vol 10 (25) ◽  
Author(s):  
Dariusz Biernacki ◽  
Olivier Danvy

Starting from a continuation-based interpreter for a simple logic programming language, propositional Prolog with cut, we derive the corresponding logic engine in the form of an abstract machine. The derivation originates in previous work (our article at PPDP 2003), where it was applied to the lambda-calculus. The key transformation here is Reynolds's defunctionalization, which transforms a tail-recursive, continuation-passing interpreter into a transition system, i.e., an abstract machine. Similar denotational and operational semantics were studied by de Bruin and de Vink in previous work (their article at TAPSOFT 1989), and we compare their study with our derivation. Additionally, we present a direct-style interpreter of propositional Prolog expressed with control operators for delimited continuations.

Superseded by BRICS-RS-04-5.
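A hedged Haskell sketch of the two-continuation model underlying such interpreters (the names and the single-clause program representation are illustrative simplifications, not the paper's code): a goal runs with a success continuation and a failure continuation, and cut discards the backtracking alternatives accumulated since its clause was entered.

```haskell
-- Propositional goals; a program maps each atom to one defining body.
data Goal = True' | Fail | Atom String | Conj Goal Goal | Disj Goal Goal | Cut
type Prog = String -> Goal

type Fk = Bool        -- failure continuation: the answer if we backtrack here
type Sk = Fk -> Bool  -- success continuation, resumed with a backtrack point

-- solve p g sk fkCut fk: fk is the current backtrack point and fkCut the
-- one in force when the enclosing clause was entered (the cut barrier).
solve :: Prog -> Goal -> Sk -> Fk -> Fk -> Bool
solve _ True'      sk _     fk = sk fk
solve _ Fail       _  _     fk = fk
solve _ Cut        sk fkCut _  = sk fkCut            -- prune back to clause entry
solve p (Conj a b) sk fkCut fk =
  solve p a (\fk' -> solve p b sk fkCut fk') fkCut fk
solve p (Disj a b) sk fkCut fk =
  solve p a sk fkCut (solve p b sk fkCut fk)         -- right branch = backtrack point
solve p (Atom q)   sk _     fk =
  solve p (p q) sk fk fk                             -- entering a clause resets the barrier

run :: Prog -> Goal -> Bool
run p g = solve p g (\_ -> True) False False

-- e.g. with prog "p" = Disj (Conj Cut Fail) True':
--   run prog (Atom "p") == False, since cut prunes the True' alternative;
--   replacing Cut by True' makes the query succeed.
```

Defunctionalizing the two continuation spaces (Sk and Fk) into first-order data is exactly the step that turns such an interpreter into an abstract machine's transition system.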


2017 ◽  
Vol 18 (1) ◽  
pp. 1-29
Author(s):  
WŁODZIMIERZ DRABENT

This paper presents an example of formal reasoning about the semantics of a Prolog program of practical importance (the SAT solver of Howe and King). The program is treated as a definite clause logic program with added control. The logic program is constructed by means of stepwise refinement, hand in hand with its correctness and completeness proofs. The proofs are declarative – they do not refer to any operational semantics. Each step of the logic program construction follows a systematic approach to constructing programs which are provably correct and complete. We also prove that the correctness and completeness of the logic program are preserved in the final Prolog program. Additionally, we prove termination, occur-check freedom and non-floundering.

Our example shows how dealing with “logic” and with “control” can be separated. Most of the proofs can be done at the “logic” level, abstracting from any operational semantics.

The example employs approximate specifications; they are crucial in simplifying reasoning about logic programs. It also shows that the paradigm of semantics-preserving program transformations may not be sufficient. We suggest considering transformations which preserve correctness and completeness with respect to an approximate specification.


1992 ◽  
Vol 21 (389) ◽  
Author(s):  
Jens Palsberg ◽  
Michael I. Schwartzbach

Safety analysis is an algorithm for determining if a term in the untyped lambda calculus with constants is safe, i.e., if it does not cause an error during evaluation. This ambition is also shared by algorithms for type inference. Safety analysis and type inference are based on rather different perspectives, however. Safety analysis is based on closure analysis, whereas type inference attempts to assign a type to all subterms.

In this paper we prove that safety analysis is sound, relative to both a strict and a lazy operational semantics, and superior to type inference, in the sense that it accepts strictly more safe lambda terms.

The latter result may indicate the relative potentials of static program analyses based on closure analysis and type inference, respectively.
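The gap between the two analyses can be seen on self-application, which evaluates safely but is rejected by simple type inference because x would need both an arrow type and its own argument type. The illustrative evaluator below (substitution is not capture-avoiding, which is harmless for this closed example) shows (\x. x x)(\y. y) running without error.

```haskell
-- Untyped terms and naive call-by-value evaluation.
data Tm = Var String | Lam String Tm | App Tm Tm deriving Show

subst :: String -> Tm -> Tm -> Tm
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App t u) = App (subst x s t) (subst x s u)

eval :: Tm -> Maybe Tm            -- Nothing would signal a stuck (unsafe) state
eval (App t u) = do
  Lam x b <- eval t
  v       <- eval u
  eval (subst x v b)
eval t = Just t

-- Self-application is safe but untypable by simple type inference:
-- eval (App (Lam "x" (App (Var "x") (Var "x"))) (Lam "y" (Var "y")))
--   == Just (Lam "y" (Var "y"))
```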


2002 ◽  
Vol 9 (49) ◽  
Author(s):  
Mikkel Nygaard ◽  
Glynn Winskel

A small but powerful language for higher-order nondeterministic processes is introduced. Its roots in a linear domain theory for concurrency are sketched, though for the most part it lends itself to a more operational account. The language can be viewed as an extension of the lambda calculus with a “prefixed sum”, in which types express the form of computation path of which a process is capable. Its operational semantics, bisimulation, congruence properties and expressive power are explored; in particular, it is shown how it can directly encode process languages such as CCS, CCS with process passing, and mobile ambients with public names.
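A hedged sketch of the “prefixed sum” idea in Haskell, a toy untyped CCS-like datatype rather than the paper's typed language: a process offers a sum of action-prefixed continuations, and parallel composition synchronises matching inputs and outputs.

```haskell
-- A toy CCS-like prefixed sum: a process is a sum of action-prefixed
-- continuations.
data Act     = In String | Out String deriving (Eq, Show)
newtype Proc = Sum [(Act, Proc)]

-- Possible first steps of a process.
steps :: Proc -> [(Act, Proc)]
steps (Sum bs) = bs

-- Synchronisations of two processes on matching input/output actions
-- (independent interleaving steps omitted for brevity).
sync :: Proc -> Proc -> [(Proc, Proc)]
sync p q =
  [ (p', q') | (In a,  p') <- steps p, (Out b, q') <- steps q, a == b ] ++
  [ (p', q') | (Out a, p') <- steps p, (In b,  q') <- steps q, a == b ]
```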


2004 ◽  
Vol 11 (26) ◽  
Author(s):  
Olivier Danvy ◽  
Lasse R. Nielsen

The evaluation function of a reduction semantics (i.e., a small-step operational semantics with an explicit representation of the reduction context) is canonically defined as the transitive closure of (1) decomposing a term into a reduction context and a redex, (2) contracting this redex, and (3) plugging the contractum in the context. Directly implementing this evaluation function therefore yields an interpreter with a worst-case overhead, for each step, that is linear in the size of the input term.

We present sufficient conditions over the constituents of a reduction semantics to circumvent this overhead, by replacing the composition of (3) plugging and (1) decomposing with a single “refocus” function mapping a contractum and a context into a new context and a new redex, if any. We also show how to construct such a refocus function, we prove the correctness of this construction, and we analyze the complexity of the resulting refocus function.

The refocused evaluation function of a reduction semantics implements the transitive closure of the refocus function, i.e., a “pre-abstract machine”. Fusing the refocus function with the trampoline function computing the transitive closure gives a state-transition function, i.e., an abstract machine. This abstract machine clearly separates the transitions implementing the congruence rules of the reduction semantics from the transitions implementing its reduction rules. The construction of a refocus function therefore shows how to mechanically obtain an abstract machine from a reduction semantics, which was previously done on a case-by-case basis.

We illustrate refocusing by mechanically constructing Felleisen et al.'s CK machine from a call-by-value reduction semantics of the lambda-calculus, and by constructing a substitution-based version of Krivine's machine from a call-by-name reduction semantics of the lambda-calculus. We also mechanically construct three one-pass CPS transformers from three quadratic context-based CPS transformers for the lambda-calculus.
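To make the construction concrete, here is a minimal Haskell sketch of refocusing for a call-by-value lambda calculus (names are illustrative): instead of plugging the contractum and re-decomposing the whole term, refocus resumes the search for the next redex from the contraction site.

```haskell
-- Call-by-value lambda terms and reduction contexts.
data Tm  = Var String | Lam String Tm | App Tm Tm
data Ctx = Hole
         | AppL Ctx Tm     -- E[[] u] : evaluating the function position
         | AppR Tm Ctx     -- E[v []] : function is a value, evaluating the argument

-- Plug a term back into its context (step (3) of the naive loop).
plug :: Ctx -> Tm -> Tm
plug Hole       t = t
plug (AppL c u) t = plug c (App t u)
plug (AppR v c) t = plug c (App v t)

-- Refocus: go from a (sub)term and its context straight to the next
-- redex, instead of plugging and re-decomposing the whole term.
refocus :: Tm -> Ctx -> Maybe (Tm, Ctx)   -- Just (redex, context), or a value
refocus (App t u) c  = refocus t (AppL c u)   -- descend into the function
refocus v (AppL c u) = refocus u (AppR v c)   -- v is a value; visit the argument
refocus v (AppR f c) = Just (App f v, c)      -- both parts are values: a redex
refocus _ Hole       = Nothing                -- the whole term is a value
```

An evaluator then iterates: contract the redex (beta, when the function part is an abstraction) and refocus from the contractum in the same context; fusing this loop with the trampoline yields the state-transition function of an abstract machine.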

