A case study in programming coinductive proofs: Howe's method

2018 ◽  
Vol 29 (8) ◽  
pp. 1309-1343 ◽  
Author(s):  
ALBERTO MOMIGLIANO ◽  
BRIGITTE PIENTKA ◽  
DAVID THIBODEAU

Bisimulation proofs play a central role in programming languages in establishing rich properties such as contextual equivalence. They are also challenging to mechanize, since they require a combination of inductive and coinductive reasoning on open terms. In this paper, we describe mechanizing the property that similarity in the call-by-name lambda calculus is a pre-congruence using Howe's method in the Beluga formal reasoning system. The development relies on three key ingredients: (1) we give a higher-order abstract syntax (HOAS) encoding of lambda terms together with their operational semantics as intrinsically typed terms, thereby not only avoiding the need to deal with binders, renaming and substitutions, but also keeping all typing invariants implicit; (2) we take advantage of Beluga's support for representing open terms using built-in contexts and simultaneous substitutions: this allows us to state central definitions such as open simulation directly, without resorting to the usual inductive closure operation, and to encode notoriously painful proofs, such as the substitutivity of the Howe relation, very elegantly; (3) we exploit the possibility of reasoning by coinduction in Beluga's reasoning logic. The end result is succinct and elegant, thanks to the high-level abstractions and primitives Beluga provides. We believe that this mechanization is a significant example that illustrates Beluga's strength at mechanizing challenging (co)inductive proofs using HOAS encodings.
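
To make the object language concrete, here is a minimal sketch in Haskell of the call-by-name lambda calculus and its big-step evaluation to weak head normal form, using a first-order de Bruijn representation; the paper itself works with an intrinsically typed HOAS encoding in Beluga, so everything below (names included) is merely illustrative.

    -- Untyped lambda terms with de Bruijn indices.
    data Tm = Var Int | Lam Tm | App Tm Tm
      deriving (Eq, Show)

    -- Shift free indices >= c by d.
    shift :: Int -> Int -> Tm -> Tm
    shift d c (Var k)   = Var (if k < c then k else k + d)
    shift d c (Lam b)   = Lam (shift d (c + 1) b)
    shift d c (App f a) = App (shift d c f) (shift d c a)

    -- Substitute s for index j.
    subst :: Int -> Tm -> Tm -> Tm
    subst j s (Var k)   = if k == j then s else Var k
    subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
    subst j s (App f a) = App (subst j s f) (subst j s a)

    -- Big-step call-by-name evaluation to weak head normal form:
    -- the argument is substituted unevaluated.
    eval :: Tm -> Maybe Tm
    eval (Lam b)   = Just (Lam b)
    eval (App f a) = do
      Lam b <- eval f
      eval (shift (-1) 0 (subst 0 (shift 1 0 a) b))
    eval (Var _)   = Nothing  -- free variable in head position: stuck

Applicative similarity then relates s to t when, whenever s evaluates to an abstraction, t evaluates to an abstraction whose body remains related under substitution of any argument; Howe's method is the standard technique for showing that this coinductively defined relation is a pre-congruence.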

2018 ◽  
Vol 28 (9) ◽  
pp. 1606-1638 ◽  
Author(s):  
ANDREW CAVE ◽  
BRIGITTE PIENTKA

Proofs with logical relations play a key role in establishing rich properties such as normalization or contextual equivalence. They are also challenging to mechanize. In this paper, we describe two case studies using the proof environment Beluga: first, we explain the mechanization of the weak normalization proof for the simply typed lambda-calculus; second, we outline how to mechanize the completeness proof of algorithmic equality for simply typed lambda-terms, where we reason about logically equivalent terms. The development of these proofs in Beluga relies on three key ingredients: (1) we encode lambda-terms together with their typing rules, operational semantics, algorithmic and declarative equality using higher-order abstract syntax (HOAS), thereby avoiding the need to manipulate and deal with binders, renaming and substitutions; (2) we take advantage of Beluga's support for representing derivations that depend on assumptions and first-class contexts to directly state inductive properties such as logical relations and inductive proofs; (3) we exploit Beluga's rich equational theory for simultaneous substitutions; as a consequence, users do not need to establish and subsequently use substitution properties, and proofs are not cluttered with references to them. We believe these examples demonstrate that Beluga provides the right level of abstractions and primitives to mechanize challenging proofs using HOAS encodings. It may also serve as a valuable benchmark for other proof environments.
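
As a reminder of the proof technique, the logical relation (reducibility predicate) behind the weak normalization proof can be stated, in textbook notation, roughly as follows; the Beluga development phrases it on well-typed terms with first-class contexts, so this is only indicative:

    \mathcal{R}_{\iota}(t) \iff t \Downarrow
    \qquad
    \mathcal{R}_{A \to B}(t) \iff t \Downarrow \;\wedge\; \forall s.\ \mathcal{R}_A(s) \Rightarrow \mathcal{R}_B(t\,s)

The fundamental lemma shows that every well-typed term inhabits the relation at its type; weak normalization then follows immediately, since \mathcal{R}_A(t) implies t \Downarrow.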


2021 ◽  
Vol 43 (1) ◽  
pp. 1-46
Author(s):  
David Sanan ◽  
Yongwang Zhao ◽  
Shang-Wei Lin ◽  
Liu Yang

To make the verification of large and complex concurrent systems feasible and scalable, it is necessary to use compositional techniques even at the highest abstraction layers. When focusing on the lowest software abstraction layers, such as the implementation or the machine code, the high level of detail of those layers makes the direct verification of properties very difficult and expensive. It is therefore essential to use techniques that simplify verification on these layers. One technique to tackle this challenge is top-down verification, where, by means of simulation, properties verified on the top layers (representing abstract specifications of a system) are propagated down to the lowest layers (which are an implementation of the top layers). Needless to say, simulation of concurrent systems implies a greater level of complexity, and having compositional techniques to check simulation between layers is also desirable when seeking both feasibility and scalability of the refinement verification. In this article, we present CSim², a (compositional) rely-guarantee-based framework for the top-down verification of complex concurrent systems in the Isabelle/HOL theorem prover. CSim² uses CSimpl, a language with a high degree of expressiveness designed for the specification of concurrent programs. Thanks to its expressiveness, CSimpl is able to model many of the features found in real-world programming languages, such as exceptions, assertions, and procedures. CSim² provides a framework for the verification of rely-guarantee properties to reason compositionally on CSimpl specifications. Focusing on top-down verification, CSim² provides a simulation-based framework for the preservation of CSimpl rely-guarantee properties from specifications to implementations. Using the simulation framework, properties proven on the top layers (abstract specifications) are compositionally propagated down to the lowest layers (source or machine code) in each concurrent component of the system. Finally, we show the usability of CSim² by running a case study over two CSimpl specifications of an ARINC 653 communication service. In this case study, we prove a complex property on a specification, and we use CSim² to preserve the property on lower abstraction layers.
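
For orientation, recall the shape of a rely-guarantee specification and the compositional parallel rule it enables, in schematic Jones-style notation (the CSimpl proof rules in the paper are more refined than this sketch):

    \vdash C \;\mathbf{sat}\; (P,\, R,\, G,\, Q)

    \frac{\vdash C_1 \;\mathbf{sat}\; (P,\ R \vee G_2,\ G_1,\ Q_1)
          \qquad
          \vdash C_2 \;\mathbf{sat}\; (P,\ R \vee G_1,\ G_2,\ Q_2)}
         {\vdash C_1 \parallel C_2 \;\mathbf{sat}\; (P,\ R,\ G_1 \vee G_2,\ Q_1 \wedge Q_2)}

Read: from precondition P, assuming every environment step satisfies the rely R, each step of C satisfies the guarantee G and terminal states satisfy Q. Each thread's guarantee is folded into the other thread's rely, which is what makes the reasoning compositional.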


2008 ◽  
Vol 18 (3) ◽  
pp. 501-553 ◽  
Author(s):  
DAVID SABEL ◽  
MANFRED SCHMIDT-SCHAUSS

We present a higher-order call-by-need lambda calculus enriched with constructors, case expressions, recursive letrec expressions, a seq operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal-order reduction. This strategy can be made fair by adding resources for book-keeping. As equational theory, we use contextual equivalence (that is, terms are equal if, when plugged into any program context, their termination behaviour is the same), based on a combination of may- and must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations with respect to normal-order reduction are the same as for fair normal-order reduction. We develop a number of proof tools for proving correctness of program transformations. In particular, we prove a context lemma for both may- and must-convergence that restricts the number of contexts that need to be examined for proving contextual equivalence. Combining this with so-called complete sets of commuting and forking diagrams, we show that all the deterministic reduction rules and some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal-order reduction. The structure of the ordering ≤c is also analysed, and we show that Ω is not a least element and that ≤c already implies contextual equivalence with respect to may-convergence.
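
Writing t↓ for may-convergence (some reduction sequence reaches a WHNF) and t⇓ for must-convergence (every reduction sequence does), the contextual preorder and equivalence used here read roughly as follows, in notation indicative of, though not identical to, the paper's:

    s \le_c t \;\iff\; \forall C.\ (C[s]\downarrow \Rightarrow C[t]\downarrow) \;\wedge\; (C[s]\Downarrow \Rightarrow C[t]\Downarrow)
    \qquad
    s \sim_c t \;\iff\; s \le_c t \;\wedge\; t \le_c s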


2001 ◽  
Vol 9 (2-3) ◽  
pp. 109-122 ◽  
Author(s):  
Beniamino Di Martino ◽  
Sergio Briguglio ◽  
Gregorio Vlad ◽  
Giuliana Fogaccia

A crucial issue in parallel programming (for both distributed and shared memory architectures) is work decomposition. The work decomposition task can be accomplished without large programming effort by using high-level parallel programming languages such as OpenMP. Nevertheless, particular care must still be paid to achieving performance goals. In this paper, we introduce and compare two decomposition strategies, in the framework of shared memory systems, as applied to a case-study particle-in-cell application. A number of different implementations of them, based on the OpenMP language, are discussed with regard to time efficiency, memory occupancy, and program restructuring effort.
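
The paper's implementations are written in OpenMP; purely as an illustration of the particle-decomposition idea (each worker deposits into a private copy of the grid, and the copies are reduced at the end, the analogue of threadprivate arrays plus a reduction), here is a toy Haskell sketch of a 1-D charge deposition. All names are invented for the example.

    import Control.Parallel.Strategies (parMap, rdeepseq)
    import Data.List (foldl')

    type Grid     = [Double]       -- charge per cell
    type Particle = (Int, Double)  -- (cell index, charge)

    -- Sequential deposition of a particle list into a fresh grid.
    deposit :: Int -> [Particle] -> Grid
    deposit ncells = foldl' add (replicate ncells 0)
      where add g (i, q) = [ if j == i then c + q else c
                           | (j, c) <- zip [0 ..] g ]

    chunksOf :: Int -> [a] -> [[a]]
    chunksOf _ [] = []
    chunksOf n xs = take n xs : chunksOf n (drop n xs)

    -- Particle decomposition: split particles into chunks, deposit
    -- each chunk into a private grid in parallel, reduce at the end.
    depositPar :: Int -> Int -> [Particle] -> Grid
    depositPar ncells chunkSize ps =
      foldl' (zipWith (+)) (replicate ncells 0)    -- reduce private grids
             (parMap rdeepseq (deposit ncells)     -- one grid per chunk
                     (chunksOf chunkSize ps))

The memory-versus-time trade-off the paper measures shows up directly here: private grids avoid synchronization on every deposit but multiply memory occupancy by the number of workers.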


1999 ◽  
Vol 6 (30) ◽  
Author(s):  
Luca Aceto ◽  
Willem Jan Fokkink ◽  
Chris Verhoef

The importance of giving precise semantics to programming and specification languages has been recognized since the sixties, with the development of the first high-level programming languages (cf. e.g. [30, 206] for some early accounts). The use of operational semantics - i.e. of a semantics that explicitly describes how programs compute in stepwise fashion, and the possible state-transformations they perform - was already advocated by McCarthy in [147], and elaborated upon in references like [142, 143]. Examples of full-blown languages that have been endowed with an operational semantics are Algol 60 [140], PL/I [173], and CSP [178].


Author(s):  
Lavindra de Silva ◽  
Felipe Meneguzzi ◽  
Brian Logan

The Procedural Reasoning System (PRS) is arguably the first implementation of the Belief-Desire-Intention (BDI) approach to agent programming. PRS remains extremely influential, directly or indirectly inspiring the development of subsequent BDI agent programming languages. However, perhaps surprisingly given its centrality in the BDI paradigm, PRS lacks a formal operational semantics, making it difficult to determine its expressive power relative to other agent programming languages. This paper takes a first step towards closing this gap, by giving a formal semantics for a significant fragment of PRS. We prove key properties of the semantics relating to PRS-specific programming constructs, and show that even the fragment of PRS we consider is strictly more expressive than the plan constructs found in typical BDI languages.
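
As background, the BDI execution cycle that PRS pioneered can be caricatured in a few lines of Haskell: on an event, select an applicable plan from the library and adopt its body as a new intention. The semantics formalized in the paper is far richer, particularly in its treatment of PRS-specific constructs, so every name below is illustrative only.

    -- A toy BDI agent: beliefs, a plan library, and intentions.
    data Plan ev act = Plan
      { trigger :: ev                 -- event the plan responds to
      , context :: [String] -> Bool   -- applicability test on beliefs
      , body    :: [act]              -- actions to execute
      }

    data Agent ev act = Agent
      { beliefs    :: [String]
      , plans      :: [Plan ev act]
      , intentions :: [[act]]         -- stacks of pending actions
      }

    -- One deliberation step: adopt the first applicable plan
    -- for the event as a new intention.
    deliberate :: Eq ev => ev -> Agent ev act -> Agent ev act
    deliberate ev ag =
      case [ p | p <- plans ag
               , trigger p == ev
               , context p (beliefs ag) ] of
        (p : _) -> ag { intentions = body p : intentions ag }
        []      -> ag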


2010 ◽  
Vol 10 (4-6) ◽  
pp. 433-447 ◽  
Author(s):  
JON SNEYERS ◽  
WANNES MEERT ◽  
JOOST VENNEKENS ◽  
YOSHITAKA KAMEYA ◽  
TAISUKE SATO

PRISM is an extension of Prolog with probabilistic predicates and built-in support for expectation-maximization learning. Constraint Handling Rules (CHR) is a high-level programming language based on multi-headed multiset rewrite rules. In this paper, we introduce a new probabilistic logic formalism, called CHRiSM, based on a combination of CHR and PRISM. It can be used for high-level rapid prototyping of complex statistical models by means of “chance rules”. The underlying PRISM system can then be used for several probabilistic inference tasks, including probability computation and parameter learning. We define the CHRiSM language in terms of syntax and operational semantics, and illustrate it with examples. We define the notion of ambiguous programs and define a distribution semantics for unambiguous programs. Next, we describe an implementation of CHRiSM, based on CHR(PRISM). We discuss the relation between CHRiSM and other probabilistic logic programming languages, in particular PCHR. Finally, we identify potential application domains.
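
To give an operational flavour of a “chance rule” (a rewrite rule that fires only with a given probability), here is a toy Haskell model of a single probabilistic multiset rewrite step; CHRiSM's actual syntax and distribution semantics, built on CHR(PRISM), are richer, and this analogue makes no attempt to reproduce them.

    import System.Random (randomRIO)

    -- A chance rule: with probability prob, rewrite one occurrence
    -- of the head constraint into the body constraints.
    data Rule c = Rule { prob :: Double, headC :: c, bodyC :: [c] }

    step :: Eq c => Rule c -> [c] -> IO [c]
    step (Rule p h b) store =
      case break (== h) store of
        (_, [])         -> pure store             -- head absent: no change
        (pre, _ : post) -> do
          x <- randomRIO (0, 1 :: Double)
          pure $ if x < p then pre ++ b ++ post   -- rule fires
                          else store              -- rule declines

For instance, step (Rule 0.5 "coin" ["heads"]) ["coin"] models a rule that rewrites a coin constraint to heads half of the time.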


2005 ◽  
Vol 8 ◽  
pp. 145-194 ◽  
Author(s):  
John Howse ◽  
Gem Stapleton ◽  
John Taylor

The use of diagrams in mathematics has traditionally been restricted to guiding intuition and communication. With rare exceptions such as Peirce's alpha and beta systems, purely diagrammatic formal reasoning has not been in the mathematician's or logician's toolkit. This paper develops a purely diagrammatic reasoning system of “spider diagrams” that builds on Euler, Venn and Peirce diagrams. The system is known to be expressively equivalent to first-order monadic logic with equality. Two levels of diagrammatic syntax have been developed: an ‘abstract’ syntax that captures the structure of diagrams, and a ‘concrete’ syntax that captures topological properties of drawn diagrams. A number of simple diagrammatic transformation rules are given, and the resulting reasoning system is shown to be sound and complete.
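
The ‘abstract’ syntax level can be made concrete with a data structure: a diagram is just a record of its zones, its shaded zones, and its spiders (each given by a habitat, the set of zones it may inhabit). The following Haskell type is a speculative sketch in that spirit and does not reproduce the paper's formal definitions:

    import Data.Set (Set)

    -- A zone is identified by the set of contours it lies inside
    -- (those it lies outside are implicit).
    type Contour = String
    type Zone    = Set Contour

    data SpiderDiagram = SpiderDiagram
      { zones   :: Set Zone
      , shaded  :: Set Zone    -- shading bounds the elements present
      , spiders :: [Set Zone]  -- each spider denotes an element in its habitat
      }

The ‘concrete’ syntax would additionally record the drawn curves and their topology; the transformation rules of the paper operate at the abstract level.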


Author(s):  
ANDREAS ABEL ◽  
GUILLAUME ALLAIS ◽  
ALIYA HAMEER ◽  
BRIGITTE PIENTKA ◽  
ALBERTO MOMIGLIANO ◽  
...  

We propose a new collection of benchmark problems in mechanizing the metatheory of programming languages, in order to compare and push the state of the art of proof assistants. In particular, we focus on proofs using logical relations (LRs) and propose establishing strong normalization of a simply typed calculus with a proof by Kripke-style LRs as a benchmark. We give a modern view of this well-understood problem by formulating our LR on well-typed terms. Using this case study, we share some of the lessons learned tackling this problem in different dependently typed proof environments. In particular, we consider the mechanization in Beluga, a proof environment that supports higher-order abstract syntax encodings, and contrast it to the development and strategies used in general-purpose proof assistants such as Coq and Agda. The goal of this paper is to engage the community in discussions on what support in proof environments is needed to truly bring mechanized metatheory to the masses and engage said community in the crafting of future benchmarks.
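
The heart of the benchmark is the Kripke-style logical relation on well-typed terms, whose shape is roughly as follows, with worlds given by typing contexts and world extension by context extension Γ' ≤ Γ (the precise formulation varies across the mechanizations compared):

    \Gamma \vdash t \in \mathcal{R}_{\iota} \;\iff\; t \text{ is strongly normalizing}

    \Gamma \vdash t \in \mathcal{R}_{A \to B} \;\iff\;
    \forall \Gamma' \le \Gamma.\ \forall s.\ \Gamma' \vdash s \in \mathcal{R}_A
    \;\Rightarrow\; \Gamma' \vdash t\,s \in \mathcal{R}_B

Quantifying over all extended contexts at function type is what makes the relation Kripke-style: it bakes in closure under weakening, which each proof environment must then handle with more or less machinery.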


2018 ◽  
Vol 29 (3) ◽  
pp. 465-510 ◽  
Author(s):  
RASMUS E. MØGELBERG ◽  
MARCO PAVIOTTI

Just like any other branch of mathematics, denotational semantics of programming languages should be formalised in type theory, but adapting traditional domain-theoretic semantics, as originally formulated in classical set theory, to type theory has proven challenging. This paper is part of a project on formulating denotational semantics in type theories with guarded recursion. This should have the benefit not only of giving simpler semantics and proofs of properties such as adequacy, but also, hopefully, of scaling in the future to languages with advanced features, such as general references, outside the reach of traditional domain-theoretic techniques. Working in Guarded Dependent Type Theory (GDTT), we develop denotational semantics for Fixed Point Calculus (FPC), the simply typed lambda calculus extended with recursive types, modelling the recursive types of FPC using the guarded recursive types of GDTT. We prove soundness and computational adequacy of the model in GDTT using a logical relation between syntax and semantics, constructed also using guarded recursive types. The denotational semantics is intensional in the sense that it counts the number of unfold-fold reductions needed to compute the value of a term, but we construct a relation relating the denotations of extensionally equal terms, i.e., pairs of terms that compute the same value in a different number of steps. Finally, we show how the denotational semantics of terms can be executed inside type theory and prove that executing the denotation of a boolean term computes the same value as the operational semantics of FPC.
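
Schematically, the two ingredients GDTT contributes are the later modality ▷, with its guarded fixed-point combinator, and guarded recursive types, which together allow a lifting monad L for possibly diverging computations to be defined:

    \mathsf{fix} : (\triangleright A \to A) \to A
    \qquad\qquad
    L\,A \;\cong\; A + \triangleright(L\,A)

Each ▷ marks one computation step, which is what makes the semantics intensional: steps under the later modality correspond to the unfold-fold reductions counted by the model.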

