Pure iso-type systems

Author(s):  
YANPENG YANG ◽  
BRUNO C. D. S. OLIVEIRA

Abstract Traditional designs for functional languages (such as Haskell or ML) have separate sorts of syntax for terms and types. In contrast, many dependently typed languages use a unified syntax that accounts for both terms and types. Unified syntax has some interesting advantages over separate syntax, including less duplication of concepts and added expressiveness. However, integrating unrestricted general recursion into calculi with unified syntax is challenging when some level of type-level computation is present, since properties such as decidable type-checking are easily lost. This paper presents a family of calculi called pure iso-type systems (PITSs), which employ unified syntax, support general recursion and preserve decidable type-checking. PITS is comparable in simplicity to pure type systems (PTSs), and can serve as a foundation for functional languages that stand in between traditional ML-like languages and full-blown dependently typed languages. In PITS, recursion and recursive types are completely unrestricted, and type equality is simply based on alpha-equality, just as in traditional ML-style languages. However, like most dependently typed languages, PITS uses unified syntax, naturally supporting many advanced type system features. Instead of implicit type conversion, PITS provides a generalization of iso-recursive types called iso-types. Iso-types replace the conversion rule typically used in dependently typed calculi and make every type-level computation explicit via cast operators. Iso-types avoid the complexity of the explicit equality proofs employed in other cast-based approaches. We study three variants of PITS that differ in the reduction strategy employed by the cast operators: call-by-name, call-by-value and parallel reduction. One key finding is that, while using call-by-value or call-by-name reduction in casts loses some expressive power, it allows those variants of PITS to have simple and direct operational semantics and proofs. In contrast, the variant of PITS with parallel reduction retains the expressive power of PTS conversion, at the cost of a more complex metatheory.
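
A minimal sketch of the core idea, assuming an invented toy type language (only the castUp/castDown vocabulary comes from the paper): instead of an implicit conversion rule, each cast makes exactly one step of type-level reduction explicit, and all remaining type equality is plain structural (alpha-) equality.

```ocaml
(* A toy type language with one form of type-level computation. *)
type ty =
  | TInt
  | TArrow of ty * ty
  | TApp of tyfun * ty            (* a type-level redex *)
and tyfun =
  | TId                           (* type-level identity: \X. X *)
  | TConst of ty                  (* constant type function: \X. T *)

(* One step of call-by-name type reduction: in this sketch, the only
   computation the type system ever performs. *)
let step : ty -> ty option = function
  | TApp (TId, t) -> Some t
  | TApp (TConst t, _) -> Some t
  | _ -> None

(* castDown retypes a term from t to t' when t steps to t' in exactly
   one (explicit) step, keeping type-checking decidable. *)
let cast_down (t : ty) : ty =
  match step t with
  | Some t' -> t'
  | None -> failwith "castDown: type does not reduce"

(* castUp goes the other way: the programmer supplies the redex. *)
let cast_up ~(target : ty) (t : ty) : ty =
  match step target with
  | Some t' when t' = t -> target
  | _ -> failwith "castUp: target does not reduce to the given type"
```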

2021 ◽  
Vol 54 (5) ◽  
pp. 1-38
Author(s):  
Jana Dunfield ◽  
Neel Krishnaswami

Bidirectional typing combines two modes of typing: type checking, which checks that a program satisfies a known type, and type synthesis, which determines a type from the program. Using checking enables bidirectional typing to support features for which inference is undecidable; using synthesis enables bidirectional typing to avoid the large annotation burden of explicitly typed languages. In addition, bidirectional typing improves error locality. We highlight the design principles that underlie bidirectional type systems, survey the development of bidirectional typing from the prehistoric period before Pierce and Turner’s local type inference to the present day, and provide guidance for future investigations.
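
The two modes are easy to see in code. A minimal sketch of a bidirectional checker for the simply typed lambda calculus (illustrative, not taken from the survey): `synth` determines a type from a term, `check` verifies a term against a known type, and an annotation switches from checking back to synthesis.

```ocaml
type ty = TInt | TArrow of ty * ty

type tm =
  | Var of string
  | Lit of int
  | Lam of string * tm            (* unannotated: checked, not synthesized *)
  | App of tm * tm
  | Ann of tm * ty                (* annotation enables synthesis *)

exception TypeError of string

let rec synth env = function
  | Var x ->
      (try List.assoc x env
       with Not_found -> raise (TypeError ("unbound " ^ x)))
  | Lit _ -> TInt
  | Ann (e, t) -> check env e t; t
  | App (f, a) ->
      (match synth env f with
       | TArrow (dom, cod) -> check env a dom; cod
       | _ -> raise (TypeError "applying a non-function"))
  | Lam _ -> raise (TypeError "lambda needs an annotation to synthesize")

and check env e t =
  match e, t with
  | Lam (x, body), TArrow (dom, cod) -> check ((x, dom) :: env) body cod
  | _ ->
      let t' = synth env e in
      if t' <> t then raise (TypeError "type mismatch")
```

Note how the lambda case carries no annotation: checking mode pushes the known argument type into the body, which is exactly how bidirectional systems avoid the annotation burden of fully explicit typing.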


1990 ◽  
Vol 19 (341) ◽  
Author(s):  
Jens Palsberg ◽  
Michael I. Schwartzbach

We present a new type system for object-oriented languages with assignments. Types are sets of classes, subtyping is set inclusion, and genericity is class substitution. The type system enables separate compilation, and it unifies, generalizes, and simplifies the type systems underlying SIMULA/BETA, C++, EIFFEL, and Typed Smalltalk, as well as the type system with type substitutions proposed by Palsberg and Schwartzbach. Classes and types are both modeled as node-labeled, ordered regular trees; this allows an efficient type-checking algorithm.
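
The slogan "types are sets of classes, subtyping is set inclusion, genericity is class substitution" can be rendered directly; a sketch with invented class names:

```ocaml
module ClassSet = Set.Make (String)

(* A type is a finite set of class names. *)
type otype = ClassSet.t

(* Subtyping is set inclusion. *)
let subtype (s : otype) (t : otype) : bool = ClassSet.subset s t

(* Genericity is class substitution: replace one class by another. *)
let substitute ~from ~into (t : otype) : otype =
  if ClassSet.mem from t
  then ClassSet.add into (ClassSet.remove from t)
  else t

let () =
  let circle_or_square = ClassSet.of_list ["Circle"; "Square"] in
  let any_shape = ClassSet.of_list ["Circle"; "Square"; "Triangle"] in
  assert (subtype circle_or_square any_shape)
```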


2019 ◽  
Vol 9 (1) ◽  
pp. 1-32 ◽  
Author(s):  
Joseph Eremondi ◽  
Wouter Swierstra ◽  
Jurriaan Hage

Abstract Dependently typed programming languages provide a powerful tool for establishing code correctness. However, it can be hard for newcomers to learn how to employ the advanced type system of such languages effectively. For simply typed languages, several techniques have been devised to generate helpful error messages and suggestions for the programmer. We adapt these techniques to dependently typed languages, to facilitate their more widespread adoption. In particular, we modify a higher-order unification algorithm that is used to resolve and type-check implicit arguments. We augment this algorithm with replay graphs, which allow a global heuristic analysis of a unification problem set; error-tolerant typing, which allows type-checking to continue after errors are found; and counter-factual unification, which makes error messages less affected by the order in which types are checked. A formalization of our algorithm is presented with an outline of its correctness. We implement replay graphs, and compare the generated error messages to those from existing languages, highlighting the improvements we achieved.
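
A first-order caricature of one of these ideas, error-tolerant typing (higher-order problems, replay graphs, and counter-factual unification are beyond a short sketch; all names here are invented): the unifier records a mismatch and keeps solving the rest of the constraint set, so several independent errors can be reported at once.

```ocaml
type ty = TVar of string | TCon of string * ty list

let rec occurs v = function
  | TVar x -> x = v
  | TCon (_, args) -> List.exists (occurs v) args

let rec apply subst = function
  | TVar x ->
      (match List.assoc_opt x subst with
       | Some t -> apply subst t
       | None -> TVar x)
  | TCon (c, args) -> TCon (c, List.map (apply subst) args)

(* Solve all constraints, collecting mismatches instead of aborting. *)
let unify_all (constraints : (ty * ty) list) =
  let errors = ref [] in
  let rec go subst = function
    | [] -> subst
    | (s, t) :: rest ->
        (match apply subst s, apply subst t with
         | TVar x, TVar y when x = y -> go subst rest
         | TVar x, u | u, TVar x when not (occurs x u) ->
             go ((x, u) :: subst) rest
         | TCon (c, xs), TCon (d, ys)
           when c = d && List.length xs = List.length ys ->
             go subst (List.combine xs ys @ rest)
         | u, v ->
             errors := (u, v) :: !errors;   (* record and continue *)
             go subst rest)
  in
  let subst = go [] constraints in
  (subst, List.rev !errors)
```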


1997 ◽  
Vol 4 (51) ◽  
Author(s):  
James McKinna ◽  
Robert Pollack

"This paper is about our hobby." That is the first sentence of [MP93], the first report on our formal development of lambda calculus and type theory, written in autumn 1992. We have continued to pursue this hobby on and off ever since, and have developed a substantial body of formal knowledge, including Church-Rosser and standardization<br />theorems for beta reduction, and the basic theory of<br />Pure Type Systems (PTS) leading to the strengthening theorem and type checking algorithms for PTS. Some of this work is reported in [MP93, vBJMP94, Pol94b, Pol95]. In the present paper we survey this work, including some new proofs, and point out what we feel has been learned about the general issues of formalizing mathematics. On the technical side, we describe an abstract, and simplified, proof of standardization for beta reduction, not previously published, that does<br />not mention redex positions or residuals. On the general issues, we emphasize the search for formal definitions that are convenient for formal proof and convincingly represent the intended informal concepts. The LEGO Proof Development System [LP92] was used to check the work in an implementation of the Extended Calculus of Constructions<br />(ECC) with inductive types [Luo94]. LEGO is a refinement style<br />proof checker, publicly available by ftp and WWW, with a User's Manual [LP92] and a large collection of examples. Section 1.3 contains information on accessing the formal development described in this paper. Other interesting examples formalized in LEGO include program specification and data refinement [Luo91], strong normalization of System F [Alt93], synthetic domain theory [Reu95, Reu96], and operational semantics for imperative programs [Sch97].


1993 ◽  
Vol 3 (4) ◽  
pp. 465-484 ◽  
Author(s):  
Robert Harper ◽  
Bruce F. Duba ◽  
David Macqueen

Abstract An extension of ML with continuation primitives similar to those found in Scheme is considered. A number of alternative type systems are discussed, and several programming examples are given. A continuation-based operational semantics is defined for a small, purely functional language, and the soundness of the Damas–Milner polymorphic type assignment system with respect to this semantics is proved. The full Damas–Milner type system is shown to be unsound in the presence of first-class continuations. Restrictions on polymorphism similar to those introduced in connection with reference types are shown to suffice for soundness.
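
The restriction mentioned in the last sentence survives in today's MLs as the value restriction. A sketch of the reference-cell analogue of the continuation counterexample (OCaml here, since the failure mode is the same; the paper's actual counterexample uses callcc): if `r` were generalized to a fully polymorphic type, the commented line would add an integer to a string at run time.

```ocaml
let () =
  let r = ref [] in      (* '_weak1 list ref: NOT generalized, because
                            ref [] is not a syntactic value *)
  r := ["hello"];        (* this use pins the type to string list ref *)
  (* let n = List.hd !r + 1 in ...
     Rejected statically: !r has type string list, not int list.
     With unrestricted generalization (or unrestricted polymorphism
     over callcc, as the paper shows) this would typecheck and then
     crash at run time. *)
  ignore !r
```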


1995 ◽  
Vol 24 (498) ◽  
Author(s):  
Kirsten Lackner Solberg

<p>In this Ph.D. thesis, we study four program analyses. Three of them are specified by annotated type systems and the last one by abstract interpretation.</p><p>We present a combined strictness and totality analysis. We are specifying the analysis as an annotated type system. The type system allows conjunctions of annotated types, but only at the top-level. The analysis is somewhat more powerful than the strictness analysis by Kuo and Mishra due to the conjunctions and in that we also consider totality. The analysis is shown sound with respect to a natural-style operational semantics. The analysis is not immediately extendable to full conjunction.</p><p>The second analysis is also a combined strictness and totality analysis, however with ``full´´ conjunction. Soundness of the analysis is shown with respect to a denotational semantics. The analysis is more powerful than the strictness analyses by Jensen and Benton in that it in addition to strictness considers totality. So far we have only specified the analyses, however in order for the analyses to be practically useful we need an algorithm for inferring the annotated types. We construct an algorithm for the second analysis using the lazy type approach by Hankin and Le Métayer. The reason for choosing the second analysis from the thesis is that the approach is not applicable to the first analysis.</p><p>The third analysis we study is a binding time analysis. We take the analysis specified by Nielson and Nielson and we construct a more efficient algorithm than the one proposed by Nielson and Nielson. The algorithm collects constraints in a structural manner like the type inference algorithm by Damas. Afterwards the minimal solution to the set of constraints is found.</p><p>The last analysis in the thesis is specified by abstract interpretation. Hunt shows that projection based analyses are subsumed by PER (partial equivalence relation) based analyses using abstract interpretation. The PERs used by Hunt are strict, i.e. bottom is related to bottom. Here we lift this restriction by requiring the PERs to be uniform, in the sense that they treat all the integers equally. By allowing non-strict PERs we get three properties on the integers, corresponding to the three annotations used in the first and second analysis in the thesis.</p>


Author(s):  
Satoshi Kura

Abstract Dependent refinement types are types equipped with predicates that specify preconditions and postconditions of underlying functional languages. We propose a general semantic construction of dependent refinement type systems from underlying type systems and predicate logic, that is, a construction of liftings of closed comprehension categories from given (underlying) closed comprehension categories and posetal fibrations for predicate logic. We give sufficient conditions to lift structures such as dependent products, dependent sums, computational effects, and recursion from the underlying type systems to dependent refinement type systems. We demonstrate the usage of our construction by giving semantics to a dependent refinement type system and proving soundness.
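
At the programming level, a refinement type pairs an underlying type with a predicate. A sketch (a dynamically checked caricature of the semantic lifting, with invented names) where the predicate is enforced at the boundary:

```ocaml
module Refine : sig
  type 'a refined
  (* Smart constructor: the predicate is the refinement. *)
  val make : ('a -> bool) -> 'a -> 'a refined option
  val value : 'a refined -> 'a
end = struct
  type 'a refined = 'a
  let make p x = if p x then Some x else None
  let value x = x
end

(* { n : int | n >= 0 }, checked when the value is constructed. *)
let nonneg (n : int) = Refine.make (fun n -> n >= 0) n

let () =
  match nonneg 3 with
  | Some r -> Printf.printf "ok: %d\n" (Refine.value r)
  | None -> print_endline "refinement violated"
```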


2008 ◽  
Vol 18 (3) ◽  
pp. 467-500
Author(s):  
LUIGI LIQUORI ◽  
BERNARD PAUL SERPETTE

We propose an imperative version of the Rewriting Calculus, a calculus based on pattern matching, pattern abstraction and side effects, which we call iRho. We formulate both a static semantics and a big-step call-by-value operational semantics of iRho. The operational semantics is deterministic, and immediately suggests how an interpreter for the calculus may be built. The static semantics is given using a first-order type system based on a form of product types, which can be assigned to term-like structures (that is, records). The calculus is à la Church, that is, pattern abstractions are decorated with the types of the free variables of the pattern. iRho is a good candidate for the core of a pattern-matching imperative language, where a (monomorphic) typed store can be safely manipulated and where fixed points are built into the language itself. Properties such as determinism of the interpreter and subject reduction have been completely checked using a machine-assisted approach with the Coq proof assistant. Progress and decidability of type checking are proved using pen and paper.
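
A loose OCaml analogue of the ingredients (not iRho syntax): a pattern abstraction is a function that fires by matching its argument, and the store is a monomorphically typed reference that the abstraction may update.

```ocaml
(* A monomorphic typed store, safe to manipulate. *)
let counter : int ref = ref 0

(* A "pattern abstraction" on pairs with a side effect: it fires by
   matching its argument against the pattern (x, y). *)
let swap (x, y) =
  incr counter;                    (* effect on the typed store *)
  (y, x)

let () =
  let (a, b) = swap ("left", "right") in
  Printf.printf "%s/%s after %d rewrite(s)\n" a b !counter
```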


Author(s):  
YANNICK FORSTER ◽  
OHAD KAMMAR ◽  
SAM LINDLEY ◽  
MATIJA PRETNAR

Abstract We compare the expressive power of three programming abstractions for user-defined computational effects: Plotkin and Pretnar’s effect handlers, Filinski’s monadic reflection, and delimited control. This comparison allows a precise discussion about the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features. We present three calculi, one per abstraction, extending Levy’s call-by-push-value. For each calculus, we present syntax, operational semantics, a natural type-and-effect system, and, for effect handlers and monadic reflection, a set-theoretic denotational semantics. We establish their basic metatheoretic properties: safety, termination, and, where applicable, soundness and adequacy. Using Felleisen’s notion of a macro translation, we show that these abstractions can macro express each other, and show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro expressed while preserving typeability either by monadic reflection or by delimited control. Our argument fails with simple changes to the type system such as polymorphism and inductive types. We supplement our development with a mechanised Abella formalisation.
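
Of the three abstractions, effect handlers now have direct language support. A sketch of user-defined state using OCaml 5's `Effect` module (OCaml rather than the paper's call-by-push-value calculi; the state-passing handler is the standard idiom from the OCaml manual):

```ocaml
open Effect
open Effect.Deep

type _ Effect.t += Get : int Effect.t
                 | Put : int -> unit Effect.t

(* Interpret Get/Put by threading state through the handler: the
   handled computation becomes a function int -> ('a * int). *)
let run_state (init : int) (f : unit -> 'a) : 'a * int =
  let run =
    match_with f ()
      { retc = (fun x s -> (x, s));
        exnc = raise;
        effc = (fun (type b) (eff : b Effect.t) ->
          match eff with
          | Get -> Some (fun (k : (b, _) continuation) s ->
              continue k s s)
          | Put s' -> Some (fun (k : (b, _) continuation) _ ->
              continue k () s')
          | _ -> None) }
  in
  run init

let () =
  let (x, final) =
    run_state 0 (fun () ->
      perform (Put (perform Get + 1));
      perform Get)
  in
  Printf.printf "result %d, final state %d\n" x final
```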


Author(s):  
ATSUSHI IGARASHI ◽  
PETER THIEMANN ◽  
YUYA TSUDA ◽  
VASCO T. VASCONCELOS ◽  
PHILIP WADLER

Abstract Session types are a rich type discipline, based on linear types, that lifts the sort of safety claims that come with type systems to communications. However, web-based applications and microservices are often written in a mix of languages, with type disciplines in a spectrum between static and dynamic typing. Gradual session types address this mixed setting by providing a framework which grants a seamless transition between statically typed handling of sessions and any required degree of dynamic typing. We propose Gradual GV as a gradually typed extension of the functional session type system GV. Following a standard framework of gradual typing, Gradual GV consists of an external language, which relaxes the type system of GV using dynamic types; an internal language with casts, for which an operational semantics is given; and a cast-insertion translation from the former to the latter. We demonstrate type and communication safety as well as blame safety, thus extending previous results to functional languages with session-based communication. The interplay of linearity and dynamic types requires a novel approach to specifying the dynamics of the language.
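
The cast-and-blame machinery of the internal language, minus sessions and linearity, fits in a few lines. A sketch with invented names: a dynamically typed value carries a runtime tag, and a cast back to a static type either succeeds or raises blame.

```ocaml
(* Runtime type tags for a two-type fragment. *)
type _ tag = Int : int tag | Str : string tag

(* A dynamically typed value: its static type is hidden behind a tag. *)
type dyn = Dyn : 'a tag * 'a -> dyn

exception Blame of string

(* Casting from dynamic back to a static type checks the tag. *)
let cast : type a. a tag -> dyn -> a =
 fun want (Dyn (have, v)) ->
  match want, have with
  | Int, Int -> v
  | Str, Str -> v
  | _ -> raise (Blame "tag mismatch: blame the dynamically typed side")

let () =
  let d = Dyn (Int, 42) in
  Printf.printf "%d\n" (cast Int d);        (* succeeds *)
  match cast Str d with
  | exception Blame msg -> print_endline msg
  | s -> print_endline s
```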

