Typing first-class continuations in ML

1993 ◽  
Vol 3 (4) ◽  
pp. 465-484 ◽  
Author(s):  
Robert Harper ◽  
Bruce F. Duba ◽  
David MacQueen

Abstract: An extension of ML with continuation primitives similar to those found in Scheme is considered. A number of alternative type systems are discussed, and several programming examples are given. A continuation-based operational semantics is defined for a small, purely functional language, and the soundness of the Damas–Milner polymorphic type assignment system with respect to this semantics is proved. The full Damas–Milner type system is shown to be unsound in the presence of first-class continuations. Restrictions on polymorphism similar to those introduced in connection with reference types are shown to suffice for soundness.
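
To make the control operator concrete: a minimal sketch using Haskell's Cont monad (callCC from Control.Monad.Cont), standing in for the Scheme-style SML primitives the paper types; productEscape and its helper names are illustrative, not from the paper.

```haskell
import Control.Monad (foldM)
import Control.Monad.Cont (Cont, callCC, runCont)

-- Multiply a list of integers, escaping early through the captured
-- continuation as soon as a zero is seen: the rest of the fold is
-- simply abandoned.
productEscape :: [Int] -> Int
productEscape xs = runCont (callCC go) id
  where
    go exit = foldM step 1 xs
      where
        step _   0 = exit 0           -- jump out with the final answer
        step acc n = return (acc * n)
```

The paper's soundness question arises exactly here: naively generalizing the type of a captured continuation, as full Damas–Milner would, lets a continuation be invoked at two different instantiations, which is what the restriction on polymorphism rules out.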

1995 ◽  
Vol 5 (1) ◽  
pp. 1-35 ◽  
Author(s):  
Mark P. Jones

Abstract: This paper describes a flexible type system that combines overloading and higher-order polymorphism in an implicitly typed language using a system of constructor classes, a natural generalization of type classes in Haskell. We present a range of examples to demonstrate the usefulness of such a system. In particular, we show how constructor classes can be used to support the use of monads in a functional language. The underlying type system permits higher-order polymorphism but retains many of the attractive features that have made Hindley/Milner type systems so popular. In particular, there is an effective algorithm that can be used to calculate principal types without the need for explicit type or kind annotations. A prototype implementation has been developed providing, amongst other things, the first concrete implementation of monad comprehensions known to us at the time of writing.
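
A minimal sketch of a constructor class, assuming nothing beyond the standard Prelude; Mappable is an illustrative name for what Haskell's own Functor class already is, a class whose variable ranges over type constructors rather than ground types.

```haskell
-- A constructor class in the paper's sense: the class variable f has
-- kind * -> *, and kind inference assigns it that kind with no
-- annotations from the programmer.
class Mappable f where
  mapf :: (a -> b) -> f a -> f b

instance Mappable [] where
  mapf = map

instance Mappable Maybe where
  mapf _ Nothing  = Nothing
  mapf g (Just x) = Just (g x)
```

The same mechanism is what lets a single Monad class abstract over list, Maybe, state, and so on, which is how the paper supports monads and monad comprehensions.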


2000 ◽  
Vol 11 (1) ◽  
pp. 65-87 ◽
Author(s):  
MASATOMO HASHIMOTO

This paper develops an ML-style programming language with first-class contexts, i.e. expressions with holes. The crucial operation on contexts is hole filling. Filling a hole with an expression has the effect of dynamic binding or macro expansion, which provides the advanced feature of manipulating open program fragments. Such mechanisms are useful in many systems, including distributed/mobile programming and program modules. If a context can be treated as a first-class citizen in a programming language, then open program fragments can be manipulated in a flexible and seamless manner. The possibility of such a programming language was shown by the theory of the simply typed context calculus developed by Hashimoto and Ohori. This paper extends the simply typed context calculus to an ML-style polymorphic type system, and gives an operational semantics and a sound and complete type inference algorithm.
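
A toy sketch of hole filling over untyped lambda terms, with capture permitted; the datatype and function names are hypothetical illustrations, not Hashimoto's calculus.

```haskell
-- A first-class context here is just a term containing a Hole.
-- Hole filling is textual: the filled expression may be captured by
-- binders in the context, the "dynamic binding" effect the abstract
-- describes, which ordinary capture-avoiding substitution forbids.
data Expr = Var String | Lam String Expr | App Expr Expr | Hole

fill :: Expr -> Expr -> Expr
fill (Var x)   _ = Var x
fill (Lam x b) e = Lam x (fill b e)   -- e may refer to x after filling
fill (App f a) e = App (fill f e) (fill a e)
fill Hole      e = e
```

The typing challenge the paper addresses is visible even in this sketch: the type of a context must record which variables its hole may capture, and the polymorphic system must track that interface.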


1996 ◽  
Vol 6 (1) ◽  
pp. 111-141 ◽  
Author(s):  
John Greiner

Abstract: The weak polymorphic type system of Standard ML of New Jersey (SML/NJ) (MacQueen, 1992) has only been presented as part of the implementation of the SML/NJ compiler, not as a formal type system. As a result, it is not well understood. While numerous versions of the implementation have been shown unsound, the concept itself has not been proved sound or unsound. We present an explanation of weak polymorphism and show that a formalization of it is sound. We also relate this formalization to the SML/NJ implementation of weak polymorphism through a series of type systems that incorporate elements of the SML/NJ type inference algorithm.
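
The classic counterexample that motivates weak polymorphism, transposed into Haskell as a sketch (unsafePerformIO stands in for ML's polymorphic references; cell and broken are illustrative names):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- If a mutable cell is given the fully polymorphic type IORef [a],
-- one call site can write an [Int] while another reads a [Bool].
-- SML/NJ's weak type variables (and later the value restriction)
-- exist precisely to reject the ML analogue of this definition.
{-# NOINLINE cell #-}
cell :: IORef [a]
cell = unsafePerformIO (newIORef [])

broken :: IO Bool
broken = do
  writeIORef cell [1 :: Int]
  (b : _) <- readIORef cell
  return b   -- an Int read back at type Bool: type safety is lost
```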


2006 ◽  
Vol 16 (6) ◽  
pp. 793-811 ◽  
Author(s):  
ALEX POTANIN ◽  
JAMES NOBLE ◽  
DAVE CLARKE ◽  
ROBERT BIDDLE

Existing approaches to object encapsulation either rely on ad hoc syntactic restrictions or require the use of specialised type systems. Syntactic restrictions are difficult to scale and to prove correct, while specialised type systems require extensive changes to programming languages. We demonstrate that confinement can be enforced cheaply in Featherweight Generic Java, with no essential change to the underlying language or type system. This result demonstrates that polymorphic type parameters can simultaneously act as ownership parameters and should facilitate the adoption of confinement and ownership type systems in general-purpose programming languages.
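
A Haskell analogue of the paper's point, as a sketch: an ordinary type parameter acting as an ownership tag, confined by rank-2 quantification in the style of runST. Owned and withOwner are hypothetical names, not the paper's Featherweight Generic Java encoding.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The phantom parameter o plays the role of an owner. Because o is
-- universally quantified in the argument of withOwner and cannot
-- appear in the result type r, no Owned o value can escape the scope
-- that owns it: confinement enforced by plain parametric polymorphism.
newtype Owned o a = Owned a

withOwner :: (forall o. Owned o a -> r) -> a -> r
withOwner body x = body (Owned x)
```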


1999 ◽  
Vol 9 (4) ◽  
pp. 427-462 ◽  
Author(s):  
SUSUMU NISHIMURA ◽  
ATSUSHI OHORI

This article proposes a new language mechanism for data-parallel processing of dynamically allocated, recursively defined data. Unlike conventional array-based data-parallelism, it allows parallel processing of general recursively defined data, such as lists or trees, in a functional way. This is achieved by representing a recursively defined datum as a system of equations, and by defining new language constructs for the parallel transformation of a system of equations. By integrating these with a higher-order functional language, we obtain a functional programming language suitable for describing data-parallel algorithms on recursively defined data in a declarative way. The language has an ML-style polymorphic type system and a type-sound operational semantics that uniformly integrates the parallel evaluation mechanism with the semantics of a typed functional language. We also show the intended parallel execution model behind the formal semantics, assuming an idealized distributed-memory multicomputer.
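
A toy rendering of "a recursively defined datum as a system of equations" in Haskell; EqSys, Node, and mapEq are illustrative names, and real distributed evaluation is elided.

```haskell
import qualified Data.Map as M

-- Each node of a list gets an equation indexed by an id. A
-- transformation rewrites every equation independently of the others,
-- which is what makes the flat representation data-parallel.
type Id = Int
data Node a = Cons a Id | Nil

type EqSys a = M.Map Id (Node a)

-- The list [10, 20]: node 0 = Cons 10 1, node 1 = Cons 20 2, node 2 = Nil.
example :: EqSys Int
example = M.fromList [(0, Cons 10 1), (1, Cons 20 2), (2, Nil)]

-- Map over the datum by rewriting every equation; each rewrite is
-- independent, so this step could run in parallel across nodes.
mapEq :: (a -> b) -> EqSys a -> EqSys b
mapEq f = M.map step
  where
    step (Cons x k) = Cons (f x) k
    step Nil        = Nil
```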


1995 ◽  
Vol 24 (492) ◽  
Author(s):  
Flemming Nielson ◽  
Hanne Riis Nielson

In principle, termination analysis is easy: find a well-founded partial order and prove that calls decrease with respect to this order. In practice, this often requires an oracle (or a theorem prover) for determining the well-founded order, and this oracle may not be easily implementable. Our approach circumvents some of these problems by exploiting the inductive definition of algebraic data types and using pattern matching as in functional languages. We develop a termination analysis for a higher-order functional language; the analysis incorporates and extends polymorphic type inference and axiomatizes a class of well-founded partial orders for multiple-argument functions (as in Standard ML and Miranda). The semantics is given in operational (natural-style) form, and soundness is proved; this involves extending the semantic universe, and we relate the extensions to the techniques of denotational semantics. For the partiality aspects of the soundness proof, it suffices to incorporate approximations to the desired fixed points; for the totality aspects, we also have to incorporate functions that are forced to terminate (in a way that may violate the monotonicity of denotational semantics).
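
The favourable case such an analysis targets, sketched in Haskell: recursion on strict sub-patterns of an algebraic data type, where the sub-term order is the well-founded order and no external oracle is needed.

```haskell
-- Each recursive call takes a strict sub-pattern (l or r) of the
-- matched argument, so every call chain descends in the sub-term
-- order on Tree, which is well-founded by the inductive definition.
data Tree a = Leaf | Node (Tree a) a (Tree a)

size :: Tree a -> Int
size Leaf         = 0
size (Node l _ r) = 1 + size l + size r
```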


1995 ◽  
Vol 24 (498) ◽  
Author(s):  
Kirsten Lackner Solberg

In this Ph.D. thesis, we study four program analyses. Three of them are specified by annotated type systems and the last by abstract interpretation.

We present a combined strictness and totality analysis, specified as an annotated type system. The type system allows conjunctions of annotated types, but only at the top level. The analysis is somewhat more powerful than the strictness analysis of Kuo and Mishra, due to the conjunctions and because we also consider totality. The analysis is shown sound with respect to a natural-style operational semantics. The analysis is not immediately extendable to full conjunction.

The second analysis is also a combined strictness and totality analysis, but with "full" conjunction. Soundness of the analysis is shown with respect to a denotational semantics. The analysis is more powerful than the strictness analyses of Jensen and Benton in that, in addition to strictness, it considers totality. So far we have only specified the analyses; for them to be practically useful, we need an algorithm for inferring the annotated types. We construct an algorithm for the second analysis using the lazy type approach of Hankin and Le Métayer. The reason for choosing the second analysis is that this approach is not applicable to the first.

The third analysis is a binding-time analysis. We take the analysis specified by Nielson and Nielson and construct a more efficient algorithm than the one they propose. The algorithm collects constraints in a structural manner, like the type inference algorithm of Damas, and afterwards finds the minimal solution to the set of constraints.

The last analysis in the thesis is specified by abstract interpretation. Hunt shows that projection-based analyses are subsumed by PER (partial equivalence relation) based analyses using abstract interpretation. The PERs used by Hunt are strict, i.e. bottom is related to bottom. Here we lift this restriction by requiring the PERs to be uniform, in the sense that they treat all the integers equally. By allowing non-strict PERs we obtain three properties on the integers, corresponding to the three annotations used in the first and second analyses of the thesis.
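
A sketch distinguishing the two properties the combined analyses track, in Haskell; the annotated types live only in the comments, and the annotation notation is purely illustrative, not the thesis's.

```haskell
-- Strictness: bottom in gives bottom out. Totality: defined input
-- gives defined (terminating) output. An annotated type system
-- assigns each function annotations recording which properties hold.

succInt :: Int -> Int
succInt x = x + 1     -- strict and total: uses x, always terminates

constZero :: Int -> Int
constZero _ = 0       -- total but not strict: constZero undefined = 0
```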


2007 ◽  
Vol 17 (1) ◽  
pp. 1-82 ◽  
Author(s):  
SIMON PEYTON JONES ◽  
DIMITRIOS VYTINIOTIS ◽  
STEPHANIE WEIRICH ◽  
MARK SHIELDS

Abstract: Haskell's popularity has driven the need for ever more expressive type system features, most of which threaten the decidability and practicality of Damas–Milner type inference. One such feature is the ability to write functions with higher-rank types – that is, functions that take polymorphic functions as their arguments. Complete type inference is known to be undecidable for higher-rank (impredicative) type systems, but in practice programmers are more than willing to add type annotations to guide the type inference engine, and to document their code. However, the choice of just what annotations are required, and what changes are required in the type system and its inference algorithm, has been an ongoing topic of research. We take as our starting point a λ-calculus proposed by Odersky and Läufer. Their system supports arbitrary-rank polymorphism through the exploitation of type annotations on λ-bound arguments and arbitrary sub-terms. Though elegant, and more convenient than some other proposals, Odersky and Läufer's system requires many annotations. We show how to use local type inference (invented by Pierce and Turner) to greatly reduce the annotation burden, to the point where higher-rank types become eminently usable. Higher-rank types have a very modest impact on type inference. We substantiate this claim in a very concrete way, by presenting a complete type-inference engine, written in Haskell, for a traditional Damas–Milner type system, and then showing how to extend it for higher-rank types. We write the type-inference engine using a monadic framework: it turns out to be a particularly compelling example of monads in action. The paper is long, but is strongly tutorial in style. Although we use Haskell as our example source language, and our implementation language, much of our work is directly applicable to any ML-like functional language.
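
A minimal example of a rank-2 type as GHC accepts it today (RankNTypes), showing the kind of single annotation the paper's inference strategy asks for:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument f is itself polymorphic, so applyBoth has a rank-2
-- type that Damas-Milner inference cannot infer. The one annotation
-- on applyBoth is the price; the body needs none, which is the
-- "local" flavour of inference the paper builds on.
applyBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyBoth f (n, b) = (f n, f b)

main :: IO ()
main = print (applyBoth id (3, True))
```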


Author(s):  
YANPENG YANG ◽  
BRUNO C. D. S. OLIVEIRA

Abstract: Traditional designs for functional languages (such as Haskell or ML) have separate sorts of syntax for terms and types. In contrast, many dependently typed languages use a unified syntax that accounts for both terms and types. Unified syntax has some interesting advantages over separate syntax, including less duplication of concepts and added expressiveness. However, integrating unrestricted general recursion in calculi with unified syntax is challenging when some level of type-level computation is present, since properties such as decidable type-checking are easily lost. This paper presents a family of calculi called pure iso-type systems (PITSs), which employ unified syntax, support general recursion and preserve decidable type-checking. PITS is comparable in simplicity to pure type systems (PTSs), and is useful as a foundation for functional languages that stand in between traditional ML-like languages and full-blown dependently typed languages. In PITS, recursion and recursive types are completely unrestricted and type equality is simply based on alpha-equality, just as in traditional ML-style languages. However, like most dependently typed languages, PITS uses unified syntax, naturally supporting many advanced type system features. Instead of implicit type conversion, PITS provides a generalization of iso-recursive types called iso-types. Iso-types replace the conversion rule typically used in dependently typed calculi and make every type-level computation explicit via cast operators. Iso-types avoid the complexity of explicit equality proofs employed in other approaches with casts. We study three variants of PITS that differ in the reduction strategy employed by the cast operators: call-by-name, call-by-value and parallel reduction. One key finding is that while using call-by-value or call-by-name reduction in casts loses some expressive power, it allows those variants of PITS to have simple and direct operational semantics and proofs. In contrast, the variant of PITS with parallel reduction retains the expressive power of PTS conversion, at the cost of a more complex metatheory.
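
The familiar term-level shadow of iso-types, sketched with Haskell's standard Fix encoding; PITS generalizes this explicit fold/unfold cast discipline to type-level computation, so this is only an analogy.

```haskell
-- Iso-recursive types mediate between a recursive type and its
-- one-step unrolling through explicit casts (Fold/unfold here),
-- rather than treating the two as implicitly equal the way an
-- equi-recursive conversion rule would.
newtype Fix f = Fold { unfold :: f (Fix f) }

data ListF a r = NilF | ConsF a r

type List a = Fix (ListF a)

nil :: List a
nil = Fold NilF

cons :: a -> List a -> List a
cons x xs = Fold (ConsF x xs)
```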


Author(s):  
KUEN-BANG HOU (Favonia) ◽  
NICK BENTON ◽  
ROBERT HARPER

Abstract: The connection between polymorphic and dynamic typing was originally considered by Curry et al. (1972, Combinatory Logic, vol. ii) in the form of “polymorphic type assignment” for untyped λ-terms. Types are assigned after the fact to what is, in modern terminology, a dynamic language. Interest in type assignment was revitalized by the proposals of Bracha et al. (1998, OOPSLA) and Bank et al. (1997, POPL) to enrich Java with polymorphism (generics), which in turn sparked the development of other languages, such as Scala, with similar combinations of features. In such a setting, where the target language already has a monomorphic type system, it is desirable to compile polymorphism to dynamic typing in such a way that as much static typing as possible is preserved, relying on dynamics only insofar as genericity is actually required. The basic approach is to compile polymorphism using embeddings from each type into a universal “top” type, ${\mathbb{D}}$, and partial projections that go in the other direction. This scheme is intuitively reasonable, and, indeed, has been used in practice many times. Proving its correctness, however, is non-trivial. This paper studies the compilation of System F to an extension of Moggi's computational meta-language with a dynamic type and shows how the compilation may be proved correct using a logical relation.
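
A miniature of the embedding/projection scheme, sketched in Haskell; D and its constructors are illustrative stand-ins, and the paper's actual target is Moggi's computational meta-language, not Haskell.

```haskell
-- A universal "top" type with total embeddings and partial
-- projections. Statically typed positions never pay for the
-- dynamism; only genuinely generic positions go through D.
data D = DInt Int | DBool Bool | DFun (D -> D)

embedInt :: Int -> D
embedInt = DInt

projInt :: D -> Maybe Int     -- partial: fails on non-integer values
projInt (DInt n) = Just n
projInt _        = Nothing

-- The polymorphic identity compiled once at type D -> D; callers
-- embed their argument before the call and project the result after.
idD :: D -> D
idD = id
```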

