Making Random Judgments: Automatically Generating Well-Typed Terms from the Definition of a Type-System

Author(s):  
Burke Fetscher ◽  
Koen Claessen ◽  
Michał Pałka ◽  
John Hughes ◽  
Robert Bruce Findler


Author(s):  
Guy de Tre ◽  
Rita de Caluwe

The objective of this chapter is to define a fuzzy object-oriented formal database model that allows us to model and manipulate information in a natural way that is true to its nature. Not all the elements (data) that occur in the real world are fully known or defined in a perfect way, yet classical database models only allow the adequate manipulation of accurately defined data. The presented model is built upon an object-oriented type system and an elaborated constraint system, which, respectively, support the definitions of types and constraints. Types and constraints are the basic building blocks of object schemes, which, in turn, are used for defining database schemes. Finally, the definition of the database model is obtained by providing adequate data definition operators and data manipulation operators. Novelties in the approach are the incorporation of generalized constraints and of extended possibilistic truth values, which allow for a better representation of data(base) semantics.
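The following Haskell fragment is only a minimal sketch of the extended possibilistic truth value idea mentioned in the abstract: a possibility distribution over the truth values true, false, and "undefined/not applicable". The type and field names are illustrative assumptions, not the chapter's notation.

```haskell
-- Sketch of an extended possibilistic truth value: a possibility
-- distribution over an extended set of truth values, where the extra
-- value marks propositions that are undefined or not applicable.
type Possibility = Double      -- degrees taken from [0, 1]

data EPTV = EPTV
  { possTrue      :: Possibility  -- possibility that the proposition holds
  , possFalse     :: Possibility  -- possibility that it does not hold
  , possUndefined :: Possibility  -- possibility that it does not apply
  } deriving Show

-- The perfectly known, classical cases appear as special values.
definitelyTrue, definitelyFalse, notApplicable :: EPTV
definitelyTrue  = EPTV 1 0 0
definitelyFalse = EPTV 0 1 0
notApplicable   = EPTV 0 0 1
```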


1999 ◽  
Vol 9 (1) ◽  
pp. 77-91 ◽  
Author(s):  
RICHARD S. BIRD ◽  
ROSS PATERSON

“I have no data yet. It is a capital mistake to theorise before one has data.” (Sir Arthur Conan Doyle, The Adventures of Sherlock Holmes)

de Bruijn notation is a coding of lambda terms in which each occurrence of a bound variable x is replaced by a natural number, indicating the ‘distance’ from the occurrence to the abstraction that introduced x. One might suppose that in any datatype for representing de Bruijn terms, the distance restriction on numbers would have to be maintained as an explicit datatype invariant. However, by using a nested (or non-regular) datatype, we can define a representation in which all terms are well-formed, so that the invariant is enforced automatically by the type system. Programming with nested types is only a little more difficult than programming with regular types, provided we stick to well-established structuring techniques. These involve expressing inductively defined functions in terms of an appropriate fold function for the type, and using fusion laws to establish their properties. In particular, the definitions of lambda abstraction and beta reduction are simple, and the proofs of their associated properties are entirely mechanical.
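As a rough illustration of the nested-datatype representation described above, the following Haskell sketch (the constructor names and the use of Maybe for the "one more variable" step are illustrative, not necessarily the paper's exact definitions) shows how well-scopedness becomes a typing property:

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Data.Void (Void)

-- A nested (non-regular) datatype for de Bruijn terms: the type
-- parameter is the set of free variables, and each Lam enlarges it by
-- exactly one freshly bound variable (Nothing).
data Term v
  = Var v                   -- a free variable
  | App (Term v) (Term v)   -- application
  | Lam (Term (Maybe v))    -- body may mention one extra variable
  deriving Functor

-- Closed terms mention no free variables at all.
type Closed = Term Void

-- Example: the identity function \x -> x; ill-scoped terms simply
-- do not type-check, so no separate invariant has to be maintained.
identity :: Term v
identity = Lam (Var Nothing)

-- Weakening: reusing a term under one extra binder.
weaken :: Term v -> Term (Maybe v)
weaken = fmap Just
```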


Author(s):  
PIERRE-EVARISTE DAGAND

Functional programmers from all horizons strive to use, and sometimes abuse, their favorite type system in order to capture the invariants of their programs. A widely used tool in that trade consists in defining finely indexed datatypes. Operationally, these types classify the programmer's data, following the ML tradition. Logically, these types enforce the program invariants in a novel manner. This new programming pattern, by which one programs over inductive definitions to account for some invariants, led to the development of a theory of ornaments (McBride, 2011, Ornamental Algebras, Algebraic Ornaments. Unpublished). However, ornaments originate as a dependently-typed object and may thus appear rather daunting to a functional programmer of the non-dependent kind. This article aims at presenting ornaments from first principles and, in particular, at decluttering their presentation from syntactic considerations. To do so, we shall give a sufficiently abstract model of indexed datatypes by means of many-sorted signatures. In this process, we formalize our intuition that an indexed datatype is the combination of a data-structure and a data-logic. Over this abstraction of datatypes, we shall recast the definition of ornaments, effectively giving a model of ornaments. Benefiting from both the operational and the abstract nature of many-sorted signatures, ornaments should appear applicable and, one hopes, of interest beyond type-theoretic circles, a case in point being languages with generalized algebraic datatypes or refinement types.
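The sketch below only illustrates the intuition named in the abstract, that an indexed datatype combines a data-structure with a data-logic; it uses Haskell GADTs rather than the paper's many-sorted signatures, and the names are assumptions made for the example:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- The plain data-structure: ordinary lists.
data List a = Nil | Cons a (List a)

-- Type-level natural numbers, used as the index.
data Nat = Z | S Nat

-- The indexed datatype: the same constructors as List, but each one
-- also maintains a length index, the "data-logic" layered on top.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Forgetting the index recovers the underlying data-structure,
-- matching the intuition behind ornamental erasure.
forget :: Vec n a -> List a
forget VNil         = Nil
forget (VCons x xs) = Cons x (forget xs)
```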


2012 ◽  
Vol 23 (1) ◽  
pp. 38-144 ◽  
Author(s):  
FRANÇOIS POTTIER

This paper presents a formal definition and machine-checked soundness proof for a very expressive type-and-capability system, that is, a low-level type system that keeps precise track of ownership and side effects. The programming language has first-class functions and references. The type system's features include the following: universal, existential, and recursive types; subtyping; a distinction between affine and unrestricted data; support for strong updates; support for naming values and heap fragments via singleton and group regions; a distinction between ordinary values (which exist at runtime) and capabilities (which do not); support for dynamic reorganizations of the ownership hierarchy by disassembling and reassembling capabilities; and support for temporarily or permanently hiding a capability via frame and anti-frame rules. One contribution of the paper is the definition of the type-and-capability system itself. We present the system as modularly as possible. In particular, at the core of the system, the treatment of affinity, in the style of dual intuitionistic linear logic, is formulated in terms of an arbitrary monotonic separation algebra, a novel axiomatization of resources, ownership, and the manner in which they evolve with time. Only the peripheral layers of the system are aware that we are dealing with a specific monotonic separation algebra, whose resources are references and regions. This semi-abstract organization should facilitate further extensions of the system with new forms of resources. The other main contribution is a machine-checked proof of type soundness. The proof is carried out in Wright and Felleisen's syntactic style. This offers evidence that this relatively simple-minded proof technique can scale up to systems of this complexity, and constitutes a viable alternative to more sophisticated semantic proof techniques. We do not claim that the syntactic technique is superior: we simply illustrate how it is used and highlight its strengths and shortcomings.
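Purely as a signature-level sketch of the kind of interface a "monotonic separation algebra" suggests (combinable resources plus a preorder describing how they may evolve), and not as the paper's actual definition, one might write:

```haskell
-- Illustrative Haskell interface: resources compose partially and
-- commutatively (the separation part) and may change along a reflexive,
-- transitive preorder (the monotonicity part).
class MonotonicSepAlgebra r where
  emp     :: r                    -- the empty resource
  combine :: r -> r -> Maybe r    -- partial, commutative composition
  evolves :: r -> r -> Bool       -- how resources may change over time

-- Intended laws, stated as comments since they are not machine-checked here:
--   combine emp r  == Just r
--   combine r1 r2  == combine r2 r1
--   evolves r r    == True
--   evolves r1 r2 && evolves r2 r3  implies  evolves r1 r3
```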


Author(s):  
Markus Schneider

Spatial database systems and geographical information systems currently only support geographical applications that deal with crisp spatial objects, that is, objects whose extent, shape, and boundary are precisely determined. Examples are land parcels, school districts, and state territories. However, many new, emerging applications are interested in modeling and processing geographic data that are inherently characterized by spatial vagueness or spatial indeterminacy. Examples are air-polluted areas, temperature zones, and lakes. These applications require novel concepts due to the lack of adequate approaches and systems. In this chapter, the authors show how soft computing techniques can provide a solution to this problem. They give an overview of two type systems or algebras that can be integrated into database systems and utilized for the modeling and handling of spatial vagueness. The first type system, called Vague Spatial Algebra (VASA), is based on well-known, general, and exact models of crisp spatial data types and introduces vague points, vague lines, and vague regions. This enables an exact definition of the vague spatial data model, since it can be built upon an already existing theory of spatial data types. The second type system, called Fuzzy Spatial Algebra (FUSA), leverages fuzzy set theory and fuzzy topology and introduces novel fuzzy spatial data types for fuzzy points, fuzzy lines, and fuzzy regions. This enables an even more fine-grained modeling of spatial objects that do not have sharp boundaries and interiors or whose boundaries and interiors cannot be precisely determined. This chapter provides a formal definition of the structure and semantics of both type systems. Further, the authors introduce spatial set operations for both algebras and obtain vague and fuzzy versions of geometric intersection, union, and difference. Finally, they describe how these data types can be embedded into extensible databases and show some example queries.
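The following Haskell sketch illustrates one common way to realize the VASA idea: a vague region as a pair of crisp regions (a kernel of definite points and a conjecture of possible points), with vague operations expressed through crisp ones. The class, the names, and the exact definitions are assumptions for the illustration and may differ from the chapter's algebra:

```haskell
-- An assumed interface to an existing crisp spatial algebra.
class CrispRegion r where
  union        :: r -> r -> r
  intersection :: r -> r -> r
  difference   :: r -> r -> r

-- A vague region: kernel = certainly inside, conjecture = possibly inside.
data Vague r = Vague
  { kernel     :: r
  , conjecture :: r
  }

-- Vague union: a point is certain if it is certain in either operand,
-- and merely possible otherwise.
vUnion :: CrispRegion r => Vague r -> Vague r -> Vague r
vUnion (Vague k1 c1) (Vague k2 c2) =
  let k = union k1 k2
  in  Vague k (difference (union c1 c2) k)

-- Vague intersection: certain only where both operands are certain;
-- possible wherever both are at least possible.
vIntersection :: CrispRegion r => Vague r -> Vague r -> Vague r
vIntersection (Vague k1 c1) (Vague k2 c2) =
  let k = intersection k1 k2
  in  Vague k (difference (intersection (union k1 c1) (union k2 c2)) k)
```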


2000 ◽  
Vol 10 (4) ◽  
pp. 327-351 ◽  
Author(s):  
RALF HINZE

A trie is a search tree scheme that employs the structure of search keys to organize information. Tries were originally devised as a means to represent a collection of records indexed by strings over a fixed alphabet. Based on work by C. P. Wadsworth and others, R. H. Connelly and F. L. Morris generalized the concept to permit indexing by elements built according to an arbitrary signature. Here we go one step further, and define tries and operations on tries generically for arbitrary datatypes of first-order kind, including parameterized and nested datatypes. The derivation employs techniques recently developed in the context of polytypic programming and can be regarded as a comprehensive case study in this new programming paradigm. It is well known that for the implementation of generalized tries, nested datatypes and polymorphic recursion are needed. Implementing tries for first-order kinded datatypes places even greater demands on the type system: it requires rank-2 type signatures and second-order nested datatypes. Despite these requirements, the definition of tries is surprisingly simple, which is mostly due to the framework of polytypic programming.
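As a small Haskell sketch in the spirit of the abstract (not Hinze's actual development), here is a generalized trie for tree-shaped keys; the nested recursion in the Fork field and the polymorphically recursive lookup are exactly the features the abstract says such tries demand of the type system:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Keys: binary trees with labels at the leaves.
data Tree a = Leaf a | Fork (Tree a) (Tree a)

-- A generalized trie for Tree keys, parameterized by a trie former 'ta'
-- for the label type.  The Fork field indexes a trie of tries: first the
-- left subtree is looked up, then the right one.  (A practical version
-- also needs an explicit representation of the empty trie.)
data TrieTree ta v = TrieTree
  { leafT :: ta v
  , forkT :: TrieTree ta (TrieTree ta v)
  }

-- Lookup uses polymorphic recursion, hence the rank-2 type and the
-- mandatory type signature.
lookupTree :: (forall w. a -> ta w -> Maybe w)
           -> Tree a -> TrieTree ta v -> Maybe v
lookupTree lookA (Leaf a)   t = lookA a (leafT t)
lookupTree lookA (Fork l r) t =
  lookupTree lookA l (forkT t) >>= lookupTree lookA r
```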


2009 ◽  
Vol 19 (5) ◽  
pp. 1029-1059 ◽  
Author(s):  
LIONEL VAUX

We introduce an extension of the pure lambda calculus by endowing the set of terms with the structure of a vector space, or, more generally, of a module, over a fixed set of scalars. Moreover, terms are subject to identities similar to the usual pointwise definition of linear combinations of functions with values in a vector space. We then study a natural extension of beta reduction in this setting: we prove it is confluent, then discuss consistency and conservativity over the ordinary lambda calculus. We also provide normalisation results for a simple type system.
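A minimal Haskell sketch of the kind of syntax the abstract describes (formal linear combinations of terms over a fixed set of scalars, with one of the pointwise identities oriented as a simplification step); the concrete scalar type and constructor names are assumptions:

```haskell
type Scalar = Rational   -- any fixed set of scalars would do here

data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Sum [(Scalar, Term)]   -- a formal linear combination of terms
  deriving Show

-- One pointwise identity of linear combinations of functions,
--   (a.s + b.t) u  =  a.(s u) + b.(t u),
-- oriented left-to-right as a simplification step.
distributeApp :: Term -> Term
distributeApp (App (Sum sts) u) = Sum [ (a, App s u) | (a, s) <- sts ]
distributeApp t                 = t
```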


2018 ◽  
Vol 18 (02) ◽  
pp. e12
Author(s):  
Carlos Alvez ◽  
Ernesto Miranda ◽  
Graciela Etchart ◽  
Silvia Ruiz

Biometric applications have grown significantly in recent years, particularly iris-based systems. This work presents an extension of an object-relational database management system (ORDBMS) for the comprehensive management of a biometric system based on the human iris. Although many database extensions exist for different domains, none of them targets biometric applications. The proposed extension includes both the extension of the type system and the definition of domain indexes for performance improvement. The aim of this work is to provide a tool that facilitates the development of biometric applications based on the iris trait. Its development is based on a reference architecture that covers the management of iris images and their associated metadata, together with the necessary methods for both manipulation and queries. The extension is implemented for the PostgreSQL DBMS, and the SP-GiST framework is used to implement a domain index. Experiments were carried out to evaluate the performance of the proposed index, which shows improvements in query execution times.
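The toy Haskell fragment below is not the paper's PostgreSQL/SP-GiST code; it only illustrates, outside any DBMS, the space-partitioning idea behind such domain indexes: treating iris templates as bit strings and descending one bit per level instead of scanning every stored template.

```haskell
-- Stand-in for a real iris template (a fixed-length bit code).
type IrisCode = [Bool]

-- A binary space-partitioning trie over bit codes.
data BitTrie v
  = Tip
  | Node (Maybe v) (BitTrie v) (BitTrie v)   -- value here, 0-branch, 1-branch

insert :: IrisCode -> v -> BitTrie v -> BitTrie v
insert []     v Tip          = Node (Just v) Tip Tip
insert []     v (Node _ l r) = Node (Just v) l r
insert (b:bs) v Tip          = insert (b:bs) v (Node Nothing Tip Tip)
insert (b:bs) v (Node mv l r)
  | b         = Node mv l (insert bs v r)
  | otherwise = Node mv (insert bs v l) r

lookupCode :: IrisCode -> BitTrie v -> Maybe v
lookupCode _      Tip           = Nothing
lookupCode []     (Node mv _ _) = mv
lookupCode (b:bs) (Node _ l r)  = lookupCode bs (if b then r else l)
```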


2014 ◽  
Vol 26 (1) ◽  
pp. 89-113 ◽  
Author(s):  
NEIL GHANI ◽  
PETER HANCOCK

Induction recursion offers the possibility of a clean, simple and yet powerful meta-language for the type system of a dependently typed programming language. At its crux, induction recursion allows us to define a universe, that is, a set U of codes and a decoding function T : U → D which assigns to every code u : U a value T u of some type D, e.g. the large type Set of small types or sets. The name induction recursion refers to the build-up of codes in U using inductive clauses, simultaneously with the definition of the function T, by structural recursion on codes. Our contribution is to (i) bring out explicitly the algebraic structure which is less visible in the original type-theoretic presentation, in particular showing how containers and monads play a pivotal role within induction recursion; and (ii) use these structures to present a clean and high-level definition of induction recursion suitable for use in functional programming.
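Genuine induction recursion needs a dependently typed meta-language, so the following Haskell fragment is only a non-dependent approximation of the universe picture sketched above: a set of codes together with a decoding that assigns a type to each code. Codes whose formation mentions the decoding of earlier codes (the hallmark of induction recursion) are not expressible in this sketch.

```haskell
{-# LANGUAGE DataKinds, TypeFamilies #-}

-- A small universe of codes ...
data U = UNat | UBool | UPair U U

-- ... together with its decoding into actual Haskell types.
type family Decode (u :: U) where
  Decode 'UNat        = Int
  Decode 'UBool       = Bool
  Decode ('UPair a b) = (Decode a, Decode b)
```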


2008 ◽  
Vol 5 (2) ◽  
pp. 63-86 ◽  
Author(s):  
João Seco ◽  
Ricardo Silva ◽  
Margarida Piriquito

This paper describes an evolution of the ComponentJ programming language, a component-based Java-like programming language in which composition is the chosen structuring mechanism. ComponentJ constructs allow for the high-level specification of component structures, which are the basis for the definition of compound objects. In this paper we present a new language design for ComponentJ which is more flexible and also allows the dynamic reconfiguration of objects. The manipulation of components and composition operations at the programming-language level allows for the compile-time verification, by a type system, of structural safety properties of ComponentJ programs. This work is based on earlier fundamental results where the main concepts are presented and justified in the form of a core component calculus.

