A First-Order Functional Language

Author(s): Peter Sestoft

1991 · Vol 13 (4) · pp. 577-625
Author(s): Radha Jagadeesan, Keshav Pingali, Prakash Panangaden

2010 · Vol 20 (5) · pp. 723-751
Author(s): Thomas Anberrée

We consider a functional language that performs non-deterministic tests on real numbers and define a denotational semantics for this language based on Smyth powerdomains. The semantics is only approximate, because the denotation of a program for a real number may not be precise enough to tell which real number the program computes. However, for many first-order total functions f : ℝⁿ → ℝ, there exists a program for f whose denotation is precise enough to show that the program indeed computes f. In practice, it is not difficult to write such programs with a faithful denotation. We provide a few examples of such programs, together with the corresponding correctness proofs.
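The non-deterministic tests the abstract refers to can be pictured with a small sketch (in Python, not the paper's language; the names `sqrt2` and `ntest` and the approximation scheme are illustrative assumptions). A real is modeled as a function from a precision index n to a rational within 2⁻ⁿ of the true value, and the test `ntest(x, a, b)` may answer either way when a < x < b, which is exactly the non-determinism a Smyth-powerdomain semantics would model:

```python
from fractions import Fraction

def sqrt2(n):
    # Dyadic approximation of sqrt(2) by bisection, accurate to 2**-n.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

def ntest(x, a, b):
    """Non-deterministic comparison: True guarantees x < b, False
    guarantees x > a.  For a < x < b either answer is sound; which
    branch fires first here is an implementation accident, and the
    semantics allows either outcome."""
    assert a < b
    n = 0
    while True:
        q = x(n)
        err = Fraction(1, 2 ** n)
        if q + err < b:      # whole uncertainty interval below b
            return True
        if q - err > a:      # whole uncertainty interval above a
            return False
        n += 1               # refine the approximation and retry
```

Because the test only ever commits when its answer is sound, it terminates on every input where a < b, even though no exact comparison of reals is computable.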


1998 · Vol 8 (4) · pp. 367-412
Author(s): Andrew Tolmach, Dino P. Oliva

We describe a system that supports source-level integration of ML-like functional language code with ANSI C or Ada83 code. The system works by translating the functional code into type-correct, ‘vanilla’ C or Ada; it offers simple, efficient, type-safe inter-operation between new functional code components and ‘legacy’ third-generation-language components. Our translator represents a novel synthesis of techniques including user-parameterized specification of primitive types and operators; removal of polymorphism by code specialization; removal of higher-order functions using closure datatypes and interpretation; and aggressive optimization of the resulting first-order code, which can be viewed as encoding the result of a closure analysis. Programs remain fully typed at every stage of the translation process, using only simple, standard type systems. Target code runs at speeds comparable to the output of current optimizing ML compilers, even though handicapped by a conservative garbage collector.
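The polymorphism-removal step can be pictured as follows (a hedged Python sketch; the function names and types are illustrative, not taken from the translator). Each polymorphic source function is cloned once per concrete type it is used at, so the emitted C or Ada never needs a uniform boxed value representation:

```python
# Polymorphic source (pseudo-ML):  fun swap (x, y) = (y, x)
# The specializer emits one monomorphic copy per use-site type.

def swap_int_int(p):      # swap : int * int -> int * int
    x, y = p
    return (y, x)

def swap_str_int(p):      # swap : string * int -> int * string
    x, y = p
    return (y, x)
```

The bodies coincide here, but in the typed target language each copy carries its own concrete argument and result types, which is what keeps the output fully typed under a simple, standard type system.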


2000 · Vol 7 (47)
Author(s): Lasse R. Nielsen

Defunctionalization was introduced by John Reynolds in his 1972 article ‘Definitional Interpreters for Higher-Order Programming Languages’. Defunctionalization transforms a higher-order program into a first-order one, representing functional values as data structures. Since then it has been used quite widely, but we observe that it has never been proven correct. We formalize defunctionalization denotationally for a typed functional language, and we prove that it preserves the meaning of any terminating program. Our proof uses logical relations.

Keywords: defunctionalization, program transformation, denotational semantics, logical relations.
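The transformation the abstract studies can be shown on a toy example (a Python sketch; the constructor names are illustrative). Each functional value in the source program becomes a data constructor, and all applications are routed through a single first-order `apply` function that interprets those constructors:

```python
# Higher-order original: functional values flow into map_hof.
def map_hof(f, xs):
    return [f(x) for x in xs]

inc = lambda x: x + 1
scale3 = lambda x: x * 3

# Defunctionalized version: each lambda becomes a tagged tuple, and one
# first-order 'apply' interprets the tags.  Captured variables (the
# scale factor) become constructor arguments.
INC, SCALE = "Inc", "Scale"

def apply(fn, x):
    tag = fn[0]
    if tag == INC:
        return x + 1
    if tag == SCALE:
        return x * fn[1]     # the closure's captured factor
    raise ValueError(tag)

def map_fo(fn, xs):
    return [apply(fn, x) for x in xs]
```

The correctness property the paper proves is exactly the agreement visible here: `map_fo((SCALE, 3), xs)` computes the same result as `map_hof(scale3, xs)` for every terminating program.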


2018 · Vol 25 (5) · pp. 534-548
Author(s): Sergei Grechanik

A polyprogram is a generalization of a program that admits multiple definitions of a single function. Such objects arise in transformation systems such as the Burstall-Darlington framework or equality saturation. In this paper, we introduce the notion of a polyprogram in a non-strict first-order functional language. We define a denotational semantics for polyprograms and describe some possible transformations, presenting the main transformations in two styles: the style of the Burstall-Darlington framework and the style of equality saturation. Transformations in the style of equality saturation are performed on polyprograms in decomposed form, where the difference between functions and expressions is blurred, as is the difference between substitution and unfolding. Decomposed polyprograms are well suited to implementation and reasoning, although they are not very human-readable. We also introduce the notion of polyprogram bisimulation, which enables a powerful transformation called merging by bisimulation, corresponding to proving equivalence of functions by induction or coinduction. Polyprogram bisimulation is inspired by bisimulation of labelled transition systems, yet it is quite different, because polyprogram bisimulation treats every definition as self-sufficient; that is, a function is considered to be defined by any of its definitions, whereas in an LTS the behaviour of a state is defined by all transitions from that state. We present an algorithm for enumerating polyprogram bisimulations of a certain form. The algorithm consists of two phases: enumerating prebisimulations and converting them to proper bisimulations. This separation is required because polyprogram bisimulations take into account the possibility of parameter permutation. We prove the correctness of this algorithm and formulate a weak form of its completeness. The article is published in the author's wording.
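The key property of a polyprogram — every definition is self-sufficient — can be sketched in a few lines (a Python illustration; the representation as a dict of callables is an assumption of this sketch, not the paper's formalism):

```python
# A polyprogram maps each function name to *several* definitions, any
# one of which is taken to define it on its own.
polyprog = {
    "double": [
        lambda n: n + n,     # definition obtained by unfolding
        lambda n: 2 * n,     # definition obtained from an algebraic law
    ],
}

def run(prog, name, arg, choice=0):
    """Evaluate 'name' using its 'choice'-th definition.  In a
    semantically valid polyprogram every choice yields the same
    result, which is what merging by bisimulation establishes."""
    return prog[name][choice](arg)
```

Contrast this with an LTS, where a state's behaviour is the union of all its transitions: here each entry in the list is a complete definition by itself, so any two entries for the same name must denote the same function.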


1995 · Vol 146 (1-2) · pp. 69-108
Author(s): Zena M. Ariola, Arvind

1997 · Vol 7 (1) · pp. 103-123
Author(s): J. Hammes, S. Sur, W. Böhm

In this paper we investigate the effectiveness of functional language features when writing scientific codes. Our programs are written in the purely functional subset of Id and executed on a one node Motorola Monsoon machine, and in Haskell and executed on a Sparc 2. In the application we study – the NAS FT benchmark, a three-dimensional heat equation solver – it is necessary to target and select one-dimensional sub-arrays in three-dimensional arrays. Furthermore, it is important to be able to share computation in array definitions. We compare first order and higher order implementations of this benchmark. The higher order version uses functions to select one-dimensional sub-arrays, or slices, from a three-dimensional object, whereas the first order version creates copies to achieve the same result. We compare various representations of a three-dimensional object, and study the effect of strictness in Haskell. We also study the performance of our codes when employing recursive and iterative implementations of the one-dimensional FFT, which forms the kernel of this benchmark. It turns out that these languages still have quite inefficient implementations, with respect to both space and time. For the largest problem we could run (32³), Haskell is 15 times slower than Fortran and uses three times more space than is absolutely necessary, whereas Id on Monsoon uses nine times more cycles than Fortran on the MIPS R3000, and uses five times more space than is absolutely necessary. This code, and others like it, should inspire compiler writers to improve the performance of functional language implementations.
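The contrast between the two versions of the benchmark can be sketched as follows (a Python illustration using nested lists; the benchmark itself is written in Id and Haskell, and the names here are assumptions). The higher-order version denotes a slice by a selector function, while the first-order version materializes it as a copy:

```python
N = 4
# A small 3-D array: cube[z][y][x] == x + 10*y + 100*z.
cube = [[[x + 10 * y + 100 * z for x in range(N)]
         for y in range(N)] for z in range(N)]

# Higher-order style: the slice *is* a function; nothing is copied.
def slice_x(a, y, z):
    return lambda x: a[z][y][x]

# First-order style: the slice is materialized as a fresh 1-D copy.
def copy_slice_x(a, y, z):
    return [a[z][y][x] for x in range(N)]
```

The two denote the same one-dimensional sub-array; the difference the paper measures is the cost of the copies versus the cost of repeatedly interpreting the selector function.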

