A Denotational Investigation of Defunctionalization

2000 ◽  
Vol 7 (47) ◽  
Author(s):  
Lasse R. Nielsen

Defunctionalization was introduced by John Reynolds in his 1972 article Definitional Interpreters for Higher-Order Programming Languages. Defunctionalization transforms a higher-order program into a first-order one, representing functional values as data structures. Since then it has been used quite widely, but we observe that it has never been proven correct. We formalize defunctionalization denotationally for a typed functional language, and we prove that it preserves the meaning of any terminating program. Our proof uses logical relations.

Keywords: defunctionalization, program transformation, denotational semantics, logical relations.
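
As a concrete, minimal illustration of the transformation the paper formalizes, the Haskell sketch below defunctionalizes the function argument of a map-like traversal: each function value that can occur at the call site becomes a constructor of a first-order data type, interpreted by an apply function. The names (Fun, apply, mapFO) are ours for illustration, not the paper's.

```haskell
-- Higher-order original: the function argument is a genuine closure.
mapHO :: (Int -> Int) -> [Int] -> [Int]
mapHO f = map f

-- First-order version: one constructor per function value that can
-- occur in the program, interpreted by 'apply'.
data Fun = Inc | MulBy Int

apply :: Fun -> Int -> Int
apply Inc       n = n + 1
apply (MulBy k) n = k * n

mapFO :: Fun -> [Int] -> [Int]
mapFO _ []       = []
mapFO f (x : xs) = apply f x : mapFO f xs

main :: IO ()
main = print (mapFO (MulBy 3) [1, 2, 3])   -- [3,6,9], agrees with mapHO (* 3)
```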

2010 ◽  
Vol 20 (5) ◽  
pp. 723-751
Author(s):  
THOMAS ANBERRÉE

We consider a functional language that performs non-deterministic tests on real numbers and define a denotational semantics for that language based on Smyth powerdomains. The semantics is only an approximate one, because the denotation of a program for a real number may not be precise enough to tell which real number the program computes. However, for many first-order total functions f : ℝⁿ → ℝ, there exists a program for f whose denotation is precise enough to show that the program indeed computes the function f. In practice, it is not difficult to write programs possessing such a faithful denotation. We provide a few examples of such programs and the corresponding proofs of correctness.
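
As a rough intuition for why denotations can be imprecise — the Haskell below is our own toy model, not the paper's Smyth-powerdomain construction — represent a real by its rational approximations, and a comparison test that consults only a finite approximation. Near a tie the answer depends on the precision used, which is exactly the non-determinism the semantics must account for.

```haskell
-- Toy model: a real number as a function from precision to a rational
-- approximation (accurate to within 1/2^n after n steps).
type R = Int -> Rational

-- A test that inspects only a finite approximation: for arguments closer
-- together than the precision, it may answer either way.
ltTest :: Int -> R -> R -> Bool
ltTest n x y = x n < y n

-- The square root of 2 by interval bisection on x^2 = 2.
sqrt2 :: R
sqrt2 n = go 1 2 n
  where
    go lo hi 0 = (lo + hi) / 2
    go lo hi k
      | mid * mid < 2 = go mid hi (k - 1)
      | otherwise     = go lo mid (k - 1)
      where mid = (lo + hi) / 2

main :: IO ()
main = print (ltTest 20 sqrt2 (const 1.5))   -- True: 1.414... < 1.5
```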


2002 ◽  
Vol 9 (2) ◽  
Author(s):  
Lasse R. Nielsen

We build on Danvy and Nielsen's first-order program transformation into continuation-passing style (CPS) to present a new correctness proof of the converse transformation, i.e., a one-pass transformation from CPS back to direct style. Previously published proofs were based on, e.g., a one-pass higher-order CPS transformation, and were complicated by having to reason about higher-order functions. In contrast, this work is based on a one-pass CPS transformation that is both compositional and first-order, and therefore the proof simply proceeds by structural induction on syntax.
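
To convey the flavour on a much smaller scale than the paper — the fragment below is our own toy, not Danvy and Nielsen's actual transformation — here is a direct-style transformation for a linear CPS language of additions. It proceeds by structural induction on syntax, substituting each named intermediate result back into its single use site.

```haskell
data Expr = Lit Int | Add Expr Expr deriving Show

data Atom = Var String | Num Int          -- trivial CPS values
data CPS  = Halt Atom                     -- return the final result
          | LetAdd String Atom Atom CPS   -- let x = a1 + a2 in k

-- Each let-bound name is used exactly once in well-formed CPS output,
-- so substituting it back duplicates no work.
ds :: [(String, Expr)] -> CPS -> Expr
ds env (Halt a)           = atom env a
ds env (LetAdd x a1 a2 k) = ds ((x, Add (atom env a1) (atom env a2)) : env) k

atom :: [(String, Expr)] -> Atom -> Expr
atom env (Var x) = maybe (error ("unbound " ++ x)) id (lookup x env)
atom _   (Num n) = Lit n

main :: IO ()
main = print (ds [] (LetAdd "t" (Num 1) (Num 2)
                       (LetAdd "u" (Var "t") (Num 3)
                          (Halt (Var "u")))))
-- prints Add (Add (Lit 1) (Lit 2)) (Lit 3)
```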


2001 ◽  
Vol 8 (23) ◽  
Author(s):  
Olivier Danvy ◽  
Lasse R. Nielsen

We study practical applications of Reynolds's defunctionalization technique, which is a whole-program transformation from higher-order to first-order functional programs. This study leads us to discover new connections between seemingly unrelated higher-order and first-order specifications and their correctness proofs. We thus perceive defunctionalization both as a springboard and as a bridge: as a springboard for discovering new connections between the first-order world and the higher-order world; and as a bridge for transferring existing results between first-order and higher-order settings.


2018 ◽  
Vol 25 (5) ◽  
pp. 534-548
Author(s):  
Sergei Grechanik

A polyprogram is a generalization of a program that admits multiple definitions of a single function. Such objects arise in transformation systems such as the Burstall-Darlington framework or equality saturation. In this paper, we introduce the notion of a polyprogram in a non-strict first-order functional language. We define a denotational semantics for polyprograms and describe the main transformations of polyprograms in two styles: the style of the Burstall-Darlington framework and the style of equality saturation. Transformations in the style of equality saturation operate on polyprograms in decomposed form, where the difference between functions and expressions is blurred, as is the difference between substitution and unfolding. Decomposed polyprograms are well suited for implementation and reasoning, although they are not very human-readable. We also introduce the notion of polyprogram bisimulation, which enables a powerful transformation called merging by bisimulation, corresponding to proving the equivalence of functions by induction or coinduction. Polyprogram bisimulation is inspired by bisimulation of labelled transition systems, yet it is quite different: polyprogram bisimulation treats every definition as self-sufficient, that is, a function is considered to be defined by any one of its definitions, whereas in an LTS the behaviour of a state is defined by all transitions from that state. We present an algorithm for enumerating polyprogram bisimulations of a certain form. The algorithm consists of two phases: enumerating prebisimulations and converting them to proper bisimulations. This separation is required because polyprogram bisimulations must take into account the possibility of parameter permutation. We prove the correctness of this algorithm and formulate a weak form of its completeness.
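
A minimal sketch of the central data structure, in our own Haskell encoding (the paper's language and its decomposed form differ): a polyprogram simply maps each function name to a set of alternative definitions, each of which on its own is taken to define the function.

```haskell
import qualified Data.Map as M

-- Expressions: variables, calls, and constructor applications.
data Expr = Var String | Call String [Expr] | Con String [Expr]
  deriving Show

-- One definition: parameters plus a body.
data Def = Def [String] Expr deriving Show

-- A polyprogram: each function name admits several definitions,
-- any one of which defines it.
type Polyprogram = M.Map String [Def]

-- Two alternative definitions of 'double' (the literal 2 is encoded
-- as a nullary constructor here).
double :: Polyprogram
double = M.fromList
  [ ("double", [ Def ["x"] (Call "add" [Var "x", Var "x"])
               , Def ["x"] (Call "mul" [Con "2" [], Var "x"]) ]) ]

main :: IO ()
main = print (M.toList double)
```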


1994 ◽  
Vol 4 (4) ◽  
pp. 515-555 ◽  
Author(s):  
Wei-Ngan Chin

Abstract. Large functional programs are often constructed by decomposing each big task into smaller tasks which can be performed by simpler functions. This hierarchical style of developing programs has been found to improve programmers' productivity because smaller functions are easier to construct and reuse. However, programs written in this way tend to be less efficient. Unnecessary intermediate data structures may be created. More function invocations may be required. To reduce such performance penalties, Phil Wadler proposed a transformation algorithm, called deforestation, which could automatically fuse certain composed expressions together to eliminate intermediate tree-like data structures. However, his technique is currently safe (terminates with no loss of efficiency) for only a subset of first-order expressions. This paper will generalise the deforestation technique to make it safe for all first-order and higher-order functional programs. Our generalisation is explained using a model for safe fusion which views each function as a producer and its parameters as consumers. Through this model, syntactic program properties are proposed to classify producers and consumers as either safe or unsafe. This classification is used to identify sub-terms that can be safely fused/eliminated. We present the generalised transformation algorithm, illustrate it with examples and provide a termination proof for the transformation algorithm of first-order programs. This paper also contains a suite of additional techniques to further improve the basic safe fusion method. These improvements could be viewed as enhancements to compensate for some inadequacies of the syntactic analyses used.
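
The standard two-line example of what deforestation achieves (ours, for orientation; the paper presents the general algorithm): the composed pipeline allocates an intermediate list, while the fused result traverses the input once and allocates nothing.

```haskell
-- Composed, modular style: 'map' produces a list that 'sum' consumes.
sumSquares :: [Int] -> Int
sumSquares xs = sum (map (\x -> x * x) xs)

-- Deforested result: one recursive pass, no intermediate list.
sumSquares' :: [Int] -> Int
sumSquares' []       = 0
sumSquares' (x : xs) = x * x + sumSquares' xs

main :: IO ()
main = print (sumSquares [1 .. 10], sumSquares' [1 .. 10])   -- (385,385)
```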


1997 ◽  
Vol 7 (1) ◽  
pp. 103-123 ◽  
Author(s):  
J. HAMMES ◽  
S. SUR ◽  
W. BÖHM

In this paper we investigate the effectiveness of functional language features when writing scientific codes. Our programs are written in the purely functional subset of Id and executed on a one-node Motorola Monsoon machine, and in Haskell and executed on a Sparc 2. In the application we study – the NAS FT benchmark, a three-dimensional heat equation solver – it is necessary to target and select one-dimensional sub-arrays in three-dimensional arrays. Furthermore, it is important to be able to share computation in array definitions. We compare first-order and higher-order implementations of this benchmark. The higher-order version uses functions to select one-dimensional sub-arrays, or slices, from a three-dimensional object, whereas the first-order version creates copies to achieve the same result. We compare various representations of a three-dimensional object, and study the effect of strictness in Haskell. We also study the performance of our codes when employing recursive and iterative implementations of the one-dimensional FFT, which forms the kernel of this benchmark. It turns out that these languages still have quite inefficient implementations, with respect to both space and time. For the largest problem we could run (32³), Haskell is 15 times slower than Fortran and uses three times more space than is absolutely necessary, whereas Id on Monsoon uses nine times more cycles than Fortran on the MIPS R3000, and uses five times more space than is absolutely necessary. This code, and others like it, should inspire compiler writers to improve the performance of functional language implementations.
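
The contrast between the two styles can be sketched in a few lines of Haskell (our reconstruction, not the benchmark code): the higher-order version returns a slice as a function, so no copy is made, while the first-order version materializes the slice as a fresh array.

```haskell
import Data.Array

type Arr3 = Array (Int, Int, Int) Double

-- Higher-order style: a slice along the k axis is just a function.
sliceHO :: Arr3 -> Int -> Int -> (Int -> Double)
sliceHO a i j = \k -> a ! (i, j, k)

-- First-order style: the slice is materialized by copying.
sliceFO :: Arr3 -> Int -> Int -> Array Int Double
sliceFO a i j = listArray (k0, k1) [a ! (i, j, k) | k <- [k0 .. k1]]
  where ((_, _, k0), (_, _, k1)) = bounds a

main :: IO ()
main = do
  let a = listArray ((0, 0, 0), (1, 1, 1)) [0 ..] :: Arr3
  print (sliceHO a 0 1 0, sliceFO a 0 1)
```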


10.29007/t4gz ◽  
2018 ◽  
Author(s):  
Geoff Hamilton ◽  
Morten Heine Sørensen

A program transformation technique should terminate, return efficient output programs and be efficient itself. For positive supercompilation, ensuring termination requires memoisation of expressions, and these are subsequently used to determine when to perform generalization and folding. For a first-order language, every infinite sequence of transformation steps must include function unfolding, so it is sufficient to memoise only those expressions immediately prior to a function unfolding step. However, for a higher-order language, it is possible for an expression to have an infinite sequence of transformation steps which do not include function unfolding, so memoisation prior to a function unfolding step is not sufficient by itself to ensure termination. But memoising additional expressions is expensive during transformation and may lead to less efficient output programs due to auxiliary functions. This additional memoisation may happen explicitly during transformation or implicitly via a pre-processing transformation, as outlined in previous work by the first author. We introduce a new technique for local driving in higher-order positive supercompilation which obviates the need for memoising expressions other than those at function unfolding steps, thereby improving the efficiency of both the transformation and the generated programs. We exploit the fact, due to the second author in the setting of the type-free lambda calculus, that every expression with an infinite sequence of transformation steps not involving function unfolding must have something like the term Ω = (λx. x x) (λx. x x) embedded within it in a certain sense. The technique has proven useful on a host of examples.
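
In Haskell-rendered form (our sketch; the paper's embedding relation is more refined than this purely syntactic check), the term in question and a naive detector for an Ω-like redex look as follows:

```haskell
data Term = V String | Lam String Term | App Term Term deriving Show

-- Omega reduces forever without any function unfolding.
omega :: Term
omega = App w w
  where w = Lam "x" (App (V "x") (V "x"))

-- Naive stand-in for the paper's embedding test: does any subterm
-- apply (\x. x x) to something?
hasOmegaLike :: Term -> Bool
hasOmegaLike t = omegaRedex t || any hasOmegaLike (subterms t)

omegaRedex :: Term -> Bool
omegaRedex (App (Lam x (App (V y) (V z))) _) = x == y && x == z
omegaRedex _                                 = False

subterms :: Term -> [Term]
subterms (V _)     = []
subterms (Lam _ b) = [b]
subterms (App f a) = [f, a]

main :: IO ()
main = print (hasOmegaLike omega)   -- True
```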


Author(s):  
Petar Vukmirović ◽  
Jasmin Blanchette ◽  
Simon Cruanes ◽  
Stephan Schulz

Abstract. Decades of work have gone into developing efficient proof calculi, data structures, algorithms, and heuristics for first-order automatic theorem proving. Higher-order provers lag behind in terms of efficiency. Instead of developing a new higher-order prover from the ground up, we propose to start with the state-of-the-art superposition prover E and gradually enrich it with higher-order features. We explain how to extend the prover's data structures, algorithms, and heuristics to λ-free higher-order logic, a formalism that supports partial application and applied variables. Our extension outperforms the traditional encoding and appears promising as a stepping stone toward full higher-order logic.
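
A sketch of what the term representation must accommodate (our datatype for illustration, not E's internals): in λ-free higher-order logic a term is a head applied to a possibly short argument list, and the head may itself be a variable.

```haskell
-- λ-free higher-order terms: a head applied to zero or more arguments.
data Head = Sym String | FVar String deriving Show   -- symbol or variable head
data Term = App Head [Term] deriving Show

-- Partial application: a binary f applied to only one argument.
ex1 :: Term
ex1 = App (Sym "f") [App (Sym "a") []]

-- Applied variable: the head F is a variable, impossible in first-order terms.
ex2 :: Term
ex2 = App (FVar "F") [App (Sym "a") [], App (Sym "b") []]

main :: IO ()
main = mapM_ print [ex1, ex2]
```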


2019 ◽  
Vol 42 ◽  
Author(s):  
Daniel J. Povinelli ◽  
Gabrielle C. Glorioso ◽  
Shannon L. Kuznar ◽  
Mateja Pavlic

Abstract. Hoerl and McCormack demonstrate that although animals possess a sophisticated temporal updating system, there is no evidence that they also possess a temporal reasoning system. This important case study is directly related to the broader claim that although animals are manifestly capable of first-order (perceptually based) relational reasoning, they lack the capacity for higher-order, role-based relational reasoning. We argue this distinction applies to all domains of cognition.


Author(s):  
Julian M. Etzel ◽  
Gabriel Nagy

Abstract. In the current study, we examined the viability of a multidimensional conception of perceived person-environment (P-E) fit in higher education. We introduce an optimized 12-item measure that distinguishes between four content dimensions of perceived P-E fit: interest-contents (I-C) fit, needs-supplies (N-S) fit, demands-abilities (D-A) fit, and values-culture (V-C) fit. The central aim of our study was to examine whether the relationships between different P-E fit dimensions and educational outcomes can be accounted for by a higher-order factor that captures the shared features of the four fit dimensions. Relying on a large sample of university students in Germany, we found that students distinguish between the proposed fit dimensions. The respective first-order factors shared a substantial proportion of variance and conformed to a higher-order factor model. Using a newly developed factor extension procedure, we found that the relationships between the first-order factors and most outcomes were not fully accounted for by the higher-order factor. Rather, with the exception of V-C fit, all specific P-E fit factors that represent the first-order factors’ unique variance showed reliable and theoretically plausible relationships with different outcomes. These findings support the viability of a multidimensional conceptualization of P-E fit and the validity of our adapted instrument.

