Compile-Time Garbage Collection for Higher-Order Functional Languages

1992 ◽ Vol 2 (4) ◽ pp. 483-509 ◽ Author(s): SIMON HUGHES

2008 ◽ Vol 18 (5-6) ◽ pp. 821-864 ◽ Author(s): MATTHEW MIGHT, OLIN SHIVERS

Abstract: We present two complementary improvements for abstract-interpretation-based flow analysis of higher-order languages: (1) abstract garbage collection and (2) abstract counting. Abstract garbage collection is an analog to its concrete counterpart: the analysis determines when an abstract resource has become unreachable, and then reallocates it as fresh. This prevents flow sets from joining during abstract interpretation, which has two immediate effects: (1) the precision of the interpretation increases and (2) its running time often falls. In abstract counting, the analysis tracks how many times an abstract resource has been allocated. A count of one implies that the abstract resource momentarily represents only one concrete resource. This knowledge, in turn, drives environment analysis, expanding the kind (rather than just the degree) of optimization available to the compiler.
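
As a rough illustration of the two ideas, the following Haskell sketch (with illustrative names such as Addr, Store, Count, alloc and gc that are not taken from the paper) models an abstract store whose flow sets can be garbage-collected from a root set, while a count map records whether an abstract address currently stands for zero, one or many concrete resources.

    import qualified Data.Map.Strict as M
    import qualified Data.Set as S

    type Addr = Int

    -- Abstract values: a closure remembers which addresses it can reach.
    data AVal = AClo String [Addr] | ANum
      deriving (Eq, Ord, Show)

    -- The abstract store maps each address to a set of values (a flow set).
    type Store = M.Map Addr (S.Set AVal)

    -- Abstract counting: how many concrete resources an abstract address
    -- currently stands for.
    data Count = Zero | One | Many deriving (Eq, Ord, Show)
    type Counts = M.Map Addr Count

    -- Allocating at an address joins flow sets and bumps its count;
    -- a second allocation at the same address means "Many".
    alloc :: Addr -> S.Set AVal -> (Store, Counts) -> (Store, Counts)
    alloc a vs (sto, cnt) =
      ( M.insertWith S.union a vs sto
      , M.insertWith (\_ _ -> Many) a One cnt )

    -- Addresses mentioned by a single abstract value.
    touches :: AVal -> S.Set Addr
    touches (AClo _ as) = S.fromList as
    touches ANum        = S.empty

    -- Transitive reachability from a root set through the store.
    reachable :: Store -> S.Set Addr -> S.Set Addr
    reachable sto = go S.empty
      where
        go seen frontier
          | S.null frontier = seen
          | otherwise       = go seen' (next `S.difference` seen')
          where
            seen' = seen `S.union` frontier
            next  = S.unions [ touches v
                             | a <- S.toList frontier
                             , v <- maybe [] S.toList (M.lookup a sto) ]

    -- Abstract GC: unreachable addresses are dropped and their counts reset,
    -- so re-allocating them later starts from a fresh flow set instead of
    -- joining with stale values.
    gc :: S.Set Addr -> (Store, Counts) -> (Store, Counts)
    gc roots (sto, cnt) =
      let live = reachable sto roots
      in ( M.restrictKeys sto live
         , M.map (const Zero) (M.withoutKeys cnt live)
             `M.union` M.restrictKeys cnt live )

    main :: IO ()
    main = do
      let (sto, cnt) = alloc 1 (S.singleton (AClo "f" [2]))
                     $ alloc 2 (S.singleton ANum) (M.empty, M.empty)
      -- Address 2 stays live because the closure at address 1 touches it.
      print (gc (S.singleton 1) (sto, cnt))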


2014 ◽ Vol 24 (2-3) ◽ pp. 218-283 ◽ Author(s): J. IAN JOHNSON, ILYA SERGEY, CHRISTOPHER EARL, MATTHEW MIGHT, DAVID VAN HORN

Abstract: In the static analysis of functional programs, pushdown flow analysis and abstract garbage collection push the boundaries of what we can learn about programs statically. This work illuminates and poses solutions to theoretical and practical challenges that stand in the way of combining the power of these techniques. Pushdown flow analysis grants unbounded yet computable polyvariance to the analysis of return-flow in higher-order programs. Abstract garbage collection grants unbounded polyvariance to abstract addresses which become unreachable between invocations of the abstract contexts in which they were created. Pushdown analysis solves the problem of precisely analyzing recursion in higher-order languages; abstract garbage collection is essential in solving the “stickiness” problem. Alone, our benchmarks demonstrate that each method can reduce analysis times and boost precision by orders of magnitude. We combine these methods. The challenge in marrying these techniques is not subtle: computing the reachable control states of a pushdown system relies on limiting access during transition to the top of the stack; abstract garbage collection, on the other hand, needs full access to the entire stack to compute a root set, just as concrete collection does. Conditional pushdown systems were developed for just such a conundrum, but existing methods are ill-suited for the dynamic nature of garbage collection. We show fully precise and approximate solutions to the feasible paths problem for pushdown garbage-collecting control-flow analysis. Experiments reveal synergistic interplay between garbage collection and pushdown techniques, and the fusion demonstrates “better-than-both-worlds” precision.
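
The tension described in the abstract can be made concrete with a small sketch (illustrative Haskell, not the paper's formalism): a pushdown transition only examines the topmost frame of the stack, whereas the root set for abstract garbage collection must be gathered from every frame.

    import qualified Data.Map.Strict as M
    import qualified Data.Set as S

    type Addr = Int
    type Env  = M.Map String Addr

    -- A continuation frame on the (abstract) stack; each frame closes over an
    -- environment whose bindings must be treated as GC roots.
    data Frame = EvalArg Env | ApplyFun Env
      deriving Show

    envRoots :: Env -> S.Set Addr
    envRoots = S.fromList . M.elems

    frameRoots :: Frame -> S.Set Addr
    frameRoots (EvalArg e)  = envRoots e
    frameRoots (ApplyFun e) = envRoots e

    -- A pushdown transition only ever inspects the topmost frame ...
    topFrame :: [Frame] -> Maybe Frame
    topFrame []      = Nothing
    topFrame (f : _) = Just f

    -- ... but the root set for abstract GC folds over the whole stack,
    -- exactly the access a plain pushdown system does not provide.
    rootSet :: Env -> [Frame] -> S.Set Addr
    rootSet env stack = S.unions (envRoots env : map frameRoots stack)

    main :: IO ()
    main = do
      let stack = [ EvalArg  (M.fromList [("x", 1)])
                  , ApplyFun (M.fromList [("k", 2)]) ]
      print (topFrame stack)
      print (rootSet (M.fromList [("y", 3)]) stack)   -- {1,2,3}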


Author(s): KAZUTAKA MATSUDA, MENG WANG

Abstract: A bidirectional transformation is a pair of mappings between source and view data objects, one in each direction. When the view is modified, the source is updated accordingly with respect to some laws. One way to reduce the development and maintenance effort of bidirectional transformations is to have specialized languages in which the resulting programs are bidirectional by construction—giving rise to the paradigm of bidirectional programming. In this paper, we develop a framework for applicative-style and higher-order bidirectional programming, in which we can write bidirectional transformations as unidirectional programs in standard functional languages, opening up access to the bundle of language features previously only available to conventional unidirectional languages. Our framework essentially bridges two very different approaches to bidirectional programming, namely the lens framework and Voigtländer's semantic bidirectionalization, creating a new programming style that is able to obtain benefits from both.
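
For readers unfamiliar with the lens framework mentioned above, a minimal get/put lens and its well-behavedness laws might look like the following Haskell sketch; the names Lens, fstL, getPut and putGet are illustrative, not the paper's API.

    -- A lens pairs a forward view function with a backward update function.
    data Lens s v = Lens
      { get :: s -> v           -- forward: extract the view from the source
      , put :: s -> v -> s      -- backward: merge an updated view into the source
      }

    -- View the first component of a pair; updates write it back.
    fstL :: Lens (a, b) a
    fstL = Lens { get = fst
                , put = \(_, b) a' -> (a', b) }

    -- The round-tripping laws, stated as runnable checks.
    getPut :: Eq s => Lens s v -> s -> Bool
    getPut l s = put l s (get l s) == s

    putGet :: Eq v => Lens s v -> s -> v -> Bool
    putGet l s v = get l (put l s v) == v

    main :: IO ()
    main = print ( getPut fstL (1 :: Int, "x")
                 , putGet fstL (1 :: Int, "x") (2 :: Int) )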


1991 ◽ Vol 1 (4) ◽ pp. 459-494 ◽ Author(s): Hanne Riis Nielson, Flemming Nielson

Abstract: Traditional functional languages do not have an explicit distinction between binding times. It arises implicitly, however, as one typically instantiates a higher-order function with the arguments that are known, whereas the unknown arguments remain to be taken as parameters. The distinction between ‘known’ and ‘unknown’ is closely related to the distinction between binding times like ‘compile-time’ and ‘run-time’. This leads to the use of a combination of polymorphic type inference and binding time analysis for obtaining the required information about which arguments are known.

Following the current trend in the implementation of functional languages we then transform the run-time level of the program (not the compile-time level) into categorical combinators. At this stage we have a natural distinction between two kinds of program transformations: partial evaluation, which involves the compile-time level of our notation, and algebraic transformations (i.e., the application of algebraic laws), which involves the run-time level of our notation.

By reinterpreting the combinators in suitable ways we obtain specifications of abstract interpretations (or data flow analyses). In particular, the use of combinators makes it possible to use a general framework for specifying both forward and backward analyses. The results of these analyses will be used to enable program transformations that are not applicable under all circumstances.

Finally, the combinators can also be reinterpreted to specify code generation for various (abstract) machines. To improve the efficiency of the code generated, one may apply abstract interpretations to collect extra information for guiding the code generation. Peephole optimizations may be used to improve the code at the lowest level.
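
A minimal sketch of the binding-time distinction, using the classic power example in Haskell (the names power and specialize are illustrative, not the paper's notation): the exponent is ‘known’ at compile time while the base remains ‘unknown’, so specialization consumes the static argument and leaves a residual function of the dynamic one.

    -- General version: both arguments are dynamic.
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- Partial evaluation modelled by building a residual function:
    -- the static exponent is consumed now, the dynamic base stays a parameter.
    specialize :: Int -> (Int -> Int)
    specialize 0 = \_ -> 1
    specialize n = \x -> x * specialize (n - 1) x

    cube :: Int -> Int
    cube = specialize 3        -- residual program: \x -> x * (x * (x * 1))

    main :: IO ()
    main = print (cube 2, power 3 2)   -- both evaluate to 8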

