Reasoning About Code-Generation in Two-Level Languages

2000 ◽  
Vol 7 (46) ◽  
Author(s):  
Zhe Yang

We show that two-level languages are not only a good tool for describing code-generation algorithms, but a good tool for reasoning about them as well. Indeed, some general properties of two-level languages capture common proof obligations of code-generation algorithms in the form of two-level programs.

- To prove that the generated code behaves as desired, we use an erasure property, which equationally relates the generated code to an erasure of the original two-level program in the object language, thereby reducing the two-level proof obligation to a simpler one-level obligation.

- To prove that the generated code satisfies certain syntactic constraints, e.g., that it is in some normal form, we use a type-preservation property for a refined type system that enforces these constraints.

In addition, to justify concrete implementations of code-generation algorithms in one-level languages, we use a native embedding of a two-level language into a one-level language. We present two-level languages with these properties both for a call-by-name object language and for a call-by-value object language with computational effects. Indeed, it is these properties that guide our language design in the call-by-value case. We consider two classes of non-trivial applications: one-pass transformations into continuation-passing style, and type-directed partial evaluation for call-by-name and for call-by-value.

Keywords: two-level languages, erasure, type preservation, native implementation, partial evaluation.
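To make the erasure idea concrete, here is a minimal OCaml sketch, not the paper's two-level calculus; the exp datatype and function names are illustrative assumptions. The static level is ordinary OCaml, the dynamic level is code represented by a datatype, and erasing the staging distinction recovers the one-level program that the erasure property lets one reason about instead.

    (* object-language code produced by the generator *)
    type exp =
      | Var of string
      | Int of int
      | Mul of exp * exp

    (* two-level power: the exponent n is static (generation-time),
       the base x is dynamic (a piece of object code) *)
    let rec power_gen (n : int) (x : exp) : exp =
      if n = 0 then Int 1 else Mul (x, power_gen (n - 1) x)

    (* erasure of the staging annotations: the ordinary one-level power
       function, to which the generated code is equationally related, so
       the two-level proof obligation becomes a one-level one *)
    let rec power (n : int) (x : int) : int =
      if n = 0 then 1 else x * power (n - 1) x

    let () =
      (* power_gen 3 (Var "x")
         = Mul (Var "x", Mul (Var "x", Mul (Var "x", Int 1))) *)
      ignore (power_gen 3 (Var "x"));
      assert (power 3 2 = 8)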

1997 ◽  
Vol 4 (43) ◽  
Author(s):  
Vincent Balat ◽  
Olivier Danvy

We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long beta-eta-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long beta-eta-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.
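The core of type-directed partial evaluation is a pair of type-indexed functions, reify and reflect. A minimal OCaml sketch for base and arrow types follows; residual code is an ordinary datatype here rather than the loadable byte code of the note, and all names are illustrative assumptions.

    (* residual object code in long beta-eta-normal form *)
    type exp =
      | Var of string
      | Lam of string * exp
      | App of exp * exp

    (* semantic values: residual code at base type, functions at arrow type *)
    type value =
      | Code of exp
      | Fun of (value -> value)

    type ty = Base | Arrow of ty * ty

    let counter = ref 0
    let gensym () = incr counter; Printf.sprintf "x%d" !counter

    (* reify: extract residual code from a value, guided by its type *)
    let rec reify (t : ty) (v : value) : exp =
      match t, v with
      | Base, Code e -> e
      | Arrow (a, b), Fun f ->
          let x = gensym () in
          Lam (x, reify b (f (reflect a (Var x))))
      | _ -> failwith "value does not match its type descriptor"

    (* reflect: coerce residual code into a value, eta-expanding at arrows *)
    and reflect (t : ty) (e : exp) : value =
      match t with
      | Base -> Code e
      | Arrow (a, b) -> Fun (fun v -> reflect b (App (e, reify a v)))

    let () =
      (* normalizing fun f x -> f x at type (Base -> Base) -> Base -> Base
         yields Lam ("x1", Lam ("x2", App (Var "x1", Var "x2"))) *)
      let apply =
        Fun (fun f -> Fun (fun x ->
          match f with Fun g -> g x | Code _ -> assert false))
      in
      ignore (reify (Arrow (Arrow (Base, Base), Arrow (Base, Base))) apply)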


1992 ◽  
Vol 2 (1) ◽  
pp. 55-91 ◽  
Author(s):  
Pierre-Louis Curien ◽  
Giorgio Ghelli

A subtyping relation ≤ between types is often accompanied by a typing rule, called subsumption: if a term a has type T and T ≤ U, then a has type U. In the presence of subsumption, a well-typed term does not codify its proof of well-typing. Since a semantic interpretation is most naturally defined by induction on the structure of typing proofs, a problem of coherence arises: different typing proofs of the same term must have related meanings. We propose a proof-theoretical, rewriting approach to this problem. We focus on F≤, a second-order lambda calculus with bounded quantification, which is rich enough to make the problem interesting. We define a normalizing rewriting system on proofs, which transforms different proofs of the same typing judgement into a unique normal proof, with the further property that all the normal proofs assigning different types to a given term in a given environment differ only by a final application of the subsumption rule. This rewriting system is not defined on the proofs themselves but on the terms of an auxiliary type system, in which the terms carry complete information about their typing proof. This technique gives us three different results:

- Any semantic interpretation is coherent if and only if our rewriting rules are satisfied as equations.

- We obtain a proof of the existence of a minimum type for each term in a given environment.

- From an analysis of the shape of normal form proofs, we obtain a deterministic type-checking algorithm, which is sound and complete by construction.
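A toy OCaml sketch of the minimum-type idea follows; it uses simple structural subtyping only, far from full F≤, and all rules and names are illustrative assumptions. A syntax-directed checker that never applies subsumption on its own synthesizes a unique, minimum type, and subtyping is checked only where a rule demands it.

    type ty = Top | Int | Arrow of ty * ty

    (* structural subtyping: contravariant in arguments, covariant in results *)
    let rec subtype s t =
      match s, t with
      | _, Top -> true
      | Int, Int -> true
      | Arrow (s1, s2), Arrow (t1, t2) -> subtype t1 s1 && subtype s2 t2
      | _ -> false

    type term =
      | Var of string
      | Num of int
      | Lam of string * ty * term
      | App of term * term

    (* minimum-type synthesis: no free-standing subsumption rule, so each
       term receives a unique type; subtyping is confined to the
       application rule, mirroring the shape of normal-form proofs *)
    let rec type_of env = function
      | Var x -> List.assoc x env
      | Num _ -> Int
      | Lam (x, t, body) -> Arrow (t, type_of ((x, t) :: env) body)
      | App (f, a) ->
          (match type_of env f with
           | Arrow (t1, t2) when subtype (type_of env a) t1 -> t2
           | _ -> failwith "ill-typed application")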


2002 ◽  
Vol 9 (33) ◽  
Author(s):  
Vincent Balat ◽  
Olivier Danvy

We use a code generator, type-directed partial evaluation, to verify conversions between isomorphic types, or more precisely to verify that a composite function is the identity function at some complicated type. A typed functional language such as ML provides natural support for expressing the functions, and type-directed partial evaluation provides a convenient setting for obtaining the normal form of their composition. However, off-the-shelf type-directed partial evaluation turns out to yield gigantic normal forms.

We identify that this gigantism is due to redundancies, and that these redundancies originate in the handling of sums, which uses delimited continuations. We successfully eliminate these redundancies by extending type-directed partial evaluation with memoization capabilities. The result only works for pure functional programs, but it provides an unexpected use of code generation and it yields orders-of-magnitude improvements in both time and space for type isomorphisms.
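The setting can be pictured with the simplest possible isomorphism, currying; the function names below are illustrative, and the paper deals with far larger types involving sums. Composing the two witnesses gives a term whose long beta-eta-normal form should be an eta-expanded identity, and type-directed partial evaluation is what computes that normal form.

    (* witnesses of the isomorphism between 'a * 'b -> 'c and 'a -> 'b -> 'c *)
    let curry   (f : 'a * 'b -> 'c) : 'a -> 'b -> 'c = fun a b -> f (a, b)
    let uncurry (g : 'a -> 'b -> 'c) : 'a * 'b -> 'c = fun (a, b) -> g a b

    (* the composite to normalize: at type ('a * 'b -> 'c) -> 'a * 'b -> 'c
       its long beta-eta-normal form is fun f (a, b) -> f (a, b), i.e. an
       eta-expanded identity, which certifies the isomorphism *)
    let roundtrip : ('a * 'b -> 'c) -> 'a * 'b -> 'c =
      fun f -> uncurry (curry f)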


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-25
Author(s):  
Timothy Bourke ◽  
Paul Jeanmaire ◽  
Basile Pesin ◽  
Marc Pouzet

Dataflow languages allow the specification of reactive systems by mutually recursive stream equations, functions, and Boolean activation conditions called clocks. Lustre and Scade are dataflow languages for programming embedded systems. Dataflow programs are compiled by a succession of passes. This article focuses on the normalization pass, which rewrites programs into the simpler form required for code generation. Vélus is a compiler from a normalized form of Lustre to CompCert’s Clight language. Its specification in the Coq interactive theorem prover includes an end-to-end correctness proof that the values prescribed by the dataflow semantics of source programs are produced by executions of generated assembly code. We describe how to extend Vélus with a normalization pass and to allow subsampled node inputs and outputs. We propose semantic definitions for the unrestricted language, divide normalization into three steps to facilitate proofs, adapt the clock type system to handle richer node definitions, and extend the end-to-end correctness theorem to incorporate the new features. The proofs require reasoning about the relation between static clock annotations and the presence and absence of values in the dynamic semantics. The generalization of node inputs requires adding a compiler pass to ensure the initialization of variables passed in function calls.
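The effect of such a normalization pass can be sketched in a few lines of OCaml; this is schematic only, nothing like the Coq development of Vélus, and the expression language and constructor names are assumptions. Nested control expressions on a right-hand side are hoisted into fresh equations so that every remaining equation has the flat shape expected by code generation.

    type exp =
      | Var of string
      | Const of int
      | Add of exp * exp
      | Ite of exp * exp * exp        (* nested if-then-else, to be hoisted *)

    type eq = string * exp            (* an equation  x = e *)

    let fresh =
      let n = ref 0 in
      fun () -> incr n; Printf.sprintf "t%d" !n

    (* normalize an expression: return an equivalent simple expression
       together with the auxiliary equations introduced along the way *)
    let rec norm_exp (e : exp) : exp * eq list =
      match e with
      | Var _ | Const _ -> (e, [])
      | Add (a, b) ->
          let a', eqa = norm_exp a in
          let b', eqb = norm_exp b in
          (Add (a', b'), eqa @ eqb)
      | Ite (c, t, f) ->
          let c', eqc = norm_exp c in
          let t', eqt = norm_exp t in
          let f', eqf = norm_exp f in
          let x = fresh () in
          (Var x, eqc @ eqt @ eqf @ [ (x, Ite (c', t', f')) ])

    (* normalize one equation into a list of simple equations *)
    let norm_eq ((x, e) : eq) : eq list =
      let e', aux = norm_exp e in
      aux @ [ (x, e') ]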


2007 ◽  
Vol 19 (1) ◽  
pp. 294
Author(s):  
R. C. S. Yadav ◽  
A. Sharma ◽  
G. N. Purohit

Morphological changes and in vitro nuclear maturation of buffalo cumulus–oocyte complexes (COCs) were evaluated subsequent to their cryopreservation by vitrification in solutions containing 4 M, 6 M, 8 M, and 10 M concentrations of glycerol (G), ethylene glycol (EG), or their combination. COCs collected from buffalo ovaries by aspiration (n = 1342) were equilibrated in 50% of the vitrification solution and then placed in the vitrification solution (Dulbecco's phosphate-buffered saline + 0.5 M sucrose + 0.5% BSA + cryoprotectant). COCs were transferred to empty semen straws, kept over liquid nitrogen (LN) vapor for 2–3 min, and then plunged into LN. After 7–10 days of storage, COCs were warmed and evaluated for morphological damage. Morphologically normal COCs were cultured in vitro (9 replicates, each with 5–10 oocytes in 50–100-µL culture drops) in TCM-199 medium supplemented with 5 µg mL-1 FSH, 5 µg mL-1 LH, and 1 ng mL-1 estradiol, with 25 mM HEPES, 0.25 mM pyruvate, and antibiotics. The COCs were incubated for 24 h at 38 ± 1°C and 5% CO2 in humidified air in a CO2 incubator and evaluated for nuclear maturation at the end of the 24 h of culture. Freshly collected COCs were also matured in vitro and kept as controls (n = 142). The proportions of COCs retrieved in morphologically normal form were compared by chi-square test; the arcsine-transformed data on the proportions of oocytes matured were compared by Duncan's new multiple range test. The proportions of oocytes recovered in a morphologically normal form were highest in the 6 M EG group (95.23%), followed by the 8 M EG (94.0%) and 6 M G (90.6%) groups. At the 10 M concentration, a significantly (P < 0.05) lower percentage of oocytes was morphologically normal. The morphological abnormalities recorded were change in shape, rupture of the zona pellucida, and leakage of oocyte contents. A significantly higher (65.62%; P < 0.05) proportion of fresh oocytes reached metaphase II compared with oocytes vitrified in all concentrations of G and EG. The proportion of oocytes reaching metaphase II increased with increasing concentrations of both G and EG, but at the 10 M concentration the proportion decreased. The proportions of COCs reaching metaphase II in 4 M, 6 M, 8 M, and 10 M glycerol were 6.9%, 21.2%, 25.7%, and 5.5%, respectively. The respective proportions of COCs reaching metaphase II in 4 M, 6 M, 8 M, and 10 M ethylene glycol were 21.9%, 34.3%, 40.8%, and 7.5%. No significant benefit for in vitro maturation was seen for oocytes vitrified in a combination of G and EG. It was concluded that although vitrification causes some damage to oocytes, it appears to be a good tool for oocyte cryopreservation, and an 8 M concentration of either G or EG appears to be optimal for vitrification of buffalo oocytes.


1994 ◽  
Vol 4 (2) ◽  
pp. 127-206 ◽  
Author(s):  
Kim B. Bruce

To illuminate the fundamental concepts involved in object-oriented programming languages, we describe the design of TOOPL, a paradigmatic, statically-typed, functional, object-oriented programming language which supports classes, objects, methods, hidden instance variables, subtypes and inheritance.

It has proven to be quite difficult to design such a language which has a secure type system. A particular problem with statically type-checking object-oriented languages is designing type-checking rules which ensure that methods provided in a superclass will continue to be type correct when inherited in a subclass. The type-checking rules for TOOPL have this feature, enabling library suppliers to provide only the interfaces of classes with actual executable code, while still allowing users to safely create subclasses. To achieve greater expressibility while retaining type-safety, we choose to separate the inheritance and subtyping hierarchies in the language.

The design of TOOPL has been guided by an analysis of the semantics of the language, which is given in terms of a model of the F-bounded second-order lambda calculus with fixed points at both the element and type level. This semantics supports the language design by providing a means to prove that the type-checking rules are sound, thus guaranteeing that the language is type-safe.

While the semantics of our language is rather complex, involving fixed points at both the element and type level, we believe that this reflects the inherent complexity of the basic features of object-oriented programming languages. Particularly complex features include the implicit recursion inherent in the use of the keyword self to refer to the current object, and of its corresponding type, MyType. The notions of subclass and inheritance introduce the greatest semantic complexities, whereas the notion of subtype is more straightforward to deal with. Our semantic investigations lead us to recommend caution in the use of inheritance, since small changes to method definitions in subclasses can result in major changes to the meanings of the other methods of the class.
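Why subclassing must be divorced from subtyping once MyType enters the picture can be seen in a few lines of OCaml; this uses OCaml objects rather than TOOPL, and the class names are illustrative. A method whose argument has the self type makes a subclass fail to be a subtype.

    class point (x0 : int) =
      object (self : 'self)
        val x = x0
        method x = x
        (* a binary method: the argument has the self type, i.e. MyType *)
        method equal (p : 'self) = p#x = x
      end

    class colored_point (x0 : int) (c : string) =
      object
        inherit point x0
        val color = c
        method color = color
      end

    let () =
      let p = new point 1 and q = new point 1 in
      assert (p#equal q)

    (* The coercion  (new colored_point 1 "red" :> point)  is rejected:
       the argument type of equal varies with the self type, so the
       subclass colored_point is not a subtype of point, which is exactly
       why the inheritance and subtyping hierarchies are kept separate. *)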


1991 ◽  
Vol 1 (4) ◽  
pp. 459-494 ◽  
Author(s):  
Hanne Riis Nielson ◽  
Flemming Nielson

Traditional functional languages do not have an explicit distinction between binding times. It arises implicitly, however, as one typically instantiates a higher-order function with the arguments that are known, whereas the unknown arguments remain to be taken as parameters. The distinction between ‘known’ and ‘unknown’ is closely related to the distinction between binding times like ‘compile-time’ and ‘run-time’. This leads to the use of a combination of polymorphic type inference and binding time analysis for obtaining the required information about which arguments are known.

Following the current trend in the implementation of functional languages, we then transform the run-time level of the program (not the compile-time level) into categorical combinators. At this stage we have a natural distinction between two kinds of program transformations: partial evaluation, which involves the compile-time level of our notation, and algebraic transformations (i.e., the application of algebraic laws), which involves the run-time level of our notation.

By reinterpreting the combinators in suitable ways we obtain specifications of abstract interpretations (or data flow analyses). In particular, the use of combinators makes it possible to use a general framework for specifying both forward and backward analyses. The results of these analyses will be used to enable program transformations that are not applicable under all circumstances.

Finally, the combinators can also be reinterpreted to specify code generation for various (abstract) machines. To improve the efficiency of the code generated, one may apply abstract interpretations to collect extra information for guiding the code generation. Peephole optimizations may be used to improve the code at the lowest level.
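Reinterpreting the combinators can be made concrete with an OCaml module signature instantiated twice; this is a schematic sketch, not the categorical combinators of the paper, and the signature and instances are assumptions. The same combinator vocabulary is given a standard interpretation as functions and a non-standard interpretation as code.

    module type COMB = sig
      type ('a, 'b) arrow
      val id      : ('a, 'a) arrow
      val compose : ('b, 'c) arrow -> ('a, 'b) arrow -> ('a, 'c) arrow
      val add     : (int * int, int) arrow
    end

    (* standard interpretation: combinators denote functions *)
    module Eval : COMB with type ('a, 'b) arrow = 'a -> 'b = struct
      type ('a, 'b) arrow = 'a -> 'b
      let id x = x
      let compose g f x = g (f x)
      let add (x, y) = x + y
    end

    (* non-standard interpretation: combinators denote code (here, strings),
       i.e. a specification of code generation for some abstract machine *)
    module Codegen : COMB with type ('a, 'b) arrow = string = struct
      type ('a, 'b) arrow = string
      let id = "ID"
      let compose g f = Printf.sprintf "(%s ; %s)" f g
      let add = "ADD"
    end

    (* the same combinator term under each interpretation *)
    let run  = Eval.compose Eval.add Eval.id          (* int * int -> int *)
    let code = Codegen.compose Codegen.add Codegen.id (* "(ID ; ADD)" *)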


1999 ◽  
Vol 6 (45) ◽  
Author(s):  
Torben Amtoft

We report on a case study in the application of partial evaluation, initiated by the desire to speed up a constraint-based algorithm for control-flow analysis. We designed and implemented a dedicated partial evaluator, able to specialize the analysis with respect to a given constraint graph and thus remove the interpretive overhead, and measured it with Feeley's Scheme benchmarks. Even though the gain turned out to be rather limited, our investigation yielded valuable feedback in that it provided a better understanding of the analysis, leading us to (re)invent an incremental version. We believe this phenomenon to be a quite frequent spinoff from using partial evaluation, since the removal of interpretive overhead makes the flow of control more explicit and hence pinpoints sources of inefficiency. Finally, we observed that partial evaluation in our case yields such regular, low-level specialized programs that it begs for run-time code generation.
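What removing the interpretive overhead means here can be sketched as follows; this is a toy OCaml illustration under assumed names, not the dedicated partial evaluator of the report. A generic propagator repeatedly interprets an edge list, while the residual program for a fixed graph has that traversal evaluated away, leaving the kind of regular, low-level code mentioned at the end.

    (* generic, interpretive: for each edge (src, dst), propagate the
       elements of sets.(src) into sets.(dst), until a fixed point *)
    let propagate (edges : (int * int) list) (sets : bool array array) : unit =
      let changed = ref true in
      let union dst src =
        Array.iteri
          (fun i v ->
             if v && not sets.(dst).(i) then begin
               sets.(dst).(i) <- true;
               changed := true
             end)
          sets.(src)
      in
      while !changed do
        changed := false;
        List.iter (fun (src, dst) -> union dst src) edges
      done

    (* residual program for the fixed graph [(0, 1); (1, 2)]: the edge list
       and its traversal have been specialized away *)
    let propagate_fixed (sets : bool array array) : unit =
      let changed = ref true in
      let union dst src =
        Array.iteri
          (fun i v ->
             if v && not sets.(dst).(i) then begin
               sets.(dst).(i) <- true;
               changed := true
             end)
          sets.(src)
      in
      while !changed do
        changed := false;
        union 1 0;
        union 2 1
      done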


2015 ◽  
Vol 27 (5) ◽  
pp. 603-625 ◽  
Author(s):  
MARIO COPPO ◽  
MARIANGIOLA DEZANI-CIANCAGLINI ◽  
INES MARGARIA ◽  
MADDALENA ZACCHI

This paper gives a complete characterisation of type isomorphism definable by terms of a λ-calculus with intersection and union types. Unfortunately, when union is considered, the Subject Reduction property does not hold in general. However, it is well known that in the λ-calculus, independently of the considered type system, the isomorphism between two types can be realised only by invertible terms. Notably, all invertible terms are linear terms. In this paper, the isomorphism of intersection and union types is investigated using a relevant type system for linear terms enjoying the Subject Reduction property. To characterise type isomorphism, a similarity between types and a type reduction are introduced. Types have a unique normal form with respect to the reduction rules, and two types are isomorphic if and only if their normal forms are similar.
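The overall strategy, reducing types to a normal form and comparing normal forms up to a similarity, can be sketched in OCaml; this is a rough illustration only, the paper's reduction and similarity relations are considerably more refined, and union types are omitted here.

    type ty =
      | Atom of string
      | Arrow of ty * ty
      | Inter of ty list             (* intersection of types *)

    (* a naive normal form: flatten nested intersections and sort components *)
    let rec norm (t : ty) : ty =
      match t with
      | Atom _ -> t
      | Arrow (a, b) -> Arrow (norm a, norm b)
      | Inter ts ->
          let ts' =
            List.concat_map
              (fun u -> match norm u with Inter vs -> vs | v -> [ v ])
              ts
          in
          Inter (List.sort compare ts')

    (* similarity here is just equality of normal forms, so isomorphism
       checking reduces to normalization followed by comparison *)
    let similar (t1 : ty) (t2 : ty) : bool = norm t1 = norm t2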

