Coherence of subsumption, minimum typing and type-checking in F≤

1992 ◽  
Vol 2 (1) ◽  
pp. 55-91 ◽  
Author(s):  
Pierre-Louis Curien ◽  
Giorgio Ghelli

A subtyping relation ≤ between types is often accompanied by a typing rule called subsumption: if a term a has type T and T ≤ U, then a has type U. In the presence of subsumption, a well-typed term does not encode its own typing proof. Since a semantic interpretation is most naturally defined by induction on the structure of typing proofs, a problem of coherence arises: different typing proofs of the same term must have related meanings. We propose a proof-theoretical, rewriting approach to this problem. We focus on F≤, a second-order lambda calculus with bounded quantification, which is rich enough to make the problem interesting. We define a normalizing rewriting system on proofs, which transforms different proofs of the same typing judgement into a unique normal proof, with the further property that all the normal proofs assigning different types to a given term in a given environment differ only by a final application of the subsumption rule. This rewriting system is defined not on the proofs themselves but on the terms of an auxiliary type system, in which terms carry complete information about their typing proofs. The technique gives us three results:
- Any semantic interpretation is coherent if and only if our rewriting rules are satisfied as equations.
- We obtain a proof of the existence of a minimum type for each term in a given environment.
- From an analysis of the shape of normal-form proofs, we obtain a deterministic typechecking algorithm, which is sound and complete by construction.
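
For reference, the two rules at the heart of the coherence problem can be written out. The following is the standard presentation of subsumption and of bounded-quantifier subtyping in full F≤, not a transcription of the paper's own notation:

```latex
% Subsumption: a term inhabits every supertype of its type.
\frac{\Gamma \vdash a : T \qquad \Gamma \vdash T \leq U}
     {\Gamma \vdash a : U}
\;(\textsc{Subsume})

% Bounded quantification in full F-sub: contravariant in the bound,
% covariant in the body (checked under the tighter bound).
\frac{\Gamma \vdash T_1 \leq S_1 \qquad \Gamma,\, X \leq T_1 \vdash S_2 \leq T_2}
     {\Gamma \vdash \forall X{\leq}S_1.\, S_2 \;\leq\; \forall X{\leq}T_1.\, T_2}
\;(\textsc{All})
```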

1991 ◽  
Vol 1 (1) ◽  
pp. 3-48 ◽  
Author(s):  
Luca Cardelli ◽  
John C. Mitchell

We define a simple collection of operations for creating and manipulating record structures, where records are intended as finite associations of values to labels. A second-order type system over these operations supports both subtyping and polymorphism. We provide typechecking algorithms and limited semantic models. Our approach unifies and extends previous notions of records, bounded quantification, record extension, and parametrization by row variables. The general aim is to provide foundations for concepts found in object-oriented languages, within a framework based on typed lambda-calculus.
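
As a concrete, if loose, analogue of the record operations in question, here is a sketch in Python using dictionaries; the names, representation, and width-subtyping-style test are ours, not the paper's typed calculus:

```python
# Illustrative sketch only: dictionary analogues of record extraction,
# extension, and restriction, plus a width-subtyping-style test.

def extract(r, l):
    """r.l -- look up the value at label l."""
    return r[l]

def extend(r, l, v):
    """Add a binding for a label that r does not yet carry."""
    assert l not in r, "extension requires l to be absent"
    return {**r, l: v}

def restrict(r, l):
    """Remove the binding for label l."""
    return {k: v for k, v in r.items() if k != l}

def has_fields_of(r, s):
    """Width-subtyping analogue at the value level: r is usable wherever
    a record with (at least) s's labels is expected."""
    return set(s) <= set(r)

point3d = extend({"x": 1, "y": 2}, "z", 3)
point2d = restrict(point3d, "z")
assert extract(point3d, "z") == 3
assert has_fields_of(point3d, point2d)   # {x,y,z} usable as {x,y}
```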


1999 ◽  
Vol 9 (6) ◽  
pp. 719-739 ◽  
Author(s):  
VENANZIO CAPRETTA ◽  
SILVIO VALENTINI

In this paper we describe a method for proving the normalization property for a large variety of typed lambda calculi of first and second order, which is based on a proof of equivalence of two deduction systems. We first illustrate the method on the elementary example of simply typed lambda calculus, and then we show how to extend it to a more expressive dependent type system. Finally we use it to prove the normalization theorem for Girard's system F.
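
The normalization property itself is easy to state operationally: repeatedly contracting beta-redexes terminates on well-typed terms. Below is a minimal Python sketch of normal-order reduction on de Bruijn-indexed terms, as our illustration of what the theorem guarantees; the paper's method itself is proof-theoretic, via the equivalence of two deduction systems:

```python
# Normal-order beta reduction on de Bruijn lambda terms. For simply typed
# terms, the normalization theorem guarantees this loop halts.
# Representation: ("var", i) | ("lam", body) | ("app", f, a).

def shift(t, d, c=0):
    tag = t[0]
    if tag == "var":
        return ("var", t[1] + d) if t[1] >= c else t
    if tag == "lam":
        return ("lam", shift(t[1], d, c + 1))
    return ("app", shift(t[1], d, c), shift(t[2], d, c))

def subst(t, s, j=0):
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], shift(s, 1), j + 1))
    return ("app", subst(t[1], s, j), subst(t[2], s, j))

def step(t):
    """One normal-order step, or None if t is already in normal form."""
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":                      # beta redex: contract it
            return shift(subst(f[1], shift(a, 1)), -1)
        r = step(f)
        if r is not None:
            return ("app", r, a)
        r = step(a)
        if r is not None:
            return ("app", f, r)
    elif t[0] == "lam":
        r = step(t[1])
        if r is not None:
            return ("lam", r)
    return None

def normalize(t):
    while (r := step(t)) is not None:
        t = r
    return t

# (\x. x) applied to (\x. x) normalizes to \x. x
identity = ("lam", ("var", 0))
assert normalize(("app", identity, identity)) == identity
```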


2021 ◽  
Vol 43 (1) ◽  
pp. 1-73
Author(s):  
David J. Pearce

Rust is a relatively new programming language that has gained significant traction since its v1.0 release in 2015. Rust aims to be a systems language that competes with C/C++. A claimed advantage of Rust is a strong focus on memory safety without garbage collection. This is achieved primarily through two concepts, namely reference lifetimes and borrowing. Both are well-known ideas stemming from the literature on region-based memory management and linearity/uniqueness. Rust brings the two together to form a coherent programming model. Furthermore, Rust has a strong focus on stack-allocated data and, like C/C++ but unlike Java, permits references to local variables. Type checking in Rust can be viewed as a two-phase process: first, a traditional type checker operates in a flow-insensitive fashion; second, a borrow checker enforces an ownership invariant using a flow-sensitive analysis. In this article, we present a lightweight formalism that captures these two phases using a flow-sensitive type system that enforces “type and borrow safety.” In particular, programs that are type and borrow safe will not attempt to dereference dangling pointers. Our core calculus captures many aspects of Rust, including copy and move semantics, mutable borrowing, reborrowing, partial moves, and lifetimes. In particular, it remains sufficiently lightweight to be easily digested and understood and, we argue, still captures the salient aspects of reference lifetimes and borrowing. Furthermore, extensions to the core can easily add more complex features (e.g., control flow, tuples, method invocation). We provide a soundness proof to verify our key claims about the calculus. We also provide a reference implementation in Java with which we have model checked our calculus using over 500B input programs. We have also fuzz tested the Rust compiler using our calculus against 2B programs and, to date, found one confirmed compiler bug and several other possible issues.
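
To make the two-phase picture concrete, here is a deliberately tiny, flow-sensitive move/borrow tracker in Python for straight-line programs. It is our toy illustration of the kind of ownership invariant a borrow checker enforces, not the paper's calculus and not rustc's algorithm:

```python
# Walk a straight-line program once, tracking for each variable whether
# its value is live, moved out, or mutably borrowed.

LIVE, MOVED, BORROWED = "live", "moved", "borrowed"

def borrow_check(program):
    state = {}
    for op, *args in program:
        if op == "let":                  # let x = <fresh value>
            (x,) = args
            state[x] = LIVE
        elif op == "move":               # let y = x  (move semantics)
            x, y = args
            if state.get(x) != LIVE:
                return f"error: use of {x} while {state.get(x)}"
            state[x], state[y] = MOVED, LIVE
        elif op == "borrow_mut":         # let r = &mut x
            x, r = args
            if state.get(x) != LIVE:
                return f"error: cannot borrow {x} while {state.get(x)}"
            state[x], state[r] = BORROWED, LIVE
        elif op == "end_borrow":         # the borrow's lifetime ends
            (x,) = args
            state[x] = LIVE
        elif op == "use":                # read x
            (x,) = args
            if state.get(x) != LIVE:
                return f"error: use of {x} while {state.get(x)}"
    return "ok"

# Using a variable after moving out of it is rejected:
print(borrow_check([("let", "x"), ("move", "x", "y"), ("use", "x")]))
# -> error: use of x while moved
```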


Author(s):  
Sandip Moi ◽  
Suvankar Biswas ◽  
Smita Pal(Sarkar)

Abstract: In this article, some properties of the neutrosophic derivative and neutrosophic numbers are presented. These properties are used to develop neutrosophic differential calculus. By considering different types of first- and second-order derivatives, different systems of derivatives are developed. This is the first time a second-order neutrosophic boundary-value problem has been introduced with different types of first- and second-order derivatives. Some numerical examples are examined to illustrate the different systems of neutrosophic differential equations.
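
The crisp (non-neutrosophic) skeleton of a second-order boundary-value problem can be made concrete with a standard finite-difference solve; the paper's contribution, which we do not model here, is to attach neutrosophic-valued data and its various first- and second-order derivative types to problems of this shape. A sketch:

```python
# Crisp baseline sketch: solve the classical second-order BVP
#   u''(x) = f(x),  u(0) = a,  u(1) = b
# by central finite differences on a uniform grid.

import numpy as np

def solve_bvp(f, a, b, n=100):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Interior equations: (u[i-1] - 2u[i] + u[i+1]) / h^2 = f(x[i])
    A = np.zeros((n - 1, n - 1))
    np.fill_diagonal(A, -2.0)
    np.fill_diagonal(A[1:], 1.0)      # subdiagonal
    np.fill_diagonal(A[:, 1:], 1.0)   # superdiagonal
    rhs = f(x[1:-1]) * h**2
    rhs[0] -= a                        # fold boundary values into the RHS
    rhs[-1] -= b
    u = np.empty(n + 1)
    u[0], u[-1] = a, b
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# u'' = -pi^2 sin(pi x), u(0) = u(1) = 0 has exact solution u = sin(pi x).
x, u = solve_bvp(lambda x: -np.pi**2 * np.sin(np.pi * x), 0.0, 0.0)
assert abs(u[50] - np.sin(np.pi * x[50])) < 1e-3
```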


Axiomathes ◽  
2021 ◽  
Author(s):  
Andrew Powell

Abstract: This article provides a survey of key papers that characterise computable functions, but also offers some novel insights, as follows. It is argued that the power of algorithms is at least as strong as that of functions which can be proved totally computable in type-theoretic translations of subsystems of second-order Zermelo-Fraenkel set theory. Moreover, it is claimed that typed systems of the lambda calculus give rise naturally to a functional interpretation of rich systems of types and to a hierarchy of ordinal recursive functionals of arbitrary type that can be reduced by substitution to natural-number functions.
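
The closing claim, that higher-type functionals reduce by substitution to natural-number functions, can be illustrated with the simplest System T-style example: an iterator of type (N -> N) -> N -> N -> N whose specializations are ordinary arithmetic functions. This is our illustration, not the article's formal development:

```python
# A higher-type functional: apply f to base n times, i.e. f^n(base).
def iterate(f, base, n):
    acc = base
    for _ in range(n):
        acc = f(acc)
    return acc

# Substituting successor gives addition; substituting "add m" gives
# multiplication -- each a plain function on natural numbers.
def add(m, n):
    return iterate(lambda k: k + 1, m, n)

def mul(m, n):
    return iterate(lambda k: add(m, k), 0, n)

assert add(3, 4) == 7 and mul(3, 4) == 12
```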


2017 ◽  
Vol 82 (2) ◽  
pp. 709-723 ◽  
Author(s):  
MICHAŁ WALICKI

Abstract: Graph normal form (GNF) [1] was used in [2, 3] for analyzing paradoxes in propositional discourses, with the semantics, equivalent to the classical one, defined by kernels of digraphs. The paper presents infinitary, resolution-based reasoning with GNF theories, which is refutationally complete for the classical semantics. Used for direct (rather than refutational) deduction, it is not explosive and makes it possible to identify, in an inconsistent discourse, a maximal consistent subdiscourse together with its classical consequences. Semikernels, which generalize kernels, provide the semantic interpretation.
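
The kernel semantics can be stated graph-theoretically: a kernel of a digraph is a set K of vertices that is independent (no edge inside K) and absorbing (every vertex outside K has an edge into K). A brute-force sketch in Python, for illustration only; note that the odd cycle, the graph-theoretic liar pattern, has no kernel:

```python
# Enumerate all kernels of a digraph by brute force.
from itertools import combinations

def kernels(vertices, edges):
    succ = {v: {w for (u, w) in edges if u == v} for v in vertices}
    found = []
    for r in range(len(vertices) + 1):
        for K in map(set, combinations(vertices, r)):
            independent = all(succ[v].isdisjoint(K) for v in K)
            absorbing = all(succ[v] & K for v in set(vertices) - K)
            if independent and absorbing:
                found.append(K)
    return found

# A 2-cycle has two kernels; an odd cycle (the liar pattern) has none.
print(kernels([1, 2], [(1, 2), (2, 1)]))             # [{1}, {2}]
print(kernels([1, 2, 3], [(1, 2), (2, 3), (3, 1)]))  # []
```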


Author(s):  
Tarald O. Kvålseth

First- and second-order linear models of mean movement time for serial arm movements aimed at a target and subject to preview constraints and lateral constraints were formulated as extensions of Fitts's law of motor control. These models were validated against experimental data from five subjects and found to explain 80% to 85% of the variation in movement time in the case of the first-order models and 93% to 95% of such variation for the second-order models. Fitts's index of difficulty (ID) was generally found to contribute more to movement time than did either the preview ID or the lateral ID defined in this study. Of the different types of errors, target overshoots occurred far more frequently than undershoots.
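
As a sketch of what fitting such a first-order model involves: Fitts's index of difficulty in its original form is ID = log2(2A/W) for movement amplitude A and target width W, and a model of the kind described adds preview and lateral ID terms linearly. The data below are synthetic placeholders, not the study's measurements:

```python
# Ordinary least squares fit of
#   MT = b0 + b1*ID_fitts + b2*ID_preview + b3*ID_lateral
# on synthetic data with known coefficients.
import numpy as np

def fitts_id(A, W):
    return np.log2(2 * A / W)

rng = np.random.default_rng(0)
n = 40
id_fitts = fitts_id(rng.uniform(50, 400, n), rng.uniform(5, 40, n))
id_preview = rng.uniform(0, 4, n)
id_lateral = rng.uniform(0, 4, n)

# Synthetic movement times (ms) plus noise.
mt = 120 + 90 * id_fitts + 25 * id_preview + 15 * id_lateral \
     + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), id_fitts, id_preview, id_lateral])
coef, *_ = np.linalg.lstsq(X, mt, rcond=None)
print("b0..b3 =", np.round(coef, 1))   # recovers roughly (120, 90, 25, 15)
```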


Author(s):  
Simon A. Neild ◽  
Andrea Cammarano ◽  
David J. Wagg

In this paper we discuss a theoretical technique for decomposing multi-degree-of-freedom, weakly nonlinear systems into a simpler form, an approach which has parallels with the well-known method of linear modal analysis. The key outcome is that the system resonances, both linear and nonlinear, are revealed by the transformation process. For each resonance, parameters can be obtained which characterise the backbone curves and the higher-harmonic components of the response. The underlying mathematical technique is based on a near-identity normal form transformation. This is an established technique for analysing weakly nonlinear vibrating systems, but here we use a variation of the method for systems of equations written in second-order form. This is a much more natural approach for structural dynamics, where the governing equations of motion are written in this form as standard practice. The first step in the method is to carry out a linear modal transformation using linear modes, as would typically be done for a linear system. The near-identity transform is then applied as a second step, one which identifies the nonlinear resonances in the system being considered. For an example system with cubic nonlinearities, we show how the resulting transformed equations can be used to obtain a time-independent representation of the system response. We discuss how the analysis can be carried out with applied forcing, and how the approximations about response frequencies made during the near-identity transformation affect the accuracy of the technique. In fact, we show that the second-order normal form approach can actually improve the predictions of sub- and super-harmonic responses. Finally, we comment on how this theoretical technique could be used as part of a modal testing approach in future work.
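
For the simplest single-mode instance of a backbone curve of the kind such a transformation extracts, consider the Duffing oscillator x'' + ωn²x + γx³ = 0: a single-harmonic approximation gives ωr² = ωn² + (3/4)γA², relating response frequency to amplitude. A small sketch of this standard result, not the paper's multi-degree-of-freedom derivation:

```python
# Backbone curve of the undamped Duffing oscillator
#   x'' + wn^2 x + gamma x^3 = 0
# under a single-harmonic approximation: wr^2 = wn^2 + (3/4) gamma A^2.
import numpy as np

def backbone(A, wn=1.0, gamma=0.1):
    """Response frequency at amplitude A (hardening for gamma > 0)."""
    return np.sqrt(wn**2 + 0.75 * gamma * A**2)

for A in (0.0, 0.5, 1.0, 2.0):
    print(f"A = {A:.1f}  ->  wr = {backbone(A):.4f}")
```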


2018 ◽  
Vol 20 (2) ◽  
pp. 408-416

Mesophilic and thermophilic biomass samples were isolated and used to remove Dorasyn Red dye from aqueous solutions. The biosorption kinetics of dye uptake by four different types of biomass at three temperatures (20, 30, and 40 °C) were investigated using pseudo-first-order kinetics, pseudo-second-order kinetics, intraparticle diffusion, Elovich, and Bangham models. The pseudo-second-order kinetics model and the first stage of the intraparticle diffusion model were effective in describing the experimental kinetics data. The biosorption results showed that the mesophilic biomass samples could be useful for removing dye under acidic conditions.
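
For concreteness, the two headline models have standard closed forms: pseudo-first-order (Lagergren) qt = qe(1 − e^(−k1·t)) and pseudo-second-order qt = k2·qe²·t/(1 + k2·qe·t). A sketch of fitting both by nonlinear least squares on illustrative data, not the study's measurements:

```python
# Fit the two kinetics models to uptake data (q_t = amount sorbed at time t).
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    """Pseudo-first-order (Lagergren)."""
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    """Pseudo-second-order."""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)   # min
q = np.array([12.0, 19.5, 27.8, 34.6, 37.0, 38.9, 39.6])  # mg/g (placeholder)

for name, model in (("pseudo-first-order", pfo), ("pseudo-second-order", pso)):
    p, _ = curve_fit(model, t, q, p0=(40.0, 0.01))
    resid = q - model(t, *p)
    r2 = 1 - resid @ resid / (((q - q.mean()) ** 2).sum())
    print(f"{name}: qe = {p[0]:.1f}, k = {p[1]:.4f}, R^2 = {r2:.4f}")
```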


2014 ◽  
Vol 1 (1) ◽  
pp. 7 ◽  
Author(s):  
Hugo Fjelsted Alrøe ◽  
Egon Noe

Cross-disciplinary use of science is needed to solve complex, real-world problems, but carrying out scientific research with multiple very different disciplines is in itself a non-trivial problem. Perspectives matter. In this paper we carry out a philosophical analysis of the perspectival nature of science, focusing on the synchronic structure of scientific perspectives across disciplines and not on the diachronic, historical structure of shifting perspectives within single disciplines that has been widely discussed since Kuhn and Feyerabend. We show what kinds of cross-disciplinary disagreement to expect due to the perspectival structure of science, suggest how to handle different scientific perspectives in cross-disciplinary work through perspectives of a second order, and discuss some fundamental epistemic differences between different types of science.

