A comprehensible guide to a new unifier for CIC including universe polymorphism and overloading

Author(s):  
BETA ZILIANI ◽  
MATTHIEU SOZEAU

Abstract: Unification is a core component of every proof assistant or programming language featuring dependent types. In many cases, it must deal with higher-order problems up to conversion. Since unification in such conditions is undecidable, unification algorithms may include several heuristics to solve common problems. However, when the stack of heuristics grows large, the result and complexity of the algorithm can become unpredictable. Our contributions are twofold: (1) We present a full description of a new unification algorithm for the Calculus of Inductive Constructions (the base logic of Coq), building it up from a basic calculus to the full Calculus of Inductive Constructions as it is implemented in Coq, including universe polymorphism, canonical structures (the overloading mechanism baked into Coq's unification), and a small set of useful heuristics. (2) We implemented our algorithm and tested it on several libraries, providing evidence that the selected set of heuristics suffices for large developments.
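
For orientation, the following is a minimal sketch of the syntactic, first-order core that such an algorithm is built up from, before conversion, universes, and canonical structures enter the picture. The term type and names are illustrative assumptions, not the authors' Coq sources:

```haskell
import qualified Data.Map as M

data Term = Var String | Con String | App Term Term
  deriving (Eq, Show)

type Subst = M.Map String Term

-- Apply a substitution, chasing instantiated variables.
apply :: Subst -> Term -> Term
apply s t@(Var x) = maybe t (apply s) (M.lookup x s)
apply _ t@(Con _) = t
apply s (App f a) = App (apply s f) (apply s a)

-- Occurs check: refuse cyclic solutions such as ?x := f ?x.
occurs :: String -> Term -> Bool
occurs x (Var y)   = x == y
occurs _ (Con _)   = False
occurs x (App f a) = occurs x f || occurs x a

unify :: Subst -> Term -> Term -> Maybe Subst
unify s t0 u0 = go (apply s t0) (apply s u0)
  where
    go (Var x) v
      | v == Var x = Just s
      | occurs x v = Nothing
      | otherwise  = Just (M.insert x v s)
    go v (Var x)   = go (Var x) v
    go (Con c) (Con d)
      | c == d     = Just s
    go (App f a) (App g b) = do
      s' <- unify s f g
      unify s' a b
    go _ _         = Nothing
```

Everything the abstract describes, such as unfolding definitions up to conversion, universe constraints, and canonical-structure resolution, can be pictured as extra cases and side conditions layered on this skeleton.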

10.29007/zhpc ◽  
2018 ◽  
Author(s):  
Tomer Libal

We present an algorithm for the bounded unification of higher-order terms. The algorithm extends G. P. Huet's pre-unification algorithm with rules for the generation and folding of regular terms. The concise form of the algorithm allows the reuse of the pre-unification correctness proof. Furthermore, the regular terms can be restricted in order to decide the unifiability problem. Finally, the algorithm avoids re-computation of terms in a non-deterministic search, which leads to better performance in practice when compared to other bounded unification algorithms.
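
The role the bound plays can be illustrated schematically (this is not the paper's algorithm): higher-order unification search may not terminate, but cutting every non-deterministic branch at a fixed depth yields a decision procedure with a third possible answer, "out of bound". The problem and solution types below are deliberately abstract:

```haskell
data Outcome s = Solved s | NoSolution | OutOfBound
  deriving Show

search :: (p -> Either s [p])  -- a step: solved, or subproblems to try
       -> Int                  -- remaining bound
       -> p
       -> Outcome s
search step n p
  | n <= 0    = OutOfBound
  | otherwise = case step p of
      Left sol -> Solved sol
      Right ps -> combine (map (search step (n - 1)) ps)
  where
    combine []                = NoSolution
    combine (Solved s : _)    = Solved s
    -- A pruned branch means we cannot claim "no solution" overall.
    combine (OutOfBound : rs) = case combine rs of
                                  Solved s -> Solved s
                                  _        -> OutOfBound
    combine (NoSolution : rs) = combine rs
```

Running `search step k p` for increasing `k` recovers the usual semi-decision procedure; fixing `k` is what makes the bounded problem decidable.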


10.29007/zpg2 ◽  
2018 ◽  
Author(s):  
Alexander Leitsch ◽  
Tomer Libal

The efficiency of the first-order resolution calculus is impaired when lifting it to higher-order logic. The main reason is the semi-decidability and infinitary nature of higher-order unification algorithms, which require the integration of unification within the calculus and result in an inefficient search for refutations. We present a modification of the constrained resolution calculus (Huet, 1972) which uses an eager unification algorithm while retaining completeness. The algorithm is complete with regard to bounded unification only, which in many cases does not pose a problem in practice.
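
Schematically, the modification replaces deferred constraints with an immediate call to a bounded unifier. A hedged sketch with placeholder types (`c` a unification constraint, `s` a solution), contrasting the two regimes rather than reproducing the paper's calculus:

```haskell
data Resolvent c = Resolvent { literals :: [String], pending :: [c] }

-- Constrained (lazy) style: attach the constraint for later.
lazyStep :: c -> Resolvent c -> Resolvent c
lazyStep c r = r { pending = c : pending r }

-- Eager style: decide the constraint now with a bounded unifier and
-- keep the resolvent only if it is solvable within the bound.
eagerStep :: (c -> Maybe s) -> c -> Resolvent c -> Maybe (Resolvent c, s)
eagerStep solve c r = case solve c of
  Nothing  -> Nothing          -- prune: no solution within the bound
  Just sol -> Just (r, sol)    -- instantiate the resolvent with sol
```

Pruning at resolution time is what shrinks the refutation search space; completeness survives because the unifier is complete for the bounded problem.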


Author(s):  
JESPER COCKX ◽  
DOMINIQUE DEVRIESE

Abstract: Dependently typed languages such as Agda, Coq, and Idris use a syntactic first-order unification algorithm to check definitions by dependent pattern matching. However, standard unification algorithms implicitly rely on principles such as uniqueness of identity proofs and injectivity of type constructors. These principles are inadmissible in many type theories, particularly in the new and promising branch known as homotopy type theory. As a result, programs and proofs in these new theories cannot make use of dependent pattern matching or other techniques relying on unification, and are therefore much harder to write, modify, and understand. This paper proposes a proof-relevant framework for reasoning formally about unification in a dependently typed setting. In this framework, unification rules compute not just a unifier but also a corresponding soundness proof in the form of an equivalence between two sets of equations. By rephrasing the standard unification rules in a proof-relevant manner, they are guaranteed to preserve soundness of the theory. In addition, this enables us to safely add new rules that can exploit the dependencies between the types of equations, such as rules for eta-equality of record types and higher-dimensional unification rules for solving equations between equality proofs. Using our framework, we implemented a complete overhaul of the unification algorithm used by Agda. As a result, we were able to replace previous ad-hoc restrictions with formally verified unification rules, fixing a substantial number of bugs in the process. In the future, we may also want to integrate new principles with pattern matching, for example the higher inductive types introduced by homotopy type theory. Our framework also provides a solid basis for such extensions to be built on.
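
The central notion can be stated compactly in a dependently typed language. Below is a minimal Lean rendering of our own (not the Agda development): a proof-relevant unification rule does not merely simplify a set of equations, it exhibits an equivalence between the solutions of the old equations and the solutions of the new ones, so soundness holds by construction.

```lean
-- P and Q stand for the solution spaces of the equations before and
-- after applying the rule; the two round-trip laws are exactly what
-- plain, proof-irrelevant unification omits. Illustrative sketch only.
structure UnifyRule (P Q : Type) where
  simp  : P → Q          -- map a solution to the simplified equations
  inv   : Q → P          -- and back
  left  : ∀ p, inv (simp p) = p
  right : ∀ q, simp (inv q) = q
```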


1998 ◽  
Vol 8 (5) ◽  
pp. 527-536 ◽  
Author(s):  
PATRIK JANSSON ◽  
JOHAN JEURING

Unification, or two-way pattern matching, is the process of solving an equation involving two first-order terms with variables. Unification is used in type inference in many programming languages and in the execution of logic programs. This means that unification algorithms have to be written over and over again for different term types. Many other functions also make sense for a large class of datatypes; examples are pretty printers, equality checks, maps, etc. They can be defined by induction on the structure of user-defined datatypes. Implementations of these functions for different datatypes are closely related to the structure of the datatypes. We call such functions polytypic. This paper describes a unification algorithm parametrised on the type of the terms, and shows how to use polytypism to obtain a unification algorithm that works for all regular term types.
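
In modern Haskell the idea can be sketched by abstracting the term type over its shape functor: a single `unify` then serves every regular term type that supplies one structure-matching operation. The class and names below (e.g. `ZipMatch`) are our own illustration, not the paper's PolyP combinators:

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}
import           Control.Monad (foldM)
import           Data.Foldable (toList)
import qualified Data.Map as M

-- Terms over a shape functor f with variables of type v.
data Term f v = V v | T (f (Term f v))

type Subst f v = M.Map v (Term f v)

-- The single per-datatype ingredient: succeed iff the two top-level
-- constructors agree, pairing up corresponding children.
class (Functor f, Foldable f) => ZipMatch f where
  zipMatch :: f a -> f b -> Maybe (f (a, b))

-- Chase instantiated variables at the root.
walk :: Ord v => Subst f v -> Term f v -> Term f v
walk s t@(V x) = maybe t (walk s) (M.lookup x s)
walk _ t       = t

occurs :: (ZipMatch f, Ord v) => Subst f v -> v -> Term f v -> Bool
occurs s x t = case walk s t of
  V y  -> x == y
  T ft -> any (occurs s x) ft

unify :: (ZipMatch f, Ord v)
      => Term f v -> Term f v -> Subst f v -> Maybe (Subst f v)
unify t u s = case (walk s t, walk s u) of
  (V x, V y) | x == y        -> Just s
  (V x, u')  | occurs s x u' -> Nothing   -- occurs check
             | otherwise     -> Just (M.insert x u' s)
  (t', V y)                  -> unify (V y) t' s
  (T ft, T fu)               -> do
    fp <- zipMatch ft fu
    foldM (\s' (a, b) -> unify a b s') s (toList fp)

-- Example instantiation: classic first-order terms.
data Shape a = Con String [a]
  deriving (Functor, Foldable, Traversable)

instance ZipMatch Shape where
  zipMatch (Con c xs) (Con d ys)
    | c == d && length xs == length ys = Just (Con c (zip xs ys))
    | otherwise                        = Nothing
```

Any regular term type — abstract syntax, rose trees, and so on — needs only its `ZipMatch` instance; `unify` itself is written once.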


10.29007/vb87 ◽  
2018 ◽  
Author(s):  
Serdar Erbatur ◽  
Deepak Kapur ◽  
Andrew M Marshall ◽  
Paliath Narendran ◽  
Christophe Ringeissen

A critical question in unification theory is how to obtain a unification algorithm for the combination of non-disjoint equational theories when there exist unification algorithms for the constituent theories. The problem is known to be difficult and can easily be seen to be undecidable in the general case. Therefore, previous work has focused on identifying specific conditions and methods under which the problem is decidable. We continue this investigation in the present paper, building on previous combination results and our own work. We develop a novel approach to the non-disjoint combination problem. The approach is based on a new set of restrictions and a combination method such that, if the restrictions are satisfied, the method produces an algorithm for the unification problem in the union of non-disjoint equational theories.


Author(s):  
Judith M. Brock ◽  
Max T. Otten

A knowledge of the distribution of chemical elements in a specimen is often highly useful. In materials science specimens, features such as grain boundaries and precipitates generally force a certain order on the elemental distribution, so that a single profile away from the boundary or precipitate gives a full description of all relevant data. No such simplicity can be assumed in life science specimens, where elements can occur in various combinations and in different concentrations in tissue. In the latter case a two-dimensional elemental-distribution image is required to describe the material adequately. X-ray mapping provides such an image of the distribution of elements.

The big disadvantage of X-ray mapping hitherto has been one requirement: the transmission electron microscope must have a scanning function. In cases where the STEM functionality – to record scanning images using a variety of STEM detectors – is not used, but only X-ray mapping is intended, a significant investment must still be made in the scanning system: electronics that drive the beam, detectors for generating the scanning images, and monitors for displaying and recording the images.

