Higher-Order Types and Information Modeling

Author(s):  
Terry Halpin

While some information modeling approaches (e.g., the Relational Model and Object-Role Modeling) are typically formalized using first-order logic, other approaches to information modeling include support for higher-order types. There appear to be three main reasons for requiring higher-order types: (1) to permit instances of categorization types to be types themselves (e.g., the Unified Modeling Language introduced power types for this purpose); (2) to directly support quantification over sets and general concepts; (3) to specify business rules that cross levels or metalevels (or ignore level distinctions) in the same model. As the move to higher-order logic may add considerable complexity to the task of formalizing and implementing a modeling approach, it is worth investigating whether the same practical modeling objectives can be met while staying within a first-order framework. This chapter examines some key issues involved, suggests techniques for retaining a first-order formalization, and offers some recommendations for cases where a higher-order semantics is adopted.
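
As an illustrative sketch (not taken from the chapter, and using hypothetical predicate and individual names), the first of these needs can often be met within first-order logic by reifying the relevant types as ordinary individuals and using an explicit kind predicate:

\[
\mathit{CarModel}(\mathit{ModelT}) \;\wedge\; \mathit{isOfModel}(\mathit{myCar}, \mathit{ModelT}) \;\wedge\; \forall x\,\forall m\,\bigl(\mathit{isOfModel}(x, m) \rightarrow \mathit{Car}(x) \wedge \mathit{CarModel}(m)\bigr)
\]

Here the categorization type CarModel is just a first-order predicate, and its instances (such as ModelT) are individuals rather than types, so no quantification over predicates is needed; whether such a scheme meets all the practical objectives listed above is exactly the question the chapter investigates.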

10.29007/6shf
2018
Author(s):  
Jasmin Christian Blanchette

Nitpick is a counterexample generator for Isabelle/HOL that builds on Kodkod, a SAT-based first-order relational model finder. Nitpick supports unbounded quantification, (co)inductive predicates and datatypes, and (co)recursive functions. Fundamentally a finite model finder, it approximates infinite types by finite subsets. Our experimental results on Isabelle theories and the TPTP library indicate that Nitpick generates more counterexamples than other model finders for higher-order logic, without restrictions on the form of the formulas to falsify.
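
As a rough, hedged illustration of the finite-scope idea only (this is not Nitpick's algorithm, and the conjecture, carrier, and bound are invented for the example), a counterexample search over an infinite datatype can be approximated by enumerating values up to a small size bound:

```python
from itertools import product

def search_counterexample(prop, carrier, max_len):
    """Test prop on every list over a finite carrier up to length max_len."""
    for n in range(max_len + 1):
        for xs in product(carrier, repeat=n):
            if not prop(list(xs)):
                return list(xs)   # counterexample found within the finite scope
    return None                   # no counterexample in this scope; proves nothing beyond it

# Conjecture (false in general): reversing a list leaves it unchanged.
cex = search_counterexample(lambda xs: list(reversed(xs)) == xs,
                            carrier=["a1", "a2"], max_len=2)
print(cex)   # e.g. ['a1', 'a2']
```

A finite scope can only refute a conjecture, never establish it, which is why such tools report counterexamples rather than proofs.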


Author(s):  
Terry Halpin

Some popular information modeling approaches allow instances of relationships or associations to be treated as entities in their own right. Object-Role Modeling (ORM) calls this process “objectification” or “nesting.” In the Unified Modeling Language (UML), this modeling technique is called “reification,” and is mediated by means of association classes. While this modeling option is rarely supported by industrial versions of Entity-Relationship (ER) modeling, it is allowed in several academic versions of ER. Objectification is related to the linguistic activity of nominalization, of which two flavors may be distinguished: situational and propositional. In practice, objectification needs to be used judiciously, as its misuse can lead to implementation anomalies, and those modeling approaches that permit objectification often provide incomplete or flawed support for it. This chapter provides an in-depth analysis of objectification, shedding new light on its fundamental nature, and providing practical guidelines on using objectification to model information systems. Because of its richer semantics, the main graphic notation used is that of ORM 2 (the latest generation of ORM); however, the main ideas are relevant to UML and ER as well.
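
As a minimal sketch of the idea (the domain, class names, and attributes are hypothetical, not drawn from the chapter), objectifying the fact type “Person plays Sport” yields an object that can itself participate in further facts, such as carrying a skill rating:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    name: str

@dataclass(frozen=True)
class Sport:
    name: str

@dataclass(frozen=True)
class Playing:
    """Objectified instance of the relationship 'Person plays Sport'."""
    player: Person
    sport: Sport

@dataclass
class SkillRating:
    """A further fact attached to the objectified relationship instance."""
    playing: Playing
    rating: int

alice_tennis = Playing(Person("Alice"), Sport("tennis"))
assessment = SkillRating(alice_tennis, rating=5)
```

Using frozen dataclasses gives each (player, sport) pair a value-based identity, loosely mirroring the way an objectified relationship instance is identified by the fact it objectifies.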


Author(s):  
Terry Halpin

To ensure that a software system accurately reflects the business domain that it models, the system needs to enforce the business rules (constraints and derivation rules) that apply to that domain. From a conceptual modeling perspective, many application domains involve constraints over one or more conceptual schema paths that include one or more conceptual joins (where the same conceptual object plays roles in two relationships). Popular information modeling approaches typically provide only weak support for such conceptual join constraints. This chapter contrasts how these join constraints are catered for in Object-Role Modeling (ORM), the Unified Modeling Language (UML), the Object-oriented Systems Model (OSM), and some popular versions of Entity-Relationship (ER) modeling. Three main problems in providing rich support for join constraints are identified: disambiguation of schema paths, disambiguation of join types, and mapping of join constraints to implementation code. To address these problems, some notational, metamodel, and mapping extensions are proposed.
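
As a hedged illustration (the schema and rule are invented, not taken from the chapter), consider the rule “each person who heads a department must also work for that department”: it constrains a path through two relationships that join on the shared department object, and one naive way to map it to implementation code is a set-level check over the two relations:

```python
def head_violations(heads, works_for):
    """Return (person, department) pairs violating the join subset constraint:
    whoever heads a department must also work for that department.

    heads, works_for: sets of (person, department) pairs; the conceptual join
    is on the department object shared by the two relationships.
    """
    return {(p, d) for (p, d) in heads if (p, d) not in works_for}

heads = {("Alice", "Sales"), ("Bob", "IT")}
works_for = {("Alice", "Sales"), ("Bob", "Sales")}
print(head_violations(heads, works_for))   # {('Bob', 'IT')}
```

Even in this tiny example, the code has to commit to a particular reading of the path and of the join, which hints at the disambiguation issues the chapter identifies.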


10.29007/n6j7
2018
Author(s):  
Simon Cruanes

We argue that automatic theorem provers should become more versatile and should be able to tackle problems expressed in richer input formats. Salient research directions include (i) developing tight combinations of SMT solvers and first-order provers; (ii) adding better handling of theories in first-order provers; (iii) adding support for inductive proving; (iv) adding support for user-defined theories and functions; and (v) bringing to the provers some basic abilities to deal with logics beyond first-order, such as higher-order logic.
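
As a small, hedged example of the kind of combination mentioned in points (i) and (ii), and assuming the z3-solver Python bindings (the example is not tied to any particular system discussed in the paper), an SMT solver can already mix a quantified first-order axiom over an uninterpreted function with linear integer arithmetic:

```python
from z3 import Function, IntSort, Ints, ForAll, Solver

f = Function("f", IntSort(), IntSort())   # uninterpreted function symbol
x, a = Ints("x a")

s = Solver()
s.add(ForAll(x, f(x) > x))   # quantified first-order axiom about f
s.add(f(a) <= a)             # ground fact in linear integer arithmetic
print(s.check())             # expected: unsat (instantiating the axiom at a refutes it)
```

Making such theory-plus-quantifier reasoning robust, and extending it to induction, user-defined functions, and higher-order features, is precisely the versatility the paper argues for.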


1971
Vol 36 (3)
pp. 414-432
Author(s):  
Peter B. Andrews

In [8] J. A. Robinson introduced a complete refutation procedure called resolution for first order predicate calculus. Resolution is based on ideas in Herbrand's Theorem, and provides a very convenient framework in which to search for a proof of a well-formed formula (wff) believed to be a theorem. Moreover, it has proved possible to formulate many refinements of resolution which are still complete but are more efficient, at least in many contexts. However, when efficiency is a prime consideration, the restriction to first order logic is unfortunate, since many statements of mathematics (and other disciplines) can be expressed more simply and naturally in higher order logic than in first order logic. Also, the fact that in higher order logic (as in many-sorted first order logic) there is an explicit syntactic distinction between expressions which denote different types of intuitive objects is of great value where matching is involved, since one is automatically prevented from trying to make certain inappropriate matches. (One may contrast this with the situation in which mathematical statements are expressed in the symbolism of axiomatic set theory.)
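
For readers unfamiliar with resolution, a single step illustrates the idea (a standard textbook example, not drawn from [8]): from the clauses

\[
P(x) \vee Q(x) \qquad\text{and}\qquad \neg P(a),
\]

unifying $P(x)$ with $P(a)$ via the substitution $\{x \mapsto a\}$ yields the resolvent $Q(a)$; a refutation derives the empty clause from the clauses obtained from the negation of the conjecture. The value of explicit typing noted above can be seen in the same setting: a variable of individual type $\iota$ simply cannot be unified with a term of type $\iota \to o$, so that class of inappropriate match is ruled out syntactically before any search is attempted.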


2021
pp. 14-52
Author(s):  
Cian Dorr
John Hawthorne
Juhani Yli-Vakkuri

This chapter presents the system of classical higher-order modal logic which will be employed throughout this book. Nothing more than a passing familiarity with classical first-order logic and standard systems of modal logic is presupposed. We offer some general remarks about the kind of commitment involved in endorsing this logic, and motivate some of its more non-standard features. We also discuss how talk about possible worlds can be represented within the system.
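
One common way of representing possible-worlds talk within a higher-order modal language (stated here only as an indicative sketch, which may differ in detail from the book's own treatment) is to define a world as a maximally specific possible proposition:

\[
\mathrm{World}(w) \;:=\; \Diamond w \;\wedge\; \forall p\,\bigl(\Box(w \rightarrow p) \vee \Box(w \rightarrow \neg p)\bigr)
\]

where $w$ and $p$ range over propositions, so that quantification over worlds is handled as higher-order quantification over propositions.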


Author(s):  
Terry Halpin

When using natural language, people typically refer to individual things by using proper names or definite descriptions. Data modeling languages differ considerably in their support for such linguistic reference schemes. Understanding these differences is important for modeling reference schemes within such languages and for transforming models from one language to another. This article provides a comparative review of reference scheme modeling within the Unified Modeling Language (version 2.5), the Barker dialect of Entity Relationship modeling, Object-Role Modeling (version 2), relational database modeling, and the Web Ontology Language (version 2.0). The author identifies which kinds of reference schemes can be captured within these languages and which cannot. The analysis covers simple, compound, disjunctive, and context-dependent reference schemes.
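
As a loose illustration of the first two kinds of scheme (the entity types and identifying attributes are hypothetical, not taken from the article), a country might be referenced simply by its country code, whereas a room is referenced by the combination of its building and room number:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Country:
    code: str        # simple reference scheme: a single identifying value

@dataclass(frozen=True)
class Room:
    building: str    # compound reference scheme: identification needs both
    room_nr: int     # the building and the room number together

assert Room("Main", 101) == Room("Main", 101)   # same reference, same entity
```

Disjunctive and context-dependent reference schemes, where identification involves alternative schemes or depends on a context, are not captured by this simple sketch.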


1994
Vol 4 (4)
pp. 435-477
Author(s):  
Fritz Henglein
Harry G. Mairson

We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, …, Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential time, and for Fω the problem is non-elementary. We show as well a sequence of lower bounds on recognising the Fk-typable terms, where the bound for Fk+1 is exponentially larger than that for Fk.

The lower bounds are based on generic simulation of Turing Machines, where computation is simulated at the expression and type level simultaneously. Non-accepting computations are mapped to non-normalising reduction sequences, and hence non-typable terms. The accepting computations are mapped to typable terms, where higher-order types encode reduction sequences, and first-order types encode the entire computation as a circuit, based on a unification simulation of Boolean logic. A primary technical tool in this reduction is the composition of polymorphic functions having different domains and ranges.

These results are the first nontrivial lower bounds on type inference for the Girard/Reynolds system as well as its higher-order extensions. We hope that the analysis provides important combinatorial insights which will prove useful in the ultimate resolution of the complexity of the type inference problem.
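
A standard example (not taken from the paper) of what F2-typability adds over simple types: the untyped term λx. x x has no simple type, yet it is F2-typable by giving x the polymorphic type ∀α. α → α, as the explicitly typed version shows:

\[
\lambda x{:}(\forall\alpha.\,\alpha\to\alpha).\; x\,[\forall\alpha.\,\alpha\to\alpha]\;x \;:\; (\forall\alpha.\,\alpha\to\alpha) \to (\forall\alpha.\,\alpha\to\alpha)
\]

The decision problem studied in the paper is exactly this: given an untyped term, does some such typing exist? The cited lower bounds show that answering it requires exponential time for F2 and is non-elementary for Fω.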

