Handbook of Logic in Computer Science: Volume 5. Algebraic and Logical Structures
Published by Oxford University Press (ISBN 9780198537816, 9780191916663)

Author(s): B. Nordström, K. Petersson

The type theory described in this chapter has been developed by Martin-Löf with the original aim of being a clarification of constructive mathematics. Unlike most other formalizations of mathematics, type theory is not based on predicate logic. Instead, the logical constants are interpreted within type theory through the Curry-Howard correspondence between propositions and sets [Curry and Feys, 1958; Howard, 1980]: a proposition is interpreted as a set whose elements represent the proofs of the proposition. It is also possible to view a set as a problem description in a way similar to Kolmogorov's explanation of the intuitionistic propositional calculus [Kolmogorov, 1932]. In particular, a set can be seen as a specification of a programming problem; the elements of the set are then the programs that satisfy the specification. An advantage of using type theory for program construction is that it is possible to express both specifications and programs within the same formalism. Furthermore, the proof rules can be used to derive a correct program from a specification as well as to verify that a given program has a certain property. As a programming language, type theory is similar to typed functional languages such as ML [Gordon et al., 1979; Milner et al., 1990] and Haskell [Hudak et al., 1992], but a major difference is that the evaluation of a well-typed program always terminates. The notion of constructive proof is closely related to the notion of computer program. To prove a proposition (∀x ∈ A)(∃y ∈ B)P(x, y) constructively means to give a function f which, when applied to an element a in A, gives an element b in B such that P(a, b) holds. So if the proposition (∀x ∈ A)(∃y ∈ B)P(x, y) expresses a specification, then the function f obtained from the proof is a program satisfying the specification. A constructive proof could therefore itself be seen as a computer program, and the process of computing the value of a program corresponds to the process of normalizing a proof. It is by this computational content of a constructive proof that type theory can be used as a programming language; and since the program is obtained from a proof of its specification, type theory can be used as a programming logic.
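
To make the propositions-as-sets reading concrete, here is a minimal sketch in Haskell. It is not taken from the chapter: the chapter's formalism is Martin-Löf type theory, Haskell does not guarantee termination of well-typed programs, and the names Nat, SNat, ExistsSucc and proveSucc are mine. The specification (∀x ∈ N)(∃y ∈ N) y = succ x becomes a type, and a constructive proof of it is a total function producing, for each x, the witness y together with evidence that the predicate holds:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

import Data.Kind (Type)
import Data.Type.Equality ((:~:) (Refl))

-- Natural numbers, promoted to the type level by DataKinds.
data Nat = Z | S Nat

-- Singletons: run-time witnesses of type-level naturals.
data SNat :: Nat -> Type where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- The set interpreting the proposition (exists y)(y = S x):
-- an element is a pair of a witness y and a proof that y = S x.
data ExistsSucc (x :: Nat) where
  MkExists :: SNat y -> (y :~: 'S x) -> ExistsSucc x

-- A constructive proof of (forall x)(exists y)(y = S x) is a
-- function computing the witness; the proof *is* the program.
proveSucc :: SNat x -> ExistsSucc x
proveSucc x = MkExists (SS x) Refl
```

Running proveSucc on a concrete singleton, say SS SZ for 1, extracts the witness SS (SS SZ) for 2, illustrating how computing the value of the program corresponds to reading off the existential witness from the proof.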


Author(s): Andrew M. Pitts

This chapter provides an introduction to the interaction between category theory and mathematical logic. Category theory describes properties of mathematical structures via their transformations, or ‘morphisms’. On the other hand, mathematical logic provides languages for formalizing properties of structures directly in terms of their constituent parts—elements of sets, functions between sets, relations on sets, and so on. It might seem that the kind of properties that can be described purely in terms of morphisms and their composition would be quite limited. However, beginning with the attempt of Lawvere [1964; 1966; 1969; 1970] to reformulate the foundations of mathematics using the language of category theory, the development of categorical logic over the last three decades has shown that this is far from true. Indeed it turns out that many logical constructs can be characterized in terms of relatively few categorical ones, principal among which is the concept of adjoint functor. In this chapter we will see such categorical characterizations for, amongst other things, the notions of variable, substitution, propositional connectives and quantifiers, equality, and various type-theoretic constructs. We assume that the reader is familiar with some of the basic notions of category theory, such as functor, natural transformation, (co)limit, and adjunction: see Poigné’s [1992] chapter on Basic Category Theory in Vol. I of this handbook, or any of the several available introductions to category theory slanted towards computer science, such as [Barr and Wells, 1990] and [Pierce, 1991]. There are three recurrent themes in the material we present. Categorical semantics. Many systems of logic can only be modelled in a sufficiently complete way by going beyond the usual set-based structures of classical model theory. Categorical logic introduces the idea of a structure valued in a category C, with the classical model-theoretic notion of structure [Chang and Keisler, 1973] appearing as the special case when C is the category of sets and functions. For a particular logical concept, one seeks to identify what properties (or extra structure) are needed in an arbitrary category to interpret the concept in a way that respects given logical axioms and rules. A well-known example is the interpretation of simply typed lambda calculus in cartesian closed categories.
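
As a concrete instance of the adjoint-functor theme (a minimal sketch, not taken from the chapter): in the cartesian closed category of Haskell types and functions, the product functor (- , b) is left adjoint to the exponential functor b -> -, and the natural bijection of hom-sets is precisely curry/uncurry:

```haskell
-- Hom(a × b, c)  ≅  Hom(a, c^b): the product/exponential adjunction.
toExp :: ((a, b) -> c) -> (a -> (b -> c))
toExp f = \a b -> f (a, b)            -- this is curry

fromExp :: (a -> (b -> c)) -> ((a, b) -> c)
fromExp g = \(a, b) -> g a b          -- this is uncurry

-- The counit of the adjunction is evaluation, which is how a
-- cartesian closed category interprets function application.
eval :: (b -> c, b) -> c
eval (f, x) = f x
```

Lambda abstraction is interpreted by transposing along the bijection (toExp) and application by the counit (eval), which is the sense in which simply typed lambda calculus is modelled in any cartesian closed category.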


Author(s): J. V. Tucker, J. I. Zucker

The theory of the computable functions is a mathematical theory of total and partial functions of the form f : N^n → N, and sets of the form S ⊆ N^n, that can be defined by means of algorithms on the set N = {0, 1, 2, ...} of natural numbers. The theory establishes what can and cannot be computed in an explicit way using finitely many simple operations on numbers. The set of naturals and a selection of these simple operations together form an algebra. A mathematical objective of the theory is to develop, analyse and compare a variety of models of computation and formal systems for defining functions over a range of algebras of natural numbers. Computability theory on N is of importance in science because it establishes the scope and limits of digital computation. The numbers are realised as concrete symbolic objects and the operations on the numbers can be carried out explicitly, in finitely many concrete symbolic steps. More generally, the numbers can be used to represent or code any form of discrete data. However, the question arises: can we develop theories of functions that can be defined by means of algorithms on other sets of data? The obvious examples of numerical data are the integer, rational, real and complex numbers; and associated with these numbers there are data such as matrices, polynomials, power series and various types of functions. In addition, there are geometric objects that are represented using the real and complex numbers, including algebraic curves and manifolds. Examples of syntactic data are finite and infinite strings, terms, formulae, trees and graphs. For each set of data there are many choices for a collection of operations from which to build algorithms. How specific to the set of data and chosen operations are these computability theories? What properties do the computability theories over different sets of data have in common? The theory of the computable functions on N is stable, rich and useful; will the theory of computable functions on the sets of real and complex numbers, and on the other data sets, also be so? The theory of computable functions on arbitrary many-sorted algebras will answer these questions.
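
The idea of computing relative to the basic operations of an algebra can be sketched in Haskell (an illustration under my own assumptions, not one of the chapter's formal models of computation; the names NatAlgebra, add and standard are hypothetical). An algorithm written only in terms of the signature's operations runs unchanged over any algebra of that signature:

```haskell
-- An algebra of naturals presented by finitely many basic operations.
-- (Bool plays the role of a second sort, as in a many-sorted
-- signature with sorts nat and bool.)
data NatAlgebra a = NatAlgebra
  { zero   :: a
  , suc    :: a -> a
  , pre    :: a -> a        -- convention: pre zero = zero
  , isZero :: a -> Bool
  }

-- An algorithm built from the signature's operations alone;
-- the same definition computes in any algebra of this signature.
add :: NatAlgebra a -> a -> a -> a
add alg x y
  | isZero alg x = y
  | otherwise    = add alg (pre alg x) (suc alg y)

-- The standard model: the carrier N = {0, 1, 2, ...}.
standard :: NatAlgebra Integer
standard = NatAlgebra 0 (+ 1) (\n -> max 0 (n - 1)) (== 0)
```

For example, add standard 2 3 evaluates to 5; replacing standard by another algebra of the same signature reinterprets the same algorithm over a different carrier, which is the generalization the chapter pursues.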


Author(s): J. Loeckx, H.-D. Ehrich

It is widely accepted that the quality of software can be improved if its design is systematically based on the principles of modularization and formalization. Modularization consists in replacing a problem by several “smaller” ones. Formalization consists in using a formal language; it obliges the software designer to be precise and, in principle, allows a mechanical treatment. One may distinguish two modularization techniques for software design. The first consists in modularization on the basis of control structures. It is used in classical programming languages, where it leads to the notion of a procedure. Moreover, it is used in “imperative” specification languages such as VDM [Woodman and Heal, 1993; Andrews and Ince, 1991], RAISE [RAISE Development Group, 1995], Z [Spivey, 1989] and B [Abrial, 1996]. The second consists in modularization on the basis of data structures. While modern programming languages such as Ada [Barstow, 1983] and ML [Paulson, 1991] provide facilities for this modularization technique, its systematic use leads to the notion of abstract data types. This technique is particularly interesting in the design of software for non-numerical problems. Compared with the first technique it is more abstract, in the sense that algebras are more abstract than algorithms; in fact, control structures are related to algorithms whereas data structures are related to algebras. Formalization leads to the use of logic. The logics used are generally variants of equational logic or of first-order predicate logic. The present chapter is concerned with the specification of abstract data types. The theory of abstract data type specification is not trivial, essentially because the objects considered, viz. algebras, have a more complex structure than, say, integers. For clarity the present chapter treats algebras, logics, specification methods (“specification-in-the-small”), specification languages (“specification-in-the-large”) and parameterization separately. In order to be accessible to a large number of readers it makes use of set-theoretical notions only. This contrasts with a large number of publications on the subject, which make use of category theory [Ehrig and Mahr, 1985; Ehrich et al., 1989; Sannella and Tarlecki, 2001].
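
As a small illustration of an algebraic specification (a sketch under my own conventions, not the chapter's notation; the stack example and the names Stack, pop and top are chosen for illustration), here is a signature with equations together with one algebra satisfying them, namely the term algebra generated by the constructors. Partiality of top on the empty stack is handled here with Maybe rather than a distinguished error element:

```haskell
-- Specification (sketch):
--   sorts  stack, elem
--   ops    empty : stack
--          push  : stack × elem -> stack
--          pop   : stack -> stack
--          top   : stack -> elem
--   eqns   pop(push(s, x)) = s
--          top(push(s, x)) = x

-- The term algebra generated by the constructors empty and push.
data Stack = Empty | Push Stack Int
  deriving (Show, Eq)

-- The remaining operations, defined so that the equations hold.
pop :: Stack -> Stack
pop (Push s _) = s
pop Empty      = Empty        -- pop(empty) is left open; one choice

top :: Stack -> Maybe Int
top (Push _ x) = Just x
top Empty      = Nothing      -- top(empty) is likewise unspecified
```

The equations hold by construction: pop (Push s x) reduces to s and top (Push s x) to Just x, while the behaviour on Empty, which the specification leaves open, is one admissible choice among the algebras satisfying the specification.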


Author(s): Kevin J. Compton, C. Ward Henson

In this chapter we present a method for obtaining lower bounds on the computational complexity of logical theories, and give several illustrations of its use. This method is an extension of widely used procedures for proving the recursive undecidability of logical theories. (See Rabin [1965] and Eršov et al. [1965].) One important aspect of this method is that it is based on a family of inseparability results for certain logical problems, closely related to the well-known inseparability result of Trakhtenbrot (as refined by Vaught), that no recursive set separates the logically valid sentences from those which are false in some finite model, as long as the underlying language has at least one non-unary relation symbol. By using these inseparability results as a foundation, we are able to obtain hereditary lower bounds, i.e., bounds which apply uniformly to all subtheories of the theory. The second important aspect of this method is that we use interpretations to transfer lower bounds from one theory to another. By doing this we eliminate the need to code machine computations into the models of the theory being studied. (The coding of computations is done once and for all in proving the inseparability results.) By using interpretations, attention is centred on much simpler definability considerations, viz., what kinds of binary relations on large finite sets can be defined using short formulas in models of the theory. This is conceptually much simpler than other approaches that have been proposed for obtaining lower bounds, such as the method of bounded concatenations of Fleischmann et al. [1977]. We will deal primarily with theories in first-order logic and monadic second-order logic.

