Handbook of Logic in Artificial Intelligence and Logic Programming: Volume 5: Logic Programming

Published By Oxford University Press

ISBN: 9780198537922, 9780191916670

Author(s): Joxan Jaffar, Michael J. Maher

Constraint Logic Programming (CLP) began as a natural merger of two declarative paradigms: constraint solving and logic programming. This combination helps make CLP programs both expressive and flexible, and in some cases more efficient than other kinds of programs. Though a relatively new field, CLP has progressed in several quite different directions. In particular, the early fundamental concepts have been adapted to better serve in different areas of application. In this survey of CLP, a primary goal is to give a systematic description of the major trends in terms of common fundamental concepts. Consider first an example program in order to identify some crucial CLP concepts. The program below defines the relation sumto(n, 1 + 2 + ... + n) for natural numbers n:

sumto(0, 0).
sumto(N, S) :- N >= 1, N <= S, sumto(N - 1, S - N).

The query S <= 3, sumto(N, S) gives rise to three answers (N = 0, S = 0), (N = 1, S = 1), and (N = 2, S = 3), and terminates.
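The behaviour the abstract describes can be sketched in ordinary Python (chosen here for runnability; this is not the authors' code or a real CLP solver): the constraint S <= 3 from the query bounds the otherwise infinite enumeration of the sumto relation, which is why the query terminates with exactly three answers.

```python
# Sketch of how the constraint S <= 3 prunes the search for the relation
# sumto(N, S) with S = 1 + 2 + ... + N. A real CLP system would propagate
# the constraint through the recursive clause; here we simulate the effect.

def sumto_answers(s_max):
    """Enumerate (N, S) pairs with S = 1 + 2 + ... + N and S <= s_max."""
    answers = []
    n, s = 0, 0
    while s <= s_max:          # the constraint store cuts off the search
        answers.append((n, s))
        n += 1
        s += n                 # partial sums grow 0, 1, 3, 6, ...
    return answers

print(sumto_answers(3))        # -> [(0, 0), (1, 1), (2, 3)]
```

The three pairs printed are exactly the answers listed in the abstract, and the loop's termination mirrors the termination of the constrained query.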


Author(s): Michael J. O’Donnell

Sections 2.3.4 and 2.3.5 of the chapter ‘Introduction: Logic and Logic Programming Languages’ are crucial prerequisites to this chapter. I summarize their relevance below, but do not repeat their content. Logic programming languages in general are those that compute by deriving semantic consequences of given formulae in order to answer questions. In equational logic programming languages, the formulae are all equations expressing postulated properties of certain functions, and the questions ask for equivalent normal forms for given terms. Section 2.3.4 of the ‘Introduction’ chapter gives definitions of the models of equational logic; the semantic consequence relation T |=≐ (t1 ≐ t2), meaning that t1 ≐ t2 is a semantic consequence of the set T of equations (see Definition 2.3.14); and the question answering relation (norm t1, …, ti : t) ?-≐ (t ≐ s), where t ≐ s asserts the equality of t to the normal form s, which contains no instances of t1, …, ti (see Definition 2.3.16).
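The idea of answering a question by computing a normal form can be made concrete with a small sketch (an assumed example in Python, not drawn from the chapter): treat the usual Peano equations for addition as left-to-right rewrite rules and rewrite a term until no rule applies.

```python
# Equational logic programming sketch: the equations
#   add(0, Y) = Y
#   add(s(X), Y) = s(add(X, Y))
# used as rewrite rules to compute the normal form of a term.
# Terms are nested tuples: ('0',), ('s', t), ('add', t1, t2).

def normalize(term):
    """Rewrite with the two rules until a normal form is reached."""
    if term[0] == 'add':
        x, y = normalize(term[1]), normalize(term[2])
        if x[0] == '0':            # add(0, Y) -> Y
            return y
        if x[0] == 's':            # add(s(X), Y) -> s(add(X, Y))
            return ('s', normalize(('add', x[1], y)))
    return term

one = ('s', ('0',))
two = ('s', ('s', ('0',)))
# normal form of add(2, 1): s(s(s(0))), a term built only from s and 0
print(normalize(('add', two, one)))
```

The answer to the question "what is the normal form of add(2, 1)?" is the term s(s(s(0))), which contains no instance of add, mirroring the (norm …) question-answering relation described above.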


Author(s): Michael J. O’Donnell

Logic, according to Webster’s dictionary [Webster, 1987], is ‘a science that deals with the principles and criteria of validity of inference and demonstration: the science of the formal principles of reasoning.’ Such ‘principles and criteria’ are always described in terms of a language in which inference, demonstration, and reasoning may be expressed. One of the most useful accomplishments of logic for mathematics is the design of a particular formal language, the First Order Predicate Calculus (FOPC). FOPC is so successful at expressing the assertions arising in mathematical discourse that mathematicians and computer scientists often identify logic with classical logic expressed in FOPC. In order to explore a range of possible uses of logic in the design of programming languages, we discard the conventional identification of logic with FOPC, and formalize a general schema for a variety of logical systems, based on the dictionary meaning of the word. Then, we show how logic programming languages may be designed systematically for any sufficiently effective logic, and explain how to view Prolog, Datalog, λProlog, Equational Logic Programming, and similar programming languages, as instances of the general schema of logic programming. Other generalizations of logic programming have been proposed independently by Meseguer [Meseguer, 1989], Miller, Nadathur, Pfenning and Scedrov [Miller et al., 1991], and Goguen and Burstall [Goguen and Burstall, 1992]. The purpose of this chapter is to introduce a set of basic concepts for understanding logic programming, not in terms of its historical development, but in a systematic way based on retrospective insights. In order to achieve a systematic treatment, we need to review a number of elementary definitions from logic and theoretical computer science and adapt them to the needs of logic programming. The result is a slightly modified logical notation, which should be recognizable to those who know the traditional notation. Conventional logical notation is also extended to new and analogous concepts, designed to make the similarities and differences between logical relations and computational relations as transparent as possible. Computational notation is revised radically to make it look similar to logical notation.


Author(s): Alberto Pettorossi, Maurizio Proietti

Program transformation is a methodology for deriving correct and efficient programs from specifications. In this chapter, we will look at the so-called ‘rules + strategies’ approach, and we will report on the main techniques which have been introduced in the literature for that approach in the case of logic programs. We will also present some examples of program transformation, and we hope that through those examples the reader may acquire some familiarity with the techniques we will describe. The program transformation approach to the development of programs was first advocated in the case of functional languages by Burstall and Darlington [1977]. In that seminal paper the authors give a comprehensive account of some basic transformation techniques which they had already presented in [Darlington, 1972; Burstall and Darlington, 1975]. Similar techniques were also developed in the case of logic languages by Clark and Sickel [1977], and Hogger [1981], who investigated the use of predicate logic as a language for both program specification and program derivation. In the transformation approach the task of writing a correct and efficient program is realized in two phases. The first phase consists in writing an initial, possibly inefficient, program whose correctness can easily be shown; the second phase, possibly divided into various subphases, consists in transforming the initial program with the objective of deriving a new program which is more efficient. The separation of the correctness concern from the efficiency concern is one of the major advantages of the transformation methodology. Indeed, using this methodology one may avoid some difficulties often encountered in other approaches. One such difficulty, which may occur when following the stepwise refinement approach, is the design of the invariant assertions, which may be quite intricate, especially when developing very efficient programs. The experience gained during the past two decades or so shows that the methodology of program transformation is very valuable and attractive, in particular for the task of programming ‘in the small’, that is, for writing single modules of large software systems.
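The two-phase methodology can be illustrated with the classic Fibonacci example in the style of Burstall and Darlington, transcribed here into Python for concreteness (a sketch of the idea, not the chapter's own derivation): phase one gives an obviously correct but exponential program; phase two derives a linear-time program by a tupling transformation, and the two are checked to agree.

```python
# Phase 1: the initial program, correct by inspection but exponential.
def fib_spec(n):
    return n if n < 2 else fib_spec(n - 1) + fib_spec(n - 2)

# Phase 2: derived by the tupling transformation. The auxiliary relation
# fib_pair(n) = (fib(n), fib(n + 1)) turns two overlapping recursive calls
# into one, giving a linear-time program.
def fib_pair(n):
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n - 1)
    return (b, a + b)

def fib(n):
    return fib_pair(n)[0]

# The transformed program must compute the same function as the original.
assert all(fib(n) == fib_spec(n) for n in range(15))
print(fib(10))   # -> 55
```

The point of the exercise is exactly the separation of concerns described above: correctness is argued once, on fib_spec, and the efficiency gain comes from equivalence-preserving transformation steps rather than a fresh (and riskier) implementation.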


Author(s): J. C. Shepherdson

The usual way of introducing negation into Horn clause logic programming is by ‘negation as failure’: if A is a ground atom, then

the goal ¬A succeeds if A fails;
the goal ¬A fails if A succeeds.

This is obviously not classical negation, at least not relative to the given program P; the fact that A fails from P does not mean that you can prove ¬A from P. For example, if P is

a ← ¬b

then ?- b fails, so, using negation as failure, ?- a succeeds, but a is not a logical consequence of P. You could deal with classical negation by using a form of resolution which gave a complete proof procedure for full first order logic. To a logician this would be the natural thing to do. Two reasons are commonly given for why this is not done. The first is that it is believed by most, but not all, practitioners that this would be infeasible because it would lead to a combinatorial explosion, whereas negation as failure does not, since it introduces no radically new methods of inference, just turns the old ones round. The second is that, in practical logic programming, negation as failure is often more useful than classical negation. This is the case when the program is a database, e.g. an airline timetable. You list all the flights there are. If there is no listed flight from Zurich to London at 12.31, then you conclude that there is no such flight. The implicit use of negation as failure here saves us the enormous labour of listing all the non-existent flights. This implicit usage is made precise in the closed world assumption, one of the two commonest declarative semantics given for negation as failure. It was introduced by Reiter [1978] and formalises the idea that the database contains all the positive information about objects in the domain: any positive ground literal which is not implied by the program is assumed to be false.
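The timetable reading of negation as failure can be sketched in a few lines of Python (the flight data below is invented for illustration): a query fails exactly when the fact is absent from the database, and under the closed world assumption that failure licenses the negative conclusion.

```python
# Negation as failure over a ground database: not(G) succeeds iff G fails.
# The flights listed here are hypothetical example data.

flights = {
    ("zurich", "london", "10.05"),
    ("zurich", "paris", "12.31"),
}

def holds(goal):
    """A ground query succeeds iff the fact is listed in the database."""
    return goal in flights

def naf(goal):
    """Negation as failure: succeeds iff the positive goal fails."""
    return not holds(goal)

# No listed 12.31 flight from Zurich to London, so under the closed world
# assumption we conclude there is no such flight.
print(naf(("zurich", "london", "12.31")))   # -> True
print(naf(("zurich", "paris", "12.31")))    # -> False
```

Note that naf is not classical negation: it concludes ¬A merely from the absence of A, exactly the behaviour that the closed world assumption makes precise.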


Author(s): A. C. Kakas, R. A. Kowalski

This paper extends and updates our earlier survey and analysis of work on the extension of logic programming to perform abductive reasoning [Kakas et al., 1993]. The purpose of the paper is to provide a critical overview of some of the main research results, in order to develop a common framework for evaluating these results, to identify the main unresolved problems, and to indicate directions for future work. The emphasis is not on technical details but on relationships and common features of different approaches. Some of the main issues we will consider are the contributions that abduction can make to the problems of reasoning with incomplete or negative information, the evolution of knowledge, and the semantics of logic programming and its extensions. We also discuss recent work on the argumentation-theoretic interpretation of abduction, which was introduced in the earlier version of this paper. The philosopher Peirce first introduced the notion of abduction. In [Peirce, 1931-58] he identified three distinct forms of reasoning: deduction, an analytic process based on the application of general rules to particular cases, with the inference of a result; induction, synthetic reasoning which infers the rule from the case and the result; and abduction, another form of synthetic inference, but of the case from a rule and a result. Peirce further characterised abduction as the “probational adoption of a hypothesis” as explanation for observed facts (results), according to known laws. “It is however a weak kind of inference, because we cannot say that we believe in the truth of the explanation, but only that it may be true” [Peirce, 1931-58]. Abduction is widely used in common-sense reasoning, for instance in diagnosis, to reason from effect to cause [Charniak and McDermott, 1985; Pople, 1973]. We consider here an example drawn from [Pearl, 1987]. Abduction consists in computing such explanations for observations. It is a form of non-monotonic reasoning, because explanations which are consistent with one state of a knowledge base may become inconsistent with new information. In the example above the explanation rained-last-night may turn out to be false, and the alternative explanation sprinkler-was-on may be the true cause of the given observation.
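The rain/sprinkler example can be sketched as a tiny abductive computation in Python (the observation atom 'grass-is-wet' and the rule mapping are assumptions for illustration; only the two explanation names appear in the abstract): an explanation is an abducible hypothesis that, together with the rules, accounts for the observation and is not yet known to be false.

```python
# Abduction sketch: compute candidate explanations for an observation, and
# show the non-monotonic effect of learning that one hypothesis is false.

rules = {
    # effect -> possible abducible causes (hypothetical example data)
    "grass-is-wet": ["rained-last-night", "sprinkler-was-on"],
}
abducibles = {"rained-last-night", "sprinkler-was-on"}

def explanations(observation, known_false=frozenset()):
    """Abducible causes of the observation consistent with what is known."""
    return [h for h in rules.get(observation, [])
            if h in abducibles and h not in known_false]

print(explanations("grass-is-wet"))
# -> ['rained-last-night', 'sprinkler-was-on']

# New information ("it did not rain") withdraws the first explanation:
print(explanations("grass-is-wet", {"rained-last-night"}))
# -> ['sprinkler-was-on']
```

The second call shows the non-monotonicity described above: an explanation consistent with one state of the knowledge base is abandoned when new information arrives.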


Author(s): Donald W. Loveland, Gopalan Nadathur

A proof procedure is an algorithm (technically, a semi-decision procedure) which identifies a formula as valid (or unsatisfiable) when appropriate, and may not terminate when the formula is invalid (satisfiable). Since a proof procedure concerns a logic, the procedure takes a special form, superimposing a search strategy on an inference calculus. We will consider a certain collection of proof procedures in the light of an inference calculus format that abstracts the concept of logic programming. This formulation allows us to look beyond SLD-resolution, the proof procedure that underlies Prolog, to generalizations and extensions that retain an essence of logic programming structure. The inference structure used in the formulation of the logic programming concept and its first realization, Prolog, evolved from the work done in the subdiscipline called automated theorem proving. While many proof procedures have been developed within this subdiscipline, some of which appear in Volume 1 of this handbook, we will present a narrow selection, namely the proof procedures which are clearly ancestors of the first proof procedure associated with logic programming, SLD-resolution. Extensive treatments of proof procedures for automated theorem proving appear in Bibel [Bibel, 1982], Chang and Lee [Chang and Lee, 1973] and Loveland [Loveland, 1978]. Although the consideration of proof procedures for automated theorem proving began about 1958, we begin our overview with the introduction of the resolution proof procedure by Robinson in 1965. We then review the linear resolution, model elimination and SL-resolution procedures. Our exclusion of other proof procedures from consideration here is due to our focus, not because other procedures are less important historically or for general use within automated or semi-automated theorem proving. After a review of the general resolution proof procedure, we consider the linear refinement for resolution and then further restrict the procedure format to linear input resolution. Here we are no longer capable of treating full first-order logic, but have restricted ourselves to a smaller domain, in essence the renameable Horn clause formulas. By leaving the resolution format, indeed leaving traditional formula representation, we see that there exists a linear input procedure for all of first-order logic.
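The shape of linear input resolution for Horn clauses, i.e. SLD-resolution, can be sketched in the propositional case with a few lines of Python (an illustrative reduction; real SLD-resolution also performs unification over first-order terms): a goal is a list of atoms, and each step resolves the selected (leftmost) atom against the head of a program clause, so every resolvent has the previous goal as one parent and an input clause as the other.

```python
# Propositional SLD-resolution sketch. The program (example data) is a set
# of definite clauses (head, [body atoms]); facts have empty bodies.

program = [
    ("path_ac", ["edge_ab", "edge_bc"]),
    ("edge_ab", []),
    ("edge_bc", []),
]

def sld(goal):
    """Depth-first SLD-refutation search; True iff the goal succeeds."""
    if not goal:
        return True                      # empty goal: refutation found
    selected, rest = goal[0], goal[1:]   # leftmost selection rule
    for head, body in program:           # resolve only with input clauses
        if head == selected and sld(body + rest):
            return True
    return False

print(sld(["path_ac"]))   # -> True
print(sld(["edge_cd"]))   # -> False
```

The restriction visible here, that one parent of every resolution step is an input clause, is exactly what confines the method to (renameable) Horn clause formulas in the first-order case.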


Author(s): P. M. Hill, J. Gallagher

A meta-program, regardless of the nature of the programming language, is a program whose data denotes another (object) program. The importance of meta-programming can be gauged from its large number of applications. These include compilers, interpreters, program analysers, and program transformers. Furthermore, if the object program is a logic or functional program formalizing some knowledge, then the meta-program may be regarded as a meta-reasoner for reasoning about this knowledge. In this chapter, the meta-program is assumed to be a logic program. The object program does not have to be a logic program, although much of the work in this chapter assumes this. We have identified three major topics for consideration. These are the theoretical foundations of meta-programming, the suitability of the alternative meta-programming techniques for different applications, and methods for improving the efficiency of meta-programs. As with logic programs generally, meta-programs have declarative and procedural semantics. The theoretical study of meta-programming shows that both aspects of the semantics depend crucially on the manner in which object programs are represented as data in a meta-program. The second theme of the chapter is the problem of designing and choosing appropriate ways of specifying important meta-programming problems, including dynamic meta-programming and problems involving self-application. The third theme concerns efficient implementation of meta-programs. Meta-programming systems require representations with facilities that minimize the overhead of interpreting the object program. In addition, efficiency can be gained by transforming the meta-program, specializing it for the particular object program it is reasoning about. This chapter, which concentrates on these aspects of meta-programming, is not intended to be a survey of the field. A more complete survey of meta-programming for logic programming can be found in [Barklund, 1995]. Many issues in meta-programming have their roots in problems in logic which have been studied for several decades. This chapter emphasizes meta-programming solutions. It is not intended to give a full treatment of the underlying logical problems, though we try to indicate some connections to wider topics in meta-logic. The meta-programs in this chapter are logic programs based on first order logic.
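The core idea, a program whose data denotes another program, can be sketched in Python in the spirit of the classic 'vanilla' Prolog meta-interpreter (this transcription and its object program are illustrative assumptions, not the chapter's code): the object program is held as a data structure, and the meta-level solve procedure both interprets it and adds behaviour of its own, here a count of resolution steps, the sort of instrumentation a program analyser needs.

```python
# Meta-interpreter sketch: the object program (propositional definite
# clauses) is data; solve/2 interprets it and counts resolution steps.

object_program = {
    # atom -> list of clause bodies (hypothetical example program)
    "ancestor": [["parent"]],
    "parent": [[]],
}

def solve(goals, steps=0):
    """Return (provable?, number of resolution steps used)."""
    if not goals:
        return True, steps
    first, rest = goals[0], goals[1:]
    for body in object_program.get(first, []):
        ok, n = solve(body + rest, steps + 1)
        if ok:
            return True, n
    return False, steps

print(solve(["ancestor"]))   # -> (True, 2)
```

The choice of representation is exactly the issue the chapter highlights: here atoms are represented by their names (a simple 'non-ground' style is not distinguishable in the propositional case), and both the declarative reading and the efficiency of solve depend on that choice.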


Author(s): Jorge Lobo, Jack Minker

During the past 20 years, logic programming has grown from a new discipline to a mature field. Logic programming is a direct outgrowth of work that started in automated theorem proving. The first programs based on logic were developed by Colmerauer and his students [Colmerauer et al., 1973] at the University of Marseilles in 1972, where the logic programming language PROLOG was developed. Kowalski [1974] published the first paper that formally described logic as a programming language in 1974. Alain Colmerauer and Robert Kowalski are considered the founders of the field of logic programming. van Emden and Kowalski [van Emden and Kowalski, 1976] laid down the theoretical foundation for logic programming. In the past decade the field has witnessed rapid progress with the publication of several theoretical results which have provided a strong foundation for logic programming and extended the scope of logic as a programming language. The objective of this article is to outline theoretical results that have been developed in the field of logic programming with particular emphasis on disjunctive logic programming. Disjunctive logic programming is an extension of logic programming and is useful in representing and reasoning with indefinite information. A disjunctive logic program consists of a finite set of implicitly universally quantified clauses of the form:

A1, …, Am ← B1, …, Bn    (m > 0, n ≥ 0)    (1)

where the Ai’s and the Bj’s are atoms. The atoms to the left of the implication sign form a disjunction called the head of the formula, and those on the right form a conjunction called the body of the formula. The formula is read as “A1 or A2 or … or Am if B1 and B2 and … and Bn.” There are several forms of the formula that one usually distinguishes. If the body of the formula is empty, and the head is not, the formula is referred to as a fact. If neither is empty the formula is referred to as a procedure.
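The indefiniteness that clause form (1) introduces can be seen in a brute-force Python sketch (the two-clause program below is invented for illustration): a disjunctive fact like "a or b" does not determine a single least model, so the program's meaning is usually taken to be its set of minimal models.

```python
# Minimal models of a tiny disjunctive program, by exhaustive enumeration.
# Clauses are (head disjunction, body conjunction) over three atoms.

atoms = ["a", "b", "c"]
clauses = [
    ({"a", "b"}, set()),     # a ; b.    a disjunctive fact (empty body)
    ({"c"}, {"a"}),          # c :- a.   a 'procedure' (head and body)
]

def is_model(interp):
    """A clause holds if its body is falsified or some head atom is true."""
    return all(head & interp or not body <= interp
               for head, body in clauses)

# Enumerate all 2^3 interpretations, keep the models, then the minimal ones.
all_interps = [frozenset(a for bit, a in zip(format(n, "03b"), atoms)
                         if bit == "1")
               for n in range(8)]
models = [m for m in all_interps if is_model(m)]
minimal = [m for m in models if not any(m2 < m for m2 in models)]

print(sorted(sorted(m) for m in minimal))   # -> [['a', 'c'], ['b']]
```

The two minimal models {a, c} and {b} reflect the two ways of settling the disjunctive fact, which is precisely the indefinite information a definite (single-head) program cannot express.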


Author(s): Gopalan Nadathur, Dale Miller

Modern programming languages such as Lisp, Scheme and ML permit procedures to be encapsulated within data in such a way that they can subsequently be retrieved and used to guide computations. The languages that provide this kind of an ability are usually based on the functional programming paradigm, and the procedures that can be encapsulated in them correspond to functions. The objects that are encapsulated are, therefore, of higher-order type and so also are the functions that manipulate them. For this reason, these languages are said to allow for higher-order programming. This form of programming is popular among the users of these languages and its theory is well developed. The success of this style of encapsulation in functional programming makes it natural to ask if similar ideas can be supported within the logic programming setting. Noting that procedures are implemented by predicates in logic programming, higher-order programming in this setting would correspond to mechanisms for encapsulating predicate expressions within terms and for later retrieving and invoking such stored predicates. At least some devices supporting such an ability have been seen to be useful in practice. Attempts have therefore been made to integrate such features into Prolog (see, for example, [Warren, 1982]), and many existing implementations of Prolog provide for some aspects of higher-order programming. These attempts, however, are unsatisfactory in two respects. First, they have relied on the use of ad hoc mechanisms that are at variance with the declarative foundations of logic programming. Second, they have largely imported the notion of higher-order programming as it is understood within functional programming and have not examined a notion that is intrinsic to logic programming. In this chapter, we develop the idea of higher-order logic programming by utilizing a higher-order logic as the basis for computing. There are, of course, many choices for the higher-order logic that might be used in such a study. If the desire is only to emulate the higher-order features found in functional programming languages, it is possible to adopt a “minimalist” approach, i.e., to consider extending the logic of first-order Horn clauses (the logical basis of Prolog) in as small a way as possible to realize the additional functionality.
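The encapsulation pattern under discussion, storing a predicate inside a term and later retrieving and invoking it, can be sketched in Python (a hypothetical illustration; in λProlog the same pattern is expressed declaratively with predicate variables rather than with closures).

```python
# Higher-order programming sketch: a 'predicate' (boolean-valued function)
# is encapsulated within a data structure, then retrieved to guide a
# computation, the pattern described in the abstract.

def even(n):
    """A predicate, playing the role of a stored predicate expression."""
    return n % 2 == 0

# The predicate is encapsulated within data ...
stored = {"test": even, "items": [1, 2, 3, 4]}

# ... and later retrieved and invoked to guide the computation.
def filter_with(package):
    pred = package["test"]
    return [x for x in package["items"] if pred(x)]

print(filter_with(stored))   # -> [2, 4]
```

In a functional language this is routine; the chapter's point is that a logic programming account of the same pattern should rest on a higher-order logic rather than on ad hoc extra-logical devices.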

