Normative design using inductive learning

2011 ◽ Vol 11 (4-5) ◽ pp. 783-799
Author(s): DOMENICO CORAPI, ALESSANDRA RUSSO, MARINA DE VOS, JULIAN PADGET, KEN SATOH

Abstract: In this paper we propose a use-case-driven iterative design methodology for normative frameworks, also called virtual institutions, which are used to govern open systems. Our computational model represents the normative framework as a logic program under answer set semantics (ASP). By means of an inductive logic programming approach, implemented using ASP, it is possible to synthesise new rules and revise existing ones. The learning mechanism is guided by the designer, who describes the desired properties of the framework through use cases comprising (i) event traces that capture possible scenarios, and (ii) a state that describes the desired outcome. The learning process then proposes additional rules, or changes to current rules, to satisfy the constraints expressed in the use cases. The contribution of this paper is thus a semi-automatic, iterative process for the elaboration and revision of a normative framework, driven by specifications of (un)desirable behaviour. The process integrates a novel and general methodology for theory revision based on ASP.
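
To make the setup concrete, the following is a minimal sketch in clingo-style ASP of a candidate norm together with a use case given as an event trace and a desired final state. The predicates (occurred/2, holdsat/2), the toy borrowing scenario, and the encoding itself are illustrative assumptions, not the encoding used in the paper.

    % Time instants for a short trace.
    instant(0..2).

    % Use case, part (i): an event trace capturing one scenario.
    occurred(borrow(alice, book1), 0).
    occurred(return(alice, book1), 1).

    % Candidate norm: borrowing an item initiates an obligation to return it.
    holdsat(obl(return(A, I)), T+1) :-
        occurred(borrow(A, I), T), instant(T), instant(T+1).

    % The obligation persists by inertia until the item is returned.
    holdsat(obl(return(A, I)), T+1) :-
        holdsat(obl(return(A, I)), T),
        not occurred(return(A, I), T),
        instant(T), instant(T+1).

    % Use case, part (ii): desired outcome -- no pending obligation at the end.
    :- holdsat(obl(return(_, _)), 2).

If the candidate rules cannot satisfy the constraint for some trace, the inductive learning step is asked to propose a new rule or a revision that restores consistency.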

2019 ◽ Vol 19 (5-6) ◽ pp. 688-704
Author(s): GIOVANNI AMENDOLA, FRANCESCO RICCA

Abstract: In recent years, abstract argumentation has met with great success in AI, since it serves to capture several non-monotonic logics. Relations between argumentation framework (AF) semantics and logic programming semantics are being investigated more and more. In particular, great attention has been given to the well-known stable extensions of an AF, which are closely related to the answer sets of a logic program. However, if a framework admits even a small incoherent part, no stable extension can be provided. To overcome this shortcoming, two semantics generalizing stable extensions have been studied, namely semi-stable and stage. In this paper, we show that another perspective on incoherent AFs is possible, via what we call paracoherent extensions, which have a counterpart in paracoherent answer set semantics. We compare this perspective with semi-stable and stage semantics, showing that computational costs remain unchanged and that an interesting symmetric behaviour is maintained.
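
The tight relation between stable extensions and answer sets can be made concrete with the standard guess-and-check ASP encoding of stable semantics, sketched below on a toy framework (the facts and this particular encoding are illustrative, not taken from the paper). The self-attacking argument c makes the whole framework lose its stable extensions, which is exactly the kind of incoherence that semi-stable, stage, and paracoherent semantics are meant to handle.

    % Example AF: a and b attack each other, c attacks itself.
    arg(a). arg(b). arg(c).
    att(a,b). att(b,a). att(c,c).

    % Guess: each argument is either in the extension or out of it.
    in(X)  :- arg(X), not out(X).
    out(X) :- arg(X), not in(X).

    % Conflict-freeness: no accepted argument attacks an accepted one.
    :- in(X), in(Y), att(X,Y).

    % Stability: every excluded argument is attacked by an accepted one.
    defeated(X) :- in(Y), att(Y,X).
    :- out(X), not defeated(X).

    % clingo reports UNSATISFIABLE here: no stable extension exists.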


1994 ◽ Vol 9 (1) ◽ pp. 73-77
Author(s): F. Bergadano, D. Gunetti

Inductive Logic Programming (ILP) is an emerging research area at the intersection of machine learning, logic programming and software engineering. The first workshop on this topic was held in 1991 in Portugal (Muggleton, 1991). Subsequently, there was a workshop tied to the Future Generation Computer System Conference in Japan in 1992, and a third one in Bled, Slovenia, in April 1993 (Muggleton, 1993). Ideas related to ILP are also appearing in major AI and machine learning conferences and journals. Although European-based and mainly sponsored by ESPRIT, ILP aims at becoming equally represented elsewhere; for example, among researchers in America who are investigating relational learning and first order theory revision (see, for example, the papers in Birnbaum and Collins, 1991) and within the computational learning theory community. This year's IJCAI workshop on ILP is a first step in this direction, and includes recent work with a broader range of perspectives and techniques.


2010 ◽ Vol 10 (4-6) ◽ pp. 565-580
Author(s): JAMES P. DELGRANDE

Abstract: An approach to the revision of logic programs under the answer set semantics is presented. For programs P and Q, the goal is to determine the answer sets that correspond to the revision of P by Q, denoted P * Q. A fundamental principle of classical (AGM) revision, and the one that guides the approach here, is the success postulate; in AGM revision, this stipulates that α ∈ K * α. By analogy with the success postulate, for programs P and Q, this means that the answer sets of Q will in some sense be contained in those of P * Q. The essential idea is that for P * Q, a three-valued answer set for Q, consisting of positive and negative literals, is first determined. The positive literals constitute a regular answer set, while the negated literals make up a minimal set of negation-as-failure (naf) literals required to produce the answer set from Q. These literals are propagated to the program P, along with those rules of Q that are not decided by these literals. The approach differs from work on update logic programs in two main respects. First, we ensure that the revising logic program has higher priority, and so we satisfy the success postulate. Second, for the preference implicit in a revision P * Q, the program Q as a whole takes precedence over P, since answer sets of Q are propagated to P, unlike in update logic programs. We show that a core group of the AGM postulates are satisfied, as are the postulates that have been proposed for update logic programs.
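
As a hedged illustration of the problem setting only (it shows why the naive union of two programs fails and what the success postulate then demands; it is not the paper's three-valued construction):

    % Program P (a hypothetical example):
    p :- not q.
    :- q.

    % Revising program Q:
    q.

    % Running clingo on P together with Q reports UNSATISFIABLE: Q forces q
    % while P's constraint forbids it. The analogue of the AGM success
    % postulate (alpha in K * alpha) requires the answer set {q} of Q to be
    % reflected in P * Q, so the revision must give Q priority and retract
    % or weaken the conflicting part of P rather than simply fail.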


2015 ◽ Vol 15 (4-5) ◽ pp. 481-494
Author(s): CRAIG BLACKMORE, OLIVER RAY, KERSTIN EDER

Abstract: This paper introduces a new logic-based method for optimising the selection of compiler flags on embedded architectures. In particular, we use Inductive Logic Programming (ILP) to learn logical rules that relate effective compiler flags to specific program features. Unlike earlier work, we aim to infer human-readable rules and we seek to develop a relational first-order approach which automatically discovers relevant features rather than relying on a vector of predetermined attributes. To this end we generated a data set by measuring execution times of 60 benchmarks on an embedded system development board and we developed an ILP prototype which outperforms the current state-of-the-art learning approach in 34 of the 60 benchmarks. Finally, we combined the strengths of the current state of the art and our ILP method in a hybrid approach which reduced execution times by an average of 8% and up to 50% in some cases.
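
A learned rule might, for instance, take a form like the following sketch; the predicates, flags, and flag-to-feature pairings are invented for illustration and are not results reported in the paper.

    % Hypothetical program features extracted for two benchmarks.
    has_loop(bench1, loop1).
    trip_count_known(bench1, loop1).
    calls_small_function(bench2, f1).

    % Sketch of human-readable rules relating compiler flags to features.
    enable(B, "-funroll-loops")     :- has_loop(B, L), trip_count_known(B, L).
    enable(B, "-finline-functions") :- calls_small_function(B, _).

    #show enable/2.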


2021
Author(s): Daniele Meli, Mohan Sridharan, Paolo Fiorini

Abstract: The quality of robot-assisted surgery can be improved and the use of hospital resources can be optimized by enhancing autonomy and reliability in the robot’s operation. Logic programming is a good choice for task planning in robot-assisted surgery because it supports reliable reasoning with domain knowledge and increases transparency in the decision making. However, prior knowledge of the task and the domain is typically incomplete, and it often needs to be refined from executions of the surgical task(s) under consideration to avoid sub-optimal performance. In this paper, we investigate the applicability of inductive logic programming for learning previously unknown axioms governing domain dynamics. We do so under answer set semantics for a benchmark surgical training task, the ring transfer. We extend our previous work on learning the immediate preconditions of actions and constraints, to also learn axioms encoding arbitrary temporal delays between atoms that are effects of actions under the event calculus formalism. We propose a systematic approach for learning the specifications of a generic robotic task under the answer set semantics, allowing easy knowledge refinement with iterative learning. In the context of 1000 simulated scenarios, we demonstrate the significant improvement in performance obtained with the learned axioms compared with the hand-written ones; specifically, the learned axioms address some critical issues related to the plan computation time, which is promising for reliable real-time performance during surgery.
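
The kind of delayed-effect axiom involved can be sketched with generic event-calculus-style predicates (happens/2, holds_at/2) and an invented ring-transfer fluent; this is an assumption-laden illustration, not the paper's actual axioms.

    time(0..5).

    % Hypothetical trace: the robot places ring r1 on peg p1 at time 1.
    happens(place(r1, p1), 1).

    % Sketch of a learned axiom with a temporal delay D between an action
    % and the fluent it eventually initiates (the delay value is invented).
    delay(2).
    holds_at(ring_on_peg(R, P), T2) :-
        happens(place(R, P), T1), delay(D), T2 = T1 + D, time(T1), time(T2).

    #show holds_at/2.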


1995 ◽ Vol 2 ◽ pp. 411-446
Author(s): S. K. Donoho, L. A. Rendell

Theory revision integrates inductive learning and background knowledge by combining training examples with a coarse domain theory to produce a more accurate theory. There are two challenges that theory revision and other theory-guided systems face. First, a representation language appropriate for the initial theory may be inappropriate for an improved theory. While the original representation may concisely express the initial theory, a more accurate theory forced to use that same representation may be bulky, cumbersome, and difficult to reach. Second, a theory structure suitable for a coarse domain theory may be insufficient for a fine-tuned theory. Systems that produce only small, local changes to a theory have limited value for accomplishing the complex structural alterations that may be required. Consequently, advanced theory-guided learning systems require flexible representation and flexible structure. An analysis of various theory revision systems and theory-guided learning systems reveals specific strengths and weaknesses in terms of these two desired properties. A new system, designed to capture the underlying qualities of each of these systems, uses theory-guided constructive induction. Experiments in three domains show improvement over previous theory-guided systems. This leads to a study of the behavior, limitations, and potential of theory-guided constructive induction.
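
The representational point can be illustrated with a small constructed predicate, written here as a logic program; the example is invented and not drawn from the systems analysed. Constructive induction introduces an intermediate concept that the original vocabulary lacks and restates the theory in terms of it.

    % Coarse initial theory, stated directly over raw attributes:
    %   approve(X) :- income(X, high), debt(X, low).

    % Revised theory: a constructed intermediate concept, good_credit/1,
    % lets the theory be restated compactly and then extended.
    good_credit(X) :- income(X, high), debt(X, low).
    good_credit(X) :- income(X, medium), debt(X, none), long_history(X).
    approve(X)     :- good_credit(X).

    % A case the coarse theory rejects but the revised theory accepts.
    income(c1, medium).
    debt(c1, none).
    long_history(c1).

    #show approve/1.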


2011 ◽ pp. 176-197
Author(s): Donato Malerba, Margherita Berardi, Michelangelo Ceci

This chapter introduces a data mining method for the discovery of association rules from images of scanned paper documents. It argues that a document image is a multi-modal unit of analysis whose semantics is deduced from a combination of its textual content, its layout structure, and its logical structure. Therefore, it proposes a method in which the spatial information derived from a complex document image analysis process (layout analysis), the information extracted from the logical structure of the document (document image classification and understanding), and the textual information extracted by means of OCR are considered simultaneously to generate interesting patterns. The proposed method is based on an inductive logic programming approach, which is argued to be the most appropriate for analyzing data available in more than one modality. The chapter thereby shows a possible evolution of the unimodal knowledge discovery scheme, in which the different types of data describing the units of analysis are dealt with by applying preprocessing techniques that transform them into a single double-entry table.
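
A relational pattern of the kind such a miner might report can be sketched as follows; the block identifiers, spatial predicates, and keyword are hypothetical and chosen only for illustration.

    % Hypothetical facts produced by layout analysis and OCR.
    block(b1). block(b2). block(b3).
    type(b1, logo). type(b3, body_text).
    below(b1, b2). below(b2, b3).
    contains_word(b3, "invoice").

    % Sketch of a mined relational pattern: a block directly below the logo
    % and above a block mentioning "invoice" plays the role of the title.
    candidate_title(B) :-
        type(L, logo), below(L, B), below(B, T), contains_word(T, "invoice").

    #show candidate_title/1.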

