Explanation in Constraint Satisfaction: A Survey

Author(s):  
Sharmi Dev Gupta ◽  
Begum Genc ◽  
Barry O'Sullivan

Much of the work on explanation in artificial intelligence has focused on machine learning methods and, in particular, on models produced by advanced techniques such as neural networks and deep learning. However, there is a long history of explanation generation in constraint satisfaction, one of AI's most ubiquitous subfields. In this paper we survey the seminal papers on explanation and constraints, as well as some more recent work. The survey sets out to unify many disparate lines of work in areas such as model-based diagnosis, constraint programming, Boolean satisfiability, truth maintenance systems, quantified logics, and related areas.

Author(s):  
U. CHOWDHURY ◽  
D. K. GUPTA

The backtracking algorithm is a prominent search technique in AI, particularly due to its use in Constraint Satisfaction Problems (CSPs), Truth Maintenance Systems (TMSs), and PROLOG. In the context of CSPs, Dechter and Gaschnig proposed two variants of the backtracking algorithm known as backjumping algorithms: one graph-based, the other failure-based. In a dead-end situation, these algorithms attempt to jump back to the source of the failure, which improves backtracking performance. However, they are not consistent in selecting the variable to backjump to. In this paper, modifications of both types of backjumping algorithm are proposed. The modified algorithms select the variable to backjump to in a consistent manner, which further increases search efficiency. Their merits are investigated theoretically. Experimental results on the zebra problem and on random problems show that the modified algorithms give better results on most occasions.
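The backjumping idea summarized above can be illustrated with a minimal sketch in the style of conflict-directed backjumping: on a dead end, the search records which earlier variables caused the conflicts and jumps directly to the deepest of them instead of backtracking chronologically. The function names and the `conflicts` interface below are illustrative, not taken from the paper.

```python
def backjump_search(variables, domains, conflicts):
    """Depth-first search with backjumping over a CSP.

    variables: ordered list of variables.
    domains: dict mapping each variable to its candidate values.
    conflicts(var, val, assignment): returns the set of earlier
        variables whose current values rule out (var, val), or an
        empty set if the assignment is consistent.
    """
    assignment = {}

    def solve(i):
        if i == len(variables):
            return dict(assignment), None  # complete solution found
        var = variables[i]
        conflict_set = set()
        for val in domains[var]:
            clash = conflicts(var, val, assignment)
            if clash:
                conflict_set |= clash  # remember who blocked this value
                continue
            assignment[var] = val
            result, jump_set = solve(i + 1)
            if result is not None:
                return result, None
            del assignment[var]
            if var not in jump_set:
                # var is not a source of the failure: jump past this level
                return None, jump_set
            jump_set.discard(var)
            conflict_set |= jump_set
        # dead end: report the earlier variables responsible, so the
        # caller can jump straight to the deepest of them
        return None, conflict_set

    return solve(0)[0]


# Usage: 3-coloring a triangle with not-equal constraints on each edge.
edges = {("A", "B"), ("B", "C"), ("A", "C")}

def neq_conflicts(var, val, assignment):
    return {u for u in assignment
            if ((u, var) in edges or (var, u) in edges)
            and assignment[u] == val}

sol = backjump_search(["A", "B", "C"], {v: [0, 1, 2] for v in "ABC"},
                      neq_conflicts)
```

On problems with many irrelevant intermediate variables, the jump set lets the search skip levels that chronological backtracking would revisit pointlessly.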


Author(s):  
MASAHITO KURIHARA ◽  
HISASHI KONDO ◽  
AZUMA OHUCHI

Assumption-based truth maintenance systems (ATMSs) have become powerful and widely used tools in artificial intelligence problem solvers. In this paper, we apply an ATMS to the verification of termination of computer programs written as sets of rewrite rules. Compared with traditional methods based on ordinary backtracking, our method can greatly improve overall efficiency by virtue of the ATMS's ability to avoid futile backtracking and the rediscovery of inferences and contradictions. The originality of our work lies in the practical use of the ATMS in a software engineering problem and in the communication protocol between the termination verifier and the ATMS.


1996 ◽  
Vol 5 ◽  
pp. 27-52 ◽  
Author(s):  
R. Ben-Eliyahu

Finding the stable models of a knowledge base is a significant computational problem in artificial intelligence. This task is at the computational heart of truth maintenance systems, autoepistemic logic, and default logic. Unfortunately, it is NP-hard. In this paper we present a hierarchy of classes of knowledge bases, Ω_1, Ω_2, …, with the following properties: first, Ω_1 is the class of all stratified knowledge bases; second, if a knowledge base Π is in Ω_k, then Π has at most k stable models, and all of them may be found in time O(l·n^k), where l is the length of the knowledge base and n is the number of atoms in Π; third, for an arbitrary knowledge base Π, we can find the minimum k such that Π belongs to Ω_k in time polynomial in the size of Π; and, last, where K is the class of all knowledge bases, it is the case that ⋃_{i=1}^{∞} Ω_i = K, that is, every knowledge base belongs to some class in the hierarchy.
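The stable-model semantics the abstract refers to can be made concrete with a brute-force sketch based on the Gelfond–Lifschitz reduct: a candidate set of atoms M is stable exactly when M equals the least model of the reduct obtained by deleting rules whose negative body intersects M and dropping the remaining negative literals. This enumeration is exponential, which is precisely why the tractable hierarchy above matters; the representation of rules as `(head, pos_body, neg_body)` triples is an illustrative choice, not the paper's notation.

```python
from itertools import combinations

def least_model(definite_rules):
    """Least model of a negation-free program, via fixpoint iteration.
    definite_rules: list of (head, pos_body) with pos_body a frozenset."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules, atoms):
    """Enumerate all stable models by testing every candidate atom set.
    rules: list of (head, pos_body, neg_body), bodies as frozensets."""
    found = []
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            m = set(cand)
            # Gelfond-Lifschitz reduct: drop rules whose negative body
            # intersects m, then erase the negative literals.
            reduct = [(h, pos) for h, pos, neg in rules if not (neg & m)]
            if least_model(reduct) == m:
                found.append(m)
    return found


# Example program:  p :- not q.   q :- not p.
# It has exactly two stable models, {p} and {q}.
rules = [("p", frozenset(), frozenset({"q"})),
         ("q", frozenset(), frozenset({"p"}))]
models = stable_models(rules, {"p", "q"})
```

For a stratified program (the class Ω_1 above) this enumeration is overkill: stratification guarantees a unique stable model computable stratum by stratum.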


2021 ◽  
Vol 3 (4) ◽  
pp. 900-921
Author(s):  
Mi-Young Kim ◽  
Shahin Atakishiyev ◽  
Housam Khalifa Bashier Babiker ◽  
Nawshad Farruque ◽  
Randy Goebel ◽  
...  

The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial, and social value. Second, there is an emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles to ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver tools and techniques that address the XAI demand. As some surveys of current XAI work suggest, a principled framework has yet to appear that respects the literature on explainability in the history of science and provides a basis for the development of transparent XAI. We identify four foundational components: the requirements for (1) explicit explanation knowledge representation, (2) delivery of alternative explanations, (3) adjusting explanations based on knowledge of the explainee, and (4) exploiting the advantages of interactive explanation. With those four components in mind, we provide a strategic inventory of XAI requirements, demonstrate their connection to the history of XAI ideas, and synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI.

