Decision Lists
Recently Published Documents

TOTAL DOCUMENTS: 62 (last five years: 4)
H-INDEX: 11 (last five years: 0)

2021 ◽ Vol 72 ◽ pp. 1251-1279
Author(s): Jinqiang Yu, Alexey Ignatiev, Peter J. Stuckey, Pierre Le Bodic

Decision sets and decision lists are two of the most easily explainable machine learning models. Given the renewed emphasis on explainable machine learning decisions, both models are becoming increasingly attractive, as they combine small size with clear explainability. In contrast to earlier work, which concentrates on the number of rules, we define the size of these rule-based models as the total number of literals they contain. In this paper, we develop approaches that use modern SAT solving technology to compute minimum-size “perfect” decision sets and decision lists, i.e., models that are perfectly accurate on the training data and minimal in size. We also provide a new method for determining optimal sparse alternatives, which trade off size and accuracy. Our experiments demonstrate that the optimal decision sets computed by the SAT-based approach are comparable with the best heuristic methods, but much more succinct, and thus more explainable. We contrast the size and test accuracy of optimal decision lists with those of optimal decision sets, as well as with other state-of-the-art methods for determining optimal decision lists. Finally, we examine the size of the average explanations generated by decision sets and decision lists.
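
To convey the flavor of such SAT-based learning, the following heavily simplified sketch (not the paper's actual encoding) uses the PySAT toolkit to find a minimum-size single conjunctive rule that is perfectly accurate on a toy binary dataset; the dataset, the selector-variable layout, and the use of MaxSAT via RC2 are our own assumptions for illustration.

```python
# A simplified sketch: find a minimum-size conjunctive rule over binary
# features that covers all positive examples and rejects all negative
# ones, via MaxSAT with the PySAT toolkit. Dataset is invented.
from pysat.examples.rc2 import RC2
from pysat.formula import WCNF

X = [(1, 0, 1), (1, 1, 1), (0, 0, 1), (0, 1, 0)]  # toy binary features
y = [1, 1, 0, 0]                                  # toy binary labels
k = len(X[0])

# Selector pos[j] (resp. neg[j]) is true iff literal x_j (resp. not x_j)
# appears in the rule; PySAT variables are positive integers.
pos = [j + 1 for j in range(k)]
neg = [k + j + 1 for j in range(k)]

wcnf = WCNF()
for e, label in zip(X, y):
    if label == 1:
        # Hard: the rule must fire on every positive example, so no
        # selected literal may disagree with the example.
        for j in range(k):
            wcnf.append([-pos[j]] if e[j] == 0 else [-neg[j]])
    else:
        # Hard: the rule must reject every negative example, so some
        # selected literal must disagree with it.
        wcnf.append([pos[j] if e[j] == 0 else neg[j] for j in range(k)])

# Soft: penalize each selected literal; MaxSAT then minimizes total size.
for v in pos + neg:
    wcnf.append([-v], weight=1)

with RC2(wcnf) as rc2:
    model = set(rc2.compute())
    rule = [f"x{j}" for j in range(k) if pos[j] in model] + \
           [f"!x{j}" for j in range(k) if neg[j] in model]
    print("minimum-size perfect rule:", " AND ".join(rule) or "TRUE")
```

The paper's encodings are far richer, covering multiple rules, multiple classes, the ordered semantics of decision lists, and sparse variants; the sketch only shows how perfect training accuracy becomes hard clauses and size minimization becomes soft clauses.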


2021 ◽ Author(s): Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche, Jean-Marie Lagniez, ...

In this paper, we investigate the computational intelligibility of Boolean classifiers, characterized by their ability to answer XAI queries in polynomial time. The classifiers under consideration are decision trees, DNF formulae, decision lists, decision rules, tree ensembles, and Boolean neural nets. Using nine XAI queries, including both explanation queries and verification queries, we show the existence of a large intelligibility gap between these families of classifiers. On the one hand, all nine XAI queries are tractable for decision trees. On the other hand, none of them is tractable for DNF formulae, decision lists, random forests, boosted decision trees, Boolean multilayer perceptrons, or binarized neural networks.
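
For intuition on why explanation queries are easy for decision trees, consider the toy sketch below (the tree structure and feature names are hypothetical): the tests along the path an instance follows form, by construction, a sufficient reason for the prediction, and can be read off in time linear in the depth of the tree.

```python
# Toy illustration of a tractable explanation query on a decision tree:
# the tests along the path followed by an instance form, by construction,
# a sufficient reason for the prediction, extracted in time linear in the
# tree depth. Tree and feature names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None   # internal node: Boolean feature tested
    yes: Optional["Node"] = None    # subtree if the feature is true
    no: Optional["Node"] = None     # subtree if the feature is false
    label: Optional[int] = None     # leaf: predicted class

def direct_reason(root: Node, instance: dict) -> tuple[list, int]:
    """Return the path conditions and predicted class for an instance."""
    conditions, node = [], root
    while node.label is None:
        value = instance[node.feature]
        conditions.append((node.feature, value))
        node = node.yes if value else node.no
    return conditions, node.label

# Hypothetical tree: predicts 1 iff both 'fever' and 'cough' hold.
tree = Node(feature="fever",
            yes=Node(feature="cough", yes=Node(label=1), no=Node(label=0)),
            no=Node(label=0))

reason, cls = direct_reason(tree, {"fever": True, "cough": True})
print(f"class {cls} because {reason}")
# -> class 1 because [('fever', True), ('cough', True)]
```

Fixing the listed conditions forces every consistent instance down the same path to the same leaf, which is why such queries are answerable in polynomial time for trees but, per the results above, not for the other families in general.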


Author(s): Alexey Ignatiev, Joao Marques-Silva, Nina Narodytska, Peter J. Stuckey

Artificial Intelligence (AI) is widely used in decision making procedures in myriad real-world applications across important practical areas such as finance, healthcare, education, and safety critical systems. Due to its ubiquitous use in safety and privacy critical domains, it is often vital to understand the reasoning behind AI decisions, which motivates the need for explainable AI (XAI). One of the major approaches to XAI is to compute so-called interpretable machine learning (ML) models, such as decision trees (DT), decision lists (DL), and decision sets (DS). These models build on if-then rules and are thus deemed easily understandable by humans. A number of approaches have been proposed in recent years for devising such interpretable ML models, the most prominent of which encode the problem in a logic formalism that is then tackled by a reasoning or discrete optimization procedure. This paper overviews recent advances in these reasoning- and constraint-based approaches to learning interpretable ML models and discusses their advantages and limitations.
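
As a minimal sketch of the if-then semantics these models share (the rules below are invented purely for illustration), a decision list is an ordered sequence of rules in which the first matching rule fires and a default class applies when none does:

```python
# Toy sketch of decision-list semantics: ordered if-then rules, first
# match fires, default class otherwise. Rules are invented examples.
def predict(decision_list, instance, default=0):
    for condition, label in decision_list:
        if condition(instance):
            return label
    return default

rules = [
    (lambda x: x["age"] < 18, 0),
    (lambda x: x["income"] > 50_000 and x["employed"], 1),
    (lambda x: x["employed"], 0),
]

print(predict(rules, {"age": 30, "income": 60_000, "employed": True}))   # 1
print(predict(rules, {"age": 30, "income": 20_000, "employed": False}))  # 0 (default)
```

A decision set drops the ordering: its rules are unordered and may overlap, which makes each rule individually meaningful as an explanation but requires a policy for resolving conflicting rules.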


Biometrics ◽ 2015 ◽ Vol 71 (4) ◽ pp. 895-904
Author(s): Yichi Zhang, Eric B. Laber, Anastasios Tsiatis, Marie Davidian

2015 ◽ Vol 590 ◽ pp. 38-54
Author(s): V. Arvind, Johannes Köbler, Sebastian Kuhnert, Gaurav Rattan, Yadu Vasudev
