Probabilistic Sufficient Explanations

Author(s):  
Eric Wang ◽  
Pasha Khosravi ◽  
Guy Van den Broeck

Understanding the behavior of learned classifiers is an important task, and various black-box explanations, logical reasoning approaches, and model-specific methods have been proposed. In this paper, we introduce probabilistic sufficient explanations, which formulate explaining an instance of classification as choosing the "simplest" subset of features such that observing only those features is "sufficient" to explain the classification; that is, sufficient to give strong probabilistic guarantees that the model will behave similarly when all features are observed under the data distribution. In addition, we leverage tractable probabilistic reasoning tools such as probabilistic circuits and expected predictions to design a scalable algorithm for finding the desired explanations while keeping the guarantees intact. Our experiments demonstrate the effectiveness of our algorithm in finding sufficient explanations, and showcase its advantages compared to Anchors and logical explanations.
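The notion above can be sketched concretely. The snippet below is illustrative only: it uses a toy majority-vote "classifier" and an empirical data sample in place of the paper's probabilistic circuits, and greedily grows a feature subset until resampling the unobserved features leaves the prediction unchanged with probability at least delta.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(2000, 3))      # stand-in empirical data distribution

def pred(x):
    # toy "classifier": majority vote over three binary features
    return int(x.sum() >= 2)

def sufficiency(x, subset, n=500):
    # estimate P(model output unchanged | features in `subset` fixed to x's values)
    # by resampling the remaining features from rows of the data sample
    samples = data[rng.integers(0, len(data), n)].copy()
    samples[:, subset] = x[subset]
    return np.mean([pred(s) == pred(x) for s in samples])

def greedy_explanation(x, delta=0.95):
    # grow the subset greedily until the probabilistic guarantee holds
    subset = []
    while sufficiency(x, subset) < delta:
        remaining = [i for i in range(len(x)) if i not in subset]
        subset.append(max(remaining, key=lambda i: sufficiency(x, subset + [i])))
    return sorted(subset)

print(greedy_explanation(np.array([1, 1, 0])))  # features 0 and 1 suffice: [0, 1]
```

Here observing features 0 and 1 (both equal to 1) already fixes the majority vote, so feature 2 need not be shown to the user.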

Author(s):  
Anton Tarasyuk ◽  
Elena Troubitsyna ◽  
Linas Laibinis

Formal refinement-based approaches have proved their worth in verifying system correctness. Often, besides ensuring functional correctness, we also need to quantitatively demonstrate that the desired level of dependability is achieved. However, the existing refinement-based frameworks do not provide sufficient support for quantitative reasoning. In this chapter, we show how to use probabilistic model checking to verify probabilistic refinement of Event-B models. Such integration allows us to combine logical reasoning about functional correctness with probabilistic reasoning about reliability.
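The quantitative side of such reasoning can be illustrated with a minimal sketch (not Event-B syntax or an actual probabilistic model checker): computing a reachability probability in a tiny discrete-time Markov chain by fixed-point iteration, the kind of reliability property a tool like PRISM would verify.

```python
import numpy as np

# States: 0 = trying, 1 = ok (absorbing), 2 = failed (absorbing)
P = np.array([
    [0.1, 0.8, 0.1],   # from "trying": retry, succeed, fail
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

x = np.array([0.0, 1.0, 0.0])   # P(eventually reach "ok" | start state)
for _ in range(200):            # fixed-point iteration of the reachability equations
    x[0] = P[0] @ x
print(round(x[0], 4))           # converges to 0.8 / (1 - 0.1) = 0.8889
```

A reliability requirement such as "the system succeeds with probability at least 0.85" would then be checked against this computed value.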


2021 ◽  
Author(s):  
Klaus Johannsen ◽  
Nadine Goris ◽  
Bjørnar Jensen ◽  
Jerry Tjiputra

Abstract Optimization problems can be found in many areas of science and technology. Often, not only the global optimum but also a (larger) number of near-optima are of interest. This gives rise to so-called multimodal optimization problems. In most cases, the number and quality of the optima are unknown, and assumptions on the objective functions cannot be made. In this paper, we focus on continuous, unconstrained optimization in moderately high-dimensional continuous spaces (<=10). We present a scalable algorithm with virtually no parameters, which performs well for general objective functions (non-convex, discontinuous). It is based on two well-established algorithms (CMA-ES, deterministic crowding). Novel elements of the algorithm are the detection of seed points for local searches and collision avoidance, both based on nearest neighbors, and a strategy for semi-sequential optimization to realize scalability. The performance of the proposed algorithm is numerically evaluated on the CEC2013 niching benchmark suite for 1-20 dimensional functions and a 9-dimensional real-world problem from constraint optimization in climate research. The algorithm shows good performance on the CEC2013 benchmarks and falls short only on higher-dimensional and strongly anisotropic problems. In the case of the climate-related problem, the algorithm is able to find a high number (150) of optima which are of relevance to climate research. The proposed algorithm does not require special configuration for the optimization problems considered in this paper, i.e. it shows good black-box behavior.
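One of the novel elements, nearest-neighbor seed detection, can be sketched as follows. This is a simplified reading, not the authors' exact criterion: a point seeds a local search when its objective value beats those of all of its k nearest neighbors.

```python
import numpy as np

def detect_seeds(points, values, k=5):
    # a point seeds a local search if it is better (lower objective value)
    # than all of its k nearest neighbours
    seeds = []
    for i in range(len(points)):
        dists = np.linalg.norm(points - points[i], axis=1)
        nn = np.argsort(dists)[1:k + 1]        # skip the point itself
        if values[i] < values[nn].min():
            seeds.append(i)
    return seeds

# bimodal toy objective (x^2 - 1)^2 with minima near x = -1 and x = +1
rng = np.random.default_rng(1)
pts = rng.uniform(-2, 2, size=(200, 1))
vals = (pts[:, 0] ** 2 - 1.0) ** 2
seeds = detect_seeds(pts, vals)
print(sorted(round(pts[i, 0], 2) for i in seeds))
```

On this toy sample the detected seeds cluster around the two basins at ±1, each of which would then be refined by a CMA-ES run.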


Author(s):  
Po-Jen Cheng ◽  
Yuan-Hsun Liao ◽  
Pao-Ta Yu

With rising societal interest in the subject areas of science, technology, engineering, art, and mathematics (STEAM), a micro:bit robotics course with an online group study (OGS) system was designed to foster student learning anytime and anywhere. OGS enables the development of a learning environment that combines real-world and digital-world resources, and can enhance the effectiveness of learning among students from remote areas. In this pre- and post-test experimental design, we studied 22 (8 male and 14 female) 5th grade students from a remote area of Taiwan. A t-test performed before and after the robotics course showed a positive increase in students’ proportional reasoning, probabilistic reasoning, and ability to analyze a problem. Results also revealed a gender difference in the association between students’ logical reasoning and problem-solving ability.
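The pre/post comparison can be reproduced in outline with synthetic scores. The data below are simulated for illustration only, not the study's measurements:

```python
import numpy as np
from scipy import stats

# hypothetical pre/post scores for 22 students (simulated, not the study's data)
rng = np.random.default_rng(42)
pre = rng.normal(60, 10, size=22)
post = pre + rng.normal(8, 5, size=22)   # simulated learning gain

t, p = stats.ttest_rel(post, pre)        # paired-sample t-test for a pre/post design
print(f"t = {t:.2f}, p = {p:.4f}")
```

With 22 paired observations the test has 21 degrees of freedom, so a t statistic above about 2.08 is significant at the two-tailed 5% level.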


2009 ◽  
Vol 32 (1) ◽  
pp. 69-84 ◽  
Author(s):  
Mike Oaksford ◽  
Nick Chater

Abstract According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic – the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards. Bayesian Rationality argues that rationality is defined instead by the ability to reason about uncertainty. Although people are typically poor at numerical reasoning about probability, human thought is sensitive to subtle patterns of qualitative Bayesian, probabilistic reasoning. In Chapters 1–4 of Bayesian Rationality (Oaksford & Chater 2007), the case is made that cognition in general, and human everyday reasoning in particular, is best viewed as solving probabilistic, rather than logical, inference problems. In Chapters 5–7 the psychology of “deductive” reasoning is tackled head-on: it is argued that purportedly “logical” reasoning problems, revealing apparently irrational behaviour, are better understood from a probabilistic point of view. Data from conditional reasoning, Wason's selection task, and syllogistic inference are captured by recasting these problems probabilistically. The probabilistic approach makes a variety of novel predictions which have been experimentally confirmed. The book considers the implications of this work, and the wider “probabilistic turn” in cognitive science and artificial intelligence, for understanding human rationality.
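The contrast between logical and probabilistic readings of a conditional can be made concrete with a small numerical example (the probabilities are illustrative, not taken from the book):

```python
# Probabilistic reading of "if p then q": treat the conditional as P(q | p)
# rather than a material implication (numbers are illustrative only)
P_p = 0.9              # confidence in the premise p
P_q_given_p = 0.95     # "if p then q", held with high but not full certainty
P_q_given_not_p = 0.3

# modus ponens becomes probability propagation instead of certain inference:
# by total probability, P(q) = P(q|p)P(p) + P(q|~p)P(~p)
P_q = P_q_given_p * P_p + P_q_given_not_p * (1 - P_p)
print(round(P_q, 3))   # 0.885
```

Under classical logic the conclusion q would follow with certainty; under the probabilistic reading, confidence in q is high but graded, and degrades smoothly as confidence in the premises drops.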


2021 ◽  
Author(s):  
Dipanwita Sinha Mukherjee ◽  
Divyanshy Bhandari ◽  
Naveen Yeri

<div>Predictive software is typically deployed under the hypothesis that the test data distribution will not differ from the training data distribution. Real-world scenarios do not follow this rule, which results in inconsistent and non-transferable observations in various cases. This makes dataset shift a growing concern. In this paper, we explore the recent concepts of label shift detection and classifier correction with the help of Black Box Shift Detection (BBSD), Black Box Shift Estimation (BBSE), and Black Box Shift Correction (BBSC). The digits dataset from "sklearn" and the "LogisticRegression" classifier have been used for this investigation. Knock-out shift was clearly detected by applying the Kolmogorov–Smirnov test for BBSD. The overall accuracy of the classifier improved from 91% to 97% after applying BBSE and BBSC.</div>
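The BBSD idea can be sketched in a few lines, with synthetic classifier scores standing in for the paper's digits/LogisticRegression pipeline: under label shift, the distribution of the fixed black-box model's outputs changes, which a two-sample Kolmogorov–Smirnov test can detect.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# classifier scores on held-out source data vs. shifted target data
# (synthetic: under label shift the output distribution of the fixed
# black-box model changes, even though the model itself is untouched)
source_scores = rng.beta(2, 5, size=1000)
target_scores = rng.beta(5, 2, size=1000)   # label mix "knocked out" / changed

stat, p_value = ks_2samp(source_scores, target_scores)
print(p_value < 0.01)   # True: shift detected
```

In the full method, BBSE would then estimate the target label proportions from the confusion matrix and the shifted output distribution, and BBSC would reweight the classifier accordingly.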


2005 ◽  
Vol 38 (7) ◽  
pp. 49
Author(s):  
DEEANNA FRANKLIN
