Declarative Approaches to Counterfactual Explanations for Classification

Author(s):  
LEOPOLDO BERTOSSI

We propose answer-set programs that specify and compute counterfactual interventions on entities that are input to a classification model. In relation to the outcome of the model, the resulting counterfactual entities serve as a basis for the definition and computation of causality-based explanation scores for the feature values in the entity under classification, namely responsibility scores. The approach and the programs can be applied with black-box models and also with models that can be specified as logic programs, such as rule-based classifiers. The main focus of this study is on the specification and computation of best counterfactual entities, that is, those that lead to maximum responsibility scores. From them one can read off the explanations as maximum-responsibility feature values in the original entity. We also extend the programs to bring semantic or domain knowledge into the picture. We show how the approach could be extended by means of probabilistic methods, and how the underlying probability distributions could be modified through the use of constraints. Several examples of programs written in the syntax of the DLV ASP solver, and run with it, are shown.
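
The ASP encodings themselves are not reproduced in the abstract, but the responsibility score they compute is easy to illustrate. The following minimal Python sketch (our own illustration, not the paper's DLV programs) brute-forces x-Resp-style scores for a black-box classifier: a feature value receives score 1/(1+k), where k is the size of the smallest contingency set of other features whose values must also be changed before an intervention on that feature flips the classification.

    from itertools import combinations, product

    def responsibility_scores(classify, entity, domains):
        """Brute-force responsibility scores for a black-box classifier.

        classify: callable mapping a tuple of feature values to a label
        entity:   tuple of feature values whose label we want to explain
        domains:  per-feature lists of admissible alternative values
        """
        original = classify(tuple(entity))
        n = len(entity)
        scores = [0.0] * n
        for target in range(n):
            others = [i for i in range(n) if i != target]
            done = False
            for k in range(n):                    # contingency-set size, smallest first
                for gamma in combinations(others, k):
                    for w in product(*[domains[i] for i in gamma]):
                        base = list(entity)
                        for i, v in zip(gamma, w):
                            base[i] = v
                        # the contingency changes alone must not flip the label
                        if classify(tuple(base)) != original:
                            continue
                        # an additional intervention on the target must flip it
                        for v in domains[target]:
                            cf = list(base)
                            cf[target] = v
                            if classify(tuple(cf)) != original:
                                scores[target] = 1.0 / (1 + k)
                                done = True
                                break
                        if done:
                            break
                    if done:
                        break
                if done:
                    break
        return scores

Because contingency sets are enumerated smallest first, the counterfactual entities found here are exactly the "best" (maximum-responsibility) ones the paper targets; the search is exponential, which is precisely the kind of combinatorial problem the ASP encodings delegate to a solver such as DLV.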

Author(s):  
Morgan Bruns,
Christiaan J. J. Paredis

Since engineering design requires decision making under uncertainty, the degree to which good decisions can be made depends upon the degree to which the decision maker has expressive and accurate representations of his or her uncertain beliefs. Whereas traditional decision analysis uses precise probability distributions to represent uncertain beliefs, recent research has examined the effects of relaxing this assumption of precision. A specific example of this is the theory of imprecise probability. Imprecise probabilities are more expressive than precise probabilities, but they are also more computationally expensive to propagate through mathematical models. The probability box (p-box) is an alternative representation that is both more expressive than precise probabilities and less computationally expensive than general imprecise probabilities. In this paper, we introduce a method for propagating p-boxes through black-box models. On two example models, the new method, called p-box convolution sampling (PCS), is compared with three other p-box propagation methods. It is found that, although PCS is less expensive than the alternatives, it is still relatively expensive and therefore only justifiable when the expected benefits are large. Several directions for further improving the efficiency of PCS are discussed.
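
The PCS algorithm itself is not described in the abstract. As a point of reference, the sketch below shows a generic double-loop propagation of independent p-box inputs through a black-box model; it is our own illustrative baseline, not PCS, and the vertex enumeration is only valid if the model is monotone in each input.

    import numpy as np

    def propagate_pbox(model, q_lo, q_hi, n_samples=1000, seed=0):
        """Double-loop Monte Carlo propagation of independent p-box inputs.

        q_lo / q_hi: per-input quantile functions of the p-box's bounding
        CDFs (q_hi inverts the upper CDF and so yields the smaller quantile).
        Assumes `model` is monotone in each input, so the image of an input
        box is spanned by its vertices.
        """
        rng = np.random.default_rng(seed)
        d = len(q_lo)
        lo_out, hi_out = [], []
        for _ in range(n_samples):
            u = rng.random(d)
            # interval realisation of each input for this sample of u
            lows  = np.array([q_hi[i](u[i]) for i in range(d)])
            highs = np.array([q_lo[i](u[i]) for i in range(d)])
            # evaluate the model at all 2^d vertices of the input box
            vals = [model(np.where(np.array(mask), highs, lows))
                    for mask in np.ndindex(*([2] * d))]
            lo_out.append(min(vals))
            hi_out.append(max(vals))
        # sorted endpoint samples bound the CDF of the model output
        return np.sort(lo_out), np.sort(hi_out)

Plotting the two sorted arrays against uniform ranks gives bounding CDFs, i.e., a p-box, on the model output; the 2^d vertex evaluations per sample are what make naive propagation expensive and motivate cheaper schemes such as PCS.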


Energies, 2020, Vol. 13 (24), p. 6749
Author(s):  
Reda El Bechari,
Stéphane Brisset,
Stéphane Clénet,
Frédéric Guyomarch,
Jean Claude Mipo

Metamodels have proved to be a very efficient strategy for optimizing expensive black-box models, e.g., finite element simulations of electromagnetic devices, since they reduce the computational burden of optimization. However, the conventional use of metamodels has limitations, such as the cost of fitting the metamodel and of solving the infill-criterion subproblem. This paper proposes a new algorithm that combines metamodels with a branch-and-bound (B&B) strategy. Since the efficiency of the B&B algorithm relies on the estimation of bounds, we investigated the prediction error given by metamodels as a means of predicting these bounds. This combination leads to high-fidelity global solutions. We propose a comparison protocol to assess the approach's performance with respect to that of other algorithms from different categories, and then treat two electromagnetic optimization benchmarks. This paper gives practical insights into algorithms that can be used when optimizing electromagnetic devices.
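
The paper's algorithm is not spelled out in the abstract; the sketch below is our own hedged illustration of the general idea of coupling a metamodel's prediction error with branch-and-bound pruning. It uses a scikit-learn Gaussian process as the metamodel and the statistical bound mu - beta*sigma to decide which boxes can be discarded; the helper split and all parameter values are illustrative assumptions.

    import heapq
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def split(box, axis):
        """Bisect a box (dim x 2 array of [low, high]) along one axis."""
        mid = box[axis].mean()
        left, right = box.copy(), box.copy()
        left[axis, 1] = mid
        right[axis, 0] = mid
        return left, right

    def gp_branch_and_bound(f, bounds, n_init=10, max_iter=50, beta=2.0, seed=0):
        """Minimise an expensive black box, pruning boxes whose GP-based
        lower bound (mu - beta*sigma) cannot beat the incumbent."""
        rng = np.random.default_rng(seed)
        dim = bounds.shape[0]
        X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
        y = np.array([f(x) for x in X])
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

        best_x, best_y = X[np.argmin(y)], y.min()
        heap, tie = [(-np.inf, 0, bounds)], 1     # (lower bound, tiebreak, box)
        for _ in range(max_iter):
            if not heap:
                break
            gp.fit(X, y)                          # refit metamodel on all samples
            lb, _, box = heapq.heappop(heap)
            if lb > best_y:                       # best remaining bound is hopeless
                break
            centre = box.mean(axis=1)             # spend one true evaluation here
            y_c = f(centre)
            X, y = np.vstack([X, centre]), np.append(y, y_c)
            if y_c < best_y:
                best_x, best_y = centre, y_c
            axis = np.argmax(box[:, 1] - box[:, 0])
            for child in split(box, axis):
                cand = rng.uniform(child[:, 0], child[:, 1], size=(64, dim))
                mu, sd = gp.predict(cand, return_std=True)
                child_lb = float(np.min(mu - beta * sd))
                if child_lb <= best_y:            # keep only boxes that might improve
                    heapq.heappush(heap, (child_lb, tie, child))
                    tie += 1
        return best_x, best_y

The pruning step is where the abstract's observation matters: the quality of the metamodel's error estimate directly controls how aggressively boxes can be discarded without losing the global optimum.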


We provide a framework for investment managers to create dynamic pretrade models. The approach helps market participants shed light on vendor black-box models, which often provide no transparency into the model's functional form or working mechanics. In addition, it allows portfolio managers to create consensus estimates based on their own expectations, such as forecasted liquidity and volatility, and to incorporate the firm's proprietary alpha estimates into the solution. These techniques allow managers to reduce over-dependence on any one black-box model, incorporate costs into the stock-selection and portfolio-optimization phases of the investment cycle, and perform "what-if" and sensitivity analyses without the risk of information leakage to any outside party or vendor.
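
As a concrete illustration of the kind of transparent pretrade model such a framework can expose, consider the widely used square-root market-impact form driven by forecasted volatility and liquidity; the coefficients below are illustrative assumptions, not the authors' calibrated model.

    def pretrade_cost_bps(order_shares, adv_shares, daily_vol,
                          spread_bps=2.0, a1=0.9):
        """Toy pretrade cost estimate in basis points.

        Half the quoted spread plus a square-root market-impact term
        scaled by forecasted daily volatility and order size relative
        to average daily volume (ADV). a1 and the 0.5 exponent are
        illustrative; a production model would be calibrated to the
        firm's own execution data.
        """
        size = order_shares / adv_shares          # participation proxy
        impact_bps = a1 * daily_vol * (size ** 0.5) * 1e4
        return spread_bps / 2 + impact_bps

    # e.g. a 500k-share order against 10m ADV at 2% daily volatility:
    # pretrade_cost_bps(5e5, 1e7, 0.02) -> about 1.0 + 40.2 = 41.2 bps

Because every term is visible, a manager can substitute in-house volatility and liquidity forecasts, or proprietary alpha-adjusted parameters, and run the what-if analyses the framework describes without sending data to a vendor.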


Author(s):  
Kacper Sokol,
Peter Flach

Understanding data, models and predictions is important for machine learning applications. Due to the limitations of our spatial perception and intuition, analysing high-dimensional data is inherently difficult. Furthermore, black-box models achieving high predictive accuracy are widely used, yet the logic behind their predictions is often opaque. Textualisation, a natural-language narrative of selected phenomena, can tackle these shortcomings. Extended with argumentation theory, it could even allow machine learning models and predictions to argue persuasively for their choices.


Author(s):  
Marjan Popov,
Bjørn Gustavsen,
Juan A. Martinez-Velasco

Voltage surges arising from transient events, such as switching operations or lightning discharges, are one of the main causes of transformer winding failure. The voltage distribution along a transformer winding depends greatly on the waveshape of the voltage applied to the winding. This distribution is not uniform in the case of steep-fronted transients, since a large portion of the applied voltage is usually concentrated on the first few turns of the winding. High-frequency electromagnetic transients in transformers can be studied using internal models (i.e., models for analyzing the propagation and distribution of the incident impulse along the transformer windings) and black-box models (i.e., models for analyzing the response of the transformer from its terminals and for calculating voltage transfer). This chapter presents a summary of the most common models developed for analyzing the behaviour of transformers subjected to steep-fronted waves, together with a description of procedures for determining the parameters to be specified in those models. The main section details test studies based on actual transformers, in which the models are validated by comparing simulation results with laboratory measurements.
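
As a worked illustration of why steep-fronted surges concentrate on the first turns, the classic capacitive-ladder approximation (a textbook result, not taken from this chapter) gives the initial voltage distribution along a uniform winding as u(x) = sinh(alpha(1-x))/sinh(alpha) for a grounded neutral, with alpha = sqrt(Cg/Cs), the ratio of total ground capacitance to total series capacitance:

    import numpy as np

    def initial_distribution(x, alpha, grounded_neutral=True):
        """Initial impulse-voltage distribution along a uniform winding
        (capacitive ladder approximation, valid at t = 0+).

        x:     per-unit position from the line end (0 = line end, 1 = neutral)
        alpha: sqrt(Cg/Cs), ground-to-series capacitance ratio
        """
        if grounded_neutral:
            return np.sinh(alpha * (1 - x)) / np.sinh(alpha)
        return np.cosh(alpha * (1 - x)) / np.cosh(alpha)

    x = np.linspace(0.0, 1.0, 11)
    for alpha in (1.0, 5.0, 10.0):
        print(alpha, np.round(initial_distribution(x, alpha), 3))
    # for alpha = 10, roughly 63% of the applied voltage initially appears
    # across the first 10% of the winding, the concentration described above

For power transformers alpha is typically well above 1, so the steeper the incident front, the more the initial stress crowds into the line-end turns that the internal models in this chapter are designed to analyze.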


Author(s):  
Evren Daglarli

Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the internet of things (IoT) and cyber-physical systems (CPS), which are important components of Industry 4.0, are increasingly intensifying. However, current deep learning models suffer from important shortcomings. These artificial neural network based models are black-box models that learn from the data fed to them and generalize over it; as a result, the relational link between input and output is not observable. For these reasons, serious effort must be devoted to the explainability and interpretability of black-box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems will affect high-level virtualization and simulation infrastructure, real-time supply chains, cyber factories with smart machines communicating over the internet, the maximization of production efficiency, and the analysis of service quality and competitiveness.

