To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods

2021 ◽  
Vol 7 ◽  
pp. e479
Author(s):  
Elvio Amparore ◽  
Alan Perotti ◽  
Paolo Bajardi

The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is little consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and their proactive use as decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations, with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects, including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need for standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing metrics and novel ones defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.
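The instability defect described above can be quantified without any dedicated framework. The following is a minimal sketch of one such metric (not the LEAF API): it re-runs a stochastic explainer, such as a perturbation-based method like LIME, on the same instance and measures how consistently the same top features are reported. The toy explainer and all names are illustrative assumptions:

```python
import numpy as np

def top_k_features(weights, k=3):
    """Indices of the k largest-magnitude coefficients of a local linear explanation."""
    return set(np.argsort(np.abs(weights))[-k:])

def jaccard_stability(explain_fn, x, runs=10, k=3, seed=0):
    """Re-run a stochastic explainer on the same instance and return the
    average pairwise Jaccard similarity of the top-k feature sets.
    1.0 means perfectly stable explanations; values near 0 mean the
    reported features change from run to run."""
    rng = np.random.default_rng(seed)
    tops = [top_k_features(explain_fn(x, rng), k) for _ in range(runs)]
    sims = [len(a & b) / len(a | b)
            for i, a in enumerate(tops) for b in tops[i + 1:]]
    return float(np.mean(sims))

# Toy stochastic "explainer": true coefficients plus sampling noise,
# mimicking the run-to-run variance of perturbation-based methods.
true_w = np.array([2.0, -1.5, 0.1, 0.05, 0.02])
noisy_explainer = lambda x, rng: true_w + rng.normal(0, 0.3, size=true_w.shape)

print(jaccard_stability(noisy_explainer, x=None, runs=20, k=2))
```

A deterministic explainer scores exactly 1.0 on this metric; the noisier the explainer, the lower the average Jaccard similarity.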

Author(s):  
Evren Daglarli

Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the internet of things (IoT) and cyber-physical systems (CPS), which are important components of Industry 4.0, are intensifying. However, current deep learning models have important shortcomings. These artificial neural network based models are black boxes that generalize from the data transmitted to them and learn from it. Therefore, the relational link between input and output is not observable. For these reasons, serious effort is needed on the explainability and interpretability of black-box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems will affect a high level of virtualization and simulation infrastructure, real-time supply chains, cyber factories with smart machines communicating over the internet, the maximization of production efficiency, and the analysis of service quality and competition levels.


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently, and many researchers are approaching the subject from different dimensions, with interesting results. However, we are still at the beginning of the path to understanding these types of models. The forthcoming years are expected to be ones in which the openness of deep learning models is widely discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These methods can yield highly effective results depending on the data set size, data set quality, the methods used in feature extraction, the hyperparameter set used in the deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black boxes that generalize from the data transmitted to them and learn from it. Therefore, the relational link between input and output is not observable. This is an important open issue in artificial neural networks and deep learning models. For these reasons, serious effort is needed on the explainability and interpretability of black-box models.


2019 ◽  
Vol 29 (Supplement_4) ◽  
Author(s):  
S Ram

Abstract With rapid developments in big data technology and the prevalence of large-scale datasets from diverse sources, the healthcare predictive analytics (HPA) field is witnessing a dramatic surge in interest. In healthcare, it is not only important to provide accurate predictions but also critical to provide reliable explanations of the underlying black-box models making the predictions. Such explanations can play a crucial role not only in supporting clinical decision-making but also in facilitating user engagement and patient safety. If users and decision makers do not have faith in the HPA model, it is highly likely that they will reject its use. Furthermore, it is extremely risky to blindly accept and apply the results derived from black-box models, which might lead to undesirable consequences or life-threatening outcomes in high-stakes domains such as healthcare. As machine learning and artificial intelligence systems become more capable and ubiquitous, explainable artificial intelligence and machine learning interpretability are garnering significant attention among practitioners and researchers. The introduction of policies such as the General Data Protection Regulation (GDPR) has amplified the need to ensure the human interpretability of prediction models. In this talk I will discuss methods and applications for developing local as well as global explanations from machine learning, and the value they can provide for healthcare prediction.
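The distinction between local and global explanations can be illustrated on a toy black-box model. The sketch below (plain NumPy; the model and all names are invented for illustration, not methods from the talk) computes a global permutation-importance profile over a dataset and a local finite-difference sensitivity for a single instance:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "black-box" risk model: risk rises with feature 0, falls with feature 1,
# and barely depends on feature 2.
def black_box(X):
    z = 1.8 * X[:, 0] - 1.2 * X[:, 1] + 0.05 * X[:, 2]
    return 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(500, 3))

# Global explanation: permutation importance -- how much does shuffling
# one feature change the model's predictions on average?
def permutation_importance(model, X, n_repeats=5):
    base = model(X)
    imps = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            deltas.append(np.mean(np.abs(model(Xp) - base)))
        imps.append(np.mean(deltas))
    return np.array(imps)

# Local explanation: finite-difference sensitivity around one instance.
def local_sensitivity(model, x, eps=1e-4):
    x = x.reshape(1, -1)
    grads = []
    for j in range(x.shape[1]):
        xp = x.copy()
        xp[0, j] += eps
        grads.append((model(xp) - model(x))[0] / eps)
    return np.array(grads)

print(permutation_importance(black_box, X))   # features 0 and 1 dominate
print(local_sensitivity(black_box, X[0]))
```

The global profile summarises behaviour across the whole population, while the local sensitivities explain one prediction, which is the view a clinician typically needs for an individual patient.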


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
M Afnan ◽  
Y Liu ◽  
V Conitzer ◽  
C Rudin ◽  
A Mishra ◽  
...  

Abstract
Study question: What are the epistemic and ethical considerations of clinically implementing Artificial Intelligence (AI) algorithms in embryo selection?
Summary answer: AI embryo selection algorithms used to date are “black-box” models with significant epistemic and ethical issues, and there are no trials assessing their clinical effectiveness.
What is known already: The innovation of time-lapse imaging offers the potential to generate vast quantities of data for embryo assessment. Computer vision allows image data to be analysed using algorithms developed via machine learning, which learn and adapt as they are exposed to more data. Most algorithms are developed using neural networks and are uninterpretable (or “black box”). Uninterpretable models are either too complicated to understand or proprietary, in which case comprehension is impossible for outsiders. In the IVF context, these outsiders include doctors, embryologists and patients, which raises ethical questions for their use in embryo selection.
Study design, size, duration: We performed a scoping review of articles evaluating AI for embryo selection in IVF and considered the epistemic and ethical implications of current approaches.
Participants/materials, setting, methods: We searched Medline, Embase, ClinicalTrials.gov and the EU Clinical Trials Register for full-text papers evaluating AI for embryo selection using the following key words: artificial intelligence* OR AI OR neural network* OR machine learning OR support vector machine OR automatic classification AND IVF OR in vitro fertilisation OR embryo*, as well as relevant MeSH and Emtree terms for Medline and Embase respectively.
Main results and the role of chance: We found no trials evaluating clinical effectiveness, either published or registered. We found efficacy studies that looked at two types of outcomes: accuracy for predicting pregnancy or live birth, and agreement with embryologist evaluation. Some algorithms were shown to broadly differentiate well between “good”- and “poor”-quality embryos but not between embryos of similar quality, which is the clinical need. Almost universally, the AI models were opaque (“black box”) in that at least some part of the process was uninterpretable. “Black-box” models are problematic for epistemic and ethical reasons. Epistemic concerns include information asymmetries between algorithm developers and doctors, embryologists and patients; the risk of biased prediction caused by known and/or unknown confounders during the training process; difficulties in real-time error checking due to limited interpretability; the economics of buying into commercial proprietary models that are brittle to variation in the treatment process; and an overall difficulty in troubleshooting. Ethical pitfalls include the risk of misrepresenting patient values; concern for the health and well-being of future children; the risk of disvaluing disability; possible societal implications; and a responsibility gap in the event of adverse events.
Limitations, reasons for caution: Our search was limited to the two main medical research databases. Although we checked article references for more publications, we were less likely to identify studies that were not indexed in Medline or Embase, especially if they were not cited in studies identified in our search.
Wider implications of the findings: It is premature to implement AI for embryo selection outside of a clinical trial. AI for embryo selection is potentially useful, but it must be adopted carefully and transparently, as the epistemic and ethical issues are significant. We advocate the use of interpretable AI models to overcome these issues.
Trial registration number: not applicable


AI and Ethics ◽  
2021 ◽  
Author(s):  
Christian Herzog

Abstract This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved, white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable, to facilitate responsible use.


Energies ◽  
2020 ◽  
Vol 13 (24) ◽  
pp. 6749
Author(s):  
Reda El Bechari ◽  
Stéphane Brisset ◽  
Stéphane Clénet ◽  
Frédéric Guyomarch ◽  
Jean Claude Mipo

Metamodels have proved to be a very efficient strategy for optimizing expensive black-box models, e.g., finite element simulations of electromagnetic devices, as they reduce the computational burden of optimization. However, the conventional use of metamodels has limitations, such as the cost of metamodel fitting and of solving the infill criterion problem. This paper proposes a new algorithm that combines metamodels with a branch-and-bound (B&B) strategy. Since the efficiency of the B&B algorithm relies on the estimation of the bounds, we investigated using the prediction error given by the metamodels to predict the bounds. This combination leads to high-fidelity global solutions. We propose a comparison protocol to assess the approach’s performance with respect to that of other algorithms of different categories, and two electromagnetic optimization benchmarks are then treated. This paper gives practical insights into algorithms that can be used when optimizing electromagnetic devices.
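The bound construction the abstract alludes to can be sketched in a few lines: fit a cheap surrogate to the samples gathered so far, widen its minimum over a sub-interval by an estimate of the prediction error, and prune any branch whose widened bound cannot beat the incumbent. The 1-D toy below is a sketch under simplifying assumptions (a quadratic polynomial surrogate and the worst-case fit residual as the error estimate), not the authors' algorithm:

```python
import numpy as np

# Stand-in for an expensive black-box objective (e.g. one FE simulation call).
def expensive(x):
    return (x - 0.35) ** 2 + 0.1 * np.sin(20 * x)

def surrogate_lower_bound(xs, ys, a, b):
    """Fit a cheap quadratic metamodel to the samples and lower-bound the
    objective on [a, b]: minimum of the surrogate over the interval, widened
    by the worst observed fit residual (the 'prediction error' in the bound)."""
    coeffs = np.polyfit(xs, ys, deg=2)
    err = np.max(np.abs(np.polyval(coeffs, xs) - ys))
    grid = np.linspace(a, b, 50)
    return np.min(np.polyval(coeffs, grid)) - err

def branch_and_bound(a=0.0, b=1.0, tol=1e-3, max_iter=200):
    xs = np.linspace(a, b, 5)          # initial design of experiments
    ys = expensive(xs)
    best_x, best_y = xs[np.argmin(ys)], ys.min()
    queue = [(a, b)]
    for _ in range(max_iter):
        if not queue:
            break
        lo, hi = queue.pop(0)
        if hi - lo < tol:
            continue
        # Prune branches whose surrogate lower bound cannot beat the incumbent.
        if surrogate_lower_bound(xs, ys, lo, hi) > best_y:
            continue
        mid = 0.5 * (lo + hi)
        y_mid = expensive(mid)          # one new expensive evaluation
        xs, ys = np.append(xs, mid), np.append(ys, y_mid)
        if y_mid < best_y:
            best_x, best_y = mid, y_mid
        queue += [(lo, mid), (mid, hi)]
    return best_x, best_y

print(branch_and_bound())
```

The looser the surrogate's error estimate, the less pruning occurs and the more the search degenerates into uniform bisection, which is why a well-calibrated prediction error matters for the efficiency of the combined method.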


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 102
Author(s):  
Mohammad Reza Davahli ◽  
Waldemar Karwowski ◽  
Krzysztof Fiok ◽  
Thomas Wan ◽  
Hamid R. Parsaei

In response to the need to address the safety challenges in the use of artificial intelligence (AI), this research aimed to develop a framework for a safety controlling system (SCS) to address the AI black-box mystery in the healthcare industry. The main objective was to propose safety guidelines for implementing AI black-box models to reduce the risk of potential healthcare-related incidents and accidents. The system was developed by adopting the multi-attribute value theory (MAVT) approach, which comprises four symmetrical parts: extracting attributes, generating weights for the attributes, developing a rating scale, and finalizing the system. On the basis of the MAVT approach, three layers of attributes were created: the first level contained six key dimensions, the second level included 14 attributes, and the third level comprised 78 attributes. The key first-level dimensions of the SCS included safety policies, incentives for clinicians, clinician and patient training, communication and interaction, planning of actions, and control of such actions. The proposed system may provide a basis for detecting AI utilization risks, preventing incidents from occurring, and developing emergency plans for AI-related risks. This approach could also guide and control the implementation of AI systems in the healthcare industry.
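An additive MAVT score of the kind underlying such a system can be sketched as a weighted sum of normalized ratings over the first-level dimensions. The weights and 1-5 ratings below are invented for illustration and are not the paper's elicited values:

```python
# Hypothetical weights (summing to 1) and 1-5 ratings for the six first-level
# dimensions; the real SCS elicits weights at three levels of the tree.
dimensions = {
    "safety policies":            (0.25, 4),
    "incentives for clinicians":  (0.10, 3),
    "clinician/patient training": (0.20, 5),
    "communication/interaction":  (0.15, 4),
    "planning of actions":        (0.15, 2),
    "control of actions":         (0.15, 3),
}

def mavt_score(attrs):
    """Additive multi-attribute value: weighted sum of ratings mapped onto [0, 1]."""
    assert abs(sum(w for w, _ in attrs.values()) - 1.0) < 1e-9
    return sum(w * (r - 1) / 4 for w, r in attrs.values())  # map 1-5 onto 0-1

print(round(mavt_score(dimensions), 3))
```

The additive form makes the system auditable: a low overall score can be traced directly to the weight and rating of the dimension responsible.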


We provide a framework for investment managers to create dynamic pretrade models. The approach helps market participants shed light on vendor black-box models, which often do not provide any transparency into the model’s functional form or working mechanics. In addition, it allows portfolio managers to create consensus estimates based on their own expectations, such as forecasted liquidity and volatility, and to incorporate firm proprietary alpha estimates into the solution. These techniques allow managers to reduce overdependence on any one black-box model, incorporate costs into the stock selection and portfolio optimization phases of the investment cycle, and perform “what-if” and sensitivity analyses without the risk of information leakage to any outside party or vendor.
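As a transparent alternative to a vendor black box, a pretrade estimate can be built from the manager's own liquidity and volatility forecasts. The sketch below uses the common square-root market-impact functional form with illustrative parameters; it is not the framework from the article, and the coefficient `alpha` would need to be calibrated to the manager's own execution data:

```python
import math

def pretrade_cost_bps(shares, adv, daily_vol, spread_bps, alpha=1.0):
    """Transparent stand-in for a vendor pretrade model: half-spread plus a
    square-root market-impact term driven by the manager's own forecasts of
    liquidity (average daily volume) and volatility. Parameters illustrative."""
    participation = shares / adv
    impact_bps = alpha * daily_vol * math.sqrt(participation) * 1e4
    return spread_bps / 2 + impact_bps

# "What-if" analysis: how does the estimated cost respond to a liquidity shock?
base = pretrade_cost_bps(shares=50_000, adv=1_000_000, daily_vol=0.02, spread_bps=5)
stressed = pretrade_cost_bps(shares=50_000, adv=500_000, daily_vol=0.02, spread_bps=5)
print(base, stressed)   # base is ~47.2 bps with these inputs
```

Because the functional form is explicit, sensitivity analyses like the one above require no calls to an outside vendor, so no order information leaks.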

