On the risk of confusing interpretability with explicability

AI and Ethics ◽  
2021 ◽  
Author(s):  
Christian Herzog

This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved, white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable, to facilitate responsible use.

Author(s):  
Evren Daglarli

Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the internet of things (IoT) and cyber-physical systems (CPS), which are important components of Industry 4.0, are intensifying. However, current deep learning models still have important shortcomings. These artificial neural network-based models are black-box models that learn from, and generalize over, the data fed to them. Therefore, the relational link between input and output is not observable. For these reasons, serious efforts are needed on the explainability and interpretability of black-box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems will affect high-level virtualization and simulation infrastructures, real-time supply chains, cyber factories with smart machines communicating over the internet, the maximization of production efficiency, and the analysis of service quality and competitiveness.


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently. Many researchers are approaching the subject from different angles, and interesting results have emerged. However, we are still only at the beginning of understanding these types of models. The coming years are expected to bring intense discussion of the openness of deep learning models. Among today's artificial intelligence approaches, we frequently encounter deep learning methods. These methods can yield highly effective results depending on data set size, data set quality, the feature-extraction methods, the hyperparameter set used in the deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models still have important shortcomings. These artificial neural network-based models are black-box models that learn from, and generalize over, the data fed to them. Therefore, the relational link between input and output is not observable. This is an important open problem in artificial neural networks and deep learning models. For these reasons, serious efforts are needed on the explainability and interpretability of black-box models.


2021 ◽  
Vol 7 ◽  
pp. e479
Author(s):  
Elvio Amparore ◽  
Alan Perotti ◽  
Paolo Bajardi

The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is little consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as a decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations—with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need to have standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.
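To make the instability defect mentioned above concrete, the following is a minimal Python sketch, assuming scikit-learn and the lime package, that generates two LIME local linear explanations for the same instance and compares their top features. It illustrates one symptom such an evaluation framework could quantify; it does not reproduce the LEAF API itself, and the dataset and model are placeholders.

```python
# Sketch: two LIME local linear explanations of the same prediction.
# Illustrative only -- this is not the LEAF API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)  # the "black box"

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

def top_features(instance, k=5):
    """Return the k features with the largest local linear weights."""
    exp = explainer.explain_instance(instance, model.predict_proba, num_features=k)
    return {name for name, _weight in exp.as_list()}

# LIME samples randomly around the instance, so two runs can disagree
# on the same prediction: low overlap signals an unstable explanation.
run_a = top_features(data.data[0])
run_b = top_features(data.data[0])
print(f"top-5 feature overlap between two runs: {len(run_a & run_b) / 5:.2f}")
```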


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 102
Author(s):  
Mohammad Reza Davahli ◽  
Waldemar Karwowski ◽  
Krzysztof Fiok ◽  
Thomas Wan ◽  
Hamid R. Parsaei

In response to the need to address the safety challenges in the use of artificial intelligence (AI), this research aimed to develop a framework for a safety controlling system (SCS) to address the AI black-box mystery in the healthcare industry. The main objective was to propose safety guidelines for implementing AI black-box models to reduce the risk of potential healthcare-related incidents and accidents. The system was developed by adopting the multi-attribute value model approach (MAVT), which comprises four symmetrical parts: extracting attributes, generating weights for the attributes, developing a rating scale, and finalizing the system. On the basis of the MAVT approach, three layers of attributes were created. The first level contained six key dimensions, the second level included 14 attributes, and the third level comprised 78 attributes. The key first level dimensions of the SCS included safety policies, incentives for clinicians, clinician and patient training, communication and interaction, planning of actions, and control of such actions. The proposed system may provide a basis for detecting AI utilization risks, preventing incidents from occurring, and developing emergency plans for AI-related risks. This approach could also guide and control the implementation of AI systems in the healthcare industry.
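As an illustration of the additive aggregation underlying a MAVT-style model, the sketch below rolls weighted ratings of the six first-level SCS dimensions up into an overall score. The dimension names follow the abstract, but the weights and ratings are hypothetical placeholders, not values from the published framework.

```python
# Illustrative additive multi-attribute value (MAVT-style) aggregation.
# Weights and ratings are hypothetical; only the dimension names come from the abstract.

# Normalised weights for the six first-level dimensions of the SCS.
weights = {
    "safety_policies": 0.25,
    "incentives_for_clinicians": 0.10,
    "clinician_and_patient_training": 0.20,
    "communication_and_interaction": 0.15,
    "planning_of_actions": 0.15,
    "control_of_actions": 0.15,
}

# Ratings on a 1-5 scale, e.g. elicited from domain experts for one AI system.
ratings = {
    "safety_policies": 4,
    "incentives_for_clinicians": 3,
    "clinician_and_patient_training": 5,
    "communication_and_interaction": 2,
    "planning_of_actions": 4,
    "control_of_actions": 3,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to one

# Overall safety score: the weighted sum of the dimension ratings.
score = sum(weights[d] * ratings[d] for d in weights)
print(f"overall SCS score: {score:.2f} / 5")
```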


2021 ◽  
Author(s):  
J. Eric T. Taylor ◽  
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1406
Author(s):  
Salih Sarp ◽  
Murat Kuzlu ◽  
Emmanuel Wilson ◽  
Umit Cali ◽  
Ozgur Guler

Artificial Intelligence (AI) has been among the fastest-growing research and industrial application fields, especially in the healthcare domain, but over the past decades it has largely operated as a black-box model, with limited understanding of its inner workings. AI algorithms are, in large part, built on weights calculated through large matrix multiplications, and these computationally intensive processes are typically hard to interpret and debug. Explainable Artificial Intelligence (XAI) aims to address such black-box, hard-to-debug approaches through the use of various techniques and tools. In this study, XAI techniques are applied to chronic wound classification. The proposed model classifies chronic wounds through the use of transfer learning and fully connected layers. Classified chronic wound images serve as input to the XAI model for explanation. Interpretable results can offer new perspectives to clinicians during the diagnostic phase. The proposed method successfully provides chronic wound classification and its associated explanation, extracting additional knowledge that can also be interpreted by non-data-science experts, such as medical scientists and physicians. This hybrid approach is shown to aid the interpretation and understanding of AI decision-making processes.
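A minimal sketch of the kind of pipeline described here follows, assuming a Keras MobileNetV2 backbone, a hypothetical four-class wound taxonomy, and a LIME-style image explainer; the paper's exact architecture and XAI technique are not reproduced.

```python
# Sketch: transfer learning with a fully connected head, then a post-hoc
# image explanation. Backbone, class count and the use of LIME are
# assumptions for illustration; the study's exact setup may differ.
import numpy as np
import tensorflow as tf
from lime import lime_image

NUM_CLASSES = 4  # hypothetical number of wound types

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),   # fully connected layers
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # wound image data not shown here

# Post-hoc explanation of one prediction with a LIME-style image explainer.
explainer = lime_image.LimeImageExplainer()
image = np.random.rand(224, 224, 3)  # placeholder standing in for a wound image
explanation = explainer.explain_instance(
    image, model.predict, top_labels=1, num_samples=100)
# explanation.get_image_and_mask(...) highlights the regions driving the prediction.
```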


Author(s):  
Navid Nobani ◽  
Fabio Mercorio ◽  
Mario Mezzanzanica

Explainable Artificial Intelligence (XAI) is gaining interest in both academia and industry, mainly owing to the proliferation of increasingly complex and opaque black-box solutions that are replacing their more transparent ancestors. Believing that the overall performance of an XAI system can be augmented by considering the end-user as a human being, we are studying ways to improve explanations by making them more informative and easier to use on the one hand, and interactive and customisable on the other.


Author(s):  
Christian Lossos ◽  
Simon Geschwill ◽  
Frank Morelli

Artificial intelligence (AI) and machine learning (ML) are currently regarded as proven means of optimizing business decisions with mathematical models. However, these technologies are frequently realized as "black box" approaches, with corresponding risks. In this context, openness can create more objectivity and act as a driver for innovative solutions. Rational decisions in a company serve, in the sense of a means-end relationship, to gain competitive advantages. With regard to governance and compliance, regulatory frameworks such as COBIT 2019 and legal foundations such as the General Data Protection Regulation (GDPR), which themselves demand a minimum degree of transparency, must be taken into account. Fairness aspects, which can be impaired by bias effects in ML systems, must also be considered. In certain respects, such as model building, the concept of openness is already practiced in AI and ML. The concept of explainable AI ("Explainable Artificial Intelligence", XAI), however, can considerably increase the associated potential. Various generic approaches (ante-hoc, design, and post-hoc concepts) are available for this purpose, as well as the option of combining them with one another. Accordingly, the opportunities and limits of XAI must be reflected upon systematically. A suitable XAI-based model for corporate decision-making can be characterized in more detail with the help of heuristics.


