Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models

Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the most interesting issues to have emerged recently. Many researchers are approaching the subject from different angles, and interesting results have appeared; however, we are still at the beginning of the road to understanding these kinds of models. The forthcoming years are expected to be ones in which the transparency of deep learning models is widely discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These deep learning methods can yield highly effective results depending on the data set size, the data set quality, the methods used for feature extraction, the hyperparameter set used in the deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network based models are black-box models that generalize the data transmitted to them and learn from those data. Therefore, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, serious effort is needed on the explainability and interpretability of black-box models.

Author(s):  
Evren Daglarli

Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the internet of things (IoT) and cyber-physical systems (CPS), which are important components of Industry 4.0, are intensifying. However, current deep learning models have important shortcomings. These artificial neural network based models are black-box models that generalize the data transmitted to them and learn from those data. Therefore, the relational link between input and output is not observable. For these reasons, serious effort is needed on the explainability and interpretability of black-box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems will affect high-level virtualization and simulation infrastructures, real-time supply chains, cyber factories with smart machines communicating over the internet, the maximization of production efficiency, and the analysis of service quality and competition level.


2021, Vol. 7, pp. e479
Author(s):
Elvio Amparore, Alan Perotti, Paolo Bajardi

The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is little consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as a decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations, with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects, including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need for standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.
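As a rough illustration of the instability issue discussed above (and not a use of the LEAF framework itself, whose API is not shown here), the following sketch trains a black-box classifier and requests two independent LIME explanations for the same instance, then compares the selected features; the dataset, the model, and the helper `top_features` are illustrative assumptions.

```python
# A minimal sketch (not the LEAF framework): probing the stability of
# LIME local linear explanations by explaining the same instance twice.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")

def top_features(instance, k=5):
    """Return the k features LIME deems most relevant for one instance."""
    exp = explainer.explain_instance(instance, black_box.predict_proba,
                                     num_features=k)
    return {name for name, _ in exp.as_list()}

# Two runs on the same instance: a large symmetric difference hints at
# the kind of explanation instability discussed in the paper.
run_a, run_b = top_features(X[0]), top_features(X[0])
print("feature overlap:", len(run_a & run_b), "of", len(run_a))
```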


Processes, 2020, Vol. 8 (7), pp. 749
Author(s):
Jorge E. Jiménez-Hornero, Inés María Santos-Dueñas, Isidoro García-García

Modelling techniques allow certain processes to be characterized and optimized without the need for experimentation. One of the crucial steps in vinegar production is the biotransformation of ethanol into acetic acid by acetic bacteria. This step has been extensively studied by using two types of predictive models: first-principles models and black-box models. The fact that first-principles models are less accurate than black-box models under extreme bacterial growth conditions suggests that the kinetic equations used by the former, and hence their goodness of fit, can be further improved. By contrast, black-box models predict acetic acid production accurately enough under virtually any operating conditions. In this work, we trained black-box models based on Artificial Neural Networks (ANNs) of the multilayer perceptron (MLP) type, containing a single hidden layer, to model acetification. The small amount of data typically available for a bioprocess makes it rather difficult to identify the most suitable ANN architecture in terms of indices such as the mean square error (MSE). This places ANN methodology at a disadvantage against alternative techniques and, especially, polynomial modelling.
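The following is a minimal sketch of the kind of single-hidden-layer MLP modelling described above, using scikit-learn on synthetic stand-in data (the real acetification variables and measurements are not reproduced here); the candidate hidden-layer sizes and the MSE-based selection loop are illustrative assumptions.

```python
# Minimal sketch: single-hidden-layer MLP regression and MSE-based
# selection of the hidden-layer size, on synthetic stand-in data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 4))                       # stand-in operating variables
y = X @ np.array([1.5, -2.0, 0.7, 0.3]) + 0.05 * rng.normal(size=120)  # stand-in output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

best = None
for hidden in (2, 4, 8, 16):                         # candidate hidden-layer sizes
    model = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    if best is None or mse < best[1]:
        best = (hidden, mse)

print(f"best hidden-layer size: {best[0]} (test MSE = {best[1]:.4f})")
```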


SPE Journal, 2021, pp. 1-15
Author(s):
Basma Alharbi, Zhenwen Liang, Jana M. Aljindan, Ammar K. Agnia, Xiangliang Zhang

Summary
Trusting a machine-learning model is a critical factor that will speed the spread of the fourth industrial revolution. Trust can be achieved by understanding how a model is making decisions. For white-box models, it is easy to "see" the model and examine its predictions. For black-box models, the explanation of the decision process is not straightforward. In this work, we compare the performance of several white- and black-box models on two production data sets in an anomaly detection task. The presence of anomalies in production data can significantly influence business decisions and misrepresent the results of the analysis if not identified. Therefore, identifying anomalies is a crucial and necessary step to maintain safety and ensure that the wells perform at full capacity. To achieve this, we compare the performance of K-nearest neighbor (KNN), logistic regression (Logit), support vector machines (SVMs), decision tree (DT), random forest (RF), and RuleFit classifier (RFC). F1 and complexity are the two main metrics used to compare the prediction performance and interpretability of these models. In one data set, RFC outperformed the remaining models in both F1 and complexity, with F1 = 0.92 and complexity = 0.5. In the second data set, RF outperformed the rest in prediction performance with F1 = 0.84, yet it had the lowest complexity metric (0.04). We further analyzed the best performing models by explaining their predictions using local interpretable model-agnostic explanations (LIME), which provide justification for the decision made on each instance. Additionally, we evaluated the global rules learned from the white-box models. Local and global analysis enable decision makers to understand how and why models are making certain decisions, which in turn allows the models to be trusted.
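As a hedged illustration of the model-comparison setup (not the authors' production data or exact configuration), the sketch below trains the white- and black-box classifiers named above on a synthetic, imbalanced stand-in dataset and reports their F1 scores; the RuleFit classifier is omitted because it is not part of scikit-learn.

```python
# Minimal sketch: comparing white-box and black-box classifiers by F1
# on a synthetic stand-in for the production anomaly-detection data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, weights=[0.9, 0.1],
                           random_state=0)           # imbalanced, anomaly-like labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "Logit": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    f1 = f1_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: F1 = {f1:.2f}")
```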


Author(s):  
D J Samatha Naidu, M. Gurivi Reddy

Farmers are the backbone of the nation, but the majority of cultivated crops in India are affected by various diseases at various stages of cultivation. Recent research works show that existing disease-detection approaches do not provide accurate results, and the few that identify diseases do not provide optimized solutions. In the proposed work, recent developments in artificial intelligence through deep learning show that automatic image recognition (AIR) systems using CNN models can be very beneficial in such scenarios. A dataset of rice leaf disease images is not readily available for automation, so we created our own training data set; because it is small, we used transfer learning to develop the proposed deep learning model. The proposed CNN architecture is based on the VGG-16 model and is trained and tested on a dataset collected from rice fields and the internet. The proposed model achieves a moderate accuracy of 92.46%.
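A minimal Keras sketch of VGG-16 transfer learning of the kind described above; the directory name, image size, number of classes, and training epochs are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch: VGG-16 transfer learning for rice leaf disease images.
# Paths, image size, and the number of classes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4                                  # assumed number of disease classes

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                           # freeze the pretrained convolutional layers

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: rice_leaf_dataset/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_leaf_dataset", image_size=(224, 224), label_mode="categorical")
model.fit(train_ds, epochs=10)
```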


2021, Vol. 7, pp. e702
Author(s):
Gaoming Yang, Mingwei Li, Xianjing Fang, Ji Zhang, Xingzhu Liang

Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require query access to the target model while generating adversarial examples. In a more practical situation, an attacker issuing too many queries is easily detected, a problem that is especially pronounced in the black-box setting. To solve this problem, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model when generating adversarial examples, so it does not need to query the target. Experimental results show that it achieves a maximum attack success rate of 81.78% on the MNIST data set and 87.99% on the CIFAR-10 data set. In addition, it has a low time cost because it is a GAN-based method.
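The AWTM algorithm itself is not reproduced here; as a generic, untrained illustration of the underlying idea (a GAN-style generator that produces bounded image perturbations without querying any target model), the sketch below builds an MNIST-shaped generator and applies its output under an assumed L-infinity budget.

```python
# Generic sketch (not the authors' AWTM): a GAN-style generator that maps
# noise to a bounded image perturbation without querying any target model.
import tensorflow as tf
from tensorflow.keras import layers, models

EPS = 0.1                                         # assumed L-infinity perturbation budget

generator = models.Sequential([
    layers.Input(shape=(100,)),                   # noise vector
    layers.Dense(7 * 7 * 64, activation="relu"),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
])  # outputs a 28x28x1 map in [-1, 1], i.e. MNIST-shaped

noise = tf.random.normal((8, 100))
clean_images = tf.random.uniform((8, 28, 28, 1))  # stand-in for real MNIST images
perturbation = EPS * generator(noise)             # scale the output to the budget
adversarial = tf.clip_by_value(clean_images + perturbation, 0.0, 1.0)
print(adversarial.shape)                          # (8, 28, 28, 1)
```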


2019, Vol. 29 (Supplement_4)
Author(s):  
S Ram

Abstract
With rapid developments in big data technology and the prevalence of large-scale datasets from diverse sources, the healthcare predictive analytics (HPA) field is witnessing a dramatic surge in interest. In healthcare, it is not only important to provide accurate predictions but also critical to provide reliable explanations of the underlying black-box models making the predictions. Such explanations can play a crucial role not only in supporting clinical decision-making but also in facilitating user engagement and patient safety. If users and decision makers do not have faith in the HPA model, it is highly likely that they will reject its use. Furthermore, it is extremely risky to blindly accept and apply the results derived from black-box models, which might lead to undesirable consequences or life-threatening outcomes in high-stakes domains such as healthcare. As machine learning and artificial intelligence systems become more capable and ubiquitous, explainable artificial intelligence and machine learning interpretability are garnering significant attention among practitioners and researchers. The introduction of policies such as the General Data Protection Regulation (GDPR) has amplified the need to ensure human interpretability of prediction models. In this talk I will discuss methods and applications for developing local as well as global explanations from machine learning, and the value they can provide for healthcare prediction.


2022, pp. 146-164
Author(s):
Duygu Bagci Das, Derya Birant

Explainable artificial intelligence (XAI) is a concept that has emerged and become popular in recent years, and the interpretability of machine learning models has also been drawing attention. Human activity classification (HAC) systems still lack interpretable approaches. In this study, an approach called eXplainable HAC (XHAC) was proposed, in which data exploration, model structure explanation, and prediction explanation of the ML classifiers for HAC were examined to improve the explainability of HAC model components such as sensor types and their locations. For this purpose, various internet of things (IoT) sensors were considered individually, including the accelerometer, gyroscope, and magnetometer. The locations of these sensors (i.e., ankle, arm, and chest) were also taken into account. The important features were explored, and the effect of the window size on the classification performance was investigated. According to the obtained results, the proposed approach makes the HAC process more explainable compared to black-box ML techniques.
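A minimal sketch of sliding-window feature extraction from a tri-axial accelerometer stream, the preprocessing step whose window size the study varies; the synthetic signal, the candidate window sizes, and the `window_features` helper are illustrative assumptions.

```python
# Minimal sketch: sliding-window feature extraction from a tri-axial
# accelerometer stream, the preprocessing step whose window size matters here.
import numpy as np

def window_features(signal, window_size, step):
    """Split an (n_samples, 3) accelerometer stream into windows and
    compute simple per-axis mean/std features for each window."""
    feats = []
    for start in range(0, len(signal) - window_size + 1, step):
        w = signal[start:start + window_size]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

rng = np.random.default_rng(0)
stream = rng.normal(size=(1000, 3))               # synthetic ankle-sensor stream
for window_size in (50, 100, 200):                # candidate window sizes
    X = window_features(stream, window_size, step=window_size // 2)
    print(f"window={window_size}: {X.shape[0]} windows x {X.shape[1]} features")
```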


Author(s):  
Samet Oztoprak, Zeynep Orman

Recent advances in deep learning methodology have led to artificial intelligence (AI) performance achieving, and even surpassing, human levels in an increasing number of complex tasks. There are many impressive examples of this development, such as image classification, sensitivity analysis, speech understanding, and strategic gaming. However, estimations based on these AI methods do not provide certainty, because deep learning models lack transparency in their visualization, explanation, and interpretation, which can be a major disadvantage in many applications. This chapter discusses studies on the prediction of precious metals in the financial field, a task that requires an explanatory model. Traditional AI and machine learning methods are insufficient to realize these predictions. There are many advantages to using explainable artificial intelligence (XAI), which enables us to make reasonable decisions based on inferences. In this chapter, the authors examine precious metal prediction with XAI by presenting a comprehensive literature review of the related studies.

