Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection

Author(s): Naoya Takeishi

2020 ◽ Vol 12 (1) ◽ pp. 10
Author(s): Alex Mourer, Jérôme Lacaille, Madalina Olteanu, Marie Chavent

Engines are verified through production tests before being delivered to customers. During these tests, many measurements are taken on different parts of the engine, covering multiple physical parameters, and unexpected values can be observed. It is therefore important to assess whether these unusual observations are statistically significant. However, anomaly detection is a difficult problem in unsupervised learning: unlike supervised classification, there is no ground truth against which results can be evaluated. We therefore propose a methodology based on two independent statistical algorithms that cross-check each other's results. The first approach is the Isolation Forest (IF) model, which is specific to anomaly detection and able to handle a large number of variables. The algorithm aims to find rare items, events, or observations that raise suspicion by differing significantly from the majority of the data, while discarding non-informative variables to improve detection. One main issue with IF is its lack of interpretability. To address this, we extend Shapley values, originally interpretation indicators for supervised models, to the unsupervised context in order to interpret the model outputs. The second approach is the Self-Organizing Map (SOM), which has attractive properties for data mining, providing both clustering and a visual representation. The performance of the method and its interpretability depend on the chosen subset of variables. We therefore first apply a sparse weighted K-means to reduce the input space, allowing the SOM to give an interpretable discretized representation. We apply the two methodologies to aircraft engine measurement data. Both approaches yield similar results that are easily interpretable and exploitable by the experts.
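The Isolation Forest plus Shapley-value combination described in this abstract can be illustrated with a short, hedged sketch. The synthetic data, feature count, and parameter choices below are placeholders, not the paper's engine measurements or code; the sketch only shows how per-variable Shapley contributions to an anomaly score might be computed with scikit-learn and the shap package.

```python
# Hypothetical sketch: explaining Isolation Forest anomaly scores with kernel SHAP.
# The data and settings are illustrative placeholders, not the authors' setup.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # stand-in for engine test measurements
X[:5] += 4.0                           # a few injected outliers

iso = IsolationForest(n_estimators=200, random_state=0).fit(X)
scores = iso.decision_function(X)      # lower score = more anomalous

# Explain the score of the most suspicious observation: the Shapley value of
# each variable tells how much it pushed the score toward "anomalous".
background = shap.sample(X, 100)       # reference set for kernel SHAP
explainer = shap.KernelExplainer(iso.decision_function, background)
most_anomalous = X[np.argmin(scores)].reshape(1, -1)
contributions = explainer.shap_values(most_anomalous, nsamples=200)
print(contributions)                   # per-variable contribution to the score
```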


2021 ◽ Vol 13 (6) ◽ pp. 109-128
Author(s): Khushnaseeb Roshan, Aasim Zafar

Machine learning (ML) and deep learning (DL) methods are being adopted rapidly, especially in computer network security, for tasks such as fraud detection, network anomaly detection, and intrusion detection. However, despite their strong results, the lack of transparency of ML- and DL-based models is a major obstacle to their adoption, and they are criticized for their black-box nature. Explainable Artificial Intelligence (XAI) is a promising area that can improve the trustworthiness of these models by explaining and interpreting their output. If the internal working of ML- and DL-based models is understandable, this can further help to improve their performance. The objective of this paper is to show how XAI can be used to interpret the results of a DL model, in this case an autoencoder, and, based on that interpretation, to improve its performance for computer network anomaly detection. The kernel SHAP method, which is based on Shapley values, is used as a novel feature selection technique: it identifies only those features that actually cause the anomalous behaviour of the set of attack/anomaly instances. These feature sets are then used to train and validate the autoencoder on benign data only. The resulting SHAP_Model outperformed the other two models proposed on the basis of the feature selection method. The whole experiment is conducted on a subset of the recent CICIDS2017 network dataset. The overall accuracy and AUC of SHAP_Model are 94% and 0.969, respectively.
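The kernel SHAP feature-selection step described in this abstract can be sketched as follows. This is only a hedged illustration of the general idea, not the authors' code: the function and argument names (anomaly_score, attack_X, benign_X, k) and the mean-|SHAP| ranking criterion are assumptions made for the example.

```python
# A minimal sketch, under assumed names, of SHAP-based feature selection:
# explain a scalar anomaly score on the flagged attack instances, then keep
# the k features with the largest mean |SHAP value|.
import numpy as np
import shap

def select_features_by_shap(anomaly_score, attack_X, benign_X, k=10):
    """anomaly_score: callable mapping an (n, d) array to n scalar scores,
    e.g. the reconstruction error of an autoencoder trained on benign data."""
    background = shap.sample(benign_X, 100)             # benign reference set
    explainer = shap.KernelExplainer(anomaly_score, background)
    sv = explainer.shap_values(attack_X, nsamples=200)  # shape (n_attacks, d)
    mean_abs = np.abs(np.asarray(sv)).mean(axis=0)      # mean |SHAP| per feature
    return np.argsort(mean_abs)[::-1][:k]               # indices of top-k features
```

Per the abstract, the selected features would then be used to train and validate the autoencoder on benign traffic only, with its reconstruction error serving as the anomaly score.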


2018 ◽ Vol 18 (1) ◽ pp. 20-32
Author(s): Jong-Min Kim, Jaiwook Baik

2016 ◽ Vol 136 (3) ◽ pp. 363-372
Author(s): Takaaki Nakamura, Makoto Imamura, Masashi Tatedoko, Norio Hirai

2015 ◽ Vol 135 (12) ◽ pp. 749-755
Author(s): Taiyo Matsumura, Ippei Kamihira, Katsuma Ito, Takashi Ono
