Local Interpretable Model-agnostic Explanations (LIME)

2021 ◽  
pp. 107-123
Author(s):  
Przemyslaw Biecek ◽  
Tomasz Burzykowski


AI and Ethics ◽  
2021 ◽  
Author(s):  
Ryan Steed ◽  
Aylin Caliskan

Research in social psychology has shown that people’s biased, subjective judgments about another’s personality based solely on their appearance are not predictive of their actual personality traits. But researchers and companies often utilize computer vision models to predict similarly subjective personality attributes such as “employability”. We seek to determine whether state-of-the-art, black-box face processing technology can learn human-like appearance biases. With features extracted with FaceNet, a widely used face recognition framework, we train a transfer learning model on human subjects’ first impressions of personality traits in other faces as measured by social psychologists. We find that features extracted with FaceNet can be used to predict human appearance bias scores for deliberately manipulated faces but not for randomly generated faces scored by humans. Additionally, in contrast to work with human biases in social psychology, the model does not find a significant signal correlating politicians’ vote shares with perceived competence bias. With Local Interpretable Model-Agnostic Explanations (LIME), we provide several explanations for this discrepancy. Our results suggest that some signals of appearance bias documented in social psychology are not embedded by the machine learning techniques we investigate. We shed light on the ways in which appearance bias could be embedded in face processing technology and cast further doubt on the practice of predicting subjective traits based on appearances.
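
As an illustration of how LIME can be applied in this kind of setting, the following minimal sketch explains a transfer-learning regressor trained on FaceNet-style embeddings using the `lime` Python package's tabular explainer. The embeddings, trait scores, and regressor below are random placeholders, not the authors' data or model.

```python
# Hypothetical sketch: explaining a transfer-learning model trained on
# FaceNet-style face embeddings with LIME (tabular, regression mode).
# The embeddings, trait scores, and regressor are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 128))      # stand-in for 128-d face embeddings
y_train = rng.uniform(0, 1, size=500)      # stand-in for first-impression trait scores

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"emb_{i}" for i in range(X_train.shape[1])],
    mode="regression",
)

# Explain the predicted trait score for one face embedding.
exp = explainer.explain_instance(X_train[0], model.predict, num_features=10)
print(exp.as_list())   # embedding dimensions that most influenced this prediction
```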


2008 ◽  
pp. 1138-1156
Author(s):  
Can Yang ◽  
Jun Meng ◽  
Shanan Zhu

Input selection is an important step in nonlinear regression modeling. Through input selection, an interpretable model can be built at lower computational cost, so input selection has drawn great attention in recent years. However, most available input selection methods are model-based; the selected inputs are therefore tied to a specific model and are insensitive to changes in the data patterns. In this paper, an effective model-free method is proposed for input selection. The method is based on sensitivity analysis using the Minimum Cluster Volume (MCV) algorithm. Its advantage is that, because no specific model needs to be built in advance to check possible input combinations, the computational cost is reduced and changes in data patterns can be captured automatically. The effectiveness of the proposed method is evaluated on three well-known benchmark problems, which show that it works effectively with small and medium-sized data collections. With an input selection procedure, a concise fuzzy model is constructed with high prediction accuracy and better interpretation of the data, which serves the purpose of pattern discovery in data mining well.
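
For illustration only, the sketch below ranks candidate inputs with a crude, model-free finite-difference sensitivity measure. It is not the Minimum Cluster Volume algorithm described in the abstract, just a toy stand-in for the general idea of model-free, sensitivity-based input selection.

```python
# Illustrative only: a generic, model-free sensitivity ranking of candidate
# inputs (NOT the paper's Minimum Cluster Volume algorithm).  For each random
# pair of samples, the observed output change is attributed to the input that
# differs most between the two samples.
import numpy as np

def sensitivity_ranking(X, y, n_pairs=2000, seed=0):
    """Score input columns of X by a crude finite-difference sensitivity to y."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(d)
    counts = np.zeros(d)
    for _ in range(n_pairs):
        i, j = rng.integers(0, n, size=2)
        dx = np.abs(X[i] - X[j])
        dy = abs(y[i] - y[j])
        k = int(np.argmax(dx))          # input that differs most between the pair
        if dx[k] > 1e-12:
            scores[k] += dy / dx[k]     # local rate of change attributed to input k
            counts[k] += 1
    return scores / np.maximum(counts, 1)

# Toy data: y depends on inputs 0 and 2 only.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 5))
y = np.sin(3 * X[:, 0]) + X[:, 2] ** 2
print(np.argsort(-sensitivity_ranking(X, y)))  # informative inputs (0, 2) should tend to rank first
```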


Diagnostics ◽  
2020 ◽  
Vol 10 (7) ◽  
pp. 466
Author(s):  
Shinji Kitamura ◽  
Kensaku Takahashi ◽  
Yizhen Sang ◽  
Kazuhiko Fukushima ◽  
Kenji Tsuji ◽  
...  

Artificial Intelligence (AI) imaging diagnosis is advancing, making enormous steps forward in medical fields. Regarding diabetic nephropathy (DN), medical doctors diagnose it from the clinical course, clinical laboratory data and renal pathology, mainly evaluated with light microscopy images rather than immunofluorescent images, because there are no characteristic findings in immunofluorescent images for DN diagnosis. Here, we examined whether AI could diagnose DN from immunofluorescent images. We collected renal immunofluorescent images from 885 renal biopsy patients in our hospital and created a dataset that contains six types of immunofluorescent images (IgG, IgA, IgM, C3, C1q and Fibrinogen) for each patient. Using the dataset, 39 programs worked without errors (area under the curve (AUC): 0.93), and five programs diagnosed DN completely from immunofluorescent images (AUC: 1.00). Analysis with Local Interpretable Model-agnostic Explanations (LIME) showed that the AI focused on the peripheral lesions of DN glomeruli. By contrast, the nephrologists’ diagnostic performance (AUC: 0.75833) was slightly inferior to the AI diagnosis. These findings suggest that DN could be diagnosed by deep learning using only immunofluorescent images. AI could diagnose DN and identify previously unrecognized, classification-relevant regions in the immunofluorescent images that nephrologists usually do not use for DN diagnosis.
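A minimal sketch of the kind of LIME image analysis described here is given below, using the `lime` package's image explainer on a Keras classifier. The model file, image file, and value range are hypothetical placeholders, not the study's network or data.

```python
# Hypothetical sketch: using LIME's image explainer to see which regions of an
# immunofluorescent image a trained classifier relies on.  `model` and the
# image are placeholders, not the study's actual network or data.
import numpy as np
from tensorflow import keras
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = keras.models.load_model("dn_classifier.h5")   # hypothetical trained CNN
image = np.load("glomerulus_if.npy")                  # hypothetical (H, W, 3) image, values 0-255

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image.astype("double"),
    model.predict,            # returns class probabilities for a batch of images
    top_labels=2,
    hide_color=0,
    num_samples=1000,
)

# Overlay the superpixels that most support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp / 255.0, mask)
```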


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4385 ◽  
Author(s):  
Carlo Dindorf ◽  
Wolfgang Teufl ◽  
Bertram Taetz ◽  
Gabriele Bleser ◽  
Michael Fröhlich

Many machine learning models show black-box characteristics and, therefore, a lack of transparency, interpretability, and trustworthiness. This strongly limits their practical application in clinical contexts. Explainable Artificial Intelligence (XAI) has shown promising results for overcoming these limitations. The current study examined the influence of different input representations on a trained model’s accuracy, interpretability, and clinical relevance using XAI methods. The gait of 27 healthy subjects and 20 subjects after total hip arthroplasty (THA) was recorded with an inertial measurement unit (IMU)-based system. Three different input representations were used for classification, and Local Interpretable Model-Agnostic Explanations (LIME) was used for model interpretation. The best accuracy was achieved with automatically extracted features (mean accuracy Macc = 100%), followed by features based on simple descriptive statistics (Macc = 97.38%) and waveform data (Macc = 95.88%). Globally, sagittal movement of the hip, knee, and pelvis as well as transversal movement of the ankle were especially important for this specific classification task. The current work shows that the type of input representation crucially determines interpretability as well as clinical relevance. A combined approach using different forms of representation seems advantageous. The results might assist physicians and therapists in finding and addressing individual pathologic gait patterns.
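
The following minimal sketch shows how LIME can attribute a healthy-vs-THA classification to individual tabular gait features. The feature names, data, and classifier are assumed placeholders rather than the study's actual pipeline.

```python
# Hypothetical sketch: explaining a healthy-vs-THA gait classifier built on
# simple descriptive gait features with LIME.  Feature names, data and the
# classifier are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["hip_sag_rom", "knee_sag_rom", "pelvis_sag_rom", "ankle_trans_rom",
                 "stride_time_mean", "stride_time_std"]          # assumed features
rng = np.random.default_rng(0)
X_train = rng.normal(size=(47, len(feature_names)))              # 27 healthy + 20 THA subjects
y_train = np.array([0] * 27 + [1] * 20)                          # 0 = healthy, 1 = THA

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["healthy", "THA"], mode="classification",
)
exp = explainer.explain_instance(X_train[30], clf.predict_proba, num_features=6)
print(exp.as_list())   # per-feature contributions to this subject's classification
```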


2019 ◽  
Vol 86 (7-8) ◽  
pp. 404-412 ◽  
Author(s):  
Katharina Weitz ◽  
Teena Hassan ◽  
Ute Schmid ◽  
Jens-Uwe Garbas

Deep neural networks are successfully used for object and face recognition in images and videos. However, to apply such networks in practice, for example as a pain recognition tool in hospitals, the current procedures are suitable only to a limited extent. The advantage of deep neural methods is that they can learn complex non-linear relationships between raw data and target classes without limiting themselves to a set of hand-crafted features provided by humans. The disadvantage is that, due to the complexity of these networks, it is not possible to interpret the knowledge that is stored inside the network: it is a black-box learning procedure. Explainable Artificial Intelligence (AI) approaches mitigate this problem by extracting explanations for decisions and representing them in a human-interpretable form. The aim of this paper is to investigate the explainable AI methods Layer-wise Relevance Propagation (LRP) and Local Interpretable Model-agnostic Explanations (LIME). These approaches are applied to explain how a deep neural network distinguishes facial expressions of pain from facial expressions of emotions such as happiness and disgust.
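
A hedged sketch of the LIME part of such an analysis is shown below: it contrasts the image regions supporting a hypothetical "pain" class with those supporting a "happiness" class of a facial-expression classifier. The model, image, class indices, and segmentation settings are assumptions for illustration; LRP needs a separate relevance-propagation toolkit and is not sketched here.

```python
# Hypothetical sketch: contrasting LIME explanations for "pain" versus
# "happiness" predictions of a facial-expression classifier.  The model,
# image and class indices are assumed placeholders.
import numpy as np
from tensorflow import keras
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm

model = keras.models.load_model("expression_classifier.h5")   # hypothetical CNN
face = np.load("face_crop.npy")                               # hypothetical (H, W, 3) face image
PAIN, HAPPY = 0, 3                                            # assumed class indices

segmenter = SegmentationAlgorithm("slic", n_segments=150, compactness=10)
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face.astype("double"), model.predict,
    labels=(PAIN, HAPPY), top_labels=None,
    hide_color=0, num_samples=1000, segmentation_fn=segmenter,
)

# Compare which superpixels support each of the two classes.
for label in (PAIN, HAPPY):
    _, mask = explanation.get_image_and_mask(label, positive_only=True,
                                             num_features=5, hide_rest=False)
    print(f"label {label}: {int(mask.sum())} pixels in supporting superpixels")
```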


2020 ◽  
Vol 2 (4) ◽  
pp. 490-504
Author(s):  
Md Manjurul Ahsan ◽  
Kishor Datta Gupta ◽  
Mohammad Maminur Islam ◽  
Sajib Sen ◽  
Md. Lutfar Rahman ◽  
...  

The outbreak of COVID-19 has caused more than 200,000 deaths so far in the USA alone, which underscores the necessity of initial screening to control the spread of the disease. However, screening becomes laborious with the available testing kits as the number of patients increases rapidly. Therefore, to reduce the dependency on the limited test kits, many studies have suggested computed tomography (CT) scan or chest radiograph (X-ray) based screening systems as an alternative approach. To reinforce these approaches, models using both CT scan and chest X-ray images need to be developed so that a large number of tests can be conducted simultaneously to detect patients with COVID-19 symptoms. In this work, patients with COVID-19 symptoms have been detected using eight distinct deep learning architectures, namely VGG16, InceptionResNetV2, ResNet50, DenseNet201, VGG19, MobileNetV2, NasNetMobile, and ResNet152V2, on two datasets: one containing 400 CT scan images and the other 400 chest X-ray images. Results show that NasNetMobile outperformed all other models, achieving an accuracy of 82.94% on the CT scan dataset and 93.94% on the chest X-ray dataset. In addition, Local Interpretable Model-agnostic Explanations (LIME) is used to interpret the predictions. The results demonstrate that the proposed models can identify the infectious regions and top features; ultimately, this provides a potential opportunity to distinguish COVID-19 patients from others.
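
As a rough illustration of this kind of pipeline, the sketch below assembles a NASNetMobile transfer-learning classifier and explains one chest X-ray prediction with LIME. The weights, file names, and omitted training step are placeholders, not the study's actual setup.

```python
# Hypothetical sketch: a NASNetMobile transfer-learning classifier for chest
# X-rays with a LIME explanation of one prediction.  Data loading, weights and
# file names are placeholders.
import numpy as np
from tensorflow import keras
from lime import lime_image

# Transfer-learning model: frozen ImageNet backbone + small classification head.
base = keras.applications.NASNetMobile(include_top=False, weights="imagenet",
                                        input_shape=(224, 224, 3), pooling="avg")
base.trainable = False
model = keras.Sequential([base, keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_images, train_labels, ...)   # assumed training step, omitted here

xray = np.load("xray_example.npy")             # hypothetical (224, 224, 3) image, values 0-255

def predict_fn(images):
    """Apply the backbone's preprocessing, then return class probabilities."""
    x = keras.applications.nasnet.preprocess_input(np.array(images, dtype="float32"))
    return model.predict(x)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(xray.astype("double"), predict_fn,
                                          top_labels=2, hide_color=0, num_samples=1000)
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                            positive_only=True, num_features=5)
```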

