Deep Learning Model Explainability for Inspection Accuracy Improvement in the Automotive Industry

Author(s):  
Anass El Houd, Charbel El Hachem, Loic Painvin

Abstract: Visual inspection of welding seams is still performed manually in many companies, so the outcome of the test remains highly subjective and expensive. The integration of deep learning methods for weld classification is currently a research focus in engineering applications. This work aims to assess and emphasize the contribution of deep learning model explainability to improving the accuracy and reliability of welding seam classification, two of the metrics affecting production lines and cost in the automotive industry. To this end, we implement a novel hybrid method that combines the model's prediction scores with its visual explanation heatmap to classify welding seam defects more accurately and to improve both performance and reliability. The results show that the hybrid model exceeds our target performance and increases accuracy by at least 18%, opening new perspectives for the development of deep learning explainability and interpretability.
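The abstract does not publish the implementation, but the idea of gating a prediction on its visual explanation can be sketched as follows. In this hypothetical Python/NumPy fragment, `hybrid_decision`, `seam_mask`, and both thresholds are illustrative assumptions of ours, with the heatmap standing in for a Grad-CAM-style output:

```python
import numpy as np

def hybrid_decision(scores, heatmap, seam_mask,
                    score_thresh=0.5, focus_thresh=0.6):
    """Combine classifier scores with an explanation heatmap (sketch).

    scores:    softmax probabilities, e.g. [p_ok, p_defect]
    heatmap:   2D explanation map (e.g. Grad-CAM), values in [0, 1]
    seam_mask: boolean mask marking the welding-seam region of the image
    """
    # Fraction of total heatmap activation that falls on the seam itself.
    focus = heatmap[seam_mask].sum() / (heatmap.sum() + 1e-8)

    predicted = int(np.argmax(scores))
    confident = scores[predicted] >= score_thresh

    # Accept a "defect" prediction only if the model both scores it highly
    # and visually attends to the seam; otherwise flag for manual review.
    if predicted == 1 and confident and focus >= focus_thresh:
        return "defect"
    if predicted == 0 and confident:
        return "ok"
    return "manual_review"
```

The design intuition is that a high score with attention far from the seam is treated as unreliable, which is one plausible way a score/heatmap combination could raise both accuracy and reliability.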

2021, Vol. 11 (4), pp. 290
Author(s):  
Luca Pasquini, Antonio Napolitano, Emanuela Tagliente, Francesco Dellepiane, Martina Lucignani, et al.

Isocitrate dehydrogenase (IDH)-mutant and wildtype glioblastoma multiforme (GBM) often show overlapping features on magnetic resonance imaging (MRI), representing a diagnostic challenge. Deep learning has shown promising results for IDH identification in mixed low/high-grade glioma populations; however, a GBM-specific model is still lacking in the literature. Our aim was to develop a GBM-tailored deep-learning model for IDH prediction by applying convolutional neural networks (CNNs) to multiparametric MRI. We selected 100 adult patients with pathologically demonstrated WHO grade IV gliomas and IDH testing. MRI sequences included MPRAGE, T1, T2, FLAIR, rCBV, and ADC. The model consisted of a 4-block 2D CNN applied to each MRI sequence. The probability of IDH mutation was obtained from a softmax activation applied to the last dense layer. Model performance was evaluated in the test cohort in terms of categorical cross-entropy loss (CCEL) and accuracy. Calculated performance was: rCBV (accuracy 83%, CCEL 0.64), T1 (accuracy 77%, CCEL 1.4), FLAIR (accuracy 77%, CCEL 1.98), T2 (accuracy 67%, CCEL 2.41), and MPRAGE (accuracy 66%, CCEL 2.55). Lower performance was achieved on ADC maps. We present a GBM-specific deep-learning model for IDH mutation prediction, with a maximal accuracy of 83% on rCBV maps. The highest predictivity, achieved on perfusion images, possibly reflects the known link between IDH and neoangiogenesis through the hypoxia-inducible factor.
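The abstract fixes only the block count, the per-sequence training, and the softmax/cross-entropy setup. A minimal Keras sketch consistent with that description might look like the following, where the framework choice, input size, filter counts, and dense width are all our assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_idh_cnn(input_shape=(128, 128, 1), n_blocks=4):
    """Sketch of a 4-block 2D CNN with a softmax head, trained per MRI
    sequence with categorical cross-entropy (CCEL), as described above."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    filters = 32
    for _ in range(n_blocks):
        # Each block: convolution, normalization, spatial downsampling.
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D())
        filters *= 2
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))
    # Two-class softmax: P(IDH-wildtype), P(IDH-mutant).
    model.add(layers.Dense(2, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

One such model per sequence (rCBV, T1, T2, FLAIR, MPRAGE, ADC) matches the per-sequence accuracies reported above.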


Author(s):  
Yong-Yeon Jo, Joon-myoung Kwon, Ki-Hyun Jeon, Yong-Hyeon Cho, Jae-Hyun Shin, et al.

2021, Vol. 2021, pp. 1-11
Author(s):  
Dapeng Lang, Deyun Chen, Ran Shi, Yongjun He

Deep learning has been widely used in image classification and image recognition and has achieved strong practical results. In recent years, however, a number of studies have found that the accuracy of deep-learning classification models drops sharply when only subtle changes are made to the original examples, thereby realizing an attack on the model. The main approaches are: adjusting pixels of the attack examples in ways invisible to the human eye so as to induce the model to misclassify; and adding an adversarial patch to the detection target to guide and deceive the classification model into a wrong class. These methods involve strong randomness and are therefore of very limited use in practical applications. Unlike previous work perturbing traffic signs, this paper proposes a method that can successfully hide and misclassify vehicles in complex contexts. The method accounts for complex real-world scenarios and can apply perturbations to pictures taken with a camera or mobile phone, so that a deep-learning-based detector either fails to detect the vehicle or misclassifies it. To improve robustness, the position and size of the adversarial patch are adjusted for different detection models through an attachment mechanism. Tests against different detectors show that a patch generated for a single object-detection algorithm can also attack other detectors, exhibiting good transferability. The experiments in this paper show that the proposed algorithm significantly lowers detector accuracy. Under real-world influences such as distance, lighting, angle, and resolution, misclassification of the target is achieved by reducing the target's confidence relative to the background, which greatly perturbs the detector's results. On the COCO 2017 dataset, the success rate of this algorithm reaches 88.7%.
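The paper's optimization is not given in code, but a generic adversarial-patch training loop with randomized placement and scale (standing in for the described attachment mechanism, in the spirit of Expectation over Transformation) can be sketched as below. Here `detector` is a hypothetical differentiable callable returning per-image vehicle-confidence scores, and all hyperparameters are illustrative, not the paper's:

```python
import torch

def train_adversarial_patch(detector, images, patch_size=64,
                            steps=500, lr=0.03, device="cpu"):
    """Sketch: optimize patch pixels to suppress a detector's vehicle
    confidence, randomizing patch placement and scale each step."""
    patch = torch.rand(3, patch_size, patch_size, device=device,
                       requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        batch = images.clone().to(device)
        # Random scale and position per step (images assumed larger
        # than the patch); this mimics varying attachment on the vehicle.
        scale = int(patch_size * (0.75 + 0.5 * torch.rand(1).item()))
        p = torch.nn.functional.interpolate(
            patch.unsqueeze(0), size=(scale, scale), mode="bilinear",
            align_corners=False).squeeze(0).clamp(0, 1)
        y = torch.randint(0, batch.shape[2] - scale, (1,)).item()
        x = torch.randint(0, batch.shape[3] - scale, (1,)).item()
        batch[:, :, y:y + scale, x:x + scale] = p

        # Minimize the detector's vehicle confidence on patched images.
        loss = detector(batch).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)  # keep pixels in a printable range
    return patch.detach()
```

Optimizing across random placements is what plausibly yields the transferability across detectors that the abstract reports.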


2021, Vol. 32, pp. S926-S927
Author(s):  
G. Toyokawa, Y. Yamada, N. Haratake, Y. Shiraishi, T. Takenaka, et al.
