Deep learning and explainable artificial intelligence techniques applied for detecting money laundering – a critical review

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Dattatray V. Kute ◽  
Biswajeet Pradhan ◽  
Nagesh Shukla ◽  
Abdullah Alamri


2021 ◽  
Vol 2070 (1) ◽  
pp. 012141
Author(s):  
Pavan Sharma ◽  
Hemant Amhia ◽  
Sunil Datt Sharma

Abstract Nowadays, artificial intelligence techniques are becoming popular in modern industry for diagnosing rolling bearing faults (RBFs). RBFs occur in rotating machinery and are common in every manufacturing industry. Diagnosing RBFs is essential to reduce financial and production losses. Therefore, various artificial intelligence techniques, such as machine learning and deep learning, have been developed to diagnose RBFs in rotating machines. However, the performance of these techniques depends on the size of the dataset: machine learning methods are suited to small datasets, whereas deep learning methods require large datasets. Deep learning methods are also limited by long training times. In this paper, the performance of different pre-trained models for RBF classification is analysed. The CWRU dataset has been used for the performance comparison.
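As an illustration of the pre-trained-model approach the abstract describes, below is a minimal sketch of fine-tuning a pre-trained CNN on spectrogram images derived from bearing vibration signals. It assumes a recent PyTorch/torchvision installation, a placeholder folder layout for CWRU-derived spectrograms, and placeholder hyperparameters; it is not the authors' implementation.

```python
# Minimal sketch: fine-tuning a pre-trained CNN for bearing-fault classification.
# Assumes CWRU vibration signals have already been converted to spectrogram
# images stored in class-named folders; paths and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("cwru_spectrograms/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```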


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently. Many researchers are approaching the subject from different angles, and interesting results have emerged. However, we are still at the beginning of the road to understanding these types of models. The coming years are expected to be years in which the openness of deep learning models is discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These deep learning methods can yield highly effective results depending on the dataset size, dataset quality, the methods used in feature extraction, the hyperparameter set used in deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black-box models that generalize the data transmitted to them and learn from the data. Therefore, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, serious effort is needed on the explainability and interpretability of black-box models.
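To make the idea of probing the input–output link of a black-box model concrete, one common starting point is a gradient-based saliency map. The sketch below is a hedged illustration, assuming a trained PyTorch classifier `model` and an input batch `x` that are not part of the abstract.

```python
# Minimal sketch: gradient-based saliency for a trained PyTorch classifier.
# `model` and `x` are assumed to exist; this only illustrates how the
# input-output relation of a "black box" can be probed.
import torch

def input_saliency(model, x, target_class):
    model.eval()
    x = x.detach().clone().requires_grad_(True)  # make the input a leaf tensor
    score = model(x)[0, target_class]            # logit of the class of interest
    score.backward()                             # gradient of the score w.r.t. the input
    return x.grad.abs()                          # large values = influential input features
```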


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 5555-5555
Author(s):  
Okyaz Eminaga ◽  
Andreas Loening ◽  
Andrew Lu ◽  
James D Brooks ◽  
Daniel Rubin

5555 Background: Variation in human perception has limited the potential of multi-parametric magnetic resonance imaging (mpMRI) of the prostate in detecting prostate cancer and identifying significant prostate cancer. The current study aims to overcome this limitation and utilizes explainable artificial intelligence to leverage the diagnostic potential of mpMRI in detecting prostate cancer (PCa) and determining its significance. Methods: A total of 6,020 MR images from 1,498 cases were considered (1,785 T2 images, 2,719 DWI images, and 1,516 ADC maps). The treatment determined the significance of PCa: cases who received radical prostatectomy were considered significant, whereas cases on active surveillance followed for at least two years were considered insignificant. Negative biopsy cases had either a single biopsy session or multiple biopsy sessions with exclusion of PCa. The images were randomly divided into development (80%) and test (20%) sets after stratifying by case within each image type. The development set was then divided into a training set (90%) and a validation set (10%). We developed deep learning models for PCa detection and determination of significant PCa based on the PlexusNet architecture, which supports explainable deep learning and volumetric input data. The input data for PCa detection were T2-weighted images, whereas the input data for determining significant PCa included all image types. The performance of PCa detection and determination of significant PCa was measured using the area under the receiver operating characteristic curve (AUROC) and compared to the maximum PI-RADS score (version 2) at the case level. Bootstrap resampling with 10,000 iterations was applied to estimate the 95% confidence interval (CI) of the AUROC. Results: The AUROC for PCa detection was 0.833 (95% CI: 0.788-0.879), compared to 0.75 (0.718-0.764) for the PI-RADS score. The DL models detecting significant PCa using the ADC map or DWI images achieved the highest AUROC [ADC: 0.945 (95% CI: 0.913-0.982); DWI: 0.912 (95% CI: 0.871-0.954)] compared to a DL model using T2-weighted images (0.850; 95% CI: 0.791-0.908) or PI-RADS scores (0.604; 95% CI: 0.544-0.663). Finally, the attention map of PlexusNet from mpMRI with PCa correctly highlighted areas containing PCa after matching with the corresponding prostatectomy slice. Conclusions: We found that explainable deep learning is feasible on mpMRI and achieves high accuracy in determining cases with PCa and identifying cases with significant PCa.
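For readers unfamiliar with the evaluation protocol described above, the following minimal sketch shows how a case-level AUROC with a bootstrapped 95% CI can be computed. The labels and scores are random placeholders, and this is not the authors' code.

```python
# Minimal sketch: AUROC with a bootstrap 95% CI, mirroring the evaluation above.
# `y_true` and `y_score` are placeholder arrays standing in for case-level data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)          # placeholder binary labels
y_score = rng.random(500)                      # placeholder model scores

aucs = []
for _ in range(10_000):                        # 10,000 bootstrap resamples
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:        # a resample needs both classes
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUROC = {roc_auc_score(y_true, y_score):.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```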


2020 ◽  
Author(s):  
Maria Moreno de Castro

The presence of automated decision making continuously increases in today's society. Algorithms based on machine and deep learning decide how much we pay for insurance, translate our thoughts to speech, and shape our consumption of goods (via e-marketing) and knowledge (via search engines). Machine and deep learning models are ubiquitous in science too; in particular, many promising examples are being developed to prove their feasibility for earth-science applications, such as finding temporal trends or spatial patterns in data or improving parameterization schemes for climate simulations.

However, most machine and deep learning applications aim to optimise performance metrics (for instance, accuracy, which measures how often the model prediction was right), which are rarely good indicators of trust (i.e., why were these predictions right?). In fact, with the increase of data volume and model complexity, machine learning and deep learning predictions can be very accurate but also prone to rely on spurious correlations, encode and magnify bias, and draw conclusions that do not incorporate the underlying dynamics governing the system. Because of that, the uncertainty of the predictions and our confidence in the model are difficult to estimate, and the relation between inputs and outputs becomes hard to interpret.

Since it is challenging to shift a community from “black” to “glass” boxes, it is more useful to implement Explainable Artificial Intelligence (XAI) techniques right at the beginning of machine learning and deep learning adoption rather than trying to fix fundamental problems later. The good news is that most of the popular XAI techniques are essentially sensitivity analyses, because they consist of a systematic perturbation of some model components in order to observe how it affects the model predictions. The techniques comprise random sampling, Monte-Carlo simulations, and ensemble runs, which are common methods in geosciences. Moreover, many XAI techniques are reusable because they are model-agnostic and are applied after the model has been fitted. In addition, interpretability provides robust arguments when communicating machine and deep learning predictions to scientists and decision-makers.

In order to assist not only practitioners but also end-users in the evaluation of machine and deep learning results, we will explain the intuition behind some popular techniques of XAI and aleatory and epistemic uncertainty quantification: (1) Permutation Importance and Gaussian processes on the inputs (i.e., perturbation of the model inputs); (2) Monte-Carlo Dropout, deep ensembles, quantile regression, and Gaussian processes on the weights (i.e., perturbation of the model architecture); (3) Conformal Predictors (useful to estimate the confidence interval on the outputs); and (4) Layerwise Relevance Propagation (LRP), Shapley values, and Local Interpretable Model-Agnostic Explanations (LIME), designed to visualize how each feature in the data affected a particular prediction. We will also introduce some best practices, such as the detection of anomalies in the training data before training, the implementation of fallbacks when a prediction is not reliable, and physics-guided learning that includes constraints in the loss function to avoid physical inconsistencies, such as the violation of conservation laws.
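As a concrete illustration of one of the XAI techniques listed above, the sketch below computes permutation importance with scikit-learn on a synthetic regression problem standing in for a real geoscience dataset; the data, model, and settings are assumptions, not material from the abstract.

```python
# Minimal sketch: permutation importance, one of the XAI techniques listed above.
# A synthetic regression problem stands in for a real geoscience dataset.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: importance = {mean:.3f} +/- {std:.3f}")
```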


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 327
Author(s):  
Ramiz Yilmazer ◽  
Derya Birant

Providing high on-shelf availability (OSA) is a key factor in increasing profits in grocery stores. Recently, there has been growing interest in computer vision approaches to monitor OSA. However, even the largest and best-known computer vision datasets do not provide annotations for store products, and therefore a huge effort is needed to manually label products in images. To tackle the annotation problem, this paper proposes a new method that combines two concepts, “semi-supervised learning” and “on-shelf availability” (SOSA), for the first time. Moreover, it is the first time that the “You Only Look Once” (YOLOv4) deep learning architecture is used to monitor OSA. Furthermore, this paper provides the first demonstration of explainable artificial intelligence (XAI) on OSA. It presents a new software application, called SOSA XAI, with its capabilities and advantages. In the experimental studies, the effectiveness of the proposed SOSA method was verified on image datasets with different ratios of labeled samples, varying from 20% to 80%. The experimental results show that the proposed approach outperforms the existing approaches (RetinaNet and YOLOv3) in terms of accuracy.
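To illustrate the semi-supervised idea behind SOSA, the following sketch outlines one pseudo-labelling round in which a detector trained on the small labelled subset labels the images it is confident about. The `Detector` object and its `predict` API are hypothetical stand-ins for a YOLOv4 model, and the confidence threshold is an assumption, not the authors' implementation.

```python
# Minimal sketch of the pseudo-labelling idea behind semi-supervised detection:
# a model trained on a small labelled set labels the unlabelled images it is
# confident about, and those pseudo-labels are added to the training pool.
# `detector` is a hypothetical stand-in for an object detector such as YOLOv4.
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; tune per dataset

def pseudo_label_round(detector, unlabeled_images) -> List[Tuple[object, list]]:
    """Return (image, detections) pairs the detector is confident about."""
    accepted = []
    for image in unlabeled_images:
        detections = detector.predict(image)           # hypothetical API
        confident = [d for d in detections if d.score >= CONFIDENCE_THRESHOLD]
        if confident:
            accepted.append((image, confident))
    return accepted

# Training loop sketch:
# 1. train the detector on the labelled subset (e.g. 20% of the data)
# 2. pseudo-label the rest with pseudo_label_round(...)
# 3. retrain on labelled + pseudo-labelled images and repeat
```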

