Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing

Author(s):  
Ioannis Kakogeorgiou ◽  
Konstantinos Karantzalos
2021 ◽  
Vol 13 (15) ◽  
pp. 2883
Author(s):  
Gwanggil Jeon

Remote sensing is a fundamental tool for comprehending the earth and supporting human–earth communications [...]


2020 ◽  
Author(s):  
Lei Jin ◽  
Feng Shi ◽  
Qiuping Chun ◽  
Hong Chen ◽  
Yixin Ma ◽  
...  

Abstract Background Pathological diagnosis of glioma subtypes is essential for treatment planning and prognosis. Standard histological diagnosis of glioma is based on postoperative hematoxylin and eosin stained slides examined by neuropathologists. With advancing artificial intelligence (AI), the aim of this study was to determine whether deep learning can be applied to glioma classification. Methods A neuropathological diagnostic platform comprising a slide scanner and deep convolutional neural networks (CNNs) was designed to classify 5 major histological subtypes of glioma and assist pathologists. The CNNs were trained and verified on over 79,990 histological patch images from 267 patients. A logical algorithm is used when molecular profiles are available. Results A new model, a squeeze-and-excitation block DenseNet with weighted cross-entropy (named SD-Net_WCE), was developed for the glioma classification task; it learns the recognizable features of glioma histology. CNN-based independent diagnostic testing on data from 56 patients with 17,262 histological patch images demonstrated patch-level accuracy of 86.5% and patient-level accuracy of 87.5%. Histopathological classifications could be further extended to an integrated neuropathological diagnosis by incorporating 2 molecular markers (isocitrate dehydrogenase and 1p/19q). Conclusion The model is capable of solving multiple classification tasks and can satisfactorily classify glioma subtypes. The system provides a novel aid for the integrated neuropathological diagnostic workflow of glioma.
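A minimal sketch of how a weighted cross-entropy loss of the kind used in SD-Net_WCE can be attached to a DenseNet-style backbone, written with Keras/TensorFlow. The class weights, input shape, and the use of DenseNet121 in place of the authors' squeeze-and-excitation variant are illustrative assumptions, not the published configuration.

```python
# Sketch: weighted cross-entropy on a DenseNet backbone for 5-class
# histological patch classification. All hyperparameters are illustrative.
import tensorflow as tf

NUM_CLASSES = 5  # five major glioma histological subtypes
# Hypothetical per-class weights to counter class imbalance in the patch set.
CLASS_WEIGHTS = tf.constant([1.0, 2.0, 1.5, 3.0, 1.0])

def weighted_cross_entropy(y_true, y_pred):
    """Cross-entropy in which each class contributes according to its weight."""
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)   # avoid log(0)
    per_class = -y_true * tf.math.log(y_pred)      # standard cross-entropy terms
    return tf.reduce_sum(per_class * CLASS_WEIGHTS, axis=-1)

# DenseNet121 stands in for the SD-Net backbone; the paper's SE blocks are omitted.
backbone = tf.keras.applications.DenseNet121(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg")
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(backbone.output)
model = tf.keras.Model(backbone.input, outputs)
model.compile(optimizer="adam", loss=weighted_cross_entropy, metrics=["accuracy"])
```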


Author(s):  
Nataliya Vladimirovna Apatova ◽  
Vitaliy Borisovich Popov

With increasing competition, the market situation is constantly changing and many enterprises are at risk of bankruptcy. There are various methods for predicting the insolvency of manufacturing enterprises, but artificial intelligence methods allow this to be done more accurately. Global data used for the analysis and forecasting of bankruptcy reveal the general patterns of this economic phenomenon. An analysis of publications on predicting bankruptcy of enterprises made it possible to identify frequently used mathematical models constructed for foreign firms and giving high accuracy for Russian ones. However, a comparative analysis of various methods led to the conclusion that they need to be updated due to economic conditions external to the company, as well as the increased computing power of modern computers. The authors selected artificial intelligence methods that allow a trained neural network to be built and made universal for predicting the bankruptcy of any production enterprise. The authors constructed an algorithm and a neural network and carried out a bankruptcy forecast with an accuracy of 89%. The article substantiates the construction and use of a mathematical model with a high ability to predict the bankruptcy of various enterprises in any region of the world, based on the latest deep-learning neural network technologies. Among these deep-learning technologies are the Keras and TensorFlow libraries: APIs (application programming interfaces) designed for specialists in the analysis and modeling of subject areas. The article presents the algorithm of the neural network and the results of its testing.
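Since the abstract names Keras and TensorFlow as the deep-learning APIs used, a minimal sketch of a binary bankruptcy classifier follows. The number of financial indicators, the layer sizes, and the training settings are assumptions for illustration, not the authors' architecture.

```python
# Sketch of a Keras binary classifier for bankruptcy prediction.
# N_FEATURES and the layer sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf

N_FEATURES = 12  # e.g. liquidity, profitability, and leverage ratios (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of bankruptcy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data purely to show the training call.
X = np.random.rand(1000, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```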


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently. Many researchers are trying to approach the subject from different dimensions, and interesting results have come out. However, we are still at the beginning of the road to understanding these types of models. The forthcoming years are expected to be years in which the openness of deep learning models is discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These deep learning methods can yield highly effective results depending on the data set size, data set quality, the methods used in feature extraction, the hyperparameter set used in deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models suffer from important shortcomings. These artificial neural network-based models are black box models that generalize the data transmitted to them and learn from the data. Therefore, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, serious effort is needed on the explainability and interpretability of black box models.
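One common way to make the input-output link of such a black-box network partially observable is gradient-based saliency. The sketch below, using TensorFlow and a hypothetical trained image classifier, illustrates that general idea; it is not a method described in this particular article.

```python
# Sketch: gradient-based saliency, i.e. how strongly each input pixel
# influences the predicted class of an otherwise black-box network.
import tensorflow as tf

def saliency_map(model, image):
    """Return |d score / d pixel| for the top predicted class of `image`."""
    x = tf.convert_to_tensor(image[None, ...])      # add batch dimension
    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x)
        top_class = tf.argmax(preds[0])
        score = tf.gather(preds[0], top_class)      # score of the top class
    grads = tape.gradient(score, x)                 # sensitivity of score to input
    return tf.reduce_max(tf.abs(grads), axis=-1)[0] # collapse channel dimension

# Usage (assuming `model` is a trained classifier and `img` a float32 array
# shaped like its input): heat = saliency_map(model, img)
```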


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 5555-5555
Author(s):  
Okyaz Eminaga ◽  
Andreas Loening ◽  
Andrew Lu ◽  
James D Brooks ◽  
Daniel Rubin

5555 Background: Variation in human perception has limited the potential of multi-parametric magnetic resonance imaging (mpMRI) of the prostate in detecting prostate cancer and identifying significant prostate cancer. The current study aims to overcome this limitation and utilizes explainable artificial intelligence to leverage the diagnostic potential of mpMRI in detecting prostate cancer (PCa) and determining its significance. Methods: A total of 6,020 MR images from 1,498 cases were considered (1,785 T2 images, 2,719 DWI images, and 1,516 ADC maps). The significance of PCa was determined by the treatment received: cases that underwent radical prostatectomy were considered significant, whereas cases under active surveillance followed for at least two years were considered insignificant. Negative biopsy cases had either a single biopsy setting or multiple biopsy settings with PCa excluded. The images were randomly divided into development (80%) and test sets (20%) after stratification by case within each image type. The development set was then divided into a training set (90%) and a validation set (10%). We developed deep learning models for PCa detection and the determination of significant PCa based on the PlexusNet architecture, which supports explainable deep learning and volumetric input data. The input data for PCa detection were T2-weighted images, whereas the input data for determining significant PCa included all image types. The performance of PCa detection and determination of significant PCa was measured using the area under the receiver operating characteristic curve (AUROC) and compared to the maximum PI-RADS score (version 2) at the case level. Bootstrap resampling with 10,000 iterations was applied to estimate the 95% confidence interval (CI) of the AUROC. Results: The AUROC for PCa detection was 0.833 (95% CI: 0.788-0.879), compared to 0.75 (95% CI: 0.718-0.764) for the PI-RADS score. The DL models detecting significant PCa using the ADC map or DWI images achieved the highest AUROC [ADC: 0.945 (95% CI: 0.913-0.982); DWI: 0.912 (95% CI: 0.871-0.954)], compared to a DL model using T2-weighted images (0.850; 95% CI: 0.791-0.908) or PI-RADS scores (0.604; 95% CI: 0.544-0.663). Finally, the attention map of PlexusNet from mpMRI with PCa correctly highlighted areas containing PCa after matching with the corresponding prostatectomy slice. Conclusions: We found that explainable deep learning is feasible on mpMRI and achieves high accuracy in determining cases with PCa and identifying cases with significant PCa.
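A minimal sketch of the case-level bootstrap estimate of the AUROC confidence interval described in the abstract (10,000 resamples, 95% CI), written with NumPy and scikit-learn. The label and score arrays are placeholders, not the study's data.

```python
# Sketch: 95% bootstrap confidence interval for AUROC, resampling cases
# with replacement 10,000 times.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_ci(y_true, y_score, n_boot=10_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:     # AUROC needs both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Usage with placeholder arrays (y = case-level PCa labels, p = model scores):
# auc, (ci_lo, ci_hi) = bootstrap_auroc_ci(np.array(y), np.array(p))
```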


2020 ◽  
Vol 81 ◽  
pp. 149-165
Author(s):  
H Apaydin ◽  
MT Sattari

It is well known that precipitation is essential for fauna and flora. Studies have shown that location and temporal factors have an effect on precipitation. Accurate prediction of precipitation is very important for water resource management, and artificial intelligence methods are frequently used to make such predictions. In this study, a deep-learning and geographic information system (GIS) hybrid approach based on spatio-temporal variables was applied in order to model the amount of precipitation on Turkey's coastline. Information about latitude, longitude, altitude, distance to the sea, and aspect was taken from meteorological stations, and these factors were utilized as spatial variables. The change in monthly precipitation was taken into account as a temporal variable. Artificial intelligence methods such as Gaussian process regression, support vector regression, the Broyden-Fletcher-Goldfarb-Shanno artificial neural network, M5, random forest, and long short-term memory (LSTM) were used. According to the results of the study, in which different input variable alternatives were also evaluated, LSTM was the most successful method for predicting precipitation, with an R value of 0.93. The study shows that the amount of precipitation can be estimated, and a distribution map drawn, by using spatio-temporal data and the deep-learning and GIS hybrid method at points where measurements are not taken.
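A minimal sketch of an LSTM regressor for monthly precipitation from spatio-temporal inputs of the kind the study describes (latitude, longitude, altitude, distance to the sea, aspect, plus a monthly sequence), written with Keras. The window length, feature layout, and layer sizes are assumptions, not the study's configuration.

```python
# Sketch: LSTM regression of monthly precipitation from a sequence of
# spatio-temporal feature vectors. Window length and layer sizes are assumed.
import numpy as np
import tensorflow as tf

TIMESTEPS = 12   # e.g. twelve preceding months (assumption)
N_FEATURES = 6   # latitude, longitude, altitude, distance to sea, aspect, month

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),    # predicted precipitation amount
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Synthetic stand-in data purely to show the call signature.
X = np.random.rand(500, TIMESTEPS, N_FEATURES).astype("float32")
y = np.random.rand(500, 1).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```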

