Uncertainty-Aware Deep Learning-Based Cardiac Arrhythmias Classification Model of Electrocardiogram Signals

Computers ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 82
Author(s):  
Ahmad O. Aseeri

Deep learning-based methods have emerged as one of the most effective and practical solutions to a wide range of medical problems, including the diagnosis of cardiac arrhythmias. A critical step toward the early diagnosis of many heart dysfunctions is the accurate detection and classification of cardiac arrhythmias, which can be achieved via electrocardiograms (ECGs). Motivated by the desire to enhance conventional clinical methods for diagnosing cardiac arrhythmias, we introduce an uncertainty-aware deep learning-based predictive model for accurate large-scale classification of cardiac arrhythmias, successfully trained and evaluated on three benchmark medical datasets. In addition, because the quantification of uncertainty estimates is vital for clinical decision-making, our method incorporates a probabilistic approach that captures the model's uncertainty using a Bayesian-based approximation method without introducing additional parameters or significant changes to the network's architecture. Although many arrhythmia classification solutions with various ECG feature engineering techniques have been reported in the literature, the AI-based probabilistic method introduced in this paper outperforms existing methods, achieving multiclass F1 scores of 98.62% and 96.73% on the MIT-BIH dataset (20 annotations), 99.23% and 96.94% on the INCART dataset (eight annotations), and 97.25% and 96.73% on the BIDMC dataset (six annotations), for the deep ensemble and probabilistic modes, respectively. We demonstrate the method's high performance and statistical reliability in numerical experiments using the gating mechanism of recurrent neural networks.
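The abstract describes capturing model uncertainty with a Bayesian-style approximation that adds no extra parameters; one common way to realize such an approximation is Monte Carlo dropout. The sketch below illustrates that idea on a GRU-based ECG classifier; the architecture, layer sizes, class count, and sequence length are assumptions for illustration, not the authors' exact model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical GRU-based ECG beat classifier; sizes and class count are illustrative.
def build_classifier(seq_len=360, n_classes=20, dropout=0.3):
    inputs = layers.Input(shape=(seq_len, 1))
    x = layers.GRU(64, return_sequences=True)(inputs)
    x = layers.Dropout(dropout)(x)          # kept active at inference for MC dropout
    x = layers.GRU(32)(x)
    x = layers.Dropout(dropout)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return Model(inputs, outputs)

def mc_dropout_predict(model, x, n_samples=30):
    """Run several stochastic forward passes with dropout enabled and
    return the mean prediction plus a simple predictive-entropy uncertainty."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    mean = preds.mean(axis=0)                                   # averaged class probabilities
    entropy = -np.sum(mean * np.log(mean + 1e-12), axis=-1)     # higher = more uncertain
    return mean, entropy

model = build_classifier()
# model.fit(...) on labeled ECG beats would go here.
dummy_batch = np.random.randn(4, 360, 1).astype("float32")      # placeholder beats
probs, uncertainty = mc_dropout_predict(model, dummy_batch)
```

A deep ensemble, the other mode reported in the abstract, would instead average the predictions of several independently trained copies of the network.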

Author(s):  
Mathieu Turgeon-Pelchat ◽  
Samuel Foucher ◽  
Yacine Bouroubi

Author(s):  
Masaya Tanaka ◽  
Atsushi Saito ◽  
Kosuke Shido ◽  
Yasuhiro Fujisawa ◽  
Kenshi Yamasaki ◽  
...  

Lung cancer is a serious illness that leads to an increased mortality rate globally. Identifying lung cancer at an early stage is the most likely way to improve patient survival. Generally, a computed tomography (CT) scan is used to locate the tumor and determine the stage of the cancer. Several existing works have presented diagnosis and classification models for CT lung images. This paper designs an effective diagnosis and classification model for CT lung images. The presented model involves several stages, namely pre-processing, segmentation, feature extraction, and classification. The initial stage includes an adaptive histogram equalization (AHE) model for image enhancement and a bilateral filtering (BF) model for noise removal. The pre-processed images are fed into the second stage, a watershed segmentation model, to effectively segment the images. Then, a deep learning-based Xception model is applied for prominent feature extraction, and classification takes place using a logistic regression (LR) classifier. A comprehensive simulation is carried out on a benchmark dataset to ensure effective classification of the lung CT images. The outcome indicates the outstanding performance of the presented model on the applied test images.
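As a rough illustration of the pipeline described above (enhancement, denoising, watershed segmentation, Xception features, logistic regression), the following sketch chains standard library calls; the parameter values, CLAHE as the adaptive-histogram step, and the commented label source are placeholders rather than details from the paper.

```python
import cv2
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from sklearn.linear_model import LogisticRegression
from tensorflow.keras.applications.xception import Xception, preprocess_input

def preprocess(gray):
    """Contrast enhancement (CLAHE as an adaptive-histogram stand-in) + bilateral denoising."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    return cv2.bilateralFilter(enhanced, d=9, sigmaColor=75, sigmaSpace=75)

def segment(img):
    """Marker-based watershed segmentation; keeps foreground intensities only."""
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, footprint=np.ones((15, 15)), labels=binary)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary > 0)
    return (labels > 0).astype(np.uint8) * img

backbone = Xception(weights="imagenet", include_top=False, pooling="avg")

def extract_features(img):
    """Xception global-average-pooled features from a segmented slice."""
    rgb = cv2.cvtColor(cv2.resize(img, (299, 299)), cv2.COLOR_GRAY2RGB)
    x = preprocess_input(rgb.astype("float32")[None, ...])
    return backbone.predict(x, verbose=0)[0]

# Hypothetical training loop: `slices` and `labels` stand in for a labeled CT dataset.
# feats = np.stack([extract_features(segment(preprocess(s))) for s in slices])
# clf = LogisticRegression(max_iter=1000).fit(feats, labels)
```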


2020 ◽  
Author(s):  
Yuan Yuan ◽  
Lei Lin

Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data is scarce. To address this problem, we propose a novel self-supervised pre-training scheme to initialize a Transformer-based network by utilizing large-scale unlabeled data. In detail, the model is asked to predict randomly contaminated observations given an entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pre-training is completed, the pre-trained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed method, leading to classification accuracy improvements ranging from 1.91% to 6.69%.
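The pre-training objective can be sketched as below: randomly selected time steps of a pixel's series are corrupted and a small Transformer is trained to reconstruct them, following the masked-prediction idea described in the abstract. The encoder configuration, band count, masking ratio, and corruption noise are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_STEPS, N_BANDS = 24, 10          # assumed series length and number of spectral bands

class AddPositionEmbedding(layers.Layer):
    """Adds a learned positional embedding to every time step."""
    def build(self, input_shape):
        self.pos = self.add_weight(name="pos", shape=(1, input_shape[1], input_shape[2]),
                                   initializer="zeros", trainable=True)
    def call(self, x):
        return x + self.pos

def build_encoder(d_model=64, n_heads=4):
    """Small Transformer encoder that reconstructs each observation of a pixel's series."""
    inputs = layers.Input(shape=(N_STEPS, N_BANDS))
    x = AddPositionEmbedding()(layers.Dense(d_model)(inputs))
    attn = layers.MultiHeadAttention(num_heads=n_heads, key_dim=d_model // n_heads)(x, x)
    x = layers.LayerNormalization()(x + attn)
    ff = layers.Dense(d_model, activation="relu")(x)
    x = layers.LayerNormalization()(x + ff)
    return Model(inputs, layers.Dense(N_BANDS)(x))

def mask_series(series, mask_ratio=0.3, rng=np.random.default_rng(0)):
    """Randomly contaminate a fraction of time steps; return the corrupted input and mask."""
    mask = rng.random(series.shape[:2]) < mask_ratio            # (batch, steps)
    corrupted = series.copy()
    corrupted[mask] = rng.normal(size=(int(mask.sum()), series.shape[-1]))
    return corrupted.astype("float32"), mask

def masked_mse(y_true, y_pred, mask):
    """Reconstruction loss computed only on the contaminated time steps."""
    err = tf.reduce_sum(tf.square(tf.cast(y_true, y_pred.dtype) - y_pred), axis=-1)
    mask = tf.cast(mask, err.dtype)
    return tf.reduce_sum(err * mask) / (tf.reduce_sum(mask) + 1e-8)

# One pre-training step on unlabeled series (placeholder data); after pre-training,
# the encoder is fine-tuned on the small labeled SITS classification task.
encoder = build_encoder()
opt = tf.keras.optimizers.Adam(1e-3)
unlabeled = np.random.rand(32, N_STEPS, N_BANDS).astype("float32")
corrupted, mask = mask_series(unlabeled)
with tf.GradientTape() as tape:
    loss = masked_mse(unlabeled, encoder(corrupted, training=True), mask)
opt.apply_gradients(zip(tape.gradient(loss, encoder.trainable_variables),
                        encoder.trainable_variables))
```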


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Dapeng Lang ◽  
Deyun Chen ◽  
Ran Shi ◽  
Yongjun He

Deep learning has been widely used in image classification and image recognition and has achieved strong practical results. However, in recent years, a number of studies have found that the accuracy of classification models based on deep learning drops sharply when only subtle changes are made to the original examples, which enables attacks on deep learning models. The main approaches are as follows: adjust the pixels of attack examples so that the changes are invisible to the human eye and induce the deep learning model to make a wrong classification; or add an adversarial patch to the detection target to guide and deceive the classification model into misclassifying it. These methods, however, involve strong randomness and are of very limited use in practical applications. Unlike previous perturbations of traffic signs, our paper proposes a method that successfully hides and misclassifies vehicles in complex contexts. The method takes complex real scenarios into account and can perturb pictures taken by a camera or mobile phone so that a detector based on a deep learning model either fails to detect the vehicle or misclassifies it. To improve robustness, the position and size of the adversarial patch are adjusted for different detection models by introducing an attachment mechanism. Tests on different detectors show that a patch generated against a single target detection algorithm can also attack other detectors and transfers well. The experiments in this paper show that the proposed algorithm significantly lowers the accuracy of the detector. Under real-world conditions such as distance, light, angle, and resolution, misclassification of the target is achieved by reducing the confidence of the target and its background, which greatly perturbs the detection results of the target detector. On the COCO 2017 dataset, the success rate of the algorithm reaches 88.7%.
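A minimal sketch of the general adversarial-patch idea behind this kind of attack is given below: a patch is pasted onto vehicle images at a chosen position and scale and optimized by gradient descent to drive down the detector's confidence. The stand-in detector, fixed patch placement, and hyperparameters are placeholders, not the paper's actual attachment mechanism or target detectors.

```python
import tensorflow as tf

IMG_SIZE, PATCH_SIZE = 256, 64
patch = tf.Variable(tf.random.uniform((PATCH_SIZE, PATCH_SIZE, 3)))    # trainable patch pixels
opt = tf.keras.optimizers.Adam(0.01)

# Tiny CNN standing in for a real detector's vehicle-confidence head (illustrative only).
stand_in_detector = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

def apply_patch(images, patch, top=100, left=100):
    """Paste the patch onto a batch of images at a fixed position (placeholder placement)."""
    pad = [[top, IMG_SIZE - top - PATCH_SIZE], [left, IMG_SIZE - left - PATCH_SIZE], [0, 0]]
    mask = tf.pad(tf.ones((PATCH_SIZE, PATCH_SIZE, 3)), pad)
    return images * (1.0 - mask) + tf.pad(patch, pad) * mask

def attack_step(images):
    """One gradient step that updates the patch to suppress detector confidence."""
    with tf.GradientTape() as tape:
        patched = apply_patch(images, tf.clip_by_value(patch, 0.0, 1.0))
        loss = tf.reduce_mean(stand_in_detector(patched, training=False))
    opt.apply_gradients(zip(tape.gradient(loss, [patch]), [patch]))
    return loss

images = tf.random.uniform((2, IMG_SIZE, IMG_SIZE, 3))     # placeholder vehicle photos
for _ in range(10):
    attack_step(images)
```

Robustness to distance, light, and angle is typically obtained by randomizing the patch's placement and appearance during optimization rather than fixing them as in this sketch.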


2019 ◽  
Vol 9 (18) ◽  
pp. 3717 ◽  
Author(s):  
Wenkuan Li ◽  
Dongyuan Li ◽  
Hongxia Yin ◽  
Lindong Zhang ◽  
Zhenfang Zhu ◽  
...  

Text representation learning is an important but challenging issue for various natural language processing tasks. Recently, deep learning-based representation models have achieved great success in sentiment classification. However, these existing models focus more on semantic information than on sentiment linguistic knowledge, which provides rich sentiment information and plays a key role in sentiment analysis. In this paper, we propose a lexicon-enhanced attention network (LAN) based on text representation to improve the performance of sentiment classification. Specifically, we first propose a lexicon-enhanced attention mechanism that combines a sentiment lexicon with an attention mechanism to incorporate sentiment linguistic knowledge into deep learning methods. Second, we introduce a multi-head attention mechanism in the deep neural network to interactively capture contextual information from different representation subspaces at different positions. Furthermore, we stack the LAN model to build a hierarchical sentiment classification model for large-scale text. Extensive experiments are conducted to evaluate the effectiveness of the proposed models on four popular real-world sentiment classification datasets at both the sentence level and the document level. The experimental results demonstrate that our proposed models achieve comparable or better performance than the state-of-the-art methods.
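The core idea of combining a sentiment lexicon with an attention mechanism can be sketched as follows: each token's lexicon polarity score biases the attention weights computed over the encoded sequence, so sentiment-bearing words receive more weight. The lexicon scoring, layer sizes, and encoder are illustrative assumptions rather than the paper's exact LAN formulation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

class LexiconAttention(layers.Layer):
    """Additive attention whose scores are biased by per-token lexicon polarity."""
    def __init__(self, units=64, **kwargs):
        super().__init__(**kwargs)
        self.proj = layers.Dense(units, activation="tanh")
        self.score = layers.Dense(1)
        self.lex_weight = self.add_weight(name="lex_weight", shape=(), initializer="ones")

    def call(self, hidden_states, lexicon_scores):
        # hidden_states: (batch, steps, dim); lexicon_scores: (batch, steps) in [-1, 1]
        scores = self.score(self.proj(hidden_states))[..., 0]          # content-based scores
        scores = scores + self.lex_weight * tf.abs(lexicon_scores)     # boost sentiment-bearing tokens
        weights = tf.nn.softmax(scores, axis=-1)
        return tf.reduce_sum(hidden_states * weights[..., None], axis=1)

# Hypothetical usage on top of a BiLSTM encoder.
vocab_size, seq_len = 10000, 40
tokens = layers.Input(shape=(seq_len,), dtype="int32")
lex = layers.Input(shape=(seq_len,), dtype="float32")      # lexicon polarity per token
h = layers.Embedding(vocab_size, 128)(tokens)
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(h)
sent_vec = LexiconAttention()(h, lex)
out = layers.Dense(1, activation="sigmoid")(sent_vec)
model = tf.keras.Model([tokens, lex], out)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Dummy batch just to show the pieces fit together.
tok = np.random.randint(0, vocab_size, size=(8, seq_len))
pol = np.random.uniform(-1, 1, size=(8, seq_len)).astype("float32")
model.fit([tok, pol], np.random.randint(0, 2, size=(8, 1)), epochs=1, verbose=0)
```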


2019 ◽  
Vol 11 (9) ◽  
pp. 190 ◽  
Author(s):  
Jamal ◽  
Xianqiao ◽  
Aldabbas

Emotion detection in social media is very effective for measuring the mood of people about a specific topic, news item, or product. It has a wide range of applications, including identifying psychological conditions such as anxiety or depression in users. However, it is a challenging task to distinguish useful emotion features in a large corpus of text because emotions are subjective, with fuzzy boundaries, and may be expressed in different terminologies and perceptions. To tackle this issue, this paper presents a hybrid deep learning approach based on TensorFlow with Keras for emotion detection on a large scale of imbalanced tweet data. First, preprocessing steps are used to extract useful features from raw tweets without noisy data. Second, an entropy weighting method is used to compute the importance of each feature. Third, a class balancer is applied to balance each class. Fourth, Principal Component Analysis (PCA) is applied to transform highly correlated features into normalized forms. Finally, a TensorFlow-based deep learning model with Keras is proposed to learn high-quality features for emotion classification. The proposed methodology is evaluated on a dataset of 1,600,000 tweets collected from Kaggle. The proposed approach is compared with other state-of-the-art techniques at different training ratios, and the results show that it outperforms them.
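A rough sketch of the described pipeline (feature extraction, entropy weighting, class balancing, PCA, then a Keras classifier) appears below; the entropy-weighting formula, the random-oversampling balancer, the toy corpus, and the network shape are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer
from imblearn.over_sampling import RandomOverSampler

def entropy_weights(counts):
    """Weight each term by 1 - normalized entropy of its distribution over documents."""
    p = counts / (counts.sum(axis=0, keepdims=True) + 1e-12)
    ent = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=0)
    return 1.0 - ent / np.log(counts.shape[0])

# Placeholder corpus and labels standing in for the tweet dataset.
texts = ["i love this", "this is awful", "so happy today", "feeling very sad"]
labels = np.array([1, 0, 1, 0])

counts = CountVectorizer().fit_transform(texts).toarray().astype(float)
X = counts * entropy_weights(counts)                                  # entropy-weighted features
X, y = RandomOverSampler(random_state=0).fit_resample(X, labels)      # class balancing
X = PCA(n_components=min(3, X.shape[1])).fit_transform(X)             # decorrelate / reduce

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
```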


2020 ◽  
Vol 10 (2) ◽  
pp. 615 ◽  
Author(s):  
Tomas Iesmantas ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

(1) Background: The segmentation of cell nuclei is an essential task in a wide range of biomedical studies and clinical practices. The full automation of this process remains a challenge due to intra- and internuclear variations across a wide range of tissue morphologies and differences in staining protocols and imaging procedures. (2) Methods: A deep learning model with metric embeddings, such as contrastive loss and triplet loss with semi-hard negative mining, is proposed in order to accurately segment cell nuclei in a diverse set of microscopy images. The effectiveness of the proposed model was tested on a large-scale multi-tissue collection of microscopy image sets. (3) Results: The use of deep metric learning increased the overall segmentation performance by 3.12% in the average Dice similarity coefficient compared to no metric learning. In particular, the largest gain was observed for segmenting cell nuclei in H&E-stained images when the deep learning network and triplet loss with semi-hard negative mining were used for the task. (4) Conclusion: We conclude that deep metric learning gives an additional boost to the overall learning process and consequently improves segmentation performance. Notably, the improvement ranges approximately between 0.13% and 22.31% in terms of Dice coefficients for different types of images when compared to deep learning without metric learning.
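The triplet-loss-with-semi-hard-mining component can be illustrated with the TensorFlow Addons implementation, which mines, within each batch, negatives that are farther from the anchor than the positive but still inside the margin. The embedding network below operates on image patches for brevity, whereas the paper learns per-pixel embeddings for segmentation; the architecture and data are placeholders.

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Small embedding network standing in for the segmentation model's embedding head.
embedder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(32),
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)),  # unit-norm embeddings
])

# TripletSemiHardLoss pulls same-class embeddings together and pushes semi-hard
# negatives beyond the margin.
embedder.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                 loss=tfa.losses.TripletSemiHardLoss(margin=1.0))

# Placeholder patches and labels (e.g., nucleus vs. background patches).
patches = tf.random.uniform((64, 32, 32, 3))
labels = tf.random.uniform((64,), maxval=2, dtype=tf.int32)
embedder.fit(patches, labels, epochs=1, verbose=0)
```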


Crop identification (CI) using hyperspectral images (HSI) collected from satellites is an active research area with various agriculture-related applications. A wide range of research activity has been carried out and modelled in the area of crop recognition (CR) to build efficient models. The correlation filter (CF) is considered an effective method and has been applied by existing methodologies to identify similar signal features. Nonetheless, very limited work has been carried out using CF for crop classification with hyperspectral data. Further, an effective method is required that offers a good trade-off between memory and computational overhead. The crop classification model can be improved by combining machine learning (ML) techniques with CF. HSI is composed of hundreds of channels with a large data dimension that carries the entire information of the imaging, so a classification model is very useful for real-time applications. However, the accuracy of the classification task is affected because HSI contains a high number of redundant and correlated features, which also induce computational overhead with little benefit. Thus, effective band selection, texture analysis, and classification methods are required for accurately classifying multiple crops. This paper analyses various existing techniques for the identification and classification of crops using satellite imagery. It then identifies the research issues, challenges, and problems of existing models in order to build efficient techniques for crop identification and classification from satellite images. Experiments are conducted on standard hyperspectral data. The results show that the proposed model attains superior classification accuracy compared with existing hyperspectral image classification models.


Author(s):  
Na Wu ◽  
Fei Liu ◽  
Fanjia Meng ◽  
Mu Li ◽  
Chu Zhang ◽  
...  

Rapid variety classification of crop seeds is significant for breeders to screen out seeds with specific traits and for market regulators to detect seed purity. However, collecting high-quality, large-scale samples incurs high costs in some cases, making it difficult to build an accurate classification model. This study aimed to explore a rapid and accurate method for variety classification of different crop seeds under sample-limited conditions based on hyperspectral imaging (HSI) and deep transfer learning. Three deep neural networks with typical structures were designed based on a sample-rich pea dataset. Having obtained the highest accuracy of 99.57%, VGG-MODEL was transferred to classify four target datasets (rice, oat, wheat, and cotton) with limited samples. The accuracies of the transferred model reached 95%, 99%, 80.8%, and 83.86% on the four datasets, respectively. Using training sets of different sizes, the transferred model consistently obtained higher performance than other traditional methods. The visualization of the deep features and classification results confirmed the portability of the shared features of seed spectra, providing an interpretable method for rapid and accurate variety classification of crop seeds. The overall results showed the great superiority of HSI combined with deep transfer learning for seed detection under sample-limited conditions. This study provides a new idea for facilitating crop germplasm screening under sample scarcity and for detecting other qualities of crop seeds under sample-limited conditions based on HSI.
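The transfer strategy described above can be sketched as follows: a VGG-style network is first trained on the sample-rich source dataset, its feature-extraction layers are then copied and frozen, and only a new classification head is trained on the small target dataset. The 1D architecture over mean seed spectra, band count, class counts, and random data are assumptions for illustration, not the authors' VGG-MODEL.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_BANDS = 200   # assumed number of spectral bands per seed spectrum

def build_vgg_like(n_classes):
    """Small VGG-style 1D CNN over a seed's mean spectrum (illustrative architecture)."""
    return models.Sequential([
        layers.Conv1D(32, 3, activation="relu", padding="same", input_shape=(N_BANDS, 1)),
        layers.Conv1D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 3, activation="relu", padding="same"),
        layers.Conv1D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax", name="head"),
    ])

# 1) Pre-train on the sample-rich source dataset (placeholder pea data).
source = build_vgg_like(n_classes=8)
source.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
x_src = np.random.rand(1000, N_BANDS, 1); y_src = np.random.randint(0, 8, 1000)
source.fit(x_src, y_src, epochs=2, verbose=0)

# 2) Transfer: copy the shared feature extractor, freeze it, and train a new head
#    on the small target dataset (e.g., a sample-limited rice set).
target = build_vgg_like(n_classes=4)
for src_layer, tgt_layer in zip(source.layers[:-1], target.layers[:-1]):
    tgt_layer.set_weights(src_layer.get_weights())
    tgt_layer.trainable = False
target.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
x_tgt = np.random.rand(60, N_BANDS, 1); y_tgt = np.random.randint(0, 4, 60)
target.fit(x_tgt, y_tgt, epochs=5, verbose=0)
```

Unfreezing the copied layers with a small learning rate after the head has converged is a common follow-up step when slightly more target data is available.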

