Deep Neural Network Guided by Attention Mechanism for Segmentation of Liver Pathology Image

2021
pp. 425-435
Author(s):  
Zhongrui Zhai
Chaoli Wang
Zhanquan Sun
Shuqun Cheng
Kang Wang
Symmetry
2020
Vol 12 (11)
pp. 1827
Author(s):  
Dengao Li
Hang Wu
Jumin Zhao
Ye Tao
Jian Fu

Nowadays, the social problems caused by cardiovascular diseases are becoming increasingly serious. Accurate and efficient classification of arrhythmias from an electrocardiogram is of positive significance for improving the health status of people all over the world. In this paper, a new neural network structure based on the most common 12-lead electrocardiograms is proposed to classify nine arrhythmias; it consists primarily of Inception modules and GRUs (Gated Recurrent Units). Moreover, a new attention mechanism is added to the model, which is meaningful with respect to data symmetry. The average F1 score obtained on three different test sets was over 0.886, and the highest was 0.919. The accuracy, sensitivity, and specificity obtained on the PhysioNet public database were 0.928, 0.901, and 0.984, respectively. As a whole, this deep neural network performed well in the multi-label classification of 12-lead ECG signals and showed better stability than other methods when evaluated on larger numbers of test samples.
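As a rough illustration of the architecture described above (an Inception-style convolutional front end, a recurrent GRU stage, and an attention layer producing multi-label outputs for nine arrhythmia classes), the following PyTorch sketch may help. It is not the authors' code: the number of branches, kernel sizes, hidden widths, and the additive form of the attention are assumptions.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# Inception-style 1-D convolutions -> bidirectional GRU -> temporal attention
# -> multi-label logits for nine arrhythmia classes from 12-lead ECG.
import torch
import torch.nn as nn

class InceptionBlock1d(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.pool_branch = nn.Sequential(
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
        )

    def forward(self, x):                       # x: (batch, leads, time)
        outs = [b(x) for b in self.branches] + [self.pool_branch(x)]
        return torch.relu(torch.cat(outs, dim=1))

class ECGClassifier(nn.Module):
    def __init__(self, n_leads=12, n_classes=9, hidden=64):
        super().__init__()
        self.inception = InceptionBlock1d(n_leads, 32)   # 4 branches -> 128 channels
        self.gru = nn.GRU(128, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)             # additive attention scores
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, 12, time)
        h = self.inception(x).transpose(1, 2)   # -> (batch, time, 128)
        h, _ = self.gru(h)                      # -> (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time steps
        ctx = (w * h).sum(dim=1)                # attention-weighted temporal summary
        return self.head(ctx)                   # multi-label logits (train with BCEWithLogitsLoss)

# Example: a batch of 4 ten-second recordings sampled at 500 Hz
logits = ECGClassifier()(torch.randn(4, 12, 5000))
print(logits.shape)   # torch.Size([4, 9])
```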


2021
Vol 4 (4)
pp. 85
Author(s):  
Hashem Saleh Sharaf Al-deen
Zhiwen Zeng
Raeed Al-sabri
Arash Hekmat

Due to the increasing growth of social media content on websites such as Twitter and Facebook, analyzing textual sentiment has become a challenging task. Therefore, many studies have focused on textual sentiment analysis. Recently, deep learning models, such as convolutional neural networks and long short-term memory, have achieved promising performance in sentiment analysis. These models have proven their ability to cope with sequences of arbitrary length. However, when they are used in the feature extraction layer, the feature space is high-dimensional, the text data are sparse, and they assign equal importance to all features. To address these issues, we propose a hybrid model that combines a deep neural network with a multi-head attention mechanism (DNN–MHAT). In the DNN–MHAT model, we first design an improved deep neural network that captures the actual context of the text and extracts local, position-invariant features by combining bidirectional long short-term memory units (Bi-LSTM) with a convolutional neural network (CNN). Second, we present a multi-head attention mechanism that captures words in the text that are strongly related across long distances and encodes these dependencies, adding a different focus to the information output from the hidden layers of the Bi-LSTM. Finally, global average pooling is applied to transform the vector into a high-level sentiment representation and avoid model overfitting, and a sigmoid classifier carries out the sentiment polarity classification of the texts. The DNN–MHAT model is tested on four review datasets and two Twitter datasets. The results of the experiments illustrate the effectiveness of the DNN–MHAT model, which achieved excellent performance compared to state-of-the-art baseline methods on both short tweets and long reviews.
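The pipeline described above (Bi-LSTM context encoding, convolutional local features, multi-head attention, global average pooling, sigmoid output) can be sketched roughly as follows in PyTorch. This is not the authors' implementation: the vocabulary size, embedding and hidden widths, kernel size, and number of attention heads are assumptions.

```python
# Minimal sketch (assumptions, not the authors' implementation) of the DNN-MHAT idea:
# embedding -> Bi-LSTM -> 1-D convolution for local features -> multi-head
# self-attention -> global average pooling -> sigmoid polarity output.
import torch
import torch.nn as nn

class DNN_MHAT(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=128, hidden=64, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, 2 * hidden, kernel_size=3, padding=1)
        self.mha = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):                  # tokens: (batch, seq_len) integer ids
        x = self.embed(tokens)                  # -> (batch, seq, emb_dim)
        h, _ = self.bilstm(x)                   # contextual states: (batch, seq, 2*hidden)
        local = torch.relu(                     # local, position-invariant features
            self.conv(h.transpose(1, 2))).transpose(1, 2)
        attended, _ = self.mha(local, local, local)   # multi-head self-attention
        pooled = attended.mean(dim=1)           # global average pooling over time
        return torch.sigmoid(self.out(pooled)).squeeze(-1)  # polarity in [0, 1]; train with nn.BCELoss

# Example: a batch of 8 padded sequences of 50 token ids
probs = DNN_MHAT()(torch.randint(1, 20000, (8, 50)))
print(probs.shape)   # torch.Size([8])
```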

