An Improved Model for Analyzing Textual Sentiment Based on a Deep Neural Network Using Multi-Head Attention Mechanism

2021, Vol 4 (4), pp. 85
Author(s): Hashem Saleh Sharaf Al-deen, Zhiwen Zeng, Raeed Al-sabri, Arash Hekmat

Due to the increasing growth of social media content on websites such as Twitter and Facebook, analyzing textual sentiment has become a challenging task, and many studies have therefore focused on textual sentiment analysis. Recently, deep learning models such as convolutional neural networks and long short-term memory networks have achieved promising performance in sentiment analysis and have proven able to cope with sequences of arbitrary length. However, when these models are used in the feature extraction layer, the feature space is high-dimensional, the text data are sparse, and all features are assigned equal importance. To address these issues, we propose a hybrid model that combines a deep neural network with a multi-head attention mechanism (DNN–MHAT). In the DNN–MHAT model, we first design an improved deep neural network that captures the actual context of the text and extracts position-invariant local features by combining bidirectional long short-term memory units (Bi-LSTM) with a convolutional neural network (CNN). Second, we present a multi-head attention mechanism that captures and encodes long-distance dependencies between words in the text, adding a different focus to the information output by the hidden layers of the Bi-LSTM. Finally, global average pooling transforms the resulting vectors into a high-level sentiment representation while avoiding overfitting, and a sigmoid classifier performs the sentiment polarity classification. The DNN–MHAT model is evaluated on four review datasets and two Twitter datasets. The experimental results illustrate the effectiveness of the DNN–MHAT model, which achieves excellent performance on both short tweets and long reviews compared to state-of-the-art baseline methods.
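As a rough illustration of the pipeline the abstract describes (embedding → Bi-LSTM → CNN → multi-head attention → global average pooling → sigmoid), a minimal PyTorch sketch is given below. All layer sizes, the kernel width, and the exact wiring between the CNN and attention layers are assumptions made for illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class DNNMHATSketch(nn.Module):
    """Illustrative sketch of a DNN-MHAT-style model; sizes are placeholders."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 num_heads=4, num_filters=128, kernel_size=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Bi-LSTM captures the sequential context of the text.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # 1-D convolution extracts position-invariant local features.
        self.conv = nn.Conv1d(2 * hidden_dim, num_filters,
                              kernel_size, padding=kernel_size // 2)
        # Multi-head self-attention re-weights the extracted features.
        self.mha = nn.MultiheadAttention(num_filters, num_heads,
                                         batch_first=True)
        self.classifier = nn.Linear(num_filters, 1)

    def forward(self, token_ids):
        x = self.embedding(token_ids)                      # (B, T, E)
        h, _ = self.bilstm(x)                              # (B, T, 2H)
        c = self.conv(h.transpose(1, 2)).transpose(1, 2)   # (B, T, F)
        a, _ = self.mha(c, c, c)                           # self-attention
        pooled = a.mean(dim=1)                             # global average pooling
        return torch.sigmoid(self.classifier(pooled))      # sentiment polarity
```

Averaging over the attention outputs rather than taking the final hidden state keeps the representation independent of sequence length, which fits the abstract's emphasis on handling both short tweets and long reviews.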

2019
Author(s): Kangkang Zhang, Tong Liu, Shengjing Song, Xin Zhao, Shijun Sun, ...

Abstract
Acquiring clear and usable audio recordings is critical for acoustic analysis of animal vocalizations. Bioacoustics studies commonly face the problem of overlapping signals, but the issue is often ignored because no satisfactory solution currently exists. This study presents a bi-directional long short-term memory (BLSTM) network to separate overlapping bat calls and reconstruct the waveform audio. Separation quality was evaluated using seven temporal-spectral parameters, and the applicability of the method was assessed on six different species. In addition, clustering analysis was conducted with the separated echolocation calls from each population. The results show that all syllables in the overlapping calls were separated with high robustness across species. A comparison of the seven temporal-spectral parameters showed no significant difference and negligible deviation between the extracted and original calls, indicating high separation quality. Clustering analysis of the separated echolocation calls achieved an accuracy of 93.8%, suggesting that the reconstructed waveforms can be used reliably. These results indicate that the proposed technique is a convenient and automated approach for separating overlapping calls using a BLSTM network, and that this deep neural network approach has the potential to solve complex problems in bioacoustics.

Author summary
In recent years, advances in recording techniques and devices for animal acoustic experiments and population monitoring have led to a sharp increase in the volume of sound data. However, recordings often contain overlapping calls from multiple individuals, which limits how fully the data can be used, and more convenient, automated methods are needed to cope with large acoustic datasets. The echolocation and communication calls of bats are variable and frequently overlap in both field and laboratory recordings, which makes them an excellent template for research on animal sound separation. Here, we solve the problem of overlapping calls in bats for the first time using a deep neural network. We built a network that separates the overlapping calls of six bat species; all syllables in the overlapping calls were separated, and we found no significant difference between the separated and non-overlapping syllables. We also demonstrate an application of the method to species classification. Our study provides a useful and efficient model for sound data processing in acoustic research, and the proposed method has the potential to generalize to other animal species.
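The abstract does not specify the network's inputs or outputs, but a common formulation of this kind of separation is to feed spectrogram frames of the mixed recording to a BLSTM that predicts one time-frequency mask per source; the sketch below illustrates that reading. The feature dimensionality, the two-source setup, and the layer sizes are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BLSTMMaskSeparator(nn.Module):
    """Minimal sketch of mask-based call separation with a BLSTM (assumed setup)."""

    def __init__(self, n_freq_bins=257, hidden_dim=256, n_sources=2):
        super().__init__()
        self.n_sources = n_sources
        self.blstm = nn.LSTM(n_freq_bins, hidden_dim, num_layers=2,
                             batch_first=True, bidirectional=True)
        # One mask value per source and frequency bin, constrained to [0, 1].
        self.mask_head = nn.Linear(2 * hidden_dim, n_sources * n_freq_bins)

    def forward(self, mixture_spec):                    # (B, T, F) magnitude frames
        h, _ = self.blstm(mixture_spec)
        masks = torch.sigmoid(self.mask_head(h))        # (B, T, S*F)
        masks = masks.view(*mixture_spec.shape[:2], self.n_sources, -1)
        # Apply each mask to the mixture to estimate per-call spectrograms;
        # waveforms can then be reconstructed with the mixture phase (iSTFT).
        return masks * mixture_spec.unsqueeze(2)        # (B, T, S, F)
```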


Author(s): Thang

In this research, we propose a method for human-robot interactive intention prediction. The proposed algorithm makes use of the OpenPose library and a long short-term memory (LSTM) deep neural network. The network observes the human posture as a time series and then predicts the human's interactive intention. We train the deep neural network on a dataset that we generated ourselves. The experimental results show that our proposed method is able to predict human-robot interactive intention, achieving 92% accuracy on the test set.
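A minimal sketch of the described setup, assuming the OpenPose body keypoints of each frame are flattened into a per-frame feature vector and fed to an LSTM classifier, is shown below. The number of keypoints, the hidden size, and the number of intention classes are placeholders; the abstract does not specify them.

```python
import torch
import torch.nn as nn

class IntentionLSTM(nn.Module):
    """Sketch of intention prediction from a sequence of OpenPose keypoints."""

    def __init__(self, n_keypoints=25, hidden_dim=64, n_intentions=2):
        super().__init__()
        # Each frame is a flattened (x, y) vector of the detected body keypoints.
        self.lstm = nn.LSTM(2 * n_keypoints, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, n_intentions)

    def forward(self, pose_sequence):         # (B, T, 2 * n_keypoints)
        _, (h_n, _) = self.lstm(pose_sequence)
        return self.classifier(h_n[-1])       # logits over interactive intentions
```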

