A Deep Neural Network Model for the Detection and Classification of Emotions from Textual Content

Complexity ◽  
2022 ◽  
Vol 2022 ◽  
pp. 1-12
Author(s):  
Muhammad Zubair Asghar ◽  
Adidah Lajis ◽  
Muhammad Mansoor Alam ◽  
Mohd Khairil Rahmat ◽  
Haidawati Mohamad Nasir ◽  
...  

Emotion-based sentiment analysis has recently received a lot of interest, with an emphasis on the automated identification of user behavior, such as emotional expressions, from online social media texts. However, the majority of prior attempts rely on traditional procedures that are insufficient to provide promising outcomes. In this study, we categorize emotional sentiments by recognizing them in text. For that purpose, we present a deep learning model, bidirectional long short-term memory (BiLSTM), for emotion recognition that takes into account five main emotions (Joy, Sadness, Fear, Shame, Guilt). We perform experimental assessments on an emotion dataset to accomplish the emotion categorization task. The evaluation revealed that, compared with state-of-the-art methodologies, the proposed model can successfully categorize user emotions into the intended classes. Finally, we assess the efficacy of our strategy using statistical analysis. The findings of this research help firms apply best practices in the selection, management, and optimization of policies, services, and product information.
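A minimal sketch of the kind of BiLSTM text classifier described above, written with TensorFlow/Keras; the vocabulary size, embedding width, and layer sizes are illustrative assumptions, not values reported by the authors.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative BiLSTM classifier over word indices; sizes are placeholder assumptions.
VOCAB_SIZE, NUM_CLASSES = 20000, 5            # five emotions: Joy, Sadness, Fear, Shame, Guilt

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),        # word embeddings
    layers.Bidirectional(layers.LSTM(64)),    # reads the text in both directions
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```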

2020 ◽  
Vol 3 (1) ◽  
pp. 445-454
Author(s):  
Celal Buğra Kaya ◽  
Alperen Yılmaz ◽  
Gizem Nur Uzun ◽  
Zeynep Hilal Kilimci

Pattern classification is concerned with the automatic discovery of regularities in a dataset through the use of various learning techniques, so that objects can be assigned to a set of categories or classes. This study evaluates deep learning methodologies for the classification of stock patterns. To classify patterns obtained from stock charts, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) are employed. To demonstrate the efficiency of the proposed models in categorizing patterns, a hand-crafted image dataset is constructed from stock charts of the Istanbul Stock Exchange and the NASDAQ Stock Exchange. Experimental results show that convolutional neural networks exhibit superior classification success in recognizing patterns compared with the other deep learning methodologies.
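For the best-performing CNN variant, a rough Keras sketch might look like the following; the image resolution, filter counts, and number of pattern classes are assumptions for illustration, not details from the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative CNN for chart-pattern images; shapes and class count are assumptions.
IMG_SHAPE, NUM_PATTERNS = (64, 64, 1), 8

model = models.Sequential([
    layers.Input(shape=IMG_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_PATTERNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```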


2018 ◽  
Vol 1 (2) ◽  
pp. 101
Author(s):  
Vesna Srnic ◽  
Emina Berbic Kolar ◽  
Igor Ilic

In addition to the well-known classification of long-term and short-term memory, we are also interested in distinguishing episodic, semantic and procedural memory in the areas of linguistic narrative and multimedial semantic deconstruction in postmodernism. We compare the liveliness of memorization in literary tradition and literary art with postmodernist divisions and reverberations of traditional memorization through human multitasking and performative multimedia art, and we formulate the existence of creative, intuitive and superhuman paradigms. Since memory can be physical, psychological or spiritual, according to neurobiologist Dr. J. Bauer (Das Gedächtnis des Körpers, 2004), the social role of collaboration, and consequently the personal transformation and remodelling of genomic architecture, is of the greatest importance for memorization, whereas the media theorist Mark Hansen holds that technology brings different solutions to the framing function (Hansen, 2000). We believe that postmodern deconstruction does not necessarily damage memory, especially in the field of human multitasking that utilizes multimedia performative art by means of the anthropologization of technology, thereby enhancing artistic and affective pre- and post-linguistic experience while unifying technology and humans through intuitive empathy in society.


Author(s):  
Preethi D. ◽  
Neelu Khare

This chapter presents an ensemble-based feature selection with long short-term memory (LSTM) model. A deep recurrent learning model is proposed for classifying network intrusions. It uses ensemble-based feature selection (EFS) to select the appropriate features from the dataset and long short-term memory for the classification of network intrusions. The EFS combines five feature selection techniques, namely information gain, gain ratio, chi-square, correlation-based feature selection, and symmetric uncertainty-based feature selection. The experiments were conducted on the standard benchmark NSL-KDD dataset and implemented using TensorFlow and Python. The proposed model is evaluated using classification performance metrics and compared with using all 41 features without any feature selection, as well as with each individual feature selection technique followed by LSTM classification. The performance study showed that the proposed model performs better, with 99.8% accuracy, a higher detection rate, and a lower false alarm rate.
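The ensemble-selection step can be illustrated with the sketch below, which ranks features under two of the five criteria (information gain approximated by mutual information, and chi-square, as available in scikit-learn) and keeps the best-ranked ones. The synthetic data, the 20-feature cut-off, and the rank-averaging rule are assumptions; NSL-KDD preprocessing and the LSTM classifier are omitted.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, chi2

# Sketch of the ensemble idea: rank features under several criteria, keep the best-ranked.
rng = np.random.default_rng(0)
X = rng.random((500, 41))          # 41 features, as in NSL-KDD (synthetic stand-in data)
y = rng.integers(0, 2, 500)        # binary label: normal vs. intrusion

scores = [mutual_info_classif(X, y), chi2(X, y)[0]]
ranks = np.mean([np.argsort(np.argsort(-s)) for s in scores], axis=0)  # average rank per feature
selected = np.argsort(ranks)[:20]  # keep the 20 best-ranked features (assumed cut-off)
X_sel = X[:, selected]             # this reduced set would then feed the LSTM classifier
```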


Energies ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 2102 ◽  
Author(s):  
Vo-Nguyen Tuyet-Doan ◽  
Tien-Tung Nguyen ◽  
Minh-Tuan Nguyen ◽  
Jong-Ho Lee ◽  
Yong-Hwa Kim

Detecting, measuring, and classifying partial discharges (PDs) are important tasks for assessing the condition of insulation systems used in different electrical equipment. Because the phase-resolved PD (PRPD) pattern can be represented as a sequence input, existing methods that process sequential data, e.g., recurrent neural networks with long short-term memory (LSTM), have been applied to fault classification. However, their performance is limited by the lack of support for parallel computation and the inability to recognize the relevance of all inputs. To overcome these two drawbacks, we propose a novel deep-learning model in this study based on a self-attention mechanism to classify the PD patterns in a gas-insulated switchgear (GIS). The proposed model uses a self-attention block, which offers the advantages of simultaneous computation and selective focusing on parts of the PRPD signals, and a classification block to finally classify faults in the GIS. Moreover, the combination of LSTM and self-attention is considered for comparison purposes. The experimental results show that the proposed method achieves superior performance compared with previous neural networks, while the model complexity is significantly reduced.
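A compact self-attention classifier over PRPD sequences could be sketched as follows in Keras; the sequence length, feature width, head count, and number of fault classes are placeholder assumptions rather than the configuration used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of a self-attention classifier over PRPD sequences; all sizes are assumptions.
SEQ_LEN, FEAT, NUM_FAULTS = 128, 16, 4

inputs = layers.Input(shape=(SEQ_LEN, FEAT))
attn = layers.MultiHeadAttention(num_heads=4, key_dim=32)(inputs, inputs)  # self-attention
x = layers.LayerNormalization()(inputs + attn)     # residual connection
x = layers.GlobalAveragePooling1D()(x)             # pool over the phase sequence
outputs = layers.Dense(NUM_FAULTS, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```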


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Basma Abd El-Rahiem ◽  
Ahmed Sedik ◽  
Ghada M. El Banby ◽  
Hani M. Ibrahem ◽  
Mohamed Amin ◽  
...  

Purpose: The objective of this paper is to perform infrared (IR) face recognition efficiently with convolutional neural networks (CNNs). The proposed model in this paper has several advantages, such as the automatic feature extraction using convolutional and pooling layers and the ability to distinguish between faces without visual details.

Design/methodology/approach: A model which comprises five convolutional layers in addition to five max-pooling layers is introduced for the recognition of IR faces.

Findings: The experimental results and analysis reveal high recognition rates of IR faces with the proposed model.

Originality/value: A designed CNN model is presented for IR face recognition. Both the feature extraction and classification tasks are incorporated into this model. The problems of low contrast and absence of details in IR images are overcome with the proposed model. The recognition accuracy reaches 100% in experiments on the Terravic Facial IR Database (TFIRDB).
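The five-convolution/five-pooling design described under Design/methodology/approach can be sketched as below; the input resolution, filter counts, and number of enrolled subjects are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of a five-conv/five-pool CNN for IR face recognition; sizes are assumptions.
NUM_SUBJECTS = 20

model = models.Sequential([layers.Input(shape=(64, 64, 1))])
for filters in (16, 32, 64, 128, 256):     # five convolutional layers...
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D())       # ...each followed by a max-pooling layer
model.add(layers.Flatten())
model.add(layers.Dense(NUM_SUBJECTS, activation="softmax"))
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```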


Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 861 ◽  
Author(s):  
Xiangdong Ran ◽  
Zhiguang Shan ◽  
Yufei Fang ◽  
Chuang Lin

Traffic prediction is based on modeling the complex non-linear spatiotemporal traffic dynamics in a road network. In recent years, long short-term memory (LSTM) has been applied to traffic prediction, achieving good performance. Existing LSTM methods for traffic prediction have two drawbacks: they do not use the departure time through the links for prediction, and their way of modeling long-term dependence in time series is not direct with respect to traffic prediction. An attention mechanism is implemented by constructing a neural network according to its task and has recently demonstrated success in a wide range of tasks. In this paper, we propose an LSTM-based method with an attention mechanism for travel time prediction. The proposed model is organized as a tree structure: instead of the unfolding of a standard LSTM, a tree structure with an attention mechanism is used to construct the depth of the LSTM and to model long-term dependence. The attention mechanism operates over the output layer of each LSTM unit, with the departure time used as its aspect, so that departure time is integrated into the model. The AdaGrad method is used for training. Based on datasets provided by Highways England, the experimental results show that the proposed model achieves better accuracy than the standard LSTM and other baseline methods. The case study suggests that the departure time is effectively exploited through the attention mechanism.
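A much-simplified sketch of the aspect-attention idea (departure time as the attention query over LSTM outputs) is shown below; the tree-structured unfolding of the paper is not reproduced, and the window length, feature count, and layer widths are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Simplified sketch: LSTM outputs over past link travel times are attended to with the
# departure time as the query ("aspect"); shapes are illustrative assumptions.
SEQ_LEN, FEATURES, UNITS = 24, 4, 64

history = layers.Input(shape=(SEQ_LEN, FEATURES), name="traffic_history")
departure = layers.Input(shape=(1,), name="departure_time")   # e.g. hour of day

h = layers.LSTM(UNITS, return_sequences=True)(history)        # per-step LSTM outputs
q = layers.Dense(UNITS)(layers.Reshape((1, 1))(departure))    # departure time as the query
context = layers.Attention()([q, h])                          # attention over LSTM outputs
pred = layers.Dense(1)(layers.Flatten()(context))             # travel-time estimate

model = models.Model([history, departure], pred)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.01), loss="mse")  # AdaGrad, as in the paper
```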


Author(s):  
Tao Gui ◽  
Qi Zhang ◽  
Lujun Zhao ◽  
Yaosong Lin ◽  
Minlong Peng ◽  
...  

In recent years, long short-term memory (LSTM) has been successfully used to model sequential data of variable length. However, LSTM can still experience difficulty in capturing long-term dependencies. In this work, we tried to alleviate this problem by introducing a dynamic skip connection, which can learn to directly connect two dependent words. Since there is no dependency information in the training data, we propose a novel reinforcement learning-based method to model the dependency relationship and connect dependent words. The proposed model computes the recurrent transition functions based on the skip connections, which provides a dynamic skipping advantage over RNNs that always tackle entire sentences sequentially. Our experimental results on three natural language processing tasks demonstrate that the proposed method can achieve better performance than existing methods. In the number prediction experiment, the proposed model outperformed LSTM with respect to accuracy by nearly 20%.
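A hand-unrolled toy sketch of the skip idea is given below: at each step the LSTM cell may take its recurrent state from an earlier timestep rather than from the immediately preceding one. The skip choice here is a fixed placeholder heuristic; the reinforcement-learning policy that the paper uses to learn the skips is not reproduced.

```python
import tensorflow as tf

# Toy illustration of a dynamic skip connection over recurrent states.
cell = tf.keras.layers.LSTMCell(64)
x = tf.random.normal([8, 20, 32])              # toy batch: (batch, time, features)

h, c = tf.zeros([8, 64]), tf.zeros([8, 64])
history = [(h, c)]                             # states kept so earlier ones can be reused
for t in range(20):
    k = 3 if t >= 3 else 1                     # placeholder policy: skip back 3 steps when possible
    h_prev, c_prev = history[-k]
    out, (h, c) = cell(x[:, t, :], [h_prev, c_prev])
    history.append((h, c))
```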


Author(s):  
Shirien K A ◽  
Neethu George ◽  
Surekha Mariam Varghese

Descriptive answer script assessment and rating is an automated framework for evaluating answer scripts correctly. There are several classification schemes in which a piece of text is evaluated on the basis of spelling, semantics and meaning, but many of these are not successful. Models available for rating response scripts include simple long short-term memory (LSTM) and deep LSTM; here, a combination of a convolutional neural network and a bidirectional LSTM is considered to refine the result. The model uses convolutional neural networks and bidirectional LSTM networks to learn local information about words and to capture long-term dependency information of contexts, implemented on the TensorFlow and Keras deep learning frameworks. The embedded semantic representation of texts is used to compute semantic similarities between pieces of text and to grade them based on the similarity score. The experiments used data optimization methods such as normalization and dropout, and tested the model on the Automated Student Evaluation Short Response Scoring dataset, a commonly used public dataset. Compared with existing systems, the proposed model achieves state-of-the-art performance and better accuracy on the test dataset.
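One way the Conv1D-plus-BiLSTM encoder and similarity-based grading could be wired together is sketched below; the vocabulary size, sequence length, layer widths, and the cosine-similarity scoring are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch: a shared Conv1D + BiLSTM encoder embeds the student and reference answers,
# and their cosine similarity serves as the grading signal. Sizes are assumptions.
VOCAB, MAX_LEN = 10000, 200

encoder = models.Sequential([
    layers.Embedding(VOCAB, 100),
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    layers.Bidirectional(layers.LSTM(64)),     # long-range context
    layers.Dropout(0.3),
])

student = layers.Input(shape=(MAX_LEN,), dtype="int32")
reference = layers.Input(shape=(MAX_LEN,), dtype="int32")
similarity = layers.Dot(axes=1, normalize=True)(   # cosine similarity between embeddings
    [encoder(student), encoder(reference)])
model = models.Model([student, reference], similarity)
```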


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Lulu Wang ◽  
Hanmei Peng ◽  
Mao Tan ◽  
Rui Pan

Inflow forecasting is one of the most important technologies for modern hydropower stations. Under the joint influence of soil, upstream inflow, and precipitation, the inflow is often characterized by time lag, nonlinearity, and uncertainty, which makes accurate multistep prediction of inflow difficult. To address the coupling relationship between inflow and the related factors, this paper proposes a long short-term memory deep learning model based on the Bagging algorithm (Bagging-LSTM) to predict the inflow 3 h, 12 h, and 24 h ahead, respectively. To validate the proposed model, inflow and related weather data from a hydropower station in southern China are used. Compared with classical time series models, the results show that the proposed model outperforms them on different accuracy metrics, especially in the scenario of multistep prediction.
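The Bagging idea can be sketched as below: several LSTM regressors are trained on bootstrap resamples of the (inflow, weather) history and their predictions are averaged. The window length, number of base learners, and synthetic data are assumptions; the actual hydrological features and horizons are not reproduced.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal Bagging-LSTM sketch on synthetic data; all sizes are illustrative assumptions.
WINDOW, FEATURES, N_ESTIMATORS = 24, 3, 5
rng = np.random.default_rng(1)
X = rng.random((1000, WINDOW, FEATURES)).astype("float32")
y = rng.random((1000, 1)).astype("float32")   # inflow at the target horizon

def make_lstm():
    m = models.Sequential([layers.Input(shape=(WINDOW, FEATURES)),
                           layers.LSTM(32), layers.Dense(1)])
    m.compile(optimizer="adam", loss="mse")
    return m

ensemble = []
for _ in range(N_ESTIMATORS):
    idx = rng.integers(0, len(X), len(X))     # bootstrap resample
    m = make_lstm()
    m.fit(X[idx], y[idx], epochs=2, verbose=0)
    ensemble.append(m)

prediction = np.mean([m.predict(X[:5], verbose=0) for m in ensemble], axis=0)  # averaged forecast
```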


Author(s):  
Surenthiran Krishnan ◽  
Pritheega Magalingam ◽  
Roslina Ibrahim

This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) with a combination of multiple gated recurrent units (GRU), long short-term memory (LSTM), and the Adam optimizer. The proposed model achieves an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating the RNN with multiple GRUs, using Keras with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent RNN-based models have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. Common drawbacks of existing models are low accuracy due to the complexity of the neural network, a high number of redundant neurons, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models; the results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest accuracy reported for an RNN on the Cleveland dataset and is promising for early heart disease prediction for patients.
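A rough sketch of the SMOTE-plus-stacked-recurrent pipeline is given below using imbalanced-learn and Keras; the synthetic data stands in for the Cleveland records, and the layer sizes and the choice to treat the 13 attributes as a length-13 sequence are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from imblearn.over_sampling import SMOTE

# Sketch of a stacked GRU + LSTM classifier with SMOTE oversampling; data is synthetic.
rng = np.random.default_rng(2)
X = rng.random((303, 13)).astype("float32")          # 13 clinical attributes, as in Cleveland
y = (rng.random(303) < 0.3).astype(int)              # imbalanced labels for illustration

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
X_seq = X_res.reshape(-1, 13, 1)                     # treat attributes as a length-13 sequence

model = models.Sequential([
    layers.Input(shape=(13, 1)),
    layers.GRU(32, return_sequences=True),
    layers.GRU(16, return_sequences=True),
    layers.LSTM(16),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y_res, epochs=2, verbose=0)
```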

