Deep Learning based Speech Emotion Recognition System

2021 ◽  
Vol 23 (12) ◽  
pp. 212-223
Author(s):  
P Jothi Thilaga ◽  
S Kavipriya ◽  
K Vijayalakshmi ◽  
...  

Emotions are elementary for humans, influencing perception and everyday activities such as communication, learning and decision-making. Speech Emotion Recognition (SER) systems aim to enable natural interaction with machines through direct voice input rather than conventional input devices, so that verbal content can be understood and human listeners can react easily. The SER system presented here consists of two main stages: a feature extraction phase and a feature classification phase. SER allows bots to communicate with humans in a non-lexical manner. The speech emotion recognition algorithm is based on a Convolutional Neural Network (CNN) model, which uses various modules for emotion recognition and classifiers to distinguish emotions such as happiness, calm, anger, neutral state, sadness, and fear. Classification performance depends on the extracted features; finally, the emotion of a speech signal is determined.
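A minimal sketch of the two-stage pipeline described above, assuming MFCC features extracted with librosa and a small Keras CNN. The layer sizes, feature dimensions, and helper names (extract_mfcc, build_cnn) are illustrative assumptions, not the authors' exact architecture.

```python
import librosa
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

EMOTIONS = ["happy", "calm", "angry", "neutral", "sad", "fearful"]  # labels named in the abstract

def extract_mfcc(path, sr=16000, n_mfcc=40, max_frames=128):
    """Feature extraction stage: fixed-size MFCC matrix per utterance (assumed sizes)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Pad / truncate along the time axis so every sample has the same shape.
    mfcc = librosa.util.fix_length(mfcc, size=max_frames, axis=1)
    return mfcc[..., np.newaxis]            # shape (n_mfcc, max_frames, 1)

def build_cnn(input_shape=(40, 128, 1), n_classes=len(EMOTIONS)):
    """Classification stage: a small 2-D CNN over the MFCC 'image'."""
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```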

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Linqin Cai ◽  
Yaxin Hu ◽  
Jiangong Dong ◽  
Sitong Zhou

With the rapid development of social media, single-modal emotion recognition can hardly satisfy the demands of current emotion recognition systems. To optimize the performance of the emotion recognition system, a multimodal emotion recognition model combining speech and text is proposed in this paper. Considering the complementarity between different modalities, a CNN (convolutional neural network) and an LSTM (long short-term memory) network were combined as binary channels to learn acoustic emotion features, while a Bi-LSTM (bidirectional long short-term memory) network was used to capture textual features. Furthermore, a deep neural network was applied to learn and classify the fused features. The final emotional state was determined by the output of both the speech and the text emotion analysis. Finally, multimodal fusion experiments were carried out on the IEMOCAP database to validate the proposed model. Compared with the single-modal results, the overall recognition accuracy increased by 6.70% over text alone and by 13.85% over speech emotion recognition alone. Experimental results show that the recognition accuracy of the multimodal model is higher than that of either single modality and outperforms other published multimodal models on the test datasets.
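The fusion architecture described above could be sketched roughly as follows with the Keras functional API; all input shapes, layer widths, and the four-class output are assumptions made for illustration, not the configuration reported by the authors.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical sizes: 128 acoustic frames x 40 features, 50-token transcripts.
N_FRAMES, N_ACOUSTIC, MAX_TOKENS, VOCAB, N_EMOTIONS = 128, 40, 50, 10000, 4

# Acoustic branch: CNN + LSTM over frame-level features.
audio_in = layers.Input(shape=(N_FRAMES, N_ACOUSTIC), name="audio")
a = layers.Conv1D(64, 5, activation="relu", padding="same")(audio_in)
a = layers.MaxPooling1D(2)(a)
a = layers.LSTM(128)(a)

# Text branch: embedding + Bi-LSTM over the transcript tokens.
text_in = layers.Input(shape=(MAX_TOKENS,), name="text")
t = layers.Embedding(VOCAB, 128)(text_in)
t = layers.Bidirectional(layers.LSTM(128))(t)

# Fusion: concatenate both modalities and classify with a small DNN.
x = layers.Concatenate()([a, t])
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.3)(x)
out = layers.Dense(N_EMOTIONS, activation="softmax")(x)

model = Model(inputs=[audio_in, text_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```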


2021 ◽  
Vol 11 (11) ◽  
pp. 4782
Author(s):  
Huan-Chung Li ◽  
Telung Pan ◽  
Man-Hua Lee ◽  
Hung-Wen Chiu

In recent years, much research has continued to improve human speech and emotion recognition. As facial emotion recognition has gradually matured alongside speech recognition, this study provides more accurate recognition of complex human emotional expression, shifting speech emotion identification from subjective human interpretation to automatic interpretation of the speaker’s emotional expression by computer. The work focuses on medical care, where it can be used to understand the feelings of physicians and patients during a visit and to improve treatment through the relationship between illness and interaction. The voice data are transformed into one observation segment per second, and the first to the thirteenth Mel-frequency cepstral coefficients are used as the speech emotion recognition feature vectors. For each coefficient, the maximum, minimum, average, median, and standard deviation are computed, giving 65 features in total for the construction of an artificial neural network. The emotion recognition system developed by the hospital is used as a benchmark for comparison with the classification results of the artificial neural network, and the two sets of results are then analyzed together to understand the interaction between doctor and patient. With this experimental module, the speech emotion recognition rate is 93.34%, and the accuracy of facial emotion recognition reaches 86.3%.
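A rough sketch of the described feature construction (13 MFCCs per one-second segment, five statistics each, 65 values in total), assuming librosa for extraction and scikit-learn's MLPClassifier as the artificial neural network; the hidden-layer sizes and the helper name segment_features are illustrative assumptions.

```python
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

def segment_features(path, sr=16000):
    """Split an utterance into 1-second segments and compute 65 statistical
    features per segment: {max, min, mean, median, std} of MFCC 1-13."""
    y, sr = librosa.load(path, sr=sr)
    seg_len = sr  # one-second observation segments, as described in the abstract
    feats = []
    for start in range(0, len(y) - seg_len + 1, seg_len):
        seg = y[start:start + seg_len]
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13)    # shape (13, frames)
        stats = np.concatenate([
            mfcc.max(axis=1), mfcc.min(axis=1), mfcc.mean(axis=1),
            np.median(mfcc, axis=1), mfcc.std(axis=1),
        ])                                                      # 13 x 5 = 65 values
        feats.append(stats)
    return np.array(feats)

# A simple feed-forward ANN over the 65-dimensional vectors (hidden sizes are assumptions).
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
# clf.fit(X_train, y_train); clf.predict(X_test)
```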


2021 ◽  
Vol 11 (4) ◽  
pp. 1890
Author(s):  
Sung-Woo Byun ◽  
Seok-Pil Lee

The goal of the human interface is to recognize the user’s emotional state precisely. In speech emotion recognition research, the most important issue is the effective combination of proper speech feature extraction and an appropriate classification engine. Well-defined speech databases are also needed to accurately recognize and analyze emotions from speech signals. In this work, we constructed a Korean emotional speech database for speech emotion analysis and proposed a feature combination that can improve emotion recognition performance using a recurrent neural network model. To investigate the acoustic features that can reflect distinct momentary changes in emotional expression, we extracted F0, Mel-frequency cepstrum coefficients, spectral features, harmonic features, and others. Statistical analysis was performed to select an optimal combination of acoustic features that affect the emotion in speech. We used a recurrent neural network model to classify emotions from speech. The results show that the proposed system achieves more accurate performance than previous studies.
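As an illustration of the general approach (frame-level acoustic features fed to a recurrent classifier), the sketch below combines F0, MFCCs, and two spectral descriptors and passes the sequence to an LSTM network. The specific feature set, layer sizes, and five-class output are assumptions, not the combination selected by the authors' statistical analysis.

```python
import librosa
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def frame_features(path, sr=16000):
    """Frame-level features: F0, MFCCs, and two spectral descriptors,
    stacked as a (frames, features) sequence for the recurrent model."""
    y, sr = librosa.load(path, sr=sr)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)                 # fundamental frequency per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    n = min(len(f0), mfcc.shape[1], centroid.shape[1], rolloff.shape[1])
    return np.vstack([f0[:n], mfcc[:, :n], centroid[:, :n], rolloff[:, :n]]).T

# Recurrent classifier over the feature sequence (layer sizes and 5 classes are assumptions).
model = tf.keras.Sequential([
    layers.Input(shape=(None, 16)),       # 1 F0 + 13 MFCC + 2 spectral = 16 features per frame
    layers.Masking(),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```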


2021 ◽  
Vol 40 ◽  
pp. 03006
Author(s):  
Beenaa Salian ◽  
Omkar Narvade ◽  
Rujuta Tambewagh ◽  
Smita Bharne

Speech has several distinguishing characteristic features, which has made it a state-of-the-art source for extracting valuable information from audio samples. Our aim is to develop an emotion recognition system using these speech features that can accurately and efficiently recognize emotions through audio analysis. In this article, we have employed a hybrid neural network comprising four blocks of time-distributed convolutional layers followed by a Long Short-Term Memory layer to achieve this. The audio samples for the speech dataset are collectively assembled from the RAVDESS, TESS and SAVEE audio datasets and are further augmented by injecting noise. Mel spectrograms are computed from the audio samples and used to train the neural network. We have been able to achieve a testing accuracy of about 89.26%.
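A minimal sketch of this kind of hybrid network, assuming log-mel spectrograms split into short time windows, four time-distributed convolutional blocks, and a final LSTM; the window shape, filter counts, and eight-class output (RAVDESS-style labels) are assumptions for illustration, not the authors' exact configuration.

```python
import librosa
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def mel_spectrogram(y, sr=16000, n_mels=128):
    """Log-mel spectrogram used as the network input."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)

def add_noise(y, noise_factor=0.005):
    """Simple augmentation: inject Gaussian noise into the waveform."""
    return y + noise_factor * np.random.randn(len(y))

# Hybrid model: the spectrogram is cut into short time windows, each window passes
# through the same CNN block (TimeDistributed), and the per-window embeddings are
# summarized by an LSTM. Input shape is (windows, mels, frames_per_window, 1).
inputs = layers.Input(shape=(None, 128, 16, 1))
x = inputs
for f in (16, 32, 64, 128):                         # four time-distributed CNN blocks
    x = layers.TimeDistributed(layers.Conv2D(f, 3, activation="relu", padding="same"))(x)
    x = layers.TimeDistributed(layers.BatchNormalization())(x)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
x = layers.TimeDistributed(layers.Flatten())(x)
x = layers.LSTM(256)(x)
outputs = layers.Dense(8, activation="softmax")(x)  # assumed 8 emotion classes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```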

