Emotion Analysis from Human Voice Using Various Prosodic Features and Text Analysis

2020 ◽  
Vol 17 (9) ◽  
pp. 4244-4247
Author(s):  
Vybhav Jain ◽  
S. B. Rajeshwari ◽  
Jagadish S. Kallimani

Emotion Analysis is an active field of research that aims to recognize a person's emotions from their voice alone, a task more widely known as the Speech Emotion Recognition (SER) problem. The problem has been studied for more than a decade, with results drawn from either Voice Analysis or Text Analysis. Individually, both methods have achieved good accuracy, but using them in unison has produced markedly better results than either one alone. When people of different age groups speak, understanding the emotions behind their words helps us respond appropriately. To this end, the paper implements a model that performs Emotion Analysis based on both Tone and Text Analysis. The prosodic features of the tone are analyzed, and the speech is then converted to text; Sentiment Analysis of the extracted text further improves the accuracy of the Emotion Recognition.
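
A minimal sketch of such a tone-plus-text pipeline is shown below. The library choices (librosa for prosodic features, SpeechRecognition for transcription, NLTK's VADER for text sentiment) and the fusion heuristic are illustrative assumptions, not the paper's implementation:

```python
# Illustrative tone + text emotion pipeline (library choices are assumptions).
import librosa
import numpy as np
import speech_recognition as sr
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')

def prosodic_features(wav_path):
    """Extract simple prosodic descriptors: pitch and energy statistics."""
    y, sr_hz = librosa.load(wav_path, sr=None)
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr_hz)  # fundamental-frequency track
    rms = librosa.feature.rms(y=y)[0]                 # short-term energy
    return np.array([np.nanmean(f0), np.nanstd(f0), rms.mean(), rms.std()])

def text_sentiment(wav_path):
    """Transcribe the speech, then score sentiment of the transcript."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    text = recognizer.recognize_google(audio)         # cloud ASR; needs network
    return SentimentIntensityAnalyzer().polarity_scores(text)["compound"]

def fused_score(wav_path, w_tone=0.5, w_text=0.5):
    """Toy late fusion: weighted combination of tone and text evidence."""
    tone_vec = prosodic_features(wav_path)
    # A real system would feed tone_vec to a trained classifier; the crude
    # pitch heuristic below only stands in for that step.
    tone_valence = np.tanh(tone_vec[0] / 200.0 - 1.0)
    return w_tone * tone_valence + w_text * text_sentiment(wav_path)
```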

Author(s):  
Yashwardhan Bhangdia ◽  
Rashi Bhansali ◽  
Ninad Chaudhari ◽  
Dimple Chandnani ◽  
M L Dhore

2012 ◽  
Vol 241-244 ◽  
pp. 1677-1681
Author(s):  
Yu Tai Wang ◽  
Jie Han ◽  
Xiao Qing Jiang ◽  
Jing Zou ◽  
Hui Zhao

The paper reviews the present status of speech emotion recognition. Emotional databases of Chinese speech and facial expressions were established, with noise stimuli and movie clips used to evoke the subjects' emotions. For different emotional states, we analyzed single-mode emotion recognition based on the prosodic features of speech and on the geometric features of facial expression. We then discuss bimodal emotion recognition using a Gaussian Mixture Model; a sketch of this fusion scheme follows. The experimental results show that the bimodal recognition rate, which combines facial expression, is about 6% higher than the single-mode rate using prosodic features alone.
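
A minimal sketch of GMM-based bimodal fusion under stated assumptions: scikit-learn's GaussianMixture stands in for whatever GMM implementation the authors used, with one GMM per emotion per modality and log-likelihoods summed for the bimodal decision:

```python
# Illustrative GMM-based bimodal emotion recognition (not the paper's code):
# one GMM per emotion per modality; fusion sums log-likelihoods.
import numpy as np
from sklearn.mixture import GaussianMixture

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

def train_gmms(features_by_emotion, n_components=4):
    """Fit one GMM per emotion on that emotion's training feature vectors."""
    models = {}
    for emotion, X in features_by_emotion.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        models[emotion] = gmm.fit(X)
    return models

def classify_bimodal(speech_gmms, face_gmms, speech_vec, face_vec):
    """Score both modalities and pick the emotion with the best joint fit."""
    scores = {}
    for emotion in EMOTIONS:
        # score_samples returns the per-sample log-likelihood under the GMM
        s = speech_gmms[emotion].score_samples(speech_vec[None, :])[0]
        f = face_gmms[emotion].score_samples(face_vec[None, :])[0]
        scores[emotion] = s + f  # product of likelihoods in log space
    return max(scores, key=scores.get)
```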


2010 ◽  
Vol E93-D (10) ◽  
pp. 2813-2821 ◽  
Author(s):  
Yu ZHOU ◽  
Junfeng LI ◽  
Yanqing SUN ◽  
Jianping ZHANG ◽  
Yonghong YAN ◽  
...  

2021 ◽  
Author(s):  
Lasse Hansen ◽  
Yan-Ping Zhang ◽  
Detlef Wolf ◽  
Konstantinos Sechidis ◽  
Nicolai Ladegaard ◽  
...  

Objective: Affective disorders have long been associated with atypical voice patterns; however, current work on automated voice analysis often suffers from small sample sizes and untested generalizability. This study investigated a generalizable approach to aid the clinical evaluation of depression and remission from voice. Methods: A Mixture-of-Experts machine learning model was trained to infer happy/sad emotional state using three publicly available emotional speech corpora. We examined the model's predictive ability to classify the presence of depression in Danish-speaking healthy controls (N = 42), patients with first-episode major depressive disorder (MDD) (N = 40), and the same patients in remission (N = 25), based on recorded clinical interviews. The model was evaluated on raw data, data cleaned of background noise, and speaker-diarized data. Results: The model showed reliable separation between healthy controls and depressed patients at the first visit, obtaining an AUC of 0.71. Further, we observed a reliable treatment effect in the depression group, with speech from patients in remission being indistinguishable from that of the control group. Model predictions were stable throughout the interview, suggesting that as little as 20-30 seconds of speech is enough to accurately screen a patient. Background noise (but not speaker diarization) heavily impacted predictions, suggesting that a controlled environment and consistent preprocessing pipelines are crucial for correct characterization. Conclusion: A generalizable speech emotion recognition model can effectively reveal changes in speakers' depressive states before and after treatment in patients with MDD. Data collection settings and data cleaning are crucial when considering automated voice analysis for clinical purposes.
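
A minimal sketch of the evaluation idea, under stated assumptions: `window_sad_probs` is a hypothetical helper standing in for the trained Mixture-of-Experts model, 5-second windows are an illustrative choice, and scikit-learn's roc_auc_score measures the control-vs-MDD separation from per-speaker scores averaged over the first ~30 seconds, as the abstract suggests:

```python
# Illustrative evaluation: turn window-level happy/sad probabilities into a
# per-speaker depression score and measure control-vs-MDD separation.
import numpy as np
from sklearn.metrics import roc_auc_score

def speaker_score(window_probs, window_s=5.0, budget_s=30.0):
    """Average the first ~30 s of window-level P(sad), per the abstract's
    observation that 20-30 seconds of speech already stabilizes the score."""
    n = int(budget_s // window_s)
    return float(np.mean(window_probs[:n]))

def depression_auc(control_probs, mdd_probs):
    """AUC of per-speaker scores: 1.0 = perfect separation, 0.5 = chance."""
    scores = [speaker_score(p) for p in control_probs + mdd_probs]
    labels = [0] * len(control_probs) + [1] * len(mdd_probs)
    return roc_auc_score(labels, scores)

# Usage (window_sad_probs is hypothetical, not from the paper):
# controls = [window_sad_probs(wav) for wav in control_wavs]
# patients = [window_sad_probs(wav) for wav in mdd_wavs]
# print(depression_auc(controls, patients))  # abstract reports ~0.71
```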

