Relevance of the types and the statistical properties of features in the recognition of basic emotions in speech

2014 · Vol 27 (3) · pp. 425-433
Author(s): Milana Bojanic, Vlado Delic, Milan Secujski

Owing to advances in speech technology and its increasing use in various applications, automatic recognition of emotions in speech is an emerging field in human-computer interaction. This paper deals with several topics related to automatic emotional speech recognition, most notably improving recognition accuracy by reducing the dimensionality of the feature space and evaluating the relevance of particular feature types. The research focuses on the classification of emotional speech into five basic emotional classes (anger, joy, fear, sadness and neutral speech) using a recorded corpus of emotional speech in Serbian.
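The kind of dimensionality reduction described above can be sketched as a simple filter-style feature ranking: score each feature by how well it separates the emotion classes and keep only the top-scoring ones. The Fisher-like score, the feature names, and the toy data below are illustrative assumptions, not the paper's actual feature set or selection method:

```python
# Toy filter-based feature selection: rank features by a Fisher-like
# score (between-class variance / within-class variance) and keep the
# top k. Feature names and data are hypothetical, for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fisher_score(feature_by_class):
    # feature_by_class: one list of values per emotion class, for one feature
    overall = [v for cls in feature_by_class for v in cls]
    grand = mean(overall)
    between = sum(len(c) * (mean(c) - grand) ** 2 for c in feature_by_class)
    within = sum(len(c) * var(c) for c in feature_by_class)
    return between / within if within else float("inf")

def select_top_k(features, k):
    # features: dict mapping feature name -> per-class value lists
    scores = {name: fisher_score(cls) for name, cls in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Two toy emotion classes; "pitch_mean" separates them, "noise" does not.
features = {
    "pitch_mean": [[220.0, 230.0, 225.0], [140.0, 150.0, 145.0]],
    "noise":      [[0.1, 0.9, 0.5],       [0.2, 0.8, 0.6]],
}
print(select_top_k(features, 1))  # keeps the better-separating feature
```

Filter methods like this are cheap because they score each feature independently of the classifier; wrapper methods that re-train the classifier per feature subset are more accurate but far costlier.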

Author(s): Mona Nagy ElBedwehy, G. M. Behery, Reda Elbarougy

Emotion plays a major role in how humans express their feelings through speech, and emotional speech recognition is an important research field in human-computer interaction. Ultimately, endowing machines with the ability to perceive users' emotions will enable more intuitive and reliable interaction. Researchers have proposed many models for recognizing human emotion from speech, one of the best known being the Gaussian mixture model (GMM). However, a GMM may have one or more components with ill-conditioned or singular covariance matrices when the number of features is high and some features are correlated. In this research, a new system based on weighted distance optimization (WDO) has been developed for recognizing emotional speech. The main purpose of the WDO system (WDOS) is to address these GMM shortcomings and increase recognition accuracy. A comparative study across all emotional states, as well as the characteristics of each individual emotional state, shows that WDOS achieves considerable success: it reaches an accuracy of 86.03% for the Japanese language, improving Japanese emotion recognition accuracy by 18.43% compared with GMM and K-means.
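The GMM failure mode mentioned in the abstract can be demonstrated in a few lines: when one feature is a linear function of another, the covariance matrix becomes singular and the Gaussian density cannot be evaluated. The ridge-regularization remedy shown at the end is a standard general technique, not the paper's WDO method:

```python
# Demonstrates why correlated features break a GMM component: a
# perfectly correlated feature pair yields a singular covariance
# matrix (determinant 0), so the Gaussian density is undefined.
# Adding a small ridge term to the diagonal (a common general fix,
# not the paper's WDO method) restores invertibility.

def covariance_2x2(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return [[cxx, cxy], [cxy, cyy]]

def det_2x2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]           # second feature perfectly correlated

cov = covariance_2x2(xs, ys)
print(det_2x2(cov))                  # 0.0 -> singular, density undefined

eps = 1e-3                           # small ridge added to the diagonal
reg = [[cov[0][0] + eps, cov[0][1]],
       [cov[1][0], cov[1][1] + eps]]
print(det_2x2(reg) > 0)              # True -> covariance invertible again
```

In practice GMM implementations expose exactly this knob (a small floor added to the covariance diagonal); reducing the feature count or decorrelating features, as the first paper above pursues, attacks the same problem at its source.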

