The Research of Speech Emotion Recognition Based on Gaussian Mixture Model

2014, Vol 668-669, pp. 1126-1129
Author(s): Wan Li Zhang, Guo Xin Li, Wei Gao

A new recognition method based on the Gaussian mixture model (GMM) is proposed in this paper for speech emotion recognition. To improve the effectiveness of feature extraction and the accuracy of emotion recognition, Mel-frequency cepstral coefficients (MFCCs) are extracted and combined with the Gaussian mixture model to recognize speech emotion. Feature parameters are extracted according to the principles of speech production; emotion models based on the Gaussian mixture model are then generated, and the similarity between an utterance and each emotion template is computed for classification. A series of experiments on recorded speech shows that the system achieves high recognition performance and good robustness.
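As a rough illustration of the pipeline this abstract describes, the sketch below extracts frame-level MFCCs and trains one GMM per emotion, classifying an utterance by maximum average log-likelihood. The file lists, emotion labels, and hyperparameters (13 MFCCs, 16 mixture components) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of MFCC extraction + per-emotion GMMs (assumed hyperparameters).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_mfcc(path, n_mfcc=13):
    """Load a speech file and return its frame-level MFCC matrix (frames x coefficients)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_emotion_models(files_by_emotion, n_components=16):
    """Fit one GMM per emotion on the pooled MFCC frames of that emotion."""
    models = {}
    for emotion, paths in files_by_emotion.items():
        frames = np.vstack([extract_mfcc(p) for p in paths])
        models[emotion] = GaussianMixture(n_components=n_components,
                                          covariance_type="diag").fit(frames)
    return models

def classify(path, models):
    """Assign the emotion whose GMM gives the highest average frame log-likelihood."""
    frames = extract_mfcc(path)
    return max(models, key=lambda e: models[e].score(frames))
```

In this scheme the per-class likelihood scores play the role of the template similarities mentioned in the abstract: the best-scoring emotion model is taken as the recognized emotion.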

2013, Vol 380-384, pp. 3530-3533
Author(s): Yong Qiang Bao, Li Zhao, Cheng Wei Huang

In this paper we study speech emotion recognition from Mandarin speech. Five basic emotion classes and the neutral state are considered. In a listening experiment we verified the speech corpus using a judgment matrix. Acoustic parameters including short-term energy, pitch contour, and formants are extracted from the emotional speech signal, and a Gaussian mixture model (GMM) is then adopted for training the emotion models. To cope with the limited data available for GMM training, we use multiple discriminant analysis for feature optimization and compare it with a baseline method based on the Fisher discriminant ratio. The experimental results show that, with multiple discriminant analysis, our GMM classifier achieves a promising recognition rate for Mandarin speech emotion recognition.
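The sketch below shows one plausible form of the feature-optimization step: utterance-level acoustic features (statistics of energy, pitch, and formants) are projected with multiple discriminant analysis (linear discriminant analysis in scikit-learn) before per-emotion GMMs are fitted in the reduced space. The feature layout, class count, and mixture sizes are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: multiple discriminant analysis (LDA) projection followed by per-class GMMs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def train_with_mda(X, y, n_emotions=6, n_components=4):
    """X: utterance-level acoustic features; y: emotion labels in 0..n_emotions-1."""
    y = np.asarray(y)
    # LDA projects onto at most (n_classes - 1) discriminant directions.
    lda = LinearDiscriminantAnalysis(n_components=n_emotions - 1)
    X_proj = lda.fit_transform(X, y)
    # Small per-class GMMs in the reduced space ease the data-scarcity problem.
    models = {c: GaussianMixture(n_components=n_components,
                                 covariance_type="diag").fit(X_proj[y == c])
              for c in range(n_emotions)}
    return lda, models

def predict(lda, models, X_new):
    """Classify each new utterance by the best-scoring per-class GMM."""
    X_proj = lda.transform(X_new)
    return np.array([max(models, key=lambda c: models[c].score(x.reshape(1, -1)))
                     for x in X_proj])
```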


2015, Vol 2015, pp. 1-13
Author(s): Hariharan Muthusamy, Kemal Polat, Sazali Yaacob

Recently, researchers have paid increasing attention to inferring the emotional state of an individual from his/her speech signals, as speech is the fastest and most natural method of communication between individuals. In this work, a new feature enhancement method using the Gaussian mixture model (GMM) is proposed to increase the discriminatory power of the features extracted from speech and glottal signals. Three different emotional speech databases were used to evaluate the proposed methods. Extreme learning machine (ELM) and k-nearest neighbor (kNN) classifiers were employed to classify the different types of emotions. Several experiments were conducted, and the results show that the proposed methods significantly improve speech emotion recognition performance compared to previously published work.
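The abstract does not spell out the GMM-based enhancement itself, so the sketch below only illustrates one common construction: appending per-class GMM log-likelihood scores to each raw feature vector and classifying the enhanced features with kNN. It is not the authors' exact method, ELM is omitted (it has no standard scikit-learn implementation), and the class count and neighbourhood size are illustrative assumptions.

```python
# Sketch of a GMM-derived feature augmentation followed by kNN classification.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

def gmm_enhanced_features(X, y, n_classes, n_components=8):
    """Append one log-likelihood score per class-conditional GMM to each feature vector."""
    y = np.asarray(y)
    gmms = [GaussianMixture(n_components=n_components).fit(X[y == c])
            for c in range(n_classes)]
    scores = np.column_stack([g.score_samples(X) for g in gmms])
    return np.hstack([X, scores]), gmms

# Usage (X_train, y_train assumed to be utterance-level features and emotion labels):
# X_enh, gmms = gmm_enhanced_features(X_train, y_train, n_classes=6)
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_enh, y_train)
```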


2013, Vol 38 (4), pp. 457-463
Author(s): Chengwei Huang, Guoming Chen, Hua Yu, Yongqiang Bao, Li Zhao

Abstract A speaker's emotional state is recognized from speech corrupted by additive white Gaussian noise (AWGN). The influence of white noise on a typical emotion recognition system is studied. The emotion classifier is implemented with a Gaussian mixture model (GMM). A Chinese speech emotion database is used for training and testing, which includes nine emotion classes: happiness, sadness, anger, surprise, fear, anxiety, hesitation, confidence, and the neutral state. Two speech enhancement algorithms are introduced to improve emotion classification. In the experiments, the Gaussian mixture model is trained on clean speech data and tested under AWGN at various signal-to-noise ratios (SNRs). Both the emotion class model and the dimension space model are adopted for evaluating the emotion recognition system. With the emotion class model, the nine emotion classes are classified directly; with the dimension space model, the arousal and valence dimensions are each classified as positive or negative. The experimental results show that the speech enhancement algorithms consistently improve the performance of our emotion recognition system across SNRs, and that positive emotions are more likely to be misclassified as negative emotions in a white-noise environment.
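The noisy test condition described above can be reproduced by mixing white Gaussian noise into a clean utterance at a target SNR before feature extraction, as in the sketch below. The signal variables and SNR grid are illustrative, and the speech-enhancement front end used in the paper is not reproduced here.

```python
# Sketch: corrupt a clean speech signal with AWGN at a requested SNR (in dB).
import numpy as np

def add_awgn(signal, snr_db):
    """Return the signal plus white Gaussian noise scaled to the target SNR."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Typical evaluation loop: the GMM is trained on clean features, then tested on
# features extracted from add_awgn(clean_speech, snr) for a range of SNRs, e.g.
# for snr in (0, 5, 10, 15, 20): noisy = add_awgn(clean_speech, snr)
```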

