Speech Emotion Recognition Based on Three-Channel Feature Fusion of CNN and BiLSTM

Author(s): Lilong Huang ◽ Jing Dong ◽ Dongsheng Zhou ◽ Qiang Zhang
2020 ◽ Vol 1616 ◽ pp. 012106

Author(s): Xia Pengfei ◽ Zhou Houpan ◽ Zhou Weidong
2016 ◽ Vol 2016 ◽ pp. 1-11

Author(s): Zou Cairong ◽ Zhang Xinran ◽ Zha Cheng ◽ Zhao Li

Feature fusion from separate sources is a current technical difficulty in cross-corpus speech emotion recognition. The purpose of this paper is to use the emotional information hidden in the speech spectrum diagram (spectrogram) as image features, on the basis of Deep Belief Nets (DBN) in deep learning, and then fuse them with traditional emotion features. First, based on spectrogram analysis with the STB/Itti model, new spectrogram features are extracted from the color, brightness, and orientation channels, respectively; then two alternative DBN models are used to fuse the traditional and spectrogram features, which enlarges the feature subset and strengthens its ability to characterize emotion. In experiments on the ABC database and Chinese corpora, the new feature subset improves cross-corpus recognition by 8.8% compared with traditional speech emotion features. The proposed method provides a new idea for feature fusion in emotion recognition.
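
Below is a minimal sketch of the fusion idea the abstract describes, assuming the spectrogram is treated as an RGB image whose Itti-style color, brightness, and orientation channels are summarized and then fused with conventional acoustic features through stacked BernoulliRBM layers from scikit-learn as a rough stand-in for a DBN. The helper names (gabor_kernel, spectrogram_channels, fuse_features), channel summaries, filter parameters, and layer sizes are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of Itti-style spectrogram channels + RBM-stack feature fusion.
# Synthetic data stands in for real spectrograms and acoustic features.
import numpy as np
from scipy.signal import convolve2d
from sklearn.neural_network import BernoulliRBM


def gabor_kernel(theta, ksize=9, sigma=2.0, lam=4.0):
    """Simple real Gabor kernel for one orientation (assumed parameters)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)


def spectrogram_channels(rgb):
    """Color, brightness, and orientation summaries of a spectrogram image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0                      # brightness channel
    rg, by = r - g, b - (r + g) / 2.0                  # color-opponency channels
    orient = [np.abs(convolve2d(intensity, gabor_kernel(t), mode="same"))
              for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
    maps = [intensity, rg, by] + orient
    # Summarize each map by its mean and standard deviation (illustrative choice).
    return np.array([stat for m in maps for stat in (m.mean(), m.std())])


def fuse_features(spec_feats, traditional_feats, n_hidden=(64, 32), seed=0):
    """Fuse spectrogram and traditional features with two stacked RBMs."""
    x = np.hstack([spec_feats, traditional_feats])
    # Scale to [0, 1]; BernoulliRBM expects inputs in that range.
    x = (x - x.min(axis=0)) / (np.ptp(x, axis=0) + 1e-8)
    h = x
    for n in n_hidden:
        rbm = BernoulliRBM(n_components=n, learning_rate=0.05,
                           n_iter=20, random_state=seed)
        h = rbm.fit_transform(h)                       # greedy layer-wise training
    return h                                           # fused representation


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for real data: 20 spectrogram "images" and 20 traditional
    # feature vectors (e.g. MFCC / prosody statistics), purely synthetic.
    spec = np.array([spectrogram_channels(rng.random((64, 64, 3))) for _ in range(20)])
    trad = rng.random((20, 24))
    fused = fuse_features(spec, trad)
    print(fused.shape)  # e.g. (20, 32)
```

In a real setup the fused representation would feed a classifier trained on one corpus and evaluated on another; that cross-corpus step is omitted here.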

