speaker independent
Recently Published Documents


TOTAL DOCUMENTS

646
(FIVE YEARS 61)

H-INDEX

32
(FIVE YEARS 3)

2022 ◽  
pp. 1146-1156
Author(s):  
Revathi A. ◽  
Sasikaladevi N.

This chapter on multi-speaker, speaker-independent emotion recognition covers the use of perceptual features, with filters spaced on the equivalent rectangular bandwidth (ERB) and Bark scales, together with a vector quantization (VQ) classifier for assigning utterances to emotion groups and an artificial neural network trained with the back-propagation algorithm for classifying emotions within a group. Performance can be improved by using a large amount of data for each emotion to adequately train the system. Even with a limited data set, the proposed system provided consistently better accuracy for the perceptual feature with critical-band analysis done on the ERB scale.
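The chapter includes no code; as a rough, self-contained sketch of ERB-spaced analysis, the snippet below builds a triangular filterbank whose center frequencies are uniform on the ERB-rate scale (Glasberg and Moore's formula). The triangular filter shape and all function names are illustrative assumptions, not the authors' front end, which may instead use gammatone filters.

```python
import numpy as np

def hz_to_erb_rate(f_hz):
    """Glasberg & Moore (1990) ERB-rate scale."""
    return 21.4 * np.log10(4.37e-3 * f_hz + 1.0)

def erb_rate_to_hz(erb):
    """Inverse of hz_to_erb_rate."""
    return (10.0 ** (erb / 21.4) - 1.0) / 4.37e-3

def erb_center_frequencies(n_points, f_min=50.0, f_max=8000.0):
    """Frequencies spaced uniformly on the ERB-rate scale."""
    erb_lo, erb_hi = hz_to_erb_rate(f_min), hz_to_erb_rate(f_max)
    return erb_rate_to_hz(np.linspace(erb_lo, erb_hi, n_points))

def erb_filterbank(n_filters, n_fft, sr, f_min=50.0, f_max=8000.0):
    """Triangular filters centered on ERB-spaced frequencies
    (a simplification of a perceptual gammatone front end)."""
    centers = erb_center_frequencies(n_filters + 2, f_min, f_max)
    bins = np.floor((n_fft + 1) * centers / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        fb[i, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)  # rising edge
        fb[i, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)  # falling edge
    return fb
```

Applying this matrix to a magnitude spectrogram yields the ERB-scale critical-band energies that features like those in the chapter are derived from; swapping the center-frequency formula for the Bark equivalent gives the Bark-scale variant.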


2021 ◽  
Vol 22 (4) ◽  
pp. 471-481
Author(s):  
K.T. Koshekov ◽  
A.A. Savostin ◽  
B.K. Seidakhmetov ◽  
R.K. Anayatova ◽  
I.O. Fedorov

This paper proposes a method for automatic speaker-independent recognition of human psycho-emotional states from the speech signal, based on deep learning, to address problems in aviation profiling. To this end, an algorithm was developed to classify seven psycho-emotional states: anger, joy, fear, surprise, disgust, sadness, and a neutral state. The algorithm uses Mel-frequency cepstral coefficients (MFCCs) and Mel spectrograms as informative features of the speech-signal audio recordings. These features are used to train two deep convolutional neural networks on the generated dataset. Testing the developed classifier on a held-out validation dataset yielded a multiclass accuracy (fraction of correct answers) of 0.93. The proposed solution can be applied to the creation of human-machine interfaces, as well as in medicine, marketing, and air transportation.
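The paper gives no implementation details beyond the feature types, so the sketch below only illustrates the described pipeline under stated assumptions: librosa for MFCC and Mel-spectrogram extraction, and a deliberately small PyTorch CNN standing in for one of the two (unpublished) deep networks. `extract_features`, `EmotionCNN`, and all layer sizes are placeholders.

```python
import librosa
import torch
import torch.nn as nn

EMOTIONS = ["anger", "joy", "fear", "surprise", "disgust", "sadness", "neutral"]

def extract_features(wav_path, sr=16000, n_mfcc=40, n_mels=64):
    """MFCCs and a log-Mel spectrogram, the two feature 'images' for the CNNs."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    log_mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels))
    return mfcc, log_mel

class EmotionCNN(nn.Module):
    """A small convolutional classifier over a (1, freq, time) feature map."""
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)))          # fixed size for any input length
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```

In a setup like the paper's, one such network would be trained on the MFCC input and one on the Mel spectrogram, with their predictions combined at inference time.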


2021 ◽  
Author(s):  
Tomoaki Hayakawa ◽  
Chee Siang Leow ◽  
Akio Kobayashi ◽  
Takehito Utsuro ◽  
Hiromitsu Nishizaki

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5317
Author(s):  
Jingyu Quan ◽  
Yoshihiro Miyake ◽  
Takayuki Nozawa

During social interaction, humans recognize others' emotions via both individual features and interpersonal features. However, most previous automatic emotion recognition techniques have used only individual features and have not tested the importance of interpersonal features. In the present study, we asked whether interpersonal features, especially time-lagged synchronization features, benefit the performance of automatic emotion recognition. We explored this question in a main experiment (speaker-dependent emotion recognition) and a supplementary experiment (speaker-independent emotion recognition) by building an individual framework and an interpersonal framework for the visual, audio, and cross-modal settings, respectively. In the main experiment, the interpersonal framework outperformed the individual framework in every modality. The supplementary experiment showed that the interpersonal framework led to better performance even for unknown communication pairs. We therefore conclude that interpersonal features are useful for boosting the performance of automatic emotion recognition, and we hope this study draws attention to them.
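The abstract names time-lagged synchronization features without specifying their form; one common realization is the lagged cross-correlation between the two interlocutors' z-scored feature streams, sketched below. The function name, lag range, and normalization are assumptions, not the authors' definition.

```python
import numpy as np

def lagged_sync_features(feat_a, feat_b, max_lag=10):
    """Time-lagged synchronization between two speakers' feature streams.

    feat_a, feat_b: frame-aligned arrays of shape (time, dims).
    Returns, per feature dimension, the cross-correlation at every lag
    in [-max_lag, max_lag]; the result can be concatenated with each
    speaker's individual features as an interpersonal descriptor.
    """
    a = (feat_a - feat_a.mean(0)) / (feat_a.std(0) + 1e-8)  # z-score over time
    b = (feat_b - feat_b.mean(0)) / (feat_b.std(0) + 1e-8)
    out = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:            # speaker B leads speaker A
            x, y = a[:lag], b[-lag:]
        elif lag > 0:          # speaker A leads speaker B
            x, y = a[lag:], b[:-lag]
        else:
            x, y = a, b
        out.append((x * y).mean(0))  # per-dimension correlation at this lag
    return np.stack(out)  # shape: (2 * max_lag + 1, dims)
```

A lag profile like this captures not only whether the two speakers' behavior is synchronized but also who leads whom and by how much, which is the kind of interpersonal information the individual framework cannot see.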

