Multimodal Music Emotion Recognition Using Unsupervised Deep Neural Networks

Author(s):  
Jianchao Zhou ◽  
Xiaoou Chen ◽  
Deshun Yang

Author(s):  
Syed Asif Ahmad Qadri ◽  
Teddy Surya Gunawan ◽  
Taiba Majid Wani ◽  
Eliathamby Ambikairajah ◽  
Mira Kartiwi ◽  
...  

2017 ◽ Vol 11 (8) ◽ pp. 1301-1309

Author(s):  
Panagiotis Tzirakis ◽  
George Trigeorgis ◽  
Mihalis A. Nicolaou ◽  
Björn W. Schuller ◽  
Stefanos Zafeiriou

2017 ◽ Vol 7 (10) ◽ pp. 1060

Author(s):  
Youjun Li ◽  
Jiajin Huang ◽  
Haiyan Zhou ◽  
Ning Zhong

Author(s):  
Jaejin Cho ◽  
Raghavendra Pappagari ◽  
Purva Kulkarni ◽  
Jesús Villalba ◽  
Yishay Carmiel ◽  
...  

2021 ◽ Vol 3

Author(s):  
Weili Guo ◽  
Guangyu Li ◽  
Jianfeng Lu ◽  
Jian Yang

Human emotion recognition is an important issue in human–computer interaction, and electroencephalography (EEG) has been widely applied to emotion recognition owing to its high reliability. In recent years, methods based on deep learning have achieved state-of-the-art performance in EEG-based emotion recognition. However, singularities exist in the parameter space of deep neural networks, and they can dramatically slow down the training process. It is therefore worth investigating the specific influence of singularities when applying deep neural networks to EEG-based emotion recognition. In this paper, we focus on this problem and analyze the singular learning dynamics of deep multilayer perceptrons both theoretically and numerically. The results can help in designing better algorithms that overcome the serious influence of singularities in deep neural networks for EEG-based emotion recognition.
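The singularities the abstract refers to can be made concrete in a minimal numerical sketch. The setting below is an assumption for illustration only (a one-hidden-layer perceptron with scalar input and two tanh hidden units, not the paper's model): when two hidden units share the same incoming weight, columns of the network's output Jacobian become linearly dependent, so the Fisher information matrix degenerates at that parameter point, which is what slows gradient learning nearby.

```python
import numpy as np

def output_jacobian(w, v, xs):
    # y(x) = sum_j v[j] * tanh(w[j] * x): a toy one-hidden-layer perceptron
    # with scalar input and output (hypothetical minimal setting).
    # Returns the Jacobian of the batch outputs w.r.t. (w_1..w_h, v_1..v_h).
    cols = []
    for j in range(len(w)):
        cols.append(v[j] * xs / np.cosh(w[j] * xs) ** 2)  # dy/dw_j
    for j in range(len(w)):
        cols.append(np.tanh(w[j] * xs))                   # dy/dv_j
    return np.stack(cols, axis=1)

xs = np.linspace(-2, 2, 50)

# Generic point: distinct hidden weights -> independent Jacobian columns.
J_generic = output_jacobian(np.array([0.5, 1.5]), np.array([1.0, -0.7]), xs)

# Singular point: both hidden units share the same weight, so dy/dv_1 and
# dy/dv_2 coincide (and dy/dw_1, dy/dw_2 become proportional).
J_singular = output_jacobian(np.array([1.0, 1.0]), np.array([1.0, -0.7]), xs)

rank_generic = np.linalg.matrix_rank(J_generic)    # full rank: 4 parameters
rank_singular = np.linalg.matrix_rank(J_singular)  # rank drops: degenerate directions
```

Since the Fisher information is (up to scaling) the Gram matrix of these Jacobian columns, the rank drop at the singular point means some parameter directions carry no gradient signal, illustrating why training can stall near such configurations.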


Author(s):  
Yan-Han Chew ◽  
Lai-Kuan Wong ◽  
John See ◽  
Huai-Qian Khor ◽  
Balasubramanian Abivishaq

Author(s):  
Biqiao Zhang ◽  
Yuqing Kong ◽  
Georg Essl ◽  
Emily Mower Provost

In this paper, we propose a Deep Metric Learning (DML) approach that supports soft labels. DML seeks to learn representations that encode the similarity between examples through deep neural networks. DML generally presupposes that data can be divided into discrete classes using hard labels. However, some tasks, such as our exemplary domain of speech emotion recognition (SER), work with inherently subjective data, for which it may not be possible to identify a single hard label. We propose a family of loss functions, f-Similarity Preservation Loss (f-SPL), based on the dual form of f-divergence, for DML with soft labels. We show that the minimizer of f-SPL preserves the pairwise label similarities in the learned feature embeddings. We demonstrate the efficacy of the proposed loss function on the task of cross-corpus SER with soft labels. Our approach, which combines f-SPL with a classification loss, significantly outperforms a baseline SER system with the same structure but trained with only the classification loss in most experiments. We show that the presented techniques are more robust to over-training and can learn an embedding space in which the similarity between examples is meaningful.
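The idea of preserving pairwise label similarities in the embedding space can be sketched roughly as follows. This is a simplified squared-error stand-in, not the paper's f-divergence-derived f-SPL, and every function name and similarity choice here is an assumption for illustration: label similarity is taken as the dot product of soft-label distributions, and embedding similarity as cosine similarity.

```python
import numpy as np

def pairwise_label_similarity(soft_labels):
    # soft_labels: (n, k) array whose rows are probability distributions
    # over emotion classes. Similarity = dot product of label distributions
    # (an assumed choice; the paper works with f-divergence duals instead).
    return soft_labels @ soft_labels.T

def embedding_similarity(embeddings):
    # Cosine similarity between L2-normalised embeddings.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

def similarity_preservation_loss(embeddings, soft_labels):
    # Penalise mismatch between pairwise label similarity and pairwise
    # embedding similarity over all distinct pairs in the batch.
    s_label = pairwise_label_similarity(soft_labels)
    s_embed = embedding_similarity(embeddings)
    n = len(soft_labels)
    mask = ~np.eye(n, dtype=bool)  # ignore self-pairs
    return np.mean((s_label[mask] - s_embed[mask]) ** 2)
```

An embedding that places same-label utterances close together incurs a lower loss than one that scatters them, which is the similarity-preservation property the abstract describes; the paper's contribution is deriving such losses in a principled f-divergence form rather than this ad-hoc squared error.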

