Short-time-span EEG-based personalized emotion recognition with deep convolutional neural network

Author(s):  
Kit Hwa Cheah ◽  
Humaira Nisar ◽  
Vooi Voon Yap ◽  
Chen-Yi Lee
2021 ◽  
Author(s):  
Naveen Kumari ◽  
Rekha Bhatia

Facial emotion recognition extracts human emotions from images and videos. It therefore requires an algorithm that can understand and model the relationships between faces and facial expressions, and recognize human emotions. Recently, deep learning models have been extensively utilized to enhance the facial emotion recognition rate. However, deep learning models suffer from overfitting and perform poorly on images with poor visibility and noise. Therefore, in this paper, a novel deep-learning-based facial emotion recognition tool is proposed. Initially, a joint trilateral filter is applied to the dataset to remove noise. Thereafter, contrast-limited adaptive histogram equalization (CLAHE) is applied to the filtered images to improve their visibility. Finally, a deep convolutional neural network is trained, with the Nadam optimizer used to optimize its cost function. Experiments are performed on a benchmark dataset against competitive human emotion recognition models. Comparative analysis demonstrates that the proposed facial emotion recognition model performs considerably better than the competitive models.
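The denoise-then-enhance preprocessing described above can be sketched in plain NumPy. This is a minimal stand-in, not the paper's implementation: a box filter substitutes for the joint trilateral filter, and the CLAHE step is simplified to per-tile clipped histogram equalization without inter-tile interpolation.

```python
import numpy as np

def smooth(img, k=3):
    # box-filter smoothing as a simple stand-in for the joint trilateral filter
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def clahe_tile(tile, clip=0.01, bins=256):
    # clipped histogram equalization of one tile; excess probability mass
    # above the clip limit is redistributed uniformly (core CLAHE idea)
    hist, _ = np.histogram(tile, bins=bins, range=(0, 256))
    hist = hist.astype(float) / tile.size
    excess = np.clip(hist - clip, 0, None).sum()
    hist = np.minimum(hist, clip) + excess / bins
    cdf = np.cumsum(hist)
    idx = np.clip(tile.astype(int), 0, 255)
    return cdf[idx] * 255.0

def preprocess(img, tile=8):
    # step 1: denoise; step 2: tile-wise contrast enhancement
    img = smooth(img)
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = clahe_tile(img[y:y + tile, x:x + tile])
    return out

rng = np.random.default_rng(0)
face = rng.uniform(0, 255, size=(64, 64))   # synthetic grayscale face image
enhanced = preprocess(face)
```

The enhanced images would then be fed to the deep convolutional network for training; the clip limit and tile size are the usual CLAHE knobs trading contrast gain against noise amplification.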


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6008 ◽  
Author(s):  
Misbah Farooq ◽  
Fawad Hussain ◽  
Naveed Khan Baloch ◽  
Fawad Riasat Raja ◽  
Heejung Yu ◽  
...  

Speech emotion recognition (SER) plays a significant role in human–machine interaction. Emotion recognition from speech and its precise classification is a challenging task because a machine is unable to understand its context. For accurate emotion classification, emotionally relevant features must be extracted from the speech data. Traditionally, handcrafted features were used for emotion classification from speech signals; however, they are not efficient enough to accurately depict the emotional states of the speaker. In this study, the benefits of a deep convolutional neural network (DCNN) for SER are explored. For this purpose, a pretrained network is used to extract features from state-of-the-art speech emotional datasets. Subsequently, a correlation-based feature selection technique is applied to the extracted features to select the most appropriate and discriminative features for SER. For the classification of emotions, we utilize support vector machines, random forests, the k-nearest neighbors algorithm, and neural network classifiers. Experiments are performed for speaker-dependent and speaker-independent SER using four publicly available datasets: the Berlin Dataset of Emotional Speech (Emo-DB), Surrey Audio Visual Expressed Emotion (SAVEE), Interactive Emotional Dyadic Motion Capture (IEMOCAP), and the Ryerson Audio Visual Dataset of Emotional Speech and Song (RAVDESS). Our proposed method achieves an accuracy of 95.10% for Emo-DB, 82.10% for SAVEE, 83.80% for IEMOCAP, and 81.30% for RAVDESS in speaker-dependent SER experiments. Moreover, our method yields the best results for speaker-independent SER when compared with existing handcrafted-feature-based SER approaches.
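The select-then-classify stage of this pipeline can be sketched as follows. This is a minimal sketch under stated assumptions: the "deep features" are synthesized random vectors standing in for the pretrained DCNN's output, the correlation-based selector ranks features by absolute Pearson correlation with the label, and a plain k-NN replaces the paper's full classifier suite.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "deep features": 200 utterances x 64 features, 2 emotion classes;
# the first 5 features are made class-informative by a mean shift.
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 64))
X[:, :5] += y[:, None] * 2.0

def select_by_correlation(X, y, k):
    # rank features by |Pearson correlation| with the label, keep the top k
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(0) / (np.sqrt((Xc ** 2).sum(0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(r))[:k]

def knn_predict(Xtr, ytr, Xte, k=5):
    # brute-force k-nearest-neighbors vote (binary labels)
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(1) > 0.5).astype(int)

sel = select_by_correlation(X[:150], y[:150], k=10)          # fit selector on train split
pred = knn_predict(X[:150][:, sel], y[:150], X[150:][:, sel])
acc = (pred == y[150:]).mean()
```

Note that the selector is fitted only on the training split, mirroring how a speaker-dependent/independent protocol would keep the test speakers out of feature selection.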


2021 ◽  
Vol 6 (1) ◽  
pp. 1-5
Author(s):  
Steven Lawrence ◽  
Taif Anjum ◽  
Amir Shabani

Facial emotion recognition (FER) is a critical component of affective computing in social companion robotics. Current FER datasets are not sufficiently age-diversified: they consist predominantly of adults and exclude seniors above fifty years of age, the target group in long-term care facilities. Data collection from this age group is more challenging due to privacy concerns and restrictions under pandemic situations such as COVID-19. We address this issue with age augmentation, which can also act as a regularizer and reduce classifier overfitting. Our comprehensive experiments show that augmenting a typical deep convolutional neural network (CNN) architecture with facial age augmentation improves both the accuracy and the standard deviation of the classifier when predicting emotions across diverse age groups, including seniors. The proposed framework is a promising step towards improving participants' experience and interactions with social companion robots through affective computing.
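The augmentation idea can be sketched as a label-preserving dataset expansion. This is a hypothetical sketch: `age_transform` is a placeholder for a real face-aging model (in practice a learned ager, e.g. GAN-based); here it merely perturbs appearance with low-frequency noise so the pipeline is runnable. The key property shown is that emotion labels are copied unchanged, since aging alters appearance, not expression.

```python
import numpy as np

rng = np.random.default_rng(0)

def age_transform(face, severity=0.5):
    # PLACEHOLDER for a real face-aging model; simulates an appearance
    # shift by adding upsampled low-frequency noise to the image
    noise = rng.normal(scale=severity * 10, size=(4, 4))
    up = np.kron(noise, np.ones((face.shape[0] // 4, face.shape[1] // 4)))
    return np.clip(face + up, 0, 255)

def augment_with_ages(images, labels, severities=(0.3, 0.7)):
    # expand the dataset with "aged" copies; emotion labels are preserved
    out_x, out_y = list(images), list(labels)
    for img, lab in zip(images, labels):
        for s in severities:
            out_x.append(age_transform(img, s))
            out_y.append(lab)
    return np.stack(out_x), np.array(out_y)

faces = rng.uniform(0, 255, size=(10, 32, 32))   # synthetic grayscale faces
labels = rng.integers(0, 7, size=10)             # 7 basic emotion classes
aug_x, aug_y = augment_with_ages(faces, labels)
```

With two aging severities per face, the training set triples, which is how the augmentation can act as a regularizer for the CNN.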


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Zhiwen Huang ◽  
Jianmin Zhu ◽  
Jingtao Lei ◽  
Xiaoru Li ◽  
Fengqing Tian

Tool wear monitoring is essential in precision manufacturing to improve surface quality, increase machining efficiency, and reduce manufacturing cost. Although tool wear can be reflected by measurable signals in automatic machining operations, as the volume of collected data grows, features are manually extracted and optimized, which lowers monitoring efficiency and increases prediction error. To address these problems, this paper proposes a tool wear monitoring method for milling operations that uses vibration signals and is based on the short-time Fourier transform (STFT) and a deep convolutional neural network (DCNN). First, an image representation of the acquired vibration signals is obtained via the STFT; a DCNN model is then designed to establish the relationship between the resulting time-frequency maps and tool wear, performing adaptive feature extraction and automatic tool wear prediction. The method is demonstrated on three tool wear experimental datasets collected from a three-flute ball-nose tungsten carbide cutter on a high-speed CNC machine under dry milling. The experimental results show that the proposed method is more accurate and more reliable than the compared methods.
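The first stage, turning a vibration signal into the time-frequency "image" the DCNN consumes, can be sketched with a plain NumPy STFT. The signal below is synthetic (a tooth-passing harmonic plus a higher-frequency component and noise); the window and hop lengths are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stft_map(signal, win=128, hop=64):
    # magnitude STFT: Hann-windowed frames -> |rFFT|, log-compressed,
    # returned as a (frequency, time) array to serve as the CNN input image
    w = np.hanning(win)
    frames = [signal[i:i + win] * w for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log1p(spec).T

fs = 2000                                # assumed sampling rate, Hz
t = np.arange(fs) / fs                   # 1 second of signal
# synthetic vibration: 120 Hz tooth-passing harmonic + 600 Hz component + noise
sig = (np.sin(2 * np.pi * 120 * t)
       + 0.3 * np.sin(2 * np.pi * 600 * t)
       + 0.05 * np.random.default_rng(1).normal(size=fs))
img = stft_map(sig)                      # time-frequency map fed to the DCNN
```

With `win=128` at 2 kHz the frequency resolution is 2000/128 ≈ 15.6 Hz, so the 120 Hz component concentrates around bin 7–8; the DCNN then learns wear-related patterns from such maps instead of hand-tuned features.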

