Human emotional state recognition using 3D facial expression features

2021 ◽  
Author(s):  
Yun Tie

In recent years there has been growing interest in improving all aspects of the interaction between humans and computers. Emotion recognition is a new research direction in human-computer interaction (HCI), grounded in affective computing, that is expected to significantly improve the quality of HCI systems and communications. Most existing works address this problem using 2D features, but these are sensitive to head pose, clutter, and variations in lighting conditions. In light of such problems, two 3D visual-feature-based approaches are presented in this dissertation. First, we present a recognition method based on a Gabor library for real 3D visual feature extraction and an improved kernel canonical correlation analysis (IKCCA) algorithm for emotion classification. Second, to reduce the computational cost and provide a more general approach, we propose using a fiducial-point-controlled 3D face model to recognize human emotion from video sequences. An elastic body spline (EBS) technique is applied for deformation feature extraction, and a discriminative Isomap (D-Isomap) based classification is used for the final decision. The most significant contributions of this work are the automatic detection and tracking of fiducial points from video sequences to construct a generic 3D face model, and the introduction of EBS deformation features for emotion recognition. The experimental results show the robustness and effectiveness of the proposed methods.
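The Gabor-library step above rests on convolving face data with a bank of Gabor filters at several orientations and scales. As a rough illustration only, here is a generic 2D Gabor kernel with made-up parameters, not the dissertation's actual 3D Gabor library:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor kernel: a Gaussian-windowed cosine grating.

    Filter banks built from such kernels at several orientations and
    scales are a common way to extract texture features from face images.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates by the filter orientation theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# Illustrative parameters; a real library varies wavelength and theta.
k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
```

At the kernel center the Gaussian envelope and the cosine carrier are both 1, so the response there is exactly 1.0.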


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition, using a wireless protocol and a custom-designed wearable electroencephalography (EEG) device. The system collects EEG signals using an eight-electrode placement on the scalp: two electrodes are placed in the frontal lobe and the other six in the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN); both subject-dependent and subject-independent strategies were used. Our experimental results showed that the highest average accuracies achieved in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; these were achieved using a combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. Our results demonstrate the feasibility and efficiency of our proposed EEG-based affective computing method for emotion recognition in real-world applications.
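Of the six entropy measures, sample entropy is reported as the best-performing feature. A dependency-free sketch of how it is commonly computed (the m and r defaults below are conventional choices, not necessarily the paper's settings):

```python
import math

def sample_entropy(signal, m=2, r=0.2):
    """Sample entropy of a 1-D signal (dependency-free illustration).

    B counts pairs of length-m subsequences within Chebyshev distance r,
    A counts the same for length m + 1; SampEn = -ln(A / B). Lower
    values indicate a more regular, predictable signal.
    """
    n = len(signal)

    def count_matches(length):
        templates = [signal[i:i + length] for i in range(n - length + 1)]
        matches = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r:
                    matches += 1
        return matches

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A strictly periodic signal is highly regular, so its entropy is low.
sig = [0.0, 1.0] * 20
```

On EEG epochs the same function would be applied per channel and per frequency band to build the feature vector.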


2021 ◽  
Vol 335 ◽  
pp. 04001
Author(s):  
Didar Dadebayev ◽  
Goh Wei Wei ◽  
Tan Ee Xion

Emotion recognition, as a branch of affective computing, has attracted great attention in the last decades, as it can enable more natural brain-computer interface systems. Electroencephalography (EEG) has proven to be an effective modality for emotion recognition, with which user affective states can be tracked and recorded, especially for primitive emotional dimensions such as arousal and valence. Although brain signals have been shown to correlate with emotional states, the effectiveness of proposed models remains limited. The challenge is improving accuracy, and appropriate extraction of valuable features may be the key to success. This study proposes a framework that incorporates fractal dimension features and a recursive feature elimination approach to enhance the accuracy of EEG-based emotion recognition. Fractal dimension and spectrum-based features will be extracted and used for more accurate emotional state recognition. Recursive feature elimination will be used as the feature selection method, while the classification of emotions will be performed by the support vector machine (SVM) algorithm. The proposed framework will be tested on a widely used public database, and the results are expected to demonstrate higher accuracy and robustness compared to other studies. The contributions of this study are primarily the improvement of EEG-based emotion classification accuracy. There is a potential restriction on how generalizable the results can be, as different EEG datasets might yield different results for the same framework. Therefore, experimenting with different EEG datasets and testing alternative feature selection schemes would be interesting future work.
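Recursive feature elimination works by repeatedly discarding the least useful feature until a target count remains. A toy sketch under stated assumptions: real RFE ranks features by the weights of a fitted model such as an SVM; simple label correlation stands in here to keep the example self-contained.

```python
def recursive_feature_elimination(X, y, n_keep):
    """Toy RFE: repeatedly drop the feature whose absolute correlation
    with the label is weakest, until n_keep feature indices remain."""
    def abs_corr(col):
        xs = [row[col] for row in X]
        mx, my = sum(xs) / len(xs), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        vx = sum((a - mx) ** 2 for a in xs) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (vx * vy)) if vx and vy else 0.0

    remaining = list(range(len(X[0])))
    while len(remaining) > n_keep:
        weakest = min(remaining, key=abs_corr)
        remaining.remove(weakest)
    return remaining

# Feature 0 tracks the label perfectly; 1 is weakly related; 2 is constant.
X = [[i, (i * 7) % 3, 1] for i in range(10)]
y = list(range(10))
```

In the proposed framework the discarded "weakest feature" would instead be the one an SVM weighs least, applied to the fractal dimension and spectral features.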


2021 ◽  
Vol 14 ◽  
Author(s):  
Yinfeng Fang ◽  
Haiyang Yang ◽  
Xuguang Zhang ◽  
Han Liu ◽  
Bo Tao

Due to the rapid development of human-computer interaction, affective computing has attracted more and more attention in recent years. In emotion recognition, electroencephalogram (EEG) signals are easier to record than other physiological signals and are not easily camouflaged. Because of the high-dimensional nature of EEG data and the diversity of human emotions, it is difficult to extract effective EEG features and recognize emotion patterns. This paper proposes a multi-feature deep forest (MFDF) model to identify human emotions. The EEG signals are first divided into several frequency bands, and the power spectral density (PSD) and differential entropy (DE) are then extracted from each band and from the original signal as features. A five-class emotion model is used to mark five emotions: neutral, angry, sad, happy, and pleasant. With either the original features or dimension-reduced features as input, the deep forest is constructed to classify the five emotions. The experiments are conducted on a public dataset for emotion analysis using physiological signals (DEAP). The experimental results are compared with traditional classifiers, including K-nearest neighbors (KNN), random forest (RF), and support vector machine (SVM). The MFDF achieves an average recognition accuracy of 71.05%, which is 3.40%, 8.54%, and 19.53% higher than RF, KNN, and SVM, respectively. Moreover, the accuracies with dimension-reduced features and with the raw EEG signal as input are only 51.30% and 26.71%, respectively. The results of this study show that the method can effectively contribute to EEG-based emotion classification tasks.
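The differential entropy (DE) feature mentioned above has a convenient closed form when a band-passed EEG segment is modeled as Gaussian, which is the standard assumption in this literature. A minimal sketch:

```python
import math

def differential_entropy(band_signal):
    """Differential entropy feature of one EEG frequency band.

    For a band-passed signal assumed Gaussian with variance sigma^2,
    DE reduces to the closed form 0.5 * ln(2 * pi * e * sigma^2),
    so only the sample variance needs to be estimated.
    """
    mean = sum(band_signal) / len(band_signal)
    variance = sum((v - mean) ** 2 for v in band_signal) / len(band_signal)
    return 0.5 * math.log(2 * math.pi * math.e * variance)
```

Computing this per band and per channel, alongside the PSD, yields the multi-feature input that the deep forest classifies.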



2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Ayan Seal ◽  
Puthi Prem Nivesh Reddy ◽  
Pingali Chaithanya ◽  
Arramada Meghana ◽  
Kamireddy Jahnavi ◽  
...  

Human emotion recognition has been a major field of research in the last decades owing to its noteworthy academic and industrial applications. However, most state-of-the-art methods identify emotions by analyzing facial images, while emotion recognition using electroencephalogram (EEG) signals has received less attention, even though EEG signals have the advantage of capturing real emotion. Moreover, very few EEG signal databases are publicly available for affective computing. In this work, we present a database consisting of the EEG signals of 44 volunteers, 23 of whom are female. A 32-channel CLARITY EEG traveler sensor is used to record four emotional states of the subjects, namely happy, fear, sad, and neutral, elicited by showing 12 videos, so that 3 video files are devoted to each emotion. Each recording is labeled with the emotion the participant reported feeling after watching the video. The recorded EEG signals are then used to classify the four types of emotions based on the discrete wavelet transform and an extreme learning machine (ELM), providing an initial benchmark classification performance. The ELM algorithm is used for channel selection followed by subband selection. The proposed method performs best when features are captured from the gamma subband of the FP1-F7 channel, with 94.72% accuracy. The presented database will be made available to researchers for affective recognition applications.
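The discrete wavelet transform underlying the benchmark splits each channel into subbands. A minimal one-level Haar decomposition illustrates the idea (the abstract does not state which mother wavelet is used; Haar is chosen here purely for simplicity):

```python
import math

def haar_dwt_level(signal):
    """One level of the Haar discrete wavelet transform.

    Splits an even-length signal into approximation (low-pass) and
    detail (high-pass) coefficients; repeating the step on the
    approximation yields the subbands used as classifier features.
    """
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail
```

A smooth input produces near-zero detail coefficients, which is why the detail subbands concentrate the fast (e.g., gamma-range) activity that the classifier exploits.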


2020 ◽  
Vol 10 (5) ◽  
pp. 1619 ◽  
Author(s):  
Chao Pan ◽  
Cheng Shi ◽  
Honglang Mu ◽  
Jie Li ◽  
Xinbo Gao

Emotion plays a central part in human attention, decision-making, and communication. Electroencephalogram (EEG)-based emotion recognition has advanced considerably due to the application of brain-computer interfaces (BCI) and its effectiveness compared to body expressions and other physiological signals. Despite significant progress in affective computing, emotion recognition is still a challenging problem. This paper introduces logistic regression (LR) with a Gaussian kernel and a Laplacian prior for EEG-based emotion recognition. The Gaussian kernel enhances the separability of the EEG data in the transformed space, while the Laplacian prior promotes sparsity of the learned LR regressors to avoid over-specification. The LR regressors are optimized using the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm; for simplicity, the introduced method is denoted LORSAL. Experiments were conducted on the dataset for emotion analysis using EEG, physiological, and video signals (DEAP). Various spectral and electrode-combination features (power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM), rational asymmetry (RASM), and differential caudality (DCAU)) were extracted from different frequency bands (Delta, Theta, Alpha, Beta, Gamma, and Total) of the EEG signals. Naive Bayes (NB), support vector machine (SVM), and linear LR with L1-regularization (LR_L1) and L2-regularization (LR_L2) were used for comparison in the binary emotion classification of valence and arousal. LORSAL obtained the best classification accuracies (77.17% and 77.03% for valence and arousal, respectively) on the DE features extracted from the total frequency band. This paper also investigates the critical frequency bands in emotion recognition; the experimental results showed the superiority of the Gamma and Beta bands in classifying emotions. DE was shown to be the most informative feature, while DASM and DCAU offered lower computational complexity with relatively good accuracies. An analysis of LORSAL and recent deep learning (DL) methods is included in the discussion, and conclusions and future work are presented in the final section.
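The two core ingredients of the model, a Gaussian (RBF) kernel and a logistic link, can be sketched as follows. This is a toy prediction step only; the LORSAL variable-splitting/augmented-Lagrangian optimization that fits the weights is not reproduced here, and the gamma value is illustrative:

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: similarity of two feature vectors,
    used to map EEG features into a more separable space."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def kernel_logistic_predict(x, support, weights, bias=0.0):
    """Probability of the positive class under kernelized logistic
    regression; a sparsity-promoting (Laplacian) prior would leave
    most entries of `weights` at zero after training."""
    score = bias + sum(w * rbf_kernel(x, s) for w, s in zip(weights, support))
    return 1.0 / (1.0 + math.exp(-score))
```

With all weights zero the score is zero and the predicted probability is exactly 0.5, the decision boundary for a binary valence or arousal label.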


2020 ◽  
Vol 11 (1) ◽  
pp. 1-16
Author(s):  
Rana Seif Fathalla ◽  
Wafa Saad Alshehri

Affective computing aims to create smart systems able to interact emotionally with users. For effective affective computing experiences, emotions should be detected accurately. The influence of emotion appears in all human modalities, such as facial expression, voice, and body language, as well as in various bio-parameters of the agent, such as electro-dermal activity (EDA), respiration patterns, skin conductance, and temperature, and in brainwaves recorded by electroencephalography (EEG). This review provides an overview of the emotion recognition process, its methodology, and its methods. It also explains EEG-based emotion recognition as an example, demonstrating the required steps: capturing the EEG signals during the emotion elicitation process; feature extraction using techniques such as empirical mode decomposition (EMD) and variational mode decomposition (VMD); and finally, emotion classification using classifiers including the support vector machine (SVM) and deep neural networks (DNN).
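The steps the review walks through (signal capture, feature extraction, classification) compose into a simple pipeline. A schematic sketch with placeholder callables; the concrete extractor and classifier below are illustrative stand-ins, not methods from the review:

```python
def emotion_recognition_pipeline(eeg_epochs, extract_features, classify):
    """Skeleton of the EEG emotion-recognition pipeline: each recorded
    epoch is turned into a feature representation (e.g., via EMD or
    VMD) and then passed to a trained classifier (e.g., SVM or DNN).
    The two callables are placeholders for any concrete choice."""
    return [classify(extract_features(epoch)) for epoch in eeg_epochs]

# Hypothetical stand-ins: mean amplitude as the "feature", a threshold
# as the "classifier".
def mean_feature(epoch):
    return sum(epoch) / len(epoch)

def threshold_classifier(feature):
    return "high_arousal" if feature > 0 else "low_arousal"

labels = emotion_recognition_pipeline(
    [[1.0, 2.0], [-1.0, -2.0]], mean_feature, threshold_classifier)
```

Swapping in a real decomposition-based extractor and a trained SVM or DNN changes only the two callables, not the pipeline shape.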


2011 ◽  
Vol 26 (8-9) ◽  
pp. 550-566 ◽  
Author(s):  
Yunshu Hou ◽  
Ping Fan ◽  
Ilse Ravyse ◽  
Valentin Enescu ◽  
Hichem Sahli

2019 ◽  
Vol 8 (3) ◽  
pp. 5926-5929

Blind forensic investigation of digital images is a new research direction in image security. It aims to discover altered image content without any embedded security scheme. Block-based and keypoint-based methods are the two main options in blind image forensic investigation. Both techniques perform well at revealing tampered images, but their success is limited by computational complexity and by detection accuracy under various image distortions and geometric transformations. This article surveys blind image tampering methods and introduces a robust image forensic investigation method that detects copy-move tampering by means of a fuzzy logic approach. Empirical results indicate that the proposed scheme effectively classifies copy-move forensic images as well as blurred tampered images, and its overall detection accuracy is higher than that of existing methods.
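Block-based copy-move detection, the family this article builds on, compares small image blocks across positions and flags duplicates. A toy sketch on raw pixel blocks; real detectors compare robust block features, and the article's fuzzy-logic decision stage is not reproduced here:

```python
def detect_copy_move(image, block=2):
    """Toy block-based copy-move check: slide a block-sized window over
    the image and flag any two non-overlapping positions whose pixel
    blocks are identical."""
    h, w = len(image), len(image[0])
    seen = {}
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                          for dy in range(block))
            if patch in seen:
                py, px = seen[patch]
                # Ignore overlapping windows, which trivially match.
                if abs(py - y) >= block or abs(px - x) >= block:
                    matches.append(((py, px), (y, x)))
            else:
                seen[patch] = (y, x)
    return matches

# Hypothetical 4x6 image whose top-left 2x2 block reappears at the top right.
img = [
    [1, 1, 2, 3, 1, 1],
    [1, 1, 4, 5, 1, 1],
    [6, 7, 8, 9, 10, 11],
    [12, 13, 14, 15, 16, 17],
]
```

Exact matching fails once the copied region is compressed or blurred, which is precisely the gap that feature-based comparison and fuzzy decision rules are meant to close.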

