Facial expression recognition based on Electroencephalogram and facial landmark localization

2019 ◽  
Vol 27 (4) ◽  
pp. 373-387 ◽  
Author(s):  
Dahua Li ◽  
Zhe Wang ◽  
Qiang Gao ◽  
Yu Song ◽  
Xiao Yu ◽  
...  
Author(s):  
Romain Belmonte ◽  
Benjamin Allaert ◽  
Pierre Tirilly ◽  
Ioan Marius Bilasco ◽  
Chaabane Djeraba ◽  
...  

Author(s):  
Dinesh Kumar P ◽  
Dr. B. Rosiline Jeetha

Facial expression, as one of the most significant means for human beings to convey their emotions and intentions during communication, plays an important role in human interfaces. In recent years, facial expression recognition has been under especially intensive investigation, owing largely to its vital applications in fields including virtual reality, intelligent tutoring systems, healthcare, and data-driven animation. The main goal of facial expression recognition is to identify the human emotional state (e.g., anger, contempt, disgust, fear, happiness, sadness, and surprise) from given facial images. This paper addresses facial expression detection and recognition through the Viola-Jones algorithm and an HCNN-LSTM method, which improves recognition performance while greatly reducing computational cost. For feature matching, the authors propose a hybrid Scale-Invariant Feature Transform (SIFT) with double δ-LBP (Dδ-LBP), which exploits a fixed facial landmark localization approach and SIFT’s orientation assignment to obtain features that are independent of illumination and pose. For face detection, the Viola-Jones algorithm is used, which also recognizes occluded faces. Feature selection is then performed with the whale optimization algorithm, which, after compression, reduces the feature vector fed into the hybrid Convolutional Neural Network (HCNN) and Long Short-Term Memory (LSTM) model for identifying the facial expression efficiently. The experimental results confirm that the HCNN-LSTM model beats traditional deep-learning and machine-learning techniques with respect to precision, recall, F-measure, and accuracy on the CK+ database.
Proposes a hybrid Scale-Invariant Feature Transform (SIFT) with double δ-LBP (Dδ-LBP), using a fixed facial landmark localization approach and SIFT’s orientation assignment to obtain illumination- and pose-independent features, together with an HCNN-LSTM model for identifying the facial expression.
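The δ-LBP variant above builds on the standard Local Binary Pattern operator, which encodes each pixel by thresholding its 8 neighbours against the centre value. The abstract does not specify the double-δ thresholding rule, so the sketch below shows only the plain 3×3 LBP base operator in NumPy, as a minimal illustration of the texture encoding the paper's descriptor starts from:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour Local Binary Pattern over a grayscale image.

    Each interior pixel is encoded by thresholding its 8 neighbours
    against the centre value and packing the results into one byte.
    (The paper's Dδ-LBP builds on this operator; its exact
    thresholding rule is not given in the abstract.)
    """
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                      # centre pixels
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

A histogram of these codes over facial patches is the usual LBP feature; the paper fuses such texture codes with SIFT orientation information around fixed landmark positions.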


2021 ◽  
Vol 115 ◽  
pp. 107893
Author(s):  
Boyu Chen ◽  
Wenlong Guan ◽  
Peixia Li ◽  
Naoki Ikeda ◽  
Kosuke Hirasawa ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5184
Author(s):  
Min Kyu Lee ◽  
Dae Ha Kim ◽  
Byung Cheol Song

Facial expression recognition (FER) technology has made considerable progress with the rapid development of deep learning. However, conventional FER techniques are mainly designed and trained on videos artificially acquired in controlled environments, so they may not operate robustly on videos acquired in the wild, where illumination and head pose vary. To solve this problem and improve the ultimate performance of FER, this paper proposes a new architecture that extends a state-of-the-art FER scheme with a multi-modal neural network that can effectively fuse image and landmark information. To this end, we propose three methods. First, to maximize the performance of the recurrent neural network (RNN) in the previous scheme, we propose a frame substitution module that replaces the latent features of less important frames with those of important frames based on inter-frame correlation. Second, we propose a method for extracting facial landmark features based on the correlation between frames. Third, we propose a new multi-modal fusion method that effectively fuses video and facial landmark information at the feature level: attention weights derived from the characteristics of each modality are applied to that modality’s features. Experimental results show that the proposed method provides remarkable performance, with 51.4% accuracy on the wild AFEW dataset, 98.5% accuracy on the CK+ dataset, and 81.9% accuracy on the MMI dataset, outperforming state-of-the-art networks.
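The feature-level fusion idea, attention weights computed from each modality's own features and applied before combining, can be sketched as follows. The abstract does not give the exact attention form, so this NumPy sketch assumes a simple learned linear projection per modality (the weight vectors `w_v`, `w_l` stand in for trained parameters) and a softmax over the two resulting scores:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(video_feat, lm_feat, w_v, w_l):
    """Feature-level fusion of video and facial-landmark descriptors.

    A scalar attention score is derived from each modality's own
    features via a linear projection (w_v / w_l are placeholders for
    trained parameters); the softmax over the two scores weights each
    modality before the features are concatenated. This is an
    illustrative sketch, not the paper's exact fusion network.
    """
    s_v = video_feat @ w_v                        # (batch,) video score
    s_l = lm_feat @ w_l                           # (batch,) landmark score
    a = softmax(np.stack([s_v, s_l], axis=-1))    # (batch, 2) attention
    fused = np.concatenate([a[:, :1] * video_feat,
                            a[:, 1:] * lm_feat], axis=-1)
    return fused, a
```

In the real network these projections would be trained end-to-end so that the more reliable modality for a given clip receives the larger weight.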


2019 ◽  
Vol 1 (1) ◽  
pp. 25-31
Author(s):  
Arif Budi Setiawan ◽  
Kaspul Anwar ◽  
Laelatul Azizah ◽  
Adhi Prahara

During an interview, a psychologist should pay attention to every gesture and response, both verbal and nonverbal, made by the client. Psychologists are certainly limited in recognizing every gesture and response that indicates a lie, especially in interpreting nonverbal behaviors that usually occur within a short time. In this research, real-time facial expression recognition is proposed to track nonverbal behaviors and help the psychologist stay informed about changes of facial expression that indicate a lie. The method tracks eye gaze, wrinkles on the forehead, and false smiles, using a combination of face detection and facial landmark recognition to find the facial features, and image processing methods to track the nonverbal behaviors within those features. Every nonverbal behavior is recorded and logged along the video timeline to assist the psychologist in analyzing the client's behavior. The resulting tracking of nonverbal facial behaviors is accurate and is expected to be a useful assistant for psychologists.
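Once landmark positions are available, cues like the false smile reduce to simple geometry: a smile raises the mouth corners, while a genuine (Duchenne) smile also narrows the eyes, so a wide-open eye during a smile is one loggable cue. The sketch below illustrates this idea with NumPy; the landmark keys and the eye-opening threshold are assumptions for illustration, not the paper's exact feature set:

```python
import numpy as np

def smile_markers(landmarks, eye_open_thresh=8.0):
    """Crude geometric smile cues from 2D facial landmarks.

    landmarks: dict of (x, y) points in image coordinates (y grows
    downward). The keys used here (mouth corners/centre, upper/lower
    eyelid of one eye) and the pixel threshold are hypothetical,
    chosen only to illustrate the false-smile heuristic.
    """
    lm = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    # positive when the mouth corners sit above the mouth centre (a smile)
    corner_lift = lm["mouth_center"][1] - 0.5 * (
        lm["mouth_left"][1] + lm["mouth_right"][1])
    # vertical eye opening; a genuine smile narrows this distance
    eye_opening = lm["eye_lower"][1] - lm["eye_upper"][1]
    return {
        "corner_lift": float(corner_lift),
        "eye_opening": float(eye_opening),
        # smiling mouth + wide-open eye -> possible false smile to log
        "false_smile_cue": bool(corner_lift > 0
                                and eye_opening > eye_open_thresh),
    }
```

In the described system, such cues would be evaluated per frame and written to the session log with their video timestamps for later review.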


Author(s):  
Gerard Medioni ◽  
Jongmoo Choi ◽  
Matthieu Labeau ◽  
Jatuporn Toy Leksut ◽  
Lingchao Meng
