Video Based Sub-Categorized Facial Emotion Detection using LBP and Edge Computing

2021 ◽  
Vol 35 (1) ◽  
pp. 55-61
Author(s):  
Praveen Kulkarni ◽  
Rajesh T M

Facial expression recognition plays a significant role in conveying the feelings and intentions of people. Recognizing facial emotions in an uncontrolled environment is more difficult than in a controlled one because of variations in occlusion, illumination, and noise. In this paper, we demonstrate a new framework for effective facial emotion recognition from real-time face images. Unlike other methods that spend considerable effort partitioning the image into blocks or processing whole face images, our method extracts discriminative features from salient face regions and then combines them with texture and orientation features for better representation. We also define sub-categories within the main expressions happy and sad, to identify the level of happiness or sadness and to check whether the person is genuinely happy/sad or only acting happy/sad. Moreover, we reduce the data dimensionality by selecting the most highly discriminative features. The proposed system delivers a high matching accuracy rate even in the presence of occlusion, illumination changes, and noise. To demonstrate the robustness of the proposed framework, we evaluated it on two publicly available test datasets. The experimental results show that the proposed framework outperforms current methods, demonstrating the considerable potential of combining geometric features with appearance-based features.
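The LBP features named in the title can be illustrated with a minimal sketch. This is the generic 8-neighbour local binary pattern with a normalised 256-bin histogram descriptor, not the authors' exact implementation; the function names are assumptions for illustration:

```python
import numpy as np

def lbp_image(gray):
    """Compute 8-neighbour local binary pattern codes for interior pixels."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # centre pixels
    # neighbours clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Normalised LBP histogram used as a texture descriptor."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In a patch-based pipeline such descriptors would typically be computed per salient face region and concatenated.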

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

The AFEW (Acted Facial Expressions in the Wild) dataset, used in a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under real-world constraints, including uneven illumination, head deflection, and varied facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. Then, the face images in each frame are aligned based on the positions of the facial feature points in the images. Second, the aligned face images are fed to a residual neural network to extract the spatial features of the corresponding facial expressions, and these spatial features are passed to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are fed to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are passed to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (Extended Cohn–Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves accuracy on the AFEW dataset by more than 2%, confirming its effectiveness for facial expression recognition in natural environments.
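The temporal stage of such a cascade, a gated recurrent unit run over per-frame spatial feature vectors, can be sketched with a minimal NumPy GRU cell. The class name, dimensions, and random initialisation below are illustrative assumptions, not the authors' trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal gated recurrent unit aggregating per-frame features."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hid_dim)
        self.Wz = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))
        self.Wr = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))
        self.Wh = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)   # update gate
        r = sigmoid(self.Wr @ xh)   # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

    def run(self, frames):
        """Fold a sequence of spatial feature vectors into one temporal feature."""
        h = np.zeros(self.Wz.shape[0])
        for x in frames:
            h = self.step(x, h)
        return h
```

The final hidden state plays the role of the clip-level temporal feature that a fully connected layer would then classify.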


2020 ◽  
Vol 28 (1) ◽  
pp. 97-111
Author(s):  
Nadir Kamel Benamara ◽  
Mikel Val-Calvo ◽  
Jose Ramón Álvarez-Sánchez ◽  
Alejandro Díaz-Morcillo ◽  
Jose Manuel Ferrández-Vicente ◽  
...  

Facial emotion recognition (FER) has been extensively researched over the past two decades owing to its direct impact on the computer vision and affective robotics fields. However, the datasets available to train these models often include mislabelled data, caused by labeller bias, which drives the models to learn incorrect features. In this paper, a facial emotion recognition system is proposed that addresses automatic face detection and facial expression recognition separately; the latter is performed by an ensemble of only four deep convolutional neural networks, while a label smoothing technique is applied to cope with the mislabelled training data. The proposed system takes only 13.48 ms on a dedicated graphics processing unit (GPU) and 141.97 ms on a CPU to recognize facial emotions, and it matches current state-of-the-art performance on the challenging FER2013, SFEW 2.0, and ExpW databases, with recognition accuracies of 72.72%, 51.97%, and 71.82%, respectively.
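The label smoothing technique mentioned above can be sketched in its standard form: the true class is kept at 1−ε and the remaining mass is spread uniformly over the other classes. The function name and the ε value are chosen for illustration, not taken from the paper:

```python
import numpy as np

def smooth_labels(y, n_classes, eps=0.1):
    """Turn integer class labels into softened one-hot targets.

    The true class receives 1 - eps; the remaining eps is shared
    equally among the other classes, which reduces the penalty for
    training examples whose labels may be wrong.
    """
    y = np.asarray(y)
    t = np.full((len(y), n_classes), eps / (n_classes - 1))
    t[np.arange(len(y)), y] = 1.0 - eps
    return t
```

Training then minimises cross-entropy against these soft targets instead of hard one-hot vectors.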


Optik ◽  
2016 ◽  
Vol 127 (15) ◽  
pp. 6195-6203 ◽  
Author(s):  
Sajid Ali Khan ◽  
Ayyaz Hussain ◽  
Muhammad Usman

2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang

Abstract: Methods using salient facial patches (SFPs) play a significant role in research on facial expression recognition. However, most SFP methods use only frontal face images or videos for recognition, and they do not consider head position variations. We contend that SFP can be an effective approach for recognizing facial expressions under different head rotations. Accordingly, we propose an algorithm, called profile salient facial patches (PSFP), to achieve this objective. First, to detect facial landmarks and estimate head poses from profile face images, a tree-structured part model is used for pose-free landmark localization. Second, to obtain the salient facial patches from profile face images, the facial patches are selected using the detected facial landmarks while avoiding their overlap or the transcending of the actual face range. To analyze the PSFP recognition performance, three classical approaches for local feature extraction, specifically the histogram of oriented gradients (HOG), local binary pattern (LBP), and Gabor features, were applied to extract profile facial expression features. Experimental results on the Radboud Faces Database show that PSFP with HOG features can achieve higher accuracies under most head rotations.
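The patch-selection constraint described above, patches centred on landmarks that must stay inside the face range and must not overlap one another, might be sketched as below. The box representation, patch size, and greedy acceptance order are illustrative assumptions, not the PSFP algorithm's exact rules:

```python
def overlaps(a, b):
    """Axis-aligned boxes (left, top, right, bottom) intersect?"""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def select_patches(landmarks, face_box, size=32):
    """Centre one square patch on each landmark, clamp it inside the
    face box, and drop any patch overlapping an already accepted one."""
    x0, y0, x1, y1 = face_box
    chosen = []
    for cx, cy in landmarks:
        left = min(max(cx - size // 2, x0), x1 - size)
        top = min(max(cy - size // 2, y0), y1 - size)
        box = (left, top, left + size, top + size)
        if all(not overlaps(box, b) for b in chosen):
            chosen.append(box)
    return chosen
```

Local descriptors (HOG, LBP, or Gabor) would then be extracted from each accepted patch.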


2020 ◽  
Author(s):  
Bin Jiang ◽  
Qiuwen Zhang ◽  
Zuhe Li ◽  
Qinggang Wu ◽  
Huanlong Zhang

Abstract: Methods using salient facial patches (SFP) play a significant role in research on facial expression recognition. However, most SFP methods use only frontal face images or videos for recognition, and do not consider variations of head position. In our view, SFP can also be a good choice for recognizing facial expressions under different head rotations, and thus we propose an algorithm for this purpose, called Profile Salient Facial Patches (PSFP). First, in order to detect the facial landmarks from profile face images, the tree-structured part model is used for pose-free landmark localization; this approach excels at detecting facial landmarks and estimating head poses. Second, to obtain the salient facial patches from profile face images, the facial patches are selected using the detected facial landmarks, while avoiding overlap with each other or going beyond the range of the actual face. For the purpose of analyzing the recognition performance of PSFP, three classical approaches for local feature extraction, namely the histogram of oriented gradients (HOG), local binary pattern (LBP), and Gabor features, were applied to extract profile facial expression features. Experimental results on the Radboud Faces Database show that PSFP with HOG features can achieve higher accuracies under most head rotations.


Facial expression recognition has been an active research area over the past ten years, with growing application areas including avatar animation, neuromarketing and sociable robots. The recognition of facial expressions is not an easy problem for machine learning methods, since people can vary significantly in the way they show their expressions. Even images of the same person displaying the same expression can vary in brightness, background and pose, and these variations are emphasized when considering different subjects (because of variations in face shape, ethnicity, among others). Although facial expression recognition is well studied in the literature, few works perform a fair evaluation that avoids mixing subjects between training and testing of the proposed algorithms. Hence, facial expression recognition is still a challenging problem in computer vision. In this work, we propose a simple solution for facial expression recognition that uses a combination of a Convolutional Neural Network and specific image pre-processing steps. Convolutional Neural Networks achieve better accuracy with big data; however, there are no publicly available datasets with sufficient data for facial expression recognition with deep architectures. Therefore, to tackle the problem, we apply some pre-processing techniques to extract only expression-specific features from a face image and explore the presentation order of the samples during training. A study of the impact of each image pre-processing operation on the accuracy rate is presented. The proposed method achieves competitive results when compared with other facial expression recognition methods, reaching up to 92% accuracy; it is fast to train, and it allows for real-time facial expression recognition with standard computers.
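The kind of pre-processing this abstract alludes to can be illustrated with a minimal crop-and-normalise step that removes brightness variation before the network sees the image. The crop size and per-image standardisation are assumptions for the sketch, not the paper's exact pipeline:

```python
import numpy as np

def preprocess(gray, crop=(96, 96)):
    """Centre-crop the face region and standardise intensities so the
    network receives zero-mean, unit-variance inputs regardless of the
    original image brightness."""
    h, w = gray.shape
    ch, cw = crop
    top, left = (h - ch) // 2, (w - cw) // 2
    patch = gray[top:top + ch, left:left + cw].astype(np.float64)
    return (patch - patch.mean()) / (patch.std() + 1e-8)
```

Per-image standardisation of this sort is a common way to decouple expression cues from lighting conditions before CNN training.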


2018 ◽  
Vol 18 (02) ◽  
pp. 1850012 ◽  
Author(s):  
Zhaoqi Wu ◽  
Reziwanguli Xiamixiding ◽  
Atul Sajjanhar ◽  
Juan Chen ◽  
Quan Wen

We investigate facial expression recognition (FER) based on image appearance. FER is performed using state-of-the-art classification approaches. Different approaches to preprocess face images are investigated. First, region-of-interest (ROI) images are obtained by extracting the facial ROI from raw images. FER of ROI images is used as the benchmark and compared with the FER of difference images. Difference images are obtained by computing the difference between the ROI images of neutral and peak facial expressions. FER is also evaluated for images which are obtained by applying the local binary pattern (LBP) operator to ROI images. Further, we investigate different contrast enhancement operators to preprocess images, namely, the histogram equalization (HE) approach and a brightness-preserving approach for histogram equalization. The classification experiments are performed for a convolutional neural network (CNN) and a pre-trained deep learning model. All experiments are performed on three public face databases, namely, Cohn–Kanade (CK+), JAFFE and FACES.
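The difference-image and histogram-equalisation preprocessing steps described above can be sketched as follows. The function names are illustrative, and the HE routine is the textbook CDF remapping rather than the brightness-preserving variant the paper also evaluates:

```python
import numpy as np

def difference_image(neutral, peak):
    """Subtract the neutral-expression ROI from the peak-expression ROI,
    so static identity cues cancel and expression changes remain."""
    return peak.astype(np.int16) - neutral.astype(np.int16)

def equalize_hist(gray):
    """Plain histogram equalisation of an 8-bit grayscale image via
    remapping through the cumulative intensity distribution."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    span = max(cdf.max() - cdf.min(), 1)  # guard against flat images
    lut = (cdf - cdf.min()) * 255 / span
    return lut[gray].astype(np.uint8)
```

Either output would then be fed to the CNN in place of the raw ROI image.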

