Facial Expression Recognition Using Local Transitional Pattern on Gabor Filtered Facial Images

2013 ◽  
Vol 30 (1) ◽  
pp. 47 ◽  
Author(s):  
Tanveer Ahsan ◽  
Taskeed Jabid ◽  
Ui-Pil Chong


Fractals
2002 ◽  
Vol 10 (01) ◽  
pp. 47-52 ◽  
Author(s):  
TAKUMA TAKEHARA ◽  
FUMIO OCHIAI ◽  
NAOTO SUZUKI

Following Mandelbrot's theory of fractals, many shapes and phenomena in nature have been suggested to be fractal, and even animal behavior and human physiological responses can be represented as fractal. Here, we show evidence that the concept of fractals can be applied even to facial expression recognition, one of the most important aspects of human recognition. Rating data derived from judging morphed facial images were represented in a two-dimensional psychological space by multidimensional scaling of four different scales. The resulting perimeter of the emotion-circumplex structure fluctuated and was judged to have a fractal dimension of 1.18: the smaller the unit of measurement, the longer the measured perimeter of the circumplex. This study thus provides interdisciplinarily important evidence of fractality through its application to facial expression recognition.
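As an illustration of the scale dependence described above, the following is a minimal sketch (Python/NumPy) of a divider-style estimate of a perimeter's fractal dimension: the curve is walked with progressively smaller rulers and the dimension is read off the log-log slope. The abstract does not specify which estimator the authors used, and the jagged contour below is synthetic, not the actual circumplex data.

```python
import numpy as np

def divider_length(points, ruler):
    """Walk along a 2-D curve with a fixed 'ruler' step and return the
    total length covered (Richardson's divider method, approximate)."""
    length = 0.0
    anchor = points[0]
    for p in points[1:]:
        d = np.linalg.norm(p - anchor)
        if d >= ruler:
            length += d
            anchor = p
    return length

def fractal_dimension(points, rulers):
    """Estimate the divider (compass) dimension: L(r) ~ r**(1 - D),
    so D = 1 - slope of log L against log r."""
    lengths = [divider_length(points, r) for r in rulers]
    slope, _ = np.polyfit(np.log(rulers), np.log(lengths), 1)
    return 1.0 - slope

# Toy example: a jagged closed contour standing in for the circumplex perimeter.
theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
radius = 1.0 + 0.05 * np.sin(40 * theta)            # hypothetical fluctuation
contour = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)
print(fractal_dimension(contour, rulers=[0.01, 0.02, 0.05, 0.1]))
```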


2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets combine a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classification network and the generator network. To improve the transfer between human training images and images of different robots, we propose to use extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations per facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated facial expressions on both robots.
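A minimal sketch, in PyTorch, of the optimization loop the abstract describes: the expression-classification loss is backpropagated through a frozen classifier and generator into the joint vector itself. The tiny fully connected networks, the joint dimensionality, and the class indices are placeholders, not the actual ExGenNet architecture.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the pretrained networks described above: the real
# generator reconstructs simplified facial images from joint configurations and
# the real classifier recognizes expressions; both are assumed already trained.
generator = nn.Sequential(nn.Linear(12, 128), nn.ReLU(), nn.Linear(128, 64))
classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 6))
for net in (generator, classifier):
    for p in net.parameters():
        p.requires_grad_(False)          # networks stay fixed during optimization

def optimize_joints(target_expression, n_joints=12, steps=200, lr=0.05):
    """Find a joint configuration whose generated face is classified as the
    intended expression, by backpropagating the classification loss through
    both frozen networks into the joint vector."""
    joints = torch.zeros(1, n_joints, requires_grad=True)
    optimizer = torch.optim.Adam([joints], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.tensor([target_expression])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = classifier(generator(joints))
        loss = loss_fn(logits, target)
        loss.backward()
        optimizer.step()
    return joints.detach()

happy_joints = optimize_joints(target_expression=3)   # index 3 = "happy" here
```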


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Awais Mahmood ◽  
Shariq Hussain ◽  
Khalid Iqbal ◽  
Wail S. Elkilani

Facial expression recognition plays an important role in communicating the emotions and intentions of human beings. Facial expression recognition in an uncontrolled environment is more difficult than in a controlled environment because of variations in occlusion, illumination, and noise. In this paper, we present a new framework for effective facial expression recognition from real-time facial images. Unlike other methods, which spend considerable time processing the whole face image or dividing it into blocks, our method extracts discriminative features from salient face regions and then combines them with texture and orientation features for a better representation. Furthermore, we reduce the data dimension by selecting only the most discriminative features. The proposed framework provides a high recognition accuracy even in the presence of occlusion, illumination changes, and noise. To show the robustness of the proposed framework, we used three publicly available, challenging datasets. The experimental results show that the proposed framework outperforms existing techniques, indicating the considerable potential of combining geometric features with appearance-based features.
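A rough sketch, under assumptions, of the kind of geometric-plus-appearance pipeline the abstract outlines: pairwise landmark distances as geometric features, HOG and an LBP histogram as orientation and texture features, and univariate feature selection before an SVM. The specific descriptors, the salient-region detection, and the selection criterion used by the authors are not reproduced here.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def describe_face(gray_face, landmarks):
    """Hypothetical feature builder: pairwise landmark distances as a geometric
    cue, plus HOG (orientation) and an LBP histogram (texture) computed on the
    grayscale face crop."""
    dists = np.linalg.norm(landmarks[:, None, :] - landmarks[None, :, :], axis=-1)
    geom = dists[np.triu_indices(len(landmarks), k=1)]
    orient = hog(gray_face, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([geom, orient, texture])

# X: stacked per-face descriptors, y: expression labels (placeholders for a real
# dataset); the selector keeps only the most discriminative dimensions.
# selector = SelectKBest(f_classif, k=200).fit(X, y)
# clf = SVC(kernel="rbf").fit(selector.transform(X), y)
```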


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5391
Author(s):  
Suraiya Yasmin ◽  
Refat Khan Pathan ◽  
Munmun Biswas ◽  
Mayeen Uddin Khandaker ◽  
Mohammad Rashed Iqbal Faruque

Facial expression recognition (FER) has been applied successfully in fields such as computer vision, robotics, artificial intelligence, and dynamic texture recognition. However, a critical problem of the traditional local binary pattern (LBP) for FER is the loss of neighboring-pixel information across different scales, which can affect the texture representation of facial images. To overcome this limitation, this study describes a new extended LBP method for extracting feature vectors from images and detecting the facial expression in each image. The proposed method is based on the bitwise AND of two rotational kernels applied on LBP(8,1) and LBP(8,2) and is evaluated on two publicly accessible datasets. First, the face is detected and its essential components, such as the eyes, nose, and lips, are located. The face region is then cropped to reduce the dimensions, and an unsharp-masking kernel is applied to sharpen the image. The filtered images then pass through the feature extraction step before classification. Four machine learning classifiers were used to verify the proposed method. This study shows that the proposed multi-scale featured local binary pattern (MSFLBP), together with a Support Vector Machine (SVM), outperforms recent LBP-based state-of-the-art approaches, achieving an accuracy of 99.12% on the Extended Cohn–Kanade (CK+) dataset and 89.08% on the Karolinska Directed Emotional Faces (KDEF) dataset.
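A minimal sketch of the multi-scale combination step described above: LBP codes are computed at radii 1 and 2, combined per pixel with a bitwise AND, and histogrammed into the feature vector passed to an SVM. The paper's rotational kernels and the unsharp-masking preprocessing are not reproduced; the scikit-image and scikit-learn calls are stand-ins.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def msflbp_histogram(gray_face):
    """Sketch of the multi-scale idea: combine the LBP(8,1) and LBP(8,2)
    codes of each pixel with a bitwise AND, then histogram the result.
    (The paper's exact rotational kernels are not reproduced here.)"""
    lbp_r1 = local_binary_pattern(gray_face, P=8, R=1, method="default").astype(np.uint8)
    lbp_r2 = local_binary_pattern(gray_face, P=8, R=2, method="default").astype(np.uint8)
    combined = np.bitwise_and(lbp_r1, lbp_r2)
    hist, _ = np.histogram(combined, bins=256, range=(0, 256), density=True)
    return hist

# Hypothetical usage with cropped face images and expression labels:
# X = np.stack([msflbp_histogram(img) for img in cropped_faces])
# clf = SVC(kernel="linear").fit(X, labels)
```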


2020 ◽  
Vol 8 (5) ◽  
pp. 5602-5604

Facial expression recognition is one of the recent trends for detecting human expressions in streaming video sequences, identifying emotions such as sadness, happiness, or anger. In this paper, the proposed method employs two individual deep convolutional neural networks (CNNs): a spatial CNN processing static facial images and a temporal CNN processing optical-flow images, which separately learn high-level spatial and temporal features on the divided video segments. These two CNNs are fine-tuned from a pre-trained CNN model on the target video facial expression datasets. The segment-level spatial and temporal features are then fed into a deep fusion network built on a deep belief network (DBN) model, which is used to jointly learn discriminative spatiotemporal features.
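A minimal sketch, in PyTorch, of the two-stream arrangement the abstract describes: one small CNN over a static frame, one over stacked optical-flow maps, with the segment-level features concatenated and passed to a fusion head. An ordinary MLP stands in for the DBN-based fusion network, and the layer sizes are placeholders rather than the architecture used in the paper.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Sketch: a spatial stream over an RGB face frame and a temporal stream
    over x/y optical-flow maps, concatenated and fused for classification."""
    def __init__(self, n_classes=6):
        super().__init__()
        def stream(in_channels):
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, 64))
        self.spatial = stream(in_channels=3)     # static RGB frame
        self.temporal = stream(in_channels=2)    # horizontal/vertical flow
        self.fusion = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                    nn.Linear(64, n_classes))

    def forward(self, frame, flow):
        feats = torch.cat([self.spatial(frame), self.temporal(flow)], dim=1)
        return self.fusion(feats)

model = TwoStreamFusion()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 2, 64, 64))
```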

