Local Directional Threshold based Binary Patterns for Facial Expression Recognition and Analysis

2018 ◽  
Vol 7 (4.6) ◽  
pp. 17
Author(s):  
V Uma Maheswari ◽  
Vara Prasad ◽  
S Viswanadha Raju

In this paper, we propose a novel method to retrieve edge and texture information from facial images, comprising the local directional standard matrix (LDSM) and the local dynamic threshold based binary pattern (LDTBP). The LBP and LTP operators extract texture by differencing the center pixel against its surrounding pixels, but they fail to detect edges and large intensity variations. The proposed method addresses these problems: first, the LDSM matrix is computed from the standard deviation of the horizontal and vertical neighbors of each pixel; the values are then encoded against a dynamic threshold, calculated from the median of the LDSM values around each pixel, yielding the LDTBP code. Experiments on the LFW facial expression dataset use an SVM classifier to classify images and retrieve relevant ones, with performance measured in terms of average precision and average recall.
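
As a rough illustration of the described pipeline, the Python sketch below computes a per-pixel standard deviation over the center and its horizontal/vertical neighbors (LDSM), then encodes each pixel's neighborhood against a median-based dynamic threshold (LDTBP). The 3x3 cross neighborhood, the use of the local median, and the bit packing are assumptions; the abstract does not fix these details.

```python
import numpy as np

def ldsm(image):
    # Local directional standard matrix (sketch): per-pixel standard
    # deviation over the center and its horizontal/vertical neighbors.
    img = image.astype(np.float64)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            vals = [p[i, j + 1], p[i + 2, j + 1],   # vertical neighbors
                    p[i + 1, j], p[i + 1, j + 2],   # horizontal neighbors
                    p[i + 1, j + 1]]                # center pixel
            out[i, j] = np.std(vals)
    return out

def ldtbp(image):
    # Encode each pixel's eight LDSM neighbors against a dynamic
    # threshold, assumed here to be the median of the local 3x3 LDSM
    # values, and pack the comparisons into an 8-bit code.
    s = ldsm(image)
    p = np.pad(s, 1, mode='edge')
    h, w = s.shape
    code = np.zeros((h, w), dtype=np.uint8)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]       # clockwise neighbors
    for i in range(h):
        for j in range(w):
            thresh = np.median(p[i:i + 3, j:j + 3])
            bits = 0
            for k, (di, dj) in enumerate(offs):
                if p[i + 1 + di, j + 1 + dj] >= thresh:
                    bits |= 1 << k
            code[i, j] = bits
    return code
```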

Fractals ◽  
2002 ◽  
Vol 10 (01) ◽  
pp. 47-52 ◽  
Author(s):  
TAKUMA TAKEHARA ◽  
FUMIO OCHIAI ◽  
NAOTO SUZUKI

Following Mandelbrot's theory of fractals, many shapes and phenomena in nature have been suggested to be fractal. Even animal behavior and human physiological responses can be represented as fractal. Here, we show evidence that the concept of fractals can be applied even to facial expression recognition, one of the most important components of human recognition. Rating data derived from judging morphed facial images were represented in a two-dimensional psychological space by multidimensional scaling of four different scales. The perimeter of the resulting emotion circumplex fluctuated and was judged to have a fractal dimension of 1.18: the smaller the unit of measurement, the longer the measured perimeter of the circumplex. In this study, we provide interdisciplinarily important evidence of fractality through its application to facial expression recognition.
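
The perimeter-versus-unit relationship described above is the classical divider (compass) method for estimating a curve's fractal dimension: L(eps) is proportional to eps raised to (1 - D), so D follows from the slope of log L against log eps. A minimal sketch under that assumption (the authors' exact measurement procedure is not given in the abstract):

```python
import numpy as np

def divider_length(points, eps):
    # Walk the closed curve with a fixed "compass opening" eps and
    # accumulate the distance actually covered.
    pts = np.asarray(points, dtype=float)
    total, cur = 0.0, pts[0]
    for p in pts[1:]:
        d = np.linalg.norm(p - cur)
        if d >= eps:
            total += d
            cur = p
    return total

def divider_dimension(points, steps):
    # L(eps) ~ eps**(1 - D) for a fractal curve, so the slope of
    # log L against log eps estimates 1 - D.
    lengths = [divider_length(points, eps) for eps in steps]
    slope = np.polyfit(np.log(steps), np.log(lengths), 1)[0]
    return 1.0 - slope

# points should trace the closed perimeter (repeat the first point at the
# end); steps is a range of ruler lengths, e.g. np.geomspace(0.01, 0.5, 8).
```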


2020 ◽  
Vol 11 (1) ◽  
pp. 48-70 ◽  
Author(s):  
Sivaiah Bellamkonda ◽  
Gopalan N.P

Facial expression analysis and recognition has gained popularity in the last few years for its challenging nature and broad range of applications, such as HCI, pain detection, operator fatigue detection, and surveillance. The key to a real-time FER system is the variety of features extracted from the source image. In this article, three different features, viz. local binary patterns (LBP), Gabor, and local directional patterns (LDP), were exploited to perform feature fusion, and two classification algorithms, viz. support vector machines (SVM) and artificial neural networks, were used to validate the proposed model on benchmark datasets. Classification accuracy improved with the proposed fusion of Gabor and LDP features and an SVM classifier, recording an average accuracy of 93.83% on JAFFE, 95.83% on CK, and 96.50% on MMI. The recognition rates were compared with existing studies in the literature, showing that the proposed feature fusion model improves performance.
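
A minimal sketch of the feature-fusion idea: concatenate an LBP histogram with Gabor magnitude statistics, then feed the fused vector to an SVM. LDP has no off-the-shelf implementation in common Python libraries, so LBP stands in here; the descriptor parameters (8 neighbors, radius 1, four Gabor orientations at frequency 0.3) are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor
from sklearn.svm import SVC

def fused_features(img):
    # LBP histogram (uniform patterns, 8 neighbors, radius 1).
    lbp = local_binary_pattern(img, P=8, R=1, method='uniform')
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Gabor magnitude statistics at four orientations.
    stats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(img, frequency=0.3, theta=theta)
        mag = np.hypot(real, imag)
        stats += [mag.mean(), mag.std()]
    return np.concatenate([hist, stats])

# X = np.stack([fused_features(face) for face in faces]); y = labels
# clf = SVC(kernel='rbf').fit(X, y)
```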


2013 ◽  
Vol 347-350 ◽  
pp. 3780-3785
Author(s):  
Jing Jie Yan ◽  
Ming Han Xin

Although spatio-temporal (ST) features have recently been developed and shown to be effective for facial expression recognition and behavior recognition in videos, they represent each cuboid by directly flattening it into a feature vector, so the resulting vector is potentially sensitive to small cuboid perturbations and noise. To overcome this drawback, we propose a novel fused spatio-temporal features (FST) method that uses separable linear filters to detect interest points and fuses two cuboid representations, a local histogrammed gradient descriptor and the flattened cuboid vector, into a single descriptor. The proposed FST method is robust to small cuboid perturbations and noise while preserving both spatial and temporal positional information. Experimental results on two video-based facial expression databases demonstrate the effectiveness of the proposed method.
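
A hedged sketch of the descriptor-fusion step: for a cuboid extracted around a detected interest point, concatenate the normalized flattened cuboid with a magnitude-weighted histogram of gradient orientations. The cuboid size, bin count, and normalization are assumptions, and the interest-point detector (separable linear filters) is omitted for brevity.

```python
import numpy as np

def fst_descriptor(video, t, y, x, size=(5, 9, 9), bins=8):
    # Extract a (time, height, width) cuboid centered on the interest point.
    dt, dy, dx = (s // 2 for s in size)
    cub = video[t - dt:t + dt + 1, y - dy:y + dy + 1, x - dx:x + dx + 1]
    cub = cub.astype(float)
    # Representation 1: the normalized flattened cuboid.
    flat = cub.ravel()
    flat = (flat - flat.mean()) / (flat.std() + 1e-8)
    # Representation 2: a magnitude-weighted histogram of spatial
    # gradient orientations, pooled over the whole cuboid.
    gy, gx = np.gradient(cub, axis=(1, 2))
    ang = np.arctan2(gy, gx).ravel()
    mag = np.hypot(gy, gx).ravel()
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-8)
    # Fuse the two representations into a single descriptor.
    return np.concatenate([flat, hist])
```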


2019 ◽  
Vol 8 (4) ◽  
pp. 6140-6144

In this work, we propose a novel method for illumination-invariant facial expression recognition. Facial expressions convey nonverbal visual information among humans and play a vital role in human-machine interfaces, which has attracted the attention of many researchers. Earlier machine learning algorithms require complex feature extraction pipelines and rely on the size and uniqueness of features related to the subjects. In this paper, a deep convolutional neural network is proposed for facial expression recognition and trained on two publicly available datasets, the JAFFE and Yale databases, under different illumination conditions. Furthermore, transfer learning is applied with networks such as AlexNet and ResNet-101 pre-trained on the ImageNet database. Experimental results show that the designed network can tolerate up to 30% variation in illumination and achieves an accuracy of 92%.
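
A minimal transfer-learning sketch in the spirit described: load an ImageNet-pretrained ResNet-101, freeze the backbone, and retrain only a new classification head. The seven-class head and the hyperparameters are assumptions (JAFFE uses seven expression categories).

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-101 pretrained on ImageNet, freeze the backbone, and
# replace the classifier head (7 expression classes is an assumption).
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 7)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# Training loop (batches of face images resized to 224x224):
#   logits = model(images); loss = criterion(logits, labels)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```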


2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets connect a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classifier network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated expressions on both robots.
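
The core optimization can be sketched as gradient descent on the joint configuration itself, with both networks held fixed so the classification loss backpropagates through the classifier and the generator to the joints. The network interfaces and hyperparameters below are assumptions.

```python
import torch

def optimize_joints(generator, classifier, target_class, n_joints,
                    steps=200, lr=0.05):
    # Hold both networks fixed and optimize the joint configuration by
    # gradient descent on the expression classification loss.
    joints = torch.randn(1, n_joints, requires_grad=True)
    opt = torch.optim.Adam([joints], lr=lr)
    target = torch.tensor([target_class])
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        face = generator(joints)    # joints -> simplified face image
        logits = classifier(face)   # image -> expression logits
        loss = loss_fn(logits, target)
        loss.backward()             # gradients flow back to the joints
        opt.step()
    return joints.detach()
```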


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Awais Mahmood ◽  
Shariq Hussain ◽  
Khalid Iqbal ◽  
Wail S. Elkilani

Facial expression recognition plays an important role in communicating the emotions and intentions of human beings. It is more difficult in uncontrolled environments than in controlled ones due to changes in occlusion, illumination, and noise. In this paper, we present a new framework for effective facial expression recognition from real-time facial images. Unlike methods that spend considerable time processing the whole face image or dividing it into blocks, our method extracts discriminative features from salient face regions and combines them with texture and orientation features for a better representation. Furthermore, we reduce the data dimension by selecting the most discriminative features. The proposed framework provides a high recognition accuracy rate even in the presence of occlusion, illumination changes, and noise. To show its robustness, we used three publicly available challenging datasets. The experimental results show that the proposed framework outperforms existing techniques, indicating the considerable potential of combining geometric features with appearance-based features.
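
A minimal sketch of the dimension-reduction step, selecting the most discriminative features before classification. The ANOVA F-score criterion, k=200, and the SVM classifier are illustrative assumptions; the abstract does not specify the selection rule.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Keep only the most class-discriminative features before classifying.
pipe = make_pipeline(SelectKBest(f_classif, k=200), SVC(kernel='rbf'))
# pipe.fit(X_train, y_train); accuracy = pipe.score(X_test, y_test)
```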

