Machine‐Learned Light‐Field Camera that Reads Facial Expression from High‐Contrast and Illumination Invariant 3D Facial Images

2021, pp. 2100182
Author(s): Sang-In Bae, Sangyeon Lee, Jae-Myeong Kwon, Hyun-Kyung Kim, Kyung-Won Jang, ...
Fractals, 2002, Vol. 10 (01), pp. 47-52
Author(s): Takuma Takehara, Fumio Ochiai, Naoto Suzuki

Following Mandelbrot's theory of fractals, many shapes and phenomena in nature have been suggested to be fractal; even animal behavior and human physiological responses can be represented as fractal. Here, we show evidence that the concept of fractals can be applied even to facial expression recognition, one of the most important components of human recognition. Rating data derived from judgments of morphed facial images were represented in a two-dimensional psychological space by multidimensional scaling of four different scales. The perimeter of the resulting emotion circumplex fluctuated and was judged to have a fractal dimension of 1.18: the smaller the unit of measurement, the longer the measured perimeter of the circumplex. In this study, we provide interdisciplinarily important evidence of fractality through its application to facial expression recognition.
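The perimeter-length scaling described above is the classic divider (ruler) relation, L(ε) ∝ ε^(1−D). A minimal sketch of estimating D this way, assuming the circumplex boundary is available as sampled 2-D points; the function names and step sizes are illustrative, not the study's actual procedure:

```python
# Hypothetical divider-method sketch for the fractal dimension of a closed
# 2-D curve; not the study's actual code.
import numpy as np

def ruler_length(points, step):
    """Walk along the curve with a fixed ruler and return the total length."""
    length, anchor = 0.0, points[0]
    for p in points[1:]:
        d = np.linalg.norm(p - anchor)
        if d >= step:          # lay down one ruler segment
            length += d
            anchor = p
    return length

def fractal_dimension(points, steps):
    """Fit log L(eps) = (1 - D) log eps + c and solve for D."""
    lengths = [ruler_length(points, s) for s in steps]
    slope, _ = np.polyfit(np.log(steps), np.log(lengths), 1)
    return 1.0 - slope

# Example: a jagged closed curve sampled at 1000 points.
t = np.linspace(0, 2 * np.pi, 1000)
r = 1.0 + 0.05 * np.sin(25 * t)                  # wobbly circle
curve = np.column_stack([r * np.cos(t), r * np.sin(t)])
print(fractal_dimension(curve, steps=[0.01, 0.02, 0.05, 0.1]))
```

For a smooth circle the fitted slope is near zero (D ≈ 1); a boundary whose measured length keeps growing at finer rulers, like the circumplex perimeter above, yields D > 1.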


Optik, 2018, Vol. 158, pp. 1016-1025
Author(s): Asim Munir, Ayyaz Hussain, Sajid Ali Khan, Muhammad Nadeem, Sadia Arshid

2019, Vol. 8 (4), pp. 6140-6144

In this work, we propose a novel illumination-invariant method for facial expression recognition. Facial expressions convey nonverbal visual information among humans and play a vital role in human-machine interface modules, which has attracted the attention of many researchers. Earlier machine learning approaches require complex feature extraction algorithms and rely on the size and uniqueness of features related to the subjects. In this paper, a deep convolutional neural network is proposed for facial expression recognition; it is trained on two publicly available datasets, the JAFFE and Yale databases, under different illumination conditions. Furthermore, transfer learning is applied with networks such as AlexNet and ResNet-101 pre-trained on the ImageNet database. Experimental results show that the designed network can tolerate up to 30% variation in illumination and achieves an accuracy of 92%.
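A minimal transfer-learning sketch in PyTorch along the lines described, assuming seven expression classes and ImageNet weights for ResNet-101; the paper's actual optimizer, augmentation, and illumination preprocessing are not specified here:

```python
# Sketch: fine-tune an ImageNet-pretrained ResNet-101 for expression classes.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 7  # assumption: seven basic expressions, as in JAFFE

# Load ResNet-101 pretrained on ImageNet and replace the final layer.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Freeze the backbone so only the new classification head is trained.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Unfreezing deeper layers after the head converges is a common variant when the target dataset, like JAFFE, is small.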


2022, Vol. 8
Author(s): Niyati Rawal, Dorothea Koert, Cigdem Turan, Kristian Kersting, Jan Peters, ...

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better across robot types and expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets connect a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots’ joint configurations are optimized for various expressions by backpropagating the loss between the predicted expression and the intended expression through both the classification network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
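A conceptual sketch of the optimization loop described above: both networks are frozen and the expression loss is backpropagated into the joint-configuration vector itself. The placeholder architectures, dimensions, and expression indices are illustrative assumptions, not the paper's models:

```python
# Sketch: optimize a joint configuration through frozen generator + classifier.
import torch
import torch.nn as nn

n_joints, n_expressions = 12, 6  # placeholder dimensions
generator = nn.Sequential(nn.Linear(n_joints, 256), nn.ReLU(),
                          nn.Linear(256, 64 * 64))        # joints -> face image
classifier = nn.Sequential(nn.Linear(64 * 64, 128), nn.ReLU(),
                           nn.Linear(128, n_expressions)) # image -> expression

for net in (generator, classifier):
    for p in net.parameters():
        p.requires_grad = False   # both networks stay fixed

def optimize_joints(target_expression, steps=200, lr=0.05):
    joints = torch.zeros(1, n_joints, requires_grad=True)  # the optimized variable
    opt = torch.optim.Adam([joints], lr=lr)
    target = torch.tensor([target_expression])
    for _ in range(steps):
        opt.zero_grad()
        image = generator(joints)                # differentiable rendering proxy
        loss = nn.functional.cross_entropy(classifier(image), target)
        loss.backward()                          # gradient flows into `joints`
        opt.step()
    return joints.detach()

happy_config = optimize_joints(target_expression=2)  # index 2: hypothetical "happy"
```

Because the optimization variable is the joint vector rather than network weights, different initializations can yield the multiple valid configurations per expression mentioned above.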


2019, Vol. 2019, pp. 1-12
Author(s): Awais Mahmood, Shariq Hussain, Khalid Iqbal, Wail S. Elkilani

Facial expression recognition plays an important role in communicating the emotions and intentions of human beings. It is more difficult in uncontrolled environments than in controlled ones because of changes in occlusion, illumination, and noise. In this paper, we present a new framework for effective facial expression recognition from real-time facial images. Unlike methods that spend much time processing the whole face image or dividing it into blocks, our method extracts discriminative features from salient face regions and then combines them with texture and orientation features for better representation. Furthermore, we reduce the data dimension by selecting the most discriminative features. The proposed framework achieves a high recognition accuracy even in the presence of occlusion, illumination changes, and noise. To show its robustness, we used three publicly available challenging datasets. The experimental results show that the proposed framework outperforms existing techniques, indicating the considerable potential of combining geometric features with appearance-based features.
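An illustrative sketch of a region-based pipeline of this kind, assuming LBP histograms for texture, HOG descriptors for orientation, and univariate feature selection for dimensionality reduction; the region coordinates, feature choices, and k are assumptions, not the paper's configuration:

```python
# Sketch: texture + orientation features from salient regions, then selection.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.feature_selection import SelectKBest, f_classif

def region_features(face, regions):
    """Extract an LBP histogram and a HOG descriptor from each salient region."""
    feats = []
    for (y0, y1, x0, x1) in regions:
        patch = face[y0:y1, x0:x1]
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)                                   # texture
        feats.append(hog(patch, pixels_per_cell=(8, 8),
                         cells_per_block=(1, 1)))            # orientation
    return np.concatenate(feats)

# Hypothetical salient regions (eyes, nose, mouth) on a 128x128 face crop.
regions = [(20, 52, 16, 64), (20, 52, 64, 112), (52, 84, 40, 88), (84, 116, 32, 96)]

# Placeholder data: 40 random "faces" with random expression labels.
faces = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(40)]
X = np.stack([region_features(f, regions) for f in faces])
y = np.random.randint(0, 3, size=40)
X_reduced = SelectKBest(f_classif, k=100).fit_transform(X, y)  # keep top features
```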


2018, Vol. 7 (4.6), pp. 17
Author(s): V Uma Maheswari, Vara Prasad, S Viswanadha Raju

In this paper, we propose a novel method for retrieving edge and texture information from facial images, comprising a local directional standard matrix (LDSM) and a local dynamic threshold based binary pattern (LDTBP). The LBP and LTP operators extract texture by taking the difference between the center pixel and its surrounding pixels, but they fail to detect edges and large intensity variations. The proposed method addresses these problems: first, the LDSM is computed from the standard deviation of the horizontal and vertical neighbors of each pixel; the values are then encoded against a dynamic threshold derived from the median of the LDSM values, yielding the LDTBP. In the experiments, the LFW facial expression dataset was used with an SVM classifier to classify the images and retrieve relevant ones, with performance measured in terms of average precision and average recall.
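A loose sketch of one reading of the LDSM/LDTBP construction, since the abstract does not give the exact operator definitions: per-pixel standard deviation over the horizontal and vertical 3-pixel neighborhoods, binarized against a median-based dynamic threshold:

```python
# Interpretive sketch of LDSM/LDTBP; the precise operators are not specified
# in the abstract, so this is an assumption, not the authors' definition.
import numpy as np

def ldsm(image):
    """Std. dev. of the horizontal and vertical 3-pixel neighborhoods."""
    img = np.pad(image.astype(float), 1, mode="edge")
    h = np.stack([img[1:-1, :-2], img[1:-1, 1:-1], img[1:-1, 2:]])  # left, center, right
    v = np.stack([img[:-2, 1:-1], img[1:-1, 1:-1], img[2:, 1:-1]])  # up, center, down
    return h.std(axis=0) + v.std(axis=0)

def ldtbp(image):
    """Binarize the LDSM map with a dynamic (median) threshold."""
    s = ldsm(image)
    return (s >= np.median(s)).astype(np.uint8)

face = np.random.randint(0, 256, (64, 64))
pattern = ldtbp(face)  # binary edge/texture map fed to the SVM stage
```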


Perception, 10.1068/p5811, 2008, Vol. 37 (11), pp. 1637-1648
Author(s): Satoru Kawamura, Masashi Komori, Yusuke Miyamoto

We examined the effect of facial expression on the assignment of gender to facial images. A computational analysis of the facial images was applied to examine whether physical aspects of the face itself induced this effect. Thirty-six observers rated the degree of masculinity of the faces of 48 men and the degree of femininity of the faces of 48 women. Half of the faces had a neutral facial expression, and the other half were smiling. Smiling significantly reduced the perceived masculinity of men's faces, especially for male observers, whereas smiling had no effect on femininity ratings of women's faces. A principal component analysis was conducted on the matrix of pixel luminance values across all facial images (pixels × images). The third principal component explained a relatively high proportion of the variance in both facial expression and face gender. These results suggest that the effect of smiling on the assignment of gender is caused, at least in part, by the physical relationship between facial expression and face gender.
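A small sketch of the pixel-luminance PCA described above, where each grayscale face is flattened into one row of the data matrix; the image size and component count are arbitrary placeholders:

```python
# Sketch: PCA over flattened grayscale face images.
import numpy as np
from sklearn.decomposition import PCA

# Placeholder stack of 96 grayscale face images (e.g., 64x64 luminance).
faces = np.random.rand(96, 64, 64)
X = faces.reshape(len(faces), -1)          # images x pixels matrix

pca = PCA(n_components=10)
scores = pca.fit_transform(X)              # per-image component scores

# The study relates the 3rd component's scores to expression and gender.
third_component_scores = scores[:, 2]
print(pca.explained_variance_ratio_[2])    # variance explained by PC3
```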


Sensors, 2020, Vol. 20 (18), pp. 5391
Author(s): Suraiya Yasmin, Refat Khan Pathan, Munmun Biswas, Mayeen Uddin Khandaker, Mohammad Rashed Iqbal Faruque

Facial expression recognition (FER) processes have been utilized successfully in fields such as computer vision, robotics, artificial intelligence, and dynamic texture recognition. However, a critical problem of the traditional local binary pattern (LBP) for FER is the loss of neighboring-pixel information at different scales, which can affect the texture of facial images. To overcome this limitation, this study describes a new extended LBP method for extracting feature vectors from images and detecting the facial expression in each image. The proposed method is based on the bitwise AND operation of two rotational kernels applied to LBP(8,1) and LBP(8,2), and it is evaluated on two accessible datasets. First, the facial parts are detected and the essential components of a face are located, such as the eyes, nose, and lips. The face region is then cropped to reduce the dimensions, and an unsharp masking kernel is applied to sharpen the image. The filtered images are then passed to the feature extraction stage and on to classification. Four machine learning classifiers were used to verify the proposed method. This study shows that the proposed multi-scale featured local binary pattern (MSFLBP), together with a Support Vector Machine (SVM), outperformed recent LBP-based state-of-the-art approaches, achieving an accuracy of 99.12% on the Extended Cohn–Kanade (CK+) dataset and 89.08% on the Karolinska Directed Emotional Faces (KDEF) dataset.
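A hedged sketch of the multi-scale combination step, using scikit-image LBP codes at radii 1 and 2 joined with a bitwise AND before histogramming; the paper's rotational kernels and preprocessing (face cropping, unsharp masking) are not reproduced here:

```python
# Sketch: combine LBP(8,1) and LBP(8,2) codes with a bitwise AND, then
# build a histogram feature vector; a simplification of MSFLBP.
import numpy as np
from skimage.feature import local_binary_pattern

def msflbp_histogram(gray_face, bins=256):
    lbp_r1 = local_binary_pattern(gray_face, P=8, R=1, method="default")
    lbp_r2 = local_binary_pattern(gray_face, P=8, R=2, method="default")
    combined = np.bitwise_and(lbp_r1.astype(np.uint8), lbp_r2.astype(np.uint8))
    hist, _ = np.histogram(combined, bins=bins, range=(0, bins), density=True)
    return hist  # feature vector handed to the SVM classifier

face = np.random.randint(0, 256, (96, 96)).astype(np.uint8)
features = msflbp_histogram(face)
```

The AND keeps only the bits set at both scales, so texture transitions that persist across neighborhood radii survive while single-scale noise is suppressed.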

