Facial Expression Ambiguity and Face Image Quality Affect Differently on Expression Interpretation Bias

Perception ◽  
2021 ◽  
pp. 030100662110002
Author(s):  
Jade Kinchella ◽  
Kun Guo

We often show an invariant or comparable recognition performance for perceiving prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions and associated interpretation bias are invariant in degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (via morphing happy and angry expressions in different proportions) and face image clarity/quality (via manipulating image resolution) to measure participants’ expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing facial expression ambiguity and decreasing face image quality induced the opposite direction of expression interpretation bias (negativity vs. positivity bias, or increased anger vs. increased happiness categorisation), the same direction of deterioration impact on rating expression intensity, and qualitatively different influence on face-viewing gaze allocation (decreased gaze at eyes but increased gaze at mouth vs. stronger central fixation bias). These novel findings suggest that in comparison with prototypical facial expressions, our visual system has less perceptual tolerance in processing ambiguous expressions which are subject to viewing condition-dependent interpretation bias.
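
As a rough illustration of the two stimulus manipulations this abstract describes, the sketch below blends a pair of aligned happy and angry face images in a given proportion and degrades image resolution by down- and up-sampling. It is a minimal stand-in, assuming same-size images and simple pixel-level blending rather than proper landmark-based morphing; the morph proportion and resolution factor shown are illustrative, not the study's actual parameters.

```python
import numpy as np
from PIL import Image

def morph_expressions(happy_path, angry_path, anger_proportion):
    """Blend two aligned, same-size face images; anger_proportion in [0, 1]."""
    happy = np.asarray(Image.open(happy_path), dtype=np.float32)
    angry = np.asarray(Image.open(angry_path), dtype=np.float32)
    blend = (1.0 - anger_proportion) * happy + anger_proportion * angry
    return Image.fromarray(blend.astype(np.uint8))

def degrade_resolution(img, factor):
    """Reduce effective resolution by downsampling, then restoring the size."""
    w, h = img.size
    small = img.resize((max(1, w // factor), max(1, h // factor)), Image.BILINEAR)
    return small.resize((w, h), Image.BILINEAR)

# Example: a 60%-anger morph shown at one quarter of the original resolution.
stimulus = degrade_resolution(morph_expressions("happy.png", "angry.png", 0.6), 4)
```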

2019 ◽  
Vol 9 (16) ◽  
pp. 3379
Author(s):  
Hyun-Jun Hyung ◽  
Han Ul Yoon ◽  
Dongwoon Choi ◽  
Duk-Yeon Lee ◽  
Dong-Wook Lee

Because android faces differ in internal structure, degrees of freedom, and skin control positions and ranges, it is very difficult to generate facial expressions by applying existing facial expression generation methods. In addition, facial expressions differ among robots because they are designed subjectively. To address these problems, we developed a system that can automatically generate robot facial expressions by combining an android, a recognizer capable of classifying facial expressions, and a genetic algorithm. We developed two types of android face robot (an older man and a young woman) that can simulate human skin movements, and selected 16 control positions to generate their facial expressions. The expressions were generated by combining the displacements of 16 motors. A chromosome comprising 16 genes (motor displacements) was generated by applying a real-coded genetic algorithm and was then used to generate robot facial expressions. To determine the fitness of a generated expression, its intensity was evaluated with a facial expression recognizer. The proposed system was used to generate six facial expressions (anger, disgust, fear, happiness, sadness, surprise); the results confirmed that they were more appropriate than manually generated facial expressions.
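
A minimal sketch of the optimization loop this abstract describes: a real-coded genetic algorithm over 16 motor displacements, with fitness given by the expression intensity a recognizer assigns to the target emotion. Because the android hardware and the recognizer are not available here, a toy fitness (closeness to a hidden motor configuration) stands in for rendering and scoring the face; the population size, mutation scale, and normalized motor range are all assumptions.

```python
import random

N_MOTORS, POP, GENS = 16, 40, 100
MOTOR_MIN, MOTOR_MAX = 0.0, 1.0      # normalized displacement range (assumed)
TARGET = [random.random() for _ in range(N_MOTORS)]  # stand-in for "happy"

def fitness(chromosome):
    # Toy stand-in for recognizer_intensity(render_face(chromosome), "happy").
    return -sum((g - t) ** 2 for g, t in zip(chromosome, TARGET))

def evolve():
    pop = [[random.uniform(MOTOR_MIN, MOTOR_MAX) for _ in range(N_MOTORS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP // 2]                       # truncation selection
        children = []
        while len(elite) + len(children) < POP:
            a, b = random.sample(elite, 2)
            w = random.random()                       # arithmetic crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [min(MOTOR_MAX, max(MOTOR_MIN, g + random.gauss(0, 0.05)))
                     for g in child]                  # clipped Gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()  # best motor displacement vector found
```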


Algorithms ◽  
2019 ◽  
Vol 12 (11) ◽  
pp. 227 ◽  
Author(s):  
Yingying Wang ◽  
Yibin Li ◽  
Yong Song ◽  
Xuewen Rong

In recent years, with the development of artificial intelligence and human–computer interaction, more attention has been paid to the recognition and analysis of facial expressions. Despite great success, many problems remain because facial expressions are subtle and complex; facial expression recognition is still a challenging problem. In most papers, the entire face image is chosen as the input. In daily life, however, people can perceive others' current emotions from just a few facial components (such as the eyes, mouth, and nose), while other areas of the face (such as hair, skin tone, and ears) play a smaller role in determining emotion. If the entire face image is used as the only input, the system will produce unnecessary information and miss important information during feature extraction. To solve this problem, this paper proposes a method that combines multiple sub-regions and the entire face image by weighting, which captures more of the important feature information and thereby improves recognition accuracy. The proposed method was evaluated on four well-known publicly available facial expression databases: JAFFE, CK+, FER2013, and SFEW. It showed better performance than most state-of-the-art methods.
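
A minimal PyTorch sketch of the weighted combination the abstract describes: separate feature streams for the whole face and for expression-relevant sub-regions (e.g., eyes, nose, mouth), fused by learnable weights before classification. The backbone, feature dimension, and seven-class output are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def small_backbone(feat_dim=128):
    # Illustrative lightweight extractor; any CNN producing feat_dim features works.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        nn.Flatten(), nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(),
    )

class WeightedRegionFER(nn.Module):
    def __init__(self, feat_dim=128, n_classes=7, n_regions=3):
        super().__init__()
        # One feature stream for the whole face plus one per sub-region.
        self.streams = nn.ModuleList(small_backbone(feat_dim)
                                     for _ in range(n_regions + 1))
        # Learnable fusion weights, softmax-normalized in forward().
        self.weights = nn.Parameter(torch.ones(n_regions + 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, whole_face, regions):
        # regions: list of cropped tensors, e.g. [eyes, nose, mouth]
        feats = torch.stack([s(x) for s, x in zip(self.streams,
                                                  [whole_face] + regions)])
        w = torch.softmax(self.weights, 0).view(-1, 1, 1)
        return self.classifier((w * feats).sum(0))   # weighted fusion
```

The adaptive pooling in the backbone lets crops of different sizes share the same stream architecture, which keeps the fusion logic simple.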


2020 ◽  
Vol 8 (2) ◽  
pp. 68-84
Author(s):  
Naoki Imamura ◽  
Hiroki Nomiya ◽  
Teruhisa Hochin

Facial expression intensity has been proposed to quantify the degree of facial expression in order to retrieve impressive scenes from lifelog videos. The intensity is calculated from the correlation of facial features with each facial expression. However, this correlation is not determined objectively; it should be determined statistically, based on the contribution score of the facial features necessary for expression recognition. The proposed method therefore recognizes facial expressions using a neural network and calculates the contribution score of each input toward the output. First, the authors improve some facial features. They then verify the scores by comparing how recognition accuracy changes as useful versus useless features are removed, and process the scores statistically. As a result, they extract useful facial features from the neural network.
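
One common way to compute such a per-input contribution score is gradient-times-input attribution, sketched below in PyTorch. This is offered as an illustrative choice; the paper's exact scoring method may differ, and the toy classifier at the end exists only to make the sketch runnable.

```python
import torch
import torch.nn as nn

def contribution_scores(model, features):
    """Gradient-times-input attribution for one sample of facial features."""
    x = features.clone().detach().requires_grad_(True)
    logits = model(x.unsqueeze(0))          # (1, n_expressions)
    logits[0, logits.argmax()].backward()   # attribute the winning class
    return (x.grad * x).detach()            # per-feature contribution

# Illustrative usage with a toy classifier over 10 facial features.
toy = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 6))
scores = contribution_scores(toy, torch.randn(10))
```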


2018 ◽  
Vol 11 (2) ◽  
pp. 16-33 ◽  
Author(s):  
A.V. Zhegallo

The study investigates the recognition of emotional facial expressions presented in peripheral vision, with exposure times shorter than the latency of a saccade towards the exposed image. Recognition under peripheral presentation reproduced the characteristic patterns of incorrect responses: mutual misidentification was common among the expressions of fear, anger, and surprise, and as recognition conditions worsened, calmness and grief joined the set of mutually confused expressions. The identification of happiness deserves special attention: it can be misidentified as other facial expressions, but other expressions are never recognized as happiness. Individual recognition accuracy varied from 0.29 to 0.80. Recognizing expressions with peripheral vision alone, without making a saccade towards the exposed face image, was a sufficient condition for high accuracy.


2021 ◽  
Vol 2021 ◽  
pp. 1-21
Author(s):  
Ke Li ◽  
Hu Chen ◽  
Faxiu Huang ◽  
Shenggui Ling ◽  
Zhisheng You

Face image quality has an important effect on recognition performance. Recognition-oriented face image quality assessment is particularly necessary for screening or selecting face images of varying quality. In this work, sharpness and brightness were assessed mainly by a classification model. We selected very high-quality images of each subject and established nine quality labels related to recognition performance by combining face recognition algorithms, the human visual system, and a traditional brightness calculation method. Experiments were conducted on a custom dataset and the CMU Multi-PIE face database for training and testing, and on Labeled Faces in the Wild for cross-validation. The experimental results show that the proposed method can effectively reduce the false non-match rate by removing the low-quality face images identified by the classification model, and vice versa. The method is effective even for face recognition algorithms that were not involved in label creation and whose training data are nonhomologous to the training set of our quality assessment model. The results show that the proposed method can distinguish images of different qualities with reasonable accuracy and is consistent with subjective human evaluation. The quality labels established in this paper are closely related to recognition performance and generalize well to other recognition algorithms. Our method can be used to reject low-quality images to improve the recognition rate and to screen high-quality images for subsequent processing.
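
The screening step the abstract describes might look like the short sketch below: a quality model assigns one of the nine labels, and images below a threshold are rejected before matching. The numeric label ordering (1 = worst, 9 = best) and the threshold are assumptions for illustration.

```python
def screen_faces(images, predict_quality, min_label=5):
    """Keep images whose predicted quality label meets the threshold.

    predict_quality: callable mapping an image to a label in 1..9
    (1 = worst, 9 = best; ordering assumed for this sketch).
    """
    accepted, rejected = [], []
    for img in images:
        (accepted if predict_quality(img) >= min_label else rejected).append(img)
    return accepted, rejected
```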


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Qing Lin ◽  
Ruili He ◽  
Peihe Jiang

State-of-the-art facial expression recognition methods outperform human beings, thanks largely to the success of convolutional neural networks (CNNs). However, most existing work focuses on analyzing adult faces and ignores two important questions: how can we recognize facial expressions from a baby's face image, and how difficult is it? In this paper, we first introduce a new face image database, named BabyExp, which contains 12,000 images of babies younger than two years old, each labeled with one of three facial expressions (happy, sad, or normal). To the best of our knowledge, the proposed dataset is the first for analyzing baby face images; it is complementary to existing adult face datasets and can shed some light on baby face analysis. We also propose a feature-guided CNN method with a new loss function, called distance loss, to optimize interclass distance. To facilitate further research, we provide an expression recognition benchmark on the BabyExp dataset. Experimental results show that the proposed network achieves a recognition accuracy of 87.90% on BabyExp.
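
The abstract does not spell out the distance loss, but a loss that "optimizes interclass distance" is often built by pulling samples toward their own class center while pushing different class centers apart. The PyTorch sketch below is one such formulation, offered as an assumption rather than the authors' exact loss; the margin value is illustrative.

```python
import torch
import torch.nn.functional as F

def distance_loss(features, labels, centers, margin=10.0):
    """features: (B, D) embeddings; labels: (B,); centers: (C, D) class centers."""
    # Pull each sample toward its own class center (a center-loss-style term)...
    intra = ((features - centers[labels]) ** 2).sum(dim=1).mean()
    # ...and push centers of different classes at least `margin` apart.
    dists = torch.cdist(centers, centers)
    mask = ~torch.eye(len(centers), dtype=torch.bool, device=centers.device)
    inter = F.relu(margin - dists[mask]).mean()
    return intra + inter

# Typically combined with cross-entropy on the logits:
# loss = F.cross_entropy(logits, labels) + lam * distance_loss(feats, labels, centers)
```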


2016 ◽  
Vol 119 (2) ◽  
pp. 539-556 ◽  
Author(s):  
Xiaoling Wang ◽  
Mingyi Qian ◽  
Hongyu Yu ◽  
Yang Sun ◽  
Songwei Li ◽  
...  

This study examined how positive-scale assessment of ambiguous social stimuli affects interpretation bias in social anxiety. Participants with high and low social anxiety (N = 60) performed a facial expression discrimination task to assess interpretation bias. Participants were then randomly assigned to assess the emotion of briefly presented faces on either a negative or a positive scale, and subsequently repeated the facial expression discrimination task. Participants with high social anxiety made more negative interpretations of ambiguous facial expressions than those with low social anxiety. However, those in the positive-scale assessment condition subsequently showed reduced negative interpretations of ambiguous facial expressions. These results suggest that interpretation bias in social anxiety could be mediated by positive priming rather than an outright negative bias.


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262344
Author(s):  
Maria Tsantani ◽  
Vita Podgajecka ◽  
Katie L. H. Gray ◽  
Richard Cook

The use of surgical-type face masks has become increasingly common during the COVID-19 pandemic. Recent findings suggest that it is harder to categorise the facial expressions of masked faces than of unmasked faces. To date, studies of the effects of mask-wearing on emotion recognition have used categorisation paradigms: authors have presented facial expression stimuli and examined participants' ability to attach the correct label (e.g., happiness, disgust). While the ability to categorise particular expressions is important, this approach overlooks the fact that expression intensity is also informative during social interaction. For example, when predicting an interactant's future behaviour, it is useful to know whether they are slightly fearful or terrified, contented or very happy, slightly annoyed or angry. Moreover, because categorisation paradigms force observers to pick a single label to describe their percept, any additional dimensionality within observers' interpretation is lost. In the present study, we adopted a complementary emotion-intensity rating paradigm to study the effects of mask-wearing on expression interpretation. In an online experiment with 120 participants (82 female), we investigated how the presence of face masks affects the perceived emotional profile of prototypical expressions of happiness, sadness, anger, fear, disgust, and surprise. For each of these facial expressions, we measured the perceived intensity of all six emotions. We found that the perceived intensity of intended emotions (i.e., the emotion that the actor intended to convey) was reduced by the presence of a mask for all expressions except for anger. Additionally, when viewing all expressions except surprise, masks increased the perceived intensity of non-intended emotions (i.e., emotions that the actor did not intend to convey). Intensity ratings were unaffected by presentation duration (500 ms vs. 3000 ms) or attitudes towards mask wearing. These findings shed light on the ambiguity that arises when interpreting the facial expressions of masked faces.


In this paper, we propose a compact CNN model for facial expression recognition. Expression recognition on low-quality images is much more challenging and interesting due to the presence of low-intensity expressions, which are difficult to distinguish at insufficient image resolution. Data collection for FER is expensive and time-consuming; research indicates that images downloaded from the Internet are very useful for modeling and training expression recognition. We use extra datasets, each representing a specific data source, to improve the training of facial expression recognition. Moreover, to prevent subjective annotation, each dataset is labeled with different approaches to ensure annotation quality.

Recognizing the precise expression from the variety of expressions across different people is a hard problem. To address it, we propose an emotion detection model that extracts emotions from a given input image. This work focuses on the psychological approach of the color circle-emotion relation [1] to find the accurate emotion from the input image. The whole image is first preprocessed and studied pixel by pixel; combinations of the circles based on the combined data yield a new color, which is directly correlated to a particular emotion. Based on these psychological aspects, the output is of reasonable accuracy. The major application of this work is to predict a person's emotion from face images or video frames; it can even be applied to evaluating public opinion about a particular movie from video reaction posts on social media. Another application of the system is to understand students' learning from their emotions.

Human beings show their emotional states and intentions through facial expressions, which are powerful and natural means of conveying emotional status. The approach used in this work exploits temporal information and improves accuracy on public benchmark databases. The basic facial expressions are happiness, fear, anger, disgust, sadness, and surprise [2]; contempt was subsequently added as a basic emotion. Having sufficient well-labeled training data covering variations in populations and environments is important for designing a deep expression recognition system. Behaviors, poses, facial expressions, actions, and speech are all channels that convey human emotions, and much research explores the correlation between these channels and emotions. This paper highlights the development of a system that automatically recognizes facial expressions.
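
As a concrete illustration of what a compact CNN for low-resolution expression recognition might look like, here is a minimal PyTorch sketch sized for 48x48 greyscale inputs. The layer sizes and seven-class output (six basic emotions plus neutral) are illustrative choices, not the paper's architecture.

```python
import torch.nn as nn

class CompactFER(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the model compact
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```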


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry expressions. Fifty-eight participants learned by trial and error to avoid aversive stimulation by either reciprocating (congruent) or responding opposite (incongruent) to the expression of the target face. Our results validated the method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent versus incongruent conditions. Moreover, participants performed better on trials with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic account of our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method to study learning and decision-making in facial expression exchange, in which facial expression selection must gradually adapt to both social and non-social reinforcements.
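
A minimal sketch of the kind of reinforcement learning model that could be fit to such trial-and-error avoidance data: a softmax action-value learner that, for each target expression, chooses between reciprocating and responding opposite, and updates its values with a delta rule. The learning rate, inverse temperature, and reward coding are illustrative assumptions, not the authors' fitted model.

```python
import math
import random

def softmax_choice(q, beta=3.0):
    """Pick an action index with probability proportional to exp(beta * Q)."""
    exps = [math.exp(beta * v) for v in q]
    r = random.random() * sum(exps)
    for i, e in enumerate(exps):
        r -= e
        if r <= 0:
            return i
    return len(q) - 1

def run_trials(trials, alpha=0.2):
    # Separate Q-values per target expression, two actions each:
    # 0 = reciprocate, 1 = respond opposite.
    q = {"happy": [0.0, 0.0], "angry": [0.0, 0.0]}
    for expression, correct_action in trials:
        a = softmax_choice(q[expression])
        reward = 1.0 if a == correct_action else 0.0  # 0 = aversive outcome
        q[expression][a] += alpha * (reward - q[expression][a])  # delta rule
    return q

# Example: 100 trials with an assumed contingency (reciprocate happy,
# respond opposite to angry).
trials = [random.choice([("happy", 0), ("angry", 1)]) for _ in range(100)]
values = run_trials(trials)
```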

