Are people happy when they smile? Affective assessments based on automatic smile genuineness identification

2020 ◽  
Author(s):  
Monica Perusquía-Hernández

Smiles are one of the most ubiquitous facial expressions. They are often interpreted as a signalling cue of positive emotion. However, like any other facial expression, smiles can also be voluntarily fabricated, masked or inhibited to serve different communication goals. This review discusses automatic identification of smile genuineness. First, emotions and their bodily manifestation are introduced. Second, an overview of the literature on different types of smiles is provided. Afterwards, different techniques used to investigate smile production are described. These techniques range from human video coding and bio-signal inspection to novel sensors that, together with automated machine-learning techniques, aim to investigate facial expression characteristics beyond human perception. Next, a general summary of the spatio-temporal shape of a smile is provided. Finally, the remaining challenges regarding individual and cultural differences are discussed.
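As a rough illustration of the automated-analysis step discussed in this review, the sketch below trains a simple classifier to separate genuine from posed smiles using a few hand-crafted temporal features (onset duration, apex amplitude, left/right asymmetry). The feature set and the synthetic data are hypothetical placeholders, not the pipeline of any specific study reviewed here.

```python
# Minimal sketch: classifying smile genuineness from temporal smile dynamics.
# The three features (onset duration, apex amplitude, asymmetry) are illustrative
# placeholders; real systems extract many more cues from video or bio-signals.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synthetic_smiles(n, genuine):
    # Hypothetical distributions: genuine smiles are assumed here to have
    # slower onsets, larger apex amplitude and more left/right symmetry.
    onset = rng.normal(0.6 if genuine else 0.3, 0.1, n)     # seconds
    apex = rng.normal(0.8 if genuine else 0.5, 0.15, n)     # normalized amplitude
    asym = rng.normal(0.05 if genuine else 0.15, 0.05, n)   # lip-corner asymmetry
    return np.column_stack([onset, apex, asym])

X = np.vstack([synthetic_smiles(200, True), synthetic_smiles(200, False)])
y = np.array([1] * 200 + [0] * 200)  # 1 = genuine, 0 = posed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```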

2019 ◽  
Vol 9 (21) ◽  
pp. 4542 ◽  
Author(s):  
Marco Leo ◽  
Pierluigi Carcagnì ◽  
Cosimo Distante ◽  
Pier Luigi Mazzeo ◽  
Paolo Spagnolo ◽  
...  

The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and yield quick, objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production, and most of the scientific literature addresses the easier task of recognizing whether a facial expression is present or not. Some attempts to tackle this challenging task exist, but they do not provide a comprehensive study based on the comparison between human and automatic outcomes in quantifying children's ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus only on a group of individuals that is homogeneous in terms of cognitive capabilities. To fill this gap, this paper integrates advanced computer vision and machine learning strategies into a framework aimed at computationally analyzing how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) in order to monitor the facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual's ability to produce facial expressions. The gathered computational outcomes have been correlated with the evaluations provided by psychologists, and evidence is given showing how the proposed framework could be effectively exploited to analyze in depth the emotional competence of children with ASD in producing facial expressions.
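The "virtual electromyography sensors" described above can be thought of as facial landmarks whose frame-to-frame displacement approximates muscle activity. The sketch below assumes landmark coordinates have already been extracted per frame by some face-landmark tracker; the region groupings and the fusion rule are illustrative, not those of the paper.

```python
# Minimal sketch: landmarks as "virtual EMG sensors".
# Assumes landmarks[t] is an (N, 2) array of x,y coordinates for frame t,
# already produced by a face-landmark tracker. Region indices below follow a
# common 68-point layout and are illustrative only.
import numpy as np

REGIONS = {
    "brow":  range(17, 27),
    "eyes":  range(36, 48),
    "mouth": range(48, 68),
}

def region_activity(landmarks):
    """Mean frame-to-frame displacement per facial region."""
    traj = np.asarray(landmarks, dtype=float)                # (T, N, 2)
    motion = np.linalg.norm(np.diff(traj, axis=0), axis=2)   # (T-1, N)
    return {name: motion[:, idx].mean() for name, idx in REGIONS.items()}

def production_score(landmarks, weights=None):
    """Fuse per-region activity into one expression-production score."""
    act = region_activity(landmarks)
    weights = weights or {name: 1.0 for name in act}
    return sum(weights[n] * a for n, a in act.items()) / sum(weights.values())

# Toy usage: 30 frames of 68 mostly static landmarks plus mouth movement.
frames = np.tile(np.random.rand(68, 2), (30, 1, 1))
frames[:, 48:68] += np.linspace(0, 0.1, 30)[:, None, None]
print(production_score(frames))
```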


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Catarina Iria ◽  
Rui Paixão ◽  
Fernando Barbosa

It is unknown whether the ability of Portuguese individuals to identify the facial expressions of the NimStim data set, which was created in America to provide expressions that could be recognized by untrained people, is similar to that of Americans. To test this, the performance of Portuguese participants in recognizing the Happiness, Surprise, Sadness, Fear, Disgust and Anger NimStim facial expressions was compared with that of Americans, and no significant differences were found. In both populations the easiest emotion to identify was Happiness, while Fear was the most difficult. However, with the exception of Surprise, the Portuguese tended to show a lower accuracy rate for all the emotions studied. The results highlight some cultural differences.
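A between-population comparison of recognition rates like the one described above can be run, for example, as a chi-square test on correct/incorrect counts per emotion; the counts below are invented placeholders, not the study's data.

```python
# Minimal sketch: comparing recognition accuracy between two samples.
# Counts are invented placeholders, not the data from the study above.
from scipy.stats import chi2_contingency

# rows: Portuguese, American; columns: correct, incorrect (e.g., for Fear)
table = [[61, 39],
         [68, 32]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```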


PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0251057
Author(s):  
Miquel Mascaró ◽  
Francisco J. Serón ◽  
Francisco J. Perales ◽  
Javier Varona ◽  
Ramon Mas

Laughter and smiling are significant facial expressions used in human-to-human communication. We present a computational model for generating the facial expressions associated with laughter and smiling, in order to facilitate the synthesis of such expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. In particular, a proprietary database of laugh and smile expressions is also presented; this database lists the different types of laughs classified and generated in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that the virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the type of virtual character's appearance.
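One simple way to realize a generative smile model for a virtual character, sketched here under the assumption that the character exposes a "smile" blendshape, is to drive the blendshape weight with an onset-apex-offset intensity envelope; the envelope shape and parameters below are illustrative, not the model proposed in the paper.

```python
# Minimal sketch: driving a virtual character's "smile" blendshape with an
# onset-apex-offset envelope. The envelope and its parameters are illustrative;
# the paper's actual generation model is more elaborate.
import numpy as np

def smile_envelope(fps=30, onset=0.5, apex=1.0, offset=0.8, peak=0.9):
    """Blendshape weight (0..1) over time for one smile event."""
    up   = np.linspace(0.0, peak, int(onset * fps))
    hold = np.full(int(apex * fps), peak)
    down = np.linspace(peak, 0.0, int(offset * fps))
    return np.concatenate([up, hold, down])

weights = smile_envelope()
for t, w in enumerate(weights):
    # character.set_blendshape("smile", w)   # hypothetical engine call
    pass
print(f"{len(weights)} frames, peak weight {weights.max():.2f}")
```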


Author(s):  
Ritvik Tiwari ◽  
Rudra Thorat ◽  
Vatsal Abhani ◽  
Shakti Mahapatro

Emotion recognition based on facial expressions is an intriguing research field that has been presented and applied in various spheres such as safety, health and human-machine interfaces. Researchers in this field are keen to develop techniques that aid in interpreting and decoding facial expressions and extracting the relevant features in order to achieve better predictions by the computer. With advancements in deep learning, the different prospects of this technique are being exploited to achieve better performance. We spotlight these contributions, the architectures and the databases used, and present the progress made by comparing the proposed methods and the results obtained. The aim of this paper is to guide technology enthusiasts by reviewing recent works and providing insights for making improvements to this field.


2020 ◽  
Vol 10 (11) ◽  
pp. 4002
Author(s):  
Sathya Bursic ◽  
Giuseppe Boccignone ◽  
Alfio Ferrara ◽  
Alessandro D’Amelio ◽  
Raffaella Lanzarotti

When automatic facial expression recognition is applied to video sequences of speaking subjects, recognition accuracy has been noted to be lower than with video sequences of still subjects. This effect, known as the speaking effect, arises during spontaneous conversations, in which the speech articulation process influences facial configurations alongside the affective expressions. In this work we ask whether, aside from facial features, other cues related to the articulation process would increase emotion recognition accuracy when added as input to a deep neural network model. We develop two neural networks that classify facial expressions in speaking subjects from the RAVDESS dataset: a spatio-temporal CNN and a GRU-cell RNN. They are first trained on facial features only, and afterwards on both facial features and articulation-related cues extracted from a model trained for lip reading, while also varying the number of consecutive frames provided as input. We show that, using DNNs, the addition of articulation-related features increases classification accuracy by up to 12%, with the increase being greater when more consecutive frames are provided as input to the model.
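The fusion of facial and articulation-related features described above can be sketched, for the GRU variant, as a recurrent classifier that takes the two per-frame feature streams concatenated at the input; the feature dimensions and the lip-reading feature extractor are placeholders, not the authors' implementation.

```python
# Minimal sketch (PyTorch): a GRU classifier over per-frame feature vectors,
# with facial features and articulation cues concatenated at the input.
# Feature dimensions and the 8 emotion classes are placeholders; the
# lip-reading feature extractor is assumed to exist elsewhere.
import torch
import torch.nn as nn

class FusionGRU(nn.Module):
    def __init__(self, face_dim=512, artic_dim=256, hidden=128, n_classes=8):
        super().__init__()
        self.gru = nn.GRU(face_dim + artic_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, face_feats, artic_feats):
        # face_feats: (B, T, face_dim), artic_feats: (B, T, artic_dim)
        x = torch.cat([face_feats, artic_feats], dim=-1)
        _, h = self.gru(x)                  # h: (1, B, hidden)
        return self.head(h.squeeze(0))      # (B, n_classes)

model = FusionGRU()
face = torch.randn(4, 16, 512)     # 4 clips, 16 consecutive frames
artic = torch.randn(4, 16, 256)
print(model(face, artic).shape)    # torch.Size([4, 8])
```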


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruently) or responding opposite (incongruently) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials in which they were confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
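As a rough illustration of the reinforcement-learning modelling mentioned above, the sketch below implements a simple Rescorla-Wagner-style value update with a softmax choice rule for selecting between a congruent and an incongruent facial response; it is a generic RL sketch, not the authors' fitted model or parameters.

```python
# Minimal sketch: a Rescorla-Wagner / softmax choice model for learning which
# facial response (congruent vs. incongruent) avoids aversive stimulation.
# Generic illustration only; not the authors' fitted model or parameters.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.2, 5.0            # learning rate, inverse temperature
Q = np.zeros(2)                   # value of [congruent, incongruent]
correct_action = 0                # e.g., reciprocating avoids the stimulation

for trial in range(100):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()      # softmax choice rule
    action = rng.choice(2, p=p)
    reward = 1.0 if action == correct_action else 0.0  # 0 = aversive outcome
    Q[action] += alpha * (reward - Q[action])           # prediction-error update

print("learned values:", np.round(Q, 2))
```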


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify whether participants’ facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by “happy” and “sad,” seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
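The comparison reported above essentially aggregates frame-level expression labels into per-condition proportions; the sketch below shows that aggregation step with made-up labels (the study itself used dedicated facial-analysis software to label frames).

```python
# Minimal sketch: turning frame-by-frame expression labels into per-condition
# proportions, as in the comparison described above. Labels are made up here.
from collections import Counter

def expression_proportions(frame_labels):
    counts = Counter(frame_labels)
    total = sum(counts.values())
    return {expr: n / total for expr, n in counts.items()}

conditions = {
    "happy cue": ["neutral"] * 60 + ["happy"] * 30 + ["sad"] * 10,
    "sad cue":   ["neutral"] * 65 + ["sad"] * 25 + ["happy"] * 10,
    "city cue":  ["neutral"] * 85 + ["happy"] * 8 + ["sad"] * 7,
}
for cue, labels in conditions.items():
    print(cue, expression_proportions(labels))
```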


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual selection of features. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trends of its key parts. Video data are processed in a first stage: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model is then used to pick out the parts of the face that are most susceptible to expression changes. The combined effect of these two methods yields an image that can be called a local feature map. After that, the video data are sent to MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts allows the network to better learn the changes in facial expressions brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN can achieve recognition rates of 95%, 78.6% and 78.3% on the three datasets SAVEE, MMI, and edited GEMEP, respectively.
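A minimal sketch of the two-branch idea behind the network described above: one sub-network sees the whole face sequence, another sees the attention-selected key parts, and their features are concatenated before classification. Layer sizes and the ERT/attention preprocessing are placeholders, not the published MC-DCN architecture.

```python
# Minimal sketch (PyTorch): two parallel sub-networks over a face sequence,
# one for the global face, one for attention-selected key parts, fused before
# the classifier. Shapes and layers are placeholders, not the paper's design.
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> (B, 16)
            )
        self.global_branch = branch()   # whole-face clip
        self.local_branch = branch()    # key-part (local feature map) clip
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, global_clip, local_clip):
        # clips: (B, 3, T, H, W)
        g = self.global_branch(global_clip)
        l = self.local_branch(local_clip)
        return self.classifier(torch.cat([g, l], dim=1))

model = TwoBranchFER()
clip = torch.randn(2, 3, 8, 64, 64)   # 2 clips, 8 frames, 64x64 crops
print(model(clip, clip).shape)         # torch.Size([2, 7])
```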

