Does the Goal Matter? Emotion Recognition Tasks Can Change the Social Value of Facial Mimicry Towards Artificial Agents

2021 ◽  
Vol 8 ◽  
Author(s):  
Giulia Perugia ◽  
Maike Paetzel-Prüsmann ◽  
Isabelle Hupont ◽  
Giovanna Varni ◽  
Mohamed Chetouani ◽  
...  

In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people’s spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent) and a human (control). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents’ facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants’ facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results reveal that the agent that was perceived as the least uncanny and the most anthropomorphic, likable, and co-present was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry, as it transforms a likable artificial agent into a distractor. Further work is needed to corroborate this hypothesis. Nevertheless, our findings shed light on the functioning of human-agent and human-robot mimicry in emotion recognition tasks and help us to unravel the relationship between facial mimicry, liking, and rapport.
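For readers implementing a similar measure, a minimal sketch of how spontaneous mimicry can be quantified from AU intensity time series follows. The abstract names an automated AU intensity detector but not a specific library, so the detector output is assumed and mimicry_score is a hypothetical helper, not the authors' pipeline:

    # Minimal sketch (not the authors' code): quantify mimicry of one AU as the
    # peak lagged Pearson correlation between the agent's and the participant's
    # AU intensity time series, sampled per video frame.
    import numpy as np

    def mimicry_score(agent_au, participant_au, max_lag=30):
        """Peak correlation across lags, with the participant lagging the agent.

        max_lag bounds the allowed mimicry latency in frames (~1 s at 30 fps).
        """
        scores = []
        for lag in range(max_lag + 1):
            a = agent_au[:len(agent_au) - lag]
            p = participant_au[lag:]
            if a.std() > 0 and p.std() > 0:
                scores.append(np.corrcoef(a, p)[0, 1])
        return max(scores) if scores else 0.0

    # Synthetic example: a participant echoing the agent's AU12 ten frames late.
    rng = np.random.default_rng(0)
    agent = np.clip(np.sin(np.linspace(0, 6, 300)), 0, None) + rng.normal(0, .05, 300)
    participant = np.roll(agent, 10) + rng.normal(0, .05, 300)
    print(f"mimicry score: {mimicry_score(agent, participant):.2f}")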

2020 ◽  
Author(s):  
Arianna Schiano Lomoriello ◽  
Giulio Caperna ◽  
Elisa De Stefani ◽  
Pier Francesco Ferrari ◽  
Paola Sessa

According to models of sensorimotor simulation, we recognize others' emotions by subtly mimicking their expressions, which allows us to feel the corresponding emotion via facial feedback. In this context, facial mimicry, which requires the implicit activation of the motor programs that produce a specific expression, is a crucial phenomenon in emotion recognition, including the processing of expression intensity. Consequently, difficulty in producing facial expressions should affect emotional understanding. In the present investigation, we recruited a sample (N = 11) of patients with Moebius syndrome (MBS), which is characterized by congenital facial paralysis, and a control group (N = 11) of healthy participants. By leveraging the unique condition of MBS, we aimed to investigate the role of facial mimicry and sensorimotor simulation in creating a precise embodied concept of each emotion. The two groups underwent a sensitive facial emotion recognition task, optimally tuned to test sensitivity to emotion intensity and emotion discriminability in terms of confusability with other emotions. Our study provides evidence of a deficit in emotion recognition in MBS patients, expressed as a significant decrease in intensity ratings for three specific emotion categories, namely sadness, fear, and disgust. Moreover, we observed an impairment in detecting these emotions, resulting in stronger confusability of these emotions with the neutral and the secondary blended emotion. These findings support embodied theories, which hypothesize that sensorimotor systems are involved in the detection and discrimination of emotions.
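As an illustration of how such confusability can be quantified, the sketch below derives per-emotion hit rates and confusion-with-neutral rates from a response confusion matrix. The counts are invented; the authors' exact scoring is not given in the abstract:

    # Illustrative sketch (not the authors' analysis): rows are the displayed
    # emotion, columns are the emotion reported by the participant.
    import numpy as np

    emotions = ["sadness", "fear", "disgust", "neutral"]
    confusion = np.array([
        [30,  5,  5, 10],   # sadness
        [ 6, 28,  6, 10],   # fear
        [ 5,  7, 28, 10],   # disgust
        [ 3,  3,  4, 40],   # neutral
    ])

    rates = confusion / confusion.sum(axis=1, keepdims=True)  # row-normalised
    hit_rate = np.diag(rates)                                 # correct responses
    confuse_with_neutral = rates[:, emotions.index("neutral")]

    for e, hit, conf in zip(emotions, hit_rate, confuse_with_neutral):
        print(f"{e:8s}  hit rate {hit:.2f}  confused with neutral {conf:.2f}")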


2004 ◽  
Vol 32 (1) ◽  
pp. 24-33 ◽  
Author(s):  
David Wasserman

In this article I want to ask what we should do, either collectively or individually, if we could identify by genetic and family profiling the 12% of the male population likely to commit almost half the violent crime in our society. What if we could identify some individuals in that 12% not only at birth, but in utero, or before implantation? I will explain the source of these figures later; for now, I will use them only to provide a concrete example of the kind of predictive claims we can expect to be made with some frequency, and some scientific credibility, over the next generation. I will adopt an outlook that one commentator has called “pragmatic optimism,” but which could also be called technological optimism: the belief that a science or technology will achieve many or most of its advertised goals. My optimism will be directed towards human behavioral genetics, the source of predictions like the one I just offered; I will assume that this controversial discipline will achieve a substantial part of its scientific ambition to identify genetic differences among individuals that help predict and possibly explain future behavior, psychological health, and cognitive skill. This optimism is very limited: it concerns the scientific success of behavioral genetics, not the social value of that success.
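The force of the predictive claim is easy to miss. A back-of-the-envelope calculation, using only the figures quoted above, shows that a subgroup of 12% responsible for half the violent crime would offend at roughly seven times the per-capita rate of the remaining 88%:

    # Quick arithmetic check of the 12% / "almost half" claim above.
    high_risk_share, crime_share = 0.12, 0.50
    rate_high = crime_share / high_risk_share              # normalised per-capita rate
    rate_rest = (1 - crime_share) / (1 - high_risk_share)
    print(f"relative offending rate: {rate_high / rate_rest:.1f}x")  # ~7.3x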


2009 ◽  
Vol 364 (1535) ◽  
pp. 3497-3504 ◽  
Author(s):  
Ursula Hess ◽  
Reginald B. Adams ◽  
Robert E. Kleck

Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.


2013 ◽  
Vol 8 (1) ◽  
pp. 75-93 ◽  
Author(s):  
Roy P.C. Kessels ◽  
Barbara Montagne ◽  
Angelique W. Hendriks ◽  
David I. Perrett ◽  
Edward H.F. de Haan

2020 ◽  
Author(s):  
Connor Tom Keating ◽  
Sophie L Sowden ◽  
Dagmar S Fraser ◽  
Jennifer L Cook

A burgeoning literature suggests that alexithymia, and not autism, is responsible for the difficulties with static emotion recognition that are documented in the autistic population. Here we investigate whether alexithymia can also account for difficulties with dynamic facial expressions. Autistic and control adults (N = 60) matched on age, gender, non-verbal reasoning ability and alexithymia, completed an emotion recognition task, which employed dynamic point light displays of emotional facial expressions that varied in speed and spatial exaggeration. The ASD group exhibited significantly lower recognition accuracy for angry, but not happy or sad, expressions with normal speed and spatial exaggeration. The level of autistic, and not alexithymic, traits was a significant predictor of accuracy for angry expressions with normal speed and spatial exaggeration.
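A hedged sketch of the kind of trait analysis the abstract reports follows; this is not the authors' code, and the CSV file and column names such as aq_score and tas20_score are hypothetical:

    # Illustrative sketch: regress recognition accuracy for angry expressions on
    # autistic traits (e.g. AQ score) while controlling for alexithymia
    # (e.g. TAS-20 score). One row per participant.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("emotion_recognition.csv")  # hypothetical data file
    model = smf.ols("angry_accuracy ~ aq_score + tas20_score", data=df).fit()
    print(model.summary())
    # The abstract's pattern would appear as a significant aq_score coefficient
    # alongside a non-significant tas20_score coefficient.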


2018 ◽  
Vol 8 (12) ◽  
pp. 219 ◽  
Author(s):  
Mayra Gutiérrez-Muñoz ◽  
Martha Fajardo-Araujo ◽  
Erika González-Pérez ◽  
Victor Aguirre-Arzola ◽  
Silvia Solís-Ortiz

Polymorphisms of the estrogen receptor genes ESR1 and ESR2 have been linked with cognitive deficits and affective disorders. The effects of these genetic variants on emotional processing in females with low estrogen levels are not well known. The aim was to explore the impact of the ESR1 and ESR2 genes on responses to a facial emotion recognition task in females. Postmenopausal healthy female volunteers were genotyped for the XbaI and PvuII polymorphisms of ESR1 and the rs1256030 polymorphism of ESR2. The effect of these polymorphisms on the recognition of facial expressions of happiness, sadness, disgust, anger, surprise, and fear was analyzed. Females carrying the P allele of the PvuII polymorphism or the X allele of the XbaI polymorphism of ESR1 recognized facial expressions of sadness more easily than women carrying the p allele or the x allele. They displayed higher accuracy, faster response times, more correct responses, and fewer omissions in completing the task, with a large effect size. Women carrying the C allele of ESR2 showed a faster response time for recognizing facial expressions of anger. These findings link ESR1 and ESR2 polymorphisms to facial emotion recognition of negative emotions.
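A minimal sketch of a genotype-group comparison consistent with the analysis described is given below; the data file and column names are hypothetical, and the authors' exact statistics are not given in the abstract:

    # Hedged sketch: compare response times for recognising sadness between
    # PvuII P-allele carriers and p/p homozygotes, with Cohen's d as effect size.
    import numpy as np
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("esr_genotypes.csv")  # hypothetical: one row per participant
    carriers = df.loc[df["pvuii"].isin(["PP", "Pp"]), "sadness_rt"]
    noncarriers = df.loc[df["pvuii"] == "pp", "sadness_rt"]

    t, p = stats.ttest_ind(carriers, noncarriers, equal_var=False)  # Welch's t-test
    pooled_sd = np.sqrt((carriers.var(ddof=1) + noncarriers.var(ddof=1)) / 2)
    d = (carriers.mean() - noncarriers.mean()) / pooled_sd
    print(f"Welch t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")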


Leonardo ◽  
2002 ◽  
Vol 35 (4) ◽  
pp. 427-431 ◽  
Author(s):  
Phoebe Sengers

Artificial-agent technology has become commonplace in technical research from computer graphics to interface design and in popular culture through the Web and computer games. On the one hand, the population of the Web and our PCs with characters who reflect us can be seen as a humanization of a previously purely mechanical interface. On the other hand, the mechanization of subjectivity carries the danger of simply reducing the human to the machine. The author argues that predominant artificial intelligence (AI) approaches to modeling agents are based on an erasure of subjectivity analogous to that which appears when people are subjected to institutionalization. The result is agent behavior that is fragmented, depersonalized, lifeless and incomprehensible. Approaching the problem using a hybrid of critical theory and AI agent technology, the author argues that agent behavior should be narratively understandable; she presents a new agent architecture that structures behavior to be comprehensible as narrative.


2021 ◽  
Author(s):  
Evrim Gulbetekin

This investigation used three experiments to test the effect of mask use and the other-race effect (ORE) on face perception in three contexts: (a) face recognition, (b) recognition of facial expressions, and (c) social distance. The first, which involved a matching-to-sample paradigm, tested Caucasian subjects with either masked or unmasked faces using Caucasian and Asian samples. The participants exhibited the best performance when recognizing an unmasked face and the poorest when asked to recognize a masked face that they had seen earlier without a mask. Accuracy was also poorer for Asian faces than for Caucasian faces. The second experiment presented Asian or Caucasian faces having different emotional expressions, with and without masks. The results for this task, which involved identifying which emotional expression the participants had seen on the presented face, indicated that emotion recognition performance decreased for faces portrayed with masks. The emotional expressions ranged from most to least accurately recognized as follows: happy, neutral, disgusted, and fearful. Emotion recognition performance was poorer for Asian stimuli than for Caucasian stimuli. Experiment 3 used the same participants and stimuli and asked participants to indicate the social distance they would prefer to observe with each pictured person. The participants preferred a wider social distance with unmasked faces compared to masked faces. Social distance also varied by the portrayed emotion, ranging from farther to closer as follows: disgusted, fearful, neutral, and happy. Race was also a factor; participants preferred a wider social distance for Asian compared to Caucasian faces. Altogether, our findings indicated that during the COVID-19 pandemic, face perception and social distance were affected by mask use and the ORE.
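A short scoring sketch for trial data from such a design follows (hypothetical column names; not the authors' code), breaking recognition accuracy down by mask condition and stimulus race:

    # Illustrative sketch: mean matching-to-sample accuracy by mask condition
    # and stimulus race, mirroring the comparisons reported above.
    import pandas as pd

    trials = pd.read_csv("face_matching_trials.csv")  # hypothetical: one row per trial
    accuracy = (trials
                .groupby(["mask_condition", "stimulus_race"])["correct"]
                .mean()
                .unstack("stimulus_race"))
    print(accuracy)  # rows: mask conditions; columns: Asian / Caucasian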

