Children's facial expressions of pain in the context of complex social interactions

2002 ◽  
Vol 25 (04) ◽  
Author(s):  
Carl L. von Baeyer


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Andry Chowanda

Abstract Social interactions are important to us humans as social creatures, and emotions play an important part in them. Emotions usually convey meaning alongside the spoken utterances addressed to interlocutors. Automatic facial expression recognition is one technique for capturing, recognising, and understanding an interlocutor's emotions. Many techniques have been proposed to increase the accuracy of emotion recognition from facial cues, and architectures such as convolutional neural networks demonstrate promising results. However, most current convolutional neural network models require enormous computational power to train and run. This research aims to build compact networks with depthwise separable layers while maintaining performance. Three datasets and three similar architectures were used for comparison with the proposed architecture. The results show that the proposed architecture performed best among the architectures compared: it achieved up to 13% better accuracy and was 6–71% smaller than the others. The best testing accuracy achieved by the architecture was 99.4%.
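
The core idea, replacing standard convolutions with depthwise separable ones to cut parameter count while preserving accuracy, can be sketched as follows. This is a minimal illustration, not the paper's architecture: the 48×48 grayscale input, the layer widths, and the seven emotion classes are all assumptions.

```python
# A minimal sketch of a compact emotion-recognition CNN built from
# depthwise separable convolutions. Layer sizes, input shape, and class
# count are illustrative assumptions, not the paper's exact design.
from tensorflow.keras import layers, models

def build_compact_emotion_net(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # A standard convolution first, to lift the input into feature space.
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        # Depthwise separable blocks: one 3x3 depthwise filter per channel
        # followed by a 1x1 pointwise convolution, which needs far fewer
        # parameters than a full Conv2D of the same output width.
        layers.SeparableConv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.SeparableConv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_compact_emotion_net()
model.summary()  # note the small parameter count vs. standard convolutions
```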


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Chun-Ting Hsu ◽  
Wataru Sato ◽  
Sakiko Yoshikawa

Abstract Facial expression is an integral aspect of non-verbal communication of affective information. Earlier psychological studies have reported that the presentation of prerecorded photographs or videos of emotional facial expressions automatically elicits divergent responses, such as emotions and facial mimicry. However, such highly controlled experimental procedures may lack the vividness of real-life social interactions. This study incorporated a live image relay system that delivered models’ real-time performance of positive (smiling) and negative (frowning) dynamic facial expressions, or their prerecorded videos, to participants. We measured subjective ratings of valence and arousal and facial electromyography (EMG) activity in the zygomaticus major and corrugator supercilii muscles. Subjective ratings showed that live facial expressions were rated as higher in valence and more arousing than the corresponding videos in the positive emotion conditions. Facial EMG data showed that, compared with the videos, live facial expressions more effectively elicited facial muscular activity congruent with the models’ positive facial expressions. The findings indicate that emotional facial expressions in live social interactions are more evocative of emotional reactions and facial mimicry than earlier experimental data have suggested.
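
The EMG comparison can be illustrated with a minimal analysis sketch. The CSV file, column names, and trial structure below are hypothetical stand-ins, not the study's materials.

```python
# A minimal analysis sketch under assumed data: mean rectified EMG amplitude
# per participant and presentation mode, compared between live and video.
import pandas as pd
from scipy import stats

df = pd.read_csv("emg_trials.csv")  # hypothetical file: one row per sample
# Rectify the zygomaticus major signal (the smile-mimicry muscle) and
# average within each participant x presentation-mode cell.
df["zygo_rect"] = df["zygo_uV"].abs()
cell_means = (df[df["expression"] == "smile"]
              .groupby(["participant", "mode"])["zygo_rect"]
              .mean()
              .unstack())  # columns: "live", "video"

# Paired t-test: does a live smile evoke stronger congruent muscle activity?
t, p = stats.ttest_rel(cell_means["live"], cell_means["video"])
print(f"live vs. video zygomaticus activity: t = {t:.2f}, p = {p:.3f}")
```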


2014 ◽  
Vol 10 (5) ◽  
pp. 20140275 ◽  
Author(s):  
Sharlene E. Santana ◽  
Seth D. Dobson ◽  
Rui Diogo

Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate the sender's behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution.
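
The trade-off prediction can be stated in miniature: species with more expressive faces should score lower on colour-pattern complexity. The sketch below uses invented stand-in values and a plain correlation; the actual study uses phylogenetically controlled comparative methods, and no real data are reproduced here.

```python
# A toy sketch of the comparative logic only: testing for a negative
# association between facial mobility and colour-pattern complexity.
from scipy import stats

# Hypothetical per-species scores: (facial mobility, colour-pattern complexity)
species_scores = {
    "species_A": (9.0, 1.0),
    "species_B": (7.5, 2.0),
    "species_C": (4.0, 5.0),
    "species_D": (2.5, 6.5),
}
mobility = [m for m, c in species_scores.values()]
complexity = [c for m, c in species_scores.values()]

# The trade-off hypothesis predicts a negative correlation. A real analysis
# must correct for shared ancestry (e.g., phylogenetic independent contrasts).
r, p = stats.pearsonr(mobility, complexity)
print(f"r = {r:.2f}, p = {p:.3f}")
```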


2018 ◽  
Vol 15 (4) ◽  
pp. 172988141878315 ◽  
Author(s):  
Nicole Lazzeri ◽  
Daniele Mazzei ◽  
Maher Ben Moussa ◽  
Nadia Magnenat-Thalmann ◽  
Danilo De Rossi

Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to mutually judge emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. Therefore, it has been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few studies have been performed on realistic humanoid robots. This experimental work aimed to demonstrate the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied how well two kinds of stimuli related to the six basic expressions (i.e. anger, disgust, fear, happiness, sadness, and surprise) were recognized: static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli, in which motion and vocalization were combined. This hypothesis was also investigated with a three-dimensional replica of the physical robot, demonstrating that even for a virtual avatar, motion and vocalization improve the ability to convey emotion.


2019 ◽  
Vol 3 (2) ◽  
pp. 32 ◽  
Author(s):  
Troy McDaniel ◽  
Diep Tran ◽  
Abhik Chowdhury ◽  
Bijan Fakhri ◽  
Sethuraman Panchanathan

Given that most cues exchanged during a social interaction are nonverbal (e.g., facial expressions, hand gestures, body language), individuals who are blind are at a social disadvantage compared to their sighted peers. Very little work has explored sensory augmentation in the context of social assistive aids for individuals who are blind. The purpose of this study is to explore the following questions related to visual-to-vibrotactile mapping of facial action units (the building blocks of facial expressions): (1) How well can individuals who are blind recognize tactile facial action units compared to those who are sighted? (2) How well can individuals who are blind recognize emotions from tactile facial action units compared to those who are sighted? These questions are explored in a preliminary pilot test using absolute identification tasks in which participants learn and recognize vibrotactile stimulations presented through the Haptic Chair, a custom vibrotactile display embedded on the back of a chair. Study results show that individuals who are blind are able to recognize tactile facial action units as well as those who are sighted. These results hint at the potential for tactile facial action units to augment and expand access to social interactions for individuals who are blind.
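
The visual-to-vibrotactile mapping can be sketched as a lookup from facial action units to motor patterns on a back-mounted array. The 4×4 grid, motor indices, and AU patterns below are illustrative assumptions; the Haptic Chair's actual layout and encoding are not reproduced here.

```python
# A sketch of the visual-to-vibrotactile idea: each facial action unit (AU)
# maps to a spatial pattern of motors on a back-mounted vibrotactile array.
import time

# Hypothetical 4x4 motor grid, numbered 0..15 row-major from the top left.
AU_TO_MOTORS = {
    "AU6_cheek_raiser":   [8, 9, 10, 11],  # mid-row sweep
    "AU12_lip_corner_up": [12, 15],        # bottom corners (smile)
    "AU4_brow_lowerer":   [1, 2],          # upper centre (frown)
}

def play_action_unit(au: str, duration_s: float = 0.5) -> None:
    """Activate the motor pattern for one AU (hardware call is stubbed)."""
    for motor in AU_TO_MOTORS[au]:
        print(f"vibrate motor {motor}")  # stand-in for a real driver call
    time.sleep(duration_s)

# A happy expression rendered as a sequence of its component AUs.
for au in ("AU6_cheek_raiser", "AU12_lip_corner_up"):
    play_action_unit(au)
```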


Author(s):  
C. Hasson ◽  
P. Gaussier ◽  
S. Boucenna

Abstract Classical models of emotion consider either the communicational aspect of emotions (for instance, the emotions conveyed by facial expressions) or the second-order control necessary for survival purposes when the autonomy of the system is at issue. Here, we show the interdependence of the communication and meta-control aspects of emotion. We propose the idea that emotions must be understood as a dynamical system linking two controllers: one devoted to social interactions (i.e. communication aspects) and another devoted to interactions within the physical world (i.e. meta-control of a more classical controller). Illustrations are provided from applications involving navigation among different goal places according to different internal drives, and object grasping and avoidance.
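
A toy sketch of the proposed coupling is given below: one loop monitors progress in the physical world and uses the resulting emotional state for meta-control (e.g., switching strategy), while the same state is expressed socially. The state variable, update rule, and thresholds are all illustrative assumptions, not the authors' model.

```python
# A minimal sketch of emotion as a link between two controllers: a
# meta-control loop over physical-world interaction and a communication
# loop that broadcasts the same internal state as a facial expression.
class EmotionalAgent:
    def __init__(self):
        self.frustration = 0.0  # internal emotional state in [0, 1]

    def physical_step(self, progress: float) -> None:
        # Meta-control loop: poor progress toward the current goal raises
        # frustration; good progress lets it decay.
        if progress < 0.3:
            self.frustration = min(1.0, self.frustration + 0.3)
        else:
            self.frustration = max(0.0, self.frustration - 0.2)
        if self.frustration > 0.8:
            print("meta-control: frustration high, switching goal/strategy")
            self.frustration = 0.4

    def facial_expression(self) -> str:
        # Communication loop: the same internal state is shown socially.
        return "frown" if self.frustration > 0.5 else "neutral"

agent = EmotionalAgent()
for progress in (0.6, 0.2, 0.1, 0.1, 0.7):
    agent.physical_step(progress)
    print(progress, agent.facial_expression(), round(agent.frustration, 2))
```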


Author(s):  
Eliala A. Salvadori ◽  
Cristina Colonnesi ◽  
Heleen S. Vonk ◽  
Frans J. Oort ◽  
Evin Aktar

Emotional mimicry, the tendency to automatically and spontaneously reproduce others’ facial expressions, characterizes human social interactions from infancy onwards. Yet, little is known about the factors modulating its development in the first year of life. This study investigated infant emotional mimicry and its association with parent emotional mimicry, parent-infant mutual attention, and parent dispositional affective empathy. One hundred and seventeen parent-infant dyads (51 six-month-olds, 66 twelve-month-olds) were observed during video presentation of strangers’ happy, sad, angry, and fearful faces. Infant and parent emotional mimicry (i.e., facial expressions valence-congruent to the video) and their mutual attention (i.e., simultaneous gaze at one another) were systematically coded second-by-second. Parent empathy was assessed via self-report. Path models indicated that infant mimicry of happy stimuli was positively and independently associated with parent mimicry and affective empathy, while infant mimicry of sad stimuli was related to longer parent-infant mutual attention. Findings provide new insights into infants’ and parents’ coordination of mimicry and attention during triadic contexts of interactions, endorsing the social-affiliative function of mimicry already present in infancy: emotional mimicry occurs as an automatic parent-infant shared behavior and early manifestation of empathy only when strangers’ emotional displays are positive, and thus perceived as affiliative.
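
The second-by-second congruence coding can be illustrated with a small sketch: mimicry is scored as the proportion of seconds in which the infant's facial valence matches the stimulus valence. The codes and window length below are invented for illustration, not the study's coding scheme.

```python
# A toy sketch of valence-congruent mimicry scoring over coded seconds.
stimulus_valence = "positive"  # e.g., a stranger's happy-face video
infant_codes = ["neutral", "positive", "positive", "negative", "positive"]

# Mimicry = share of seconds where infant valence matches the stimulus.
congruent = sum(1 for code in infant_codes if code == stimulus_valence)
mimicry_score = congruent / len(infant_codes)
print(f"mimicry score: {mimicry_score:.2f}")  # 0.60 for this toy sequence
```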


2021 ◽  
Vol 12 (1) ◽  
pp. 8 ◽  
Author(s):  
Sara Ferracci ◽  
Felice Giuliani ◽  
Alfredo Brancucci ◽  
Davide Pietroni

Over the past fifteen years, research has demonstrated the central role of interpersonal emotions in communicating intentions, goals and desires. These emotions can be conveyed through facial expressions during specific social interactions, such as in the context of coordination between economic agents, where information inferred from them can influence certain decision-making processes. We investigated whether four facial expressions (happy, neutral, angry and disgusted) can affect decision-making in the Ultimatum Game (UG). In this economic game, one player (the proposer) makes the first move and proposes how to allocate a given amount of money in an anonymous one-shot interaction. If the other player (the responder) accepts the proposal, each player receives the allocated amount of money; if he/she rejects the offer, both players receive nothing. During the task, participants acted as the responder (Experiment 1) or the proposer (Experiment 2) while seeing the opponent’s facial expression. For the responders, the results show that the decision was mainly driven by the fairness of the offer, with a small main effect of emotion. No interaction effect was found between emotion and offer. For the proposers, the results show that participants modulated their offers on the basis of the responders’ expressed emotions. The most generous/fair offers were proposed to happy responders, less generous/fair offers to neutral responders, and the least generous/fair offers to angry and disgusted responders.
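
The game's payoff rule is simple enough to state in a few lines. The sketch below shows the one-shot structure only; the endowment and offer values are examples, and the emotion-dependent behaviour reported in the two experiments is not modelled.

```python
# A minimal sketch of the one-shot Ultimatum Game payoff rule.
def ultimatum_round(endowment: int, offer: int, accept: bool) -> tuple[int, int]:
    """Return (proposer_payoff, responder_payoff) for a one-shot round."""
    if not 0 <= offer <= endowment:
        raise ValueError("offer must be between 0 and the endowment")
    if accept:
        return endowment - offer, offer
    return 0, 0  # rejection leaves both players with nothing

# Example: a fair split is accepted; an unfair one is rejected.
print(ultimatum_round(10, 5, accept=True))   # (5, 5)
print(ultimatum_round(10, 1, accept=False))  # (0, 0)
```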


2021 ◽  
Vol 12 ◽  
Author(s):  
Wee Kiat Lau

Face masks impact social interactions because emotion recognition is difficult when the face is occluded. However, is this enough to conclude that face masks negatively impact social interactions? We investigated the impact of face masks on invariant characteristics (sex, age), trait-like characteristics (trustworthiness, attractiveness, and approachability), and emotional expressions (happiness and excitability). Participants completed an online survey and rated masked and unmasked faces. The same face remained masked or unmasked throughout the survey. Results revealed that, compared to unmasked faces, masked happy faces appeared less happy. Face masks did not negatively impact the ratings of other characteristics. Participants were better at judging the sex of masked faces. Masked faces also appeared younger, more trustworthy, more attractive, and more approachable. Therefore, face masks did not always result in unfavorable ratings. Additional post hoc modeling revealed that trustworthiness and attractiveness ratings for masked faces predicted the same trait ratings for unmasked faces, whereas approachability ratings for unmasked faces predicted the same trait ratings for masked faces. This hints that information available from masked and unmasked faces, such as from the eyes and eye region, could aid in understanding others during social interaction. Future directions were proposed to expand the research.
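
The post hoc modelling can be illustrated with a minimal regression sketch: a trait rating for unmasked faces is regressed on the rating the same face received when masked. The data file and column names are hypothetical stand-ins for the study's ratings.

```python
# A sketch of the post hoc idea: do masked-face trait ratings predict
# unmasked-face ratings of the same trait for the same faces?
import pandas as pd
import statsmodels.api as sm

ratings = pd.read_csv("face_ratings.csv")  # hypothetical: one row per face
X = sm.add_constant(ratings["trust_masked"])
model = sm.OLS(ratings["trust_unmasked"], X).fit()
print(model.summary())  # a reliable slope suggests the visible region
                        # (e.g., the eyes) carries the trait judgement
```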

