Can an android’s posture and movement discriminate against the ambiguous emotion perceived from its facial expressions?

PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0254905
Author(s):  
Satoshi Yagi ◽  
Yoshihiro Nakata ◽  
Yutaka Nakamura ◽  
Hiroshi Ishiguro

Expressing emotions through various modalities is a crucial function not only for humans but also for robots. The mapping from facial expressions to basic emotions is widely used in research on robot emotional expression. This method claims that there are specific facial muscle activation patterns for each emotional expression and that people can perceive these emotions by reading these patterns. However, recent research on human behavior reveals that some emotional expressions, such as the emotion “intense”, are difficult to judge as positive or negative from the facial expression alone. Nevertheless, it has not been investigated whether robots can also express ambiguous facial expressions with no clear valence and whether the addition of body expressions can make the facial valence clearer to humans. This paper shows that an ambiguous facial expression of an android can be perceived more clearly by viewers when body postures and movements are added. We conducted three experiments as online surveys among North American residents, with 94, 114 and 114 participants, respectively. In Experiment 1, by calculating the entropy of the valence judgments, we found that the facial expression “intense” was difficult to judge as positive or negative when participants were shown only the facial expression. In Experiments 2 and 3, using ANOVA, we confirmed that participants were better at judging the facial valence when they were shown the whole body of the android, even though the facial expression was the same as in Experiment 1. These results suggest that facial and body expressions of robots should be designed jointly to achieve better communication with humans. To achieve smoother cooperative human-robot interaction, such as education by robots, emotional expressions conveyed through a combination of the robot’s face and body are necessary to convey the robot’s intentions or desires to humans.
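To make the ambiguity measure concrete, here is a minimal sketch of the kind of entropy calculation the abstract describes, assuming binary positive/negative valence judgments; the function name `valence_entropy` and the example judgment counts are illustrative, not taken from the study.

```python
# Minimal sketch: Shannon entropy of positive/negative valence judgments as an
# ambiguity measure. Illustration only, not the authors' analysis code; the
# judgment counts in the examples below are made up.
from math import log2

def valence_entropy(n_positive: int, n_negative: int) -> float:
    """Entropy (in bits) of the positive/negative judgment distribution.

    0.0 means all raters agree (unambiguous); 1.0 means a 50/50 split
    (maximally ambiguous).
    """
    total = n_positive + n_negative
    entropy = 0.0
    for count in (n_positive, n_negative):
        if count > 0:
            p = count / total
            entropy -= p * log2(p)
    return entropy

print(valence_entropy(47, 47))  # 1.0   -> highly ambiguous (e.g., a face shown alone)
print(valence_entropy(90, 4))   # ~0.25 -> clear valence (e.g., face plus body posture)
```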

2018 ◽  
Author(s):  
Adrienne Wood ◽  
Jared Martin ◽  
Martha W. Alibali ◽  
Paula Niedenthal

Recognition of affect expressed in the face is disrupted when the body expresses an incongruent affect. Existing research has documented such interference for universally recognizable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of facial expressions accompanied by hand gestures and reported the valence of either the face or hand. Responses were slower and less accurate when the face-hand pairing was incongruent than when it was congruent. We hypothesized that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, a manipulation that disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests that the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but perceivers do not seem to rely more on gestures when sensorimotor face processing is disrupted.


2018 ◽  
Vol 11 (2) ◽  
pp. 16-33 ◽  
Author(s):  
A.V. Zhegallo

The study investigates the recognition of emotional facial expressions presented in peripheral vision, with exposure times shorter than the latent period of a saccade towards the exposed image. The study showed that recognition under peripheral viewing reproduces the typical patterns of incorrect response choices. Mutual misidentification is common among the facial expressions of fear, anger and surprise. When recognition conditions worsen, the expressions of calmness and grief join this cluster of mutually confused expressions. The expression of happiness deserves special attention: it can be mistaken for other facial expressions, but other expressions are never recognized as happiness. Individual recognition accuracy varies from 0.29 to 0.80. A sufficient condition for high recognition accuracy was recognizing the facial expressions with peripheral vision alone, without making a saccade towards the exposed face image.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Edita Fino ◽  
Michela Menegatti ◽  
Alessio Avenanti ◽  
Monica Rubini

Spontaneous emotionally congruent facial responses (ECFR) to others’ emotional expressions can occur by simply observing others’ faces (e.g., smiling) or by reading emotion-related words (e.g., to smile). The goal of the present study was to examine whether language describing political leaders’ emotions affects voters by inducing emotionally congruent facial reactions as a function of readers’ and politicians’ shared political orientation. Participants read sentences describing politicians’ emotional expressions while their facial muscle activation was measured by means of electromyography (EMG). Results showed that reading sentences describing left- and right-wing politicians “smiling” or “frowning” elicits ECFR for ingroup but not outgroup members. Remarkably, ECFR were sensitive to attitudes toward individual leaders beyond the ingroup vs. outgroup political divide. By integrating behavioral and physiological methods, we were able to consistently tap into a ‘favored political leader’ effect, capturing political attitudes towards an individual politician at a given moment in time, at multiple levels (explicit responses and automatic ECFR) and across political party membership lines. Our findings highlight the role of politicians’ verbal behavior in affecting voters’ facial expressions, with important implications for social judgment and behavioral outcomes.


2020 ◽  
Author(s):  
Tamara Van Der Zant ◽  
Jessica Reid ◽  
Catherine J. Mondloch ◽  
Nicole L. Nelson

Perceptions of traits (such as trustworthiness or dominance) are influenced by the emotion displayed on a face. For instance, the same individual is reported as more trustworthy when they look happy than when they look angry. This overextension of emotional expressions has been shown with facial expressions, but whether the phenomenon also occurs when viewing postural expressions was unknown. We sought to examine how expressive behaviour of the body influences judgements of traits and how sensitivity to this cue develops. In the context of a storybook, adults (N = 35) and children (aged 5 to 8 years; N = 60) selected one of two partners to help face a challenge. The challenges required either a trustworthy or a dominant partner. Participants chose between a partner with an emotional (happy/angry) face and neutral body or one with a neutral face and emotional body. As predicted, happy facial expressions were preferred over neutral ones when selecting a trustworthy partner, and angry postural expressions were preferred over neutral ones when selecting a dominant partner. Children’s performance was not adult-like on most tasks. The results demonstrate that emotional postural expressions can also influence judgements of others’ traits, but that the postural influence on trait judgements develops throughout childhood.


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify participants’ facial expressions (happy, sad, angry, surprised, scared, disgusted, or neutral) as neutral or emotional. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Taken together, emotional future thinking, at least for future scenarios cued by “happy” and “sad,” seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often suffer from overfitting or incomplete information owing to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the movement trends of its key parts. Video data are processed in the first stage: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model is then used to pick out the parts of the face that are most susceptible to expression changes. The combined effect of these two methods yields an image that can be called a local feature map. The video data are then fed into the MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts allows the network to better learn the changes in facial expressions brought about by subtle facial movements. By combining local features and global features, the proposed method can acquire more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the three datasets SAVEE, MMI, and edited GEMEP, respectively.
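To illustrate the local-plus-global idea described above, here is a minimal two-branch sketch in PyTorch. It is not the authors’ MC-DCN implementation: the class name `TwoBranchFER`, the simple Conv2d blocks, and all layer sizes are assumptions chosen only to show how global whole-face features and local key-part features can be fused before classification.

```python
# Minimal sketch of a two-branch "global + local" fusion network (illustration
# only, not the authors' MC-DCN code). Layer sizes and names are assumptions.
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Global branch: whole-face frames -> overall spatial features
        self.global_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Local branch: attention-selected key-part crops ("local feature map")
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion of global and local features, then expression classification
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, face_global: torch.Tensor, face_local: torch.Tensor) -> torch.Tensor:
        g = self.global_branch(face_global)   # (B, 32)
        l = self.local_branch(face_local)     # (B, 32)
        return self.classifier(torch.cat([g, l], dim=1))

# Example: a batch of 4 whole-face images and 4 key-part crops (96x96 RGB)
model = TwoBranchFER(num_classes=7)
logits = model(torch.randn(4, 3, 96, 96), torch.randn(4, 3, 96, 96))
print(logits.shape)  # torch.Size([4, 7])
```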


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

Improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition tasks under various constraints, including uneven illumination, head deflection, and varying facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces in each frame of a video sequence are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images. The face images in each frame are then aligned based on the positions of the facial feature points in the images. Second, the aligned face images are input to a residual neural network to extract the spatial features of the corresponding facial expressions. The spatial features are input to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are input to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are input to a fully connected layer to classify and recognize the facial expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences) and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance comparable to state-of-the-art methods but also improves accuracy on the AFEW dataset by more than 2%, confirming its strength for facial expression recognition in natural environments.
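A minimal sketch of the spatial → attention → temporal cascade described above is shown below. It is an illustration under stated assumptions, not the authors’ implementation: the ResNet-18 backbone from torchvision stands in for the residual network, a simple frame-wise soft attention stands in for the hybrid attention module, and all feature sizes are illustrative.

```python
# Minimal sketch of a spatial -> attention -> temporal cascade for video-based
# facial expression recognition (illustration only, not the authors' code).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CascadeFER(nn.Module):
    def __init__(self, num_classes: int = 7, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()              # per-frame spatial features (512-d)
        self.spatial = backbone
        self.attention = nn.Linear(feat_dim, 1)  # frame-wise soft attention (stand-in for hybrid attention)
        self.temporal = nn.GRU(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W) of aligned face crops
        b, t = clips.shape[:2]
        feats = self.spatial(clips.flatten(0, 1)).view(b, t, -1)   # (b, t, 512)
        weights = torch.softmax(self.attention(feats), dim=1)      # (b, t, 1)
        fused = feats * weights                                    # re-weighted ("fused") features
        _, h_n = self.temporal(fused)                              # temporal features via GRU
        return self.classifier(h_n[-1])                            # expression logits

# Example: 2 clips of 8 aligned 112x112 face frames
model = CascadeFER()
print(model(torch.randn(2, 8, 3, 112, 112)).shape)  # torch.Size([2, 7])
```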


1999 ◽  
Vol 202 (16) ◽  
pp. 2127-2138 ◽  
Author(s):  
T. Knower ◽  
R.E. Shadwick ◽  
S.L. Katz ◽  
J.B. Graham ◽  
C.S. Wardle

To learn about muscle function in two species of tuna (yellowfin Thunnus albacares and skipjack Katsuwonus pelamis), a series of electromyogram (EMG) electrodes was implanted down the length of the body in the internal red (aerobic) muscle. Additionally, a buckle force transducer was fitted around the deep caudal tendons on the same side of the peduncle as the electrodes. Recordings of muscle activity and caudal tendon forces were made while the fish swam over a range of steady, sustainable cruising speeds in a large water tunnel treadmill. In both species, the onset of red muscle activation proceeds sequentially in a rostro-caudal direction, while the offset (or deactivation) is nearly simultaneous at all sites, so that EMG burst duration decreases towards the tail. Muscle duty cycle at each location remains a constant proportion of the tailbeat period (T), independent of swimming speed, and peak force is registered in the tail tendons just as all ipsilateral muscle deactivates. Mean duty cycles in skipjack are longer than those in yellowfin. In yellowfin red muscle, there is complete segregation of contralateral activity, while in skipjack there is slight overlap. In both species, all internal red muscle on one side is active simultaneously for part of each cycle, lasting 0.18T in yellowfin and 0.11T in skipjack. (Across the distance encompassing the majority of the red muscle mass, 0.35-0.65L, where L is fork length, the duration is 0.25T in both species.) When red muscle activation patterns were compared across a variety of fish species, it became apparent that the EMG patterns grade in a progression that parallels the kinematic spectrum of swimming modes from anguilliform to thunniform. The tuna EMG pattern, underlying the thunniform swimming mode, culminates this progression, exhibiting an activation pattern at the extreme opposite end of the spectrum from the anguilliform mode.


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to say whether the face expressed a specific emotion (e.g., anger) with a Yes/No response, with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state; rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.

