dynamic facial expressions
Recently Published Documents

TOTAL DOCUMENTS: 127 (FIVE YEARS: 26)
H-INDEX: 23 (FIVE YEARS: 3)

2021 · Vol 12 · Author(s): Sylwia Hyniewska, Joanna Dąbrowska, Iwona Makowska, Kamila Jankowiak-Siuda, Krystyna Rymarczyk

Atypical emotion interpretation has been widely reported in individuals with borderline personality disorder (iBPD); however, empirical studies have yielded mixed results so far. We suggest that the discrepancies in observations of emotion interpretation by iBPD can be explained by biases related to their fear of rejection and abandonment, i.e., biases concerning the three moral emotions that signal social rejection: anger, disgust, and contempt. In this study, we hypothesized that iBPD would show a higher tendency to correctly interpret these three displays of social rejection and to attribute more negative valence to them. A total of 28 inpatient iBPD and 28 healthy controls were asked to judge static and dynamic facial expressions in terms of emotion, valence, and self-reported arousal evoked by the observed faces. Our results partially confirmed our expectations. The iBPD correctly interpreted the three unambiguous moral emotions. Contempt, a complex emotion whose facial expression is difficult to recognize, was recognized more accurately by iBPD than by healthy controls. All negative emotions were judged more negatively by iBPD than by controls, but no group difference was observed for neutral or positive expressions. Alexithymia and trait and state anxiety levels were controlled for in all analyses.


2021 · Author(s): Jianxin Wang, Craig Poskanzer, Stefano Anzellotti

Facial expressions are critical in our daily interactions. Studying how humans recognize dynamic facial expressions is an important area of research in social perception, but progress is hampered by the difficulty of creating well-controlled stimuli. Research on the perception of static faces has advanced significantly thanks to techniques that make it possible to generate synthetic face stimuli. Synthetic dynamic expressions, however, are more difficult to generate; methods that yield realistic dynamics typically rely on infrared markers applied to the face, making it expensive to create datasets that include large numbers of different expressions. In addition, the markers themselves might interfere with facial dynamics. In this paper, we contribute a new method for generating large amounts of realistic, well-controlled facial expression videos. We use a deep convolutional neural network with attention and an asymmetric loss to extract the dynamics of action units from videos, and demonstrate that this approach outperforms a baseline convolutional neural network without attention on the same stimuli. Next, we develop a pipeline that uses the action unit dynamics to render realistic synthetic videos. This pipeline makes it possible to generate large-scale, naturalistic, and controllable facial expression datasets to facilitate future research in social cognitive science.
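The abstract names the ingredients of the extraction model (attention, asymmetric loss) but not their exact form. One common reading of "asymmetric loss" in multi-label settings such as per-frame action-unit detection, where most action units are inactive in most frames, is the formulation of Ridnik et al. (2021), which focuses the negative term more aggressively than the positive one. A minimal PyTorch sketch under that assumption; the tensor shapes and hyperparameters are illustrative, not taken from the paper:

```python
import torch

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0,
                    clip=0.05, eps=1e-8):
    """Asymmetric loss for multi-label targets (illustrative sketch).

    Easy negatives are down-weighted via a larger focusing exponent
    (gamma_neg) and a probability shift (clip), which matters when most
    action units are off in most frames.
    """
    p = torch.sigmoid(logits)
    p_neg = (p - clip).clamp(min=0)  # shift: drop very easy negatives
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=eps))
    loss_neg = (1 - targets) * p_neg.pow(gamma_neg) * \
        torch.log((1 - p_neg).clamp(min=eps))
    return -(loss_pos + loss_neg).mean()

# Toy usage: 8 video frames, 12 action units, sparse activations.
logits = torch.randn(8, 12)
targets = (torch.rand(8, 12) > 0.8).float()
print(asymmetric_loss(logits, targets))
```

With gamma_pos = 0 the positive term reduces to plain binary cross-entropy, so the asymmetry comes entirely from how strongly the easy negatives are discounted.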


2021 · Vol 21 (9) · pp. 2238 · Author(s): Michael Stettler, Nick Taubert, Ramona Siebert, Silvia Spadacenta, Peter Dicke, ...

Neuroreport · 2021 · Vol Publish Ahead of Print · Author(s): Kazuma Mori, Akihiro Tanaka, Hideaki Kawabata, Hiroshi Arao

2021 · Vol 15 · Author(s): Teresa Sollfrank, Oona Kohnen, Peter Hilfiker, Lorena C. Kegel, Hennric Jokeit, ...

This study aimed to examine whether the cortical processing of emotional faces is modulated by the computerization of face stimuli ("avatars") in a group of 25 healthy participants. Participants passively viewed 128 static and dynamic facial expressions of female and male actors and their respective avatars in neutral or fearful conditions. Event-related potentials (ERPs), as well as alpha and theta event-related synchronization and desynchronization (ERD/ERS), were derived from the EEG recorded during the task. All ERP features, except for the very early N100, differed in their response to avatar and actor faces. Whereas the N170 showed differences only in the neutral avatar condition, later potentials (N300 and LPP) differed across both emotional conditions (neutral and fear) and both agents (actor and avatar). In addition, we found that avatar faces elicited significantly stronger theta and alpha oscillatory responses than actor faces. Theta frequencies in particular responded specifically to visual emotional stimulation and proved sensitive to the emotional content of the face, whereas the alpha band was modulated by all stimulus types. We conclude that computerized avatar faces affect both ERP components and ERD/ERS, evoking neural effects different from those elicited by real faces. This held even though the avatars were replicas of the human faces and contained similar characteristics in their expression.
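The abstract does not detail the analysis pipeline, but the two derived measures it names (condition-averaged ERPs and baseline-normalized ERD/ERS) follow a standard recipe. A minimal sketch using the open-source MNE-Python library; the file name, event codes, epoch window, and frequency bands are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np
import mne

# Illustrative recording and event coding; the study's data layout is unknown.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"actor/neutral": 1, "actor/fear": 2,
            "avatar/neutral": 3, "avatar/fear": 4}

epochs = mne.Epochs(raw, events, event_id, tmin=-0.5, tmax=1.0,
                    baseline=(-0.5, 0), preload=True)

# ERPs: per-condition trial averages (N100/N170/N300/LPP are read off these).
erp_avatar_fear = epochs["avatar/fear"].average()

# ERD/ERS: Morlet time-frequency power as percent change from baseline,
# so desynchronization shows up negative and synchronization positive.
freqs = np.arange(4, 13)  # theta (4-7 Hz) and alpha (8-12 Hz)
power = mne.time_frequency.tfr_morlet(epochs["avatar/fear"], freqs=freqs,
                                      n_cycles=freqs / 2.0, return_itc=False)
power.apply_baseline(baseline=(-0.5, 0), mode="percent")
```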


2021 · Vol 5 (3) · pp. 13 · Author(s): Heting Wang, Vidya Gaddy, James Ross Beveridge, Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on the seven basic emotions, this work introduces an avatar named Diana who expresses a higher level of emotional intelligence. To adapt to the user's varying affect during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants then collaborated with Diana, their subjective responses were collected and the time to completion was recorded. Three modes of Diana were compared: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert-scale questionnaire and the NASA TLX. Questionnaire results did not differ statistically across modes. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, while four mentioned uncomfortable feelings caused by the Uncanny Valley effect.


2021 · Vol 151 · pp. 107734 · Author(s): Katia M. Harlé, Alan N. Simmons, Jessica Bomyea, Andrea D. Spadoni, Charles T. Taylor

2021 · Vol 14 (4) · pp. 4-22 · Author(s): O.A. Korolkova, E.A. Lobodinskaya

In an experimental study, we explored how the natural or artificial character of an expression, and the speed of its exposure, affect the recognition of emotional facial expressions under stroboscopic presentation. In Series 1, participants identified emotions presented as sequences of frames from a video of a natural facial expression; in Series 2, participants were shown sequences of linear morph images. The exposure speed was varied. The results showed that at every exposure speed, the expressions of happiness and disgust were recognized most accurately. Longer presentation increased the accuracy of judgments of happiness, disgust, and surprise. Surprise demonstrated as a linear transformation was recognized more efficiently than frames of the natural expression of surprise, whereas happiness was perceived more accurately from video frames. The accuracy of disgust recognition did not depend on the type of images, and neither the qualitative nature of the stimuli nor the speed of their presentation affected the accuracy of sadness recognition. The categorical structure of the perception of expressions was stable across both types of images. The obtained results suggest a qualitative difference in the perception of natural and artificial images of expressions, which can be observed under extreme exposure conditions.
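The "linear morph" stimuli contrasted here interpolate evenly between a neutral face and an emotional apex, unlike the uneven dynamics of natural video. As a minimal illustration of the idea, the sketch below builds a pixel-wise cross-dissolve between two aligned face images; real morphing software also warps facial geometry, and the file names are placeholders:

```python
import numpy as np
from PIL import Image

def linear_morph(neutral_path, apex_path, n_frames=10):
    """Pixel-wise cross-dissolve from a neutral face to an emotional apex.

    A simplification of true morphing (no geometric warping), but it
    produces the evenly paced, artificial transition described above.
    """
    a = np.asarray(Image.open(neutral_path), dtype=np.float32)
    b = np.asarray(Image.open(apex_path), dtype=np.float32)
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        blend = (1.0 - t) * a + t * b  # linear interpolation per pixel
        frames.append(Image.fromarray(blend.astype(np.uint8)))
    return frames

# Placeholder file names; any two aligned, same-sized face images work.
for i, frame in enumerate(linear_morph("neutral.png", "apex_happiness.png")):
    frame.save(f"morph_{i:02d}.png")
```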


2020 · Vol 20 (11) · pp. 250 · Author(s): Tyler Roberts, Gerald Cupchik, Gloria Rebello, Jonathan S. Cant, Adrian Nestor
