4. Facial expressions in social interactions: Beyond basic emotions

Author(s):  
Susanne Kaiser ◽  
Thomas Wehrle
2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.


2021 ◽  
Vol 5 (3) ◽  
pp. 13
Author(s):  
Heting Wang ◽  
Vidya Gaddy ◽  
James Ross Beveridge ◽  
Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on the seven basic emotions, this work introduced an avatar named Diana who expresses a higher level of emotional intelligence. To adapt to users' varying affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and time to completion was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert-scale questionnaire and the NASA TLX. Questionnaire results did not differ statistically across the three modes. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural; four mentioned uncomfortable feelings caused by the Uncanny Valley effect.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Andry Chowanda

Abstract. Social interactions are important for us humans, as social creatures, and emotions play an important part in social interactions. They usually convey meaning alongside the spoken utterances to the interlocutors. Automatic facial expression recognition is one technique for automatically capturing, recognising, and understanding emotions from the interlocutor. Many techniques have been proposed to increase the accuracy of emotion recognition from facial cues. Architectures such as convolutional neural networks demonstrate promising results for emotion recognition. However, most current convolutional neural network models require enormous computational power to train and to perform emotion recognition. This research aims to build compact networks with depthwise separable layers while maintaining performance. The proposed architecture was compared against three other similar architectures on three datasets. The results show that the proposed architecture performed best among the architectures compared: it achieved up to 13% better accuracy and was 6–71% smaller and more compact than the other architectures. The best testing accuracy achieved by the architecture was 99.4%.
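As a rough illustration of why depthwise separable layers yield smaller networks (this is a generic parameter-count sketch, not the paper's actual architecture; the kernel size and channel counts below are illustrative assumptions), a standard convolution can be compared with its depthwise separable counterpart:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: every output channel mixes all input
    # channels over a k x k spatial window.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise separable convolution: one k x k filter per input
    # channel (depthwise step), then a 1 x 1 pointwise convolution
    # to mix channels.
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 kernel, 64 input channels, 128 output channels.
standard = conv_params(3, 64, 128)             # 73,728 weights
separable = separable_conv_params(3, 64, 128)  # 8,768 weights
print(standard, separable, round(standard / separable, 1))
```

For this example layer the separable variant uses roughly 8x fewer weights, which is the kind of saving that makes the compact networks described above feasible on limited hardware.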


Perception ◽  
2017 ◽  
Vol 46 (9) ◽  
pp. 1077-1089 ◽  
Author(s):  
Kathleen Kang ◽  
Laura Anthoney ◽  
Peter Mitchell

Being able to recognize facial expressions of basic emotions is of great importance to social development. However, we still know surprisingly little about children's developing ability to interpret emotions that are expressed dynamically, naturally, and subtly, even though real-life expressions appear this way in the vast majority of cases. The current research employs a new technique for capturing dynamic, subtly expressed natural emotional displays (happy, sad, angry, shocked, and disgusted). Children aged 7, 9, and 11 years (and adults) were systematically able to discriminate each emotional display from the alternatives in a five-way choice. Children were most accurate in identifying the expression of happiness and were also relatively accurate in identifying the expression of sadness; they were far less accurate than adults in identifying shocked and disgusted expressions. Children who performed well academically also tended to be the most accurate in recognizing expressions, and this relationship held independently of chronological age. Generally, the findings testify to a well-developed ability to recognize very subtle naturally occurring expressions of emotion.


2019 ◽  
Vol 10 ◽  
Author(s):  
Wataru Sato ◽  
Sylwia Hyniewska ◽  
Kazusa Minemoto ◽  
Sakiko Yoshikawa

Author(s):  
Claudia Faita ◽  
Federico Vanni ◽  
Cristian Lorenzini ◽  
Marcello Carrozzino ◽  
Camilla Tanca ◽  
...  

2018 ◽  
Vol 17 (4) ◽  
pp. 407-432 ◽  
Author(s):  
Dušan Stamenković ◽  
Miloš Tasić ◽  
Charles Forceville

In Making Comics: Storytelling Secrets of Comics, Manga and Graphic Novels (2006), Scott McCloud proposes that the use of specific drawing techniques will enable viewers to reliably deduce different degrees of intensity of the six basic emotions from facial expressions in comics. Furthermore, he suggests that an accomplished comics artist can combine the components of facial expressions conveying the basic emotions to produce complex expressions, many of which are supposedly distinct and recognizable enough to be named. This article presents an empirical investigation and assessment of the validity of these claims, based on the results obtained from three questionnaires. Each questionnaire deals with one aspect of McCloud's proposal: facial expression intensity, labelling, and compositionality. The data show that the tasks at hand were much more difficult than would have been expected on the basis of McCloud's proposal, with the intensity-matching task being the most successful of the three.


2017 ◽  
Vol 7 (2) ◽  
pp. 177-202
Author(s):  
James A. Clinton ◽  
Stephen W. Briner ◽  
Andrew M. Sherrill ◽  
Thomas Ackerman ◽  
Joseph P. Magliano

Abstract. Filmmakers must rely on cinematic devices of perspective (close-ups and point-of-view shot sequencing) to emphasize facial expressions associated with affective states. This study explored the extent to which differences in the use of these devices across two films with the same content lead to differences in the understanding of characters' affective states. Participants viewed one of two versions of the films and made affective judgments about how characters felt about one another with respect to sadness and anger. The extent to which the auditory and visual contexts were present when making the judgments was varied across four experiments. The results of the study showed that judgments about sadness differed across the two films, but only when the entire context (sound and visual input) was present. The results are discussed in the context of the role of facial expressions and context in inferring basic emotions.

