Judgements of facial expressions of emotions in context and no-context conditions

1984 ◽  
Vol 1 ◽  
pp. 29-35
Author(s):  
Michael P. O'Driscoll ◽  
Barry L. Richardson ◽  
Dianne B. Wuillemin

Thirty photographs depicting diverse emotional expressions were shown to a sample of Melanesian students who were assigned to either a face-plus-context or a face-alone condition. Significant differences between the two groups were obtained in a substantial proportion of cases on Schlosberg's Pleasant-Unpleasant and Attention-Rejection scales, and the emotional expressions were judged to be appropriate to the context. These findings support the suggestion that the presence or absence of context is an important variable in the judgement of emotional expression and lend credence to the universal process theory.

Research on perception of emotions has consistently illustrated that observers can accurately judge emotions in facial expressions (Ekman, Friesen, & Ellsworth, 1972; Izard, 1971) and that the face conveys important information about emotions being experienced (Ekman & Oster, 1979). In recent years, however, a question of interest has been the relative contributions of facial cues and contextual information to observers' overall judgements. This issue is important for theoretical and methodological reasons. From a theoretical viewpoint, unravelling the determinants of emotion perception would enhance our understanding of the processes of person perception and impression formation and would provide a framework for research on interpersonal communication. On methodological grounds, the researcher's approach to the face-versus-context issue can influence the type of research procedures used to analyse emotion perception. Specifically, much research in this field has been criticized for its use of posed emotional expressions as stimuli for observers to evaluate. Spignesi and Shor (1981) have noted that only one of approximately 25 experimental studies has utilized facial expressions occurring spontaneously in real-life situations.

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to indicate (Yes/No) whether the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.
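
A minimal sketch of how accuracy and confusions in a 5-choice recognition task of this kind could be tabulated; the emotion labels follow the abstract, while the trial data below are purely hypothetical:

```python
# Minimal sketch: tabulating accuracy and confusions in a 5-choice
# emotion recognition task (hypothetical response data).
from collections import Counter

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness"]

# Each trial is a pair (emotion shown, emotion reported); illustrative values only.
trials = [("anger", "anger"), ("anger", "disgust"),
          ("fear", "fear"), ("disgust", "anger"),
          ("happiness", "happiness"), ("sadness", "fear")]

confusions = Counter(trials)  # counts of (shown, reported) pairs

for shown in EMOTIONS:
    total = sum(n for (s, _), n in confusions.items() if s == shown)
    if total == 0:
        continue
    correct = confusions[(shown, shown)]
    print(f"{shown}: accuracy {correct / total:.2f}")
    # Off-diagonal cells reveal which emotions are confused with one another.
    for (s, reported), n in confusions.items():
        if s == shown and reported != shown:
            print(f"  confused with {reported}: {n}")
```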


2017 ◽  
Vol 17 (3-4) ◽  
pp. 218-231 ◽  
Author(s):  
Steven O. Roberts ◽  
Kerrie C. Leonard ◽  
Arnold K. Ho ◽  
Susan A. Gelman

Previous research shows that Multiracial adults are categorized as more Black than White (i.e., Black-categorization bias), especially when they have angry facial expressions. The present research examined the extent to which these categorization patterns extended to Multiracial children, with both White and Black participants. Consistent with past research, both White and Black participants categorized Multiracial children as more Black than White. Counter to what was found with Multiracial adults in previous research, emotional expressions (e.g., happy vs. angry) did not moderate how Multiracial children were categorized. Additionally, for Black participants, anti-White bias was correlated with categorizing Multiracial children as more White than Black. The developmental and cultural implications of these data are discussed, as they provide new insight into the important role that age plays in Multiracial person perception.


2009 ◽  
Vol 364 (1535) ◽  
pp. 3497-3504 ◽  
Author(s):  
Ursula Hess ◽  
Reginald B. Adams ◽  
Robert E. Kleck

Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. Here we provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that facial appearance interacts with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features used to derive personality or sex information closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.


2020 ◽  
Author(s):  
Noa Simhi ◽  
Galit Yovel

Most studies on person perception have investigated static images of faces. However, real-life person perception also involves the body and often the gait of the whole person. Whereas some studies indicate that the face dominates the representation of the whole person, others have emphasized the additional contribution of the body and gait. Here, we compared models of whole-person perception by asking whether a model that includes the body for static whole-person stimuli, and also the gait for dynamic whole-person stimuli, accounts better for the representation of the whole person than a model based on the face alone. Participants rated the distinctiveness of static or dynamic displays of different people based on either the whole person, the face, the body, or the gait. By fitting a linear regression model that predicts the representation of the whole person from the face, body, and gait, we found that the face and body contribute uniquely and independently to the representation of the static whole person, and that gait further contributes to the representation of the dynamic person. A complementary analysis examined whether these components are also valid dimensions of a whole-person representational space. This analysis further confirmed that the body in addition to the face, as well as the gait, are valid dimensions of the static and dynamic whole-person representations, respectively. These data clearly show that whole-person perception goes beyond the face and is significantly influenced by the body and gait.
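
A minimal sketch of the kind of linear model described above, predicting whole-person distinctiveness ratings from face, body, and gait ratings; the data and weights below are simulated, and the abstract does not specify the exact fitting procedure:

```python
# Minimal sketch: regressing whole-person distinctiveness on face, body,
# and gait distinctiveness ratings (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
n_identities = 40

face = rng.normal(size=n_identities)   # distinctiveness ratings per identity
body = rng.normal(size=n_identities)
gait = rng.normal(size=n_identities)

# Hypothetical "whole person" ratings built from all three cues plus noise.
whole = 0.6 * face + 0.3 * body + 0.2 * gait + rng.normal(scale=0.5, size=n_identities)

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n_identities), face, body, gait])
coefs, _, _, _ = np.linalg.lstsq(X, whole, rcond=None)

print("intercept, face, body, gait weights:", np.round(coefs, 2))
# Unique contributions could be probed by comparing this full model with
# reduced models that drop one predictor at a time.
```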


Author(s):  
Rui Zhang ◽  
Ling Guan

Despite nearly twenty years of intensive study, content-based image retrieval and annotation remain difficult problems. By and large, the essential challenge lies in the limitation of using low-level visual features to characterize the semantic information of images, commonly known as the semantic gap. To bridge this gap, various approaches have been proposed based on the incorporation of human knowledge and textual information, as well as learning techniques that exploit information from different modalities. At the same time, contextual information, which represents the relationships between real-world or conceptual entities, has shown its significance for recognition tasks, not only through everyday experience but also through scientific studies. In this chapter, the authors first review the state of the art in image annotation and retrieval. They then elaborate a general Bayesian framework that integrates content and contextual information, along with its application to both image annotation and retrieval. The contextual information is modeled as the statistical relationship between different images for retrieval and between different semantic concepts for annotation, respectively. The framework has efficient learning and classification procedures, and its effectiveness is evaluated in experimental studies that demonstrate its advantage over both content-based and context-based approaches.
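
A minimal sketch of one way content and context can be combined in a Bayesian fashion for annotation, assuming a content-based likelihood per concept and a context-derived prior; the concepts and probabilities are hypothetical, and the chapter's specific factorization is not reproduced here:

```python
# Minimal sketch: Bayesian fusion of content-based likelihoods and a
# context-derived prior over semantic concepts (hypothetical numbers).
import numpy as np

concepts = ["beach", "mountain", "city"]

# P(visual features | concept): e.g., from a classifier on low-level features.
content_likelihood = np.array([0.30, 0.10, 0.05])

# P(concept | context): e.g., from co-occurrence with already-annotated images.
context_prior = np.array([0.50, 0.20, 0.30])

# Posterior over concepts, up to normalization: P(concept | image, context).
posterior = content_likelihood * context_prior
posterior /= posterior.sum()

for c, p in zip(concepts, posterior):
    print(f"{c}: {p:.2f}")
# The image would be annotated with the highest-posterior concept(s).
```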


2021 ◽  
Vol 39 (3) ◽  
pp. 315-327
Author(s):  
Marco Brambilla ◽  
Matteo Masi ◽  
Simone Mattavelli ◽  
Marco Biella

Face processing has mainly been investigated by presenting facial expressions without any contextual information. However, in everyday interactions with others, the sight of a face is often accompanied by contextual cues that are processed either visually or through other sensory modalities. Here, we tested whether the perceived trustworthiness of a face is influenced by the auditory context in which that face is embedded. In Experiment 1, participants evaluated trustworthiness from faces that were surrounded by either threatening or non-threatening auditory contexts. Results showed that faces were judged as less trustworthy when accompanied by threatening auditory information. Experiment 2 replicated the effect in a design that disentangled the effects of threatening contexts from those of negative contexts in general. Thus, perceiving facial trustworthiness involves a cross-modal integration of the face and the level of threat posed by the surrounding context.


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 455-456
Author(s):  
Yosra Abualula ◽  
Eric Allard

The purpose of this study was to examine age differences in emotion perception as a function of emotion type and gaze direction. Older and younger adult participants were presented with facial images showing happiness, sadness, fear, anger, and disgust while having their eyes tracked. The image stimuli included a manipulation of eye gaze: half of the facial expressions had a direct eye gaze while the other half showed an averted gaze. A 2 (age) x 2 (gaze) x 5 (emotion) repeated-measures ANOVA was used to analyze emotion perception scores and fixation to the eye and mouth regions of the face. The manipulation of eye gaze yielded more age similarities than differences in emotion perception. Overall, we did not detect age differences in recognition ability. However, we found that certain emotion categories differentially impacted emotion perception. Interestingly, we observed that an averted gaze led to better performance for fear and disgust faces. Additionally, participants spent more time fixating on the eye regions of sad facial expressions. We discuss how naturalistic manipulations of various facial features could impact age-related differences (or similarities) in emotion perception.
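
A minimal sketch of the within-subject part of the 2 (age) x 2 (gaze) x 5 (emotion) analysis described above, using statsmodels' repeated-measures ANOVA on simulated data; age would enter as a between-subjects factor in the full mixed design, which AnovaRM does not handle directly:

```python
# Minimal sketch: repeated-measures ANOVA on emotion perception scores with
# gaze (2) and emotion (5) as within-subject factors (hypothetical data).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects, gazes = range(30), ["direct", "averted"]
emotions = ["happiness", "sadness", "fear", "anger", "disgust"]

# One (simulated) perception score per subject x gaze x emotion cell.
rows = [{"subject": s, "gaze": g, "emotion": e,
         "score": rng.normal(loc=0.75, scale=0.1)}
        for s in subjects for g in gazes for e in emotions]
data = pd.DataFrame(rows)

result = AnovaRM(data, depvar="score", subject="subject",
                 within=["gaze", "emotion"]).fit()
print(result)  # F tests for gaze, emotion, and their interaction
```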


2021 ◽  
Author(s):  
Jens Lange ◽  
Marc Heerdink ◽  
Gerben van Kleef

Emotional expressions play an important role in coordinating social interaction. We review research on two critical processes that underlie such coordination: (1) perceiving emotions from emotion expressions and (2) drawing inferences from perceived emotions. Broad evidence indicates that (a) observers can accurately perceive emotions from a person’s facial, bodily, vocal, verbal, and symbolic expressions, and that such emotion perception is further informed by contextual information. Moreover, (b) observers draw consequential and contextualized inferences from these perceived emotions about the expresser, the situation, and the self. Thus, emotion expressions enable coordinated action by providing information that facilitates adaptive behavioral responses. We recommend that future research investigate how people integrate information from different expressive modalities and how this affects consequential inferences.


2021 ◽  
Author(s):  
Jalil Rasgado-Toledo ◽  
Elizabeth Valles-Capetillo ◽  
Averi Giudicessi ◽  
Magda Giordano

Speakers use a variety of contextual information, such as facial emotional expressions, for the successful transmission of their message. Listeners must decipher the meaning by understanding the intention behind it (Recanati, 1986). A traditional approach to the study of communicative intention has been through speech acts (Escandell, 2006). The objective of the present study was to further our understanding of the influence of facial expressions on the recognition of communicative intention. The study sought to verify the reliability of facial expression recognition, determine whether there is an association between a facial expression and a category of speech acts, test whether words carry an intentional load independent of the facial expression presented, and test whether facial expressions can modify an utterance's communicative intention, along with the associated neural correlates, using univariate and multivariate approaches. We found that prior observation of facial expressions associated with emotions can modify the interpretation of an assertive utterance that follows the facial expression. The hemodynamic brain response to an assertive utterance was moderated by the preceding facial expression, and the emotion expressed by the preceding face could be decoded from fluctuations in the brain's hemodynamic response during the presentation of the assertive utterance. Neuroimaging data showed activation of regions involved in language, intentionality, and face recognition during reading of the utterance. Our results indicate that facial expression is a relevant contextual cue for decoding the intention of an utterance, and that this decoding engages different brain regions depending on the emotion expressed.
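
A minimal sketch of the kind of multivariate decoding mentioned above: classifying the emotion of the preceding facial expression from trial-wise hemodynamic response patterns with cross-validation. The features, labels, and classifier choice below are simulated assumptions, not the study's actual pipeline:

```python
# Minimal sketch: decoding the emotion of the preceding facial expression
# from hemodynamic response patterns (simulated data, linear SVM, 5-fold CV).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 200

X = rng.normal(size=(n_trials, n_voxels))  # trial-wise response patterns
y = rng.integers(0, 3, size=n_trials)      # three hypothetical emotion labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy

print("mean decoding accuracy:", scores.mean().round(2))
# Accuracy reliably above chance (~0.33 here) would indicate that the emotion
# conveyed by the preceding face is represented in the response patterns.
```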


2016 ◽  
Vol 33 (S1) ◽  
pp. S370-S371
Author(s):  
M. Rocha ◽  
S. Soares ◽  
S. Silva ◽  
N. Madeira ◽  
C. Silva

Introduction: Alexithymia is a multifactorial personality trait observed in several mental disorders, especially those with poor social functioning. Although it has been proposed that difficulties in interpersonal interactions in highly alexithymic individuals may stem from their reduced ability to express and recognize facial expressions, this remains controversial. Aim: In everyday life, faces displaying emotions are dynamic, yet most studies have relied on static stimuli. The aim of this study was to investigate whether individuals with high levels of alexithymia differ from a control group in the categorization of emotional faces presented dynamically. Given the highly dynamic nature of facial displays in real life, we used morphed videos in which faces changed in 1% steps from neutral to angry, disgusted, or happy, with each video lasting 35 seconds. Method: Sixty participants (27 males and 33 females) were divided into high-alexithymia (HA) and low-alexithymia (LA) groups using the Toronto Alexithymia Scale (TAS-20). Participants were instructed to watch the face change from neutral to an emotion and to press a key as soon as they could categorize the emotion expressed in the face. Results: The results revealed an interaction between alexithymia and emotion, showing that HA participants, compared to LA participants, were less accurate at categorizing angry faces. Disclosure of interest: The authors have not supplied their declaration of competing interest.

