Supplemental Material for “You’re Just Envious”: Inferring Benign and Malicious Envy From Facial Expressions and Contextual Information

Emotion ◽  
2021 ◽  
2022 ◽  
Vol 12 ◽  
Author(s):  
Marta F. Nudelman ◽  
Liana C. L. Portugal ◽  
Izabela Mocaiber ◽  
Isabel A. David ◽  
Beatriz S. Rodolpho ◽  
...  

Background: Evidence indicates that the processing of facial stimuli may be influenced by incidental factors, and these influences are particularly powerful when facial expressions are ambiguous, as with neutral faces. However, little research has investigated whether emotional contextual information presented in a preceding, unrelated experiment can carry over to modulate neutral face processing in a later experiment. Objective: The present study investigated whether an emotional text presented in a first experiment could generate negative emotion toward neutral faces in a second, unrelated experiment. Methods: Ninety-nine students (all women) were randomly assigned to read and evaluate either a negative text (negative context) or a neutral text (neutral context) in the first experiment. In the subsequent second experiment, participants performed two tasks: (1) an attentional task in which neutral faces were presented as distractors and (2) a task involving the emotional judgment of neutral faces. Results: Compared to the neutral context, participants in the negative context rated more faces as negative. No significant effect was found in the attentional task. Conclusion: Our study demonstrates that incidental emotional information available in a previous experiment can increase participants’ propensity to interpret neutral faces as more negative when emotional information is directly evaluated. The present study thus adds important evidence to the literature suggesting that, much as occurs in everyday life, our judgments and emotions can be modulated by previously encountered information in an incidental, scarcely perceived way.


2021 ◽  
Vol 39 (3) ◽  
pp. 315-327
Author(s):  
Marco Brambilla ◽  
Matteo Masi ◽  
Simone Mattavelli ◽  
Marco Biella

Face processing has mainly been investigated by presenting facial expressions without any contextual information. However, in everyday interactions with others, the sight of a face is often accompanied by contextual cues that are processed either visually or under different sensory modalities. Here, we tested whether the perceived trustworthiness of a face is influenced by the auditory context in which that face is embedded. In Experiment 1, participants evaluated trustworthiness from faces that were surrounded by either threatening or non-threatening auditory contexts. Results showed that faces were judged more untrustworthy when accompanied by threatening auditory information. Experiment 2 replicated the effect in a design that disentangled the effects of threatening contexts from negative contexts in general. Thus, perceiving facial trustworthiness involves a cross-modal integration of the face and the level of threat posed by the surrounding context.


1984 ◽  
Vol 1 ◽  
pp. 29-35
Author(s):  
Michael P. O'Driscoll ◽  
Barry L. Richardson ◽  
Dianne B. Wuillemin

Thirty photographs depicting diverse emotional expressions were shown to a sample of Melanesian students who were assigned to either a face-plus-context or a face-alone condition. Significant differences between the two groups were obtained in a substantial proportion of cases on Schlosberg's Pleasant–Unpleasant and Attention–Rejection scales, and the emotional expressions were judged to be appropriate to the context. These findings support the suggestion that the presence or absence of context is an important variable in the judgement of emotional expression and lend credence to universal process theory.

Research on the perception of emotions has consistently shown that observers can accurately judge emotions in facial expressions (Ekman, Friesen, & Ellsworth, 1972; Izard, 1971) and that the face conveys important information about the emotions being experienced (Ekman & Oster, 1979). In recent years, however, a question of interest has been the relative contributions of facial cues and contextual information to observers' overall judgements. This issue is important for both theoretical and methodological reasons. From a theoretical viewpoint, unravelling the determinants of emotion perception would enhance our understanding of the processes of person perception and impression formation and would provide a framework for research on interpersonal communication. On methodological grounds, the researcher's approach to the face-versus-context issue can influence the type of research procedures used to analyse emotion perception. Specifically, much research in this field has been criticized for its use of posed emotional expressions as stimuli for observers to evaluate. Spignesi and Shor (1981) noted that only one of approximately 25 experimental studies had utilized facial expressions occurring spontaneously in real-life situations.


2020 ◽  
Vol 11 (1) ◽  
pp. 16
Author(s):  
Arianna Palmieri ◽  
Federica Meconi ◽  
Antonino Vallesi ◽  
Mariagrazia Capizzi ◽  
Emanuele Pick ◽  
...  

Background: Spino-bulbar muscular atrophy is a rare genetic X-linked disease caused by testosterone insensitivity. An inverse correlation has been described between testosterone levels and empathic responses. The present study explored the profile of neural empathic responding in spino-bulbar muscular atrophy patients. Methods: Eighteen patients with spino-bulbar muscular atrophy and eighteen healthy male controls were enrolled in the study. Their event-related potentials were recorded during an “Empathy Task” designed to distinguish neural responses linked with the experience-sharing (early response) and mentalizing (late response) components of empathy. The task involved the presentation of contextual information (painful vs. neutral sentences) and facial expressions (painful vs. neutral). An explicit dispositional empathy-related questionnaire was also administered to all participants, who were screened with a neuropsychological test battery that revealed no potential cognitive deficits. Due to electrophysiological artefacts, data from 12 patients and 17 controls were included in the final analyses. Results: Although patients and controls did not differ in terms of dispositional, explicit empathic self-ratings, notably conservative event-related potential analyses (i.e., spatio-temporal permutation cluster analyses) showed a significantly greater experience-sharing neural response in patients than in healthy controls when both the contextual information and the facial expression were painful. Conclusion: The present study contributes to the characterization of the psychological profile of patients with spino-bulbar muscular atrophy, highlighting peculiarities in the enhanced neural responses underlying empathic reactions.


2021 ◽  
Author(s):  
Jalil Rasgado-Toledo ◽  
Elizabeth Valles-Capetillo ◽  
Averi Giudicessi ◽  
Magda Giordano

Speakers use a variety of contextual information, such as emotional facial expressions, for the successful transmission of their message. Listeners must decipher the meaning by understanding the intention behind it (Recanati, 1986). A traditional approach to the study of communicative intention has been through speech acts (Escandell, 2006). The objective of the present study is to further the understanding of the influence of facial expression on the recognition of communicative intention. The study sought to: verify the reliability of facial expression recognition, determine whether there is an association between a facial expression and a category of speech acts, test whether words carry an intentional load independent of the facial expression presented, and test whether facial expressions can modify an utterance’s communicative intention, examining the associated neural correlates using univariate and multivariate approaches. We found that prior observation of facial expressions associated with emotions can modify the interpretation of the assertive utterance that followed. The hemodynamic brain response to an assertive utterance was moderated by the preceding facial expression, and the emotion conveyed by that expression could be decoded from fluctuations in the brain’s hemodynamic response during the presentation of the utterance. Neuroimaging data showed activation of regions involved in language, intentionality, and face recognition during reading of the utterance. Our results indicate that facial expression is a relevant contextual cue for decoding the intention of an utterance, and that this decoding engages different brain regions in agreement with the emotion expressed.


2007 ◽  
Vol 60 (8) ◽  
pp. 1101-1115 ◽  
Author(s):  
Isabelle Blanchette ◽  
Anne Richards ◽  
Adele Cross

In 3 experiments, we investigate how anxiety influences interpretation of ambiguous facial expressions of emotion. Specifically, we examine whether anxiety modulates the effect of contextual cues on interpretation. Participants saw ambiguous facial expressions. Simultaneously, positive or negative contextual information appeared on the screen. Participants judged whether each expression was positive or negative. We examined the impact of verbal and visual contextual cues on participants’ judgements. We used 3 different anxiety induction procedures and measured levels of trait anxiety (Experiment 2). Results showed that high state anxiety resulted in greater use of contextual information in the interpretation of the facial expressions. Trait anxiety was associated with mood-congruent effects on interpretation, but not greater use of contextual information.


Author(s):  
Stephanie S. A. H. Blom ◽  
Henk Aarts ◽  
Gün R. Semin

Building on the notion that the processing of emotional stimuli is sensitive to context, in two experimental tasks we explored whether the detection of emotion in emotional words (task 1) and facial expressions (task 2) is facilitated by social verbal context. Three levels of contextual supporting information were compared: (1) no information, (2) the verbal expression of an emotionally matched word pronounced with a neutral intonation, and (3) the verbal expression of an emotionally matched word pronounced with an emotionally matched intonation. We found that increasing levels of supporting contextual information enhanced emotion detection for words, but not for facial expressions. We also measured activity of the corrugator and zygomaticus muscles to assess facial simulation, as the processing of emotional stimuli can be facilitated by facial simulation. While facial simulation emerged for facial expressions, the level of contextual supporting information did not qualify this effect. All in all, our findings suggest that adding emotion-relevant voice elements positively influences emotion detection.

