Faces and Sounds Becoming One: Cross-Modal Integration of Facial and Auditory Cues in Judging Trustworthiness

2021, Vol 39(3), pp. 315-327
Author(s): Marco Brambilla, Matteo Masi, Simone Mattavelli, Marco Biella

Face processing has mainly been investigated by presenting facial expressions without any contextual information. However, in everyday interactions with others, the sight of a face is often accompanied by contextual cues that are processed visually or through other sensory modalities. Here, we tested whether the perceived trustworthiness of a face is influenced by the auditory context in which that face is embedded. In Experiment 1, participants evaluated trustworthiness from faces that were surrounded by either threatening or non-threatening auditory contexts. Results showed that faces were judged as more untrustworthy when accompanied by threatening auditory information. Experiment 2 replicated the effect in a design that disentangled the effects of threatening contexts from those of negative contexts in general. Thus, perceiving facial trustworthiness involves a cross-modal integration of the face and the level of threat posed by the surrounding context.

2018
Author(s): Adrienne Wood, Jared Martin, Martha W. Alibali, Paula Niedenthal

Recognition of affect expressed in the face is disrupted when the body expresses an incongruent affect. Existing research has documented such interference for universally recognizable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of facial expressions accompanied by hand gestures and reported the valence of either the face or the hand. Responses were slower and less accurate when the face-hand pairing was incongruent than when it was congruent. We hypothesized that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, a manipulation that disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests that the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but perceivers do not seem to rely more on gestures when sensorimotor face processing is disrupted.


Author(s): Elke B. Lange, Jens Fünderich, Hartmut Grimm

We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented to them in three different modes: auditory, visual, and audio-visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio-visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness served as a control: the uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers' orofacial movements are dominated by sound production, their facial expressions can communicate the emotions composed into the music, and observers do not fall back on the audio information instead. Studies such as ours are important for understanding multisensory integration in applied settings.


1990, Vol 3(3), pp. 153-168
Author(s): Andrew W. Young, Hadyn D. Ellis, T. Krystyna Szulecka, Karel W. De Pauw

We report detailed investigations of the face processing abilities of four patients who had shown symptoms involving delusional misidentification. One (GC) was diagnosed as a Frégoli case; the other three (SL, GS, and JS) presented symptoms of intermetamorphosis. The face processing tasks examined their ability to recognize emotional facial expressions, identify familiar faces, match photographs of unfamiliar faces, and remember photographs of the faces of unfamiliar people. The Frégoli patient (GC) was impaired at identifying familiar faces and severely impaired at matching photographs of unfamiliar people wearing different disguises to undisguised views. Two of the intermetamorphosis patients (SL and GS) also showed impaired face processing abilities, but the third (JS) performed all tests at a normal level. These findings constrain conceptions of the relation between delusional misidentification, face processing impairment, and brain injury.


1984, Vol 1, pp. 29-35
Author(s): Michael P. O'Driscoll, Barry L. Richardson, Dianne B. Wuillemin

Thirty photographs depicting diverse emotional expressions were shown to a sample of Melanesian students who were assigned to either a face-plus-context or a face-alone condition. Significant differences between the two groups were obtained in a substantial proportion of cases on Schlosberg's Pleasant–Unpleasant and Attention–Rejection scales, and the emotional expressions were judged to be appropriate to the context. These findings support the suggestion that the presence or absence of context is an important variable in the judgement of emotional expression and lend credence to the universal process theory.

Research on perception of emotions has consistently illustrated that observers can accurately judge emotions in facial expressions (Ekman, Friesen, & Ellsworth, 1972; Izard, 1971) and that the face conveys important information about the emotions being experienced (Ekman & Oster, 1979). In recent years, however, a question of interest has been the relative contributions of facial cues and contextual information to observers' overall judgements. This issue is important for theoretical and methodological reasons. From a theoretical viewpoint, unravelling the determinants of emotion perception would enhance our understanding of the processes of person perception and impression formation and would provide a framework for research on interpersonal communication. On methodological grounds, the researcher's approach to the face-versus-context issue can influence the type of research procedures used to analyse emotion perception. Specifically, much research in this field has been criticized for its use of posed emotional expressions as stimuli for observers to evaluate. Spignesi and Shor (1981) noted that only one of approximately 25 experimental studies had utilized facial expressions occurring spontaneously in real-life situations.


2007, Vol 60(8), pp. 1101-1115
Author(s): Isabelle Blanchette, Anne Richards, Adele Cross

In three experiments, we investigated how anxiety influences the interpretation of ambiguous facial expressions of emotion. Specifically, we examined whether anxiety modulates the effect of contextual cues on interpretation. Participants saw ambiguous facial expressions while, simultaneously, positive or negative contextual information appeared on the screen. Participants judged whether each expression was positive or negative. We examined the impact of verbal and visual contextual cues on participants' judgements, used three different anxiety induction procedures, and measured levels of trait anxiety (Experiment 2). Results showed that high state anxiety resulted in greater use of contextual information in the interpretation of the facial expressions. Trait anxiety was associated with mood-congruent effects on interpretation, but not with greater use of contextual information.


2017, Vol 26(3), pp. 243-248
Author(s): Reginald B. Adams, Daniel N. Albohn, Kestutis Kveraga

A social-functional approach to face processing comes with a number of assumptions. First, given that humans possess limited cognitive resources, it assumes that we naturally allocate attention to processing and integrating the most adaptively relevant social cues. Second, from these cues, we make behavioral forecasts about others in order to respond in an efficient and adaptive manner. This assumption aligns with broader ecological accounts of vision that highlight a direct action-perception link, even for nonsocial vision. Third, humans are naturally predisposed to process faces in this functionally adaptive manner. This latter contention is implied by our attraction to dynamic aspects of the face, including looking behavior and facial expressions, from which we tend to overgeneralize inferences, even when forming impressions of stable traits. The functional approach helps to address how and why observers are able to integrate functionally related compound social cues in a manner that is ecologically relevant and thus adaptive.


2020
Author(s): M. Edelstein, B. Monk, V.S. Ramachandran, R. Rouw

Misophonia is a newly researched condition in which specific sounds cause an intense, aversive response in individuals, characterized by negative emotions and autonomic arousal. Although virtually any sound can become a misophonic "trigger," the most common sounds appear to be bodily sounds related to chewing and eating, as well as other repetitive sounds. An intriguing aspect of misophonia is that many misophonic individuals report being triggered more, or even only, by sounds produced by specific individuals, and less, or not at all, by sounds produced by animals (although there are always exceptions).

In general, anecdotal evidence suggests that misophonic triggers involve a combination of sound stimuli and contextual cues. The aversive stimulus is more than just a sound and can be thought of as a Gestalt of features that includes sound as a necessary component along with additional contextual information. In this study, we explore how contextual information influences misophonic responses to human chewing, as well as to sonically similar sounds produced by non-human sources. The current study revealed that the exact same sound can be perceived as much more or less aversive depending on the contextual information presented alongside the auditory information. The results of this study provide a foundation for potential cognitively based therapies.


2018
Author(s): Mariana R. Pereira, Tiago O. Paiva, Fernando Barbosa, Pedro R. Almeida, Eva C. Martins, ...

Typicality, or averageness, is one of the key features that influences face evaluation, but the role of this property in the perception of facial expressions of emotion is still not fully understood. Typical faces are usually considered more pleasant and trustworthy, and neuroimaging results suggest that typicality modulates amygdala and fusiform activation, influencing face perception. At the same time, there is evidence that arousal is a key affective feature that modulates neural reactivity to emotional expressions. It therefore remains unclear whether the neural effects of typicality depend on altered perceptions of affect from facial expressions or whether typicality and affect independently modulate face processing. The goal of this work was to dissociate the effects of typicality and affective properties, namely valence and arousal, in electrophysiological responses and self-reported ratings across several facial expressions of emotion. Two ERP components relevant for face processing, the N170 and the Vertex Positive Potential (VPP), were measured, complemented by subjective ratings of typicality, valence, and arousal, in a sample of 30 healthy young adults (21 female). The results point to a modulation of the electrophysiological responses by arousal, regardless of the typicality or valence properties of the face. These findings suggest that previous findings of neural responses to typicality may be better explained by accounting for the subjective perception of arousal in facial expressions.


2010, Vol 69(3), pp. 161-167
Author(s): Jisien Yang, Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have obtained the FIE with only one specific ratio of configural alteration, so it remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces was entirely identical or different. The paired faces to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has frequently been adopted as an index of the underlying mechanism of face processing, its emergence is not robust across all configural alterations but depends on the ratio of configural alteration.


2020
Author(s): Joshua W. Maxwell, Eric Ruthruff, Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic: the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.

