The role of stimulus-based cues and conceptual information in processing facial expressions of emotion

2020 ◽  
Author(s):  
Thomas Murray ◽  
Justin O'Brien ◽  
Noam Sagiv ◽  
Lucia Garrido

Face shape and surface texture are two important cues that aid in the perception of facial expressions of emotion. Additionally, this perception is influenced by high-level emotion concepts. Across two studies, we use representational similarity analysis to investigate the relative roles of shape, surface, and conceptual information in the perception, categorisation, and neural representation of facial expressions. In Study 1, 50 participants completed a perceptual task designed to measure the perceptual similarity of expression pairs, and a categorical task designed to measure the confusability between expression pairs when assigning emotion labels to a face. We used representational similarity analysis and constructed three models of the similarities between emotions, each based on distinct information. Two models were based on stimulus-based cues (face shapes and surface textures) and one model was based on emotion concepts. Using multiple linear regression, we found that behaviour during both tasks was related to the similarity of emotion concepts. The model based on face shapes was more strongly related to behaviour in the perceptual task than in the categorical task, and the model based on surface textures was more strongly related to behaviour in the categorical task than in the perceptual task. In Study 2, 30 participants viewed facial expressions while undergoing fMRI, allowing for the measurement of brain representational geometries of facial expressions of emotion in three core face-responsive regions (the Fusiform Face Area, Occipital Face Area, and Superior Temporal Sulcus) and a region involved in theory of mind (Medial Prefrontal Cortex). Across all four regions, the representational distances between facial expression pairs were related to the similarities of emotion concepts, but not to either of the stimulus-based cues. Together, these results highlight the important top-down influence of high-level emotion concepts both in behavioural tasks and in the neural representation of facial expressions.
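For readers unfamiliar with the analysis, the logic is: build one representational dissimilarity matrix (RDM) per information source (shape, surface texture, emotion concepts), then regress the behavioural or neural RDM onto the model RDMs. Below is a minimal sketch in Python with randomly generated placeholder matrices; the variable names, matrix sizes, and use of ordinary least squares are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial.distance import squareform
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_expressions = 6  # placeholder number of expression categories

def random_rdm(n, rng):
    """Symmetric dissimilarity matrix with a zero diagonal (placeholder data)."""
    m = rng.random((n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0)
    return m

# Model RDMs: face shape, surface texture, and emotion concepts (placeholders)
shape_rdm = random_rdm(n_expressions, rng)
surface_rdm = random_rdm(n_expressions, rng)
concept_rdm = random_rdm(n_expressions, rng)

# Behavioural RDM, e.g. pairwise confusability from the categorisation task
behaviour_rdm = random_rdm(n_expressions, rng)

# Vectorise each matrix (squareform keeps each off-diagonal pair once)
X = np.column_stack([squareform(shape_rdm),
                     squareform(surface_rdm),
                     squareform(concept_rdm)])
y = squareform(behaviour_rdm)

# Multiple linear regression: how strongly each model relates to behaviour
reg = LinearRegression().fit(X, y)
print("betas (shape, surface, concept):", reg.coef_)
```

In practice the dissimilarities might be rank-transformed or the fit cross-validated; the sketch only shows the basic regression step.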

2016 ◽  
Author(s):  
Heeyoung Choo ◽  
Jack Nasar ◽  
Bardia Nikrahei ◽  
Dirk B. Walther

Abstract. Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people’s visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than of entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate level, in our case, architectural styles of buildings. This study shows for the first time how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.
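The clustering step mentioned above can be illustrated as follows: the classifier's confusion (error) pattern is converted into a symmetric dissimilarity between styles, which is then clustered hierarchically. A minimal sketch, assuming a made-up confusion matrix and average-linkage clustering; neither choice is taken from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
styles = ["style A", "style B", "style C", "style D"]  # placeholder labels

# Simulated confusion matrix: rows = true style, columns = predicted style
confusion = rng.random((len(styles), len(styles)))
confusion /= confusion.sum(axis=1, keepdims=True)

# Styles that are confused more often are treated as more similar
similarity = (confusion + confusion.T) / 2
dissimilarity = 1 - similarity
np.fill_diagonal(dissimilarity, 0)

# Agglomerative clustering on the condensed distance vector
Z = linkage(squareform(dissimilarity), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # e.g. split styles into two clusters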


2014 ◽  
Vol 26 (3) ◽  
pp. 490-500 ◽  
Author(s):  
Yaara Erez ◽  
Galit Yovel

Target objects required for goal-directed behavior are typically embedded among multiple irrelevant objects that may interfere with their encoding. Most neuroimaging studies of high-level visual cortex have examined the representation of isolated objects, so little is known about how surrounding objects influence the neural representation of target objects. To investigate the effect of different types of clutter on the distributed responses to target objects in high-level visual areas, we used fMRI and manipulated the type of clutter. Specifically, target objects (i.e., a face and a house) were presented in isolation, in the presence of homogeneous clutter (identical objects from another category; a “pop-out” display), or in the presence of heterogeneous clutter (different objects), while participants performed a target identification task. Using multivoxel pattern analysis (MVPA), we found that in the posterior fusiform object area, heterogeneous but not homogeneous clutter interfered with decoding of the target objects. Furthermore, multivoxel patterns evoked by isolated objects were more similar to multivoxel patterns evoked by homogeneous than by heterogeneous clutter in the lateral occipital and posterior fusiform object areas. Interestingly, there was no effect of clutter on the neural representation of the target objects in their category-selective areas, such as the fusiform face area and the parahippocampal place area. Our findings show that the variation among irrelevant surrounding objects influences the neural representation of target objects in the object-general area, but not in object category-selective cortex, where the representation of target objects is invariant to their surroundings.
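As a rough illustration of what "decoding of the target objects" involves, the sketch below trains a linear classifier to separate face-target from house-target voxel patterns within one region and one clutter condition, estimating accuracy by cross-validation. The data, dimensions, and choice of a linear SVM are placeholder assumptions, not the authors' pipeline; in the study, decoding accuracy would be compared across the isolated, homogeneous, and heterogeneous clutter conditions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 200  # placeholder dimensions for one ROI and one condition

# Simulated voxel patterns; labels: 0 = face target, 1 = house target
patterns = rng.normal(size=(n_trials, n_voxels))
labels = np.repeat([0, 1], n_trials // 2)

# Linear classifier with 5-fold cross-validation; chance accuracy is 0.5
clf = SVC(kernel="linear")
scores = cross_val_score(clf, patterns, labels, cv=5)
print("mean decoding accuracy:", scores.mean())
```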


1984 ◽  
Vol 59 (1) ◽  
pp. 147-150 ◽  
Author(s):  
Gilles Kirouac ◽  
François Y. Doré

The purpose of this experiment was to study the accuracy of judgments of facial expressions of emotion displayed for very brief exposure times. Twenty university students were shown facial stimuli presented for durations ranging from 10 to 50 msec. The data showed that accuracy of judgment reached a fairly high level even at very brief exposure times and that human observers are especially competent at processing very rapid changes in facial appearance.


Author(s):  
Maddalena Boccia ◽  
Valentina Sulpizio ◽  
Federica Bencivenga ◽  
Cecilia Guariglia ◽  
Gaspare Galati

Abstract. It is commonly acknowledged that visual imagery and perception rely on the same content-dependent brain areas in the high-level visual cortex (HVC). However, the way in which our brain processes and organizes previously acquired knowledge to allow the generation of mental images is still a matter of debate. Here, we performed a representational similarity analysis of three previous fMRI experiments conducted in our laboratory to characterize the neural representations underlying imagery and perception of objects, buildings and faces, and to disclose possible dissimilarities in the neural structure of these representations. To this aim, we built representational dissimilarity matrices (RDMs) by computing multivariate distances between the activity patterns associated with each pair of stimuli in the content-dependent areas of the HVC and HC. We found that spatial information is widely coded in the HVC during perception (i.e. in RSC, PPA and OPA) and imagery (OPA and PPA). Also, visual information seems to be coded in both preferred and non-preferred regions of the HVC, supporting a distributed view of encoding. Overall, the present results shed light on the spatial coding of imagined and perceived exemplars in the HVC.
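The RDMs described here are, in essence, matrices of pairwise multivariate distances between the activity patterns evoked by each stimulus within a region. A minimal sketch, assuming simulated patterns, a correlation-distance metric, and a Spearman comparison between imagery and perception RDMs; these choices are illustrative, not necessarily those of the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stimuli, n_voxels = 12, 300  # placeholder: stimuli x voxels in one ROI

# One activity pattern (e.g. a beta estimate per voxel) per stimulus
perception_patterns = rng.normal(size=(n_stimuli, n_voxels))
imagery_patterns = rng.normal(size=(n_stimuli, n_voxels))

# RDM = pairwise correlation distance between stimulus patterns (condensed form)
perception_rdm = pdist(perception_patterns, metric="correlation")
imagery_rdm = pdist(imagery_patterns, metric="correlation")

# Compare the two representational geometries
rho, p = spearmanr(perception_rdm, imagery_rdm)
print("imagery-perception RDM similarity:", rho)

# squareform() recovers the full symmetric matrix if needed
print(squareform(perception_rdm).shape)  # (n_stimuli, n_stimuli)
```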


2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how temperamental dimensions of negative emotionality and extraversion were associated with the perception accuracy and perceived intensity of three basic emotions and how the trait-level temperamental effect interacted with state-level self-reported mood in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy of angry and sad facial expressions, and higher levels of perceived intensity of anger. For perceived intensity of sadness, negative mood was associated with lower levels of perceived intensity, whereas negative emotionality was associated with higher levels of perceived intensity of sadness. Overall, our findings added to the limited literature on adult temperament and emotion perception.


2006 ◽  
Author(s):  
Mark E. Hastings ◽  
June P. Tangney ◽  
Jeffrey Stuewig

2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic: the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free: identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.

