A Flexible Neural Representation of Faces in the Human Brain

2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Runnan Cao ◽  
Xin Li ◽  
Alexander Todorov ◽  
Shuo Wang

Abstract An important question in human face perception research is whether the neural representation of faces is dynamically modulated by context. In particular, although there is a plethora of neuroimaging literature that has probed the neural representation of faces, few studies have investigated what low-level structural and textural facial features parametrically drive neural responses to faces and whether the representation of these features is modulated by the task. To answer these questions, we employed two task instructions when participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these brain regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions that encoded low-level variation in the faces (shape and skin texture) that drove neural responses. We further analyzed the evolution of the neural feature vectors along the visual processing stream and visualized and explained these feature vectors. Together, our results showed a flexible neural representation of faces for both low-level features and high-level social traits in the human brain.
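The parametric-encoding analysis described above can be illustrated with a minimal sketch: regress a region's response against a parametrically varied stimulus property (here, simulated trustworthiness ratings). All data, the effect size, and the noise level are invented for the demo and are not the study's data.

```python
# Toy parametric encoding analysis: does a region's response vary
# linearly with a rated social trait? All numbers are simulated.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_faces = 60

# Hypothetical trustworthiness ratings (z-scored) for each face.
trust = rng.normal(size=n_faces)

# Simulated region response: linear in the trait plus noise.
response = 0.8 * trust + rng.normal(scale=0.5, size=n_faces)

# A region "parametrically encodes" the trait if the slope is reliable.
fit = linregress(trust, response)
print(round(fit.slope, 2), fit.pvalue < 0.05)
```

In a real analysis the same regression (or a GLM parametric modulator) would be fit per voxel or per region, with the trait ratings as the parametric regressor.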

Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Katherine Storrs ◽  
Adrian Lloyd Williams ◽  
Carolyn McGettigan ◽  
...  

Abstract Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (deep neural network, trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
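The core computation of representational similarity analysis can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from activity patterns, build a second RDM from a candidate model, and rank-correlate the two. The data, shapes, and feature space below are simulated stand-ins, not the study's stimuli or recordings.

```python
# Minimal representational similarity analysis (RSA) sketch on toy data:
# rows are conditions (face identities), columns are voxels.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_identities, n_voxels = 8, 50

# Simulated activity patterns for one face-selective region.
patterns = rng.normal(size=(n_identities, n_voxels))

# Neural RDM (condensed form): correlation distance between
# every pair of identity patterns.
neural_rdm = pdist(patterns, metric="correlation")

# A candidate model RDM, e.g. pairwise distances in some feature space.
model_features = rng.normal(size=(n_identities, 4))
model_rdm = pdist(model_features, metric="euclidean")

# RSA compares the two RDMs with a rank correlation.
rho, p = spearmanr(neural_rdm, model_rdm)
```

In the study's setting, different model RDMs (image-computable vs. human-rated) would each be compared against the neural RDM of a region such as FFA or OFA.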


2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.


2021 ◽  
Author(s):  
Ning Mei ◽  
Roberto Santana ◽  
David Soto

Abstract Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner: trial numbers were low, and signal-detection-theoretic constraints did not allow conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly-sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involving prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational model simulation of visual information processing/representation in the absence of perceptual sensitivity by using feedforward convolutional neural networks trained to perform a similar visual task to the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.


2013 ◽  
Vol 31 (2) ◽  
pp. 197-209 ◽  
Author(s):  
BEVIL R. CONWAY

Abstract Explanations for color phenomena are often sought in the retina, lateral geniculate nucleus, and V1, yet it is becoming increasingly clear that a complete account will take us further along the visual-processing pathway. Working out which areas are involved is not trivial. Responses to S-cone activation are often assumed to indicate that an area or neuron is involved in color perception. However, work tracing S-cone signals into extrastriate cortex has challenged this assumption: S-cone responses have been found in brain regions, such as the middle temporal (MT) motion area, not thought to play a major role in color perception. Here, we review the processing of S-cone signals across cortex and present original data on S-cone responses measured with fMRI in alert macaque, focusing on one area in which S-cone signals seem likely to contribute to color (V4/posterior inferior temporal cortex) and on one area in which S signals are unlikely to play a role in color (MT). We advance a hypothesis that the S-cone signals in color-computing areas are required to achieve a balanced neural representation of perceptual color space, whereas those in noncolor areas provide a cue to illumination (not luminance) and confer sensitivity to the chromatic contrast generated by natural daylight (shadows, illuminated by ambient sky, surrounded by direct sunlight). This sensitivity would facilitate the extraction of shape-from-shadow signals to benefit global scene analysis and motion perception.


2010 ◽  
Vol 22 (6) ◽  
pp. 1235-1243 ◽  
Author(s):  
Marieke L. Schölvinck ◽  
Geraint Rees

Motion-induced blindness (MIB) is a visual phenomenon in which highly salient visual targets spontaneously disappear from visual awareness (and subsequently reappear) when superimposed on a moving background of distracters. Such fluctuations in awareness of the targets, although they remain physically present, provide an ideal paradigm to study the neural correlates of visual awareness. Existing behavioral data on MIB are consistent both with a role for structures early in visual processing and with involvement of high-level visual processes. To further investigate this issue, we used high-field functional MRI to investigate signals in human low-level visual cortex and motion-sensitive area V5/MT while participants reported disappearance and reappearance of an MIB target. Surprisingly, perceptual invisibility of the target was coupled to an increase in activity in low-level visual cortex plus area V5/MT compared with when the target was visible. This increase was largest in retinotopic regions representing the target location. One possibility is that our findings result from an active process of completion of the field of distracters that acts locally in the visual cortex, coupled to a more global process that facilitates invisibility in general visual cortex. Our findings show that the earliest anatomical stages of human visual cortical processing are implicated in MIB, as with other forms of bistable perception.


2016 ◽  
Author(s):  
Heeyoung Choo ◽  
Jack Nasar ◽  
Bardia Nikrahei ◽  
Dirk B. Walther

Abstract Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people’s visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate level, in our case, architectural styles of buildings. This study shows for the first time how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.
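Hierarchical clustering of error patterns, as used above, treats decoder confusions as similarities: styles the decoder confuses more often sit closer together in the resulting tree. The confusion matrix below is made up for the demo; only the clustering recipe is the point.

```python
# Illustrative hierarchical clustering of a decoder's confusion pattern.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetric confusion rates between four styles:
# higher values = styles confused more often by the decoder.
confusion = np.array([
    [0.0, 0.6, 0.1, 0.1],
    [0.6, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.0, 0.5],
    [0.1, 0.1, 0.5, 0.0],
])

# Convert confusability to a distance (more confusion = more similar).
distance = 1.0 - confusion
np.fill_diagonal(distance, 0.0)

# Average-linkage clustering on the condensed distance matrix.
Z = linkage(squareform(distance), method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")  # styles 0-1 and 2-3 group together
```

Comparing the trees produced from different regions' confusion matrices is one way to ask which region carries the finer-grained, subordinate-level distinctions.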


2019 ◽  
Author(s):  
Ali Pournaghdali ◽  
Bennett L Schwartz

Studies utilizing continuous flash suppression (CFS) provide valuable information regarding conscious and nonconscious perception. There are, however, crucial unanswered questions regarding the mechanisms of suppression and the level of visual processing in the absence of consciousness with CFS. Research suggests that the answers to these questions depend on the experimental configuration and how we assess consciousness in these studies. The aim of this review is to evaluate the impact of different experimental configurations and the assessment of consciousness on the results of previous CFS studies. We review studies that evaluated the influence of different experimental configurations on the depth of suppression with CFS and discuss how different assessments of consciousness may impact the results of CFS studies. Finally, we review behavioral and brain recording studies of CFS. In conclusion, previous studies provide evidence for survival of low-level visual information and complete impairment of high-level visual information under the influence of CFS. That is, studies suggest that nonconscious perception of lower-level visual information happens with CFS but there is no evidence for nonconscious high-level recognition with CFS.


2019 ◽  
Author(s):  
Mengyuan Gong ◽  
Taosheng Liu

Abstract Selective attention is a core cognitive function for efficient processing of information. Although it is well known that attention can modulate neural responses in many brain areas, the computational principles underlying attentional modulation remain unclear. Contrary to the prevailing view of a high-dimensional, distributed neural representation, here we show a surprisingly simple, biased neural representation for feature-based attention in a large dataset including five human fMRI studies. We found that when participants selected one feature from a compound stimulus, voxels in many cortical areas responded consistently higher to one attended feature over the other. This univariate bias was robust at the level of single brain areas and consistent across brain areas within individual subjects. Importantly, this univariate bias showed a progressively stronger magnitude along the cortical hierarchy. In frontoparietal areas, the bias was strongest and contributed largely to pattern-based decoding, whereas early visual areas lacked such a bias. These findings suggest a gradual transition from a more analog to a more abstract representation of attentional priority along the cortical hierarchy. Biased neural responses in high-level areas likely reflect a low-dimensional neural code that facilitates robust representation and simple read-out of cognitive variables.
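The "univariate bias" measure above reduces to a simple quantity: the mean response difference of a region's voxels between the two attention conditions, together with the fraction of voxels preferring the same feature. The sketch below uses invented numbers purely to make that quantity concrete.

```python
# Toy univariate-bias computation: a region's voxels respond, on average,
# more strongly when one feature is attended. All values are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 40

# Each voxel carries a small built-in preference for attended feature A.
preference = 0.3

# Trial-averaged responses per voxel under the two attention conditions.
attend_a = rng.normal(loc=1.0 + preference, scale=0.1, size=n_voxels)
attend_b = rng.normal(loc=1.0, scale=0.1, size=n_voxels)

# Univariate bias: mean difference, plus the consistency of its sign.
bias = (attend_a - attend_b).mean()
frac_prefer_a = np.mean(attend_a > attend_b)
```

A bias that is consistently signed across voxels (high `frac_prefer_a`) is what distinguishes this low-dimensional code from a high-dimensional pattern in which preferences cancel out on average.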

