Distinct mechanisms of face representation by enhancive and suppressive neurons of the inferior temporal cortex

2020 · Vol 124 (4) · pp. 1216-1228
Author(s): Sina Salehi, Mohammad Reza A. Dehaqani, Behrad Noudoost, Hossein Esteky

Electrophysiological and imaging studies have suggested that face information is encoded by a network of clusters of enhancive face-selective neurons in the visual cortex of man and monkey. We show that nearly half of face-selective neurons are suppressed by face stimulation. The suppressive neurons form spatial clusters and convey more face identity information than the enhancive face neurons. Our results suggest the presence of two neuronal subsystems for coarse and fine face information processing.
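The enhancive/suppressive distinction above can be illustrated by classifying neurons by the sign of their face-evoked rate change relative to baseline. This is a minimal sketch with simulated firing rates; all numbers are illustrative and not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated firing rates (Hz): rows are neurons, columns are face-stimulus trials.
baseline = rng.uniform(5, 15, size=20)                      # per-neuron spontaneous rate
face_resp = baseline[:, None] + rng.normal(0, 1, (20, 50))  # trial-to-trial noise
face_resp[:10] += 8    # enhancive neurons: rate rises for faces
face_resp[10:] -= 4    # suppressive neurons: rate falls for faces

# Classify each neuron by the sign of its mean face-evoked change from baseline.
delta = face_resp.mean(axis=1) - baseline
labels = np.where(delta > 0, "enhancive", "suppressive")
print(labels)
```

In practice such a criterion would be applied with a statistical test against baseline rather than a raw sign, but the sketch captures the two response classes the abstract contrasts.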

2018
Author(s): Gaby Pfeifer, Jamie Ward, Natasha Sigala

Abstract: The sensory recruitment model envisages visual working memory (VWM) as an emergent property that is encoded and maintained in sensory (visual) regions. The model implies that enhanced sensory-perceptual functions, as in synaesthesia, entail a dedicated VWM system, showing reduced visual cortex activity as a result of neural specificity. By contrast, sensory-perceptual decline, as in old age, is expected to show enhanced visual cortex activity as a result of neural broadening. To test this model, young grapheme-colour synaesthetes, older adults, and young controls engaged in a delayed pair-associative retrieval task and a delayed matching-to-sample task, consisting of achromatic fractal stimuli that do not induce synaesthesia. While a previous analysis of this dataset (Pfeifer et al., 2016) focused on cued retrieval and recognition of pair-associates (i.e., long-term memory), the current study focuses on visual working memory and considers, for the first time, the crucial delay period in which no visual stimuli are present but working memory processes are engaged. Participants were trained to criterion and demonstrated comparable behavioural performance on the VWM tasks. Whole-brain and region-of-interest analyses revealed significantly lower activity in synaesthetes’ middle frontal gyrus and visual regions (cuneus, inferior temporal cortex), respectively, suggesting greater neural efficiency relative to young and older adults in both tasks. The results support the sensory recruitment model and can explain age-related and individual WM differences based on neural specificity in visual cortex.


2018
Author(s): Géza Gergely Ambrus, Daniel Kaiser, Radoslaw Martin Cichy, Gyula Kovács

Abstract: In real-life situations, the appearance of a person’s face can vary substantially across different encounters, making face recognition a challenging task for the visual system. Recent fMRI decoding studies have suggested that face recognition is supported by identity representations located in regions of the occipito-temporal cortex. Here, we used EEG to elucidate the temporal emergence of these representations. Human participants (both sexes) viewed a set of highly variable face images of four highly familiar celebrities (two male, two female) while performing an orthogonal task. Univariate analyses of event-related EEG responses revealed a pronounced differentiation between male and female faces, but not between identities of the same sex. Using multivariate representational similarity analysis, we observed a gradual emergence of face identity representations with an increasing degree of invariance. Face identity information emerged rapidly, starting shortly after 100 ms from stimulus onset. From 400 ms after onset, and predominantly in the right hemisphere, identity representations showed two invariance properties: (1) they discriminated identities of opposite sexes and of the same sex equally well, and (2) they were tolerant to image-based variations. These invariant representations may be a crucial prerequisite for successful face recognition in everyday situations, where the appearance of a familiar person can vary drastically.

Significance Statement: Recognizing the face of a friend on the street is a task we effortlessly perform in our everyday lives. However, the visual processing underlying familiar face recognition is highly complex. As the appearance of a given person varies drastically between encounters, for example across viewpoints or emotional expressions, the brain needs to extract identity information that is invariant to such changes. Using multivariate analyses of EEG data, we characterize how invariant representations of face identity emerge gradually over time. After 400 ms of processing, cortical representations reliably differentiated two similar identities (e.g., two famous male actors), even across a set of highly variable images. These representations may support face recognition under challenging real-life conditions.
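The representational similarity analysis used here rests on representational dissimilarity matrices (RDMs) computed from multi-channel response patterns. A minimal sketch with simulated EEG patterns (channel counts and effect sizes are invented for illustration) shows how same-sex versus opposite-sex identity dissimilarities would be compared:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated patterns: 4 identities (2 male, 2 female) x 64 channels at one
# post-stimulus time point. Identities of the same sex share a sex-specific
# component; an invariant identity code would separate all identities equally.
sex_code = np.repeat(rng.normal(0, 1, (2, 64)), 2, axis=0)  # rows: m, m, f, f
id_code = rng.normal(0, 1, (4, 64))                          # identity-specific part
patterns = sex_code + 0.5 * id_code

# RDM entry (i, j) = 1 - Pearson correlation between patterns i and j.
z = patterns - patterns.mean(axis=1, keepdims=True)
z /= np.linalg.norm(z, axis=1, keepdims=True)
rdm = 1 - z @ z.T

same_sex = (rdm[0, 1] + rdm[2, 3]) / 2          # pairs within a sex
diff_sex = rdm[np.ix_([0, 1], [2, 3])].mean()   # male vs female pairs
print(f"same-sex dissimilarity {same_sex:.2f}, opposite-sex {diff_sex:.2f}")
```

In these simulated "early" patterns the shared sex component makes same-sex identities harder to tell apart than opposite-sex ones; the invariant representations the abstract reports from 400 ms onward would correspond to the two dissimilarities converging.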


2018
Author(s): Jiayu Zhan, Oliver G. B. Garrod, Nicola van Rijsbergen, Philippe Schyns

Current theories of cognition are cast in terms of information processing mechanisms that use mental representations. For example, consider the mechanisms of face identification that use mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Providing an explanation of these information processing mechanisms thus relies on showing how the actual information contents of these representations are used. Yet, these representational contents are rarely characterized, which in turn hinders knowledge of mechanisms. Here, we address this pervasive gap by characterizing the detailed contents of mental representations of familiar faces using a new methodological approach. We used a unique generative model of face identity information combined with perceptual judgments and reverse correlation to model the 3D representational contents of 4 familiar faces in 14 participants. We then demonstrated the validity of these contents using everyday perceptual tasks that generalize face identity and resemblance judgments to new viewpoints, age and sex with a new group of participants. Our work highlights that such models of mental representations are critical to understanding generalization behavior and to characterizing the information processing mechanisms that must use them.
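The reverse-correlation logic of this approach can be sketched in a few lines: random stimuli drawn from a generative model are sorted by an observer's judgments, and the mean accepted-minus-rejected stimulus recovers the observer's internal template. This toy version uses abstract feature vectors and a simulated observer with a known template, rather than the authors' 3D face model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "mental template" over 50 face-shape components (in a real
# experiment this is unknown and is what we want to estimate).
template = rng.normal(0, 1, 50)

# Reverse correlation: sample random faces from the generative model and record
# binary "resembles the familiar face" judgments from a simulated observer.
stimuli = rng.normal(0, 1, (5000, 50))
judgments = stimuli @ template > 0

# Classification image: mean of accepted minus mean of rejected stimuli
# approximates the template (up to scale).
estimate = stimuli[judgments].mean(axis=0) - stimuli[~judgments].mean(axis=0)
r = np.corrcoef(estimate, template)[0, 1]
print(f"correlation with true template: {r:.2f}")
```

With enough trials the classification image converges on the template direction, which is why perceptual judgments over generated faces can reveal representational contents.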


2019
Author(s): Kathryn O'Nell, Rebecca Saxe, Stefano Anzellotti

According to the dominant account of face processing, recognition of emotional expressions is implemented by the superior temporal sulcus (STS), while recognition of face identity is implemented by inferior temporal cortex (IT) (Haxby et al., 2000). However, recent patient and imaging studies (Fox et al., 2011; Anzellotti et al., 2017) found that the STS also encodes information about identity. Jointly representing expression and identity might be computationally advantageous: learning to recognize expressions could lead to the emergence of representations that support identity recognition. To test this hypothesis, we trained a deep densely connected convolutional network (DenseNet; Huang et al., 2017) to classify face images from the fer2013 dataset as angry, disgusted, afraid, happy, sad, surprised, or neutral. We then froze the weights of the DenseNet and trained linear layers attached to progressively deeper layers of this net to classify either emotion or identity using a subset of the Karolinska (KDEF) dataset. Finally, we tested emotion and identity classification on left-out images in the KDEF dataset that were not used for training.

Classification accuracy for emotions in the KDEF dataset increased from early to late layers of the DenseNet, indicating successful transfer across datasets. Critically, classification accuracy for identity also increased from early to late layers of this DenseNet, despite the fact that it had not been trained to classify identity. A linear layer trained on the DenseNet features vastly outperformed a linear layer trained on pixels (98.8% vs. 68.7%), demonstrating that the high accuracy obtained with the DenseNet features cannot be explained by low-level confounds. These results show that learning to recognize facial expressions can lead to the spontaneous emergence of representations that support the recognition of identity, thus offering a principled computational account for the discovery of expression and identity representations within the same portion of STS.
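The linear-probe methodology used here (freeze a trained network, then attach linear readouts for different tasks to its features) can be sketched without the actual DenseNet or datasets. In this toy version a fixed random nonlinear map stands in for the frozen network, and synthetic data carry two latent factors analogous to expression and identity; all sizes and signal strengths are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# A fixed random nonlinear map stands in for the frozen, pre-trained network:
# its weights are never updated during probing.
W = rng.normal(0, 1, (100, 64))
def frozen_features(x):
    return np.maximum(x @ W, 0)   # ReLU features

# Synthetic "images": 100 pixels carrying two latent factors, analogous to
# expression (7 classes) and identity (4 classes).
n = 400
expression = rng.integers(0, 7, n)
identity = rng.integers(0, 4, n)
x = rng.normal(0, 1, (n, 100))
x[:, :7] += 3 * np.eye(7)[expression]     # expression-related pixels
x[:, 7:11] += 3 * np.eye(4)[identity]     # identity-related pixels

feats = frozen_features(x)

def probe_accuracy(features, labels):
    # Least-squares linear readout on one-hot targets; train on the first
    # half of the data, evaluate on the held-out second half.
    half = n // 2
    F = np.hstack([features, np.ones((n, 1))])     # add a bias column
    Y = np.eye(labels.max() + 1)[labels]
    beta, *_ = np.linalg.lstsq(F[:half], Y[:half], rcond=None)
    pred = (F[half:] @ beta).argmax(axis=1)
    return (pred == labels[half:]).mean()

# The same frozen features support linear readouts for both factors.
print("expression probe accuracy:", probe_accuracy(feats, expression))
print("identity probe accuracy:", probe_accuracy(feats, identity))
```

Both probes succeed well above chance (1/7 and 1/4, respectively) because the frozen features happen to preserve both factors, mirroring the paper's finding that features learned for one face task can support the other.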


2018
Author(s): Simona Monaco, Ying Chen, Nicholas Menghi, J. Douglas Crawford

Abstract: Sensorimotor integration involves feedforward and reentrant processing of sensory input. Grasp-related motor activity precedes and is thought to influence visual object processing. Yet, while the importance of reentrant feedback is well established in perception, the top-down modulations for action and the neural circuits involved in this process have received less attention. Do action-specific intentions influence the processing of visual information in the human cortex? Using a cue-separation fMRI paradigm, we found that action-specific instruction (manual alignment vs. grasp) influences the cortical processing of object orientation several seconds after the object had been viewed. This influence occurred as early as in the primary visual cortex and extended to ventral and dorsal visual stream areas. Importantly, this modulation was unrelated to non-specific action planning. Further, the primary visual cortex showed stronger functional connectivity with frontal-parietal areas and the inferior temporal cortex during the delay following orientation processing for align than for grasp movements, strengthening the idea of reentrant feedback from dorsal visual stream areas involved in action. To our knowledge, this is the first demonstration that intended manual actions have such an early, pervasive, and differential influence on the cortical processing of vision.


2019
Author(s): Thomas P. O’Connell, Marvin M. Chun, Gabriel Kreiman

Abstract: Decoding information from neural responses in visual cortex demonstrates interpolation across repetitions or exemplars. Is it possible to decode novel categories from neural activity without any prior training on activity from those categories? We built zero-shot neural decoders by mapping responses from macaque inferior temporal cortex onto a deep neural network. The resulting models correctly interpreted responses to novel categories, even extrapolating from a single category.
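A zero-shot decoder of this kind can be sketched as a linear map from neural responses into a feature space, fit on a subset of categories and evaluated on held-out ones by nearest centroid. The sketch below uses simulated responses generated by an unknown linear encoding; all dimensions and noise levels are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: each category has a centroid in a "DNN feature" space;
# simulated IT population responses are an unknown linear transform of those
# features plus trial-to-trial noise.
n_feat, n_neurons, n_cat = 12, 50, 20
centroids = rng.normal(0, 1, (n_cat, n_feat))
A = rng.normal(0, 1, (n_feat, n_neurons))       # unknown neural encoding

def neural_response(cat, n_trials):
    return centroids[cat] @ A + rng.normal(0, 1, (n_trials, n_neurons))

# Fit a linear neural-to-feature map using the first 18 categories only.
X = np.vstack([neural_response(c, 10) for c in range(n_cat - 2)])
Y = np.repeat(centroids[:n_cat - 2], 10, axis=0)
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Zero-shot test: single trials of the two held-out categories, never seen
# while fitting M, are decoded by nearest centroid in predicted feature space.
hits, total = 0, 0
for c in (n_cat - 2, n_cat - 1):
    pred = neural_response(c, 25) @ M
    dists = np.linalg.norm(pred[:, None] - centroids[None], axis=2)
    hits += int((dists.argmin(axis=1) == c).sum())
    total += 25
print(f"zero-shot accuracy: {hits / total:.2f}")
```

The decoder generalizes because the neural-to-feature map is learned once and applies to any category whose feature-space centroid is known, which is the essence of zero-shot neural decoding.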

