Adjudicating between face-coding models with individual-face fMRI responses

2015 ◽  
Author(s):  
Johan D. Carlin ◽  
Nikolaus Kriegeskorte

Abstract
The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging.

Author Summary
Humans recognize conspecifics by their faces. Understanding how faces are recognized is an open computational problem with relevance to theories of perception, social cognition, and the engineering of computer vision systems. Here we measured brain activity with functional MRI while human participants viewed individual faces. We developed multiple computational models inspired by known response preferences of single neurons in the primate visual cortex. We then compared these neuronal models to patterns of brain activity corresponding to individual faces. The data were consistent with a model where neurons respond to directions in a high-dimensional space of faces. It also proved essential to model how functional MRI voxels locally average the responses of tens of thousands of neurons. The study highlights the challenges in adjudicating between alternative computational theories of visual information processing.
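The face-space coding scheme and the voxel-level population-averaging step can be sketched as follows. The dimensionalities, tuning parameters, and random weights below are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# 24 faces as points in a toy 4-D face space: identity is carried by
# the direction of each vector, distinctiveness by its length.
n_faces, n_dims = 24, 4
faces = rng.standard_normal((n_faces, n_dims))

# Sigmoidal ramp-tuned model neurons: each prefers a random direction
# in face space and responds sigmoidally to a face's projection onto it.
n_neurons = 1000
pref = rng.standard_normal((n_neurons, n_dims))
pref /= np.linalg.norm(pref, axis=1, keepdims=True)
neural = 1.0 / (1.0 + np.exp(-(faces @ pref.T)))    # (24, 1000)

# Measurement-level population averaging: each fMRI voxel reflects a
# weighted local average over many differently tuned neurons.
n_voxels = 50
weights = rng.random((n_neurons, n_voxels))
voxels = neural @ weights / n_neurons               # (24, 50)

# Representational distance matrix over the 24 faces, the summary
# the study compares between models and data.
diff = voxels[:, None, :] - voxels[None, :, :]
rdm = np.linalg.norm(diff, axis=-1)                 # (24, 24)
print(rdm.shape)
```

The key point of the sketch is the middle step: model comparison operates on voxel-level distances, which can differ substantially from distances computed on the raw neuronal responses.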

2019 ◽  
Author(s):  
Zarrar Shehzad ◽  
Eunjoo Byeon ◽  
Gregory McCarthy

Abstract
We are highly accurate at recognizing familiar faces even with large variation in visual presentation due to pose, lighting, hairstyle, etc. The neural basis of such within-person face variation has been largely unexplored. Building on prior behavioral work, we hypothesized that learning a person’s average face helps link the different instances of that person’s face into a coherent identity in face-selective regions within ventral occipitotemporal cortex (VOTC). To test this hypothesis, we measured brain activity using fMRI for eight well-known celebrities with 18 naturalistic photos per identity. Each photo was mapped into a face-space using a neural network in which the Euclidean distance between photos corresponded to face similarity. We confirmed in a behavioral study that photos closer to a person’s average face in the face-space were judged to look more like that person. fMRI results revealed hemispheric differences in identity processing. The right fusiform face area (FFA) encoded face-likeness, with the brain signal increasing the closer a photo was to the average of all faces. This suggests that the right FFA pattern-matches to an average face template. In contrast, the left FFA and left anterior fusiform gyrus (aFus) encoded person-likeness: the brain signal increased the further a photo was from the person’s average face, weighted by the features most relevant for face identification. This suggests that the left FFA and aFus process an identity error signal. Our results encourage a new consideration of the left fusiform in face processing, specifically for within-person processing of face identity.
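The two distance-based predictors can be sketched as below. The random 128-D embeddings stand in for the neural-network face-space, and the feature weighting the study applied to the person-likeness measure is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: 8 identities x 18 photos, each embedded as a 128-D
# vector (random here; a face-recognition network in the study).
n_ids, n_photos, n_dims = 8, 18, 128
embeddings = rng.standard_normal((n_ids, n_photos, n_dims))

# Face-likeness (right-FFA predictor): proximity of each photo to the
# grand average of all faces; larger (less negative) = more face-like.
grand_avg = embeddings.reshape(-1, n_dims).mean(axis=0)
face_likeness = -np.linalg.norm(embeddings - grand_avg, axis=-1)

# Person-likeness error (left-FFA/aFus predictor): distance of each
# photo from its own identity's average face.
id_avgs = embeddings.mean(axis=1, keepdims=True)     # (8, 1, 128)
identity_error = np.linalg.norm(embeddings - id_avgs, axis=-1)

print(face_likeness.shape, identity_error.shape)     # one value per photo
```

Each predictor yields one number per photo, which is the form needed to regress against per-photo fMRI responses.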


1999 ◽  
Vol 11 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Edmund T. Rolls ◽  
Martin J. Tovée ◽  
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. Investigating the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have previously shown that the mask interrupts the neuronal response. Under conditions when humans can just identify the stimulus, with stimulus onset asynchronies (SOAs) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
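The information measure here is the mutual information, in bits, between the stimulus shown and the neuron's response, estimated from a stimulus-by-response table of trial counts. The counts below are invented toy numbers (not the recorded data) chosen to mimic how masking attenuates the selective part of the firing:

```python
import numpy as np

def mutual_information_bits(joint_counts):
    """I(S;R) in bits from a stimulus-by-response table of trial counts."""
    p = joint_counts / joint_counts.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginal p(s)
    pr = p.sum(axis=0, keepdims=True)   # response marginal p(r)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Unmasked: a selective neuron, rows (stimuli) have distinct
# response (column) distributions.
selective = np.array([[ 2,  8, 30],
                      [25, 14,  1],
                      [24, 15,  1]])

# Short-SOA mask: selectivity attenuated, rows become similar,
# so the stimulus-response information drops.
masked = np.array([[10, 15, 15],
                   [14, 14, 12],
                   [13, 14, 13]])

print(mutual_information_bits(selective) > mutual_information_bits(masked))
```

This makes the abstract's point concrete: the information can fall faster than the overall firing rate, because it depends on how well the response distributions discriminate the stimuli.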


2004 ◽  
Vol 16 (9) ◽  
pp. 1669-1679 ◽  
Author(s):  
Emily D. Grossman ◽  
Randolph Blake ◽  
Chai-Youn Kim

Individuals improve with practice on a variety of perceptual tasks, presumably reflecting plasticity in underlying neural mechanisms. We trained observers to discriminate biological motion from scrambled (nonbiological) motion and examined whether the resulting improvement in perceptual performance was accompanied by changes in activation within the posterior superior temporal sulcus and the fusiform “face area,” brain areas involved in perception of biological events. With daily practice, initially naive observers became more proficient at discriminating biological from scrambled animations embedded in an array of dynamic “noise” dots, with the extent of improvement varying among observers. Learning generalized to animations never seen before, indicating that observers had not simply memorized specific exemplars. In the same observers, neural activity prior to and following training was measured using functional magnetic resonance imaging. Neural activity within the posterior superior temporal sulcus and the fusiform “face area” reflected the participants' learning: BOLD signals were significantly larger after training in response both to animations experienced during training and to novel animations. The degree of learning was positively correlated with the amplitude changes in BOLD signals.


2016 ◽  
Vol 371 (1705) ◽  
pp. 20160278 ◽  
Author(s):  
Nikolaus Kriegeskorte ◽  
Jörn Diedrichsen

High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans. A major challenge is to leverage such data to gain insight into the brain's computational mechanisms. The first step is to define candidate brain-computational models (BCMs) that can perform the behavioural task in question. We would then like to infer which of the candidate BCMs best accounts for measured brain-activity data. Here we describe a method that complements each BCM with a measurement model (MM), which simulates the way the brain-activity measurements reflect neuronal activity (e.g. local averaging in functional magnetic resonance imaging (fMRI) voxels or sparse sampling in array recordings). The resulting generative model (BCM-MM) produces simulated measurements. To avoid having to fit the MM to predict each individual measurement channel of the brain-activity data, we compare the measured and predicted data at the level of summary statistics. We describe a particular implementation of this approach, called probabilistic representational similarity analysis (pRSA) with MMs, which uses representational dissimilarity matrices (RDMs) as the summary statistics. We validate this method by simulations of fMRI measurements (locally averaging voxels) based on a deep convolutional neural network for visual object recognition. Results indicate that the way the measurements sample the activity patterns strongly affects the apparent representational dissimilarities. However, modelling of the measurement process can account for these effects, and different BCMs remain distinguishable even under substantial noise. The pRSA method enables us to perform Bayesian inference on the set of BCMs and to recognize the data-generating model in each case. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’.
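A minimal sketch of the BCM-MM pipeline follows, assuming toy pattern sizes, random local-averaging voxels, and a squared-Euclidean RDM in place of the paper's deep network and fitted measurement parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# A BCM predicts neuronal activity patterns for a set of stimuli
# (random patterns here stand in for model-predicted activity).
n_stimuli, n_neurons, n_voxels = 12, 2000, 100
neural_patterns = rng.standard_normal((n_stimuli, n_neurons))

def measurement_model(patterns, n_voxels, width=50, rng=rng):
    """MM: each simulated fMRI voxel averages a local neighbourhood
    of neurons around a random centre."""
    n_neurons = patterns.shape[1]
    centers = rng.integers(0, n_neurons, n_voxels)
    return np.stack([patterns[:, max(0, c - width):c + width].mean(axis=1)
                     for c in centers], axis=1)

def rdm(patterns):
    """Squared-Euclidean RDM: the summary statistic compared by pRSA."""
    sq = (patterns ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * patterns @ patterns.T

neural_rdm = rdm(neural_patterns)
voxel_rdm = rdm(measurement_model(neural_patterns, n_voxels))

# How strongly the measurement process reshapes the apparent
# dissimilarities can be read off the RDM-to-RDM correlation.
r = np.corrcoef(neural_rdm.ravel(), voxel_rdm.ravel())[0, 1]
print(voxel_rdm.shape)
```

Comparing models at the RDM level, rather than per voxel, is what spares the method from fitting the MM to every individual measurement channel.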


NeuroImage ◽  
2008 ◽  
Vol 40 (1) ◽  
pp. 197-212 ◽  
Author(s):  
Jae-Min Lee ◽  
Jing Hu ◽  
Jianbo Gao ◽  
Bruce Crosson ◽  
Kyung K. Peck ◽  
...  

2018 ◽  
Vol 120 (5) ◽  
pp. 2311-2324 ◽  
Author(s):  
Andrey R. Nikolaev ◽  
Radha Nila Meghanathan ◽  
Cees van Leeuwen

In free viewing, the eyes return to previously visited locations rather frequently, even though the attentional and memory-related processes controlling eye movements show a strong antirefixation bias. To overcome this bias, a special refixation triggering mechanism may have to be recruited. We probed the neural evidence for such a mechanism by combining eye tracking with EEG recording. A distinctive signal associated with refixation planning was observed in the EEG during the presaccadic interval: the presaccadic potential was reduced in amplitude before a refixation compared with ordinary fixations. The result offers direct evidence for a special refixation mechanism that operates in the saccade planning stage of eye movement control. Once the eyes have landed on the revisited location, acquisition of visual information proceeds indistinguishably from ordinary fixations. NEW & NOTEWORTHY A substantial proportion of eye fixations in human natural viewing behavior are revisits of recently visited locations, i.e., refixations. Our recently developed methods enabled us to study refixations in a free viewing visual search task, using combined eye movement and EEG recording. We identified in the EEG a distinctive refixation-related signal, signifying a control mechanism specific to refixations as opposed to ordinary eye fixations.


2010 ◽  
Vol 104 (1) ◽  
pp. 336-345 ◽  
Author(s):  
Alison Harris ◽  
Geoffrey Karl Aguirre

Although the right fusiform face area (FFA) is often linked to holistic processing, new data suggest this region also encodes part-based face representations. We examined this question by assessing the metric of neural similarity for faces using a continuous carryover functional MRI (fMRI) design. Using faces varying along dimensions of eye and mouth identity, we tested whether these axes are coded independently by separate part-tuned neural populations or conjointly by a single population of holistically tuned neurons. Consistent with prior results, we found a subadditive adaptation response in the right FFA, as predicted for holistic processing. However, when holistic processing was disrupted by misaligning the halves of the face, the right FFA continued to show significant adaptation, but in an additive pattern indicative of part-based neural tuning. Thus, this region seems to contain neural populations capable of representing both individual parts and their integration into a face gestalt. A third experiment, which varied the asymmetry of changes in the eye and mouth identity dimensions, also showed part-based tuning in the right FFA. In contrast to the right FFA, the left FFA consistently showed a part-based pattern of neural tuning across all experiments. Together, these data support the existence of both part-based and holistic neural tuning within the right FFA, further suggesting that such tuning is surprisingly flexible and dynamic.
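The additivity logic behind the adaptation analysis can be illustrated with hypothetical recovery values (not the study's data): under part-based coding, adaptation recovery when both parts change should equal the sum of the single-part recoveries; under holistic coding it falls short of that sum.

```python
# Hypothetical recovery-from-adaptation values for one-part changes
# (chosen as exact binary fractions for illustration only).
eye_only = 0.25     # recovery when only eye identity changes
mouth_only = 0.125  # recovery when only mouth identity changes

# Part-based prediction: independent populations, effects add.
part_based_both = eye_only + mouth_only

# Holistic prediction: a single conjointly tuned population yields
# a subadditive response, i.e. less than the sum of the parts.
holistic_both = 0.25

print(part_based_both, holistic_both)
```

Comparing the observed both-parts-changed response against these two predictions is what distinguishes the tuning schemes in the carryover design.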


2021 ◽  
pp. 1-55
Author(s):  
Amit Naskar ◽  
Anirudh Vattikonda ◽  
Gustavo Deco ◽  
Dipanjan Roy ◽  
Arpan Banerjee

Abstract
Previous computational models have related spontaneous resting-state brain activity to local excitatory-inhibitory (E-I) balance in neuronal populations. However, how the underlying neurotransmitter kinetics associated with E-I balance govern resting-state spontaneous brain dynamics remains unknown. Understanding the mechanisms by which fluctuations in neurotransmitter concentrations, a hallmark of a variety of clinical conditions, relate to functional brain activity is of critical importance. We propose a multi-scale dynamic mean field model (MDMF): a system of coupled differential equations capturing the synaptic gating dynamics in excitatory and inhibitory neural populations as a function of neurotransmitter kinetics. Individual brain regions are modelled as populations of MDMF units and are connected by realistic connection topologies estimated from diffusion tensor imaging data. First, the MDMF model successfully predicts resting-state functional connectivity. Second, our results show that an optimal range of glutamate and GABA neurotransmitter concentrations serves as the dynamic working point of the brain, that is, the state of heightened metastability observed in empirical blood-oxygen-level-dependent signals. Third, as predictive validation, network measures of segregation (modularity and clustering coefficient) and integration (global efficiency and characteristic path length) from existing healthy and pathological brain network studies could be captured by the functional connectivity simulated from the MDMF model.
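The general shape of such a simulation can be sketched as below. This is a generic Wong-Wang-style mean-field reduction with toy parameters and a 3-node connectivity matrix, not the MDMF model's full glutamate/GABA kinetics or the DTI-estimated connectome:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(C, G=0.8, dt=1e-3, steps=5000, tau=0.1, gamma=0.641, sigma=0.01):
    """Euler-integrate a synaptic gating variable S per region,
    coupled through structural connectivity C, with additive noise."""
    n = C.shape[0]
    S = np.full(n, 0.1)                   # synaptic gating per region
    traj = np.empty((steps, n))
    for t in range(steps):
        x = 0.3 + G * (C @ S)             # baseline + network input
        r = 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.1))  # population rate (a.u.)
        dS = -S / tau + (1.0 - S) * gamma * r
        S = np.clip(S + dt * dS + sigma * np.sqrt(dt) * rng.standard_normal(n),
                    0.0, 1.0)             # keep gating in [0, 1]
        traj[t] = S
    return traj

# Toy symmetric 3-region structural connectivity (zero diagonal).
C = np.array([[0.0, 0.6, 0.2],
              [0.6, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
traj = simulate(C)

# Simulated functional connectivity: correlations between regional
# time courses, the model output compared against empirical FC.
fc = np.corrcoef(traj.T)
print(fc.shape)
```

Network measures such as modularity or global efficiency would then be computed on this simulated FC matrix and compared with their empirical counterparts.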

