Evidence for face selectivity in early vision

2020 ◽  
Author(s):  
Florence Campana ◽  
Jacob G. Martin ◽  
Levan Bokeria ◽  
Simon Thorpe ◽  
Xiong Jiang ◽  
...  

Abstract
The commonly accepted “simple-to-complex” model of visual processing in the brain posits that visual tasks on complex objects such as faces are based on representations in high-level visual areas. Yet, recent experimental data showing the visual system’s ability to localize faces in natural images within 100 ms (Crouzet et al., 2010) challenge the prevalent hierarchical description of the visual system, and instead suggest the hypothesis of face-selectivity in early visual areas. In the present study, we tested this hypothesis with human participants in two eye tracking experiments, an fMRI experiment and an EEG experiment. We found converging evidence for neural representations selective for upright faces in V1/V2, with latencies starting around 40 ms post-stimulus onset. Our findings suggest a revision of the standard “simple-to-complex” model of hierarchical visual processing.

Significance statement
Visual processing in the brain is classically described as a series of stages with increasingly complex object representations: early visual areas encode simple visual features (such as oriented bars), and high-level visual areas encode representations for complex objects (such as faces). In the present study, we provide behavioral, fMRI, and EEG evidence for representations of complex objects – namely faces – in early visual areas. Our results challenge the standard “simple-to-complex” model of visual processing, suggesting that it needs to be revised to include neural representations for faces at the lowest levels of the visual hierarchy. Such early object representations would permit the rapid and precise localization of complex objects, as has previously been reported for the object class of faces.

2007 ◽  
Vol 98 (1) ◽  
pp. 382-393 ◽  
Author(s):  
Thomas J. McKeeff ◽  
David A. Remus ◽  
Frank Tong

Behavioral studies have shown that object recognition becomes severely impaired at fast presentation rates, indicating a limitation in temporal processing capacity. Here, we studied whether this behavioral limit in object recognition reflects limitations in the temporal processing capacity of early visual areas tuned to basic features or high-level areas tuned to complex objects. We used functional MRI (fMRI) to measure the temporal processing capacity of multiple areas along the ventral visual pathway progressing from the primary visual cortex (V1) to high-level object-selective regions, specifically the fusiform face area (FFA) and parahippocampal place area (PPA). Subjects viewed successive images of faces or houses at presentation rates varying from 2.3 to 37.5 items/s while performing an object discrimination task. Measures of the temporal frequency response profile of each visual area revealed a systematic decline in peak tuning across the visual hierarchy. Areas V1–V3 showed peak activity at rapid presentation rates of 18–25 items/s, area V4v peaked at intermediate rates (9 items/s), and the FFA and PPA peaked at the slowest temporal rates (4–5 items/s). Our results reveal a progressive loss in the temporal processing capacity of the human visual system as information is transferred from early visual areas to higher areas. These data suggest that temporal limitations in object recognition likely result from the limited processing capacity of high-level object-selective areas rather than that of earlier stages of visual processing.
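The peak-tuning measure described above can be illustrated with a short sketch: for each visual area, take the mean response at each presentation rate and find the rate that maximizes it. The ROI names, response values, and interpolation choice below are illustrative assumptions, not the authors' data or pipeline.

```python
# Minimal sketch of a peak-tuning analysis: locate the presentation rate that
# maximizes each ROI's mean response. All values are made up for illustration.
import numpy as np

rates = np.array([2.3, 4.7, 9.4, 18.7, 37.5])  # items/s (log-spaced, as in the study design)

# Hypothetical response amplitudes (percent signal change) per ROI.
roi_responses = {
    "V1":  np.array([0.4, 0.7, 1.1, 1.3, 1.0]),
    "V4v": np.array([0.5, 0.9, 1.2, 0.8, 0.4]),
    "FFA": np.array([1.0, 1.1, 0.7, 0.4, 0.2]),
}

for roi, amp in roi_responses.items():
    # Interpolate on a log-rate axis to estimate the peak between sampled rates.
    log_rates = np.log(rates)
    fine = np.linspace(log_rates[0], log_rates[-1], 500)
    fit = np.interp(fine, log_rates, amp)
    peak_rate = np.exp(fine[np.argmax(fit)])
    print(f"{roi}: peak tuning ~ {peak_rate:.1f} items/s")
```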


2020 ◽  
Vol 30 (5) ◽  
pp. 2721-2739 ◽  
Author(s):  
Jackson C Liang ◽  
Jonathan Erez ◽  
Felicia Zhang ◽  
Rhodri Cusack ◽  
Morgan D Barense

Abstract Certain transformations must occur within the brain to allow rapid processing of familiar experiences. Complex objects are thought to become unitized, whereby multifeature conjunctions are retrieved as rapidly as a single feature. Behavioral studies strongly support unitization theory, but a compelling neural mechanism is lacking. Here, we examined how unitization transforms conjunctive representations to become more “feature-like” by recruiting posterior regions of the ventral visual stream (VVS) whose architecture is specialized for processing single features. We used functional magnetic resonance imaging to scan humans before and after visual training with novel objects. We implemented a novel multivoxel pattern analysis to measure a conjunctive code, which represented a conjunction of object features above and beyond the sum of the parts. Importantly, a multivoxel searchlight showed that the strength of conjunctive coding in posterior VVS increased posttraining. Furthermore, multidimensional scaling revealed representational separation at the level of individual features in parallel to the changes at the level of feature conjunctions. Finally, functional connectivity between anterior and posterior VVS was higher for novel objects than for trained objects, consistent with early involvement of anterior VVS in unitizing feature conjunctions in response to novelty. These data demonstrate that the brain implements unitization as a mechanism to refine complex object representations over the course of multiple learning experiences.
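One way to operationalize a conjunctive code "above and beyond the sum of the parts" is to fit the conjunction pattern as a linear combination of the single-feature patterns and test whether the residual is reliable across independent runs. The sketch below illustrates that logic on simulated data; it is an assumption-laden stand-in, not the paper's exact multivoxel analysis.

```python
# Sketch: conjunctive coding index = cross-run reliability of what remains of
# the conjunction pattern after regressing out the single-feature patterns.
# All patterns here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_vox = 200

def conjunctive_strength(pat_A, pat_B, pat_AB_run1, pat_AB_run2):
    X = np.column_stack([pat_A, pat_B])
    # Least-squares fit of each run's conjunction pattern from the parts.
    res1 = pat_AB_run1 - X @ np.linalg.lstsq(X, pat_AB_run1, rcond=None)[0]
    res2 = pat_AB_run2 - X @ np.linalg.lstsq(X, pat_AB_run2, rcond=None)[0]
    # Reliable (cross-run) residual structure indicates coding of the
    # conjunction that a pure sum-of-features model cannot explain.
    return np.corrcoef(res1, res2)[0, 1]

pat_A = rng.normal(size=n_vox)
pat_B = rng.normal(size=n_vox)
conj = rng.normal(size=n_vox)             # a genuinely conjunctive component
pat_AB_run1 = pat_A + pat_B + conj + rng.normal(scale=0.5, size=n_vox)
pat_AB_run2 = pat_A + pat_B + conj + rng.normal(scale=0.5, size=n_vox)

idx = conjunctive_strength(pat_A, pat_B, pat_AB_run1, pat_AB_run2)
print(f"conjunctive coding index: {idx:.2f}")
```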


2020 ◽  
Author(s):  
Yaoda Xu ◽  
Maryam Vaziri-Pashkam

Abstract
Any given visual object input is characterized by multiple visual features, such as identity, position and size. Despite the usefulness of identity and nonidentity features in vision and their joint coding throughout the primate ventral visual processing pathway, they have so far been studied relatively independently. Here we document the relative coding strength of object identity and nonidentity features in a brain region and how this may change across the human ventral visual pathway. We examined a total of four nonidentity features, including two Euclidean features (position and size) and two non-Euclidean features (image statistics and the spatial frequency content of an image). Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with identity outweighing the non-Euclidean features, but not the Euclidean ones, at higher levels of visual processing. A similar analysis was performed in 14 convolutional neural networks (CNNs) pretrained to perform object categorization, varying in architecture, depth, and the presence or absence of recurrent processing. While the relative coding strength of object identity and nonidentity features in lower CNN layers matched well with that in early human visual areas, the match between higher CNN layers and higher human visual regions was limited. Similar results were obtained regardless of whether a CNN was trained with real-world or stylized object images that emphasized shape representation. Together, by measuring the relative coding strength of object identity and nonidentity features, our approach provided a new tool to characterize feature coding in the human brain and the correspondence between the brain and CNNs.

Significance statement
This study documented the relative coding strength of object identity compared to four types of nonidentity features along the human ventral visual processing pathway and compared brain responses with those of 14 CNNs pretrained to perform object categorization. Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with the coding strength of the different nonidentity features differing at higher levels of visual processing. While feature coding in lower CNN layers matched well with that of early human visual areas, the match between higher CNN layers and higher human visual regions was limited. Our approach provided a new tool to characterize feature coding in the human brain and the correspondence between the brain and CNNs.
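The core measurement here is a comparison of cross-validated decoding of identity versus a nonidentity feature from the same response patterns. Below is a minimal sketch on simulated data, using a linear classifier as one plausible choice; the paper's actual decoder, stimuli, and features differ.

```python
# Sketch: decode object identity and a nonidentity feature (here, position)
# from the same response patterns (a brain ROI or a CNN layer), then compare
# the two cross-validated accuracies. All data below are simulated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_units = 160, 100
identity = rng.integers(0, 4, n_trials)   # 4 object identities
position = rng.integers(0, 2, n_trials)   # 2 positions

# Simulated responses carrying both signals, with identity weighted more.
X = (1.0 * np.eye(4)[identity] @ rng.normal(size=(4, n_units))
     + 0.5 * np.eye(2)[position] @ rng.normal(size=(2, n_units))
     + rng.normal(size=(n_trials, n_units)))

clf = LogisticRegression(max_iter=1000)
acc_id = cross_val_score(clf, X, identity, cv=5).mean()
acc_pos = cross_val_score(clf, X, position, cv=5).mean()
print(f"identity decoding: {acc_id:.2f} (chance 0.25), "
      f"position decoding: {acc_pos:.2f} (chance 0.50)")
```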


2019 ◽  
Author(s):  
Koen V. Haak ◽  
Christian F. Beckmann

Abstract
Whether and how the balance between plasticity and stability varies across the brain is an important open question. Within a processing hierarchy, it is thought that plasticity is increased at higher levels of cortical processing, but direct quantitative comparisons between low- and high-level plasticity have not been made so far. Here, we addressed this issue for the human cortical visual system. By quantifying plasticity as the complement of the heritability of functional connectivity, we demonstrate a non-monotonic relationship between plasticity and hierarchical level, such that plasticity decreases from early to mid-level cortex, and then increases further up the visual hierarchy. This non-monotonic relationship argues against recent theory that the balance between plasticity and stability is governed by the costs of the “coding-catastrophe”, and can be explained by a concurrent decline of short-term adaptation and rise of long-term plasticity up the visual processing hierarchy.
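As a worked illustration of "plasticity as the complement of heritability": with twin data, Falconer's classic estimator gives h² = 2(r_MZ − r_DZ), and the plasticity index is then 1 − h². The estimator choice and the correlation values below are assumptions for illustration; the authors' heritability model may differ.

```python
# Falconer's twin-based heritability estimator, used here only to illustrate
# the "plasticity = 1 - heritability" construction. Values are made up.
r_mz = 0.60   # connectivity similarity between monozygotic twin pairs
r_dz = 0.35   # connectivity similarity between dizygotic twin pairs

h2 = 2 * (r_mz - r_dz)       # heritability of the connectivity feature
plasticity = 1 - h2          # its complement, the plasticity index
print(f"h^2 = {h2:.2f}, plasticity index = {plasticity:.2f}")
```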


2021 ◽  
Author(s):  
Yiyuan Zhang ◽  
Ke Zhou ◽  
Pinglei Bao ◽  
Jia Liu

To achieve the computational goal of rapidly recognizing miscellaneous objects in the environment despite large variations in their appearance, our mind represents objects in a high-dimensional object space to provide separable category information and enable the extraction of different kinds of information necessary for various levels of visual processing. To implement this abstract and complex object space, the ventral temporal cortex (VTC) develops different object-selective regions with a certain topological organization as the physical substrate. However, the principle that governs the topological organization of object selectivities in the VTC remains unclear. Here, equipped with the wiring cost minimization principle constrained by the wiring length of neurons in the human temporal lobe, we constructed a hybrid self-organizing map (SOM) model as an artificial VTC (VTC-SOM) to explain how this abstract and complex object space is faithfully implemented in the brain. In two in silico experiments with empirical brain imaging and single-unit data, our VTC-SOM predicted the topological structure of fine-scale functional regions (face-, object-, body-, and place-selective regions) and the boundary (i.e., the middle fusiform sulcus) in large-scale abstract functional maps (animate vs. inanimate, real-world large-size vs. small-size, central vs. peripheral), with no significant loss in functionality (e.g., categorical selectivity, a hierarchy of view-invariant representations). These findings illustrate that the simple principle used in our model, rather than a combination of hypotheses such as temporal associations, conceptual knowledge, and computational demands, is apparently sufficient to determine the topological organization of object selectivities in the VTC. In this way, the high-dimensional object space is faithfully implemented on the two-dimensional cortical surface of the brain.
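A minimal self-organizing map captures the core mechanism invoked above: high-dimensional inputs are mapped onto a 2-D sheet so that nearby units acquire similar selectivities. The sketch below omits the paper's wiring-length constraint and empirical inputs; the grid size, learning schedule, and data are all illustrative assumptions.

```python
# Minimal SOM: each input moves the best-matching unit and its sheet
# neighbors toward it, yielding a smooth topological map of selectivities.
import numpy as np

rng = np.random.default_rng(2)
grid = 10                                   # 10 x 10 sheet of model "columns"
dim = 50                                    # dimensionality of the object space
W = rng.normal(size=(grid, grid, dim))      # each unit's preferred feature vector
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), -1)

X = rng.normal(size=(2000, dim))            # stand-in object-space inputs

for t, x in enumerate(X):
    lr = 0.5 * (1 - t / len(X))             # decaying learning rate
    sigma = 3.0 * (1 - t / len(X)) + 0.5    # shrinking neighborhood
    # Best-matching unit: the column whose weights are closest to the input.
    d = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighborhood on the sheet pulls nearby units toward the input.
    g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, -1) / (2 * sigma ** 2))
    W += lr * g[..., None] * (x - W)

print("trained SOM weight sheet:", W.shape)
```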


2019 ◽  
Author(s):  
Lina Teichmann ◽  
Genevieve L. Quek ◽  
Amanda K. Robinson ◽  
Tijl Grootswagers ◽  
Thomas A. Carlson ◽  
...  

Abstract
The ability to rapidly and accurately recognise complex objects is a crucial function of the human visual system. To recognise an object, we need to bind incoming visual features such as colour and form together into cohesive neural representations and integrate these with our pre-existing knowledge about the world. For some objects, typical colour is a central feature for recognition; for example, a banana is typically yellow. Here, we applied multivariate pattern analysis on time-resolved neuroimaging (magnetoencephalography) data to examine how object-colour knowledge affects emerging object representations over time. Our results from 20 participants (11 female) show that the typicality of object-colour combinations influences object representations, although not at the initial stages of object and colour processing. We find evidence that colour decoding peaks later for atypical object-colour combinations in comparison to typical object-colour combinations, illustrating the interplay between processing incoming object features and stored object-knowledge. Taken together, these results provide new insights into the integration of incoming visual information with existing conceptual object knowledge.

Significance Statement
To recognise objects, we have to be able to bind object features such as colour and shape into one coherent representation and compare it to stored object knowledge. The magnetoencephalography data presented here provide novel insights about the integration of incoming visual information with our knowledge about the world. Using colour as a model to understand the interaction between seeing and knowing, we show that there is a unique pattern of brain activity for congruently coloured objects (e.g., a yellow banana) relative to incongruently coloured objects (e.g., a red banana). This effect of object-colour knowledge only occurs after single object features are processed, demonstrating that conceptual knowledge is accessed relatively late in the visual processing hierarchy.
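Time-resolved decoding of the kind used here trains and tests a classifier independently at each timepoint to trace when a stimulus property (such as colour) becomes decodable. Below is a sketch on simulated sensor data; the classifier, trial counts, and injected effect window are illustrative assumptions.

```python
# Sketch of time-resolved decoding: one cross-validated classifier per
# timepoint, yielding an accuracy time course. All data are simulated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_sensors, n_times = 100, 64, 60
y = rng.integers(0, 2, n_trials)                     # e.g., colour condition

data = rng.normal(size=(n_trials, n_sensors, n_times))
data[:, :10, 25:40] += y[:, None, None] * 0.8        # inject a decodable window

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    data[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
peak = accuracy.argmax()
print(f"peak decoding at timepoint {peak}: {accuracy[peak]:.2f}")
```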


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Carlos González-García ◽  
Matthew W Flounders ◽  
Raymond Chang ◽  
Alexis T Baria ◽  
Biyu J He

How prior knowledge shapes perceptual processing across the human brain, particularly in the frontoparietal (FPN) and default-mode (DMN) networks, remains unknown. Using ultra-high-field (7T) functional magnetic resonance imaging (fMRI), we elucidated the effects that the acquisition of prior knowledge has on perceptual processing across the brain. We observed that prior knowledge significantly impacted neural representations in the FPN and DMN, rendering responses to individual visual images more distinct from each other, and more similar to the image-specific prior. In addition, neural representations were structured in a hierarchy that remained stable across perceptual conditions, with early visual areas and the DMN anchored at the two extremes. Two large-scale cortical gradients occur along this hierarchy: first, the dimensionality of the neural representational space increased along the hierarchy; second, the prior’s impact on neural representations was greater in higher-order areas. These results reveal extensive and graded influences of prior knowledge on perceptual processing across the brain.
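The dimensionality of a neural representational space is often summarized by the participation ratio of the eigenvalue spectrum of the stimulus-by-stimulus covariance, PR = (Σλ)² / Σλ². The sketch below uses that common estimator on simulated patterns; the paper's own dimensionality measure may differ.

```python
# Sketch: participation-ratio dimensionality of a set of response patterns.
# Inputs are simulated; this is one standard estimator among several.
import numpy as np

rng = np.random.default_rng(4)
n_images, n_vox = 30, 500
patterns = rng.normal(size=(n_images, n_vox)) @ rng.normal(size=(n_vox, n_vox)) * 0.01

centered = patterns - patterns.mean(axis=0)
eigvals = np.linalg.eigvalsh(centered @ centered.T)   # image-by-image covariance
eigvals = np.clip(eigvals, 0, None)
pr = eigvals.sum() ** 2 / (eigvals ** 2).sum()
print(f"participation-ratio dimensionality: {pr:.1f} (max {n_images})")
```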


2020 ◽  
Author(s):  
Aliff Asyraff ◽  
Rafael Lemarchand ◽  
Andres Tamm ◽  
Paul Hoffman

Abstract
Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical form, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
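The generalisation test described above amounts to cross-decoding: train a classifier to discriminate the four events from patterns evoked by one set of sentences, then test it on patterns evoked by different sentences describing the same events. A sketch on simulated data follows; all names and values are illustrative.

```python
# Sketch: cross-sentence generalisation. Above-chance transfer from sentence
# set A to held-out sentence set B implies a sentence-general event code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_per, n_vox = 20, 150
events = np.repeat(np.arange(4), n_per)              # 4 events

event_templates = rng.normal(size=(4, n_vox))

def simulate(noise):
    # Event-specific pattern plus sentence-to-sentence noise.
    return event_templates[events] + rng.normal(scale=noise, size=(len(events), n_vox))

X_train = simulate(noise=1.0)    # patterns for sentence set A
X_test = simulate(noise=1.0)     # patterns for held-out sentence set B

clf = LogisticRegression(max_iter=1000).fit(X_train, events)
print(f"cross-sentence decoding accuracy: {clf.score(X_test, events):.2f} (chance 0.25)")
```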


2019 ◽  
Author(s):  
Sirui Liu ◽  
Qing Yu ◽  
Peter U. Tse ◽  
Patrick Cavanagh

Summary
When perception differs from the physical stimulus, as it does for visual illusions and binocular rivalry, the opportunity arises to localize where perception emerges in the visual processing hierarchy. Representations prior to that stage differ from the eventual conscious percept even though they provide input to it. Here we investigate where and how a remarkable misperception of position emerges in the brain. This “double-drift” illusion causes a dramatic mismatch between retinal and perceived location, producing a perceived path that can differ from its physical path by 45° or more [1]. The deviations in the perceived trajectory can accumulate over at least a second [1] whereas other motion-induced position shifts accumulate over only 80 to 100 ms before saturating [2]. Using fMRI and multivariate pattern analysis, we find that the illusory path does not share activity patterns with a matched physical path in any early visual areas. In contrast, a whole-brain searchlight analysis reveals a shared representation in more anterior regions of the brain. These higher-order areas would have the longer time constants required to accumulate the small moment-to-moment position offsets that presumably originate in early visual cortices, and then transform these sensory inputs into a final conscious percept. The dissociation between perception and the activity in early sensory cortex suggests that perceived position does not emerge in what is traditionally regarded as the visual system but emerges instead at a much higher level.
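The key analysis is cross-condition decoding: train on patterns evoked by physical paths and test on illusory-path trials; successful transfer implies a shared representation. The sketch below simulates a region where the representation is shared; a full searchlight would repeat this test in a sphere around every voxel. Everything here is illustrative.

```python
# Sketch: train on physical-path trials, test on illusory-path trials.
# Transfer accuracy above chance indicates shared activity patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n_trials, n_vox = 80, 120
path = rng.integers(0, 2, n_trials)        # leftward- vs rightward-tilted path

template = rng.normal(size=(2, n_vox))
X_physical = template[path] + rng.normal(scale=1.0, size=(n_trials, n_vox))
# If a region represents the *perceived* path, illusory trials should reuse
# the same patterns; here we simulate a region where they do.
X_illusory = template[path] + rng.normal(scale=1.0, size=(n_trials, n_vox))

clf = LogisticRegression(max_iter=1000).fit(X_physical, path)
print(f"physical-to-illusory transfer accuracy: {clf.score(X_illusory, path):.2f}")
```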


2014 ◽  
Vol 26 (1) ◽  
pp. 120-131 ◽  
Author(s):  
Thomas A. Carlson ◽  
Ryan A. Simmons ◽  
Nikolaus Kriegeskorte ◽  
L. Robert Slevc

In the ventral visual pathway, early visual areas encode light patterns on the retina in terms of image properties, for example, edges and color, whereas higher areas encode visual information in terms of objects and categories. At what point does semantic knowledge, as instantiated in human language, emerge? We examined this question by studying whether semantic similarity in language relates to the brain's organization of object representations in inferior temporal cortex (ITC), an area of the brain at the crux of several proposals describing how the brain might represent conceptual knowledge. Semantic relationships among words can be viewed as a geometrical structure with some pairs of words close in their meaning (e.g., man and boy) and other pairs more distant (e.g., man and tomato). ITC's representation of objects similarly can be viewed as a complex structure with some pairs of stimuli evoking similar patterns of activation (e.g., man and boy) and other pairs evoking very different patterns (e.g., man and tomato). In this study, we examined whether the geometry of visual object representations in ITC bears a correspondence to the geometry of semantic relationships between word labels used to describe the objects. We compared ITC's representation to semantic structure, evaluated by explicit ratings of semantic similarity and by five computational measures of semantic similarity. We show that the representational geometry of ITC—but not of earlier visual areas (V1)—is reflected both in explicit behavioral ratings of semantic similarity and also in measures of semantic similarity derived from word usage patterns in natural language. Our findings show that patterns of brain activity in ITC not only reflect the organization of visual information into objects but also represent objects in a format compatible with conceptual thought and language.
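The comparison of representational geometries here is a standard representational similarity analysis: build a neural representational dissimilarity matrix (RDM) from ITC patterns and a semantic RDM from similarity ratings, then correlate their condensed (upper-triangle) entries, conventionally with Spearman's rho. A sketch on simulated inputs:

```python
# Sketch of RSA: correlate a neural RDM with a semantic RDM. pdist returns
# the condensed upper triangle directly. All inputs are simulated.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_objects, n_vox = 20, 300

neural_patterns = rng.normal(size=(n_objects, n_vox))
neural_rdm = pdist(neural_patterns, metric="correlation")   # 1 - Pearson r

semantic_rdm = pdist(rng.normal(size=(n_objects, 5)))       # stand-in for rated similarity

rho, p = spearmanr(neural_rdm, semantic_rdm)
print(f"neural-semantic RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```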

