Size Precedes View: Developmental Emergence of Invariant Object Representations in Lateral Occipital Complex

2015 · Vol 27 (3) · pp. 474-491
Author(s): Mayu Nishimura, K. Suzanne Scherf, Valentinos Zachariou, Michael J. Tarr, Marlene Behrmann

Although object perception involves encoding a wide variety of object properties (e.g., size, color, viewpoint), some properties are irrelevant for identifying the object. The key to successful object recognition is having an internal representation of the object identity that is insensitive to these properties while accurately representing important diagnostic features. Behavioral evidence indicates that the formation of these kinds of invariant object representations takes many years to develop. However, little research has investigated the developmental emergence of invariant object representations in the ventral visual processing stream, particularly in the lateral occipital complex (LOC), which is implicated in object processing in adults. Here, we used an fMR adaptation paradigm to evaluate age-related changes in the neural representation of objects within LOC across variations in size and viewpoint from childhood through early adulthood. We found a dissociation between the neural encoding of object size and object viewpoint within LOC: by 5–10 years of age, LOC demonstrates adaptation across changes in size but not viewpoint, suggesting that LOC responses are invariant to size variations early in development, whereas adaptation across changes in view emerges much later. Furthermore, activation in LOC was correlated with behavioral indicators of view invariance across the entire sample, such that greater adaptation was correlated with better recognition of objects across changes in viewpoint. We did not observe similar developmental differences within early visual cortex. These results indicate that LOC acquires the capacity to compute invariance specific to different sources of information at different time points over the course of development.
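A minimal sketch of the kind of measure such an adaptation paradigm yields: an ROI's mean response to repeated presentations is compared with its response to novel presentations, and the relative reduction serves as an adaptation index. The index formula, variable names, and values below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch (assumed index and synthetic values, not the study's data):
# fMR adaptation is quantified as the relative reduction in an ROI's response
# to repeated versus novel stimuli; larger values indicate stronger adaptation.
import numpy as np

def adaptation_index(novel_responses, repeated_responses):
    """(novel - repeated) / (novel + repeated), computed on mean ROI responses."""
    novel = np.mean(novel_responses)
    repeated = np.mean(repeated_responses)
    return (novel - repeated) / (novel + repeated)

rng = np.random.default_rng(0)
loc_novel = rng.normal(1.0, 0.1, 20)         # hypothetical % signal change, novel objects
loc_size_change = rng.normal(0.6, 0.1, 20)   # same object repeated at a new size
loc_view_change = rng.normal(0.95, 0.1, 20)  # same object repeated at a new viewpoint

print(f"Adaptation across size changes: {adaptation_index(loc_novel, loc_size_change):.2f}")
print(f"Adaptation across view changes: {adaptation_index(loc_novel, loc_view_change):.2f}")
```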

1998 · Vol 353 (1373) · pp. 1341-1351
Author(s): Glyn W. Humphreys

I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on (i) visual neglect and (ii) reading and counting reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system, while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification.


Neurology · 2020 · Vol 95 (12) · pp. e1672-e1685
Author(s): Colin Groot, B.T. Thomas Yeo, Jacob W. Vogel, Xiuming Zhang, Nanbo Sun, ...

Objective: To determine whether atrophy relates to the phenotypical variants of posterior cortical atrophy (PCA) recently proposed in clinical criteria (i.e., dorsal, ventral, dominant-parietal, and caudal), we assessed associations between latent atrophy factors and cognition.
Methods: We employed a data-driven Bayesian modeling framework based on latent Dirichlet allocation to identify latent atrophy factors in a multicenter cohort of 119 individuals with PCA (age 64 ± 7 years, 38% male, Mini-Mental State Examination 21 ± 5, 71% β-amyloid positive, 29% β-amyloid status unknown). The model uses standardized gray matter density images as input (adjusted for age, sex, intracranial volume, MRI scanner field strength, and whole-brain gray matter volume) and provides voxelwise probabilistic maps for a predetermined number of atrophy factors, allowing every individual to express each factor to a degree without a priori classification. Individual factor expressions were correlated with 4 PCA-specific cognitive domains (object perception, space perception, nonvisual/parietal functions, and primary visual processing) using general linear models.
Results: The model revealed 4 distinct yet partially overlapping atrophy factors: right-dorsal, right-ventral, left-ventral, and limbic. We found that object perception and primary visual processing were associated with atrophy that predominantly reflects the right-ventral factor. Furthermore, space perception was associated with atrophy that predominantly represents the right-dorsal and right-ventral factors. However, individual participant profiles revealed that the large majority expressed multiple atrophy factors and had mixed clinical profiles, with impairments across multiple domains, rather than displaying a discrete clinical-radiologic phenotype.
Conclusion: Our results indicate that specific brain-behavior networks are vulnerable in PCA, but most individuals display a constellation of affected brain regions and symptoms, indicating that classification into 4 mutually exclusive variants is unlikely to be clinically useful.
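As a rough illustration of this style of analysis, the sketch below fits a latent Dirichlet allocation model to a synthetic subjects-by-voxels matrix of non-negative "atrophy" values and then relates one factor's expression to a cognition score with an ordinary least-squares general linear model. The data, preprocessing, and covariate handling are placeholders, not the study's actual pipeline.

```python
# Hedged sketch: latent atrophy factors via LDA, then a GLM relating factor
# expression to cognition. All data below are simulated placeholders.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(1)
n_subjects, n_voxels = 119, 500

# Non-negative, count-like "atrophy" values per voxel (LDA expects counts);
# in practice these would be derived from standardized gray matter density maps.
atrophy = rng.poisson(lam=3.0, size=(n_subjects, n_voxels))

lda = LatentDirichletAllocation(n_components=4, random_state=0)
factor_expression = lda.fit_transform(atrophy)   # subjects x 4; each row sums to 1

# Relate factor expressions to a (hypothetical) cognitive domain score.
# One factor is dropped because the expressions are compositional.
cognition = rng.normal(size=n_subjects)
X = sm.add_constant(factor_expression[:, :3])
glm_fit = sm.OLS(cognition, X).fit()
print(glm_fit.params.round(3))
```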


2021
Author(s): Marek A. Pedziwiatr, Elisabeth von dem Hagen, Christoph Teufel

Humans constantly move their eyes to explore the environment and obtain information. Competing theories of gaze guidance consider the factors driving eye movements within a dichotomy between low-level visual features and high-level object representations. However, recent developments in object perception indicate a complex and intricate relationship between features and objects. Specifically, image-independent object-knowledge can generate objecthood by dynamically reconfiguring how feature space is carved up by the visual system. Here, we adopt this emerging perspective of object perception, moving away from the simplifying dichotomy between features and objects in explanations of gaze guidance. We recorded eye movements in response to stimuli that appear as meaningless patches on initial viewing but are experienced as coherent objects once relevant object-knowledge has been acquired. We demonstrate that gaze guidance differs substantially depending on whether observers experienced the same stimuli as meaningless patches or organised them into object representations. In particular, fixations on identical images became object-centred, less dispersed, and more consistent across observers once observers had been exposed to relevant prior object-knowledge. Observers' gaze behaviour also indicated a shift from exploratory information-sampling to a strategy of extracting information mainly from selected, object-related image areas. These effects were evident from the first fixations on the image. Importantly, however, eye movements were not fully determined by object representations but were best explained by a simple model that integrates image-computable features and high-level, knowledge-dependent object representations. Overall, the results show how information sampling via eye movements in humans is guided by a dynamic interaction between image-computable features and knowledge-driven perceptual organisation.
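Two of the gaze measures mentioned here, fixation dispersion and inter-observer consistency, can be computed from fixation coordinates along the following general lines. The specific definitions used below (distance from the centroid, correlation of Gaussian-smoothed fixation maps) are common conventions adopted here as assumptions, not necessarily the paper's exact measures.

```python
# Hedged sketch of two generic gaze metrics (assumed definitions, synthetic data).
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_dispersion(fixations):
    """Mean Euclidean distance of fixations (N x 2 array of x, y in pixels)
    from their centroid; lower values mean more tightly clustered gaze."""
    centroid = fixations.mean(axis=0)
    return np.linalg.norm(fixations - centroid, axis=1).mean()

def fixation_map(fixations, shape=(768, 1024), sigma=30.0):
    """Gaussian-smoothed fixation density map for one observer."""
    m = np.zeros(shape)
    for x, y in np.clip(np.round(fixations).astype(int), 0, [shape[1] - 1, shape[0] - 1]):
        m[y, x] += 1
    return gaussian_filter(m, sigma)

def inter_observer_consistency(fix_a, fix_b, **kwargs):
    """Pearson correlation between two observers' fixation maps."""
    a, b = fixation_map(fix_a, **kwargs), fixation_map(fix_b, **kwargs)
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(2)
obs1 = rng.normal([512, 384], 60, size=(15, 2))  # hypothetical object-centred fixations
obs2 = rng.normal([512, 384], 60, size=(15, 2))
print(f"Dispersion: {fixation_dispersion(obs1):.1f} px, "
      f"consistency: {inter_observer_consistency(obs1, obs2):.2f}")
```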


Author(s): Emmanouil Froudarakis, Uri Cohen, Maria Diamantaki, Edgar Y. Walker, Jacob Reimer, ...

Abstract Despite variations in appearance, we robustly recognize objects. Neuronal populations responding to objects presented under varying conditions form object manifolds, and hierarchically organized visual areas are thought to untangle pixel intensities into linearly decodable object representations. However, the associated changes in the geometry of object manifolds along the cortex remain unknown. Using home-cage training, we showed that mice are capable of invariant object recognition. We simultaneously recorded the responses of thousands of neurons to measure the information about object identity available across the visual cortex and found that lateral visual areas LM, LI, and AL carry more linearly decodable object identity information than other visual areas. We applied the theory of linear separability of manifolds and found that the increase in classification capacity is associated with a decrease in the dimension and radius of the object manifold, identifying features of the population code that enable invariant object coding.
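The core quantity here, how much linearly decodable object-identity information a population carries, is typically estimated with a cross-validated linear classifier applied to trial-by-trial population responses. The sketch below does this on synthetic data; the simulation and decoder choice are illustrative assumptions, not the recorded dataset or the authors' exact analysis.

```python
# Hedged sketch: cross-validated linear decoding of object identity from
# simulated population responses (synthetic data, illustrative decoder).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_neurons, n_trials_per_object, n_objects = 200, 50, 4

# Each object evokes a noisy version of its mean population response; the noise
# stands in for nuisance variation such as changes in object pose or position.
means = rng.normal(0, 1, size=(n_objects, n_neurons))
X = np.concatenate([m + rng.normal(0, 2.0, size=(n_trials_per_object, n_neurons))
                    for m in means])
y = np.repeat(np.arange(n_objects), n_trials_per_object)

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"Linear identity decoding accuracy: {accuracy:.2f} (chance = {1 / n_objects:.2f})")
```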


2021
Author(s): Ning Mei, Roberto Santana, David Soto

Abstract Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner: low trial numbers and signal-detection-theoretic constraints did not allow conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involve prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational model simulation of visual information processing and representation in the absence of perceptual sensitivity, using feedforward convolutional neural networks trained to perform a visual task similar to that of the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.
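The key generalisation claim can be illustrated with a cross-condition decoding scheme: train a classifier on multivoxel patterns from consciously perceived trials and test it on unconscious (null-sensitivity) trials. The sketch below uses synthetic patterns and an off-the-shelf linear classifier as stand-ins; it is not the study's data or exact analysis.

```python
# Hedged sketch: cross-condition (conscious -> unconscious) decoding on
# synthetic multivoxel patterns; above-chance transfer indicates a shared code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_voxels = 300
shared_pattern = rng.normal(0, 1, n_voxels)  # stimulus-class pattern shared across states

def simulate_trials(n_trials, signal_strength):
    """Two stimulus classes embedded in noise with a given signal strength."""
    labels = rng.integers(0, 2, n_trials)
    patterns = rng.normal(0, 1, (n_trials, n_voxels))
    patterns += signal_strength * np.outer(2 * labels - 1, shared_pattern)
    return patterns, labels

X_conscious, y_conscious = simulate_trials(200, signal_strength=0.3)
X_unconscious, y_unconscious = simulate_trials(200, signal_strength=0.1)  # weaker signal

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_conscious, y_conscious)
print(f"Generalisation accuracy: {clf.score(X_unconscious, y_unconscious):.2f} (chance = 0.50)")
```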


2021 · Vol 15
Author(s): Julian L. Amengual, Suliann Ben Hamed

Persistent activity has been observed in the prefrontal cortex (PFC), in particular during the delay periods of visual attention tasks. Classical approaches based on activity averaged over multiple trials have revealed that this activity encodes information about the attentional instruction provided in such tasks. However, single-trial approaches have shown that activity in this area is sparse rather than persistent, and highly heterogeneous both within and between trials. This observation raises the question of how persistent the supposedly persistent attention-related prefrontal activity actually is, and how it contributes to spatial attention. In this paper, we review recent work that precisely deconstructs the persistence of neural activity in the PFC in the context of attention orienting. Machine-learning methods for decoding this information reveal that attention orienting is a highly dynamic process with intrinsic oscillatory dynamics operating at multiple timescales, from milliseconds to minutes. Dimensionality-reduction methods further show that this activity dynamically incorporates multiple sources of information. This framework reveals a high degree of complexity in the neural representation of attention-related information in the PFC and shows how its computational organization predicts behavior.
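A common single-trial approach of this kind is time-resolved decoding: a classifier is trained at each time bin of the delay period to read out the attended location from population activity, revealing when the information is present rather than assuming it persists. The sketch below simulates a waxing-and-waning attention signal; the data and parameters are illustrative assumptions, not recordings from the reviewed studies.

```python
# Hedged sketch: time-resolved decoding of the attended side from simulated
# population activity whose attention signal is intermittent, not persistent.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_neurons, n_timebins = 120, 80, 20
attended_side = rng.integers(0, 2, n_trials)  # 0 = left, 1 = right

# Attention signal that comes and goes across the delay period
signal = 0.8 * (np.sin(np.linspace(0, 3 * np.pi, n_timebins)) > 0)
tuning = rng.normal(0, 1, n_neurons)
rates = rng.normal(0, 1, (n_trials, n_timebins, n_neurons))
rates += signal[None, :, None] * np.outer(2 * attended_side - 1, tuning)[:, None, :]

accuracy_over_time = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), rates[:, t, :],
                    attended_side, cv=5).mean()
    for t in range(n_timebins)
])
print("Decoding accuracy per time bin:", np.round(accuracy_over_time, 2))
```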


2020
Author(s): Yaoda Xu, Maryam Vaziri-Pashkam

Abstract Any given visual object input is characterized by multiple visual features, such as identity, position, and size. Despite the usefulness of identity and nonidentity features in vision and their joint coding throughout the primate ventral visual processing pathway, they have so far been studied relatively independently. Here we document the relative coding strength of object identity and nonidentity features in a brain region and how this may change across the human ventral visual pathway. We examined a total of four nonidentity features, including two Euclidean features (position and size) and two non-Euclidean features (image statistics and the spatial frequency content of an image). Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with identity outweighing the non-Euclidean features, but not the Euclidean ones, at higher levels of visual processing. A similar analysis was performed in 14 convolutional neural networks (CNNs) pretrained to perform object categorization, varying in architecture, depth, and the presence or absence of recurrent processing. While the relative coding strength of object identity and nonidentity features in lower CNN layers matched well with that in early human visual areas, the match between higher CNN layers and higher human visual regions was limited. Similar results were obtained regardless of whether a CNN was trained with real-world or stylized object images that emphasized shape representation. Together, by measuring the relative coding strength of object identity and nonidentity features, our approach provides a new tool to characterize feature coding in the human brain and the correspondence between the brain and CNNs.
Significance Statement: This study documented the relative coding strength of object identity compared to four types of nonidentity features along the human ventral visual processing pathway and compared brain responses with those of 14 CNNs pretrained to perform object categorization. Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with the coding strength of the different nonidentity features differing at higher levels of visual processing. While feature coding in lower CNN layers matched well with that of early human visual areas, the match between higher CNN layers and higher human visual regions was limited. Our approach provides a new tool to characterize feature coding in the human brain and the correspondence between the brain and CNNs.
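One simple way to operationalise "relative coding strength" is to decode identity and a nonidentity feature (here, position) from the same response patterns and compare cross-validated accuracies. The simulation, decoder, and the use of accuracy as the strength measure below are assumptions for illustration, not the study's actual method.

```python
# Hedged sketch: comparing how strongly identity vs. position is coded in the
# same simulated response patterns, using decoding accuracy as a rough proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_units = 240, 150
identity = rng.integers(0, 4, n_trials)   # four object identities
position = rng.integers(0, 2, n_trials)   # two positions

def mix_in_code(responses, labels, strength):
    """Add a label-specific pattern to the responses with the given strength."""
    patterns = rng.normal(0, 1, (labels.max() + 1, n_units))
    return responses + strength * patterns[labels]

# A "higher-level" region: identity coded strongly, position only weakly
responses = rng.normal(0, 1, (n_trials, n_units))
responses = mix_in_code(responses, identity, strength=0.8)
responses = mix_in_code(responses, position, strength=0.3)

for name, labels in [("identity", identity), ("position", position)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), responses, labels, cv=5).mean()
    print(f"{name} decoding accuracy: {acc:.2f}")
```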


2019 · Vol 9 (1)
Author(s): Rose Bruffaerts, Lorraine K. Tyler, Meredith Shafto, Kamen A. Tsvetanov, ...

Abstract Making sense of the external world is vital for multiple domains of cognition, and so it is crucial that object recognition is maintained across the lifespan. We investigated age differences in perceptual and conceptual processing of visual objects in a population-derived sample of 85 healthy adults (24–87 years old) by relating measures of object processing to cognition across the lifespan. Magnetoencephalography (MEG) was recorded during a picture naming task to provide a direct measure of neural activity that is not confounded by age-related vascular changes. Multiple linear regression was used to estimate neural responsivity for each individual, namely the capacity to represent visual or semantic information relating to the pictures. We find that the capacity to represent semantic information is linked to higher naming accuracy, a measure of task-specific performance. In mature adults, the capacity to represent semantic information also correlated with higher levels of fluid intelligence, reflecting domain-general performance. In contrast, the latency of visual processing did not relate to measures of cognition. These results indicate that neural responsivity measures relate to naming accuracy and fluid intelligence. We propose that maintaining neural responsivity in older age confers benefits in task-related and domain-general cognitive processes, supporting the brain maintenance view of healthy cognitive ageing.
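Since the abstract describes estimating neural responsivity per individual with multiple linear regression, the sketch below shows the general shape of such an analysis: a neural measure is regressed on visual and semantic predictors, and the fitted coefficients serve as that individual's responsivity estimates. The predictor names, the neural measure, and the data are illustrative assumptions, not the study's actual model.

```python
# Hedged sketch: per-individual multiple linear regression of a neural measure
# on visual and semantic predictors (all quantities simulated for illustration).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_trials = 300
visual_predictor = rng.normal(size=n_trials)    # e.g., low-level image statistics
semantic_predictor = rng.normal(size=n_trials)  # e.g., a semantic-model value per picture

# Hypothetical trial-wise MEG response amplitude for one participant
meg_amplitude = (0.4 * visual_predictor + 0.6 * semantic_predictor
                 + rng.normal(scale=1.0, size=n_trials))

X = np.column_stack([visual_predictor, semantic_predictor])
fit = LinearRegression().fit(X, meg_amplitude)
print("Visual / semantic responsivity (betas):", np.round(fit.coef_, 2))
print("Variance explained (R^2):", round(fit.score(X, meg_amplitude), 2))
```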


2010 · Vol 22 (11) · pp. 2417-2426
Author(s): Stephanie A. McMains, Sabine Kastner

Multiple stimuli that are present simultaneously in the visual field compete for neural representation. At the same time, however, multiple stimuli in cluttered scenes also undergo perceptual organization according to certain rules originally defined by the Gestalt psychologists, such as similarity or proximity, thereby segmenting scenes into candidate objects. How can these two seemingly orthogonal neural processes that occur early in the visual processing stream be reconciled? One possibility is that competition occurs among perceptual groups rather than at the level of elements within a group. We probed this idea using fMRI by assessing competitive interactions across visual cortex in displays containing varying degrees of perceptual organization or perceptual grouping (Grp). In strong Grp displays, elements were arranged such that either an illusory figure or a group of collinear elements was present, whereas in weak Grp displays the same elements were arranged randomly. Competitive interactions among stimuli were overcome throughout early visual cortex and V4 when elements were grouped, regardless of Grp type. Our findings suggest that context-dependent grouping mechanisms and competitive interactions are linked to provide a bottom-up bias toward candidate objects in cluttered scenes.
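Competitive interactions in fMRI are often quantified with a sensory-suppression style index that compares responses to simultaneously versus sequentially presented elements; comparing this index between weakly and strongly grouped displays then asks whether grouping reduces competition. The index definition and the numbers below are illustrative assumptions, not the study's reported measure or values.

```python
# Hedged sketch: a generic suppression index for competitive interactions,
# compared between weakly and strongly grouped displays (synthetic values).
import numpy as np

def suppression_index(sequential, simultaneous):
    """Relative response loss for simultaneous vs. sequential presentation;
    larger values indicate stronger competition."""
    return (np.mean(sequential) - np.mean(simultaneous)) / np.mean(sequential)

rng = np.random.default_rng(8)
weak_grouping = {"sequential": rng.normal(1.0, 0.05, 16),
                 "simultaneous": rng.normal(0.70, 0.05, 16)}
strong_grouping = {"sequential": rng.normal(1.0, 0.05, 16),
                   "simultaneous": rng.normal(0.95, 0.05, 16)}

for name, cond in [("weak grouping", weak_grouping), ("strong grouping", strong_grouping)]:
    idx = suppression_index(cond["sequential"], cond["simultaneous"])
    print(f"{name}: suppression index = {idx:.2f}")
```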

