Neural representation of objects in space: a dual coding account

1998 ◽  
Vol 353 (1373) ◽  
pp. 1341-1351 ◽  
Author(s):  
Glyn W. Humphreys

I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on (i) visual neglect and (ii) reading and counting reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification.

2015 ◽  
Vol 27 (3) ◽  
pp. 474-491 ◽  
Author(s):  
Mayu Nishimura ◽  
K. Suzanne Scherf ◽  
Valentinos Zachariou ◽  
Michael J. Tarr ◽  
Marlene Behrmann

Although object perception involves encoding a wide variety of object properties (e.g., size, color, viewpoint), some properties are irrelevant for identifying the object. The key to successful object recognition is having an internal representation of the object identity that is insensitive to these properties while accurately representing important diagnostic features. Behavioral evidence indicates that the formation of these kinds of invariant object representations takes many years to develop. However, little research has investigated the developmental emergence of invariant object representations in the ventral visual processing stream, particularly in the lateral occipital complex (LOC), which is implicated in object processing in adults. Here, we used an fMR adaptation paradigm to evaluate age-related changes in the neural representation of objects within LOC across variations in size and viewpoint from childhood through early adulthood. We found a dissociation between the neural encoding of object size and object viewpoint within LOC: by ages 5–10 years, LOC demonstrates adaptation across changes in size but not viewpoint, suggesting that LOC responses are invariant to size variations early on, whereas adaptation across changes in view emerges much later in development. Furthermore, activation in LOC was correlated with behavioral indicators of view invariance across the entire sample, such that greater adaptation was correlated with better recognition of objects across changes in viewpoint. We did not observe similar developmental differences within early visual cortex. These results indicate that LOC acquires the capacity to compute invariance specific to different sources of information at different time points over the course of development.
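The adaptation logic described above can be made concrete with a minimal sketch: adaptation is a reduced response to repeated stimuli relative to novel ones, so invariance to a property (size or viewpoint) shows up as a positive adaptation index when only that property changes. All response values below are hypothetical, chosen purely for illustration; this is not the authors' analysis pipeline.

```python
import numpy as np

def adaptation_index(novel_betas, repeated_betas):
    """Simple fMR adaptation index: relative signal reduction for
    repeated vs. novel stimuli. Positive values indicate adaptation,
    i.e., the region treats the changed property as "the same object"."""
    novel = np.mean(novel_betas)
    repeated = np.mean(repeated_betas)
    return (novel - repeated) / novel

# Hypothetical LOC responses (arbitrary units): strong adaptation when
# object size changes, little adaptation when viewpoint changes.
size_change = adaptation_index(novel_betas=[1.2, 1.1, 1.3],
                               repeated_betas=[0.7, 0.8, 0.75])
view_change = adaptation_index(novel_betas=[1.2, 1.15, 1.25],
                               repeated_betas=[1.1, 1.2, 1.15])
```

Under this toy pattern, `size_change` exceeds `view_change`, mirroring the reported early size invariance and late-developing view invariance.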


2017 ◽  
Author(s):  
Daniel Kaiser ◽  
Marius V. Peelen

To optimize processing, the human visual system utilizes regularities present in naturalistic visual input. One of these regularities is the relative position of objects in a scene (e.g., a sofa in front of a television), with behavioral research showing that regularly positioned objects are easier to perceive and to remember. Here we use fMRI to test how positional regularities are encoded in the visual system. Participants viewed pairs of objects that formed minimalistic two-object scenes (e.g., a “living room” consisting of a sofa and television) presented in their regularly experienced spatial arrangement or in an irregular arrangement (with interchanged positions). Additionally, single objects were presented centrally and in isolation. Multi-voxel activity patterns evoked by the object pairs were modeled as the average of the response patterns evoked by the two single objects forming the pair. In two experiments, this approximation in object-selective cortex was significantly less accurate for the regularly than the irregularly positioned pairs, indicating integration of individual object representations. More detailed analysis revealed a transition from independent to integrative coding along the posterior-anterior axis of the visual cortex, with the independent component (but not the integrative component) being almost perfectly predicted by object selectivity across the visual hierarchy. These results reveal a transitional stage between individual object and multi-object coding in visual cortex, providing a possible neural correlate of efficient processing of regularly positioned objects in natural scenes.
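The averaging model at the heart of this analysis is simple enough to sketch: predict the pair-evoked voxel pattern as the mean of the two single-object patterns and score the fit by correlation, with lower fits taken as evidence of integrative coding. The patterns below are synthetic stand-ins, not real fMRI data, and the function name is my own.

```python
import numpy as np

def averaging_model_fit(pair_pattern, single_a, single_b):
    """Correlate the observed multi-voxel pattern for an object pair
    with the average of the two single-object patterns. A poorer fit
    suggests the pair is coded integratively, not as independent parts."""
    predicted = (single_a + single_b) / 2.0
    return np.corrcoef(pair_pattern, predicted)[0, 1]

rng = np.random.default_rng(0)
sofa = rng.normal(size=200)   # hypothetical 200-voxel response patterns
tv = rng.normal(size=200)

# A pair whose response is well described by the average of its parts
# (plus measurement noise), as expected for irregular arrangements.
irregular_pair = (sofa + tv) / 2.0 + rng.normal(scale=0.1, size=200)
fit = averaging_model_fit(irregular_pair, sofa, tv)
```

In the study's logic, regularly arranged pairs would yield a reliably lower `fit` than this near-additive case.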


Author(s):  
Emmanouil Froudarakis ◽  
Uri Cohen ◽  
Maria Diamantaki ◽  
Edgar Y. Walker ◽  
Jacob Reimer ◽  
...  

Despite variations in appearance, we robustly recognize objects. Neuronal populations responding to objects presented under varying conditions form object manifolds, and hierarchically organized visual areas are thought to untangle pixel intensities into linearly decodable object representations. However, the associated changes in the geometry of object manifolds along the cortex remain unknown. Using home-cage training, we showed that mice are capable of invariant object recognition. We simultaneously recorded the responses of thousands of neurons to measure the information about object identity available across the visual cortex and found that lateral visual areas LM, LI and AL carry more linearly decodable object identity information than other visual areas. We applied the theory of linear separability of manifolds and found that the increase in classification capacity is associated with a decrease in the dimension and radius of the object manifolds, identifying features of the population code that enable invariant object coding.
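The core idea of linearly decodable object manifolds can be sketched in a few lines: each object's varying appearances trace out a manifold in neural-population space, and a linear readout trained on some conditions should generalize to held-out conditions when the manifolds are compact (small radius) relative to their separation. This is a toy simulation with made-up population sizes and spreads, not the authors' recordings or their manifold-capacity analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_views = 50, 40

# Two object manifolds: population responses to each object under varying
# viewing conditions cluster around an object-specific mean; the spread
# plays the role of the manifold "radius".
center_a = rng.normal(size=n_neurons)
center_b = rng.normal(size=n_neurons)
radius = 0.3
views_a = center_a + radius * rng.normal(size=(n_views, n_neurons))
views_b = center_b + radius * rng.normal(size=(n_views, n_neurons))

X = np.vstack([views_a, views_b])
y = np.hstack([np.ones(n_views), -np.ones(n_views)])

# Train a linear decoder on half the conditions, then test whether object
# identity generalizes to the held-out conditions (invariant decoding).
train = np.r_[0:20, 40:60]
test = np.r_[20:40, 60:80]
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
accuracy = np.mean(np.sign(X[test] @ w) == y[test])
```

Shrinking `radius` (or the manifold's effective dimension) makes the two manifolds easier to separate linearly, which is the geometric change the abstract associates with higher classification capacity in lateral visual areas.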


2021 ◽  
Author(s):  
Ning Mei ◽  
Roberto Santana ◽  
David Soto

Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified reliably: low trial numbers and signal-detection-theoretic constraints did not allow conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involve prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational-model simulation of visual information processing and representation in the absence of perceptual sensitivity, using feedforward convolutional neural networks trained to perform a visual task similar to that of the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.
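The cross-state generalization claim has a simple decoding analogue: train a content decoder on conscious trials only and test it on unconscious trials; above-chance transfer implies a shared content code across states. The sketch below uses synthetic voxel patterns with an assumed weaker signal gain on unconscious trials; the gain values, trial counts, and nearest-centroid decoder are all illustrative choices, not the study's actual classifier.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox = 100

# Hypothetical voxel patterns: two stimulus contents, each presented in a
# "conscious" and an "unconscious" state; the content code is assumed
# shared across states, only weaker (lower gain) when unconscious.
code_a, code_b = rng.normal(size=n_vox), rng.normal(size=n_vox)

def trials(code, gain, n=30):
    return gain * code + rng.normal(scale=0.8, size=(n, n_vox))

conscious = np.vstack([trials(code_a, 1.0), trials(code_b, 1.0)])
unconscious = np.vstack([trials(code_a, 0.4), trials(code_b, 0.4)])
labels = np.hstack([np.zeros(30), np.ones(30)])

# Nearest-centroid decoder trained on conscious trials only ...
centroids = np.stack([conscious[labels == k].mean(axis=0) for k in (0, 1)])
# ... then tested on unconscious trials (cross-state generalization).
dists = np.linalg.norm(unconscious[:, None, :] - centroids[None], axis=2)
accuracy = np.mean(np.argmin(dists, axis=1) == labels)
```

Transfer accuracy well above the 0.5 chance level in this regime illustrates how a content representation can generalize across processing states even when the unconscious signal is much weaker.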


2020 ◽  
Author(s):  
Zixuan Wang ◽  
Yuki Murai ◽  
David Whitney

Perceiving the positions of objects is a prerequisite for most other visual and visuomotor functions, but human perception of object position varies from one individual to the next. The source of these individual differences in perceived position and their perceptual consequences are unknown. Here, we tested whether idiosyncratic biases in the underlying representation of visual space propagate across different levels of visual processing. In Experiment 1, using a position matching task, we found stable, observer-specific compressions and expansions within local regions throughout the visual field. We then measured Vernier acuity (Experiment 2) and perceived size of objects (Experiment 3) across the visual field and found that individualized spatial distortions were closely associated with variations in both visual acuity and apparent object size. Our results reveal idiosyncratic biases in perceived position and size, originating from a heterogeneous spatial resolution that carries across the visual hierarchy.


Author(s):  
N Seijdel ◽  
N Tsakmakidis ◽  
EHF De Haan ◽  
SM Bohte ◽  
HS Scholte

Feedforward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans, however, suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations (‘routines’) that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs of increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds objects appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicate that with an increase in network depth there is an increase in the distinction between object and background information. For shallower networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, in effect, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.


1978 ◽  
Vol 22 (1) ◽  
pp. 74-77
Author(s):  
Robert Fox

Virtually all the extensive research on inhibitory interactions among adjacent visual stimuli, seen in such phenomena as simultaneous contrast and visual masking, has employed situations in which the interacting stimulus elements occupy the same depth plane, i.e., the z-axis values are the same, in deference to the implicit assumption that processing of depth information occurs only after the visual processing of contour information is completed. But there are theoretical reasons and some data suggesting that the interactions among contours depend critically upon their relative positions in depth—interactions may not occur if the stimulus elements occupy different depth positions. The extent to which the metacontrast form of visual masking is dependent upon depth position was investigated in a series of experiments that used stereoscopic contours formed from random-element stereograms as test and mask stimuli. The random-element stereogram generation system permitted large variations in depth to be made without introducing confounding changes in proximal stimulation. The main results are 1) separation of test and mask stimuli in depth substantially reduces masking, and 2) when more than one stimulus is in visual space, the stimulus that either appears first or appears closer to the observer receives preferential processing by the visual system.


2010 ◽  
Vol 22 (11) ◽  
pp. 2417-2426 ◽  
Author(s):  
Stephanie A. McMains ◽  
Sabine Kastner

Multiple stimuli that are present simultaneously in the visual field compete for neural representation. At the same time, however, multiple stimuli in cluttered scenes also undergo perceptual organization according to certain rules originally defined by the Gestalt psychologists, such as similarity or proximity, thereby segmenting scenes into candidate objects. How can these two seemingly orthogonal neural processes that occur early in the visual processing stream be reconciled? One possibility is that competition occurs among perceptual groups rather than at the level of elements within a group. We probed this idea using fMRI by assessing competitive interactions across visual cortex in displays containing varying degrees of perceptual organization or perceptual grouping (Grp). In strong Grp displays, elements were arranged such that either an illusory figure or a group of collinear elements was present, whereas in weak Grp displays the same elements were arranged randomly. Competitive interactions among stimuli were overcome throughout early visual cortex and V4 when elements were grouped, regardless of Grp type. Our findings suggest that context-dependent grouping mechanisms and competitive interactions are linked to provide a bottom-up bias toward candidate objects in cluttered scenes.


2004 ◽  
Vol 92 (1) ◽  
pp. 622-629 ◽  
Author(s):  
Mark A. Pinsk ◽  
Glen M. Doniger ◽  
Sabine Kastner

Selective attention operates in visual cortex by facilitating processing of selected stimuli and by filtering out unwanted information from nearby distracters over circumscribed regions of visual space. The neural representation of unattended stimuli outside this focus of attention is less well understood. We studied the neural fate of unattended stimuli using functional magnetic resonance imaging by dissociating the activity evoked by attended (target) stimuli presented to the periphery of a visual hemifield and unattended (distracter) stimuli presented simultaneously to a corresponding location of the contralateral hemifield. Subjects covertly directed attention to a series of target stimuli and performed either a low or a high attentional-load search task on a stream of otherwise identical stimuli. With this task, target-search-related activity increased with increasing attentional load, whereas distracter-related activity decreased with increasing load in areas V4 and TEO but not in early areas V1 and V2. This finding presents evidence for a load-dependent push-pull mechanism of selective attention that operates over large portions of the visual field at intermediate processing stages. This mechanism appeared to be controlled by a distributed frontoparietal network of brain areas that reflected processes related to target selection during spatially directed attention.


Author(s):  
Daniel Tomsic ◽  
Julieta Sztarker

Decapod crustaceans, in particular semiterrestrial crabs, are highly visual animals that greatly rely on visual information. Their responsiveness to visual moving stimuli, with behavioral displays that can be easily and reliably elicited in the laboratory, together with their sturdiness for experimental manipulation and the accessibility of their nervous system for intracellular electrophysiological recordings in the intact animal, makes decapod crustaceans excellent experimental subjects for investigating the neurobiology of visually guided behaviors. Investigations of crustaceans have elucidated the general structure of their eyes and some of their specializations, the anatomical organization of the main brain areas involved in visual processing and their retinotopic mapping of visual space, and the morphology, physiology, and stimulus feature preferences of a number of well-identified classes of neurons, with emphasis on motion-sensitive elements. This anatomical and physiological knowledge, in connection with results of behavioral experiments in the laboratory and the field, is revealing the neural circuits and computations involved in important visual behaviors, as well as the substrate and mechanisms underlying visual memories in decapod crustaceans.

