Representation of Shapes, Edges, and Surfaces Across Multiple Cues in the Human Visual Cortex

2008 ◽  
Vol 99 (3) ◽  
pp. 1380-1393 ◽  
Author(s):  
Joakim Vinberg ◽  
Kalanit Grill-Spector

The lateral occipital complex (LOC) responds preferentially to objects compared with random stimuli or textures independent of the visual cue. However, it is unknown whether the LOC (or other cortical regions) is involved in the processing of edges or global surfaces without shape information. Here, we examined processing of 1) global shape, 2) disconnected edges without a global shape, and 3) global surfaces without edges versus random stimuli across motion and stereo cues. The LOC responded more strongly to global shapes than to edges, surfaces, or random stimuli, for both motion and stereo cues. However, its responses to local edges or global surfaces did not differ from its responses to random stimuli. This suggests that the LOC processes shapes, not edges or surfaces. The LOC also responded more strongly to objects than to holes with the same shape, suggesting sensitivity to border ownership. V7 responded more strongly to edges than to surfaces or random stimuli for both motion and stereo cues, whereas V3a and V4 preferred motion edges. Finally, a region in the caudal intraparietal sulcus (cIPS) responded more strongly both to stereo versus motion and to stereo surfaces versus random stereo (but not to motion surfaces vs. random motion). Thus we found evidence for cue-specific responses to surfaces in the cIPS, both cue-specific and cue-independent responses to edges in intermediate visual areas, and shape-selective responses across multiple cues in the LOC. Overall, these data suggest that integration of visual information across multiple cues is mainly achieved at the level of shape and underscore LOC's role in shape computations.

2016 ◽  
Vol 116 (4) ◽  
pp. 1795-1806 ◽  
Author(s):  
K. Sathian

Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level.


1996 ◽  
Vol 75 (4) ◽  
pp. 1673-1686 ◽  
Author(s):  
H. D. Critchley ◽  
E. T. Rolls

1. The primate orbitofrontal cortex is the site of convergence of information from primary taste and primary olfactory cortical regions. In addition, it receives projections from temporal lobe visual areas concerned with the representation of objects such as foods. Previous work has shown that the responses of gustatory neurons in the secondary taste area within the orbitofrontal cortex are modulated by hunger and satiety, in that they stop responding to the taste of a food on which an animal has been fed to behavioral satiation, yet may continue to respond to the taste of other foods. 2. This study demonstrates a similar modulation of the responses of olfactory and visual orbitofrontal cortex neurons after feeding to satiety. Seven of nine olfactory neurons that were responsive to the odors of foods, such as blackcurrant juice, were found to decrease their responses to the odor of the satiating food in a selective and statistically significant manner. 3. It also was found for eight of nine neurons that had selective responses to the sight of food, that they demonstrated a sensory-specific reduction in their visual responses to foods after satiation. 4. The responses of orbitofrontal cortex neurons selective for foods in more than one modality also were analyzed before and after feeding to satiation. Satiety often affected the responses of these multimodal neurons across all modalities, but a sensory-specific effect was not always demonstrable for both modalities. 5. These findings show that the olfactory and visual representations of food, as well as the taste representation of food, in the primate orbitofrontal cortex are modulated by hunger. Usually a component related to sensory-specific satiety can be demonstrated. The findings link at least part of the processing of olfactory and visual information in this brain region to the control of feeding-related behavior.


2017 ◽  
Author(s):  
Thomas C. Sprague ◽  
Sirawaj Itthipuripat ◽  
Vy A. Vo ◽  
John T. Serences

Computational models posit that visual attention is guided by activity within spatial maps that index the image-computable salience and the behavioral relevance of objects in the scene. However, the simultaneous influence of these factors on putative neural ‘attentional priority maps’ in human cortex is not well understood. We tested the hypothesis that visual salience and behavioral relevance independently impact the activation profile across retinotopically-organized cortical regions by quantifying attentional priority maps measured in human brains using functional MRI while participants attended one of two differentially-salient stimuli. We find that the topography of activation in priority maps, as reflected in the modulation of region-level patterns of population activity, independently indexed the physical salience and behavioral relevance of each scene element. Moreover, salience strongly impacted activation patterns in early visual areas, whereas later visual areas were dominated by relevance. This suggests that prioritizing spatial locations relies on distributed neural codes containing graded representations of salience and relevance across the visual hierarchy.

Significance Statement: Often, it is necessary to orient towards bright, unique, or sudden events in the environment – that is, salient stimuli. However, we can focus processing resources on less salient visual information if it is relevant to the task at hand. We tested a theory which supposes that we represent different scene elements according to both their salience and their relevance in a series of ‘priority maps’ by measuring fMRI activation patterns across the human brain and reconstructing spatial maps of the visual scene under different task conditions. We found that different regions indexed either the salience or the relevance of scene items, but not their interaction, suggesting an evolving representation of salience and relevance across different visual areas.


Perception ◽  
2018 ◽  
Vol 47 (6) ◽  
pp. 660-683 ◽  
Author(s):  
Kimberly Meier ◽  
Marita Partanen ◽  
Deborah Giaschi

It has been suggested that slow and medium-to-fast speeds of motion may be processed by at least partially separate mechanisms. The purpose of this study was to establish the cortical areas activated during motion-defined form and global motion tasks as a function of speed, using functional magnetic resonance imaging. Participants performed discrimination tasks with random dot stimuli at high coherence, at coherence near their own thresholds, and for random motion. Stimuli were moving at 0.1 or 5 deg/s. In the motion-defined form task, lateral occipital complex, V5/MT+ and intraparietal sulcus showed greater activation by high or near-threshold coherence than by random motion stimuli; V5/MT+ and intraparietal sulcus demonstrated greater activation for 5 than 0.1 deg/s dot motion. In the global motion task, only high coherence stimuli elicited significant activation over random motion; this activation was primarily in nonclassical motion areas. V5/MT+ was active for all motion conditions and showed similar activation for coherent and random motion. No regions demonstrated speed-tuning effects for global motion. These results suggest that similar cortical systems are activated by slow- and medium-speed stimuli during these tasks in healthy adults.


2019 ◽  
Vol 31 (6) ◽  
pp. 821-836 ◽  
Author(s):  
Elliot Collins ◽  
Erez Freud ◽  
Jana M. Kainerstorfer ◽  
Jiaming Cao ◽  
Marlene Behrmann

Although shape perception is primarily considered a function of the ventral visual pathway, previous research has shown that both dorsal and ventral pathways represent shape information. Here, we examine whether the shape-selective electrophysiological signals observed in dorsal cortex are a product of the connectivity to ventral cortex or are independently computed. We conducted multiple EEG studies in which we manipulated the input parameters of the stimuli so as to bias processing to either the dorsal or ventral visual pathway. Participants viewed displays of common objects with shape information parametrically degraded across five levels. We measured shape sensitivity by regressing the amplitude of the evoked signal against the degree of stimulus scrambling. Experiment 1, which included grayscale versions of the stimuli, served as a benchmark establishing the temporal pattern of shape processing during typical object perception. These stimuli evoked broad and sustained patterns of shape sensitivity beginning as early as 50 msec after stimulus onset. In Experiments 2 and 3, we calibrated the stimuli such that visual information was delivered primarily through parvocellular inputs, which mainly project to the ventral pathway, or through koniocellular inputs, which mainly project to the dorsal pathway. In the second and third experiments, shape sensitivity was observed, but in distinct spatio-temporal configurations from each other and from that elicited by grayscale inputs. Of particular interest, in the koniocellular condition, shape selectivity emerged earlier than in the parvocellular condition. These findings support the conclusion of distinct dorsal pathway computations of object shape, independent from the ventral pathway.
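The shape-sensitivity measure described in this abstract — regressing the amplitude of the evoked signal against the degree of stimulus scrambling — can be sketched as follows. This is an illustrative sketch only: the scrambling levels, amplitude values, and function name are hypothetical, not taken from the study.

```python
import numpy as np

def shape_sensitivity(scrambling_levels, evoked_amplitudes):
    """Slope of evoked amplitude regressed on stimulus scrambling.

    A more negative slope means the evoked signal falls off faster as
    shape information is degraded, i.e. stronger shape sensitivity.
    """
    slope, _intercept = np.polyfit(scrambling_levels, evoked_amplitudes, 1)
    return slope

# Hypothetical data: five scrambling levels (0 = intact object, 1 = fully
# scrambled), with amplitude decreasing as shape is degraded.
levels = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
amps = np.array([5.0, 4.1, 3.2, 2.4, 1.3])
print(shape_sensitivity(levels, amps))  # negative slope -> shape-sensitive
```

A region insensitive to shape would yield a slope near zero under this scheme, since its response would not vary with the level of scrambling.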


2010 ◽  
Vol 104 (4) ◽  
pp. 2075-2081 ◽  
Author(s):  
Lars Strother ◽  
Adrian Aldcroft ◽  
Cheryl Lavell ◽  
Tutis Vilis

Functional MRI (fMRI) studies of the human object recognition system commonly identify object-selective cortical regions by comparing blood oxygen level–dependent (BOLD) responses to objects versus those to scrambled objects. Object selectivity distinguishes human lateral occipital cortex (LO) from earlier visual areas. Recent studies suggest that, in addition to being object selective, LO is retinotopically organized; LO represents both object and location information. Although LO responses to objects have been shown to depend on location, it is not known whether responses to scrambled objects vary similarly. This is important because it would suggest that the degree of object selectivity in LO does not vary with retinal stimulus position. We used a conventional functional localizer to identify human visual area LO by comparing BOLD responses to objects versus scrambled objects presented to either the upper (UVF) or lower (LVF) visual field. In agreement with recent findings, we found evidence of position-dependent responses to objects. However, we observed the same degree of position dependence for scrambled objects and thus object selectivity did not differ for UVF and LVF stimuli. We conclude that, in terms of BOLD response, LO discriminates objects from non-objects equally well in either visual field location, despite stronger responses to objects in the LVF.


Author(s):  
Jessica A.F. Thompson ◽  
Yoshua Bengio ◽  
Elia Formisano ◽  
Marc Schönwiesner

The correspondence between the activity of artificial neurons in convolutional neural networks (CNNs) trained to recognize objects in images and neural activity collected throughout the primate visual system has been well documented. Shallower layers of CNNs are typically more similar to early visual areas and deeper layers tend to be more similar to later visual areas, providing evidence for a shared representational hierarchy. This phenomenon has not been thoroughly studied in the auditory domain. Here, we compared the representations of CNNs trained to recognize speech (triphone recognition) to 7-Tesla fMRI activity collected throughout the human auditory pathway, including subcortical and cortical regions, while participants listened to speech. We found no evidence for a shared representational hierarchy of acoustic speech features. Instead, all auditory regions of interest were most similar to a single layer of the CNNs: the first fully-connected layer. This layer sits at the boundary between the relatively task-general intermediate layers and the highly task-specific final layers. This suggests that alternative architectural designs and/or training objectives may be needed to achieve fine-grained layer-wise correspondence with the human auditory pathway.

Highlights:
- Trained CNNs are more similar to auditory fMRI activity than untrained CNNs
- No evidence of a shared representational hierarchy for acoustic features
- All ROIs were most similar to the first fully-connected layer
- CNN performance on the speech recognition task is positively associated with fMRI similarity
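The layer-wise comparison this abstract describes — finding which CNN layer's representation best matches each brain region — is commonly done with representational similarity analysis. Below is a minimal sketch of that idea, assuming activation matrices of shape (stimuli × units); the data, shapes, and function names are hypothetical, and a published analysis would typically use a rank correlation rather than the plain Pearson correlation used here for brevity.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns (rows) for each pair of stimuli."""
    return 1.0 - np.corrcoef(activations)

def best_matching_layer(layer_activations, roi_activations):
    """Index of the CNN layer whose RDM best matches the ROI's RDM.

    Similarity is the Pearson correlation between the upper triangles
    of the two RDMs (off-diagonal stimulus pairs only).
    """
    roi_rdm = rdm(roi_activations)
    iu = np.triu_indices_from(roi_rdm, k=1)
    scores = [np.corrcoef(rdm(acts)[iu], roi_rdm[iu])[0, 1]
              for acts in layer_activations]
    return int(np.argmax(scores)), scores

# Hypothetical demo: three random "layers" of 6 stimuli x 20 units; the
# ROI is built to resemble layer 1, so layer 1 should win.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((6, 20)) for _ in range(3)]
roi = layers[1] + 0.01 * rng.standard_normal((6, 20))
idx, scores = best_matching_layer(layers, roi)
print(idx)  # index of the best-matching layer
```

Under this scheme, the study's finding corresponds to every auditory ROI returning the same index: the first fully-connected layer.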


2019 ◽  
Vol 30 (3) ◽  
pp. 875-887
Author(s):  
Kai Hwang ◽  
James M Shine ◽  
Dillan Cellier ◽  
Mark D’Esposito

Past studies have demonstrated that flexible interactions between brain regions support a wide range of goal-directed behaviors. However, the neural mechanisms that underlie adaptive communication between brain regions are not well understood. In this study, we combined theta-burst transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging to investigate the sources of top-down biasing signals that influence task-evoked functional connectivity. Subjects viewed sequences of images of faces and buildings and were required to detect repetitions (2-back vs. 1-back) of the attended stimulus category (faces or buildings). We found that functional connectivity between ventral temporal cortex and the primary visual cortex (VC) increased during processing of task-relevant stimuli, especially during higher memory loads. Furthermore, the strength of functional connectivity was greater for correct trials. Increases in task-evoked functional connectivity strength were correlated with increases in activity in multiple frontal, parietal, and subcortical (caudate and thalamus) regions. Finally, we found that TMS to superior intraparietal sulcus (IPS), but not to primary somatosensory cortex, decreased task-specific modulation in connectivity patterns between the primary VC and the parahippocampal place area. These findings demonstrate that the human IPS is a source of top-down biasing signals that modulate task-evoked functional connectivity among task-relevant cortical regions.
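The core quantity in this abstract — task-evoked modulation of functional connectivity between two regions — can be sketched as a condition-wise correlation of ROI time series. This is a simplified illustration with synthetic data; the function names are hypothetical, and real fMRI analyses add steps (deconvolution, nuisance regression) omitted here.

```python
import numpy as np

def condition_connectivity(ts_a, ts_b, condition_mask):
    """Pearson correlation between two ROI time series, restricted to
    the time points belonging to one task condition."""
    return np.corrcoef(ts_a[condition_mask], ts_b[condition_mask])[0, 1]

def connectivity_modulation(ts_a, ts_b, task_mask):
    """Task-evoked modulation: connectivity during task-relevant time
    points minus connectivity during the remaining time points."""
    return (condition_connectivity(ts_a, ts_b, task_mask)
            - condition_connectivity(ts_a, ts_b, ~task_mask))

# Synthetic demo: two regions share a common signal only during the
# task-relevant half of the run, so modulation should be positive.
rng = np.random.default_rng(1)
task = np.zeros(200, dtype=bool)
task[:100] = True
shared = rng.standard_normal(200)
a = rng.standard_normal(200)
b = rng.standard_normal(200)
a[task] += shared[task]
b[task] += shared[task]
print(connectivity_modulation(a, b, task))
```

In the study's terms, TMS to IPS reducing this modulation (for VC and the parahippocampal place area) is what identifies IPS as a source of the top-down biasing signal.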


2021 ◽  
pp. 191-235
Author(s):  
Richard E. Passingham

The dorsal prefrontal (PF) cortex generates and plans the goals or targets for foveal search and manual foraging. The goals are conditional on the relative recency of prior events and actions, and the connections of areas 9/46 and 46 explain how these areas can support the ability to generate the next goal. Area 9/46 can generate sequences of eye movements because it has visuospatial inputs from the cortex in the intraparietal sulcus and outputs to the frontal eye field and superior colliculus. Area 46 can generate sequences of hand and arm movements because it has inputs from the inferior parietal areas PFG and SII and outputs to the forelimb regions of the premotor areas and thence to the motor cortex. Both areas get timing and order information indirectly from the parietal cortex and hippocampus, and colour and shape information from the ventral prefrontal cortex. Inputs from the orbital prefrontal cortex enable both areas to generate goals in accordance with current needs.


2002 ◽  
Vol 87 (6) ◽  
pp. 3102-3116 ◽  
Author(s):  
Galia Avidan ◽  
Michal Harel ◽  
Talma Hendler ◽  
Dafna Ben-Bashat ◽  
Ehud Zohary ◽  
...  

An important characteristic of visual perception is that object recognition is largely immune to changes in viewing conditions. This invariance is obtained within a sequence of ventral stream visual areas beginning in area V1 and ending in high order occipito-temporal object areas (the lateral occipital complex, LOC). Here we studied whether this transformation could be observed in the contrast response of these areas. Subjects were presented with line drawings of common objects and faces in five different contrast levels (0, 4, 6, 10, and 100%). Our results show that indeed there was a gradual trend of increasing contrast invariance moving from area V1, which manifested high sensitivity to contrast changes, to the LOC, which showed a significantly higher degree of invariance at suprathreshold contrasts (from 10 to 100%). The trend toward increased invariance could be observed for both face and object images; however, it was more complete for the face images, while object images still manifested substantial sensitivity to contrast changes. Control experiments ruled out the involvement of attention effects or hemodynamic “ceiling” in producing the contrast invariance. The transition from V1 to LOC was gradual, with areas along the ventral stream becoming increasingly contrast-invariant. These results further stress the hierarchical and gradual nature of the transition from early retinotopic areas to high order ones in the build-up of abstract object representations.
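The V1-to-LOC gradient this abstract describes can be illustrated with a simple summary statistic over the contrast response function. The index below is a hypothetical construction for illustration — it is not the paper's measure — and the response values are made-up examples of a contrast-dependent (V1-like) versus a saturating (LOC-like) area.

```python
import numpy as np

def contrast_invariance_index(contrasts, responses, threshold=10.0):
    """1 - (response range / max response) over suprathreshold contrasts.

    An index near 1 means the area responds almost equally across
    suprathreshold contrasts (invariant, LOC-like); an index near 0
    means the response is strongly contrast-dependent (V1-like).
    """
    supra = np.asarray(contrasts, float) >= threshold
    r = np.asarray(responses, float)[supra]
    return 1.0 - (r.max() - r.min()) / r.max()

# Hypothetical normalized responses at the study's contrast levels.
contrasts = [0, 4, 6, 10, 100]
v1_like = [0.0, 0.2, 0.4, 0.6, 1.0]   # keeps rising with contrast
loc_like = [0.0, 0.5, 0.8, 0.9, 1.0]  # nearly saturated above threshold

print(contrast_invariance_index(contrasts, v1_like))   # lower: contrast-dependent
print(contrast_invariance_index(contrasts, loc_like))  # higher: contrast-invariant
```

Computed area by area along the ventral stream, such an index would increase gradually from V1 to LOC under the pattern the study reports.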

