Sensitivity of human visual cortical area V6 to stereoscopic depth gradients associated with self-motion

2011 ◽  
Vol 106 (3) ◽  
pp. 1240-1249 ◽  
Author(s):  
Velia Cardin ◽  
Andrew T. Smith

The principal visual cue to self-motion (egomotion) is optic flow, which is specified in terms of local 2D velocities in the retinal image without reference to depth cues. However, in general, points near the center of expansion of natural flow fields are distant, whereas those in the periphery are closer, creating gradients of horizontal binocular disparity. To assess whether the brain combines disparity gradients with optic flow when encoding egomotion, stereoscopic gradients were applied to expanding dot patterns presented to observers during functional MRI scanning. The gradients were radially symmetrical, disparity changing as a function of eccentricity. The depth cues were either consistent with egomotion (peripheral dots perceived as near and central dots perceived as far) or inconsistent (the reverse gradient, central dots near, peripheral dots far). The BOLD activity generated by these stimuli was compared in a range of predefined visual regions in 13 participants with good stereoacuity. Visual area V6, in the parieto-occipital sulcus, showed a unique pattern of results, responding well to all optic flow patterns but much more strongly when they were paired with consistent rather than inconsistent or zero-disparity gradients. Of the other areas examined, a region of the precuneus and parietoinsular vestibular cortex also differentiated between consistent and inconsistent gradients, but with weak or suppressive responses. V3A, V7, MT, and ventral intraparietal area responded more strongly in the presence of a depth gradient but were indifferent to its depth-flow congruence. The results suggest that depth and flow cues are integrated in V6 to improve estimation of egomotion.

2017 ◽  
Vol 30 (7-8) ◽  
pp. 739-761 ◽  
Author(s):  
Ramy Kirollos ◽  
Robert S. Allison ◽  
Stephen Palmisano

Behavioural studies have consistently found stronger vection responses for oscillating, compared to smooth/constant, patterns of radial flow (the simulated viewpoint oscillation advantage for vection). Traditional accounts predict that simulated viewpoint oscillation should impair vection by increasing visual–vestibular conflicts in stationary observers (as this visual oscillation simulates self-accelerations that should strongly stimulate the vestibular apparatus). However, support for increased vestibular activity during accelerating vection has been mixed in the brain imaging literature. This fMRI study examined BOLD activity in visual (cingulate sulcus visual area — CSv; medial temporal complex — MT+; V6; precuneus motion area — PcM) and vestibular regions (parieto-insular vestibular cortex — PIVC/posterior insular cortex — PIC; ventral intraparietal region — VIP) when stationary observers were exposed to vection-inducing optic flow (i.e., globally coherent oscillating and smooth self-motion displays) as well as two suitable control displays. In line with earlier studies in which no vection occurred, CSv and PIVC/PIC both showed significantly increased BOLD activity during oscillating global motion compared to the other motion conditions (although this effect was found for fewer subjects in PIVC/PIC). The increase in BOLD activity in PIVC/PIC during prolonged exposure to the oscillating (compared to smooth) patterns of global optical flow appears consistent with vestibular facilitation.


Author(s):  
Caroline A. Miller ◽  
Laura L. Bruce

The first visual cortical axons arrive in the cat superior colliculus by the time of birth. Adultlike receptive fields develop slowly over several weeks following birth. The developing cortical axons go through a sequence of changes before acquiring their adultlike morphology and function. To determine how these axons interact with neurons in the colliculus, cortico-collicular axons were labeled with biocytin (an anterograde neuronal tracer) and studied with electron microscopy.

Deeply anesthetized animals received 200-500 nl injections of biocytin (Sigma; 5% in phosphate buffer) in the lateral suprasylvian visual cortical area. After a 24 hr survival time, the animals were deeply anesthetized and perfused with 0.9% phosphate buffered saline followed by fixation with a solution of 1.25% glutaraldehyde and 1.0% paraformaldehyde in 0.1 M phosphate buffer. The brain was sectioned transversely on a vibratome at 50 μm. The tissue was processed immediately to visualize the biocytin.


2010 ◽  
Vol 103 (4) ◽  
pp. 1865-1873 ◽  
Author(s):  
Tao Zhang ◽  
Kenneth H. Britten

The ventral intraparietal area (VIP) of the macaque monkey is thought to be involved in judging heading direction based on optic flow. We recorded neuronal discharges in VIP while monkeys were performing a two-alternative, forced-choice heading discrimination task to relate quantitatively the activity of VIP neurons to monkeys' perceptual choices. Most VIP neurons were responsive to simulated heading stimuli and were tuned such that their responses changed across a range of forward trajectories. Using receiver operating characteristic (ROC) analysis, we found that most VIP neurons were less sensitive to small heading changes than was the monkey, although a minority of neurons were equally sensitive. Pursuit eye movements modestly yet significantly increased both neuronal and behavioral thresholds by approximately the same amount. Our results support the view that VIP activity is involved in self-motion judgments.
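The ROC analysis used to compare neuronal and behavioral sensitivity can be illustrated with a minimal sketch (not the authors' code; the spike counts below are made-up example data): the ROC area is the probability that a spike count drawn from one heading condition exceeds a count drawn from the other.

```python
# Illustrative sketch of a neurometric ROC computation on hypothetical data.
import numpy as np

def roc_auc(counts_a, counts_b):
    """Area under the ROC curve: probability that a random draw from
    counts_b exceeds one from counts_a (ties count half).
    Equivalent to the normalized Mann-Whitney U statistic."""
    a = np.asarray(counts_a, dtype=float)
    b = np.asarray(counts_b, dtype=float)
    greater = (b[:, None] > a[None, :]).sum()
    ties = (b[:, None] == a[None, :]).sum()
    return (greater + 0.5 * ties) / (len(a) * len(b))

# Hypothetical trials: spike counts for headings left vs. right of the
# neuron's preferred boundary.
left = [12, 15, 9, 14, 18, 13]
right = [18, 21, 16, 20, 17, 19]
auc = roc_auc(left, right)
print(f"ROC area = {auc:.2f}")  # 0.5 = chance, 1.0 = perfect discrimination
```

Sweeping the heading difference and finding where the ROC area crosses a criterion (e.g., 0.75) yields the neurometric threshold that is then compared with the monkey's psychometric threshold.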


2018 ◽  
Vol 119 (3) ◽  
pp. 1113-1126 ◽  
Author(s):  
Mengmeng Shao ◽  
Gregory C. DeAngelis ◽  
Dora E. Angelaki ◽  
Aihua Chen

The ventral intraparietal area (VIP) of the macaque brain is a multimodal cortical region, with many cells tuned to both optic flow and vestibular stimuli. Responses of many VIP neurons also show robust correlations with perceptual judgments during a fine heading discrimination task. Previous studies have shown that heading tuning based on optic flow is represented in a clustered fashion in VIP. However, it is unknown whether vestibular self-motion selectivity is clustered in VIP. Moreover, it is not known whether stimulus- and choice-related signals in VIP show clustering in the context of a heading discrimination task. To address these issues, we compared the response characteristics of isolated single units (SUs) with those of the undifferentiated multiunit (MU) activity corresponding to several neighboring neurons recorded from the same microelectrode. We find that MU activity typically shows selectivity similar to that of simultaneously recorded SUs, for both the vestibular and visual stimulus conditions. In addition, the choice-related activity of MU signals, as quantified using choice probabilities, is correlated with the choice-related activity of SUs. Overall, these findings suggest that both sensory and choice-related signals regarding self-motion are clustered in VIP.

NEW & NOTEWORTHY We demonstrate, for the first time, that the vestibular tuning of ventral intraparietal area (VIP) neurons in response to both translational and rotational motion is clustered. In addition, heading discriminability and choice-related activity are also weakly clustered in VIP.
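The SU-MU comparison at the heart of this clustering argument can be sketched as a correlation between tuning curves recorded on the same electrode (a minimal illustration with hypothetical tuning curves, not the study's pipeline):

```python
# Sketch: quantify clustering by correlating a single unit's (SU) heading
# tuning with the multiunit (MU) tuning from the same electrode.
import numpy as np

headings = np.linspace(-90, 90, 9)            # heading directions (deg)
su_tuning = np.cos(np.deg2rad(headings))       # hypothetical SU tuning curve

# Hypothetical MU tuning: a scaled copy of the SU tuning plus a small,
# fixed perturbation standing in for contributions of other nearby cells.
jitter = np.array([0.05, -0.04, 0.03, -0.02, 0.0, 0.02, -0.03, 0.04, -0.05])
mu_tuning = 0.8 * su_tuning + jitter

r = np.corrcoef(su_tuning, mu_tuning)[0, 1]
print(f"SU-MU tuning correlation: r = {r:.2f}")
# Consistently high r across electrodes indicates clustered selectivity;
# r near zero would indicate a salt-and-pepper organization.
```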


2019 ◽  
Author(s):  
Jorrit S Montijn ◽  
Rex G Liu ◽  
Amir Aschner ◽  
Adam Kohn ◽  
Peter E Latham ◽  
...  

If the brain processes incoming data efficiently, information should degrade little between early and later neural processing stages, and so information in early stages should match behavioral performance. For instance, if there is enough information in a visual cortical area to determine the orientation of a grating to within 1 degree, and the code is simple enough to be read out by downstream circuits, then animals should be able to achieve that performance behaviourally. Despite over 30 years of research, it is still not known how efficient the brain is. For tasks involving a large number of neurons, the amount of information encoded by neural circuits is limited by differential correlations. Therefore, determining how much information is encoded requires quantifying the strength of differential correlations. Detecting them, however, is difficult. We report here a new method, which requires on the order of hundreds of neurons and trials. This method relies on computing the alignment of the neural stimulus encoding direction, f′, with the eigenvectors of the noise covariance matrix, Σ. In the presence of strong differential correlations, f′ must be spanned by a small number of the eigenvectors with largest eigenvalues. Using simulations with a leaky-integrate-and-fire neuron model of the LGN-V1 circuit, we confirmed that this method can indeed detect differential correlations consistent with those that would limit orientation discrimination thresholds to 0.5-3 degrees. We applied this technique to V1 recordings in awake monkeys and found signatures of differential correlations, consistent with a discrimination threshold of 0.47-1.20 degrees, which is not far from typical discrimination thresholds (1-2 deg). These results suggest that, at least in macaque monkeys, V1 contains about as much information as is seen in behaviour, implying that downstream circuits are efficient at extracting the information available in V1.
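The alignment computation described above can be sketched in a few lines (synthetic data, not the authors' code): project f′ onto the eigenvectors of Σ, ordered by eigenvalue, and check how much of f′ falls within the top few. Differential correlations of strength ε are simulated here by adding ε·f′f′ᵀ to a generic covariance.

```python
# Sketch: detecting differential correlations via the alignment of the
# signal direction f' with the top eigenvectors of the noise covariance.
import numpy as np

rng = np.random.default_rng(1)
n = 50                                      # number of neurons
fp = rng.standard_normal(n)                 # hypothetical f' (signal direction)
noise = rng.standard_normal((n, n))
sigma = noise @ noise.T / n                 # generic (non-limiting) covariance
eps = 0.5
sigma_dc = sigma + eps * np.outer(fp, fp)   # add differential correlations

def alignment(fprime, cov):
    """Fraction of |f'|^2 captured by each covariance eigenvector,
    sorted from largest to smallest eigenvalue."""
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    proj = (evecs[:, order].T @ fprime) ** 2
    return proj / proj.sum()

frac_top5 = alignment(fp, sigma_dc)[:5].sum()
print(f"f' weight in top 5 eigenvectors: {frac_top5:.2f}")
# With differential correlations present, this fraction is close to 1;
# without them, f' is spread across many eigenvectors.
```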


2012 ◽  
Vol 108 (3) ◽  
pp. 794-801 ◽  
Author(s):  
Velia Cardin ◽  
Lara Hemsworth ◽  
Andrew T. Smith

The extraction of optic flow cues is fundamental for successful locomotion. During forward motion, the focus of expansion (FoE), in conjunction with knowledge of eye position, indicates the direction in which the individual is heading. Therefore, it is expected that cortical brain regions that are involved in the estimation of heading will be sensitive to this feature. To characterize cortical sensitivity to the location of the FoE or, more generally, the center of flow (CoF) during visually simulated self-motion, we carried out a functional MRI (fMRI) adaptation experiment in several human visual cortical areas that are thought to be sensitive to optic flow parameters, namely, V3A, V6, MT/V5, and MST. In each trial, two optic flow patterns were sequentially presented, with the CoF located in either the same or different positions. With an adaptation design, an area sensitive to heading direction should respond more strongly to a pair of stimuli with different CoFs than to stimuli with the same CoF. Our results show such release from adaptation in areas MT/V5 and MST, and to a lesser extent V3A, suggesting the involvement of these areas in the processing of heading direction. The effect could not be explained either by differences in local motion or by attention capture. It was not observed to a significant extent in area V6 or in control area V1. The different patterns of responses observed in MST and V6, areas that are both involved in the processing of egomotion in macaques and humans, suggest distinct roles in the processing of visual cues for self-motion.
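The fMRI adaptation logic above — stronger responses to stimulus pairs with different centers of flow than to pairs with the same CoF — is often summarized as a release-from-adaptation index per region of interest. A minimal sketch, using made-up BOLD estimates rather than the study's data:

```python
# Sketch: release-from-adaptation index for a CoF-position adaptation design.
def adaptation_index(r_diff, r_same):
    """Positive values indicate stronger responses when the paired stimuli
    have different centers of flow, i.e. sensitivity to CoF position."""
    return (r_diff - r_same) / (r_diff + r_same)

# Hypothetical mean BOLD responses (arbitrary units) per visual area.
rois = {"MT/V5": (1.30, 1.00), "MST": (1.25, 1.00),
        "V3A": (1.10, 1.00), "V6": (1.02, 1.00), "V1": (1.00, 1.00)}
for name, (r_diff, r_same) in rois.items():
    print(f"{name}: AI = {adaptation_index(r_diff, r_same):+.3f}")
```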


2020 ◽  
Vol 33 (6) ◽  
pp. 625-644 ◽  
Author(s):  
Maria Gallagher ◽  
Reno Choi ◽  
Elisa Raffaella Ferrè

During exposure to Virtual Reality (VR) a sensory conflict may be present, whereby the visual system signals that the user is moving in a certain direction with a certain acceleration, while the vestibular system signals that the user is stationary. In order to reduce this conflict, the brain may down-weight vestibular signals, which may in turn affect vestibular contributions to self-motion perception. Here we investigated whether vestibular perceptual sensitivity is affected by VR exposure. Participants’ ability to detect artificial vestibular inputs was measured during optic flow or random motion stimuli on a VR head-mounted display. Sensitivity to vestibular signals was significantly reduced when optic flow stimuli were presented, but importantly this was only the case when both visual and vestibular cues conveyed information on the same plane of self-motion. Our results suggest that the brain dynamically adjusts the weight given to incoming sensory cues for self-motion in VR; however, this is dependent on the congruency of visual and vestibular cues.


2010 ◽  
Vol 30 (8) ◽  
pp. 3022-3042 ◽  
Author(s):  
A. Chen ◽  
G. C. DeAngelis ◽  
D. E. Angelaki

2019 ◽  
Author(s):  
Paria Mehrani ◽  
Andrei Mouraviev ◽  
John K. Tsotsos

There is still much to understand about the color processing mechanisms in the brain and the transformation from cone-opponent representations to perceptual hues. Moreover, it is unclear which area(s) in the brain represent unique hues. We propose a hierarchical model inspired by the neuronal mechanisms in the brain for local hue representation, which reveals the contributions of each visual cortical area to hue representation. Local hue encoding is achieved through incrementally increasing processing nonlinearities beginning with cone input. Besides employing nonlinear rectifications, we propose multiplicative modulations as a form of nonlinearity. Our simulation results indicate that multiplicative modulations contribute significantly to the encoding of hues along intermediate directions in the MacLeod-Boynton diagram and that model V4 neurons have the capacity to encode unique hues. Additionally, responses of our model neurons resemble those of biological color cells, suggesting that our model provides a novel formulation of the brain’s color processing pathway.
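The role of multiplicative modulation in encoding intermediate hue directions can be illustrated with a toy unit (hypothetical weights and a simplified two-axis opponent space, not the published model): rectified cone-opponent signals along the L−M and S axes are multiplied, so the unit responds only when both channels are active with the preferred sign, yielding tuning to an intermediate direction that neither channel alone provides.

```python
# Sketch: a hue-tuned unit built from rectified cone-opponent inputs
# combined multiplicatively (the nonlinearity proposed above).
import numpy as np

def rectify(x):
    return np.maximum(x, 0.0)

def hue_unit(lm, s, w_lm, w_s):
    """Response of a unit tuned to an intermediate hue direction:
    rectified opponent inputs gated by multiplicative modulation."""
    return rectify(w_lm * lm) * rectify(w_s * s)

# Stimulus hues sampled around a MacLeod-Boynton-style opponent plane.
angles = np.deg2rad(np.arange(0, 360, 45))
lm, s = np.cos(angles), np.sin(angles)

# With equal positive weights the unit prefers the 45-degree intermediate
# direction, where both opponent channels are simultaneously active.
resp = hue_unit(lm, s, w_lm=1.0, w_s=1.0)
best = np.rad2deg(angles[np.argmax(resp)])
print(f"Preferred hue angle: {best:.0f} deg")
```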

