Eye-centered visual receptive fields in the ventral intraparietal area

2014, Vol 112 (2), pp. 353-361
Author(s): Xiaodong Chen, Gregory C. DeAngelis, Dora E. Angelaki

The ventral intraparietal area (VIP) processes multisensory visual, vestibular, tactile, and auditory signals in diverse reference frames. We recently reported that visual heading signals in VIP are represented in an approximately eye-centered reference frame when measured using large-field optic flow stimuli. No VIP neuron was found to have head-centered visual heading tuning, and only a small proportion of cells had reference frames that were intermediate between eye- and head-centered. In contrast, previous studies using moving bar stimuli have reported that visual receptive fields (RFs) in VIP are head-centered for a substantial proportion of neurons. To examine whether these differences in previous findings might be due to the neuronal property examined (heading tuning vs. RF measurements) or the type of visual stimulus used (full-field optic flow vs. a single moving bar), we have quantitatively mapped visual RFs of VIP neurons using a large-field, multipatch, random-dot motion stimulus. By varying eye position relative to the head, we tested whether visual RFs in VIP are represented in head- or eye-centered reference frames. We found that the vast majority of VIP neurons have eye-centered RFs with only a single neuron classified as head-centered and a small minority classified as intermediate between eye- and head-centered. Our findings suggest that the spatial reference frames of visual responses in VIP may depend on the visual stimulation conditions used to measure RFs and might also be influenced by how attention is allocated during stimulus presentation.
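To make the reference-frame logic concrete, here is a minimal, hypothetical sketch (not the authors' analysis code) of how a receptive-field shift can be compared with a change in eye position: a displacement index near 1 indicates an eye-centered field, near 0 a head-centered field, and intermediate values fall in between. Function names and numbers are illustrative only.

```python
import numpy as np

def rf_center(azimuths, responses):
    """Response-weighted centroid of a 1-D tuning profile (degrees)."""
    w = np.clip(responses - responses.min(), 0, None)
    return np.sum(azimuths * w) / np.sum(w)

def displacement_index(az, resp_eye_left, resp_eye_right, delta_eye_deg):
    """DI ~ 1: RF moved with the eyes (eye-centered); DI ~ 0: RF fixed in
    head coordinates (head-centered); intermediate values in between."""
    shift = rf_center(az, resp_eye_right) - rf_center(az, resp_eye_left)
    return shift / delta_eye_deg

# Toy example: a Gaussian RF that follows a 20 deg change in eye position.
az = np.linspace(-60, 60, 121)                       # screen azimuth (deg)
rf = lambda center: np.exp(-0.5 * ((az - center) / 10) ** 2)
di = displacement_index(az, rf(-10), rf(+10), delta_eye_deg=20.0)
print(f"displacement index = {di:.2f}")              # ~1.0 -> eye-centered
```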

1991, Vol 66 (2), pp. 485-496
Author(s): D. L. Robinson, J. W. McClurkin, C. Kertzman, S. E. Petersen

1. We recorded from single neurons in awake, trained rhesus monkeys in a lighted environment and compared responses to stimulus movement during periods of fixation with those to motion caused by saccadic or pursuit eye movements. Neurons in the inferior pulvinar (PI), lateral pulvinar (PL), and superior colliculus were tested. 2. Cells in PI and PL respond to stimulus movement over a wide range of speeds. Some of these cells do not respond to comparable stimulus motion, or discharge only weakly, when it is generated by saccadic or pursuit eye movements. Other neurons respond equivalently to both types of motion. Cells in the superficial layers of the superior colliculus have similar properties to those in PI and PL. 3. When tested in the dark to reduce visual stimulation from the background, cells in PI and PL still do not respond to motion generated by eye movements. Some of these cells have a suppression of activity after saccadic eye movements made in total darkness. These data suggest that an extraretinal signal suppresses responses to visual stimuli during eye movements. 4. The suppression of responses to stimuli during eye movements is not an absolute effect. Images brighter than 2.0 log units above background illumination evoke responses from cells in PI and PL. The suppression appears stronger in the superior colliculus than in PI and PL. 5. These experiments demonstrate that many cells in PI and PL have a suppression of their responses to stimuli that cross their receptive fields during eye movements. These cells are probably suppressed by an extraretinal signal. Comparable effects are present in the superficial layers of the superior colliculus. These properties in PI and PL may reflect the function of the ascending tectopulvinar system.


2021
Author(s): Sudha Sharma, Hemant Kumar Srivastava, Sharba Bandyopadhyay

So far, our understanding of the role of the auditory cortex (ACX) in processing visual information has been limited to the infragranular layers of the ACX, which have been shown to respond to visual stimulation. Here, we investigate neurons in the supragranular layers of the mouse ACX using 2-photon calcium imaging. Contrary to previous reports, we show that more than 20% of responding neurons in layer 2/3 of the ACX respond to full-field visual stimulation. These responses occur through both excitation and hyperpolarization. The primary ACX (A1) has a greater proportion of visual responses by hyperpolarization than by excitation, likely driven by inhibitory neurons of the infragranular layers of the ACX rather than by local layer 2/3 inhibitory neurons. Further, we found that more than 60% of neurons in layer 2/3 of A1 are multisensory in nature. We also show the presence of multisensory neurons in close proximity to exclusively auditory neurons, and that there is a reduction in the noise correlations of the recorded neurons during multisensory presentation. This is evidence in favour of a deep and intricate visual influence over auditory processing. The results have strong implications for decoding visual influences over the early auditory cortical regions.

Significance statement: To understand what features of our visual world are processed in the auditory cortex (ACX), it is important to characterize the response properties of auditory cortical neurons to visual stimuli. Here, we show the presence of visual and multisensory responses in the supragranular layers of the ACX. Hyperpolarization to visual stimulation is more commonly observed in the primary ACX. Multisensory stimulation results in suppression of responses compared to unisensory stimulation and an overall decrease in noise correlation in the primary ACX. The close-knit architecture of these neurons with auditory-specific neurons suggests the influence of non-auditory stimuli on auditory processing.
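As an illustration of the noise-correlation measure mentioned above, the following sketch computes a standard pairwise noise correlation by removing each neuron's stimulus-conditioned mean response and correlating the residuals across trials; it is a generic implementation on simulated data, not the study's exact pipeline.

```python
import numpy as np

def noise_correlation(resp_a, resp_b, stim_ids):
    """Pearson correlation of trial-to-trial residuals after removing each
    neuron's mean response to every stimulus (a standard noise-correlation
    estimate; not necessarily the procedure used in the study)."""
    resid_a = np.asarray(resp_a, dtype=float).copy()
    resid_b = np.asarray(resp_b, dtype=float).copy()
    for s in np.unique(stim_ids):
        idx = stim_ids == s
        resid_a[idx] -= resid_a[idx].mean()
        resid_b[idx] -= resid_b[idx].mean()
    return np.corrcoef(resid_a, resid_b)[0, 1]

# Toy example: two neurons sharing trial-to-trial variability.
rng = np.random.default_rng(0)
stim = np.repeat(np.arange(4), 50)            # 4 stimuli x 50 trials each
shared = rng.normal(size=stim.size)
a = stim * 2.0 + shared + rng.normal(size=stim.size)
b = stim * -1.0 + shared + rng.normal(size=stim.size)
print(f"noise correlation = {noise_correlation(a, b, stim):.2f}")
```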


1975, Vol 38 (2), pp. 219-230
Author(s): J. T. McIlwain

1. The receptive fields of collicular neurons in the cat, recorded in a single microelectrode penetration, were not centered on a point in visual space, but nested eccentrically with the smaller fields displaced toward the area centralis. The eccentric nesting was not eliminated by correcting the fields for the tangent screen distortion or by making penetrations normal to the collicular surface in coronal and parasagittal planes. These findings do not support the idea that collicular cells form topographically organized columns oriented normal to the collicular surface. 2. When the receptive fields were plotted in the visual coordinate system of the collicular map, the nesting became much more concentric, suggesting that the eccentric nesting of the receptive fields in visual space was largely a product of the retinotectal coordinate transformation. 3. The profile of a collicular receptive field, plotted in the collicular visual coordinate system is called the receptive-field image. Receptive-field images tended to have oval shapes with the long axis oriented mediolaterally. Clusters of receptive-field images, plotted for single penetrations, appeared similar wherever they occurred in the collicular map, suggesting that a common pattern of neural convergence determines the geometry of the receptive-field images in all parts of the colliculus. 4. The neural substrate of the receptive-field images was examined by tracing the theoretical patterns of neural activity which a point stimulus would produce in the retinotectal system. This analysis suggested that the shape and dimensions of the receptive-field images, and consequently the receptive fields, might be accounted for in large part by the geometry of collicular dendritic fields, the dimensions of the visual receptive fields of afferent fibers, and the retinotectal coordinate transformation. 5. Because it adjusts for the retinotectal distortion of visual space, the receptive-field image may be used to outline the distribution of collicular cells excited by a point stimulus. This makes it possible to show that a point stimulus activates large-field cells in the superficial gray layer over an area of about 2.5 by 1.5 mm in the central parts of the colliculus. It is suggested that such cells may organize the directional signals required by the oculomotor system for visual orienting behavior.
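The idea of replotting receptive fields in the coordinate system of the collicular map can be illustrated with a generic complex-logarithm style retinotopic-to-map transform. The functional form and parameter values below are placeholders commonly used in models of the collicular map, not the transform derived in the paper; they serve only to show how a field border sampled in visual space is carried into map coordinates.

```python
import numpy as np

# Illustrative complex-log style retinotopic-to-collicular mapping.
# A (deg) sets foveal magnification; BU, BV (mm) scale the map axes.
# These parameter values are placeholders, not measurements from the paper.
A, BU, BV = 3.0, 1.4, 1.8

def visual_to_map(eccentricity_deg, meridian_deg):
    """Map a visual-field point (polar coordinates, deg) onto map
    coordinates u (mm, rostro-caudal) and v (mm, medio-lateral)."""
    r = np.asarray(eccentricity_deg, dtype=float)
    phi = np.deg2rad(meridian_deg)
    u = BU * np.log(np.sqrt(r**2 + 2 * A * r * np.cos(phi) + A**2) / A)
    v = BV * np.arctan2(r * np.sin(phi), r * np.cos(phi) + A)
    return u, v

# A receptive-field border sampled in visual space, replotted on the map:
# fields that nest eccentrically in visual space become more concentric here.
border_ecc = np.array([8.0, 10.0, 12.0, 10.0])   # eccentricity (deg)
border_mer = np.array([0.0, 30.0, 0.0, -30.0])   # meridian angle (deg)
u, v = visual_to_map(border_ecc, border_mer)
print(np.round(u, 2), np.round(v, 2))
```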


2013, Vol 25 (5), pp. 790-801
Author(s): Chiara Renzi, Patrick Bruns, Kirstin-Friederike Heise, Maximo Zimerman, Jan-Frederik Feldheim, ...

Previous studies have suggested that the putative human homologue of the ventral intraparietal area (hVIP) is crucially involved in the remapping of tactile information into external spatial coordinates and in the realignment of tactile and visual maps. It is unclear, however, whether hVIP is critical for the remapping process during audio-tactile cross-modal spatial interactions. The audio-tactile ventriloquism effect, where the perceived location of a sound is shifted toward the location of a synchronous but spatially disparate tactile stimulus, was used to probe spatial interactions in audio-tactile processing. Eighteen healthy volunteers were asked to report the perceived location of brief auditory stimuli presented from three different locations (left, center, and right). Auditory stimuli were presented either alone (unimodal stimuli) or concurrently with a spatially discrepant tactile stimulus applied to the left or right index finger (bimodal stimuli), with the hands adopting either an uncrossed or a crossed posture. Single pulses of TMS were delivered over the hVIP or a control site (primary somatosensory cortex, SI) 80 msec after trial onset. TMS to the hVIP, compared with the control SI-TMS, interfered with the remapping of touch into external space, suggesting that hVIP is crucially involved in transforming spatial reference frames across audition and touch.
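A hypothetical sketch of how the audio-tactile ventriloquism effect could be scored from localization reports follows; the metric (mean bimodal minus unimodal report, signed toward the tactile side) is a generic choice, and the numbers are simulated rather than taken from the study.

```python
import numpy as np

def ventriloquism_shift(unimodal_reports, bimodal_reports, tactile_side):
    """Mean shift of reported sound location on bimodal trials relative to
    the unimodal baseline, signed so that positive values mean a shift
    toward the tactile stimulus (tactile_side: +1 right, -1 left).
    A generic way to score the effect; not necessarily the paper's metric."""
    baseline = np.mean(unimodal_reports)
    return (np.mean(bimodal_reports) - baseline) * tactile_side

# Toy example: central sounds reported near 0 deg when presented alone,
# pulled rightward by a synchronous tactile stimulus at the right finger.
rng = np.random.default_rng(1)
uni = rng.normal(0.0, 2.0, size=40)    # reported azimuth (deg), sound alone
bim = rng.normal(3.0, 2.0, size=40)    # reported azimuth, sound + right touch
print(f"shift toward touch = {ventriloquism_shift(uni, bim, +1):.1f} deg")
```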


2001, Vol 86 (4), pp. 1991-2000
Author(s): Sean P. Dukelow, Joseph F. X. DeSouza, Jody C. Culham, Albert V. van den Berg, Ravi S. Menon, ...

In humans, functional imaging studies have demonstrated a homologue of the macaque motion complex, MT+ [suggested to contain both middle temporal (MT) and medial superior temporal (MST)], in the ascending limb of the inferior temporal sulcus. In the macaque monkey, motion-sensitive areas MT and MST are adjacent in the superior temporal sulcus. Electrophysiological research has demonstrated that while MT receptive fields primarily encode the contralateral visual field, MST dorsal (MSTd) receptive fields extend well into the ipsilateral visual field. Additionally, macaque MST has been shown to receive extraretinal smooth-pursuit eye-movement signals, whereas MT does not. We used functional magnetic resonance imaging (fMRI) and the neural properties that had been observed in monkeys to distinguish putative human area MT from area MST. Optic flow stimuli placed in the full field, or contralateral field only, produced a large cluster of functional activation in our subjects consistent with previous reports of human area MT+. Ipsilateral optic flow stimuli limited to the peripheral retina produced activation only in an anterior subsection of the MT+ complex, likely corresponding to putative MSTd. During visual pursuit of a single target, a large portion of the MT+ complex was activated. However, during nonvisual pursuit, only the anterolateral portion of the MT+ complex was activated. This subsection of the MT+ cluster could correspond to putative MSTl (lateral). In summary, we observed three distinct subregions of the human MT+ complex that were arranged in a manner similar to that seen in the monkey.
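The subdivision logic described above can be summarized as set operations on thresholded activation maps: the flow-driven cluster defines putative MT+, the subset also driven by ipsilateral peripheral flow defines putative MSTd, and the subset also active during nonvisual pursuit defines putative MSTl. The sketch below uses random placeholder masks purely to show that bookkeeping; it is not fMRI data or the authors' analysis.

```python
import numpy as np

# Boolean voxel masks after thresholding each contrast (illustrative only;
# array shape and thresholds are assumptions, not taken from the study).
shape = (64, 64, 30)
rng = np.random.default_rng(2)
contra_flow    = rng.random(shape) > 0.995   # responds to contralateral optic flow
ipsi_flow      = rng.random(shape) > 0.999   # responds to ipsilateral peripheral flow
nonvis_pursuit = rng.random(shape) > 0.999   # active during nonvisual pursuit

mt_plus       = contra_flow                  # putative MT+ complex
putative_mstd = mt_plus & ipsi_flow          # subset with ipsilateral drive
putative_mstl = mt_plus & nonvis_pursuit     # subset with extraretinal pursuit drive

for name, mask in [("MT+", mt_plus), ("MSTd", putative_mstd), ("MSTl", putative_mstl)]:
    print(f"putative {name}: {int(mask.sum())} voxels")
```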


2012, Vol 108 (10), pp. 2653-2667
Author(s): Jan Churan, Daniel Guitton, Christopher C. Pack

Saccades are useful for directing the high-acuity fovea to visual targets that are of behavioral relevance. The selection of visual targets for eye movements involves the superior colliculus (SC), where many neurons respond to visual stimuli. Many of these neurons are also activated before and during saccades of specific directions and amplitudes. Although the role of the SC in controlling eye movements has been thoroughly examined, far less is known about the nature of the visual responses in this area. We have therefore recorded from neurons in the intermediate layers of the macaque SC, using a sparse-noise mapping procedure to obtain a detailed characterization of the spatiotemporal structure of visual receptive fields. We find that SC responses to flashed visual stimuli start roughly 50 ms after the onset of the stimulus and last, on average, ∼70 ms. About 50% of these neurons are strongly suppressed by visual stimuli flashed at certain locations flanking the excitatory center, and the spatiotemporal pattern of suppression exerts a predictable influence on the timing of saccades. This suppression may, therefore, contribute to the filtering of distractor stimuli during target selection. We also find that saccades affect the processing of visual stimuli by SC neurons in a manner that is quite similar to the saccadic suppression and postsaccadic enhancement that has been observed in the cortex and in perception. However, in contrast to what has been observed in the cortex, decreased visual sensitivity was generally associated with increased firing rates, while increased sensitivity was associated with decreased firing rates. Overall, these results suggest that the processing of visual stimuli by SC receptive fields can influence oculomotor behavior and that oculomotor signals originating in the SC can shape perisaccadic visual perception.
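A common way to estimate a spatiotemporal receptive field from a sparse-noise experiment is the spike-triggered average of the stimulus frames preceding each spike. The sketch below illustrates that generic approach on simulated data; it is not the authors' mapping procedure, and all stimulus statistics and latencies are invented.

```python
import numpy as np

def spatiotemporal_rf(stimulus, spikes, n_lags=8):
    """Spike-triggered average of the stimulus for lags 0..n_lags-1 frames
    before each spike. stimulus: (T, H, W) sparse-noise frames;
    spikes: (T,) spike counts per frame. Returns an (n_lags, H, W) array."""
    T = stimulus.shape[0]
    sta = np.zeros((n_lags,) + stimulus.shape[1:])
    n_spikes = spikes[n_lags:].sum()
    for lag in range(n_lags):
        # weight each frame by the spike count `lag` frames later
        sta[lag] = np.tensordot(spikes[n_lags:], stimulus[n_lags - lag:T - lag], axes=1)
    return sta / max(n_spikes, 1)

# Toy example: a cell driven by a bright spot at one location ~3 frames earlier.
rng = np.random.default_rng(3)
stim = (rng.random((2000, 16, 16)) > 0.98).astype(float)   # sparse flashes
drive = np.roll(stim[:, 8, 8], 3)                          # latency of 3 frames
spikes = rng.poisson(0.1 + 2.0 * drive)
rf = spatiotemporal_rf(stim, spikes)
print("peak lag (frames):", int(np.argmax(rf.max(axis=(1, 2)))))   # expected ~3
```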


1988, Vol 60 (2), pp. 604-620
Author(s): W. T. Newsome, R. H. Wurtz, H. Komatsu

1. We investigated cells in the middle temporal visual area (MT) and the medial superior temporal area (MST) that discharged during smooth pursuit of a dim target in an otherwise dark room. For each of these pursuit cells we determined whether the response during pursuit originated from visual stimulation of the retina by the pursuit target or from an extraretinal input related to the pursuit movement itself. We distinguished between these alternatives by removing the visual motion stimulus during pursuit either by blinking off the visual target briefly or by stabilizing the target on the retina. 2. In the foveal representation of MT (MTf), we found that pursuit cells usually decreased their rate of discharge during a blink or during stabilization of the visual target. The pursuit response of these cells depends on visual stimulation of the retina by the pursuit target. 3. In a dorsal-medial region of MST (MSTd), cells continued to respond during pursuit despite a blink or stabilization of the pursuit target. The pursuit response of these cells is dependent on an extraretinal input. 4. In a lateral-anterior region of MST (MSTl), we found both types of pursuit cells; some, like those in MTf, were dependent on visual inputs, whereas others, like those in MSTd, received an extraretinal input. 5. We observed a relationship between pursuit responses and passive visual responses. MST cells whose pursuit responses were attributable to extraretinal inputs tended to respond preferentially to large-field random-dot patterns. Some cells that preferred small spots also had an extraretinal input. 6. For 92% of the pursuit cells we studied, the pursuit response began after onset of the pursuit eye movement. A visual response preceding onset of the eye movement could be observed in many of these cells if the initial motion of the target occurred within the visual receptive field of the cell and in its preferred direction. In contrast to the pursuit response, however, this visual response was not dependent on execution of the pursuit movement. 7. For the remaining 8% of the pursuit cells, the pursuit discharge began before initiation of the pursuit eye movement. This occurred even though the initial motion of the target was outside the receptive field as mapped during fixation trials. Our data suggest, however, that such responses may be attributable to an expansion of the receptive field that accompanies enhanced visual responses.(ABSTRACT TRUNCATED AT 400 WORDS)
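The blink/stabilization test amounts to asking how much of a cell's pursuit response survives removal of the visual target. Here is a crude, hypothetical classifier in that spirit; the 50% retention criterion and the firing rates are arbitrary illustrations, not the authors' values.

```python
import numpy as np

def classify_pursuit_cell(rate_pursuit, rate_blink, retention_threshold=0.5):
    """Crude classification in the spirit of the blink/stabilization test:
    if the pursuit response largely survives removal of the visual target
    (blink or stabilization), call the cell 'extraretinal'; if it collapses,
    call it 'visual'. The 50% retention criterion is an illustrative
    assumption, not the criterion used in the study."""
    retention = np.mean(rate_blink) / max(np.mean(rate_pursuit), 1e-9)
    label = "extraretinal" if retention >= retention_threshold else "visual"
    return label, retention

# Toy example: an MSTd-like cell keeps firing during the blink,
# while an MTf-like cell does not.
mstd_like = classify_pursuit_cell(rate_pursuit=[40, 42, 38], rate_blink=[35, 33, 37])
mtf_like  = classify_pursuit_cell(rate_pursuit=[40, 42, 38], rate_blink=[6, 4, 8])
print(mstd_like, mtf_like)
```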


1995, Vol 74 (6), pp. 2379-2400
Author(s): L. G. Nowak, M. H. Munk, J. I. Nelson, A. C. James, J. Bullier

1. Single-unit and multiunit activities were recorded at the area 17-18 border of each cortical hemisphere in paralyzed cats anesthetized with nitrous oxide supplemented with halothane. Cross-correlation histograms (CCHs) were computed between 86 pairs of single units and 99 pairs of multiunit activities. Visually evoked peaks in the CCHs were removed by subtracting the shift predictor. 2. Three types of CCH peaks were observed: T peaks with narrow widths (4-28 ms), C peaks with intermediate widths (30-100 ms), and H peaks with large widths (100-1,000 ms). Oscillatory coupling was observed rarely. This tripartite distribution of CCH peaks is similar to that reported in an earlier study on the temporal coupling between areas 17 and 18. Different types of peaks occurred in isolation or in combination. Combination of different peak types was more often observed in multiunit recordings. 3. CCH peaks of all types were usually centered, meaning that units in opposite hemispheres tend to synchronize their discharges. 4. T peaks were observed almost exclusively for units with overlapping receptive fields and preferentially for units with similar optimal orientations. No dependence on receptive field position or optimal orientation was observed for the encounter rate of C and H peaks. 5. A new method, called the peristimulus CCH, was developed to study the time course of the temporal coupling. This showed that H peaks can occur during visual stimulation and that their time course follows that of the visual responses of the coupled neurons. 6. Using one single bar or two simultaneously presented light bars as stimuli, we studied the effect of visual stimulation on the strength of H coupling. This showed that H coupling observed under stimulation with a single moving light bar can be completely abolished, with little change in visual responses, when the stimulus is changed to two noncoherently moving bars. This was related to a strong decrease of the H peaks in the autocorrelograms. 7. These results demonstrate that T, C, and H peaks constitute, together with high-frequency oscillations, universal forms of temporal coupling between neurons located in different cortical areas. The following paper reports on the effects of cortical lesions on the encounter rate and strength of these different types of coupling.
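A minimal sketch of a shift-predictor-corrected cross-correlation histogram, in the spirit of the analysis described above: the raw CCH is averaged across trials, and a predictor computed from mismatched trial pairings is subtracted to remove stimulus-locked correlation. The implementation is generic and runs on simulated spike trains; bin size, lag range, and rates are arbitrary.

```python
import numpy as np

def cch(spikes_a, spikes_b, max_lag):
    """Raw cross-correlation histogram between two binned spike trains
    (counts per 1-ms bin), for lags -max_lag..+max_lag bins."""
    lags = np.arange(-max_lag, max_lag + 1)
    counts = np.array([np.sum(spikes_a * np.roll(spikes_b, lag)) for lag in lags])
    return lags, counts

def shift_corrected_cch(a_trials, b_trials, max_lag=100):
    """Average raw CCH across trials minus the shift predictor (CCH computed
    with trial n of one cell paired with trial n+1 of the other), removing
    stimulus-locked correlation and leaving the neuronal component."""
    n = len(a_trials)
    raw = np.mean([cch(a_trials[i], b_trials[i], max_lag)[1] for i in range(n)], axis=0)
    shift = np.mean([cch(a_trials[i], b_trials[(i + 1) % n], max_lag)[1] for i in range(n)], axis=0)
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, raw - shift

# Toy example: cell b repeats cell a's spikes with a 3-ms delay plus noise.
rng = np.random.default_rng(4)
a_trials = [rng.poisson(0.02, 2000) for _ in range(20)]
b_trials = [np.roll(a, 3) + rng.poisson(0.02, 2000) for a in a_trials]
lags, corrected = shift_corrected_cch(a_trials, b_trials, max_lag=20)
print("peak lag (ms):", int(lags[np.argmax(corrected)]))   # -3 under this convention
```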


2005, Vol 94 (4), pp. 2331-2352
Author(s): O'Dhaniel A. Mullette-Gillman, Yale E. Cohen, Jennifer M. Groh

The integration of visual and auditory events is thought to require a joint representation of visual and auditory space in a common reference frame. We investigated the coding of visual and auditory space in the lateral and medial intraparietal areas (LIP, MIP) as a candidate for such a representation. We recorded the activity of 275 neurons in LIP and MIP of two monkeys while they performed saccades to a row of visual and auditory targets from three different eye positions. We found 45% of these neurons to be modulated by the locations of visual targets, 19% by auditory targets, and 9% by both visual and auditory targets. The reference frame for both visual and auditory receptive fields ranged along a continuum between eye- and head-centered reference frames: ∼10% of auditory and 33% of visual neurons had receptive fields more consistent with an eye- than a head-centered frame of reference, 23% and 18%, respectively, had receptive fields more consistent with a head- than an eye-centered frame, and a large fraction of both visual and auditory response patterns was inconsistent with either reference frame. The results were similar to the reference frame we have previously found for auditory stimuli in the inferior colliculus and core auditory cortex. The correspondence between the visual and auditory receptive fields of individual neurons was weak. Nevertheless, the visual and auditory responses were sufficiently well correlated that a simple one-layer network constructed to calculate target location from the activity of the neurons in our sample performed successfully for auditory targets even though the weights were fit based only on the visual responses. We interpret these results as suggesting that although the representations of space in areas LIP and MIP are not easily described within the conventional conceptual framework of reference frames, they nevertheless process visual and auditory spatial information in a similar fashion.
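The one-layer decoding idea can be illustrated with ordinary least squares standing in for the paper's network: fit a linear readout of target location using simulated "visual" trials only, then apply the same weights to "auditory" trials. All population parameters and gains below are invented for illustration and do not reflect the recorded data.

```python
import numpy as np

# Simulated population: each neuron's rate depends (noisily) on target
# location in a similar way for visual and auditory targets, mimicking the
# reported correspondence between modalities. All numbers are assumptions.
rng = np.random.default_rng(5)
n_neurons, n_trials = 60, 300
locations = rng.uniform(-24, 24, size=n_trials)        # target azimuth (deg)
gains = rng.normal(0, 1, size=n_neurons)               # per-neuron spatial gain

def rates(locs, modality_scale):
    """Firing rates as a noisy linear function of target location."""
    return np.outer(locs, gains) * modality_scale + rng.normal(0, 2, (locs.size, n_neurons))

vis_rates = rates(locations, 1.0)
aud_rates = rates(locations, 0.7)                      # weaker auditory drive

# Fit a one-layer (linear) readout on VISUAL trials only ...
X = np.column_stack([vis_rates, np.ones(n_trials)])    # add a bias term
weights, *_ = np.linalg.lstsq(X, locations, rcond=None)

# ... then decode AUDITORY target locations with the same weights.
X_aud = np.column_stack([aud_rates, np.ones(n_trials)])
pred = X_aud @ weights
print(f"visual-trained decoder, auditory r = {np.corrcoef(pred, locations)[0, 1]:.2f}")
```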

