Early Coding of Visuomanual Coordination During Reaching in Parietal Area PEc

2001 ◽  
Vol 85 (1) ◽  
pp. 462-467 ◽  
Author(s):  
Stefano Ferraina ◽  
Alexandra Battaglia-Mayer ◽  
Aldo Genovesio ◽  
Barbara Marconi ◽  
Paolo Onorati ◽  
...  

The parietal mechanisms of eye-hand coordination during reaching were studied by recording neural activity in area PEc while monkeys performed different tasks aimed at assessing the influence of retinal, hand-related, and eye-related signals on neural activity. The tasks consisted of 1) reaching to foveated targets; 2) reaching to extrafoveal targets with constant eye position; and 3) saccadic eye movements toward, and holding of eye position on, peripheral targets identical to those of the reaching tasks. In all tasks, hand and/or eye movements were made from a central position to eight peripheral targets. A conventional visual fixation paradigm was used as a control task to assess the location and extent of the visual receptive fields of neurons. A large proportion of cells in area PEc displayed significant relationships to hand movement direction and position. Many of them were also related to eye position. Relationships to saccadic eye movements were found for a smaller proportion of cells. Most neurons were tuned to different combinations of hand- and eye-related signals; some of them were also influenced by visual information. This combination of signals can be an expression of the early stages of the composition of motor commands for different forms of visuomotor coordination that depend on the integration of hand- and eye-related information. These results assign to area PEc, classically considered a somatosensory association cortex, a new visuomotor role.
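Directional relationships like these are commonly quantified with a cosine-tuning model fit to responses across the eight peripheral targets. The sketch below is a minimal illustration of that standard model, with invented parameter values rather than the authors' fitting procedure; it recovers a cell's preferred direction from the resultant of its rates.

```python
import numpy as np

def cosine_tuning(direction_deg, baseline, modulation, preferred_deg):
    """Predicted firing rate of a cosine-tuned cell:
    rate = baseline + modulation * cos(direction - preferred)."""
    return baseline + modulation * np.cos(np.deg2rad(direction_deg - preferred_deg))

# Eight peripheral targets reached from a central position, as in the tasks.
directions = np.arange(0, 360, 45)
rates = cosine_tuning(directions, baseline=20.0, modulation=10.0, preferred_deg=90.0)

# The preferred direction is recovered as the direction of the resultant vector.
angles = np.deg2rad(directions)
preferred = np.rad2deg(np.arctan2(np.sum(rates * np.sin(angles)),
                                  np.sum(rates * np.cos(angles)))) % 360
```

The same resultant-vector computation works for the eye-related epochs, which is what makes it possible to compare hand- and eye-related preferred directions within one cell.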

2000 ◽  
Vol 83 (4) ◽  
pp. 2374-2391 ◽  
Author(s):  
Alexandra Battaglia-Mayer ◽  
Stefano Ferraina ◽  
Takashi Mitsuda ◽  
Barbara Marconi ◽  
Aldo Genovesio ◽  
...  

Neural activity was recorded in the parietooccipital cortex while monkeys performed different tasks aimed at investigating visuomotor interactions of retinal, eye, and arm-related signals on neural activity. The tasks were arm reaching 1) to foveated targets; 2) to extrafoveal targets, with constant eye position; 3) within an instructed-delayed paradigm, under both light and darkness; 4) saccadic eye movements toward, and static eye holding on, peripheral targets; and 5) visual fixation and stimulation. The activity of many cells was modulated during arm reaction (68%) and movement time (58%), and during static holding of the arm in space (64%), when eye position was kept constant. Eye position influenced the activity of many cells during hand reaction (45%) and movement time (51%) and holding of static hand position (69%). Many cells (56%) were also modulated during preparation for hand movement, in the delayed reach task. Modulation was also present in the dark in 59% of cells during this epoch, in 51% during reaction and movement time, and in 48% during eye/hand holding on the target. Cells (50%) displaying light-dark differences of activity were considered related to the sight and monitoring of hand motion and/or position in the visual field. Saccadic eye movements modulated a smaller percentage (25%) of cells than eye position (68%). Visual receptive fields were mapped in 44% of the cells studied. They were generally large and extended to the periphery of the tested (30°) visual field. Sixty-six percent of cells were motion sensitive. Therefore the activity of many neurons in this area reflects the combined influence of visual, eye, and arm movement–related signals. For most neurons, the preferred directions computed across different epochs and tasks, and therefore expressing all the different eye- and hand-related activity types, clustered within a limited sector of space, the field of global tuning.
These spatial fields might be an ideal frame to combine eye and hand signals, thanks to the congruence of their tuning properties. The relationships between cell activity and oculomotor and visuomanual behavior were task dependent. During saccades, most cells were recruited when the eye moved to a spatial location that was also target for hand movement, whereas during hand movement most cells fired depending on whether or not the animal had prior knowledge about the location of the visual targets.


2003 ◽  
Vol 90 (2) ◽  
pp. 1279-1294 ◽  
Author(s):  
Ralph M. Siegel ◽  
Milena Raffi ◽  
Raymond E. Phinney ◽  
Jessica A. Turner ◽  
Gábor Jandó

In the behaving monkey, inferior parietal lobe cortical neurons combine visual information with eye position signals. However, an organized topographic map of these neurons' properties has never been demonstrated. Intrinsic optical imaging revealed a functional architecture for the effect of eye position on the visual response to radial optic flow. The map was distributed across two subdivisions of the inferior parietal lobule: area 7a and the dorsal prelunate area, DP. Area 7a contains a representation of the lower eye position gain fields, while area DP represents the upper eye position gain fields. Horizontal eye position is represented orthogonally to vertical eye position across the mediolateral extent of these cortices. Similar topographies were found in three hemispheres of two monkeys; the horizontal and vertical gain field representations were not isotropic, with greater modulation found for the vertical. Monte Carlo methods demonstrated the significance of the maps, which were verified in part using multiunit recordings. The novel topographic organization of this association cortex provides a substrate for constructing representations of surrounding space for perception and the guidance of motor behaviors.
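A common formalization of such gain fields multiplies a fixed retinal response by a planar function of eye position. The sketch below is that textbook model with invented parameters, not the study's imaging analysis; it shows how the same retinal stimulus yields different responses at different vertical eye positions.

```python
import numpy as np

def gain_field_response(retinal_resp, eye_pos_deg, slope, intercept):
    """Multiplicative gain-field model: the visual response is scaled by a
    planar (linear) function of eye position, clipped to stay non-negative."""
    gain = intercept + slope * eye_pos_deg
    return retinal_resp * np.clip(gain, 0.0, None)

# Same retinal stimulus, three vertical eye positions (deg up/down).
eye_positions = np.array([-20.0, 0.0, 20.0])
responses = gain_field_response(retinal_resp=15.0, eye_pos_deg=eye_positions,
                                slope=0.02, intercept=1.0)
```

Mapping which eye positions (upper vs. lower) produce the strongest gains at each cortical site is what yields the topography described above.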


1998 ◽  
Vol 79 (3) ◽  
pp. 1461-1480 ◽  
Author(s):  
Markus Lappe ◽  
Martin Pekel ◽  
Klaus-Peter Hoffmann

Lappe, Markus, Martin Pekel, and Klaus-Peter Hoffmann. Optokinetic eye movements elicited by radial optic flow in the macaque monkey. J. Neurophysiol. 79: 1461–1480, 1998. We recorded spontaneous eye movements elicited by radial optic flow in three macaque monkeys using the scleral search coil technique. Computer-generated stimuli simulated forward or backward motion of the monkey with respect to a number of small illuminated dots arranged on a virtual ground plane. We wanted to see whether optokinetic eye movements are induced by radial optic flow stimuli that simulate self-movement, to quantify their parameters, and to consider their effects on the processing of optic flow. A regular pattern of interchanging fast and slow eye movements with a frequency of 2 Hz was observed. When we shifted the horizontal position of the focus of expansion (FOE) during simulated forward motion (expansional optic flow), median horizontal eye position also shifted in the same direction but by a smaller amount; for simulated backward motion (contractional optic flow), median eye position shifted in the opposite direction. We relate this to a change in Schlagfeld typically observed in optokinetic nystagmus. Direction and speed of slow phase eye movements were compared with the local flow field motion in gaze direction (the foveal flow). Eye movement direction matched the foveal motion well. Small systematic deviations could be attributed to an integration of the global motion pattern. Eye speed on average did not match foveal stimulus speed, as the median gain was only ∼0.5–0.6. The gain was always lower for expanding than for contracting stimuli. We analyzed the time course of the eye movement immediately after each saccade. We found remarkable differences in the initial development of gain and directional following for expansion and contraction. For expansion, directional following and gain were initially poor and strongly influenced by the ongoing eye movement before the saccade.
This was not the case for contraction. These differences also can be linked to properties of the optokinetic system. We conclude that optokinetic eye movements can be elicited by radial optic flow fields simulating self-motion. These eye movements are linked to the parafoveal flow field, i.e., the motion in the direction of gaze. In the retinal projection of the optic flow, such eye movements superimpose retinal slip. This results in complex retinal motion patterns, especially because the gain of the eye movement is small and variable. This observation has special relevance for mechanisms that determine self-motion from retinal flow fields. It is necessary to consider the influence of eye movements in optic flow analysis, but our results suggest that direction and speed of an eye movement should be treated differently.
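The slow-phase gain discussed above is the ratio of eye speed to the speed of the local flow in the direction of gaze. A minimal sketch, with invented velocity vectors rather than the recorded data:

```python
import numpy as np

def okn_gain(eye_velocity, foveal_flow_velocity):
    """Slow-phase gain: eye speed divided by the speed of the foveal flow
    (the local flow field motion in the direction of gaze)."""
    return np.linalg.norm(eye_velocity) / np.linalg.norm(foveal_flow_velocity)

# Illustrative slow-phase samples (deg/s): the eye follows the foveal flow
# direction but at roughly half its speed, as reported for expanding stimuli.
eye_v = np.array([5.0, 0.0])
flow_v = np.array([10.0, 0.0])
gain = okn_gain(eye_v, flow_v)
```

A gain well below 1, as here, is what leaves substantial residual retinal slip superimposed on the flow field, the complication for self-motion estimation that the authors emphasize.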


1997 ◽  
Vol 77 (5) ◽  
pp. 2252-2267 ◽  
Author(s):  
Douglas D. Burman ◽  
Charles J. Bruce

Burman, Douglas D. and Charles J. Bruce. Suppression of task-related saccades by electrical stimulation in the primate's frontal eye field. J. Neurophysiol. 77: 2252–2267, 1997. Patients with frontal lobe damage have difficulty suppressing reflexive saccades to salient visual stimuli, indicating that frontal lobe neocortex helps to suppress saccades as well as to produce them. In the present study, a role for the frontal eye field (FEF) in suppressing saccades was demonstrated in macaque monkeys by application of intracortical microstimulation during the performance of a visually guided saccade task, a memory prosaccade task, and a memory antisaccade task. A train of low-intensity (20–50 μA) electrical pulses was applied simultaneously with the disappearance of a central fixation target, which was always the cue to initiate a saccade. Trials with and without stimulation were compared, and significantly longer saccade latencies on stimulation trials were considered evidence of suppression. Low-intensity stimulation suppressed task-related saccades at 30 of 77 sites tested. In many cases saccades were suppressed throughout the microstimulation period (usually 450 ms) and then executed shortly after the train ended. Memory-guided saccades were most dramatically suppressed and were often rendered hypometric, whereas visually guided saccades were less severely suppressed by stimulation. At 18 FEF sites, the suppression of saccades was the only observable effect of electrical stimulation. Contraversive saccades were usually more strongly suppressed than ipsiversive ones, and cells recorded at such purely suppressive sites commonly had either foveal receptive fields or postsaccadic responses. At 12 other FEF sites at which saccadic eye movements were elicited at low thresholds, task-related saccades whose vectors differed from that of the electrically elicited saccade were suppressed by electrical stimulation. 
Such suppression at saccade sites was observed even with currents below the threshold for eliciting saccades. Pure suppression sites tended to be located near or in the fundus, deeper in the anterior bank of the arcuate than elicited saccade sites. Stimulation in the prefrontal association cortex anterior to FEF did not suppress saccades, nor did stimulation in premotor cortex posterior to FEF. These findings indicate that the primate FEF can help orchestrate saccadic eye movements by suppressing inappropriate saccade vectors as well as by selecting, specifying, and triggering appropriate saccades. We hypothesize that saccades could be suppressed both through local FEF interactions and through FEF projections to subcortical regions involved in maintaining fixation.
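The suppression criterion used here, significantly longer saccade latencies on stimulation than on control trials, can be illustrated with a toy median comparison. The latencies below are invented for illustration and are not the study's data.

```python
import statistics

def suppression_index(stim_latencies_ms, control_latencies_ms):
    """Median latency increase on stimulation trials relative to control
    trials; a large positive value is taken as evidence of suppression."""
    return (statistics.median(stim_latencies_ms)
            - statistics.median(control_latencies_ms))

# Illustrative values: saccades held through a 450-ms stimulation train
# and executed shortly after it ends, versus ordinary control latencies.
control = [180, 195, 210, 190, 200]
stim = [470, 480, 465, 490, 475]
delta = suppression_index(stim, control)
```

In practice a significance test on the two latency distributions, not just the median difference, would decide whether a site counts as suppressive.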


1994 ◽  
Vol 72 (6) ◽  
pp. 2754-2770 ◽  
Author(s):  
E. L. Keller ◽  
J. A. Edelman

1. We recorded the spatial and temporal dynamics of saccade-related burst neurons (SRBNs) found in the intermediate layers of the superior colliculus (SC) in the alert, behaving monkey. These burst cells are normally the first neurons recorded during radially directed microelectrode penetrations of the SC after the electrode has left the more dorsally situated visual layers. They have spatially delimited movement fields whose centers describe the well-studied motor map of the SC. They have a rather sharp, saccade-locked burst of activity that peaks just before saccade onset and then declines steeply during the saccade. Many of these cells, when recorded during saccade trials, also have an early, transient visual response and an irregular prelude of presaccadic activity. 2. Because saccadic eye movements normally have very stereotyped durations and velocity trajectories that vary systematically with saccade size, it has been difficult in the past to establish quantitatively whether the activity of SRBNs temporally codes dynamic saccadic control signals, e.g., dynamic motor error or eye velocity, where dynamic motor error is defined as a signal proportional to the instantaneous difference between desired final eye position and the actual eye position during a saccade. It has also not been unequivocally established whether SRBNs participate in an organized spatial shift of ensemble activity in the intermediate layers of the SC during saccadic eye movements. 3. To address these issues, we studied the activity of SRBNs using an interrupted saccade paradigm. Saccades were interrupted with pulsatile electrical stimulation through a microelectrode implanted in the omnipauser region of the brain stem while recordings were made simultaneously from single SRBNs in the SC. 4. Shortly after the beginning of the stimulation (which was electronically triggered at saccade onset), the eyes decelerated rapidly and stopped completely. 
When the high-frequency (typically 300-400 pulses per second) stimulation was terminated (average duration 12 ms), the eye movement was reinitiated and a resumed saccade was made accurately to the location of the target. 5. When we recorded from SRBNs in the more caudal colliculus, which were active for large saccades, cell discharge was powerfully and rapidly suppressed by the stimulation (average latency = 3.8 ms). Activity in the same cells started again just before the onset of the resumed saccade and continued during this saccade even though it had a much smaller amplitude than would normally be associated with significant discharge for caudal SC cells.(ABSTRACT TRUNCATED AT 400 WORDS)
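The dynamic motor error defined in point 2 above is simply the instantaneous difference between the desired final eye position and the current eye position. A minimal sketch with an invented saccade trajectory:

```python
import numpy as np

def dynamic_motor_error(desired_final_pos, eye_pos):
    """Dynamic motor error: instantaneous difference between the desired
    final eye position and the actual eye position during a saccade."""
    return desired_final_pos - eye_pos

# A simulated 10-degree horizontal saccade sampled in flight (deg).
target = 10.0
trajectory = np.array([0.0, 2.5, 6.0, 9.0, 10.0])
errors = dynamic_motor_error(target, trajectory)  # decays to zero at saccade end
```

The question the interrupted-saccade paradigm addresses is whether SRBN discharge tracks a signal like `errors` moment by moment, or is locked to the saccade in some other way.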


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
John-Ross Rizzo ◽  
Mahya Beheshti ◽  
Tahereh Naeimi ◽  
Farnia Feiz ◽  
Girish Fatterpekar ◽  
...  

Abstract

Background: Eye–hand coordination (EHC) is a sophisticated act that requires interconnected processes governing synchronization of ocular and manual motor systems. Precise, timely and skillful movements such as reaching for and grasping small objects depend on the acquisition of high-quality visual information about the environment and simultaneous eye and hand control. Multiple areas in the brainstem and cerebellum, as well as some frontal and parietal structures, have critical roles in the control of eye movements and their coordination with the head. Although both cortex and cerebellum contribute critical elements to normal eye–hand function, differences in these contributions suggest that there may be separable deficits following injury.

Method: As a preliminary assessment of this perspective, we compared eye- and hand-movement control in a patient with cortical stroke relative to a patient with cerebellar stroke.

Result: We found the onset of eye and hand movements to be temporally decoupled, with significant decoupling variance in the patient with cerebellar stroke. In contrast, the patient with cortical stroke displayed increased hand spatial errors and less significant temporal decoupling variance. The increased decoupling variance in the patient with cerebellar stroke was primarily due to unstable timing of rapid eye movements (saccades).

Conclusion: These findings highlight a perspective in which facets of eye–hand dyscoordination depend on lesion location and may or may not cooperate to varying degrees. Broadly speaking, the results corroborate the general notion that the cerebellum is instrumental to temporal prediction for eye and hand movements, while the cortex is instrumental to spatial prediction, both of which are critical aspects of functional movement control.
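Temporal decoupling variance of this kind can be quantified as the variance of per-trial eye-hand onset differences. The sketch below uses invented onset times, not the patients' data, to show how unstable saccade timing inflates the variance while leaving the mean lag similar.

```python
import statistics

def decoupling_stats(eye_onsets_ms, hand_onsets_ms):
    """Per-trial eye-hand onset differences and their mean and variance:
    a simple measure of temporal (de)coupling of the two effectors."""
    diffs = [h - e for e, h in zip(eye_onsets_ms, hand_onsets_ms)]
    return statistics.mean(diffs), statistics.variance(diffs)

# Illustrative trials: stable eye timing vs. unstable saccade timing.
stable_mean, stable_var = decoupling_stats([200, 205, 198, 202],
                                           [300, 306, 297, 303])
unstable_mean, unstable_var = decoupling_stats([180, 260, 150, 240],
                                               [300, 306, 297, 303])
```

The cerebellar-stroke pattern described above corresponds to the second case: hand onsets are comparatively regular, but variable saccade onsets drive up the onset-difference variance.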


1988 ◽  
Vol 1 (2) ◽  
pp. 239-244 ◽  
Author(s):  
James T. McIlwain

Abstract

The trajectories of saccadic eye movements evoked electrically from many brain structures are dependent to some degree on the initial position of the eye. Under certain conditions, likely to occur in stimulation experiments, local feedback models of the saccadic system can yield eye movements which behave in this way. The models in question assume that an early processing stage adds an internal representation of eye position to retinal error to yield a signal representing target position with respect to the head. The saccadic system is driven by the difference between this signal and one representing the current position of the eye. Albano & Wurtz (1982) pointed out that lesions perturbing the computation of eye position with respect to the head can result in initial position dependence of visually evoked saccades. It is shown here that position-dependent saccades will also result if electrical stimulation evokes a signal equivalent to retinal error but fails to effect a complete addition of eye position to this signal. Also, when multiple or staircase saccades are produced, as during long stimulus trains, they will have identical directions but decrease progressively in amplitude by a factor related to the fraction of added eye position.
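The staircase prediction can be simulated directly from the model as stated: stimulation supplies a retinal-error-like signal, only a fraction of eye position is added to it, and each saccade is driven by the difference between that sum and current eye position. A minimal sketch with illustrative values (not McIlwain's implementation); successive amplitudes shrink geometrically by exactly the added fraction.

```python
def staircase_saccades(retinal_error, eye_pos, fraction_added, n_saccades):
    """Local-feedback model with incomplete addition of eye position:
    target-in-head = retinal_error + fraction_added * eye_pos, and each
    saccade amplitude is (target-in-head - current eye position)."""
    amplitudes = []
    for _ in range(n_saccades):
        target_in_head = retinal_error + fraction_added * eye_pos
        amp = target_in_head - eye_pos
        amplitudes.append(amp)
        eye_pos += amp  # the eye lands at the end of each saccade
    return amplitudes

# With fraction_added = 0.5, each saccade is half the size of the last,
# and all saccades share the same direction (same sign).
amps = staircase_saccades(retinal_error=10.0, eye_pos=0.0,
                          fraction_added=0.5, n_saccades=4)
```

With `fraction_added = 1.0` (complete addition) the first saccade would be accurate and no staircase would occur, which is the intact-system case the model reduces to.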


2007 ◽  
Vol 16 (4) ◽  
pp. 219-222 ◽  
Author(s):  
John M. Henderson

When we view the visual world, our eyes flit from one location to another about three times each second. These frequent changes in gaze direction result from very fast saccadic eye movements. Useful visual information is acquired only during fixations, periods of relative gaze stability. Gaze control is defined as the process of directing fixation through a scene in real time in the service of ongoing perceptual, cognitive, and behavioral activity. This article discusses current approaches and new empirical findings that are allowing investigators to unravel how human gaze control operates during active real-world scene perception.


1986 ◽  
Vol 56 (1) ◽  
pp. 196-207 ◽  
Author(s):  
A. McKenzie ◽  
S. G. Lisberger

Monkeys were trained to make saccades to briefly flashed targets. We presented the flash during smooth pursuit of another target, so that there was a smooth change in eye position after the flash. We could then determine whether the flash-evoked saccades compensated for the intervening smooth eye movements to point the eyes at the position of the flash in space. We defined the "retinal error" as the vector from the position of the eye at the time of the flash to the position of the target. We defined the "spatial error" as the vector from the position of the eye at the time of the saccade to the position of the flashed target in space. The direction of the saccade (in polar coordinates) was more highly correlated with the direction of the retinal error than with the direction of the spatial error. Saccade amplitude was also better correlated with the amplitude of the retinal error. We obtained the same results whether the flash was presented during pursuit with the head fixed or during pursuit with combined eye-head movements. Statistical analysis demonstrated that the direction of the saccade was determined only by the retinal error in two of the three monkeys. In the third monkey, saccade direction was determined primarily by retinal error but had a consistent bias toward spatial error. The bias can be attributed to this monkey's earlier practice, in which the flashed target was reilluminated so that he could ultimately make a saccade to the correct position in space. These data suggest that the saccade generator does not normally use nonvisual feedback about smooth changes in eye or gaze position. In two monkeys we also provided sequential target flashes during pursuit, with the second flash timed so that it occurred just before the first saccade. As above, the first saccade was appropriate for the retinal error provided by the first flash. The second saccade compensated for the first and pointed the eyes at the position of the second target in space.
We conclude, as others have before (12, 21), that the saccade generator receives feedback about its own output, saccades. Our results require revision of existing models of the neural network that generates saccades. We suggest two models that retain the use of internal feedback suggested by others. We favor a model that accounts for our data by assuming that internal feedback originates directly from the output of the saccade generator and reports only saccadic changes in eye position.
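The two error definitions above amount to vector subtractions from different reference eye positions. The sketch below uses invented positions (it is not the authors' analysis code) to show how the two errors diverge once the eye moves smoothly between the flash and the saccade.

```python
import numpy as np

def retinal_error(target_at_flash, eye_at_flash):
    """Vector from the eye position at the time of the flash to the target."""
    return np.asarray(target_at_flash) - np.asarray(eye_at_flash)

def spatial_error(target_at_flash, eye_at_saccade):
    """Vector from the eye position at saccade onset to the flashed target;
    a saccade along this vector would compensate for the smooth movement."""
    return np.asarray(target_at_flash) - np.asarray(eye_at_saccade)

# During pursuit the eye drifts 3 deg rightward between flash and saccade.
target = np.array([10.0, 5.0])
eye_flash = np.array([0.0, 0.0])
eye_saccade = np.array([3.0, 0.0])

r_err = retinal_error(target, eye_flash)
s_err = spatial_error(target, eye_saccade)
```

The finding that saccade direction and amplitude track `r_err` rather than `s_err` is what indicates the saccade generator ignores the intervening smooth eye movement.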

