Learning the trajectory of a moving visual target and evolution of its tracking in the monkey

2016
Vol 116 (6)
pp. 2739-2751
Author(s):  
Clara Bourrelly ◽  
Julie Quinet ◽  
Patrick Cavanagh ◽  
Laurent Goffart

An object moving in the visual field triggers a saccade that brings its image onto the fovea. It is followed by a combination of slow eye movements and catch-up saccades that try to keep the target image on the fovea as long as possible. The accuracy of this ability to track the “here-and-now” location of a visual target contrasts with the spatiotemporally distributed nature of its encoding in the brain. We show in six experimentally naive monkeys how this performance is acquired and gradually evolves during successive daily sessions. During the early exposure, the tracking is mostly saltatory, made of relatively large saccades separated by low eye velocity episodes, demonstrating that accurate (here and now) pursuit is not spontaneous and that gaze direction lags behind the target location most of the time. Over the sessions, while the pursuit velocity is enhanced, the gaze is more frequently directed toward the current target location as a consequence of a 25% reduction in the number of catch-up saccades and a 37% reduction in size (for the first saccade). This smoothing is observed at several scales: during the course of single trials, across the set of trials within a session, and over successive sessions. We explain the neurophysiological processes responsible for this combined evolution of saccades and pursuit in the absence of stringent training constraints. More generally, our study shows that the oculomotor system can be used to discover the neural mechanisms underlying the ability to synchronize a motor effector with a dynamic external event.

2015
Vol 113 (4)
pp. 1206-1216
Author(s):  
Naotoshi Abekawa ◽  
Hiroaki Gomi

Capturing objects by hand requires online motor corrections that compensate for self-body movements. Recent studies have shown that background visual motion, usually caused by body movement, plays a significant role in such online corrections. Visual motion applied during a reaching movement induces a rapid and automatic manual following response (MFR) in the direction of the visual motion. Importantly, the MFR amplitude is modulated by the gaze direction relative to the reach target location (i.e., foveal or peripheral reaching). That is, the brain specifies the adequate visuomotor gain for an online controller based on gaze-reach coordination. However, the time or state point at which the brain specifies this visuomotor gain remains unclear. More specifically, does the gain change occur even during the execution of reaching? In the present study, we measured MFR amplitudes during a task in which the participant performed a saccadic eye movement that altered the gaze-reach coordination during reaching. The results indicate that the MFR amplitude immediately after the saccade termination changed according to the new gaze-reach coordination, suggesting a flexible online updating of the MFR gain during reaching. An additional experiment showed that this gain updating mostly started before the saccade terminated. Therefore, the MFR gain updating process would be triggered by an ocular command related to saccade planning or execution based on forthcoming changes in the gaze-reach coordination. Our findings suggest that the brain flexibly updates the visuomotor gain for an online controller even during reaching movements based on continuous monitoring of the gaze-reach coordination.


2008
Vol 99 (1)
pp. 96-111
Author(s):  
Tamara Tchelidze ◽  
Bernhard J. M. Hess

To investigate the role of noncommutative computations in the oculomotor system, three-dimensional (3D) eye movements were measured in seven healthy subjects using a memory-contingent vestibulooculomotor paradigm. Subjects had to fixate a luminous point target that appeared briefly at an eccentricity of 20° in one of four diagonal directions in otherwise complete darkness. After a fixation period of ∼1 s, the subject was moved through a sequence of two rotations about mutually orthogonal axes in one of two orders (30° yaw followed by 30° pitch and vice versa in upright and 30° yaw followed by 20° roll and vice versa in both upright and supine orientations). We found that the change in ocular torsion induced by consecutive rotations about the yaw and the pitch axis depended on the order of rotations as predicted by 3D rotation kinematics. Similarly, after rotations about the yaw and roll axis, torsion depended on the order of rotations but now due to the change in final head orientation relative to gravity. Quantitative analyses of these ocular responses revealed that the rotational vestibuloocular reflexes (VORs) in far vision closely matched the predictions of 3D rotation kinematics. We conclude that the brain uses an optimal VOR strategy with the restriction of a reduced torsional position gain. This restriction implies a limited oculomotor range in torsion and systematic tilts of the angular eye velocity as a function of gaze direction.
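To make the order dependence concrete, here is a minimal numerical sketch (added for illustration, not from the study; it assumes a right-handed, head-fixed frame with x naso-occipital, y interaural, and z vertical, and uses scipy for the rotation algebra) showing that composing the same 30° yaw and 30° pitch in the two orders yields composite rotations whose torsional components differ:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Assumed head-fixed frame: x naso-occipital (torsion), y interaural (pitch), z vertical (yaw).
yaw = R.from_euler('z', 30, degrees=True)
pitch = R.from_euler('y', 30, degrees=True)

# In scipy, (a * b) applies b first, then a.
yaw_then_pitch = pitch * yaw
pitch_then_yaw = yaw * pitch

# Rotation vector = axis * angle (deg); its x component is the torsion about
# the naso-occipital axis. The two orders give torsion of opposite sign.
print(yaw_then_pitch.as_rotvec(degrees=True))
print(pitch_then_yaw.as_rotvec(degrees=True))
```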


Author(s):  
Agnes Wong

Vergence eye movements shift the gaze point between near and far, such that the image of a target is maintained simultaneously on both foveae. Unlike other eye movement systems, vergence movements are disjunctive, meaning that the eyes move in opposite directions. To move from a far to a near target, the eyes converge (i.e., rotate toward the nose) so that the lines of sight of the two eyes intersect at the target. To aim at a target farther away, the eyes diverge (i.e., rotate toward the temples). When the target is located at optical infinity, the lines of sight are parallel. Because the orbits in which the eyeballs sit are divergent, the eyes diverge beyond parallel during deep sleep, deep anesthesia, and coma, indicating that eye alignment is normally maintained actively by the brain. The vergence system is believed to be relatively new evolutionarily. Much as a new version of computer software tends to have bugs, this may be why vergence is the last of the eye movement systems to reach full development in children, why it is often the first system to be affected by fatigue, alcohol, and other drugs, and why defective vergence is a common cause of strabismus and diplopia. Vergence eye movements are very slow, lasting 1 sec or longer. One reason for this may be that vergence, unlike saccades, is driven by visual feedback, which normally takes at least 80 msec. Another reason may be that the speed of vergence movements is limited by how fast the lenses change shape (accommodation) and how fast the pupils constrict. There may simply be no advantage for vergence to take place quickly and then wait for the lenses and pupils to catch up. The triad of convergence, accommodation, and pupillary constriction constitutes the near triad. The two most important stimuli for vergence are retinal image blur and retinal disparity. If the retinal image of an object is blurred, the target is either too near or too far away.
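For a target on the midline, the required vergence angle follows from simple viewing geometry (a standard relation added here for illustration; it is not stated in the excerpt). With interpupillary distance p and target distance d,

$$\theta \;=\; 2\arctan\!\left(\frac{p}{2d}\right) \;\approx\; \frac{p}{d} \quad \text{(in radians, when } d \gg p\text{)},$$

so with p = 6 cm the eyes must converge by roughly 8.6° for a target at 40 cm, and the angle falls toward 0° (parallel lines of sight) as the target recedes toward optical infinity.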


1998
Vol 79 (6)
pp. 3197-3215
Author(s):  
Christian Quaia ◽  
Lance M. Optican

Quaia, Christian and Lance M. Optican. Commutative saccadic generator is sufficient to control a 3-D ocular plant with pulleys. J. Neurophysiol. 79: 3197–3215, 1998. One-dimensional models of oculomotor control rely on the fact that, when rotations around only one axis are considered, angular velocity is the derivative of orientation. However, when rotations around arbitrary axes [3-dimensional (3-D) rotations] are considered, this property does not hold, because 3-D rotations are noncommutative. The noncommutativity of rotations has prompted a long debate over whether or not the oculomotor system has to account for this property of rotations by employing noncommutative operators. Recently, Raphan presented a model of the ocular plant that incorporates the orbital pulleys discovered, and qualitatively modeled, by Miller and colleagues. Using one simulation, Raphan showed that the pulley model could produce realistic saccades even when the neural controller is commutative. However, no proof was offered that the good behavior of the Raphan-Miller pulley model holds for saccades different from those simulated. We demonstrate mathematically that the Raphan-Miller pulley model always produces movements that have an accurate dynamic behavior. This is possible because, if the pulleys are properly placed, the oculomotor plant (extraocular muscles, orbital pulleys, and eyeball) in a sense appears commutative to the neural controller. We demonstrate this finding by studying the effect that the pulleys have on the different components of the innervation signal provided by the brain to the extraocular muscles. Because the pulleys make the axes of action of the extraocular muscles dependent on eye orientation, the effect of the innervation signals varies correspondingly as a function of eye orientation. In particular, the Pulse of innervation, which in classical models of the saccadic system encoded eye velocity, here encodes a different signal, which is very close to the derivative of eye orientation. In contrast, the Step of innervation always encodes orientation, whether or not the plant contains pulleys. Thus the Step can be produced by simply integrating the Pulse. Particular care will be given to describing how the pulleys can have this differential effect on the Pulse and the Step. We will show that, if orbital pulleys are properly located, the neural control of saccades can be greatly simplified. Furthermore, the neural implementation of Listing's Law is simplified: eye orientation will lie in Listing's Plane as long as the Pulse is generated in that plane. These results also have implications for the surgical treatment of strabismus.
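The kinematic point at issue can be stated compactly in quaternion form (a standard formulation added here, not copied from the paper). In one dimension orientation is the plain integral of angular velocity, whereas in three dimensions the derivative of orientation depends on the current orientation itself:

$$\text{1-D: } \dot\theta = \omega \;\Rightarrow\; \theta(t) = \int_0^t \omega\,d\tau, \qquad \text{3-D: } \dot q = \tfrac{1}{2}\,\tilde\omega \otimes q,$$

where $q$ is the eye-orientation quaternion and $\tilde\omega$ is the angular velocity written as a pure quaternion. Because the right-hand side depends on $q$, componentwise integration of $\omega$ does not recover orientation. On this reading, the paper's claim is that properly placed pulleys make the Pulse approximate $\dot q$ (the derivative of eye orientation) rather than $\omega$, so that a simple neural integrator of the Pulse yields the Step, which encodes orientation.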


2020
Author(s):  
Anouk J. de Brouwer ◽  
Michael J. Carter ◽  
Lauren C. Smail ◽  
Daniel M. Wolpert ◽  
Jason P. Gallivan ◽  
...  

In daily tasks, we are often confronted with competing potential targets and must select one to act on. It has been suggested that, prior to target selection, the human brain encodes the motor goals of multiple, potential targets. However, this view remains controversial and it has been argued that only a single motor goal is encoded, or that motor goals are only specified after target selection. To investigate this issue, we measured participants’ gaze behaviour while viewing two potential reach targets, one of which was cued after a preview period. We applied visuomotor rotations to dissociate each visual target location from its corresponding motor goal location; i.e., the location participants needed to aim their hand toward to bring the rotated cursor to the target. During the preview period, participants most often fixated both motor goals but also frequently fixated one, or neither, motor goal location. Further gaze analysis revealed that on trials in which both motor goals were fixated, both locations were held in memory simultaneously. These findings show that, at the level of single trials, the brain most often encodes multiple motor goals prior to target selection, but may also encode either one or no motor goals. This result may help reconcile a key debate concerning the specification of motor goals in cases of target uncertainty.
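To illustrate how a visuomotor rotation dissociates a visual target from its motor goal, here is a minimal sketch (a hypothetical helper under standard rotation-paradigm assumptions, not code or parameters from the study): the cursor's motion is the hand's motion rotated by the imposed angle about the start position, so the hand must aim at the target rotated by the opposite angle.

```python
import numpy as np

def motor_goal(target_xy, start_xy, rotation_deg):
    """Aim point that brings a rotated cursor onto the visual target.

    Assumes the cursor direction is the hand direction rotated by
    +rotation_deg (counterclockwise) about the start position, so the
    required aim point is the target rotated by -rotation_deg.
    (Hypothetical helper for illustration only.)
    """
    start = np.asarray(start_xy, dtype=float)
    a = np.deg2rad(-rotation_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return start + rot @ (np.asarray(target_xy, dtype=float) - start)

# Example: a 45 deg rotation and a target 10 cm straight ahead.
print(motor_goal(target_xy=[0.0, 10.0], start_xy=[0.0, 0.0], rotation_deg=45.0))
# -> roughly [7.07, 7.07]: the aim point lies 45 deg clockwise of the target.
```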


2008
Vol 100 (4)
pp. 1848-1867
Author(s):  
Sigrid M. C. I. van Wetter ◽  
A. John van Opstal

A visual target flashed briefly around the onset of a saccade is systematically mislocalized. Such perisaccadic mislocalization is maximal in the direction of the saccade and varies systematically with the target-saccade onset delay. We have recently shown that under head-fixed conditions perisaccadic errors do not follow the quantitative predictions of current visuomotor models that explain these mislocalizations in terms of spatial updating. These models all assume sluggish eye-movement feedback and therefore predict that errors should vary systematically with the amplitude and kinematics of the intervening saccade. Instead, we reported that errors depend only weakly on the saccade amplitude. An alternative explanation for the data is that around the saccade the perceived target location undergoes a uniform transient shift in the saccade direction, but that the oculomotor feedback is, on average, accurate. This “visual shift” hypothesis predicts that errors will also remain insensitive to kinematic variability within much larger head-free gaze shifts. Here we test this prediction by presenting a brief visual probe near the onset of gaze saccades between 40 and 70° amplitude. According to models with inaccurate gaze-motor feedback, the expected perisaccadic errors for such gaze shifts should be as large as 30° and depend heavily on the kinematics of the gaze shift. In contrast, we found that the actual peak errors were similar to those reported for much smaller saccadic eye movements, i.e., on average about 10°, and that neither gaze-shift amplitude nor kinematics plays a systematic role. Our data further corroborate the visual origin of perisaccadic mislocalization under open-loop conditions and strengthen the idea that efferent feedback signals in the gaze-control system are fast and accurate.
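The two accounts being contrasted can be written schematically (notation added here, not the paper's equations). If the perceived target location is the sum of its retinal location and an internal estimate of gaze position,

$$\hat{T}(t) \;=\; T_{\mathrm{ret}}(t) + \hat{G}(t),$$

then sluggish motor feedback means $\hat{G}$ lags the true gaze position $G$ during the movement, so the localization error $\hat{T} - T$ should grow with the amplitude and kinematics of the intervening gaze shift. Under the visual shift hypothesis, by contrast, $\hat{G} \approx G$ and the error reduces to a transient, roughly uniform displacement of $T_{\mathrm{ret}}$ in the saccade direction, largely independent of amplitude, which is the pattern reported here.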


1987
Vol 58 (4)
pp. 832-849
Author(s):  
D. Tweed ◽  
T. Vilis

1. This paper develops three-dimensional models for the vestibuloocular reflex (VOR) and the internal feedback loop of the saccadic system. The models differ qualitatively from previous, one-dimensional versions, because the commutative algebra used in previous models does not apply to the three-dimensional rotations of the eye. 2. The hypothesis that eye position signals are generated by an eye velocity integrator in the indirect path of the VOR must be rejected because in three dimensions the integral of angular velocity does not specify angular position. Computer simulations using eye velocity integrators show large, cumulative gaze errors and post-VOR drift. We describe a simple velocity to position transformation that works in three dimensions. 3. In the feedback control of saccades, eye position error is not the vector difference between actual and desired eye positions. Subtractive feedback models must continuously adjust the axis of rotation throughout a saccade, and they generate meandering, dysmetric gaze saccades. We describe a multiplicative feedback system that solves these problems and generates fixed-axis saccades that accord with Listing's law. 4. We show that Listing's law requires that most saccades have their axes out of Listing's plane. A corollary is that if three pools of short-lead burst neurons code the eye velocity command during saccades, the three pools are not yoked, but function independently during visually triggered saccades. 5. In our three-dimensional models, we represent eye position using four-component rotational operators called quaternions. This is not the only algebraic system for describing rotations, but it is the one that best fits the needs of the oculomotor system, and it yields much simpler models than do rotation matrix or other representations. 6. Quaternion models predict that eye position is represented on four channels in the oculomotor system: three for the vector components of eye position and one inversely related to gaze eccentricity and torsion. 7. Many testable predictions made by quaternion models also turn up in models based on other mathematics. These predictions are therefore more fundamental than the specific models that generate them. Among these predictions are 1) to compute eye position in the indirect path of the VOR, eye or head velocity signals are multiplied by eye position feedback and then integrated; consequently 2) eye position signals and eye or head velocity signals converge on vestibular neurons, and their interaction is multiplicative.(ABSTRACT TRUNCATED AT 400 WORDS)
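A minimal numerical sketch of points 1 and 2 (an illustration added here under simple quaternion conventions, not code from the paper): a commutative "eye velocity integrator" that simply sums angular velocity assigns the same output to yaw-then-pitch and pitch-then-yaw, whereas the multiplicative velocity-to-position transformation $\dot q = \tfrac{1}{2}\,\tilde\omega \otimes q$ recovers the two different final orientations.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product a*b of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 + y1*w2 + z1*x2 - x1*z2,
        w1*z2 + z1*w2 + x1*y2 - y1*x2,
    ])

def multiplicative_integrator(omegas, dt):
    """Velocity-to-position transformation: Euler-integrate
    dq/dt = 0.5 * (0, omega) * q, with omega in head-fixed coordinates,
    renormalizing q at each step."""
    q = np.array([1.0, 0.0, 0.0, 0.0])           # start at the reference position
    for w in omegas:
        q = q + dt * 0.5 * quat_mult(np.r_[0.0, w], q)
        q /= np.linalg.norm(q)
    return q

def commutative_integrator(omegas, dt):
    """Naive 'eye velocity integrator': a running sum of angular velocity.
    In 3-D this does not specify orientation."""
    return np.sum(omegas, axis=0) * dt

# 90 deg yaw (about z) then 90 deg pitch (about y), and the reverse order.
dt, n = 0.001, 1000
yaw = np.tile([0.0, 0.0, np.pi / 2], (n, 1))     # rad/s for 1 s
pitch = np.tile([0.0, np.pi / 2, 0.0], (n, 1))   # rad/s for 1 s

print(multiplicative_integrator(np.vstack([yaw, pitch]), dt))   # two clearly
print(multiplicative_integrator(np.vstack([pitch, yaw]), dt))   # different orientations
print(commutative_integrator(np.vstack([yaw, pitch]), dt))      # identical sums,
print(commutative_integrator(np.vstack([pitch, yaw]), dt))      # hence the wrong signal
```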


2017
Vol 117 (1)
pp. 65-78
Author(s):  
Kévin Marche ◽  
Paul Apicella

Recent works highlight the importance of local inhibitory interneurons in regulating the function of the striatum. In particular, fast-spiking interneurons (FSIs), which likely correspond to a subgroup of GABAergic interneurons, have been involved in the control of movement by exerting strong inhibition on striatal output pathways. However, little is known about the exact contribution of these presumed interneurons in movement preparation, initiation, and execution. We recorded the activity of FSIs in the striatum of monkeys as they performed reaching movements to a visual target under two task conditions: one in which the movement target was presented at unsignaled left or right locations, and another in which advance information about target location was available, thus allowing monkeys to react faster. Modulations of FSI activity around the initiation of movement (53% of 55 neurons) consisted mostly of increases reaching maximal firing immediately before or, less frequently, after movement onset. Another subset of FSIs showed decreases in activity during movement execution. Movement-related changes in FSI firing rarely depended on response direction or movement speed. Modulations of FSI activity occurring relatively early in relation to movement initiation were more influenced by the preparation for movement, compared with those occurring later. Conversely, FSI activity remained unaffected when monkeys were preparing a movement toward a specific location but instead moved in the opposite direction when the trigger occurred. These results provide evidence that changes in activity of presumed GABAergic interneurons of the primate striatum could make distinct contributions to processes involved in movement generation. NEW & NOTEWORTHY We explored the functional contributions of striatal fast-spiking interneurons (FSIs), presumed GABAergic interneurons, to distinct steps of movement generation in monkeys performing a reaching task. The activity of individual FSIs was modulated before and during the movement, consisting mostly of increases in firing rate. Changes in activity also occurred during movement preparation. We interpret this variety of modulation types at different moments of task performance as reflecting differential FSI control over distinct phases of movement.


2016
Vol 2 (8)
pp. e1501070
Author(s):  
Liu Zhou ◽  
Teng Leng Ooi ◽  
Zijiang J. He

Our sense of vision reliably directs and guides our everyday actions, such as reaching and walking. This ability is especially fascinating because the optical images of natural scenes that project into our eyes are insufficient to adequately form a perceptual space. It has been proposed that the brain makes up for this inadequacy by using its intrinsic spatial knowledge. However, it is unclear what constitutes intrinsic spatial knowledge and how it is acquired. We investigated this question and showed evidence of an ecological basis, which uses the statistical spatial relationship between the observer and the terrestrial environment, namely, the ground surface. We found that in dark and reduced-cue environments where intrinsic knowledge has a greater contribution, perceived target location is more accurate when referenced to the ground than to the ceiling. Furthermore, taller observers localized the target more accurately. Their superior performance was also observed in the full-cue environment, even when we compensated for the observers’ heights by having the taller observers sit on a chair and the shorter observers stand on a box. This fascinating finding dovetails with the prediction of the ecological hypothesis for intrinsic spatial knowledge. It suggests that an individual’s accumulated lifetime experiences of being tall and his or her constant interactions with ground-based objects not only determine intrinsic spatial knowledge but also endow him or her with an advantage in spatial ability in the intermediate distance range.

