Multiple spatial representations interact to increase reach accuracy when coordinating a saccade with a reach

2017 · Vol 118 (4) · pp. 2328-2343
Author(s): Yuriria Vazquez, Laura Federici, Bijan Pesaran

Reaching is an essential behavior that allows primates to interact with the environment. Precise reaching to visual targets depends on our ability to localize and foveate the target. Despite this, how the saccade system contributes to improvements in reach accuracy remains poorly understood. To assess spatial contributions of eye movements to reach accuracy, we performed a series of behavioral psychophysics experiments in nonhuman primates (Macaca mulatta). We found that a coordinated saccade with a reach to a remembered target location increases reach accuracy without target foveation. The improvement in reach accuracy was similar to that obtained when the subject had visual information about the location of the current target in the visual periphery and executed the reach while maintaining central fixation. Moreover, we found that the increase in reach accuracy elicited by a coordinated movement involved a spatial coupling mechanism between the saccade and reach movements. We observed significant correlations between the saccade and reach errors for coordinated movements. In contrast, when the eye and arm movements were made to targets in different spatial locations, the magnitude of the error and the degree of correlation between the saccade and reach direction were determined by the spatial location of the eye and the hand targets. Hence, we propose that coordinated movements improve reach accuracy without target foveation due to spatial coupling between the reach and saccade systems. Spatial coupling could arise from a neural mechanism for coordinated visual behavior that involves interacting spatial representations.

NEW & NOTEWORTHY How visual spatial representations guiding reach movements involve coordinated saccadic eye movements is unknown. Temporal coupling between the reach and saccade systems during coordinated movements improves reach performance. However, the role of spatial coupling is unclear. Using behavioral psychophysics, we found that spatial coupling increases reach accuracy in addition to temporal coupling and visual acuity. These results suggest that a spatial mechanism coupling the reach and saccade systems increases the accuracy of coordinated movements.
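The spatial-coupling evidence above rests on correlating saccade endpoint errors with reach endpoint errors across trials. A minimal sketch of that analysis, with entirely hypothetical error values (the study's actual data are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation between paired samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-trial endpoint errors (degrees) for coordinated
# saccade-and-reach movements to a remembered target
saccade_err = [1.2, -0.4, 0.8, -1.1, 0.3, 0.9, -0.7, 1.5]
reach_err = [1.0, -0.2, 0.6, -0.9, 0.5, 1.1, -0.5, 1.3]
r = pearson_r(saccade_err, reach_err)
```

Under the spatial-coupling account, coordinated trials should yield a high positive `r`, whereas decoupled eye and hand targets should weaken it.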

2008 · Vol 100 (5) · pp. 2507-2514
Author(s): Aidan A. Thompson, Denise Y. P. Henriques

Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
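The eye-fixed reference frame described above implies that a remembered target must be re-expressed relative to gaze after every eye movement. A toy sketch of that bookkeeping, with hypothetical positions in degrees of visual angle:

```python
def eye_centered_position(target_world, gaze_world):
    # In an eye-fixed reference frame, the remembered target is stored
    # relative to the current gaze direction, so each eye movement
    # requires subtracting the new gaze position from the target.
    x_t, y_t = target_world
    x_g, y_g = gaze_world
    return (x_t - x_g, y_t - y_g)

# Hypothetical 2D positions (degrees of visual angle)
target = (10.0, 0.0)
before = eye_centered_position(target, (0.0, 0.0))  # gaze at central fixation
after = eye_centered_position(target, (5.0, 0.0))   # after a 5-degree eye movement
```

A misestimate of eye-movement amplitude, as induced by the moving background in the study, would correspond to subtracting the wrong gaze displacement here, shifting the updated representation.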


2020
Author(s): David Harris, Mark Wilson, Tim Holmes, Toby de Burgh, Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic, fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


1968 · Vol 26 (2) · pp. 335-351
Author(s): Gunnar Johansson

Continuous change of illuminance over retinal area in accordance with a sinusoidal function was studied as a stimulus for the human visual system. Its efficiency in controlling pursuit eye movements was compared with that of a stepwise luminance function (square wave). Such distributions of luminance were generated on a cathode ray screen (wavelength at the eye 9° and 3°) and were given a small translatory motion (2° – 12′). Ss were instructed to follow the moving pattern with pursuit eye movements. There was no difference in performance between the two types of brightness distributions. A stimulus motion of 24′ was sufficient to produce full evidence of eye tracking in all Ss, even from the contour-free sinusoidal pattern. This means that the brightness change at every point of the CRT screen was far below the retinal sensitivity threshold at the illuminance level used. Thus a summation effect occurs. This was taken as support for a hypothesis about “ordinal” stimulation. Arguments from modern neurophysiology are introduced and yield further support for the conclusion.
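The two luminance distributions being compared, a continuous sinusoidal gradient and a stepwise square wave of the same spatial wavelength, can be sketched as one-dimensional profiles (parameter names and normalized 0-1 luminance values are assumptions for illustration, not the study's units):

```python
import math

def sinusoidal_profile(x_deg, wavelength_deg, mean=0.5, amplitude=0.5):
    # Luminance varies continuously as a sinusoid across retinal position
    return mean + amplitude * math.sin(2 * math.pi * x_deg / wavelength_deg)

def square_profile(x_deg, wavelength_deg, mean=0.5, amplitude=0.5):
    # Stepwise (square-wave) profile: take the sign of the same sinusoid,
    # producing sharp luminance contours at the zero crossings
    phase = math.sin(2 * math.pi * x_deg / wavelength_deg)
    return mean + amplitude * (1.0 if phase >= 0 else -1.0)

# Sample both profiles across 9 degrees at 0.1-degree steps
xs = [i * 0.1 for i in range(91)]
sine_vals = [sinusoidal_profile(x, 9.0) for x in xs]
square_vals = [square_profile(x, 9.0) for x in xs]
```

The contrast between the two is the point of the comparison: the square wave has sharp contours at each transition, while the sinusoid is contour-free, yet both supported pursuit tracking equally well.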


2013 · Vol 368 (1628) · pp. 20130056
Author(s): Matteo Toscani, Matteo Valsecchi, Karl R. Gegenfurtner

When judging the lightness of objects, the visual system has to take into account many factors such as shading, scene geometry, occlusions or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency for which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while the observers matched the lightness of the layered stimulus. We found that observers did focus their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated the fixation strategy using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance.
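The selection heuristic described above can be caricatured as a fixation-weighted average of local luminance samples; this is a hypothetical toy model with made-up values, not the authors' analysis:

```python
def fixation_weighted_lightness(samples, fixation_weights):
    # Estimate perceived lightness as an average of local luminance
    # samples weighted by how often each location was fixated
    total = sum(fixation_weights)
    return sum(s * w for s, w in zip(samples, fixation_weights)) / total

# Hypothetical luminance samples: two patches on the target layer,
# two on the interfering (e.g. transparent) layer
samples = [0.8, 0.75, 0.3, 0.25]
target_biased = [5, 4, 1, 0]   # fixations concentrated on the target layer
uniform = [1, 1, 1, 1]         # no fixation-based selection

est_biased = fixation_weighted_lightness(samples, target_biased)
est_uniform = fixation_weighted_lightness(samples, uniform)
```

In this caricature, concentrating fixations on the target layer pulls the estimate toward the target's luminance, which is the sense in which a gaze-contingent manipulation of fixations could alter perceived lightness.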

