The function of "looking-at-nothing" for sequential sensorimotor tasks: Eye movements to remembered action-target locations

2019
Vol 12 (2)
Author(s):
Rebecca Martina Foerster

When performing manual actions, eye movements precede hand movements to target locations: before we grasp an object, we look at it. This eye-hand guidance is preserved even when visual targets are unavailable, e.g., when grasping behind an occlusion. Such "looking-at-nothing" behavior might be functional, e.g., as a "deictic pointer" for manual control or as a memory-retrieval cue, or it might be a mere by-product of automatization. Here, we studied whether looking at empty locations before acting on them benefits sensorimotor performance. In five experiments, participants completed a click sequence on eight visual targets for 0-100 trials while they either had to fixate the screen center or could move their eyes freely. During 50-100 consecutive trials, participants clicked the same sequence on a blank screen, again with free or fixed gaze. In both phases, participants looked at target locations whenever gaze shifts were allowed. With visual targets, target fixations led to faster, more precise clicking, fewer errors, and sparser cursor paths than central fixation. Without visual information, a small free-gaze benefit appeared only occasionally and reflected memory retrieval rather than motor calculation. Interestingly, central fixation during learning forced early explicit encoding, which produced a strong benefit for acting on remembered targets later, independent of whether the eyes could move at that point.

2011
Vol 105 (2)
pp. 846-859
Author(s):
Lore Thaler
Melvyn A. Goodale

Studies investigating how sensory feedback about the moving hand is used to control hand movements have relied on paradigms, such as pointing or reaching, that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensorimotor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed but based on allocentric (object-centered) visual information; examples are gesture imitation, drawing, and copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed task than in the allocentric task. Furthermore, we found that this difference cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results suggest that current computational and neural models of sensorimotor control, which are based entirely on data from target-directed paradigms, must be modified to accommodate allocentric performance; they cast doubt on the idea that such models are also valid for allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior.
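
For readers unfamiliar with the statistical-optimality framework invoked above, its standard formalization is the textbook minimum-variance (maximum-likelihood) cue-combination rule, in which each feedback source is weighted by its inverse variance. This is the generic formulation, not necessarily the exact model the authors fit:

```latex
\hat{x} = \sum_i w_i \hat{x}_i,
\qquad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},
\qquad \sigma_{\hat{x}}^2 = \frac{1}{\sum_j 1/\sigma_j^2} \le \min_i \sigma_i^2 .
```

Because the combined variance never exceeds that of the most reliable single cue, heavy reliance on visual feedback is exactly what this scheme predicts whenever vision is the least noisy channel; the target-directed results, but apparently not the allocentric ones, are consistent with that prediction.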


2020
Vol 7 (1)
Author(s):
John-Ross Rizzo
Mahya Beheshti
Tahereh Naeimi
Farnia Feiz
Girish Fatterpekar
...

Background: Eye-hand coordination (EHC) is a sophisticated act that requires interconnected processes governing the synchronization of the ocular and manual motor systems. Precise, timely, and skillful movements, such as reaching for and grasping small objects, depend on acquiring high-quality visual information about the environment and on simultaneous eye and hand control. Multiple areas in the brainstem and cerebellum, as well as some frontal and parietal structures, play critical roles in the control of eye movements and their coordination with the head. Although both cortex and cerebellum contribute critical elements to normal eye-hand function, differences in these contributions suggest that there may be separable deficits following injury.

Method: As a preliminary assessment of this perspective, we compared eye- and hand-movement control in a patient with cortical stroke and a patient with cerebellar stroke.

Results: We found the onsets of eye and hand movements to be temporally decoupled, with large decoupling variance in the patient with cerebellar stroke. In contrast, the patient with cortical stroke displayed increased hand spatial errors and less pronounced temporal decoupling variance. The increased decoupling variance in the patient with cerebellar stroke was primarily due to unstable timing of rapid eye movements (saccades).

Conclusion: These findings suggest that facets of eye-hand dyscoordination depend on lesion location and may interact to varying degrees. Broadly speaking, the results corroborate the general notion that the cerebellum is instrumental to temporal prediction for eye and hand movements, while the cortex is instrumental to spatial prediction, both of which are critical aspects of functional movement control.
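
The key dependent measure here, temporal decoupling variance, can be illustrated with a short sketch: compute the per-trial asynchrony between saccade onset and hand-movement onset, then its across-trial variability. The data layout and numbers below are hypothetical; the abstract does not specify the authors' pipeline.

```python
import numpy as np

# Hypothetical per-trial movement onsets (ms from target onset).
eye_onset_ms = np.array([182, 195, 170, 240, 188, 210])   # saccade onsets
hand_onset_ms = np.array([310, 305, 330, 290, 415, 300])  # reach onsets

# Eye-hand asynchrony: positive values mean the hand starts after the eye.
asynchrony = hand_onset_ms - eye_onset_ms

mean_lag = asynchrony.mean()            # average eye-hand coupling
decoupling_sd = asynchrony.std(ddof=1)  # trial-to-trial decoupling variability

print(f"mean eye-hand lag: {mean_lag:.1f} ms, SD: {decoupling_sd:.1f} ms")
```

On this measure, unstable saccade timing (as in the cerebellar patient) inflates the standard deviation even when the mean lag looks normal.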


2021
pp. 1-6
Author(s):
Quentin Lenoble
Mohamad El Haj

There has been a surge of research in social cognition and social neuroscience comparing laboratory and real-world eye movements. Eye movements during the retrieval of autobiographical memories (i.e., personal memories) in laboratory settings are also receiving more attention. We compared eye movements during autobiographical memory retrieval in a strict laboratory design versus a design mimicking social interaction. In the first design, eye movements were recorded during autobiographical memory retrieval while participants looked at a blank screen; in the second, participants wore eye-tracking glasses and communicated autobiographical memories to the experimenter. Compared with the "screen" design, the "glasses" design yielded more fixations (p < .05), shorter fixation durations (p < .001), more saccades (p < .01), and longer saccade durations (p < .001). These findings demonstrate how eye movements during autobiographical memory retrieval differ between a strict laboratory design and face-to-face interaction.


1995
Vol 73 (1)
pp. 1-19
Author(s):
S. P. Scalaidhe
T. D. Albright
H. R. Rodman
C. G. Gross

1. On the basis of its anatomic connections and single-unit properties, the superior temporal polysensory area (STP) would seem to be primarily involved in visuospatial functions. We examined the effects of lesions of STP on saccadic eye movements, visual fixation, and smooth pursuit eye movements to directly test the hypothesis that STP is involved in visuospatial and visuomotor behavior.

2. Seven monkeys were trained to make saccades to targets 8, 15, and 22 degrees from a central fixation point along the horizontal meridian and 8 degrees from the central fixation point along the vertical meridian. One monkey was also trained to make saccades to auditory targets. The same monkeys were trained to foveate a stationary central fixation point and to follow it with a smooth pursuit eye movement when it began moving at 5, 13, or 20 degrees/s. Four monkeys received unilateral STP lesions, one received a bilateral STP lesion, and, as a control, two received unilateral inferior temporal cortex (IT) lesions. After testing, three of the animals with unilateral STP lesions received an additional STP lesion in the hemisphere contralateral to the first lesion. Similarly, one animal with a unilateral IT lesion received an additional IT lesion in the contralateral hemisphere.

3. All monkeys with complete removal of STP showed a significant increase in saccade latency to the most peripheral contralateral target, and most also had increased saccade latencies to the other contralateral targets. Saccades to targets along the vertical meridian or in the hemifield ipsilateral to the lesion were not impaired by removal of STP. By contrast, IT lesions did not impair the monkeys' ability to make saccadic eye movements to visual stimuli at any location, showing that visually guided saccades are not impaired nonspecifically by damage to visual cortex.

4. The deficit in making eye movements after STP lesions was specific to saccade latency, with little effect on the accuracy of saccades to visual targets.

5. In the one monkey trained to make saccades to auditory targets, removal of STP did not impair saccades to auditory targets contralateral to its lesion, despite this monkey showing the largest increase in saccade latencies to visual targets.

6. Saccade latency recovered completely to the baseline level of performance on the saccade task after all STP lesions. (ABSTRACT TRUNCATED AT 400 WORDS)
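
Saccade latency, the measure carrying the main result above, is conventionally defined as the time from target onset to saccade onset, with onset detected by a gaze-velocity threshold. Below is a minimal sketch of that generic computation; the sampling rate and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def saccade_latency_ms(gaze_deg, fs_hz=500.0, vel_thresh=30.0):
    """Latency from target onset to first saccade onset.

    gaze_deg: 1-D horizontal gaze position in degrees, sampled at fs_hz,
    aligned so that sample 0 is target onset. vel_thresh is in deg/s.
    """
    velocity = np.abs(np.gradient(gaze_deg)) * fs_hz   # deg/s
    above = np.flatnonzero(velocity > vel_thresh)
    if above.size == 0:
        return None                                    # no saccade detected
    return 1000.0 * above[0] / fs_hz

# Example: stable fixation for 200 ms at 500 Hz, then a fast 15-degree ramp.
trace = np.concatenate([np.zeros(100), np.linspace(0, 15, 30), np.full(70, 15.0)])
print(saccade_latency_ms(trace))   # ~200 ms
```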


2008
Vol 100 (3)
pp. 1533-1543
Author(s):
J. Randall Flanagan
Yasuo Terao
Roland S. Johansson

People naturally direct their gaze to visible hand-movement goals. Doing so improves reach accuracy through use of signals related to gaze position and visual feedback of the hand. Here, we studied where people naturally look when acting on remembered target locations. Four targets were presented on a screen, in peripheral vision, while participants fixated a central cross (encoding phase). Four seconds later, participants used a pen to mark the remembered locations while free to look wherever they wished (recall phase). Visual references, including the screen and the cross, were present throughout. During recall, participants neither looked at the marked locations nor suppressed eye movements. Instead, gaze behavior was erratic and consisted of gaze shifts loosely coupled in time and space with hand movements. To examine whether eye and hand movements during encoding affected gaze behavior during recall, in additional encoding conditions participants marked the visible targets with either free gaze or central-cross fixation, or just looked at the targets. All encoding conditions yielded similarly erratic gaze behavior during recall. Furthermore, encoding mode did not influence recall performance, suggesting that participants did not exploit, during recall, sensorimotor memories related to hand and gaze movements made during encoding. Finally, we recorded a similarly loose coupling between hand and eye movements during an object-manipulation task performed in darkness after participants had viewed the task environment. We conclude that acting on remembered versus visible targets can engage fundamentally different control strategies, with gaze largely decoupled from movement goals during memory-guided actions.
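
The "loose coupling" described here can be quantified by cross-correlating gaze and hand speed profiles: tight eye-hand coupling yields a high, narrow correlation peak at a short lag, while erratic gaze flattens the function. The sketch below is a generic version of such an analysis, not the authors' actual method; all names and data are illustrative.

```python
import numpy as np

def peak_coupling(gaze_speed, hand_speed, fs_hz=100.0, max_lag_s=1.0):
    """Peak normalized cross-correlation between gaze and hand speed and
    the lag (s) at which it occurs; positive lag = gaze leads the hand."""
    g = (gaze_speed - gaze_speed.mean()) / gaze_speed.std()
    h = (hand_speed - hand_speed.mean()) / hand_speed.std()
    max_lag = int(max_lag_s * fs_hz)
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.corrcoef(g[max(0, -k):len(g) - max(0, k)],
                      h[max(0, k):len(h) - max(0, -k)])[0, 1]
          for k in lags]
    best = int(np.argmax(xc))
    return xc[best], lags[best] / fs_hz

# Example with 10 s of synthetic data: hand speed lags gaze speed by 150 ms.
rng = np.random.default_rng(0)
gaze = rng.random(1000)
hand = np.roll(gaze, 15) + 0.1 * rng.random(1000)
print(peak_coupling(gaze, hand))   # peak near lag = +0.15 s
```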


2012
Vol 5 (2)
Author(s):
Sébastien Miellet
Lingnan He
Xinyue Zhou
Junpeng Lao
Roberto Caldara

Culture influences how people sample visual information for face processing. Westerners deploy fixations toward the eyes and the mouth to achieve face recognition. In contrast, Easterners reach equal performance by deploying more central fixations, suggesting effective use of extrafoveal information. However, this hypothesis has not yet been directly tested, i.e., by providing only extrafoveal information to both groups of observers. We used a parametric gaze-contingent technique that dynamically masks central vision (the "Blindspot") with Western and Eastern observers during face recognition. Westerners shifted progressively toward the typical Eastern central fixation pattern with larger Blindspots, whereas Easterners were insensitive to the Blindspots. These observations clearly show that Easterners preferentially sample face information extrafoveally. Conversely, the Western data also show that culturally dependent visuomotor strategies can flexibly adjust to constrained visual situations.
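
Mechanically, the gaze-contingent Blindspot amounts to redrawing the stimulus on every frame with an opaque disc of parametric radius centred on the current gaze sample. Below is a self-contained sketch of just the masking step; the real-time tracker loop is assumed and not shown, and the function and parameter names are ours, not from the paper.

```python
import numpy as np

def apply_blindspot(frame, gaze_xy, radius_px, fill=128):
    """Mask central vision: grey out a disc of radius_px around gaze_xy.

    frame: H x W (or H x W x 3) uint8 image; gaze_xy: (x, y) in pixels.
    Returns a masked copy; radius_px is the parametric Blindspot size.
    """
    h, w = frame.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    mask = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius_px ** 2
    out = frame.copy()
    out[mask] = fill
    return out

# Example: mask a disc (e.g., ~2 degrees at ~40 px/deg) at the screen centre.
face = np.random.randint(0, 256, size=(600, 800), dtype=np.uint8)
masked = apply_blindspot(face, gaze_xy=(400, 300), radius_px=80)
```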


2020
Author(s):
David Harris
Mark Wilson
Tim Holmes
Toby de Burgh
Samuel James Vine

Head-mounted eye tracking has been fundamental in developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic, fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye-movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye-tracking research, and by outlining practical considerations related to hardware, software, and data analysis, we hope to guide researchers and practitioners in the use of this approach.
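
On the data-analysis side, a recurring step with VR or head-mounted eye tracking is converting the eye-in-head gaze direction into a world-space ray (head pose applied to eye direction), which can then be intersected with scene geometry to label what is being fixated. A minimal sketch under assumed conventions (right-handed coordinates, rotation given as a matrix; no particular headset SDK):

```python
import numpy as np

def gaze_ray_world(head_pos, head_rot, gaze_dir_head):
    """World-space gaze ray from head pose and eye-in-head direction.

    head_pos: (3,) head position; head_rot: (3, 3) head-to-world rotation
    matrix; gaze_dir_head: (3,) unit gaze direction in the head frame.
    Returns (origin, direction) of the gaze ray in world coordinates.
    """
    direction = head_rot @ gaze_dir_head
    return head_pos, direction / np.linalg.norm(direction)

# Example: head at origin, yawed 90 degrees, eyes looking straight ahead.
yaw = np.deg2rad(90)
R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
              [0, 1, 0],
              [-np.sin(yaw), 0, np.cos(yaw)]])
origin, d = gaze_ray_world(np.zeros(3), R, np.array([0.0, 0.0, 1.0]))
print(d)   # gaze now points along +x in world space
```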

