Saccade-Related Potentials During Eye-Hand Coordination: Effects of Hand Movements on Saccade Preparation

Motor Control ◽  
2016 ◽  
Vol 20 (3) ◽  
pp. 316-336 ◽  
Author(s):  
Uta Sailer ◽  
Florian Güldenpfennig ◽  
Thomas Eggert

This study investigated the effect of hand movements on behavioral and electrophysiological parameters of saccade preparation. While event-related potentials were recorded in 17 subjects, the subjects performed saccades to a visual target either together with a hand movement in the same direction, a hand movement in the opposite direction, a hand movement in a third, independent direction, or without any accompanying hand movement. Saccade latencies increased with any kind of accompanying hand movement. Both saccade and manual latencies were largest when the two movements aimed in opposite directions. In contrast, saccade-related potentials indicating preparatory activity were mainly affected by hand movements in the same direction. The data suggest that concomitant hand movements interfere with saccade preparation, particularly when the two movements involve motor preparations that access the same visual stimulus. This indicates that saccade preparation is continually informed about hand-movement preparation.

2004 ◽  
Vol 26 (2) ◽  
pp. 317-337 ◽  
Author(s):  
Tsung-Min Hung ◽  
Thomas W. Spalding ◽  
D. Laine Santa Maria ◽  
Bradley D. Hatfield

Motor readiness, visual attention, and reaction time (RT) were assessed in 15 elite table tennis players (TTP) and 15 controls (C) during Posner’s cued attention task. Lateralized readiness potentials (LRP) were derived from contingent negative variation (CNV) at Sites C3 and C4, elicited between presentation of directional cueing (S1) and the appearance of the imperative stimulus (S2), to assess preparation for hand movement while P1 and N1 component amplitudes were derived from occipital event-related potentials (ERPs) in response to S2 to assess visual attention. Both groups had faster RT to validly cued stimuli and slower RT to invalidly cued stimuli relative to the RT to neutral stimuli that were not preceded by directional cueing, but the groups did not differ in attention benefit or cost. However, TTP did have faster RT to all imperative stimuli; they maintained superior reactivity to S2 whether preceded by valid, invalid, or neutral warning cues. Although both groups generated LRP in response to the directional cues, TTP generated larger LRP to prepare the corresponding hand for movement to the side of the cued location. TTP also had an inverse cueing effect for N1 amplitude (i.e., amplitude of N1 to the invalid cue > amplitude of N1 to the valid cue) while C visually attended to the expected and unexpected locations equally. It appears that TTP preserve superior reactivity to stimuli of uncertain location by employing a compensatory strategy to prepare their motor response to an event associated with high probability, while simultaneously devoting more visual attention to an upcoming event of lower probability.
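The benefit/cost decomposition used in this cueing analysis reduces to simple arithmetic over condition means. A minimal sketch, with illustrative reaction-time values that are not taken from the study:

```python
# Hypothetical mean reaction times (ms) per cueing condition; the numbers
# are illustrative only, not values reported in the study.
rt = {"valid": 310.0, "neutral": 335.0, "invalid": 362.0}

benefit = rt["neutral"] - rt["valid"]    # facilitation from a correct cue
cost = rt["invalid"] - rt["neutral"]     # penalty from a misleading cue
net_cueing_effect = rt["invalid"] - rt["valid"]  # total validity effect
```

The neutral condition serves as the reference point: benefit and cost partition the total validity effect into a facilitation and an interference component.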


1989 ◽  
Vol 68 (3) ◽  
pp. 707-714 ◽  
Author(s):  
Eugene A. Lovelace

Two experiments examined the accuracy with which college students were able to touch a target when knowledge of the target location had been gained either visually, kinesthetically, or by both modalities. In all but “baseline” trials, individuals were not allowed to guide the hand visually and so relied on kinesthetic cues during movement to the target location. No feedback was provided. Contrary to students' expectations, accuracy of the movements was greater when the target location had been given kinesthetically (passive movement to the target) as opposed to visually. When target location was provided by seeing one's hand move to the target (kinesthetic plus visual), performance was slightly poorer (though nonsignificantly) than for the purely kinesthetic condition, but significantly better than for a purely visual target condition. These results are discussed in terms of visual dominance and the roles of vision and kinesthesis in guiding normal hand movements.


2018 ◽  
Author(s):  
Gábor Csifcsák ◽  
Viktória Roxána Balla ◽  
Vera Daniella Dalos ◽  
Tünde Kilencz ◽  
Edit Magdolna Biró ◽  
...  

This study investigated the influence of action-associated predictive processes on visual event-related potentials (ERPs). In two experiments (N=17 and N=19), we sought evidence for sensory attenuation (SA), indexed by ERP amplitude reductions for self-induced stimuli compared to passive viewing of the same images. We assessed whether SA (1) is stronger for ecologically valid than for abstract stimuli (by comparing ERPs to pictures depicting hands versus checkerboards), (2) is specific to stimulus identity (certain versus uncertain action-effect contingencies), and (3) is sensitive to the laterality of hand movements (dominant versus subdominant hand actions). We found reduced occipital responses for self-triggered hand stimuli very early, between 80 and 90 ms (C1 component), but this effect was absent for checkerboards. In contrast, the P1 component (100-140 ms) was enhanced for all action-associated stimuli, and this effect proved sensitive to stimulus predictability for hands only. The parietal N1 component (170-190 ms) showed amplitude enhancement after right-hand movements for checkerboards only. Overall, our findings indicate that action-associated predictive processes attenuate early cortical responses to ecologically valid visual stimuli. Moreover, we propose that subsequent ERPs show amplitude enhancement that might result from the interaction between expectation-based SA and attention. Movement-initiated modulation of visual ERPs does not appear to be strongly lateralized in healthy individuals, although weaker lateralized effects cannot be excluded. These results may have implications for assessing the influence of action-associated predictions on visual processing in psychiatric disorders characterized by aberrant sensory predictions and alterations in hemispheric asymmetry, such as schizophrenia.


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
John-Ross Rizzo ◽  
Mahya Beheshti ◽  
Tahereh Naeimi ◽  
Farnia Feiz ◽  
Girish Fatterpekar ◽  
...  

Background: Eye-hand coordination (EHC) is a sophisticated act that requires interconnected processes governing synchronization of the ocular and manual motor systems. Precise, timely and skillful movements such as reaching for and grasping small objects depend on the acquisition of high-quality visual information about the environment and on simultaneous eye and hand control. Multiple areas in the brainstem and cerebellum, as well as some frontal and parietal structures, have critical roles in the control of eye movements and their coordination with the head. Although both the cortex and the cerebellum contribute critical elements to normal eye-hand function, differences in these contributions suggest that there may be separable deficits following injury.

Method: As a preliminary assessment of this perspective, we compared eye- and hand-movement control in a patient with cortical stroke relative to a patient with cerebellar stroke.

Results: We found the onset of eye and hand movements to be temporally decoupled, with significant decoupling variance in the patient with cerebellar stroke. In contrast, the patient with cortical stroke displayed increased hand spatial errors and less significant temporal decoupling variance. The increased decoupling variance in the patient with cerebellar stroke was primarily due to unstable timing of rapid eye movements (saccades).

Conclusion: These findings highlight a perspective in which facets of eye-hand dyscoordination depend on lesion location and may or may not cooperate to varying degrees. Broadly speaking, the results corroborate the general notion that the cerebellum is instrumental to temporal prediction for eye and hand movements, while the cortex is instrumental to spatial prediction, both of which are critical aspects of functional movement control.
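The temporal-decoupling measure can be made concrete in a few lines. A minimal sketch, in which the per-trial onset times are invented for illustration and not taken from the patients' data:

```python
import numpy as np

# Hypothetical per-trial movement onsets (ms); values are illustrative only.
eye_onset = np.array([182, 175, 240, 168, 205, 190, 230, 172])
hand_onset = np.array([310, 305, 312, 298, 320, 307, 315, 301])

# Eye-hand asynchrony per trial: the eye normally leads the hand.
asynchrony = hand_onset - eye_onset

mean_lead = asynchrony.mean()            # average eye lead over the hand
decoupling_var = asynchrony.var(ddof=1)  # trial-to-trial instability of coupling
```

On this view, a cerebellar deficit would show up as inflated `decoupling_var` (unstable saccade timing), while a cortical deficit would show up in spatial error measures instead.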


2007 ◽  
Vol 2007 ◽  
pp. 1-14 ◽  
Author(s):  
Qibin Zhao ◽  
Liqing Zhang

Brain-computer interface (BCI) systems create a novel communication channel from the brain to an output device, bypassing conventional motor output pathways of nerves and muscles. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. With respect to the topographic patterns of brain rhythm modulations, the common spatial patterns (CSP) algorithm has proven very useful for producing subject-specific and discriminative spatial filters, but it does not consider the temporal structure of event-related potentials, which may be very important for single-trial EEG classification. In this paper, we propose a new framework of feature extraction for the classification of hand-movement imagery EEG. Computer simulations on real experimental data indicate that the independent residual analysis (IRA) method can provide efficient temporal features. Combining IRA features with the CSP method, we obtain optimal spatial and temporal features with which we achieve the best classification rate. The high classification rate indicates that the proposed method is promising for an EEG-based brain-computer interface.
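For context, the CSP step the authors build on can be sketched with numpy on synthetic data. Everything below (channel count, trial counts, the variance structure of the two classes) is invented for illustration, and the paper's IRA temporal features are not reproduced here:

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial patterns from two classes of trials.
    X1, X2: arrays of shape (trials, channels, samples)."""
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Whiten the composite covariance C1 + C2.
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = np.diag(evals ** -0.5) @ evecs.T
    # Diagonalize the whitened class-1 covariance; extreme eigenvalues
    # give filters whose output variance best discriminates the classes.
    d, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(d)[::-1]
    return B[:, order].T @ P  # rows are spatial filters

# Synthetic demo: class 1 has high variance on channel 0, class 2 on channel 1.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((30, 4, 200)); X1[:, 0] *= 3.0
X2 = rng.standard_normal((30, 4, 200)); X2[:, 1] *= 3.0
W = csp_filters(X1, X2)

# Log-variance of the first and last CSP components: the usual CSP features.
f1 = np.log(np.var(np.einsum('ij,tjs->tis', W, X1)[:, [0, -1]], axis=2))
f2 = np.log(np.var(np.einsum('ij,tjs->tis', W, X2)[:, [0, -1]], axis=2))
```

The first filter maximizes class-1 variance relative to class 2 and the last does the opposite, which is why the two log-variance features separate the classes.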


2016 ◽  
Vol 115 (5) ◽  
pp. 2470-2484 ◽  
Author(s):  
Atul Gopal ◽  
Aditya Murthy

Voluntary control has been extensively studied in the context of eye and hand movements made in isolation, yet little is known about the nature of control during eye-hand coordination. We probed this with a redirect task, in which subjects had to make reaching/pointing movements accompanied by coordinated eye movements but had to change their plans when the target occasionally changed its position during some trials. Using a race model framework, we found that separate effector-specific mechanisms may be recruited to control eye and hand movements executed in isolation, but that when the same effectors are coordinated, a unitary mechanism controls the combined eye-hand movement. Specifically, we found that performance curves were distinct for the eye and hand when these movements were executed in isolation but were comparable when they were executed together. Second, the time to switch motor plans, called the target step reaction time, differed between the eye-alone and hand-alone conditions but was similar in the coordinated condition under the assumption of a ballistic stage of ∼40 ms, on average. Interestingly, the existence of this ballistic stage could predict the extent of eye-hand dissociations seen in individual subjects. Finally, when subjects were explicitly instructed to control a single effector (eye or hand), redirecting one effector had a strong effect on the performance of the other. Taken together, these results suggest that a common control signal and a ballistic stage are recruited when coordinated eye-hand movement plans require alteration.
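The race-model logic behind such performance curves can be illustrated with a toy simulation. All choices below (a normal distribution of GO finishing times, a 120 ms correction time, the ballistic stage) are hypothetical values for illustration, not the paper's fitted parameters:

```python
import numpy as np

def redirect_error_rate(tsd, tsrt=120, ballistic=40, n=20000, seed=1):
    """Fraction of trials in which the initial movement escapes correction.
    A GO process races a correction process triggered tsrt ms after the
    target step delay (tsd); plans within the final `ballistic` ms of the
    GO process can no longer be altered. All parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    go = rng.normal(250, 40, n)          # GO finishing times (ms), illustrative
    point_of_no_return = go - ballistic  # after this, the plan is ballistic
    return float(np.mean(tsd + tsrt > point_of_no_return))

# Later target steps leave less time to redirect, so errors rise with delay.
rates = [redirect_error_rate(d) for d in (0, 50, 100, 150)]
```

The rising error-versus-delay curve is the "performance curve" of the abstract; fitting it separately for eye, hand and coordinated conditions is how effector-specific versus unitary control can be compared.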


1999 ◽  
Vol 354 (1387) ◽  
pp. 1135-1144 ◽  
Author(s):  
Scott Makeig ◽  
Marissa Westerfield ◽  
Jeanne Townsend ◽  
Tzyy-Ping Jung ◽  
Eric Courchesne ◽  
...  

Spatial visual attention modulates the first negative-going deflection in the human averaged event-related potential (ERP) in response to visual target and non-target stimuli (the N1 complex). Here we demonstrate a decomposition of N1 into functionally independent subcomponents with functionally distinct relations to task and stimulus conditions. ERPs were collected from 20 subjects in response to visual target and non-target stimuli presented at five attended and non-attended screen locations. Independent component analysis, a new method for blind source separation, was trained simultaneously on 500 ms grand average responses from all 25 stimulus-attention conditions and decomposed the non-target N1 complexes into five spatially fixed, temporally independent and physiologically plausible components. Activity of an early, laterally symmetrical component pair (N1aR and N1aL) was evoked by left and right visual field stimuli, respectively. Component N1aR peaked ca. 9 ms earlier than N1aL. Central stimuli evoked both components with the same peak latency difference, producing a bilateral scalp distribution. The amplitudes of these components were not reliably augmented by spatial attention. Stimuli in the right visual field evoked activity in a spatio-temporally overlapping bilateral component (N1b) that peaked at ca. 180 ms and was strongly enhanced by attention. Stimuli presented at unattended locations evoked a fourth component (P2a) peaking near 240 ms. A fifth component (P3f) was evoked only by targets presented in either visual field. The distinct response patterns of these components across the array of stimulus and attention conditions suggest that they reflect activity in functionally independent brain systems involved in processing attended and unattended visuospatial events.
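The blind-source-separation idea behind this decomposition can be illustrated with a minimal ICA on synthetic mixtures. The sketch below uses a FastICA-style fixed-point rule (deflation, tanh nonlinearity) rather than the authors' own ICA training procedure, and the two toy sources are invented for illustration:

```python
import numpy as np

def fast_ica(X, n_components, seed=0, iters=200):
    """Minimal FastICA (deflation, tanh nonlinearity).
    X: (channels, samples) mixed signals. Returns the unmixing matrix
    to be applied to the centered data."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten the data.
    d, E = np.linalg.eigh(np.cov(X))
    K = np.diag(d ** -0.5) @ E.T
    Z = K @ X
    rng = np.random.default_rng(seed)
    W = np.zeros((n_components, Z.shape[0]))
    for i in range(n_components):
        w = rng.standard_normal(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(iters):
            g = np.tanh(Z.T @ w)
            w_new = (Z @ g) / Z.shape[1] - (1 - g ** 2).mean() * w
            # Deflation: decorrelate from components already found.
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-8
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ K

# Two toy non-Gaussian sources, linearly mixed into two "channels".
n = 2000
t = np.linspace(0, 1, n)
S = np.vstack([np.sin(2 * np.pi * 13 * t),
               np.sign(np.sin(2 * np.pi * 7 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S
unmix = fast_ica(X, 2)
S_hat = unmix @ (X - X.mean(axis=1, keepdims=True))
```

Up to sign and ordering, the recovered rows of `S_hat` match the original sources; applied to ERP data, the analogous unmixing yields the spatially fixed, temporally independent components described above.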


2018 ◽  
Vol 11 (6) ◽  
Author(s):  
Damla Topalli ◽  
Nergiz Ercil Cagiltay

Endoscopic surgery procedures require specific skills, such as eye-hand coordination, to be developed. Current education programs face difficulties in providing appropriate skill-improvement and assessment methods in this field. This study aims to propose objective metrics for hand-movement skills and to assess eye-hand coordination. An experimental study is conducted with 15 surgical residents to test the newly proposed measures. Two computer-based, two-handed endoscopic surgery practice scenarios are developed in a simulation environment to gather the participants' eye-gaze data with the help of an eye tracker as well as the related hand-movement data through haptic interfaces. Additionally, participants' eye-hand coordination skills are analyzed. The results indicate higher correlations between the intermediates' eye and hand movements compared to the novices. An increase in the intermediates' visual concentration leads to smoother hand movements, whereas the novices' hand movements tend to remain at a standstill. After the first round of practice, all participants' eye-hand coordination skills improved on the specific task targeted in this study. According to these results, the proposed metrics can potentially provide additional insight into trainees' eye-hand coordination skills and help instructional system designers better address training requirements.
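A correlation-based coordination metric of the kind proposed can be sketched as follows. The trajectories are synthetic, and the lag and noise level are arbitrary choices standing in for eye-tracker and haptic recordings:

```python
import numpy as np

def eye_hand_correlation(gaze, hand):
    """Per-axis Pearson correlation between gaze and hand trajectories.
    gaze, hand: arrays of shape (samples, axes)."""
    return [float(np.corrcoef(gaze[:, k], hand[:, k])[0, 1])
            for k in range(gaze.shape[1])]

# Synthetic 2-D trajectories: the hand smoothly follows gaze with a small
# phase lag plus measurement noise (values are arbitrary, for illustration).
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 500)
gaze = np.column_stack([np.cos(t), np.sin(t)])
hand = (np.column_stack([np.cos(t - 0.2), np.sin(t - 0.2)])
        + 0.05 * rng.standard_normal((500, 2)))

rx, ry = eye_hand_correlation(gaze, hand)
```

Under this metric, a skilled trainee whose hand tracks gaze closely scores near 1 on each axis, while jerky or stalled hand movement decorrelates the two signals.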

