Theory of Visual Attention (TVA) in Action: Assessing Premotor Attention in Simultaneous Eye-Hand Movements

Author(s):  
Philipp Kreyenmeier
Heiner Deubel
Nina M. Hanning

Attention shifts that precede goal-directed eye and hand movements are regarded as markers of motor target selection. Whether effectors compete for a single, shared attentional resource during simultaneous eye-hand movements, or whether attentional resources can be allocated independently to multiple target locations, remains a matter of debate. Independent, effector-specific target selection mechanisms underlying parallel allocation of visuospatial attention to saccade and reach targets would predict an increase in overall attention capacity with the number of active effectors. We test this hypothesis in a modified Theory of Visual Attention (TVA; Bundesen, 1990) paradigm. Participants reported briefly presented letters during eye, hand, or combined eye-hand movement preparation to centrally cued locations. Modeling the data according to TVA allowed us to assess both the overall attention capacity and the deployment of visual attention to individual locations in the visual workspace. In two experiments, we show that attention is predominantly allocated to the motor targets, without pronounced competition between effectors. The parallel benefits at eye and hand targets, however, come with concomitant costs at non-motor locations, and the overall attention capacity does not increase with the simultaneous recruitment of both effector systems. Moreover, premotor shifts of attention dominate over voluntary deployment of processing resources, yielding severe impairments of voluntary attention allocation. We conclude that attention shifts to multiple effector targets without mutual competition, provided that sufficient processing resources can be withdrawn from movement-irrelevant locations.
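The attention capacity and location-wise attention deployment estimated above follow Bundesen's (1990) standard TVA formulation; as a sketch (symbols per Bundesen, not taken from this abstract):

```latex
% Rate at which object x in the display S is encoded as a member of category i:
% \eta(x,i): sensory evidence, \beta_i: decision bias, w_x: attentional weight of x.
v(x,i) = \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z}
% Overall processing capacity is the sum of all encoding rates:
C = \sum_{x \in S} \sum_{i} v(x,i)
```

Because the weights $w_x$ enter only as relative shares, redistributing attention between motor targets and movement-irrelevant locations changes individual rates but not $C$; an unchanged $C$ across effector conditions therefore argues against added capacity.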

2018
Vol 30 (12)
pp. 1846-1857
Author(s):
Daniel Baldauf

In two EEG experiments, we studied the role of visual attention during the preparation of manual movements around an obstacle. Participants performed rapid hand movements to a goal position, avoiding a central obstacle on either the left or right side depending on the pitch of the acoustic go signal. We used a dot-probe paradigm to analyze the deployment of spatial attention in the visual field during motor preparation. Shortly after the go signal, but before the hand movement actually started, a visual transient was flashed either on the planned pathway of the hand (congruent trials) or on the opposite, movement-irrelevant side (incongruent trials). The P1/N1 components evoked by the onset of the dot probe were enhanced in congruent trials, where the visual transient was presented on the planned path of the hand. The results indicate that, during movement preparation, attention is allocated selectively to the planned trajectory the hand is going to take around the obstacle.


2008
Vol 100 (3)
pp. 1533-1543
Author(s):
J. Randall Flanagan
Yasuo Terao
Roland S. Johansson

People naturally direct their gaze to visible hand movement goals. Doing so improves reach accuracy through the use of signals related to gaze position and visual feedback of the hand. Here, we studied where people naturally look when acting on remembered target locations. Four targets were presented on a screen, in peripheral vision, while participants fixated a central cross (encoding phase). Four seconds later, participants used a pen to mark the remembered locations while free to look wherever they wished (recall phase). Visual references, including the screen and the cross, were present throughout. During recall, participants neither looked at the marked locations nor suppressed eye movements. Instead, gaze behavior was erratic and comprised gaze shifts loosely coupled in time and space with hand movements. To examine whether eye and hand movements during encoding affected gaze behavior during recall, in additional encoding conditions participants marked the visible targets with free gaze, marked them while fixating the central cross, or just looked at the targets. All encoding conditions yielded similarly erratic gaze behavior during recall. Furthermore, encoding mode did not influence recall performance, suggesting that participants did not exploit, during recall, sensorimotor memories related to hand and gaze movements during encoding. Finally, we recorded a similarly loose coupling between hand and eye movements during an object manipulation task performed in darkness after participants had viewed the task environment. We conclude that acting on remembered versus visible targets can engage fundamentally different control strategies, with gaze largely decoupled from movement goals during memory-guided actions.


2020
Vol 132 (5)
pp. 1358-1366
Author(s):
Chao-Hung Kuo
Timothy M. Blakely
Jeremiah D. Wander
Devapratim Sarma
Jing Wu
...

OBJECTIVE
The activation of the sensorimotor cortex as measured by electrocorticographic (ECoG) signals has been correlated with contralateral hand movements in humans, down to the level of individual digits. However, the relationship between individual and multiple synergistic finger movements and the neural signal as detected by ECoG has not been fully explored. The authors used intraoperative high-resolution micro-ECoG (µECoG) on the sensorimotor cortex to link neural signals to finger movements across several context-specific motor tasks.

METHODS
Three neurosurgical patients with cortical lesions over eloquent regions participated. During awake craniotomy, a sensorimotor cortex area of hand movement was localized by high-frequency responses measured with an 8 × 8 µECoG grid of 3-mm interelectrode spacing. Patients performed a flexion movement of the thumb or index finger, or a pinch movement of both, based on a visual cue. High-gamma (HG; 70–230 Hz) filtered µECoG was used to identify dominant electrodes associated with thumb and index movement. Hand movements were recorded by a dataglove simultaneously with the µECoG recording.

RESULTS
In all 3 patients, the electrodes associated with thumb and index finger movements were identifiable, approximately 3–6 mm apart, by the HG-filtered µECoG signal. For HG power of cortical activation measured with µECoG, the thumb and index signals in the pinch movement were similar to those observed during thumb-only and index-only movement, respectively (all p > 0.05). Index finger movements, measured by the dataglove joint angles, were similar in the index-only and pinch movements (p > 0.05). However, despite similar activation across conditions, markedly decreased thumb movement was observed in pinch relative to independent thumb-only movement (all p < 0.05).

CONCLUSIONS
HG-filtered µECoG signals effectively identify dominant regions associated with thumb and index finger movement. For pinch, the µECoG signal comprises a combination of the signals from the individual thumb and index movements. However, while the relationship between the index finger joint angle and the HG-filtered signal remains consistent across conditions, there is no fixed relationship for thumb movement: although the HG-filtered µECoG signal is similar in the thumb-only and pinch conditions, the actual thumb movement is markedly smaller in the pinch condition. This implies a nonlinear relationship between the cortical signal and the motor output for some, but importantly not all, movement types. This analysis provides insight into the tuning of the motor cortex toward specific types of motor behaviors.
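The HG band-pass step described in the methods can be sketched in a few lines. The band edges (70–230 Hz) come from the abstract; the sampling rate, filter order, Hilbert-envelope power estimate, and synthetic trace below are illustrative assumptions, not details of the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(signal, fs, band=(70.0, 230.0), order=4):
    """Band-pass filter a single-channel trace and return its instantaneous power."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)      # zero-phase band-pass
    envelope = np.abs(hilbert(filtered))   # analytic-signal amplitude
    return envelope ** 2                   # HG power over time

# Synthetic 1 s trace at 1 kHz: a 150 Hz burst (inside the HG band) in noise
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
trace = np.random.randn(t.size) * 0.1
trace[400:600] += np.sin(2 * np.pi * 150 * t[400:600])

power = high_gamma_power(trace, fs)
# HG power during the burst should clearly exceed the pre-burst baseline
print(power[450:550].mean() > power[:300].mean())
```

Comparing such per-electrode HG power across task conditions (thumb-only, index-only, pinch) is the kind of contrast the abstract's p-values refer to.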


1979
Vol 48 (1)
pp. 207-214
Author(s):
Luis R. Marcos

Sixteen subordinate bilingual subjects produced 5-min. monologues in their nondominant language, i.e., English or Spanish. Hand-movement activity manifested during the videotaped monologues was scored and related to measures of fluency in the nondominant language. The hand-movement behavior categorized as Groping Movement was significantly related to all of the nondominant-language fluency measures. These correlations support the assumption that Groping Movement may have a function in the process of verbal encoding. The results are discussed in terms of the possibility of monitoring central cognitive processes through the study of “visible” motor behavior.


Autism
2019
Vol 24 (3)
pp. 730-743
Author(s):
Emma Gowen
Andrius Vabalas
Alexander J Casson
Ellen Poliakoff

This study investigated whether reduced visual attention to an observed action might account for altered imitation in autistic adults. A total of 22 autistic and 22 non-autistic adults observed and then imitated videos of a hand producing sequences of movements that differed in vertical elevation while their hand and eye movements were recorded. Participants first performed a block of imitation trials with general instructions to imitate the action. They then performed a second block with explicit instructions to attend closely to the characteristics of the movement. Imitation was quantified according to how much participants modulated their movement between the different heights of the observed movements. In the general instruction condition, the autistic group modulated their movements significantly less compared to the non-autistic group. However, following instructions to attend to the movement, the autistic group showed equivalent imitation modulation to the non-autistic group. Eye movement recording showed that the autistic group spent significantly less time looking at the hand movement for both instruction conditions. These findings show that visual attention contributes to altered voluntary imitation in autistic individuals and have implications for therapies involving imitation as well as for autistic people’s ability to understand the actions of others.


1998
Vol 79 (3)
pp. 1574-1578
Author(s):
Ewa Wojciulik
Nancy Kanwisher
Jon Driver

Wojciulik, Ewa, Nancy Kanwisher, and Jon Driver. Covert visual attention modulates face-specific activity in the human fusiform gyrus: an fMRI study. J. Neurophysiol. 79: 1574–1578, 1998. Several lines of evidence demonstrate that faces undergo specialized processing within the primate visual system. It has been claimed that dedicated modules for such biologically significant stimuli operate in a mandatory fashion whenever their triggering input is presented. However, the possible role of covert attention to the activating stimulus has never been examined for such cases. We used functional magnetic resonance imaging to test whether face-specific activity in the human fusiform face area (FFA) is modulated by covert attention. The FFA was first identified individually in each subject as the ventral occipitotemporal region that responded more strongly to visually presented faces than to other visual objects under passive central viewing. This then served as the region of interest within which attentional modulation was tested independently, using active tasks and a very different stimulus set. Subjects viewed brief displays, each comprising two peripheral faces and two peripheral houses presented simultaneously. They performed a matching task on either the two faces or the two houses, while maintaining central fixation to equate retinal stimulation across tasks. Signal intensity was reliably stronger during face matching than house matching in both right- and left-hemisphere predefined FFAs. These results show that face-specific fusiform activity is reduced when stimuli appear outside (vs. inside) the focus of attention. Despite the modular nature of the FFA (i.e., its functional specificity and anatomic localization), face processing in this region nonetheless depends on voluntary attention.


2019
Vol 9 (11)
pp. 315
Author(s):
Andrea Orlandi
Alice Mado Proverbio

It has been shown that selective attention enhances the activity in visual regions associated with stimulus processing. The left hemisphere seems to have a prominent role when non-spatial attention is directed towards specific stimulus features (e.g., color, spatial frequency). The present electrophysiological study investigated the time course and neural correlates of object-based attention, under the assumption of left-hemispheric asymmetry. Twenty-nine right-handed participants were presented with 3D graphic images representing the shapes of different object categories (wooden dummies, chairs, structures of cubes) which lacked detail. They were instructed to press a button in response to a target stimulus indicated at the beginning of each run. The perception of non-target stimuli elicited a larger anterior N2 component, which was likely associated with motor inhibition. Conversely, target selection resulted in an enhanced selection negativity (SN) response lateralized over the left occipito-temporal regions, followed by a larger centro-parietal P300 response. These potentials were interpreted as indexing attentional selection and categorization processes, respectively. The standardized weighted low-resolution electromagnetic tomography (swLORETA) source reconstruction showed the engagement of a fronto-temporo-limbic network underlying object-based visual attention. Overall, the SN scalp distribution and relative neural generators hinted at a left-hemispheric advantage for non-spatial object-based visual attention.


2019
Vol 121 (5)
pp. 1967-1976
Author(s):
Niels Gouirand
James Mathew
Eli Brenner
Frederic R. Danion

Adapting hand movements to changes in our body or the environment is essential for skilled motor behavior. Although eye movements are known to assist hand movement control, how eye movements might contribute to the adaptation of hand movements remains largely unexplored. To determine to what extent eye movements contribute to visuomotor adaptation of hand tracking, participants used a joystick to move a cursor that had to track a visual target following an unpredictable trajectory. During blocks of trials, participants were either allowed to look wherever they liked or required to fixate a cross at the center of the screen. Eye movements were tracked to ensure gaze fixation as well as to examine free gaze behavior. The cursor initially responded normally to the joystick, but after several trials the direction in which it responded was rotated by 90°. Although fixating the eyes had a detrimental influence on hand tracking performance, participants exhibited a rather similar time course of adaptation to the rotated visual feedback in the gaze-fixed and gaze-free conditions. More importantly, there was extensive transfer of adaptation between the gaze-fixed and gaze-free conditions. We conclude that although eye movements are relevant for the online control of hand tracking, they do not play an important role in the visuomotor adaptation of such tracking. These results suggest that participants do not adapt by changing the mapping between eye and hand movements, but rather by changing the mapping between hand movements and the cursor’s motion, independently of eye movements.

NEW & NOTEWORTHY Eye movements assist hand movements in everyday activities, but their contribution to visuomotor adaptation remains largely unknown. We compared adaptation of hand tracking under free gaze and fixed gaze. Although our results confirm that following the target with the eyes increases the accuracy of hand movements, they unexpectedly demonstrate that gaze fixation does not hinder adaptation. These results suggest that eye movements make distinct contributions to the online control and visuomotor adaptation of hand movements.
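The rotated cursor mapping in this paradigm can be illustrated as a plane rotation applied to the joystick vector. The 90° angle is from the abstract; the counterclockwise direction, function name, and values below are assumptions for illustration only.

```python
import numpy as np

def cursor_velocity(joystick_xy, rotation_deg=90.0):
    """Map a joystick displacement to cursor motion under a visuomotor rotation."""
    theta = np.deg2rad(rotation_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ np.asarray(joystick_xy, dtype=float)

# Under the (assumed counterclockwise) 90° perturbation, a rightward joystick
# push drives the cursor upward, so full adaptation requires remapping the
# hand's response to the cursor's motion.
print(np.round(cursor_velocity([1.0, 0.0]), 6))  # → [0. 1.]
```

The transfer result in the abstract implies that participants update this hand-to-cursor mapping itself, rather than any mapping involving where the eyes are pointing.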

