The Influence of Premovement Visual Information on Manual Aiming

1987 · Vol 39 (3) · pp. 541-559
Author(s): Digby Elliott, John Madalena

Three experiments were conducted to determine whether a visual representation of the movement environment, useful for movement control, exists after visual occlusion. In Experiment 1, subjects moved a stylus to small targets under five different visual conditions. As in other studies (e.g. Elliott and Allard, 1985), subjects moved to the targets in a condition involving full visual information (lights on) and in a condition in which the lights were extinguished upon movement initiation (lights off). Subjects also pointed to the targets under conditions in which the lights went off 2, 5, or 10 sec prior to movement initiation. While the typical lights-on versus lights-off differences in accuracy were obtained (Keele and Posner, 1968), the more striking finding was the influence of the pointing delay on movement accuracy: subjects exhibited a twofold increase in pointing error after only 2 sec of visual occlusion prior to movement initiation. In Experiment 2, we replicated the 2-sec pointing-delay effect with a between-subjects design, providing evidence that the results of Experiment 1 were not due to asymmetrical transfer effects. In a third experiment, the delay effect was reduced by making the target position visible in all lights-off conditions. Together, the findings provide evidence for a brief (<2 sec) visual representation of the environment that is useful in the control of aiming movements.

2006 · Vol 96 (1) · pp. 352-362
Author(s): Sabine M. Beurze, Stan Van Pelt, W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
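
To make the difference-vector computation concrete, here is a minimal Python sketch of an eye-centered calculation with gain elements applied to the target and hand signals. The function name, coordinates, and gain values are illustrative assumptions, not the authors' model code:

```python
import numpy as np

def movement_vector_eye_centered(target_body, hand_body, gaze_body,
                                 g_target=1.0, g_hand=1.0):
    """Sketch of an eye-centered difference-vector computation.

    Inputs are 2-D positions in body-centered coordinates. Subtracting
    the gaze position re-expresses target and hand in an eye-centered
    frame; g_target and g_hand stand in for the simple gain elements
    discussed in the abstract (values here are illustrative assumptions).
    """
    target_eye = g_target * (np.asarray(target_body) - np.asarray(gaze_body))
    hand_eye = g_hand * (np.asarray(hand_body) - np.asarray(gaze_body))
    return target_eye - hand_eye  # hand-to-target movement vector

# Example: gaze fixed 10 cm to the right, hand starting at the midline.
print(movement_vector_eye_centered(target_body=[20.0, 30.0],
                                   hand_body=[0.0, 0.0],
                                   gaze_body=[10.0, 0.0],
                                   g_target=1.05, g_hand=0.95))
```

With unequal gains, the predicted pointing error varies with both gaze direction and initial hand position, which is the kind of error pattern the abstract reports.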


2018 · Vol 120 (5) · pp. 2311-2324
Author(s): Andrey R. Nikolaev, Radha Nila Meghanathan, Cees van Leeuwen

In free viewing, the eyes return to previously visited locations rather frequently, even though the attentional and memory-related processes controlling eye movements show a strong antirefixation bias. To overcome this bias, a special refixation-triggering mechanism may have to be recruited. We probed the neural evidence for such a mechanism by combining eye tracking with EEG recording. A distinctive signal associated with refixation planning was observed in the EEG during the presaccadic interval: the presaccadic potential was reduced in amplitude before a refixation compared with normal fixations. The result offers direct evidence for a special refixation mechanism that operates in the saccade-planning stage of eye movement control. Once the eyes have landed on the revisited location, acquisition of visual information proceeds indistinguishably from ordinary fixations.

NEW & NOTEWORTHY A substantial proportion of eye fixations in human natural viewing behavior are revisits of recently visited locations, i.e., refixations. Our recently developed methods enabled us to study refixations in a free-viewing visual search task, using combined eye movement and EEG recording. We identified in the EEG a distinctive refixation-related signal, signifying a control mechanism specific to refixations as opposed to ordinary eye fixations.
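
For readers who want the comparison made concrete, the following toy Python sketch averages EEG amplitude in a presaccadic window and compares refixation epochs with ordinary-fixation epochs. The epoch layout, sampling rate, and simulated data are assumptions of this sketch, not the paper's pipeline:

```python
import numpy as np

def presaccadic_amplitude(epochs, srate, window=(-0.1, 0.0)):
    """Mean EEG amplitude in a presaccadic window.

    epochs : (n_saccades, n_samples) array, time-locked so that the
             saccade onset is the last sample -- an assumption of this
             sketch, not necessarily the paper's exact pipeline.
    window : (start, end) in seconds relative to saccade onset.
    """
    n = epochs.shape[1]
    start = n + int(window[0] * srate)
    stop = n + int(window[1] * srate)
    return epochs[:, start:stop].mean(axis=1)

# Toy comparison on simulated data (reduced amplitude before refixations).
rng = np.random.default_rng(0)
srate = 250
refix = rng.normal(0.5, 1.0, size=(40, srate))     # refixation epochs
ordinary = rng.normal(1.0, 1.0, size=(40, srate))  # ordinary fixations
print(presaccadic_amplitude(refix, srate).mean())
print(presaccadic_amplitude(ordinary, srate).mean())
```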


2020 · Vol 12 (6) · Article 168781402092265
Author(s): Zhou Wang, Yin Chen, Tao Wang, Bo Zhang

As an important modern weapon, the infrared-guided missile reflects a country's comprehensive national strength, so it is especially important to establish a high-accuracy semi-physical simulation device to test missile performance. Against this background, an infrared guidance test device is designed in this article. The accuracy of its shell and rotating mechanism is studied in detail, and the error factors are quantified to provide a theoretical basis for structural optimization. An orthogonal experiment design reduces the number of sensitivity-analysis experiments on the key design parameters. The factors affecting the maximum deformation and overall mass of the shell were determined. The range method was used to rank the sensitivity of these factors, and the final optimized configuration, meeting both minimum deformation and minimum mass, was determined. Experimental results show that the rotation error of the main shaft of the rotating mechanism comprises axial, radial, and angular motion errors, and the experimental values are basically consistent with the theoretical values. After shell optimization, the infrared target pointing error [Formula: see text] and the infrared target position offset error ξ′ = 0.1525 mm meet the accuracy requirements. This method can provide new ideas for precision research and for optimizing the structural design of rotating mechanisms.
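
The range method used here ranks design factors by the spread of mean responses across their levels. The following Python sketch runs a range analysis on a small L4(2^3) orthogonal array with made-up deformation values; the array, responses, and factor indices are illustrative, not the paper's measurements:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors (levels 0/1).
design = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])
# Hypothetical response per run, e.g. maximum shell deformation in mm.
response = np.array([0.21, 0.18, 0.16, 0.19])

# Range analysis: for each factor, R = max(level mean) - min(level mean).
for j in range(design.shape[1]):
    level_means = [response[design[:, j] == lv].mean() for lv in (0, 1)]
    r = max(level_means) - min(level_means)
    print(f"factor {j}: level means {level_means}, range R = {r:.4f}")
```

The factor with the largest range R is the most sensitive, and the level with the smaller mean deformation is the one carried into the optimized design.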


2004 · Vol 92 (4) · pp. 2380-2393
Author(s): M. A. Admiraal, N. L. W. Keijsers, C. C. A. M. Gielen

We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or about finger position is used in updating target position relative to the body after a step, and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and vision of a well-defined environment with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the co-variance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant co-variance between fixation position and pointing was not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the co-variance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.
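
The correction described here, removing direct contributions of the step from both signals before computing their co-variance, amounts to correlating regression residuals. A minimal Python sketch with simulated trial data (all quantities and effect sizes are assumptions for illustration, not the study's data):

```python
import numpy as np

def residualize(y, x):
    """Remove the best linear contribution of x (n, d) from y (n, k)."""
    X = np.column_stack([np.ones(len(x)), x])      # intercept + step terms
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(1)
step = rng.normal(0, 5, size=(50, 2))              # step size/direction per trial
common = rng.normal(0, 2, size=(50, 2))            # shared command-signal noise
fixation = 0.6 * step + common + rng.normal(0, 1, size=(50, 2))
pointing = 0.4 * step + common + rng.normal(0, 1, size=(50, 2))

fx = residualize(fixation, step)
pt = residualize(pointing, step)
# Co-variance between fixation and pointing (x components), step removed:
print(np.cov(fx[:, 0], pt[:, 0])[0, 1])
```

A positive residual co-variance, as in this simulation, is what implicates a common source beyond the step itself.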


2011 · Vol 105 (2) · pp. 846-859
Author(s): Lore Thaler, Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensorimotor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed but based on allocentric (object-centered) visual information; examples are gesture imitation, drawing, and copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed than in the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results cast doubt on the idea that computational and neural models of sensorimotor control developed exclusively from data obtained in target-directed paradigms are also valid for allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior.
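
The statistical-optimality framework referred to here is the standard minimum-variance cue combination, in which each feedback source is weighted by its inverse variance. A short Python illustration (the estimates and variances are hypothetical, not taken from the study):

```python
import numpy as np

def combine_cues(estimates, variances):
    """Minimum-variance (inverse-variance weighted) cue combination.

    The combined estimate sum(w_i * x_i), with w_i proportional to
    1/var_i, has variance 1 / sum(1/var_j), which is never larger
    than the variance of the best single cue.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    combined = np.dot(w, estimates)
    combined_var = 1.0 / np.sum(1.0 / np.asarray(variances, dtype=float))
    return combined, combined_var

# Vision (low variance) vs. proprioception (higher variance) of hand position:
print(combine_cues(estimates=[10.0, 12.0], variances=[1.0, 4.0]))
# -> (10.4, 0.8): pulled toward the visual cue, variance below either cue
```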


2020 · Vol 8 (3) · pp. 516-527
Author(s): Samira Moeinirad, Behrouz Abdoli, Alireza Farsi, Nasour Ahmadi

The quiet eye, a characteristic of highly skilled perceptual and motor performance, is defined as the final fixation toward a target before movement initiation. The aim of this study was to extend quiet eye-related knowledge by investigating expertise effects on overall quiet eye duration among expert and near-expert basketball players, and to determine the relative contribution of early and late visual information in a basketball jump shot by comparing the timing components of quiet eye duration (early and late quiet eye). Twenty-seven expert and near-expert male basketball players performed jump shots. Gaze was recorded with SensoMotoric Instruments eye tracking glasses, and shooting accuracy was evaluated by scoring each shot on a scale of 1–8. Six infrared cameras arranged in a circle around the participants were used to collect the players' kinematic information. The performance accuracy, gaze behavior, and kinematic characteristics of the participants during the test were calculated. The experts, who had longer quiet eye durations, performed better on the basketball jump shot than the near-experts, and they also had longer early and late quiet eye durations. The results reveal a relationship between quiet eye duration and performance and suggest that a combined visual strategy is more efficient in complex far-aiming tasks such as the basketball jump shot.
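
Quiet eye duration is typically operationalized as the final fixation on the target that begins before movement initiation, with the early/late split taken at movement onset. A minimal Python sketch under that assumption (the event format and timestamps are illustrative, not the study's data):

```python
def quiet_eye_components(fixations, movement_onset, movement_end):
    """Return (early, late) quiet-eye durations in ms.

    fixations : chronological list of (onset, offset) times of fixations
                on the target. The quiet eye is the last target fixation
                starting before movement onset; 'early' is its portion
                before onset, 'late' the portion after (this split
                follows the abstract's early/late distinction).
    """
    candidates = [f for f in fixations if f[0] < movement_onset]
    if not candidates:
        return 0.0, 0.0
    onset, offset = candidates[-1]
    offset = min(offset, movement_end)          # QE ends with the movement
    early = max(0.0, min(offset, movement_onset) - onset)
    late = max(0.0, offset - movement_onset)
    return early, late

# Fixations on the rim (ms); shot initiated at t=1000, ends at t=1400:
print(quiet_eye_components([(200, 450), (700, 1250)], 1000, 1400))
# -> (300.0, 250.0)
```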


2016 · Vol 28 (11) · pp. 1828-1837
Author(s): Emiliano Brunamonti, Aldo Genovesio, Pierpaolo Pani, Roberto Caminiti, Stefano Ferraina

Reaching movements require the integration of both somatic and visual information. These signals can have different relevance depending on whether reaches are performed toward visual or memorized targets. We tested the hypothesis that, under such conditions, that is, depending on target visibility, posterior parietal neurons integrate somatic and visual signals differently. Monkeys were trained to execute both types of reaches from different hand resting positions and in total darkness. Neural activity was recorded in Area 5 (PE) and analyzed with a focus on the preparatory epoch, that is, before movement initiation. Many neurons were influenced by the initial hand position, and most of them were further modulated by target visibility. For the same starting position, we found a prevalence of neurons whose activity differed depending on whether the hand moved toward a memorized or a visual target. This result suggests that the posterior parietal cortex integrates the available signals in a flexible way based on contextual demands.


2005 · Vol 37 (5) · pp. 343-347
Author(s): Steve Hansen, John D. Cullen, Digby Elliott
