The effects of movement distance and movement time on visual feedback processing in aimed hand movements

1987 ◽  
Vol 65 (2) ◽  
pp. 181-191 ◽  
Author(s):  
Howard N. Zelaznik ◽  
Brian Hawkins ◽  
Lorraine Kisselburgh


Author(s):  
Shang H. Hsu ◽  
Chien C. Huang

The purpose of this study was to investigate the effects of target width, movement direction, movement amplitude, and remote distance on remote positioning performance. Movement time and movement distance ratio were taken as measures of remote positioning performance. It was found that the effects of target width, movement amplitude, and movement direction on the two measures were significant. The effect of remote distance was significant only for movement distance ratio. The magnitude of the effect of target width on movement time was larger than that of movement amplitude; a modification of Fitts' Law was thus proposed. Moreover, there was an interaction between target width and movement direction; i.e., movement direction had an effect only when the target width was small. Among the eight movement directions, upward vertical movement was the best for remote positioning. The results shed some light on the design of remote-control user interfaces.
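
The proposed modification can be illustrated with a two-term, Welford-style decomposition of the index of difficulty in which target width and movement amplitude carry separate coefficients. The sketch below is a hypothetical illustration in Python; the functional form and all coefficient values are assumptions, not the authors' fitted model.

```python
import numpy as np

def fitts_mt(amplitude, width, a=0.10, b=0.15):
    """Classic Fitts' Law: MT = a + b * log2(2A / W) (placeholder coefficients)."""
    return a + b * np.log2(2 * amplitude / width)

def modified_mt(amplitude, width, a=0.10, b_w=0.20, b_a=0.10):
    """Two-term variant with separate weights for target width and movement
    amplitude, so width can influence MT more strongly than amplitude.
    Coefficients are placeholders, not fitted values."""
    return a + b_w * np.log2(1.0 / width) + b_a * np.log2(amplitude)

# Halving target width raises predicted MT more than doubling amplitude does
print(modified_mt(amplitude=0.20, width=0.02))   # baseline condition
print(modified_mt(amplitude=0.20, width=0.01))   # width halved     -> +0.20 s
print(modified_mt(amplitude=0.40, width=0.02))   # amplitude doubled -> +0.10 s
print(fitts_mt(amplitude=0.20, width=0.02))      # classic model for comparison
```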


Author(s):  
John Sermarini ◽  
Joseph T. Kider ◽  
Joseph J. LaViola ◽  
Daniel S. McConnell

We present the results of a study investigating the influence of task and effector constraints on the kinematics of pointing movements performed in immersive virtual environments. We compared the effect on movement time of target width, as a task constraint, with that of movement distance, as an effector constraint, in a pointing task. We also compared a linear ray-cast pointing technique to a parabolic pointing technique to examine how interaction style can be understood in the context of task and effector constraints. The effect of target width as an information constraint on pointing performance was amplified in VR. Pointing technique acted as an effector constraint, with linear ray-cast pointing resulting in faster performance than parabolic pointing.
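
One common way to make this kind of comparison concrete is to fit a Fitts-type regression of movement time on index of difficulty separately for each pointing technique, so that a slower technique shows up as a larger intercept or slope (and lower throughput). The data values, technique labels, and fitting choices below are illustrative assumptions, not the study's analysis.

```python
import numpy as np

def fit_fitts(ids, mts):
    """Least-squares fit of MT = a + b * ID; returns (intercept a, slope b)."""
    b, a = np.polyfit(ids, mts, 1)
    return a, b

# Hypothetical per-condition data: index of difficulty (bits) and movement time (s)
id_bits = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
mt_linear_ray = np.array([0.55, 0.68, 0.82, 0.97, 1.10])   # assumed values
mt_parabolic  = np.array([0.62, 0.80, 0.99, 1.18, 1.36])   # assumed values

for name, mts in [("linear ray-cast", mt_linear_ray), ("parabolic", mt_parabolic)]:
    a, b = fit_fitts(id_bits, mts)
    print(f"{name}: MT ~ {a:.2f} + {b:.2f} * ID, throughput ~ {1.0 / b:.2f} bits/s")
```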


2019 ◽  
Vol 121 (4) ◽  
pp. 1561-1574 ◽  
Author(s):  
Dimitrios J. Palidis ◽  
Joshua G. A. Cashaback ◽  
Paul L. Gribble

At least two distinct processes have been identified by which motor commands are adapted according to movement-related feedback: reward-based learning and sensory error-based learning. In sensory error-based learning, mappings between sensory targets and motor commands are recalibrated according to sensory error feedback. In reward-based learning, motor commands are associated with subjective value, such that successful actions are reinforced. We designed two tasks to isolate reward- and sensory error-based motor adaptation, and we used electroencephalography in humans to identify and dissociate the neural correlates of reward and sensory error feedback processing. We designed a visuomotor rotation task to isolate sensory error-based learning that was induced by altered visual feedback of hand position. In a reward learning task, we isolated reward-based learning induced by binary reward feedback that was decoupled from the visual target. A fronto-central event-related potential called the feedback-related negativity (FRN) was elicited specifically by binary reward feedback but not sensory error feedback. A more posterior component called the P300 was evoked by feedback in both tasks. In the visuomotor rotation task, P300 amplitude was increased by sensory error induced by perturbed visual feedback and was correlated with learning rate. In the reward learning task, P300 amplitude was increased by reward relative to nonreward and by surprise regardless of feedback valence. We propose that during motor adaptation the FRN specifically reflects a reward-based learning signal whereas the P300 reflects feedback processing that is related to adaptation more generally. NEW & NOTEWORTHY We studied the event-related potentials evoked by feedback stimuli during motor adaptation tasks that isolate reward- and sensory error-based learning mechanisms. We found that the feedback-related negativity was specifically elicited by binary reward feedback, whereas the P300 was observed in both tasks. These results reveal neural processes associated with different learning mechanisms and elucidate which classes of errors, from a computational standpoint, elicit the feedback-related negativity and P300.
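
The ERP measures described here are typically computed by baseline-correcting single-trial epochs, averaging them by feedback condition, and taking the mean voltage in a component-specific time window at the relevant electrode (fronto-central for the FRN, more posterior for the P300). The sketch below shows that logic with synthetic data; the sampling rate, epoch length, and window boundaries are assumptions, not the paper's exact parameters.

```python
import numpy as np

FS = 500      # sampling rate in Hz (assumed)
T0 = 0.2      # epoch starts 200 ms before feedback onset (assumed)

def mean_amplitude(epochs, t_start, t_end, fs=FS, t0=T0):
    """Baseline-correct each epoch, average across trials, and return the
    mean voltage in [t_start, t_end] seconds relative to feedback onset."""
    baseline = epochs[:, : int(t0 * fs)].mean(axis=1, keepdims=True)
    erp = (epochs - baseline).mean(axis=0)          # condition-average ERP
    i0, i1 = int((t0 + t_start) * fs), int((t0 + t_end) * fs)
    return erp[i0:i1].mean()

# Synthetic epochs: trials x samples (800 ms epochs, -200 to +600 ms)
rng = np.random.default_rng(0)
reward_epochs    = rng.normal(0.0, 2.0, size=(100, int(0.8 * FS)))
nonreward_epochs = rng.normal(0.0, 2.0, size=(100, int(0.8 * FS)))

# FRN scored here as the reward-minus-nonreward difference in a ~230-330 ms window (assumed)
frn = (mean_amplitude(reward_epochs, 0.23, 0.33)
       - mean_amplitude(nonreward_epochs, 0.23, 0.33))
p300 = mean_amplitude(reward_epochs, 0.30, 0.50)    # P300 window, assumed
print(f"FRN difference: {frn:.2f} uV, P300: {p300:.2f} uV")
```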


2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed, but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed task than in the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained based on differences in uncertainty about the movement goal. We conclude that the role played by visual feedback for movement control is fundamentally different for target-directed and allocentric movements. The results suggest that current computational and neural models of sensorimotor control that are based entirely on data derived from target-directed paradigms have to be modified to accommodate performance in the allocentric tasks used in our experiments. As a consequence, the results cast doubt on the idea that models of sensorimotor control developed exclusively from data obtained in target-directed paradigms are also valid in the context of allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior.
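
The "statistical optimality" framework referred to here is minimum-variance (maximum-likelihood) cue combination, in which each source of sensory feedback is weighted by its inverse variance so that the combined estimate has lower variance than either cue alone. A minimal sketch of that rule, with assumed values for visual and proprioceptive estimates of hand position:

```python
import numpy as np

def combine_cues(estimates, variances):
    """Minimum-variance combination: weight each cue by its inverse variance.
    Returns the combined estimate and its (reduced) variance."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = precisions / precisions.sum()
    combined = np.dot(weights, estimates)
    combined_var = 1.0 / precisions.sum()
    return combined, combined_var

# Hypothetical 1-D hand-position estimates (cm) and their variances
visual, proprioceptive = 10.2, 9.5
var_visual, var_proprio = 0.5, 2.0          # vision assumed more reliable

pos, var = combine_cues([visual, proprioceptive], [var_visual, var_proprio])
print(f"combined estimate: {pos:.2f} cm, variance: {var:.2f} cm^2")
# The combined variance (0.4) is smaller than either cue alone, which is why
# relying heavily on the more reliable cue reduces movement scatter.
```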


Author(s):  
Colin G. Drury

Two experiments on reciprocal foot tapping between pedals showed that a modified version of Fitts' Law can predict movement time for a variety of pedal sizes and separations. Using a relationship between times for reciprocal tapping and single movements found for hand movements, the present results closely predict the movement times obtained by earlier researchers under specific conditions. It is concluded that, when pedals are at the minimum safe separation, pedal width and direction of movement have only a slight effect on movement time.


2013 ◽  
Vol 109 (11) ◽  
pp. 2680-2690 ◽  
Author(s):  
Sandra Sülzenbrück ◽  
Herbert Heuer

Extending the body with a tool could imply that characteristics of hand movements become characteristics of the movement of the effective part of the tool. Recent research suggests that such distal shifts are subject to boundary conditions. Here we propose the existence of three constraints: a strategy constraint, a constraint of movement characteristics, and a constraint of mode of control. We investigate their validity for the curvature of transverse movements aimed at a target while using a sliding first-order lever. Participants moved the tip of the effort arm of a real or virtual lever to control a cursor representing movements of the tip of the load arm of the lever on a monitor. With this tool, straight transverse hand movements are associated with concave curvature of the path of the tip of the tool. With terminal visual feedback and when targets were presented for the hand, hand paths were slightly concave in the absence of the dynamic transformation of the tool and slightly convex in its presence. When targets were presented for the tip of the lever, both the concave and convex curvatures of the hand paths became stronger. Finally, with continuous visual feedback of the tip of the lever, curvature of hand paths became convex and concave curvature of the paths of the tip of the lever was reduced. In addition, the effect of the dynamic transformation on curvature was attenuated. These findings support the notion that distal shifts are subject to at least the three proposed constraints.
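
The kinematics of a sliding first-order lever can be sketched geometrically: a rigid rod of fixed length slides through a fixed pivot, the hand holds one end (the effort arm runs from hand to pivot), and the tool tip is the other end (the load arm runs from pivot to tip). Under that simplified geometry, and ignoring the lever's dynamics, a straight transverse hand path maps onto a curved tip path. The rod length, pivot location, and hand path below are assumptions for illustration only.

```python
import numpy as np

def lever_tip(hand, pivot, rod_length):
    """Tip of the load arm for a rigid rod of fixed length sliding through a
    fixed pivot: the tip is the far end of the rod on the hand-pivot line, so
    the effort arm (hand to pivot) and load arm (pivot to tip) sum to the rod
    length."""
    hand, pivot = np.asarray(hand, float), np.asarray(pivot, float)
    direction = (pivot - hand) / np.linalg.norm(pivot - hand)
    return hand + direction * rod_length

# Straight transverse hand path 40 cm in front of the pivot (assumed layout)
pivot = np.array([0.0, 0.0])
rod_length = 0.8                               # metres, assumed
xs = np.linspace(-0.15, 0.15, 7)
hand_path = np.stack([xs, np.full_like(xs, -0.4)], axis=1)

tip_path = np.array([lever_tip(h, pivot, rod_length) for h in hand_path])
# The tip's y-coordinate varies along the path: the straight hand movement
# produces a curved movement of the tool tip.
print(tip_path.round(3))
```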


2012 ◽  
Vol 31 (4) ◽  
pp. 791-800 ◽  
Author(s):  
Gerd Schmitz ◽  
Otmar Bock ◽  
Valentina Grigorova ◽  
Steliana Borisova

1983 ◽  
Vol 56 (2) ◽  
pp. 355-358 ◽  
Author(s):  
Michael P. Sullivan ◽  
Robert W. Christina

The accuracy of a long aiming movement was studied as a function of whether it was performed toward or away from the midline of the subject's body, in the presence or absence of visual feedback. Thirty right-handed, male university students (19-26 yr.) served as subjects. With movement distance and duration controlled, the mean percentage of error was 6.34% less for movements made toward the body's midline than for those performed away from the midline. The mean percentage of error was also 48% less in the presence of visual feedback than in its absence. However, contrary to our expectation, movements executed toward the body's midline were not appreciably less disrupted in the absence of visual feedback than movements performed away from the midline.

