Impairments of reaching movements in patients without proprioception. II. Effects of visual information on accuracy

1995 ◽  
Vol 73 (1) ◽  
pp. 361-372 ◽  
Author(s):  
C. Ghez ◽  
J. Gordon ◽  
M. F. Ghilardi

1. The aim of this study was to determine how vision of a cursor indicating hand position on a computer screen or vision of the limb itself improves the accuracy of reaching movements in patients deprived of limb proprioception due to large-fiber sensory neuropathy. In particular, we wished to ascertain the contribution of such information to improved planning rather than to feedback corrections. We analyzed spatial errors and hand trajectories of reaching movements made by subjects moving a hand-held cursor on a digitizing tablet while viewing targets displayed on a computer screen. The errors made when movements were performed without vision of the arm or of a screen cursor were compared with errors made when this information was available concurrently or prior to movement.
2. Both monitoring the screen cursor and seeing their limb in peripheral vision during movement improved the accuracy of the patients' movements. Improvements produced by seeing the cursor during movement are attributable simply to feedback corrections. However, because the target was not present in the actual workspace, improvements associated with vision of the limb must involve more complex corrective mechanisms.
3. Significant improvements in performance also occurred in trials without vision that were performed after viewing the limb at rest or during movements. In particular, prior vision of the limb in motion improved the ability of patients to vary the duration of movements in different directions so as to compensate for the inertial anisotropy of the limb. In addition, there were significant reductions in directional errors, path curvature, and late secondary movements. Comparable improvements in extent, direction, and curvature were produced when subjects could see the screen cursor during alternate movements to targets in different directions.
4. The effects of viewing the limb were transient and decayed during a period of minutes once vision of the limb was no longer available.
5. It is proposed that the improvements in performance produced after vision of the limb were mediated by the visual updating of internal models of the limb. Vision of the limb at rest may provide configuration information, while vision of the limb in motion provides additional dynamic information. Vision of the cursor, and the resulting ability to correct ongoing movements, is considered primarily to provide information about the dynamic properties of the limb and its response to neural commands.

2006 ◽  
Vol 96 (1) ◽  
pp. 352-362 ◽  
Author(s):  
Sabine M. Beurze ◽  
Stan Van Pelt ◽  
W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
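The hand-to-target difference vector described in this abstract can be sketched numerically. A minimal illustration of computing the vector at an eye-centered stage (the 2-D coordinates, eye position, and gaze angle below are arbitrary assumptions for demonstration, not values from the study):

```python
import numpy as np

def to_eye_frame(p_body, eye_pos, gaze_angle):
    """Express a body-centered 2-D position in an eye-centered frame:
    translate to the eye's origin, then rotate by -gaze_angle so the
    gaze direction defines the frame's orientation."""
    c, s = np.cos(-gaze_angle), np.sin(-gaze_angle)
    rot = np.array([[c, -s], [s, c]])
    return rot @ (p_body - eye_pos)

# Hypothetical example positions (meters) and gaze direction
hand_body   = np.array([0.10, 0.30])
target_body = np.array([0.25, 0.55])
eye_pos     = np.array([0.00, 0.60])
gaze        = np.deg2rad(15)

hand_eye   = to_eye_frame(hand_body, eye_pos, gaze)
target_eye = to_eye_frame(target_body, eye_pos, gaze)

# Movement vector computed at the eye-centered stage
diff_vec = target_eye - hand_eye
```

Because the transform is rigid, the planned movement amplitude is identical in either frame; the reference frame analysis of the pointing errors instead asks at which stage (eye-, hand-, or body-centered) the noise and gain modulation enter.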


1995 ◽  
Vol 73 (1) ◽  
pp. 347-360 ◽  
Author(s):  
J. Gordon ◽  
M. F. Ghilardi ◽  
C. Ghez

1. This paper introduces a series of studies in which we analyze the impairments in a planar reaching task in human patients with severe proprioceptive deficits resulting from large-fiber sensory neuropathy. We studied three patients, all of whom showed absence of discriminative tactile sensation, position sense, and stretch reflexes in the upper extremities. Muscle strength was normal. We compared the reaching movements of the patients with those of normal control subjects. The purpose of this first paper was to characterize the spatial errors in these patients that result primarily from impairments in the planning and execution of movement rather than in feedback control. This was done by using a task in which visual feedback of errors during movement was prevented.
2. Subjects were instructed to move their hand from given starting positions to different targets on a horizontal digitizing tablet. Hand position and targets were displayed on a computer screen. Subjects could not see their hand, and the screen display of hand position was blanked at the signal to move. Thus visual feedback during movement could not be used to achieve accuracy. Movement paths were displayed as knowledge of results after each trial.
3. Compared with controls, the patients made large spatial errors in both movement direction and extent. Directional errors were evident from movement onset, suggesting that they resulted from improper planning. In addition, patients' hand paths showed large curves and secondary movements after initial stops.
4. The overall control strategy used by patients appeared the same as that used by controls. Hand trajectories were approximately bell shaped, and movement extent was controlled by scaling a trajectory waveform in amplitude and time. However, both control subjects and patients showed systematic errors in movement extent that depended on the direction of hand movement. In control subjects, these systematic dependencies of extent on direction were small, but in patients they produced large and prominent errors. Analysis of the hand trajectories revealed that errors were associated with differences in velocity and acceleration for movements in different directions. In an earlier study, we showed that in subjects with normal sensation the dependence of acceleration and velocity on direction results from a failure to take the inertial properties of the limb into account in programming the initial trajectory. In control subjects, these differences in initial acceleration are partially compensated by direction-dependent variations in movement time. (ABSTRACT TRUNCATED AT 400 WORDS)


2018 ◽  
Vol 119 (5) ◽  
pp. 1981-1992 ◽  
Author(s):  
Laura Mikula ◽  
Valérie Gaveau ◽  
Laure Pisella ◽  
Aarlenne Z. Khan ◽  
Gunnar Blohm

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability, requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights rather than effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, the latter hypothesis predicts the same integration weights for both hands. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with a high interindividual range but independent of each hand's specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand is integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
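The reliability-weighted (Bayes-optimal) integration rule that frames this debate can be written down in a few lines. A minimal sketch, where the variance values are made-up numbers rather than measurements from the study:

```python
def fuse_estimates(x_vis, var_vis, x_prop, var_prop):
    """Combine visual and proprioceptive position estimates with
    weights proportional to their reliabilities (inverse variances)."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop
    # Fused variance is never larger than either input variance
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat, w_vis

# Vision four times more reliable than proprioception (hypothetical numbers)
x_hat, var_hat, w_vis = fuse_estimates(x_vis=0.0, var_vis=1.0,
                                       x_prop=1.0, var_prop=4.0)
# w_vis = 0.8, x_hat = 0.2, var_hat = 0.8
```

The study's contrast amounts to asking whether `var_prop` in this rule is the effector-specific variability measured for each hand, or a learned modality-level constant; the authors' data favor the latter.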


2012 ◽  
Vol 25 (0) ◽  
pp. 58
Author(s):  
Katrina Quinn ◽  
Francia Acosta-Saltos ◽  
Jan W. de Fockert ◽  
Charles Spence ◽  
Andrew J. Bremner

Information about where our hands are arises from different sensory modalities, chiefly proprioception and vision. These inputs differ in variability from situation to situation (or task to task). According to the idea of 'optimal integration', the information provided by different sources is combined in proportion to their relative reliabilities, thus maximizing the reliability of the combined estimate. It is uncertain whether optimal integration of multisensory contributions to limb position requires executive resources. If so, then it should be possible to observe effects of secondary task performance and/or working memory load (WML) on the relative weighting of the senses under conditions of crossmodal sensory conflict. Alternatively, an integrated signal may be affected by upstream influences of WML or a secondary task on the reliabilities of the individual sensory inputs. We examined these possibilities in two experiments in which WML was manipulated during reaching tasks with bisensory visual-proprioceptive (Exp. 1) or unisensory proprioceptive (Exp. 2) cues to hand position. WML increased visual capture under conditions of visual-proprioceptive conflict, regardless of the direction of the conflict and the degree of load imposed. This indicates that task-switching (rather than WML per se) leads to an increased reliance on visual information regardless of its task-specific reliability (Exp. 1). This could not be explained by an increase in the variability of proprioception under secondary working memory task conditions (Exp. 2). We conclude that executive resources are involved in the relative weighting of visual and proprioceptive cues to hand position.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 127-127
Author(s):  
M Desmurget ◽  
Y Rossetti ◽  
C Prablanc

The question of whether movement accuracy is better in the full open-loop condition (FOL, hand never visible) than in the static closed-loop condition (SCL, hand visible only prior to movement onset) remains widely debated. To investigate this controversial question, we studied conditions in which the visual information available to the subject prior to movement onset was strictly controlled. The results of our investigation showed that the accuracy improvement observed when human subjects were allowed to see their hand, in the peripheral visual field, prior to movement: (1) concerned only the variable errors; (2) did not depend on the simultaneous vision of the hand and target (hand and target viewed simultaneously vs sequentially); (3) remained significant when pointing to proprioceptive targets; and (4) was not suppressed when the visual information was temporally (visual presentation for less than 300 ms) or spatially (vision of only the index fingertip) restricted. In addition, dissociating vision and proprioception with wedge prisms showed that a weighted hand position was used to program the hand trajectory. When considered together, these results suggest that: (i) knowledge of the initial upper limb configuration or position is necessary to plan goal-directed movements accurately; (ii) static proprioceptive receptors are partially ineffective in providing an accurate estimate of the limb posture and/or hand location relative to the body; and (iii) visual and proprioceptive information is not used in an exclusive way, but combined to furnish an accurate representation of the state of the effector prior to movement.


2020 ◽  
Vol 10 (9) ◽  
pp. 3066 ◽  
Author(s):  
Yuki Sakazume ◽  
Sho Furubayashi ◽  
Eizo Miyashita

An eye saccade provides appropriate visual information for motor control. The present study aimed to reveal the role of saccades in hand movements. Two types of movements, i.e., hitting and circle-drawing movements, were adopted, and saccades during the movements were classified as either a leading saccade (LS) or catching saccade (CS) depending on the gaze position of the saccade relative to the hand position. The ratio of the two saccade types during the movements depended heavily on the skillfulness of the subjects. In the late phase of the movements in a less skillful subject, CS tended to occur in less precise movements, and movement precision tended to improve in the subsequent hitting movement. While LS directing gaze to a target point was observed in both types of movements regardless of the skillfulness of the subjects, LS between a start point and a target point, which led gaze to a local minimum-variance point on the hand movement trajectory, was found exclusively in the drawing movements of a less skillful subject. These results suggest that LS may provide positional information about via-points in addition to the target point, while some types of CS may provide visual information that improves the precision of a feedforward controller in the brain.


2000 ◽  
Vol 83 (6) ◽  
pp. 3230-3240 ◽  
Author(s):  
Joseph V. Cohn ◽  
Paul DiZio ◽  
James R. Lackner

Subjects who are in an enclosed chamber rotating at constant velocity feel physically stationary but make errors when pointing to targets. Reaching paths and endpoints are deviated in the direction of the transient inertial Coriolis forces generated by their arm movements. By contrast, reaching movements made during natural, voluntary torso rotation seem to be accurate, and subjects are unaware of the Coriolis forces generated by their movements. This pattern suggests that the motor plan for reaching movements uses a representation of body motion to prepare compensations for impending self-generated accelerative loads on the arm. If so, stationary subjects who are experiencing illusory self-rotation should make reaching errors when pointing to a target. These errors should be in the direction opposite the Coriolis accelerations their arm movements would generate if they were actually rotating. To determine whether such compensations exist, we had subjects in four experiments make visually open-loop reaches to targets while they were experiencing compelling illusory self-rotation and displacement induced by rotation of a complex, natural visual scene. The paths and endpoints of their initial reaching movements were significantly displaced leftward during counterclockwise illusory rotary displacement and rightward during clockwise illusory self-displacement. Subjects reached in a curvilinear path to the wrong place. These reaching errors were opposite in direction to the Coriolis forces that would have been generated by their arm movements during actual torso rotation. The magnitude of path curvature and endpoint errors increased as the speed of illusory self-rotation increased. In successive reaches, movement paths became straighter and endpoints more accurate despite the absence of visual error feedback or tactile feedback about target location. 
When subjects were again presented a stationary scene, their initial reaches were indistinguishable from pre-exposure baseline, indicating a total absence of aftereffects. These experiments demonstrate that the nervous system automatically compensates in a context-specific fashion for the Coriolis forces associated with reaching movements.


2016 ◽  
Vol 28 (11) ◽  
pp. 1828-1837 ◽  
Author(s):  
Emiliano Brunamonti ◽  
Aldo Genovesio ◽  
Pierpaolo Pani ◽  
Roberto Caminiti ◽  
Stefano Ferraina

Reaching movements require the integration of both somatic and visual information. These signals can have different relevance depending on whether reaches are performed toward visual or memorized targets. We tested the hypothesis that, depending on target visibility, posterior parietal neurons integrate somatic and visual signals differently. Monkeys were trained to execute both types of reaches from different hand resting positions and in total darkness. Neural activity was recorded in Area 5 (PE) and analyzed by focusing on the preparatory epoch, that is, before movement initiation. Many neurons were influenced by the initial hand position, and most of them were further modulated by target visibility. For the same starting position, we found a prevalence of neurons whose activity differed depending on whether the hand movement was performed toward memorized or visual targets. This result suggests that the posterior parietal cortex integrates available signals in a flexible way based on contextual demands.


2012 ◽  
Vol 108 (6) ◽  
pp. 1764-1780 ◽  
Author(s):  
Ignasi Cos ◽  
Farid Medleg ◽  
Paul Cisek

Recent work has shown that human subjects are able to predict the biomechanical ease of potential reaching movements and use these predictions to influence their choices. Here, we examined how reach decisions are influenced by specific biomechanical factors related to the control of end-point stability, such as aiming accuracy or stopping control. Human subjects made free choices between two potential reaching movements that varied in terms of path distance and biomechanical cost in four separate blocks that additionally varied two constraints: the width of the targets (narrow or wide) and the requirement of stopping in them. When movements were unconstrained (very wide targets and no requirement of stopping), subjects' choices were strongly biased toward directions aligned with the direction of maximal mobility. However, as the movements became progressively constrained, factors related to the control of the end point gained relevance, thus reducing this bias. This demonstrates that, before movement onset, constraints such as stopping and aiming participate in a remarkably adaptive and flexible action selection process that trades off the advantage of moving along directions of maximal mobility for unconstrained movements against exploiting biomechanical anisotropies to facilitate control of end-point stability whenever the movement constraints require it. These results support a view of decision making between motor actions as a highly context-dependent gradual process in which the subjective desirability of potential actions is influenced by their dynamic properties in relation to the intrinsic properties of the motor apparatus.

