Visuomotor Adaptation Does Not Recalibrate Kinesthetic Sense of Felt Hand Path

2009 ◽  
Vol 101 (2) ◽  
pp. 614-623 ◽  
Author(s):  
Teser Wong ◽  
Denise Y. P. Henriques

Motor control relies on multiple sources of information. To estimate the position and motion of the hand, the brain uses both vision and the body-position senses (proprioception and kinesthesia) arising from sensors in the muscles, tendons, joints, and skin. Although performance is better when more than one sensory modality is available, visuomotor adaptation suggests that people rely much more heavily on visual information about the hand to guide their arm movements to targets, even when the visual and kinesthetic information about hand motion are in conflict. The aim of this study was to test whether adapting hand movements in response to false visual feedback of the hand recalibrates the kinesthetic sense of hand motion. The advantage of such cross-sensory recalibration would be to ensure on-line consistency between the senses. To test this, we mapped participants' sensitivity to tilted and curved hand paths and then examined whether adapting their hand movements in response to false visual feedback affected their felt sense of hand path. We found that participants could accurately estimate hand-path direction and curvature after adapting to false visual feedback of their hand when reaching to targets. Our results suggest that although vision can override kinesthesia to recalibrate arm motor commands, it does not recalibrate the kinesthetic sense of hand-path geometry.

2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensorimotor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed than in the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results cast doubt on the idea that computational and neural models of sensorimotor control developed exclusively from data obtained in target-directed paradigms are also valid for allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior; such models will have to be modified to accommodate performance in the allocentric tasks used in our experiments.
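The minimum-variance combination rule referred to here has a standard closed form: each cue is weighted in inverse proportion to its variance. A minimal sketch of that rule, in Python with illustrative numbers (the study itself reports no code):

```python
def combine_estimates(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance (maximum-likelihood) fusion of two independent
    position estimates, e.g. visual and proprioceptive feedback: each
    cue is weighted in inverse proportion to its variance."""
    w_vis = var_prop / (var_vis + var_prop)
    w_prop = var_vis / (var_vis + var_prop)
    x_hat = w_vis * x_vis + w_prop * x_prop
    var_hat = var_vis * var_prop / (var_vis + var_prop)  # below either cue alone
    return x_hat, var_hat

# When vision is the more reliable (lower-variance) cue, it dominates:
x_hat, var_hat = combine_estimates(x_vis=10.0, var_vis=1.0,
                                   x_prop=12.0, var_prop=4.0)
print(x_hat, var_hat)  # 10.4 0.8
```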


2005 ◽  
Vol 93 (6) ◽  
pp. 3200-3213 ◽  
Author(s):  
Robert A. Scheidt ◽  
Michael A. Conditt ◽  
Emanuele L. Secco ◽  
Ferdinando A. Mussa-Ivaldi

People tend to make straight and smooth hand movements when reaching for an object. These trajectory features are resistant to perturbation, and both proprioceptive and visual feedback may guide the adaptive updating of motor commands enforcing this regularity. How is information from the two senses combined to generate a coherent internal representation of how the arm moves? Here we show that eliminating visual feedback of hand-path deviations from the straight-line reach (constraining visual feedback of motion within a virtual "visual channel") prevents compensation of initial direction errors induced by perturbations. Because adaptive reduction in direction errors occurred with proprioception alone, proprioceptive and visual information are not combined in this reaching task using the fixed, linear weighting scheme reported for static tasks not requiring arm motion. A computer model can explain these findings, assuming that proprioceptive estimates of initial limb posture are used to select motor commands for a desired reach and that visual feedback of hand-path errors brings proprioceptive estimates into registration with a visuocentric representation of limb position relative to its target. Simulations demonstrate that initial configuration estimation errors lead to movement direction errors as observed experimentally. Registration improves movement accuracy when veridical visual feedback is provided but is not invoked when hand-path errors are eliminated. However, the visual channel did not exclude adjustment of terminal movement features maximizing hand-path smoothness. Thus visual and proprioceptive feedback may be combined in fundamentally different ways during trajectory control and final position regulation of reaching movements.
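One way to make the model's registration idea concrete is a sketch in which reaches are planned from the proprioceptive estimate of initial hand position plus a registration offset, and visual hand-path error drives that offset. Below is an illustrative one-dimensional Python sketch under those assumptions (the variable names and update gain are ours, not the authors' implementation):

```python
def plan_reach(proprio_estimate, registration_offset, target):
    """Select a movement vector from the proprioceptive estimate of the
    initial hand position after registration with the visuocentric frame."""
    registered_position = proprio_estimate + registration_offset
    return target - registered_position

def update_registration(registration_offset, visual_path_error, gain=0.2):
    """Visual feedback of hand-path error drives registration. Inside a
    'visual channel' that hides path deviations, visual_path_error stays
    zero and the offset is never updated, as observed experimentally."""
    return registration_offset + gain * visual_path_error

# With veridical vision, a biased proprioceptive estimate (here +2 cm)
# is progressively brought into registration; in the visual channel the
# bias persists and initial direction errors go uncorrected.
offset = 0.0
for _ in range(10):
    seen_error = -2.0 - offset          # visual error caused by the bias
    offset = update_registration(offset, seen_error)
print(plan_reach(proprio_estimate=2.0, registration_offset=offset, target=10.0))
```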


2007 ◽  
Vol 98 (5) ◽  
pp. 3081-3094 ◽  
Author(s):  
E. Macaluso ◽  
C. D. Frith ◽  
J. Driver

To perform eye or hand movements toward a relevant location, the brain must translate sensory input into motor output. Recent studies revealed segregation between circuits for translating visual information into saccadic or manual movements, but less is known about the translation of tactile information into such movements. Using human functional magnetic resonance imaging (fMRI) in a delay paradigm, we factorially crossed sensory modality (vision or touch) and motor effector (eyes or hands) for lateralized movements (gaze shifts to the left or right, or presses of a left or right button with the left or right hand located at that side). We investigated activity in the delay period between stimulation and response, asking whether the currently relevant side (left or right) was encoded during the delay according to sensory modality, upcoming motor response, or some interactive combination of these. Delay activity mainly reflected the motor response subsequently required. Irrespective of visual or tactile input, we found sustained activity in posterior parietal cortex, the frontal eye field, and contralateral visual cortex when subjects would later make an eye movement. For delays prior to a manual button-press response, activity increased in contralateral precentral regions, again regardless of the stimulated modality. The posterior superior temporal sulcus (STS) showed sustained delay activity irrespective of sensory modality, side, and response type. We conclude that the delay activations reflect translation of sensory signals into effector-specific motor circuits in parietal and frontal cortex (plus an impact on contralateral visual cortex for planned saccades), regardless of cue modality, whereas posterior STS provides a representation that generalizes across both sensory modality and motor effector.


2014 ◽  
Vol 111 (12) ◽  
pp. 2675-2687 ◽  
Author(s):  
Jennifer A. Semrau ◽  
Joel S. Perlmutter ◽  
Kurt A. Thoroughman

To perform simple everyday tasks, we use visual feedback from our external environment to generate and guide movements. However, tasks like reaching for a cup may become extremely difficult in movement disorders such as Parkinson's disease (PD), and it is unknown whether PD patients use visual information to compensate for motor deficiencies. To determine the effects of PD on the visual control of movement, we tested adaptation to changes in visual feedback of the hand in three subject groups: PD patients on daily levodopa (l-dopa) therapy (PD ON), PD patients off l-dopa (PD OFF), and age-matched control subjects. Subjects were tested on two classes of visual perturbations, one that altered the visual direction of movement and one that altered the visual extent of movement, allowing us to test adaptive sensitivity to changes in both movement direction (visual rotations) and extent (visual gain). The PD OFF group displayed more complete adaptation to visuomotor rotations than control subjects but initial, transient difficulty with adaptation to visual gain perturbations. The PD ON group displayed feedback control that was more sensitive to visual error than that of control subjects but, compared with the PD OFF group, showed mild impairments during adaptation to changes in visual extent. We conclude that PD subjects can adapt to changes in visual information but that l-dopa may impair vision-based motor adaptation.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea has emerged that it reflects a mixture of both. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and phenomenological reports were measured. Our study revealed that, after training, processes shared with vision were involved when participants identified sounds, as their performance in sound identification was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.


2000 ◽  
Vol 84 (4) ◽  
pp. 1708-1718 ◽  
Author(s):  
Andrew B. Slifkin ◽  
David E. Vaillancourt ◽  
Karl M. Newell

The purpose of the current investigation was to examine the influence of intermittency in visual information processes on intermittency in the control of continuous force production. Adult human participants were required to maintain force at, and minimize variability around, a force target over an extended duration (15 s), while the intermittency of on-line visual feedback presentation was varied across conditions. This was accomplished by varying the frequency of successive force-feedback deliveries presented on a video display. As a function of a 128-fold increase in feedback frequency (0.2 to 25.6 Hz), performance quality improved according to hyperbolic functions (e.g., force variability decayed), reaching asymptotic values near the 6.4-Hz feedback frequency level. Thus, the briefest interval over which visual information could be integrated and used to correct errors in motor output was approximately 150 ms. The observed reductions in force variability were correlated with parallel declines in spectral power at about 1 Hz in the frequency profile of force output. In contrast, power at higher frequencies in the force output spectrum was uncorrelated with increases in feedback frequency. Thus, there was a considerable gap between the rate at which motor output corrections were generated (1 Hz) and the rate at which visual feedback information was processed (6.4 Hz). To reconcile these differences in visual and motor processing times, we proposed a model in which error information is accumulated by visual information processes at a maximum frequency of 6.4 samples per second, and the motor system generates a correction on the basis of the accumulated information at the end of each 1-s interval.
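The proposed scheme can be simulated in a few lines: visual error is sampled at up to 6.4 samples/s (one sample per ~156 ms at the simulation step used here), and a single correction based on the accumulated error is issued at the end of each 1-s interval. A minimal Python sketch, assuming an arbitrary noise level and correction gain (not the authors' fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1 / 25.6          # simulation step (s); the finest feedback rate used
feedback_hz = 6.4      # maximum rate at which visual error is integrated
correction_hz = 1.0    # rate at which motor corrections are issued

force, target = 0.0, 1.0
accumulated_error, n_samples = 0.0, 0
steps_per_sample = round(1 / (feedback_hz * dt))        # 4 steps = 156 ms
steps_per_correction = round(1 / (correction_hz * dt))  # 26 steps = ~1 s

trace = []
for step in range(round(15 / dt)):        # one 15-s trial, as in the task
    force += rng.normal(0.0, 0.01)        # motor noise drifts the output
    if step % steps_per_sample == 0:      # visual sampling at <= 6.4 Hz
        accumulated_error += target - force
        n_samples += 1
    if step % steps_per_correction == 0 and n_samples > 0:
        force += 0.5 * accumulated_error / n_samples  # one correction per ~1 s
        accumulated_error, n_samples = 0.0, 0
    trace.append(force)

print(f"force SD over the trial: {np.std(trace):.3f}")
```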


Author(s):  
James Davis ◽  
Mubarak Shah

This paper presents a glove-free method for tracking hand movements using a set of 3-D models. In this approach, the hand is represented by five cylindrical models, which are fit to the third phalangeal segments of the fingers. For each model, six 3-D motion parameters are calculated, corresponding to the movement of the fingertips in the image plane. Trajectories of the moving models are then established to show the 3-D nature of the hand motion.
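The representation lends itself to a compact data structure: five cylinders, each carrying six rigid-motion parameters (three translations, three rotations), with trajectories recovered by stacking per-frame translations. A minimal Python sketch under those assumptions (field names and the Euler-angle parameterization are illustrative; the paper's exact parameterization may differ):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FingertipCylinder:
    """One of five cylinders fit to the third phalangeal segment of a
    finger; its rigid motion between frames is described by six
    parameters: three translations and three rotations."""
    radius: float
    length: float
    translation: np.ndarray = field(default_factory=lambda: np.zeros(3))
    rotation: np.ndarray = field(default_factory=lambda: np.zeros(3))

# A hand is five such models, one per fingertip.
hand = [FingertipCylinder(radius=0.7, length=2.5) for _ in range(5)]

def fingertip_trajectories(frames):
    """Stack each cylinder's translation across frames, yielding an
    (n_frames, 5, 3) array of 3-D fingertip paths."""
    return np.stack([[cyl.translation for cyl in frame] for frame in frames])
```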


2006 ◽  
Vol 95 (2) ◽  
pp. 922-931 ◽  
Author(s):  
David E. Vaillancourt ◽  
Mary A. Mayka ◽  
Daniel M. Corcos

The cerebellum, parietal cortex, and premotor cortex are integral to visuomotor processing. The parameters of visual information that modulate their role in visuomotor control are less clear. From motor psychophysics, the relation between the frequency of visual feedback and force variability has been identified as nonlinear. Thus we hypothesized that visual feedback frequency would differentially modulate the neural activation in the cerebellum, parietal cortex, and premotor cortex related to visuomotor processing. We used functional magnetic resonance imaging at 3 Tesla to examine visually guided grip force control under frequent and infrequent visual feedback conditions. Control conditions with intermittent visual feedback alone and a control force condition without visual feedback were examined. As expected, force variability was reduced in the frequent compared with the infrequent condition. Three novel findings were identified. First, infrequent (0.4 Hz) visual feedback did not result in visuomotor activation in the lateral cerebellum (lobule VI/Crus I), whereas frequent (25 Hz) intermittent visual feedback did. This is in contrast to the anterior intermediate cerebellum (lobule V/VI), which was consistently active across all force conditions compared with rest. Second, confirming previous observations, the parietal and premotor cortices were active during grip force with frequent visual feedback. The novel finding was that the parietal and premotor cortices were also active during grip force with infrequent visual feedback. Third, the right inferior parietal lobule, dorsal premotor cortex, and ventral premotor cortex showed greater activation in the frequent compared with the infrequent grip force condition. These findings demonstrate that increasing the frequency of visual feedback reduces motor error and differentially modulates the neural activation related to visuomotor processing in the cerebellum, parietal cortex, and premotor cortex.


2018 ◽  
Author(s):  
Janna M. Gottwald

This thesis assesses the link between action and cognition early in development. The notion of embodied cognition is investigated by tying together two levels of action control in the context of reaching in infancy: prospective motor control and executive functions. The ability to plan our actions is the indispensable foundation of reaching our goals, and actions can be stratified across different levels of control. There is the relatively low level of prospective motor control and the comparatively high level of cognitive control. Prospective motor control is concerned with goal-directed actions at the level of single movements and movement combinations of our body and ensures purposeful, coordinated movements, such as reaching for a cup of coffee. Cognitive control, in the context of this thesis more precisely referred to as executive functions, deals with goal-directed actions at the level of whole actions and action combinations and facilitates directedness toward mid- and long-term goals, such as finishing a doctoral thesis. Whereas prospective motor control and executive functions are well studied in adulthood, the early development of both is not sufficiently understood.

This thesis comprises three empirical motion-tracking studies that shed light on prospective motor control and executive functions in infancy. Study I investigated the prospective motor control of current actions by having 14-month-olds lift objects of varying weights. In doing so, multi-cue integration was addressed by comparing the use of visual and non-visual information to non-visual information only. Study II examined the prospective motor control of future actions in action sequences by investigating reach-to-place actions in 14-month-olds, addressing the extent to which Fitts' law can explain movement duration in infancy. Study III lifted prospective motor control to a higher, that is, cognitive level by investigating it relative to executive functions in 18-month-olds.

The main results were that 14-month-olds are able to prospectively control their manual actions based on object weight. In this action-planning process, infants use different sources of information. Beyond this ability to prospectively control their current action, 14-month-olds also take future actions into account and plan their actions based on the difficulty of the subsequent action in action sequences. In 18-month-olds, prospective motor control in manual actions, such as reaching, is related to early executive functions, as demonstrated for behavioral prohibition and working memory. These findings are consistent with the idea that executive functions derive from prospective motor control. I suggest that executive functions could be grounded in the development of motor control; in other words, early executive functions should be seen as embodied.


2018 ◽  
Author(s):  
Ahmed A. Mostafa ◽  
Bernard Marius ’t Hart ◽  
Denise Y.P. Henriques

An accurate estimate of limb position is necessary for movement planning, both before and after motor learning. Where we localize our unseen hand after a reach depends on felt hand position, or proprioception, but in studies and theories on motor adaptation this is quite often neglected in favour of predicted sensory consequences based on efference copies of motor commands. Both sources of information should contribute, so here we set out to further investigate how much of hand localization depends on proprioception and how much on predicted sensory consequences. We used a training paradigm combining robot-controlled hand movements with rotated visual feedback that eliminates the possibility of updating predicted sensory consequences (‘exposure training’) but still recalibrates proprioception, as well as a classic training paradigm with self-generated movements in another set of participants. After each kind of training we measured participants’ hand location estimates based on both efference-based predictions and afferent proprioceptive signals with self-generated hand movements (‘active localization’), as well as based on proprioception only with robot-generated movements (‘passive localization’). In the exposure training group, we found indistinguishable shifts in passive and active hand localization, but after classic training, active localization shifted more than passive, indicating a contribution from updated predicted sensory consequences. Both changes in open-loop reaches and hand localization were only slightly smaller after exposure training than after classic training, confirming that proprioception plays a large role in estimating limb position and in planning movements, even after adaptation. (Data: https://doi.org/10.17605/osf.io/zfdth; preprint: https://doi.org/10.1101/384941)
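The logic of the active/passive comparison can be spelled out as simple arithmetic (the numbers below are hypothetical, not the study's data): passive localization isolates recalibrated proprioception, active localization adds efference-based predictions, so the prediction component is estimated as the active-minus-passive difference.

```python
# Hypothetical localization shifts in degrees; not the study's data.
active_classic, passive_classic = 9.0, 6.0     # after classic training
active_exposure, passive_exposure = 6.0, 6.0   # after exposure training

# Passive shift = proprioceptive recalibration alone.
# Active shift  = proprioception + updated predicted sensory consequences.
prediction_classic = active_classic - passive_classic     # 3.0: predictions updated
prediction_exposure = active_exposure - passive_exposure  # 0.0: no update possible

print(prediction_classic, prediction_exposure)
```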

