Motor Learning Without Moving: Proprioceptive and Predictive Hand Localization After Passive Visuoproprioceptive Discrepancy Training

2018 ◽  
Author(s):  
Ahmed A. Mostafa ◽  
Bernard Marius ’t Hart ◽  
Denise Y.P. Henriques

Abstract
An accurate estimate of limb position is necessary for movement planning, before and after motor learning. Where we localize our unseen hand after a reach depends on felt hand position, or proprioception, but studies and theories on motor adaptation often neglect this in favour of predicted sensory consequences based on efference copies of motor commands. Both sources of information should contribute, so here we set out to further investigate how much of hand localization depends on proprioception and how much on predicted sensory consequences. We use a training paradigm combining robot-controlled hand movements with rotated visual feedback that eliminates the possibility of updating predicted sensory consequences (‘exposure training’) but still recalibrates proprioception, as well as a classic training paradigm with self-generated movements in another set of participants. After each kind of training we measure participants’ hand location estimates based on both efference-based predictions and afferent proprioceptive signals with self-generated hand movements (‘active localization’), as well as based on proprioception only with robot-generated movements (‘passive localization’). In the exposure training group, we find indistinguishable shifts in passive and active hand localization, but after classic training, active localization shifts more than passive, indicating a contribution from updated predicted sensory consequences. Both changes in open-loop reaches and hand localization are only slightly smaller after exposure training than after classic training, confirming that proprioception plays a large role in estimating limb position and in planning movements, even after adaptation. (data: https://doi.org/10.17605/osf.io/zfdth, preprint: https://doi.org/10.1101/384941)
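Several of the entries below rely on a visuomotor rotation, in which the cursor shown to the participant is the hand position rotated about the home (start) position. The following is a minimal sketch of that manipulation only; the 30° angle, sign convention, and function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rotated_cursor(hand_xy, home_xy, angle_deg=30.0):
    """Return the cursor position shown to the participant: the true hand
    position rotated by `angle_deg` about the home position.
    Angle magnitude and sign are illustrative assumptions."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(home_xy) + rot @ (np.asarray(hand_xy) - np.asarray(home_xy))

# Example: a straight reach 10 cm ahead is displayed rotated by 30 degrees,
# so the reach direction must adapt to bring the cursor onto the target.
home = np.array([0.0, 0.0])
print(rotated_cursor([0.0, 10.0], home))  # cursor deviates from the actual hand path
```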

2018 ◽  
Author(s):  
Shanaathanan Modchalingam ◽  
Chad Michael Vachon ◽  
Bernard Marius ’t Hart ◽  
Denise Y. P. Henriques

Abstract
Explicit awareness of a task is often evoked during rehabilitation and sports training with the intention of accelerating learning and improving performance. However, the effects of awareness of perturbations on the resulting sensory and motor changes produced during motor learning are not well understood. Here, we use explicit instructions as well as large rotation sizes to generate awareness of the perturbation during a visuomotor rotation task, and we test the resulting changes in both perceived and predicted sensory consequences as well as implicit motor changes.

We split participants into four groups that differ in both the magnitude of the rotation during adaptation (either 30° or 60°) and whether or not they receive a strategy to counter the rotation prior to adaptation. Performance benefits of explicit instruction are largest during early adaptation but continue to yield improved performance throughout 90 trials of training. We show that with either instruction or large perturbations, participants become aware of countering the rotation. However, we find a baseline amount of implicit learning, of equal magnitude across all groups, even when participants are asked to exclude any strategies while reaching with no visual feedback of the hand.

Participants also estimate the location of the unseen hand when it is moved by the robot (passive localization) and when they generate their own movement (active localization) following adaptation. These learning-induced shifts in estimates of hand position reflect both proprioceptive recalibration and updates in the predicted consequences of movements. We find that these estimates of felt hand position shift significantly for all groups and are not modulated by either instruction or perturbation size.

Our results indicate that not all processes of motor learning benefit from an explicit awareness of the task. In particular, proprioceptive recalibration and the updating of predicted sensory consequences are largely implicit processes.


2018 ◽  
Author(s):  
Shanaathanan Modchalingam ◽  
Chad Vachon ◽  
Bernard Marius 't Hart ◽  
Denise Henriques

Awareness of task demands is often promoted during rehabilitation and sports training by providing instructions, which appears to accelerate learning and improve performance through explicit motor learning. However, the effects of awareness of perturbations on the changes in estimates of hand position resulting from motor learning are not well understood. In this study, people adapted their reaches to a visuomotor rotation while either receiving instructions on the nature of the perturbation, experiencing a large rotation, or both, in order to generate awareness of the perturbation and increase the contribution of explicit learning. We found that instructions and/or larger rotations allowed people to activate or deactivate part of the learned strategy at will and elicited explicit changes in open-loop reaches, while a small rotation without instructions did not. However, these differences in awareness, and even manipulations of awareness and perturbation size, did not appear to affect learning-induced changes in hand-localization estimates. This was true when estimates of the adapted hand location reflected changes in proprioception, produced when the hand was displaced by a robot, and also when hand location estimates were based on efference-based predictions of self-generated hand movements. In other words, visuomotor adaptation led to significant shifts in predicted and perceived hand location that were not modulated by either instruction or perturbation size. Our results indicate that not all outcomes of motor learning benefit from an explicit awareness of the task. In particular, proprioceptive recalibration and the updating of predicted sensory consequences appear to be largely implicit. (data: DOI 10.17605/OSF.IO/MX5U2, preprint: DOI[url])


2012 ◽  
Vol 108 (1) ◽  
pp. 187-199 ◽  
Author(s):  
Christopher A. Buneo ◽  
Richard A. Andersen

Previous findings suggest the posterior parietal cortex (PPC) contributes to arm movement planning by transforming target and limb position signals into a desired reach vector. However, the neural mechanisms underlying this transformation remain unclear. In the present study we examined the responses of 109 PPC neurons as movements were planned and executed to visual targets presented over a large portion of the reaching workspace. In contrast to previous studies, movements were made without concurrent visual and somatic cues about the starting position of the hand. For comparison, a subset of neurons was also examined with concurrent visual and somatic hand position cues. We found that single cells integrated target and limb position information in a very consistent manner across the reaching workspace. Approximately two-thirds of the neurons with significantly tuned activity (42/61 and 30/46 for left and right workspaces, respectively) coded targets and initial hand positions separably, indicating no hand-centered encoding, whereas the remaining one-third coded targets and hand positions inseparably, in a manner more consistent with the influence of hand-centered coordinates. The responses of both types of neurons were largely invariant with respect to the presence or absence of visual hand position cues, suggesting their corresponding coordinate frames and gain effects were unaffected by cue integration. The results suggest that the PPC uses a consistent scheme for computing reach vectors in different parts of the workspace that is robust to changes in the availability of somatic and visual cues about hand position.
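The distinction between separable (gain-field) and inseparable (hand-centered) coding can be made concrete with a generic illustration: if a cell's mean response over a grid of target and initial hand positions factors into a target tuning function multiplied by a hand-position gain, the response matrix is approximately rank one. The singular-value test below is one common generic approach to this question and is not necessarily the analysis used in this study; the tuning functions, grid, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
targets = np.linspace(-40.0, 40.0, 9)   # target locations (deg), illustrative
hands = np.linspace(-20.0, 20.0, 5)     # initial hand positions (deg), illustrative

def separable_cell(t, h):
    """Target tuning multiplied by a hand-position gain: a rank-one (separable) field."""
    return np.exp(-(t - 10.0) ** 2 / 400.0) * (1.0 + 0.02 * h)

def hand_centered_cell(t, h):
    """Tuning for the target-minus-hand difference: an inseparable field."""
    return np.exp(-((t - h) - 10.0) ** 2 / 400.0)

def separability_index(cell):
    """Fraction of response variance captured by the first singular component."""
    R = np.array([[cell(t, h) for h in hands] for t in targets])
    R = R + 0.01 * rng.standard_normal(R.shape)   # measurement noise
    s = np.linalg.svd(R, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

print("separable cell:    ", round(separability_index(separable_cell), 3))
print("hand-centered cell:", round(separability_index(hand_centered_cell), 3))
```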


2006 ◽  
Vol 96 (1) ◽  
pp. 352-362 ◽  
Author(s):  
Sabine M. Beurze ◽  
Stan Van Pelt ◽  
W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
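As a concrete illustration of the computation probed here, a reach vector can be formed by expressing both the hand and the target in the same frame (here, eye-centered) and subtracting. The sketch below uses simple 2-D positions and a translation-only gaze shift; all numbers and names are illustrative assumptions, not the authors' model.

```python
import numpy as np

def to_eye_centered(pos_body, gaze_body):
    """Express a body-centered position relative to the fixation point.
    A pure translation is used for simplicity; eye rotation is ignored."""
    return np.asarray(pos_body) - np.asarray(gaze_body)

# Illustrative body-centered positions (cm)
gaze = np.array([5.0, 30.0])      # fixation point
hand = np.array([-10.0, 20.0])    # initial hand position
target = np.array([15.0, 35.0])   # reach target

# Hand-to-target difference vector computed in the eye-centered frame
movement_vector = to_eye_centered(target, gaze) - to_eye_centered(hand, gaze)
print(movement_vector)  # equals target - hand, but derived from eye-centered signals
```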


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 127-127
Author(s):  
M Desmurget ◽  
Y Rossetti ◽  
C Prablanc

The problem of whether movement accuracy is better in the full open-loop condition (FOL, hand never visible) than in the static closed-loop condition (SCL, hand only visible prior to movement onset) remains widely debated. To investigate this controversial question, we studied conditions in which the visual information available to the subject prior to movement onset was strictly controlled. The results of our investigation showed that the accuracy improvement observed when human subjects were allowed to see their hand, in the peripheral visual field, prior to movement: (1) concerned only the variable errors; (2) did not depend on simultaneous vision of the hand and target (hand and target viewed simultaneously vs sequentially); (3) remained significant when pointing to proprioceptive targets; and (4) was not suppressed when the visual information was temporally (visual presentation for less than 300 ms) or spatially (vision of only the index fingertip) restricted. In addition, dissociating vision and proprioception with wedge prisms showed that a weighted hand position estimate was used to program the hand trajectory. When considered together, these results suggest that: (i) knowledge of the initial upper limb configuration or position is necessary to accurately plan goal-directed movements; (ii) static proprioceptive receptors are partially ineffective in providing an accurate estimate of limb posture and/or hand location relative to the body; and (iii) visual and proprioceptive information are not used exclusively, but are combined to furnish an accurate representation of the state of the effector prior to movement.
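The "weighted hand position" finding is commonly formalized as a reliability-weighted combination of visual and proprioceptive estimates. The sketch below shows the standard inverse-variance weighting, which is a textbook model rather than the specific scheme tested by the authors; the displacement and variances are illustrative.

```python
import numpy as np

def combine_estimates(x_vis, var_vis, x_prop, var_prop):
    """Reliability-weighted (inverse-variance) combination of two position cues."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * np.asarray(x_vis) + (1.0 - w_vis) * np.asarray(x_prop)
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat

# Illustrative: a wedge prism displaces the seen hand 5 cm to the right of the felt hand.
seen_hand = np.array([5.0, 0.0])   # visually signalled hand position (cm)
felt_hand = np.array([0.0, 0.0])   # proprioceptively signalled hand position (cm)
x_hat, _ = combine_estimates(seen_hand, 1.0, felt_hand, 4.0)
print(x_hat)  # lies between the seen and felt positions, closer to the more reliable cue
```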


2012 ◽  
Vol 25 (0) ◽  
pp. 119 ◽  
Author(s):  
Jonathan Schubert ◽  
Brigitte Roeder ◽  
Tobias Heed

Crossing effects in temporal order judgment (TOJ) have been interpreted to indicate remapping of touch from somatotopic into external spatial coordinates. Such crossing effects have been reported to be absent in the congenitally blind, presumably indicating that they do not, by default, remap touch (e.g., Röder et al., 2004). Here, we devised a TOJ task in which participants, trial by trial, took on an uncrossed or crossed start posture and executed a cued movement with both arms into an uncrossed or crossed end posture. When stimulated during movement planning (i.e., before movement execution into the end posture), sighted participants’ performance was affected both by start posture (i.e., the posture during stimulation) as well as end posture (i.e., the currently planned posture). In contrast, blind participants showed a crossing effect for the start posture, but no effect of end posture. Thus, the blind do seem to remap touch when hand posture must be explicitly coded to perform the task such as when planning hand movements. However, whereas the sighted relate touch not only to current, but also to planned future postures, the blind seem to restrict remapping to current posture.


2016 ◽  
Vol 826 ◽  
pp. 140-145
Author(s):  
Hasrul Che Shamsudin ◽  
Mohammad Afif Ayob ◽  
Wan Nurshazwani Wan Zakaria ◽  
Mohamad Fauzi Zakaria

In legged-robot movement planning, the leg must be carefully designed before trajectory analysis can be done. The objective of this paper is to develop a 3-DOF leg to be used in a quadruped robot. In addition, the forward kinematics and a comparison between the real and simulated leg are presented. To achieve this, SolidWorks 2013 x64 Edition is used to develop the 3D model of the leg, while SimMechanics (First Generation format) is used to export the model to Simulink. For comparison, a physical 3-DOF leg was constructed, with an Arduino Pro Mini 328 (5 V/16 MHz) as the microcontroller controlling the rotation of three servomotors. With the MIT AI2 Companion software, an Android app was developed to send signals that rotate each servomotor wirelessly. The zero position of the robot leg and the maximum rotation range of each servomotor were determined; these are essential for specifying the D-H parameters, which in turn allow the kinematics problems to be solved. It is found that specific rotations of each servomotor produce the leg's trajectory pattern, which is compared between the Simulink model and the real leg. Nevertheless, there are errors between the simulated and real positions of the robot leg because the system is open loop.
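As background for how D-H parameters resolve the forward kinematics mentioned above, the sketch below chains standard Denavit-Hartenberg transforms for a generic 3-DOF leg; the D-H table, link lengths, and joint angles are invented placeholders rather than the values identified in the paper.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint/link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def foot_position(joint_angles, dh_table):
    """Forward kinematics: multiply the per-joint D-H transforms and read the
    foot (end-effector) position from the last column of the result."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Hypothetical D-H table for a 3-DOF leg: (d, a, alpha) per joint, lengths in cm.
dh_table = [(0.0, 3.0, np.pi / 2), (0.0, 6.0, 0.0), (0.0, 7.0, 0.0)]
print(foot_position(np.deg2rad([0.0, -30.0, 60.0]), dh_table))
```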


2019 ◽  
Author(s):  
Nathan Dunn ◽  
Deepak Unni ◽  
Colin Diesh ◽  
Monica Munoz-Torres ◽  
Nomi L. Harris ◽  
...  

Abstract
Genome annotation is the process of identifying the location and function of a genome’s encoded features. Improving the biological accuracy of annotation is a complex and iterative process requiring researchers to review and incorporate multiple sources of information such as transcriptome alignments, predictive models based on sequence profiles, and comparisons to features found in related organisms. Because rapidly decreasing costs are enabling an ever-growing number of scientists to incorporate sequencing as a routine laboratory technique, there is widespread demand for tools that can assist in the deliberative analytical review of genomic information. To this end, Apollo is an open source software package that enables researchers to efficiently inspect and refine the precise structure and role of genomic features in a graphical browser-based platform.

In this paper we first outline some of Apollo’s newer user interface features, which were driven by the needs of this expanding genomics community. These include support for real-time collaboration, allowing distributed users to simultaneously edit the same encoded features while also instantly seeing the updates made by other researchers on the same region, in a manner similar to Google Docs. Its technical architecture enables Apollo to be integrated into multiple existing genomic analysis pipelines and heterogeneous laboratory workflow platforms. Finally, we consider the implications that Apollo and related applications may have on how the results of genome research are published and made accessible.

Source: https://github.com/GMOD/Apollo
License (BSD-3): https://github.com/GMOD/Apollo/blob/master/LICENSE.md
Docker: https://hub.docker.com/r/gmod/apollo/tags/, https://github.com/GMOD/docker-apollo
Requirements: JDK 1.8, Node v6.0+
User guide: http://genomearchitect.org; technical guide: http://genomearchitect.readthedocs.io/en/latest/
Mailing list: [email protected]


1995 ◽  
Vol 117 (1) ◽  
pp. 83-88
Author(s):  
S. C. Jen ◽  
D. Kohli

A new numerical approach for determining the inverse kinematic polynomials of manipulators is presented in this paper. Let the inverse kinematic polynomial of a manipulator in one revolute joint variable θ_i be represented by g_n T^n + g_{n-1} T^{n-1} + g_{n-2} T^{n-2} + ... + g_1 T + g_0 = 0, where T = tan(θ_i/2) and g_0, g_1, ..., g_n are polynomial-type functions of the hand position variables. The coefficients g are expressed in terms of undetermined coefficients and hand position variables. The undetermined coefficients are then evaluated using direct kinematics and the solutions of sets of linear equations, thus determining the coefficients g and the inverse kinematic polynomial. The method is general and may be applied to determine the inverse kinematic polynomial of any manipulator. However, the number of linear equations required to determine the coefficients g becomes significantly larger as the number of links and degrees of freedom of the manipulator increase. Numerical examples of a 2R planar and a 3R spatial manipulator are presented for illustration.
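To make the form of such a polynomial concrete, the sketch below builds the degree-2 polynomial in T = tan(θ_2/2) for a planar 2R arm directly from its geometry and solves it with a numerical root finder. This only illustrates what an inverse kinematic polynomial in the half-angle tangent looks like, not the undetermined-coefficients procedure of the paper; the link lengths and target are invented.

```python
import numpy as np

def ik_polynomial_2r(x, y, l1, l2):
    """Inverse kinematic polynomial in T = tan(theta2 / 2) for a planar 2R arm.
    From cos(theta2) = (x^2 + y^2 - l1^2 - l2^2) / (2 l1 l2) and the half-angle
    substitution cos(theta2) = (1 - T^2) / (1 + T^2), one obtains
    (1 + c) T^2 + (c - 1) = 0, a degree-2 polynomial in T."""
    c = (x**2 + y**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    return np.array([1.0 + c, 0.0, c - 1.0])  # coefficients, highest power first

# Hypothetical arm and hand position
l1, l2 = 1.0, 0.8
x, y = 1.2, 0.9
T_roots = np.roots(ik_polynomial_2r(x, y, l1, l2))
theta2 = 2.0 * np.arctan(T_roots[np.isreal(T_roots)].real)  # elbow-up / elbow-down
print(np.degrees(theta2))
```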


2018 ◽  
Vol 119 (1) ◽  
pp. 118-123 ◽  
Author(s):  
Tom Nissens ◽  
Katja Fiehler

Simultaneous eye and hand movements are highly coordinated and tightly coupled. This raises the question of whether the selection of eye and hand targets relies on a shared attentional mechanism or on separate attentional systems. Previous studies have revealed conflicting results, reporting evidence for both shared and separate systems. Movement properties such as movement curvature can provide novel insights into this question because they provide a sensitive measure of attentional allocation during target selection. In the current study, participants performed simultaneous eye and hand movements to the same or different visual target locations. We show that both saccade and reaching movements curve away from the other effector’s target location when they are simultaneously performed to spatially distinct locations. We argue that there is a shared attentional mechanism involved in selecting eye and hand targets that may operate at the level of effector-independent priority maps.

NEW & NOTEWORTHY Movement properties such as movement curvature have been widely neglected as important sources of information in investigating whether the attentional systems underlying target selection for eye and hand movements are separate or shared. We convincingly show that movement curvature is influenced by the other effector’s target location in simultaneous eye and hand movements to spatially distinct locations. Our results provide evidence for shared attentional systems involved in the selection of saccade and reach targets.
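Movement curvature in such studies is often quantified as the signed maximum perpendicular deviation of the trajectory from the straight line between the start and end points, normalized by movement amplitude. The sketch below implements that generic measure; it is one common convention, not necessarily the exact metric used in this paper, and the example trajectory is invented.

```python
import numpy as np

def curvature_index(trajectory):
    """Signed maximum perpendicular deviation from the start-end line, normalized
    by movement amplitude. Positive values lie to the left of the straight path."""
    traj = np.asarray(trajectory, dtype=float)
    start, end = traj[0], traj[-1]
    line = end - start
    amplitude = np.linalg.norm(line)
    dx, dy = traj[:, 0] - start[0], traj[:, 1] - start[1]
    # z-component of the 2-D cross product gives the signed perpendicular distance.
    deviations = (line[0] * dy - line[1] * dx) / amplitude
    return deviations[np.argmax(np.abs(deviations))] / amplitude

# Illustrative reach that bows to the left, away from a competing target on the right.
t = np.linspace(0.0, 1.0, 50)
reach = np.column_stack([-0.03 * np.sin(np.pi * t), 0.3 * t])  # metres
print(curvature_index(reach))  # positive: curved away to the left
```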

