Visuomotor learning
Recently Published Documents

Total documents: 142 (five years: 30)
H-index: 31 (five years: 3)

2022
Author(s): Constantinos Eleftheriou

The goal of this protocol is to assess visuomotor learning and motor flexibility in freely moving mice, using the Visiomode touchscreen platform. Water-restricted mice first learn to associate touching a visual stimulus on the screen with a water reward. They then learn to discriminate between different visual stimuli on the touchscreen by nose-poking, before being asked to switch their motor strategy to forelimb reaching.
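A minimal sketch of the staged progression this protocol describes, written as a generic trial loop rather than against the actual Visiomode API; the stage names, advancement criterion, and response placeholders below are illustrative assumptions only.

import random

STAGES = ["touch_association", "visual_discrimination", "forelimb_reaching"]

def run_trial(stage):
    """Simulate one trial; return True if it would be rewarded."""
    if stage == "touch_association":
        # Any touch on the displayed stimulus earns a water reward.
        return random.random() < 0.8                      # stand-in for a screen touch event
    if stage == "visual_discrimination":
        # Reward only nose-pokes directed at the target stimulus, not the distractor.
        return random.choice(["target", "distractor"]) == "target"
    if stage == "forelimb_reaching":
        # Same discrimination rule, but the response must now be a forelimb reach.
        return random.random() < 0.5                      # naive placeholder for reach accuracy
    raise ValueError(f"unknown stage: {stage}")

def session_passes(stage, n_trials=100, criterion=0.75):
    """Run one simulated session and check a (hypothetical) advancement criterion."""
    hits = sum(run_trial(stage) for _ in range(n_trials))
    return hits / n_trials >= criterion

for stage in STAGES:
    print(stage, "criterion met:", session_passes(stage))

In practice each stage would read real touchscreen events and a mouse would advance only after meeting the criterion across sessions; the probabilities above merely stand in for behaviour.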


2022
Author(s): Constantinos Eleftheriou

The goal of this protocol is to assess visuomotor learning and motor flexibility in freely moving mice, using the Visiomode touchscreen platform. It modifies the final stage of the original protocol (dx.doi.org/10.17504/protocols.io.bumgnu3w) by replacing forelimb reaching with a reversal-learning paradigm.


2021, Vol 11 (1)
Author(s): E. Tatti, F. Ferraioli, J. Peter, T. Alalade, A. B. Nelson, ...

2021
Author(s): Constantinos Eleftheriou

The goal of this protocol is to assess visuomotor learning and motor flexibility in freely moving mice, using the Visiomode touchscreen platform. Water-restricted mice first learn to associate touching a visual stimulus on the screen with a water reward. They then learn to discriminate between different visual stimuli on the touchscreen by nose-poking, before being asked to switch their motor strategy to forelimb reaching. Version 1 of the protocol motivates mice with traditional water deprivation and water rewards in the task. Version 2 instead uses citric acid for water restriction and sucrose as the in-task reward.


2021
Author(s): Jack De Havas, Patrick Haggard, Hiroaki Gomi, Sven Bestmann, Yuji Ikegaya, ...

Humans continuously adapt their movements to a novel environment by recalibrating their sensorimotor system. Recent evidence, however, shows that explicit planning to compensate for external changes, i.e. a cognitive strategy, can also aid performance. If such a strategy is indeed planned in external space, it should improve performance in an effector-independent manner. We tested this hypothesis by examining whether promoting a cognitive strategy during a visual-force adaptation task performed with one hand can facilitate learning for the opposite hand. Participants rapidly adjusted the height of a visual bar on a screen to a target level by isometrically exerting force on a handle with their right hand. The visuomotor gain increased during the task and participants learned the increased gain. Visual feedback was provided continuously for one group, while for another group only the endpoint of the force trajectory was presented; the latter has been reported to promote cognitive strategy use. We found that endpoint feedback produced stronger intermanual transfer of learning and slower response times than continuous feedback. In a separate experiment, we confirmed that the aftereffect is indeed reduced when only endpoint feedback is provided, a finding that has been consistently observed when cognitive strategies are used. The results suggest that intermanual transfer can be facilitated by a cognitive strategy. This indicates that the behavioral observation of intermanual transfer can be achieved either by forming an effector-independent motor representation or by sharing an effector-independent cognitive strategy between the hands.
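A toy two-process simulation of the interpretation offered above, in which compensation for the gain increase is split into an implicit, effector-specific component (which shows up as the aftereffect) and an explicit, effector-independent strategy (which transfers to the untrained hand). The learning rate and the weighting between the two processes are assumptions, not values from the study.

def simulate(explicit_weight, n_trials=80, gain_change=0.5, lr=0.2):
    """Split compensation for a gain increase into implicit and explicit parts."""
    implicit, explicit = 0.0, 0.0
    for _ in range(n_trials):
        error = gain_change - (implicit + explicit)     # residual compensation error
        implicit += lr * (1 - explicit_weight) * error  # effector-specific recalibration
        explicit += lr * explicit_weight * error        # effector-independent strategy
    return implicit, explicit

for label, weight in [("continuous feedback (mostly implicit)", 0.2),
                      ("endpoint feedback (mostly explicit)", 0.8)]:
    implicit, explicit = simulate(weight)
    print(f"{label}: aftereffect ~ {implicit:.2f}, intermanual transfer ~ {explicit:.2f}")

Under this assumed split, shifting weight toward the explicit process reproduces the reported pattern: more transfer to the opposite hand and a smaller aftereffect.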


2021, Vol 79, pp. 102858
Author(s): Andrew Hooyman, James Gordon, Carolee Winstein

2021
Author(s): Jonathan Tsay, Adrian Haith, Richard B Ivry, Hyosub E Kim

While sensory-prediction error (SPE), the difference between predicted and actual sensory feedback, is recognized as the primary signal that drives implicit motor recalibration, recent studies have shown that task error (TE), the difference between sensory feedback and the movement goal, also plays a modulatory role. To systematically examine how SPE and TE collectively shape implicit recalibration, we performed a series of visuomotor learning experiments, introducing perturbations that varied the size of TE using a popular target-displacement method and the size of SPE using a clamped visual feedback method. In Experiments 1 and 2, we observed robust sign-dependent changes in hand angle in response to perturbations containing both SPE and TE, but failed to observe changes in hand angle in response to TE-only perturbations. Yet in Experiments 3 and 4, the magnitude of TE modulated implicit recalibration in the presence of a fixed SPE. Taken together, these results underscore that implicit recalibration is driven by both SPE and TE (Kim, Parvin, & Ivry, 2019), while specifying previously unappreciated interactions between these two error-based processes. First, TE only impacts implicit recalibration when SPE is present. Second, the transient changes that occur when the target is displaced to manipulate TE have an attenuating effect on implicit recalibration, perhaps because attention is directed away from the sensory feedback.
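A toy state-space sketch of the conclusions stated above: recalibration updates opposite the sign of SPE, is modulated by TE only when SPE is present, and does not update at all for TE-only perturbations. The learning rate, retention factor, and the functional form and direction of the TE modulation are assumptions, not fitted parameters from the paper.

import numpy as np

def final_hand_angle(spe, te, n_trials=100, lr=0.2, retention=0.98):
    """spe, te: signed perturbation sizes in degrees (0 = that error absent)."""
    x = 0.0                                       # cumulative implicit hand-angle change
    for _ in range(n_trials):
        if spe == 0:
            update = 0.0                          # TE alone does not drive recalibration
        else:
            # Placeholder for the TE-dependent modulation; its form and
            # direction are assumptions, not taken from the paper.
            te_gain = 1.0 / (1.0 + abs(te) / 10.0)
            update = -lr * np.sign(spe) * te_gain
        x = retention * x + update
    return x

print("SPE + TE        :", round(final_hand_angle(spe=15, te=15), 1))
print("SPE only (clamp):", round(final_hand_angle(spe=15, te=0), 1))
print("TE only         :", round(final_hand_angle(spe=0, te=15), 1))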


2021
Author(s): Constantinos Eleftheriou

The goal of this protocol is to assess visuomotor learning and motor flexibility in freely moving mice, using the Visiomode touchscreen platform. Water-restricted mice first learn to associate touching a visual stimulus on the screen with a water reward. They then learn to discriminate between different visual stimuli on the touchscreen by nose-poking, before being asked to switch their motor strategy to forelimb reaching.


2021, pp. 1-7
Author(s): Kotaro Nishimura, Ozge Ozlem Saracbasi, Yoshikatsu Hayashi, Toshiyuki Kondo

eLife, 2021, Vol 10
Author(s): Jana Masselink, Markus Lappe

Sensorimotor learning adapts motor output to maintain movement accuracy. For saccadic eye movements, learning also alters space perception, suggesting a dissociation between the performed saccade and its internal representation derived from corollary discharge (CD). This is critical since learning is commonly believed to be driven by CD-based visual prediction error. We estimate the internal saccade representation through pre- and trans-saccadic target localization, showing that it decouples from the actual saccade during learning. We present a model that explains motor and perceptual changes by collective plasticity of the spatial target percept, the motor command, and a forward dynamics model that transforms CD from motor into visuospatial coordinates. We show that learning does not follow visual prediction error but instead a postdictive update of space after saccade landing. We conclude that trans-saccadic space perception guides motor learning via CD-based postdiction of motor error under the assumption of a stable world.
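A toy single-trial contrast of the two candidate error signals discussed above, for an inward intra-saccadic target step; all amplitudes and the forward-model gain are illustrative assumptions and this is not the authors' fitted model.

# All quantities in degrees of visual angle.
target_pre = 10.0      # pre-saccadic target eccentricity
step = -2.0            # intra-saccadic (backward) target step
motor_command = 10.0   # planned saccade amplitude
executed = 9.8         # executed saccade amplitude (slight undershoot)
fdm_gain = 0.9         # assumed forward dynamics model mapping CD to visual space

# CD-based prediction of where the eye lands, in visuospatial coordinates.
cd_predicted_landing = fdm_gain * motor_command

# (1) Classical account: learning follows the CD-based visual prediction error,
#     i.e. where the target is seen after the saccade vs. where CD predicted it.
target_post = target_pre + step
visual_prediction_error = target_post - cd_predicted_landing

# (2) Postdictive account: the spatial target percept is updated only after
#     landing, and the motor error is referenced to that postdicted target.
postdicted_target = target_post                 # postdiction anchored on post-saccadic vision
postdictive_motor_error = postdicted_target - executed

print(f"visual prediction error: {visual_prediction_error:+.2f} deg")
print(f"postdictive motor error: {postdictive_motor_error:+.2f} deg")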

