Generalization via superposition: Combined effects of mixed reference frame representations for explicit and implicit learning in a visuomotor adaptation task

2018
Author(s):
Eugene Poh
Jordan A. Taylor

Abstract: Studies on generalization of learned visuomotor perturbations have generally focused on whether learning is coded in extrinsic or intrinsic reference frames. This dichotomy, however, is challenged by recent findings showing that learning is represented in a mixed reference frame. Overlooked in this framework is how learning is the result of multiple processes, such as explicit re-aiming and implicit motor adaptation. Therefore, the proposed mixed representation may simply reflect the superposition of explicit and implicit generalization functions, each represented in different reference frames. Here, we characterized the individual generalization functions of explicit and implicit learning in relative isolation to determine if their combination could predict the overall generalization function when both processes are in operation. We modified the form of feedback in a visuomotor rotation task to isolate explicit and implicit learning, and tested generalization across different limb postures to dissociate the extrinsic and intrinsic representations. We found that explicit generalization occurred predominantly in an extrinsic reference frame but its amplitude was reduced with postural changes, whereas implicit generalization was phase-shifted according to a mixed reference frame representation and its amplitude was maintained. A linear combination of individual explicit and implicit generalization functions accounted for nearly 85% of the variance associated with the generalization function in a typical visuomotor rotation task, where both processes are in operation. This suggests that each form of learning results from a mixed representation with distinct extrinsic and intrinsic contributions, and the combination of these features shapes the generalization pattern observed at novel limb postures. New and noteworthy: Generalization following learning in visuomotor adaptation tasks can reflect how the brain represents what it learns. In this study, we isolated explicit and implicit forms of learning, and showed that they are derived from a mixed reference frame representation with distinct extrinsic and intrinsic contributions. Furthermore, we showed that the overall generalization pattern at novel workspaces is due to the superposition of independent generalization effects developed by explicit and implicit learning processes.

2019
Vol 121 (5)
pp. 1953-1966
Author(s):
Eugene Poh
Jordan A. Taylor

Studies on generalization of learned visuomotor perturbations have generally focused on whether learning is coded in extrinsic or intrinsic reference frames. This dichotomy, however, is challenged by recent findings showing that learning is represented in a mixed reference frame. Overlooked in this framework is how learning appears to consist of multiple processes, such as explicit reaiming and implicit motor adaptation. Therefore, the proposed mixed representation may simply reflect the superposition of explicit and implicit generalization functions, each represented in different reference frames. Here we characterized the individual generalization functions of explicit and implicit learning in relative isolation to determine whether their combination could predict the overall generalization function when both processes are in operation. We modified the form of feedback in a visuomotor rotation task in an attempt to isolate explicit and implicit learning and tested generalization across new limb postures to dissociate the extrinsic/intrinsic representations. We found that the amplitude of explicit generalization was reduced with postural change and was only marginally shifted, resembling an extrinsic representation. In contrast, implicit generalization maintained its amplitude but was significantly shifted, resembling a mixed representation. A linear combination of individual explicit and implicit generalization functions accounted for nearly 85% of the variance associated with the generalization function in a typical visuomotor rotation task, where both processes are in operation. This suggests that each form of learning results from a mixed representation with distinct extrinsic and intrinsic contributions and the combination of these features shapes the generalization pattern observed at novel limb postures. NEW & NOTEWORTHY Generalization following learning in visuomotor adaptation tasks can reflect how the brain represents what it learns. In this study, we isolated explicit and implicit forms of learning and showed that they are derived from a mixed reference frame representation with distinct extrinsic and intrinsic contributions. Furthermore, we showed that the overall generalization pattern at novel workspaces is due to the superposition of independent generalization effects developed by explicit and implicit learning processes.
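The superposition account described above amounts to summing two tuning-curve-like generalization functions, one anchored to extrinsic coordinates and one shifted toward intrinsic coordinates. Below is a minimal sketch of that idea, assuming Gaussian-shaped generalization functions and using made-up amplitudes, widths, and a hypothetical 45° posture change; none of these values are the paper's fitted parameters.

```python
import numpy as np

def gaussian_tuning(directions, center, amplitude, width):
    """Generalization amplitude at each probe direction (degrees),
    modeled as a circular Gaussian centered on `center`."""
    delta = (directions - center + 180) % 360 - 180   # wrap to [-180, 180)
    return amplitude * np.exp(-0.5 * (delta / width) ** 2)

probes = np.arange(0, 360, 15)       # probe directions around the workspace

# Hypothetical parameters (illustrative only):
trained_extrinsic = 0                # trained direction in extrinsic coordinates
posture_shift = 45                   # rotation of intrinsic coordinates at the new posture

# Explicit component: roughly extrinsic, amplitude reduced at the new posture
explicit = gaussian_tuning(probes, center=trained_extrinsic,
                           amplitude=10.0, width=30.0)

# Implicit component: amplitude maintained, peak shifted partway toward the
# intrinsically defined direction (a "mixed" reference frame)
mixing_weight = 0.5
implicit = gaussian_tuning(probes, center=mixing_weight * posture_shift,
                           amplitude=8.0, width=35.0)

# Superposition hypothesis: overall generalization is the linear sum
combined = explicit + implicit
print(probes[np.argmax(combined)], round(combined.max(), 1))
```

Under these assumptions the combined function peaks between the extrinsic and mixed components, the kind of intermediate pattern the superposition account predicts at a novel posture.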


2009
Vol 101 (5)
pp. 2263-2269
Author(s):
Aymar de Rugy
Mark R. Hinder
Daniel G. Woolley
Richard G. Carson

Reaching to visual targets engages the nervous system in a series of transformations between sensory information and motor commands. It remains to be determined to what extent the processes that mediate sensorimotor adaptation to novel environments engage neural circuits that represent the required movement in joint-based or muscle-based coordinate systems. We sought to establish the contribution of these alternative representations to the process of visuomotor adaptation. To do so, we applied a visuomotor rotation during a center-out isometric torque production task that involved flexion/extension and supination/pronation at the elbow-joint complex. In separate sessions, distinct half-quadrant rotations (i.e., 45°) were applied such that adaptation could be achieved either by only rescaling the individual joint torques (i.e., the visual target and torque target remained in the same quadrant) or by additionally requiring a torque reversal at a contributing joint (i.e., the visual target and torque target were in different quadrants). Analysis of the time course of directional errors revealed that the degree of adaptation was lower (by ∼20%) when reversals in the direction of joint torques were required. It has been established previously that in this task space, a transition between supination and pronation requires the engagement of a different set of muscle synergists, whereas in a transition between flexion and extension no such change is required. The additional observation that the initial level of adaptation was lower and the subsequent aftereffects were smaller, for trials that involved a pronation–supination transition than for those that involved a flexion–extension transition, supports the conclusion that the process of adaptation engaged, at least in part, neural circuits that represent the required motor output in a muscle-based coordinate system.
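Whether compensating for the rotation requires only rescaling or an outright torque reversal can be read off from whether the compensatory torque target lands in a different quadrant than the visual target. A small illustrative check follows; the axis assignment (flexion/extension on x, supination/pronation on y) and the example angles are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def required_torque_direction(visual_angle_deg, rotation_deg):
    """Torque direction (deg) that lands the rotated cursor on the visual target.
    Assumed axis convention: x = flexion(+)/extension(-), y = supination(+)/pronation(-)."""
    return (visual_angle_deg - rotation_deg) % 360

def needs_reversal(visual_angle_deg, rotation_deg):
    """True if compensating for the rotation flips the sign of either torque axis,
    i.e. the torque target sits in a different quadrant than the visual target."""
    v = np.deg2rad(visual_angle_deg)
    t = np.deg2rad(required_torque_direction(visual_angle_deg, rotation_deg))
    visual_signs = np.sign([np.cos(v), np.sin(v)])
    torque_signs = np.sign([np.cos(t), np.sin(t)])
    return not np.allclose(visual_signs, torque_signs)

# A 45° rotation keeps a 67.5° target in the same quadrant (rescaling only),
# but turns the supination needed for a 22.5° target into pronation (a reversal).
print(needs_reversal(67.5, 45))   # False
print(needs_reversal(22.5, 45))   # True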


2015
Vol 113 (10)
pp. 3836-3849
Author(s):
Krista M. Bond
Jordan A. Taylor

There is mounting evidence for the idea that performance in a visuomotor rotation task can be supported by both implicit and explicit forms of learning. The implicit component of learning has been well characterized in previous experiments and is thought to arise from the adaptation of an internal model driven by sensorimotor prediction errors. However, the role of explicit learning is less clear, and previous investigations aimed at characterizing the explicit component have relied on indirect measures such as dual-task manipulations, posttests, and descriptive computational models. To address this problem, we developed a new method for directly assaying explicit learning by having participants verbally report their intended aiming direction on each trial. While our previous research employing this method has demonstrated the possibility of measuring explicit learning over the course of training, it was only tested over a limited scope of manipulations common to visuomotor rotation tasks. In the present study, we sought to better characterize explicit and implicit learning over a wider range of task conditions. We tested how explicit and implicit learning change as a function of the specific visual landmarks used to probe explicit learning, the number of training targets, and the size of the rotation. We found that explicit learning was remarkably flexible, responding appropriately to task demands. In contrast, implicit learning was strikingly rigid, with each task condition producing a similar degree of implicit learning. These results suggest that explicit learning is a fundamental component of motor learning and has been overlooked or conflated in previous visuomotor tasks.
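The aiming-report method described here supports a simple trial-by-trial decomposition: the verbal report is taken as the explicit component, and the residual between the reported aim and the actual hand direction is attributed to implicit adaptation. A minimal sketch of that bookkeeping, with made-up numbers and the sign convention stated as an assumption:

```python
import numpy as np

def decompose_learning(hand_angles, aim_reports):
    """Split each trial's compensation into explicit and implicit parts.
    Angles are in degrees relative to the target; positive values oppose the rotation.
    Assumed convention: explicit = verbally reported aim, implicit = hand minus aim."""
    hand = np.asarray(hand_angles, dtype=float)
    aim = np.asarray(aim_reports, dtype=float)
    explicit = aim            # where the participant says they are aiming
    implicit = hand - aim     # drift of the hand away from the aimed direction
    return explicit, implicit, hand

# Illustrative trial series under a 45-degree rotation (numbers are made up):
hand = [5, 20, 35, 42, 44]   # hand direction relative to the target
aim  = [0, 15, 25, 28, 27]   # verbally reported aiming direction
explicit, implicit, total = decompose_learning(hand, aim)
print(implicit)              # [ 5.  5. 10. 14. 17.] -> slowly accumulating implicit adaptation
```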


2019
Author(s):
Lukas Schneider
Adan-Ulises Dominguez-Vargas
Lydia Gibson
Igor Kagan
Melanie Wilke

Abstract: Most sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit post-saccadic spatial preference long after saccade execution. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°/0°/15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ∼50% of neurons showed an effect of static gaze direction during initial and post-saccadic fixation. Eccentric gaze preference was more common than straight ahead. Some of these neurons were not visually responsive and might primarily signal the position of the eyes in the orbit, or code foveal targets in a head/body/world-centered reference frame. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to post-saccadic target fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in non-retinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements. New & noteworthy: Work on the pulvinar has focused on eye-centered visuospatial representations, but the position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. Here we show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.


2020
Vol 123 (1)
pp. 367-391
Author(s):
Lukas Schneider
Adan-Ulises Dominguez-Vargas
Lydia Gibson
Igor Kagan
Melanie Wilke

Sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit postsaccadic spatial preference. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°, 0°, 15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ~50% of neurons showed dependence on static gaze direction during initial and postsaccadic fixation, and might be signaling the position of the eyes in the orbit or coding foveal targets in a head/body/world-centered reference frame. The population-derived eye position signal lagged behind the saccade. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to postsaccadic fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in nonretinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements. NEW & NOTEWORTHY Work on the pulvinar focused on eye-centered visuospatial representations, but position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. We show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.
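One way to picture the eye position-dependent gain modulation reported here is a tuning curve defined in eye-centered (retinal) coordinates whose height scales with orbital eye position; comparing where the tuning peaks on the screen across fixation positions then separates eye-centered from non-retinocentric coding. The toy model below is only a sketch of that idea: the functional form and every parameter value are illustrative assumptions, not fits to the recorded neurons.

```python
import numpy as np

def firing_rate(target_screen_deg, eye_pos_deg,
                pref_retinal=10.0, width=15.0, gain_slope=0.02, baseline=20.0):
    """Toy eye-position gain field: Gaussian tuning in eye-centered (retinal)
    coordinates, multiplicatively scaled by a linear function of eye position."""
    retinal = target_screen_deg - eye_pos_deg          # target in eye-centered coordinates
    tuning = np.exp(-0.5 * ((retinal - pref_retinal) / width) ** 2)
    gain = 1.0 + gain_slope * eye_pos_deg              # eye-position gain term
    return baseline * gain * tuning

targets = np.arange(-40, 41, 5)       # target positions on the screen (deg)
for eye in (-15, 0, 15):              # the three fixation positions used in the task
    rates = firing_rate(targets, eye)
    peak = targets[np.argmax(rates)]
    # For an eye-centered neuron the peak shifts with the eyes on the screen
    # but stays fixed when expressed relative to the fovea (peak - eye).
    print(eye, peak, peak - eye)
```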


2020
Vol 1
Author(s):
Sarah H. E. M. Voets
Muriel T. N. Panouilleres
Ned Jenkinson

Abstract: Motor adaptation is a process by which the brain gradually reduces error induced by a predictable change in the environment, e.g., pointing while wearing prism glasses. It is thought to occur largely via implicit processes, though explicit strategies also contribute. Research suggests a role of the cerebellum in the implicit aspects of motor adaptation. Using non-invasive brain stimulation, we sought to investigate the involvement of the cerebellum in implicit motor adaptation in healthy participants. The cerebellum was inhibited with repetitive transcranial magnetic stimulation (rTMS), after which participants performed a visuomotor rotation task while using an explicit strategy. Adaptation and aftereffects in the TMS group did not differ from those in a sham stimulation group; therefore, this study does not provide further evidence of a specific role of the cerebellum in implicit motor adaptation. However, our behavioural findings replicate those of the seminal study by Mazzoni and Krakauer (2006).
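The Mazzoni and Krakauer (2006) paradigm referenced here separates the two processes by instruction: the explicit strategy cancels the rotation from the first trial, while implicit adaptation continues to learn from the mismatch between the aimed direction and the cursor, so performance drifts past the target. A minimal single-state simulation of that logic follows; the retention and learning-rate values are illustrative assumptions, not parameters from either study.

```python
# Angles in degrees; positive = counterclockwise relative to the target at 0.
rotation = 45.0          # imposed cursor rotation
aim = -45.0              # instructed re-aiming that counters the rotation exactly
retention, learn_rate = 0.99, 0.05
adaptation = 0.0         # implicit state, driven by the aim-to-cursor mismatch

for trial in range(1, 81):
    hand = aim - adaptation                 # implicit adaptation pushes the hand past the aim
    cursor = hand + rotation                # what the participant sees
    task_error = cursor - 0.0               # cursor relative to the target
    prediction_error = cursor - aim         # cursor relative to where they aimed
    adaptation = retention * adaptation + learn_rate * prediction_error
    if trial in (1, 20, 80):
        print(trial, round(task_error, 1))  # drifts away from 0 despite the correct strategy
```

With these assumed values the cursor is on target at first and then drifts in the direction of the aiming strategy, the signature of implicit adaptation operating independently of the explicit strategy.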


1991
Vol 127
pp. 123-129
Author(s):
Kenneth J. Johnston
Jane L. Russell
Christian de Vegt
N. Zacharias
R. Hindsley
...  

The celestial positions of extragalactic radio sources may be determined to a precision of less than a milliarcsecond. Further, since these sources are believed to be at great distances from the galaxy, little or no proper motion is expected on scales of order a milliarcsecond. Therefore, a reference frame based on the positions of carefully selected sources that display compact radiation on scales of less than a milliarcsecond will noticeably improve the precision of present celestial reference frames. If the radio objects making up the reference frame also emit radiation at optical wavelengths, and assuming the optical and radio radiation is coincident, the radio frame can update the optical frame to the accuracy of the individual optical positions.


2018
Vol 15 (3)
pp. 229-236
Author(s):
Gennaro Ruggiero
Alessandro Iavarone
Tina Iachini

Objective: Deficits in egocentric (subject-to-object) and allocentric (object-to-object) spatial representations, with a mainly allocentric impairment, characterize the first stages of Alzheimer's disease (AD). Methods: To identify early cognitive signs of AD conversion, some studies have focused on amnestic Mild Cognitive Impairment (aMCI), reporting alterations in both reference frames, especially the allocentric ones. However, moving through spatial environments requires the cooperation of both reference frames: we constantly switch from allocentric to egocentric frames and vice versa. This raises the question of whether alterations of switching abilities might also constitute an early cognitive marker of AD, potentially suitable for detecting the conversion from aMCI to dementia. Here, we compared AD and aMCI patients with Normal Controls (NC) on the Ego-Allo-Switching spatial memory task. The task assessed the capacity to use switching (Ego-Allo, Allo-Ego) and non-switching (Ego-Ego, Allo-Allo) verbal judgments about relative distances between memorized stimuli. Results: The novel finding of this study is the clear impairment shown by aMCI and AD patients in switching from allocentric to egocentric reference frames. Interestingly, in aMCI the allocentric deficit appeared attenuated when the first reference frame was egocentric. Conclusion: This led us to conclude that allocentric deficits are not always clinically detectable in aMCI, since the impairments can be masked when the first reference frame is body-centred. In addition, AD and aMCI patients also showed allocentric deficits in the non-switching condition. These findings suggest that switching alterations may emerge from impairments in hippocampal and posteromedial areas and from concurrent dysregulations in the locus coeruleus-noradrenaline system or prefrontal cortex.


Author(s):  
Steven M. Weisberg
Anjan Chatterjee

Abstract: Background: Reference frames ground spatial communication by mapping ambiguous language (for example, navigation: “to the left”) to properties of the speaker (using a Relative reference frame: “to my left”) or the world (Absolute reference frame: “to the north”). People’s preferences for reference frame vary depending on factors like their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether preference for reference frames is stable within people or varies based on the specific spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can be naturally solved using multiple reference frames. That is, while spatial navigation directions can be specified using Absolute or Relative reference frames (“go north” vs “go left”), other spatial domains predominantly use Relative reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection and the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment. We measured reaction time and accuracy while participants solved spatial problems in each domain using verbal prompts containing either Relative or Absolute reference frames. Details of the task in both domains were kept as similar as possible while remaining ecologically plausible so that reference frame preference could emerge. Results: We pre-registered a prediction that participants would be faster using their preferred reference frame type and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain. Conclusion: This experiment reveals that spatial reference frame preferences are not stable and may be differentially suited to specific domains. This finding has broad implications for spatial communication, offering an important consideration for how reference frames are used: task constraints may affect reference frame choice as much as individual factors or culture.
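The distinction between the two reference frame types can be made concrete with a small resolver: a Relative instruction must be combined with the speaker's heading, whereas an Absolute instruction stands on its own. The sketch below is illustrative only; the four-direction encoding is an assumption, not the study's stimulus format.

```python
# Resolve a verbal instruction into a world (Absolute) direction.
CARDINALS = ["north", "east", "south", "west"]               # clockwise order
RELATIVE_OFFSETS = {"straight": 0, "right": 1, "back": 2, "left": 3}

def resolve_direction(instruction: str, heading: str) -> str:
    """Map an instruction to a world direction, given the speaker's heading."""
    if instruction in CARDINALS:                             # Absolute reference frame
        return instruction                                   # heading is irrelevant
    offset = RELATIVE_OFFSETS[instruction]                   # Relative reference frame
    return CARDINALS[(CARDINALS.index(heading) + offset) % 4]

print(resolve_direction("left", heading="north"))   # -> west
print(resolve_direction("left", heading="south"))   # -> east ("left" depends on heading)
print(resolve_direction("north", heading="south"))  # -> north (heading ignored)
```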


2021
Vol 11 (1)
Author(s):
Jennifer E. Ruttle
Bernard Marius ’t Hart
Denise Y. P. Henriques

Abstract: In motor learning, the slow development of implicit learning is traditionally taken for granted. While much is known about training performance during adaptation to a perturbation in reaches, saccades and locomotion, little is known about the time course of the underlying implicit processes during normal motor adaptation. Implicit learning is characterized by both changes in internal models and state estimates of limb position. Here, we measure both, as reach aftereffects and shifts in hand localization, in our participants after every training trial. The observed implicit changes were near asymptote after only one to three perturbed training trials and were not predicted by a two-rate model's slow process, which is supposed to capture implicit learning. Hence, we show that implicit learning is much faster than conventionally believed, which has implications for rehabilitation and skills training.
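The two-rate model mentioned here (Smith et al., 2006) is a standard state-space account in which fast and slow processes both learn from the same error but with different retention and learning rates, and the slow process is commonly treated as a proxy for implicit learning. A brief sketch with illustrative parameter values (not the study's fits) shows why that slow process needs many trials to grow, in contrast to the near-immediate implicit changes reported above.

```python
import numpy as np

def two_rate_model(perturbation, A_slow=0.996, B_slow=0.02, A_fast=0.9, B_fast=0.2):
    """Two-rate state-space model: each trial's residual error updates a slow and a
    fast state; total adaptation is their sum. Parameter values are illustrative."""
    x_slow = x_fast = 0.0
    slow_states, total_states = [], []
    for p in perturbation:
        output = x_slow + x_fast                 # total adaptation expressed this trial
        error = p - output                       # residual error driving learning
        x_slow = A_slow * x_slow + B_slow * error
        x_fast = A_fast * x_fast + B_fast * error
        slow_states.append(x_slow)
        total_states.append(output)
    return np.array(slow_states), np.array(total_states)

# 120 trials of a constant 30-degree rotation: the slow process is still small after
# three trials and only approaches asymptote after many more.
slow, total = two_rate_model(np.full(120, 30.0))
print(round(slow[2], 1), round(slow[119], 1))
```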

