Prism adaptation of reaching is dependent on the type of visual feedback of hand and target position

2001 ◽  
Vol 905 (1-2) ◽  
pp. 207-219 ◽  
Author(s):  
Scott A Norris ◽  
Bradley E Greger ◽  
Tod A Martin ◽  
W. Thomas Thach

1973 ◽  
Vol 37 (3) ◽  
pp. 683-693
Author(s):  
Mark H. Healy ◽  
David Symmes ◽  
Ayub K. Ommaya

Contrary to previous reports, adaptation to laterally displaced visual input does require visual perception of the visuomotor mismatch. Using 4 rhesus monkeys as Ss, it was found that reaching errors induced by wearing 20-diopter wedge prisms remained at optically predicted magnitudes for 24 hr., provided that no visual misreaching cues were available. Unrestricted head movement did not provide such cues. However, terminal viewing of the prism-induced reaching errors produced dramatic, rapid adaptation. Tactile and proprioceptive discordance cues alone, without visual feedback, were not corrective.


2015 ◽  
Vol 113 (1) ◽  
pp. 328-338 ◽  
Author(s):  
Masato Inoue ◽  
Motoaki Uchimura ◽  
Ayaka Karibe ◽  
Jacinta O'Shea ◽  
Yves Rossetti ◽  
...  

It has been proposed that motor adaptation depends on at least two learning systems, one that learns fast but with poor retention and another that learns slowly but with better retention (Smith MA, Ghazizadeh A, Shadmehr R. PLoS Biol 4: e179, 2006). This two-state model has been shown to account for a range of behavior in the force field adaptation task. In the present study, we examined whether such a two-state model could also account for behavior arising from adaptation to a prismatic displacement of the visual field. We first confirmed that an “adaptation rebound,” a critical prediction of the two-state model, occurred when visual feedback was deprived after an adaptation-extinction episode. We then examined the speed of decay of the prism aftereffect (without any visual feedback) after repetitions of 30, 150, and 500 trials of prism exposure. The speed of decay decreased with the number of exposure trials, a phenomenon that was best explained by assuming an “ultraslow” system, in addition to the fast and slow systems. Finally, we compared retention of aftereffects 24 h after 150 or 500 trials of exposure: retention was significantly greater after 500 than 150 trials. This difference in retention could not be explained by the two-state model but was well explained by the three-state model as arising from the difference in the amount of adaptation of the “ultraslow process.” These results suggest that there are not only fast and slow systems but also an ultraslow learning system in prism adaptation that is activated by prolonged prism exposure of 150–500 trials.
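The multi-rate model described above can be sketched as a small simulation. This is a minimal illustration, not the authors' code: the retention factors and learning rates below are hypothetical values in the spirit of Smith et al. (2006), and trials without visual feedback are modelled as pure passive decay of the hidden states.

```python
import numpy as np

def simulate_adaptation(schedule, A, B):
    """Multi-rate state-space model of motor adaptation.

    schedule: imposed visual shift per trial (np.nan = no visual feedback).
    A: retention factor of each state (near 1 = good retention).
    B: learning rate of each state (large = fast learning).
    Returns the net adaptation (sum of states) recorded before each trial.
    """
    x = np.zeros(len(A))
    net = []
    for p in schedule:
        net.append(x.sum())
        if np.isnan(p):
            x = A * x                      # no feedback: passive decay only
        else:
            x = A * x + B * (p - x.sum())  # error-driven update
    return np.array(net)

# Hypothetical two-state parameters: fast (poor retention, quick learning)
# and slow (good retention, slow learning). An "ultraslow" state would be a
# third entry with A even closer to 1 and B even smaller.
A = np.array([0.92, 0.996])
B = np.array([0.40, 0.02])

# 100 trials of prism exposure, 20 of extinction, then 50 without feedback.
schedule = np.concatenate([np.full(100, 1.0), np.zeros(20), np.full(50, np.nan)])
net = simulate_adaptation(schedule, A, B)
# After extinction drives net adaptation back toward zero, the no-feedback
# phase shows the "adaptation rebound": the fast (negative) state decays
# quickly while the slow (positive) state persists, so adaptation
# transiently reappears.
```

Fitting the decay of aftereffects after 30, 150, or 500 exposure trials, as in the study, amounts to asking how many such states (and which A, B values) the data require.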


2020 ◽  
Author(s):  
Yusuke Ujihara ◽  
Hiroshi Matsui ◽  
Ei-Ichi Izawa

Interception of a moving target is a fundamental behaviour of predators and requires tight coupling between the sensory and motor systems. In the literature on foraging, feedback mechanisms based on the target's current position are frequently reported. However, there have also been recent reports of animals employing feedforward mechanisms, in which prediction of the future target location plays an important role. In nature, coordination of these two mechanisms may contribute to intercepting evasive prey. However, how animals weigh these two mechanisms remains poorly understood. Here, we conducted a behavioural experiment involving crows (which show flexible sensorimotor coordination in various domains) capturing a moving target. We changed the velocity of the target to examine how the crows utilised prediction of the target location. The analysis of moment-to-moment head movements and computational simulations revealed that the crows used prediction of the future target location when the target velocity was high. In contrast, their interception depended on the current, momentary position of the target when the target velocity was slow. These results suggest that crows successfully intercept targets by weighing predictive and visual feedback mechanisms depending on the target velocity.
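The contrast between feedback pursuit and feedforward prediction can be illustrated with a toy simulation (ours, not the authors' model): an agent either steers at the target's current position or leads it using a rough time-to-contact estimate. The speeds, distances, and capture radius below are arbitrary.

```python
import numpy as np

def chase(target_v, predict, steps=400, dt=0.05, speed=2.0):
    """Agent chasing a target that moves along x at constant target_v.

    predict=False -> feedback: steer at the target's current position.
    predict=True  -> feedforward: steer at the position the target will
                     occupy after a crude time-to-contact estimate.
    Returns the step count at capture (distance < 0.1), or None.
    """
    agent = np.array([0.0, -3.0])
    target = np.array([0.0, 0.0])
    for step in range(steps):
        to_target = target - agent
        dist = np.linalg.norm(to_target)
        if dist < 0.1:
            return step
        if predict:
            tau = dist / speed                          # time-to-contact guess
            aim = target + np.array([target_v * tau, 0.0])
        else:
            aim = target                                # current position only
        heading = aim - agent
        agent = agent + speed * dt * heading / np.linalg.norm(heading)
        target = target + np.array([target_v * dt, 0.0])
    return None
```

With a fast target the predictive strategy intercepts in fewer steps than the curved pursuit path; with a slow target the two strategies are nearly indistinguishable, mirroring the velocity-dependent weighting reported above.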


2012 ◽  
Vol 29 (2) ◽  
pp. 119-129 ◽  
Author(s):  
MASAKI YAMAMOTO ◽  
HIROSHI ANDO

This study aims to create a prediction model for state-space estimation and to elucidate the information processing required to identify an external space in prism adaptation. Subjects were 57 healthy students, who were instructed to perform rapid reaching movements to one of several randomly illuminated light-emitting diodes. Their movements were measured while wearing prism glasses and after removing them. We prepared the following four conditions plus a control. In the target condition, the reaching error distance was visually fed back to the subject. In the trajectory condition, the trajectory of the fingertip movement could be seen, but the final reaching error was not fed back. Two restricted visual feedback conditions were prepared based on different presentation timings (on-time and late-time conditions). We set up a linear parametric model and an estimation model using Kalman filtering. The goodness of fit between the estimated and observed values in each model was examined using the Akaike information criterion (AIC), which allows comparison of models with different numbers of parameters. In the control, the AIC was 179.0 for the linear model and 154.0 for Kalman filtering; the corresponding values were 173.6 and 161.1 for the target condition, 202.8 and 159.7 for the trajectory condition, 192.7 and 180.8 for the on-time condition, and 206.9 and 174.0 for the late-time condition. The Kalman gain in the control was 0.07–0.26; a gain below 0.5 means that the estimate relies more on the prior distribution than on the new observation. The Kalman gain was 0.03–0.60 in the trajectory condition and 0.08–0.95 in the late-time condition. The Kalman filter, a state estimation model based on Bayesian theory, expressed the dynamics of the internal model under uncertain feedback information better than the linear parametric model. The probabilistic estimation model can clearly simulate state estimation according to the reliability of the visual feedback.
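The role of the Kalman gain can be made concrete with a scalar example. This is a generic textbook Kalman filter, not the authors' fitted model; the noise variances and the 10° shift are made-up numbers chosen only to show how the gain falls as feedback becomes less reliable.

```python
import numpy as np

def kalman_1d(observations, q=0.01, r=1.0):
    """Scalar Kalman filter estimating a (nearly) constant hidden state,
    e.g. the visuomotor shift imposed by a prism, from noisy feedback.

    q: process noise variance, r: observation noise variance.
    Returns per-trial estimates and Kalman gains.
    """
    x_hat, p = 0.0, 1.0              # prior mean and variance
    estimates, gains = [], []
    for z in observations:
        p = p + q                    # predict: uncertainty grows slightly
        k = p / (p + r)              # Kalman gain: weight on new evidence
        x_hat = x_hat + k * (z - x_hat)
        p = (1 - k) * p
        estimates.append(x_hat)
        gains.append(k)
    return np.array(estimates), np.array(gains)

rng = np.random.default_rng(0)
true_shift = 10.0                    # hypothetical prism displacement
reliable = true_shift + rng.normal(0.0, 1.0, size=50)
vague = true_shift + rng.normal(0.0, 3.0, size=50)

est_reliable, gains_reliable = kalman_1d(reliable, r=1.0)
est_vague, gains_vague = kalman_1d(vague, r=9.0)
# With noisier feedback (larger r) the gain settles at a smaller value:
# the estimate leans on the prior, as with a gain below 0.5 above.
```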


2019 ◽  
Vol 2019 ◽  
pp. 1-11
Author(s):  
Yinan Chen ◽  
Song Wu ◽  
Zhengting Tang ◽  
Jinglu Zhang ◽  
Lin Wang ◽  
...  

Objective. To compare the effects of training of jaw and finger movements with and without visual feedback on precision and accuracy. Method. Twenty healthy participants (10 men and 10 women; mean age 24.6±0.8 years) performed two tasks, a jaw open-close movement and a finger-lifting task, with and without visual feedback before and after 3-day training. Individually determined target positions for the jaw corresponded to 50% of the maximal jaw opening position, and a fixed target position of 20 mm was set for the finger. Movements were repeated 10 times each. The variability in the amplitude of the movements was expressed as a percentage of the target position (Daccu; accuracy) and as the coefficient of variation (CVprec; precision). Result. Daccu and CVprec were significantly influenced by visual feedback (P=0.001 and P<0.001, respectively) and were reduced after training of jaw and finger movements (P<0.001). Daccu (P=0.004) and CVprec (P=0.019) differed significantly between jaw and finger movements. The relative changes in Daccu (P=0.017) and CVprec (P=0.027) from pretraining to posttraining differed between jaw and finger movements. Conclusion. The accuracy and precision of standardized jaw and finger movements depend on visual feedback and appear to improve more with training in the trigeminal system, possibly reflecting significant neuroplasticity in motor control mechanisms.
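As a rough illustration of the two outcome measures (our reading of the description above, not the authors' exact formulas), accuracy can be computed as mean deviation from the target in percent and precision as the coefficient of variation of the repeated amplitudes:

```python
import numpy as np

def accuracy_precision(amplitudes, target):
    """Sketch of the two variability measures: D_accu as mean absolute
    deviation from the target (percent of target), CV_prec as the
    coefficient of variation of the repeated movement amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    d_accu = 100.0 * np.mean(np.abs(amplitudes - target)) / target
    cv_prec = 100.0 * np.std(amplitudes, ddof=1) / np.mean(amplitudes)
    return d_accu, cv_prec

# Ten hypothetical finger-lift amplitudes (mm) against the 20 mm target:
trials = [21.5, 19.2, 20.8, 18.9, 21.1, 20.3, 19.6, 22.0, 18.5, 20.6]
d_accu, cv_prec = accuracy_precision(trials, target=20.0)
```

Lower values of both measures after training would correspond to the improvements in accuracy and precision reported above.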


2002 ◽  
Vol 95 (3_suppl) ◽  
pp. 1129-1140 ◽  
Author(s):  
Luc F. De Nil ◽  
Sophie J. Lafaille

The present study revisited the issue of whether the presence of added visual feedback differentially affects the accuracy of finger and jaw movements. 15 men were instructed to move either the index finger of the dominant (right) hand, or the jaw, to a predefined target position with the highest precision possible. During execution of the task, on-line visual feedback of the moving articulator was either present or removed. In contrast to previous findings, significant improvement was observed for both finger and jaw movements in the visual feedback condition. Movement error in the nonvisual condition was proportionally greater for finger than for jaw movements, which may have reflected a speed-accuracy trade-off because finger movements in the nonvisual condition were executed significantly faster than those of the jaw. The present findings support the beneficial effects of adding visual feedback during dynamic oral and finger movements that require high spatial precision. Such findings support current methods of clinical intervention in speech-language pathology and other disciplines. Furthermore, the results contribute to our understanding of the role of various modalities of feedback during motor execution.


2017 ◽  
Vol 50 (6) ◽  
pp. 689-696 ◽  
Author(s):  
Hiroshi Matsui ◽  
Marika Ryu ◽  
Hideaki Kawabata

2019 ◽  
Vol 122 (5) ◽  
pp. 1849-1860 ◽  
Author(s):  
Nobuyuki Nishimura ◽  
Motoaki Uchimura ◽  
Shigeru Kitazawa

We previously showed that the brain automatically represents a target position for reaching relative to a large square in the background. In the present study, we tested whether a natural scene with many complex details serves as an effective background for representing a target. In the first experiment, we used upright and inverted pictures of a natural scene. A shift of the pictures significantly attenuated prism adaptation of reaching movements as long as they were upright. In one-third of participants, adaptation was almost completely cancelled whether the pictures were upright or inverted. Remarkably, there were two distinct groups of participants: one that relied on the allocentric coordinates regardless of scene orientation, and another that depended on them only when the scene was upright. In the second experiment, we examined how long it takes for a novel upright scene to serve as a background. A shift of the novel scene had no significant effect when it was presented for 500 ms before presenting a target, but significant effects were recovered when it was presented for 1,500 ms. These results show that a natural scene serves as a background against which a target is automatically represented once we spend 1,500 ms in the scene. NEW & NOTEWORTHY Prism adaptation of reaching was attenuated by a shift of natural scenes as long as they were upright. In one-third of participants, adaptation was fully cancelled whether the scene was upright or inverted. When an upright scene was novel, it took 1,500 ms to prepare the scene for allocentric coding. These results show that a natural scene serves as a background against which a target is automatically represented once we spend 1,500 ms in the scene.

