Visual Feedback of Target Position Affects Accuracy of Sequential Movements at Even Spaces

2017 ◽  
Vol 50 (6) ◽  
pp. 689-696 ◽  
Author(s):  
Hiroshi Matsui ◽  
Marika Ryu ◽  
Hideaki Kawabata

2020 ◽  
Author(s):  
Yusuke Ujihara ◽  
Hiroshi Matsui ◽  
Ei-Ichi Izawa

Abstract Interception of a moving target is a fundamental behaviour of predators and requires tight coupling between the sensory and motor systems. In the literature on foraging, feedback mechanisms based on the current target position are frequently reported. However, there have also been recent reports of animals employing feedforward mechanisms, in which prediction of the future target location plays an important role. In nature, coordination of these two mechanisms may contribute to intercepting evasive prey. However, how animals weigh these two mechanisms remains poorly understood. Here, we conducted a behavioural experiment in which crows (which show flexible sensorimotor coordination in various domains) captured a moving target. We changed the velocity of the target to examine how the crows utilised prediction of the target location. The analysis of moment-to-moment head movements and computational simulations revealed that the crows used prediction of the future target location when the target velocity was high. In contrast, their interception depended on the current, momentary position of the target when the target velocity was low. These results suggest that crows successfully intercept targets by weighing predictive and visual feedback mechanisms depending on the target velocity.
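The velocity-dependent weighting of predictive and feedback control described above can be sketched in code. Everything here (the logistic weighting, the function name, and all parameter values) is an illustrative assumption, not the authors' fitted model:

```python
import numpy as np

def intercept_heading(target_pos, target_vel, pursuer_pos, lead_time, v_threshold):
    """Blend feedback aiming (current target position) with predictive
    aiming (extrapolated future position), weighting prediction more
    heavily as target speed increases. Illustrative sketch only."""
    speed = np.linalg.norm(target_vel)
    # weight on the predictive estimate rises smoothly with target speed
    w = 1.0 / (1.0 + np.exp(-(speed - v_threshold)))
    predicted = target_pos + target_vel * lead_time   # feedforward estimate
    aim = (1 - w) * target_pos + w * predicted        # blended aim point
    direction = aim - pursuer_pos
    return direction / np.linalg.norm(direction)
```

With a slow target the weight stays near zero and the heading tracks the current position; with a fast target the heading shifts toward the extrapolated position.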


2001 ◽  
Vol 905 (1-2) ◽  
pp. 207-219 ◽  
Author(s):  
Scott A Norris ◽  
Bradley E Greger ◽  
Tod A Martin ◽  
W.Thomas Thach

2019 ◽  
Vol 2019 ◽  
pp. 1-11
Author(s):  
Yinan Chen ◽  
Song Wu ◽  
Zhengting Tang ◽  
Jinglu Zhang ◽  
Lin Wang ◽  
...  

Objective. To compare the effects of training of jaw and finger movements with and without visual feedback on precision and accuracy. Method. Twenty healthy participants (10 men and 10 women; mean age 24.6±0.8 years) performed two tasks: a jaw open-close movement and a finger-lifting task, with and without visual feedback, before and after 3-day training. Individually determined target positions for the jaw corresponded to 50% of the maximal jaw opening position, and a fixed target position of 20 mm was set for the finger. Movements were repeated 10 times each. The variability in the amplitude of the movements was expressed as a percentage of the target position (Daccu, accuracy) and as the coefficient of variation (CVprec, precision). Result. Daccu and CVprec were significantly influenced by visual feedback (P=0.001 and P<0.001, respectively) and were reduced after training of jaw and finger movements (P<0.001). Daccu (P=0.004) and CVprec (P=0.019) were significantly different between jaw and finger movements. The relative changes in Daccu (P=0.017) and CVprec (P=0.027) from pretraining to posttraining differed between jaw and finger movements. Conclusion. The accuracy and precision of standardized jaw and finger movements depend on visual feedback and appear to improve more with training in the trigeminal system, possibly reflecting significant neuroplasticity in motor control mechanisms.
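The two outcome measures can be illustrated in code. The exact formulas are assumptions; the abstract specifies only "percentage in relation to the target position" for Daccu and "coefficient of variation" for CVprec:

```python
import statistics

def accuracy_precision(amplitudes, target):
    """Compute two variability measures for repeated movement amplitudes.
    Daccu: mean absolute deviation from the target, as a percentage of
    the target (accuracy). CVprec: coefficient of variation of the
    amplitudes (precision). Formulas are an illustrative assumption."""
    mean_amp = statistics.mean(amplitudes)
    d_accu = 100 * statistics.mean(abs(a - target) for a in amplitudes) / target
    cv_prec = 100 * statistics.stdev(amplitudes) / mean_amp
    return d_accu, cv_prec
```

For example, ten finger lifts scattered around the 20 mm target would yield a Daccu near zero only if every amplitude hits 20 mm, while CVprec shrinks as the repetitions become more consistent with one another, regardless of where they cluster.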


2002 ◽  
Vol 95 (3_suppl) ◽  
pp. 1129-1140 ◽  
Author(s):  
Luc F. De Nil ◽  
Sophie J. Lafaille

The present study revisited the issue of whether the presence of added visual feedback differentially affects the accuracy of finger and jaw movements. 15 men were instructed to move either the index finger of the dominant (right) hand, or the jaw, to a predefined target position with the highest precision possible. During execution of the task, on-line visual feedback of the moving articulator was either present or removed. In contrast to previous findings, significant improvement was observed for both finger and jaw movements in the visual feedback condition. Movement error in the nonvisual condition was proportionally greater for finger than for jaw movements, which may have reflected a speed-accuracy trade-off, because finger movements in the nonvisual condition were executed significantly faster than those of the jaw. The present findings support the beneficial effects of adding visual feedback during dynamic oral and finger movements that require high spatial precision. Such findings support current methods of clinical intervention in speech-language pathology and other disciplines. Furthermore, the results contribute to our understanding of the role of various modalities of feedback during motor execution.


2019 ◽  
Vol 121 (1) ◽  
pp. 269-284 ◽  
Author(s):  
Florian Perdreau ◽  
James R. H. Cooke ◽  
Mathieu Koppen ◽  
W. Pieter Medendorp

The brain uses self-motion information to internally update egocentric representations of the locations of remembered world-fixed visual objects. If a discrepancy is observed between this internal update and reafferent visual feedback, this could be due either to an inaccurate update or to the object having moved during the motion. To optimally infer the object’s location, it is therefore critical for the brain to estimate the probabilities of these two causal structures and accordingly integrate and/or segregate the internal and sensory estimates. To test this hypothesis, we designed a spatial updating task involving passive whole body translation. Participants, seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, reafferent visual feedback was provided by flashing a second target around the estimated “updated” target location, and participants had to report the initial target location. We found that the participants’ responses were systematically biased toward the position of the second target for relatively small, but not for large, differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target location and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the visual feedback come from a common cause and uses this probability to weigh the two sources of information in mediating spatial constancy across whole body motion. NEW & NOTEWORTHY When we move, egocentric representations of object locations require internal updating to keep them in register with their true world-fixed locations. How does this mechanism interact with reafferent visual input, given that objects typically do not disappear from view? 
Here we show that the brain implicitly represents the probability that both types of information derive from the same object and uses this probability to weigh their contribution for achieving spatial constancy across whole body motion.
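The integrate-versus-segregate weighting can be sketched as a minimal causal inference model: compute the posterior probability that both cues share a cause, then average the integrated and segregated estimates by that probability. The flat spatial prior, the prior probability of a common cause, and all parameter values are illustrative assumptions, not the paper's fitted model:

```python
import math

def causal_inference_estimate(x_upd, x_vis, sig_upd, sig_vis,
                              p_common=0.5, prior_range=60.0):
    """Combine an internally updated location (x_upd) with reafferent
    visual feedback (x_vis) via Bayesian causal inference. Illustrative
    sketch with a flat spatial prior over `prior_range` degrees."""
    var_sum = sig_upd**2 + sig_vis**2
    # likelihood of the observed discrepancy if both cues share one cause
    like_common = (math.exp(-(x_upd - x_vis)**2 / (2 * var_sum))
                   / math.sqrt(2 * math.pi * var_sum)) / prior_range
    # under independent causes, any discrepancy is equally likely
    like_indep = 1.0 / prior_range**2
    post_common = (p_common * like_common
                   / (p_common * like_common + (1 - p_common) * like_indep))
    # integrated (reliability-weighted) vs. segregated estimates
    w = sig_vis**2 / var_sum
    x_integrated = w * x_upd + (1 - w) * x_vis
    x_segregated = x_upd
    # model averaging: small discrepancies pull toward the visual cue
    return post_common * x_integrated + (1 - post_common) * x_segregated
```

This reproduces the qualitative pattern in the abstract: a nearby flash biases the reported location toward the visual feedback, while a distant flash is attributed to a separate cause and leaves the internal estimate largely intact.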


Author(s):  
Hui Xiao ◽  
Xu Chen

Abstract Although visual feedback has enabled a wide range of robotic capabilities such as autonomous navigation and robotic surgery, the low sampling rate and time delays of visual outputs continue to hinder real-time applications. When partial knowledge of the target dynamics is available, however, we show the potential for significant performance gains in vision-based target following. Specifically, we propose a new framework with Kalman filters and multirate model-based prediction (1) to reconstruct fast-sampled 3D target position and velocity data, and (2) to compensate for the time delay for general robotic motion profiles. Along the way, we study the impact of modeling choices and delay duration, build simulation tools, and experimentally verify different algorithms with a robot manipulator equipped with an eye-in-hand camera. The results show that the robot can track a moving target with fast dynamics even if the visual measurements are slow and incapable of providing timely information.
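The prediction side of such a framework can be sketched with a constant-velocity Kalman model: between slow, delayed visual samples, the filter propagates the state forward over the fast control ticks so the controller acts on an up-to-date estimate. The constant-velocity model and all names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def predict_ahead(x, P, F, Q, steps):
    """Propagate the Kalman state and covariance forward `steps` fast
    ticks, compensating a known measurement delay via model-based
    prediction (illustrative sketch)."""
    for _ in range(steps):
        x = F @ x
        P = F @ P @ F.T + Q
    return x, P

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update, applied whenever a slow,
    delayed visual sample z arrives."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # corrected state
    P = (np.eye(len(x)) - K @ H) @ P         # corrected covariance
    return x, P
```

In use, each slow camera frame triggers `kalman_update`, after which `predict_ahead` bridges the delay plus the intervening fast ticks until the next frame arrives.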


2019 ◽  
Vol 4 (6) ◽  
pp. 1589-1594
Author(s):  
Yvonne van Zaalen ◽  
Isabella Reichel

Purpose Among the best strategies to address inadequate speech monitoring skills and other parameters of communication in people with cluttering (PWC) is the relatively new but very promising auditory–visual feedback (AVF) training ( van Zaalen & Reichel, 2015 ). This study examines the effects of AVF training on articulatory accuracy, pause duration, frequency, and type of disfluencies of PWC, as well as on the emotional and cognitive aspects that may be present in clients with this communication disorder ( Reichel, 2010 ; van Zaalen & Reichel, 2015 ). Methods In this study, 12 male adolescents and adults—6 with phonological and 6 with syntactic cluttering—were provided with weekly AVF training for 12 weeks, with a 3-month follow-up. Data were gathered at baseline (T0), Week 6 (T1), Week 12 (T2), and after follow-up (T3). Spontaneous speech was recorded and analyzed using digital audio-recording and the speech analysis software Praat ( Boersma & Weenink, 2017 ). Results The results of this study indicated that PWC demonstrated significant improvements in articulatory rate measurements and in pause duration following the AVF training. In addition, the PWC in the study reported positive effects on their ability to retell a story and to speak in more complete sentences. PWC felt better about formulating their ideas and were more satisfied with their interactions with people around them. Conclusions The AVF training was found to be an effective approach for improving the monitoring skills of PWC, with both quantitative and qualitative benefits in the behavioral, cognitive, emotional, and social domains of communication.


2012 ◽  
Vol 220 (1) ◽  
pp. 3-9 ◽  
Author(s):  
Sandra Sülzenbrück

For the effective use of modern tools, the inherent visuo-motor transformation needs to be mastered. The successful adjustment to and learning of these transformations crucially depends on practice conditions, particularly on the type of visual feedback during practice. Here, a review about empirical research exploring the influence of continuous and terminal visual feedback during practice on the mastery of visuo-motor transformations is provided. Two studies investigating the impact of the type of visual feedback on either direction-dependent visuo-motor gains or the complex visuo-motor transformation of a virtual two-sided lever are presented in more detail. The findings of these studies indicate that the continuous availability of visual feedback supports performance when closed-loop control is possible, but impairs performance when visual input is no longer available. Different approaches to explain these performance differences due to the type of visual feedback during practice are considered. For example, these differences could reflect a process of re-optimization of motor planning in a novel environment or represent effects of the specificity of practice. Furthermore, differences in the allocation of attention during movements with terminal and continuous visual feedback could account for the observed differences.


1960 ◽  
Author(s):  
R. S. Nickerson ◽  
J. S. Duva