Touch as an auxiliary proprioceptive cue for movement control

2019 ◽  
Vol 5 (6) ◽  
pp. eaaw3121 ◽  
Author(s):  
A. Moscatelli ◽  
M. Bianchi ◽  
S. Ciotti ◽  
G. C. Bettelani ◽  
C. V. Parise ◽  
...  

Recent studies extended the classical view that touch is mainly devoted to the perception of the external world. Perceptual tasks where the hand was stationary demonstrated that cutaneous stimuli from contact with objects provide the illusion of hand displacement. Here, we tested the hypothesis that touch provides auxiliary proprioceptive feedback for guiding actions. We used a well-established perceptual phenomenon to dissociate the estimates of reaching direction from touch and musculoskeletal proprioception. Participants slid their fingertip on a ridged plate to move toward a target without any visual feedback on hand location. Tactile motion estimates were biased by ridge orientation, inducing a systematic deviation in hand trajectories in accordance with our hypothesis. Results are in agreement with an ideal observer model, where motion estimates from different somatosensory cues are optimally integrated for the control of movement. These outcomes shed new light on the interplay between proprioception and touch in active tasks.
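The optimal integration the abstract describes can be sketched as inverse-variance (reliability) weighting of independent Gaussian cue estimates, the standard ideal-observer formulation. The function name and the example numbers below are illustrative, not taken from the study:

```python
import numpy as np

def integrate_cues(means, sigmas):
    """Optimally combine independent Gaussian cue estimates.

    Each cue is weighted by its inverse variance (its reliability),
    so noisier cues contribute less to the fused estimate.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(sigmas, dtype=float) ** 2
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused_mean = np.sum(weights * means)
    fused_var = 1.0 / np.sum(1.0 / variances)  # never larger than the best cue's variance
    return fused_mean, fused_var

# Illustrative numbers: a tactile motion estimate biased by ridge
# orientation (10 deg) vs. an unbiased musculoskeletal proprioceptive
# estimate (0 deg), with the tactile cue the more reliable of the two.
direction, variance = integrate_cues(means=[10.0, 0.0], sigmas=[5.0, 10.0])
# The fused direction lies between the cues, pulled toward the more
# reliable tactile estimate -- a systematic deviation of the kind reported.
```

On these numbers the fused estimate is 8.0 deg with variance 20.0, i.e. the biased but reliable tactile cue dominates, which is the qualitative signature the study tests for.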

2020 ◽  
Vol 2020 (16) ◽  
pp. 41-1-41-7
Author(s):  
Orit Skorka ◽  
Paul J. Kane

Many of the metrics developed for informational imaging are useful in automotive imaging, since many of the tasks – for example, object detection and identification – are similar. This work discusses sensor characterization parameters for the Ideal Observer SNR model and elaborates on the noise power spectrum. It presents cross-correlation analysis results for matched-filter detection of a tribar pattern in sets of resolution target images that were captured with three image sensors over a range of illumination levels. Lastly, the work compares the cross-correlation data to predictions made by the Ideal Observer model and demonstrates good agreement between the two methods in the relative evaluation of detection capabilities.
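Matched-filter detection by cross-correlation can be illustrated with a one-dimensional toy version of the task; the bar pattern, noise level, and offset below are invented for illustration and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-in for a tribar resolution pattern:
# three bright bars separated by gaps.
pattern = np.array([1, 1, 0, 1, 1, 0, 1, 1], dtype=float)
template = pattern - pattern.mean()  # zero-mean template for matched filtering

# Embed the pattern in a noisy "image row" at a known offset.
offset = 20
signal = rng.normal(0.0, 0.1, size=64)
signal[offset:offset + pattern.size] += pattern

# Matched-filter detection: cross-correlate with the template
# and take the location of the correlation peak.
correlation = np.correlate(signal, template, mode="valid")
detected = int(np.argmax(correlation))
```

At this moderate noise level the correlation peak lands at (or very near) the true offset; as illumination drops and noise grows, the peak becomes ambiguous, which is the detection-capability trade-off the paper quantifies.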


Author(s):  
Wakana Ishihara ◽  
Karen Moxon ◽  
Sheryl Ehrman ◽  
Mark Yarborough ◽  
Tina L. Panontin ◽  
...  

This systematic review addresses the plausibility of using novel feedback modalities for brain–computer interfaces (BCI) and attempts to identify the best feedback modality on the basis of effectiveness or learning rate. Of the studies selected, 100% tested visual feedback, 31.6% tested auditory feedback, 57.9% tested tactile feedback, and 21.1% tested proprioceptive feedback. Visual feedback was included in every study design because it was intrinsic to the task response (e.g., seeing a cursor move); when used alone, however, it was not very effective at improving accuracy or learning. Proprioceptive feedback was most successful at increasing the effectiveness of motor imagery BCI tasks involving neuroprosthetics. Auditory and tactile feedback yielded mixed results. Limitations of the review and recommendations for further study are discussed.


1990 ◽  
Vol 110 (2) ◽  
pp. 228-235 ◽  
Author(s):  
A. Beuter ◽  
J.G. Milton ◽  
C. Labrie ◽  
L. Glass ◽  
S. Gauthier

2018 ◽  
Author(s):  
Abdellah Fourtassi ◽  
Michael C. Frank

Identifying a spoken word in a referential context requires both the ability to integrate multimodal input and the ability to reason under uncertainty. How do these tasks interact with one another? We study how adults identify novel words under joint uncertainty in the auditory and visual modalities and we propose an ideal observer model of how cues in these modalities are combined optimally. Model predictions are tested in four experiments where recognition is made under various sources of uncertainty. We found that participants use both auditory and visual cues to recognize novel words. When the signal is not distorted with environmental noise, participants weight the auditory and visual cues optimally, that is, according to the relative reliability of each modality. In contrast, when one modality has noise added to it, human perceivers systematically prefer the unperturbed modality to a greater extent than the optimal model does. This work extends the literature on perceptual cue combination to the case of word recognition in a referential context. In addition, this context offers a link to the study of multimodal information in word meaning learning.
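Under the conditional-independence assumption standard in ideal-observer cue combination, identifying the word amounts to multiplying the modality likelihoods (a sum in log space) with a prior over candidates. The candidate words and likelihood values below are made up for illustration:

```python
import numpy as np

def identify_word(log_lik_auditory, log_lik_visual, log_prior=None):
    """Ideal-observer word identification from two modalities.

    Assuming conditionally independent auditory and visual evidence,
    the posterior over candidate words is proportional to the product
    of the two likelihoods (a sum in log space) times the prior.
    """
    log_lik_auditory = np.asarray(log_lik_auditory, dtype=float)
    log_lik_visual = np.asarray(log_lik_visual, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_lik_auditory)  # uniform prior
    log_post = log_lik_auditory + log_lik_visual + log_prior
    log_post -= np.max(log_post)  # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Two candidate referents: the auditory signal weakly favors word 0,
# while the visual scene strongly favors word 1 (illustrative values).
posterior = identify_word(log_lik_auditory=[-0.5, -1.0],
                          log_lik_visual=[-3.0, -0.2])
# The more reliable (less ambiguous) visual cue dominates the posterior.
```

The paper's finding is that human listeners match this optimal weighting under clean conditions but over-discount a modality once environmental noise is added to it, i.e. they down-weight the noisy cue more than this model would.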


2016 ◽  
Author(s):  
Adrian E Radillo ◽  
Alan Veliz-Cuba ◽  
Kresimir Josic ◽  
Zachary Kilpatrick

In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is updating the posterior probability of all possible changepoint counts. This computation can be challenging, as the number of possibilities grows rapidly with time. However, we show how the computations can be simplified in the continuum limit by a moment closure approximation. The resulting low-dimensional system can be used to infer the environmental state and change rate with accuracy comparable to the ideal observer. The approximate computations can be performed by a neural network model via a rate-correlation-based plasticity rule. We thus show how optimal observers accumulate evidence in changing environments, and map this computation onto reduced models that perform inference using plausible neural mechanisms.
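A simplified version of this kind of joint inference over state and change rate can be sketched for a two-state environment with the unknown hazard (switch) rate discretized on a grid. This is a generic Bayesian filter written for illustration, not the authors' changepoint-count computation or its moment-closure reduction:

```python
import numpy as np

def update_posterior(post, x, states, hazards, sigma=0.5):
    """One Bayesian update of the joint posterior p(state, hazard).

    post: array of shape (2, n_hazards) over the two environmental
    states and a grid of candidate hazard rates. The environment is
    assumed to switch state with probability h per step; the observer
    tracks the state and the hazard rate jointly.
    """
    pred = np.empty_like(post)
    for j, h in enumerate(hazards):
        # Prediction: the state persists w.p. (1 - h) and flips w.p. h
        # (post[::-1, j] swaps the two state rows).
        pred[:, j] = (1 - h) * post[:, j] + h * post[::-1, j]
    # Correction: Gaussian likelihood of the new observation under each state.
    lik = np.exp(-0.5 * ((x - states[:, None]) / sigma) ** 2)
    post = pred * lik
    return post / post.sum()

states = np.array([-1.0, 1.0])
hazards = np.linspace(0.05, 0.5, 10)
post = np.full((2, len(hazards)), 1.0 / (2 * len(hazards)))  # flat prior

# Feed a run of observations drawn near state +1 (illustrative values).
for x in [0.9, 1.1, 0.8, 1.2]:
    post = update_posterior(post, x, states, hazards)
state_belief = post.sum(axis=1)  # marginal belief over the state
```

After a run of consistent evidence, the belief in state +1 dominates; longer runs with occasional switches would also sharpen the marginal over the hazard grid, which is the learning-the-change-rate aspect the abstract emphasizes.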


2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed, but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed as compared with the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results cast doubt on the idea that current computational and neural models of sensorimotor control, developed exclusively from data obtained in target-directed paradigms, are also valid for allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of everyday human behavior; such models will have to be modified to accommodate performance in the allocentric tasks used in our experiments.


1997 ◽  
Vol 104 (3) ◽  
pp. 524-553 ◽  
Author(s):  
Gordon E. Legge ◽  
Timothy S. Klitz ◽  
Bosco S. Tjan

1975 ◽  
Vol 19 (2) ◽  
pp. 162-165 ◽  
Author(s):  
Jack A. Adams ◽  
Daniel Gopher ◽  
Gavan Lintern

A self-paced linear positioning task was used to study the effects of visual and proprioceptive feedback on learning and performance. Subjects were trained with knowledge of results (KR) and tested without it. The analysis of the absolute error scores of the no-KR trials is discussed in this paper. Visual feedback was the more effective source of sensory feedback, but proprioceptive feedback was also effective. The observation that the response did not become independent of sensory feedback as a result of learning was interpreted as supporting Adams' closed-loop theory of motor learning over the motor program hypothesis. Other data showed that the presence of visual feedback during learning could inhibit the later effectiveness of proprioceptive feedback.


2021 ◽  
Author(s):  
Robin L Shafer ◽  
Zheng Wang ◽  
James Bartolotti ◽  
Matthew W. Mosconi

Background
Individuals with Autism Spectrum Disorder (ASD) show deficits in processing sensory feedback to reactively adjust ongoing motor behaviors. Atypical reliance on visual and on proprioceptive feedback has each been reported during motor behaviors in ASD, suggesting that impairments are not specific to one sensory domain but may instead reflect a deficit in multisensory processing that results in reliance on unimodal feedback. The present study tested this hypothesis by examining motor behavior across different visual and proprioceptive feedback conditions during a visually guided precision grip force test.

Methods
Participants with ASD (N = 43) and age-matched typically developing (TD) controls (N = 23), range 10–20 years, completed a test of precision gripping. They pressed on force sensors with their index finger and thumb while receiving visual feedback on a computer screen in the form of a horizontal bar that moved upwards with increased force. They were instructed to press so that the bar reached the level of a static target bar and then to hold their grip force as steadily as possible. Visual feedback was manipulated by changing the gain of the force bar. Proprioceptive feedback was manipulated by applying 80 Hz tendon vibration at the wrist to induce an illusion of muscle elongation. Force variability (standard deviation) and irregularity (sample entropy) were examined using multilevel linear models.

Results
While TD controls showed increased force variability with the tendon vibration on compared to off, individuals with ASD showed similar levels of force variability across tendon vibration conditions. Individuals with ASD showed stronger age-associated reductions in force variability relative to controls across conditions. The ASD group also showed greater age-associated increases in force irregularity relative to controls, especially at higher gain levels and when the tendon vibrator was turned on.

Conclusions
Our finding that individuals with ASD show similar levels of force variability and irregularity with and without induced proprioceptive illusions suggests a reduced ability to integrate proprioceptive feedback to guide ongoing precision manual motor behavior. We also document stronger age-associated gains in force control in ASD relative to TD, suggesting delayed development of multisensory feedback control of motor behavior.
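Sample entropy, the irregularity measure used in this study, can be sketched in a few lines: it is -ln(A/B), where B counts pairs of length-m subsequences that match within a tolerance and A counts the same for length m + 1. The embedding dimension m = 2 and tolerance r = 0.2 below are conventional defaults, not necessarily the study's settings, and the two example traces are synthetic:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r * std(x) (Chebyshev distance),
    and A counts the same for length m + 1 templates. Lower values
    indicate a more regular, self-similar signal."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(k):
        templates = np.array([x[i:i + k] for i in range(len(x) - k + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates
            # (self-matches are excluded by construction).
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# A regular (periodic) stand-in for a steady grip-force trace
# vs. an irregular noise trace of the same length.
t = np.arange(200)
regular = np.sin(0.3 * t)
irregular = np.random.default_rng(1).normal(size=200)
# The periodic trace yields a lower sample entropy than the noise trace.
```

This direction of the comparison is what makes the measure useful here: a force trace that drifts in a structured, repetitive way scores low, while one with unpredictable fluctuations scores high.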

