Impact of Coordinate Transformation Uncertainty on Human Sensorimotor Control

2007 ◽  
Vol 97 (6) ◽  
pp. 4203-4214 ◽  
Author(s):  
Erik J. Schlicht ◽  
Paul R. Schrater

Humans build representations of objects and their locations by integrating imperfect information from multiple perceptual modalities (e.g., visual, haptic). Because sensory information is specified in different frames of reference (i.e., eye- and body-centered), it must be remapped into a common coordinate frame before integration and storage in memory. Such transformations require an understanding of body articulation, which is estimated through noisy sensory data. Consequently, target information acquires additional coordinate transformation uncertainty (CTU) during remapping because of errors in joint angle sensing. As a result, CTU creates differences in the reliability of target information depending on the reference frame used for storage. This paper explores whether the brain represents and compensates for CTU when making grasping movements. To address this question, we varied eye position in the head, while participants reached to grasp a spatially fixed object, both when the object was in view and when it was occluded. Varying eye position changes CTU between eye and head, producing additional uncertainty in remapped information away from forward view. The results showed that people adjust their maximum grip aperture to compensate both for changes in visual information and for changes in CTU when the target is occluded. Moreover, the amount of compensation is predicted by a Bayesian model for location inference that uses eye-centered storage.
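The reliability-weighted integration this abstract appeals to can be sketched in a few lines. The following is an illustrative toy, not the authors' actual model: the function names, the linear growth of CTU with eye eccentricity, and all numbers are assumptions chosen only to show how remapping noise penalizes a stored cue before fusion.

```python
import numpy as np

def fuse(mu_a, var_a, mu_b, var_b):
    """Minimum-variance (reliability-weighted) fusion of two Gaussian cues."""
    w = var_b / (var_a + var_b)                # weight on cue A rises as cue B gets noisier
    mu = w * mu_a + (1 - w) * mu_b
    var = (var_a * var_b) / (var_a + var_b)    # fused variance is below either input
    return mu, var

def remap(mu, var, eye_ecc_deg, ctu_per_deg=0.02):
    """Remapping across reference frames adds coordinate transformation
    uncertainty (CTU); here it is assumed to grow linearly with eye eccentricity."""
    return mu, var + ctu_per_deg * abs(eye_ecc_deg)

# Hypothetical target-width estimates (cm): a visual cue in eye coordinates,
# and a stored cue that must be remapped across a 20 deg eye rotation.
mu_v, var_v = 2.0, 0.10
mu_s, var_s = remap(2.2, 0.15, eye_ecc_deg=20.0)
mu, var = fuse(mu_v, var_v, mu_s, var_s)
```

Because remapping inflates the stored cue's variance, the fused estimate shifts toward the visual cue, mirroring the paper's claim that grip aperture compensates for CTU.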

2012 ◽  
Vol 25 (0) ◽  
pp. 111
Author(s):  
Shuichi Sakamoto ◽  
Gen Hasegawa ◽  
Akio Honda ◽  
Yukio Iwaya ◽  
Yôiti Suzuki ◽  
...  

High-definition multimodal displays are necessary to advance information and communications technologies. Such systems mainly present audio–visual information because this sensory information carries rich spatiotemporal structure. Recently, not only audio–visual information but also other sensory information, such as touch, smell, and vibration, has become easy to present, expanding the potential of high-definition multimodal displays. We specifically examined the effects of full-body vibration on the perceived reality of audio–visual content. As indexes of perceived reality, we used the sense of presence and the sense of verisimilitude; the latter reflects the appreciation of foreground components in multimodal content, whereas the former is related more closely to background components of a scene. Our previous report described how the two senses respond differently to audio–visual content (Kanda et al., IMRF2011). In the present experiments, various amounts of full-body vibration were presented with an audio–visual movie recorded by a camera and microphone mounted on a wheelchair. Participants reported the perceived sense of presence and of verisimilitude. Results revealed that the intensity of full-body vibration characterized the two senses differently: the sense of presence increased linearly with the intensity of full-body vibration, whereas the sense of verisimilitude showed a nonlinear tendency. These results suggest that not only audio–visual information but also full-body vibration is important for developing high-definition multimodal displays.


2018 ◽  
Vol 5 (2) ◽  
pp. 171785 ◽  
Author(s):  
Martin F. Strube-Bloss ◽  
Wolfgang Rössler

Flowers attract pollinating insects like honeybees by sophisticated compositions of olfactory and visual cues. Using honeybees as a model to study olfactory–visual integration at the neuronal level, we focused on mushroom body (MB) output neurons (MBON). From a neuronal circuit perspective, MBONs represent a prominent level of sensory-modality convergence in the insect brain. We established an experimental design allowing electrophysiological characterization of olfactory, visual, as well as olfactory–visual induced activation of individual MBONs. Despite the obvious convergence of olfactory and visual pathways in the MB, we found numerous unimodal MBONs. However, a substantial proportion of MBONs (32%) responded to both modalities and thus integrated olfactory–visual information across MB input layers. In these neurons, representation of the olfactory–visual compound was significantly increased compared with that of single components, suggesting an additive, but nonlinear integration. Population analyses of olfactory–visual MBONs revealed three categories: (i) olfactory, (ii) visual and (iii) olfactory–visual compound stimuli. Interestingly, no significant differentiation was apparent regarding different stimulus qualities within these categories. We conclude that encoding of stimulus quality within a modality is largely completed at the level of MB input, and information at the MB output is integrated across modalities to efficiently categorize sensory information for downstream behavioural decision processing.


2003 ◽  
Vol 90 (2) ◽  
pp. 1279-1294 ◽  
Author(s):  
Ralph M. Siegel ◽  
Milena Raffi ◽  
Raymond E. Phinney ◽  
Jessica A. Turner ◽  
Gábor Jandó

In the behaving monkey, inferior parietal lobe cortical neurons combine visual information with eye position signals. However, an organized topographic map of these neurons' properties had never been demonstrated. Intrinsic optical imaging revealed a functional architecture for the effect of eye position on the visual response to radial optic flow. The map was distributed across two subdivisions of the inferior parietal lobule: area 7a and the dorsal prelunate area, DP. Area 7a contains a representation of the lower eye position gain fields, while area DP represents the upper eye position gain fields. Horizontal eye position is represented orthogonally to vertical eye position across the mediolateral extent of the cortices. Similar topographies were found in three hemispheres of two monkeys; the horizontal and vertical gain-field representations were not isotropic, with greater modulation found for the vertical. Monte Carlo methods demonstrated the significance of the maps, which were verified in part using multiunit recordings. This novel topographic organization of an association cortex area provides a substrate for constructing representations of surrounding space for perception and the guidance of motor behavior.
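The Monte Carlo significance test mentioned in this abstract can be illustrated with a simple permutation scheme on synthetic data. This is only a sketch: the smoothness statistic, the synthetic "gain-field" map, and all parameters are invented for illustration and are not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_statistic(values, positions):
    # Hypothetical map-organization statistic: negative mean squared
    # difference between neighbouring sites; an orderly map is smoother
    # across cortex than a shuffled one, so larger (less negative) is better.
    order = np.argsort(positions)
    v = values[order]
    return -np.mean(np.diff(v) ** 2)

def permutation_p(values, positions, n=1000):
    # p-value: fraction of shuffled maps at least as orderly as the observed one
    observed = map_statistic(values, positions)
    null = [map_statistic(rng.permutation(values), positions) for _ in range(n)]
    return np.mean([s >= observed for s in null])

# Synthetic preferences that vary smoothly across a 1-D cortical axis
positions = np.linspace(0.0, 1.0, 50)
values = np.sin(2 * np.pi * positions) + 0.1 * rng.standard_normal(50)
p = permutation_p(values, positions)
```

A genuinely topographic map yields a statistic far outside the shuffled null distribution, so `p` comes out small.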


2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed, but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed than in the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results suggest that current computational and neural models of sensorimotor control that are based entirely on data derived from target-directed paradigms have to be modified to accommodate performance in the allocentric tasks used in our experiments.
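How visual feedback reduces movement scatter in a target-directed task can be caricatured with a toy 1-D simulation in which a fraction of the visually sensed error is corrected on each step. All dynamics, gains, and noise levels here are invented for illustration; this is not the authors' model or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def endpoint_scatter(feedback_gain, n_trials=2000, n_steps=20, noise=0.05):
    """Simulate 1-D reaches with motor noise; on each step, a fraction of the
    visually sensed remaining error is corrected (gain 0 = no visual feedback)."""
    errors = np.zeros(n_trials)
    for t in range(n_trials):
        x, target = 0.0, 1.0
        for k in range(n_steps):
            step = (target - x) / (n_steps - k)       # planned progress this step
            x += step + noise * rng.standard_normal() # execution noise
            x += feedback_gain * 0.2 * (target - x)   # visual online correction
        errors[t] = x - target
    return errors.std()

s_open = endpoint_scatter(0.0)    # no use of visual feedback
s_closed = endpoint_scatter(1.0)  # full use of visual feedback
```

The closed-loop endpoint scatter comes out smaller than the open-loop scatter; the paper's finding is that this feedback benefit is exploited much more in target-directed than in allocentric tasks.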


2013 ◽  
Vol 109 (1) ◽  
pp. 183-192 ◽  
Author(s):  
Bernhard J. M. Hess

Although the motion of the line of sight is a straightforward consequence of a particular rotation of the eye, it is much trickier to predict the rotation underlying a particular motion of the line of sight in accordance with Listing's law. Helmholtz's notion of the direction-circle, together with the notion of primary and secondary reference directions in visual space, provides an elegant solution to this reverse-engineering problem, which the brain faces whenever it generates a saccade. To test whether these notions indeed apply to saccades, we analyzed three-dimensional eye movements recorded in four rhesus monkeys. We found that, on average, saccade trajectories closely matched the associated direction-circles. Torsional, vertical, and horizontal eye positions during saccades scattered around the positions predicted by the associated direction-circles with standard deviations of 0.5°, 0.3°, and 0.4°, respectively. Comparison of saccade trajectories with the likewise-predicted fixed-axis rotations yielded mean coefficients of determination (±SD) of 0.72 (±0.26) for torsion, 0.97 (±0.10) for vertical, and 0.96 (±0.11) for horizontal eye position. Reverse engineering of three-dimensional saccadic rotations based on visual information suggests that motor control of saccades compatible with Listing's law not only uses information about the fixation directions at saccade onset and offset but also relies on the computation of secondary reference positions that vary from saccade to saccade.
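The core geometric constraint can be made concrete. Listing's law restricts eye orientations to rotations whose axes lie in a plane perpendicular to the primary direction; the sketch below builds, via Rodrigues' formula, the zero-torsion rotation that carries a primary gaze direction onto a desired line of sight. The specific vectors are hypothetical, and this illustrates only the constraint, not the paper's direction-circle analysis.

```python
import numpy as np

def listing_rotation(p, g):
    """Rotation taking primary direction p to gaze direction g whose axis is
    perpendicular to both (i.e., lies in Listing's plane: zero ocular torsion)."""
    p = p / np.linalg.norm(p)
    g = g / np.linalg.norm(g)
    axis = np.cross(p, g)                      # perpendicular to p and g
    s, c = np.linalg.norm(axis), np.dot(p, g)
    angle = np.arctan2(s, c)                   # angle between p and g
    axis = axis / s                            # assumes p and g are not collinear
    # Rodrigues' rotation formula: R = I + sin(a) K + (1 - cos(a)) K^2
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

p = np.array([0.0, 0.0, 1.0])   # hypothetical primary direction (straight ahead)
g = np.array([0.3, 0.2, 1.0])   # hypothetical desired line of sight
R = listing_rotation(p, g)
```

Because the axis is constructed in Listing's plane, the resulting rotation has no torsional component about the primary direction, which is the property the recorded saccades were tested against.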


Author(s):  
Yuri B. Saalmann ◽  
Sabine Kastner

Neural mechanisms of selective attention route behaviourally relevant information through brain networks for detailed processing. These attention mechanisms are classically viewed as being solely implemented in the cortex, relegating the thalamus to a passive relay of sensory information. However, this passive view of the thalamus is being revised in light of recent studies supporting an important role for the thalamus in selective attention. Evidence suggests that the first-order thalamic nucleus, the lateral geniculate nucleus, regulates the visual information transmitted from the retina to visual cortex, while the higher-order thalamic nucleus, the pulvinar, regulates information transmission between visual cortical areas, according to attentional demands. This chapter discusses how modulation of thalamic responses, switching the response mode of thalamic neurons, and changes in neural synchrony across thalamo-cortical networks contribute to selective attention.


2018 ◽  
Vol 31 (3-4) ◽  
pp. 227-249 ◽  
Author(s):  
Alix L. de Dieuleveult ◽  
Anne-Marie Brouwer ◽  
Petra C. Siemonsma ◽  
Jan B. F. van Erp ◽  
Eli Brenner

Older individuals seem to find it more difficult than younger individuals to ignore inaccurate sensory cues. We examined whether this could be quantified using an interception task. Twenty healthy young adults (ages 18–34) and twenty-four healthy older adults (ages 60–82) were asked to tap with their finger on discs that were moving downwards on a screen. Moving the background to the left made the discs appear to move more to the right; moving the background to the right made them appear to move more to the left. The discs disappeared before the finger reached the screen, so participants had to anticipate how the target would continue to move. We examined how misjudging the discs' motion when the background moved influenced tapping. Participants received veridical feedback about their performance, so their sensitivity to the illusory motion indicates the extent to which they could ignore task-irrelevant visual information. We expected older adults to be more sensitive to the illusion than younger adults. To investigate whether sensorimotor or cognitive load would increase this sensitivity, we also asked participants to do the task while standing on foam or counting tones. Background motion influenced older adults more than younger adults. The secondary tasks did not increase the background's influence. Older adults might be more sensitive to the moving background because they find it more difficult to ignore irrelevant sensory information in general, or they may rely more on vision because they have less reliable proprioceptive and vestibular information.


2005 ◽  
Vol 24 (4) ◽  
pp. 339-352
Author(s):  
Guillaume Giraudet ◽  
Christian Corbé ◽  
Corinne Roumes

Age-related macular degeneration (ARMD) is a frequent cause of vision loss among people over the age of 60. It is an aging process involving progressive degradation of the central retina. It does not cause total blindness, since it does not affect peripheral vision. Nonetheless, it makes it difficult to read, drive, and perform all daily activities that require the perception of fine detail. Low-vision care consists of inducing an eccentric fixation so that relevant visual targets fall on an unaffected retinal locus. This is necessary but not sufficient to enhance visual extraction. The present work aims to draw the attention of low-vision professionals to the need to develop new re-education tools. Beyond perceptual re-education linked to optimizing the extraction of visual information, cognitive re-education should also be provided in order to enhance interpretation processes. Indeed, the spatial-frequency properties of the visual world no longer match the patient's perceptual habits. The visually impaired person has to learn anew to use these new sensory data in an optimal way. Contextual information can be a precious help in this learning process. An experimental study involving young people provides elements for another method of low-vision care, in terms of visual cognitive re-education.

