Human Visuospatial Updating After Noncommutative Rotations

2007 ◽  
Vol 98 (1) ◽  
pp. 537-541 ◽  
Author(s):  
Eliana M. Klier ◽  
Dora E. Angelaki ◽  
Bernhard J. M. Hess

As we move our bodies in space, we often undergo head and body rotations about different axes—yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
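The noncommutativity this abstract turns on is easy to verify directly. A minimal sketch in Python (the axis conventions and the 90° angle are illustrative assumptions, not the study's parameters): applying yaw-then-roll versus roll-then-yaw to the same space-fixed direction yields two distinct endpoints.

```python
import math

def yaw(a):
    # rotation about the vertical (z) axis by angle a (radians)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def roll(a):
    # rotation about the naso-occipital (x) axis by angle a (radians)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

a = math.radians(90)
target = [1.0, 0.0, 0.0]  # a space-fixed direction

# The rotation applied first sits rightmost in the matrix product:
yaw_then_roll = matmul(roll(a), yaw(a))   # yaw applied first
roll_then_yaw = matmul(yaw(a), roll(a))   # roll applied first

print(apply(yaw_then_roll, target))  # differs from the line below
print(apply(roll_then_yaw, target))
```

A commutative model would predict the two printed vectors coincide; here they end up a full quarter-turn apart, which is what makes the two required eye movements in the task distinguishable.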

2015 ◽  
Vol 114 (6) ◽  
pp. 3211-3219 ◽  
Author(s):  
J. J. Tramper ◽  
W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms.
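The reliability-weighted combination described above is standard inverse-variance (maximum-likelihood) cue integration. A minimal sketch, with made-up numbers for the two reference frames (the weighting rule, not the values, is what the model specifies):

```python
# Combine an eye-centered and a body-centered estimate of the same target
# location, weighting each by its reliability (inverse variance).

def integrate(x_eye, var_eye, x_body, var_body):
    w_eye = (1 / var_eye) / (1 / var_eye + 1 / var_body)
    x_hat = w_eye * x_eye + (1 - w_eye) * x_body
    # The combined variance is smaller than either input variance.
    var_hat = 1 / (1 / var_eye + 1 / var_body)
    return x_hat, var_hat

x_hat, var_hat = integrate(x_eye=10.0, var_eye=4.0, x_body=12.0, var_body=1.0)
print(x_hat, var_hat)  # estimate lies closer to the more reliable (body) cue
```

Because the combined variance is below that of either single frame, keeping both representations in sync yields a more precise location estimate than single-frame updating, which is the paper's central claim.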


2018 ◽  
Author(s):  
Florian Perdreau ◽  
James Cooke ◽  
Mathieu Koppen ◽  
W. Pieter Medendorp

Abstract
The brain can estimate the amplitude and direction of self-motion by integrating multiple sources of sensory information, and use this estimate to update object positions in order to provide us with a stable representation of the world. A strategy to improve the precision of the object position estimate would be to integrate this internal estimate and the sensory feedback about the object position based on their reliabilities. Integrating these cues, however, would only be optimal under the assumption that the object has not moved in the world during the intervening body displacement. Therefore, the brain would have to infer whether the internal estimate and the feedback relate to the same external position (stable object), and integrate and/or segregate these cues based on this inference – a process that can be modeled as Bayesian causal inference. To test this hypothesis, we designed a spatial updating task across passive whole-body translation in complete darkness, in which participants (n=11), seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, a second target (feedback) was briefly flashed around the estimated “updated” target location, and participants had to report the initial target location. We found that the participants’ responses were systematically biased toward the position of the second target for relatively small but not for large differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target position and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.

Author Summary
A change of an object’s position on our retina can be caused by a change of the object’s location in the world or by a movement of the eye and body. Here, we examine how the brain solves this problem for spatial updating by assessing the probability that the internally updated location during body motion and the observed retinal feedback after the motion stem from the same object location in the world. Guided by a Bayesian causal inference model, we demonstrate that participants’ errors in spatial updating depend nonlinearly on the spatial discrepancy between the internally updated estimate and the reafferent visual feedback about the object’s location in the world. We propose that the brain implicitly represents the probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.


2019 ◽  
Vol 121 (1) ◽  
pp. 269-284 ◽  
Author(s):  
Florian Perdreau ◽  
James R. H. Cooke ◽  
Mathieu Koppen ◽  
W. Pieter Medendorp

The brain uses self-motion information to internally update egocentric representations of locations of remembered world-fixed visual objects. If a discrepancy is observed between this internal update and reafferent visual feedback, this could be either due to an inaccurate update or because the object has moved during the motion. To optimally infer the object’s location it is therefore critical for the brain to estimate the probabilities of these two causal structures and accordingly integrate and/or segregate the internal and sensory estimates. To test this hypothesis, we designed a spatial updating task involving passive whole body translation. Participants, seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, the reafferent visual feedback was provided by flashing a second target around the estimated “updated” target location, and participants had to report the initial target location. We found that the participants’ responses were systematically biased toward the position of the second target for relatively small but not for large differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target location and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the visual feedback come from a common cause and uses this probability to weigh the two sources of information in mediating spatial constancy across whole body motion. NEW & NOTEWORTHY When we move, egocentric representations of object locations require internal updating to keep them in register with their true world-fixed locations. How does this mechanism interact with reafferent visual input, given that objects typically do not disappear from view?
Here we show that the brain implicitly represents the probability that both types of information derive from the same object and uses this probability to weigh their contribution for achieving spatial constancy across whole body motion.
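The causal-inference rule these two abstracts describe can be sketched compactly. This is a simplified one-dimensional version under assumed variances and priors (`p_common`, `var_move`, and all numbers are illustrative, not the authors' fitted values): compute the posterior probability that the internally updated estimate and the flashed feedback share a common cause, then model-average between integration and segregation.

```python
import math

def normpdf(d, var):
    # zero-mean Gaussian density at discrepancy d with variance var
    return math.exp(-d * d / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bci_estimate(x_upd, var_upd, x_fb, var_fb, p_common=0.5, var_move=100.0):
    d = x_fb - x_upd
    # C = 1 (stable object): the discrepancy is pure noise.
    like_same = normpdf(d, var_upd + var_fb)
    # C = 2 (object moved): the displacement adds extra variance var_move.
    like_diff = normpdf(d, var_upd + var_fb + var_move)
    p_same = (like_same * p_common) / (
        like_same * p_common + like_diff * (1.0 - p_common))
    # If the cause is common, integrate with inverse-variance weights;
    # otherwise keep the internally updated estimate alone.
    w = (1.0 / var_upd) / (1.0 / var_upd + 1.0 / var_fb)
    x_integrated = w * x_upd + (1.0 - w) * x_fb
    return p_same * x_integrated + (1.0 - p_same) * x_upd

small = bci_estimate(x_upd=0.0, var_upd=1.0, x_fb=2.0, var_fb=1.0)
large = bci_estimate(x_upd=0.0, var_upd=1.0, x_fb=20.0, var_fb=1.0)
print(small, large)
```

The sketch reproduces the reported qualitative pattern: a small discrepancy pulls the response toward the feedback, while a large discrepancy is attributed to a moved object and leaves the response near the internally updated location.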


1996 ◽  
Vol 76 (6) ◽  
pp. 3617-3632 ◽  
Author(s):  
A. Z. Zivotofsky ◽  
K. G. Rottach ◽  
L. Averbuch-Heller ◽  
A. A. Kori ◽  
C. W. Thomas ◽  
...  

1. Measurements were made in four normal human subjects of the accuracy of saccades to remembered locations of targets that were flashed on a 20 x 30 deg random dot display that was either stationary or moving horizontally and sinusoidally at +/-9 deg at 0.3 Hz. During the interval between the target flash and the memory-guided saccade, the “memory period” (1.4 s), subjects either fixated a stationary spot or pursued a spot moving vertically sinusoidally at +/-9 deg at 0.3 Hz. 2. When saccades were made toward the location of targets previously flashed on a stationary background as subjects fixated the stationary spot, median saccadic error was 0.93 deg horizontally and 1.1 deg vertically. These errors were greater than for saccades to visible targets, which had median values of 0.59 deg horizontally and 0.60 deg vertically. 3. When targets were flashed as subjects smoothly pursued a spot that moved vertically across the stationary background, median saccadic error was 1.1 deg horizontally and 1.2 deg vertically, thus being of similar accuracy to when targets were flashed during fixation. In addition, the vertical component of the memory-guided saccade was much more closely correlated with the “spatial error” than with the “retinal error”; this indicated that, when programming the saccade, the brain had taken into account eye movements that occurred during the memory period. 4. When saccades were made to targets flashed during attempted fixation of a stationary spot on a horizontally moving background, a condition that produces a weak Duncker-type illusion of horizontal movement of the primary target, median saccadic error increased horizontally to 3.2 deg but was 1.1 deg vertically. 5.
When targets were flashed as subjects smoothly pursued a spot that moved vertically on the horizontally moving background, a condition that induces a strong illusion of diagonal target motion, median saccadic error was 4.0 deg horizontally and 1.5 deg vertically; thus the horizontal error was greater than under any other experimental condition. 6. In most trials, the initial saccade to the remembered target was followed by additional saccades while the subject was still in darkness. These secondary saccades, which were executed in the absence of visual feedback, brought the eye closer to the target location. During paradigms involving horizontal background movement, these corrections were more prominent horizontally than vertically. 7. Further measurements were made in two subjects to determine whether inaccuracy of memory-guided saccades, in the horizontal plane, was due to mislocalization at the time that the target flashed, misrepresentation of the trajectory of the pursuit eye movement during the memory period, or both. 8. The saccadic error, both with and without corrections made in darkness, reflected mislocalization of the target by approximately 30% of the displacement of the background at the time that the target flashed. The magnitude of the saccadic error was also influenced by net movement of the background during the memory period, corresponding to approximately 25% of net background movement for the initial saccade and approximately 13% for the final eye position achieved in darkness. 9. We formulated simple linear models to test specific hypotheses about which combinations of signals best describe the observed saccadic amplitudes. We tested the possibilities that the brain made an accurate memory of target location and a reliable representation of the eye movement during the memory period, or that one or both of these was corrupted by the illusory visual stimulus.
Our data were best accounted for by a model in which both the working memory of target location and the internal representation of the horizontal eye movements were corrupted by the illusory visual stimulus. We conclude that extraretinal signals played only a minor role, in comparison with visual estimates of the direction of gaze, in planning eye movements to remembered targets.
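The kind of linear model comparison described in points 8 and 9 can be illustrated with a least-squares fit. This sketch uses synthetic data under assumed gains (the ~30% flash-time and ~25% memory-period corruption fractions are taken from the abstract; the trial structure, noise level, and variable names are my assumptions, not the authors' actual data or fits):

```python
# Model: observed saccade amplitude = true target location
#        + k1 * (background shift at flash time)
#        + k2 * (net background motion during the memory period) + noise.
# We recover k1 and k2 by solving the 2x2 normal equations by hand.

import random

random.seed(1)

k1_true, k2_true = 0.30, 0.25   # corruption gains reported as ~30% and ~25%
trials = []
for _ in range(200):
    target = random.uniform(-10, 10)   # true target location (deg)
    bg_flash = random.uniform(-9, 9)   # background shift at flash time (deg)
    bg_net = random.uniform(-9, 9)     # net background motion in memory period
    amp = target + k1_true * bg_flash + k2_true * bg_net + random.gauss(0, 0.2)
    trials.append((target, bg_flash, bg_net, amp))

# Regress the residual (amp - target) on the two background predictors.
s11 = sum(b * b for _, b, _, _ in trials)
s22 = sum(n * n for _, _, n, _ in trials)
s12 = sum(b * n for _, b, n, _ in trials)
r1 = sum(b * (a - t) for t, b, _, a in trials)
r2 = sum(n * (a - t) for t, _, n, a in trials)
det = s11 * s22 - s12 * s12
k1_hat = (s22 * r1 - s12 * r2) / det
k2_hat = (s11 * r2 - s12 * r1) / det
print(round(k1_hat, 2), round(k2_hat, 2))  # recovers roughly 0.30 and 0.25
```

Comparing such fits against restricted models (e.g., forcing k1 = 0 or k2 = 0) is the logic behind the authors' conclusion that both the target memory and the eye-movement representation were corrupted.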


2008 ◽  
Vol 99 (4) ◽  
pp. 1799-1809 ◽  
Author(s):  
Eliana M. Klier ◽  
Bernhard J. M. Hess ◽  
Dora E. Angelaki

To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63, and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward, or backward while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84 ± 0.28 (mean ± SD) as compared with 0.51 ± 0.33 for downward and 1.05 ± 0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12 ± 0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and intersubject variabilities were smallest for near targets. Thus in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy.
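The updating ratio above has a simple geometric reading for lateral translations; a minimal sketch, assuming a flat viewing geometry (the exact geometry the authors used may differ, and the measured value here is invented for illustration):

```python
import math

def required_version_deg(target_depth_cm, translation_cm):
    # After a lateral body translation, re-fixating a world-fixed target
    # requires a horizontal (version) rotation of atan(translation / depth).
    return math.degrees(math.atan2(translation_cm, target_depth_cm))

def updating_ratio(measured_deg, required_deg):
    # 0 indicates no updating, 1 indicates perfect updating
    return measured_deg / required_deg

# 10-cm lateral translation, target at 43 cm (one of the study's depths):
required = required_version_deg(target_depth_cm=43.0, translation_cm=10.0)
ratio = updating_ratio(measured_deg=11.0, required_deg=required)
print(round(required, 1), round(ratio, 2))
```

On this geometry a roughly 13-degree version movement would be required, so a measured 11-degree response corresponds to an updating ratio near the 0.84 average reported for lateral translations.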


Author(s):  
Philip S. Murphy ◽  
Neel Patel ◽  
Timothy J. McCarthy

Pharmaceutical research and development requires a systematic interrogation of a candidate molecule through clinical studies. To ensure resources are spent on only the most promising molecules, early clinical studies must understand fundamental attributes of the drug candidate, including exposure at the target site, target binding and pharmacological response in disease. Molecular imaging has the potential to quantitatively characterize these properties in small, efficient clinical studies. Specific benefits of molecular imaging in this setting (compared to blood and tissue sampling) include non-invasiveness and the ability to survey the whole body temporally. These methods have been adopted primarily for neuroscience drug development, catalysed by the inability to access the brain compartment by other means. If we believe molecular imaging is a technology platform able to underpin clinical drug development, why is it not adopted further to enable earlier decisions? This article considers current drug development needs, progress towards integration of molecular imaging into studies, current impediments and proposed models to broaden use and increase impact. This article is part of the themed issue ‘Challenges for chemistry in molecular imaging’.


Author(s):  
Audrey Rousseaud ◽  
Stephanie Moriceau ◽  
Mariana Ramos-Brossier ◽  
Franck Oury

Abstract
Reciprocal relationships between organs are essential to maintain whole body homeostasis. An exciting interplay between two apparently unrelated organs, the bone and the brain, has emerged recently. Indeed, it is now well established that the brain is a powerful regulator of skeletal homeostasis via a complex network of numerous players and pathways. In turn, bone via a bone-derived molecule, osteocalcin, appears as an important factor influencing the central nervous system by regulating brain development and several cognitive functions. In this paper we will discuss this complex and intimate relationship, as well as several pathologic conditions that may reinforce their potential interdependence.


2013 ◽  
Vol 34 (6) ◽  
pp. 540-543 ◽  
Author(s):  
Kuruva Manohar ◽  
Anish Bhattacharya ◽  
Bhagwant R. Mittal
Keyword(s):  
FDG PET ◽  
PET/CT ◽  
18F-FDG ◽  

1991 ◽  
Vol 31 (4) ◽  
pp. 693-715 ◽  
Author(s):  
James W. Gnadt ◽  
R. Martyn Bracewell ◽  
Richard A. Andersen

Author(s):  
Alison Pienciak-Siewert ◽  
Alaa A Ahmed

How does the brain coordinate concurrent adaptation of arm movements and standing posture? Previous studies have shown that the postural control system can use information about previously adapted arm movement dynamics to plan appropriate postural control; however, it is unclear whether postural control can be adapted and controlled independently of arm control. The present study addresses that question. Subjects practiced planar reaching movements while standing and grasping the handle of a robotic arm, which generated a force field to create novel perturbations. Subjects were divided into two groups, for which perturbations were introduced in either an abrupt or gradual manner. All subjects adapted to the perturbations while reaching with their dominant (right) arm, then switched to reaching with their non-dominant (left) arm. Previous studies of seated reaching movements showed that abrupt perturbation introduction led to transfer of learning between arms, but gradual introduction did not. Interestingly, in this study neither group showed evidence of transferring adapted control of arm or posture between arms. These results suggest primarily that adapted postural control cannot be transferred independently of arm control in this task paradigm. In other words, whole-body postural movement planning related to a concurrent arm task is dependent on information about arm dynamics. Finally, we found that subjects were able to adapt to the gradual perturbation while experiencing very small errors, suggesting that both error size and consistency play a role in driving motor adaptation.
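The abrupt-versus-gradual manipulation can be sketched as two perturbation schedules; the trial count, gain value, and linear ramp here are illustrative assumptions, not the study's actual protocol.

```python
# Abrupt schedule: the full force-field gain from the first perturbed trial.
# Gradual schedule: the gain ramps up linearly, so per-trial errors stay small.

def perturbation_gain(trial, n_trials=200, schedule="abrupt", max_gain=1.0):
    if schedule == "abrupt":
        return max_gain
    # gradual: linear ramp reaching full strength on the final trial
    return max_gain * (trial + 1) / n_trials

abrupt = [perturbation_gain(t, schedule="abrupt") for t in range(200)]
gradual = [perturbation_gain(t, schedule="gradual") for t in range(200)]
print(abrupt[0], round(gradual[0], 3), gradual[-1])
```

Both schedules end at the same full field strength; they differ only in the size of the trial-to-trial error the subject experiences, which is the variable the final sentence of the abstract turns on.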

