Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion

2015
Vol 114 (6)
pp. 3211-3219
Author(s):  
J. J. Tramper ◽  
W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than a single-frame updating mechanism would allow.
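As a rough illustration of the weighting scheme the model describes, the sketch below combines an eye-centered and a body-centered estimate of the remembered target by inverse-variance (reliability) weighting. All numerical values are made up for illustration; they are not the model's fitted parameters.

```python
import numpy as np

def combine_estimates(x_eye, var_eye, x_body, var_body):
    """Reliability-weighted (inverse-variance) combination of the updated
    eye-centered and body-centered estimates of a remembered target."""
    w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
    x_combined = w_eye * x_eye + (1.0 - w_eye) * x_body
    var_combined = 1.0 / (1.0 / var_eye + 1.0 / var_body)
    return x_combined, var_combined

# Illustrative values (deg): after a sideways translation, each frame carries
# its own noisy update; the combined estimate is more precise than either one.
x_hat, var_hat = combine_estimates(x_eye=4.0, var_eye=2.0, x_body=6.0, var_body=4.0)
print(x_hat, var_hat)  # ~4.67 deg with variance ~1.33 (smaller than 2.0 and 4.0)
```

The combined variance is always smaller than either single-frame variance, which is the sense in which parallel updating can outperform a single-frame mechanism.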

2019
Vol 121 (6)
pp. 2392-2400
Author(s):  
Romy S. Bakker ◽  
Luc P. J. Selen ◽  
W. Pieter Medendorp

In daily life, we frequently reach toward objects while our body is in motion. We have recently shown that body accelerations influence the decision of which hand to use for the reach, possibly by modulating the body-centered computations of the expected reach costs. However, head orientation relative to the body was not manipulated, and hence it remains unclear whether vestibular signals contribute to these cost calculations in their head-based sensory frame or in a transformed body-centered reference frame. To test this, subjects performed a preferential reaching task to targets at various directions while they were sinusoidally translated along the lateral body axis, with their head either aligned with the body (straight ahead) or rotated 18° to the left. As a measure of hand preference, we determined the target direction that resulted in equiprobable right/left-hand choices. Results show that head orientation affects this balanced target angle when the body is stationary but does not further modulate hand preference when the body is in motion. Furthermore, reaction and movement times were larger for reaches to the balanced target angle, resembling a competitive selection process, and were modulated by head orientation when the body was stationary. During body translation, reaction and movement times depended on the phase of the motion, but this phase-dependent modulation did not interact with head orientation. We conclude that the brain transforms vestibular signals to body-centered coordinates at the early stage of reach planning, when the decision of hand choice is computed. NEW & NOTEWORTHY The brain takes inertial acceleration into account in computing the anticipated biomechanical costs that guide hand selection during whole body motion. Whereas these costs are defined in a body-centered, muscle-based reference frame, the otoliths detect the inertial acceleration in head-centered coordinates. By systematically manipulating head position relative to the body, we show that the brain transforms otolith signals into body-centered coordinates at an early stage of reach planning, i.e., before the decision of hand choice is computed.
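One way to operationalize the balanced target angle described above is to fit a psychometric (logistic) function to the proportion of right-hand choices across target directions and read off its 50% point. The sketch below does this with scipy; the logistic form, the function names, and the data are illustrative assumptions rather than the authors' exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(theta, mu, sigma):
    """Probability of a right-hand choice as a function of target direction
    (deg); mu is the balanced target angle (the 50% point)."""
    return 1.0 / (1.0 + np.exp(-(theta - mu) / sigma))

# Illustrative data: target directions (deg, 0 = straight ahead) and the
# observed proportion of right-hand choices at each direction.
directions = np.array([-40, -30, -20, -10, 0, 10, 20, 30, 40], dtype=float)
p_right = np.array([0.02, 0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.97, 0.99])

params, _ = curve_fit(psychometric, directions, p_right, p0=[0.0, 10.0])
mu, sigma = params
print(f"balanced target angle ≈ {mu:.1f} deg")
```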


2018
Author(s):  
Florian Perdreau ◽  
James Cooke ◽  
Mathieu Koppen ◽  
W. Pieter Medendorp

Abstract: The brain can estimate the amplitude and direction of self-motion by integrating multiple sources of sensory information, and it can use this estimate to update object positions in order to provide us with a stable representation of the world. A strategy to improve the precision of the object position estimate would be to integrate this internal estimate and the sensory feedback about the object position based on their reliabilities. Integrating these cues, however, would only be optimal under the assumption that the object has not moved in the world during the intervening body displacement. Therefore, the brain would have to infer whether the internal estimate and the feedback relate to the same external position (stable object), and integrate and/or segregate these cues based on this inference – a process that can be modeled as Bayesian causal inference. To test this hypothesis, we designed a spatial updating task across passive whole body translation in complete darkness, in which participants (n = 11), seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, a second target (feedback) was briefly flashed around the estimated “updated” target location, and participants had to report the initial target location. We found that the participants’ responses were systematically biased toward the position of the second target for relatively small but not for large differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target position and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.

Author Summary: A change of an object’s position on our retina can be caused by a change of the object’s location in the world or by a movement of the eye and body. Here, we examine how the brain solves this problem for spatial updating by assessing the probability that the internally updated location during body motion and the observed retinal feedback after the motion stem from the same object location in the world. Guided by a Bayesian causal inference model, we demonstrate that participants’ errors in spatial updating depend nonlinearly on the spatial discrepancy between the internally updated estimate and the reafferent visual feedback about the object’s location in the world. We propose that the brain implicitly represents the probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.


2019
Vol 121 (1)
pp. 269-284
Author(s):  
Florian Perdreau ◽  
James R. H. Cooke ◽  
Mathieu Koppen ◽  
W. Pieter Medendorp

The brain uses self-motion information to internally update egocentric representations of the locations of remembered world-fixed visual objects. If a discrepancy is observed between this internal update and reafferent visual feedback, this could be either due to an inaccurate update or because the object has moved during the motion. To optimally infer the object’s location it is therefore critical for the brain to estimate the probabilities of these two causal structures and accordingly integrate and/or segregate the internal and sensory estimates. To test this hypothesis, we designed a spatial updating task involving passive whole body translation. Participants, seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, the reafferent visual feedback was provided by flashing a second target around the estimated “updated” target location, and participants had to report the initial target location. We found that the participants’ responses were systematically biased toward the position of the second target for relatively small but not for large differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target location and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the visual feedback come from a common cause and uses this probability to weigh the two sources of information in mediating spatial constancy across whole body motion. NEW & NOTEWORTHY When we move, egocentric representations of object locations require internal updating to keep them in register with their true world-fixed locations. How does this mechanism interact with reafferent visual input, given that objects typically do not disappear from view? Here we show that the brain implicitly represents the probability that both types of information derive from the same object and uses this probability to weigh their contribution for achieving spatial constancy across whole body motion.
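A compact sketch of the causal inference logic referred to in this and the preceding study: weigh an integrated (common-cause) estimate against the internally updated (separate-cause) estimate by the posterior probability of a common cause. Gaussian noise, a flat prior over feedback locations, and all parameter values are simplifying assumptions for illustration; this is not the authors' fitted model.

```python
import numpy as np

def causal_inference_estimate(x_upd, var_upd, x_fb, var_fb,
                              p_common=0.5, prior_range=100.0):
    """Model-averaged estimate of the target location from the internally
    updated position (x_upd) and the visual feedback position (x_fb)."""
    # Likelihood of the observed discrepancy if both signals share one cause
    var_sum = var_upd + var_fb
    like_common = np.exp(-0.5 * (x_upd - x_fb) ** 2 / var_sum) / np.sqrt(2 * np.pi * var_sum)
    # Likelihood if the feedback came from an unrelated location
    # (flat prior of width prior_range over possible feedback positions)
    like_indep = 1.0 / prior_range
    # Posterior probability of a common cause
    post_c = like_common * p_common / (like_common * p_common + like_indep * (1.0 - p_common))
    # Common cause: integrate by reliability; separate causes: keep the update
    x_int = (x_upd / var_upd + x_fb / var_fb) / (1.0 / var_upd + 1.0 / var_fb)
    return post_c * x_int + (1.0 - post_c) * x_upd

# Small discrepancies pull the estimate toward the feedback; large ones do not.
for dx in (2.0, 20.0):
    print(dx, causal_inference_estimate(0.0, 4.0, dx, 4.0))
```

The nonlinear dependence of the bias on the discrepancy is exactly the response pattern the study reports.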


2005
Vol 94 (4)
pp. 2331-2352
Author(s):  
O'Dhaniel A. Mullette-Gillman ◽  
Yale E. Cohen ◽  
Jennifer M. Groh

The integration of visual and auditory events is thought to require a joint representation of visual and auditory space in a common reference frame. We investigated the coding of visual and auditory space in the lateral and medial intraparietal areas (LIP, MIP) as a candidate for such a representation. We recorded the activity of 275 neurons in LIP and MIP of two monkeys while they performed saccades to a row of visual and auditory targets from three different eye positions. We found 45% of these neurons to be modulated by the locations of visual targets, 19% by auditory targets, and 9% by both visual and auditory targets. The reference frames of both visual and auditory receptive fields ranged along a continuum between eye- and head-centered coordinates: ∼10% of auditory and 33% of visual neurons had receptive fields that were more consistent with an eye- than a head-centered frame of reference, 23% and 18%, respectively, had receptive fields more consistent with a head- than an eye-centered frame, and a large fraction of both visual and auditory response patterns were inconsistent with either reference frame. The results were similar to the reference frame we have previously found for auditory stimuli in the inferior colliculus and core auditory cortex. The correspondence between the visual and auditory receptive fields of individual neurons was weak. Nevertheless, the visual and auditory responses were sufficiently well correlated that a simple one-layer network constructed to calculate target location from the activity of the neurons in our sample performed successfully for auditory targets even though the weights were fit based only on the visual responses. We interpret these results as suggesting that although the representations of space in areas LIP and MIP are not easily described within the conventional conceptual framework of reference frames, they nevertheless process visual and auditory spatial information in a similar fashion.
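The one-layer readout result can be illustrated with a toy decoder: fit linear weights from firing rates to target location using visual trials only, then apply the same weights to auditory trials. The synthetic tuning curves and numbers below are assumptions for illustration; they are not the recorded data or the authors' network.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_reps = 40, 20
targets = np.repeat(np.linspace(-24, 24, 9), n_reps)   # target azimuths (deg)

# Synthetic rates: each neuron has its own offset and slope; auditory tuning
# shares structure with visual tuning but is weaker and noisier.
offsets = rng.normal(10.0, 2.0, n_neurons)
slopes = rng.normal(0.0, 0.5, n_neurons)
rates_vis = offsets + targets[:, None] * slopes + rng.normal(0, 1, (targets.size, n_neurons))
rates_aud = offsets + targets[:, None] * (0.7 * slopes) + rng.normal(0, 2, (targets.size, n_neurons))

# Fit readout weights on visual trials only (least squares with an intercept)
X_vis = np.column_stack([rates_vis, np.ones(targets.size)])
w, *_ = np.linalg.lstsq(X_vis, targets, rcond=None)

# Apply the visually fitted weights to auditory trials
pred_aud = np.column_stack([rates_aud, np.ones(targets.size)]) @ w
print(np.corrcoef(pred_aud, targets)[0, 1])   # typically high despite visual-only fitting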


2015
Vol 113 (5)
pp. 1574-1584
Author(s):  
T. P. Gutteling ◽  
L. P. J. Selen ◽  
W. P. Medendorp

Despite the constantly changing retinal image due to eye, head, and body movements, we are able to maintain a stable representation of the visual environment. Various studies on retinal image shifts caused by saccades have suggested that occipital and parietal areas correct for these perturbations by a gaze-centered remapping of the neural image. However, such a uniform, rotational remapping mechanism cannot work during translations, when objects shift on the retina in a more complex, depth-dependent fashion due to motion parallax. Here we tested whether the brain's activity patterns show parallax-sensitive remapping of remembered visual space during whole-body motion. Under continuous recording of electroencephalography (EEG), we passively translated human subjects while they had to remember the location of a world-fixed visual target, briefly presented in front of or behind the eyes' fixation point prior to the motion. Using a psychometric approach we assessed the quality of the memory update, which had to be made based on vestibular feedback and other extraretinal motion cues. All subjects showed a variable amount of parallax-sensitive updating errors, i.e., the direction of the errors depended on the depth of the target relative to fixation. The EEG recordings show a neural correlate of this parallax-sensitive remapping in the alpha-band power at occipito-parietal electrodes. At parietal electrodes, the strength of these alpha-band modulations correlated significantly with updating performance. These results suggest that alpha-band oscillatory activity reflects the time-varying, parallax-sensitive updating of gaze-centered spatial information during whole-body motion.
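For readers unfamiliar with the measure, the sketch below computes alpha-band (8-12 Hz) power for a single synthetic EEG channel using Welch's method from scipy. The sampling rate, the signal, and the band limits are placeholder assumptions; the study's actual analysis was time-resolved and per electrode, which this does not reproduce.

```python
import numpy as np
from scipy.signal import welch

fs = 512                                    # sampling rate (Hz), placeholder
t = np.arange(0, 2.0, 1.0 / fs)             # a 2-s epoch
# Placeholder "occipito-parietal" trace: a 10 Hz alpha rhythm buried in noise
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # spectral estimate with 1-s windows
alpha_band = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[alpha_band].mean()        # mean power spectral density in 8-12 Hz
print(f"alpha-band power: {alpha_power:.2f} (a.u.)")
```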


2018
Vol 15 (3)
pp. 229-236
Author(s):  
Gennaro Ruggiero ◽  
Alessandro Iavarone ◽  
Tina Iachini

Objective: Deficits in egocentric (subject-to-object) and allocentric (object-to-object) spatial representations, with a predominantly allocentric impairment, characterize the first stages of Alzheimer's disease (AD). Methods: To identify early cognitive signs of conversion to AD, some studies have focused on amnestic mild cognitive impairment (aMCI), reporting alterations in both reference frames, especially the allocentric one. However, moving through spatial environments requires the cooperation of both reference frames: we constantly switch from allocentric to egocentric frames and vice versa. This raises the question of whether alterations of switching abilities might also represent an early cognitive marker of AD, potentially suitable for detecting the conversion from aMCI to dementia. Here, we compared AD and aMCI patients with normal controls (NC) on the Ego-Allo Switching spatial memory task. The task assessed the capacity to use switching (Ego-Allo, Allo-Ego) and non-switching (Ego-Ego, Allo-Allo) verbal judgments about relative distances between memorized stimuli. Results: The novel finding of this study is the clear impairment shown by aMCI and AD patients in switching from allocentric to egocentric reference frames. Interestingly, in aMCI, when the first reference frame was egocentric, the allocentric deficit appeared attenuated. Conclusion: This led us to conclude that allocentric deficits are not always clinically detectable in aMCI, since the impairment can be masked when the first reference frame is body-centred. In addition, AD and aMCI patients also showed allocentric deficits in the non-switching condition. These findings suggest that switching alterations may emerge from impairments in hippocampal and posteromedial areas and from concurrent dysregulation of the locus coeruleus-noradrenaline system or prefrontal cortex.


Author(s):  
Steven M. Weisberg ◽  
Anjan Chatterjee

Background: Reference frames ground spatial communication by mapping ambiguous language (for example, in navigation: “to the left”) to properties of the speaker (using a Relative reference frame: “to my left”) or the world (Absolute reference frame: “to the north”). People's preferences for reference frames vary depending on factors such as their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether the preference for reference frames is stable within people or varies with the specific spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can be naturally solved using multiple reference frames. That is, while spatial navigation directions can be specified using Absolute or Relative reference frames (“go north” vs. “go left”), other spatial domains predominantly use Relative reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection and the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment. We measured reaction time and accuracy while participants solved spatial problems in each domain using verbal prompts containing either Relative or Absolute reference frames. Details of the task in both domains were kept as similar as possible while remaining ecologically plausible so that reference frame preference could emerge. Results: We preregistered a prediction that participants would be faster using their preferred reference frame type and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain. Conclusion: This experiment shows that preferences for spatial reference frames are not stable across domains and that different reference frames may be differentially suited to specific domains. This finding has broad implications for communicating spatial information by offering an important consideration for how spatial reference frames are used in communication: task constraints may affect reference frame choice as much as individual factors or culture.


2017
Vol 118 (4)
pp. 2499-2506
Author(s):  
A. Pomante ◽  
L. P. J. Selen ◽  
W. P. Medendorp

The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework, the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical—as a proxy for the tilt percept—during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, and its orientation had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite this attenuation, our findings may point to a link between the noise components in the vestibular system and the characteristics of the dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion.
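To make the ambiguity concrete, the sketch below computes the tilt of the gravitoinertial vector over one cycle of the reported motion profile (0.33 Hz, 1.75 m/s² peak interaural acceleration). A purely otolith-driven percept would follow arctan(a/g); the paper's Bayesian model additionally weighs in canal cues and a prior, which this simplification omits.

```python
import numpy as np

g = 9.81          # gravitational acceleration (m/s^2)
f = 0.33          # motion frequency (Hz), as reported
a_peak = 1.75     # peak interaural acceleration (m/s^2), as reported

t = np.linspace(0.0, 1.0 / f, 200)          # one motion cycle
a = a_peak * np.sin(2 * np.pi * f * t)      # sinusoidal interaural acceleration

# Tilt of the gravitoinertial force vector relative to gravity: the angle a
# purely otolith-based estimate of "vertical" would adopt (somatogravic effect).
tilt_deg = np.degrees(np.arctan2(a, g))
print(f"peak predicted tilt: {tilt_deg.max():.1f} deg")   # about 10 deg
```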


2021
pp. 1-29
Author(s):  
Lisa Lorentz ◽  
Kaian Unwalla ◽  
David I. Shore

Successful interaction with our environment requires accurate tactile localization. Although we seem to localize tactile stimuli effortlessly, the processes underlying this ability are complex. This is evidenced by the crossed-hands deficit, in which tactile localization performance suffers when the hands are crossed. The deficit results from the conflict between an internal reference frame, based in somatotopic coordinates, and an external reference frame, based in external spatial coordinates. Previous evidence in favour of the integration model employed manipulations of the external reference frame (e.g., blindfolding participants), which reduced the deficit by reducing the conflict between the two reference frames. The present study extends this finding by asking blindfolded participants to visually imagine their crossed arms as uncrossed. This imagery manipulation further decreased the magnitude of the crossed-hands deficit by bringing the information in the two reference frames into alignment. The imagery manipulation affected males and females differently, consistent with the previously observed sex difference in this effect: females tend to show a larger crossed-hands deficit than males, and females were more strongly affected by the imagery manipulation. Results are discussed in terms of the integration model of the crossed-hands deficit.


2004 ◽  
Vol 91 (4) ◽  
pp. 1608-1619 ◽  
Author(s):  
Robert L. White ◽  
Lawrence H. Snyder

Neurons in many cortical areas involved in visuospatial processing represent remembered spatial information in retinotopic coordinates. During a gaze shift, the retinotopic representation of a target location that is fixed in the world (world-fixed reference frame) must be updated, whereas the representation of a target fixed relative to the center of gaze (gaze-fixed) must remain constant. To investigate how such computations might be performed, we trained a 3-layer recurrent neural network to store and update a spatial location based on a gaze perturbation signal, and to do so flexibly based on a contextual cue. The network produced an accurate readout of target position when cued to either reference frame, but was less precise when updating was performed. This output mimics the pattern of behavior seen in animals performing a similar task. We tested whether updating would preferentially use gaze position or gaze velocity signals, and found that the network strongly preferred velocity for updating world-fixed targets. Furthermore, we found that gaze position gain fields were not present when velocity signals were available for updating. These results have implications for how updating is performed in the brain.
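The computation the network was trained to approximate can be written out directly, as below: integrate gaze velocity into a displacement and subtract it from the stored retinotopic location only when the contextual cue signals a world-fixed target. This is a hand-coded sketch of the task's input-output mapping, not the authors' trained recurrent network.

```python
import numpy as np

def remembered_location(target0, gaze_velocity, dt, world_fixed):
    """Remembered retinotopic target position over time.

    target0       : initial retinotopic position (deg)
    gaze_velocity : gaze velocity samples (deg/s)
    dt            : sample interval (s)
    world_fixed   : contextual cue; True = world-fixed target, False = gaze-fixed
    """
    gaze_shift = np.cumsum(gaze_velocity) * dt           # integrate velocity
    if world_fixed:
        return target0 - gaze_shift                      # counter-shift the memory
    return np.full(gaze_shift.shape, float(target0))     # hold the memory constant

# A 10 deg/s gaze shift lasting 1 s displaces gaze by 10 deg.
vel = np.full(100, 10.0)
print(remembered_location(5.0, vel, 0.01, world_fixed=True)[-1])   # -5.0 deg
print(remembered_location(5.0, vel, 0.01, world_fixed=False)[-1])  # 5.0 deg
```

Using velocity rather than absolute gaze position for the update mirrors the network's preference reported above.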

