Abstract

The brain can estimate the amplitude and direction of self-motion by integrating multiple sources of sensory information, and use this estimate to update object positions in order to provide us with a stable representation of the world. A strategy to improve the precision of the object position estimate would be to integrate this internal estimate and the sensory feedback about the object position based on their reliabilities. Integrating these cues, however, would only be optimal under the assumption that the object has not moved in the world during the intervening body displacement. Therefore, the brain would have to infer whether the internal estimate and the feedback relate to the same external position (stable object), and integrate and/or segregate these cues based on this inference – a process that can be modeled as Bayesian causal inference. To test this hypothesis, we designed a spatial updating task across passive whole-body translation in complete darkness, in which participants (n=11), seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, a second target (feedback) was briefly flashed around the estimated “updated” target location, and participants had to report the initial target location. We found that participants’ responses were systematically biased toward the position of the second target for relatively small, but not for large, differences between the “updated” and the second target location. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target position and the visual feedback.
Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.

Author Summary

A change of an object’s position on our retina can be caused by a change of the object’s location in the world or by a movement of the eye and body. Here, we examine how the brain solves this problem for spatial updating by assessing the probability that the internally updated location during body motion and the observed retinal feedback after the motion stem from the same object location in the world. Guided by a Bayesian causal inference model, we demonstrate that participants’ errors in spatial updating depend nonlinearly on the spatial discrepancy between the internally updated estimate and the reafferent visual feedback about the object’s location in the world. We propose that the brain implicitly represents the probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.
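The Bayesian causal inference computation described above can be illustrated with a minimal model-averaging sketch: the model computes the posterior probability that the internally updated position and the visual feedback arise from a common cause, and uses that probability to weigh a reliability-weighted integrated estimate against a segregated one. All parameter values below (cue noise levels, prior width, prior probability of a common cause) and the function name are illustrative assumptions, not values fitted in the study.

```python
import math

def bci_estimate(x_int, x_fb, sigma_int=2.0, sigma_fb=1.0,
                 sigma_prior=10.0, mu_prior=0.0, p_common=0.5):
    """Sketch of Bayesian causal inference with model averaging.

    x_int: internally updated target position after body translation
    x_fb:  position of the flashed feedback target
    Returns (posterior probability of a common cause,
             model-averaged estimate of the target position).
    All parameters are illustrative assumptions, not fitted values.
    """
    v_int, v_fb, v_p = sigma_int**2, sigma_fb**2, sigma_prior**2

    # Likelihood of both cues under a single common cause (C = 1)
    denom1 = v_int * v_fb + v_int * v_p + v_fb * v_p
    like_c1 = math.exp(-0.5 * ((x_int - x_fb) ** 2 * v_p
                               + (x_int - mu_prior) ** 2 * v_fb
                               + (x_fb - mu_prior) ** 2 * v_int) / denom1) \
        / (2 * math.pi * math.sqrt(denom1))

    # Likelihood under two independent causes (C = 2)
    like_c2 = math.exp(-0.5 * ((x_int - mu_prior) ** 2 / (v_int + v_p)
                               + (x_fb - mu_prior) ** 2 / (v_fb + v_p))) \
        / (2 * math.pi * math.sqrt((v_int + v_p) * (v_fb + v_p)))

    # Posterior probability that the two cues share a common cause
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Reliability-weighted estimates under each causal structure
    s_c1 = ((x_int / v_int + x_fb / v_fb + mu_prior / v_p)
            / (1 / v_int + 1 / v_fb + 1 / v_p))      # integrate both cues
    s_c2 = ((x_int / v_int + mu_prior / v_p)
            / (1 / v_int + 1 / v_p))                 # segregate: ignore feedback

    # Model averaging: weigh the two estimates by the causal posterior
    return post_c1, post_c1 * s_c1 + (1 - post_c1) * s_c2
```

With these illustrative parameters, a small discrepancy between the internally updated position and the feedback yields a high common-cause posterior and an estimate biased toward the feedback, whereas a large discrepancy drives the posterior toward zero, so the cues are effectively segregated, reproducing the nonlinear bias pattern described above.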