Temporal Information Can Influence Spatial Localization

2009 ◽  
Vol 102 (1) ◽  
pp. 490-495 ◽  
Author(s):  
Femke Maij ◽  
Eli Brenner ◽  
Jeroen B. J. Smeets

To localize objects relative to ourselves, we need to combine various sensory and motor signals. When these signals change abruptly, as information about eye orientation does during saccades, small differences in latency between the signals could introduce localization errors. We examine whether independent temporal information can influence such errors. We asked participants to follow a randomly jumping dot with their eyes and to point at flashes that occurred near the time they made saccades. Such flashes are mislocalized. We presented a tone at different times relative to the flash. We found that the flash was mislocalized as if it had occurred closer in time to the tone. This demonstrates that temporal information is taken into consideration when combining sensory information streams for localization.
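
A minimal sketch (all parameters hypothetical, not the authors' model) of the mechanism the abstract describes: if a flash is localized by reading out an internal eye-position signal at the *perceived* rather than actual flash time, then a tone that attracts the perceived flash time toward itself shifts the readout and hence the localization error.

```python
# A minimal sketch of tone-induced temporal attraction producing a
# perisaccadic localization error. The ramp profile, attraction weight,
# and saccade parameters are all hypothetical.

def eye_position(t, saccade_onset=0.0, duration=0.05, amplitude=10.0):
    """Idealized internal eye-position signal (deg) around a saccade at time t (s)."""
    if t < saccade_onset:
        return 0.0
    if t > saccade_onset + duration:
        return amplitude
    return amplitude * (t - saccade_onset) / duration  # linear ramp during saccade

def perceived_flash_time(t_flash, t_tone, attraction=0.3):
    """Perceived flash time pulled toward the tone by a fixed weight."""
    return (1 - attraction) * t_flash + attraction * t_tone

def localization_error(t_flash, t_tone):
    """Error = eye position read out at the perceived rather than actual flash time."""
    return eye_position(perceived_flash_time(t_flash, t_tone)) - eye_position(t_flash)

# A flash just before saccade onset, with a tone presented slightly later,
# is mislocalized in the saccade direction:
print(localization_error(t_flash=-0.01, t_tone=0.04))
```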

2005 ◽  
Vol 17 (4) ◽  
pp. 668-686 ◽  
Author(s):  
Joost C. Dessing ◽  
C. (Lieke) E. Peper ◽  
Daniel Bullock ◽  
Peter J. Beek

The cerebral cortex contains circuitry for continuously computing properties of the environment and one's body, as well as relations among those properties. The success of complex perceptuomotor performances requires integrated, simultaneous use of such relational information. Ball catching is a good example as it involves reaching and grasping of visually pursued objects that move relative to the catcher. Although integrated neural control of catching has received sparse attention in the neuroscience literature, behavioral observations have led to the identification of control principles that may be embodied in the involved neural circuits. Here, we report a catching experiment that refines those principles via a novel manipulation. Visual field motion was used to perturb velocity information about balls traveling on various trajectories relative to a seated catcher, with various initial hand positions. The experiment produced evidence for a continuous, prospective catching strategy, in which hand movements are planned based on gaze-centered ball velocity and ball position information. Such a strategy was implemented in a new neural model, which suggests how position, velocity, and temporal information streams combine to shape catching movements. The model accurately reproduces the main and interaction effects found in the behavioral experiment and provides an interpretation of recently observed target motion-related activity in the motor cortex during interceptive reaching by monkeys. It functionally interprets a broad range of neurobiological and behavioral data, and thus contributes to a unified theory of the neural control of reaching to stationary and moving targets.
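
The "continuous, prospective catching strategy" can be illustrated with a minimal sketch (not the published neural model; the gain k and look-ahead tau are hypothetical): the commanded hand velocity is driven by gaze-centered ball position extrapolated forward by the ball's velocity.

```python
import numpy as np

# A minimal sketch of prospective control: steer the hand toward where the
# ball will be tau seconds from now, based on gaze-centered position and
# velocity. Parameters k and tau are hypothetical.

def prospective_hand_velocity(hand_pos, ball_pos, ball_vel, k=5.0, tau=0.3):
    """Commanded hand velocity toward the linearly extrapolated ball position."""
    predicted_ball = ball_pos + tau * ball_vel  # first-order extrapolation
    return k * (predicted_ball - hand_pos)      # proportional pursuit

# One simulated control step: ball moving down and leftward in gaze coordinates.
hand = np.array([0.0, 0.0])
ball = np.array([0.4, 0.6])
vel = np.array([-0.5, -1.0])
dt = 0.01
hand = hand + dt * prospective_hand_velocity(hand, ball, vel)
print(hand)
```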


2000 ◽  
Vol 84 (3) ◽  
pp. 1204-1223 ◽  
Author(s):  
Todd W. Troyer ◽  
Allison J. Doupe

Birdsong learning provides an ideal model system for studying temporally complex motor behavior. Guided by the well-characterized functional anatomy of the song system, we have constructed a computational model of the sensorimotor phase of song learning. Our model uses simple Hebbian and reinforcement learning rules and demonstrates the plausibility of a detailed set of hypotheses concerning sensory-motor interactions during song learning. The model focuses on the motor nuclei HVc and robust nucleus of the archistriatum (RA) of zebra finches and incorporates the long-standing hypothesis that a series of song nuclei, the Anterior Forebrain Pathway (AFP), plays an important role in comparing the bird's own vocalizations with a previously memorized song, or “template.” This “AFP comparison hypothesis” is challenged by the significant delay that would be experienced by presumptive auditory feedback signals processed in the AFP. We propose that the AFP does not directly evaluate auditory feedback, but instead, receives an internally generated prediction of the feedback signal corresponding to each vocal gesture, or song “syllable.” This prediction, or “efference copy,” is learned in HVc by associating premotor activity in RA-projecting HVc neurons with the resulting auditory feedback registered within AFP-projecting HVc neurons. We also demonstrate how negative feedback “adaptation” can be used to separate sensory and motor signals within HVc. The model predicts that motor signals recorded in the AFP during singing carry sensory information and that the primary role for auditory feedback during song learning is to maintain an accurate efference copy. The simplicity of the model suggests that associational efference copy learning may be a common strategy for overcoming feedback delay during sensorimotor learning.
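
A minimal sketch of the associational efference-copy idea, assuming a delta-rule variant of the Hebbian update and hypothetical dimensions: a weight matrix learns to predict, from each syllable's premotor activity, the auditory feedback that the vocalization will evoke, so that a zero-delay prediction can stand in for the delayed feedback itself.

```python
import numpy as np

# A minimal sketch (hypothetical sizes and learning rate) of efference-copy
# learning: W maps premotor activity to predicted auditory feedback, trained
# by associating each syllable's motor pattern with the feedback it evokes.

rng = np.random.default_rng(0)
n_motor, n_auditory, n_syllables = 20, 15, 5
motor = rng.random((n_syllables, n_motor))    # RA-projecting HVc activity per syllable
mapping = rng.random((n_motor, n_auditory))   # unknown vocal-to-auditory transform
auditory = motor @ mapping                    # feedback each syllable actually evokes

W = np.zeros((n_motor, n_auditory))
eta = 0.01
for _ in range(2000):
    s = rng.integers(n_syllables)
    pred = motor[s] @ W                                 # internal feedback prediction
    W += eta * np.outer(motor[s], auditory[s] - pred)   # delta-rule association

print(np.abs(motor @ W - auditory).max())  # prediction error shrinks toward zero
```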


2021 ◽  
Author(s):  
Apoorva Karsolia ◽  
Vallabh E. Das ◽  
Scott B. Stevenson

Knowledge of eye position in the brain is critical for localization of objects in space. To investigate the accuracy and precision of eye position feedback in an unreferenced environment, subjects with normal ocular alignment attempted to localize briefly presented targets during monocular and dichoptic viewing. In the task, subjects used a computer mouse to position a response disk at the remembered location of the target. Under dichoptic viewing (with red/green glasses: red over the right eye, green over the left), the target and response disks were presented to the same or alternate eyes, yielding four conditions: green target with green response disk (LL), green-red (LR), red-green (RL), and red-red (RR). The time interval between the target and response disks was varied, and localization errors were computed as the difference between the estimated and actual positions of the target disk. Overall, the precision of spatial localization (variance across trials) became progressively worse with time. Under dichoptic viewing, localization errors were significantly greater for alternate-eye trials than for same-eye trials and correlated with the average phoria of each subject. We suggest that during these tasks subjects are unable to compensate for their phoria, implying that oculomotor proprioception may not provide the required feedback of eye position.
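
A minimal sketch (synthetic numbers; noise and drift parameters hypothetical) of the two reported effects: trial-to-trial variance grows with the target-to-response interval, and alternate-eye trials acquire a bias on the order of the subject's uncompensated phoria.

```python
import numpy as np

# A minimal sketch of the error structure described in the abstract.
# Noise growth rate and phoria magnitude are hypothetical illustrations.

rng = np.random.default_rng(1)

def localize(n_trials, delay_s, phoria_deg, alternate_eye, drift=2.0):
    """Localization errors (deg): estimated minus actual target position."""
    noise_sd = 0.5 + drift * delay_s             # memory noise grows with delay
    bias = phoria_deg if alternate_eye else 0.0  # phoria surfaces across eyes
    return bias + rng.normal(0.0, noise_sd, n_trials)

same = localize(100, delay_s=0.5, phoria_deg=1.5, alternate_eye=False)
alt = localize(100, delay_s=0.5, phoria_deg=1.5, alternate_eye=True)
print(same.mean(), alt.mean())  # alternate-eye errors shifted by ~phoria
print(localize(100, 0.2, 0, False).var(), localize(100, 2.0, 0, False).var())
```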


2017 ◽  
Vol 118 (1) ◽  
pp. 187-193 ◽  
Author(s):  
Femke Maij ◽  
Alan M. Wing ◽  
W. Pieter Medendorp

People make systematic errors when localizing in external space a brief tactile stimulus presented on the index finger while moving the arm. Although these errors likely arise in the spatiotemporal integration of the tactile input and information about arm position, the underlying arm position information used in this process is not known. In this study, we tested the contributions of afferent proprioceptive feedback and predictive arm position signals by comparing localization errors during passive vs. active arm movements. In the active trials, participants were instructed to localize in external space a tactile stimulus that was presented to the index finger near the time of a self-generated arm movement. In the passive trials, each of the active trials was passively replayed in randomized order, using a robotic device. Our results provide evidence that the localization error patterns of the passive trials were similar to those of the active trials and, moreover, did not lag but rather led them, which suggests that proprioceptive feedback makes an important contribution to tactile localization. To further test which kinematic property of this afferent feedback signal drives the underlying computations, we examined the localization errors with movements that had differently skewed velocity profiles but the same overall displacement. This revealed a difference in the localization patterns, which we explain with a probabilistic model in which temporal uncertainty about the stimulus is converted into a spatial likelihood that depends on the actual velocity of the arm rather than on an efferent, preprogrammed movement. NEW & NOTEWORTHY We show that proprioceptive feedback of arm motion rather than efferent motor signals contributes to tactile localization during an arm movement. The data further show that localization errors depend on arm velocity, not displacement per se, suggesting that instantaneous velocity feedback plays a role in the underlying computations. Model simulation using Bayesian inference suggests that these errors depend not only on spatial but also on temporal uncertainties of sensory and motor signals.
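
A minimal sketch of the probabilistic idea described above (temporal width and trajectory are hypothetical): temporal uncertainty about when the tap occurred becomes a spatial likelihood by reading the actual arm trajectory at every candidate time, weighted by a Gaussian around the registered stimulus time. Skewing the velocity profile then changes the predicted error even when displacement is held constant.

```python
import numpy as np

# A minimal sketch: convert temporal uncertainty about the stimulus into a
# spatial estimate via the actual arm trajectory x(t). The temporal SD and
# the skewed trajectory are hypothetical illustrations.

def predicted_location(t_grid, x_of_t, t_stim, sigma_t=0.05):
    """Expected perceived position: arm path averaged under temporal uncertainty."""
    w = np.exp(-0.5 * ((t_grid - t_stim) / sigma_t) ** 2)
    w /= w.sum()
    return np.sum(w * x_of_t)

t = np.linspace(0.0, 0.5, 501)
skewed = 0.3 * (1 - np.cos(np.pi * (t / 0.5) ** 1.5))  # skewed velocity profile

print(predicted_location(t, skewed, t_stim=0.25))  # biased perceived position
print(np.interp(0.25, t, skewed))                  # true position at stimulus time
```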


2019 ◽  
Author(s):  
Raúl Hernández-Pérez ◽  
Eduardo Rojas-Hortelano ◽  
Victor de Lafuente

Our choices are often informed by temporally integrating streams of sensory information. This has been well demonstrated in the visual and auditory domains, but the integration of tactile information over time has been less studied. We designed an active touch task in which subjects explored a spheroid-shaped object to determine its inclination with respect to the horizontal plane (inclined to the left or to the right). In agreement with previous findings, our results show that more errors, and longer decision times, accompany difficult decisions (small inclination angles). To gain insight into the decision-making process, we used a task in which the time available for tactile exploration was varied by the experimenter on a trial-by-trial basis. The behavioral results were fit with a model of bounded accumulation, and also with an independent-sampling model which assumes no sensory accumulation. The results of the model fits favor an accumulation-to-bound mechanism and suggest that participants integrate the first 600 ms of 1800-ms-long stimuli. This means that the somatosensory system benefits from longer streams of information, although it does not make use of all available evidence.

Highlights:
- The somatosensory system integrates information streams through time.
- Somatosensory discrimination thresholds decrease with longer stimuli.
- A bounded accumulation model is favored over independent sampling.
- Humans accumulate up to 600 ms of 1800-ms-long stimuli.
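
A minimal sketch (all parameters hypothetical) contrasting the two model classes fit in the study: a bounded accumulator that integrates momentary tactile evidence until it hits a bound or a ~600 ms cutoff, versus independent sampling, where the choice rests on a single momentary sample and therefore cannot improve with stimulus duration.

```python
import numpy as np

# A minimal sketch of accumulation-to-bound vs. independent sampling.
# Drift, noise, bound, and the 600 ms cutoff are hypothetical illustrations.

rng = np.random.default_rng(2)

def accumulate_to_bound(drift, dt=0.01, bound=1.0, max_t=0.6, noise=0.3):
    """Integrate noisy evidence; the sign of the decision variable is the choice."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return np.sign(x) if x != 0 else rng.choice([-1.0, 1.0])

def independent_sample(drift, noise=0.3):
    """Choice from one sample: accuracy cannot improve with longer stimuli."""
    return np.sign(drift + noise * rng.normal())

trials = np.array([accumulate_to_bound(0.5) for _ in range(1000)])
print(np.mean(trials > 0))  # accuracy for a rightward inclination
```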


2020 ◽  
Author(s):  
Antonio Cataldo ◽  
Lucile Dupin ◽  
Hiroaki Gomi ◽  
Patrick Haggard

Perception of space has puzzled scientists since antiquity and is among the foundational questions of scientific psychology. Classical "local sign" theories assert that perception of spatial extent ultimately derives from efferent signals specifying the intensity of motor commands. Everyday cases of self-touch, such as stroking the left forearm with the right index fingertip, provide an important platform for studying spatial perception because of the tight correlation between motor and tactile extents. Nevertheless, if the motor and sensory information in self-touch were artificially decoupled, these classical theories would clearly predict that motor signals, especially if self-generated rather than passive, should influence spatial perceptual judgements, but not vice versa. We tested this hypothesis by quantifying the contribution of tactile, kinaesthetic, and motor information to judgements of spatial extent. In a self-touch paradigm involving two coupled robots in a master-slave configuration, voluntary movements of the right hand produced simultaneous tactile stroking on the left forearm. Crucially, the coupling between the robots was manipulated so that the tactile stimulation could be shorter, equal, or longer in extent than the movement that caused it. Participants judged either the extent of the movement or the extent of the tactile stroke. By controlling sensorimotor gains in this way, we quantified how motor signals influence tactile spatial perception and vice versa. Perception of tactile extent was strongly biased by the amplitude of the movement performed. Importantly, touch also affected the perceived extent of movement. Finally, the effect of movement on touch was significantly stronger when movements were actively generated than when the participant's right hand was passively moved by the experimenter. Overall, these results suggest that motor signals indeed dominate the construction of spatial percepts, at least when the normal tight correlation between motor and sensory signals is broken. Importantly, however, this dominance is not total, as classical theory might suggest.
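
A minimal sketch (weights hypothetical, not the authors' analysis) of how the gain manipulation and the reported cross-biases could be captured by a weighted-average model, with a heavier motor weight when the movement is actively generated.

```python
# A minimal sketch: each judgement is a weighted mix of the motor and tactile
# extents. The weights and the active/passive difference are hypothetical.

def judged_extent(movement_cm, gain, judge="touch", active=True):
    """Judged extent (cm) under a motor/tactile weighted-average model."""
    touch_cm = gain * movement_cm            # robot scales stroke: gain <, =, or > 1
    w_motor = 0.7 if active else 0.5         # motor signal weighted more when active
    if judge == "touch":                     # touch judgement pulled toward movement
        return (1 - w_motor) * touch_cm + w_motor * movement_cm
    return w_motor * movement_cm + (1 - w_motor) * touch_cm  # and vice versa

print(judged_extent(10.0, gain=0.5, judge="touch", active=True))   # strongly biased
print(judged_extent(10.0, gain=0.5, judge="touch", active=False))  # less biased
```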


2020 ◽  
Vol 44 (3) ◽  
pp. 419-434
Author(s):  
RE Burnham

Understanding the biogeography of a species begins by mapping its presence over time and space. Home ranges, breeding and feeding areas, migration paths, and the patterns of movement between them are also inherent to a species' ecology. However, this is an overly simplified view of life histories: it ignores nuanced and complex exchanges and responses, both to the environment and between conspecifics. Having previously advocated a more species-centric approach in a discussion of 'whale geography', I look to better understand the driving factors of migration and the information streams guiding the movement, which are key to the biogeography of large whale species. First, I consider the processes underlying the navigational capacities that allow species to complete migrations, and how, and over what scales, sensory information contributes to cognitive maps. I specifically draw on examples of large-scale, en masse migrators and then apply this to whales. I focus on the acoustic sense as the principal way whales gain and exchange information, drawing on a case study of grey whale (Eschrichtius robustus) calling behaviour to illustrate my arguments. Their consistent use of far-propagating calls appears to be tied to travel behaviours and probably aids navigation and social cohesion. The range over which calls propagate to conspecifics, or are perhaps echoed back to the individual, underlies the distance over which cognitive maps are both formed and employed. I believe understanding these processes edges us closer to understanding species biogeography.

