Saccadic suppression as a perceptual consequence of efficient sensorimotor estimation

eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Frédéric Crevecoeur ◽  
Konrad P Kording

Humans perform saccadic eye movements two to three times per second. When doing so, the nervous system strongly suppresses sensory feedback for extended periods of time in comparison to movement time. Why does the brain discard so much visual information? Here we suggest that perceptual suppression may arise from efficient sensorimotor computations, assuming that perception and control are fundamentally linked. More precisely, we show theoretically that a Bayesian estimator should reduce the weight of sensory information around the time of saccades, as a result of signal-dependent noise and of sensorimotor delays. Such reduction parallels the behavioral suppression occurring prior to and during saccades, and the reduction in neural responses to visual stimuli observed across the visual hierarchy. We suggest that saccadic suppression originates from efficient sensorimotor processing, indicating that the brain shares neural resources for perception and control.
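The estimation argument can be made concrete with a minimal numerical sketch (not the authors' full optimal-control model): a scalar estimator weights delayed visual feedback against an internal prediction, and the signal-dependent noise of the motor commands still "in the pipeline" inflates the variance of the extrapolated measurement. All parameter values below are illustrative.

```python
import numpy as np

# Minimal sketch: the weight an optimal estimator gives to delayed
# visual feedback when motor noise is signal-dependent.

n_steps = 100           # simulation steps
delay = 8               # sensorimotor delay, in steps
alpha = 0.5             # signal-dependent noise scale
R = 0.2                 # baseline visual measurement variance
P = 1.0                 # variance of the internal prediction

u = np.zeros(n_steps)
u[40:46] = 10.0         # large motor commands: a "saccade"

gain = np.zeros(n_steps)
for t in range(n_steps):
    # Variance added when extrapolating a measurement of the state at
    # time t - delay up to time t: the signal-dependent noise of every
    # command still in the sensorimotor pipeline.
    window = u[max(0, t - delay):t]
    V = np.sum((alpha * window) ** 2)
    # Optimal weight on the (extrapolated) visual measurement.
    gain[t] = P / (P + R + V)

# The gain collapses whenever the delay window overlaps the saccade.
# Because a stimulus flashed at time t is processed ~delay steps later,
# stimuli presented shortly BEFORE saccade onset are also down-weighted,
# mirroring the pre-saccadic perceptual suppression described above.
print("baseline gain:", round(gain[10], 3))
print("gain during saccade:", round(gain[44], 3))
```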


Author(s):  
Farran Briggs

Many mammals, including humans, rely primarily on vision to sense the environment. While a large proportion of the brain is devoted to vision in highly visual animals, there are not enough neurons in the visual system to support a neuron-per-object look-up table. Instead, visual animals evolved ways to rapidly and dynamically encode an enormous diversity of visual information using minimal numbers of neurons (merely hundreds of millions of neurons and billions of connections!). In the mammalian visual system, a visual image is essentially broken down into simple elements that are reconstructed through a series of processing stages, most of which occur beneath consciousness. Importantly, visual information processing is not simply a serial progression along the hierarchy of visual brain structures (e.g., retina to visual thalamus to primary visual cortex to secondary visual cortex, etc.). Instead, connections within and between visual brain structures exist in all possible directions: feedforward, feedback, and lateral. Additionally, many mammalian visual systems are organized into parallel channels, presumably to enable efficient processing of information about different and important features in the visual environment (e.g., color, motion). The overall operations of the mammalian visual system are to: (1) combine unique groups of feature detectors in order to generate object representations and (2) integrate visual sensory information with cognitive and contextual information from the rest of the brain. Together, these operations enable individuals to perceive, plan, and act within their environment.
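The idea of parallel feature channels can be illustrated with a small sketch: decomposing an image with a bank of orientation-tuned Gabor filters, a standard simplified stand-in for cortical simple cells. The filter parameters and the toy image are arbitrary choices for the demo.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, size=15, sigma=3.0, wavelength=6.0):
    """An oriented Gabor filter: a sinusoid windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A toy image: a single vertical bar.
image = np.zeros((64, 64))
image[:, 30:34] = 1.0

# Four parallel orientation channels; the channel aligned with the
# bar's luminance gradient dominates the response energy.
for theta in np.deg2rad([0, 45, 90, 135]):
    response = convolve2d(image, gabor_kernel(theta), mode="same")
    print(f"{np.rad2deg(theta):5.0f} deg channel energy: "
          f"{np.sum(response**2):.1f}")
```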


Author(s):  
Min Guo ◽  
Yinghua Yu ◽  
Jiajia Yang ◽  
Jinglong Wu

To perceive our world, we make full use of multiple sources of sensory information derived from different modalities, including the five basic sensory systems: visual, auditory, tactile, olfactory, and gustatory. In the real world, we normally acquire information from different sensory receptors simultaneously. Therefore, multisensory integration in the brain plays an important role in performance and perception. This review focuses on crossmodal interactions between vision and touch. Many previous studies have indicated that visual information affects tactile perception and, in turn, that tactile stimulation activates MT, the main visual motion-processing area. However, few studies have explored how crossmodal visuo-tactile information is processed. Here, the authors highlight the mechanisms of crossmodal processing in the brain. They show that visuo-tactile integration proceeds in two stages: combination and integration.
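A minimal sketch of the standard baseline model for such studies, reliability-weighted (maximum-likelihood) cue combination, assuming Gaussian visual and tactile estimates with illustrative numbers:

```python
import numpy as np

def integrate(mu_v, var_v, mu_t, var_t):
    """Fuse a visual and a tactile estimate of the same quantity.

    Each cue is weighted by its reliability (inverse variance); the
    fused variance is smaller than either cue's alone.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_t)
    mu = w_v * mu_v + (1 - w_v) * mu_t
    var = 1 / (1 / var_v + 1 / var_t)
    return mu, var

# Vision is typically the more reliable spatial cue, so the fused
# estimate lands closer to the visual one.
mu, var = integrate(mu_v=10.0, var_v=1.0, mu_t=14.0, var_t=4.0)
print(f"fused estimate: {mu:.2f}, fused variance: {var:.2f}")
```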


2006 ◽  
Vol 95 (6) ◽  
pp. 3502-3511 ◽  
Author(s):  
C. Kip Rodgers ◽  
Douglas P. Munoz ◽  
Stephen H. Scott ◽  
Martin Paré

The intermediate layers of the superior colliculus (SC) contain neurons that clearly play a major role in regulating the production of saccadic eye movements: a burst of activity from saccade neurons (SNs) is thought to provide a drive signal to set the eyes in motion, whereas the tonic activity of fixation neurons (FNs) is thought to suppress saccades during fixation. The exact contribution of these neurons to saccade control is, however, unclear because the nature of the signals sent by the SC to the brain stem saccade generation circuit has not been studied in detail. Here we tested the hypothesis that the SC output signal is sufficient to control saccades by examining whether antidromically identified tectoreticular neurons (TRNs: 33 SNs and 13 FNs) determine the end of saccades. First, TRNs had discharge properties similar to those of nonidentified SC neurons, and a proportion of output SNs had visually evoked responses, indicating that the saccade generator must receive and process visual information. Second, only a minority of TRNs possessed the temporal patterns of activity sufficient to terminate saccades: Output SNs did not cease discharging at the time of saccade end, possibly continuing to drive the brain stem during postsaccadic fixations, and output FNs did not resume their activity before saccade end. These results argue against a role for the SC in regulating the timing of saccade termination by a temporal code and suggest that other saccade centers act to thwart the extraneous SC drive signal, unless it controls saccade termination by a spatial code.


2020 ◽  
Vol 117 (13) ◽  
pp. 7510-7515 ◽  
Author(s):  
Tessel Blom ◽  
Daniel Feuerriegel ◽  
Philippa Johnson ◽  
Stefan Bode ◽  
Hinze Hogendoorn

The transmission of sensory information through the visual system takes time. As a result of these delays, the visual information available to the brain always lags behind the timing of events in the present moment. Compensating for these delays is crucial for functioning within dynamic environments, since interacting with a moving object (e.g., catching a ball) requires real-time localization of the object. One way the brain might achieve this is via prediction of anticipated events. Using time-resolved decoding of electroencephalographic (EEG) data, we demonstrate that the visual system represents the anticipated future position of a moving object, showing that predictive mechanisms activate the same neural representations as afferent sensory input. Importantly, this activation is evident before sensory input corresponding to the stimulus position is able to arrive. Finally, we demonstrate that, when predicted events do not eventuate, sensory information arrives too late to prevent the visual system from representing what was expected but never presented. Taken together, we demonstrate how the visual system can implement predictive mechanisms to preactivate sensory representations, and argue that this might allow it to compensate for its own temporal constraints, enabling us to interact with dynamic visual environments in real time.
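A sketch of the time-resolved decoding logic on synthetic data (the study uses real EEG and stimulus-position labels): fit a separate cross-validated classifier at each time point and look for when class information becomes decodable. Array shapes, the classifier choice, and the injected signal are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)   # two stimulus positions

# Inject a class-dependent signal from time point 20 onward.
X[y == 1, :8, 20:] += 0.5

accuracy = np.zeros(n_times)
for t in range(n_times):
    # One classifier per time point: the essence of time-resolved decoding.
    clf = make_pipeline(StandardScaler(), LogisticRegression())
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# Accuracy hovers at chance (~0.5) before the signal, then rises; in
# the actual study, above-chance decoding of a moving object's future
# position before its sensory input arrives is the predictive signature.
print("pre-signal accuracy:", accuracy[:20].mean().round(2))
print("post-signal accuracy:", accuracy[20:].mean().round(2))
```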


2020 ◽  
Vol 7 (8) ◽  
192056
Author(s):  
Nienke B. Debats ◽  
Herbert Heuer

Successful computer use requires the operator to link the movement of the cursor to that of his or her hand. Previous studies suggest that the brain establishes this perceptual link through multisensory integration, whereby the causality evidence that drives the integration is provided by the correlated hand and cursor movement trajectories. Here, we explored the temporal window during which this causality evidence is effective. We used a basic cursor-control task, in which participants performed out-and-back reaching movements with their hand on a digitizer tablet. A corresponding cursor movement could be shown on a monitor, yet slightly rotated by an angle that varied from trial to trial. Upon completion of the backward movement, participants judged the endpoint of the outward hand or cursor movement. The mutually biased judgements that typically result reflect the integration of the proprioceptive information on hand endpoint with the visual information on cursor endpoint. We here manipulated the time period during which the cursor was visible, thereby selectively providing causality evidence either before or after sensory information regarding the to-be-judged movement endpoint was available. Specifically, the cursor was visible either during the outward or backward hand movement (conditions Out and Back, respectively). Our data revealed reduced integration in the condition Back compared with the condition Out, suggesting that causality evidence available before the to-be-judged movement endpoint is more powerful than later evidence in determining how strongly the brain integrates the endpoint information. This finding further suggests that sensory integration is not delayed until a judgement is requested.
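The strength of integration in this paradigm is typically read off from the mutual judgment biases; a sketch of that arithmetic, with invented numbers, assuming each bias is expressed as a fraction of the imposed cursor rotation:

```python
# Illustrative bias analysis for the cursor-control paradigm.
rotation = 15.0            # deg: cursor rotated relative to the hand
judged_hand_shift = 6.0    # deg: hand judgments pulled toward the cursor
judged_cursor_shift = -3.0 # deg: cursor judgments pulled toward the hand

# Each weight is the bias as a proportion of the discrepancy.
w_cursor_on_hand = judged_hand_shift / rotation        # ~0.40
w_hand_on_cursor = -judged_cursor_shift / rotation     # ~0.20

# Under full integration the two weights sum to 1 (a single fused
# estimate); smaller sums indicate partial integration, e.g. weaker
# coupling in the Back condition than in the Out condition.
print(f"hand judgment weight on cursor: {w_cursor_on_hand:.2f}")
print(f"cursor judgment weight on hand: {w_hand_on_cursor:.2f}")
print(f"total integration index: {w_cursor_on_hand + w_hand_on_cursor:.2f}")
```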


2018 ◽  
Author(s):  
Noam Gordon ◽  
Naotsugu Tsuchiya ◽  
Roger Koenig-Robert ◽  
Jakob Hohwy

Abstract Perception results from the integration of incoming sensory information with pre-existing information available in the brain. In this EEG (electroencephalography) study we utilised the Hierarchical Frequency Tagging method to examine how such integration is modulated by expectation and attention. Using intermodulation (IM) components as a measure of non-linear signal integration, we show in three different experiments that both expectation and attention enhance integration between top-down and bottom-up signals. Based on multispectral phase coherence, we present two direct physiological measures to demonstrate the distinct yet related mechanisms of expectation and attention. Specifically, our results link expectation to the modulation of prediction signals and the integration of top-down and bottom-up information at lower levels of the visual hierarchy. Meanwhile, they link attention to the propagation of ascending signals and the integration of information at higher levels of the visual hierarchy. These results are consistent with the predictive coding account of perception.
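Why intermodulation components index non-linear integration can be shown in a few lines: two tagged sinusoids produce power at f1 ± f2 only if they interact non-linearly (here via a multiplicative term). The frequencies below are illustrative, not those used in the study.

```python
import numpy as np

fs, dur = 256, 10.0                       # sample rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f1, f2 = 8.0, 13.0                        # tagging frequencies
s1 = np.sin(2 * np.pi * f1 * t)
s2 = np.sin(2 * np.pi * f2 * t)

linear = s1 + s2                          # linear superposition
nonlinear = s1 + s2 + 0.5 * s1 * s2       # adds a multiplicative term

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, sig in [("linear", linear), ("non-linear", nonlinear)]:
    power = np.abs(np.fft.rfft(sig)) ** 2
    # Power at the IM frequency f1 + f2: near zero for the linear mix,
    # clearly non-zero once the signals interact non-linearly.
    im = power[np.argmin(np.abs(freqs - (f1 + f2)))]
    print(f"{name:>10}: power at {f1 + f2:.0f} Hz = {im:.1f}")
```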


2020 ◽  
Author(s):  
Madeline S. Cappelloni ◽  
Sabyasachi Shivkumar ◽  
Ralf M. Haefner ◽  
Ross K. Maddox

Abstract The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer’s task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory spatial discrimination task in which spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative and should therefore be given zero weight. Listeners perform an auditory spatial discrimination task with relative reliabilities modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli are spatially aligned with auditory stimuli or centrally located (control condition), listeners are shown to have a larger multisensory effect when their auditory thresholds are worse. Even in cases in which visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.
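A simplified sketch of the Bayesian causal inference computation the paper uses as its reference model (following the common formulation, assuming a broad uniform spatial prior of width L; all parameters illustrative): cues are fused in proportion to the inferred probability of a common cause.

```python
import numpy as np

def causal_inference(x_a, x_v, var_a, var_v, prior_c=0.5, L=60.0):
    """Model-averaged auditory location estimate under causal inference."""
    # Likelihood of the cue discrepancy given one cause vs. two.
    var_sum = var_a + var_v
    like_c1 = np.exp(-(x_a - x_v) ** 2 / (2 * var_sum)) / (
        np.sqrt(2 * np.pi * var_sum) * L)
    like_c2 = 1.0 / L ** 2
    pc1 = prior_c * like_c1 / (prior_c * like_c1 + (1 - prior_c) * like_c2)

    # Reliability-weighted fusion if one cause; audition alone if two.
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    fused = w_a * x_a + (1 - w_a) * x_v
    return pc1 * fused + (1 - pc1) * x_a

# A nearby visual cue pulls the auditory estimate more when the
# auditory cue is unreliable (large var_a), echoing the finding that
# the multisensory effect grows as auditory thresholds worsen.
print(causal_inference(x_a=5.0, x_v=0.0, var_a=1.0, var_v=1.0))
print(causal_inference(x_a=5.0, x_v=0.0, var_a=9.0, var_v=1.0))
```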


2021 ◽  
Vol 118 (27) ◽  
e2011905118
Author(s):  
Nadina O. Zweifel ◽  
Nicholas E. Bush ◽  
Ian Abraham ◽  
Todd D. Murphey ◽  
Mitra J. Z. Hartmann

As it becomes possible to simulate increasingly complex neural networks, it becomes correspondingly important to model the sensory information that animals actively acquire: the biomechanics of sensory acquisition directly determines the sensory input and therefore neural processing. Here, we exploit the tractable mechanics of the well-studied rodent vibrissal (“whisker”) system to present a model that can simulate the signals acquired by a full sensor array actively sampling the environment. Rodents actively “whisk” ∼60 vibrissae (whiskers) to obtain tactile information, and this system is therefore ideal to study closed-loop sensorimotor processing. The simulation framework presented here, WHISKiT Physics, incorporates realistic morphology of the rat whisker array to predict the time-varying mechanical signals generated at each whisker base during sensory acquisition. Single-whisker dynamics were optimized based on experimental data and then validated against free tip oscillations and dynamic responses to collisions. The model was then extrapolated to include all whiskers in the array, incorporating each whisker’s individual geometry. Simulation examples in laboratory and natural environments demonstrate that WHISKiT Physics can predict input signals during various behaviors, measurements that are currently impossible to obtain in the biological animal. In one exemplary use of the model, the results suggest that active whisking increases in-plane whisker bending compared to passive stimulation and that principal component analysis can reveal the relative contributions of whisker identity and mechanics at each whisker base to the vibrissotactile response. These results highlight how interactions between array morphology and individual whisker geometry and dynamics shape the signals that the brain must process.
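The PCA analysis mentioned at the end can be sketched on synthetic stand-ins for the whisker-base signals (WHISKiT Physics supplies the real ones): a shared contact component scaled by whisker-specific gains, with the first principal component recovering the shared dynamics. All shapes and signal structure below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_whiskers, n_samples = 60, 1000

# Shared "contact" component plus per-whisker gain and noise.
shared = np.sin(np.linspace(0, 20 * np.pi, n_samples))
gains = rng.uniform(0.5, 2.0, size=n_whiskers)   # whisker-specific geometry
signals = np.outer(gains, shared) + 0.3 * rng.normal(size=(n_whiskers, n_samples))

# PCA via the SVD of the mean-centered signal matrix.
centered = signals - signals.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# One dominant component captures the shared contact dynamics; the
# remainder reflects whisker-specific geometry and noise.
print("variance explained by PC1:", explained[0].round(2))
```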


1999 ◽  
Vol 13 (2) ◽  
pp. 117-125 ◽  
Author(s):  
Laurence Casini ◽  
Françoise Macar ◽  
Marie-Hélène Giard

Abstract The experiment reported here was aimed at determining whether the level of brain activity can be related to performance in trained subjects. Two tasks were compared: a temporal and a linguistic task. An array of four letters appeared on a screen. In the temporal task, subjects had to decide whether the letters remained on the screen for a short or a long duration as learned in a practice phase. In the linguistic task, they had to determine whether the four letters could form a word or not (anagram task). These tasks allowed us to compare the level of brain activity associated with correct and incorrect responses. The current density measures recorded over prefrontal areas showed a relationship between performance and the level of activity in the temporal task only. The level of activity obtained with correct responses was lower than that obtained with incorrect responses. This suggests that a good temporal performance could be the result of an efficacious, but economic, information-processing mechanism in the brain. In addition, the absence of this relation in the anagram task raises the question of whether it is specific to the processing of sensory information.

