Adaptation to Binocular Anticorrelation Results in Increased Neural Excitability

2020 ◽  
Vol 32 (1) ◽  
pp. 100-110
Author(s):  
Reuben Rideaux ◽  
Elizabeth Michael ◽  
Andrew E. Welchman

Throughout the brain, information from individual sources converges onto higher order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Some neurons are tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause—that is, establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, some binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioral evidence supporting the existence of these neurons [Katyal, S., Vergeer, M., He, S., He, B., & Engel, S. A. Conflict-sensitive neurons gate interocular suppression in human visual cortex. Scientific Reports, 8, 1239, 2018; Kingdom, F. A. A., Jennings, B. J., & Georgeson, M. A. Adaptation to interocular difference. Journal of Vision, 18, 9, 2018; Janssen, P., Vogels, R., Liu, Y., & Orban, G. A. At least at the level of inferior temporal cortex, the stereo correspondence problem is solved. Neuron, 37, 693–701, 2003; Tsao, D. Y., Conway, B. R., & Livingstone, M. S. Receptive fields of disparity-tuned simple cells in macaque V1. Neuron, 38, 103–114, 2003; Cumming, B. G., & Parker, A. J. Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature, 389, 280–283, 1997], their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers' steady-state visually evoked potentials in response to change in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger steady-state visually evoked potentials, whereas adaptation to correlated RDS has no effect. 
These results are consistent with recent theoretical work suggesting “what not” neurons play a suppressive role in supporting stereopsis [Goncalves, N. R., & Welchman, A. E. “What not” detectors help the brain see in depth. Current Biology, 27, 1403–1412, 2017]; that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression resulting in increased neural excitability.
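In SSVEP paradigms like this one, the response is typically quantified as the amplitude of the EEG spectrum at the frequency of the periodic stimulus. The sketch below is illustrative only: the function name, sampling rate, 7.5 Hz tag, and synthetic signal are assumptions made for the example, not the study's actual parameters or analysis pipeline.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, stim_freq):
    """Amplitude of the EEG spectrum at the stimulation frequency.

    eeg: 1-D array of voltage samples from one electrode
    fs: sampling rate (Hz); stim_freq: tagging frequency (Hz)
    """
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)   # one-sided, scaled
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Amplitude at the bin closest to the stimulation frequency
    return spectrum[np.argmin(np.abs(freqs - stim_freq))]

# Synthetic check: a 7.5 Hz "tagged" component buried in noise
fs, f_stim, dur = 250, 7.5, 8.0
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, t.size)
amp = ssvep_amplitude(eeg, fs, f_stim)   # ~1.0 (= sine peak amplitude / 2)
```

Comparing this amplitude across adaptation conditions (anticorrelated vs. correlated RDS) is the kind of contrast the study reports: a larger value after anticorrelated adaptation would indicate increased neural excitability.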


2010 ◽  
Vol 22 (12) ◽  
pp. 2979-3035 ◽  
Author(s):  
Stefan Klampfl ◽  
Wolfgang Maass

Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. How they can learn to achieve this, especially without the help of a supervisor, remains an open question. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn without any supervision to discriminate between spoken digits and to detect repeated firing patterns that are embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states lasting several hundred milliseconds. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset, or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed-point attractors. They also provide a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
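Linear SFA, the unsupervised algorithm the abstract relates to the FLD, amounts to whitening the input and then taking the direction whose temporal derivative has minimal variance. A minimal sketch follows; the variable names and the toy two-channel mixture are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def linear_sfa(x):
    """One-component linear slow feature analysis.

    x: (T, d) multivariate time series. Returns weights w such that
    y = (x - mean) @ w varies as slowly as possible at unit variance.
    """
    xc = x - x.mean(axis=0)
    # Whiten: decorrelate channels and normalize their variance
    cov = np.cov(xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals)              # whitening matrix (columns)
    z = xc @ W
    # Slowness objective: minimize variance of the temporal difference
    dz = np.diff(z, axis=0)
    devals, devecs = np.linalg.eigh(np.cov(dz, rowvar=False))
    w_slow = devecs[:, 0]                   # smallest diff-variance direction
    return W @ w_slow

# Toy demo: a slow and a fast sinusoid mixed across two channels
T = 2000
t = np.linspace(0, 2 * np.pi, T)
slow, fast = np.sin(t), np.sin(40 * t)
A = np.array([[1.0, 0.5], [0.3, 1.0]])      # mixing matrix
x = np.column_stack([slow, fast]) @ A.T
w = linear_sfa(x)
y = (x - x.mean(axis=0)) @ w
# y should recover the slow source (up to sign and scale)
corr = abs(np.corrcoef(y, slow)[0, 1])
```

The connection to the FLD arises because, for class-structured inputs, the slowest direction of a suitably constructed trajectory coincides with the most discriminative one, which is the equivalence the paper analyzes.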


2021 ◽  
Author(s):  
Ge Zhang ◽  
Yan Cui ◽  
Yangsong Zhang ◽  
Hefei Cao ◽  
Guanyu Zhou ◽  
...  

Periodic visual stimulation can induce stable steady-state visual evoked potentials (SSVEPs) distributed in multiple brain regions and has potential applications in both neural engineering and cognitive neuroscience. However, the underlying dynamic mechanisms of SSVEPs at the whole-brain level are still not completely understood. Here, we addressed this issue by simulating the rich dynamics of SSVEPs with a large-scale brain model designed with constraints of neuroimaging data acquired from the human brain. By eliciting activity of the occipital areas using an external periodic stimulus, our model was capable of replicating both the spatial distributions and response features of SSVEPs that were observed in experiments. In particular, we confirmed that alpha-band (8-12 Hz) stimulation could evoke stronger SSVEP responses; this frequency sensitivity was due to nonlinear resonance and could be modulated by endogenous factors in the brain. Interestingly, the stimulus-evoked brain networks also exhibited significant superiority in topological properties near this frequency-sensitivity range, and stronger SSVEP responses were demonstrated to be supported by more efficient functional connectivity at the neural activity level. These findings not only provide insights into the mechanistic understanding of SSVEPs at the whole-brain level but also indicate a bright future for large-scale brain modeling in characterizing the complicated dynamics and functions of the brain.
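The frequency sensitivity the model attributes to nonlinear resonance can be illustrated, in a deliberately simplified linear form, with a damped oscillator driven at different stimulation frequencies: the steady-state response peaks when the drive approaches the system's natural frequency. All parameters below (a 10 Hz natural frequency, the damping, the drive strength) are toy assumptions for the sketch, not values from the model, and a linear oscillator only captures the frequency selectivity, not the model's nonlinear neural dynamics.

```python
import numpy as np

def driven_response(f_drive, f0=10.0, damping=1.0, dt=1e-3, dur=5.0):
    """Steady-state amplitude of a damped oscillator driven at f_drive (Hz).

    A toy stand-in for the model's frequency sensitivity: the response
    is largest when the drive is near the natural frequency f0.
    """
    w0, wd = 2 * np.pi * f0, 2 * np.pi * f_drive
    x, v = 0.0, 0.0
    amps = []
    for i in range(int(dur / dt)):
        t = i * dt
        a = -w0 ** 2 * x - 2 * damping * v + np.cos(wd * t)
        v += a * dt          # semi-implicit Euler: stable for oscillators
        x += v * dt
        if t > dur / 2:      # discard the transient, keep steady state
            amps.append(abs(x))
    return max(amps)

# Sweep stimulation frequencies: the alpha-like band near f0 dominates
freqs = [4, 8, 10, 12, 20, 30]
responses = {f: driven_response(f) for f in freqs}
```

Sweeping the drive frequency this way reproduces, in caricature, the band-limited sensitivity the abstract describes: responses fall off sharply on either side of the natural frequency.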


2019 ◽  
Vol 19 (6) ◽  
pp. 8 ◽  
Author(s):  
Jing Chen ◽  
Meaghan McManus ◽  
Matteo Valsecchi ◽  
Laurence R. Harris ◽  
Karl R. Gegenfurtner
