neural responses
Recently Published Documents


TOTAL DOCUMENTS: 2141 (FIVE YEARS: 691)
H-INDEX: 97 (FIVE YEARS: 9)

2022 ◽ Vol 2 (1) ◽ pp. 100081
Author(s): Yingying Wang, Rebecca Custead, Hyuntaek Oh, Steven M. Barlow

2022
Author(s): Ruosi Wang, Daniel Janini, Talia Konkle

Responses to visually presented objects along the cortical surface of the human brain show a large-scale organization reflecting the broad categorical divisions of animacy and object size. Mounting evidence indicates that this topographical organization is driven by differences between objects in mid-level perceptual features. With regard to the timing of neural responses, images of objects quickly evoke neural responses with decodable information about animacy and object size, but are mid-level features sufficient to evoke these rapid neural responses, or is slower iterative neural processing required to untangle information about animacy and object size from mid-level features? To answer this question, we used electroencephalography (EEG) to measure human neural responses to images of objects and their texform counterparts: unrecognizable images that preserve some mid-level feature information about texture and coarse form. We found that texform images evoked neural responses with early decodable information about both animacy and real-world size, as early as the responses evoked by the original images. Further, successful cross-decoding indicates that texform and original images evoke information about animacy and size through a common underlying neural basis. Broadly, these results indicate that the visual system contains a mid-level feature bank carrying linearly decodable information about animacy and size, which can be rapidly activated without explicit recognition or protracted temporal processing.
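As a concrete illustration of the cross-decoding logic in this abstract, here is a minimal sketch of time-resolved decoding that trains a linear classifier on texform trials and tests it on original-image trials at each time point. The array shapes, variable names, and the choice of logistic regression are assumptions for illustration, not the authors' pipeline.

```python
# Minimal time-resolved cross-decoding sketch (hypothetical data layout):
# X arrays are (n_trials, n_channels, n_times); y holds binary labels
# (e.g., animate vs. inanimate, or big vs. small real-world size).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_decode(X_train, y_train, X_test, y_test):
    """Fit a linear decoder at each time point on one image format and
    evaluate it on the other; returns accuracy per time point."""
    n_times = X_train.shape[2]
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X_train[:, :, t], y_train)
        accuracy[t] = clf.score(X_test[:, :, t], y_test)
    return accuracy

# Above-chance accuracy at early latencies, e.g.
#   acc = cross_decode(X_texform, y_texform, X_original, y_original)
# would indicate a shared, rapidly activated mid-level feature basis.
```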


2022 ◽ Vol 13 (1)
Author(s): Yi Yang, Tian Wang, Yang Li, Weifeng Dai, Guanzhong Yang, ...

Both surface luminance and edge contrast of an object are essential features for object identification, yet cortical processing of surface luminance remains unclear. In this study, we aim to understand how the primary visual cortex (V1) processes surface luminance information across its different layers. We report that edge-driven responses are stronger than surface-driven responses in V1 input layers, but that luminance information is coded more accurately by surface responses. In V1 output layers, the advantage of edge over surface responses increases eightfold, and luminance information is coded more accurately at edges. Further analysis of neural dynamics shows that these substantial changes in neural responses and luminance coding are mainly due to non-local cortical inhibition in V1's output layers. Our results suggest that non-local cortical inhibition modulates the responses elicited by the surfaces and edges of objects, and that this switch in coding strategy promotes efficient coding of luminance in V1.
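The layer-wise comparison of edge-driven and surface-driven responses can be summarized with a standard modulation index; the sketch below is a generic illustration of that comparison, not the paper's exact metric.

```python
import numpy as np

def edge_surface_index(edge_rates, surface_rates):
    """Per-neuron modulation index in [-1, 1]: positive values indicate
    stronger edge-driven responses, negative values indicate stronger
    surface-driven responses."""
    edge = np.asarray(edge_rates, dtype=float)
    surface = np.asarray(surface_rates, dtype=float)
    return (edge - surface) / (edge + surface + 1e-12)

# Comparing the index distributions between V1 input and output layers
# would quantify the eightfold growth of the edge advantage reported above.
```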


2022 ◽ Vol 119 (2) ◽ pp. e2023340118
Author(s): Srinath Nizampatnam, Lijun Zhang, Rishabh Chandak, James Li, Baranidharan Raman

Invariant stimulus recognition is a challenging pattern-recognition problem that all sensory systems must solve. Since neural responses evoked by a stimulus are perturbed in a multitude of ways, how can this computational capability be achieved? We examine this issue in the locust olfactory system. We find that locusts trained in an appetitive-conditioning assay robustly recognize the trained odorant independent of variations in stimulus duration, dynamics, or history, and of changes in background and ambient conditions. However, individual- and population-level neural responses vary unpredictably with many of these variations. Our results indicate that linear statistical decoding schemes, which assign positive weights to ON neurons and negative weights to OFF neurons, resolve this apparent confound between neural variability and behavioral stability. Furthermore, simplifying the decoder to ternary weights ({+1, 0, −1}; an "ON-minus-OFF" approach) does not compromise performance, striking a fine balance between simplicity and robustness.
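The ternary "ON-minus-OFF" decoder described here is simple enough to state in a few lines. The sketch below assumes trial-by-neuron response matrices and boolean masks identifying ON and OFF neurons; how those masks are derived is not specified here.

```python
import numpy as np

def on_minus_off_score(responses, on_mask, off_mask):
    """Ternary-weight linear decoder: weight +1 for ON neurons, -1 for
    OFF neurons, 0 for all others. responses is (n_trials, n_neurons)."""
    weights = np.zeros(responses.shape[1])
    weights[on_mask] = 1.0
    weights[off_mask] = -1.0
    return responses @ weights

# A trial is classified as the trained odorant when its score is positive:
# predictions = on_minus_off_score(responses, on_mask, off_mask) > 0
```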


2022 ◽ Vol 18 (1) ◽ pp. e1009739
Author(s): Nathan C. L. Kong, Eshed Margalit, Justin L. Gardner, Anthony M. Norcia

Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations can cause a CNN to make incorrect predictions. Here we provide insight into this brittleness by investigating the representations of models that are either robust or not robust to image perturbations. Theory suggests that a system's robustness to these perturbations is related to the power-law exponent of the eigenspectrum of its set of neural responses: exponents at or above one indicate a system less susceptible to input perturbations. We show that neural responses in mouse and macaque primary visual cortex (V1) obey the predictions of this theory, with eigenspectra whose power-law exponents are at least one. We also find that the eigenspectra of model representations decay slowly relative to those observed in neurophysiology, and that robust models have eigenspectra that decay slightly faster, with higher power-law exponents, than those of non-robust models. The slow decay of the eigenspectra suggests that substantial variance in the model responses is devoted to encoding fine stimulus features. We therefore investigated the spatial frequency tuning of artificial neurons and found that a large proportion of them preferred high spatial frequencies, and that robust models had preferred spatial frequency distributions more aligned with the measured distribution of macaque V1 cells. Furthermore, robust models were quantitatively better models of V1 than non-robust models. Our results are consistent with other findings of a misalignment between human and machine perception. They also suggest that penalizing slow-decaying eigenspectra, or biasing models toward lower spatial frequencies during task optimization, may improve both robustness and V1 neural response predictivity.
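The eigenspectrum power-law exponent discussed here can be estimated by computing principal-component variances of a stimulus-by-unit response matrix and fitting a line in log-log space. The sketch below illustrates that standard procedure; the fit range over PC ranks is an arbitrary illustrative choice, not the authors'.

```python
import numpy as np

def eigenspectrum_exponent(responses, fit_range=(10, 100)):
    """Estimate the power-law exponent alpha of the eigenspectrum of a
    (n_stimuli, n_units) response matrix, where the n-th PC variance
    scales approximately as n**(-alpha)."""
    centered = responses - responses.mean(axis=0)
    # PC variances via the singular values of the centered data matrix.
    singular_values = np.linalg.svd(centered, compute_uv=False)
    variances = singular_values**2 / (responses.shape[0] - 1)
    ranks = np.arange(1, variances.size + 1)
    lo, hi = fit_range
    hi = min(hi, variances.size)
    slope, _ = np.polyfit(np.log(ranks[lo:hi]), np.log(variances[lo:hi]), 1)
    return -slope

# Per the theory cited above, alpha >= 1 is the regime associated with
# robustness to input perturbations.
```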


Author(s): Yaqiong Xiao, Teresa H. Wen, Lauren Kupis, Lisa T. Eyler, Disha Goel, ...

2022
Author(s): Byron H Price, Cambria M Jensen, Anthony A Khoudary, Jeffrey P Gavornik

Repeated exposure to visual sequences changes the form of evoked activity in the primary visual cortex (V1). Predictive coding theory provides a potential explanation: plasticity shapes cortical circuits to encode spatiotemporal predictions, and subsequent responses are modulated by the degree to which actual inputs match these expectations. Here we use a recently developed statistical modeling technique called Model-Based Targeted Dimensionality Reduction (MbTDR) to study visually evoked dynamics in mouse V1 in the context of a previously described experimental paradigm called "sequence learning". We report that evoked spiking activity changed significantly with training, in a manner generally consistent with the predictive coding framework. After training, neural responses to expected stimuli were suppressed in a late window (100–150 ms) after stimulus onset, while responses to novel stimuli were not. Omitting predictable stimuli led to increased firing at the expected time of stimulus onset, but only in trained mice. Substituting a novel stimulus for a familiar one led to changes in firing that persisted for at least 300 ms. In addition, we show that spiking data can be used to accurately decode time within the sequence. Our findings are consistent with the idea that plasticity in early visual circuits is involved in coding spatiotemporal information.
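The time-within-sequence decoding mentioned at the end of this abstract can be illustrated with a generic cross-validated classifier over binned population spike counts; this is a stand-in sketch under assumed data shapes, not a reimplementation of MbTDR.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_sequence_time(spike_counts, time_bin_labels, folds=5):
    """Cross-validated decoding of time within the sequence.
    spike_counts is (n_samples, n_neurons); time_bin_labels gives the
    time bin each population vector was recorded in."""
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, spike_counts, time_bin_labels, cv=folds)
    return scores.mean()

# Accuracy well above chance (1 / n_bins) would indicate that V1 spiking
# carries an explicit representation of elapsed time within the sequence.
```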

