Visual cortex: Recently Published Documents

Total documents: 10797 (last five years: 1507)
H-index: 251 (last five years: 16)

2022
Author(s): Andrea Kóbor, Karolina Janacsek, Petra Hermann, Zsofia Zavecz, Vera Varga, et al.

Previous research has shown that humans can extract statistical regularities from the environment to automatically predict upcoming events. However, it has remained unexplored how the brain encodes the distribution of statistical regularities when that distribution changes continuously. To investigate this question, we devised an fMRI paradigm in which participants (N = 32) completed a visual four-choice reaction time (RT) task containing statistical regularities. Two types of blocks involving the same perceptual elements alternated throughout the task: the distribution of statistical regularities was predictable in one block type and unpredictable in the other. Participants were unaware of the presence of the statistical regularities and of their changing distribution across task blocks. Based on the RT results, although statistical regularities were processed similarly in both block types, participants acquired less statistical knowledge in the unpredictable than in the predictable blocks. Whole-brain random-effects analyses showed increased activity in the early visual cortex and decreased activity in the precuneus for the predictable as compared with the unpredictable blocks. Therefore, the actual predictability of statistical regularities is likely represented already at early stages of visual cortical processing. However, the decreased precuneus activity suggests that these representations are imperfectly updated to track the multiple shifts in predictability throughout the task. The results also highlight that the processing of statistical regularities in a changing environment may be habitual.
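One common way to embed statistical regularities that participants do not notice is to interleave patterned and random trials so that some runs of three consecutive stimuli ("triplets") occur more often than others. The sketch below is a generic illustration of that idea, not necessarily this study's exact design; the function name, pattern, and probabilities are assumptions:

```python
import random
from collections import Counter

def make_sequence(n_trials, pattern=(0, 1, 2, 3), seed=0):
    """Interleave deterministic pattern trials (even positions) with
    uniformly random trials (odd positions). Pattern-consistent triplets,
    e.g. (0, x, 1) when 1 follows 0 in the pattern, end up far more
    frequent than other triplets, even though half the trials are random."""
    rng = random.Random(seed)
    seq = []
    for i in range(n_trials):
        if i % 2 == 0:
            seq.append(pattern[(i // 2) % len(pattern)])   # pattern trial
        else:
            seq.append(rng.randrange(4))                   # random trial
    return seq

seq = make_sequence(4000)
triplets = Counter(zip(seq, seq[1:], seq[2:]))
# triplets like (0, 0, 1) are high-probability; (0, 0, 2) is low-probability
```

RT differences between high- and low-probability triplets are then taken as the behavioral index of statistical knowledge, without participants noticing the structure.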


eNeuro · 2022 · pp. ENEURO.0280-21.2021
Author(s): William G. P. Mayner, William Marshall, Yazan N. Billeh, Saurabh R. Gandhi, Shiella Caldejon, et al.

2022 · Vol 23 (1)
Author(s): Noemi Meylakh, Luke A. Henderson

Abstract
Background: Migraine is a neurological disorder characterized by intense, debilitating headaches, often coupled with nausea, vomiting, and sensitivity to light and sound. While changes in sensory processing during a migraine attack have been well described, there is growing evidence that sensory abilities are disrupted even between migraine attacks. Brain imaging studies have investigated altered coupling between areas of the descending pain modulatory pathway, but coupling between somatosensory processing regions between migraine attacks has not been properly studied. The aim of this study was to determine whether ongoing functional connectivity between visual, auditory, olfactory, gustatory, and somatosensory cortices is altered during the interictal phase of migraine.
Methods: To explore the neural mechanisms underpinning interictal changes in sensory processing, we used functional magnetic resonance imaging to compare resting brain activity patterns and connectivity in migraineurs between migraine attacks (n = 32) and in healthy controls (n = 71). Significant differences between groups were determined using two-sample random-effects procedures (p < 0.05, corrected for multiple comparisons, minimum cluster size 10 contiguous voxels, with age and gender included as nuisance variables).
Results: In the migraine group, increases in infra-slow oscillatory activity were detected in the right primary visual cortex (V1), secondary visual cortex (V2), and third visual complex (V3), and in left V3. In addition, resting connectivity analysis revealed that migraineurs displayed significantly enhanced connectivity between V1 and V2 and other sensory cortices, including the auditory, gustatory, motor, and somatosensory cortices.
Conclusions: These data provide evidence for a dysfunctional sensory network in pain-free migraine patients, which may underlie altered sensory processing between migraine attacks.
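The cluster-extent criterion quoted above (p < 0.05, minimum cluster size 10 contiguous voxels) can be sketched in a few lines: suprathreshold voxels are kept only if they belong to a connected cluster of sufficient size. This is a generic illustration built on `scipy.ndimage.label`, not the authors' actual analysis pipeline; the function and array names are assumptions:

```python
import numpy as np
from scipy import ndimage

def cluster_threshold(stat_map, p_map, p_thresh=0.05, min_cluster=10):
    """Keep suprathreshold voxels only where they form connected clusters
    of at least min_cluster voxels (face connectivity by default)."""
    mask = p_map < p_thresh
    labels, n_clusters = ndimage.label(mask)
    out = np.zeros_like(stat_map)
    for k in range(1, n_clusters + 1):
        cluster = labels == k
        if cluster.sum() >= min_cluster:
            out[cluster] = stat_map[cluster]
    return out

# toy volume: a 12-voxel cluster survives, a 3-voxel cluster is removed
p = np.ones((10, 10, 10))
p[5:9, 5:8, 5] = 0.01   # 4 x 3 = 12 contiguous voxels
p[0:3, 0, 0] = 0.01     # 3 contiguous voxels
t = np.full_like(p, 3.0)
thresholded = cluster_threshold(t, p)
```

The extent criterion trades voxel-level sensitivity for protection against isolated false-positive voxels, which is why it is paired here with a corrected voxelwise threshold.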


2022 · Vol 13 (1)
Author(s): Yi Yang, Tian Wang, Yang Li, Weifeng Dai, Guanzhong Yang, et al.

Abstract
Both surface luminance and edge contrast of an object are essential features for object identification. However, cortical processing of surface luminance remains unclear. In this study, we aim to understand how the primary visual cortex (V1) processes surface luminance information across its different layers. We report that edge-driven responses are stronger than surface-driven responses in V1 input layers, but luminance information is coded more accurately by surface responses. In V1 output layers, the advantage of edge over surface responses increases eightfold, and luminance information is coded more accurately at edges. Further analysis of neural dynamics shows that these substantial changes in neural responses and luminance coding are mainly due to non-local cortical inhibition in V1's output layers. Our results suggest that non-local cortical inhibition modulates the responses elicited by the surfaces and edges of objects, and that switching the coding strategy in V1 promotes efficient coding of luminance.


Author(s): Sunyoung Park, John T. Serences

Top-down spatial attention enhances cortical representations of behaviorally relevant visual information and increases the precision of perceptual reports. However, little is known about the relative precision of top-down attentional modulations in different visual areas, especially compared to the highly precise stimulus-driven responses observed in early visual cortex. For example, the precision of attentional modulations in early visual areas may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas in prefrontal cortex that generate and relay the top-down signals. Here, we used fMRI in human participants to assess the precision of bottom-up spatial representations evoked by high-contrast stimuli across the visual hierarchy. Then, we examined the relative precision of top-down attentional modulations in the absence of spatially specific bottom-up drive. While V1 showed the largest relative difference between the precision of top-down attentional modulations and the precision of bottom-up modulations, mid-level areas such as V4 showed smaller differences between the precision of top-down and bottom-up modulations. Overall, this interaction between visual area (e.g., V1 vs. V4) and the relative precision of top-down and bottom-up modulations suggests that the precision of top-down attentional modulations is limited by the representational fidelity of the areas that generate and relay top-down feedback signals.


2022
Author(s): Sohrab Najafian, Erin Koch, Kai-Lun Teh, Jianzhong Jin, Hamed Rahimi-Nasrabadi, et al.

The cerebral cortex receives multiple afferents from the thalamus that segregate by stimulus modality, forming cortical maps for each sense. In vision, the primary visual cortex also maps the multiple dimensions of the stimulus, in patterns that vary across species for reasons unknown. Here we introduce a general theory of cortical map formation, which proposes that map diversity emerges from variations in the sampling density of sensory space across species. In the theory, increasing afferent sampling density enlarges the cortical domains representing the same visual point, allowing the segregation of afferents and cortical targets by additional stimulus dimensions. We illustrate the theory with a computational model that accurately replicates the maps of different species through afferent segregation followed by thalamocortical convergence pruned by visual experience. Because thalamocortical pathways use similar mechanisms for axon sorting and pruning, the theory may extend to other sensory areas of the mammalian brain.
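A classic building block for this kind of topographic self-organization is competitive, neighborhood-based learning. The sketch below is a minimal Kohonen self-organizing map of our own for illustration, not the authors' model; all names and parameters are assumptions. Afferents sampling a 2-D visual field self-organize onto a cortical sheet so that neighboring units come to represent neighboring visual points:

```python
import numpy as np

def train_som(samples, grid=12, iters=4000, seed=0):
    """Minimal Kohonen self-organizing map: units on a grid x grid sheet
    compete for each input; the best-matching unit and its neighbors move
    toward the input, yielding a smooth topographic map of the samples."""
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    w = rng.random((grid * grid, dim))                      # unit weights
    coords = np.array([(i, j) for i in range(grid)
                       for j in range(grid)], dtype=float)  # sheet positions
    for t in range(iters):
        x = samples[rng.integers(len(samples))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))         # best-matching unit
        sigma = 3.0 * np.exp(-t / (iters / 3))              # shrinking neighborhood
        lr = 0.5 * np.exp(-t / (iters / 3))                 # decaying learning rate
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))               # neighborhood kernel
        w += lr * h[:, None] * (x - w)
    return w, coords

rng = np.random.default_rng(1)
field = rng.random((2000, 2))        # afferents sampling a 2-D visual field
w, coords = train_som(field)
```

In the theory's terms, adding input dimensions to each sample (e.g., eye of origin) while increasing sampling density is the kind of manipulation that lets additional stimulus dimensions segregate within the map.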


2022 · Vol 13 (1)
Author(s): Yan-Liang Shi, Nicholas A. Steinmetz, Tirin Moore, Kwabena Boahen, Tatiana A. Engel

Abstract
Correlated activity fluctuations in the neocortex influence sensory responses and behavior. Neural correlations reflect anatomical connectivity but also change dynamically with cognitive states such as attention. Yet, the network mechanisms defining the population structure of correlations remain unknown. We measured correlations within columns in the visual cortex. We show that the magnitude of correlations, their attentional modulation, and dependence on lateral distance are explained by columnar On-Off dynamics, which are synchronous activity fluctuations reflecting cortical state. We developed a network model in which the On-Off dynamics propagate across nearby columns generating spatial correlations with the extent controlled by attentional inputs. This mechanism, unlike previous proposals, predicts spatially non-uniform changes in correlations during attention. We confirm this prediction in our columnar recordings by showing that in superficial layers the largest changes in correlations occur at intermediate lateral distances. Our results reveal how spatially structured patterns of correlated variability emerge through interactions of cortical state dynamics, anatomical connectivity, and attention.
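The propagation mechanism can be caricatured with a ring of binary On-Off columns whose state updates are biased toward their neighbors, so that state fluctuations spread laterally and induce distance-dependent correlations. This is a toy sketch under assumptions of our own (1-D ring, linear coupling), not the authors' network model:

```python
import numpy as np

def simulate_on_off(n_cols=20, n_steps=20000, p_update=0.2, coupling=0.5, seed=0):
    """Each column is On (1) or Off (0). On each update, a column resamples
    its state with probability pulled toward the mean of its two ring
    neighbors, so On-Off fluctuations propagate laterally."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n_cols)
    history = np.empty((n_steps, n_cols))
    for t in range(n_steps):
        neighbor_mean = (np.roll(state, 1) + np.roll(state, -1)) / 2.0
        p_on = (1 - coupling) * 0.5 + coupling * neighbor_mean  # biased resampling
        update = rng.random(n_cols) < p_update                  # which columns update
        new = (rng.random(n_cols) < p_on).astype(int)
        state = np.where(update, new, state)
        history[t] = state
    return history

history = simulate_on_off()
corr = np.corrcoef(history.T)   # column-by-column correlation matrix
```

In this caricature, the `coupling` parameter controls how far fluctuations spread, loosely analogous to the attentional control of spatial correlation extent described above; nearby columns end up more correlated than distant ones.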


2022
Author(s): Akshay Vivek Jagadeesh, Justin Gardner

The human visual ability to recognize objects and scenes is widely thought to rely on representations in category-selective regions of visual cortex. These representations could support object vision by specifically representing objects or, more simply, by representing complex visual features regardless of the particular spatial arrangement needed to constitute real-world objects; that is, by representing visual textures. To discriminate between these hypotheses, we leveraged an image synthesis approach that, unlike previous methods, provides independent control over the complexity and spatial arrangement of visual features. We found that human observers could easily detect a natural object among synthetic images with similar complex features that were spatially scrambled. However, observer models built from BOLD responses of category-selective regions, as well as a model of macaque inferotemporal cortex and ImageNet-trained deep convolutional neural networks, were all unable to identify the real object. This inability was not due to insufficient signal-to-noise, as all of these observer models could predict human performance in image categorization tasks. How then might these texture-like representations in category-selective regions support object perception? An image-specific readout from category-selective cortex yielded a representation that was more selective for natural feature arrangement, showing that the information necessary for object discrimination is available. Thus, our results suggest that the role of human category-selective visual cortex is not to explicitly encode objects but rather to provide a basis set of texture-like features that can be infinitely reconfigured to flexibly learn and identify new object categories.

