Representation of Visual Landmarks in Retrosplenial Cortex

2019
Author(s): Lukas F. Fischer, Raul Mojica Soto-Albors, Friederike Buck, Mark T. Harnett

Abstract: The process by which visual information is incorporated into the brain’s spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. Presenting the same environment decoupled from mouse behavior degraded encoding fidelity. Analyzing visual and motor responses showed that landmark codes were the result of supralinear integration. Surprisingly, V1 axons recorded in RSC showed similar receptive fields. However, they were less modulated by task engagement, indicating that landmark representations in RSC are the result of local computations. Our data provide cellular- and network-level insight into how RSC represents landmarks.


eLife, 2020, Vol 9
Author(s): Lukas F Fischer, Raul Mojica Soto-Albors, Friederike Buck, Mark T Harnett

The process by which visual information is incorporated into the brain’s spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. This encoding was more robust after task acquisition. Decoupling the virtual environment from mouse behavior degraded spatial representations and provided evidence that supralinear integration of visual and motor inputs contributes to landmark encoding. V1 axons recorded in RSC were less modulated by task engagement but showed surprisingly similar spatial tuning. Our data indicate that landmark representations in RSC are the result of local integration of visual, motor, and spatial information.
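As a rough illustration of the supralinear-integration analysis described above, the sketch below compares each cell's response in a combined landmark condition against the sum of its visual-only and motor-only responses. All values, condition names, and the index definition are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative trial-averaged responses (dF/F) for n_cells neurons under three
# hypothetical conditions: visual cue alone, locomotion alone, and both combined.
n_cells = 200
visual_only = rng.gamma(2.0, 0.1, n_cells)
motor_only = rng.gamma(2.0, 0.1, n_cells)
combined = 1.5 * (visual_only + motor_only) + rng.normal(0, 0.02, n_cells)

# Supralinearity index: measured combined response relative to the linear
# prediction (sum of the unimodal responses). Values > 0 indicate supralinear
# integration; values < 0 indicate sublinear integration.
linear_prediction = visual_only + motor_only
supralinearity = (combined - linear_prediction) / (combined + linear_prediction)

print(f"median supralinearity index: {np.median(supralinearity):.2f}")
print(f"fraction of cells with supralinear responses: {np.mean(supralinearity > 0):.2f}")
```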



2019
Author(s): Joseph B. Wekselblatt, Cristopher M. Niell

Abstract: Learning can cause significant changes in neural responses to relevant stimuli, in addition to modulation due to task engagement. However, it is not known how different functional types of excitatory neurons contribute to these changes. To address this gap, we performed two-photon calcium imaging of excitatory neurons in layer 2/3 of mouse primary visual cortex before and after learning of a visual discrimination task. We found that excitatory neurons show striking diversity in the temporal dynamics of their responses to visual stimuli during behavior, and on this basis we classified them into transient, sustained, and suppressed groups. Notably, these functionally defined cell classes exhibited different visual stimulus selectivity and modulation by locomotion, and were differentially affected by training condition. In particular, we observed a decrease in the number of transient neurons responsive during behavior after learning, while both transient and sustained cells showed an increase in modulation due to task engagement after learning. The identification of functional diversity within the excitatory population, with distinct changes during learning and task engagement, provides insight into the cortical pathways that allow context-dependent neural representations.
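A minimal sketch of how trial-averaged calcium traces might be sorted into transient, sustained, and suppressed groups. The window sizes, thresholds, and toy trace are assumptions for illustration, not the authors' classification criteria.

```python
import numpy as np

def classify_response(trace, baseline_frames=15, early_frames=10):
    """Classify a trial-averaged dF/F trace as transient, sustained, or suppressed.

    trace          : 1-D array, baseline period followed by the stimulus period
    baseline_frames: number of pre-stimulus frames used as baseline
    early_frames   : frames right after stimulus onset defining the "early" window
    """
    baseline = trace[:baseline_frames].mean()
    stim = trace[baseline_frames:] - baseline
    early = stim[:early_frames].mean()
    late = stim[early_frames:].mean()

    if early < 0 and late < 0:
        return "suppressed"   # activity drops below baseline during the stimulus
    if late > 0.5 * early:
        return "sustained"    # response persists through the stimulus period
    return "transient"        # response decays quickly after stimulus onset

# Toy example: a trace that peaks right after stimulus onset and then decays.
t = np.arange(45)
toy_trace = np.concatenate([np.zeros(15), np.exp(-t / 5.0)])
print(classify_response(toy_trace))  # -> "transient"
```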



2016
Author(s): Vy A. Vo, Thomas C. Sprague, John T. Serences

Abstract: Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex.

Significance Statement: While changes in the gain and size of RFs have dominated our view of how attention modulates information codes of visual space, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in the precision of representations based on larger populations of voxels. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information in sensory cortex. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings.
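A toy simulation of the logic in step (3): model voxel receptive fields as one-dimensional Gaussians, apply either a gain boost or a position shift toward the attended location, and recompute how well the population separates two nearby positions near the target. The RF parameters, modulation profiles, and readout below are illustrative assumptions rather than the fMRI analysis used in the paper.

```python
import numpy as np

def population_response(stim_pos, centers, sigma, gain):
    """Noiseless population response of Gaussian voxel RFs to a point stimulus."""
    return gain * np.exp(-(stim_pos - centers) ** 2 / (2 * sigma ** 2))

def discriminability(x1, x2, centers, sigma, gain, noise_sd=0.1):
    """Population d': separation of the two response patterns relative to noise."""
    r1 = population_response(x1, centers, sigma, gain)
    r2 = population_response(x2, centers, sigma, gain)
    return np.linalg.norm(r1 - r2) / noise_sd

# Voxel RFs tiling -10..10 deg of visual space; attention directed to 0 deg.
centers = np.linspace(-10, 10, 81)
sigma, attended = 2.0, 0.0

baseline = discriminability(-0.5, 0.5, centers, sigma, gain=1.0)

# Gain model: multiplicative boost that falls off with distance from the target.
gain_mod = 1.0 + 0.3 * np.exp(-(centers - attended) ** 2 / (2 * 4.0 ** 2))
with_gain = discriminability(-0.5, 0.5, centers, sigma, gain=gain_mod)

# Shift model: RF centers attracted toward the attended location.
shifted_centers = centers + 0.3 * (attended - centers) * np.exp(
    -(centers - attended) ** 2 / (2 * 4.0 ** 2)
)
with_shift = discriminability(-0.5, 0.5, shifted_centers, sigma, gain=1.0)

print(f"baseline d'   : {baseline:.2f}")
print(f"gain model d' : {with_gain:.2f}")
print(f"shift model d': {with_shift:.2f}")
```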



2021, Vol 10 (7), pp. 432
Author(s): Nicolai Moos, Carsten Juergens, Andreas P. Redecker

This paper describes a methodological approach that can analyse socio-demographic and socio-economic data in large-scale spatial detail. Based on two variables, population density and annual income, we investigate their spatial relationship to identify locations of imbalance or disparity with the aid of bivariate choropleth maps. The aim is to gain deeper insight into the spatial components of socioeconomic relationships, such as that between the two variables, especially for high-resolution spatial units. The methodology can assist political decision-making, target-group advertising in the field of geo-marketing, site searches for new shop locations, as well as further socioeconomic research and urban planning. The developed methodology was tested in a national case study in Germany and is easily transferable to other countries with comparable datasets. The analysis was carried out using data on population density and average annual income linked to spatially referenced polygons of postal codes. These were first disaggregated via a readapted three-class dasymetric mapping approach and allocated to large-scale city block polygons. Univariate and bivariate choropleth maps generated from the resulting datasets were then used to identify and compare spatial economic disparities for a study area in North Rhine-Westphalia (NRW), Germany. Subsequently, based on these variables, a multivariate clustering approach was conducted for a demonstration area in Dortmund. The results show that the spatially disaggregated data allow more detailed insight into spatial patterns of socioeconomic attributes than the coarser data related to postal code polygons.
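The disaggregation step can be pictured with a much simpler areal-weighting variant of the three-class dasymetric approach used in the paper: intersect postal-code polygons with city-block polygons and allocate each postal code's population in proportion to shared area. File names and column names below are hypothetical.

```python
import geopandas as gpd

# Hypothetical inputs: postal-code polygons with a population attribute, and
# city-block polygons with a unique block identifier.
postcodes = gpd.read_file("postcodes.gpkg")   # columns: plz, population, geometry
blocks = gpd.read_file("city_blocks.gpkg")    # columns: block_id, geometry

# Area of each postal-code polygon (assumes a projected CRS in metres).
postcodes["pc_area"] = postcodes.geometry.area

# Intersect the two layers and allocate population by the share of area that
# each block contributes to its postal code (simple areal weighting).
pieces = gpd.overlay(blocks, postcodes, how="intersection")
pieces["weight"] = pieces.geometry.area / pieces["pc_area"]
pieces["population_est"] = pieces["population"] * pieces["weight"]

# Aggregate back to blocks: estimated population per city block.
block_pop = pieces.groupby("block_id", as_index=False)["population_est"].sum()
blocks = blocks.merge(block_pop, on="block_id", how="left").fillna({"population_est": 0})
print(blocks[["block_id", "population_est"]].head())
```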



2007, Vol 98 (4), pp. 2089-2098
Author(s): Sean P. MacEvoy, Russell A. Epstein

Complex visual scenes preferentially activate several areas of the human brain, including the parahippocampal place area (PPA), the retrosplenial complex (RSC), and the transverse occipital sulcus (TOS). The sensitivity of neurons in these regions to the retinal position of stimuli is unknown, but could provide insight into their roles in scene perception and navigation. To address this issue, we used functional magnetic resonance imaging (fMRI) to measure neural responses evoked by sequences of scenes and objects confined to either the left or right visual hemifields. We also measured the level of adaptation produced when stimuli were either presented first in one hemifield and then repeated in the opposite hemifield or repeated in the same hemifield. Although overall responses in the PPA, RSC, and TOS tended to be higher for contralateral stimuli than for ipsilateral stimuli, all three regions exhibited position-invariant adaptation, insofar as the magnitude of adaptation did not depend on whether stimuli were repeated in the same or opposite hemifields. In contrast, object-selective regions showed significantly greater adaptation when objects were repeated in the same hemifield. These results suggest that neuronal receptive fields (RFs) in scene-selective regions span the vertical meridian, whereas RFs in object-selective regions do not. The PPA, RSC, and TOS may support scene perception and navigation by maintaining stable representations of large-scale features of the visual environment that are insensitive to the shifts in retinal stimulation that occur frequently during natural vision.
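One way to picture the adaptation comparison: contrast responses to novel stimuli with responses to stimuli repeated in the same versus the opposite hemifield, and treat full transfer of adaptation across hemifields as evidence of position invariance. The numbers below are illustrative, not the reported data.

```python
import numpy as np

def adaptation_index(novel, repeated):
    """Fractional response reduction for repeated relative to novel stimuli."""
    return (novel - repeated) / novel

# Illustrative mean BOLD responses (arbitrary units) for one scene-selective ROI.
novel, same_hemifield, opposite_hemifield = 1.00, 0.70, 0.72

same = adaptation_index(novel, same_hemifield)
opposite = adaptation_index(novel, opposite_hemifield)

# A position-invariance score near 1 means adaptation transfers fully across
# hemifields (the pattern the abstract reports for PPA, RSC, and TOS);
# a score near 0 means adaptation is confined to the stimulated hemifield.
invariance = opposite / same
print(f"same-hemifield adaptation:     {same:.2f}")
print(f"opposite-hemifield adaptation: {opposite:.2f}")
print(f"position-invariance score:     {invariance:.2f}")
```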



2020
Author(s): Haider Al-Tahan, Yalda Mohsenzadeh

Abstract: While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared the representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a later dynamic of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by providing new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.

Author summary: It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g., edges and corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical contents, e.g., faces and objects). The feedback connections, in turn, modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas. In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with the human brain's temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder resembles the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
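The model-to-brain comparison rests on representational similarity analysis, which can be sketched as follows: build a representational dissimilarity matrix (RDM) for a model layer and for the brain responses over the same stimuli, then rank-correlate their condensed forms. Array shapes and the random data are placeholders for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Placeholder activation/response patterns: n_stimuli x n_features.
n_stimuli = 92
model_layer = rng.normal(size=(n_stimuli, 512))     # e.g. one autoencoder layer
brain_patterns = rng.normal(size=(n_stimuli, 200))  # e.g. fMRI voxels, or MEG sensors at one time point

# Representational dissimilarity matrices, stored as condensed upper triangles:
# 1 - Pearson correlation between the patterns evoked by every stimulus pair.
model_rdm = pdist(model_layer, metric="correlation")
brain_rdm = pdist(brain_patterns, metric="correlation")

# Representational similarity between model and brain: rank correlation of RDMs.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain representational similarity (Spearman rho): {rho:.3f}")
```

Repeating this comparison for each model layer against each MEG time point or fMRI region yields the kind of spatiotemporal correspondence the abstract describes.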



eLife, 2018, Vol 7
Author(s): Wankun L Li, Monica W Chu, An Wu, Yusuke Suzuki, Itaru Imayoshi, ...

The rodent olfactory bulb incorporates thousands of newly generated inhibitory neurons daily throughout adulthood, but the role of adult neurogenesis in olfactory processing is not fully understood. Here we adopted a genetic method to inducibly suppress adult neurogenesis and investigated its effect on behavior and bulbar activity. Mice without young adult-born neurons (ABNs) showed normal ability in discriminating very different odorants but were impaired in fine discrimination. Furthermore, two-photon calcium imaging of mitral cells (MCs) revealed that the ensemble odor representations of similar odorants were more ambiguous in the ablation animals. This increased ambiguity was primarily due to a decrease in MC suppressive responses. Intriguingly, these deficits in MC encoding were only observed during task engagement but not passive exposure. Our results indicate that young olfactory ABNs are essential for the enhancement of MC pattern separation in a task engagement-dependent manner, potentially functioning as a gateway for top-down modulation.
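A minimal sketch of how ensemble ambiguity between two similar odorants might be quantified: correlate the trial-averaged mitral-cell population vectors, with higher correlation meaning more ambiguous (less separated) representations. The simulated responses and the way the "ablation" case is mimicked are illustrative assumptions, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def ensemble_ambiguity(resp_a, resp_b):
    """Pearson correlation between trial-averaged population response vectors."""
    return np.corrcoef(resp_a.mean(axis=0), resp_b.mean(axis=0))[0, 1]

n_trials, n_cells = 20, 150
shared = rng.normal(size=n_cells)   # response component common to both odorants
spec_a = rng.normal(size=n_cells)   # odorant-specific components
spec_b = rng.normal(size=n_cells)

def simulate(spec_scale):
    """Simulate trial-by-cell responses with a given odorant-specific amplitude."""
    a = shared + spec_scale * spec_a + rng.normal(0, 0.2, (n_trials, n_cells))
    b = shared + spec_scale * spec_b + rng.normal(0, 0.2, (n_trials, n_cells))
    return ensemble_ambiguity(a, b)

# Control-like case: strong odorant-specific responses, well-separated ensembles.
# Ablation-like case: weaker odorant-specific responses, more ambiguous ensembles.
print(f"control ambiguity : {simulate(1.0):.2f}")
print(f"ablation ambiguity: {simulate(0.3):.2f}")
```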



Author(s): Xueting Long, Jieyu Wu, Sirui Yang, Ziqi Deng, Yusen Zheng, ...

Two positional isomers (regioisomers) through changing the substituted position of perylenetetracarboxylic diimide and benzanthrone moieties were designed and synthesized. These two regioisomers exhibit totally different aggregation behaviors. The meta (bay)-substituted...



1995, Vol 74 (3), pp. 1083-1094
Author(s): V. J. Brown, R. Desimone, M. Mishkin

1. The tail of the caudate nucleus and adjacent ventral putamen (ventrocaudal neostriatum) are major projection sites of the extrastriate visual cortex. Visual information is then relayed, directly or indirectly, to a variety of structures with motor functions. To test for a role of the ventrocaudal neostriatum in stimulus-response association learning, or habit formation, neuronal responses were recorded while monkeys performed a visual discrimination task. Additional data were collected from cells in cortical area TF, which serve as a comparison and control for the caudate data.

2. Two monkeys were trained to perform an asymmetrically reinforced go-no go visual discrimination. The stimuli were complex colored patterns, randomly assigned to be either positive or negative. The monkey was rewarded with juice for releasing a bar when a positive stimulus was presented, whereas a negative stimulus signaled that no reward was available and that the monkey should withhold its response. Neuronal responses were recorded both while the monkey performed the task with previously learned stimuli and while it learned the task with new stimuli. In some cases, responses were recorded during reversal learning.

3. There was no evidence that cells in the ventrocaudal neostriatum were influenced by the reward contingencies of the task. Cells did not fire preferentially to the onset of either positive or negative stimuli; neither did cells fire in response to the reward itself or in association with the motor response of the monkey. Only visual responses were apparent.

4. The visual properties of cells in these structures resembled those of cells in some of the cortical areas projecting to them. Most cells responded selectively to different visual stimuli. The degree of stimulus selectivity was assessed with discriminant analysis and was found to be quantitatively similar to that of inferior temporal cells tested with similar stimuli. Like inferior temporal cells, many cells in the ventrocaudal neostriatum had large, bilateral receptive fields. Some cells had "doughnut"-shaped receptive fields, with stronger responses in the periphery of both visual fields than at the fovea, similar to the fields of some cells in the superior temporal polysensory area. Although the absence of task-specific responses argues that ventrocaudal neostriatal cells are not themselves the mediators of visual learning in the task employed, their cortical-like visual properties suggest that they might relay visual information important for visuomotor plasticity in other structures. (ABSTRACT TRUNCATED AT 400 WORDS)
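The discriminant analysis mentioned in point 4 can be sketched with a cross-validated linear discriminant classifier applied to trial-by-trial spike counts; the simulated counts and classifier settings below are illustrative assumptions, not the original analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Simulated spike counts: trials x cells, with a different mean rate per stimulus.
n_stimuli, trials_per_stim, n_cells = 4, 30, 12
means = rng.uniform(5, 25, size=(n_stimuli, n_cells))
counts = np.vstack([rng.poisson(m, size=(trials_per_stim, n_cells)) for m in means])
labels = np.repeat(np.arange(n_stimuli), trials_per_stim)

# Cross-validated decoding accuracy as a simple measure of stimulus selectivity.
clf = LinearDiscriminantAnalysis()
accuracy = cross_val_score(clf, counts, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_stimuli:.2f})")
```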



2020, Vol 6 (8), pp. eaaz2322
Author(s): Andrew S. Alexander, Lucas C. Carstensen, James R. Hinman, Florian Raudies, G. William Chapman, ...

The retrosplenial cortex is reciprocally connected with multiple structures implicated in spatial cognition, and damage to the region itself produces numerous spatial impairments. Here, we sought to characterize spatial correlates of neurons within the region during free exploration in two-dimensional environments. We report that a large percentage of retrosplenial cortex neurons have spatial receptive fields that are active when environmental boundaries are positioned at a specific orientation and distance relative to the animal itself. We demonstrate that this vector-based location signal is encoded in egocentric coordinates, is localized to the dysgranular retrosplenial subregion, is independent of self-motion, and is context invariant. Further, we identify a subpopulation of neurons with this response property that are synchronized with the hippocampal theta oscillation. Accordingly, the current work identifies a robust egocentric spatial code in retrosplenial cortex that can facilitate spatial coordinate system transformations and support the anchoring, generation, and utilization of allocentric representations.
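The egocentric boundary-vector code can be illustrated with a small helper that expresses a boundary point as a bearing and distance relative to the animal's current position and head direction; the coordinates and the square-arena example are assumptions for illustration. A cell of the kind described above would then be tuned to a particular preferred bearing and distance.

```python
import numpy as np

def egocentric_boundary_vector(animal_xy, head_dir, boundary_xy):
    """Bearing (radians, 0 = straight ahead, positive = left) and distance to a
    boundary point, expressed in the animal's egocentric reference frame."""
    dx, dy = np.asarray(boundary_xy) - np.asarray(animal_xy)
    allocentric_bearing = np.arctan2(dy, dx)
    egocentric_bearing = (allocentric_bearing - head_dir + np.pi) % (2 * np.pi) - np.pi
    distance = np.hypot(dx, dy)
    return egocentric_bearing, distance

# Example: animal at the centre of a 1 m square arena, facing "east" (0 rad);
# the nearest point on the north wall lies 90 degrees to its left at 0.5 m.
bearing, dist = egocentric_boundary_vector((0.5, 0.5), 0.0, (0.5, 1.0))
print(f"bearing: {np.degrees(bearing):.0f} deg, distance: {dist:.2f} m")
```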


