Retrosplenial and postsubicular head direction cells compared during visual landmark discrimination

2017 ◽  
Vol 1 ◽  
pp. 239821281772185 ◽  
Author(s):  
Yave Roberto Lozano ◽  
Hector Page ◽  
Pierre-Yves Jacob ◽  
Eleonora Lomi ◽  
James Street ◽  
...  

Background: Visual landmarks are used by head direction (HD) cells to establish and help update the animal’s representation of head direction, for use in orientation and navigation. Two cortical regions that are connected to primary visual areas, the postsubiculum (PoS) and retrosplenial cortex (RSC), possess HD cells; we investigated whether they differ in how they process visual landmarks.

Methods: We compared PoS and RSC HD cell activity from tetrode-implanted rats exploring an arena in which correct HD orientation required discrimination of two opposing landmarks of high, moderate or low discriminability.

Results: RSC HD cells had higher firing rates than PoS HD cells, had slightly lower modulation by angular head velocity, and anticipated actual head direction by ~48 ms, indicating that RSC spiking leads PoS spiking. Otherwise we saw no differences in landmark processing: HD cells in both regions showed equal responsiveness to, and discrimination of, the cues, with cells in both regions having unipolar directional tuning curves and showing better discrimination of the highly discriminable cues. There was a small spatial component to the signal in some cells, consistent with their role in interacting with the place cell navigation system, and there was also slight modulation by running speed. Neither region showed theta modulation of HD cell spiking.

Conclusions: That the cells can immediately respond to subtle differences in spatial landmarks is consistent with rapid processing of visual snapshots or scenes; the similarity of PoS and RSC responses may reflect either similar computations performed on the visual inputs, or rapid sharing of information between these regions. More generally, this two-cue HD cell paradigm may be a useful method for testing rapid spontaneous visual discrimination capabilities in other experimental settings.
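The ~48 ms lead can be estimated with a time-shift analysis: shift each cell's spike train relative to the head-direction record and find the shift that sharpens the directional tuning curve most. A minimal sketch in Python with synthetic data and illustrative function names (the paper's exact procedure may differ):

```python
import numpy as np

def tuning_curve(spike_times, hd_times, hd_angles, shift, n_bins=60):
    """Directional tuning curve with spikes shifted by `shift` seconds.

    Looking up head direction at (spike time + shift) tests whether
    spikes predict future head direction (positive best shift = lead).
    """
    idx = np.searchsorted(hd_times, spike_times + shift)
    idx = np.clip(idx, 0, len(hd_times) - 1)
    spike_hd = hd_angles[idx]                          # HD at each shifted spike
    bins = np.linspace(0, 2 * np.pi, n_bins + 1)
    spike_counts, _ = np.histogram(spike_hd, bins)
    occupancy, _ = np.histogram(hd_angles, bins)
    dt = np.median(np.diff(hd_times))                  # HD sampling interval
    return spike_counts / np.maximum(occupancy * dt, 1e-9)  # firing rate, Hz

def anticipatory_interval(spike_times, hd_times, hd_angles,
                          shifts=np.arange(-0.2, 0.201, 0.004)):
    """Shift (in s) that maximises the tuning-curve peak; a positive value
    means spiking anticipates head direction, as reported here for RSC
    (~48 ms) relative to PoS."""
    peaks = [tuning_curve(spike_times, hd_times, hd_angles, s).max()
             for s in shifts]
    return float(shifts[int(np.argmax(peaks))])
```

With a constantly rotating head this measure is degenerate (a time shift is just a rotation of the curve), so in practice it relies on head turns in both directions, as in free exploration.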

2021 ◽  
Vol 17 (9) ◽  
pp. e1009434
Author(s):  
Yijia Yan ◽  
Neil Burgess ◽  
Andrej Bicanski

Environmental information is required to stabilize estimates of head direction (HD) based on angular path integration. However, it is unclear how this happens in real-world (visually complex) environments. We present a computational model of how visual feedback can stabilize HD information in environments that contain multiple cues of varying stability and directional specificity. We show how combinations of feature-specific visual inputs can generate a stable unimodal landmark bearing signal, even in the presence of multiple cues and ambiguous directional specificity. This signal is associated with the retrosplenial HD signal (inherited from thalamic HD cells) and conveys feedback to the subcortical HD circuitry. The model predicts neurons with a unimodal encoding of the egocentric orientation of the array of landmarks, rather than of any one particular landmark. The relationship between these abstract landmark bearing neurons and head direction cells is reminiscent of that between place cells and grid cells. Their unimodal encoding is formed from visual inputs via a modified version of Oja’s Subspace Algorithm. The rule allows the landmark bearing signal to disconnect from directionally unstable or ephemeral cues, to incorporate newly added stable cues, and to support orientation across many different environments (high memory capacity); it is also consistent with recent empirical findings on bidirectional HD firing in the retrosplenial cortex. Our account of visual feedback for HD stabilization provides a novel perspective on the neural mechanisms of spatial navigation in richer sensory environments, and makes experimentally testable predictions.
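As an illustration of the plasticity step, plain Oja-rule learning (a simplified stand-in for the modified Subspace Algorithm used in the model; the three-cue structure below is hypothetical) already shows how weights onto a bearing unit concentrate on stable cues and disconnect from an unstable one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three visual "cue" features feeding one landmark-bearing unit: cues 1 and 2
# are stable (correlated with the latent bearing drive), cue 3 is unstable
# (uncorrelated noise, e.g. an ephemeral or moved landmark).
lr, n_steps = 0.01, 5000
w = rng.normal(0.0, 0.1, 3)                  # feedforward weights
for _ in range(n_steps):
    bearing = rng.normal()                   # latent bearing-related drive
    x = np.array([bearing, bearing, rng.normal()])
    y = w @ x                                # postsynaptic activity
    w += lr * y * (x - y * w)                # Oja's rule (self-normalising Hebb)

# Weights concentrate on the stable cues; the unstable cue disconnects.
print(np.round(w, 2))
```

Oja's rule converges to the principal subspace of the input covariance, so the weight onto the uncorrelated cue decays toward zero while the weight vector stays roughly unit length.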


2016 ◽  
Author(s):  
Pierre-Yves Jacob ◽  
Giulio Casali ◽  
Laure Spieser ◽  
Hector Page ◽  
Dorothy Overington ◽  
...  

Spatial cognition is an important model system with which to investigate how sensory signals are transformed into cognitive representations. Head direction cells, found in several cortical and subcortical regions, fire when an animal faces a given direction; they express a global directional signal which is anchored by visual landmarks and underlies the “sense of direction”. We investigated the interface between visual and spatial cortical brain regions and report the discovery that a population of neurons in the dysgranular retrosplenial cortex, co-recorded with classic head direction cells in a rotationally symmetrical two-compartment environment, was dominated by a local, visually defined reference frame and could be decoupled from the main head direction signal. A second population showed rotationally symmetric activity within a single sub-compartment, suggestive of an acquired interaction with the head direction cells. These observations reveal an unexpected incoherence within the head direction system and suggest that dysgranular retrosplenial cortex may mediate between visual landmarks and the multimodal sense of direction. Importantly, this interface appears to support a bidirectional exchange of information, which could explain how landmarks can inform the direction sense while, at the same time, the direction sense is used to interpret landmarks.


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Andrej Bicanski ◽  
Neil Burgess

We present a model of how neural representations of egocentric spatial experiences in parietal cortex interface with viewpoint-independent representations in medial temporal areas, via retrosplenial cortex, to enable many key aspects of spatial cognition. This account shows how previously reported neural responses (place, head-direction and grid cells, allocentric boundary- and object-vector cells, gain-field neurons) can map onto higher cognitive function in a modular way, and predicts new cell types (egocentric and head-direction-modulated boundary- and object-vector cells). The model predicts how these neural populations should interact across multiple brain regions to support spatial memory, scene construction, novelty-detection, ‘trace cells’, and mental navigation. Simulated behavior and firing rate maps are compared to experimental data, for example showing how object-vector cells allow items to be remembered within a contextual representation based on environmental boundaries, and how grid cells could update the viewpoint in imagery during planning and short-cutting by driving sequential place cell activity.
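The transformation attributed to retrosplenial cortex in this account reduces, at its core, to rotating egocentric vectors by the current head direction; a schematic sketch of that coordinate change (a bare rotation, not the model's actual gain-field implementation):

```python
import numpy as np

def ego_to_allo(ego_vec, head_dir):
    """Rotate an egocentric (body-centred) 2-D vector into allocentric
    (world-centred) coordinates using the current head direction (radians).
    Gain-field neurons are proposed to implement this kind of transform."""
    c, s = np.cos(head_dir), np.sin(head_dir)
    rot = np.array([[c, -s],
                    [s,  c]])                # standard 2-D rotation matrix
    return rot @ ego_vec

# A boundary 2 m straight ahead while facing "east" (hd = 0) is 2 m east
# of the animal in world coordinates; the same egocentric percept while
# facing north (hd = pi/2) corresponds to a boundary 2 m to the north.
print(ego_to_allo(np.array([2.0, 0.0]), 0.0))
print(ego_to_allo(np.array([2.0, 0.0]), np.pi / 2))
```

Running the transform in the opposite direction (inverse rotation) supports imagery and scene construction: recalled allocentric locations are re-expressed relative to an imagined viewpoint.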


2021 ◽  
Author(s):  
Ningyu Zhang ◽  
Roddy M Grieves ◽  
Kate J Jeffery

A class of neurons showing bidirectional tuning in a two-compartment environment was recently discovered in dysgranular retrosplenial cortex (dRSC). We investigated here whether these neurons possess a more general environmental symmetry-encoding property, potentially useful in representing complex spatial structure. We report that the directional tuning of dRSC neurons reflected environment symmetry in onefold-, twofold- and fourfold-symmetric environments: this was the case not just globally, but also locally within each sub-compartment. Thus, these cells use environmental cues to organize multiple directional tuning curves, which may combine via interaction with classic head direction cells. A consequence is that both local and global environmental symmetry are simultaneously encoded even within local sub-compartments, which may be important for cognitive mapping of space beyond immediate perceptual reach.
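One generic way to quantify such n-fold directional symmetry (not necessarily the authors' exact measure) is to correlate a tuning curve with a copy of itself rotated by 360°/n:

```python
import numpy as np

def rotational_symmetry(tuning, n_fold):
    """Pearson correlation between a directional tuning curve and a copy
    rotated by 360/n_fold degrees; values near 1 indicate n-fold symmetry."""
    shift = len(tuning) // n_fold            # bins per 360/n_fold rotation
    rotated = np.roll(tuning, shift)
    return np.corrcoef(tuning, rotated)[0, 1]

# Example: a twofold-symmetric (bidirectional) tuning curve over 360 bins,
# with peaks 180 degrees apart.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
bidirectional = np.exp(np.cos(2 * theta))
print(rotational_symmetry(bidirectional, 2))   # high: twofold symmetric
print(rotational_symmetry(bidirectional, 4))   # low: not fourfold symmetric
```

Applying the same measure within single sub-compartments, as well as over the whole environment, distinguishes local from global symmetry encoding.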


2005 ◽  
Vol 565 (2) ◽  
pp. 579-591 ◽  
Author(s):  
Franco A. Taverna ◽  
John Georgiou ◽  
Robert J. McDonald ◽  
Nancy S. Hong ◽  
Alexander Kraev ◽  
...  

2001 ◽  
Vol 85 (1) ◽  
pp. 105-116 ◽  
Author(s):  
James J. Knierim ◽  
Bruce L. McNaughton

“Place” cells of the rat hippocampus are coupled to “head direction” cells of the thalamus and limbic cortex. Head direction cells are sensitive to head direction in the horizontal plane only, which leads to the question of whether place cells similarly encode locations in the horizontal plane only, ignoring the z axis, or whether they encode locations in three dimensions. This question was addressed by recording from ensembles of CA1 pyramidal cells while rats traversed a rectangular track that could be tilted and rotated to different three-dimensional orientations. Cells were analyzed to determine whether their firing was bound to the external, three-dimensional cues of the environment, to the two-dimensional rectangular surface, or to some combination of these cues. Tilting the track 45° generally provoked a partial remapping of the rectangular surface in that some cells maintained their place fields, whereas other cells either gained new place fields, lost existing fields, or changed their firing locations arbitrarily. When the tilted track was rotated relative to the distal landmarks, most place fields remapped, but a number of cells maintained the same place field relative to the x-y coordinate frame of the laboratory, ignoring the z axis. No more cells were bound to the local reference frame of the recording apparatus than would be predicted by chance. The partial remapping demonstrated that the place cell system was sensitive to the three-dimensional manipulations of the recording apparatus. Nonetheless, the results were not consistent with an explicit three-dimensional tuning of individual hippocampal neurons, nor were they consistent with a model in which different sets of cells are tightly coupled to different sets of environmental cues. The results are most consistent with the statement that hippocampal neurons can change their “tuning functions” in arbitrary ways when features of the sensory input or behavioral context are altered. Understanding the rules that govern the remapping phenomenon holds promise for deciphering the neural circuitry underlying hippocampal function.
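Remapping of this kind is commonly quantified by correlating a cell's firing-rate map across conditions; a generic sketch with synthetic maps (not the authors' specific pipeline):

```python
import numpy as np

def ratemap_correlation(map_a, map_b):
    """Pixel-wise Pearson correlation between two firing-rate maps.
    High values indicate a preserved place field; values near (or below)
    zero indicate remapping. Unvisited (NaN) bins are excluded."""
    a, b = map_a.ravel(), map_b.ravel()
    ok = ~(np.isnan(a) | np.isnan(b))
    return np.corrcoef(a[ok], b[ok])[0, 1]

# Synthetic 20x20 maps: a Gaussian place field that stays put versus one
# that moves to the opposite corner after the manipulation.
yy, xx = np.mgrid[0:20, 0:20]
field = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)
print(ratemap_correlation(field(5, 5), field(5, 5)))    # preserved field
print(ratemap_correlation(field(5, 5), field(15, 15)))  # remapped field
```

Computed per cell across a tilt or rotation, the distribution of such correlations separates cells that maintained their fields from those that remapped.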


2010 ◽  
Vol 104 (4) ◽  
pp. 2075-2081 ◽  
Author(s):  
Lars Strother ◽  
Adrian Aldcroft ◽  
Cheryl Lavell ◽  
Tutis Vilis

Functional MRI (fMRI) studies of the human object recognition system commonly identify object-selective cortical regions by comparing blood oxygen level–dependent (BOLD) responses to objects versus those to scrambled objects. Object selectivity distinguishes human lateral occipital cortex (LO) from earlier visual areas. Recent studies suggest that, in addition to being object selective, LO is retinotopically organized; LO represents both object and location information. Although LO responses to objects have been shown to depend on location, it is not known whether responses to scrambled objects vary similarly. This is important because it would suggest that the degree of object selectivity in LO does not vary with retinal stimulus position. We used a conventional functional localizer to identify human visual area LO by comparing BOLD responses to objects versus scrambled objects presented to either the upper (UVF) or lower (LVF) visual field. In agreement with recent findings, we found evidence of position-dependent responses to objects. However, we observed the same degree of position dependence for scrambled objects and thus object selectivity did not differ for UVF and LVF stimuli. We conclude that, in terms of BOLD response, LO discriminates objects from non-objects equally well in either visual field location, despite stronger responses to objects in the LVF.
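The logic of the conclusion can be made concrete with a standard selectivity index, (O − S)/(O + S): if object and scrambled responses scale by the same positional factor, the index is unchanged across visual-field locations. A small sketch with hypothetical percent-signal-change values (not data from the study):

```python
def selectivity_index(resp_objects, resp_scrambled):
    """(O - S) / (O + S): object selectivity of a BOLD response.
    A common positional gain applied to both O and S cancels out,
    leaving the index invariant across stimulus locations."""
    return (resp_objects - resp_scrambled) / (resp_objects + resp_scrambled)

# Hypothetical values: LVF responses are stronger overall, but both object
# and scrambled responses scale together, so selectivity is identical.
print(selectivity_index(1.2, 0.6))   # LVF
print(selectivity_index(0.8, 0.4))   # UVF: same index despite weaker responses
```

This is why equal position dependence for objects and scrambled objects implies that object selectivity itself does not vary with retinal position.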


Neuroscience ◽  
2003 ◽  
Vol 117 (4) ◽  
pp. 1025-1035 ◽  
Author(s):  
T Kobayashi ◽  
A.H Tran ◽  
H Nishijo ◽  
T Ono ◽  
G Matsumoto
