Orbitofrontal control of visual cortex gain promotes visual associative learning

2019
Author(s): Dechen Liu, Juan Deng, Zhewei Zhang, Zhi-Yu Zhang, Yan-Gang Sun, et al.

Abstract: The orbitofrontal cortex (OFC) encodes expected outcomes and plays a critical role in flexible, outcome-guided behavior. The OFC projects to primary visual cortex (V1), yet the function of this top-down projection is unclear. We find that optogenetic activation of the OFC projection to V1 reduces the amplitude of V1 visual responses via the recruitment of local somatostatin-expressing (SST) interneurons. Using mice performing a Go/No-Go visual task, we show that the OFC projection to V1 mediates the outcome-expectancy modulation of V1 responses to the reward-irrelevant No-Go stimulus. Furthermore, V1-projecting OFC neurons reduce firing during expectation of reward. In addition, chronic optogenetic inactivation of the OFC projection to V1 impairs, whereas chronic activation of SST interneurons in V1 improves, learning of the Go/No-Go visual task, without affecting immediate task performance. Thus, the OFC top-down projection to V1 is crucial for driving visual associative learning by modulating the response gain of V1 neurons to non-relevant stimuli.

2021, Vol 12 (1)
Author(s): Brittany C. Clawson, Emily J. Pickup, Amy Ensing, Laura Geneseo, James Shaver, et al.

Abstract: Learning-activated engram neurons play a critical role in memory recall. An untested hypothesis is that these same neurons play an instructive role in offline memory consolidation. Here we show that a visually-cued fear memory is consolidated during post-conditioning sleep in mice. We then use TRAP (targeted recombination in active populations) to genetically label or optogenetically manipulate primary visual cortex (V1) neurons responsive to the visual cue. Following fear conditioning, mice respond to activation of this visual engram population in a manner similar to visual presentation of fear cues. Cue-responsive neurons are selectively reactivated in V1 during post-conditioning sleep. Mimicking visual engram reactivation optogenetically leads to increased representation of the visual cue in V1. Optogenetic inhibition of the engram population during post-conditioning sleep disrupts consolidation of fear memory. We conclude that selective sleep-associated reactivation of learning-activated sensory populations serves as a necessary instructive mechanism for memory consolidation.


2019, Vol 121 (6), pp. 2202-2214
Author(s): John P. McClure, Pierre-Olivier Polack

Multimodal sensory integration facilitates the generation of a unified and coherent perception of the environment. It is now well established that unimodal sensory perceptions, such as vision, are improved in multisensory contexts. Whereas multimodal integration is primarily performed by dedicated multisensory brain regions such as the association cortices or the superior colliculus, recent studies have shown that multisensory interactions also occur in primary sensory cortices. In particular, sounds were shown to modulate the responses of neurons located in layers 2/3 (L2/3) of the mouse primary visual cortex (V1). Yet, the net effect of sound modulation at the V1 population level remained unclear. In the present study, we performed two-photon calcium imaging in awake mice to compare the representation of the orientation and direction of drifting gratings by V1 L2/3 neurons in unimodal (visual only) and multimodal (audiovisual) conditions. We found that sound modulation depended on the tuning properties (orientation and direction selectivity) and response amplitudes of V1 L2/3 neurons. Sounds potentiated the responses of neurons that were highly tuned to the cue's orientation and direction but weakly active in the unimodal context, following the principle of inverse effectiveness of multimodal integration. Moreover, sound suppressed the responses of neurons untuned for the orientation and/or direction of the visual cue. Altogether, sound modulation improved the representation of the orientation and direction of the visual stimulus in V1 L2/3: visual stimuli presented with auditory stimuli recruited a neuronal population better tuned to the visual stimulus orientation and direction than visual stimuli presented alone.

NEW & NOTEWORTHY: The primary visual cortex (V1) receives direct inputs from the primary auditory cortex. Yet, the impact of sounds on visual processing in V1 remains debated. We show that the modulation of V1 visual responses by pure tones depends on the orientation selectivity, direction selectivity, and response amplitudes of V1 neurons. Hence, audiovisual stimuli recruit a population of V1 neurons better tuned to the orientation and direction of the visual stimulus than unimodal visual stimuli do.
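
The tuning metrics this result hinges on (orientation and direction selectivity) are conventionally computed from drifting-grating responses with vector-sum indices. Below is a minimal sketch of those standard definitions in Python; it is illustrative, not the authors' analysis code, and the function and variable names are assumptions:

```python
import numpy as np

def osi_dsi(responses, directions_deg):
    """Vector-sum orientation (OSI) and direction (DSI) selectivity indices.

    responses      -- one neuron's mean response to each grating direction
    directions_deg -- drift direction of each grating, in degrees
    Returns (osi, dsi), each in [0, 1]; 1 means perfectly selective.
    """
    r = np.clip(np.asarray(responses, float), 0, None)  # indices are defined on non-negative rates
    theta = np.deg2rad(np.asarray(directions_deg, float))
    osi = np.abs(np.sum(r * np.exp(2j * theta))) / np.sum(r)  # orientation repeats every 180 deg
    dsi = np.abs(np.sum(r * np.exp(1j * theta))) / np.sum(r)  # direction repeats every 360 deg
    return osi, dsi

# Example: a neuron responding most strongly to gratings drifting at 90 degrees
directions = np.arange(0, 360, 45)
rates = np.array([0.1, 0.3, 1.0, 0.3, 0.1, 0.2, 0.4, 0.2])
print(osi_dsi(rates, directions))
```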


2017
Author(s): Aman B. Saleem, E. Mika Diamanti, Julien Fournier, Kenneth D. Harris, Matteo Carandini

A major role of vision is to guide navigation, and navigation is strongly driven by vision [1-4]. Indeed, the brain's visual and navigational systems are known to interact [5, 6], and signals related to position in the environment have been suggested to appear as early as in visual cortex [6, 7]. To establish the nature of these signals we recorded in primary visual cortex (V1) and in the CA1 region of the hippocampus while mice traversed a corridor in virtual reality. The corridor contained identical visual landmarks in two positions, so that a purely visual neuron would respond similarly in those positions. Most V1 neurons, however, responded solely or more strongly to the landmarks in one position. This modulation of visual responses by spatial location was not explained by factors such as running speed. To assess whether the modulation is related to navigational signals and to the animal's subjective estimate of position, we trained the mice to lick for a water reward upon reaching a reward zone in the corridor. Neuronal populations in both CA1 and V1 encoded the animal's position along the corridor, and the errors in their representations were correlated. Moreover, both representations reflected the animal's subjective estimate of position, inferred from the animal's licks, better than its actual position. Indeed, when animals licked in a given location – whether correct or incorrect – neural populations in both V1 and CA1 placed the animal in the reward zone. We conclude that visual responses in V1 are tightly controlled by navigational signals, which are coherent with those encoded in hippocampus, and reflect the animal's subjective position in the environment. The presence of such navigational signals as early as in a primary sensory area suggests that these signals permeate sensory processing in the cortex.
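
The claim that V1 and CA1 populations "encoded the animal's position" implies a population decoder whose errors can be compared across areas. The abstract does not name the decoder, so as one common choice here is a minimal sketch of a maximum-likelihood position decoder under independent Poisson firing; all names and the decoder choice are assumptions, not the authors' pipeline:

```python
import numpy as np

def tuning_curves(rates, positions, edges):
    """Mean activity of each neuron in each position bin along the corridor.
    Assumes every bin is visited at least once in the training data."""
    idx = np.digitize(positions, edges) - 1              # bin index per time sample
    curves = np.stack([rates[idx == b].mean(axis=0)      # shape: (n_bins, n_cells)
                       for b in range(len(edges) - 1)])
    return curves + 1e-6                                 # guard against log(0)

def decode_position(counts, curves):
    """Most likely position bin for one population vector of spike counts,
    assuming each neuron fires independently with Poisson statistics."""
    log_like = counts @ np.log(curves).T - curves.sum(axis=1)
    return int(np.argmax(log_like))
```

Decoding error is then the distance between the decoded and true bins; correlating those errors between simultaneously decoded V1 and CA1 populations is one way to test the coherence described above.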


2018
Author(s): Jumpei Ukita, Takashi Yoshida, Kenichi Ohki

Abstract: A comprehensive understanding of the stimulus-response properties of individual neurons is necessary to crack the neural code of sensory cortices. However, a barrier to achieving this goal is the difficulty of analyzing the nonlinearity of neuronal responses. In computer vision, artificial neural networks, especially convolutional neural networks (CNNs), have demonstrated state-of-the-art performance in image recognition by capturing the higher-order statistics of natural images. Here, we incorporated CNNs into encoding models of neurons in the visual cortex to develop a new method of nonlinear response characterization, in particular nonlinear estimation of receptive fields (RFs), without assumptions regarding the type of nonlinearity. Briefly, after training a CNN to predict the visual responses of neurons to natural images, we synthesized the RF image such that the image would predictively evoke a maximum response (the "maximization-of-activation" method). We first demonstrated proof of principle using a dataset of simulated cells with various types of nonlinearity, showing that the CNN could be used to estimate the nonlinear RFs of simulated cells. In particular, we could visualize various types of nonlinearity underlying the responses, such as shift-invariant RFs or rotation-invariant RFs. These results suggest that the method may be applicable to neurons with complex nonlinearities, such as rotation-invariant neurons in higher visual areas. Next, we applied the method to a dataset of neurons in the mouse primary visual cortex (V1) whose responses to natural images were recorded via two-photon Ca2+ imaging. We could visualize shift-invariant RFs with Gabor-like shapes for some V1 neurons. By quantifying the degree of shift-invariance, each V1 neuron was classified as either a shift-variant (simple) cell or a shift-invariant (complex-like) cell, and these two types of neurons were not clustered in cortical space. These results suggest that this novel CNN encoding model is useful for nonlinear response analyses of visual neurons and potentially of any sensory neurons.
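
The "maximization-of-activation" step is compact enough to sketch: once a CNN has been trained to predict a neuron's responses to natural images, the estimated nonlinear RF is the image found by gradient ascent on the predicted response. A minimal PyTorch illustration follows; the image size, optimizer, step count, and L2 penalty are assumptions rather than the paper's settings:

```python
import torch

def maximize_activation(model, image_shape=(1, 1, 64, 64),
                        n_steps=200, lr=0.05, l2=1e-3):
    """Synthesize the image that maximally drives one model output unit.
    `model` maps an image tensor to the predicted response of one neuron."""
    img = torch.zeros(image_shape, requires_grad=True)    # start from a gray image
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        response = model(img)                             # predicted neural response
        loss = -response.mean() + l2 * img.pow(2).mean()  # ascend response, bound contrast
        loss.backward()
        opt.step()
    return img.detach()                                   # the synthesized RF image
```

Restarting the ascent from different random initializations is a common sanity check that the synthesized RF is stable rather than an artifact of the starting point.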


1978, Vol 41 (1), pp. 55-64
Author(s): B. E. Stein

1. The effects of cortical cooling on the responses of cells to visual, somatic, and acoustic stimuli were studied in the cat superior colliculus (SC). When the visual cortex was cooled, the responses of many visual cells of the SC were depressed or eliminated, but the activity of nonvisual cells remained unchanged. This response depression was found in visual cells located in both superficial and deep laminae and was most pronounced in neurons which were binocular and directionally selective.

2. Cooling somatic and/or auditory cortex had no effect on visual SC cells and, with few exceptions, did not alter the activity of somatic or acoustic cells either.

3. The specificity of visual cortex influences on visual responding in the SC was most apparent in multimodal cells. In trimodal cells, the simultaneous cooling of visual, somatic, and auditory cortex eliminated responses to visual stimuli, but did not affect responses to somatic or acoustic stimuli. Visual responses were returned to the precooling level in both unimodal and multimodal cells by cortical rewarming.

4. The present experiments indicate that despite the organizational parallels among visual, somatic, and acoustic cells of the cat SC, the influences they receive from cortex are non-equivalent. Cortical influences appear to play a more critical role in the responses of visual cells than in the responses of somatic and acoustic cells. These observations raise questions about the functional significance of nonvisual corticotectal systems.


2020
Author(s): Aleena R. Garner, Georg B. Keller

Abstract: Learned associations between stimuli in different sensory modalities can shape the way we perceive these stimuli (McGurk and MacDonald, 1976). During audio-visual associative learning, auditory cortex is thought to underlie multi-modal plasticity in visual cortex (McIntosh et al., 1998; Mishra et al., 2007; Zangenehpour and Zatorre, 2010). However, it is not well understood how processing in visual cortex is altered by an auditory stimulus that is predictive of a visual stimulus, or what mechanisms mediate such experience-dependent, audio-visual associations in sensory cortex. Here we describe a neural mechanism by which an auditory input can shape visual representations of behaviorally relevant stimuli through direct interactions between auditory and visual cortices. We show that the association of an auditory stimulus with a visual stimulus in a behaviorally relevant context leads to an experience-dependent suppression of visual responses in primary visual cortex (V1). Auditory cortex axons carry a mixture of auditory and retinotopically matched visual input to V1, and optogenetic stimulation of these axons selectively suppresses V1 neurons responsive to the associated visual stimulus after, but not before, learning. Our results suggest that cross-modal associations can be stored in long-range cortical connections and that, with learning, these cross-modal connections function to suppress responses to predictable input.


2019
Author(s): Chris Robert Harrison Brown

Attention has long been characterised within prominent models as reflecting a competition between goal-driven and stimulus-driven processes. It remains unclear, however, how involuntary attentional capture by affective stimuli, such as threat-laden content, fits into such models. While such effects were traditionally held to reflect stimulus-driven processes, recent research has increasingly implicated a critical role for goal-driven processes. Here we test an alternative goal-driven account of involuntary attentional capture by threat, using an experimental manipulation of goal-driven attention. To this end we combined the classic 'contingent capture' and 'emotion-induced blink' (EIB) paradigms in an RSVP task with either positive or threatening target search goals. Across six experiments, positive and threat distractors were presented in peripheral, parafoveal, and central locations. Across all distractor locations, we found that involuntary attentional capture by irrelevant threatening distractors could be induced via the adoption of a search goal for a threatening category; adopting a goal for a positive category conversely led to capture only by positive stimuli. Our findings provide direct experimental evidence for a causal role of voluntary goals in involuntary capture by irrelevant threat stimuli, and hence demonstrate the plausibility of a top-down account of this phenomenon. We discuss the implications of these findings in relation to current cognitive models of attention and clinical disorders.


2021, Vol 12 (1)
Author(s): Domenica Veniero, Joachim Gross, Stephanie Morand, Felix Duecker, Alexander T. Sack, et al.

Abstract: Voluntary allocation of visual attention is controlled by top-down signals generated within the Frontal Eye Fields (FEFs) that can change the excitability of lower-level visual areas. However, the mechanism through which this control is achieved remains elusive. Here, we emulated the generation of an attentional signal using single-pulse transcranial magnetic stimulation to activate the FEFs and tracked its consequences over the visual cortex. First, we documented changes to brain oscillations using electroencephalography and found evidence for a phase reset over occipital sites at beta frequency. We then probed for perceptual consequences of this top-down triggered phase reset and assessed its anatomical specificity. We show that FEF activation leads to cyclic modulation of visual perception and of extrastriate, but not primary, visual cortex excitability, again at beta frequency. We conclude that top-down signals originating in the FEF causally shape visual cortex activity and perception through mechanisms of oscillatory realignment.
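
A phase reset of ongoing oscillations is commonly quantified with inter-trial phase coherence (ITC): if the TMS pulse resets the beta rhythm, phases align across trials and ITC rises toward 1. Here is a minimal sketch of that metric; the metric choice, names, and preprocessing are assumptions, not necessarily the authors' pipeline:

```python
import numpy as np
from scipy.signal import hilbert

def inter_trial_phase_coherence(trials):
    """ITC at each time point for one EEG channel.

    trials -- array (n_trials, n_samples) of band-pass-filtered data
              (e.g., beta band), time-locked to the TMS pulse.
    Returns values in [0, 1]; near zero means random phases across
    trials, near one means phase alignment, i.e., a phase reset.
    """
    phase = np.angle(hilbert(trials, axis=1))          # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0)) # length of the mean phase vector
```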


2021, Vol 12 (1)
Author(s): Caitlin Siu, Justin Balsor, Sam Merlin, Frederick Federer, Alessandra Angelucci

Abstract: The mammalian sensory neocortex consists of hierarchically organized areas reciprocally connected via feedforward (FF) and feedback (FB) circuits. Several theories of hierarchical computation ascribe the bulk of the computational work of the cortex to looped FF-FB circuits between pairs of cortical areas. However, whether such corticocortical loops exist remains unclear. In higher mammals, individual FF-projection neurons send afferents almost exclusively to a single higher-level area. However, it is unclear whether FB-projection neurons show similar area-specificity, and whether they influence FF-projection neurons directly or indirectly. Using viral-mediated monosynaptic circuit tracing in macaque primary visual cortex (V1), we show that V1 neurons sending FF projections to area V2 receive monosynaptic FB inputs from V2, but not other V1-projecting areas. We also find monosynaptic FB-to-FB neuron contacts as a second motif of FB connectivity. Our results support the existence of FF-FB loops in primate cortex, and suggest that FB can rapidly and selectively influence the activity of incoming FF signals.

