Memory recall involves a transient break in excitatory-inhibitory balance

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Renée S Koolschijn ◽  
Anna Shpektor ◽  
William T Clarke ◽  
I Betina Ip ◽  
David Dupret ◽  
...  

The brain has a remarkable capacity to acquire and store memories that can later be selectively recalled. These processes are supported by the hippocampus, which is thought to index memory recall by reinstating information stored across distributed neocortical circuits. However, the mechanism that supports this interaction remains unclear. Here, in humans, we show that recall of a visual cue from a paired associate is accompanied by a transient increase in the ratio between glutamate and GABA in visual cortex. Moreover, these excitatory-inhibitory fluctuations are predicted by activity in the hippocampus. These data suggest that the hippocampus gates memory recall by indexing information stored across neocortical circuits using a disinhibitory mechanism.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Brittany C. Clawson ◽  
Emily J. Pickup ◽  
Amy Ensing ◽  
Laura Geneseo ◽  
James Shaver ◽  
...  

Learning-activated engram neurons play a critical role in memory recall. An untested hypothesis is that these same neurons play an instructive role in offline memory consolidation. Here we show that a visually cued fear memory is consolidated during post-conditioning sleep in mice. We then use TRAP (targeted recombination in active populations) to genetically label or optogenetically manipulate primary visual cortex (V1) neurons responsive to the visual cue. Following fear conditioning, mice respond to activation of this visual engram population in a manner similar to visual presentation of fear cues. Cue-responsive neurons are selectively reactivated in V1 during post-conditioning sleep. Mimicking visual engram reactivation optogenetically leads to increased representation of the visual cue in V1. Optogenetic inhibition of the engram population during post-conditioning sleep disrupts consolidation of fear memory. We conclude that selective sleep-associated reactivation of learning-activated sensory populations serves as a necessary instructive mechanism for memory consolidation.


2015 ◽  
Vol 113 (9) ◽  
pp. 3159-3171 ◽  
Author(s):  
Caroline D. B. Luft ◽  
Alan Meeson ◽  
Andrew E. Welchman ◽  
Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.
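
To make the decoding approach concrete, here is a minimal, self-contained sketch of orientation decoding from voxel patterns using simulated data; the classifier, variable names, and effect size are illustrative assumptions, not the authors' actual pipeline or dataset.

```python
# Illustrative sketch of orientation decoding from voxel patterns.
# All data are simulated; n_trials, n_voxels and the signal strength
# are placeholders, not the study's real parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200

# Simulated V1 voxel patterns: half "leftward" trials, half "rightward",
# with a weak orientation-dependent signal added to noise.
labels = np.repeat([0, 1], n_trials // 2)           # 0 = leftward, 1 = rightward
signal = rng.normal(size=n_voxels)                  # voxel-wise orientation preference
X = rng.normal(size=(n_trials, n_voxels)) + 0.3 * np.outer(labels - 0.5, signal)

# Cross-validated decoding accuracy; above-chance accuracy indicates that
# the voxel patterns carry information about the (predicted) orientation.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```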


2017 ◽  
Vol 372 (1715) ◽  
pp. 20160504 ◽  
Author(s):  
Megumi Kaneko ◽  
Michael P. Stryker

Mechanisms thought of as homeostatic must exist to maintain neuronal activity in the brain within the dynamic range in which neurons can signal. Several distinct mechanisms have been demonstrated experimentally. Here we discuss three mechanisms that act to restore activity levels in the primary visual cortex of mice after occlusion and restoration of vision in one eye, which together give rise to the phenomenon of ocular dominance plasticity. The existence of different mechanisms raises the issue of how they operate together to converge on the same set points of activity. This article is part of the themed issue ‘Integrating Hebbian and homeostatic plasticity’.


2020 ◽  
Author(s):  
Yaelan Jung ◽  
Dirk B. Walther

Natural scenes deliver rich sensory information about the world. Decades of research have shown that the scene-selective network in the visual cortex represents various aspects of scenes. It is, however, unknown how such complex scene information is processed beyond the visual cortex, for example in the prefrontal cortex. It is also unknown how task context impacts the process of scene perception, modulating which scene content is represented in the brain. In this study, we investigate these questions using scene images from four natural scene categories, which also depict two types of global scene properties: temperature (warm or cold) and sound level (noisy or quiet). Healthy human subjects of both sexes participated in the present fMRI study. Participants viewed scene images under two different task conditions: temperature judgment and sound-level judgment. We analyzed how different scene attributes (scene categories, temperature, and sound-level information) are represented across the brain under these task conditions. Our findings show that global scene properties are represented in the brain, especially in the prefrontal cortex, only when they are task-relevant. However, scene categories are represented in the brain, in both the parahippocampal place area and the prefrontal cortex, regardless of task context. These findings suggest that the prefrontal cortex selectively represents scene content according to task demands, but this task selectivity depends on the type of scene content; task modulates neural representations of global scene properties but not of scene categories.


2022 ◽  
Author(s):  
Andrea Kóbor ◽  
Karolina Janacsek ◽  
Petra Hermann ◽  
Zsofia Zavecz ◽  
Vera Varga ◽  
...  

Previous research has established that humans can extract statistical regularities of the environment to automatically predict upcoming events. However, it has remained unexplored how the brain encodes the distribution of statistical regularities when it continuously changes. To investigate this question, we devised an fMRI paradigm in which participants (N = 32) completed a visual four-choice reaction time (RT) task containing statistical regularities. Two types of blocks involving the same perceptual elements alternated with one another throughout the task: while the distribution of statistical regularities was predictable in one block type, it was unpredictable in the other. Participants were unaware of the presence of statistical regularities and of their changing distribution across the subsequent task blocks. Based on the RT results, although statistical regularities were processed similarly in both the predictable and unpredictable blocks, participants acquired less statistical knowledge in the unpredictable than in the predictable blocks. Whole-brain random-effects analyses showed increased activity in the early visual cortex and decreased activity in the precuneus for the predictable as compared with the unpredictable blocks. Therefore, the actual predictability of statistical regularities is likely to be represented already at early stages of visual cortical processing. However, the decreased precuneus activity suggests that these representations are imperfectly updated to track the multiple shifts in predictability throughout the task. The results also highlight that the processing of statistical regularities in a changing environment could be habitual.
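
As a rough illustration of what a "changing distribution of statistical regularities" can look like, the sketch below generates four-choice stimulus streams from biased transition matrices that stay fixed in a "predictable" block but are re-drawn for an "unpredictable" block; the matrices, probabilities, and block length are assumptions for demonstration, not the authors' paradigm.

```python
# Minimal sketch of a four-choice stimulus stream with changing
# statistical regularities. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_choices, block_len = 4, 80

def biased_transitions(rng):
    """Random transition matrix in which one successor per stimulus is frequent."""
    T = np.full((n_choices, n_choices), 0.1)
    for i in range(n_choices):
        T[i, rng.integers(n_choices)] = 0.7
    return T / T.sum(axis=1, keepdims=True)

def run_block(T, rng):
    """Generate one block of stimuli by sampling successors from the matrix."""
    seq = [rng.integers(n_choices)]
    for _ in range(block_len - 1):
        seq.append(rng.choice(n_choices, p=T[seq[-1]]))
    return seq

fixed_T = biased_transitions(rng)                  # regularities stay the same
predictable_block = run_block(fixed_T, rng)
unpredictable_block = run_block(biased_transitions(rng), rng)  # re-drawn per block
```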


Author(s):  
A. Jayanthiladevi ◽  
S. Murugan ◽  
K. Manivel

Today, images and image sequences (videos) make up about 80% of all corporate and public unstructured big data. As the volume of unstructured data grows, analytical systems must assimilate and interpret images and videos as well as they interpret structured data such as text and numbers. To the human eye, an image is a set of signals processed by the visual cortex in the brain, creating a vivid experience of a scene that is instantly associated with concepts and objects previously perceived and recorded in one's memory. To a computer, an image is either a raster image or a vector image. Simply put, a raster image is a grid of pixels with discrete numerical values for color; a vector image is a set of color-annotated polygons. To perform analytics on images or videos, this geometric encoding must be transformed into constructs depicting the physical features, objects, and movement represented by the image or video. This chapter explores text, image, and video analytics in fog computing.
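
A toy illustration of the two encodings described above; the structures and field names are hypothetical and stand in for no particular image format.

```python
# Toy contrast of raster vs. vector image encodings (illustrative only).
import numpy as np

# Raster image: a grid of pixels, each holding discrete numeric color values
# (here an 8-bit RGB array of height x width x 3).
raster = np.zeros((4, 4, 3), dtype=np.uint8)
raster[1:3, 1:3] = [255, 0, 0]          # a 2x2 red square of pixels

# Vector image: a set of color-annotated polygons defined by their vertices;
# the shape is stored as geometry rather than as individual pixels.
vector = [
    {"polygon": [(1, 1), (3, 1), (3, 3), (1, 3)], "color": (255, 0, 0)},
]

# Analytics systems typically transform such encodings into features
# (edges, objects, motion) before interpretation.
print(raster.shape, len(vector))
```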


2016 ◽  
Vol 23 (5) ◽  
pp. 529-541 ◽  
Author(s):  
Sara Ajina ◽  
Holly Bridge

Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.


Author(s):  
Norman Yujen Teng

Tye argues that visual mental images have their contents encoded in topographically organized regions of the visual cortex, which support depictive representations; therefore, visual mental images rely at least in part on depictive representations. This argument, I contend, does not support its conclusion. I propose that we divide the problem about the depictive nature of mental imagery into two parts: one concerns the format of image representation, the other the conditions by virtue of which a representation becomes depictive. Regarding the first part, I argue that a topographic format exists in the brain, but that this does not imply a depictive format of image representation. My answer to the second part is that one needs a content analysis of a certain sort of topographic representation to make sense of depictive mental representations, and that a topographic representation becomes a depictive representation by virtue of its content rather than its form.

