Memory consolidation and improvement by synaptic tagging and capture in recurrent neural networks

2020 ◽  
Author(s):  
Jannik Luboeinski ◽  
Christian Tetzlaff

Abstract: The synaptic-tagging-and-capture (STC) hypothesis postulates that, at each synapse, the concurrence of a tag with protein synthesis yields the maintenance of changes induced by synaptic plasticity. This hypothesis provides a biological principle underlying the synaptic consolidation of memories that has not been verified for recurrent neural circuits. We developed a theoretical model integrating the mechanisms underlying the STC hypothesis with calcium-based synaptic plasticity in a recurrent spiking neural network. In the model, calcium-based synaptic plasticity yields the formation of strongly interconnected cell assemblies encoding memories, followed by consolidation through the STC mechanisms. Furthermore, we find that the STC mechanisms have a previously undiscovered effect on memories: with the passage of time, they modify the storage of memories such that after several hours memory recall is significantly improved. This kind of memory enhancement can provide a new principle for storing information in biological and artificial neural circuits.

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Jannik Luboeinski ◽  
Christian Tetzlaff

Abstract: The synaptic-tagging-and-capture (STC) hypothesis postulates that, at each synapse, the concurrence of a tag with protein synthesis yields the maintenance of changes induced by synaptic plasticity. This hypothesis provides a biological principle underlying the synaptic consolidation of memories that has not been verified for recurrent neural circuits. We developed a theoretical model integrating the mechanisms underlying the STC hypothesis with calcium-based synaptic plasticity in a recurrent spiking neural network. In the model, calcium-based synaptic plasticity yields the formation of strongly interconnected cell assemblies encoding memories, followed by consolidation through the STC mechanisms. Furthermore, we show for the first time that STC mechanisms modify the storage of memories such that after several hours memory recall is significantly improved. We identify two contributing processes: a merely time-dependent passive improvement, and an active improvement during recall. The described characteristics can provide a new principle for storing information in biological and artificial neural circuits.
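The core STC idea in this abstract, that a tagged synapse captures available proteins and converts a labile early-phase change into a stable late-phase one, can be illustrated with a minimal toy dynamic. This is a hedged sketch with illustrative constants (`tau`, the 1 s time step, the binary `tag`/`protein` signals), not the calcium-based model of the paper:

```python
# Toy sketch of early-to-late phase transfer in synaptic tagging and capture.
# All constants are illustrative assumptions, not the paper's parameters.

def stc_step(h, z, tag, protein, dt=1.0, tau=3600.0):
    """One Euler step: the early-phase weight h decays on a timescale of
    hours, and wherever a tag coincides with available proteins, the
    change is captured into the stable late-phase weight z."""
    transfer = (h / tau) * tag * protein * dt
    return h - transfer, z + transfer

h, z = 1.0, 0.0              # early-phase change just induced; no late phase yet
for _ in range(8 * 3600):    # simulate ~8 hours at 1-second resolution
    h, z = stc_step(h, z, tag=1.0, protein=1.0)

# After several hours the labile early-phase change has been consolidated
# into the late-phase weight; h + z is conserved in this toy model.
```

With `tag` or `protein` set to zero, no transfer occurs, which mirrors the hypothesis that the tag and protein synthesis must concur for consolidation.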


2021 ◽  
Vol 23 (6) ◽  
pp. 317-326
Author(s):  
E.A. Ryndin ◽  
◽  
N.V. Andreeva ◽  
V.V. Luchinin ◽  
K.S. Goncharov ◽  
...  

Design and development of artificial neural networks (ANNs) exploiting the architecture of the human brain have evolved rapidly. Artificial neural networks effectively solve a wide range of tasks common to artificial intelligence, including data classification and recognition, prediction, forecasting, and adaptive control of object behavior. The biologically inspired principles underlying ANN operation have certain advantages over the conventional von Neumann architecture, including unsupervised learning, architectural flexibility, adaptability to environmental change, and high performance at significantly reduced power consumption due to heavily parallel and asynchronous data processing. In this paper, we present the circuit design of the main functional blocks (neurons and synapses) intended for a hardware implementation of a perceptron-based feedforward spiking neural network. As the third generation of artificial neural networks, spiking neural networks process data using spikes, which are discrete events (or functions) that take place at points in time. Neurons in spiking neural networks emit precisely timed spikes and communicate with each other via spikes transmitted through synaptic connections, or synapses, with adaptable, scalable weights. One prospective approach to emulating synaptic behavior in hardware-implemented spiking neural networks is to use non-volatile memory devices with analog conduction modulation (memristive structures). Here we propose circuit designs for functional analogues of a memristive structure mimicking synaptic plasticity, together with pre- and postsynaptic neurons, which could be used to develop circuit designs for spiking neural network architectures with different training algorithms, including the spike-timing-dependent plasticity learning rule. Two different electronic synapse circuits were developed. The first is an analog synapse that uses a photoresistive optocoupler to provide the tunable conductivity needed to emulate synaptic plasticity. The second is a digital synapse, in which the synaptic weight is stored as a digital code and converted directly into conductivity (without a digital-to-analog converter or photoresistive optocoupler). Results of prototyping the developed circuits for electronic analogues of synapses and pre- and postsynaptic neurons, together with a study of their transient behavior, are presented. The developed approach could provide a basis for ASIC design of spiking neural networks based on CMOS (complementary metal oxide semiconductor) design technology.
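The spike-timing-dependent plasticity (STDP) learning rule mentioned above adjusts a synaptic weight according to the relative timing of pre- and postsynaptic spikes. A hedged software sketch of the standard pair-based rule follows; the amplitudes and time constant are common textbook values, not the parameters of the authors' circuits:

```python
import math

# Sketch of the pair-based STDP rule referenced in the text.
# a_plus, a_minus, and tau_ms are illustrative textbook values.

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one spike pair, dt_ms = t_post - t_pre."""
    if dt_ms > 0:    # pre before post: causal pairing, potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:            # post before pre: anti-causal pairing, depression
        return -a_minus * math.exp(dt_ms / tau_ms)

dw_causal = stdp_dw(10.0)       # positive: synapse strengthened
dw_anticausal = stdp_dw(-10.0)  # negative: synapse weakened
```

In the hardware analogues described above, this weight change would correspond to a conductance update of the memristive (or optocoupler-based) element rather than a software variable.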


2017 ◽  
Vol 2017 ◽  
pp. 1-15 ◽  
Author(s):  
Miwako Yamasaki ◽  
Tomonori Takeuchi

Most everyday memories, including many episodic-like memories that we may form automatically in the hippocampus (HPC), are forgotten, while some are retained for a long time by a memory stabilization process called initial memory consolidation. Specifically, the retention of everyday memory is enhanced, in humans and animals, when something novel happens shortly before or after the time of encoding. Converging evidence has indicated that dopamine (DA) signaling via D1/D5 receptors in HPC is required for the persistence of synaptic plasticity and memory, thereby playing an important role in novelty-associated memory enhancement. In this review paper, we aim to provide an overview of the key findings related to D1/D5 receptor-dependent persistence of synaptic plasticity and memory in HPC, especially focusing on the emerging evidence for a role of the locus coeruleus (LC) in DA-dependent memory consolidation. We then refer to candidate brain areas and circuits that might be responsible for detection and transmission of the environmental novelty signal, and to molecular and anatomical evidence for the LC-DA system. We also discuss molecular mechanisms that might mediate the environmental novelty-associated memory enhancement, including plasticity-related proteins that are involved in initial memory consolidation processes in HPC.


2017 ◽  
Author(s):  
Camilo J. Mininni ◽  
B. Silvano Zanutto

Abstract: Animals are proposed to learn the latent rules governing their environment in order to maximize their chances of survival. However, rules may change without notice, forcing animals to keep a memory of which one is currently at work. Rule switching can lead to situations in which the same stimulus/response pairing is positively and negatively rewarded in the long run, depending on variables that are not accessible to the animal. This fact raises questions about how neural systems are capable of reinforcement learning in environments where the reinforcement is inconsistent. Here we address this issue by asking which aspects of connectivity, neural excitability, and synaptic plasticity are key for a very general, stochastic spiking neural network model to solve a task in which rules change without being cued, taking the serial reversal task (SRT) as a paradigm. Contrary to what might be expected, we found strong limitations on the ability of biologically plausible networks to solve the SRT. In particular, we prove that no network of neurons can learn an SRT if a single neural population both integrates stimulus information and is responsible for choosing the behavioural response. This limitation is independent of the number of neurons, neuronal dynamics, or plasticity rules, and arises from the fact that plasticity is computed locally at each synapse and that synaptic changes and neuronal activity are mutually dependent processes. We propose and characterize a spiking neural network model that solves the SRT, which relies on separating the functions of stimulus integration and response selection. The model suggests that experimental efforts to understand neural function should focus on the characterization of neural circuits according to their connectivity, neural dynamics, and the degree of modulation of synaptic plasticity with reward.
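The serial reversal task paradigm described above can be captured by a small toy environment: the rewarded stimulus/response mapping flips, without any cue, once the agent reaches a performance criterion. The class below is a hedged sketch of that paradigm (the criterion value and reward magnitudes are our own illustrative choices, not the authors'):

```python
# Toy serial reversal task (SRT) environment.
# criterion and reward values are illustrative assumptions.

class SerialReversalTask:
    def __init__(self, criterion=8):
        self.rule = 0         # which response is currently rewarded
        self.streak = 0       # consecutive correct responses
        self.criterion = criterion

    def step(self, response):
        correct = (response == self.rule)
        self.streak = self.streak + 1 if correct else 0
        if self.streak >= self.criterion:   # uncued rule switch
            self.rule = 1 - self.rule
            self.streak = 0
        return 1.0 if correct else -1.0

env = SerialReversalTask()
rewards = [env.step(0) for _ in range(20)]  # agent always responds 0
```

A fixed response is rewarded only until the first reversal, after which the same stimulus/response pairing is punished, which is exactly the inconsistent-reinforcement situation the abstract highlights.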


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0244683
Author(s):  
Lei Guo ◽  
Enyu Kan ◽  
Youxi Wu ◽  
Huan Lv ◽  
Guizhi Xu

With the continuous improvement of automation and informatization, the electromagnetic environment has become increasingly complex, and traditional protection methods for electronic systems face serious challenges. Biological nervous systems have self-adaptive advantages under neural regulation, so it is worth exploring a new approach to electromagnetic protection that draws on this self-adaptive advantage. In this study, a scale-free spiking neural network (SFSNN) is constructed, in which the Izhikevich neuron model serves as a node and a synaptic plasticity model including excitatory and inhibitory synapses serves as an edge. Under white Gaussian noise, the noise suppression abilities of SFSNNs with a high average clustering coefficient (ACC) and SFSNNs with a low ACC are studied comparatively, and the noise suppression mechanism of the SFSNN is explored. The experimental results demonstrate the following. (1) The SFSNN has a certain degree of noise suppression ability, and SFSNNs with a high ACC suppress noise better than those with a low ACC. (2) The neural information processing of the SFSNN is the linkage effect of dynamic changes in neuron firing, synaptic weights, and topological characteristics. (3) Synaptic plasticity is the intrinsic factor underlying the noise suppression ability of the SFSNN.
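The Izhikevich model used as the network node above is a two-variable neuron that reproduces many firing patterns at low computational cost. A hedged sketch follows, using the standard regular-spiking parameter set (a, b, c, d) from Izhikevich's 2003 formulation; the paper's node parameters and inputs may differ:

```python
# Sketch of the Izhikevich neuron model (standard regular-spiking
# parameters; the drive current and step size are illustrative).

def izhikevich(I, T_ms=200.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate a single neuron under constant input I; return spike times (ms)."""
    v, u = -65.0, b * -65.0          # membrane potential and recovery variable
    spikes = []
    for step in range(int(T_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike threshold: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

driven = izhikevich(10.0)   # constant drive produces repetitive spiking
silent = izhikevich(0.0)    # without drive the neuron stays at rest
```

In the SFSNN, many such nodes would be coupled through plastic excitatory and inhibitory synapses over a scale-free graph; the single-neuron sketch only shows the node dynamics.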


2021 ◽  
Author(s):  
Jacopo Bono ◽  
Sara Zannone ◽  
Victor Pedrosa ◽  
Claudia Clopath

Abstract: We describe a framework in which a biologically plausible spiking neural network mimicking hippocampal layers learns a cognitive map known as the successor representation. We show analytically how, at the algorithmic level, the learning follows the TD(λ) algorithm, which emerges from the underlying spike-timing-dependent plasticity rule. We then analyze the implications of this framework, uncovering how behavioural activity and experience replays can play complementary roles in learning the representation of the environment, how relations over behavioural timescales can be learned with synaptic plasticity acting on the range of milliseconds, and how the learned representation can be flexibly encoded by allowing state-dependent delay discounting through neuromodulation and altered firing rates.
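The algorithmic-level computation named in this abstract, temporal-difference learning of the successor representation (SR), can be shown compactly. This hedged sketch uses TD(0) rather than the full TD(λ) the paper derives, and the 3-state ring environment and learning rate are our own toy choices:

```python
import numpy as np

# TD(0) learning of the successor representation M on a 3-state ring.
# Environment, learning rate, and discount are illustrative assumptions.

n, gamma, alpha = 3, 0.9, 0.1
M = np.eye(n)                        # SR matrix, initialised to identity
I = np.eye(n)

s = 0
for _ in range(5000):
    s_next = (s + 1) % n             # deterministic ring walk
    # TD(0) update: M[s] <- M[s] + alpha * (1_s + gamma * M[s'] - M[s])
    M[s] += alpha * (I[s] + gamma * M[s_next] - M[s])
    s = s_next

# The fixed point is M = (I - gamma * P)^(-1) for the ring transitions P.
P = np.roll(np.eye(n), 1, axis=1)    # P[i, (i+1) % n] = 1
M_exact = np.linalg.inv(I - gamma * P)
```

The learned matrix converges to the discounted expected future state occupancies, which is what makes the SR a useful cognitive map for flexible evaluation of goals.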

