STDP Forms Associations between Memory Traces in Networks of Spiking Neurons

2019 ◽  
Vol 30 (3) ◽  
pp. 952-968
Author(s):  
Christoph Pokorny ◽  
Matias J Ison ◽  
Arjun Rao ◽  
Robert Legenstein ◽  
Christos Papadimitriou ◽  
...  

Abstract Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity (STDP). The model depends critically on two parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these two parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these neural codes for associations, and dynamically switch between them during consolidation.
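The abstract does not spell out the data-constrained plasticity rule itself. As an illustration only, a standard pair-based STDP window, with placeholder parameter values that are not taken from the paper, can be sketched as follows, together with a toy demonstration of how repeated ordered firing of two assemblies creates a directed association between them:

```python
import numpy as np

def stdp_update(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window. dt = t_post - t_pre in ms.
    Pre-before-post spike pairs potentiate the synapse;
    post-before-pre pairs depress it."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

# Toy association: neurons of assembly A repeatedly fire 5 ms before
# neurons of assembly B, so A->B synapses strengthen while B->A synapses
# weaken, linking the two memory traces directionally.
dw_ab = sum(stdp_update(+5.0) for _ in range(100))
dw_ba = sum(stdp_update(-5.0) for _ in range(100))
```

The exponential window and 20 ms time constants are the textbook form of pair-based STDP; the paper's data-constrained rule may differ in shape and parameters.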



Author(s):  
Romain Brette

Abstract “Neural coding” is a popular metaphor in neuroscience, where objective properties of the world are communicated to the brain in the form of spikes. Here I argue that this metaphor is often inappropriate and misleading. First, when neurons are said to encode experimental parameters, the neural code depends on experimental details that are not carried by the coding variable (e.g., the spike count). Thus, the representational power of neural codes is much more limited than generally implied. Second, neural codes carry information only by reference to things with known meaning. In contrast, perceptual systems must build information from relations between sensory signals and actions, forming an internal model. Neural codes are inadequate for this purpose because they are unstructured and therefore unable to represent relations. Third, coding variables are observables tied to the temporality of experiments, whereas spikes are timed actions that mediate coupling in a distributed dynamical system. The coding metaphor tries to fit the dynamic, circular, and distributed causal structure of the brain into a linear chain of transformations between observables, but the two causal structures are incongruent. I conclude that the neural coding metaphor cannot provide a valid basis for theories of brain function, because it is incompatible with both the causal structure of the brain and the representational requirements of cognition.


2019 ◽  
Author(s):  
Guillaume Bellec ◽  
Franz Scherr ◽  
Anand Subramoney ◽  
Elias Hajek ◽  
Darjan Salaj ◽  
...  

Abstract Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. But despite extensive research, it has remained an open question how they can learn through synaptic plasticity to carry out complex network computations. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A new mathematical insight tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This new learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in novel energy-efficient spike-based hardware for AI.
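The core idea of e-prop, replacing backpropagation through time with purely local eligibility traces that are combined online with a top-down learning signal, can be sketched roughly as below. This toy uses a non-spiking tanh unit and the "symmetric" choice of learning-signal weights (the readout weights); the paper's spiking formulation with pseudo-derivatives and its random-broadcast variant are more involved, and all names and parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_rec = 8, 5
w_in = rng.normal(0.0, 0.5, (n_rec, n_in))   # plastic input weights
w_out = rng.normal(0.0, 0.5, n_rec)          # fixed readout weights
elig = np.zeros((n_rec, n_in))               # eligibility traces e_ji
lr, alpha = 0.01, 0.5                        # learning rate, trace decay

def eprop_step(x, target):
    """One e-prop-style update: each synapse maintains a purely local
    eligibility trace; a per-neuron learning signal L_j, available online,
    gates the trace into a weight change (no backprop through time)."""
    global w_in, elig
    h = np.tanh(w_in @ x)                                    # neuron activity
    y = float(w_out @ h)                                     # network readout
    elig = alpha * elig + (1 - h**2)[:, None] * x[None, :]   # local trace
    L = w_out * (y - target)                                 # learning signal
    w_in = w_in - lr * L[:, None] * elig                     # e-prop update
    return (y - target) ** 2

x = rng.random(n_in)
errors = [eprop_step(x, 1.0) for _ in range(200)]
```

Because the trace-times-signal product points along the gradient direction for this shallow case, the readout error shrinks over the 200 updates even though no gradients are propagated backwards through time.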


2020 ◽  
Author(s):  
Eric C. Wong

Abstract The brain is thought to represent information in the form of activity in distributed groups of neurons known as attractors, but it is not clear how attractors are formed or used in processing. We show here that in a randomly connected network of simulated spiking neurons, periodic stimulation of neurons with distributed phase offsets, along with standard spike-timing-dependent plasticity (STDP), efficiently creates distributed attractors. These attractors may have a consistent ordered firing pattern, or become disordered, depending on the conditions. We also show that when two such attractors are stimulated in sequence, the same STDP mechanism can create a directed association between them, forming the basis of an associative network. We find that for an STDP time constant of 20 ms, the dependence of the efficiency of attractor creation on the driving frequency has a broad peak centered around 8 Hz. Upon restimulation, the attractors self-oscillate, but with an oscillation frequency that is higher than the driving frequency, ranging from 10 to 100 Hz.
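A rough sketch of the phase-offset stimulation protocol described above may help make it concrete. Apart from the quoted 8 Hz drive and 20 ms STDP time constant, the population size and cycle count below are invented for the sketch:

```python
import numpy as np

f_drive = 8.0                       # Hz, center of the reported efficiency peak
period = 1.0 / f_drive              # 125 ms drive cycle
n_neurons, n_cycles = 10, 8
# Distributed phase offsets: each neuron is stimulated at a fixed point
# in the cycle, staggered evenly across the population.
phases = np.linspace(0.0, period, n_neurons, endpoint=False)

# Stimulation time of neuron i on cycle c is c * period + phases[i].
stim_times = np.array([c * period + phases for c in range(n_cycles)])

# Consecutive neurons fire 12.5 ms apart, well inside a 20 ms STDP window,
# so each drive cycle potentiates synapses along the stimulation order and
# can chain the neurons into an ordered attractor.
gap_ms = (phases[1] - phases[0]) * 1000.0
```

The point of the staggering is that every adjacent pair of neurons in the stimulation order falls within the STDP window on every cycle, so the same pair-based rule that forms the attractor can later link two attractors stimulated in sequence.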


2019 ◽  
Author(s):  
David Rotermund ◽  
Klaus R. Pawelzik

Abstract Artificial deep convolutional networks (DCNs) now beat even human performance in challenging tasks, and were recently shown to also predict real neuronal responses. Their relevance for understanding the neuronal networks in the brain, however, remains questionable. In contrast to the unidirectional architecture of DCNs, neurons in cortex are recurrently connected and exchange signals by short pulses, the action potentials. Furthermore, learning in the brain is based on local synaptic mechanisms, in stark contrast to the global optimization methods used in technical deep networks. What is missing is a similarly powerful approach with spiking neurons that employs local synaptic learning mechanisms for optimizing global network performance. Here, we present a framework consisting of mutually coupled local circuits of spiking neurons. The dynamics of the circuits is derived from first principles to optimally encode their respective inputs. From the same global objective function, a local learning rule is derived that corresponds to spike-timing-dependent plasticity of the excitatory inter-circuit synapses. For deep networks built from these circuits, self-organization is based on the ensemble of inputs, while for supervised learning the desired outputs are applied in parallel as additional inputs to output layers. Generality of the approach is shown with Boolean functions, and its functionality is demonstrated with an image classification task, where networks of spiking neurons approach the performance of their artificial cousins. Since the local circuits operate independently and in parallel, the novel framework not only meets a fundamental property of the brain but also allows for the construction of special hardware. We expect that this will in the future enable investigations of very large network architectures far beyond current DCNs, including large-scale models of cortex where areas consisting of many local circuits form a complex cyclic network.




2019 ◽  
Vol 42 ◽  
Author(s):  
Simon R. Schultz ◽  
Giuseppe P. Gava

Abstract Brains are information processing systems whose operational principles ultimately cannot be understood without recourse to information theory. We suggest that understanding how external signals are represented in the brain is a necessary step towards employing further engineering tools (such as control theory) to understand the information processing performed by brain circuits during behaviour.


2013 ◽  
Vol 9 (2) ◽  
pp. e1002897 ◽  
Author(s):  
Robert R. Kerr ◽  
Anthony N. Burkitt ◽  
Doreen A. Thomas ◽  
Matthieu Gilson ◽  
David B. Grayden

Author(s):  
Jiankun Chen ◽  
Xiaolan Qiu ◽  
Chuanzhao Han ◽  
Yirong Wu

Recent neuroscience research shows that neural information in the brain is not encoded by spatial information alone. Spiking neural networks (SNNs) based on pulse-frequency coding play an important role in processing brain signals, especially complex spatio-temporal information. In this paper, an unsupervised learning algorithm for bilayer feedforward spiking neural networks based on spike-timing-dependent plasticity (STDP) competition is proposed and applied to SAR image classification on MSTAR for the first time. The SNN learns autonomously from the input values without any labeled signals, and the overall classification accuracy on SAR targets reaches 80.8%. The experimental results show that the algorithm adopts synaptic neurons and a network structure with stronger biological plausibility and is able to classify targets in SAR images. Meanwhile, the feature-map extraction ability of the neurons is visualized through the generative property of the SNN, a promising step toward applying brain-like neural networks to SAR image interpretation.
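The "STDP competition" in the abstract suggests winner-take-all dynamics among output neurons during unsupervised learning. A heavily simplified, hypothetical sketch of that mechanism follows; it uses a rate-based stand-in for spikes, and the patterns, sizes, and update rule are invented for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 4
w = rng.uniform(0.0, 1.0, (n_out, n_in))    # plastic feedforward weights

def wta_stdp_step(x, w, lr=0.05):
    """One unsupervised step: winner-take-all competition among output
    neurons, then a simplified STDP-like update that potentiates the
    winner's synapses from active inputs and depresses those from silent
    inputs, pushing the winner's weights toward the input pattern."""
    winner = int(np.argmax(w @ x))           # output with largest input drive
    w[winner] += lr * (x - w[winner])        # potentiate active, depress silent
    np.clip(w, 0.0, 1.0, out=w)              # keep weights in a bounded range
    return winner

# Two synthetic input "signatures": competition lets different output
# neurons specialize on different patterns without any labels.
p1 = np.zeros(n_in); p1[:8] = 1.0
p2 = np.zeros(n_in); p2[8:] = 1.0
for _ in range(50):
    wta_stdp_step(p1, w)
    wta_stdp_step(p2, w)
```

After training, presenting each pattern activates a different specialized output neuron, which is the essence of label-free competitive feature learning; the paper's spiking implementation replaces the argmax with lateral inhibition and the rate update with spike-timed plasticity.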


Author(s):  
Preecha Yupapin ◽  
Amiri I. S. ◽  
Ali J. ◽  
Ponsuwancharoen N. ◽  
Youplao P.

The sequencing of the human brain can be configured by strong coupling fields originating at a pair of ionic substances (bio-cells) within the microtubules. From these, a dipole oscillation begins and is transported by a strong trapping force known as a tweezer. The tweezers are trapped polaritons, which are electrical charges carrying information. They are collected on the brain surface and transported via a liquid-core waveguide, which is a mixture of blood content and water. The oscillation frequency, called the Rabi frequency, is formed by the two-level atom system. Our aim is to manipulate the Rabi oscillation with an on-chip device, where the quantum outputs may help realize human brain function for humanoid robotic applications.

