Biologically plausible learning in a deep recurrent spiking network

2019
Author(s):  
David Rotermund ◽  
Klaus R. Pawelzik

Abstract: Artificial deep convolutional networks (DCNs) now beat even human performance in challenging tasks, and DCNs were recently shown to also predict real neuronal responses. Their relevance for understanding the neuronal networks in the brain, however, remains questionable: in contrast to the unidirectional architecture of DCNs, neurons in cortex are recurrently connected and exchange signals by short pulses, the action potentials. Furthermore, learning in the brain is based on local synaptic mechanisms, in stark contrast to the global optimization methods used in technical deep networks. What is missing is a similarly powerful approach with spiking neurons that employs local synaptic learning mechanisms for optimizing global network performance. Here, we present a framework consisting of mutually coupled local circuits of spiking neurons. The dynamics of the circuits is derived from first principles to optimally encode their respective inputs. From the same global objective function, a local learning rule is derived that corresponds to spike-timing-dependent plasticity of the excitatory inter-circuit synapses. For deep networks built from these circuits, self-organization is based on the ensemble of inputs, while for supervised learning the desired outputs are applied in parallel as additional inputs to output layers. Generality of the approach is shown with Boolean functions, and its functionality is demonstrated with an image classification task in which networks of spiking neurons approach the performance of their artificial cousins. Since the local circuits operate independently and in parallel, the novel framework not only meets a fundamental property of the brain but also allows for the construction of special hardware. We expect that this will in future enable investigations of very large network architectures far beyond current DCNs, including large-scale models of cortex in which areas consisting of many local circuits form a complex cyclic network.
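
The learning rule itself is derived in the paper from a global objective function; as a generic illustration of the kind of spike-timing-dependent plasticity the abstract refers to, a minimal pairwise STDP update might look like the following sketch (all names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress when it follows. Times in ms."""
    dt = t_post - t_pre
    if dt > 0:                        # pre before post -> potentiation
        w += a_plus * np.exp(-dt / tau)
    elif dt < 0:                      # post before pre -> depression
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))  # keep the excitatory weight bounded
```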


2019 ◽ Vol 30 (3) ◽ pp. 952-968
Author(s):  
Christoph Pokorny ◽  
Matias J Ison ◽  
Arjun Rao ◽  
Robert Legenstein ◽  
Christos Papadimitriou ◽  
...  

Abstract: Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity. The model depends critically on two parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these two parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these neural codes for associations and dynamically switch between them during consolidation.
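
The two codes could be told apart in such a model by the overlap between the neuron sets recruited before and after an association; a simple measure might look like the following sketch (a hypothetical helper, not from the paper):

```python
def assembly_overlap(assembly_a, assembly_b):
    """Fraction of neurons shared by two assemblies (sets of neuron ids).
    High overlap after pairing points to the 'overlap' code; near-zero
    overlap with a newly emerged assembly points to recruitment of
    new neurons instead."""
    a, b = set(assembly_a), set(assembly_b)
    return len(a & b) / min(len(a), len(b))
```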



2019 ◽ Vol 6 (4) ◽ pp. 181098
Author(s):  
Le Zhao ◽  
Jie Xu ◽  
Xiantao Shang ◽  
Xue Li ◽  
Qiang Li ◽  
...  

Non-volatile memristors are promising for future hardware-based neurocomputation applications because they are capable of emulating biological synaptic functions. Various material strategies have been studied in pursuit of better device performance, such as lower energy cost and better biological plausibility. In this work, we show a novel design for a non-volatile memristor based on a CoO/Nb:SrTiO3 heterojunction. We found that the memristor intrinsically exhibits resistive switching behaviour, which can be ascribed to the migration of oxygen vacancies and to charge trapping and detrapping at the heterojunction interface. The carrier trapping/detrapping level can be finely adjusted by regulating voltage amplitudes, so gradual conductance modulation can be realized with appropriate voltage pulse stimulation. Spike-timing-dependent plasticity, an important Hebbian learning rule, has also been implemented in the device. Our results indicate the possibility of achieving artificial synapses with CoO/Nb:SrTiO3 heterojunctions. Compared with filamentary-type synaptic devices, our device has the potential to reduce energy consumption, enable large-scale neuromorphic systems, and work more reliably, since no structural distortion occurs.
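
As a rough intuition for the gradual, non-filamentary modulation described here, a toy software model of pulse-driven conductance updates might look as follows; the trapping/detrapping physics is reduced to a saturating update step, and every constant is illustrative rather than a measured device parameter:

```python
import numpy as np

def apply_pulse(g, v_pulse, g_min=1e-6, g_max=1e-4, k=0.05, v_th=0.5):
    """Toy interface-type memristor: pulses above +v_th raise the
    conductance, pulses below -v_th lower it, and each step scales
    with the remaining headroom, giving the saturating potentiation/
    depression curves typical of trap-mediated switching."""
    if v_pulse > v_th:
        g += k * (v_pulse - v_th) * (g_max - g)
    elif v_pulse < -v_th:
        g -= k * (-v_pulse - v_th) * (g - g_min)
    return float(np.clip(g, g_min, g_max))
```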



2003 ◽ Vol 15 (3) ◽ pp. 565-596
Author(s):  
Daniel J. Amit ◽  
Gianluigi Mongillo

The collective behavior of a network modeling a cortical module of spiking neurons connected by plastic synapses is studied. A detailed spike-driven synaptic dynamics is simulated in a large network of spiking neurons, implementing the full double dynamics of neurons and synapses. The repeated presentation of a set of external stimuli is shown to structure the network to the point of sustaining working memory (selective delay activity). When the synaptic dynamics is analyzed as a function of pre- and postsynaptic spike rates in functionally defined populations, it reveals a novel variation of the Hebbian plasticity paradigm: in any functional set of synapses between pairs of neurons (e.g., stimulated-stimulated, stimulated-delay, stimulated-spontaneous), there is a finite probability of potentiation as well as of depression. This leads to a saturation of potentiation or depression at a level set by the ratio of the two probabilities. When one of the two probabilities is very high relative to the other, the familiar Hebbian mechanism is recovered; but where correlated working memory is formed, it prevents overlearning. Constraints relevant to the stability of the acquired synaptic structure, and the regimes of global activity allowing for structuring, are expressed in terms of the parameters describing the single-synapse dynamics. The synaptic dynamics is discussed in the light of experiments observing precise spike-timing effects and related issues of biological plausibility.
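
The stated saturation at the ratio of the two probabilities can be checked with a few lines of Monte Carlo: with per-presentation potentiation and depression probabilities q_pot and q_dep (illustrative values below), the fraction of potentiated synapses settles at q_pot / (q_pot + q_dep):

```python
import numpy as np

rng = np.random.default_rng(0)
n_syn, q_pot, q_dep = 10_000, 0.02, 0.005   # illustrative probabilities
state = np.zeros(n_syn, dtype=bool)          # False = depressed, True = potentiated

for _ in range(2_000):                       # repeated stimulus presentations
    flips_up = (~state) & (rng.random(n_syn) < q_pot)
    flips_down = state & (rng.random(n_syn) < q_dep)
    state[flips_up] = True
    state[flips_down] = False

print(state.mean(), q_pot / (q_pot + q_dep))  # both come out near 0.8
```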



2018
Author(s):  
Alexandre Dizeux ◽  
Marc Gesnik ◽  
Harry Ahnine ◽  
Kevin Blaize ◽  
Fabrice Arcizet ◽  
...  

Abstract: In recent decades, neuroimaging has played an invaluable role in improving our fundamental understanding of the brain. At the macro scale, neuroimaging modalities such as MRI, EEG, and MEG exploit a wide field of view to explore the brain as a global network of interacting regions, but at the price of either limited spatiotemporal resolution or limited sensitivity. At the micro scale, electrophysiology is used to explore the dynamic aspects of neuronal activity with very high temporal resolution; however, this modality requires statistical averaging over several tens of single-task responses. A large-scale neuroimaging modality with sufficient spatial and temporal resolution and sensitivity to study brain-region activation dynamically would open new territories of possibility in neuroscience. We show that functional ultrasound imaging (fUS) is able both to assess brain activation during single cognitive tasks within superficial and deeper areas of the frontal cortex, and to image the directional propagation of information within and between these regions. Equipped with an fUS device, two rhesus macaque monkeys were instructed, before a stimulus appeared, to rest (fixation), to look towards the stimulus (saccade), or to look away from it (antisaccade). Our results identified an abrupt transient change in activity, across all acquisitions, in the supplementary eye field (SEF) when the animals were required to change a rule regarding the task cued by a stimulus. Simultaneous imaging in the anterior cingulate cortex and SEF revealed a time delay in the directional functional connectivity of 0.27 ± 0.07 s and 0.9 ± 0.2 s for animals S and Y, respectively. These results provide initial evidence that recording cerebral hemodynamics over large brain areas at high spatiotemporal resolution and sensitivity with functional ultrasound enables instantaneous monitoring of endogenous brain signals and behavior.
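
The reported directional time delays suggest a lagged-correlation style of analysis; a minimal sketch of estimating the lag between two region-averaged hemodynamic signals (a hypothetical helper, not the authors' pipeline; equal-length signals assumed) could be:

```python
import numpy as np

def peak_lag(sig_a, sig_b, fs):
    """Lag (in seconds) at which sig_b best matches sig_a; a positive
    value means sig_a leads sig_b. fs is the sampling rate in Hz;
    both signals must have the same length."""
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(a))
    return lags[np.argmax(xcorr)] / fs
```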



2003 ◽ Vol 15 (2) ◽ pp. 208-218
Author(s):  
Yusuke Kanazawa ◽
Tetsuya Asai ◽  
Yoshihito Amemiya

We discuss an integration architecture for spiking neuron and dynamic synapse circuits, which we expect to become the next-generation basic circuits of neural processors. A key to the development of a brain-like processor is to learn from the brain. Learning from the brain, we try to develop circuits that implement neuron and synapse functions while enabling large-scale integration, so that large-scale integrated circuits (LSIs) can realize the functional behavior of neural networks. With such VLSI, we aim to construct a large-scale neural network on a single semiconductor chip. With circuit integration now reaching the micron level, however, problems have arisen from the dispersion of device performance in analog ICs and from the influence of electromagnetic noise. A genuine brain computer should solve such problems at the network level rather than the element level. To achieve this target, we must develop an architecture that adequately captures brain function and works correctly even in a noisy environment. As a first step, we propose an analog circuit architecture of spiking neurons and dynamic synapses that represents artificial neurons and synapses in a form closer to that of the brain. With the proposed circuit, the model neurons and synapses can be integrated on a silicon chip with metal-oxide-semiconductor (MOS) devices. In the sections that follow, we discuss the dynamic performance of the proposed circuit using the circuit simulator HSPICE. As examples of networks using these circuits, we introduce a competitive neural network and an active pattern-recognition network that extracts firing-frequency information from input information. We also show simulation results for the operation of networks constructed with the proposed circuits.
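
In software, the behaviour such an analog circuit is designed to reproduce can be sketched with a leaky integrate-and-fire model; the parameters below are illustrative, not the paper's MOS circuit values:

```python
import numpy as np

def lif_sim(i_in, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron. i_in is the input current
    trace (arbitrary units, one sample per time step of dt seconds);
    returns the membrane-potential trace and the spike times."""
    v, v_trace, spikes = 0.0, [], []
    for step, i in enumerate(i_in):
        v += dt / tau * (-v + i)      # leaky integration
        if v >= v_th:                 # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset
        v_trace.append(v)
    return np.array(v_trace), spikes
```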



2010 ◽ Vol 22 (2) ◽ pp. 467-510
Author(s):  
Filip Ponulak ◽  
Andrzej Kasiński

Learning from instructions or demonstrations is a fundamental property of our brain, necessary to acquire new knowledge and develop novel skills or behavioral patterns. This type of learning is thought to be involved in most of our daily routines. Although the concept of instruction-based learning has been studied for several decades, the exact neural mechanisms implementing this process remain unknown. One of the central questions in this regard is: how do neurons learn to reproduce template signals (instructions) encoded in precisely timed sequences of spikes? Here we present a model of supervised learning for biologically plausible neurons that addresses this question. In a set of experiments, we demonstrate that our approach enables us to train spiking neurons to reproduce arbitrary template spike patterns in response to given synaptic stimuli, even in the presence of various sources of noise. We show that the learning rule can also be used for decision-making tasks. Neurons can be trained to classify categories of input signals based only on the temporal configuration of spikes. The decision is communicated by emitting precisely timed spike trains associated with the given input categories. Trained neurons can perform the classification task correctly even if stimuli and corresponding decision times are temporally separated and the relevant information is consequently highly overlapped by ongoing neural activity. Finally, we demonstrate that neurons can be trained to reproduce sequences of spikes with a controllable time shift with respect to target templates. A reproduced signal can follow or even precede the targets. This surprising result suggests that spiking neurons can potentially be applied to forecast the behavior (firing times) of other reference neurons or networks.
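
A minimal sketch in the spirit of the described supervised rule: the teacher (desired) spike train potentiates, the neuron's own output depresses, and both updates are gated by a low-pass trace of recent presynaptic activity. Names and constants are illustrative, not the paper's exact formulation:

```python
import numpy as np

def resume_step(w, pre_trace, desired_spike, output_spike, lr=0.01, a=0.05):
    """One time step of a remote-supervision style weight update.
    w and pre_trace are NumPy vectors, one entry per input synapse;
    pre_trace is an exponentially decaying trace of presynaptic spikes."""
    if desired_spike:
        w += lr * (a + pre_trace)   # pull the output toward the template
    if output_spike:
        w -= lr * (a + pre_trace)   # suppress spikes absent from the template
    return w

# usage sketch: w = resume_step(np.zeros(100), trace, True, False)
```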



2010 ◽ Vol 104 (6) ◽ pp. 3476-3493
Author(s):  
Umberto Olcese ◽  
Steve K. Esser ◽  
Giulio Tononi

Recent evidence indicates that net synaptic strength in cortical and other networks increases during wakefulness and returns to a baseline level during sleep. These homeostatic changes in synaptic strength are accompanied by corresponding changes in sleep slow wave activity (SWA) and in neuronal firing rates and synchrony. Other evidence indicates that sleep is associated with an initial reactivation of learned firing patterns that decreases over time. Finally, sleep can enhance performance of learned tasks, aid memory consolidation, and desaturate the ability to learn. Using a large-scale model of the corticothalamic system equipped with a spike-timing-dependent learning rule, we demonstrate, in agreement with experimental results, a net increase in synaptic strength in the waking mode associated with an increase in neuronal firing rates and synchrony. In the sleep mode, net synaptic strength decreases, accompanied by a decline in SWA. We show that the interplay of activity and plasticity changes implements a control loop yielding an exponential, self-limiting renormalization of synaptic strength. Moreover, when the model "learns" a sequence of activation during waking, the learned sequence is preferentially reactivated during sleep, and reactivation declines over time. Finally, sleep-dependent synaptic renormalization leads to increased signal-to-noise ratios, increased resistance to interference, and desaturation of learning capabilities. Although the specific mechanisms implemented in the model cannot capture the variety and complexity of biological substrates, and will need modifications in line with future evidence, the present simulations provide a unified, parsimonious account for diverse experimental findings coming from molecular, electrophysiological, and behavioral approaches.
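
The "exponential, self-limiting renormalization" can be captured by a control loop in which the downscaling rate is proportional to the distance from baseline, so the decrement is large early in sleep and vanishes as baseline is approached. A toy sketch with illustrative constants (not the model's actual parameters):

```python
import numpy as np

def sleep_renormalization(w_total, w_baseline=1.0, rate=0.5, hours=8.0, dt=0.1):
    """Net synaptic strength decays exponentially toward baseline:
    dW/dt = -rate * (W - baseline). Returns the strength trace."""
    trace = [w_total]
    for _ in np.arange(0, hours, dt):
        w_total += -rate * (w_total - w_baseline) * dt
        trace.append(w_total)
    return np.array(trace)
```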



2020
Author(s):  
Eric C. Wong

Abstract: The brain is thought to represent information in the form of activity in distributed groups of neurons known as attractors, but it is not clear how attractors are formed or used in processing. We show here that in a randomly connected network of simulated spiking neurons, periodic stimulation of neurons with distributed phase offsets, together with standard spike-timing-dependent plasticity (STDP), efficiently creates distributed attractors. These attractors may have a consistent ordered firing pattern or become disordered, depending on the conditions. We also show that when two such attractors are stimulated in sequence, the same STDP mechanism can create a directed association between them, forming the basis of an associative network. We find that for an STDP time constant of 20 ms, the efficiency of attractor creation as a function of driving frequency has a broad peak centered around 8 Hz. Upon restimulation, the attractors self-oscillate, but at an oscillation frequency higher than the driving frequency, ranging from 10 to 100 Hz.
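
A sketch of the stimulation schedule as described: each driven neuron receives one forced spike per cycle at its own fixed random phase, so that STDP can chain the group into an ordered attractor. The helper and its defaults (the 8 Hz drive matches the reported optimum) are illustrative, not the paper's code:

```python
import numpy as np

def phased_drive(neuron_ids, freq_hz=8.0, t_end=5.0, rng=None):
    """Periodic stimulation with distributed phase offsets.
    Returns {neuron_id: array of stimulation times in seconds}."""
    if rng is None:
        rng = np.random.default_rng(0)
    period = 1.0 / freq_hz
    phases = rng.uniform(0, period, size=len(neuron_ids))  # one fixed offset each
    cycles = np.arange(0, t_end, period)
    return {nid: cycles + ph for nid, ph in zip(neuron_ids, phases)}
```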



2019
Author(s):  
Niels Trusbak Haumann ◽  
Minna Huotilainen ◽  
Peter Vuust ◽  
Elvira Brattico

Abstract: The accuracy of electroencephalography (EEG) and magnetoencephalography (MEG) is challenged by overlapping sources from within the brain. This lack of accuracy is a severe limitation on the possibilities and reliability of modern stimulation protocols in basic research and clinical diagnostics. As a solution, we here introduce a theory of stochastic neuronal spike-timing probability densities for describing large-scale spiking activity in neural networks, and a novel spike density component analysis (SCA) method for isolating specific neural sources. Three studies are conducted based on 564 cases of evoked responses to auditory stimuli from 94 human subjects, each measured with 60 EEG electrodes and 306 MEG sensors. In the first study we show that large-scale spike timing (but not non-encephalographic artifacts) in MEG/EEG waveforms can be modeled with Gaussian probability density functions with high accuracy (median 99.7%-99.9% variance explained), while gamma and sine functions fail to describe the MEG and EEG waveforms. In the second study we confirm that SCA can isolate a specific evoked response of interest: the mismatch negativity (MMN) response is accurately isolated with SCA, while principal component analysis (PCA) fails to suppress interference from overlapping brain activity, e.g., from P3a and alpha waves, and independent component analysis (ICA) distorts the evoked response. Finally, we confirm that SCA accurately reveals inter-individual variation in evoked brain responses by replicating findings relating individual traits to MMN variations. The findings of this paper suggest that the commonly overlapping neural sources in single-subject or patient data can be separated more accurately by applying the introduced theory of large-scale spike timing and the SCA method than by PCA or ICA.

Significance statement: Electroencephalography (EEG) and magnetoencephalography (MEG) are among the most widely applied non-invasive brain recording methods in humans. They are the only methods that measure brain function directly and at time resolutions smaller than seconds. However, in modern research and clinical diagnostics the brain responses of interest often cannot be isolated because of interfering signals from other ongoing brain activity. For the first time, we introduce a theory and method for mathematically describing and isolating overlapping brain signals, based on prior intracranial in vivo research on brain cells in monkey and human neural networks. Three studies mutually support our theory and suggest that a new level of accuracy in MEG/EEG can be achieved by applying the procedures presented in this paper.
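
The Gaussian modeling step of the first study can be sketched as fitting scaled Gaussian components to an evoked waveform; a minimal version with scipy follows (function names and starting values are assumptions, not the authors' code):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma):
    """One spike-density component: a Gaussian scaled by an amplitude
    (the amplitude can be negative for MMN-like deflections)."""
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def fit_component(t, waveform, guess=(-2.0, 0.15, 0.03)):
    """Fit a single Gaussian component to an evoked response; the
    residual can then be fitted with further components, one per
    source. t in seconds, waveform e.g. in microvolts."""
    params, _ = curve_fit(gaussian, t, waveform, p0=guess)
    explained = 1 - np.var(waveform - gaussian(t, *params)) / np.var(waveform)
    return params, explained
```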



2021
Author(s):  
Hao Wang ◽  
Hui-Jun Wu ◽  
Yang-Yu Liu ◽  
Linyuan Lu

Despite a relatively fixed anatomical structure, the human brain can support rich cognitive functions, triggering particular interest in structure-function relationships. Myelin is a vital marker of brain microstructure, yet the individual microstructure-function relationship is poorly understood. Here, we explore brain microstructure-function relationships using a higher-order framework. Global (network-level) higher-order microstructure-function relationships correlate negatively with male participants' personality scores and decline with aging. Nodal (node-level) higher-order microstructure-function relationships are not uniform throughout the brain: they are stronger in association cortices, weaker in sensory cortices, and show gender differences. Notably, higher-order microstructure-function relationships are maintained from the whole brain down to local circuits, which uncovers a compelling and straightforward principle of brain structure-function interactions. Additionally, targeted artificial attacks can disrupt these higher-order relationships, and the main results are robust against several factors. Together, our results increase the collective knowledge of higher-order structure-function interactions that may underlie cognition, individual differences, and aging.


