AN ADAPTIVE VISUAL NEURONAL MODEL IMPLEMENTING COMPETITIVE, TEMPORALLY ASYMMETRIC HEBBIAN LEARNING

2006 ◽  
Vol 16 (03) ◽  
pp. 151-162 ◽  
Author(s):  
ZHIJUN YANG ◽  
KATHERINE L. CAMERON ◽  
ALAN F. MURRAY ◽  
VASIN BOONSOBHAK

A novel depth-from-motion vision model based on leaky integrate-and-fire (I&F) neurons incorporates the implications of recent neurophysiological findings into an algorithm for object discovery and depth analysis. Pulse-coupled I&F neurons capture the edges in an optical flow field, and the associated travel time of those edges is encoded in the neuron parameters, principally the membrane time constant and the synaptic weights. Correlations between spikes and their timing thus encode depth in the visual field. Each neuron has multiple output synapses connecting to neighbouring neurons, with an initial Gaussian weight distribution. A temporally asymmetric learning rule adapts the synaptic weights online, during which competitive behaviour emerges between the different input synapses of a neuron. It is shown that this competition mechanism can further improve the model's performance. After training, the weights of synapses sourced from a neuron no longer display a Gaussian distribution, having adapted to encode features of the scenes to which they have been exposed.
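As a rough illustration of the ingredients named above, the sketch below combines a leaky I&F membrane with an exponential, temporally asymmetric (STDP-like) weight update for a single synapse. All function names and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal leaky I&F neuron with a temporally asymmetric weight update.
# Pre-before-post spike pairings potentiate the synapse; post-before-pre
# pairings depress it. Parameter values are illustrative only.
def simulate(spike_train, w, tau_m=20.0, v_thresh=1.0, dt=1.0,
             a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    v = 0.0
    last_pre, last_post = -np.inf, -np.inf
    for t, pre in enumerate(spike_train):
        v += dt * (-v / tau_m) + w * pre          # leaky integration of input
        if pre:
            last_pre = t
            # pre arrives after the last post spike -> depression
            w -= a_minus * np.exp(-(t - last_post) / tau_stdp)
        if v >= v_thresh:                          # postsynaptic spike
            v = 0.0
            last_post = t
            # pre fired before this post spike -> potentiation
            w += a_plus * np.exp(-(t - last_pre) / tau_stdp)
    return w

rng = np.random.default_rng(0)
w_final = simulate(rng.random(1000) < 0.05, w=0.5)
```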

2004 ◽  
Vol 16 (3) ◽  
pp. 535-561 ◽  
Author(s):  
Reiner Schulz ◽  
James A. Reggia

We examine the extent to which modified Kohonen self-organizing maps (SOMs) can learn unique representations of temporal sequences while still supporting map formation. Two biologically inspired extensions are made to traditional SOMs: selection of multiple simultaneous rather than single “winners” and the use of local intramap connections that are trained according to a temporally asymmetric Hebbian learning rule. The extended SOM is then trained with variable-length temporal sequences that are composed of phoneme feature vectors, with each sequence corresponding to the phonetic transcription of a noun. The model transforms each input sequence into a spatial representation (final activation pattern on the map). Training improves this transformation by, for example, increasing the uniqueness of the spatial representations of distinct sequences, while still retaining map formation based on input patterns. The closeness of the spatial representations of two sequences is found to correlate significantly with the sequences' similarity. The extended model presented here raises the possibility that SOMs may ultimately prove useful as visualization tools for temporal sequences and as preprocessors for sequence pattern recognition systems.
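A toy illustration of the "multiple simultaneous winners" extension on a 1-D map (the locally connected, temporally asymmetric intramap weights are omitted); function and parameter names are illustrative assumptions, not the paper's.

```python
import numpy as np

# Multi-winner SOM update step on a 1-D map: the k best-matching units
# are selected simultaneously, and each pulls its map neighbourhood
# toward the input. Parameter values are illustrative only.
def som_step(weights, x, k=3, lr=0.1, sigma=2.0):
    # weights: (n_units, dim); x: (dim,) input, e.g. a phoneme feature vector
    n = len(weights)
    winners = np.argsort(np.linalg.norm(weights - x, axis=1))[:k]
    pos = np.arange(n)
    for w in winners:
        h = np.exp(-(pos - w) ** 2 / (2 * sigma ** 2))  # map-distance neighbourhood
        weights += lr * h[:, None] * (x - weights)
    return weights, winners

rng = np.random.default_rng(0)
codebook = rng.random((50, 12))               # 50 units, 12 phoneme features
codebook, winners = som_step(codebook, rng.random(12))
```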


2021 ◽  
Author(s):  
Moctar Dembélé ◽  
Bettina Schaefli ◽  
Grégoire Mariéthoz

The diversity of remotely sensed and reanalysis-based rainfall data steadily increases, which on the one hand opens new perspectives for large-scale hydrological modelling in data-scarce regions, but on the other hand poses challenging questions regarding parameter identification and transferability under multiple input datasets. This study analyzes the variability of hydrological model performance when (1) a set of parameters is transferred from the calibration input dataset to a different meteorological dataset and, conversely, when (2) an input dataset is used with a parameter set originally calibrated for a different input dataset.

The research objective is to highlight the uncertainties related to input data and the limitations of hydrological model parameter transferability across input datasets. An ensemble of 17 rainfall datasets and 6 temperature datasets from satellite and reanalysis sources (Dembélé et al., 2020), corresponding to 102 combinations of meteorological data, is used to force the fully distributed mesoscale Hydrologic Model (mHM). The mHM model is calibrated for each combination of meteorological datasets, resulting in 102 calibrated parameter sets, almost all of which give similar model performance. Each of the 102 parameter sets is then used to run the mHM model with each of the 102 input datasets, yielding 10,404 scenarios that serve for the transferability tests. The experiment is carried out for the decade from 2003 to 2012 in the large and data-scarce Volta River basin (415,600 km²) in West Africa.

The results show a high variability in model performance for streamflow (mean CV = 105%) when the parameters are transferred from the original input dataset to other input datasets (test 1 above). Moreover, the model performance is generally lower, and can drop considerably, when parameters obtained under all other input datasets are transferred to a selected input dataset (test 2 above). This underlines the need for model performance evaluation whenever a model is run with input datasets or parameter sets different from those used during calibration. Our results represent a first step towards tackling the question of parameter transferability to climate change scenarios. An in-depth analysis of the results at a later stage will shed light on which model parameterizations are the main source of performance variability.

Dembélé, M., Schaefli, B., van de Giesen, N., & Mariéthoz, G. (2020). Suitability of 17 rainfall and temperature gridded datasets for large-scale hydrological modelling in West Africa. Hydrology and Earth System Sciences (HESS). https://doi.org/10.5194/hess-24-5379-2020
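A schematic of the transferability bookkeeping, assuming a performance matrix perf[i, j] filled by running the model with parameters calibrated on input dataset i and forcing from input dataset j; the 102×102 grid mirrors the study design, but the metric, helper names, and numbers here are placeholders.

```python
import numpy as np

# perf[i, j]: model skill (e.g. a streamflow efficiency score) when the
# parameter set calibrated on input dataset i is run with input dataset j.
# Test 1 corresponds to scanning a row (fixed parameters, varying input);
# the diagonal holds the original calibration runs and is excluded.
def mean_row_cv(perf):
    n = perf.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    cvs = []
    for i in range(n):
        row = perf[i, off_diag[i]]                 # transfers of parameter set i
        cvs.append(np.std(row) / abs(np.mean(row)))
    return 100 * np.mean(cvs)                      # mean CV in percent

perf = np.random.default_rng(1).uniform(0.3, 0.8, (102, 102))  # placeholder scores
print(f"mean CV across parameter transfers = {mean_row_cv(perf):.0f}%")
```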


2018 ◽  
Author(s):  
Damien Drix ◽  
Verena V. Hafner ◽  
Michael Schmuker

Cortical neurons are silent most of the time. This sparse activity is energy efficient, and the resulting neural code has favourable properties for associative learning. Most neural models of sparse coding use some form of homeostasis to ensure that each neuron fires infrequently. But homeostatic plasticity acting on a fast timescale may not be biologically plausible, and could lead to catastrophic forgetting in embodied agents that learn continuously. We set out to explore whether inhibitory plasticity could play that role instead, regulating both the population sparseness and the average firing rates. We put the idea to the test in a hybrid network where rate-based dendritic compartments integrate the feedforward input, while spiking somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic plasticity is not strictly required for regulating sparseness: inhibitory plasticity can have the same effect, although that mechanism comes with its own stability-plasticity dilemma. Going beyond point neuron models, the network illustrates how a learning rule can make use of dendrites and compartmentalised inputs; it also suggests a functional interpretation for clustered somatic inhibition in cortical neurons.
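A minimal rate-based sketch of the kind of inhibitory plasticity invoked above: inhibitory weights onto a neuron grow when it fires above a target rate and shrink otherwise, pushing the population toward sparse, low-rate activity. This follows the widely used Vogels-style rule and is an assumption; the paper's somato-dendritic rule is more elaborate.

```python
import numpy as np

# One step of a target-rate inhibitory plasticity rule. Neurons firing
# above `target` accumulate stronger inhibition from their active
# inhibitory inputs; neurons below it are released. Values illustrative.
def inhib_step(w_inh, r_pre, r_post, target=0.05, eta=0.01):
    # w_inh: (n_post, n_pre) inhibitory weights; rates normalised to [0, 1]
    dw = eta * np.outer(r_post - target, r_pre)
    return np.clip(w_inh + dw, 0.0, None)          # inhibition stays non-negative

rng = np.random.default_rng(0)
w = inhib_step(rng.random((100, 20)) * 0.1, rng.random(20), rng.random(100))
```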


2020 ◽  
Vol 117 (47) ◽  
pp. 29948-29958
Author(s):  
Maxwell Gillett ◽  
Ulises Pereira ◽  
Nicolas Brunel

Sequential activity has been observed in multiple neuronal circuits across species, neural structures, and behaviors. It has been hypothesized that sequences could arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity to form sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates. After learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. Using mean-field theory, we derive a low-dimensional description of the network dynamics and compute the storage capacity of these networks. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations. We find that the degree of sparseness of the recalled sequences can be controlled by nonlinearities in the learning rule. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified due to noise or storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
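A minimal sketch of the temporally asymmetric Hebbian rule studied in the paper, in which connectivity is built from correlations between each stored pattern and its successor over a sparse random graph. Linear pre/post functions are assumed here; the paper also analyzes nonlinear versions, which control the sparseness of the recalled sequences.

```python
import numpy as np

# Store a sequence of random patterns with an asymmetric pairwise rule:
# pattern mu drives pattern mu+1 through J. Values illustrative only.
N, P, c = 1000, 10, 0.1                        # neurons, patterns, connection probability
rng = np.random.default_rng(0)
xi = rng.standard_normal((P, N))               # random input patterns
mask = rng.random((N, N)) < c                  # sparse connectivity
J = mask * (xi[1:].T @ xi[:-1]) / (c * N)      # temporally asymmetric Hebbian rule

# Recall: cue with the first pattern and iterate the rate dynamics;
# overlaps with successive stored patterns peak transiently, one after another.
r = xi[0].copy()
for _ in range(P):
    r = np.tanh(J @ r)
overlaps = xi @ r / N                          # correlation with each stored pattern
```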


2019 ◽  
Vol 6 (4) ◽  
pp. 181098 ◽  
Author(s):  
Le Zhao ◽  
Jie Xu ◽  
Xiantao Shang ◽  
Xue Li ◽  
Qiang Li ◽  
...  

Non-volatile memristors are promising for future hardware-based neurocomputation because they are capable of emulating biological synaptic functions. Various material strategies have been studied in pursuit of better device performance, such as lower energy cost and better biological plausibility. In this work, we present a novel design for a non-volatile memristor based on a CoO/Nb:SrTiO3 heterojunction. We found that the memristor intrinsically exhibits resistive switching behaviour, which can be ascribed to the migration of oxygen vacancies and to charge trapping and detrapping at the heterojunction interface. The carrier trapping/detrapping level can be finely adjusted by regulating voltage amplitudes, so gradual conductance modulation can be realized by applying suitable voltage pulse stimulations. Spike-timing-dependent plasticity (STDP), an important Hebbian learning rule, has been implemented in the device. Our results indicate the possibility of achieving artificial synapses with CoO/Nb:SrTiO3 heterojunctions. Compared with filamentary-type synaptic devices, our device has the potential to reduce energy consumption, enable large-scale neuromorphic systems, and work more reliably, since no structural distortion occurs.
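A purely phenomenological sketch of the gradual conductance modulation described above: each supra-threshold voltage pulse nudges an internal state (standing in for trapped charge and oxygen-vacancy configuration) by an amount that saturates at the device bounds. This is an illustrative state-variable model, not a physical model of the CoO/Nb:SrTiO3 junction; all names and values are assumptions.

```python
import numpy as np

# Apply one voltage pulse to a bounded conductance state. Pulses below
# v_th leave the device unchanged; larger amplitudes produce larger,
# saturating conductance steps, giving gradual analogue modulation.
def apply_pulse(g, v, g_min=1e-6, g_max=1e-4, alpha=0.1, v_th=0.5):
    if abs(v) <= v_th:
        return g                                # sub-threshold pulse: no change
    drive = np.sign(v) * (abs(v) - v_th)
    if drive > 0:                               # potentiating pulse
        return g + alpha * drive * (g_max - g)
    return g + alpha * drive * (g - g_min)      # depressing pulse

g = 1e-5
for _ in range(20):                             # identical pulses -> gradual change
    g = apply_pulse(g, 0.8)
```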


1989 ◽  
Vol 03 (07) ◽  
pp. 555-560 ◽  
Author(s):  
M.V. TSODYKS

We consider the Hopfield model with the simplest form of the Hebbian learning rule, in which only simultaneous activity of pre- and postsynaptic neurons leads to modification of the synapse. An extra inhibition proportional to the full network activity is needed. Both symmetric nondiluted and asymmetric diluted networks are considered. The model performs well at extremely low levels of activity p < K^{-1/2}, where K is the mean number of synapses per neuron.
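A toy version of the rule in the abstract (parameters illustrative, not from the paper): binary {0,1} patterns at activity level p, a synapse is potentiated only when pre- and postsynaptic neurons are simultaneously active, and recall subtracts a global inhibition proportional to the total network activity.

```python
import numpy as np

# Low-activity Hopfield variant: Hebbian potentiation for coactive pairs
# only, plus global activity-dependent inhibition at recall time.
N, P, p = 2000, 30, 0.01                       # neurons, patterns, activity level
rng = np.random.default_rng(0)
xi = (rng.random((P, N)) < p).astype(float)    # sparse binary patterns
J = xi.T @ xi                                  # changes only for coactive pairs
np.fill_diagonal(J, 0.0)

def recall(s, steps=10, kappa=0.5, theta=0.5):
    for _ in range(steps):
        inhibition = kappa * s.sum()           # proportional to full network activity
        s = (J @ s - inhibition > theta).astype(float)
    return s

recovered = recall(xi[0].copy())               # cue with a stored pattern
```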


2010 ◽  
Vol 22 (3) ◽  
pp. 689-729 ◽  
Author(s):  
Vilson Luiz Dalle Mole ◽  
Aluizio Fausto Ribeiro Araújo

The growing self-organizing surface map (GSOSM) is a novel map model that learns a folded surface immersed in 3D space. Starting from a dense point cloud, the surface is reconstructed through an incremental mesh composed of approximately equilateral triangles. Unlike other models, such as neural meshes (NM), the GSOSM builds a surface topology while accepting any sequence of sample presentation. The GSOSM introduces a novel connection learning rule called competitive connection Hebbian learning (CCHL), which produces a complete triangulation. GSOSM reconstructions are accurate and often free of false or overlapping faces. This letter presents and discusses the GSOSM model, analyzes a set of results, and compares GSOSM with other models.
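For orientation, classic competitive Hebbian learning creates an edge between the two best-matching units of each sample; CCHL builds on this idea (with additional constraints and mesh growth) to obtain a complete, approximately equilateral triangulation. The sketch below shows only the basic connection step and is a simplification under stated assumptions, not the GSOSM algorithm itself.

```python
import numpy as np

# Competitive Hebbian edge creation: for each sample from the point
# cloud, connect its two nearest codebook nodes. A growing-surface model
# would also adapt node positions and insert new nodes here.
def chl_edges(nodes, samples):
    edges = set()
    for x in samples:
        d = np.linalg.norm(nodes - x, axis=1)
        a, b = np.argsort(d)[:2]               # two best-matching units
        edges.add((min(a, b), max(a, b)))
    return edges

rng = np.random.default_rng(0)
edges = chl_edges(rng.random((40, 3)), rng.random((500, 3)))
```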


2010 ◽  
Vol 22 (8) ◽  
pp. 2059-2085 ◽  
Author(s):  
Daniel Bush ◽  
Andrew Philippides ◽  
Phil Husbands ◽  
Michael O'Shea

Rate-coded Hebbian learning, as characterized by the BCM formulation, is an established computational model of synaptic plasticity. Recently it has been demonstrated that changes in the strength of synapses in vivo can also depend explicitly on the relative timing of pre- and postsynaptic firing. Computational modeling of this spike-timing-dependent plasticity (STDP) has demonstrated that it can provide inherent stability or competition based on local synaptic variables. However, it has also been demonstrated that these properties rely on synaptic weights being either depressed or unchanged by an increase in mean stochastic firing rates, which directly contradicts empirical data. Several analytical studies have addressed this apparent dichotomy and identified conditions under which distinct and disparate STDP rules can be reconciled with rate-coded Hebbian learning. The aim of this research is to verify, unify, and expand on these previous findings by manipulating each element of a standard computational STDP model in turn. This allows us to identify the conditions under which this plasticity rule can replicate experimental data obtained using both rate and temporal stimulation protocols in a spiking recurrent neural network. Our results describe how the relative scale of mean synaptic weights and their dependence on stochastic pre- or postsynaptic firing rates can be manipulated by adjusting the exact profile of the asymmetric learning window and temporal restrictions on spike pair interactions respectively. These findings imply that previously disparate models of rate-coded autoassociative learning and temporally coded heteroassociative learning, mediated by symmetric and asymmetric connections respectively, can be implemented in a single network using a single plasticity rule. However, we also demonstrate that forms of STDP that can be reconciled with rate-coded Hebbian learning do not generate inherent synaptic competition, and thus some additional mechanism is required to guarantee long-term input-output selectivity.
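As a pointer to what "the exact profile of the asymmetric learning window" means in practice, here is a standard double-exponential STDP window with independently adjustable amplitudes and time constants; the functional form and values are common modelling defaults, not the paper's specific parameters.

```python
import numpy as np

# Asymmetric STDP window: potentiation for pre-before-post pairings
# (dt > 0), depression for post-before-pre (dt < 0). Shifting the
# amplitudes and time constants reshapes the window and hence how mean
# weights depend on stochastic firing rates.
def stdp_window(dt, a_plus=0.005, a_minus=0.00525,
                tau_plus=20.0, tau_minus=20.0):
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.linspace(-100, 100, 201)              # spike-pair timing differences (ms)
dw = stdp_window(dts)                          # weight change across the window
```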

