A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks

2008 ◽  
Vol 20 (12) ◽  
pp. 2937-2966 ◽  
Author(s):  
Benoît Siri ◽  
Hugues Berry ◽  
Bruno Cessac ◽  
Bruno Delord ◽  
Mathias Quoy

We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
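As a rough illustration of the kind of rule the abstract describes, the sketch below couples fast discrete-time neuronal dynamics to a slower Hebbian update with passive forgetting. The sigmoid transfer function, learning rate, forgetting rate and timescale ratio are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hebbian_step(W, rate, eta=1e-3, forget=0.1):
    """One slow learning step: passive forgetting plus a Hebbian term
    built from the current firing rates (illustrative rule only)."""
    return W + eta * (-forget * W + np.outer(rate, rate))

def neural_step(W, x):
    """Fast discrete-time neuronal dynamics with a sigmoid transfer."""
    return 1.0 / (1.0 + np.exp(-W @ x))

rng = np.random.default_rng(0)
N = 50
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # random initial coupling
x = rng.random(N)

for t in range(1000):
    x = neural_step(W, x)          # fast neuronal dynamics
    if t % 10 == 0:                # learning acts on a slower timescale
        W = hebbian_step(W, x)
```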

1989 ◽  
Vol 03 (07) ◽  
pp. 555-560 ◽  
Author(s):  
M.V. TSODYKS

We consider the Hopfield model with the simplest form of the Hebbian learning rule, in which only simultaneous activity of the pre- and post-synaptic neurons leads to modification of the synapse. An extra inhibition proportional to the total network activity is needed. Both symmetric non-diluted and asymmetric diluted networks are considered. The model performs well at extremely low levels of activity, p < K^{-1/2}, where K is the mean number of synapses per neuron.
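A minimal sketch of the ingredients named in the abstract: sparse {0,1} patterns stored with a purely multiplicative Hebbian rule (a synapse changes only when pre- and post-synaptic neurons are simultaneously active) plus a global inhibition proportional to the total network activity. The threshold, inhibition constant and network size are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, p = 1000, 20, 0.02          # neurons, patterns, activity level p

# sparse {0,1} patterns; each neuron is active with probability p
xi = (rng.random((P, N)) < p).astype(float)

# simplest Hebbian rule: a synapse grows only when pre- and post-synaptic
# neurons are simultaneously active in a stored pattern
J = xi.T @ xi
np.fill_diagonal(J, 0.0)

def update(state, theta=0.5):
    """One parallel update with a global inhibition proportional to the
    total network activity (the proportionality constant is illustrative)."""
    inhibition = state.sum() * p
    h = J @ state - inhibition
    return (h > theta).astype(float)

# recall from a slightly corrupted version of pattern 0
state = xi[0].copy()
flip = rng.random(N) < 0.01
state[flip] = 1 - state[flip]
for _ in range(10):
    state = update(state)
print("fraction of pattern recovered:", state @ xi[0] / xi[0].sum())
```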


2003 ◽  
Vol 13 (04) ◽  
pp. 215-223 ◽  
Author(s):  
Marko Jankovic ◽  
Hidemitsu Ogawa

This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed ∞OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared with some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is also presented.
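The exact MH/MHO update is not reproduced here; as a hedged reference point, the sketch below implements the classic Oja rule for extracting the first principal component, a local Hebbian-type rule in the same family the abstract builds on. The learning rate, epoch count and toy data are assumptions.

```python
import numpy as np

def oja_pc1(X, eta=0.01, epochs=50, seed=0):
    """Extract the first principal component with Oja's Hebbian rule
    (a local rule in the same family as the MH/MHO algorithms)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                      # post-synaptic activity
            w += eta * y * (x - y * w)     # Hebbian term with built-in decay
    return w

# toy data: 2-D samples stretched along the direction (1, 1)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X = X @ np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)
X -= X.mean(axis=0)

w = oja_pc1(X)
print("estimated first PC direction:", w / np.linalg.norm(w))
```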


2009 ◽  
Vol 72 (10-12) ◽  
pp. 2477-2482 ◽  
Author(s):  
Alexander Goltsev ◽  
Vladimir Gritsenko

2006 ◽  
Vol 02 (03) ◽  
pp. 237-253 ◽  
Author(s):  
AMMAR BELATRECHE ◽  
LIAM P. MAGUIRE ◽  
MARTIN MCGINNITY ◽  
QING XIANG WU

Unlike traditional artificial neural networks (ANNs), which use a high abstraction of real neurons, spiking neural networks (SNNs) offer a biologically plausible model of realistic neurons. They differ from classical artificial neural networks in that SNNs handle and communicate information by means of the timing of individual pulses, an important feature of neuronal systems that is ignored by models based on rate-coding schemes. However, in order to make the most of these realistic neuronal models, good training algorithms are required. Most existing learning paradigms tune the synaptic weights in an unsupervised way using an adaptation of the well-known Hebbian learning rule, which is based on the correlation between pre- and post-synaptic neuron activity. Nonetheless, supervised learning is more appropriate when prior knowledge about the outcome of the network is available. In this paper, a new approach for supervised training is presented with a biologically plausible architecture. An adapted evolutionary strategy (ES) is used for adjusting the synaptic strengths and delays, which underlie the learning and memory processes in the nervous system. The algorithm is applied to complex non-linearly separable problems, and the results show that the network is able to learn successfully by means of a temporal encoding of the presented patterns.
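The following sketch illustrates only the outer loop of such an approach: a simple evolutionary strategy over a concatenated vector of synaptic weights and delays, driven by a supervised fitness. The fitness function here is a placeholder stub; in the paper it would come from simulating the spiking network on the training patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Placeholder supervised fitness: how far the network's behaviour is
    from the target. In practice this would be obtained by simulating the
    spiking network; here it is an illustrative quadratic stub."""
    target = np.linspace(-1.0, 1.0, params.size)
    return -np.sum((params - target) ** 2)

def evolve(n_weights=20, n_delays=20, pop=30, parents=5, gens=200, sigma=0.3):
    """Simple evolutionary-strategy loop over a concatenated vector of
    synaptic weights and delays (both treated as real-valued genes)."""
    dim = n_weights + n_delays
    population = rng.normal(0.0, 1.0, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        elite = population[np.argsort(scores)[-parents:]]       # best parents
        children = elite[rng.integers(parents, size=pop)]        # resample parents
        population = children + rng.normal(0.0, sigma, size=(pop, dim))
    best = population[np.argmax([fitness(ind) for ind in population])]
    return best[:n_weights], best[n_weights:]

weights, delays = evolve()
```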


2003 ◽  
Vol 15 (8) ◽  
pp. 1897-1929 ◽  
Author(s):  
Barbara Hammer ◽  
Peter Tiňo

Recent experimental studies indicate that recurrent neural networks initialized with "small" weights are inherently biased toward definite memory machines (Tiňo, Čerňanský, & Beňušková, 2002a, 2002b). This article establishes a theoretical counterpart: the transition function of a recurrent network with small weights and a squashing activation function is a contraction. We prove that recurrent networks with a contractive transition function can be approximated arbitrarily well on input sequences of unbounded length by a definite memory machine. Conversely, every definite memory machine can be simulated by a recurrent network with a contractive transition function. Hence, initialization with small weights induces an architectural bias into learning with recurrent neural networks. This bias might have benefits from the point of view of statistical learning theory: it emphasizes one possible region of the weight space where generalization ability can be formally proved. It is well known that standard recurrent neural networks are not distribution-independent learnable in the probably approximately correct (PAC) sense if arbitrary precision and inputs are considered. We prove that recurrent networks with a contractive transition function and a fixed contraction parameter fulfill the so-called distribution-independent uniform convergence of empirical distances property and hence, unlike general recurrent networks, are distribution-independent PAC learnable.
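A compact way to state the contraction argument, assuming tanh activations and the standard recurrent update (the notation is mine, not necessarily the paper's):

```latex
% Standard recurrent update with state h_t and input x_t:
h_t = f(x_t, h_{t-1}) = \tanh(W x_t + U h_{t-1} + b)
% Since \tanh is 1-Lipschitz, for any two states h, h':
\| f(x, h) - f(x, h') \| \le \| U (h - h') \| \le \| U \| \, \| h - h' \|
% so the transition function is a contraction in its state argument
% whenever \| U \| < 1, i.e. whenever the recurrent weights are "small".
```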


2019 ◽  
Vol 6 (4) ◽  
pp. 181098 ◽  
Author(s):  
Le Zhao ◽  
Jie Xu ◽  
Xiantao Shang ◽  
Xue Li ◽  
Qiang Li ◽  
...  

Non-volatile memristors are promising for future hardware-based neurocomputation applications because they are capable of emulating biological synaptic functions. Various material strategies have been studied to pursue better device performance, such as lower energy cost and better biological plausibility. In this work, we show a novel design for a non-volatile memristor based on a CoO/Nb:SrTiO3 heterojunction. We found that the memristor intrinsically exhibits resistive switching behaviour, which can be ascribed to the migration of oxygen vacancies and charge trapping and detrapping at the heterojunction interface. The carrier trapping/detrapping level can be finely adjusted by regulating voltage amplitudes. Gradual conductance modulation can therefore be realized by applying appropriate voltage pulses, and spike-timing-dependent plasticity, an important Hebbian learning rule, has been implemented in the device. Our results indicate the possibility of achieving artificial synapses with the CoO/Nb:SrTiO3 heterojunction. Compared with filamentary synaptic devices, our device has the potential to reduce energy consumption, enable large-scale neuromorphic systems and work more reliably, since no structural distortion occurs.
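For reference, the pair-based STDP window that such devices aim to emulate can be sketched as below; the amplitudes and time constants are illustrative, not values measured on the CoO/Nb:SrTiO3 device.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.05, a_minus=0.055, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window: weight change as a function of the spike
    timing difference delta_t = t_post - t_pre (in milliseconds).
    Amplitudes and time constants are illustrative, not device-measured."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(
        delta_t > 0,
        a_plus * np.exp(-delta_t / tau_plus),      # pre before post: potentiation
        -a_minus * np.exp(delta_t / tau_minus),    # post before pre: depression
    )

print(stdp_dw([-40, -10, 10, 40]))
```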


1992 ◽  
Vol 03 (01) ◽  
pp. 83-101 ◽  
Author(s):  
D. Saad

The Minimal Trajectory (MINT) algorithm for training recurrent neural networks with a stable end point is based on an algorithmic search for the system's representations in the neighbourhood of the minimal trajectory connecting the input-output representations. These representations appear to be the most probable set for solving the global perceptron problem related to the common weight matrix connecting all representations of successive time steps in a discrete recurrent neural network. The search for a proper set of system representations is aided by representation modification rules similar to those presented in our former paper [1], aimed at supporting contributing hidden and non-end-point representations while suppressing non-contributing ones. Similar representation modification rules were used in other training methods for feed-forward networks [2-4], based on modification of the internal representations. A feed-forward version of the MINT algorithm will be presented in another paper [5]. Once a proper set of system representations is chosen, the weight matrix is modified accordingly, via the Perceptron Learning Rule (PLR), to obtain the proper input-output relation. Computer simulations carried out for the restricted cases of parity and teacher-net problems show rapid convergence of the algorithm in comparison with other existing algorithms, together with modest memory requirements.
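The MINT search itself is not reproduced here; the sketch below shows only a Perceptron Learning Rule step of the kind the abstract says is used to fit the shared weight matrix once a set of representations has been fixed. The toy data and hyperparameters are assumptions.

```python
import numpy as np

def perceptron_learning_rule(X, y, eta=0.1, epochs=100, seed=0):
    """Classic perceptron rule for +/-1 targets: whenever a pattern is
    misclassified, move the weight vector toward (target * pattern)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(X, y):
            if np.sign(w @ x) != t:
                w += eta * t * x
                errors += 1
        if errors == 0:           # all patterns correctly mapped
            break
    return w

# toy linearly separable problem standing in for one time-step mapping
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]))
w = perceptron_learning_rule(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```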

