An approximate backpropagation learning rule for memristor based neural networks using synaptic plasticity

2017 · Vol 237 · pp. 193-199
Author(s): D. Negrov, I. Karandashev, V. Shakirov, Yu. Matveyev, W. Dunin-Barkowski, ...

1996 · Vol 8 (2) · pp. 451-460
Author(s): Georg Thimm, Perry Moerland, Emile Fiesler

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
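
The equivalence can be illustrated numerically. The sketch below is an independent NumPy experiment, not code from the paper: it trains a small one-hidden-layer network with sigmoid gain g, weights W, and learning rate lr alongside a gain-1 network initialized with weights g*W and learning rate g^2*lr, and checks that both produce the same outputs throughout training (the network size, data, and gain value are arbitrary choices).

    # Independent NumPy check of the gain / learning-rate equivalence (a sketch,
    # not code from the paper): a network with sigmoid gain g, weights W, and
    # learning rate lr is compared against a gain-1 network started from weights
    # g*W with learning rate g**2 * lr.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(a, gain):
        return 1.0 / (1.0 + np.exp(-gain * a))

    def train_step(W1, W2, x, t, gain, lr):
        # Forward pass through a one-hidden-layer network with gained sigmoids.
        h = sigmoid(W1 @ x, gain)
        y = sigmoid(W2 @ h, gain)
        # Backward pass for squared error; d/da sigmoid(gain*a) = gain*y*(1-y).
        d2 = (y - t) * gain * y * (1.0 - y)
        d1 = (W2.T @ d2) * gain * h * (1.0 - h)
        W2 = W2 - lr * np.outer(d2, h)
        W1 = W1 - lr * np.outer(d1, x)
        return W1, W2, y

    # Arbitrary data, targets, and initial weights.
    x = rng.normal(size=3)
    t = np.array([0.3, 0.8])
    W1 = rng.normal(size=(4, 3))
    W2 = rng.normal(size=(2, 4))

    gain, lr = 2.5, 0.1
    A1, A2 = W1.copy(), W2.copy()       # network A: gain g, learning rate lr
    B1, B2 = gain * W1, gain * W2       # network B: gain 1, weights g*W, lr g^2*lr

    for _ in range(100):
        A1, A2, ya = train_step(A1, A2, x, t, gain, lr)
        B1, B2, yb = train_step(B1, B2, x, t, 1.0, gain**2 * lr)

    # The difference stays at floating-point round-off level: the two
    # trainings are equivalent, as the theorem states.
    print(np.max(np.abs(ya - yb)))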


In this paper a basic introduction to neural networks is given, with an emphasis on the two-layer perceptron used extensively for function approximation. The backpropagation learning rule is then briefly introduced. A short introduction to the Python programming language follows, and a program for the perceptron design is written and discussed in some detail. The “neurolab” library is used for this purpose.
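
As an illustration of the kind of program the abstract describes, the sketch below uses neurolab's standard feed-forward interface (newff, train, sim) to approximate a function with a two-layer perceptron; the sine target and the five-neuron hidden layer are illustrative choices, not necessarily those used in the paper.

    # Illustrative two-layer perceptron for function approximation with neurolab,
    # following the library's standard newff/train/sim interface; the sine target
    # and the 5-neuron hidden layer are arbitrary choices, not taken from the paper.
    import numpy as np
    import neurolab as nl

    # Training samples: approximate y = 0.5*sin(x) on [-7, 7].
    x = np.linspace(-7, 7, 20)
    inp = x.reshape(-1, 1)
    tar = (0.5 * np.sin(x)).reshape(-1, 1)

    # Two-layer feed-forward network: 5 hidden neurons, 1 output neuron.
    net = nl.net.newff([[-7, 7]], [5, 1])

    # Train with the library's default gradient-based (backpropagation-style) trainer.
    error = net.train(inp, tar, epochs=500, show=100, goal=0.02)

    # Simulate the trained network on the training inputs.
    out = net.sim(inp)
    print(out[:5].ravel())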


Author(s): Serkan Kiranyaz, Junaid Malik, Habib Ben Abdallah, Turker Ince, Alexandros Iosifidis, ...

The recently proposed network model, Operational Neural Networks (ONNs), generalizes conventional Convolutional Neural Networks (CNNs), which are homogeneous networks built only on the linear neuron model. As a heterogeneous network model, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, the default search method for finding optimal operators in ONNs, the so-called Greedy Iterative Search (GIS) method, usually takes several training sessions to find a single operator set per layer. This is not only computationally demanding, but it also limits network heterogeneity, since the same set of operators is then used for all neurons in each layer. To address this deficiency and exploit a superior level of heterogeneity, this study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network based on the “Synaptic Plasticity” paradigm, which constitutes the essential learning theory in biological neurons. During training, each operator set in the library is evaluated by its synaptic plasticity level and ranked from worst to best, and an “elite” ONN is then configured using the top-ranked operator sets found at each hidden layer. Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs, and as a result, the performance gap over CNNs widens further.
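
The selection step described above can be pictured with a short schematic sketch; the function names and the plasticity-scoring callback below are hypothetical placeholders, not the authors' implementation. It only shows the generic loop: score every candidate operator set per hidden layer, rank the sets, and keep the top-ranked one for the “elite” configuration.

    # Schematic selection loop, assuming a hypothetical plasticity_score callback;
    # this illustrates the ranking idea only, not the authors' implementation.
    from typing import Callable, Dict, List

    def configure_elite_onn(
        operator_library: List[str],
        hidden_layers: List[int],
        plasticity_score: Callable[[str, int], float],  # hypothetical: higher = more plastic
    ) -> Dict[int, str]:
        elite_config = {}
        for layer in hidden_layers:
            # Rank the candidate operator sets for this layer from worst to best
            # by their observed synaptic plasticity level during training.
            ranked = sorted(operator_library, key=lambda op_set: plasticity_score(op_set, layer))
            elite_config[layer] = ranked[-1]  # keep the top-ranked operator set
        return elite_config

    # Toy usage with made-up operator sets and scores.
    library = ["mul-sum-tanh", "sin-median-tanh", "exp-sum-sigmoid"]
    scores = {
        ("mul-sum-tanh", 0): 0.42, ("sin-median-tanh", 0): 0.77, ("exp-sum-sigmoid", 0): 0.31,
        ("mul-sum-tanh", 1): 0.58, ("sin-median-tanh", 1): 0.12, ("exp-sum-sigmoid", 1): 0.66,
    }
    print(configure_elite_onn(library, [0, 1], lambda op, layer: scores[(op, layer)]))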

