Generalized perceptron learning rule and its implications for photorefractive neural networks

1994 ◽  
Vol 11 (9) ◽  
pp. 1619 ◽  
Author(s):  
Chau-Jern Cheng ◽  
Pochi Yeh ◽  
Ken Yuh Hsu

1992 ◽  
Vol 03 (01) ◽  
pp. 83-101 ◽  
Author(s):  
D. Saad

The Minimal Trajectory (MINT) algorithm for training recurrent neural networks with a stable end point is based on an algorithmic search for the system's representations in the neighbourhood of the minimal trajectory connecting the input-output representations. These representations appear to be the most probable set for solving the global perceptron problem related to the common weight matrix that connects all representations of successive time steps in a recurrent discrete neural network. The search for a proper set of system representations is aided by representation modification rules similar to those presented in our former paper,1 aimed at supporting contributing hidden and non-end-point representations while suppressing non-contributing ones. Similar representation modification rules were used in other training methods for feed-forward networks,2–4 based on modification of the internal representations. A feed-forward version of the MINT algorithm will be presented in another paper.5 Once a proper set of system representations is chosen, the weight matrix is modified accordingly, via the Perceptron Learning Rule (PLR), to obtain the proper input-output relation. Computer simulations carried out for the restricted cases of parity and teacher-net problems show rapid convergence of the algorithm in comparison with other existing algorithms, together with modest memory requirements.
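
For reference, the Perceptron Learning Rule that MINT applies to the shared weight matrix admits a compact sketch. The following minimal NumPy version is our own illustration of the classical rule, not the paper's implementation; the function name, the learning rate eta, and the epoch loop are assumptions.

    import numpy as np

    def perceptron_learning_rule(X, targets, eta=0.1, epochs=100):
        # X:       (n_patterns, n_inputs) array of +/-1 input representations
        # targets: (n_patterns,) array of +/-1 desired outputs
        # Returns a weight vector realizing the mapping, if one is found.
        _, n_inputs = X.shape
        w = np.zeros(n_inputs)
        for _ in range(epochs):
            mistakes = 0
            for x, t in zip(X, targets):
                y = 1.0 if w @ x >= 0 else -1.0
                if y != t:
                    w += eta * t * x   # classical PLR update on a misclassified pattern
                    mistakes += 1
            if mistakes == 0:
                break                  # every pattern is mapped correctly
        return w

In MINT, an update of this form would be applied with the chosen system representations of successive time steps playing the role of the input-target pairs.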



2002 ◽  
Vol 12 (02) ◽  
pp. 83-93 ◽  
Author(s):  
BURKHARD LENZE ◽  
JÖRG RADDATZ

In this paper, we take a further look at a generalized perceptron-like learning rule that uses dilation and translation parameters to enhance the recall performance of higher-order Hopfield neural networks without significantly increasing their complexity. We study the influence of these parameters on the perceptron learning and recall process in practice, using a generalized version of the Hebbian learning rule for initialization. Our analysis is based on a pattern recognition problem with random patterns. We show that, for a highly correlated set of patterns, some improvement in learning and recall performance can be gained. On the other hand, we show that the dilation and translation parameters have to be chosen carefully to obtain a positive result.
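
To make the setup concrete, here is a minimal sketch of a Hebbian initialization and a recall step for a second-order Hopfield network, with explicit dilation and translation parameters a and b inside the signum activation. This is our own illustration; the exact placement of a and b in the authors' formulation, and all names below, are assumptions.

    import numpy as np

    def hebbian_second_order(patterns):
        # patterns: (p, n) array of +/-1 patterns.
        # Generalized Hebbian initialization: a rank-3 weight tensor
        # W[i, j, k] accumulated over all stored patterns.
        _, n = patterns.shape
        return np.einsum('ui,uj,uk->ijk', patterns, patterns, patterns) / n

    def recall_step(W, x, a=1.0, b=0.0):
        # One synchronous recall step; dilation a and translation b
        # are placed inside the signum activation (our assumption).
        h = np.einsum('ijk,j,k->i', W, x, x)   # second-order local field
        return np.where(a * h - b >= 0.0, 1.0, -1.0)

Recall would iterate recall_step from a noisy probe until the state stops changing; an experiment of the kind described would then vary a and b and measure how reliably the stored random patterns are recovered.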





2021 ◽  
Author(s):  
Ceca Kraišniković ◽  
Wolfgang Maass ◽  
Robert Legenstein

The brain uses recurrent spiking neural networks for higher cognitive functions such as symbolic computations, in particular, mathematical computations. We review the current state of research on spike-based symbolic computations of this type. In addition, we present new results which show that surprisingly small spiking neural networks can perform symbolic computations on bit sequences and numbers and even learn such computations using a biologically plausible learning rule. The resulting networks operate in a rather low firing rate regime, where they could not simply emulate artificial neural networks by encoding continuous values through firing rates. Thus, we propose here a new paradigm for symbolic computation in neural networks that provides concrete hypotheses about the organization of symbolic computations in the brain. The employed spike-based network models are the basis for drastically more energy-efficient computer hardware – neuromorphic hardware. Hence, our results can be seen as creating a bridge from symbolic artificial intelligence to energy-efficient implementation in spike-based neuromorphic hardware.



2017 ◽  
Vol 237 ◽  
pp. 193-199 ◽  
Author(s):  
D. Negrov ◽  
I. Karandashev ◽  
V. Shakirov ◽  
Yu. Matveyev ◽  
W. Dunin-Barkowski ◽  
...  


2020 ◽  
Vol 34 (02) ◽  
pp. 1316-1323 ◽  
Author(s):  
Zuozhu Liu ◽  
Thiparat Chotibut ◽  
Christopher Hillar ◽  
Shaowei Lin

Motivated by the celebrated discrete-time model of nervous activity outlined by McCulloch and Pitts in 1943, we propose a novel continuous-time model, the McCulloch-Pitts network (MPN), for sequence learning in spiking neural networks. Our model has a local learning rule, such that the synaptic weight updates depend only on the information directly accessible by the synapse. By exploiting asymmetry in the connections between binary neurons, we show that MPN can be trained to robustly memorize multiple spatiotemporal patterns of binary vectors, generalizing the ability of the symmetric Hopfield network to memorize static spatial patterns. In addition, we demonstrate that the model can efficiently learn sequences of binary pictures as well as generative models for experimental neural spike-train data. Our learning rule is consistent with spike-timing-dependent plasticity (STDP), thus providing a theoretical foundation for the systematic design of biologically inspired networks with large and robust long-range sequence storage capacity.
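
The MPN itself is a continuous-time model with a local, STDP-consistent rule; as a discrete-time illustration of the underlying idea that asymmetric connections store a sequence, the classical Hebbian construction below maps each pattern to its successor. All names are ours, and the synchronous-update replay is a simplification of the paper's dynamics.

    import numpy as np

    def asymmetric_sequence_weights(seq):
        # seq: (T, n) array of +/-1 patterns x^1, ..., x^T.
        # Asymmetric Hebbian storage: W maps each pattern to its successor,
        # W = (1/n) * sum_t outer(x^{t+1}, x^t).
        T, n = seq.shape
        return sum(np.outer(seq[t + 1], seq[t]) for t in range(T - 1)) / n

    def replay(W, x0, steps):
        # Synchronous replay of the stored sequence: x(t+1) = sgn(W x(t)).
        x = x0.copy()
        trajectory = [x.copy()]
        for _ in range(steps):
            x = np.where(W @ x >= 0.0, 1.0, -1.0)
            trajectory.append(x.copy())
        return np.array(trajectory)

Starting replay from any stored pattern (or a noisy version of it) steps the network through the remainder of the sequence, which is the spatiotemporal analogue of the symmetric Hopfield network settling into a static attractor.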



1986 ◽  
Author(s):  
L. Personnaz ◽  
I. Guyon ◽  
A. Johannet ◽  
G. Dreyfus ◽  
G. Toulouse

