TOLERANCE OF ON-CHIP LEARNING TO VARIOUS CIRCUIT INACCURACIES

1998, Vol. 08 (02), pp. 315-327
Author(s): Howard C. Card, Dean K. McNeill, Christian R. Schneider, Roland S. Schneider, Brion K. Dolenko

An investigation is made of the tolerance of various in-circuit learning algorithms to component imprecision and other circuit limitations in artificial neural networks. In contrast with most previous work, the various circuit limitations are treated separately for their effects on learning. Supervised learning mechanisms, including backpropagation and contrastive Hebbian learning, and unsupervised soft competitive learning were found to be sufficiently tolerant of the levels of arithmetic inaccuracy, noise, nonlinearity, weight decay, and statistical fabrication variation that we have experienced in 1.2 μm analog CMOS circuits employing Gilbert multipliers as the primary computational element. These learning circuits also function properly in the presence of offset errors in analog multipliers and adders, provided that the circuitry constrains the computed weight updates to be applied only when they exceed certain minimum or threshold values. These results may also be relevant to other analog circuit approaches and to compact (low bit rate) digital implementations, although in the digital case the minimum weight increment defined by the bit precision could necessitate stochastic updating.
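The thresholded-update and stochastic-updating ideas can be made concrete in a short sketch. The function names, threshold, and step size below are hypothetical illustrations, not taken from the paper; the sketch shows only the two mechanisms the abstract names: suppressing sub-threshold updates, and stochastically rounding updates to a fixed minimum weight increment.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_update(w, dw, threshold=0.01):
    """Apply a weight update only when it exceeds a minimum magnitude,
    mimicking circuitry that suppresses sub-threshold updates."""
    return w + dw if abs(dw) >= threshold else w

def apply_update_stochastic(w, dw, step=0.01):
    """Low-precision variant: quantize the update to a fixed increment,
    rounding stochastically so that small updates are still applied
    in expectation rather than being lost to the quantization floor."""
    p = abs(dw) / step                            # expected number of increments
    n = int(p) + (rng.random() < (p - int(p)))    # stochastic rounding
    return w + np.sign(dw) * n * step
```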

1989, Vol. 01 (02), pp. 149-165
Author(s): H. C. Card, W. R. Moore

This paper provides a tutorial on various VLSI approaches to synthesizing artificial neural networks as microelectronic systems. The means by which the network learns and the synaptic weights are modified is a central theme of this study. The majority of the presentation concerns analog circuit approaches to neurons and synapses employing CMOS circuits. Also included is recent work toward VLSI in situ learning circuits that implement qualitative approximations to Hebbian learning with an economy of transistors. An attempt is also made to anticipate developments in VLSI devices that would be suited to neural networks, just as conventional MOS transistors are well suited to traditional digital computer systems.
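For readers unfamiliar with the distinction, the following sketch contrasts the exact Hebbian rule with one plausible qualitative approximation. The sign-based variant is a generic hardware-friendly simplification, offered here as an assumption for illustration; it is not the authors' specific circuit, and the step sizes are hypothetical.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    """Exact Hebbian rule: strengthen a synapse in proportion to the
    correlation of pre-synaptic (x) and post-synaptic (y) activity."""
    return w + lr * np.outer(y, x)

def qualitative_hebbian_update(w, x, y, step=0.01):
    """A coarse approximation: keep only the sign of the correlation,
    so each synapse needs just a fixed increment or decrement, the
    kind of simplification a compact learning circuit might make."""
    return w + step * np.sign(np.outer(y, x))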


1995, Vol. 7 (6), pp. 1191-1205
Author(s): Colin Fyfe

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights are learned by simple Hebbian learning alone, yet require no clipping, normalization, or weight decay. The network self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the asymmetry necessary to cause convergence to the actual principal components. Simulations and analysis confirm such convergence.
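The architecture admits a compact software statement. Below is a minimal sketch under the usual negative-feedback formulation (feedforward activation, residual after feedback, plain Hebbian update on the residual); the interneuron-to-interneuron feedback that yields the individual principal components rather than the subspace is omitted, and the names and learning rate are illustrative.

```python
import numpy as np

def negative_feedback_pca(X, n_components, lr=0.01, epochs=50, seed=0):
    """Sketch of a negative-feedback Hebbian network of this kind.
    Interneuron activations y = W x are fed back negatively to the
    inputs, and the weights learn by simple Hebbian updates on the
    residual, with no clipping, normalization, or weight decay."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x                 # feedforward activation
            e = x - W.T @ y           # input after negative feedback
            W += lr * np.outer(y, e)  # Hebbian update on the residual
    return W  # rows span the principal component subspace
```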


2014, Vol. 51 (3), pp. 837-857
Author(s): K. Borovkov, G. Decrouez, M. Gilson

The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, together with the current strengths of the synaptic connections; the state spaces of these models are hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of the corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved explicitly only in a few of the simplest cases. Approaches to the approximate computation of the stationary density are discussed. One approach reduces the dimensionality of the problem by modifying the network so that a neuron cannot fire once the number of spikes it has emitted within the post-synaptic transfer kernel memory span reaches a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
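To make the truncation construction concrete, here is a rough single-neuron simulation sketch. The exponential post-synaptic kernel, the rate nonlinearity, and all parameters are assumptions for illustration, not the paper's model; the sketch shows only the two ingredients the abstract describes: bounded memory over the kernel span, and a cap on the number of spikes in that span.

```python
import numpy as np

def simulate_truncated_poisson_neuron(T=10.0, dt=1e-3, tau=1.0,
                                      base_rate=5.0, gain=2.0,
                                      max_spikes_in_window=4, seed=0):
    """Sketch of one nonlinear Poisson neuron with bounded memory.
    The firing intensity depends on spikes within the last `tau`
    seconds through an exponential kernel; the 'truncation' forbids
    firing once the spike count in the memory window hits a threshold."""
    rng = np.random.default_rng(seed)
    spikes = []
    for step in range(int(T / dt)):
        t = step * dt
        recent = [s for s in spikes if t - s < tau]    # bounded memory span
        if len(recent) >= max_spikes_in_window:
            continue                                    # truncated: cannot fire
        drive = sum(np.exp(-(t - s) / tau) for s in recent)
        rate = base_rate * np.exp(-gain * drive)        # refractory-like suppression
        if rng.random() < rate * dt:                    # thin the Poisson process
            spikes.append(t)
    return spikes
```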


2001, Vol. 13 (6), pp. 614-620
Author(s): Kazuhiro Shimonomura, Seiji Kameda, Kazuo Ishii, Tetsuya Yagi, ...

A robot vision system was designed using a silicon retina, which was developed to mimic the parallel circuit structure of the vertebrate retina. The silicon retina used here is an analog CMOS very-large-scale integrated circuit that performs Laplacian-of-Gaussian-like filtering on the image in real time. The processing is robust to changes in illumination conditions. Analog circuit modules were designed to detect contours in the output image of the silicon retina and to binarize the output image. The images processed by the silicon retina, as well as those processed by the analog circuit modules, are sent as an NTSC signal to a DOS/V-compatible motherboard, which enables higher-level processing using digital image processing techniques. This novel robot vision system achieves real-time, robust processing under natural illumination with compact hardware and low power consumption.
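A software analogue of the two analog stages described (LoG-like filtering followed by binarization) can be sketched as follows. This assumes SciPy; the function name and the zero threshold are illustrative choices, not the chip's actual characteristics.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def retina_like_filter(image, sigma=2.0, threshold=0.0):
    """Software stand-in for the silicon retina's Laplacian-of-Gaussian-
    like filtering, followed by the binarization that the analog circuit
    modules perform on the filtered image."""
    filtered = gaussian_laplace(image.astype(float), sigma=sigma)
    binary = (filtered > threshold).astype(np.uint8)
    return filtered, binary
```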


NeuroImage, 2018, Vol. 176, pp. 290-300
Author(s): M.J. Spriggs, R.L. Sumner, R.L. McMillan, R.J. Moran, I.J. Kirk, ...
