TOLERANCE OF ON-CHIP LEARNING TO VARIOUS CIRCUIT INACCURACIES
We investigate the tolerance of in-circuit learning algorithms to component imprecision and other circuit limitations in artificial neural networks. In contrast with most previous work, the effects of the various circuit limitations on learning are treated separately. Supervised learning mechanisms, including backpropagation and contrastive Hebbian learning, and unsupervised soft competitive learning were found to tolerate the levels of arithmetic inaccuracy, noise, nonlinearity, weight decay, and fabrication-induced statistical variation that we have experienced in 1.2 μm analog CMOS circuits employing Gilbert multipliers as the primary computational element. These learning circuits also function properly in the presence of offset errors in the analog multipliers and adders, provided that the circuitry constrains weight updates to be applied only when the computed update exceeds a certain minimum (threshold) value. These results may also be relevant to other analog circuit approaches and to compact (low-precision) digital implementations, although in the digital case the minimum weight increment defined by the bit precision may necessitate stochastic updating.
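To make the two update policies mentioned above concrete, the following is a minimal sketch, not code from the paper: a thresholded update that suppresses weight changes smaller than a minimum value (so that they are not swamped by multiplier and adder offsets), and a stochastic update that rounds each change to the minimum weight increment probabilistically so small changes are still applied in expectation. The function names, the threshold theta, and the increment delta_min are hypothetical illustration parameters.

```python
import numpy as np

def thresholded_update(w, dw, theta=0.01):
    """Apply a weight update only where its magnitude exceeds a
    minimum (threshold) value theta; smaller computed updates,
    which offset errors would dominate, are discarded.
    (Illustrative sketch; theta is a hypothetical parameter.)"""
    mask = np.abs(dw) > theta
    return w + np.where(mask, dw, 0.0)

def stochastic_update(w, dw, delta_min=2.0**-8, rng=None):
    """Round each update stochastically to the minimum representable
    weight increment delta_min, so that updates smaller than one
    increment are still applied with probability proportional to
    their size (unbiased in expectation)."""
    rng = np.random.default_rng() if rng is None else rng
    steps = dw / delta_min              # update in units of delta_min
    lower = np.floor(steps)
    prob_up = steps - lower             # fractional part -> P(round up)
    rounded = lower + (rng.random(dw.shape) < prob_up)
    return w + rounded * delta_min
```

The stochastic scheme trades per-step accuracy for unbiasedness: a computed update of half an increment is applied as a full increment roughly half the time, so learning can proceed even when individual updates fall below the bit-precision floor.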