When is the generalized delta rule a learning rule? A physical analogy

Author(s):  
Pemberton ◽  
Vidal
2017 ◽  
Vol 114 (19) ◽  
pp. E3859-E3868 ◽  
Author(s):  
Florent Meyniel ◽  
Stanislas Dehaene

Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This “confidence weighting” implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain’s learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.
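The confidence-weighting principle described above can be contrasted with the constant-learning-rate delta rule in a few lines. The sketch below is a minimal, hypothetical illustration of precision-weighted updating (prior and evidence weighted by their reliabilities), not the authors' full hierarchical ideal observer; all parameter names are assumptions.

```python
def confidence_weighted_update(mean, precision, obs, obs_precision=1.0):
    """One reliability-weighted update: the prior estimate and the incoming
    observation are combined in proportion to their precisions
    (inverse variances), so a confident estimate resists new evidence."""
    new_precision = precision + obs_precision
    new_mean = (precision * mean + obs_precision * obs) / new_precision
    return new_mean, new_precision

def delta_rule_update(mean, obs, lr=0.1):
    """Classical delta rule: a fixed learning rate, blind to reliability."""
    return mean + lr * (obs - mean)
```

With equal precisions, the confidence-weighted update splits the difference between prior and observation, whereas the delta rule always moves by the same fixed fraction regardless of how much has already been learned.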


Author(s):  
MOHSEN EBRAHIMI MOGHADDAM

Motion blur is one of the most common causes of image degradation. Several methods that precisely identify linear motion blur parameters have been presented, but most of them lose precision in the presence of noise. The present paper introduces an algorithm for estimating linear motion blur parameters in noisy images. Motion direction is estimated using the Radon transform, followed by two different methods for estimating motion length: the first, based on the one-dimensional power spectrum, estimates the parameters of noise-free images, while the second uses bispectrum modeling for noisy images. A feed-forward back-propagation neural network, designed on the basis of the Weierstrass approximation theorem, models the bispectrum, with the delta rule as the network's learning rule. The methods were tested on several standard images (Cameraman, Lena, Lake, etc.) degraded by linear motion blur and additive noise, and the experimental results were satisfactory. Compared to related methods, the proposed method improves both the lowest supported SNR and the precision of estimation.
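The delta rule mentioned as the network's learning rule can be sketched for a single linear unit. This is a minimal, generic Widrow-Hoff illustration under assumed hyperparameters; it does not reproduce the paper's bispectrum-modeling architecture.

```python
import numpy as np

def train_delta_rule(X, y, lr=0.05, epochs=200, seed=0):
    """Train a single linear unit with the delta (Widrow-Hoff) rule:
    each sample's prediction error scales the weight update."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - (xi @ w + b)   # prediction error for this sample
            w += lr * err * xi        # delta rule weight update
            b += lr * err             # bias updated the same way
    return w, b
```

On linearly generated data the unit recovers the underlying slope and intercept; a multi-layer network generalizes this error-driven update via backpropagation.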


1995 ◽  
Vol 7 (4) ◽  
pp. 845-865 ◽  
Author(s):  
Jörg Bruske ◽  
Gerald Sommer

Dynamic cell structures (DCS) represent a family of artificial neural architectures suited both for unsupervised and supervised learning. They belong to the recently (Martinetz 1994) introduced class of topology representing networks (TRN) that build perfectly topology preserving feature maps. DCS employ a modified Kohonen learning rule in conjunction with competitive Hebbian learning. The Kohonen-type learning rule serves to adjust the synaptic weight vectors, while Hebbian learning establishes a dynamic lateral connection structure between the units reflecting the topology of the feature manifold. In the case of supervised learning, i.e., function approximation, each neural unit implements a radial basis function, and an additional layer of linear output units is adjusted according to a delta rule. DCS is the first RBF-based approximation scheme attempting to concurrently learn and utilize a perfectly topology preserving map for improved performance. Simulations on a selection of CMU benchmarks indicate that the DCS idea applied to the growing cell structure algorithm (Fritzke 1993c) leads to an efficient and elegant algorithm that can beat conventional models on similar tasks.
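The interplay of competitive Hebbian learning (connecting the two best-matching units) and the Kohonen-type weight adaptation can be sketched as a single adaptation step. This is a simplified illustration; the learning-rate names and the edge bookkeeping are assumptions, and the full DCS algorithm (edge aging, unit insertion via growing cell structures) is omitted.

```python
import numpy as np

def dcs_step(units, edges, x, eps_winner=0.2, eps_neighbor=0.02):
    """One simplified DCS-style step: find the best and second-best
    matching units, connect them (competitive Hebbian learning), and
    move the winner and its lateral neighbors toward the input
    (Kohonen-type rule). `units` is (n, d); `edges` is a symmetric
    (n, n) adjacency matrix, both modified in place."""
    d = np.linalg.norm(units - x, axis=1)
    bmu, second = np.argsort(d)[:2]
    edges[bmu, second] = edges[second, bmu] = 1.0     # Hebbian edge
    units[bmu] += eps_winner * (x - units[bmu])       # winner update
    for j in np.nonzero(edges[bmu])[0]:
        units[j] += eps_neighbor * (x - units[j])     # neighbor update
    return bmu
```

Because edges are only created between units that jointly win for some input, the lateral connection structure converges toward the topology of the data manifold.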


2016 ◽  
Author(s):  
Chaitanya K. Ryali ◽  
Gautam Reddy ◽  
Angela J. Yu

Understanding how humans and animals learn about statistical regularities in stable and volatile environments, and utilize these regularities to make predictions and decisions, is an important problem in neuroscience and psychology. Using a Bayesian modeling framework, specifically the Dynamic Belief Model (DBM), it has previously been shown that humans tend to make the default assumption that environmental statistics undergo abrupt, unsignaled changes, even when environmental statistics are actually stable. Because exact Bayesian inference in this setting, an example of switching state space models, is computationally intensive, a number of approximately Bayesian and heuristic algorithms have been proposed to account for learning/prediction in the brain. Here, we examine a neurally plausible algorithm, a special case of leaky integration dynamics we denote as EXP (for exponential filtering), that is significantly simpler than all previously suggested algorithms except for the delta-learning rule, and which far outperforms the delta rule in approximating Bayesian prediction performance. We derive the theoretical relationship between DBM and EXP, and show that EXP gains computational efficiency by forgoing the representation of inferential uncertainty (as does the delta rule), but that it nevertheless achieves near-Bayesian performance due to its ability to incorporate a “persistent prior” influence unique to DBM and absent from the other algorithms. Furthermore, we show that EXP is comparable to DBM but better than all other models in reproducing human behavior in a visual search task, suggesting that human learning and prediction also incorporates an element of persistent prior. More broadly, our work demonstrates that when observations are information-poor, detecting changes or modulating the learning rate is both difficult and (thus) unnecessary for making Bayes-optimal predictions.
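A leaky-integration predictor with a persistent prior, in the spirit of the EXP algorithm described above, can be written in a few lines. This is a hedged sketch: the parameter names (`eta`, `prior`, `prior_weight`) and the exact mixing form are illustrative assumptions, not the paper's derived correspondence between DBM and EXP.

```python
def exp_filter(obs, eta=0.2, prior=0.5, prior_weight=0.1):
    """Leaky-integration (EXP-style) prediction with a persistent prior.
    Each step, the running estimate leaks toward the latest observation
    and is re-anchored toward a constant prior, so the prior's influence
    never fully decays (unlike a plain exponential filter)."""
    p = prior
    preds = []
    for x in obs:
        preds.append(p)  # predict before observing x
        leaky = (1 - eta) * p + eta * x           # exponential filtering
        p = prior_weight * prior + (1 - prior_weight) * leaky
    return preds
```

Note that on a long run of identical observations the prediction converges not to the observed value but to a fixed point pulled toward the prior; this residual prior influence is the qualitative behavior the abstract attributes to the persistent prior.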


2020 ◽  
Vol 16 (2) ◽  
pp. 280-289
Author(s):  
Ghalib H. Alshammri ◽  
Walid K. M. Ahmed ◽  
Victor B. Lawrence

Background: The architecture and sequential learning rule underlying ARFIS (adaptive-receiver-based fuzzy inference system) are proposed to estimate and predict the adaptive threshold-based detection scheme for diffusion-based molecular communication (DMC). Method: The proposed system forwards an estimate of the received bits based on the current molecular cumulative concentration, derived using a sequential training-based principle with weights and biases, and an input-output mapping based on human knowledge in the form of fuzzy IF-THEN rules. The ARFIS architecture is employed to model nonlinear molecular communication in order to predict the received bits over a time series. Result: This procedure is suitable for binary on-off keying (OOK) signaling, where the receiver bio-nanomachine (Rx Bio-NM) adapts the 1/0-bit detection threshold based on all previously received molecular cumulative concentrations to alleviate the inter-symbol interference (ISI) problem and reception noise. Conclusion: Theoretical and simulation results show an improvement in diffusion-based molecular throughput and in the optimal number of molecules in transmission. Furthermore, the performance evaluation under various noisy channel sources shows promising improvement in the uncoded bit error rate (BER) compared with other threshold-based detection schemes in the literature.
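The core idea of adapting the OOK detection threshold from all previously received concentrations can be sketched minimally. This is a hypothetical running-mean illustration only; the paper's ARFIS predictor (fuzzy IF-THEN rules with sequential training) is far richer, and `init_threshold` is an assumed parameter.

```python
def adaptive_ook_detect(concentrations, init_threshold=0.5):
    """Adaptive threshold detection for OOK (sketch): each bit is
    decided against a threshold recomputed as the running mean of all
    previously received cumulative concentrations, so the decision
    boundary tracks drift from ISI and reception noise."""
    bits, history = [], []
    threshold = init_threshold
    for c in concentrations:
        bits.append(1 if c > threshold else 0)
        history.append(c)
        threshold = sum(history) / len(history)  # adapt from past samples
    return bits
```

A fixed threshold would misclassify symbols once ISI shifts the concentration baseline, whereas the adaptive threshold re-centers itself after every received symbol.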

