An intelligent instrument for tracking and adaptive filtering of oscillatory signals using Hebbian learning rules

Measurement ◽  
1999 ◽  
Vol 26 (4) ◽  
pp. 221-227 ◽  
Author(s):  
Sakuntala Mahapatra ◽  
Samrat Lagnajeet Sabat ◽  
Santanu K. Nayak


F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 1222 ◽  
Author(s):  
Gabriele Scheler

In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as the auditory and visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. Differences in connectivity (strongly recurrent cortex vs. feed-forward striatum and cerebellum), neurotransmitter (GABA in striatum, glutamate in cortex) or level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant to this feature. Logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights but also intrinsic gains need to undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
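A minimal sketch of the idea (not the authors' model; the network structure and all constants below are assumptions for illustration): multiplicative Hebbian updates applied to both synaptic weights and intrinsic gains, with a simple normalization as a stand-in for homeostasis, tend to push both distributions toward a lognormal shape because each quantity accumulates many proportional changes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, steps = 200, 100, 2000
eta_w, eta_g = 0.01, 0.005                          # assumed learning rates

w = np.abs(rng.normal(1.0, 0.1, (n_post, n_pre)))   # synaptic weights (kept positive)
g = np.ones(n_post)                                 # intrinsic gains

for _ in range(steps):
    x = rng.poisson(2.0, n_pre).astype(float)       # presynaptic spike rates
    y = g * np.tanh(w @ x / n_pre)                  # postsynaptic rates
    # Multiplicative Hebbian updates: the change is proportional to the current value
    w += eta_w * np.outer(y, x) * w
    g += eta_g * y * g
    # Divisive normalization as a crude homeostatic stand-in, keeping totals bounded
    w *= n_pre / w.sum(axis=1, keepdims=True)
    g /= g.mean()

# If the mechanism works, log-weights and log-gains look roughly Gaussian (lognormal raw values)
print(np.std(np.log(w.ravel())), np.std(np.log(g)))
```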


1995 ◽  
Vol 7 (3) ◽  
pp. 507-517 ◽  
Author(s):  
Marco Idiart ◽  
Barry Berk ◽  
L. F. Abbott

Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
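A hedged sketch of the setup (sizes, receptive-field layout, and the specific rule are assumptions): each unit applies a simple correlation-based rule, here Oja's rule, only to the restricted window of the input it can see.

```python
import numpy as np

rng = np.random.default_rng(1)
input_dim, n_units, rf_size, eta = 64, 16, 16, 0.01    # assumed sizes

# Each unit sees only a contiguous window (its restricted receptive field) of the input
starts = rng.integers(0, input_dim - rf_size + 1, n_units)
W = rng.normal(0.0, 0.1, (n_units, rf_size))

for _ in range(5000):
    x = rng.normal(0.0, 1.0, input_dim)                 # one input sample
    for i, s in enumerate(starts):
        xi = x[s:s + rf_size]                           # the unit's restricted view of the data
        y = W[i] @ xi
        # Oja's rule: Hebbian term plus a decay that keeps the weight vector normalized
        W[i] += eta * y * (xi - y * W[i])

print(np.linalg.norm(W, axis=1))                        # weight vectors settle near unit norm
```

Adding input noise and varying `rf_size` would be the natural way to probe the reconstruction-accuracy trade-off the abstract refers to.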


2000 ◽  
Vol 23 (4) ◽  
pp. 550-551 ◽  
Author(s):  
Mikhail N. Zhadin

The absence of a clear influence of an animal's behavioral responses on Hebbian associative learning in the cerebral cortex requires some modification of the Hebbian learning rules. The participation of the brain's monoaminergic systems in Hebbian associative learning is considered.


2012 ◽  
Vol 24 (5) ◽  
pp. 1271-1296 ◽  
Author(s):  
Michael Teichmann ◽  
Jan Wiltschut ◽  
Fred Hamker

The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates the signal processing of the visual cortex; in part, this invariance is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies address the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex that learns so-called complex cells from a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
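A simplified stand-in (assumed details; the paper itself uses calcium-based rules): a trace-based Hebbian update with a homeostatic sliding threshold, a common way to associate temporally adjacent inputs with the same unit and thereby obtain complex-cell-like invariance from image sequences.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_units = 100, 10
eta, trace_decay, theta_decay = 0.01, 0.8, 0.99         # assumed constants

W = np.abs(rng.normal(0.0, 0.1, (n_units, n_in)))       # feedforward weights
trace = np.zeros(n_units)                                # slow trace of postsynaptic activity
theta = np.ones(n_units)                                 # homeostatic firing thresholds

for _ in range(2000):
    x = np.abs(rng.normal(0.0, 1.0, n_in))               # stand-in for one image frame
    y = np.maximum(W @ x - theta, 0.0)                   # rectified responses
    trace = trace_decay * trace + (1.0 - trace_decay) * y
    # Hebbian update gated by the trace, so consecutive frames become associated with the same unit
    W += eta * np.outer(trace, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)        # weight normalization
    # Homeostatic regulation: each threshold tracks its unit's recent activity
    theta = theta_decay * theta + (1.0 - theta_decay) * y
```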


2013 ◽  
Vol 2013 ◽  
pp. 1-17 ◽  
Author(s):  
Elisa Magosso ◽  
Filippo Cona ◽  
Mauro Ursino

Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, an enduring sound shift is observed in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report conflicting results on how the aftereffect generalizes across sound frequencies, ranging from being confined to the frequency used during adaptation to generalizing across several octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect, and can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding both the spatial and spectral features of the auditory stimuli, mimicking properties of biological auditory neurons. The model suggests that different extents of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus, which induces different amounts of activation in the auditory layer. The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
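A hedged toy sketch (the structure, sizes and rates below are assumptions, not the published model): Hebbian plasticity of frequency-to-space connections during audiovisual adaptation leaves a persistent, frequency-specific shift of the decoded sound location.

```python
import numpy as np

azimuth = np.arange(-40.0, 41.0, 2.0)                  # preferred locations of spatial units (deg)
sigma, eta, n_freq = 10.0, 0.01, 5                      # assumed tuning width, rate, channels

def tuning(center):
    return np.exp(-(azimuth - center) ** 2 / (2.0 * sigma ** 2))

def decode(activity):
    return np.sum(activity * azimuth) / np.sum(activity)   # population-vector readout

w_av = np.zeros((len(azimuth), n_freq))                 # learned frequency -> spatial-map weights
adapt_freq = np.zeros(n_freq); adapt_freq[2] = 1.0      # frequency channel of the adapting sound

# Adaptation: sound at 0 deg in channel 2, visual cue at +8 deg
for _ in range(200):
    spatial = tuning(0.0) + 0.5 * tuning(8.0) + w_av @ adapt_freq
    w_av += eta * np.outer(spatial, adapt_freq)         # Hebbian: pre = frequency, post = spatial
    w_av *= 0.999                                       # mild decay keeps the weights bounded

# Test with sound only: decoded location is shifted toward +8 deg (aftereffect)
print(decode(tuning(0.0) + w_av @ adapt_freq))
# A different frequency channel shows no shift (frequency-specific aftereffect)
other_freq = np.zeros(n_freq); other_freq[4] = 1.0
print(decode(tuning(0.0) + w_av @ other_freq))
```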


2017 ◽  
Author(s):  
Ulises Pereira ◽  
Nicolas Brunel

The attractor neural network scenario is a popular framework for memory storage in association cortex, but there is still a large gap between models based on this scenario and experimental data. We study a recurrent network model in which both the learning rules and the distribution of stored patterns are inferred from distributions of visual responses to novel and familiar images in inferior temporal cortex (ITC). Unlike classical attractor neural network models, our model exhibits graded activity in retrieval states, with distributions of firing rates that are close to lognormal. The inferred learning rules are close to maximizing the number of stored patterns within a family of unsupervised Hebbian learning rules, suggesting that learning rules in ITC are optimized to store a large number of attractor states. Finally, we show that there exist two types of retrieval states: one in which firing rates are constant in time, and another in which firing rates fluctuate chaotically.
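A rough sketch of the general framework described above (the functional forms and parameters are assumptions): patterns with lognormal rates are stored by a separable Hebbian rule of the form J_ij ∝ Σ_μ f(ξ_i^μ) g(ξ_j^μ), and rate dynamics are run from a noisy cue to probe graded retrieval.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P, steps, dt = 500, 10, 200, 0.1                    # assumed network size and pattern count

patterns = rng.lognormal(mean=0.0, sigma=1.0, size=(P, N))   # stored rate patterns (lognormal)

def f(post): return post - post.mean()                 # assumed postsynaptic dependence
def g(pre):  return pre - pre.mean()                   # assumed presynaptic dependence

J = sum(np.outer(f(p), g(p)) for p in patterns) / N    # separable Hebbian connectivity
np.fill_diagonal(J, 0.0)

def phi(h):                                            # saturating rate transfer function
    return 5.0 / (1.0 + np.exp(-h))

r = patterns[0] + rng.normal(0.0, 0.5, N)              # cue: noisy version of pattern 0
for _ in range(steps):
    r = r + dt * (-r + phi(J @ r))                     # rate dynamics

# Overlap with each stored pattern: the cued pattern should dominate in a retrieval state
print(np.round([np.corrcoef(r, p)[0, 1] for p in patterns], 2))
```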


2016 ◽  
Author(s):  
Yuwei Cui ◽  
Subutai Ahmad ◽  
Jeff Hawkins

Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the spatial pooler, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties, we develop a set of metrics that can be computed directly from the spatial pooler outputs. We use these metrics and targeted artificial simulations to show that the properties are met. We then demonstrate the value of the spatial pooler in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and with previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
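A hedged sketch of the spatial-pooler loop described above (sizes, thresholds and update constants are assumptions, not Numenta's implementation): boosted overlaps, k-winners-take-all competition, and Hebbian permanence updates on the winning columns.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_cols, k = 256, 128, 6                       # assumed input size, columns, winners per step
perm = rng.uniform(0.0, 1.0, (n_cols, n_in))        # synapse permanences
connected = 0.5                                     # permanence threshold for a connected synapse
duty = np.zeros(n_cols)                             # running activation frequency, for boosting
p_inc, p_dec, boost_strength = 0.05, 0.02, 2.0

def spatial_pooler_step(x):
    global duty
    overlaps = ((perm > connected) * x).sum(axis=1)        # count of connected active inputs
    boost = np.exp(boost_strength * (k / n_cols - duty))   # homeostatic boosting of quiet columns
    winners = np.argsort(boost * overlaps)[-k:]            # k-winners-take-all competition
    sdr = np.zeros(n_cols, dtype=bool)
    sdr[winners] = True
    # Hebbian update on winning columns: reinforce synapses to active inputs, weaken the rest
    perm[winners] += np.where(x > 0, p_inc, -p_dec)
    np.clip(perm, 0.0, 1.0, out=perm)
    duty = 0.99 * duty + 0.01 * sdr
    return sdr

x = (rng.random(n_in) < 0.1).astype(float)          # a sparse binary input pattern
print(spatial_pooler_step(x).sum())                 # always exactly k active columns
```

The boosting term is what gives the efficient use of cells the abstract mentions: columns that rarely win are made more competitive until the whole population participates.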


2021 ◽  
Author(s):  
Andrea Ferigo ◽  
Giovanni Iacca ◽  
Eric Medvet ◽  
Federico Pigozzi

According to Hebbian theory, synaptic plasticity is the ability of neurons to strengthen or weaken the synapses among them in response to stimuli. It plays a fundamental role in the processes of learning and memory of biological neural networks. With plasticity, biological agents can adapt on multiple timescales and outclass artificial agents, the majority of which still rely on static Artificial Neural Network (ANN) controllers. In this work, we focus on Voxel-based Soft Robots (VSRs), a class of simulated artificial agents composed of aggregations of elastic cubic blocks. We propose a Hebbian ANN controller in which every synapse is associated with a Hebbian rule that controls the way the weight is adapted during the VSR's lifetime. For a given morphology, we optimize the controller for the task of locomotion by evolving the parameters of the Hebbian rules rather than the weights themselves. Our results show that the Hebbian controller is comparable to, and often better than, a non-Hebbian baseline, and that it is more adaptable to unforeseen damage. We also provide novel insights into the inner workings of plasticity and demonstrate that "true" learning does take place, as the evolved controllers improve over the lifetime and generalize well.
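A hedged sketch of a per-synapse Hebbian rule in the common ABCD form (the paper's exact parameterization may differ; the parameters shown are random placeholders where evolved values would go).

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_out = 8, 4                                   # assumed controller layer sizes
W = rng.normal(0.0, 0.1, (n_out, n_in))              # synaptic weights, adapted during the lifetime
# One rule per synapse: A (pre*post), B (pre), C (post), D (bias), with a shared learning rate
A, B, C, D = (rng.normal(0.0, 0.5, (n_out, n_in)) for _ in range(4))
eta = 0.01

def control_step(x):
    """One control step: compute outputs, then apply the per-synapse Hebbian update."""
    global W
    y = np.tanh(W @ x)
    pre = np.tile(x, (n_out, 1))                     # presynaptic activation seen by each synapse
    post = np.tile(y[:, None], (1, n_in))            # postsynaptic activation seen by each synapse
    W = W + eta * (A * pre * post + B * pre + C * post + D)
    return y

for t in range(100):                                 # placeholder for sensor readings over a lifetime
    x = np.sin(0.1 * t + np.arange(n_in))
    y = control_step(x)
print(np.round(y, 3))
```

In the evolutionary setting, the entries of A, B, C, D (and the learning rate) would be the genome, evaluated by running the controller for a full simulated lifetime and scoring locomotion.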


1991 ◽  
Vol 02 (03) ◽  
pp. 169-184 ◽  
Author(s):  
Lei Xu ◽  
Adam Krzyzak ◽  
Erkki Oja

A new modification of the subspace pattern recognition method, called the dual subspace pattern recognition (DSPR) method, is proposed, and neural network models combining both constrained Hebbian and anti-Hebbian learning rules are developed for implementing the DSPR method. An experimental comparison is made between our model and a three-layer feedforward net with backpropagation learning. The results illustrate that our model can outperform the backpropagation model in suitable applications.
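A hedged sketch of the general ingredient named in the abstract (not the DSPR algorithm itself): a constrained Hebbian feedforward rule combined with anti-Hebbian lateral connections, so that units span a data subspace while their outputs are pushed toward decorrelation.

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_units, eta = 20, 4, 0.01                        # assumed sizes and learning rate
W = rng.normal(0.0, 0.1, (n_units, n_in))               # feedforward weights (constrained Hebbian)
L = np.zeros((n_units, n_units))                         # lateral weights (anti-Hebbian)

# Toy data concentrated in a 4-dimensional subspace of the input space
basis = rng.normal(0.0, 1.0, (4, n_in)) / np.sqrt(n_in)
for _ in range(5000):
    x = rng.normal(0.0, 1.0, 4) @ basis + 0.05 * rng.normal(0.0, 1.0, n_in)
    y = W @ x + L @ (W @ x)                              # one pass of lateral interaction
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)  # Oja-style constrained Hebbian rule
    L -= eta * np.outer(y, y)                            # anti-Hebbian decorrelation
    np.fill_diagonal(L, 0.0)

# Inspect output correlations on fresh data; the anti-Hebbian terms push these toward zero
X = rng.normal(0.0, 1.0, (1000, 4)) @ basis
Y = X @ W.T + X @ W.T @ L.T
print(np.round(np.corrcoef(Y.T), 2))
```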

