Hebbian Learning
Recently Published Documents


TOTAL DOCUMENTS: 607 (last five years: 94)

H-INDEX: 45 (last five years: 4)

2022 ◽ Vol 15 ◽ Author(s): Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan

Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. We propose LoCal+SGD, a new algorithmic approach that accelerates DNN training by selectively combining localized (Hebbian) learning with a Stochastic Gradient Descent (SGD) based training framework. Back-propagation is a computationally expensive process that requires two Generalized Matrix Multiply (GEMM) operations per layer to compute the error and weight gradients. We alleviate this by selectively updating the weights of some layers using localized learning rules that require only one GEMM operation per layer. Further, since localized weight updates are performed during the forward pass itself, the activations of such layers do not need to be stored until the backward pass, reducing the memory footprint. Localized updates can substantially boost training speed, but they must be used judiciously to preserve accuracy and convergence. We address this challenge through a learning mode selection algorithm that gradually moves layers to localized learning as training progresses. Specifically, for each epoch, the algorithm identifies a Localized→SGD transition layer that divides the network into two regions: layers before the transition layer use localized updates, while the transition layer and all later layers use gradient-based updates. We propose both static and dynamic versions of the selection algorithm. The static algorithm uses a pre-defined scheduler function to set the position of the transition layer, while the dynamic algorithm analyzes the dynamics of the weight updates made to the transition layer to determine how the boundary between SGD and localized updates shifts in future epochs. We also propose a low-cost weak supervision mechanism that controls the learning rate of the localized updates based on the overall training loss. We applied LoCal+SGD to eight image recognition CNNs (including ResNet50 and MobileNetV2) across three datasets (CIFAR-10, CIFAR-100, and ImageNet). Our measurements on an Nvidia GTX 1080Ti GPU demonstrate up to a 1.5× improvement in end-to-end training time with ~0.5% loss in Top-1 classification accuracy.
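As a rough sketch of the idea (not the authors' implementation), the toy loop below applies a simple Hebbian outer-product update to the layers before an assumed transition layer during the forward pass and reserves back-propagation for the remaining layers; the network shape, learning rates, decay factor, and fixed transition layer are illustrative placeholders, and the paper's scheduler that moves the transition layer over epochs is left out.

```python
import numpy as np

# Toy two-layer network: layers before the transition layer get a local
# Hebbian update during the forward pass (one GEMM-like product, no stored
# activations needed for backprop); the rest are trained with ordinary SGD.
rng = np.random.default_rng(0)
W = [rng.normal(0, 0.1, (32, 64)), rng.normal(0, 0.1, (64, 10))]
eta_local, eta_sgd = 1e-3, 1e-2

def forward_with_local_updates(x, transition_layer):
    acts = [x]
    for i, w in enumerate(W):
        y = np.maximum(acts[-1] @ w, 0.0)                      # ReLU layer
        if i < transition_layer:                               # localized (Hebbian) region
            W[i] = 0.999 * W[i] + eta_local * acts[-1].T @ y   # Hebbian update with mild decay
        acts.append(y)
    return acts

def sgd_region_update(acts, grad_out, transition_layer):
    g = grad_out
    for i in range(len(W) - 1, transition_layer - 1, -1):
        g = g * (acts[i + 1] > 0)               # ReLU derivative
        g_prev = g @ W[i].T                     # error gradient (GEMM 1)
        W[i] -= eta_sgd * acts[i].T @ g         # weight gradient (GEMM 2)
        g = g_prev

x, target = rng.normal(size=(8, 32)), rng.normal(size=(8, 10))
acts = forward_with_local_updates(x, transition_layer=1)
sgd_region_update(acts, acts[-1] - target, transition_layer=1)
```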


2021 ◽ Author(s): Andrea Ferigo, Giovanni Iacca, Eric Medvet, Federico Pigozzi

According to Hebbian theory, synaptic plasticity is the ability of neurons to strengthen or weaken the synapses between them in response to stimuli. It plays a fundamental role in the processes of learning and memory in biological neural networks. With plasticity, biological agents can adapt on multiple timescales and outclass artificial agents, the majority of which still rely on static Artificial Neural Network (ANN) controllers. In this work, we focus on Voxel-based Soft Robots (VSRs), a class of simulated artificial agents composed of aggregations of elastic cubic blocks. We propose a Hebbian ANN controller in which every synapse is associated with a Hebbian rule that controls how the weight is adapted during the VSR's lifetime. For a given morphology, we optimize the controller for the task of locomotion by evolving the parameters of the Hebbian rules rather than the weights themselves. Our results show that the Hebbian controller is comparable to, and often better than, a non-Hebbian baseline, and that it is more adaptable to unforeseen damage. We also provide novel insights into the inner workings of plasticity and demonstrate that "true" learning does take place, as the evolved controllers improve over their lifetime and generalize well.
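One common way to parameterize such per-synapse rules is the "ABCD" form sketched below; whether the authors use exactly this form is an assumption here, but the key point, evolving the rule parameters rather than the weights, is the same.

```python
import numpy as np

# Per-synapse Hebbian rule of the ABCD form:
#   w <- w + eta * (A*x*y + B*x + C*y + D)
# Evolution optimizes (A, B, C, D, eta) for every synapse; the weights
# themselves are adapted online during the robot's lifetime.
def hebbian_step(weights, rule_params, x, y):
    """weights: (n_in, n_out); rule_params: (n_in, n_out, 5); x: (n_in,); y: (n_out,)."""
    A, B, C, D, eta = np.moveaxis(rule_params, -1, 0)
    pre, post = x[:, None], y[None, :]            # broadcast pre/post activations
    return weights + eta * (A * pre * post + B * pre + C * post + D)

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, (4, 3))                    # toy 4-input, 3-output layer
rules = rng.normal(0, 0.1, (4, 3, 5))             # evolved rule parameters
x, y = rng.random(4), np.tanh(rng.random(3))      # one control step's activations
w = hebbian_step(w, rules, x, y)                  # lifetime weight adaptation
```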


Sensors ◽ 2021 ◽ Vol 21 (24) ◽ pp. 8217 ◽ Author(s): Oliver W. Layton

Most algorithms for steering, obstacle avoidance, and moving object detection rely on accurate self-motion estimation, a problem animals solve in real time as they navigate through diverse environments. One biological solution leverages optic flow, the changing pattern of motion experienced on the eye during self-motion. Here I present ARTFLOW, a biologically inspired neural network that learns patterns in optic flow to encode the observer’s self-motion. The network combines the fuzzy ART unsupervised learning algorithm with a hierarchical architecture based on the primate visual system. This design affords fast, local feature learning across parallel modules in each network layer. Simulations show that the network is capable of learning stable patterns from optic flow simulating self-motion through environments of varying complexity with only one epoch of training. ARTFLOW trains substantially faster and yields self-motion estimates that are far more accurate than a comparable network that relies on Hebbian learning. I show how ARTFLOW serves as a generative model to predict the optic flow that corresponds to neural activations distributed across the network.
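For reference, the core fuzzy ART operations that each module's unsupervised learning relies on (category choice, vigilance test, and resonance learning) can be sketched as follows; the parameter values and the complement-coded toy input are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal fuzzy ART step: pick the best-matching category, check the
# vigilance criterion, and either update that category or recruit a new one.
# Inputs are assumed complement-coded so every pattern has the same L1 norm.
def fuzzy_art_step(categories, pattern, alpha=0.001, beta=1.0, rho=0.75):
    if not categories:                                        # first pattern founds a category
        categories.append(pattern.copy())
        return 0
    match = [np.minimum(pattern, w) for w in categories]      # fuzzy AND with each category
    choice = [m.sum() / (alpha + w.sum()) for m, w in zip(match, categories)]
    for j in np.argsort(choice)[::-1]:                        # best-matching categories first
        if match[j].sum() / pattern.sum() >= rho:             # vigilance test
            categories[j] = beta * match[j] + (1 - beta) * categories[j]
            return j
    categories.append(pattern.copy())                         # no resonance: new category
    return len(categories) - 1

# Complement coding: concatenate the input with its complement.
x = np.array([0.2, 0.9, 0.4])
pattern = np.concatenate([x, 1.0 - x])
cats = []
fuzzy_art_step(cats, pattern)
```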


2021 ◽ Author(s): Henry Powell, Mathias Winkel, Alexander V. Hopp, Helmut Linde

A variety of behaviors, like spatial navigation or bodily motion, can be formulated as graph traversal problems through cognitive maps. We present a neural network model that can solve such tasks and is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus. The neurons and synaptic connections in the model represent structures that can result from self-organization into a cognitive map via Hebbian learning, i.e., a graph in which each neuron represents a point on some abstract, task-relevant manifold and the recurrent connections encode a distance metric on the manifold. Graph traversal problems are solved by wave-like activation patterns that travel through the recurrent network and guide a localized peak of activity onto a path from some starting position to a target state.
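One way to read this mechanism, in a deliberately simplified discrete form, is sketched below: a wave launched from the target spreads over the graph, its arrival time at each node acts as a distance, and the activity peak then moves from the start toward ever-earlier arrival times. This is an illustration of the idea, not the authors' network model.

```python
from collections import deque

# Wave propagation as breadth-first arrival times, then greedy descent of
# those times from the start node to the target.
def wave_arrival_times(adjacency, target):
    times = {target: 0}
    frontier = deque([target])
    while frontier:
        node = frontier.popleft()
        for nxt in adjacency[node]:
            if nxt not in times:               # wavefront reaches nxt one step later
                times[nxt] = times[node] + 1
                frontier.append(nxt)
    return times

def traverse(adjacency, start, target):
    times = wave_arrival_times(adjacency, target)
    path, node = [start], start
    while node != target:
        node = min(adjacency[node], key=times.get)   # follow earliest arrival
        path.append(node)
    return path

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(traverse(graph, "A", "D"))   # e.g. ['A', 'B', 'D']
```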


2021 ◽ pp. 108014 ◽ Author(s): Parminder Kaur, Avleen Kaur Malhi, Husanbir Singh Pannu

2021 ◽ Vol 104 (18) ◽ Author(s): Weichao Yu, Jiang Xiao, Gerrit E. W. Bauer

2021 ◽ Vol 15 ◽ Author(s): Corentin Delacour, Aida Todri-Sanial

An Oscillatory Neural Network (ONN) is an emerging neuromorphic architecture in which oscillators represent neurons and information is encoded in the oscillators' phase relations. In an ONN, oscillators are coupled through electrical elements that define the network's weights, enabling massively parallel computation. Because the weights encode the network's functionality, mapping weights to coupling elements plays a crucial role in ONN performance. In this work, we investigate relaxation oscillators based on VO2, and we propose a methodology for mapping Hebbian coefficients to ONN coupling resistances, enabling large-scale ONN design. We develop an analytical framework that maps weight coefficients to coupling resistor values and use it to analyze ONN architecture performance. We report on an ONN with 60 fully connected oscillators that performs pattern recognition as a Hopfield Neural Network.
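To make the Hebbian side of this concrete, the sketch below builds Hopfield-style Hebbian coefficients from stored patterns and then applies a naive inverse-magnitude weight-to-resistance mapping; the R_min value and the mapping formula are placeholders for illustration, not the analytical mapping derived in the paper.

```python
import numpy as np

# Hebbian (outer-product) weights for a Hopfield-style pattern memory,
# followed by a naive mapping of each weight to a coupling resistance.
patterns = np.array([[1, -1, 1, -1, 1],          # stored +/-1 patterns
                     [1, 1, -1, -1, 1]])
W = patterns.T @ patterns / patterns.shape[0]    # Hebbian coefficients
np.fill_diagonal(W, 0)                           # no self-coupling

R_min = 1e3                                      # ohms, illustrative
with np.errstate(divide="ignore"):
    R = np.where(W != 0, R_min / np.abs(W), np.inf)   # stronger weight -> smaller R
sign = np.sign(W)                                # >0: in-phase coupling, <0: anti-phase

print(np.round(W, 2))
print(np.round(R, 0))
```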


2021 ◽ pp. 1-33 ◽ Author(s): Kevin Berlemont, Jean-Pierre Nadal

In experiments on perceptual decision making, individuals learn a categorization task through trial-and-error protocols. We explore the capacity of a decision-making attractor network to learn a categorization task through reward-based, Hebbian-type modifications of the weights incoming from the stimulus-encoding layer. For the latter, we assume a standard layer with a large number of stimulus-specific neurons. Within the general framework of Hebbian learning, we hypothesize that the learning rate is modulated by the reward obtained on each trial. Surprisingly, we find that when the coding layer has been optimized in view of the categorization task, such reward-modulated Hebbian learning (RMHL) fails to efficiently extract category membership. In previous work, we showed that the nonlinear dynamics of attractor neural networks account for behavioral confidence in sequences of decision trials. Taking advantage of these findings, we propose that learning is controlled by confidence, as computed from the neural activity of the decision-making attractor network. Here we show that this confidence-controlled, reward-based Hebbian learning efficiently extracts categorical information from the optimized coding layer. The proposed learning rule is local and, in contrast to RMHL, does not require storing the average rewards obtained on previous trials. In addition, we find that the confidence-controlled learning rule achieves near-optimal performance. In accordance with this result, we show that the learning rule approximates a gradient descent method on a reward-maximizing cost function.
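A schematic contrast between the two rules, under the assumption that the trial's confidence stands in for the expected reward that RMHL would otherwise have to store, might look like this; the exact functional forms in the paper may differ.

```python
import numpy as np

eta = 0.01  # illustrative learning rate

def rmhl_update(w, x_pre, y_post, reward, avg_reward):
    # Reward-modulated Hebbian learning: the Hebbian term is scaled by the
    # deviation of the obtained reward from a stored running average.
    return w + eta * (reward - avg_reward) * np.outer(x_pre, y_post)

def confidence_controlled_update(w, x_pre, y_post, reward, confidence):
    # Confidence-controlled, reward-based Hebbian learning: the trial's own
    # confidence (read out from the attractor network) replaces the stored
    # reward average, keeping the rule local and memoryless across trials.
    return w + eta * (reward - confidence) * np.outer(x_pre, y_post)
```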


2021 ◽ pp. 1-34 ◽ Author(s): Xiaolin Hu, Zhigang Zeng

The functional properties of neurons in the primary visual cortex (V1) are thought to be closely related to the structural properties of this network, but the specific relationships remain unclear. Previous theoretical studies have suggested that sparse coding, an energy-efficient coding method, might underlie the orientation selectivity of V1 neurons. We thus aimed to delineate how the neurons are wired to produce this feature. We constructed a model and endowed it with a simple Hebbian learning rule to encode images of natural scenes. The excitatory neurons fired sparsely in response to images and developed strong orientation selectivity. After learning, the connectivity between excitatory neuron pairs, inhibitory neuron pairs, and excitatory-inhibitory neuron pairs depended on firing pattern and receptive field similarity between the neurons. The receptive fields (RFs) of excitatory neurons and inhibitory neurons were well predicted by the RFs of presynaptic excitatory neurons and inhibitory neurons, respectively. The excitatory neurons formed a small-world network, in which certain local connection patterns were significantly overrepresented. Bidirectionally manipulating the firing rates of inhibitory neurons caused linear transformations of the firing rates of excitatory neurons, and vice versa. These wiring properties and modulatory effects were congruent with a wide variety of data measured in V1, suggesting that the sparse coding principle might underlie both the functional and wiring properties of V1 neurons.
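As a loose illustration of what a "simple Hebbian learning rule" for the feedforward weights could look like (the actual model's dynamics, inhibition, and normalization are richer than this sketch), consider:

```python
import numpy as np

# Schematic Hebbian update for feedforward weights onto excitatory neurons:
# weights grow with the correlation between the image patch and the sparse,
# rectified responses, and each receptive field is renormalized to stay
# bounded. Threshold and learning rate are illustrative placeholders.
rng = np.random.default_rng(0)
n_pixels, n_exc = 64, 100
eta = 0.01

def hebbian_encode(W, patch, threshold=1.0):
    rates = np.maximum(W @ patch - threshold, 0.0)   # sparse responses
    W = W + eta * np.outer(rates, patch)             # Hebbian: post * pre
    W /= np.linalg.norm(W, axis=1, keepdims=True)    # keep each RF normalized
    return W, rates

W = rng.normal(0, 0.1, (n_exc, n_pixels))
patch = rng.normal(size=n_pixels)                    # one whitened image patch
W, rates = hebbian_encode(W, patch)
```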

