Cell-type–specific neuromodulation guides synaptic credit assignment in a spiking neural network

2021 ◽  
Vol 118 (51) ◽  
pp. e2111821118
Author(s):  
Yuhan Helena Liu ◽  
Stephen Smith ◽  
Stefan Mihalas ◽  
Eric Shea-Brown ◽  
Uygar Sümbül

Brains learn tasks via experience-driven differential adjustment of their myriad individual synaptic connections, but the mechanisms that target appropriate adjustment to particular connections remain deeply enigmatic. While Hebbian synaptic plasticity, synaptic eligibility traces, and top-down feedback signals surely contribute to solving this synaptic credit-assignment problem, alone, they appear to be insufficient. Inspired by new genetic perspectives on neuronal signaling architectures, here, we present a normative theory for synaptic learning, where we predict that neurons communicate their contribution to the learning outcome to nearby neurons via cell-type–specific local neuromodulation. Computational tests suggest that neuron-type diversity and neuron-type–specific local neuromodulation may be critical pieces of the biological credit-assignment puzzle. They also suggest algorithms for improved artificial neural network learning efficiency.

2020 ◽  
Author(s):  
Yuhan Helena Liu ◽  
Stephen Smith ◽  
Stefan Mihalas ◽  
Eric Shea-Brown ◽  
Uygar Sümbül

Animals learn and form memories by jointly adjusting the efficacy of their synapses. How they efficiently solve the underlying temporal credit assignment problem remains elusive. Here, we re-analyze the mathematical basis of gradient descent learning in recurrent spiking neural networks (RSNNs) in light of recent single-cell transcriptomic evidence for cell-type-specific local neuropeptide signaling in the cortex. Our normative theory posits an important role for neuronal cell types and local diffusive communication in enabling biologically plausible and efficient weight updates. While obeying fundamental biological constraints, including the separation of excitatory and inhibitory cell types and connection sparsity, we trained RSNNs on temporal credit assignment tasks spanning seconds and observed that including local modulatory signaling improved learning efficiency. Our learning rule puts forth a novel form of interaction between modulatory signals and synaptic transmission. Moreover, it suggests a computationally efficient on-chip learning method for bio-inspired artificial intelligence.
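The rule sketched in this abstract combines per-synapse eligibility traces with top-down learning signals that are pooled locally by cell type. As a rough illustration only, here is a minimal rate-based sketch of such a three-factor update; the toy network, the `cell_type` assignment, and the pooling by a plain within-type mean are hypothetical simplifications for this example, not the authors' published rule:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 8, 6
# Hypothetical cell-type labels for the postsynaptic neurons (two types)
cell_type = np.array([0, 0, 0, 1, 1, 1])

W = rng.normal(scale=0.1, size=(n_post, n_pre))   # synaptic weights
elig = np.zeros_like(W)                           # per-synapse eligibility traces
decay = 0.9                                       # trace decay per time step
lr = 0.01                                         # learning rate

def step(pre_act, post_act, learn_sig):
    """One three-factor update with cell-type-pooled local modulation.

    pre_act   -- presynaptic activity, shape (n_pre,)
    post_act  -- postsynaptic activity, shape (n_post,)
    learn_sig -- per-neuron top-down learning signal, shape (n_post,)
    """
    global W, elig
    # Factors 1 and 2: pre/post coincidence accumulates into a decaying trace
    elig = decay * elig + np.outer(post_act, pre_act)
    # Factor 3: each neuron's learning signal is broadcast diffusively and
    # pooled over neurons of the same (hypothetical) cell type
    mod = np.array([learn_sig[cell_type == cell_type[i]].mean()
                    for i in range(n_post)])
    W += lr * mod[:, None] * elig
```

In this sketch, replacing the within-type mean by each neuron's own learning signal recovers a standard per-neuron three-factor rule; the pooling is what stands in for cell-type-specific local neuromodulation.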


2011 ◽  
Vol 131 (11) ◽  
pp. 1889-1894
Author(s):  
Yuta Tsuchida ◽  
Michifumi Yoshioka

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, which tracks the mutual information between the input and a hidden layer and between a hidden layer and the target over the course of training, has recently been proposed as a tool for analyzing neural network training. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must instead be estimated, which has led to apparently inconsistent or even contradictory results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator of mutual information with a geometric interpretation. With this interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
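As an illustration of the binning estimator discussed above, the following sketch discretizes two scalar variables into equal-width bins and computes mutual information (in nats) from the joint histogram; the bin count and the toy data are arbitrary choices for this example, not the paper's experimental setup:

```python
import numpy as np

def binned_mutual_information(x, y, bins=10):
    """Estimate I(X;Y) in nats by discretizing x and y into equal-width bins."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()                 # joint distribution over bin pairs
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of x-bins
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal of y-bins
    nz = p_xy > 0                              # avoid log(0) on empty bins
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
t = np.tanh(2.0 * x)              # a "hidden activation": deterministic in x
noise = rng.normal(size=5000)     # independent of x
```

A deterministic mapping like `t` yields a large estimate, while `noise` yields an estimate near zero (up to the well-known positive bias of binning estimators); how such estimates depend on the binning is exactly the kind of question the geometric interpretation above addresses.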


1994 ◽  
Vol 04 (01) ◽  
pp. 23-51 ◽  
Author(s):  
JEROEN DEHAENE ◽  
JOOS VANDEWALLE

A number of matrix flows, based on isospectral and isodirectional flows, are studied and modified for local implementability on a network structure. The flows converge to matrices with a predefined spectrum and with eigenvectors determined by an external signal. The flows can be useful for adaptive signal processing applications and are applied to neural network learning.
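One classical isospectral flow of the kind described above is Brockett's double-bracket flow, dH/dt = [H, [H, N]], which preserves the spectrum of H while driving it toward a matrix that commutes with N. A minimal sketch with forward-Euler integration follows; the step size, matrix sizes, and the diagonal N standing in for the external signal are all illustrative, and the paper's specific flows differ:

```python
import numpy as np

def double_bracket_flow(H, N, dt=1e-4, steps=200000):
    """Euler integration of the double-bracket flow dH/dt = [H, [H, N]].

    The continuous flow is isospectral (it preserves the eigenvalues of H)
    and, for symmetric H and diagonal N with distinct entries, drives H
    toward a diagonal matrix; N acts as an external signal selecting the
    target eigenvector basis.
    """
    H = H.copy()
    for _ in range(steps):
        C = H @ N - N @ H                 # commutator [H, N]
        H = H + dt * (H @ C - C @ H)      # Euler step of [H, [H, N]]
    return H

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
H0 = 0.5 * (A + A.T)                      # symmetric initial matrix
N = np.diag([4.0, 3.0, 2.0, 1.0])         # stand-in for the external signal
H_final = double_bracket_flow(H0, N)
```

Along the flow, tr(HN) increases monotonically and the off-diagonal part of H shrinks, while the eigenvalues stay fixed up to the discretization error of the Euler steps.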

