One-shot learning with spiking neural networks

2020 ◽  
Author(s):  
Franz Scherr ◽  
Christoph Stöckl ◽  
Wolfgang Maass

Abstract
Understanding how one-shot learning can be accomplished through synaptic plasticity in neural networks of the brain is a major open problem. We propose that approximations to BPTT in recurrent networks of spiking neurons (RSNNs), such as e-prop, cannot achieve this because their local synaptic plasticity is gated by learning signals that are rather ad hoc from a biological perspective: random projections of instantaneously arising losses at the network outputs, analogous to Broadcast Alignment for feedforward networks. In contrast, synaptic plasticity in the brain is gated by learning signals such as dopamine, which are emitted by specialized brain areas, e.g., the VTA. These brain areas have arguably been optimized by evolution to gate synaptic plasticity in such a way that fast learning of survival-relevant tasks is enabled. We found that a corresponding model architecture, in which learning signals are emitted by a separate RSNN that is optimized to facilitate fast learning, enables one-shot learning via local synaptic plasticity in RSNNs for large families of learning tasks. The same learning approach also supports fast spike-based learning of posterior probabilities of potential input sources, thereby providing a new basis for probabilistic reasoning in RSNNs. Finally, our approach solves an open problem in neuromorphic engineering: on-chip one-shot learning is highly desirable for spike-based neuromorphic devices, but had so far not been achieved. Because our method maps easily onto neuromorphic hardware, it closes this gap.
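
The gated plasticity at the heart of this architecture has a simple generic form: each weight update is the product of a locally computed eligibility trace and a top-down learning signal. The sketch below illustrates that generic rule only; it uses an exact readout error as a stand-in learning signal (in e-prop the signal is a random error projection, and in the proposed architecture it is emitted by a separately optimized RSNN), and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in = 200, 5
x = rng.normal(size=(T, n_in))          # presynaptic activity traces
w_target = rng.normal(size=n_in)
y_star = x @ w_target                   # target readout trace

w = np.zeros(n_in)
alpha = 0.9                             # decay of the eligibility trace
for epoch in range(200):
    e = np.zeros(n_in)                  # eligibility traces (local, forward-running)
    for t in range(T):
        e = alpha * e + x[t]
        L = y_star[t] - x[t] @ w        # learning signal (stand-in: exact error)
        w += 1e-3 * L * e               # gated local plasticity: dw = L * e
loss = float(np.mean((x @ w - y_star) ** 2))
```

The point of the factorization is that the trace e is computable forward in time from purely local quantities, so only the scalar learning signal L needs to be delivered from elsewhere.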

2021 ◽  
Author(s):  
Anand Subramoney ◽  
Guillaume Bellec ◽  
Franz Scherr ◽  
Robert Legenstein ◽  
Wolfgang Maass

Abstract
Spike-based neural network models have so far not been able to reproduce the capability of the brain to learn from very few examples, often even from just a single one. We show that this deficiency of models disappears if synaptic weights are allowed to store priors and other information that optimize the learning process, while the network state is used to quickly absorb information from new examples. For that, it suffices to include biologically realistic neurons with spike frequency adaptation in the neural network model, and to optimize the learning process through meta-learning. We demonstrate this on a variety of tasks, including fast learning and deletion of attractors, adaptation of motor control to changes in the body, and solving the Morris water maze task – a paradigm for fast learning of navigation to a new goal.

Significance Statement
It has often been conjectured that STDP or other rules for synaptic plasticity can explain only some of the learning capabilities of brains. In particular, learning a new task from few trials is likely to engage additional mechanisms. Results from machine learning show that artificial neural networks can learn from few trials by storing information about them in their network state, rather than encoding it in synaptic weights. But these machine learning methods require neural networks with biologically unrealistic LSTM (Long Short-Term Memory) units. We show that biologically quite realistic models of neural networks in the brain can exhibit similar capabilities. In particular, these networks are able to store priors that enable learning from very few examples.
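
Spike frequency adaptation, the key ingredient named above, can be illustrated with standard leaky integrate-and-fire dynamics plus an adaptive threshold. This is a minimal sketch with invented constants, not the paper's model:

```python
import numpy as np

# Leaky integrate-and-fire neuron with an adaptive threshold
# (illustrative constants, not the parameters used in the paper).
T = 300
v, a = 0.0, 0.0
alpha, rho = 0.95, 0.98      # membrane and adaptation decay factors
v_th0, beta = 1.0, 0.2       # baseline threshold and adaptation strength
I = 0.12                     # constant input current
spikes = []
for t in range(T):
    v = alpha * v + I        # leaky integration of the input
    a = rho * a              # adaptation variable decays slowly ...
    if v > v_th0 + beta * a: # ... and raises the effective threshold
        spikes.append(t)
        a += 1.0             # each spike increments adaptation
        v = 0.0              # reset membrane potential
isis = np.diff(spikes)       # inter-spike intervals lengthen over time
```

Because the adaptation variable decays much more slowly than the membrane potential, it carries information about recent activity across seconds of network time, which is what the meta-learned networks exploit as a working-memory substrate.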


2021 ◽  
Author(s):  
Ceca Kraišniković ◽  
Wolfgang Maass ◽  
Robert Legenstein

The brain uses recurrent spiking neural networks for higher cognitive functions such as symbolic computations, in particular, mathematical computations. We review the current state of research on spike-based symbolic computations of this type. In addition, we present new results showing that surprisingly small spiking neural networks can perform symbolic computations on bit sequences and numbers, and can even learn such computations using a biologically plausible learning rule. The resulting networks operate in a rather low firing-rate regime, where they cannot simply emulate artificial neural networks by encoding continuous values through firing rates. We therefore propose a new paradigm for symbolic computation in neural networks, one that provides concrete hypotheses about the organization of symbolic computations in the brain. The employed spike-based network models are also the basis for drastically more energy-efficient computer hardware – neuromorphic hardware. Hence, our results can be seen as building a bridge from symbolic artificial intelligence to energy-efficient implementation in spike-based neuromorphic hardware.


Author(s):  
Shera Lumsden

The field of neuroscience has recently advanced with the realization that music has a profound effect on brain plasticity. The hypothesis that a person is born with a brain that is "hard-wired" has been replaced by the understanding that, while the brain has innate tendencies, it is modifiable and adapts in response to experience (Habib & Besson, 2008). Brain plasticity is necessary for cognitive development to continue (The Neuroscience Institute, 2012). Most infants are born with the basic neural networks needed to begin adapting to their world, including their musical world, and as they grow and learn, neural networks are formed and developed in response to their experiences. The brain, however, does not always develop as expected, and one significant sign is a delay in gross motor coordination. This paper presents research on the brain areas and structures associated with coordination and those involved in the processing of music, hypothesizing that there may be a relationship between the two. This has implications for further study of the effects of music on the brain and of the possibility that music can be used to facilitate brain plasticity and assist the development of coordination skills in those with developmental delays.



Author(s):  
Yu Qi ◽  
Jiangrong Shen ◽  
Yueming Wang ◽  
Huajin Tang ◽  
Hang Yu ◽  
...  

Spiking neural networks (SNNs) are considered biologically plausible and power-efficient on neuromorphic hardware. However, unlike the brain, most existing SNN algorithms have fixed network topologies and connection relationships. This paper proposes a method to jointly learn network connections and link weights. The connection structures are optimized by the spike-timing-dependent plasticity (STDP) rule using timing information, and the link weights are optimized by a supervised algorithm. Connection structures and weights are learned alternately until a termination condition is satisfied. Experiments on four benchmark datasets show that our approach outperforms classical learning methods such as STDP, Tempotron, and SpikeProp, as well as a state-of-the-art supervised algorithm. In addition, the learned structures reduce the number of connections by about 24%, thus improving the computational efficiency of the network.
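
The alternating scheme can be sketched in a few lines: a binary connection mask is pruned according to a relevance score (here a crude activity-based stand-in for the timing-based STDP criterion), while a supervised delta rule trains the surviving weights. All names and constants below are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: linearly separable labels over 20 binary "spike count" features.
X = rng.integers(0, 2, size=(200, 20)).astype(float)
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(float)

mask = np.ones(20)                        # connection structure (1 = connected)
w = rng.normal(scale=0.1, size=20)        # link weights

for epoch in range(30):
    # Weight phase: supervised delta-rule updates on existing connections.
    for xi, yi in zip(X, y):
        err = yi - float((xi * mask) @ w > 0)
        w += 0.05 * err * xi * mask
    # Structure phase: prune the least relevant remaining connection
    # (|w| * mean activity is a crude stand-in for an STDP-based score).
    if mask.sum() > 15:                   # stop at ~25% fewer connections
        relevance = np.abs(w) * X.mean(axis=0) + (1 - mask) * 1e9
        mask[np.argmin(relevance)] = 0

acc = float(((np.dot(X * mask, w) > 0).astype(float) == y).mean())
```

The alternation matters: weights trained on the current structure inform which connections are dispensable, and the pruned structure in turn constrains the next round of weight learning.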


2021 ◽  
Author(s):  
Christoph Stoeckl ◽  
Dominik Lang ◽  
Wolfgang Maass

Genetically encoded structure endows neural networks of the brain with innate computational capabilities that enable odor classification and basic motor control right after birth. It is also conjectured that the stereotypical laminar organization of neocortical microcircuits provides basic computing capabilities on which subsequent learning can build. However, it has remained unknown how nature achieves this. Insight from artificial neural networks does not help to solve this problem, since their computational capabilities result from learning. We show that genetically encoded control over connection probabilities between different types of neurons suffices for programming substantial computing capabilities into neural networks. This insight also provides a method for enhancing computing and learning capabilities of artificial neural networks and neuromorphic hardware through clever initialization.
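
Genetically encoded control of this kind can be captured by a small table of connection probabilities between neuron types, from which individual connectomes are sampled. A minimal sketch, with invented types and probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two neuron types with a hypothetical genetically encoded table of
# connection probabilities; individual networks are samples from it.
types = np.array([0] * 8 + [1] * 2)      # 8 excitatory, 2 inhibitory neurons
p = np.array([[0.1, 0.4],                # P(E->E), P(E->I)
              [0.5, 0.2]])               # P(I->E), P(I->I)
sign = np.array([1.0, -1.0])             # Dale's law: E excites, I inhibits

P = p[types][:, types]                   # pairwise connection probabilities
C = rng.random((10, 10)) < P             # sampled binary connectome
W = C * sign[types][:, None]             # signed weights, rows = presynaptic
```

The genome thus needs to specify only the small table p (a few numbers per type pair), not the full connectome, yet every sampled network shares the same type-level statistics.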


2019 ◽  
Author(s):  
Leo Kozachkov ◽  
Mikael Lundqvist ◽  
Jean-Jacques Slotine ◽  
Earl K. Miller

Abstract
The brain consists of many interconnected networks with time-varying activity. There are multiple sources of noise and variation, yet activity must eventually converge to a stable state for its computations to make sense. We approached this from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms include anti-Hebbian plasticity, synaptic sparsity, and excitatory-inhibitory balance. We leveraged these findings to construct networks that perform functionally relevant computations in the presence of noise and disturbance. Our work provides a blueprint for constructing stable, plastic, and distributed networks.
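
The basic contraction property is easy to check numerically: for rate dynamics dx/dt = -x + W tanh(x) + u(t), a spectral norm ||W|| < 1 keeps the symmetric part of the Jacobian uniformly negative definite, so any two trajectories driven by the same input converge toward each other regardless of initial conditions. A minimal sketch (Euler integration, illustrative constants; not the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
W = rng.normal(size=(n, n))
W *= 0.5 / np.linalg.norm(W, 2)          # spectral norm 0.5 < 1 => contracting

def step(x, u, dt=0.01):
    """Euler step of the rate dynamics dx/dt = -x + W tanh(x) + u."""
    return x + dt * (-x + W @ np.tanh(x) + u)

x1, x2 = rng.normal(size=n), rng.normal(size=n)
d0 = np.linalg.norm(x1 - x2)
for t in range(2000):
    u = np.sin(0.01 * t) * np.ones(n)    # shared time-varying input
    x1, x2 = step(x1, u), step(x2, u)
d1 = np.linalg.norm(x1 - x2)             # trajectories converge: d1 << d0
```

Note that convergence here is a property of the dynamics, not of any particular input: the same bound holds for every input signal, which is what makes contraction useful for composing networks.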


Author(s):  
Armin Schnider

What diseases cause confabulations and which are the brain areas whose damage is responsible? This chapter reviews the causes, both historic and present, of confabulations and deduces the anatomo-clinical relationships for the four forms of confabulation in the following disorders: alcoholic Korsakoff syndrome, traumatic brain injury, rupture of an anterior communicating artery aneurysm, posterior circulation stroke, herpes and limbic encephalitis, hypoxic brain damage, degenerative dementia, tumours, schizophrenia, and syphilis. Overall, clinically relevant confabulation is rare. Some aetiologies have become more important over time, others have virtually disappeared. While confabulations seem to be more frequent after anterior brain damage, only one form has a distinct anatomical basis.


Molecules ◽  
2020 ◽  
Vol 25 (9) ◽  
pp. 2104 ◽  
Author(s):  
Eleonora Ficiarà ◽  
Shoeb Anwar Ansari ◽  
Monica Argenziano ◽  
Luigi Cangemi ◽  
Chiara Monge ◽  
...  

Magnetic Oxygen-Loaded Nanobubbles (MOLNBs), manufactured by adding Superparamagnetic Iron Oxide Nanoparticles (SPIONs) to the surface of polymeric nanobubbles, are investigated as theranostic carriers for delivering oxygen and chemotherapy to brain tumors. Their physicochemical and cytotoxicological properties, their in vitro internalization by human brain microvascular endothelial cells, and the motion of MOLNBs in a static magnetic field were investigated. MOLNBs are safe oxygen-loaded vectors able to cross the brain membranes and be driven through the Central Nervous System (CNS) to deliver their cargoes to specific sites of interest. In addition, MOLNBs can be monitored via either Magnetic Resonance Imaging (MRI) or Ultrasound (US) sonography. MOLNBs can find application in targeting brain tumors, since they can enhance conventional radiotherapy and deliver chemotherapy while being driven by ad hoc tailored magnetic fields under MRI and/or US monitoring.

