Neural heterogeneity promotes robust learning

2021
Vol 12 (1)
Author(s):  
Nicolas Perez-Nieves ◽  
Vincent C. H. Leung ◽  
Pier Luigi Dragotti ◽  
Dan F. M. Goodman

Abstract: The brain is a hugely diverse, heterogeneous structure. Whether or not heterogeneity at the neural level plays a functional role remains unclear, and it has been relatively little explored in models, which are often highly homogeneous. We compared the performance of spiking neural networks trained to carry out tasks of real-world difficulty, with varying degrees of heterogeneity, and found that heterogeneity substantially improved task performance. Learning with heterogeneity was more stable and robust, particularly for tasks with a rich temporal structure. In addition, the distributions of neuronal parameters in the trained networks are similar to those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes; rather, it may serve an active and important role in allowing animals to learn in changing environments.

2020
Author(s):  
Nicolas Perez-Nieves ◽  
Vincent C. H. Leung ◽  
Pier Luigi Dragotti ◽  
Dan F. M. Goodman

Abstract: The brain has a hugely diverse, heterogeneous structure. By contrast, many functional neural models are homogeneous. We compared the performance of spiking neural networks trained to carry out difficult tasks, with varying degrees of heterogeneity. Introducing heterogeneity in membrane and synapse time constants substantially improved task performance and made learning more stable and robust across multiple training methods, particularly for tasks with a rich temporal structure. In addition, the distributions of time constants in the trained networks closely match those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes; rather, it may serve an active and important role in allowing animals to learn in changing environments.
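The concrete change behind the two entries above is small: instead of one shared membrane time constant, each neuron gets its own, and those per-neuron time constants can be fixed or trained alongside the weights. The numpy sketch below illustrates the idea on a single leaky integrate-and-fire (LIF) layer; the gamma-distributed initialization and all parameter values are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, n = 1e-3, 200, 128          # step (s), steps, neurons

# Heterogeneous membrane time constants: one tau per neuron.
# The gamma distribution here is an illustrative choice; the paper
# reports that trained taus end up heterogeneous, not this exact init.
tau_het = rng.gamma(shape=3.0, scale=0.01, size=n)   # roughly 10-60 ms spread
tau_hom = np.full(n, 0.02)                           # 20 ms for every neuron

def simulate_lif(tau, in_current):
    """Leaky integrate-and-fire layer with per-neuron time constants."""
    alpha = np.exp(-dt / tau)       # per-neuron leak factor
    v = np.zeros(n)
    spikes = np.zeros((T, n))
    for t in range(T):
        v = alpha * v + (1 - alpha) * in_current[t]
        fired = v >= 1.0            # unit firing threshold
        spikes[t] = fired
        v = np.where(fired, 0.0, v) # reset to 0 after a spike
    return spikes

I = rng.uniform(0, 2.5, size=(T, n))   # toy input drive
print("homogeneous  mean spikes/bin:", simulate_lif(tau_hom, I).mean())
print("heterogeneous mean spikes/bin:", simulate_lif(tau_het, I).mean())
```

In a full training setup (e.g. surrogate-gradient descent, one of the methods the authors use), the per-neuron `tau` vector would simply be another trainable parameter alongside the weights.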


2021
Author(s):  
Ceca Kraišniković ◽  
Wolfgang Maass ◽  
Robert Legenstein

The brain uses recurrent spiking neural networks for higher cognitive functions such as symbolic computations, in particular, mathematical computations. We review the current state of research on spike-based symbolic computations of this type. In addition, we present new results which show that surprisingly small spiking neural networks can perform symbolic computations on bit sequences and numbers and even learn such computations using a biologically plausible learning rule. The resulting networks operate in a rather low firing rate regime, where they could not simply emulate artificial neural networks by encoding continuous values through firing rates. Thus, we propose here a new paradigm for symbolic computation in neural networks that provides concrete hypotheses about the organization of symbolic computations in the brain. The employed spike-based network models are the basis for drastically more energy-efficient computer hardware – neuromorphic hardware. Hence, our results can be seen as creating a bridge from symbolic artificial intelligence to energy-efficient implementation in spike-based neuromorphic hardware.
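To give a flavor of what symbolic computation with spikes can mean at the smallest scale, the sketch below computes XOR over a bit sequence, with each bit carried by the presence or absence of a single spike in its time slot. The memoryless threshold units are a deliberate simplification standing in for LIF neurons; the networks in the paper are recurrent, learned, and far more capable, so this is only an illustration of spike-coded bit operations.

```python
import numpy as np

# Bits arrive as spikes: bit 1 = a spike in that time slot, bit 0 = none.
a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b = np.array([1, 1, 0, 1, 0, 1, 0, 0])

def threshold_gate(drive, theta):
    """A memoryless spiking unit: fires whenever synaptic drive >= theta."""
    return (drive >= theta).astype(int)

# OR-unit: either input spike suffices (threshold 1).
# AND-unit: needs coincident spikes (threshold 2).
# XOR-unit: excited by the OR-unit, strongly inhibited by the AND-unit.
or_out  = threshold_gate(a + b, 1)
and_out = threshold_gate(a + b, 2)
xor_out = threshold_gate(or_out - 2 * and_out, 1)

print("a  :", a)
print("b  :", b)
print("xor:", xor_out)        # matches a ^ b bitwise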


Author(s):  
Mohd Hafizul Afifi Abdullah ◽  
Muhaini Othman ◽  
Shahreen Kasim ◽  
Siti Aisyah Mohamed

Analysing environmental events such as flood risk is a challenging task due to the dynamic behaviour of the data. One way to correctly predict the risk of such events is to gather as much related historical data as possible and analyse the correlations between the features that contribute to the event occurrences. Inspired by the brain's working mechanism, spiking neural networks have proven capable of revealing significant associations between the spiking behaviour of different variables during an event. Personalised modelling, on the other hand, allows a personal model to be created for a specific data sample and experiment. Therefore, a personalised modelling method incorporating a spiking neural network is used to create a personalised model for assessing a real-world flood case study in Kuala Krai, Kelantan, based on 2012-2016 historical data provided by the Malaysian Meteorological Department. The results show that the method achieves the highest accuracy among the compared algorithms.
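The core of personalised modelling is easy to state: for each new sample, fit a model only on the most similar historical samples rather than on the whole dataset. The sketch below shows that selection step with a distance-weighted vote; the study itself couples this idea with a spiking-network classifier, and the toy data and parameters here are illustrative assumptions.

```python
import numpy as np

def personalised_predict(x_query, X_hist, y_hist, k=10):
    """Personalised modelling: fit a tiny model on only the k historical
    samples most similar to the query, then predict for the query.
    (Illustrative distance-weighted vote; the paper instead feeds the
    personalised neighbourhood into a spiking neural network.)"""
    d = np.linalg.norm(X_hist - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)            # closer samples count more
    return int(np.round(np.average(y_hist[nearest], weights=w)))

# Toy stand-in for weather features (rainfall, river level, ...) -> flood flag
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(personalised_predict(X[0], X[1:], y[1:]))
```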


Author(s):  
David Gamez

This chapter is an overview of the simulation of spiking neural networks that relates discrete event simulation to other approaches and includes a case study of recent work. The chapter starts with an introduction to the key components of the brain and sets out three neuron models that are commonly used in simulation work. After explaining discrete event, continuous and hybrid simulation, the performance of each method is evaluated and recent research is discussed. To illustrate the issues surrounding this work, the second half of this chapter presents a case study of the SpikeStream neural simulator that covers the architecture, performance and typical applications of this software along with some recent experiments. The last part of the chapter suggests some future trends for work in this area.
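The key idea separating discrete event simulation from clock-driven simulation is that a LIF membrane decays in closed form between inputs, so the simulator can jump from spike event to spike event through a priority queue instead of stepping every neuron at every tick. Below is a minimal sketch of that pattern; it is not SpikeStream's architecture, and the network and parameters are illustrative.

```python
import heapq
import math

# Event-driven LIF: between events the membrane decays analytically,
# so we only do work when a spike actually arrives somewhere.
TAU, V_TH = 0.020, 1.0         # 20 ms membrane, unit threshold

class Neuron:
    def __init__(self):
        self.v, self.t_last = 0.0, 0.0
    def receive(self, t, weight):
        """Decay analytically since the last event, then add the input."""
        self.v = self.v * math.exp(-(t - self.t_last) / TAU) + weight
        self.t_last = t
        if self.v >= V_TH:
            self.v = 0.0
            return True        # this input caused an output spike
        return False

DELAY = 0.002                  # 2 ms axonal delay
neurons = [Neuron() for _ in range(3)]
synapses = {0: [(1, 0.6)], 1: [(2, 0.6)], 2: []}   # chain 0 -> 1 -> 2

events = [(0.000, 0, 1.2), (0.001, 0, 1.2)]        # (time, target, weight)
heapq.heapify(events)
while events:
    t, i, w = heapq.heappop(events)
    if neurons[i].receive(t, w):
        print(f"neuron {i} spiked at t = {t*1000:.1f} ms")
        for j, wij in synapses[i]:
            heapq.heappush(events, (t + DELAY, j, wij))
```

When activity is sparse, this event queue touches far fewer state updates than a clock-driven loop, which is exactly the trade-off the chapter evaluates.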


2020
Author(s):  
Khadeer Ahmed

The brain is a very efficient computing system. It performs very complex tasks while occupying about 2 liters of volume and consuming very little energy. The computation tasks are performed by special cells in the brain called neurons, which compute using electrical pulses and exchange information through chemicals called neurotransmitters. With this as inspiration, several compute models exist today that try to exploit the inherent efficiencies demonstrated by nature. The compute models representing spiking neural networks (SNNs) are biologically plausible and hence are used to study and understand the workings of the brain and nervous system. More importantly, they are used to solve a wide variety of problems in the field of artificial intelligence (AI). They are uniquely suited to modelling temporal and spatio-temporal data paradigms. This chapter explores the fundamental concepts of SNNs, a few of the popular neuron models, how information is represented, learning methodologies, and state-of-the-art platforms for implementing and evaluating SNNs, along with a discussion of their applications and broader role in the field of AI and data networks.
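On the question of how information is represented, the two most common schemes are rate coding (intensity becomes spike count) and temporal or latency coding (intensity becomes spike timing). A minimal sketch of both, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 100              # 1 ms bins, 100 ms encoding window

def rate_encode(x, max_rate=100.0):
    """Rate coding: intensity x in [0, 1] becomes a Poisson spike train
    whose expected rate is x * max_rate (Hz)."""
    return (rng.random(T) < x * max_rate * dt).astype(int)

def latency_encode(x):
    """Temporal coding: a stronger input fires a single, earlier spike."""
    train = np.zeros(T, dtype=int)
    train[int((1.0 - x) * (T - 1))] = 1
    return train

x = 0.8
print("rate    :", rate_encode(x).sum(), "spikes in 100 ms")
print("latency :", np.argmax(latency_encode(x)), "ms to first spike")
```

Latency codes carry the same value in a single spike, which is part of why SNNs are attractive for low-power, temporally structured workloads.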


2021
Author(s):  
Daniel B. Ehrlich ◽  
John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings for how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations, and not with traditional sensory models of working memory. In task-optimized recurrent neural networks we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.


2011
Vol 23 (3)
pp. 656-663
Author(s):  
Chris Christodoulou ◽  
Aristodemos Cleanthous

In this note, we demonstrate that the high firing irregularity produced by the leaky integrate-and-fire neuron with the partial somatic reset mechanism enhances learning. This mechanism has been shown to be the most likely candidate for how the brain reproduces the highly irregular firing of cortical neurons at high rates (Bugmann, Christodoulou, & Taylor, 1997; Christodoulou & Bugmann, 2001). More specifically, partial reset enhances reward-modulated spike-timing-dependent plasticity with an eligibility trace when used in spiking neural networks, as shown by results on the simple XOR benchmark problem as well as on a complex multiagent task.
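Two mechanisms from this abstract are easy to make concrete: partial somatic reset (after a spike, the membrane is reset to a fraction of threshold rather than to zero, leaving residual depolarization that keeps high-rate firing irregular) and reward-modulated STDP with an eligibility trace (STDP pairings are tagged in a decaying trace and only converted into weight changes when reward arrives). The single-synapse sketch below shows the plumbing; the reset factor, time constants, and pairing rule are illustrative, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 500
BETA = 0.91                    # partial-reset fraction (illustrative value)
TAU_M, TAU_E, V_TH = 0.010, 0.050, 1.0
A_PLUS, LR = 0.01, 0.5

w = 0.5                        # one plastic synapse, kept scalar for clarity
v, elig, pre_trace = 0.0, 0.0, 0.0

for t in range(T):
    pre = rng.random() < 0.05              # Bernoulli presynaptic spikes
    pre_trace += -(dt / TAU_M) * pre_trace + pre
    v += (dt / TAU_M) * (-v) + w * pre + 0.02 * rng.standard_normal()
    elig -= (dt / TAU_E) * elig            # eligibility trace decays slowly
    if v >= V_TH:
        v = BETA * V_TH        # partial somatic reset: residual depolarization
                               # remains, keeping high-rate firing irregular
        elig += A_PLUS * pre_trace         # tag the pre-before-post pairing

reward = 1.0                               # delayed task feedback (illustrative)
w += LR * reward * elig                    # R-STDP: reward gates the trace
print("updated weight:", round(w, 4))
```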


Author(s):  
Yu Qi ◽  
Jiangrong Shen ◽  
Yueming Wang ◽  
Huajin Tang ◽  
Hang Yu ◽  
...  

Spiking neural networks (SNNs) are considered to be biologically plausible and power-efficient on neuromorphic hardware. However, unlike the brain, most existing SNN algorithms have fixed network topologies and connection relationships. This paper proposes a method to learn network connections and link weights jointly. The connection structures are optimized by the spike-timing-dependent plasticity (STDP) rule with timing information, and the link weights are optimized by a supervised algorithm. The connection structures and the weights are learned alternately until a termination condition is satisfied. Experiments are carried out using four benchmark datasets. Our approach outperforms classical learning methods such as STDP, Tempotron, SpikeProp, and a state-of-the-art supervised algorithm. In addition, the learned structures effectively reduce the number of connections by about 24%, which improves the computational efficiency of the network.
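The alternating scheme can be sketched in a few lines: a supervised phase updates weights on the current connections, then a structure phase prunes the weakest links, and the two repeat. The rate-based stand-in below uses |weight| as the pruning score for brevity, whereas the paper scores connections with STDP timing information and trains weights with a supervised spiking rule; everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 2
X = rng.random((200, n_in))
y = (X[:, :10].sum(1) > X[:, 10:].sum(1)).astype(int)   # toy task

W = rng.normal(0, 0.1, (n_in, n_out))
mask = np.ones_like(W)                 # connection structure (1 = connected)

for round_ in range(5):
    # -- Weight phase: supervised delta-rule updates on existing links only.
    for x, t in zip(X, y):
        out = (x @ (W * mask)).argmax()
        if out != t:
            W[:, t] += 0.01 * x        # strengthen the correct output
            W[:, out] -= 0.01 * x      # weaken the wrongly chosen one
    # -- Structure phase: prune the weakest surviving links, standing in
    #    for the paper's STDP-timing score (illustrative criterion).
    score = np.abs(W)
    cutoff = np.quantile(score[mask == 1], 0.05)
    mask[(score < cutoff) & (mask == 1)] = 0

acc = ((X @ (W * mask)).argmax(1) == y).mean()
print(f"kept {int(mask.sum())}/{mask.size} connections, acc = {acc:.2f}")
```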

