Synaptic energy drives the information processing mechanisms in spiking neural networks

2014 · Vol 11 (2) · pp. 233-256
Author(s): Karim El Laithy, Martin Bogdan

2010 · Vol 23 (7) · pp. 819-835
Author(s): Simei Gomes Wysoski, Lubica Benuskova, Nikola Kasabov

2021 · pp. 1-27
Author(s): Friedemann Zenke, Tim P. Vogels

Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
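
The approach is easy to make concrete in code. Below is a minimal sketch, assuming PyTorch, of a spiking nonlinearity that applies the exact Heaviside step in the forward pass but substitutes a smooth surrogate derivative (here a fast sigmoid) in the backward pass. The threshold at zero and the scale parameter beta are illustrative choices, not the exact configuration benchmarked in the study.

import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

    beta = 10.0  # surrogate scale; illustrative value (the study shows this choice matters)

    @staticmethod
    def forward(ctx, u):
        # u: membrane potential relative to threshold (threshold at 0 here)
        ctx.save_for_backward(u)
        return (u > 0).float()  # exact, nondifferentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # A fast-sigmoid surrogate derivative, 1 / (beta * |u| + 1)^2,
        # replaces the ill-defined derivative of the Heaviside step.
        return grad_output / (SurrogateSpike.beta * u.abs() + 1.0) ** 2

spike = SurrogateSpike.apply

# Gradients now flow through the spike nonlinearity:
u = torch.randn(5, requires_grad=True)
spike(u).sum().backward()
print(u.grad)  # nonzero, unlike the true derivative of a step function

Swapping the functional form of the surrogate (sigmoidal, piecewise linear, and so on) mostly leaves learning intact, whereas rescaling beta rescales the gradients, which matches the abstract's finding that the derivative's scale, not its shape, is the sensitive design choice.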


Author(s): Thomas P. Trappenberg

This chapter briefly reviews computational systems motivated by information processing in the brain, an area often called neurocomputing or artificial neural networks. While this is now a well-studied and well-documented field, specific emphasis is given to a subclass of such models, called continuous attractor neural networks, which are beginning to appear across a wide range of biologically inspired computing. The frequent appearance of these models in biologically motivated studies of brain function suggests that they may capture, directly or indirectly, important information processing mechanisms used in the brain. Most of the chapter is dedicated to an introduction to the basic model and to some extensions that may matter for its application, whether as a model of brain processing or in technical systems. Direct technical applications are emerging only slowly, but the chapter highlights some promising directions.
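
To make the basic model concrete, here is a minimal sketch, assuming NumPy, of a one-dimensional (ring) continuous attractor network. All parameters (network size, the cosine connectivity profile, gains, time constants) are illustrative assumptions rather than values from the chapter: a transient cue seeds a bump of activity that remains after the input is removed, storing the cued angle as a point on a continuum of stable states.

import numpy as np

N = 128                                   # neurons arranged on a ring
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Translation-invariant connectivity: local (cosine) excitation against
# uniform inhibition, so activity settles into a single bump on the ring.
J0, J1 = -1.0, 10.0
W = J0 + J1 * np.cos(theta[:, None] - theta[None, :])

def step(r, I_ext, dt=0.1, tau=1.0):
    """One Euler step of tau * dr/dt = -r + f(W @ r / N + I_ext)."""
    h = W @ r / N + I_ext
    return r + (dt / tau) * (-r + np.tanh(np.maximum(h, 0.0)))

r = np.zeros(N)
cue = 1.5 * np.exp(np.cos(theta - np.pi) - 1.0)  # transient input centered at pi
for _ in range(400):                             # cue phase: a bump forms
    r = step(r, cue)
for _ in range(400):                             # delay phase: input removed
    r = step(r, np.zeros(N))

# The bump persists and stays centered near the cued angle,
# acting as a working memory of a continuous variable.
print("bump peak at theta =", theta[np.argmax(r)])  # approximately pi

Because the connectivity depends only on the angular difference between neurons, every rotation of the bump is equally stable, which is what makes the attractor continuous rather than a set of discrete memories.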


1988 · Vol 49 (1) · pp. 13-23
Author(s): J.F. Fontanari, R. Köberle

2012 · Vol 35 (12) · pp. 2633
Author(s): Xiang-Hong Lin, Tian-Wen Zhang, Gui-Cang Zhang
