Solving a steady-state PDE using spiking networks and neuromorphic hardware

Author(s):  
J. Darby Smith ◽  
William Severa ◽  
Aaron J. Hill ◽  
Leah Reeder ◽  
Brian Franke ◽  
...


Author(s):
Mihai A. Petrovici ◽  
Anna Schroeder ◽  
Oliver Breitwieser ◽  
Andreas Grübl ◽
Johannes Schemmel ◽  
...  

Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3237
Author(s):  
Alexander Sboev ◽  
Danila Vlasov ◽  
Roman Rybka ◽  
Yury Davydov ◽  
Alexey Serenko ◽  
...  

The problem of training spiking neural networks (SNNs) is relevant because of the ultra-low power consumption these networks could achieve when implemented in neuromorphic hardware. Ongoing progress in the fabrication of memristors, a prospective basis for analogue synapses, makes it relevant to study whether SNNs can learn with synaptic plasticity models obtained by fitting experimental measurements of memristor conductance change. The dynamics of memristor conductance is necessarily nonlinear, because conductance changes depend on the timing of spikes, which neurons emit in an all-or-none fashion. The ability to solve classification tasks was previously shown for spiking network models based on the bio-inspired local learning mechanism of spike-timing-dependent plasticity (STDP), as well as with plasticity that models the conductance change of nanocomposite (NC) memristors. In that work, input data were encoded into the intensities of Poisson input spike sequences. This work considers a different approach to encoding input data into spike sequences: temporal encoding, in which an input vector is transformed into the relative timing of individual input spikes. Since temporal encoding uses fewer input spikes, the network can process each input vector faster and more energy-efficiently. The aim of the current work is to show the applicability of temporal encoding to training spiking networks with three synaptic plasticity models: STDP, an NC memristor approximation, and a PPX memristor approximation. We assess the accuracy of the proposed approach on several benchmark classification tasks: Fisher’s Iris, Wisconsin breast cancer, and the pole-balancing task (CartPole). The accuracies achieved by SNNs with memristor plasticity and with conventional STDP are comparable and on par with classic machine learning approaches.
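The encoding step described here is simple to write down: each component of an input vector becomes the firing time of one input neuron, with larger values firing earlier, so a single spike per input replaces a whole Poisson train. A minimal numpy sketch; the linear value-to-time mapping and the t_max horizon are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def latency_encode(x, t_max=100.0):
    """Temporal (latency) encoding: map each feature of an input vector
    to the firing time of one input neuron. Larger values fire earlier,
    so one spike per input carries what rate coding would need many
    Poisson spikes to convey. The linear mapping is an assumption."""
    x = np.asarray(x, dtype=float)
    # Normalize features to [0, 1].
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    # Invert: strongest inputs spike first, weakest last (at t_max).
    return t_max * (1.0 - x)

# Example: one Iris sample (four measurements) -> four spike times.
sample = [5.1, 3.5, 1.4, 0.2]
print(latency_encode(sample))  # e.g. [0.0, 32.65, 75.51, 100.0]
```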


2021 ◽  
pp. 1-27
Author(s):  
Friedemann Zenke ◽  
Tim P. Vogels

Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
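The mechanics the abstract alludes to are compact enough to sketch: the forward pass keeps the nondifferentiable, all-or-none spike, while the backward pass swaps in a smooth surrogate derivative whose shape and scale are the design parameters the study varies. A minimal PyTorch sketch, assuming a fast-sigmoid surrogate (one common choice) with a tunable scale:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward; smooth surrogate derivative backward."""
    scale = 10.0  # surrogate steepness; the scale the study finds critical

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # all-or-none spike: nondifferentiable

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: 1 / (scale*|v| + 1)^2.
        # Its shape is one of several to which learning proves robust.
        surrogate = 1.0 / (SurrogateSpike.scale * v.abs() + 1.0) ** 2
        return grad_output * surrogate

# Usage: v is membrane potential minus threshold.
v = torch.randn(4, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()  # gradients flow through the surrogate
print(v.grad)
```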


Author(s):  
R. C. Moretz ◽  
G. G. Hausner ◽  
D. F. Parsons

Use of the electron microscope to examine wet objects is possible because of the small mass thickness of water vapor at its room-temperature equilibrium pressure. Previous attempts to examine hydrated biological objects and water itself used a chamber consisting of two small apertures sealed by two thin films. Extensive work in our laboratory showed that such films have an 80% failure rate when wet. Using the principle of differential pumping of the microscope column, we can instead use open apertures in place of thin-film windows.

Fig. 1 shows the modified Siemens Ia specimen chamber with the connections to the water supply and the auxiliary pumping station. A mechanical pump is connected to the vapor supply via a 100 μm aperture to maintain steady-state conditions.


2021 ◽  
Author(s):  
Wu Lan ◽  
Yuan Peng Du ◽  
Songlan Sun ◽  
Jean Behaghel de Bueren ◽  
Florent Héroguel ◽  
...  

We performed a steady-state, high-yielding depolymerization of soluble acetal-stabilized lignin in flow, which offered a window into the challenges and opportunities that will be faced when continuously processing this feedstock.


2008 ◽  
Vol 45 ◽  
pp. 161-176 ◽  
Author(s):  
Eduardo D. Sontag

This paper discusses a theoretical method for the “reverse engineering” of networks based solely on steady-state (and quasi-steady-state) data.
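The idea can be illustrated in the simplest (linear) setting: steady-state responses to small, targeted perturbations determine the network's Jacobian. A toy numpy sketch; the linear model and the known, node-targeted perturbation matrix are simplifying assumptions for illustration, while the paper develops the general quasi-steady-state theory:

```python
import numpy as np

# Ground-truth Jacobian of a stable linear network x' = J x + B p.
J = np.array([[-2.0,  1.0,  0.0],
              [ 0.5, -3.0,  1.0],
              [ 0.0,  0.8, -1.5]])
B = np.eye(3)  # assumption: parameter p_i acts directly on node i only

# "Experiment": at steady state J x* + B p = 0, so x*(p) = -J^{-1} B p,
# and the measured steady-state sensitivity matrix is S = -J^{-1} B.
S = -np.linalg.solve(J, B)

# Reverse engineering: with S measured and B known, J = -B S^{-1}.
J_hat = -B @ np.linalg.inv(S)
print(np.allclose(J_hat, J))  # True
```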


1979 ◽  
Vol 1 (4) ◽  
pp. 13-24
Author(s):  
E. Dahi ◽  
E. Lund
