Spike: A GPU Optimised Spiking Neural Network Simulator

2018 ◽  
Author(s):  
Nasir Ahmad ◽  
James B. Isbister ◽  
Toby St. Clere Smithe ◽  
Simon M. Stringer

Abstract: Spiking Neural Network (SNN) simulations require internal variables – such as the membrane voltages of individual neurons and their synaptic inputs – to be updated at sub-millisecond resolution. As a result, a single second of simulation time requires many thousands of update calculations per neuron. Furthermore, increases in the scale of SNN models have led to corresponding manyfold increases in the runtime of SNN simulations. Existing solutions to this problem of scale include high-performance CPU-based simulators capable of multithreaded execution ("CPU parallelism"). More recently, GPU-based simulators have emerged which aim to utilise GPU parallelism for SNN execution. We have identified several key speedups which give GPU-based simulators up to an order-of-magnitude performance increase over CPU-based simulators on several benchmarks. We present the Spike simulator with three key optimisations: timestep grouping, active synapse grouping, and delay insensitivity. Combined, these optimisations massively increase the speed of executing an SNN simulation and produce a simulator which is, on a single machine, faster than currently available simulators.
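Of the three optimisations, timestep grouping is the easiest to illustrate. Below is a minimal NumPy sketch (the real Spike implementation is CUDA; all names and parameter values here are illustrative assumptions): membrane voltages are integrated for a group of sub-millisecond timesteps in one batch, and spikes are buffered and exchanged only at group boundaries, which is valid whenever synaptic delays are no shorter than one group.

```python
# Minimal sketch of timestep grouping (hypothetical names and constants;
# not Spike's actual CUDA implementation). Voltages are integrated for
# group_size sub-steps at a time; spikes are delivered at group boundaries.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dt, group_size = 1000, 1e-4, 8        # 0.1 ms steps, 8 per group
tau_m, v_thresh, v_reset = 20e-3, -50e-3, -70e-3
v = np.full(n_neurons, v_reset)
weights = rng.normal(0.0, 1e-3, (n_neurons, n_neurons))

def simulate_group(v, input_current):
    """Integrate group_size timesteps; buffer spikes instead of delivering."""
    spike_counts = np.zeros(n_neurons)
    for _ in range(group_size):
        v += dt / tau_m * (v_reset - v) + input_current * dt
        fired = v >= v_thresh
        v[fired] = v_reset
        spike_counts += fired
    return v, spike_counts

drive = rng.uniform(0.0, 30.0, n_neurons)        # constant external drive
input_current = drive.copy()
for group in range(10):                          # 10 groups = 8 ms simulated
    v, spikes = simulate_group(v, input_current)
    # Synaptic input is exchanged once per group, not once per timestep.
    input_current = drive + weights.T @ spikes
```

On a GPU, exchanging spikes once per group rather than once per timestep amortises kernel-launch and synchronisation overhead over many integration steps.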

2014 ◽  
Vol 25 (2) ◽  
pp. 316-331 ◽  
Author(s):  
Kirill Minkovich ◽  
Corey M. Thibeault ◽  
Michael John O'Brien ◽  
Aleksey Nogin ◽  
Youngkwan Cho ◽  
...  

Background/Objectives: In software development, the diversity of programming languages increases dramatically with their complexity. This leads both programmers and researchers to develop and investigate automated tools to distinguish between programming languages. Various efforts have approached this task using the keywords found in the source code of these languages. Instead of relying on keyword classification alone, this work investigates the ability to detect the characteristic pattern of a programming language using NeMo (a high-performance spiking neural network simulator) and tests the ability of this toolkit to provide detailed, analysable results. Methods/Statistical analysis: These objectives are pursued using a backpropagation neural network, implemented via NeMo, following a pattern recognition methodology. Findings: The results show that the NeMo pattern recognition network can identify and recognise the pattern of the Python programming language with high accuracy. They also show that the NeMo toolkit can present analysable results as a percentage of certainty. Improvements/Applications: The results indicate that the NeMo simulator provides a useful platform for studying and analysing the complexity of the backpropagation neural network model.
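The abstract leaves the recognition pipeline implicit; the sketch below fills in its general shape under stated assumptions. A source file is reduced to a keyword-frequency feature vector and scored by a small sigmoid network trained with plain backpropagation, whose output plays the role of the "percentage of certainty". The keyword list, network shape, and function names are illustrative, not NeMo's API.

```python
# Sketch of the general pipeline the abstract describes: keyword-frequency
# features classified by a small backpropagation network. All names and
# constants are illustrative assumptions.
import numpy as np

KEYWORDS = ["def", "import", "self", "elif", "lambda",    # Python-leaning
            "public", "void", "#include", "func", "let"]  # other languages

def features(source: str) -> np.ndarray:
    """Normalised keyword counts as the network's input pattern."""
    counts = np.array([source.count(k) for k in KEYWORDS], dtype=float)
    return counts / max(counts.sum(), 1.0)

# One hidden layer trained with plain backpropagation (sigmoid units).
rng = np.random.default_rng(1)
W1, W2 = rng.normal(0, 0.5, (10, 8)), rng.normal(0, 0.5, (8, 1))
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, lr=0.5):
    global W1, W2
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    # Backpropagate the squared error through both layers.
    delta_y = (y - target) * y * (1 - y)
    delta_h = (delta_y @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, delta_y)
    W1 -= lr * np.outer(x, delta_h)
    return float(y[0])        # analogue of the "percentage of certainty"

print(train_step(features("def f():\n    import os"), 1.0))
```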


Author(s):  
James C Knight ◽  
Thomas Nowotny

Abstract: Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10¹² synaptic connections which, in simulations, are each typically characterized by at least one floating-point value. This amounts to several terabytes of data – an unrealistic memory requirement for a single desktop machine. Large models are therefore typically simulated on distributed supercomputers, which is costly and limits large-scale modelling to a few privileged research groups. In this work, we describe extensions to GeNN – our Graphical Processing Unit (GPU) accelerated spiking neural network simulator – that enable it to 'procedurally' generate connectivity and synaptic weights 'on the go' as spikes are triggered, instead of storing and retrieving them from memory. We find that GPUs are well-suited to this approach because of their raw computational power which, due to memory bandwidth limitations, is often under-utilised when simulating spiking neural networks. We demonstrate the value of our approach with a recent model of the Macaque visual cortex consisting of 4.13 × 10⁶ neurons and 24.2 × 10⁹ synapses. Using our new method, it can be simulated on a single GPU – a significant step forward in making large-scale brain modelling accessible to many more researchers. Our results match those obtained on a supercomputer and the simulation runs up to 35 % faster on a single high-end GPU than previously on over 1000 supercomputer nodes.
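The core idea is compact enough to sketch. In the toy NumPy version below (illustrative names and parameters, not GeNN's actual API), each presynaptic neuron's outgoing targets and weights are regenerated on demand from a deterministic per-neuron seed, so no per-synapse state is ever stored between spikes.

```python
# Minimal sketch of 'procedural' connectivity: regenerate each neuron's
# synapses from a deterministic seed instead of storing a terabyte-scale
# weight matrix. Names and parameters are illustrative, not GeNN's API.
import numpy as np

N_PRE, N_POST, FAN_OUT = 1_000_000, 1_000_000, 1000

def outgoing(pre_id: int):
    """Recreate neuron pre_id's targets and weights from its seed alone."""
    rng = np.random.default_rng(pre_id)          # seed = neuron index
    targets = rng.integers(0, N_POST, FAN_OUT)   # fixed fan-out connectivity
    weights = rng.normal(0.5, 0.1, FAN_OUT)
    return targets, weights

# When neuron 42 spikes, its synapses are generated 'on the go' and applied.
post_input = np.zeros(N_POST)
targets, weights = outgoing(42)
np.add.at(post_input, targets, weights)          # handles repeated targets
```

The trade-off is recomputation: the same pseudorandom draws must be replayed every time a neuron spikes, which is exactly the regime where the abstract's under-utilised GPU arithmetic is cheaper than memory traffic.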


2020 ◽  
Vol 31 (11) ◽  
pp. 2510-2523 ◽  
Author(s):  
Peng Qu ◽  
Youhui Zhang ◽  
Xiang Fei ◽  
Weimin Zheng

2019 ◽  
Vol 8 (3) ◽  
pp. 4612-4616

Simulation studies in general rely heavily on the internal variables of the system or entity under study. In the case of Spiking Neural Networks (SNNs), the major internal variables are the membrane potentials of the neurons and their respective synaptic inputs, which must be updated at sub-millisecond resolution. Simulating one second of activity therefore requires thousands of updates per neuron, which makes a highly scalable model imperative for deriving inferences from the simulation. Conventionally, high-performance CPUs with a high degree of multithreading were used to run such simulations. With advances in hardware, the available degree of parallelism has also increased; GPUs in particular have opened many avenues for performing SNN simulations at scale. In our previous works [1, 2, 3], we demonstrated how GPUs can be leveraged to achieve scalability and performance using a hybrid CPU-GPU approach, which improved performance compared with multithreading on high-performance CPUs. In this work, we focus on tuning key parameters such as delay insensitivity, timestep grouping, and active synapse grouping to achieve greater simulation speed for scalable spiking neural networks.
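Of the parameters named above, active synapse grouping lends itself to a short sketch. The NumPy fragment below (simplified, with illustrative names and constants) restricts synaptic processing to synapses whose presynaptic neuron has fired within the maximum delay window, instead of touching every synapse on every timestep.

```python
# Sketch of active synapse grouping under simplified assumptions: only
# synapses with a recently spiking source are processed each timestep.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, max_delay_steps = 10_000, 10
pre = rng.integers(0, n_neurons, 50_000)          # presynaptic indices
post = rng.integers(0, n_neurons, 50_000)         # postsynaptic indices
w = rng.normal(0.0, 1.0, 50_000)                  # synaptic weights

last_spike = np.full(n_neurons, -10**9)           # timestep of last spike
t = 100
last_spike[rng.integers(0, n_neurons, 200)] = t   # 200 neurons just fired

# Gather only the 'active' synapses: those with a recently spiking source.
active = last_spike[pre] > t - max_delay_steps
post_input = np.zeros(n_neurons)
np.add.at(post_input, post[active], w[active])
```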


2011 ◽  
Vol 23 (6) ◽  
pp. 1503-1535 ◽  
Author(s):  
Romain Brette ◽  
Dan F. M. Goodman

High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
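As a condensed illustration of that strategy (NumPy standing in for Brian's internal vector operations; constants are arbitrary), the loop below updates every membrane equation with one array operation, finds threshold crossings with a boolean mask, and propagates all spikes at once by indexing, with no per-neuron interpreted loop.

```python
# Vectorised leaky integrate-and-fire loop in the style the abstract
# describes; a sketch, not Brian's actual internals.
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau = 1000, 1e-4, 10e-3                 # 1000 neurons, 0.1 ms steps
v_rest, v_thresh = -0.07, -0.05                # volts
v = np.full(N, v_rest)
W = rng.normal(0.0, 5e-4, (N, N))              # dense weights, for brevity
I = rng.uniform(0.0, 3e-3, N)                  # constant external drive

for _ in range(100):                           # 10 ms of simulated time
    v += dt / tau * (v_rest - v) + I           # all membrane equations at once
    spiked = v >= v_thresh                     # vectorised threshold test
    v[spiked] = v_rest                         # vectorised reset
    v += W[spiked].sum(axis=0)                 # deliver all spikes together
```

Brian's actual implementation adds code generation and sparse spike queues on top of this; the point here is only the shape of the vectorised inner loop.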

