Spiking Neural Networks
Recently Published Documents

2022 · Vol 3 · Karthikeyan Nagarajan, Junde Li, Sina Sayyah Ensan, Sachhidh Kannan, Swaroop Ghosh

Spiking Neural Networks (SNNs) are fast emerging as an alternative to Deep Neural Networks (DNNs). They are computationally more powerful and more energy-efficient than DNNs. While exciting at first glance, SNNs contain security-sensitive assets (e.g., neuron threshold voltage) and vulnerabilities (e.g., sensitivity of classification accuracy to changes in neuron threshold voltage) that can be exploited by adversaries. We explore global fault-injection attacks using an external power supply and laser-induced local power glitches on SNNs built from common analog neurons, corrupting critical training parameters such as spike amplitude and the neuron's membrane threshold potential. We also analyze the impact of power-based attacks on an SNN performing a digit-classification task and observe a worst-case classification accuracy degradation of 85.65%. We explore the impact of various SNN design parameters (e.g., learning rate, spike trace decay constant, and number of neurons) and identify design choices for robust SNN implementation. We recover 30–47% of the classification accuracy degradation for a subset of power-based attacks by modifying SNN training parameters such as learning rate, trace decay constant, and neurons per layer. We also propose hardware-level defenses: a robust current-driver design that is immune to power-oriented attacks, and improved circuit sizing of neuron components that reduces/recovers the adversarial accuracy degradation at the cost of negligible area and 25% power overhead. Finally, we propose dummy-neuron-based detection of voltage fault injection at ~1% power and area overhead each.
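The vulnerability exploited here, that classification hinges on the neuron threshold, can be illustrated with a toy model. The sketch below is hypothetical (a generic discrete-time leaky integrate-and-fire neuron, not the paper's analog circuit): shifting the membrane threshold, as a supply glitch might, changes the spike count produced for identical input.

```python
# Hypothetical illustration, not the paper's analog neuron: a discrete-time
# leaky integrate-and-fire (LIF) model showing how a fault-injected shift in
# the membrane threshold alters the output spike train for the same input.

def lif_spike_count(inputs, threshold, leak=0.9, v_reset=0.0):
    """Count output spikes of a simple LIF neuron over an input-current trace."""
    v = 0.0
    spikes = 0
    for i in inputs:
        v = leak * v + i        # leaky integration of the input current
        if v >= threshold:      # fire once the membrane potential crosses V_th
            spikes += 1
            v = v_reset         # reset the membrane after each spike
    return spikes

inputs = [0.4] * 50                                 # constant drive
nominal = lif_spike_count(inputs, threshold=1.0)    # unperturbed V_th
glitched = lif_spike_count(inputs, threshold=2.0)   # V_th raised by an attack
```

With these invented numbers the glitched neuron fires less than half as often as the nominal one, the kind of rate distortion that degrades downstream classification.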

2022 · Anguo Zhang, Ying Han, Jing Hu, Yuzhen Niu, Yueming Gao

We propose two simple and effective spiking neuron models to improve the response time of conventional spiking neural networks. The proposed neuron models adaptively tune the presynaptic input current based on the input received from their presynapses and on subsequent neuron firing events. We analyze and derive the homeostatic convergence of the firing activity of the proposed models. We experimentally verify and compare the models on the MNIST handwritten-digit and Fashion-MNIST classification tasks, and show that the proposed neuron models significantly increase the response speed to the input signal.
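As a rough illustration of the idea (the adaptation rule below is invented for the example and is not either of the authors' models), boosting the effective presynaptic drive while the neuron stays silent shortens the time to the first output spike:

```python
# Toy sketch of adaptive presynaptic input current: the input gain grows
# while the neuron is silent, so a weak stimulus is detected sooner.
# The specific rule (linear gain growth) is an assumption for illustration.

def time_to_first_spike(inputs, threshold=1.0, leak=0.9, adapt=0.0):
    """Return the index of the first output spike, or None if none occurs."""
    v, gain = 0.0, 1.0
    for t, i in enumerate(inputs):
        v = leak * v + gain * i   # integrate the gain-scaled input current
        if v >= threshold:
            return t              # first firing event: stop here
        gain += adapt             # silent step: raise the presynaptic gain
    return None

inputs = [0.12] * 200
baseline = time_to_first_spike(inputs)              # fixed input current
adaptive = time_to_first_spike(inputs, adapt=0.05)  # adaptively tuned current
```

The adaptive variant fires noticeably earlier for the same weak stimulus, which is the response-time improvement the abstract describes.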

Gerardo Malavena

Abstract: Since the very first introduction of three-dimensional (3-D) vertical-channel (VC) NAND Flash memory arrays, gate-induced drain leakage (GIDL) current has been suggested as a way to raise the string channel potential and trigger the erase operation. Thanks to that erase scheme, the memory array can be built directly on top of an n+ plate, without requiring any p-doped region to contact the string channel, thereby simplifying the manufacturing process and increasing the array integration density. For those reasons, understanding the physical phenomena occurring in the string when GIDL is triggered is important for the proper design of the cell structure and of the voltage waveforms adopted during erase. Even though a detailed comprehension of the GIDL phenomenology can be achieved by means of technology computer-aided design (TCAD) simulations, these are usually time- and resource-consuming, especially when realistic string structures with many word-lines (WLs) are considered. In this chapter, an analysis of GIDL-assisted erase in 3-D VC NAND memory arrays is presented. First, the evolution of the string potential and GIDL current during erase is investigated by means of TCAD simulations; then, a compact model able to reproduce both the string dynamics and the threshold-voltage transients with reduced computational effort is presented. The developed compact model proves to be a valuable tool for optimizing array performance during GIDL-assisted erase. The idea of exploiting GIDL for the erase operation is then carried over to spiking neural networks (SNNs) based on NOR Flash memory arrays, which require operational schemes that allow single-cell selectivity during both cell program and cell erase.
To overcome the block erase typical of NOR Flash memory arrays based on Fowler-Nordheim tunneling, a new erase scheme is presented that triggers GIDL in the NOR Flash cell and exploits hot-hole injection (HHI) at its drain side to accomplish the erase operation. Using that scheme, spike-timing-dependent plasticity (STDP) is implemented in a mainstream NOR Flash array, and array learning is successfully demonstrated in a prototype SNN. The achieved results represent an important step toward the development of large-scale neuromorphic systems based on mature and reliable memory technologies.
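The STDP rule demonstrated in the array follows, in its generic pair-based textbook form, an exponential timing window. The sketch below uses placeholder amplitudes and time constant, not the measured Flash-cell characteristics:

```python
import math

# Generic pair-based STDP window (textbook form; amplitudes and time
# constant are placeholder values, not measured Flash-cell behavior).

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * math.exp(dt / tau)       # otherwise: depression
```

A synapse whose presynaptic spike precedes the postsynaptic one (dt > 0) is strengthened; an acausal pairing is weakened, with the influence fading exponentially over tau.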

2022 · Jong-Ung Baek, Jin-Young Choi, Dong Won Kim, Ji-Chan Kim, Han-Sol Jun

Unlike conventional neuromorphic chips fabricated with C-MOSFETs and capacitors, those utilizing p-STT MTJ neuron devices can achieve fast switching (on the order of several tens of nanoseconds) and extremely low...

2021 · Vol 18 (4) · pp. 1-21 · Hüsrev Cılasun, Salonik Resch, Zamshed I. Chowdhury, Erin Olson, Masoud Zabihi

Spiking Neural Networks (SNNs) are a biologically inspired computation model capable of emulating neural computation in the human brain and brain-like structures. Their main promise is very low energy consumption. Classic SNN hardware accelerators based on the von Neumann architecture, however, often fall short of addressing demanding computation and data-transfer requirements efficiently at scale. In this article, we propose a promising alternative that overcomes these scalability limitations, based on a network of in-memory SNN accelerators, which can reduce energy consumption by up to 150.25× compared to a representative ASIC solution. The significant energy reduction comes from two key aspects of the hardware design that minimize data-communication overheads: (1) each node is an in-memory SNN accelerator based on a spintronic Computational RAM array, and (2) a novel De Bruijn-graph-based architecture establishes the SNN array connectivity.
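The De Bruijn connectivity can be sketched abstractly. In a binary de Bruijn graph with 2^n nodes, each node, read as an n-bit string, links to the two nodes reached by shifting in a 0 or a 1, so the fan-out stays constant (two) regardless of network size. The sketch below shows only the plain graph structure; how the paper maps SNN arrays onto it is not reproduced here.

```python
# Edge list of the base-2 de Bruijn graph on 2**n nodes: node u (an n-bit
# string) connects to (2u + s) mod 2**n for s in {0, 1}, i.e., a left shift
# that drops the top bit and appends s. Illustrative sketch only.

def de_bruijn_edges(n, base=2):
    """Directed edges of the de Bruijn graph with base**n nodes."""
    size = base ** n
    return [(u, (u * base + s) % size)
            for u in range(size)
            for s in range(base)]

edges = de_bruijn_edges(3)   # 8 nodes, constant out-degree of 2
```

The constant, shift-structured fan-out is what keeps inter-node communication bounded as the accelerator network grows.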

Webology · 2021 · Vol 19 (1) · pp. 01-18 · Hayder Rahm Dakheel AL-Fayyadh, Salam Abdulabbas Ganim Ali, Basim Abood

The goal of this paper is to use artificial intelligence to build and evaluate an adaptive learning system, adopting the basic approaches of both spiking neural networks and artificial neural networks. Spiking neural networks are receiving increasing attention due to their advantages over traditional artificial neural networks: they have proven to be energy-efficient, biologically plausible, and up to 10^5 times faster when simulated on analogue systems. Artificial neural network libraries pervasively use computational graphs as their representation; spiking models, however, remain heterogeneous and difficult to train. Using the deductive method, the paper posits two hypotheses that examine whether 1) there exists a common representation for both neural-network paradigms for tutorial mentoring, and whether 2) spiking and non-spiking models can learn a simple recognition task for learning activities in adaptive learning. The first hypothesis is confirmed by specifying and implementing a domain-specific language that generates semantically similar spiking and non-spiking neural networks for tutorial mentoring. Through three classification experiments, the second hypothesis is shown to hold for non-spiking models, but cannot be proven for the spiking models. The paper contributes three findings: 1) a domain-specific language for modelling neural-network topologies in adaptive tutorial mentoring for students, 2) a preliminary model for generalizable learning through back-propagation in spiking neural networks for student learning activities, also presented in the results section, and 3) a method for transferring optimised non-spiking parameters to spiking neural networks for an adaptive learning system. The latter contribution is promising because the vast machine-learning literature can spill over into the emerging fields of spiking neural networks and adaptive learning computing.
Future work includes improving the back-propagation model, exploring time-dependent models for learning, and adding support for adaptive learning systems.
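The third contribution, transferring optimised non-spiking parameters into a spiking network, can be sketched under common rate-coding assumptions. This is a generic ANN-to-SNN conversion illustration, not the paper's procedure: weights trained for a ReLU unit are reused unchanged in an integrate-and-fire neuron, whose firing rate over a time window approximates the ReLU activation.

```python
# Generic rate-based parameter transfer (an assumption-laden sketch, not
# the paper's method): the same weights drive a ReLU unit and an
# integrate-and-fire neuron; the IF firing rate approximates the ReLU output.

def relu(ws, xs):
    return max(0.0, sum(w * x for w, x in zip(ws, xs)))

def if_rate(ws, xs, steps=1000, threshold=1.0):
    """Firing rate of an IF neuron fed the same weighted input each step."""
    drive = sum(w * x for w, x in zip(ws, xs))
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive
        if v >= threshold:
            spikes += 1
            v -= threshold    # soft reset keeps the residual charge
    return spikes / steps

ws, xs = [0.5, -0.25], [0.8, 0.4]   # "transferred" weights and a sample input
```

Over many steps the firing rate closely tracks the ReLU activation, which is the sense in which parameters optimised for a non-spiking model carry over to a spiking one.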

2021 · Vol 23 (6) · pp. 317-326 · E.A. Ryndin, N.V. Andreeva, V.V. Luchinin, K.S. Goncharov

In the current era, the design and development of artificial neural networks exploiting the architecture of the human brain have evolved rapidly. Artificial neural networks effectively solve a wide range of tasks common to artificial intelligence, involving data classification and recognition, prediction, forecasting, and adaptive control of object behavior. The biologically inspired principles underlying ANN operation offer certain advantages over the conventional von Neumann architecture, including unsupervised learning, architectural flexibility, adaptability to environmental change, and high performance at significantly reduced power consumption thanks to highly parallel and asynchronous data processing. In this paper, we present the circuit design of the main functional blocks (neurons and synapses) intended for a hardware implementation of a perceptron-based feedforward spiking neural network. As the third generation of artificial neural networks, spiking neural networks process data using spikes, which are discrete events (or functions) that take place at points in time. Neurons in spiking neural networks initiate precisely timed spikes and communicate with each other via spikes transmitted through synaptic connections (synapses) with adaptable, scalable weights. One prospective approach to emulating synaptic behavior in hardware-implemented spiking neural networks is to use non-volatile memory devices with analog conduction modulation (memristive structures). Here we propose circuit designs for functional analogues of memristive structures that mimic synaptic plasticity, together with pre- and postsynaptic neurons, which could be used to develop spiking neural network architectures with different training algorithms, including the spike-timing-dependent plasticity learning rule. Two different electronic synapse circuits were developed. The first is an analog synapse that uses a photoresistive optocoupler to provide the tunable conductivity needed for synaptic plasticity emulation. The second is a digital synapse, in which the synaptic weight is stored as a digital code that is converted directly into conductivity (without a digital-to-analog converter or photoresistive optocoupler). The results of prototyping the developed circuits for electronic analogues of synapses and pre- and postsynaptic neurons, together with a study of their transient processes, are presented. The developed approach could provide a basis for ASIC designs of spiking neural networks in CMOS (complementary metal-oxide-semiconductor) technology.
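The digital synapse can be illustrated numerically: each set bit of the weight code switches in a binary-weighted conductance branch, so the code maps to a conductance without a DAC. The branch structure and unit conductance below are invented for illustration; the abstract does not specify them.

```python
# Hypothetical sketch of a digital synapse: an n-bit weight code selects
# binary-weighted conductance branches, giving direct code-to-conductance
# conversion with no DAC. Component values are invented for illustration.

def synapse_conductance(code, bits=4, g_unit=1e-6):
    """Total conductance (siemens) for an n-bit digital weight code."""
    assert 0 <= code < 2 ** bits
    return sum(g_unit * (1 << b) for b in range(bits) if code & (1 << b))

def synaptic_current(code, v_pre):
    """Current injected into the postsynaptic node: I = G * V."""
    return synapse_conductance(code) * v_pre
```

The conductance grows monotonically with the stored code, so the digital weight acts directly as the synaptic strength seen by the postsynaptic neuron.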
