A Unipolar-based Stochastic LIF Neuron Design for Low-cost Spiking Neural Network

Author(s):  
Kun-Chih Jimmy Chen ◽  
Tze-Ling Kuo
Electronics ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2441
Author(s):  
Yihao Wang ◽  
Danqing Wu ◽  
Yu Wang ◽  
Xianwu Hu ◽  
Zizhao Ma ◽  
...  

In recent years, the device scaling that Moore's Law relies on has gradually slowed, and the traditional von Neumann architecture has become a bottleneck for further improvements in computing power. Neuromorphic in-memory computing hardware has therefore been proposed and is becoming a promising alternative. However, there is still a long way to go before it becomes practical, and one open problem is providing an efficient, reliable, and readily implementable neural network for hardware. In this paper, we propose a two-layer fully connected spiking neural network based on binary MRAM (Magneto-resistive Random Access Memory) synapses with low hardware cost. First, the network uses an array of multiple binary MRAM cells to store each multi-bit fixed-point weight value, which helps to simplify the read/write circuitry. Second, we use spike encoders that keep the input spikes sparse, reducing the complexity of peripheral circuits such as sense amplifiers. Third, we design a single-step learning rule that fits well with the fixed-point binary weights. Fourth, we replace the exponential leak of the traditional Leaky Integrate-and-Fire (LIF) neuron model to avoid the heavy cost of exponential circuits. Simulation results show that, compared to other similar works, our SNN with 1,184 neurons and 313,600 synapses achieves an accuracy of up to 90.6% on the MNIST recognition task with full-resolution (28 × 28), full-bit-depth (8-bit) images. For low-resolution (16 × 16), black-and-white (1-bit) images, a smaller version of the network with 384 neurons and 32,768 synapses still maintains an accuracy of about 77%, extending its applicability to ultra-low-cost scenarios. Both versions need fewer than 30,000 samples to reach convergence, a reduction of more than 50% compared to other similar networks. As for robustness, the network is immune to fluctuations in MRAM cell resistance.
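To make the hardware-oriented choices more concrete, the sketch below is a minimal Python illustration, not the paper's actual design: it assumes an 8-bit two's-complement fixed-point weight built from one binary MRAM cell per bit, and a LIF neuron whose exponential decay is replaced by a constant linear leak. All function names and parameter values are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch (not the paper's exact design): compose a signed fixed-point
# weight from an array of binary MRAM cells (one cell per bit), then run a
# hardware-friendly LIF neuron whose exponential decay is replaced by a
# constant linear leak. Names and parameter values are illustrative.

def weight_from_binary_cells(bits, frac_bits=4):
    """Interpret a list of cell states (MSB first) as a two's-complement
    fixed-point weight with `frac_bits` fractional bits."""
    n = len(bits)
    value = -bits[0] * (1 << (n - 1))          # sign-bit contribution
    for i, b in enumerate(bits[1:], start=1):
        value += b << (n - 1 - i)              # remaining magnitude bits
    return value / (1 << frac_bits)

def lif_step(v, in_spikes, weights, leak=0.05, v_th=1.0, v_reset=0.0):
    """One time step of a linear-leak LIF neuron: integrate weighted input
    spikes, subtract a constant leak, fire and reset at threshold."""
    v = v + float(np.dot(weights, in_spikes)) - leak
    if v >= v_th:
        return v_reset, 1                      # emit an output spike
    return max(v, v_reset), 0

# Example: an 8-bit weight built from eight binary cells, driving one neuron.
w = weight_from_binary_cells([0, 0, 1, 0, 1, 0, 0, 0])   # 0b00101000 / 16 = 2.5
v, spike = lif_step(0.0, [1], [w])
```

The constant leak keeps the update to one addition and one comparison per step, which is the kind of saving the paper targets by avoiding exponential circuits.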


2018 ◽  
Vol 48 (3) ◽  
pp. 1777-1788 ◽  
Author(s):  
Yuling Luo ◽  
Lei Wan ◽  
Junxiu Liu ◽  
Jim Harkin ◽  
Yi Cao

2019 ◽  
Vol 66 (9) ◽  
pp. 1582-1586 ◽  
Author(s):  
Edris Zaman Farsa ◽  
Arash Ahmadi ◽  
Mohammad Ali Maleki ◽  
Morteza Gholami ◽  
Hima Nikafshan Rad

2020 ◽  
Vol 382 ◽  
pp. 106-115 ◽  
Author(s):  
Guohe Zhang ◽  
Bing Li ◽  
Jianxing Wu ◽  
Ran Wang ◽  
Yazhu Lan ◽  
...  

2018 ◽  
Author(s):  
Rizki Eka Putri ◽  
Denny Darlis

This article was under review for ICELTICS 2018. In the medical field there is still dissatisfaction with services caused by a lack of blood-type testing facilities. As the number of blood samples to be tested grows, many problems arise, so electronic devices are needed to determine blood type accurately and quickly. In this research we implemented an Artificial Neural Network on a Xilinx Spartan 3S1000 Field Programmable Gate Array using an XSA-3S board to identify blood type. The system takes a blood-sample image as input. The VHSIC Hardware Description Language (VHDL) is used to describe the algorithm. The algorithm used is the feed-forward pass of a backpropagation neural network. The design uses three layers: input, hidden1, and output; the hidden1 layer has two neurons. In this study, the detection accuracies obtained are 92%, 92%, 92%, 90%, and 86% for 32×32, 48×48, 64×64, 80×80, and 96×96 pixel blood-image resolutions, respectively.
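For readers unfamiliar with the topology, the following is a minimal Python sketch of the described feed-forward pass: an input layer, one hidden layer ("hidden1") with two neurons, and an output layer. The actual work is a VHDL design on an FPGA; the sigmoid activation, random weights, and four-class output used here are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch of the feed-forward pass: input layer, one hidden layer
# ("hidden1") with two neurons, and an output layer. Activation, weights, and
# the four-class output are assumptions, not taken from the VHDL design.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(x, w_hidden, b_hidden, w_out, b_out):
    """x: flattened blood-image vector; returns output-layer activations."""
    h = sigmoid(w_hidden @ x + b_hidden)   # hidden1 layer: two neurons
    return sigmoid(w_out @ h + b_out)      # output layer: blood-type scores

# Example with a 32x32-pixel input (1024 values) and four outputs (A, B, AB, O).
rng = np.random.default_rng(0)
x = rng.random(1024)
w_h, b_h = 0.01 * rng.standard_normal((2, 1024)), np.zeros(2)
w_o, b_o = rng.standard_normal((4, 2)), np.zeros(4)
print(feed_forward(x, w_h, b_h, w_o, b_o))
```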


2018 ◽  
Vol 145 ◽  
pp. 488-494 ◽  
Author(s):  
Aleksandr Sboev ◽  
Alexey Serenko ◽  
Roman Rybka ◽  
Danila Vlasov ◽  
Andrey Filchenkov

2021 ◽  
Vol 1914 (1) ◽  
pp. 012036
Author(s):  
LI Wei ◽  
Zhu Wei-gang ◽  
Pang Hong-feng ◽  
Zhao Hong-yu

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2678
Author(s):  
Sergey A. Lobov ◽  
Alexey I. Zharinov ◽  
Valeri A. Makarov ◽  
Victor B. Kazantsev

Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN's function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of global network memory based on the synaptic vector field approach to validate the results and calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect, and the robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps the positive and negative areas, allowing it to escape the catastrophic interference phenomenon known for some AI architectures. Thus, the robot adapts to a changing world.
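The abstract attributes the shaping of the initially non-specific network to Hebbian-type plasticity. As a rough illustration only, the Python sketch below shows a generic spike-timing-based Hebbian update for a single synapse; the traces, constants, and bounds are assumptions and not the paper's exact plasticity model.

```python
import numpy as np

# Minimal sketch of a Hebbian-type (STDP-like) update for a single synapse,
# illustrating how activity could shape an initially non-specific network.
# Traces, constants, and bounds are generic assumptions, not the paper's model.

def stdp_update(w, pre_trace, post_trace, pre_spike, post_spike,
                a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Advance the pre/post traces by one time step and update the weight."""
    pre_trace = pre_trace * np.exp(-1.0 / tau) + pre_spike
    post_trace = post_trace * np.exp(-1.0 / tau) + post_spike
    if post_spike:
        w += a_plus * pre_trace      # pre-before-post pairing: potentiation
    if pre_spike:
        w -= a_minus * post_trace    # post-before-pre pairing: depression
    return float(np.clip(w, 0.0, w_max)), pre_trace, post_trace

# Example: repeated pre-then-post pairings gradually strengthen the synapse.
w, pre_tr, post_tr = 0.5, 0.0, 0.0
for _ in range(100):
    w, pre_tr, post_tr = stdp_update(w, pre_tr, post_tr, pre_spike=1, post_spike=0)
    w, pre_tr, post_tr = stdp_update(w, pre_tr, post_tr, pre_spike=0, post_spike=1)
```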

