One Transistor One Electrolyte‐Gated Transistor Based Spiking Neural Network for Power‐Efficient Neuromorphic Computing System

2021 ◽  
pp. 2100042
Author(s):  
Yue Li ◽  
Zihao Xuan ◽  
Jikai Lu ◽  
Zhongrui Wang ◽  
Xumeng Zhang ◽  
...  
2021 ◽  
Vol 17 (2) ◽  
pp. 1-29
Author(s):  
Anand Kumar Mukhopadhyay ◽  
Atul Sharma ◽  
Indrajit Chakrabarti ◽  
Arindam Basu ◽  
Mrigank Sharad

Spike sorting is the method of mapping recorded neural signals back to the neurons from which they originate. A low-power spike sorting system is presented for a neural implant device. The spike sorter comprises a two-step trainer module, shared across the signal acquisition channels associated with multiple electrodes, and a low-power Spiking Neural Network (SNN) module responsible for assigning the spike class. The two-step, shared, supervised on-chip training module improves training accuracy for the SNN. Post implant, the relatively power-hungry training module can be activated conditionally by a statistics-driven retraining algorithm, allowing on-the-fly training and adaptation. A low-power analog implementation of the SNN classifier is proposed based on resistive crossbar memory, exploiting its approximate-computing nature. Owing to the direct mapping of SNN functionality onto the physical characteristics of the devices, the analog implementation achieves ~21× lower power than its fully digital counterpart. Device variation is also incorporated into the training process to suppress the impact of the inevitable inaccuracies of such resistive crossbar devices on classification accuracy. A variation-aware, digitally calibrated analog front end is also presented; it consumes less than ~50 nW and interfaces with both the digital training module and the analog SNN spike sorting module. The proposed scheme is thus a low-power, variation-tolerant, adaptive, digitally trained, all-analog spike sorter applicable to implantable and wearable multichannel brain-machine interfaces.
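The core analog operation the abstract describes is a vector-matrix multiply performed physically on a resistive crossbar: input voltages drive the rows, and each column sums currents according to Kirchhoff's law, weighted by device conductances. A minimal sketch of this idea, with device-to-device variation modeled as multiplicative Gaussian noise (the function name, the 5% default spread, and the noise model are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_output(voltages, conductances, variation=0.05):
    """Analog dot product on a resistive crossbar.

    Column currents are I = G^T V (Kirchhoff current summation per column).
    Device-to-device variation is modeled as multiplicative Gaussian noise
    on the programmed conductances.
    """
    g_actual = conductances * rng.normal(1.0, variation, conductances.shape)
    return g_actual.T @ voltages
```

Variation-aware training, as described in the abstract, would amount to injecting such noise during weight updates so the learned classifier tolerates the imperfect conductances it will see at inference time.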


2021 ◽  
Vol 7 (29) ◽  
pp. eabh0648
Author(s):  
Xing Mou ◽  
Jianshi Tang ◽  
Yingjie Lyu ◽  
Qingtian Zhang ◽  
Siyao Yang ◽  
...  

Inspired by the human brain, nonvolatile memory (NVM)-based neuromorphic computing has emerged as a promising paradigm for building power-efficient computing hardware for artificial intelligence. However, existing NVMs still suffer from physically imperfect device characteristics. In this work, a topotactic phase transition random-access memory (TPT-RAM) with a unique diffusive/nonvolatile dual mode based on SrCoOx is demonstrated. The reversible phase transition of SrCoOx is well controlled by oxygen ion migration along highly ordered oxygen vacancy channels, enabling reproducible analog switching characteristics with reduced variability. Combining density functional theory and kinetic Monte Carlo simulations, the orientation-dependent switching mechanism of the TPT-RAM is investigated. Furthermore, the dual-mode TPT-RAM is used to mimic the selective stabilization of developing synapses and to implement neural network pruning, removing ~84.2% of redundant synapses while improving image classification accuracy to 99%. This work points to a new direction for designing bioplausible memristive synapses for neuromorphic computing.


2020 ◽  
Author(s):  
Xumeng Zhang ◽  
Jian Lu ◽  
Rui Wang ◽  
Jinsong Wei ◽  
Tuo Shi ◽  
...  

Abstract: The spiking neural network, consisting of spiking neurons and plastic synapses, is a promising but relatively underdeveloped neural network model for neuromorphic computing. Inspired by the human brain, it offers a unique route to highly efficient data processing. Recently, memristor-based neurons and synapses have become intriguing candidates for building spiking neural networks in hardware, owing to the close resemblance between their device dynamics and those of their biological counterparts. However, the functionalities of memristor-based neurons are currently very limited, and a hardware demonstration of a fully memristor-based spiking neural network supporting in situ learning remains very challenging. Here, a hybrid spiking neuron combining a memristor with simple digital circuits is designed and implemented in hardware to enhance the neuron's functions. The hybrid neuron with memristive dynamics not only realizes the basic leaky integrate-and-fire neuron function but also enables in situ tuning of the connected synaptic weights. Finally, a fully hardware spiking neural network with the hybrid neurons and memristive synapses is experimentally demonstrated for the first time, with which in situ Hebbian learning is achieved. This work opens a path toward spiking neurons that support in situ learning for future neuromorphic computing systems.
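The leaky integrate-and-fire (LIF) behavior that the hybrid neuron realizes can be summarized in a few lines: the membrane potential integrates input current, leaks toward its resting value, and emits a spike and resets on crossing a threshold. A minimal software sketch (the parameter values and function name are illustrative, not taken from the hardware described above):

```python
def lif_neuron(input_current, v_rest=0.0, v_th=1.0, tau=20.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Returns the list of time indices at which the neuron spikes.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_th:
            spikes.append(t)  # threshold crossing -> emit a spike
            v = v_rest        # reset after firing
    return spikes
```

In the memristor-based neuron, this same dynamic arises from the device physics (gradual conductance buildup and abrupt threshold switching) rather than from explicit arithmetic.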


2018 ◽  
Vol 145 ◽  
pp. 488-494 ◽  
Author(s):  
Aleksandr Sboev ◽  
Alexey Serenko ◽  
Roman Rybka ◽  
Danila Vlasov ◽  
Andrey Filchenkov

2021 ◽  
Vol 1914 (1) ◽  
pp. 012036
Author(s):  
LI Wei ◽  
Zhu Wei-gang ◽  
Pang Hong-feng ◽  
Zhao Hong-yu

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2678
Author(s):  
Sergey A. Lobov ◽  
Alexey I. Zharinov ◽  
Valeri A. Makarov ◽  
Victor B. Kazantsev

Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN's function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of global network memory based on the synaptic vector field approach to validate the results and calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect: the robot eventually revisits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps the positive and negative areas, escaping the catastrophic-interference phenomenon known to afflict some AI architectures. Thus, the robot adapts to a changing world.
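The Hebbian-type plasticity that shapes the initially non-specific architecture follows the classic rule "cells that fire together wire together": a synapse is strengthened when its presynaptic and postsynaptic neurons are co-active. A minimal sketch of one such update step (the function name, learning rate, and hard weight bound are illustrative assumptions, not the paper's exact rule):

```python
import numpy as np

def hebbian_update(weights, pre_spikes, post_spikes, lr=0.01, w_max=1.0):
    """One Hebbian plasticity step on a weight matrix.

    pre_spikes / post_spikes are 0/1 activity vectors; the outer product
    marks which synapses saw coincident pre- and postsynaptic firing,
    and those weights are strengthened, clipped to [0, w_max].
    """
    dw = lr * np.outer(pre_spikes, post_spikes)
    return np.clip(weights + dw, 0.0, w_max)
```

Repeated stimulation at specific loci would, under such a rule, carve location-specific pathways into the network, which is the mechanism behind the internal map the abstract describes.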

