Neuromorphic Memristive Chips: Design and Technology

2021 ◽  
Vol 23 (6) ◽  
pp. 285-294
Author(s):  
N.V. Andreeva ◽  
◽  
V.V. Luchinin ◽  
E.A. Ryndin ◽  
M.G. Anchkov ◽  
...  

Memristive neuromorphic chips exploit a promising class of novel functional materials (memristors) to implement spiking neural network architectures as basic building blocks of brain-like systems. Memristor-based neuromorphic hardware for multi-agent systems is regarded as a frontier challenge in chip design for fast and energy-efficient computing. As functional materials, metal-oxide thin films with resistive switching and memory effects (memristive structures) are recognized as a potential element base for neuromorphic engineering, combining data storage and processing in a single unit. A key design issue in this case is the hardware-defined functionality of the neural network. The gradual tuning of the resistive properties of memristive elements and their non-volatile memory behavior make it possible to organize spiking neural networks with unsupervised learning through hardware implementation of basic synaptic mechanisms, such as Hebbian learning rules including spike-timing-dependent plasticity and long-term potentiation and depression. This paper provides an overview of research carried out at Saint Petersburg Electrotechnical University "LETI" since 2014 in the field of novel electronic components for neuromorphic hardware and brain-like chip design. Among the most promising concepts developed at ETU "LETI" are: the design of metal-insulator-metal structures exhibiting multilevel resistive switching (gradual tuning of resistive properties combined with bipolar resistive switching in a single memristive element) for use as artificial synaptic devices in neuromorphic chips; computing schemes for spatio-temporal pattern recognition based on spiking neural network architectures; and breadboard models of analogue circuits implementing neuromorphic blocks for brain-like systems.
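As an illustration of the synaptic mechanism the abstract describes, the following is a minimal behavioural sketch of a pair-based STDP update applied to a bounded memristive conductance. The conductance range, amplitudes, and time constants are assumptions for demonstration, not device parameters reported by the authors.

```python
import numpy as np

# Illustrative pair-based STDP update for a memristive synapse whose
# conductance G is bounded between G_MIN and G_MAX. All constants are
# assumed for demonstration; they are not device parameters from the paper.
G_MIN, G_MAX = 1e-6, 1e-4           # conductance bounds, siemens
A_PLUS, A_MINUS = 0.02, 0.021       # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3  # STDP time constants, seconds

def stdp_update(G, t_pre, t_post):
    """Return the new conductance after one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> long-term potentiation
        dw = A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post before pre -> long-term depression
        dw = -A_MINUS * np.exp(dt / TAU_MINUS)
    # Scale the relative change to the available conductance range and clip,
    # mimicking the gradual (multilevel) resistive tuning of the device.
    G_new = G + dw * (G_MAX - G_MIN)
    return float(np.clip(G_new, G_MIN, G_MAX))

# Example: a causal pairing (pre 5 ms before post) strengthens the synapse.
G = 5e-5
G = stdp_update(G, t_pre=0.000, t_post=0.005)
print(f"conductance after LTP pairing: {G:.3e} S")
```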

Author(s):  
Xiang Cheng ◽  
Yunzhe Hao ◽  
Jiaming Xu ◽  
Bo Xu

Spiking Neural Networks (SNNs) are considered more biologically plausible and more energy-efficient on emerging neuromorphic hardware. Recently, the backpropagation algorithm has been utilized for training SNNs, which allows them to go deeper and achieve higher performance. However, most existing SNN models for object recognition are convolutional or fully connected structures, which have only inter-layer connections but no intra-layer connections. Inspired by lateral interactions in neuroscience, we propose a high-performance and noise-robust Spiking Neural Network (dubbed LISNN). Based on the convolutional SNN, we model the lateral interactions between spatially adjacent neurons, integrate them into the spiking neuron membrane potential formula, and then build a multi-layer SNN on a popular deep learning framework, i.e., PyTorch. We utilize the pseudo-derivative method to solve the non-differentiability problem when applying backpropagation to train LISNN, and test LISNN on multiple standard datasets. Experimental results demonstrate that the proposed model achieves competitive or better performance than current state-of-the-art spiking neural networks on the MNIST, Fashion-MNIST, and N-MNIST datasets. Besides, thanks to lateral interactions, our model exhibits stronger noise robustness than other SNNs. Our work brings a biologically plausible mechanism into SNNs, and we hope it can help us understand visual information processing in the brain.
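The sketch below shows, in PyTorch, the general idea of a leaky integrate-and-fire layer with a lateral-interaction term and a rectangular pseudo-derivative for backpropagation. The decay constant, threshold, depthwise kernel, and pseudo-derivative width are illustrative assumptions rather than the exact LISNN formulation.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular pseudo-derivative (surrogate gradient)."""
    @staticmethod
    def forward(ctx, u, threshold=1.0):
        ctx.save_for_backward(u)
        ctx.threshold = threshold
        return (u >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # Pass gradient only near the firing threshold.
        surrogate = (torch.abs(u - ctx.threshold) < 0.5).float()
        return grad_out * surrogate, None

class LateralLIF(nn.Module):
    """Leaky integrate-and-fire layer with lateral coupling to spatial neighbours."""
    def __init__(self, channels, decay=0.5, threshold=1.0):
        super().__init__()
        self.decay, self.threshold = decay, threshold
        # Depthwise convolution couples each neuron to its spatial neighbours.
        self.lateral = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels, bias=False)

    def forward(self, x, u, s):
        # x: feedforward input; u: membrane potential; s: previous spikes.
        u = self.decay * u * (1.0 - s) + x + self.lateral(s)
        s = SpikeFn.apply(u, self.threshold)
        return u, s

# One simulation step on dummy data (batch of 2, 16 channels, 8x8 feature map).
layer = LateralLIF(channels=16)
x = torch.randn(2, 16, 8, 8)
u = torch.zeros_like(x)
s = torch.zeros_like(x)
u, s = layer(x, u, s)
print("spikes emitted:", int(s.sum().item()))
```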


2021 ◽  
Vol 23 (6) ◽  
pp. 317-326
Author(s):  
E.A. Ryndin ◽  
◽  
N.V. Andreeva ◽  
V.V. Luchinin ◽  
K.S. Goncharov ◽  
...  

In the current era, the design and development of artificial neural networks exploiting the architecture of the human brain have evolved rapidly. Artificial neural networks effectively solve a wide range of tasks common to artificial intelligence, involving data classification and recognition, prediction, forecasting and adaptive control of object behavior. The biologically inspired principles underlying ANN operation offer certain advantages over the conventional von Neumann architecture, including unsupervised learning, architectural flexibility, adaptability to environmental change, and high performance at significantly reduced power consumption due to highly parallel and asynchronous data processing. In this paper, we present the circuit design of the main functional blocks (neurons and synapses) intended for hardware implementation of a perceptron-based feedforward spiking neural network. As the third generation of artificial neural networks, spiking neural networks perform data processing using spikes, which are discrete events (or functions) that take place at points in time. Neurons in spiking neural networks emit precisely timed spikes and communicate with each other via spikes transmitted through synaptic connections (synapses) with adaptable, scalable weights. One prospective approach to emulating synaptic behavior in hardware-implemented spiking neural networks is to use non-volatile memory devices with analog conduction modulation (memristive structures). Here we propose circuit designs for functional analogues of memristive structures that mimic synaptic plasticity, together with pre- and postsynaptic neurons, which could be used to develop spiking neural network architectures with different training algorithms, including the spike-timing-dependent plasticity learning rule. Two different electronic synapse circuits were developed. The first is an analog synapse with a photoresistive optocoupler used to provide the tunable conductivity for synaptic plasticity emulation. The second is a digital synapse, in which the synaptic weight is stored as a digital code and converted directly into conductivity (without a digital-to-analog converter or photoresistive optocoupler). The results of prototyping the developed circuits for electronic analogues of synapses and pre- and postsynaptic neurons, and of studying their transient processes, are presented. The developed approach could provide a basis for ASIC design of spiking neural networks based on CMOS (complementary metal-oxide-semiconductor) technology.
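To make the digital-synapse idea concrete, here is a behavioural sketch in which an N-bit weight code is mapped to a conductance by enabling binary-weighted parallel branches. The ladder structure and unit conductance are assumptions for illustration; the paper's circuit may convert the code differently.

```python
# Behavioural sketch of a digital synapse: the synaptic weight is held as an
# N-bit code and mapped directly to a conductance by switching binary-weighted
# conductance branches in parallel. The structure and unit conductance are
# assumed for demonstration, not taken from the paper's schematic.

N_BITS = 4
G_UNIT = 1e-6  # conductance of the least-significant branch, siemens

def code_to_conductance(code: int) -> float:
    """Sum the conductances of the branches enabled by the digital code."""
    assert 0 <= code < 2 ** N_BITS
    g = 0.0
    for bit in range(N_BITS):
        if code & (1 << bit):
            g += G_UNIT * (1 << bit)   # branch b contributes 2^b * G_UNIT
    return g

def synaptic_current(code: int, v_pre: float) -> float:
    """Output current for a presynaptic voltage pulse of amplitude v_pre."""
    return code_to_conductance(code) * v_pre

# Example: weight code 0b1010 driven by a 0.5 V presynaptic spike.
print(f"I_syn = {synaptic_current(0b1010, 0.5):.2e} A")
```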


2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Marc Osswald ◽  
Sio-Hoi Ieng ◽  
Ryad Benosman ◽  
Giacomo Indiveri

Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
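The core intuition of event-based stereo correspondence can be sketched as temporal coincidence detection between disparity-shifted event streams, as in the toy example below. The event format, coincidence window, and disparity range are illustrative assumptions, not the paper's network parameters.

```python
from collections import defaultdict

COINCIDENCE_WINDOW = 1e-3   # seconds (assumed)
DISPARITIES = range(0, 8)   # candidate horizontal disparities in pixels (assumed)

def disparity_votes(left_events, right_events):
    """Count near-simultaneous left/right events for each candidate disparity.

    Each event is a tuple (x, y, t). A left event at column x matches a right
    event at column x - d if both share the row y and occur within the window.
    """
    votes = defaultdict(int)
    for (xl, yl, tl) in left_events:
        for (xr, yr, tr) in right_events:
            if yl == yr and abs(tl - tr) <= COINCIDENCE_WINDOW:
                d = xl - xr
                if d in DISPARITIES:
                    votes[d] += 1
    return dict(votes)

# Toy example: a target at disparity 2 generates correlated event streams,
# so the disparity-2 coincidence detectors collect the most votes.
left  = [(10, 5, 0.0100), (11, 5, 0.0115), (12, 5, 0.0130)]
right = [( 8, 5, 0.0101), ( 9, 5, 0.0116), (10, 5, 0.0131)]
print(disparity_votes(left, right))
```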


Author(s):  
Oliver Rhodes ◽  
Luca Peres ◽  
Andrew G. D. Rowley ◽  
Andrew Gait ◽  
Luis A. Plana ◽  
...  

Real-time simulation of a large-scale biologically representative spiking neural network is presented, through the use of a heterogeneous parallelization scheme and SpiNNaker neuromorphic hardware. A published cortical microcircuit model is used as a benchmark test case, representing ≈1 mm² of early sensory cortex, containing 77k neurons and 0.3 billion synapses. This is the first hard real-time simulation of this model, with 10 s of biological simulation time executed in 10 s wall-clock time. This surpasses best-published efforts on HPC neural simulators (3× slowdown) and GPUs running optimized spiking neural network (SNN) libraries (2× slowdown). Furthermore, the presented approach indicates that real-time processing can be maintained with increasing SNN size, breaking the communication barrier incurred by traditional computing machinery. Model results are compared to an established HPC simulator baseline to verify simulation correctness, comparing well across a range of statistical measures. Energy to solution and energy per synaptic event are also reported, demonstrating that the relatively low-tech SpiNNaker processors achieve a 10× reduction in energy relative to modern HPC systems, and comparable energy consumption to modern GPUs. Finally, system robustness is demonstrated through multiple 12 h simulations of the cortical microcircuit, each simulating 12 h of biological time, demonstrating the potential of neuromorphic hardware as a neuroscience research tool for studying complex spiking neural networks over extended time periods. This article is part of the theme issue ‘Harmonizing energy-autonomous computing and intelligence’.
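The two headline metrics in this abstract, the real-time factor and the energy per synaptic event, can be written as simple ratios, sketched below. The power draw and mean firing rate are placeholder values for illustration; only the synapse count and the 10 s / 10 s figures come from the abstract.

```python
def real_time_factor(wall_clock_s: float, biological_s: float) -> float:
    """< 1 means faster than real time, 1.0 means hard real time."""
    return wall_clock_s / biological_s

def energy_per_synaptic_event(power_w: float, wall_clock_s: float,
                              n_synapses: float, mean_rate_hz: float,
                              biological_s: float) -> float:
    """Total energy divided by the number of synaptic events processed."""
    total_energy_j = power_w * wall_clock_s
    n_events = n_synapses * mean_rate_hz * biological_s
    return total_energy_j / n_events

print(real_time_factor(10.0, 10.0))                 # 1.0 -> hard real time
print(energy_per_synaptic_event(power_w=30.0,       # assumed power draw
                                wall_clock_s=10.0,
                                n_synapses=0.3e9,    # from the abstract
                                mean_rate_hz=4.0,    # assumed mean rate
                                biological_s=10.0))  # joules per event
```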


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Sotir Sotirov ◽  
Vassia Atanassova ◽  
Evdokia Sotirova ◽  
Lyubka Doukovska ◽  
Veselina Bureva ◽  
...  

The approach of InterCriteria Analysis (ICA) was applied with the aim of reducing the set of variables at the input of a neural network, taking into account the fact that a large number of inputs increases the number of neurons in the network, making it unsuitable for hardware implementation. Here, for the first time, correlations between triples of the input parameters used for training the neural networks were obtained with the help of the ICA method. In this case, we use the ICA approach for data preprocessing, which may reduce the total time for training the neural networks and, hence, the time the network needs to process data and images.
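A simplified stand-in for this preprocessing step is sketched below: a pairwise agreement degree between input criteria is computed from how consistently they order the training objects, and strongly agreeing inputs are pruned before training. Full ICA works with intuitionistic fuzzy pairs (agreement and disagreement degrees); only the agreement part is approximated here, and the threshold and toy data are assumptions.

```python
from itertools import combinations
import numpy as np

def agreement_degree(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of object pairs on which criteria a and b order objects alike."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum(1 for p, q in pairs
                if np.sign(a[p] - a[q]) == np.sign(b[p] - b[q]))
    return agree / len(pairs)

def select_inputs(X: np.ndarray, threshold: float = 0.9) -> list:
    """Keep one representative of every group of strongly agreeing inputs."""
    kept = []
    for j in range(X.shape[1]):
        if all(agreement_degree(X[:, j], X[:, k]) < threshold for k in kept):
            kept.append(j)
    return kept

# Toy data: column 1 is nearly a monotone copy of column 0 and gets dropped.
rng = np.random.default_rng(0)
x0 = rng.normal(size=30)
X = np.column_stack([x0,
                     2.0 * x0 + 0.01 * rng.normal(size=30),
                     rng.normal(size=30)])
print(select_inputs(X))   # e.g. [0, 2]
```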


2018 ◽  
Vol 53 (2) ◽  
pp. 448-460 ◽  
Author(s):  
Yu Ji ◽  
Youhui Zhang ◽  
Wenguang Chen ◽  
Yuan Xie

2021 ◽  
Vol 15 ◽  
Author(s):  
Abderazek Ben Abdallah ◽  
Khanh N. Dang

Spiking neuromorphic systems have been introduced as promising platforms for energy-efficient execution of spiking neural networks (SNNs). SNNs incorporate neuronal and synaptic states, in addition to a varying time scale, into their computational model. Since each neuron in these networks is connected to many others, high bandwidth is required. Moreover, since spike times are used to encode information in SNNs, precise communication latency is also needed, although an SNN as a whole is tolerant to spike delay variation within certain limits. The two-dimensional packet-switched network-on-chip was proposed as a solution to provide a scalable interconnect fabric in large-scale spike-based neural networks. 3D ICs have also attracted a lot of attention as a potential solution to the interconnect bottleneck. Combining these two emerging technologies provides a new horizon for IC design to satisfy the demanding requirements of low power and small footprint in emerging AI applications. Moreover, although fault tolerance is a natural feature of biological systems, integrating many computation and memory units into neuromorphic chips raises reliability issues, where a defective part can affect the overall system's performance. This paper presents the design and simulation of R-NASH, a reliable three-dimensional digital neuromorphic system built on 3D ICs and geared explicitly toward the three-dimensional structure of the biological brain, where information in the network is represented by sparse patterns of spike timing and learning is based on the local spike-timing-dependent plasticity rule. Our platform enables high integration density and small spike delay of spiking networks and features a scalable design. R-NASH is based on Through-Silicon-Via (TSV) technology, facilitating spiking neural network implementation on clustered neurons connected via a Network-on-Chip. We provide a memory interface with the host CPU, allowing for online training and inference of spiking neural networks. Moreover, R-NASH supports fault recovery with graceful performance degradation.
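As a minimal sketch of the kind of interconnect such a TSV-based 3D system relies on, the function below computes a deterministic X-Y-Z (dimension-ordered) route for a spike packet on a 3D mesh network-on-chip. The routing function and packet addressing are illustrative assumptions; R-NASH's router and its fault-recovery logic are more elaborate than this.

```python
def xyz_route(src, dst):
    """Return the list of (x, y, z) nodes a packet traverses from src to dst,
    resolving the X dimension first, then Y, then the vertical (TSV) Z dimension."""
    path = [src]
    for axis, target in enumerate(dst):
        while path[-1][axis] != target:
            step = 1 if target > path[-1][axis] else -1
            node = list(path[-1])
            node[axis] += step
            path.append(tuple(node))
    return path

# A spike emitted by a neuron cluster at (0, 0, 0) addressed to (2, 1, 1).
print(xyz_route((0, 0, 0), (2, 1, 1)))
```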

