Spiking Neural Networks in Spintronic Computational RAM

2021 ◽  
Vol 18 (4) ◽  
pp. 1-21
Author(s):  
Hüsrev Cılasun ◽  
Salonik Resch ◽  
Zamshed I. Chowdhury ◽  
Erin Olson ◽  
Masoud Zabihi ◽  
...  

Spiking Neural Networks (SNNs) represent a biologically inspired computation model capable of emulating neural computation in the human brain and brain-like structures. Their main promise is very low energy consumption. Classic von Neumann architecture-based SNN accelerators in hardware, however, often fall short of efficiently addressing demanding computation and data transfer requirements at scale. In this article, we propose a promising alternative to overcome scalability limitations, based on a network of in-memory SNN accelerators, which can reduce energy consumption by up to 150.25× compared to a representative ASIC solution. The significant reduction in energy comes from two key aspects of the hardware design that minimize data communication overheads: (1) each node is an in-memory SNN accelerator based on a spintronic Computational RAM array, and (2) a novel De Bruijn graph based architecture establishes the SNN array connectivity.
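The De Bruijn topology mentioned above keeps every node's fan-out constant regardless of network size, which is what bounds the inter-node communication cost. As a purely illustrative sketch (not the authors' implementation), the outgoing links of each accelerator node in a binary De Bruijn graph on 2^m nodes could be enumerated as follows:

```python
# Illustrative sketch (assumption): binary De Bruijn connectivity between 2**m
# accelerator nodes. Node i links to (2*i) mod N and (2*i + 1) mod N, so the
# fan-out stays 2 no matter how many in-memory SNN arrays the network contains.
def de_bruijn_successors(node: int, m: int) -> list:
    n = 1 << m                                  # total number of nodes
    return [(2 * node) % n, (2 * node + 1) % n]

if __name__ == "__main__":
    m = 4                                       # 16 nodes
    for i in range(1 << m):
        print(i, "->", de_bruijn_successors(i, m))
```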

Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 396 ◽  
Author(s):  
Errui Zhou ◽  
Liang Fang ◽  
Binbin Yang

Neuromorphic computing systems are promising alternatives in fields such as pattern recognition and image processing, especially where conventional von Neumann architectures face bottlenecks. Memristors play vital roles in neuromorphic computing systems and are usually used as synaptic devices. Memristive spiking neural networks (MSNNs) are considered more efficient and biologically plausible than other systems due to their spike-based working mechanism. In contrast to previous SNNs with complex architectures, we propose a hardware-friendly architecture and an unsupervised spike-timing dependent plasticity (STDP) learning method for MSNNs in this paper. The architecture includes an input layer, a feature learning layer, and a voting circuit. To reduce hardware complexity, some constraints are enforced: the proposed architecture has no lateral inhibition and is purely feedforward; it uses the voting circuit as a classifier and does not use additional classifiers; all neurons generate at most one spike and do not need to consider firing rates or refractory periods; and all neurons share the same fixed threshold voltage for classification. The presented unsupervised STDP learning method is time-dependent and uses no homeostatic mechanism. The MNIST dataset is used to demonstrate the proposed architecture and learning method. Simulation results show that our proposed architecture with the learning method achieves a classification accuracy of 94.6%, which outperforms other unsupervised SNNs that use time-based encoding schemes.
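For intuition, the single-spike, time-based learning regime described above can be caricatured in a few lines (a sketch with assumed parameters, not the authors' circuit-level method or memristor model):

```python
import numpy as np

# Toy sketch (assumption): unsupervised, time-dependent STDP for one feedforward
# neuron in a regime where every neuron fires at most once. A synapse is
# potentiated when its input spike precedes the output spike, depressed otherwise.
def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.005, tau=20.0):
    dt = t_post - t_pre                          # per-synapse timing difference (ms)
    dw = np.where(dt > 0,
                  a_plus * np.exp(-dt / tau),    # pre before post: potentiate
                  -a_minus * np.exp(dt / tau))   # post before pre: depress
    return np.clip(w + dw, 0.0, 1.0)             # keep weights in a bounded range

w = np.full(4, 0.5)                              # 4 input synapses
t_pre = np.array([5.0, 10.0, 15.0, 30.0])        # single input spike times (ms)
print(stdp_update(w, t_pre, t_post=12.0))        # output neuron fired at t = 12 ms
```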


2020 ◽  
Vol 14 ◽  
Author(s):  
Martino Sorbaro ◽  
Qian Liu ◽  
Massimo Bortone ◽  
Sadique Sheik

Materials ◽  
2019 ◽  
Vol 12 (21) ◽  
pp. 3461 ◽  
Author(s):  
Paolo La Torraca ◽  
Francesco Maria Puglisi ◽  
Andrea Padovani ◽  
Luca Larcher

Memristor-based neuromorphic systems have been proposed as a promising alternative to von Neumann computing architectures, which are currently challenged by the ever-increasing computational power required by modern artificial intelligence (AI) algorithms. The design and optimization of memristive devices for specific AI applications is thus of paramount importance, but still extremely complex, as many different physical mechanisms and their interactions have to be accounted for, which are, in many cases, not fully understood. The high complexity of the physical mechanisms involved, and their only partial comprehension, currently hamper the development of memristive devices and prevent their optimization. In this work, we tackle the application-oriented optimization of Resistive Random-Access Memory (RRAM) devices using a multiscale modeling platform. The platform includes all the involved physical mechanisms (i.e., charge transport and trapping, and ion generation, diffusion, and recombination) and accounts for the 3D electric and temperature fields in the device. Thanks to its multiscale nature, the platform makes it possible to simulate RRAM devices and investigate the microscopic physical mechanisms involved, connect device performance to the material's microscopic properties and geometry, predict the device's electrical characteristics, evaluate the effect of forming conditions (i.e., temperature, compliance current, and voltage stress) on device performance and variability, optimize analog resistance switching, and investigate device reliability and failure causes. The discussion of the presented simulation results provides useful insights supporting the application-oriented optimization of RRAM technology for specific AI applications, whether for non-volatile memories, deep neural networks, or spiking neural networks.
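A multiscale platform of that kind cannot be condensed into a snippet, but the flavor of filamentary switching can be conveyed with a toy compact model (illustrative constants and equations only, not the authors' simulator), in which a tunneling gap sets the cell current and evolves under bias:

```python
import numpy as np

# Toy compact model (assumption): current through a filamentary RRAM cell as a
# function of a tunneling gap g, plus a simplified field/temperature-activated
# gap evolution. All constants below are illustrative placeholders.
I0, g0, V0 = 1e-4, 0.25e-9, 0.4                  # A, m, V
v0, Ea, kT, gamma = 10.0, 0.6, 0.026, 15.0       # m/s, eV, eV, field-acceleration factor

def current(V, g):
    return I0 * np.exp(-g / g0) * np.sinh(V / V0)

def gap_rate(V):
    # gap shrinks (SET) under positive bias, grows (RESET) under negative bias
    return -v0 * np.exp(-Ea / kT) * np.sinh(gamma * V)

g, dt = 1.0e-9, 1e-6                             # initial gap (m), time step (s)
for _ in range(50):                              # crude explicit-Euler SET transient
    g = max(0.1e-9, g + gap_rate(0.8) * dt)
print("gap [nm]:", g * 1e9, " current [A]:", current(0.8, g))
```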


Materials ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 166 ◽  
Author(s):  
Valerio Milo ◽  
Gerardo Malavena ◽  
Christian Monzio Compagnoni ◽  
Daniele Ielmini

Neuromorphic computing has emerged as one of the most promising paradigms to overcome the limitations of von Neumann architecture of conventional digital processors. The aim of neuromorphic computing is to faithfully reproduce the computing processes in the human brain, thus paralleling its outstanding energy efficiency and compactness. Toward this goal, however, some major challenges have to be faced. Since the brain processes information by high-density neural networks with ultra-low power consumption, novel device concepts combining high scalability, low-power operation, and advanced computing functionality must be developed. This work provides an overview of the most promising device concepts in neuromorphic computing including complementary metal-oxide semiconductor (CMOS) and memristive technologies. First, the physics and operation of CMOS-based floating-gate memory devices in artificial neural networks will be addressed. Then, several memristive concepts will be reviewed and discussed for applications in deep neural network and spiking neural network architectures. Finally, the main technology challenges and perspectives of neuromorphic computing will be discussed.


Author(s):  
Jianhao Ding ◽  
Zhaofei Yu ◽  
Yonghong Tian ◽  
Tiejun Huang

Spiking Neural Networks (SNNs), as bio-inspired energy-efficient neural networks, have attracted great attention from researchers and industry. The most efficient way to train deep SNNs is through ANN-SNN conversion. However, the conversion usually suffers from accuracy loss and long inference time, which impede the practical application of SNNs. In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions for optimal conversion. To better correlate the ANN and SNN and obtain greater accuracy, we propose a Rate Norm Layer to replace the ReLU activation function in source ANN training, enabling direct conversion from a trained ANN to an SNN. Moreover, we propose an optimal fit curve to quantify the fit between the activation value of the source ANN and the actual firing rate of the target SNN. We show that the inference time can be reduced by optimizing the upper bound of the fit curve in the revised ANN to achieve fast inference. Our theory can explain existing work on fast inference and achieve better results. The experimental results show that the proposed method achieves near loss-less conversion with VGG-16, PreActResNet-18, and deeper structures. Moreover, it achieves 8.6× faster inference at 0.265× the energy consumption of the typical method. The code is available at https://github.com/DingJianhao/OptSNNConvertion-RNL-RIL.
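As a rough illustration of replacing ReLU with a bounded, threshold-normalized activation before conversion, a stand-in layer could look like the sketch below (an assumption-laden simplification; the actual Rate Norm Layer is defined in the paper and the linked repository):

```python
import torch
import torch.nn as nn

# Sketch (assumption): a clipped, threshold-normalized activation used instead of
# ReLU during ANN training, so activations behave like bounded firing rates in
# [0, 1]. This is an illustrative stand-in, not the exact Rate Norm Layer.
class RateNormLike(nn.Module):
    def __init__(self, init_threshold: float = 1.0):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(init_threshold))  # learnable threshold

    def forward(self, x):
        return torch.clamp(x / (self.theta.abs() + 1e-8), 0.0, 1.0)

# Usage: swap the activations of the source ANN before ANN-SNN conversion.
mlp = nn.Sequential(nn.Linear(784, 256), RateNormLike(), nn.Linear(256, 10))
print(mlp(torch.rand(1, 784)).shape)
```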


2020 ◽  
Author(s):  
Kun Liao ◽  
Ye Chen ◽  
Zhongcheng Yu ◽  
Xiaoyong Hu ◽  
Xingyuan Wang ◽  
...  

Abstract The rapid development of information technology has fueled an ever-increasing demand for ultrafast and ultralow-energy-consumption computing. Existing computing instruments are predominantly electronic processors. The scaling of computing speed is limited not only by data transfer between memory and processing units, but also by the RC delay associated with integrated circuits. Using photons as information carriers is a promising alternative. Here, we report a strategy to realize ultrafast and ultralow-energy-consumption all-optical computing based on convolutional neural networks, leveraging entirely linear optical interactions. The device is constructed from cascaded silicon Y-shaped waveguides with side-coupled silicon waveguide segments to enable complete phase and amplitude control in each waveguide branch. The generic device concept can be used for equation solving, multifunctional logic operations, Fourier transformation, series expansion and encoding, as well as many other mathematical operations. Multiple computing functions were experimentally demonstrated to validate the all-optical computing performance. The time-of-flight of light through the network structure corresponds to an ultrafast computing time on the order of several picoseconds with an ultralow energy consumption of dozens of femtojoules per bit. Our approach can be further expanded to fulfill other complex computing tasks based on non-von Neumann architectures and thus paves the way for on-chip all-optical computing.
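Since the device operates with purely linear optical interactions, any fixed configuration of the phase and amplitude controls acts on the input field amplitudes as a complex-valued linear transform, which is where the computation resides. A minimal numerical sketch with made-up values (not the fabricated device parameters):

```python
import numpy as np

# Sketch (assumption): a configured cascade of Y-branches, each with amplitude
# a_k and phase phi_k, acts as a linear map y = T @ x on complex field amplitudes.
amps = np.array([[0.7, 0.3],
                 [0.5, 0.5]])
phis = np.array([[0.0, np.pi / 2],
                 [np.pi, 0.0]])
T = amps * np.exp(1j * phis)            # complex transmission matrix of the network

x = np.array([1.0 + 0j, 0.5 + 0j])      # input optical field amplitudes
y = T @ x                               # the "computation" is a single pass of light
print(np.abs(y) ** 2)                   # intensities seen at the output detectors
```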


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Amritanand Sebastian ◽  
Andrew Pannone ◽  
Shiva Subbulakshmi Radhakrishnan ◽  
Saptarshi Das

Abstract The recent decline in energy, size, and complexity scaling of the traditional von Neumann architecture has resurrected considerable interest in brain-inspired computing. Artificial neural networks (ANNs) based on emerging devices, such as memristors, achieve brain-like computing but lack energy efficiency. Furthermore, slow learning, incremental adaptation, and false convergence are unresolved challenges for ANNs. In this article, we therefore introduce Gaussian synapses based on heterostructures of atomically thin two-dimensional (2D) layered materials, namely molybdenum disulfide and black phosphorus field effect transistors (FETs), as a class of analog and probabilistic computational primitives for the hardware implementation of statistical neural networks. We also demonstrate complete tunability of the amplitude, mean, and standard deviation of the Gaussian synapse via threshold engineering in dual-gated molybdenum disulfide and black phosphorus FETs. Finally, we show simulation results for the classification of brainwaves using Gaussian synapse based probabilistic neural networks.
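To make the idea concrete, a Gaussian synapse can be thought of as a bell-shaped transfer function with tunable amplitude, mean, and standard deviation, feeding a probabilistic (radial-basis-style) classifier. The following toy sketch uses assumed parameters, not the measured FET characteristics:

```python
import numpy as np

# Toy sketch (assumption): each Gaussian synapse maps an input voltage V to a
# bell-shaped conductance with tunable amplitude A, mean mu, and std sigma.
def gaussian_synapse(V, A, mu, sigma):
    return A * np.exp(-((V - mu) ** 2) / (2.0 * sigma ** 2))

# Minimal probabilistic-neural-network flavor: one Gaussian kernel per class,
# centered on that class's prototype feature value (values are made up).
V_in = 0.42                                     # e.g. a band-power feature of a brainwave
classes = {"alpha": (1.0, 0.4, 0.1), "beta": (1.0, 0.7, 0.1)}
scores = {c: gaussian_synapse(V_in, *p) for c, p in classes.items()}
print(max(scores, key=scores.get), scores)
```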


2020 ◽  
Vol 6 (35) ◽  
pp. eabb3348
Author(s):  
Sungi Kim ◽  
Namjun Kim ◽  
Jinyoung Seo ◽  
Jeong-Eun Park ◽  
Eun Ho Song ◽  
...  

The lack of a scalable nanoparticle-based computing architecture severely limits the potential and use of nanoparticles for manipulating and processing information with molecular computing schemes. Inspired by the von Neumann architecture (VNA), in which multiple programs can be operated without restructuring the computer, we realized a nanoparticle-based VNA (NVNA) on a lipid chip for multiple executions of arbitrary molecular logic operations on a single chip without refabrication. In this system, nanoparticles on a lipid chip function as the hardware, featuring memory, processors, and output units, and DNA strands are used as the software to provide molecular instructions for the facile programming of logic circuits. NVNA enables a group of nanoparticles to form a feed-forward neural network, a perceptron, which implements functionally complete Boolean logic operations, and it provides a programmable, resettable, scalable computing architecture and circuit board to form nanoparticle neural networks and make logical decisions.
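The claim of functional completeness rests on the fact that a single threshold unit can realize NAND (or NOR), from which every Boolean function can be composed. A small sketch that abstracts away the nanoparticle/DNA hardware entirely:

```python
# Sketch (assumption): a threshold "perceptron" abstraction of the nanoparticle
# logic units. NAND alone is functionally complete, so any Boolean circuit can
# be built by composing it.
def perceptron(inputs, weights, bias):
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

def NAND(a, b):
    return perceptron([a, b], weights=[-1.0, -1.0], bias=1.5)

def XOR(a, b):                          # composed purely from NAND gates
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", NAND(a, b), "XOR:", XOR(a, b))
```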


Author(s):  
Dennis Valbjørn Christensen ◽  
Regina Dittmann ◽  
Bernabe Linares-Barranco ◽  
Abu Sebastian ◽  
Manuel Le Gallo ◽  
...  

Abstract Modern computation based on the von Neumann architecture is today a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks that exchange data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this Roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The Roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own views on the current state and future challenges of each research area. We hope that this Roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.

