Learning with Resistive Switching Neural Networks

Author(s): Mingyi Rao, Qiangfei Xia, J. Joshua Yang, Zhongrui Wang, Can Li, ...

2019, Vol 9 (1)
Author(s): P. Stoliar, H. Yamada, Y. Toyosaki, A. Sawa

Abstract: Resistive switching (RS) devices have attracted increasing attention for artificial synapse applications in neural networks because of their nonvolatile and analogue resistance changes. Among neural networks, a spiking neural network (SNN) based on spike-timing-dependent plasticity (STDP) is highly energy efficient. To implement STDP in resistive switching devices, several types of voltage spikes have been proposed to date, but there have been few reports on the relationship between the STDP characteristics and the spike type. Here, we report the STDP characteristics implemented in ferroelectric tunnel junctions (FTJs) by several types of spikes. Based on simulated time evolutions of superimposed spikes, and taking the nonlinear current-voltage (I-V) characteristics of FTJs into account, we propose equations for estimating the STDP curve parameters, namely the magnitude of the conductance change (ΔGmax) and the time window (τC), from the spike parameters of peak amplitude (Vpeak) and time durations (tp and td) for three spike types: triangle-triangle, rectangular-triangle, and rectangular-rectangular. Power consumption experiments on the STDP revealed that the power consumed under the inactive-synapse condition (spike timing |Δt| > τC) was as large as 50–82% of that under the active-synapse condition (|Δt| < τC). This finding indicates that the power consumption under the inactive-synapse condition should be reduced to minimize the total power consumption of an SNN implemented using FTJs as synapses.
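The abstract characterizes an STDP curve by a conductance-change magnitude ΔGmax and a time window τC, outside of which (|Δt| > τC) the synapse is effectively inactive. A minimal sketch of such a curve, assuming a generic exponential STDP window; the function name, signature, and default values below are our illustration, not the paper's equations:

```python
import math

def stdp_delta_g(dt, dg_max=1.0, tau_c=10e-3):
    """Hypothetical STDP kernel: conductance change vs. spike timing dt (s).

    Pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0)
    depresses; for |dt| much larger than tau_c the change decays to ~0,
    i.e. the synapse is effectively inactive outside the time window.
    """
    if dt == 0:
        return 0.0
    sign = 1.0 if dt > 0 else -1.0
    return sign * dg_max * math.exp(-abs(dt) / tau_c)
```

With these defaults, `stdp_delta_g(5e-3)` is a sizeable positive change while `stdp_delta_g(100e-3)` is negligible, mirroring the active/inactive-synapse distinction around |Δt| ≈ τC.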


2021, Vol 23 (6), pp. 285-294
Author(s): N.V. Andreeva, V.V. Luchinin, E.A. Ryndin, M.G. Anchkov, ...

Memristive neuromorphic chips exploit a promising class of novel functional materials (memristors) to deploy a new spiking neural network architecture for building the basic blocks of brain-like systems. Memristor-based neuromorphic hardware for multi-agent systems is a frontier challenge in chip design for fast and energy-efficient computing. As functional materials, metal oxide thin films with resistive switching and memory effects (memristive structures) are recognized as a potential elemental base for new components of neuromorphic engineering, enabling data storage and processing to be combined in a single unit. A key design issue in this case is the hardware-defined functionality of neural networks. The gradual change of the resistive properties of memristive elements and their non-volatile memory behavior make it possible to organize spiking neural networks with unsupervised learning through hardware implementation of basic synaptic mechanisms, such as Hebbian learning rules, including spike-timing-dependent plasticity, long-term potentiation and depression. This paper provides an overview of research carried out at Saint Petersburg Electrotechnical University "LETI" since 2014 in the field of novel electronic components for neuromorphic hardware and brain-like chip design. Among the most promising concepts developed at ETU "LETI" are: metal-insulator-metal structures exhibiting multilevel resistive switching (gradual tuning of resistive properties and bipolar resistive switching combined in a single memristive element) for use as artificial synaptic devices in neuromorphic chips; computing schemes for spatio-temporal pattern recognition based on a spiking neural network architecture; and breadboard models of analogue circuits implementing neuromorphic blocks for brain-like systems.
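The gradual, bounded conductance tuning described above (long-term potentiation and depression on a multilevel memristive element) can be sketched as follows; the class name, conductance bounds, and level count are illustrative assumptions, not values from the paper:

```python
class MemristiveSynapse:
    """Toy model of a multilevel memristive synapse: conductance moves
    in small discrete steps between g_min and g_max under potentiating
    (LTP) and depressing (LTD) pulses, and saturates at the bounds."""

    def __init__(self, g_min=1e-6, g_max=1e-4, levels=32):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / (levels - 1)
        self.g = g_min  # start in the high-resistance state

    def potentiate(self):
        # One LTP pulse: raise conductance by one level, clipped at g_max.
        self.g = min(self.g + self.step, self.g_max)

    def depress(self):
        # One LTD pulse: lower conductance by one level, clipped at g_min.
        self.g = max(self.g - self.step, self.g_min)
```

The clipping at the bounds captures the saturation of a real device, while the fixed step is a simplification: measured devices typically show nonlinear, pulse-history-dependent steps.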


2021, pp. 108034
Author(s): G. González-Cordero, M.B. González, M. Zabala, K. Kalam, A. Tamm, ...

Electronics, 2021, Vol 10 (24), pp. 3141
Author(s): Rocio Romero-Zaliz, Antonio Cantudo, Eduardo Perez, Francisco Jimenez-Molinos, Christian Wenger, ...

We have performed simulation experiments on hardware neural networks (NNs) to analyze how the number of synapses affects network accuracy for different NN architectures and datasets. As a reference, we take a technology based on 4-kbit 1T1R ReRAM arrays employing resistive switching devices with HfO2 dielectrics. Both fully dense (FdNN) and convolutional neural networks (CNNs) were considered, varying the NN size in terms of the number of synapses and of hidden-layer neurons. CNNs work better when the number of available synapses is limited. When quantized synaptic weights are included, we observed that NN accuracy decreases significantly as the number of synapses is reduced; in this respect, a trade-off between the number of synapses and the NN accuracy has to be achieved. Consequently, the CNN architecture must be carefully designed; in particular, different datasets need specific architectures, according to their complexity, to achieve good results. Given the number of variables involved in optimizing a NN hardware implementation (synaptic weight levels, NN architecture, etc.), a specific solution has to be worked out in each case.
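The trade-off discussed above involves quantized synaptic weights, i.e. weights restricted to the discrete conductance levels a ReRAM cell can hold. A minimal sketch of uniform weight quantization; the function and its parameters are our illustration, not the authors' code:

```python
import numpy as np

def quantize_weights(w, levels):
    """Uniformly quantize weights to `levels` discrete values spanning
    [min(w), max(w)], as a stand-in for multilevel ReRAM conductances."""
    w = np.asarray(w, dtype=float)
    lo, hi = w.min(), w.max()
    if hi == lo:  # degenerate case: all weights identical
        return w.copy()
    q = np.round((w - lo) / (hi - lo) * (levels - 1))
    return lo + q * (hi - lo) / (levels - 1)
```

Sweeping `levels` (and the layer sizes) against a held-out accuracy metric is one way to locate the trade-off point between synapse count, weight resolution, and accuracy that the abstract describes.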

