hardware neural network
Recently Published Documents


TOTAL DOCUMENTS

49
(FIVE YEARS 19)

H-INDEX

8
(FIVE YEARS 1)

Author(s):  
Mikihito Hayakawa ◽  
Kenji Takeda ◽  
Motokuni Ishibashi ◽  
Kaito Tanami ◽  
Megumi Aibara ◽  
...  

2021 ◽  
Vol 15 ◽  
Author(s):  
Wooseok Choi ◽  
Myonghoon Kwak ◽  
Seyoung Kim ◽  
Hyunsang Hwang

Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computation. Implementing an energy-efficient HNN with high accuracy requires high-precision synaptic devices and fully parallel array operations. However, existing resistive memory (RRAM) devices can represent only a finite number of conductance states. Recent work has attempted to compensate for device non-idealities by using multiple devices per weight. While beneficial, this makes it difficult to apply the existing parallel updating scheme to such synaptic units, which significantly increases the cost of the updating process in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a “big” synapse and a “small” synapse, together with a matching training method. Unlike previous approaches, our architecture enables array-wise, fully parallel learning with simple array-selection logic. To verify the hybrid synapse experimentally, we exploit Mo/TiOx RRAM, which shows promising synaptic properties and an areal dependence of conductance precision. By realizing the intrinsic gain through a proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifying the operational scheme. Neural network simulations confirm that the RRAM-based hybrid synapse with the proposed learning method achieves a maximum accuracy of 97%, comparable to a floating-point software implementation (97.92%), even with only 50 conductance states in each device. Our results show that efficient training and high inference accuracy can be achieved with existing RRAM devices.
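The big/small pairing described above can be illustrated with a small numeric sketch. This is not the authors' circuit; the gain value, the [0, 1] conductance range, and all function names are assumptions for illustration. Only the 50-states-per-device figure comes from the abstract.

```python
import numpy as np

# Hypothetical sketch of a hybrid synaptic unit built from a "big" and a
# "small" RRAM device, each limited to N_STATES conductance levels.
# The big device carries a fixed intrinsic gain (GAIN, assumed here to
# equal N_STATES), so the pair spans far more effective weight levels
# than either device alone.
N_STATES = 50        # conductance states per device (from the abstract)
GAIN = N_STATES      # assumed gain of the big synapse over the small one

def quantize(g):
    """Clip a conductance to [0, 1] and round to one of N_STATES levels."""
    levels = N_STATES - 1
    return np.round(np.clip(g, 0.0, 1.0) * levels) / levels

def effective_weight(g_big, g_small):
    """Combined weight of the hybrid unit: big device scaled by its gain."""
    return GAIN * quantize(g_big) + quantize(g_small)

# With 50 states per device, the pair represents 50 * 50 = 2500 distinct
# effective weight levels, versus 50 for a single device.
w = effective_weight(0.5, 0.25)
```

The design intuition is the same as a coarse/fine digit pair: the big synapse sets the high-order part of the weight and the small synapse fills in the low-order part.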


2021 ◽  
Author(s):  
Upasana Sahu ◽  
Naven Sisodia ◽  
Janak Sharda ◽  
Pranaba Kishor Muduli ◽  
Debanjan Bhowmik

We have modeled domain-wall motion in ferrimagnetic and ferromagnetic devices through micromagnetics and shown that the domain-wall velocity can be 2–2.5X higher in the ferrimagnetic device than in the ferromagnetic device. We also show that this velocity ratio is consistent with recent experimental findings. Because of this velocity ratio, when such devices are used as synapses in a crossbar-array-based fully connected network, our system-level simulation shows that a ferrimagnet-synapse-based crossbar offers 4X faster (at the same energy efficiency) or 4X more energy-efficient (at the same speed) learning compared to a ferromagnet-synapse-based crossbar.
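The link between wall velocity and update cost can be sketched with a toy model. This is not the authors' simulator: the track length and velocity values are assumptions, and only the 2.5X velocity ratio reflects the micromagnetic result; the 4X system-level figure comes from the paper's full crossbar simulation, not from this model.

```python
# Toy model: a domain-wall synapse whose weight is set by the wall
# position along a track. Programming time scales as distance / velocity,
# so a faster wall directly shortens (and, at fixed power, cheapens)
# each weight update.
TRACK_LENGTH_NM = 500.0            # assumed track length
V_FERRO_M_S = 100.0                # assumed ferromagnet wall velocity
V_FERRI_M_S = 2.5 * V_FERRO_M_S    # 2-2.5X faster per the micromagnetic result

def update_time_ns(delta_w, velocity_m_s):
    """Time to move the wall by a fraction delta_w of the track, in ns."""
    distance_nm = abs(delta_w) * TRACK_LENGTH_NM
    return distance_nm / velocity_m_s   # nm / (m/s) = ns

t_ferro = update_time_ns(0.1, V_FERRO_M_S)
t_ferri = update_time_ns(0.1, V_FERRI_M_S)
# The per-update speedup equals the velocity ratio (2.5X here).
```

In this simplified picture the per-update gain tracks the velocity ratio directly; the larger 4X system-level gain reported above emerges only from the full crossbar simulation.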


2021 ◽  
Vol 14 (1) ◽  
pp. 68-79
Author(s):  
С.Ю. Удовиченко ◽  
А.Д. Писарев ◽  
А.Н. Бусыгин ◽  
А.Н. Бобылев

Primary and final information processing takes place in the input and output devices of the biomorphic neuroprocessor. Results are presented on the compression of digital information at the input and its encoding into pulses, as well as on decoding the neuron-activation information at the output into a digital binary code. An implementation of the processor's hardware neural network based on an original biomorphic electrical model of a neuron is presented. Results of SPICE modeling and experimental study of signal processing are given for the modes of routing neuron output pulses to the synapses of other neurons in a logic matrix, scalar multiplication of a matrix of numbers by a vector, and associative self-learning in a memory matrix. For the first time, the generation of a new association (new knowledge) is demonstrated both in computer simulation and in a fabricated memristor-diode crossbar, in contrast to self-learning in existing hardware neural networks with synapses based on discrete memristors.
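The matrix-by-vector mode mentioned above is the standard analog crossbar operation, and can be sketched as follows. This is an illustrative model, not the processor's actual memristor-diode circuit; the conductance and voltage values are arbitrary.

```python
import numpy as np

# Illustrative sketch: a memristor crossbar performs matrix-vector
# multiplication in the analog domain. Row voltages encode the input
# vector, each cross-point conductance encodes a matrix element, and
# the current summed on each column line is the corresponding dot
# product (Ohm's law plus Kirchhoff's current law).
G = np.array([[0.2, 0.5, 0.1],   # conductances (S), one row per input line
              [0.4, 0.1, 0.3]])
v = np.array([1.0, 0.5])         # input voltages (V) applied to the rows

i_columns = v @ G                # column currents: i_j = sum_k v_k * G[k, j]
```

Because every column current is accumulated simultaneously, the whole multiplication happens in one analog step rather than one multiply-accumulate at a time.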

