BenchNN: On the broad potential application scope of hardware neural network accelerators

Author(s):  
Tianshi Chen ◽  
Yunji Chen ◽  
Marc Duranton ◽  
Qi Guo ◽  
Atif Hashmi ◽  
...  
2021 ◽  
Vol 13 (12) ◽  
pp. 2405
Author(s):  
Fengyang Long ◽  
Chengfa Gao ◽  
Yuxiang Yan ◽  
Jinling Wang

Precise modeling of weighted mean temperature (Tm) is critical for realizing real-time conversion from zenith wet delay (ZWD) to precipitable water vapor (PWV) in Global Navigation Satellite System (GNSS) meteorology applications. Empirical Tm models developed with neural network techniques have been shown to perform better on the global scale; they also have fewer model parameters and are thus easy to operate. This paper aims to deepen the research on Tm modeling with neural networks, expand the application scope of Tm models, and provide global users with more solutions for the real-time acquisition of Tm. An enhanced neural network Tm model (ENNTm) has been developed with globally distributed radiosonde data. Compared with other empirical models, the ENNTm has several advanced features in both model design and model performance. First, the data used for modeling cover the whole troposphere rather than just the region near the Earth’s surface. Second, ensemble learning was employed to weaken the impact of sample disturbance on model performance, and elaborate data preprocessing, including up-sampling and down-sampling, was adopted to achieve better model performance on the global scale. Furthermore, the ENNTm was designed to meet the requirements of three different application conditions by providing three sets of model parameters, i.e., Tm estimation without measured meteorological elements, Tm estimation with only measured temperature, and Tm estimation with both measured temperature and water vapor pressure. Validation was carried out using globally distributed radiosonde data, and the results show that the ENNTm outperforms competing models from different perspectives under the same application conditions. The proposed model expands the application scope of Tm estimation and provides global users with more choices in real-time GNSS-PWV retrieval.
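The role Tm plays in the ZWD-to-PWV conversion described above can be sketched with the standard dimensionless conversion factor. The refractivity constants below are commonly cited textbook values, used here for illustration only; they are not taken from this paper, and the ENNTm itself (which supplies the Tm estimate) is not reproduced.

```python
# Sketch of the standard ZWD -> PWV conversion that a Tm estimate enables.
# Constants are commonly cited illustrative values, not from the paper.

RHO_W = 1000.0   # density of liquid water, kg/m^3
R_V = 461.5      # specific gas constant of water vapor, J/(kg*K)
K2P = 16.48      # k2' refractivity constant, K/hPa (illustrative)
K3 = 3.776e5     # k3 refractivity constant, K^2/hPa (illustrative)

def pwv_from_zwd(zwd_m, tm_k):
    """Convert zenith wet delay (m) to precipitable water vapor (m)
    using the weighted mean temperature Tm (K)."""
    # Convert the hPa-based refractivity constants to SI (1 hPa = 100 Pa).
    k2p_si = K2P / 100.0   # K/Pa
    k3_si = K3 / 100.0     # K^2/Pa
    # Dimensionless conversion factor, roughly 0.15 for typical Tm values.
    pi_factor = 1e6 / (RHO_W * R_V * (k3_si / tm_k + k2p_si))
    return pi_factor * zwd_m
```

For a 10 cm wet delay at Tm ≈ 273 K this yields on the order of 15 mm of PWV, which is why an accurate real-time Tm (rather than a fixed climatological value) directly improves the retrieved PWV.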


2021 ◽  
Vol 15 ◽  
Author(s):  
Wooseok Choi ◽  
Myonghoon Kwak ◽  
Seyoung Kim ◽  
Hyunsang Hwang

Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computations. To implement an energy-efficient HNN with high accuracy, high-precision synaptic devices and fully parallel array operations are essential. However, existing resistive random-access memory (RRAM) devices can represent only a finite number of conductance states. Recently, there have been attempts to compensate for device nonidealities by using multiple devices per weight. While this is beneficial, it is difficult to apply the existing parallel updating scheme to such synaptic units, which significantly increases the cost of the updating process in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a “big” synapse and a “small” synapse, together with a related training method. Unlike previous attempts, array-wise fully parallel learning is possible with the proposed architecture using simple array selection logic. To experimentally verify the hybrid synapse, we exploit Mo/TiOx RRAM, which shows promising synaptic properties and an areal dependency of conductance precision. By realizing the intrinsic gain via proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifications to the operational scheme. Through neural network simulations, we confirm that the RRAM-based hybrid synapse with the proposed learning method achieves a maximum accuracy of 97%, comparable to the floating-point software implementation (97.92%), even with only 50 conductance states per device. Our results demonstrate that both training efficiency and inference accuracy can be achieved using existing RRAM devices.
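The big/small weight composition described in this abstract can be sketched behaviorally. This is a minimal model under stated assumptions: the gain value, state counts, and transfer policy below are illustrative placeholders, not the paper's measured device parameters or its exact training algorithm.

```python
import numpy as np

# Behavioral sketch of a two-device ("big" + "small") hybrid synaptic
# unit with a finite number of conductance states. GAIN stands in for
# the area-defined intrinsic gain of the big synapse (illustrative).

N_STATES = 50   # conductance states per device (figure from the abstract)
GAIN = 8.0      # scaling of the big synapse (assumed value)

class HybridSynapse:
    def __init__(self, shape):
        self.big = np.zeros(shape)    # coarse, high-significance weight
        self.small = np.zeros(shape)  # fine weight, updated in parallel

    def weight(self):
        # Effective weight: big synapse scaled by the intrinsic gain.
        return GAIN * self.big + self.small

    def update_small(self, delta):
        # Fully parallel updates land on the small synapse only,
        # quantized to its limited number of states.
        self.small = np.clip(np.round(self.small + delta),
                             -(N_STATES // 2), N_STATES // 2)

    def transfer(self):
        # Occasionally fold accumulated small-synapse weight into the
        # big synapse: one coarse step per GAIN units of fine weight.
        carry = np.round(self.small / GAIN)
        self.big = np.clip(self.big + carry,
                           -(N_STATES // 2), N_STATES // 2)
        self.small -= carry * GAIN
```

The design point this illustrates: frequent, cheap, array-parallel updates touch only the small device, while the rare transfer step moves significance into the big device, so limited per-device precision does not bound the effective weight resolution.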


2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Sungho Kim ◽  
Meehyun Lim ◽  
Yeamin Kim ◽  
Hee-Dong Kim ◽  
Sung-Jin Choi

Author(s):  
Alexander D. Pisarev ◽  
Alexander N. Busygin ◽  
Abdulla Kh. A. Ibrahim ◽  
Sergey Yu. Udovichenko

This publication continues a series of articles on the creation of neuroprocessor nodes based on a composite memristor-diode crossbar. The authors have determined the principles of converting pulse information into a binary code in the output device of the neuroprocessor, implemented in a logical matrix based on a new electronic element, the combined memristor-diode crossbar. Processing of pulse signals is possible in the logical matrix because one layer of the matrix is a set of logical AND or OR gates with arbitrarily connected inputs. The authors propose two solutions to the problem of decoding pulses from a population of neurons, arriving at the output device from the hardware neural network of the neuroprocessor, into standard binary signals. The first solution uses two layers of a logical matrix and a pulse generator. The second solution is more compact because it uses a binary number generator, which eliminates one layer of the logical matrix. This article presents SPICE modeling results for the process of decoding pulsed information signals into binary format and confirms the operability of the output device's electrical circuit. The originality of the device's operation lies in the switching of the generator signals by the logical matrix to the neuroprocessor output, based on the time delay of the input pulse from the hardware neural network. The use of the memristor logical matrix in all nodes of the neuroprocessor, including the input device, makes it possible to unify the element base of the neuroprocessor's complete electrical circuit, as well as its power supplies.


AIP Advances ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 025111 ◽  
Author(s):  
Divya Kaushik ◽  
Utkarsh Singh ◽  
Upasana Sahu ◽  
Indu Sreedevi ◽  
Debanjan Bhowmik

2011 ◽  
Vol 16 (2) ◽  
pp. 229-233 ◽  
Author(s):  
Kazuto Okazaki ◽  
Tatsuya Ogiwara ◽  
Dongshin Yang ◽  
Kentaro Sakata ◽  
Ken Saito ◽  
...  
