Neural Network Training Acceleration With RRAM-Based Hybrid Synapses

2021 ◽  
Vol 15 ◽  
Author(s):  
Wooseok Choi ◽  
Myonghoon Kwak ◽  
Seyoung Kim ◽  
Hyunsang Hwang

Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computations. To implement an energy-efficient HNN with high accuracy, high-precision synaptic devices and fully parallel array operations are essential. However, existing resistive memory (RRAM) devices can represent only a finite number of conductance states. Recently, there have been attempts to compensate for device nonidealities by using multiple devices per weight. While this approach is beneficial, it is difficult to apply the existing parallel updating scheme to such synaptic units, which significantly increases the cost of the updating process in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a "big" synapse and a "small" synapse, together with a related training method. Unlike previous attempts, array-wise fully parallel learning is possible with our proposed architecture using simple array selection logic. To experimentally verify the hybrid synapse, we exploit Mo/TiOx RRAM, which shows promising synaptic properties and an areal dependency of conductance precision. By realizing the intrinsic gain via a proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifications to the operational scheme. Through neural network simulations, we confirm that the RRAM-based hybrid synapse with the proposed learning method achieves a maximum accuracy of 97%, comparable to the floating-point software implementation (97.92%), even with only 50 conductance states per device. Our results demonstrate that efficient training and accurate inference are achievable with existing RRAM devices.
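The big/small weight decomposition described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration (the gain value, the carry-based update rule, and the class names are not from the paper); it only shows how a coarse device scaled by an intrinsic gain plus a fine device can represent a wider weight range than either device's 50 conductance states alone.

```python
import numpy as np

# Hypothetical sketch of a hybrid synaptic unit: a "big" synapse whose
# conductance is scaled by an intrinsic gain, plus a "small" synapse for
# fine corrections. Each device has a finite number of conductance
# states (50 here, matching the abstract); the gain is an assumption.

N_STATES = 50          # conductance states per device
GAIN = N_STATES        # assumed intrinsic gain of the big synapse

def quantize(g):
    """Clip and round a conductance to one of N_STATES discrete levels."""
    return float(np.clip(np.round(g), 0, N_STATES - 1))

class HybridSynapse:
    def __init__(self):
        self.big = 0.0    # coarse device (high significance)
        self.small = 0.0  # fine device (low significance)

    def weight(self):
        # Effective weight combines both devices via the intrinsic gain.
        return GAIN * self.big + self.small

    def update(self, delta):
        # Fine updates go to the small device; overflow is carried into
        # the big device (a stand-in for the array selection logic).
        s = self.small + delta
        carry = np.floor(s / N_STATES)
        self.big = quantize(self.big + carry)
        self.small = quantize(s - carry * N_STATES)

syn = HybridSynapse()
syn.update(120.0)      # a large update spills into the big synapse
print(syn.weight())    # → 120.0 (big = 2, small = 20)
```

With a gain of 50, the pair spans roughly 50 × 50 = 2500 effective levels, which is the intuition behind compensating limited per-device precision with two devices of different significance.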

2020 ◽  
Vol 2020 (17) ◽  
pp. 2-1-2-6
Author(s):  
Shih-Wei Sun ◽  
Ting-Chen Mou ◽  
Pao-Chi Chang

To improve workout efficiency and provide body-movement suggestions to users in a "smart gym" environment, we propose using a depth camera to capture a user's body parts and mounting multiple inertial sensors on those body parts, generating deadlift behavior models with a recurrent neural network. The contribution of this paper is threefold: 1) the multimodal sensing signals obtained from multiple devices are fused to generate the deadlift behavior classifiers, 2) the recurrent neural network structure can analyze the information from the synchronized skeletal and inertial sensing data, and 3) a Vaplab dataset is generated for evaluating the deadlift behavior recognition capability of the proposed method.
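The fusion of synchronized skeletal and inertial data in a recurrent model can be sketched as follows. This is not the authors' architecture; the feature dimensions, early-fusion-by-concatenation design, and vanilla RNN cell are all illustrative assumptions standing in for the paper's unspecified network.

```python
import numpy as np

# Minimal sketch: fuse synchronized per-frame skeletal and inertial
# features by concatenation, then run a vanilla recurrent cell and
# classify from the final hidden state. All sizes are assumptions.

rng = np.random.default_rng(0)
D_SKEL, D_IMU, D_HID, N_CLS = 8, 6, 16, 4   # assumed feature/class sizes

Wx = rng.standard_normal((D_HID, D_SKEL + D_IMU)) * 0.1
Wh = rng.standard_normal((D_HID, D_HID)) * 0.1
Wo = rng.standard_normal((N_CLS, D_HID)) * 0.1

def classify(skel_seq, imu_seq):
    """Run a simple RNN over concatenated per-frame features."""
    h = np.zeros(D_HID)
    for s, m in zip(skel_seq, imu_seq):
        x = np.concatenate([s, m])          # early fusion of modalities
        h = np.tanh(Wx @ x + Wh @ h)        # recurrent state update
    logits = Wo @ h
    return int(np.argmax(logits))           # predicted behavior class

skel = rng.standard_normal((20, D_SKEL))    # 20 synchronized frames
imu = rng.standard_normal((20, D_IMU))
label = classify(skel, imu)
```

The key point the sketch captures is synchronization: because both modalities are sampled frame-by-frame, fusion can happen at the input of each recurrent step rather than after separate per-modality encoders.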


2020 ◽  
Vol 71 (6) ◽  
pp. 66-74
Author(s):  
Younis M. Younis ◽  
Salman H. Abbas ◽  
Farqad T. Najim ◽  
Firas Hashim Kamar ◽  
Gheorghe Nechifor

Artificial neural network (ANN) and multiple linear regression (MLR) models were compared for predicting the heat of combustion, and the gross and net heat values, of a diesel fuel engine, based on the chemical composition of the diesel fuel. Data were obtained from chromatographic analysis of one hundred and fifty samples of Iraqi diesel. Eight parameters were used as inputs to predict the gross and net heats of combustion of the diesel fuel. A trial-and-error method was used to determine the shape of the individual ANN. The results showed that the prediction accuracy of the ANN model was greater than that of the MLR model in predicting the gross heat value. The best neural network for predicting the gross heating value was a back-propagation network (8-8-1), using the Levenberg–Marquardt algorithm for the second step of network training, with R = 0.98502 for the test data. Likewise, the best neural network for predicting the net heating value was a back-propagation network (8-5-1), using the Levenberg–Marquardt algorithm for the second step of network training, with R = 0.95112 for the test data.
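The MLR baseline and the reported R statistic can be illustrated with a small sketch. The synthetic data, train/test split, and least-squares fit below are assumptions for illustration (the paper's actual data are chromatographic measurements, and its ANN uses Levenberg-Marquardt training, which is not reproduced here); the sketch only shows fitting 8 inputs and computing the test-set correlation coefficient R.

```python
import numpy as np

# Illustrative sketch (not the paper's data): fit a multiple linear
# regression with 8 inputs and report the correlation coefficient R
# between predictions and targets on held-out samples.

rng = np.random.default_rng(1)
n, p = 150, 8                       # 150 samples, 8 composition inputs
X = rng.standard_normal((n, p))
true_w = rng.standard_normal(p)
y = X @ true_w + 0.1 * rng.standard_normal(n)   # proxy heating value

train, test = slice(0, 120), slice(120, 150)    # assumed 120/30 split
A = np.column_stack([X[train], np.ones(120)])   # design matrix + intercept
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)

pred = np.column_stack([X[test], np.ones(30)]) @ coef
R = np.corrcoef(pred, y[test])[0, 1]            # test-set correlation R
```

The R values quoted in the abstract (0.98502 and 0.95112) are this same Pearson correlation between model predictions and measured values on the test set.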


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
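The binning estimator discussed above can be sketched directly: discretize the continuous hidden activation into equal-width bins, form the empirical joint distribution with the discrete target, and compute mutual information from it. The bin count and synthetic data below are illustrative assumptions; the estimator's sensitivity to such choices is precisely why the abstract notes apparently inconsistent results in the literature.

```python
import numpy as np

# Sketch of the binning estimator for the mutual information I(T; Y)
# between a continuous hidden activation T and a discrete target Y.
# The bin count (30) and the toy data are assumptions for illustration.

def binned_mi(t, y, n_bins=30):
    """Estimate I(T;Y) in bits by discretizing t into equal-width bins."""
    edges = np.linspace(t.min(), t.max(), n_bins)
    t_binned = np.digitize(t, edges)
    joint = np.zeros((n_bins + 2, int(y.max()) + 1))
    for tb, yv in zip(t_binned, y):
        joint[tb, yv] += 1                   # empirical joint counts
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)    # marginal over T bins
    py = joint.sum(axis=0, keepdims=True)    # marginal over Y
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pt @ py)[nz])).sum())

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000)                 # binary labels
t = y + 0.3 * rng.standard_normal(5000)      # activation correlated with y
mi = binned_mi(t, y)                         # well below 1 bit due to noise
```

Because the estimate depends on the binning of a genuinely continuous quantity, two analyses of the same network can disagree; the paper's geometric interpretation of this estimator is what lets the information plane remain informative despite that.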

