Recursive Binary Neural Network Training Model for Efficient Usage of On-Chip Memory

2019 ◽  
Vol 66 (7) ◽  
pp. 2593-2605
Author(s):  
Tianchan Guan ◽  
Peiye Liu ◽  
Xiaoyang Zeng ◽  
Martha Kim ◽  
Mingoo Seok
2021 ◽  
Vol 4 (135) ◽  
pp. 12-22
Author(s):  
Vladimir Gerasimov ◽  
Nadija Karpenko ◽  
Denys Druzhynin

The goal of the paper is to create a training model based on real raw noisy data and to train a neural network to determine the behavior of the fuel level, namely the time and volume of vehicle refueling as well as fuel consumption, excessive consumption, and drainage.

Fuel control and metering systems use various algorithms and data processing methods to suppress noise. Some systems apply primary filtering by excluding readings that are out of range, sharp jumps, and deviations, and by averaging over a sliding window. Filters more sophisticated than simple averaging, such as the Kalman filter, are also being investigated for data processing.

When the fuel level is measured with fuel level sensors, the data are influenced by many external factors that interfere with the measurement and distort the real fuel level. Because these interferences are random and differ in structure, it is very difficult to remove them completely with classical noise suppression algorithms. We therefore apply artificial intelligence, namely a neural network, to find patterns, detect noise, and correct distorted data. To correct distorted data, one first has to determine which data are distorted, i.e., classify the data.

In the course of the work, the raw fuel level data were transformed for use in the neural network training model. To describe the behavior of the fuel level, we use four possible classes: fuel is being consumed, the vehicle is being refueled, the fuel level does not change (the vehicle is idle), and the data are distorted by noise. Tools from the DeepLearning4j library were used to load the training data and to train the network. A multilayer model is used, namely a three-layer neural network, together with training parameters provided by DeepLearning4j that were selected experimentally. After training, the network was run on test data, from which a confusion matrix and evaluation metrics were obtained.

In conclusion, finding a good model takes many ideas and much experimentation, and the raw data must be processed and transformed correctly to obtain valid training data. So far, the neural network has been trained to determine the state of the fuel level at a point in time and to classify the behavior into the four main labels (classes). Although the error in determining the fuel level behavior has not been reduced to zero, the states of the neural network have been saved, so the network can be retrained and improved in the future to obtain better results.
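The abstract does not include code, so the following is only a minimal sketch of how such a four-class fuel-level classifier could be set up with DeepLearning4j. The window length, layer sizes, hyperparameters, and names such as `FuelLevelClassifierSketch` and `trainAndEvaluate` are illustrative assumptions, not values or identifiers taken from the paper.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class FuelLevelClassifierSketch {

    // Hypothetical setup: each example is a fixed window of consecutive
    // fuel-level readings, labelled with one of the four classes from the
    // abstract (consumption, refueling, idle, noise-distorted).
    static final int WINDOW_SIZE = 32;  // assumption: readings per example
    static final int NUM_CLASSES = 4;

    static MultiLayerNetwork trainAndEvaluate(DataSetIterator trainIter,
                                              DataSetIterator testIter) {
        // Three layers in DL4J terms: two dense hidden layers plus a softmax output.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .weightInit(WeightInit.XAVIER)
                .updater(new Adam(1e-3))           // illustrative learning rate
                .list()
                .layer(new DenseLayer.Builder().nIn(WINDOW_SIZE).nOut(64)
                        .activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nIn(64).nOut(32)
                        .activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(32).nOut(NUM_CLASSES)
                        .activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();

        for (int epoch = 0; epoch < 50; epoch++) {  // epoch count is a placeholder
            model.fit(trainIter);
            trainIter.reset();
        }

        // Evaluating on held-out data yields the confusion matrix and the
        // evaluation metrics (accuracy, precision, recall, F1) mentioned above.
        Evaluation eval = model.evaluate(testIter);
        System.out.println(eval.stats());
        System.out.println(eval.getConfusionMatrix());
        return model;
    }
}
```

In the workflow described in the abstract, the two `DataSetIterator`s would be built from the transformed raw fuel-level data; they are left abstract here.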


2003 ◽  
Vol 13 (05) ◽  
pp. 333-351 ◽  
Author(s):  
DI WANG ◽  
NARENDRA S. CHAUDHARI

A key problem in Binary Neural Network learning is to identify larger linearly separable subsets. In this paper we prove several lemmas about linear separability. Based on these lemmas, we propose the Multi-Core Learning (MCL) and Multi-Core Expand-and-Truncate Learning (MCETL) algorithms to construct Binary Neural Networks. We conclude that MCL and MCETL simplify the equations for computing weights and thresholds, and that they result in the construction of a simpler hidden layer. Examples are given to demonstrate these conclusions.
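MCL and MCETL themselves derive weights and thresholds from the lemmas proved in the paper; the sketch below is not that construction, only a generic perceptron-based illustration of the underlying idea of growing a linearly separable subset of true vertices. All names here (`growCore`, `linearlySeparable`, etc.) are invented for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SeparableSubsetSketch {

    /** Perceptron-based check: can 'positives' be separated from 'negatives'
     *  by a single hyperplane within the given iteration budget? */
    static boolean linearlySeparable(List<int[]> positives, List<int[]> negatives, int maxEpochs) {
        int n = positives.get(0).length;
        double[] w = new double[n];
        double threshold = 0.0;
        for (int epoch = 0; epoch < maxEpochs; epoch++) {
            boolean updated = false;
            // Perceptron rule: nudge the hyperplane toward misclassified vertices.
            for (int[] x : positives) {
                if (dot(w, x) - threshold <= 0) {          // should be strictly positive
                    for (int i = 0; i < n; i++) w[i] += x[i];
                    threshold -= 1;
                    updated = true;
                }
            }
            for (int[] x : negatives) {
                if (dot(w, x) - threshold > 0) {           // should be non-positive
                    for (int i = 0; i < n; i++) w[i] -= x[i];
                    threshold += 1;
                    updated = true;
                }
            }
            if (!updated) return true;  // converged: the two sets are separable
        }
        return false;                   // budget exhausted: treat as not separable
    }

    static double dot(double[] w, int[] x) {
        double s = 0;
        for (int i = 0; i < w.length; i++) s += w[i] * x[i];
        return s;
    }

    /** Greedily grows a separable "core": keep adding true vertices while the
     *  enlarged set remains separable from the false vertices. */
    static List<int[]> growCore(List<int[]> trueVertices, List<int[]> falseVertices) {
        List<int[]> core = new ArrayList<>();
        for (int[] v : trueVertices) {
            core.add(v);
            if (!linearlySeparable(core, falseVertices, 1000)) {
                core.remove(core.size() - 1);  // keep the core separable
            }
        }
        return core;
    }

    public static void main(String[] args) {
        // XOR on two inputs: true vertices {01, 10}, false vertices {00, 11}.
        List<int[]> trues = Arrays.asList(new int[]{0, 1}, new int[]{1, 0});
        List<int[]> falses = Arrays.asList(new int[]{0, 0}, new int[]{1, 1});
        System.out.println("Separable core size: " + growCore(trues, falses).size()); // expect 1
    }
}
```

Each separable core found this way can be realized by one hidden neuron; fewer, larger cores mean a simpler hidden layer, which is the benefit the paper claims for MCL and MCETL.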


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Qinju Liu ◽  
Xianhui Lu ◽  
Fucai Luo ◽  
Shuai Zhou ◽  
Jingnan He ◽  
...  

We present a secure backpropagation neural network training model (SecureBP), which allows a neural network to be trained while retaining the confidentiality of the training data, based on the homomorphic encryption scheme. We make two contributions. The first one is to introduce a method to find a more accurate and numerically stable polynomial approximation of functions in a certain interval. The second one is to find a strategy of refreshing ciphertext during training, which keeps the order of magnitude of the noise at $\tilde{O}(e^{33})$.
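The abstract does not spell out the approximation step. As a rough illustration only, one standard least-squares formulation for approximating an activation such as the sigmoid by a low-degree polynomial on an interval $[a, b]$ (a generic sketch, not necessarily the construction used in SecureBP) is the following:

```latex
% Illustrative least-squares polynomial approximation on an interval [a, b];
% a generic formulation, not necessarily the exact method used in SecureBP.
\[
  p^\star \;=\; \operatorname*{arg\,min}_{\deg p \,\le\, d}
  \int_a^b \bigl(f(x) - p(x)\bigr)^2 \, dx,
  \qquad p(x) \;=\; \sum_{k=0}^{d} c_k x^k .
\]
% Writing the objective in the coefficient vector c = (c_0, ..., c_d) gives the
% normal equations G c = h, a small (d+1) x (d+1) linear system, with
\[
  G_{jk} \;=\; \int_a^b x^{j+k} \, dx \;=\; \frac{b^{j+k+1} - a^{j+k+1}}{j+k+1},
  \qquad
  h_j \;=\; \int_a^b x^j f(x) \, dx .
\]
```

Working in an orthogonal basis (e.g. Chebyshev polynomials) instead of the monomials improves the conditioning of this system, which is presumably the kind of numerical-stability concern the first contribution addresses.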


Author(s):  
Erwei Wang ◽  
James J. Davis ◽  
Daniele Moro ◽  
Piotr Zielinski ◽  
Jia Jie Lim ◽  
...  
