A Constructive Learning Algorithm Based on Division of Training Data for Multilayer Neural Networks

1996 ◽ Vol 116 (10) ◽ pp. 1183-1187 ◽ Author(s): Tatsuya Uno, Seiichi Koakutsu, Hironori Hirata
2018 ◽ Vol 8 (12) ◽ pp. 2416 ◽ Author(s): Ansi Zhang, Honglei Wang, Shaobo Li, Yuxin Cui, Zhonghao Liu, ...

Prognostics, such as remaining useful life (RUL) prediction, is a crucial task in condition-based maintenance. A major challenge in data-driven prognostics is the difficulty of obtaining a sufficient number of failure-progression samples. Yet for traditional machine learning methods and deep neural networks alike, ample training data is a prerequisite for good prediction models. In this work, we propose a transfer learning algorithm based on Bi-directional Long Short-Term Memory (BLSTM) recurrent neural networks for RUL estimation, in which models are first trained on different but related datasets and then fine-tuned on the target dataset. Extensive experimental results show that transfer learning generally improves prediction models on datasets with few samples. The one exception is transferring from multiple operating conditions to a single operating condition, where transfer learning led to worse results.
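The pretrain-then-fine-tune idea in this abstract can be illustrated with a deliberately simplified sketch. The BLSTM is replaced here by a hypothetical linear degradation model fitted by gradient descent, and both datasets are simulated; only the transfer recipe (train on a large related source set, then continue training on a small target set with a lower learning rate) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the BLSTM: a linear model w.x + b trained by
# gradient descent on mean-squared error.
def train(X, y, w, b, lr=0.1, epochs=200):
    for _ in range(epochs):
        err = X @ w + b - y
        w -= lr * X.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

# Source domain: plentiful (simulated) run-to-failure data.
Xs = rng.normal(size=(500, 3))
ys = Xs @ np.array([2.0, -1.0, 0.5]) + 1.0 + rng.normal(scale=0.1, size=500)

# Target domain: related but shifted degradation behavior, few samples.
Xt = rng.normal(size=(20, 3))
yt = Xt @ np.array([2.2, -0.9, 0.4]) + 1.2 + rng.normal(scale=0.1, size=20)

# Pretrain on the source, then fine-tune on the target with a smaller
# learning rate so the transferred weights are only gently adjusted.
w, b = train(Xs, ys, np.zeros(3), 0.0)
w, b = train(Xt, yt, w, b, lr=0.05, epochs=100)
```

With only 20 target samples, training from scratch would be noisy; starting from the source-trained weights, fine-tuning needs only a small correction, which is the benefit the abstract reports for small target datasets.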


1994 ◽ Vol 05 (01) ◽ pp. 67-75 ◽ Author(s): Byoung-Tak Zhang

Much previous work on training multilayer neural networks has attempted to speed up the backpropagation algorithm with more sophisticated weight-modification rules, whereby all of the given training examples are used in a random or predetermined sequence. In this paper we investigate an alternative approach in which learning proceeds on an increasing number of selected training examples, starting with a small training set. We derive a measure of the criticality of examples and present an incremental learning algorithm that uses this measure to select a critical subset of the given examples for solving the task at hand. Our experimental results suggest that the method can significantly improve training speed and generalization performance in many real applications of neural networks, and it can be used in conjunction with other variants of gradient descent.
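The incremental selection loop described here can be sketched as follows. This is not the paper's criticality measure or network: a logistic model on toy 2-D data stands in for the multilayer network, and the current prediction error of each example is used as a simple proxy for criticality. The loop structure (start small, train, add the most critical unselected examples, repeat) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable 2-class problem.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic model trained by gradient descent; a stand-in for the
# backpropagation-trained multilayer network in the paper.
def train(X, y, w, lr=0.5, epochs=50):
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

w = np.zeros(2)
# Start with a small random subset, then repeatedly add the examples the
# current model finds most "critical" -- here, largest prediction error,
# a crude proxy for the paper's criticality measure.
selected = list(rng.choice(len(X), size=10, replace=False))
for _ in range(5):
    w = train(X[selected], y[selected], w)
    errors = np.abs(sigmoid(X @ w) - y)
    errors[selected] = -1.0          # never re-select an example
    selected += list(np.argsort(errors)[-10:])

accuracy = ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()
```

The model ends up trained on 60 of the 200 examples, yet the selected subset concentrates on the hardest cases near the decision boundary, which is the source of the speed and generalization gains the abstract claims.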


2013 ◽ Vol 37 ◽ pp. 182-188 ◽ Author(s): Bernard Widrow, Aaron Greenblatt, Youngsik Kim, Dookun Park

2021 ◽ Vol 1964 (6) ◽ pp. 062042 ◽ Author(s): R. Mohanapriya, D. Vijendra Babu, S. SathishKumar, C. Sarala, E. Anjali, ...
