On-line learning algorithm for recurrent neural networks using variational methods

1999 ◽ Vol 20 (6-7) ◽ pp. 457 ◽ Author(s): Won-Geun Oh, Byung-Suhl Suh

1999 ◽ Vol 10 (2) ◽ pp. 253-271 ◽ Author(s): P. Campolucci, A. Uncini, F. Piazza, B.D. Rao

2002 ◽ Vol 124 (3) ◽ pp. 364-374 ◽ Author(s): Alexander G. Parlos, Sunil K. Menon, Amir F. Atiya

On-line filtering of stochastic variables that are difficult or expensive to measure directly has been widely studied. In this paper a practical algorithm is presented for adaptive state filtering when the underlying nonlinear state equations are only partially known. The unknown dynamics are constructively approximated using neural networks. The proposed algorithm is based on the two-step prediction-update approach of the Kalman Filter. The algorithm accounts for the unmodeled nonlinear dynamics and makes no assumptions about the system noise statistics. The proposed filter is implemented using static and dynamic feedforward neural networks. Both off-line and on-line learning algorithms are presented for training the filter networks. Two case studies are considered, and comparisons with Extended Kalman Filters (EKFs) are performed. In the first case study, the EKF converges but yields higher state estimation errors than the equivalent neural filter with on-line learning. In the second, more complex case study, the developed EKF does not converge. In both case studies, the off-line trained neural state filters converge quite rapidly and exhibit acceptable performance. On-line training further improves filter performance, decoupling the eventual filter accuracy from the accuracy of the assumed system model.
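The two-step prediction-update loop with an on-line-learned dynamics model can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 1-D system, the fixed innovation gain standing in for a computed Kalman gain, and the linear-in-features approximator standing in for the paper's feedforward networks are all assumptions made for the example.

```python
import math
import random

# Hypothetical 1-D system; the sin term plays the role of the
# partially unknown nonlinear dynamics.
def true_dynamics(x):
    return 0.9 * x + 0.5 * math.sin(x)

class OnlineApproximator:
    """Tiny stand-in for the paper's feedforward networks: a linear model
    over fixed nonlinear features, trained by on-line gradient descent."""
    def __init__(self, lr=0.05):
        self.w = [0.0, 0.0, 0.0]
        self.lr = lr

    def features(self, x):
        return [x, math.sin(x), 1.0]

    def predict(self, x):
        return sum(wi * fi for wi, fi in zip(self.w, self.features(x)))

    def update(self, x_prev, target):
        # One LMS step toward the filtered estimate.
        err = target - self.predict(x_prev)
        for i, fi in enumerate(self.features(x_prev)):
            self.w[i] += self.lr * err * fi

def adaptive_filter(observations, model, gain=0.5):
    """Two-step prediction-update loop in the spirit of the Kalman Filter;
    the measurement model here is simply h(x) = x."""
    x_hat, estimates = 0.0, []
    for y in observations:
        x_pred = model.predict(x_hat)           # 1) predict the next state
        x_new = x_pred + gain * (y - x_pred)    # 2) correct with the innovation
        model.update(x_hat, x_new)              # on-line learning step
        x_hat = x_new
        estimates.append(x_hat)
    return estimates

# Simulate the system and noisy measurements, then run the filter.
rng = random.Random(0)
states, x = [], 1.0
for _ in range(500):
    x = true_dynamics(x)
    states.append(x)
observations = [s + rng.gauss(0.0, 0.1) for s in states]

estimates = adaptive_filter(observations, OnlineApproximator())
rmse = math.sqrt(sum((e - s) ** 2 for e, s in zip(estimates, states)) / len(states))
```

Because the approximator is trained against the filter's own updated estimates, its accuracy improves as the run proceeds, which is the sense in which on-line learning decouples filter accuracy from the initially assumed model.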


2018 ◽ Vol 8 (12) ◽ pp. 2416 ◽ Author(s): Ansi Zhang, Honglei Wang, Shaobo Li, Yuxin Cui, Zhonghao Liu, ...

Prognostics, such as remaining useful life (RUL) prediction, is a crucial task in condition-based maintenance. A major challenge in data-driven prognostics is the difficulty of obtaining a sufficient number of samples of failure progression, yet for traditional machine learning methods and deep neural networks alike, ample training data is a prerequisite for good prediction models. In this work, we propose a transfer learning algorithm based on Bi-directional Long Short-Term Memory (BLSTM) recurrent neural networks for RUL estimation, in which models are first trained on different but related datasets and then fine-tuned on the target dataset. Extensive experimental results show that transfer learning generally improves the prediction models on datasets with a small number of samples. The one exception is transferring from multiple operating conditions to a single operating condition, where transfer learning led to worse results.
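The pre-train-then-fine-tune pattern described above can be sketched with a deliberately tiny model: a one-parameter linear regressor stands in for the BLSTM network, and the synthetic source/target datasets, slopes, and learning rates are all made up for illustration. The structure, though, is the paper's scheme: train on a related dataset, then continue training on the small target dataset at a lower learning rate.

```python
import random

def make_dataset(slope, noise, n, rng):
    """Hypothetical run-to-failure data: one health feature -> RUL target."""
    xs = [rng.uniform(0.0, 1.0) for _ in range(n)]
    ys = [slope * x + rng.gauss(0.0, noise) for x in xs]
    return xs, ys

def train(w, b, xs, ys, lr=0.1, epochs=200):
    """Plain stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def mse(w, b, xs, ys):
    return sum(((w * x + b) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(1)
# Plentiful data from a related source task; scarce data from the target task.
src_x, src_y = make_dataset(slope=2.0, noise=0.05, n=500, rng=rng)
tgt_x, tgt_y = make_dataset(slope=2.3, noise=0.05, n=5, rng=rng)
test_x, test_y = make_dataset(slope=2.3, noise=0.0, n=100, rng=rng)

# Transfer learning: pre-train on the source dataset, then fine-tune on the
# small target dataset with a smaller learning rate and fewer epochs.
w, b = train(0.0, 0.0, src_x, src_y)
w, b = train(w, b, tgt_x, tgt_y, lr=0.02, epochs=50)
transfer_error = mse(w, b, test_x, test_y)
```

The pre-trained parameters start the fine-tuning near a good solution, so the few target samples only need to correct the domain shift rather than learn the mapping from scratch; the paper's caveat about transferring from multi-condition to single-condition data corresponds to the case where the source initialization is actively misleading.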
