Sequential learning algorithm for PG-RBF network using regression weights for time series prediction

Author(s):  
I. Rojas ◽  
H. Pomares ◽  
J. L. Bernier ◽  
J. Ortega ◽  
E. Ros ◽  
...  
1997 ◽  
Vol 9 (2) ◽  
pp. 461-478 ◽  
Author(s):  
Lu Yingwei ◽  
N. Sundararajan ◽  
P. Saratchandran

This article presents a sequential learning algorithm for function approximation and time-series prediction using a minimal radial basis function neural network (RBFNN). The algorithm combines the growth criterion of the resource-allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RBFNN. The performance of the algorithm is compared with RAN and the enhanced RAN algorithm of Kadirkamanathan and Niranjan (1993) for the following benchmark problems: (1) hearta from the benchmark problems database PROBEN1, (2) the Hermite polynomial, and (3) the Mackey-Glass chaotic time series. For these problems, the proposed algorithm is shown to realize RBFNNs with far fewer hidden neurons and better or comparable accuracy.
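The pruning strategy the abstract describes — removing hidden units whose relative contribution to the network output stays small — can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's exact formulas: the Gaussian parameterization, the window of three observations, and the 0.01 threshold are all assumptions.

```python
import numpy as np

def unit_contributions(x, centers, widths, weights):
    """Per-unit outputs o_k = w_k * exp(-||x - c_k||^2 / sigma_k^2),
    scaled by the largest magnitude to give each unit's relative contribution."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    o = weights * np.exp(-d2 / widths ** 2)
    return np.abs(o) / np.max(np.abs(o))

def units_to_prune(history, delta=0.01):
    """Flag units whose relative contribution stayed below delta for
    every observation in the window `history` (list of length-K arrays)."""
    return np.all(np.asarray(history) < delta, axis=0)

# Two Gaussian units; the second has a negligible weight, so its
# relative contribution stays tiny over a window of inputs near 0.
centers = np.array([[0.0], [5.0]])
widths = np.array([1.0, 1.0])
weights = np.array([1.0, 1e-4])
history = [unit_contributions(np.array([x]), centers, widths, weights)
           for x in (0.0, 0.5, 1.0)]
mask = units_to_prune(history)   # mask[k] True -> candidate for removal
```

Normalizing by the largest unit output makes the criterion scale-free, so a unit is judged by how much it matters relative to its peers rather than by its absolute magnitude.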


2007 ◽  
Vol 90 (12) ◽  
pp. 129-139
Author(s):  
Manabu Gouko ◽  
Yoshihiro Sugaya ◽  
Hirotomo Aso

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Yusuke Sakemi ◽  
Kai Morino ◽  
Timothée Leleu ◽  
Kazuyuki Aihara

Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly based on the use of high-dimensional dynamical systems, such as random networks of neurons, called "reservoirs." To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires. In this study, we propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step. To elucidate the mechanism of model-size reduction, the proposed methods are analyzed based on the information processing capacity proposed by Dambre et al. (Sci Rep 2:514, 2012). In addition, we evaluate the effectiveness of the proposed methods on time-series prediction tasks: the generalized Hénon map and NARMA. On these tasks, we found that the proposed methods were able to reduce the size of the reservoir up to one tenth without a substantial increase in regression error.
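The core idea — feeding a past reservoir state into the linear readout alongside the current one, so a small reservoir supplies as many readout features as a larger one — can be sketched with a minimal echo-state network. This is a hedged illustration, not the authors' implementation: the reservoir size, spectral radius, single fixed delay, and the memory-recall target u(t-3) are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, delay = 20, 5                  # small reservoir; lag fed to the readout

W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, N)

def collect_states(u):
    """Drive the reservoir with input sequence u and return all states x(t)."""
    x = np.zeros(N)
    states = []
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)
        states.append(x.copy())
    return np.array(states)

u = rng.uniform(0.0, 0.5, 300)
S = collect_states(u)

# Readout feature at time t is [x(t); x(t - delay)]: the "past state"
# concatenation doubles the feature count without enlarging the reservoir.
X = np.hstack([S[delay:], S[:-delay]])
y = u[delay - 3 : len(u) - 3]     # memory-recall target u(t - 3), illustrative

# Ridge-regression readout, trained on the concatenated features
beta = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
mse = np.mean((X @ beta - y) ** 2)
```

Only the readout weights are trained; the reservoir and input weights stay fixed, which is what makes RC training fast relative to recurrent backpropagation.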


2018 ◽  
Vol 4 (2) ◽  
pp. 563-565
Author(s):  
Rachita Sharma ◽  
Sanjay Kumar Dubey

This paper introduces supervised and unsupervised techniques, with a comparison of the SOFM (Self-Organizing Feature Map) as applied to satellite imagery. We explain how spatial and temporal change detection is used for forecasting from satellite imagery. Forecasting is based on a time series of images using an artificial neural network. Neural networks have recently gained a lot of interest in time-series prediction due to their ability to learn nonlinear dependencies effectively from large volumes of possibly noisy data. Unsupervised neural networks reveal useful information from a temporal sequence and have demonstrated power in cluster analysis and dimensionality reduction. In unsupervised learning, no pre-classification or pre-labeling of the input data is needed. The SOFM is one of the unsupervised neural networks used for time-series prediction. In time-series prediction, the goal is to construct a model that can predict the future of the measured process of interest. Various approaches to time-series prediction have been used over the years. It is a research area with applications in diverse fields such as weather forecasting, speech recognition, and remote sensing. Advances in remote-sensing technology and the availability of high-resolution images in recent years have motivated many researchers to study patterns in the images for the purpose of trend analysis.
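A SOFM for time-series work is typically trained on sliding windows of the series, so that similar temporal patterns map to nearby units. The following minimal 1-D SOFM is a hedged sketch of that idea, not the paper's setup: the map size, learning-rate and neighborhood schedules, and the sine-wave example data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=10, epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal 1-D self-organizing feature map: find the best-matching
    unit (BMU) for each input and pull it and its neighbors toward the input,
    with learning rate and neighborhood width decaying over training."""
    w = rng.uniform(data.min(), data.max(), (n_units, data.shape[1]))
    T = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            lr = lr0 * (1 - t / T)
            sigma = sigma0 * (1 - t / T) + 0.5
            bmu = np.argmin(np.sum((w - x) ** 2, axis=1))
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)   # neighborhood-weighted update
            t += 1
    return w

# Sliding windows of a series as training vectors; each codebook row
# becomes a prototype temporal pattern (illustrative data).
series = np.sin(np.linspace(0, 20, 400))
windows = np.array([series[i:i + 5] for i in range(len(series) - 5)])
codebook = train_som(windows)
```

For prediction, a common scheme is to associate each codebook unit with the value that typically follows its prototype window, then forecast by matching the latest window to its BMU.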

