Learning Patterns of States in Time Series by Genetic Programming

2016 ◽  
Vol 20 (10) ◽  
pp. 3915-3925 ◽  
Author(s):  
Feng Xie ◽  
Andy Song ◽  
Vic Ciesielski

1991 ◽  
Vol 3 (2) ◽  
pp. 213-225 ◽  
Author(s):  
John Platt

We have created a network that allocates a new computational unit whenever an unusual pattern is presented to the network. This network forms compact representations, yet learns easily and rapidly. The network can be used at any time in the learning process and the learning patterns do not have to be repeated. The units in this network respond to only a local region of the space of input values. The network learns by allocating new units and adjusting the parameters of existing units. If the network performs poorly on a presented pattern, then a new unit is allocated that corrects the response to the presented pattern. If the network performs well on a presented pattern, then the network parameters are updated using standard LMS gradient descent. We have obtained good results with our resource-allocating network (RAN). For predicting the Mackey-Glass chaotic time series, the RAN learns much faster than networks trained with backpropagation and uses a comparable number of synapses.
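The allocation rule described above can be sketched in a few lines of Python. Gaussian units stand in for the paper's locally responsive units; the thresholds, learning rate, width heuristic, and toy series below are illustrative assumptions, not Platt's settings.

```python
import numpy as np

class RAN:
    """Minimal sketch of a resource-allocating network with Gaussian units."""

    def __init__(self, eps=0.05, delta=0.5, width_scale=0.9, lr=0.02):
        self.eps = eps                   # error threshold for allocation
        self.delta = delta               # novelty (distance) threshold
        self.width_scale = width_scale   # width factor for new units
        self.lr = lr                     # LMS learning rate
        self.centers, self.widths, self.weights = [], [], []
        self.bias = 0.0

    def _phi(self, x):
        # each unit responds only to a local region around its center
        return np.array([np.exp(-np.sum((x - c) ** 2) / (w ** 2))
                         for c, w in zip(self.centers, self.widths)])

    def predict(self, x):
        if not self.centers:
            return self.bias
        return float(np.dot(self.weights, self._phi(x)) + self.bias)

    def fit_one(self, x, y):
        err = y - self.predict(x)
        dist = (min(np.linalg.norm(x - c) for c in self.centers)
                if self.centers else np.inf)
        if abs(err) > self.eps and dist > self.delta:
            # unusual pattern: allocate a unit that corrects the response
            self.centers.append(x.copy())
            self.widths.append(self.width_scale * dist if np.isfinite(dist) else 1.0)
            self.weights.append(err)
        else:
            # familiar pattern: standard LMS gradient descent on the parameters
            phi = self._phi(x)
            self.weights = list(np.asarray(self.weights) + self.lr * err * phi)
            self.bias += self.lr * err

# usage: one-step-ahead prediction with a single-lag embedding
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
net = RAN()
for t in range(1, len(series)):
    net.fit_one(series[t - 1 : t].copy(), series[t])
print(len(net.centers), "units allocated")
```

The contrast with backpropagation is that learning here is constructive: capacity is added only when both the error and the novelty of the input exceed their thresholds.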


2002 ◽  
Vol 15 (2) ◽  
pp. 265-279 ◽  
Author(s):  
Witthaya Panyaworayan ◽  
Georg Wuetschner

In this paper we present a time-series prediction process that combines genetic programming with constant optimization. Genetic programming evolves the structure of the prediction function, while constant optimization determines its numerical parameters. The prediction process is applied recursively: in each recursion step a sub-prediction function is evolved, and at the end of the iteration all sub-prediction functions together form the final prediction function. The article also describes how over-fitting, a major problem in prediction, is avoided.
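The structure/constant split the authors describe can be sketched in Python. Here a small pool of fixed candidate structures stands in for GP-evolved expressions, scipy's curve_fit plays the constant-optimization role, and the recursion fits each sub-prediction function to the residual left by the ones before it; all of this is an illustrative assumption, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# candidate structures (a real system would evolve these with GP)
CANDIDATES = [
    lambda t, a, b, c: a * np.sin(b * t) + c,
    lambda t, a, b, c: a * np.cos(b * t) + c,
    lambda t, a, b, c: a * t + b * t ** 2 + c,
]

def fit_sub_function(t, residual):
    """Pick the candidate whose optimized constants fit the residual best."""
    best = None
    for f in CANDIDATES:
        try:
            params, _ = curve_fit(f, t, residual, p0=[1.0, 1.0, 0.0], maxfev=5000)
        except RuntimeError:
            continue  # constant optimization failed to converge
        err = np.mean((residual - f(t, *params)) ** 2)
        if best is None or err < best[0]:
            best = (err, f, params)
    return best[1], best[2]

def recursive_predict(t, y, steps=3):
    """Evolve `steps` sub-prediction functions; their sum is the final model."""
    subs, residual = [], y.copy()
    for _ in range(steps):
        f, params = fit_sub_function(t, residual)
        subs.append((f, params))
        residual = residual - f(t, *params)
    return lambda tq: sum(f(tq, *p) for f, p in subs)

t = np.linspace(0, 10, 200)
y = 2.0 * np.sin(1.3 * t) + 0.1 * t ** 2
model = recursive_predict(t, y)
print(np.round(model(t[:5]) - y[:5], 3))  # in-sample residual at first points
```

Capping the number of recursion steps, or validating each new sub-function on held-out data, is the natural guard against the over-fitting problem the authors mention.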


Author(s):  
Daniel Rivero ◽  
Miguel Varela ◽  
Javier Pereira

This chapter describes a technique for extracting the knowledge held by previously trained artificial neural networks, expressed as rules. This makes the networks usable in areas such as medicine, where it is necessary to know how they work, not merely to have a network that works. The chapter explains how to carry out the extraction process, with special emphasis on recurrent neural networks, in particular when they are applied to time-series prediction.
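The chapter's own extraction method is specific, so the Python sketch below only illustrates the general idea: train a network, query it, and fit an interpretable surrogate whose structure can be read off as rules. The decision-tree surrogate, the toy target, and all settings are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # toy target

# the "previously trained" network whose knowledge we want as rules
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# query the trained network on fresh inputs and treat its outputs as
# labels for an interpretable surrogate model
Xq = rng.uniform(-1, 1, size=(2000, 2))
tree = DecisionTreeClassifier(max_depth=3).fit(Xq, net.predict(Xq))
print(export_text(tree, feature_names=["x1", "x2"]))  # the extracted rules
```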


2018 ◽  
Vol 49 (6) ◽  
pp. 1880-1889 ◽  
Author(s):  
Mani Kumar ◽  
Rajeev Ranjan Sahay

In this study we develop a conjunction model, WGP, of the discrete wavelet transform (DWT) and genetic programming (GP) for forecasting river floods when the only data available are the historical daily flows. The DWT is used to denoise and smooth the observed flow time series, on which GP is then applied to forecast the next-day flood. The new model is compared with autoregressive (AR) and stand-alone GP models. All models are calibrated and tested on the Kosi River, one of the most devastating rivers in the world, whose high and spiky monsoon flows pose a great modeling challenge. With different inputs, 12 models are devised, four in each of the WGP, GP and AR classes. The best performing WGP model, WGP4, with the four previous daily flow rates as input, forecasts the Kosi floods with an accuracy of 87.9%, a root mean square error of 123.9 m³/s and a Nash–Sutcliffe coefficient of 0.993, the best performance indices among all the developed models. The extreme floods are also simulated better by the WGP models than by the AR and GP models.
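A runnable sketch of the two-stage WGP idea follows, assuming PyWavelets (pywt) for the wavelet denoising and gplearn's SymbolicRegressor as the GP stage. The synthetic "flow" series, the db4 wavelet, the universal threshold, and the GP settings are illustrative assumptions, not the paper's configuration; only the four-lag input mirrors WGP4.

```python
import numpy as np
import pywt
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(2)
t = np.arange(1500)
flow = (500 + 400 * np.maximum(np.sin(2 * np.pi * t / 365), 0) ** 3
        + 50 * rng.standard_normal(t.size))        # spiky "monsoon" stand-in

# 1) DWT denoising: soft-threshold the detail coefficients, reconstruct
coeffs = pywt.wavedec(flow, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale estimate
thr = sigma * np.sqrt(2 * np.log(flow.size))       # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")[: flow.size]

# 2) GP on the smoothed series: four previous daily flows -> next-day flow
lags = 4
X = np.column_stack([smooth[i : -lags + i] for i in range(lags)])
y = smooth[lags:]
gp = SymbolicRegressor(population_size=500, generations=10, random_state=0)
gp.fit(X[:-200], y[:-200])
rmse = np.sqrt(np.mean((gp.predict(X[-200:]) - y[-200:]) ** 2))
print("held-out RMSE:", rmse)
```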

