Neural networks of combination of forecasts for data with long memory pattern

1998 · Vol 35 (3-4) · pp. 551-554
Author(s): Masood A. Badri, Ahmed Al-Mutawa, Amr Murtagy

2005 · Vol 342 (1-2) · pp. 114-128
Author(s): Zhigang Zeng, De-Shuang Huang, Zengfu Wang

2009 · Vol 19 (03) · pp. 843-856
Author(s): Yunquan Ke, Chun-Fang Miao

In this paper, memory patterns of bidirectional associative memory (BAM) neural networks with time delay are investigated on the basis of stability theory. Several sufficient conditions are obtained under which the equilibrium point is locally exponentially stable when it is located at a designated position. These conditions, which are derived directly from the synaptic connection weights and the external inputs of the BAM network, are easy to verify. In addition, three examples are given to show the effectiveness of the results.
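
As context for the model class analyzed above, here is a minimal Python sketch of discrete-time BAM recall dynamics with hypothetical weights and external inputs; the paper's delayed continuous-time model and its exponential-stability conditions are not reproduced here.

```python
import numpy as np

# Minimal sketch of discrete-time BAM recall (hypothetical weights and
# inputs; NOT the paper's delayed continuous-time stability analysis).
rng = np.random.default_rng(0)

n_x, n_y = 4, 3
W = rng.normal(size=(n_y, n_x))      # synaptic weights, X layer -> Y layer
I_x = rng.normal(size=n_x)           # external input to the X layer
I_y = rng.normal(size=n_y)           # external input to the Y layer

def f(u):
    return np.tanh(u)                # common sigmoid-type activation

x = rng.normal(size=n_x)             # initial X-layer state
y = rng.normal(size=n_y)             # initial Y-layer state

for t in range(200):
    x_new = f(W.T @ y + I_x)         # Y layer feeds back to X
    y_new = f(W @ x + I_y)           # X layer feeds forward to Y
    if max(np.max(np.abs(x_new - x)), np.max(np.abs(y_new - y))) < 1e-9:
        break                        # settled on an equilibrium (stored memory)
    x, y = x_new, y_new

print("equilibrium x:", x)
print("equilibrium y:", y)
```

When an equilibrium is exponentially stable, trajectories started nearby converge to it; the paper's conditions identify, from W and the external inputs alone, when this happens at a designated position.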


2021
Author(s): Luis Alberiko Gil-Alana

In this paper we investigate the time trend coefficients of snowpack percentages by watershed in Colorado, US, allowing for the possibility of long-range dependence, or long memory, processes. Nine series are examined, corresponding to the following watersheds: Arkansas, Colorado, Gunnison, North Platte, Rio Grande, South Platte, San Juan-Animas-Dolores-San Miguel, Yampa & White, and Colorado Statewide, based on annual data covering roughly the last eighty years. The longest series starts in 1937, and all end in 2019. The results indicate that most of the series display a significant decline over time, with negative time trend coefficients, thus supporting the hypothesis of climate change and global warming. Nevertheless, there is no evidence of a long memory pattern in the data.
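
The kind of analysis described above can be illustrated with a generic check (not the author's methodology, which relies on fractional-integration techniques): a minimal Python sketch that fits an OLS time trend to a synthetic annual series and then applies the standard GPH log-periodogram estimator of the memory parameter d to the residuals.

```python
import numpy as np

# Sketch of a trend-plus-long-memory check on an annual series.
# Data are synthetic; the GPH estimator stands in for the paper's
# fractional-integration tests.
rng = np.random.default_rng(1)
years = np.arange(1937, 2020)
series = 80.0 - 0.15 * (years - years[0]) + rng.normal(0, 5, len(years))

# 1) OLS time trend: series_t = a + b*t + e_t
t = years - years[0]
X = np.column_stack([np.ones_like(t), t])
a, b = np.linalg.lstsq(X, series, rcond=None)[0]
resid = series - X @ [a, b]

# 2) GPH log-periodogram regression on the lowest Fourier frequencies:
#    log I(lambda_j) = c - d * log(4 sin^2(lambda_j / 2)) + error
n = len(resid)
freqs = 2 * np.pi * np.arange(1, int(n ** 0.5) + 1) / n
I = np.abs(np.fft.fft(resid)[1:len(freqs) + 1]) ** 2 / (2 * np.pi * n)
reg = np.column_stack([np.ones_like(freqs), -np.log(4 * np.sin(freqs / 2) ** 2)])
_, d_hat = np.linalg.lstsq(reg, np.log(I), rcond=None)[0]

print(f"trend slope b = {b:.3f}  (negative => decline over time)")
print(f"GPH memory estimate d = {d_hat:.3f}  (near 0 => no long memory)")
```

A significantly negative slope b corresponds to the declining snowpack the paper reports, while an estimate of d close to zero is consistent with its finding of no long memory.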


2019 · Vol 45 (3) · pp. 481-513
Author(s): Shuntaro Takahashi, Kumiko Tanaka-Ishii

In this article, we evaluate computational models of natural language with respect to the universal statistical behaviors of natural language. Statistical mechanical analyses have revealed that natural language text is characterized by scaling properties, which quantify the global structure of the vocabulary population and the long memory of a text. We study whether five scaling properties (given by Zipf's law, Heaps' law, Ebeling's method, Taylor's law, and long-range correlation analysis) can serve as evaluation metrics for computational models. Specifically, we test n-gram language models, a probabilistic context-free grammar, language models based on Simon/Pitman-Yor processes, neural language models, and generative adversarial networks for text generation. Our analysis reveals that language models based on recurrent neural networks with a gating mechanism (i.e., long short-term memory, gated recurrent units, and quasi-recurrent neural networks) are the only computational models that can reproduce the long memory behavior of natural language. Furthermore, through comparison with recently proposed model-based evaluation methods, we find that the exponent of Taylor's law is a good indicator of model quality.
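
As a rough illustration of two of the scaling analyses named above, here is a minimal Python sketch that estimates the Zipf and Taylor exponents of a tokenized text by log-log regression; the corpus file name is hypothetical, and the paper's exact estimation procedures may differ.

```python
import numpy as np
from collections import Counter

def zipf_exponent(tokens):
    # Zipf's law: frequency ~ rank^(-alpha); fit alpha by log-log OLS.
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), float)
    ranks = np.arange(1, len(freqs) + 1)
    X = np.column_stack([np.ones_like(ranks, float), np.log(ranks)])
    _, slope = np.linalg.lstsq(X, np.log(freqs), rcond=None)[0]
    return -slope

def taylor_exponent(tokens, window=100):
    # Taylor's law: across fixed-size windows, var(count) ~ mean(count)^alpha.
    n_win = len(tokens) // window
    words = sorted(set(tokens))
    index = {w: i for i, w in enumerate(words)}
    counts = np.zeros((n_win, len(words)))
    for k in range(n_win):
        for w in tokens[k * window:(k + 1) * window]:
            counts[k, index[w]] += 1
    mean, var = counts.mean(axis=0), counts.var(axis=0)
    keep = (mean > 0) & (var > 0)            # avoid log(0)
    X = np.column_stack([np.ones(keep.sum()), np.log(mean[keep])])
    _, alpha = np.linalg.lstsq(X, np.log(var[keep]), rcond=None)[0]
    return alpha

tokens = open("corpus.txt").read().split()   # hypothetical corpus file
print("Zipf alpha  :", zipf_exponent(tokens))
print("Taylor alpha:", taylor_exponent(tokens))
```

Applied to text generated by a model and to the original corpus, a mismatch in such exponents signals that the model fails to reproduce the corresponding statistical behavior.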


2019 · Vol 41 (1) · pp. 41452
Author(s): Aline Castello Branco Mancuso, Liane Werner

Over the years, several studies comparing individual forecasts with combinations of forecasts have been published, yet there is no unanimity in their conclusions. Furthermore, combination methods based on regression remain poorly explored. This paper presents a comparative study of three combination methods and their constituent individual forecasts. Using simulated data, we evaluate the accuracy of artificial neural network, ARIMA, and exponential smoothing models, and compute the combined forecasts through the simple average, minimum variance, and regression methods. Four accuracy measures, MAE, MAPE, RMSE, and Theil's U, were used to choose the most accurate method. The main finding is the superior accuracy of combination by regression methods.
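
Below is a minimal numpy sketch of the three combination schemes the study compares, in their standard textbook forms (simple average, Bates-Granger minimum-variance weights, Granger-Ramanathan regression); the paper's exact specifications may differ, and the data here are synthetic.

```python
import numpy as np

def combine_forecasts(F, y):
    """F: (n_obs, k) matrix of k individual forecasts; y: (n_obs,) actuals.
    Returns in-sample combined forecasts for three standard schemes."""
    n, k = F.shape
    e = F - y[:, None]                       # forecast errors

    # 1) Simple average: equal weights for all models.
    avg = F.mean(axis=1)

    # 2) Minimum variance (Bates-Granger style): weights minimize the
    #    combined error variance subject to summing to one.
    S = np.cov(e, rowvar=False)              # error covariance matrix
    w = np.linalg.solve(S, np.ones(k))
    w /= w.sum()
    minvar = F @ w

    # 3) Regression (Granger-Ramanathan): OLS of actuals on the forecasts
    #    with an intercept, weights unconstrained.
    X = np.column_stack([np.ones(n), F])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    reg = X @ beta

    return avg, minvar, reg

def mae(y, f):
    return np.mean(np.abs(y - f))

# Hypothetical example: three noisy forecasts of the same series.
rng = np.random.default_rng(2)
y = np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.1, 200)
F = np.column_stack([y + rng.normal(0, s, 200) for s in (0.2, 0.3, 0.5)])
for name, f in zip(("average", "min-variance", "regression"),
                   combine_forecasts(F, y)):
    print(f"{name:>12s} MAE: {mae(y, f):.4f}")
```

The regression scheme is the most flexible of the three, since its weights need not sum to one and the intercept absorbs any common bias, which is consistent with the accuracy advantage the study reports for it.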

