Time Series Complexities and Their Relationship to Forecasting Performance

Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 89 ◽  
Author(s):  
Mirna Ponce-Flores ◽  
Juan Frausto-Solís ◽  
Guillermo Santamaría-Bonfil ◽  
Joaquín Pérez-Ortega ◽  
Juan J. González-Barbosa

Entropy is a key concept in the characterization of the uncertainty of any given signal, and its extensions, such as Spectral Entropy and Permutation Entropy, have been used to measure the complexity of time series. However, these measures are sensitive to the discretization employed to study the states of the system, and their relationship to the expected forecasting performance is not obvious; identifying that relationship would allow deciding, in advance, which algorithm is adequate for a given series. Therefore, in this paper, we establish the relationship between an entropy-based complexity framework and the forecasting error of four methods selected from the M4 Competition (Smyl, Theta, ARIMA, and ETS). Moreover, we present an extension of the framework based on the Emergence, Self-Organization, and Complexity paradigm. Experiments with both synthetic and M4 Competition time series show that the feature space induced by the complexity measures visually constrains each forecasting method's performance to specific regions: where the logarithm of its error metric is poorest, the Complexity derived from Emergence and Self-Organization is maximal.
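For illustration, the sketch below computes one common formulation of the Emergence, Self-Organization, and Complexity measures from a normalized Shannon entropy (E = H/H_max, S = 1 - E, C = 4ES); the equal-width binning and the specific formulation are assumptions for this example rather than the exact procedure used in the paper.

```python
import numpy as np

def esc_measures(series, n_bins=10):
    """Emergence, Self-Organization and Complexity of a time series.

    Illustrative sketch of one common ESC formulation
    (E = H / H_max, S = 1 - E, C = 4 * E * S); the paper's exact
    discretization and definitions may differ.
    """
    # Discretize the signal into equal-width bins; the choice of
    # discretization is exactly what these measures are sensitive to.
    counts, _ = np.histogram(series, bins=n_bins)
    p = counts[counts > 0] / counts.sum()

    emergence = -np.sum(p * np.log2(p)) / np.log2(n_bins)  # H / H_max
    self_org = 1.0 - emergence
    complexity = 4.0 * emergence * self_org                # maximal at E = 0.5
    return emergence, self_org, complexity

# Example: white noise has high Emergence; a constant signal has
# maximal Self-Organization under this discretization.
rng = np.random.default_rng(0)
print(esc_measures(rng.normal(size=1000)))
```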


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Cem Kocak

Fuzzy time series approaches have an important deficiency compared with classical time series approaches: all of the fuzzy time series models developed in the literature use only autoregressive (AR) variables, with a single study (Egrioglu et al. (2013)) also making use of moving average (MA) variables. Eliminating this deficiency matters because many daily-life time series need to be expressed with Autoregressive Moving Average (ARMA) models, which are based not only on the lagged values of the time series (AR variables) but also on the lagged values of the error series (MA variables). To that end, a new first-order fuzzy ARMA(1,1) time series forecasting algorithm based on fuzzy logic group relation tables has been developed. The proposed method has been compared against several methods from the literature in terms of forecasting performance by applying them to the Istanbul Stock Exchange National 100 Index (IMKB) and Gold Prices time series.
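As context for the group-relation-table idea, the following minimal sketch implements only the first-order AR part of a fuzzy time series forecast (a Chen-style scheme with equal-width intervals); the fuzzification of the lagged error series that gives the proposed method its MA component is omitted, and all interval counts and names are illustrative.

```python
import numpy as np

def chen_fts_forecast(series, n_intervals=7):
    """Minimal first-order fuzzy time series forecast (Chen-style).

    Sketch only: it fuzzifies the series into equal-width intervals and
    builds a group relation table A_i -> {A_j}.  The fuzzy ARMA(1,1)
    method additionally fuzzifies the lagged error series (the MA part),
    which is not shown here.
    """
    series = np.asarray(series, dtype=float)
    edges = np.linspace(series.min(), series.max(), n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2

    # Fuzzify: index of the interval (fuzzy set) each observation belongs to.
    labels = np.clip(np.searchsorted(edges, series, side="right") - 1,
                     0, n_intervals - 1)

    # Fuzzy logical relationship groups: A_i -> set of successor sets A_j.
    groups = {}
    for i, j in zip(labels[:-1], labels[1:]):
        groups.setdefault(i, set()).add(j)

    # One-step-ahead forecast from the last observed fuzzy set,
    # defuzzified as the mean midpoint of its successor sets.
    successors = groups.get(labels[-1], {labels[-1]})
    return mids[list(successors)].mean()

print(chen_fts_forecast([10, 12, 15, 14, 18, 20, 19, 22, 24, 23]))
```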



2018 ◽  
Author(s):  
Frank Pennekamp ◽  
Alison C. Iles ◽  
Joshua Garland ◽  
Georgina Brennan ◽  
Ulrich Brose ◽  
...  

Abstract Successfully predicting the future states of systems that are complex, stochastic and potentially chaotic is a major challenge. Model forecasting error (FE) is the usual measure of success; however, model predictions provide no insights into the potential for improvement. In short, the realized predictability of a specific model is uninformative about whether the system is inherently predictable or whether the chosen model is a poor match for the system and our observations thereof. Ideally, model proficiency would be judged with respect to the systems' intrinsic predictability – the highest achievable predictability given the degree to which system dynamics are the result of deterministic vs. stochastic processes. Intrinsic predictability may be quantified with permutation entropy (PE), a model-free, information-theoretic measure of the complexity of a time series. By means of simulations we show that a correlation exists between estimated PE and FE, and show how stochasticity, process error, and chaotic dynamics affect the relationship. This relationship is verified for a dataset of 461 empirical ecological time series. We show how deviations from the expected PE-FE relationship are related to covariates of data quality and the nonlinearity of ecological dynamics. These results demonstrate a theoretically grounded basis for a model-free evaluation of a system's intrinsic predictability. Identifying the gap between the intrinsic and realized predictability of time series will enable researchers to understand whether forecasting proficiency is limited by the quality and quantity of their data or by the ability of the chosen forecasting model to explain the data. Intrinsic predictability also provides a model-free baseline of forecasting proficiency against which modeling efforts can be evaluated.

Glossary
Active information: The amount of information that is available to forecasting models (redundant information minus lost information; Fig. 1).
Forecasting error (FE): A measure of the discrepancy between a model's forecasts and the observed dynamics of a system. Common measures of forecast error are root mean squared error and mean absolute error.
Entropy: Measures the average amount of information in the outcome of a stochastic process.
Information: Any entity that provides answers and resolves uncertainty about a process. When information is calculated using logarithms to the base two (i.e. information in bits), it is the minimum number of yes/no questions required, on average, to determine the identity of the symbol (Jost 2006). The information in an observation consists of information inherited from the past (redundant information) and of new information.
Intrinsic predictability: The maximum achievable predictability of a system (Beckage et al. 2011).
Lost information: The part of the redundant information lost due to measurement or sampling error, or transformations of the data (Fig. 1).
New information, Shannon entropy rate: The Shannon entropy rate quantifies the average amount of information per observation in a time series that is unrelated to the past, i.e., the new information (Fig. 1).
Nonlinearity: When the deterministic processes governing system dynamics depend on the state of the system.
Permutation entropy (PE): A measure of the complexity of a time series (Bandt & Pompe, 2002) that is negatively correlated with a system's predictability (Garland et al. 2015). Permutation entropy quantifies the combined new and lost information. PE is scaled to range between a minimum of 0 and a maximum of 1.
Realized predictability: The achieved predictability of a system from a given forecasting model.
Redundant information: The information inherited from the past, and thus the maximum amount of information available for use in forecasting (Fig. 1).
Symbols, words, permutations: Symbols are simply the smallest units in a formal language, such as the letters in the English alphabet, i.e., {"A", "B", …, "Z"}. In information theory the alphabet is more abstract, such as elements in the set {"up", "down"} or {"1", "2", "3"}. Words of length m refer to concatenations of the symbols (e.g., up-down-down) in a set. Permutations are the possible orderings of symbols in a set. In this manuscript, the words are the permutations that arise from the numerical ordering of m data points in a time series.
Weighted permutation entropy (WPE): A modification of permutation entropy (Fadlallah et al., 2013) that distinguishes between small-scale, noise-driven variation and large-scale, system-driven variation by considering the magnitudes of changes in addition to the rank-order patterns of PE.
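A minimal sketch of the permutation entropy described in the glossary (ordinal patterns of length m at delay tau, Shannon entropy normalized by log2(m!)); the parameter defaults are illustrative, and the weighted variant is only noted in a comment.

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Permutation entropy (Bandt & Pompe 2002), normalized to [0, 1].

    Counts the ordinal patterns ("words") of m values taken tau steps
    apart and returns the Shannon entropy of their distribution divided
    by log2(m!).  Weighted PE (Fadlallah et al. 2013) would additionally
    weight each pattern by the variance of its window.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    patterns = Counter(
        tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau])) for i in range(n)
    )
    probs = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(probs * np.log2(probs)) / math.log2(math.factorial(m)))

# A monotone series has a single ordinal pattern (PE = 0); white noise
# uses all m! patterns roughly equally (PE close to 1).
print(permutation_entropy(np.arange(100)))
print(permutation_entropy(np.random.default_rng(1).normal(size=5000)))
```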



2012 ◽  
Vol 57 (1) ◽  
Author(s):  
Maria Elena ◽  
Muhamad Hisyam Lee ◽  
Suhartono H. ◽  
Hossein I. ◽  
Nur Haizum Abd Rahman ◽  
...  

Forecasting is very important in many types of organizations, since predictions of future events must be incorporated into the decision-making process. In the case of tourism demand, better forecasts would help directors and investors make operational, tactical, and strategic decisions. Generally, time series forecasting methods can be divided into classical and modern methods. Although recent studies show that newer and more advanced forecasting techniques tend to yield improved forecast accuracy under certain circumstances, there is no clear-cut evidence that any one model can consistently outperform the others in forecasting competitions [1]. In this study, the forecasting performance of the Box–Jenkins seasonal autoregressive integrated moving average (SARIMA) approach and four fuzzy time series models has been compared using MAPE, MAD, and RMSE as measures of forecast accuracy. The empirical results show that Chen's fuzzy time series model outperforms SARIMA and the other fuzzy time series models.
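For reference, the three accuracy measures used in the comparison can be computed as follows; the standard textbook definitions are assumed here.

```python
import numpy as np

def forecast_accuracy(actual, forecast):
    """MAPE (in percent), MAD, and RMSE of a set of forecasts,
    under their standard definitions."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    error = actual - forecast
    return {
        "MAPE": 100.0 * np.mean(np.abs(error / actual)),
        "MAD":  np.mean(np.abs(error)),
        "RMSE": np.sqrt(np.mean(error ** 2)),
    }

print(forecast_accuracy([100, 110, 120], [98, 112, 117]))
```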



2011 ◽  
Vol 145 ◽  
pp. 143-148
Author(s):  
Hsien Lun Wong ◽  
Chi Chen Wang ◽  
Tsung Yi Shen

Fuzzy time series methods have been applied to social forecasting for over a decade; however, little research has discussed how to select an optimal fuzzy model for a given time series. In this paper, we evaluate the forecasting performance of three multivariate fuzzy models from the literature by comparing their forecasting MSE. The test data, obtained from AEROM, Taiwan, include Taiwan's exports and the foreign exchange rate. The algorithm for computing the models' predictive values has a three-stage procedure: first, calibrating the time series correlation and deciding the window base and interval partition; second, solving the static forecasting value of each model; third, comparing the impact of the dynamic parameter on the forecasting error. The empirical results indicate that increasing the number of predictor variables has no significant effect on the predictive performance of the models, and increasing the interval length does not improve their prediction performance. Moreover, fuzzy models are better suited to short-term time series forecasting, and the Heuristic model has the best forecasting performance among the three fuzzy models. These findings represent a significant contribution to our understanding of the applicability of fuzzy models to prediction.



2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gian Maria Campedelli ◽  
Mihovil Bartulovic ◽  
Kathleen M. Carley

Abstract In the last 20 years, terrorism has led to hundreds of thousands of deaths and massive economic, political, and humanitarian crises in several regions of the world. Using real-world data on attacks that occurred in Afghanistan and Iraq from 2001 to 2018, we propose the use of temporal meta-graphs and deep learning to forecast future terrorist targets. Focusing on three event dimensions, i.e., employed weapons, deployed tactics, and chosen targets, meta-graphs map the connections among temporally close attacks, capturing their operational similarities and dependencies. From these temporal meta-graphs, we derive 2-day-based time series that measure the centrality of each feature within each dimension over time. Formulating the problem in the context of the strategic behavior of terrorist actors, these multivariate temporal sequences are then used to learn which target types are at the highest risk of being chosen. The paper makes two contributions. First, it demonstrates that engineering the feature space via temporal meta-graphs produces richer knowledge than shallow time series that rely only on the frequency of feature occurrences. Second, the performed experiments reveal that bidirectional LSTM networks achieve superior forecasting performance compared to other algorithms, calling for future research aimed at fully discovering the potential of artificial intelligence to counter terrorist violence.
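To make the modelling step concrete, here is a minimal bidirectional LSTM classifier in PyTorch for multivariate centrality sequences; the layer sizes, the single linear head, and the tensor shapes are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class BiLSTMTargetClassifier(nn.Module):
    """Illustrative bidirectional LSTM for multivariate temporal sequences.

    Input: a batch of centrality time series of shape
    (batch, time_steps, n_features); output: one logit per candidate
    target type.  Hyperparameters are placeholders, not the paper's.
    """
    def __init__(self, n_features, n_targets, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_targets)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, 2 * hidden)
        return self.head(out[:, -1])   # logits from the last time step

# Example: 8 sequences of 30 two-day steps with 12 centrality features,
# scored against 20 hypothetical target types.
model = BiLSTMTargetClassifier(n_features=12, n_targets=20)
logits = model(torch.randn(8, 30, 12))
print(logits.shape)  # torch.Size([8, 20])
```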



2019 ◽  
Vol 139 (3) ◽  
pp. 212-224
Author(s):  
Xiaowei Dui ◽  
Masakazu Ito ◽  
Yu Fujimoto ◽  
Yasuhiro Hayashi ◽  
Guiping Zhu ◽  
...  


Open Physics ◽  
2021 ◽  
Vol 19 (1) ◽  
pp. 360-374
Author(s):  
Yuan Pei ◽  
Lei Zhenglin ◽  
Zeng Qinghui ◽  
Wu Yixiao ◽  
Lu Yanli ◽  
...  

Abstract The load of a refrigerated showcase is nonlinear and unstable time series data to which traditional forecasting methods are poorly suited, so deep learning algorithms are introduced to predict it. Based on the CEEMD–IPSO–LSTM combination algorithm, this paper builds a refrigerated display cabinet load forecasting model. Comparison with the forecast results of other models shows that the CEEMD–IPSO–LSTM model has the highest load forecasting accuracy, with a clearly strong coefficient of determination of 0.9105. The model constructed in this paper can therefore predict the load of showcases and provide a reference for energy saving and consumption reduction in display cabinets.
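For reference, the coefficient of determination quoted for the model (0.9105) is computed as follows under its standard definition; the sample values are made up for illustration.

```python
import numpy as np

def r_squared(actual, predicted):
    """Coefficient of determination, R^2 = 1 - SS_res / SS_tot."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r_squared([3.1, 2.8, 3.6, 4.0], [3.0, 2.9, 3.5, 4.2]))
```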



2021 ◽  
Vol 13 (15) ◽  
pp. 8670
Author(s):  
Xiwen Cui ◽  
Shaojun E ◽  
Dongxiao Niu ◽  
Dongyu Wang ◽  
Mingyu Li

In the process of economic development, energy consumption leads to environmental pollution, which in turn affects the sustainable development of the world; energy consumption therefore needs to be controlled. To help China formulate sustainable development policies, this paper proposes an energy consumption forecasting model based on an improved whale optimization algorithm that tunes a linear support vector regression machine. The model combines multiple optimization methods to overcome the shortcomings of traditional models, which effectively improves the forecasting performance. Projections of China's future energy consumption show that current policies are unable to achieve the carbon peak target. This result requires China to develop relevant policies, especially measures related to energy consumption factors, as soon as possible to ensure that it can achieve its carbon peak targets.
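As a rough sketch of the modelling step only, the example below fits a linear support vector regression on lagged consumption features and tunes C and epsilon with a plain grid search as a stand-in for the paper's improved whale optimization algorithm; the synthetic series, lag count, and parameter grid are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Synthetic stand-in for an annual energy-consumption series.
rng = np.random.default_rng(0)
consumption = np.cumsum(rng.normal(1.0, 0.3, size=60))

# Build lagged features: predict c[t] from the previous `lags` values.
lags = 3
X = np.column_stack(
    [consumption[i:len(consumption) - lags + i] for i in range(lags)]
)
y = consumption[lags:]

# Grid search over C and epsilon stands in for the metaheuristic search.
search = GridSearchCV(
    LinearSVR(max_iter=10000),
    param_grid={"C": [0.1, 1.0, 10.0], "epsilon": [0.0, 0.1, 0.5]},
    cv=TimeSeriesSplit(n_splits=4),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```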


