Three Novel Methods to Predict Traffic Time Series in Reconstructed State Spaces

2010 ◽  
Vol 1 (1) ◽  
pp. 16-35 ◽  
Author(s):  
Lawrence W. Lan ◽  
Feng-Yu Lin ◽  
April Y. Kuo

This article proposes three novel methods, namely temporal confined (TC), spatiotemporal confined (STC) and spatial confined (SC), to forecast the temporal evolution of traffic parameters. The fundamental rationale is to embed one-dimensional traffic time series into reconstructed state spaces and then perform fuzzy reasoning to infer the future changes in the traffic series. The TC, STC and SC methods employ different fuzzy reasoning logics to select similar historical traffic trajectories. The Theil inequality coefficient and its decomposed components are used to evaluate the predicting power and the sources of error. Field-observed one-minute traffic counts are used to test the predicting power. The results show that overall prediction accuracies for the three methods are satisfactorily high, with small systematic errors and little deviation from the observed data. This suggests that the proposed methods can capture and forecast the short-term (e.g., one-minute) temporal evolution of traffic parameters.
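
Two of the computational ingredients named above, state-space reconstruction of a scalar series and the Theil inequality coefficient with its decomposition, can be sketched briefly. The Python snippet below uses the standard textbook definitions and is not the authors' code; the fuzzy-reasoning step that selects similar historical trajectories is omitted.

```python
# Illustrative sketch: delay-embed a 1-D traffic series into a reconstructed
# state space, and score a forecast with the Theil inequality coefficient and
# its bias/variance/covariance decomposition (standard definitions, assumed).
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Embed a 1-D series into `dim`-dimensional delay vectors with lag `tau`."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def theil_decomposition(pred, obs):
    """Theil U plus its bias (UM), variance (US) and covariance (UC) shares."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mse = np.mean((pred - obs) ** 2)
    u = np.sqrt(mse) / (np.sqrt(np.mean(pred ** 2)) + np.sqrt(np.mean(obs ** 2)))
    sp, so = pred.std(), obs.std()
    r = np.corrcoef(pred, obs)[0, 1]
    um = (pred.mean() - obs.mean()) ** 2 / mse      # systematic (bias) error
    us = (sp - so) ** 2 / mse                       # variance error
    uc = 2.0 * (1.0 - r) * sp * so / mse            # unsystematic error
    return u, um, us, uc                            # um + us + uc == 1

# Example with a synthetic one-minute count series (illustrative values only).
counts = np.sin(np.linspace(0, 20, 300)) * 10 + 30
states = delay_embed(counts, dim=3, tau=2)          # reconstructed state space
print(states.shape)
print(theil_decomposition(counts[1:], counts[:-1])) # naive one-step forecast
```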


2021 ◽  
Vol 7 ◽  
pp. 58-64
Author(s):  
Xifeng Guo ◽  
Ye Gao ◽  
Yupeng Li ◽  
Di Zheng ◽  
Dan Shan

Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1151
Author(s):  
Carolina Gijón ◽  
Matías Toril ◽  
Salvador Luna-Ramírez ◽  
María Luisa Marí-Altozano ◽  
José María Ruiz-Avilés

Network dimensioning is a critical task in current mobile networks, as any failure in this process leads to degraded user experience or unnecessary upgrades of network resources. For this purpose, radio planning tools often predict monthly busy-hour data traffic to detect capacity bottlenecks in advance. Supervised Learning (SL) arises as a promising solution to improve the predictions obtained with legacy approaches. Previous works have shown that deep learning outperforms classical time series analysis when predicting data traffic in cellular networks in the short term (seconds/minutes) and medium term (hours/days) from long historical data series. However, long-term forecasting (a horizon of several months) performed in radio planning tools relies on short and noisy time series, and thus requires a separate analysis. In this work, we present the first study comparing SL and time series analysis approaches to predict monthly busy-hour data traffic on a per-cell basis in a live LTE network. To this end, an extensive dataset is collected, comprising data traffic per cell for a whole country over 30 months. The considered methods include Random Forest, different Neural Networks, Support Vector Regression, Seasonal Autoregressive Integrated Moving Average and Additive Holt–Winters. Results show that SL models outperform time series approaches while reducing data storage requirements. More importantly, unlike in short-term and medium-term traffic forecasting, non-deep SL approaches are competitive with deep learning while being more computationally efficient.
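
For readers unfamiliar with the two model families being compared, the following Python sketch contrasts a non-deep SL model (Random Forest on lagged monthly values) with a classical seasonal baseline (additive Holt–Winters). The data are synthetic placeholders, not the operator dataset described in the abstract, and the lag-based feature construction is only one plausible choice.

```python
# Hedged sketch: supervised learning vs. classical seasonal forecasting on a
# short monthly busy-hour traffic series for one cell (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
months = 30
traffic = (100 + 2 * np.arange(months)
           + 10 * np.sin(2 * np.pi * np.arange(months) / 12)
           + rng.normal(0, 3, months))          # one cell, 30 monthly samples

train, test = traffic[:24], traffic[24:]

# Supervised learning: predict next month from the previous `lags` months.
lags = 6
X = np.array([train[i - lags:i] for i in range(lags, len(train))])
y = train[lags:]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

rf_preds, history = [], list(train)
for _ in range(len(test)):                      # recursive multi-step forecast
    x_next = np.array(history[-lags:]).reshape(1, -1)
    yhat = rf.predict(x_next)[0]
    rf_preds.append(yhat)
    history.append(yhat)

# Classical baseline: additive Holt-Winters with yearly seasonality.
hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
hw_preds = hw.forecast(len(test))

mae = lambda p: np.mean(np.abs(np.asarray(p) - test))
print(f"Random Forest MAE: {mae(rf_preds):.2f}  Holt-Winters MAE: {mae(hw_preds):.2f}")
```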


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Els Weinans ◽  
Rick Quax ◽  
Egbert H. van Nes ◽  
Ingrid A. van de Leemput

Abstract: Various complex systems, such as the climate, ecosystems, and physical and mental health, can show large shifts in response to small changes in their environment. These ‘tipping points’ are notoriously hard to predict from trends. However, in the past 20 years several indicators pointing to a loss of resilience have been developed. These indicators use fluctuations in time series to detect the critical slowing down that precedes a tipping point. Most of the existing indicators are based on models of one-dimensional systems. However, complex systems generally consist of multiple interacting entities. Moreover, because of technological developments and wearables, multivariate time series are becoming increasingly available in different fields of science. To apply the framework of resilience indicators to multivariate time series, various extensions have been proposed. Not all multivariate indicators have been tested on the same types of systems, so a systematic comparison between the methods is lacking. Here, we evaluate the performance of the different multivariate indicators of resilience loss in different scenarios. We show that no single method outperforms the others; which method is best to use depends on the type of scenario the system is subject to. We propose a set of guidelines to help future users choose which multivariate indicator of resilience is best suited to their particular system.
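
As an illustration of the indicator framework discussed above, the Python sketch below computes rolling variance and lag-1 autocorrelation for a single series, plus one simple multivariate extension (the variance share of the leading principal component in each window). It follows standard definitions of critical-slowing-down indicators and is not the authors' implementation; the window length and the test system are hypothetical choices.

```python
# Sliding-window resilience indicators (illustrative sketch, assumptions mine).
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D window."""
    x = np.asarray(x, float)
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def univariate_indicators(series, window=100):
    """Rolling variance and lag-1 autocorrelation; both tend to rise before a tipping point."""
    var, ac1 = [], []
    for t in range(window, len(series) + 1):
        w = series[t - window:t]
        var.append(np.var(w))
        ac1.append(lag1_autocorr(w))
    return np.array(var), np.array(ac1)

def pca_indicator(data, window=100):
    """Variance share of the leading eigenvector per window
    (one of several proposed multivariate indicators)."""
    out = []
    for t in range(window, len(data) + 1):
        w = data[t - window:t]                  # shape (window, n_variables)
        eigvals = np.linalg.eigvalsh(np.cov(w, rowvar=False))
        out.append(eigvals[-1] / eigvals.sum())
    return np.array(out)

# Synthetic two-variable system drifting toward reduced resilience.
rng = np.random.default_rng(1)
n = 1000
x = np.zeros((n, 2))
for t in range(1, n):
    damping = 0.99 * (t / n)                    # recovery from perturbations slows over time
    x[t] = damping * x[t - 1] + rng.normal(0, 0.1, 2)

print(univariate_indicators(x[:, 0])[1][::200])  # rising autocorrelation
print(pca_indicator(x)[::200])
```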


Energies ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 3299
Author(s):  
Ashish Shrestha ◽  
Bishal Ghimire ◽  
Francisco Gonzalez-Longatt

With the massive penetration of electronic power converter (EPC)-based technologies, numerous issues are being noticed in the modern power system that may directly affect system dynamics and operational security. The estimation of system performance parameters is especially important for transmission system operators (TSOs) in order to operate a power system securely. This paper presents a Bayesian model to forecast short-term kinetic energy time series data for a power system, which can thus help TSOs to operate the respective power system securely. A Markov chain Monte Carlo (MCMC) method with a No-U-Turn sampler is used for sampling, and Stan's limited-memory Broyden–Fletcher–Goldfarb–Shanno (LM-BFGS) algorithm is used for optimization. The concept of decomposable time series modeling is adopted to analyze the seasonal characteristics of the datasets, and numerous performance measurement metrics are used for model validation. In addition, an autoregressive integrated moving average (ARIMA) model is used to compare the results of the presented model. Finally, the optimal size of the training dataset required to forecast 30-min values of the kinetic energy with a low error is identified. In this study, one year of univariate data (1-min resolution) for the integrated Nordic power system (INPS) is used to forecast the kinetic energy for sequences of 30 min (i.e., short-term sequences). Performance evaluation metrics such as the root-mean-square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and mean absolute scaled error (MASE) of the proposed model are calculated to be 4.67, 3.865, 0.048, and 8.15, respectively. In addition, these metrics can be improved up to 3.28, 2.67, 0.034, and 5.62, respectively, by increasing the MCMC sampling. Similarly, 180.5 h of historical data is sufficient to forecast the short-term results for this case study with an RMSE of 1.54504.
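
The four error measures reported above (RMSE, MAE, MAPE and MASE) follow standard definitions; the short Python sketch below shows one way to compute them. The synthetic series and the use of the evaluation series itself for the MASE scaling are simplifying assumptions, not details taken from the paper.

```python
# Standard forecast error measures (RMSE, MAE, MAPE, MASE) -- illustrative only.
import numpy as np

def forecast_errors(obs, pred, season=1):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = obs - pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / obs))           # assumes obs contains no zeros
    # MASE scales MAE by the MAE of a one-step (seasonal) naive forecast; the
    # strict definition uses the training series for this denominator.
    naive_mae = np.mean(np.abs(obs[season:] - obs[:-season]))
    mase = mae / naive_mae
    return rmse, mae, mape, mase

# Illustrative 30-minute check on synthetic 1-min kinetic-energy values.
obs = 200 + np.sin(np.linspace(0, 3, 30)) * 5
pred = obs + np.random.default_rng(2).normal(0, 2, 30)
print(forecast_errors(obs, pred))
```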


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Masayuki Kano ◽  
Shin’ichi Miyazaki ◽  
Yoichi Ishikawa ◽  
Kazuro Hirahara

Abstract: Postseismic Global Navigation Satellite System (GNSS) time series following megathrust earthquakes can be interpreted as a result of afterslip on the plate interface, especially in their early phase. Afterslip is a process that releases the stress accumulated by adjacent coseismic slip, and it can be considered a recovery process for future events during earthquake cycles. The spatio-temporal evolution of afterslip often triggers subsequent earthquakes through stress perturbation. Therefore, it is important to quantitatively capture the spatio-temporal evolution of afterslip and the related postseismic crustal deformation, and to predict their future evolution with a physics-based simulation. We developed an adjoint data assimilation method that directly assimilates GNSS time series into a physics-based model to optimize the frictional parameters controlling the slip behavior on the fault. The developed method was validated with synthetic data. Through the optimization of frictional parameters, the spatial distributions of afterslip could be reproduced roughly (but not in detail) when observation noise was included. The optimization of frictional parameters not only reproduced the postseismic displacements used for the assimilation, but also improved the prediction skill for the subsequent time series. We then applied the developed method to the observed GNSS time series for the first 15 days following the 2003 Tokachi-oki earthquake. The frictional parameters in the afterslip regions were optimized to A–B ~ O(10 kPa), A ~ O(100 kPa), and L ~ O(10 mm). A large afterslip is inferred on the shallower side of the coseismic slip area. The optimized frictional parameters quantitatively predicted the postseismic GNSS time series for the following 15 days. These characteristics can also be detected if the simulation variables are simultaneously optimized. The developed data assimilation method, which can be directly applied to GNSS time series following megathrust earthquakes, is an effective quantitative tool for assessing the risk of subsequent earthquakes and for monitoring the recovery process after megathrust earthquakes.
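
The core loop described in this abstract, adjusting model parameters until simulated postseismic displacements match the GNSS observations, can be illustrated with a drastically simplified stand-in. The Python sketch below fits a hypothetical logarithmic afterslip decay u(t) = a*log(1 + t/tau) with a gradient-based optimizer; it does not implement the authors' adjoint method or the rate-and-state friction simulation.

```python
# Simplified stand-in for misfit minimization against postseismic GNSS data:
# fit a logarithmic decay model (hypothetical, not the paper's physics model).
import numpy as np
from scipy.optimize import minimize

def displacement(params, t):
    """Toy postseismic displacement model u(t) = a * log(1 + t / tau)."""
    a, tau = params
    return a * np.log1p(t / tau)

def misfit(params, t, obs):
    """Sum-of-squares misfit between modeled and observed displacements."""
    return np.sum((displacement(params, t) - obs) ** 2)

# Synthetic "observed" 15-day postseismic series (daily samples, arbitrary units).
rng = np.random.default_rng(3)
t = np.arange(1, 16, dtype=float)
obs = displacement((5.0, 2.0), t) + rng.normal(0, 0.1, t.size)

result = minimize(misfit, x0=np.array([1.0, 1.0]), args=(t, obs),
                  method="L-BFGS-B", bounds=[(1e-3, None), (1e-3, None)])
a_hat, tau_hat = result.x
print(f"estimated a={a_hat:.2f}, tau={tau_hat:.2f}")

# The fitted parameters can then be used to predict the following 15 days,
# analogous to the forward-prediction test described in the abstract.
print(displacement(result.x, np.arange(16, 31, dtype=float)))
```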

