Multi-dimensional Prediction Model for Cell Traffic in City Scale

Author(s):  
Hong Wang ◽  
Liqun Wang ◽  
Shufang Zhao ◽  
Xiuming Yue

Traffic prediction is a classical time-series prediction problem that has been investigated in many domains, but most existing models are built for a limited temporal or spatial scale. Mobile cellular network traffic prediction is of paramount importance for quality-of-service (QoS) and power management of cellular base stations, especially in the 5G era. Through statistical analysis of real historical traffic data collected at city scale over multiple months, this paper makes an in-depth study of the temporal characteristics and behavioral patterns of mobile data traffic. Considering that the time-series data exhibit different patterns across temporal, spatial, and independent dimensions, a multi-dimensional recurrent neural network (MDRNN) prediction model is established to predict future cell traffic volume over various temporal and spatial dimensions. The model is trained and tested on real data from a city, and the granularity of the proposed prediction model can be drilled down to the cell level. Compared with the traditional trend-fitting method, the proposed model achieves a mean absolute percentage error (MAPE) reduction of 6.56%, and it provides guidance for energy-efficiency optimization and power-consumption reduction of base stations across temporal and spatial dimensions.
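
The paper does not publish its implementation, so the following is only a minimal sketch of a recurrent cell-traffic predictor, assuming a hypothetical 24-step hourly window of per-cell features (own traffic, neighbor-cell mean traffic, hour of day) mapped to next-hour traffic volume; it is not the authors' MDRNN.

```python
# Minimal sketch of a recurrent traffic predictor (assumed setup, not the
# authors' MDRNN). Each sample: 24-step window of 3 per-cell features ->
# next-hour traffic volume.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 24, 3  # hypothetical window length and feature count

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),   # recurrent encoding of the window
    tf.keras.layers.Dense(1),   # predicted next-step traffic volume
])
model.compile(optimizer="adam", loss="mae")

# Toy data standing in for real per-cell measurements.
X = np.random.rand(1000, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```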

Author(s):  
Dmitrii Borkin ◽  
Martin Németh ◽  
German Michaľčonok ◽  
Olga Mezentseva

Abstract This paper focuses on time-series data analysis. We propose adding features to an existing time-series dataset to improve the performance of a prediction model. The main goal of our research was to find a suitable method for building a prediction model for time-series data, also using machine learning methods. In this phase of the research, we focus on data analysis and on ways to add features to the dataset. In this paper, we derive new parameters from one of the original features, and we also propose incorporating lags into the dataset as new features to enhance prediction performance on time-series data.
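
The abstract names lags and derived parameters as the new features; a minimal sketch of that idea in pandas follows (column names are illustrative, not from the paper).

```python
# Sketch of the lag-feature idea: derive new columns from an existing
# series so a standard regressor can exploit temporal structure.
import pandas as pd

df = pd.DataFrame({"value": range(10)})  # toy time series

# Lag features: the series shifted by k steps.
for k in (1, 2, 3):
    df[f"lag_{k}"] = df["value"].shift(k)

# A parameter derived from the original feature, e.g. a rolling mean.
df["rolling_mean_3"] = df["value"].rolling(3).mean()

df = df.dropna()  # initial rows lack a complete lag history
print(df.head())
```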


2020 ◽  
Author(s):  
Sina Ardabili ◽  
Amir Mosavi ◽  
Shahab S. Band ◽  
Annamaria R. Varkonyi-Koczy

Abstract Advancing novel models for time-series prediction of COVID-19 is of utmost importance, and machine learning (ML) methods have recently shown promising results. The present study employs an artificial neural network integrated with the grey wolf optimizer (ANN-GWO) for COVID-19 outbreak prediction on a global dataset. Training and testing were performed on time-series data from January 22 to September 15, 2020, and validation on time-series data from September 16 to October 15, 2020. Results were evaluated using the mean absolute percentage error (MAPE) and the correlation coefficient (r). ANN-GWO yielded MAPEs of 6.23%, 13.15%, and 11.4% for the training, testing, and validation phases, respectively. According to these results, the developed model copes successfully with the prediction task.
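
The ANN-GWO code itself is not published; the sketch below only shows how the two reported metrics, MAPE and r, are conventionally computed on toy values.

```python
# Conventional computation of the two reported metrics (illustrative only).
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def corr(y_true, y_pred):
    """Pearson correlation coefficient r."""
    return np.corrcoef(y_true, y_pred)[0, 1]

y_true = np.array([100.0, 120.0, 150.0, 170.0])  # toy observed cases
y_pred = np.array([ 95.0, 125.0, 140.0, 180.0])  # toy predictions
print(f"MAPE = {mape(y_true, y_pred):.2f}%, r = {corr(y_true, y_pred):.3f}")
```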


Author(s):  
Muhammad Faheem Mushtaq ◽  
Urooj Akram ◽  
Muhammad Aamir ◽  
Haseeb Ali ◽  
Muhammad Zulqarnain

Time-series prediction is important because many prediction problems, such as health, climate-change, and weather forecasting, include a time component. To solve the time-series prediction problem, various techniques have been developed over many years to enhance forecasting accuracy. This paper presents a review of physical time-series prediction applications using neural network models. Neural networks (NNs) have emerged as an effective tool for time-series forecasting. Moreover, addressing time-series problems calls for a network with a single layer of trainable weights, the Higher Order Neural Network (HONN), which can perform nonlinear input-output mapping. Researchers have therefore focused on HONNs, which have recently been used to broaden input representation spaces. The functional-mapping ability of the HONN model has been demonstrated on several time-series problems, and it shows advantages over conventional Artificial Neural Networks (ANNs). The goal of this research is to make the reader aware of HONNs for physical time-series prediction and to highlight their benefits and challenges.
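
HONN variants differ, but the common idea of a single trainable layer over higher-order input products can be sketched as follows (a toy second-order expansion, assumed for illustration).

```python
# Sketch of the HONN idea: expand inputs with higher-order (product) terms
# so a single layer of trainable weights captures a nonlinear mapping.
import numpy as np
from itertools import combinations_with_replacement

def second_order_expand(x):
    """Augment a feature vector with all degree-2 products."""
    pairs = [x[i] * x[j]
             for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.concatenate([x, pairs])

x = np.array([0.5, -1.0, 2.0])
z = second_order_expand(x)    # [x1, x2, x3, x1^2, x1*x2, ..., x3^2]
w = np.random.randn(len(z))   # the single trainable weight layer
y_hat = w @ z                 # linear in w, nonlinear in x
print(z.shape, y_hat)
```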


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Ari Wibisono ◽  
Petrus Mursanto ◽  
Jihan Adibah ◽  
Wendy D. W. T. Bayu ◽  
May Iffah Rizki ◽  
...  

Abstract Real-time information mining of a big dataset consisting of time-series data is a very challenging task. For this purpose, we propose using the mean distance and the standard deviation to enhance the accuracy of the existing fast incremental model tree with drift detection (FIMT-DD) algorithm. The standard FIMT-DD algorithm uses the Hoeffding bound as its splitting criterion. We propose additionally using the mean distance and standard deviation, which split a tree more accurately than the standard method. We verify our proposed method using the large Traffic Demand Dataset, which consists of 4,000,000 instances; Tennet's big wind power plant dataset, which consists of 435,268 instances; and a road weather dataset, which consists of 30,000,000 instances. The results show that our proposed FIMT-DD algorithm improves accuracy compared to the standard method and the Chernoff bound approach. The measured errors demonstrate that our approach yields a lower Mean Absolute Percentage Error (MAPE) at every stage of learning, by approximately 2.49% compared with the Chernoff bound method and 19.65% compared with the standard method.
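
The standard splitting criterion mentioned above is the Hoeffding bound, epsilon = sqrt(R^2 ln(1/delta) / (2n)); a small worked computation follows (the paper's mean-distance/standard-deviation refinement is not reproduced here).

```python
# The Hoeffding bound used as the standard FIMT-DD split criterion.
# A split on the best attribute is taken once its merit exceeds the
# runner-up's by more than epsilon.
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n))."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# Example: merit range [0, 1], confidence 1 - 1e-6, 500 observed instances.
eps = hoeffding_bound(1.0, 1e-6, 500)
print(f"epsilon = {eps:.4f}")  # split if best_merit - second_merit > eps
```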


Author(s):  
Jae-Hyun Kim ◽  
Chang-Ho An

Due to the global economic downturn, the Korean economy continues to slump. Accordingly, the Bank of Korea implemented a monetary policy of cutting the base rate to respond actively to the economic slowdown and low prices, and economists have been trying to predict and analyze interest rate hikes and cuts. In this study, therefore, a prediction model was estimated and evaluated using a vector autoregressive (VAR) model with time-series data on long- and short-term interest rates. The data used were the call rate (1 day), the loan interest rate, and the Treasury rate (3 years) from January 2002 to December 2019, extracted monthly from the Bank of Korea database. The stationarity of the variables was confirmed by the ADF unit root test, and a bidirectional linear dependency between the variables was confirmed by the Granger causality test. For model identification, the minimum-information criteria AICC, SBC, and HQC were used. The significance of the parameters was confirmed through t-tests, and the fit of the estimated prediction model was confirmed by the significance test of the cross-correlation matrix and the multivariate Portmanteau test. Predicting the call rate, loan interest rate, and Treasury rate with the model presented in this study indicates that interest rates will continue to drop.
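
A minimal sketch of this workflow with statsmodels follows; the series below are simulated stand-ins, since the study's actual data come from the Bank of Korea database.

```python
# Sketch of the ADF-test-then-VAR workflow on simulated rate series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 216  # monthly observations, Jan 2002 - Dec 2019
data = pd.DataFrame({
    "call_rate":   np.cumsum(rng.normal(0, 0.1, n)),
    "loan_rate":   np.cumsum(rng.normal(0, 0.1, n)),
    "treasury_3y": np.cumsum(rng.normal(0, 0.1, n)),
})

# ADF unit-root test; difference non-stationary series before fitting.
for col in data:
    p_value = adfuller(data[col])[1]
    print(f"{col}: ADF p-value = {p_value:.3f}")
diffed = data.diff().dropna()

model = VAR(diffed)
res = model.fit(maxlags=12, ic="aic")  # lag order by information criterion
forecast = res.forecast(diffed.values[-res.k_ar:], steps=6)
```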


PeerJ ◽  
2019 ◽  
Vol 7 ◽  
pp. e7183 ◽  
Author(s):  
Hafiza Mamona Nazir ◽  
Ijaz Hussain ◽  
Ishfaq Ahmad ◽  
Muhammad Faisal ◽  
Ibrahim M. Almanjahie

Due to the non-stationary and noisy characteristics of river flow time-series data, pre-processing methods are adopted to address the multi-scale and noise complexity. In this paper, we propose an improved framework, Complete Ensemble Empirical Mode Decomposition with Adaptive Noise-Empirical Bayesian Threshold (CEEMDAN-EBT). CEEMDAN-EBT is employed to decompose non-stationary river flow time-series data into Intrinsic Mode Functions (IMFs). The derived IMFs are divided into two parts: noise-dominant IMFs and noise-free IMFs. First, the noise-dominant IMFs are denoised using an empirical Bayesian threshold that accounts for the noise and sparsity of the IMFs. Second, the denoised IMFs and noise-free IMFs are used as inputs to data-driven and simple stochastic models, respectively, to predict the river flow time series. Finally, the predicted IMFs are aggregated to obtain the final prediction. The proposed framework is illustrated on four rivers of the Indus Basin System. Prediction performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). Our proposed method, CEEMDAN-EBT-MM, produced the smallest MAPE in all four case studies compared with the other methods. This suggests that our hybrid model can serve as an efficient tool for providing reliable predictions of non-stationary and noisy time-series data to policymakers, for instance in power-generation planning and water-resource management.
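
A minimal sketch of the decomposition stage follows, assuming the third-party PyEMD package (pip install EMD-signal); the empirical Bayesian thresholding and the downstream per-IMF models are not reproduced here.

```python
# Sketch of CEEMDAN decomposition on a toy series standing in for river flow.
import numpy as np
from PyEMD import CEEMDAN

t = np.linspace(0, 1, 500)
flow = (np.sin(2 * np.pi * 5 * t)            # slow oscillation
        + 0.5 * np.sin(2 * np.pi * 20 * t)   # faster oscillation
        + 0.2 * np.random.randn(t.size))     # noise

imfs = CEEMDAN()(flow)                       # Intrinsic Mode Functions
print(f"{imfs.shape[0]} IMFs extracted")     # highest-frequency IMFs first

# Noise-dominant IMFs (typically the first, highest-frequency ones) would
# be denoised; the rest are modeled directly; predictions are then summed.
```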


2020 ◽  
Vol 29 (07n08) ◽  
pp. 2040010
Author(s):  
Shao-Pei Ji ◽  
Yu-Long Meng ◽  
Liang Yan ◽  
Gui-Shan Dong ◽  
Dong Liu

Time-series data from real problems have nonlinear, non-smooth, and multi-scale composite characteristics. This paper first proposes a gated recurrent unit-correction (GRU-corr) network model, which adds a correction layer to the GRU neural network. Then, an adaptive staged variation PSO (ASPSO) is proposed. Finally, to overcome the imprecise selection of GRU-corr network parameters and achieve high-precision global optimization of those parameters, the weight parameters and the number of hidden nodes of GRU-corr are optimized by ASPSO, and a time-series prediction model (ASPSO-GRU-corr) based on the ASPSO-optimized GRU-corr is proposed. In the experiments, a comparative analysis of the optimization performance of ASPSO on benchmark functions was performed to verify its validity, and the ASPSO-GRU-corr model was then used to predict ship-motion cross-sway angle data. The results show that ASPSO has better optimization performance and convergence speed than other algorithms, while ASPSO-GRU-corr has higher generalization performance and lower architectural complexity. ASPSO-GRU-corr can reveal the intrinsic multi-scale composite features of the time series and is a reliable prediction method for nonlinear and non-stationary time series.
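
As a baseline for the kind of search ASPSO performs, here is a plain particle swarm optimization loop on a benchmark function; the adaptive staged variation is the paper's contribution and is not reproduced.

```python
# Plain PSO on the sphere benchmark (illustrative baseline, not ASPSO).
import numpy as np

def sphere(x):                       # benchmark objective, minimum at 0
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(1)
n_particles, dim, iters = 30, 5, 100
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val       # update each particle's personal best
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", sphere(gbest))
```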


Agriculture ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 612
Author(s):  
Helin Yin ◽  
Dong Jin ◽  
Yeong Hyeon Gu ◽  
Chang Jin Park ◽  
Sang Keun Han ◽  
...  

It is difficult to forecast vegetable prices because they are affected by numerous factors, such as weather and crop production, and the time-series data have strongly non-linear and non-stationary characteristics. To address these issues, we propose the STL-ATTLSTM (STL-Attention-based LSTM) model, which integrates seasonal-trend decomposition using Loess (STL) as a preprocessing method with an attention mechanism based on long short-term memory (LSTM). The proposed STL-ATTLSTM forecasts monthly vegetable prices using various types of information, such as vegetable prices, weather information from the main production areas, and market trading volumes. The STL method decomposes the time-series vegetable price data into trend, seasonality, and remainder components; the model uses the remainder component obtained by removing the trend and seasonality components. In the model-training process, attention weights are assigned to all input variables, so the model's prediction performance improves by focusing on the variables that affect the prediction results. The proposed STL-ATTLSTM was applied to five crops, namely cabbage, radish, onion, hot pepper, and garlic, and its performance was compared to three benchmark models (LSTM, attention LSTM, and STL-LSTM). The results show that the LSTM model combined with the STL method (STL-LSTM) achieved 12% higher prediction accuracy than the attention LSTM model without STL and resolved the prediction lag arising from high seasonality. The attention LSTM model improved prediction accuracy by approximately 4% to 5% over the LSTM model. The STL-ATTLSTM model achieved the best performance, with an average root mean square error (RMSE) of 380 and an average mean absolute percentage error (MAPE) of 7%.
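
A minimal sketch of the STL preprocessing step follows, using statsmodels on simulated monthly prices (period=12); the attention LSTM that consumes the remainder component is not reproduced here.

```python
# Sketch of STL decomposition on a toy monthly price series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

idx = pd.date_range("2013-01", periods=96, freq="MS")
price = pd.Series(
    100
    + 10 * np.sin(2 * np.pi * np.arange(96) / 12)  # yearly seasonality
    + 0.5 * np.arange(96)                          # upward trend
    + np.random.randn(96) * 3,                     # remainder/noise
    index=idx,
)

res = STL(price, period=12).fit()
remainder = price - res.trend - res.seasonal  # equals res.resid
# 'remainder' is the detrended, deseasonalized series the LSTM stage uses.
```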

