Time-series of economic data

2001 ◽  
Vol 5 (3) ◽  
pp. 380-412 ◽  
Author(s):  
Melvin A. Hinich ◽  
Phillip Wild

We develop a test of the null hypothesis that an observed time series is a realization of a strictly stationary random process. Our test is based on the result that the kth value of the discrete Fourier transform of a sample frame has a zero mean under the null hypothesis. The test that we develop will have considerable power against an important form of nonstationarity hitherto not considered in the mainstream econometric time-series literature, that is, where the mean of a time series is periodic with random variation in its periodic structure. The size and power properties of the test are investigated and its applicability to real-world problems is demonstrated by application to three economic data sets.
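The zero-mean property of the DFT coefficients that the test exploits can be illustrated numerically. The sketch below is not the authors' full test statistic; the frame length and the periodic-mean alternative are illustrative choices.

```python
import numpy as np

def mean_dft_coefficients(x, frame_len):
    """Average the discrete Fourier transform coefficients over
    non-overlapping frames of the series.  Under the null of strict
    stationarity, every coefficient with k != 0 has zero mean, so a
    large averaged coefficient signals a periodic mean."""
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    return np.fft.rfft(frames, axis=1).mean(axis=0)

rng = np.random.default_rng(0)
stationary = rng.standard_normal(4096)
coeffs_null = mean_dft_coefficients(stationary, 64)

# A mean that oscillates with period 8 samples concentrates energy
# in the k = 8 coefficient of a 64-point frame.
t = np.arange(4096)
periodic = stationary + 5.0 * np.cos(2.0 * np.pi * 8.0 * t / 64.0)
coeffs_alt = mean_dft_coefficients(periodic, 64)
```

For the stationary series the averaged coefficients at k >= 1 stay near zero, while the periodic-mean series produces a conspicuously large coefficient at k = 8, which is the kind of departure the test is designed to detect.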


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Hao Du ◽  
Hao Gong ◽  
Suyue Han ◽  
Peng Zheng ◽  
Bin Liu ◽  
...  

Social economists often need to reconstruct realistic economic data in order to analyze the underlying driving factors in time-series data or to study volatility, and the intrinsic complexity of time-series data itself attracts their interest. This paper proposes a bilateral permutation entropy (BPE) index method to address this problem, building on partly ensemble empirical mode decomposition (PEEMD), a data analysis method for nonlinear and nonstationary time series, and compares it with the T-test method. First, PEEMD is applied to gold price analysis, decomposing the series into several independent intrinsic mode functions (IMFs) ordered from high to low frequency. Second, the IMFs are reconstructed fine-to-coarse into three parts, a high-frequency part, a low-frequency part, and the overall trend, using both the BPE index method and the T-test method. Then, a correlation analysis is conducted between the reconstructed data and the related macroeconomic factors, including global gold production, world crude oil prices, and world inflation. The results indicate that the BPE index method is a valuable technique for time-series data analysis, reconstructing IMFs to obtain realistic data.
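The permutation-entropy machinery underlying the BPE index can be sketched as follows. This is only the standard Bandt-Pompe permutation entropy; the bilateral index and the PEEMD decomposition themselves are too involved for a short example.

```python
import math
import numpy as np

def permutation_entropy(x, order=3):
    """Bandt-Pompe permutation entropy, normalised to [0, 1].
    Each window of `order` samples is mapped to its ordinal pattern
    (the permutation that sorts it); the entropy of the pattern
    distribution measures the series' complexity."""
    patterns = {}
    for i in range(len(x) - order + 1):
        pat = tuple(np.argsort(x[i:i + order]))
        patterns[pat] = patterns.get(pat, 0) + 1
    total = sum(patterns.values())
    h = -sum(c / total * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

rng = np.random.default_rng(1)
noise = rng.standard_normal(2000)   # maximally irregular
trend = np.arange(2000.0)           # a single ordinal pattern
```

White noise visits all ordinal patterns roughly uniformly, giving entropy near 1, while a monotone trend produces a single pattern and entropy 0; series between these extremes carry intermediate complexity, which is what an index of this kind ranks.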


1982 ◽  
Vol 19 (A) ◽  
pp. 413-425
Author(s):  
Don McNeil

Some inadequacies of both the traditional (exponential smoothing) and Box-Jenkins approaches to time series forecasting of economic data are investigated. An approach is suggested which integrates these two methodologies. It is based on smoothing the data using straight line segments instead of differencing to obtain stationarity, and forecasting using an autoregressive-moving-average model for the residuals from the most recent linear segment. The efficiency of this approach is calculated theoretically using a series comprising integrated white noise.
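The suggested hybrid can be sketched as follows. The single linear segment and the AR(1) residual model are simplifications of the paper's scheme, which fits the most recent of several line segments and an ARMA model to its residuals.

```python
import numpy as np

def segment_forecast(y, horizon=1):
    """Fit a straight line to the recent segment (here, the whole
    window), model the residuals with an AR(1), and forecast by
    extrapolating the line plus the decayed last residual."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # Lag-1 autocorrelation of the residuals stands in for the
    # AR(1) coefficient.
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    future_t = len(y) + horizon - 1
    return slope * future_t + intercept + (phi ** horizon) * resid[-1]

rng = np.random.default_rng(2)
y = 2.0 * np.arange(50) + 1.0 + 0.01 * rng.standard_normal(50)
forecast = segment_forecast(y, horizon=1)  # close to 2 * 50 + 1 = 101
```

The point of the hybrid is that the line segment supplies the stationarity that differencing would otherwise provide, while the residual model captures the remaining serial correlation.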


10.29007/tjbv ◽  
2020 ◽  
Author(s):  
Craig Capano ◽  
Jeanette Hariharan ◽  
Hashem Moud ◽  
Ashish Asutosh

Estimating future costs of construction is an important component of the success of any contracting company. Traditionally, a cost modifier has been used to offset cost escalations or volatility predictions, and construction estimators and contractors have also attempted a variety of prediction models. This paper establishes a basis for reliable forecasting and explores the possibility of developing prediction models using time-series neural networks (NN), utilizing historic data of three accepted macroeconomic composite indicators (MEI) and two accepted construction industry cost indices. NN-based models built on these macroeconomic indicators may be used to predict cost escalations for construction. Nonlinear autoregressive NN models are constructed from the macroeconomic data and the construction cost data to determine whether a reliable time-series predictive model can be established. The results indicate a high correlation over time between the macroeconomic escalations (independent factors) and the construction cost escalations (dependent factors). Knowledge of these correlations could aid in the prediction of cost escalations during construction.
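A nonlinear autoregressive model of this kind can be sketched with a random-feature shortcut: a fixed random tanh hidden layer with a least-squares readout. This is an extreme-learning-machine-style stand-in, not the backprop-trained networks the paper describes, and the sine series stands in for the actual MEI and cost-index data.

```python
import numpy as np

def fit_nar(series, lags=4, hidden=16, seed=0):
    """Minimal nonlinear autoregressive (NAR) sketch: lagged values
    pass through a fixed random tanh layer, and the output weights
    are fit by least squares.  Returns a one-step-ahead predictor."""
    rng = np.random.default_rng(seed)
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    W = rng.normal(scale=1.0 / lags, size=(lags, hidden))
    H = np.tanh(X @ W)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda window: float(np.tanh(np.asarray(window) @ W) @ beta)

series = np.sin(0.3 * np.arange(300))     # stand-in for an index series
predict = fit_nar(series, lags=4)
next_value = predict(series[-4:])         # one-step-ahead forecast
```

A trained network of this shape maps the last few observed index values to the next one, which is the structure the paper's NAR models share.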


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Hongxiang Sun ◽  
Zhongkai Yao ◽  
Qingchun Miao

With the rapid development of information technology and the globalization of the economy, financial data are being generated and collected at an unprecedented rate. Consequently, there is a dire need for automated methods that make effective and proficient use of the substantial amount of financial data to help in investment planning and decision-making. Data mining methods have been employed to discover hidden patterns and estimate future tendencies in financial markets. In this article, an improved macroeconomic growth prediction algorithm based on data mining and fuzzy correlation analysis is presented. The study analyzes the sequence of economic characteristics, reorganizes the spatial structure of economic characteristics, and integrates the statistical information of economic data. Using an optimized Apriori algorithm, association rules between macroeconomic data are generated. Distinct features are extracted according to the association rules using the joint distribution characteristic quantity of macroeconomic time series. Moreover, the Doppler parameter of macroeconomic time-series growth prediction is calculated, and the residual analysis method of the regression model is used to predict the growth of macroeconomic data. Experimental results show that the proposed algorithm has better adaptability, shorter computation time, and higher prediction accuracy in economic data mining.
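The Apriori step can be sketched on toy data: discretised macro indicator movements become items, and frequently co-occurring movements become candidate association rules. The textbook frequent-itemset pass below and the item names such as "gdp_up" are invented for illustration, not the paper's optimized variant.

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Textbook Apriori sketch: grow itemsets level by level, keeping
    only those whose support (fraction of transactions containing the
    itemset) meets the threshold."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent, k_sets = {}, [frozenset([i]) for i in items]
    while k_sets:
        counts = {s: sum(s <= t for t in transactions) for s in k_sets}
        current = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(current)
        # Next level: candidates built from surviving items only.
        survivors = sorted({i for s in current for i in s})
        k = len(next(iter(k_sets))) + 1
        k_sets = [frozenset(c) for c in combinations(survivors, k)]
    return frequent

movements = [
    {"gdp_up", "cpi_up"},
    {"gdp_up", "cpi_up"},
    {"gdp_up"},
    {"cpi_up", "rate_up"},
]
frequent = apriori(movements, min_support=0.5)
```

A frequent pair such as {"gdp_up", "cpi_up"} then yields rules like gdp_up => cpi_up, whose confidence would feed the prediction stage.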


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-8
Author(s):  
José Roberto C. Piqueira ◽  
Sérgio Henrique Vannucchi Leme de Mattos

This work is a generalization of the López-Ruiz, Mancini, and Calbet (LMC) and Shiner, Davison, and Landsberg (SDL) complexity measures, considering that the state of a system or process is represented by a continuous temporal series of a dynamical variable. As the two complexity measures are based on the calculation of informational entropy, an equivalent information source is defined by using partitions of the dynamical variable range. During the time intervals, the information associated with the measured dynamical variable is the seed to calculate instantaneous LMC and SDL measures. To show how the methodology generates indicators, two examples, one concerning meteorological data and the other economic data, are presented and discussed.
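The LMC measure, built from entropy over partitions of the variable's range, can be sketched as below. The windowed (instantaneous) evaluation and the SDL variant are omitted, and the bin count is an illustrative choice.

```python
import numpy as np

def lmc_complexity(series, bins=8):
    """LMC complexity of a series: partition the variable's range
    into bins, estimate probabilities, then multiply normalised
    Shannon entropy H by the disequilibrium D (squared distance from
    the uniform distribution).  Both a constant signal (H = 0) and
    pure noise (D near 0) score low; structured signals score higher."""
    counts, _ = np.histogram(series, bins=bins)
    p = counts / counts.sum()
    nz = p[p > 0]
    H = -(nz * np.log(nz)).sum() / np.log(bins)
    D = ((p - 1.0 / bins) ** 2).sum()
    return H * D

rng = np.random.default_rng(3)
uniform_noise = rng.uniform(size=10000)                # H high, D near 0
structured = np.sin(np.linspace(0.0, 20.0 * np.pi, 10000))  # intermediate
```

Evaluating this quantity over sliding windows of a temporal series yields exactly the kind of instantaneous complexity indicator the paper applies to meteorological and economic data.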


1985 ◽  
Vol 80 (391) ◽  
pp. 783
Author(s):  
Mark W. Watson ◽  
Arnold Zellner

