Fractal dimension of time series estimated by the surface division method

2019 ◽  
Vol 64 (9) ◽  
pp. 7-24
Author(s):  
Grzegorz Przekota

One of the most important issues to be settled in the analysis of time series is determining their variability and identifying the process of shaping their values. In the classical approach, volatility is most often identified with the variance of growth rates. However, risk can be characterised not only by the variability, but also by the predictability of the changes, which can be evaluated using the fractal dimension. The aim of this paper is to present the applicability of the fractal dimension estimated by the surface division method to the assessment of the properties of time series. The paper presents a method for determining the fractal dimension, its interpretation, significance tables and an example of its application. Fractal dimension has been used here to describe the properties of the time series of the WIG stock exchange index in 2014–2018 and the time series of the growth rates of the largest listed Polish companies in 2015–2018. The applied method makes it possible to classify a time series into one of three classes of series: persistent, random or antipersistent. Specific cases show the differences between the use of standard deviation and fractal dimension for risk assessment. Fractal dimension appears here to be a method for assessing the degree of stability of variations.
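
A minimal numerical sketch of the idea, using the closely related box-counting estimator rather than the paper's exact surface division procedure (whose details are not given in the abstract): the graph of the series is rescaled into the unit square, the grid cells it passes through are counted at several resolutions, and the log-log slope gives the dimension. The interpretation thresholds follow the persistent/random/antipersistent classification mentioned above.

```python
import numpy as np

def box_counting_dimension(series, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a time series graph by box counting.

    The graph is rescaled into the unit square; for each n x n grid the cells
    touched by the piecewise-linear graph are counted, and the slope of
    log(count) versus log(n) estimates the dimension.
    """
    y = np.asarray(series, dtype=float)
    x = np.linspace(0.0, 1.0, len(y))
    y = (y - y.min()) / (y.max() - y.min() + 1e-12)

    counts = []
    for n in box_sizes:
        eps = 1.0 / n
        occupied = set()
        for i in range(len(y) - 1):
            col = int(min(x[i] // eps, n - 1))
            lo, hi = sorted((y[i], y[i + 1]))
            row_lo = int(min(lo // eps, n - 1))
            row_hi = int(min(hi // eps, n - 1))
            for row in range(row_lo, row_hi + 1):
                occupied.add((col, row))
        counts.append(len(occupied))

    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return slope  # ~1.5: random; below 1.5: persistent; above 1.5: antipersistent

# A pure random walk should yield a dimension close to 1.5
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(2048))
print(round(box_counting_dimension(walk), 2))
```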

1999 ◽  
Vol 13 (12) ◽  
pp. 1463-1476 ◽  
Author(s):  
V. I. YUKALOV ◽  
S. GLUZMAN

The self-similar analysis of time series is generalized by introducing the notion of scenario probabilities. This makes it possible to give a complete statistical description for the forecast spectrum by defining the average forecast as a weighted fixed point and by calculating the corresponding a priori standard deviation and variance coefficient. Several examples of stock-market time series illustrate the method.
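
The statistical bookkeeping described above can be illustrated with a small, hypothetical example: given scenario forecasts and their probabilities (placeholders here, not the authors' self-similar approximants), the average forecast is the probability-weighted value, and the a priori standard deviation and variance coefficient follow directly.

```python
import numpy as np

def forecast_statistics(forecasts, probabilities):
    """Weighted average forecast with its a priori spread.

    forecasts     -- candidate future values suggested by the scenarios
    probabilities -- scenario weights, summing to one
    Returns (average forecast, a priori standard deviation, variance coefficient).
    """
    f = np.asarray(forecasts, dtype=float)
    p = np.asarray(probabilities, dtype=float)
    assert np.isclose(p.sum(), 1.0), "scenario probabilities must sum to 1"

    mean = np.sum(p * f)                        # probability-weighted forecast
    std = np.sqrt(np.sum(p * (f - mean) ** 2))  # a priori standard deviation
    return mean, std, std / abs(mean)           # variance coefficient = std / |mean|

# Hypothetical scenarios for the next value of a stock index
print(forecast_statistics([102.0, 98.5, 105.0], [0.5, 0.3, 0.2]))
```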


1998 ◽  
Vol 2 ◽  
pp. 141-148
Author(s):  
J. Ulbikas ◽  
A. Čenys ◽  
D. Žemaitytė ◽  
G. Varoneckas

A variety of methods of nonlinear dynamics have been used to explore the possibilities of time series analysis in experimental physiology. The dynamical nature of the experimental data was checked using specific methods. The statistical properties of the heart rate have been investigated. The correlation between cardiovascular function and the statistical properties of both the heart rate and the stroke volume has been analyzed. The possibility of using data on correlations in the heart rate for monitoring cardiovascular function is discussed.


1984 ◽  
Vol 30 (104) ◽  
pp. 66-76 ◽  
Author(s):  
Paul A. Mayewski ◽  
W. Berry Lyons ◽  
N. Ahmad ◽  
Gordon Smith ◽  
M. Pourchet

Abstract Spectral analysis of time series from a c. 17 ± 0.3 year core, calibrated for total β activity, recovered from Sentik Glacier (4908 m), Ladakh, Himalaya, yields several recognizable periodicities, including subannual, annual and multi-annual. The time series include both chemical data (chloride, sodium, reactive iron, reactive silicate, reactive phosphate, ammonium, δD, δ18O and pH) and physical data (density, debris and ice-band locations, and microparticles in size grades 0.50 to 12.70 μm). Source areas for the chemical species investigated and the general air-mass circulation defined from the chemical and physical time series are discussed to demonstrate the potential of such studies in the development of paleometeorological data sets from remote high-alpine glacierized sites such as the Himalaya.
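
As a generic illustration of the spectral analysis underlying this abstract (the ice-core records themselves are not reproduced here), a simple periodogram picks dominant periodicities out of an evenly sampled series; the synthetic input below mixes an annual and a ~3.5-year cycle with noise and is purely a stand-in.

```python
import numpy as np

def dominant_periods(series, dt=1.0, n_peaks=3):
    """Return the periods of the strongest periodogram peaks of an evenly sampled series."""
    series = np.asarray(series, dtype=float) - np.mean(series)
    power = np.abs(np.fft.rfft(series)) ** 2
    freqs = np.fft.rfftfreq(len(series), d=dt)
    order = np.argsort(power[1:])[::-1] + 1      # skip the zero-frequency term
    return [1.0 / freqs[i] for i in order[:n_peaks]]

# Synthetic stand-in: ~17 years of monthly values with annual and ~3.5-year cycles
t = np.arange(0, 17, 1 / 12)
signal = np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * t / 3.5)
signal += 0.3 * np.random.default_rng(1).standard_normal(len(t))
print(dominant_periods(signal, dt=1 / 12))       # periods in years, near 1 and 3.5
```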


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 890
Author(s):  
Jakub Bartak ◽  
Łukasz Jabłoński ◽  
Agnieszka Jastrzębska

In this paper, we study economic growth and its volatility from an episodic perspective. We first demonstrate the ability of the genetic algorithm to detect shifts in the volatility and levels of a given time series. Having shown that it works well, we then use it to detect structural breaks that segment the GDP per capita time series into episodes characterized by different means and volatility of growth rates. We further investigate whether a volatile economy is likely to grow more slowly and analyze the determinants of high/low growth with high/low volatility patterns. The main results indicate a negative relationship between volatility and growth. Moreover, the results suggest that international trade simultaneously promotes growth and increases volatility, human capital promotes growth and stability, and financial development reduces volatility and negatively correlates with growth.
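
A stripped-down sketch of the segmentation objective such a genetic algorithm could optimize: candidate breakpoints split the growth-rate series into episodes, each scored by a Gaussian log-likelihood with episode-specific mean and variance. A crude random search stands in for the genetic operators here; the names and data are illustrative, not the authors' implementation.

```python
import numpy as np

def segmentation_score(rates, breaks):
    """Gaussian log-likelihood of growth rates under episode-specific mean and variance."""
    edges = [0, *sorted(breaks), len(rates)]
    ll = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        seg = rates[a:b]
        if len(seg) < 3:
            return -np.inf                        # forbid degenerate episodes
        var = seg.var() + 1e-12
        ll += -0.5 * len(seg) * (np.log(2 * np.pi * var) + 1.0)
    return ll

def random_search_breaks(rates, n_breaks=2, n_trials=5000, seed=0):
    """Crude stand-in for the genetic search: sample breakpoint sets, keep the best."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_trials):
        cand = sorted(rng.choice(np.arange(3, len(rates) - 3), n_breaks, replace=False))
        score = segmentation_score(rates, cand)
        if score > best_score:
            best, best_score = cand, score
    return best

# Synthetic growth rates: calm episode, volatile episode, calm high-growth episode
rng = np.random.default_rng(42)
rates = np.concatenate([rng.normal(0.02, 0.01, 30),
                        rng.normal(0.00, 0.05, 30),
                        rng.normal(0.04, 0.01, 30)])
print(random_search_breaks(rates))                # breakpoints expected near 30 and 60
```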


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 659
Author(s):  
Jue Lu ◽  
Ze Wang

Entropy indicates the irregularity or randomness of a dynamic system. Over the decades, entropy calculated at different scales of the system through subsampling or coarse graining has been used as a surrogate measure of system complexity. One popular multi-scale entropy analysis is multi-scale sample entropy (MSE), which calculates entropy through the sample entropy (SampEn) formula at each time scale. SampEn is defined by the “logarithmic likelihood” that a small section of the data (within a window of length m) that “matches” other sections will still “match” them if the section window length is increased by one. A “match” is defined by a threshold of r times the standard deviation of the entire time series. A problem of the current MSE algorithm is that SampEn calculations at different scales are based on the same matching threshold, defined by the original time series, although the data standard deviation actually changes with the subsampling scales. Using a fixed threshold automatically introduces a systematic bias into the calculation results. The purpose of this paper is to mathematically present this systematic bias and to provide methods for correcting it. Our work will help the large MSE user community avoid introducing this bias into their multi-scale SampEn calculation results.
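
A compact sketch of the mechanics behind this point: a plain SampEn implementation and a coarse-graining loop in which the matching threshold is either kept fixed at r times the original standard deviation (the biased default) or recomputed from each coarse-grained series (one straightforward fix consistent with the abstract; the paper's own correction may be analytical rather than this empirical rescaling).

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates matching
    within tolerance r (Chebyshev distance), A counts the same pairs still
    matching when the template length grows to m + 1."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        hits = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            hits += np.sum(d <= r)
        return hits

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2, rescale_r=True):
    """Coarse-grain the series at each scale; the matching threshold is either
    recomputed from the coarse-grained series (rescale_r=True) or kept fixed at
    r * std of the original series, which is the biased default."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        cg = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        tol = r * (cg.std() if rescale_r else x.std())
        out.append(sample_entropy(cg, m=m, r=tol))
    return out

noise = np.random.default_rng(0).standard_normal(2000)
print(multiscale_entropy(noise, rescale_r=False))  # drifts with scale (the bias)
print(multiscale_entropy(noise, rescale_r=True))   # roughly flat for white noise
```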


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Ari Wibisono ◽  
Petrus Mursanto ◽  
Jihan Adibah ◽  
Wendy D. W. T. Bayu ◽  
May Iffah Rizki ◽  
...  

Abstract Real-time information mining of a big dataset consisting of time series data is a very challenging task. For this purpose, we propose using the mean distance and the standard deviation to enhance the accuracy of the existing fast incremental model tree with drift detection (FIMT-DD) algorithm. The standard FIMT-DD algorithm uses the Hoeffding bound as its splitting criterion. We propose the additional use of the mean distance and the standard deviation, which split a tree more accurately than the standard method. We verify our proposed method using the large Traffic Demand Dataset, which consists of 4,000,000 instances; TenneT's big wind power plant dataset, which consists of 435,268 instances; and a road weather dataset, which consists of 30,000,000 instances. The results show that our proposed FIMT-DD algorithm improves the accuracy compared with the standard method and the Chernoff bound approach. The measured errors demonstrate that our approach results in a lower Mean Absolute Percentage Error (MAPE) in every stage of learning, by approximately 2.49% compared with the Chernoff bound method and 19.65% compared with the standard method.
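
For context, the baseline criterion being compared against is the Hoeffding bound; a minimal sketch of that split decision is shown below (simplified and with illustrative parameter values, not the FIMT-DD implementation or the proposed mean-distance/standard-deviation criterion).

```python
import math

def hoeffding_bound(value_range, delta, n_samples):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)): with probability 1 - delta, the
    observed mean of a variable with range R lies within epsilon of its true mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n_samples))

def should_split(best_gain, second_gain, value_range=1.0, delta=1e-7, n_samples=1000,
                 tie_threshold=0.05):
    """Standard Hoeffding-tree rule: split when the best candidate's advantage over
    the runner-up exceeds the bound, or when the bound falls below a tie threshold."""
    eps = hoeffding_bound(value_range, delta, n_samples)
    return (best_gain - second_gain) > eps or eps < tie_threshold

# Example: after 1,000 instances the best split looks 0.12 better than the runner-up
print(should_split(best_gain=0.55, second_gain=0.43, n_samples=1000))  # True
```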

