FRACTAL STRUCTURE OF FINANCIAL HIGH FREQUENCY DATA

Fractals, 2002, Vol 10 (01), pp. 13-18
Author(s): YOSHIAKI KUMAGAI

We propose a new method to describe the scaling behavior of time series. We introduce an extension of extreme values: extreme values determined by a scale. Using these scale-dependent extremes, we define some functions, and with these functions we can measure a kind of fractal dimension, the fold dimension. In financial high-frequency data, observations can occur at varying time intervals. These functions allow non-equidistant data to be analyzed without interpolation or resampling onto an even grid, and the problem of choosing an appropriate time scale is avoided. Finally, the functions correspond to the viewpoint of an investor whose transaction costs coincide with the spread.
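
The construction is not spelled out in the abstract; as a loosely related illustration only, and not the author's fold-dimension definition, the sketch below counts price reversals of at least a scale delta directly on (possibly unequally spaced) tick prices, the kind of scale-dependent extreme that needs no interpolation or resampling. The function name and the reversal rule are assumptions for this example.

```python
# Illustration only (not the paper's construction): count reversals of at
# least `delta` in a tick price series, using no interpolation or resampling.
# Repeating this for several values of delta and examining how the count
# scales with delta gives a crude scaling exponent.
def count_reversals(prices, delta):
    count = 0
    anchor = prices[0]      # running extreme of the current leg
    direction = 0           # +1 rising leg, -1 falling leg, 0 undetermined
    for p in prices[1:]:
        if direction >= 0 and p <= anchor - delta:
            count += 1      # downward reversal: `anchor` was a scale-delta maximum
            direction, anchor = -1, p
        elif direction <= 0 and p >= anchor + delta:
            count += 1      # upward reversal: `anchor` was a scale-delta minimum
            direction, anchor = +1, p
        elif (direction >= 0 and p > anchor) or (direction <= 0 and p < anchor):
            anchor = p      # the current leg keeps extending
    return count
```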

2004, Vol 07 (05), pp. 615-643
Author(s): ERHAN BAYRAKTAR, H. VINCENT POOR, K. RONNIE SIRCAR

S&P 500 index data sampled at one-minute intervals over the course of 11.5 years (January 1989–May 2000) is analyzed, and in particular the Hurst parameter over segments of stationarity (the time periods over which the Hurst parameter is almost constant) is estimated. An asymptotically unbiased and efficient estimator based on the log-scale spectrum is employed. The estimator is asymptotically Gaussian, and the variance of the estimate obtained from a data segment of N points is of order 1/N. Wavelet analysis is well suited to this high-frequency data set, since the pyramidal algorithm for computing the detail coefficients keeps the computational complexity low. The estimator is robust to additive non-stationarities, and it is shown here to exhibit some degree of robustness to multiplicative non-stationarities, such as seasonalities and volatility persistence, as well. The analysis suggests that the market became more efficient in the period 1997–2000.
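
As a rough sketch of a log-scale (wavelet) spectrum estimator of this kind in Python: the choice of PyWavelets, the db3 wavelet and the scale range below are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a wavelet log-scale Hurst estimator (Abry-Veitch style).
# Assumes NumPy and PyWavelets; wavelet and scale range are illustrative.
import numpy as np
import pywt

def hurst_logscale(x, wavelet="db3", j1=2, j2=8):
    """Estimate H from an increment series x (e.g. one-minute log-returns)."""
    max_level = pywt.dwt_max_level(len(x), pywt.Wavelet(wavelet).dec_len)
    j2 = min(j2, max_level)
    coeffs = pywt.wavedec(x, wavelet, level=j2)    # [cA_j2, cD_j2, ..., cD_1]
    details = coeffs[1:][::-1]                     # reorder to cD_1 ... cD_j2
    scales, log_energy = [], []
    for j, d in enumerate(details, start=1):
        if j < j1:
            continue                               # drop the finest scales
        scales.append(j)
        log_energy.append(np.log2(np.mean(d ** 2)))
    slope, _ = np.polyfit(scales, log_energy, 1)   # unweighted fit, for brevity
    return (slope + 1.0) / 2.0                     # fGn-type scaling: slope = 2H - 1
```

Applied to successive data segments, such an estimator yields the segment-wise Hurst values whose near-constancy defines the "segments of stationarity" above.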


2001, Vol 04 (01), pp. 147-177
Author(s): GILLES ZUMBACH, ULRICH MÜLLER

We present a toolbox to compute and extract information from inhomogeneous (i.e. unequally spaced) time series. The toolbox contains a large set of operators mapping the space of inhomogeneous time series to itself. These operators are computationally efficient (in both time and memory) and suitable for stochastic processes, which makes them attractive for processing high-frequency data in finance and other fields. From a basic set of operators, we easily construct more powerful combined operators that cover a wide range of typical applications. The operators are classified as either macroscopic operators (which have a limit value when the sampling frequency goes to infinity) or microscopic operators (which depend strongly on the actual sampling). For inhomogeneous data, macroscopic operators are more robust and more important. Examples of macroscopic operators are (exponential) moving averages, differentials, derivatives, moving volatilities, and so on.
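
As an illustration of the simplest macroscopic operator mentioned above, the sketch below computes an exponential moving average directly on unequally spaced observations; the previous-point interpolation rule and the decay range tau are assumptions for this example, not a reproduction of the authors' toolbox.

```python
# Sketch: exponential moving average (EMA) on an inhomogeneous time series.
# Previous-point interpolation between observations is assumed; tau is the
# EMA range in the same units as the timestamps.
import math

def ema_inhomogeneous(times, values, tau):
    """EMA of irregularly spaced (times[i], values[i]) observations."""
    ema = values[0]
    out = [ema]
    for i in range(1, len(times)):
        alpha = (times[i] - times[i - 1]) / tau   # elapsed time in units of tau
        mu = math.exp(-alpha)                     # decay over the interval
        # With previous-point interpolation the series is held at values[i-1]
        # over (times[i-1], times[i]], so only that value is averaged in.
        ema = mu * ema + (1.0 - mu) * values[i - 1]
        out.append(ema)
    return out

# Example with irregular tick times (in seconds) and prices:
# ema_inhomogeneous([0, 3, 4, 10], [100.0, 100.2, 100.1, 100.4], tau=60.0)
```

Because each update depends only on the elapsed time and the previous state, the operator runs in constant memory per tick, which is what makes this operator family attractive for high-frequency streams.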


Author(s): Josip Arnerić

Availability of high-frequency data, in line with IT developments, enables the use of more information to estimate not only the variance (volatility), but also higher realized moments and the entire realized distribution of returns. Old-fashioned approaches use only closing prices and assume that the underlying distribution is time-invariant, which makes traditional forecasting models unreliable. Moreover, time-varying realized moments support the finding that returns are not identically distributed across trading days. The objective of the paper is to find an appropriate data-driven distribution of returns using high-frequency data. The kernel estimation method is applied to DAX intraday prices; it balances the bias and the variance of the realized moments with respect to both the bandwidth selection and the sampling frequency selection. The main finding is that the kernel bandwidth is strongly related to the sampling frequency at the slow time scale when applying a two-scale estimator, while the sampling frequency at the fast time scale is held fixed. The realized kernel density estimation enriches the literature by providing the best data-driven proxy of the true but unknown probability density function of returns, which can be used as a benchmark in comparison with ex-ante or implied moments.
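
A minimal sketch of the general idea, not the paper's two-scale estimator: sample intraday prices on a grid, take log-returns and fit a Gaussian kernel density. The 5-minute grid and Silverman's bandwidth rule are illustrative assumptions.

```python
# Sketch: data-driven (kernel) density of intraday returns.
# Assumes `price` is a pandas Series of intraday prices with a DatetimeIndex;
# the 5-minute sampling grid and Silverman bandwidth are illustrative choices.
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde

def intraday_return_density(price: pd.Series, freq: str = "5min"):
    sampled = price.resample(freq).last().dropna()        # sample at the chosen frequency
    returns = np.log(sampled).diff().dropna().to_numpy()  # intraday log-returns
    kde = gaussian_kde(returns, bw_method="silverman")    # data-driven bandwidth
    grid = np.linspace(returns.min(), returns.max(), 512)
    return grid, kde(grid), returns
```

The realized moments of the sampled returns can then be compared with ex-ante or implied moments, as the abstract suggests, using the fitted density as the benchmark.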


2017, Vol 68 (3)
Author(s): Nlandu Mamingi

This paper delivers an up-to-date literature review dealing with aggregation over time of economic time series, i.e. the transformation of high-frequency data into low-frequency data, with a focus on its benefits (the beauty) and its costs (the ugliness). While there are some benefits associated with aggregating data over time, the negative effects are numerous. Aggregation over time is shown to have implications for inference, public policy and forecasting.
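
For concreteness, aggregation over time usually means either point-in-time sampling (stock variables) or summing/averaging over the period (flow variables); the sketch below shows both with pandas, and the daily target frequency is chosen purely for illustration.

```python
# Sketch: aggregating a high-frequency series to a lower frequency with pandas.
# Assumes `x` is a pandas Series with a DatetimeIndex.
import pandas as pd

def aggregate(x: pd.Series, freq: str = "D"):
    return pd.DataFrame({
        "end_of_period": x.resample(freq).last(),  # point-in-time sampling (stock variables)
        "period_sum": x.resample(freq).sum(),      # summing over the period (flow variables)
        "period_mean": x.resample(freq).mean(),    # averaging over the period
    })
```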

