Volatility, Long Memory, and Chaos: A Discussion on some “Stylized Facts” in Financial Markets with a Focus on High Frequency Data

2013 ◽  
pp. 71-101 ◽  
Author(s):  
Amitava Sarkar ◽  
Gagari Chakrabarti ◽  
Chitrakalpa Sen

2020 ◽  
Vol 13 (12) ◽  
pp. 309 ◽  
Author(s):  
Julien Chevallier

The original contribution of this paper is to empirically document the contagion of Covid-19 to financial markets. We merge databases from the Johns Hopkins Coronavirus Center, the Oxford-Man Institute Realized Library, the NYU Volatility Lab, and the St. Louis Federal Reserve. We deploy three types of models throughout our experiments: (i) the Susceptible-Infective-Removed (SIR) model, which predicts the infections' peak on 2020-03-27; (ii) volatility (GARCH), correlation (DCC), and risk-management (Value-at-Risk (VaR)) models, which relate how bears painted Wall Street red; and (iii) data-science tree algorithms with forward pruning, mosaic plots, and Pythagorean forests that crunch the data on confirmed, death, and recovered Covid-19 cases and then tie them to high-frequency data for 31 stock markets.
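The SIR dynamics mentioned in the abstract can be sketched in a few lines of Python. The transmission rate, recovery rate, and population below are illustrative assumptions, not the paper's calibration; the sketch only shows how such a model produces an infections' peak date:

```python
# Minimal SIR sketch via Euler integration (hypothetical parameters,
# not the paper's estimates):
#   dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
def sir_peak_day(beta=0.30, gamma=0.10, N=1_000_000, I0=100, days=200, dt=1.0):
    S, I, R = N - I0, float(I0), 0.0
    peak_day, peak_I = 0, I
    for day in range(1, days + 1):
        new_inf = beta * S * I / N * dt   # new infections this step
        new_rec = gamma * I * dt          # new removals this step
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        if I > peak_I:                    # track the infections' peak
            peak_I, peak_day = I, day
    return peak_day, peak_I

day, peak = sir_peak_day()
```

With a basic reproduction number beta/gamma of 3, the infective count rises until the susceptible share falls to gamma/beta, which is the peak the model dates.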


2006 ◽  
Vol 4 (1) ◽  
pp. 55 ◽
Author(s):  
Marcelo C. Carvalho ◽  
Marco Aurélio S. Freire ◽  
Marcelo Cunha Medeiros ◽  
Leonardo R. Souza

The goal of this paper is twofold. First, using five of the most actively traded stocks in the Brazilian financial market, this paper shows that the normality assumption commonly used in the risk management area to describe the distributions of returns standardized by volatilities is not compatible with volatilities estimated by EWMA or GARCH models. In sharp contrast, when the information contained in high-frequency data is used to construct the realized volatility measures, we attain normality of the standardized returns, giving promise of improvements in Value-at-Risk statistics. We also describe the distributions of volatilities of the Brazilian stocks, showing that they are nearly lognormal. Second, we estimate a simple model of the log of realized volatilities that differs from the ones in other studies. The main difference is that we do not find evidence of long memory. The estimated model is compared with commonly used alternatives in an out-of-sample forecasting experiment.
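The standardization idea at the heart of the first result can be illustrated on simulated data (not the paper's Brazilian stock dataset): even when daily volatility is lognormal, daily returns divided by their realized volatility look approximately Gaussian. The bar count and volatility parameters below are assumptions for the sketch:

```python
# Sketch: returns standardized by realized volatility are near-Gaussian.
import math
import random

random.seed(0)
days, intraday = 500, 78          # e.g. 5-minute bars over 6.5 hours (assumption)
std_returns = []
for _ in range(days):
    sigma = math.exp(random.gauss(-0.2, 0.3))   # lognormal daily volatility
    r = [random.gauss(0.0, sigma / math.sqrt(intraday)) for _ in range(intraday)]
    daily_ret = sum(r)                           # daily return from intraday bars
    rv = math.sqrt(sum(x * x for x in r))        # realized volatility
    std_returns.append(daily_ret / rv)           # standardized return

n = len(std_returns)
mean = sum(std_returns) / n
var = sum((x - mean) ** 2 for x in std_returns) / n
kurt = sum((x - mean) ** 4 for x in std_returns) / n / var ** 2
```

The sample kurtosis lands near the Gaussian value of 3 even though the raw daily returns, with their lognormal volatility, are heavy-tailed; this is the intuition behind the paper's normality result.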


2020 ◽  
Vol 13 (2) ◽  
pp. 38 ◽
Author(s):  
Juan Carlos Ruilova ◽  
Pedro Alberto Morettin

In this work we study a variant of the GARCH model that considers the arrival of heterogeneous information in high-frequency data. This model is known as HARCH(n). We modify the HARCH(n) model to take into consideration some market components that we consider important to the modeling process. The resulting model, called parsimonious HARCH(m,p), takes into account the heterogeneous information present in the financial market and the long memory of volatility. Some theoretical properties of this model are studied. We use maximum likelihood and Griddy-Gibbs sampling to estimate the parameters of the proposed model and apply it to the Euro-Dollar exchange rate series.
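The HARCH(n) variance recursion (Müller et al.'s form, which the abstract builds on) aggregates returns over several horizons before squaring them, which is how heterogeneous information horizons enter. A minimal sketch with illustrative coefficients, not the paper's estimates:

```python
# HARCH(n) conditional variance sketch (illustrative coefficients):
#   sigma_t^2 = c0 + sum_{j=1..n} c_j * (r_{t-1} + ... + r_{t-j})^2
# Each horizon j aggregates the last j returns before squaring, unlike
# GARCH/ARCH, which squares single-period returns only.
def harch_volatility(returns, c0, coeffs):
    n = len(coeffs)
    sigmas = []
    for t in range(len(returns)):
        var = c0
        for j, cj in enumerate(coeffs, start=1):
            if t >= j:  # need j past returns for horizon j
                agg = sum(returns[t - i] for i in range(1, j + 1))
                var += cj * agg * agg
        sigmas.append(var ** 0.5)
    return sigmas

vols = harch_volatility([0.01, -0.02, 0.015], c0=1e-6, coeffs=[0.2, 0.1])
```

Positivity of c0 and the c_j guarantees a positive conditional variance; the longer-horizon terms are what let the model capture volatility persistence across heterogeneous trader time scales.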

