COVID-19 Pandemic and Financial Contagion

2020 ◽  
Vol 13 (12) ◽  
pp. 309 ◽  
Author(s):  
Julien Chevallier

The original contribution of this paper is to empirically document the contagion of Covid-19 to financial markets. We merge databases from the Johns Hopkins Coronavirus Center, the Oxford-Man Institute Realized Library, the NYU Volatility Lab, and the St. Louis Federal Reserve. We deploy three types of models throughout our experiments: (i) the Susceptible-Infective-Removed (SIR) model, which predicts the infections’ peak on 2020-03-27; (ii) volatility (GARCH), correlation (DCC), and risk-management (Value-at-Risk (VaR)) models that relate how bears painted Wall Street red; and (iii) data-science tree algorithms with forward pruning, mosaic plots, and Pythagorean forests that crunch the data on confirmed, death, and recovered Covid-19 cases and then tie them to high-frequency data for 31 stock markets.
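As a hedged illustration of the first modelling block, the sketch below integrates a textbook SIR system and reads off the infection peak; the population size and the transmission/removal rates are assumed values for demonstration, not the paper's calibration to the Johns Hopkins data.

```python
# Minimal SIR sketch (illustrative only): population size and the
# transmission/removal rates below are assumptions, not the paper's
# calibration to the Johns Hopkins data.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, N):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

N = 1_000_000              # assumed population size
beta, gamma = 0.35, 0.10   # assumed transmission and removal rates
t = np.linspace(0, 180, 181)
S, I, R = odeint(sir, (N - 1.0, 1.0, 0.0), t, args=(beta, gamma, N)).T
print("infections peak on day", int(t[I.argmax()]))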

2006 ◽  
Vol 4 (1) ◽  
pp. 55
Author(s):  
Marcelo C. Carvalho ◽  
Marco Aurélio S. Freire ◽  
Marcelo Cunha Medeiros ◽  
Leonardo R. Souza

The goal of this paper is twofold. First, using five of the most actively traded stocks in the Brazilian financial market, this paper shows that the normality assumption commonly used in the risk management area to describe the distributions of returns standardized by volatilities is not compatible with volatilities estimated by EWMA or GARCH models. In sharp contrast, when the information contained in high-frequency data is used to construct the realized volatility measures, we attain normality of the standardized returns, giving promise of improvements in Value-at-Risk statistics. We also describe the distributions of volatilities of the Brazilian stocks, showing that they are nearly lognormal. Second, we estimate a simple model of the log of realized volatilities that differs from the ones in other studies. The main difference is that we do not find evidence of long memory. The estimated model is compared with commonly used alternatives in an out-of-sample forecasting experiment.
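A minimal sketch of the standardization step described above, run on simulated 5-minute prices rather than the Brazilian tick data: realized volatility is the square root of the daily sum of squared intraday log returns, and the Jarque-Bera statistics probe the claimed (log)normality.

```python
# Sketch of the standardization procedure on simulated 5-minute prices
# (the Brazilian high-frequency data themselves are not reproduced here).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
idx = pd.date_range("2023-01-02 10:00", periods=20_000, freq="5min")
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 1e-3, idx.size))), index=idx)

r = np.log(prices).diff().dropna()                                      # intraday log returns
rv = r.groupby(r.index.date).apply(lambda x: np.sqrt((x ** 2).sum()))   # daily realized volatility
daily = r.groupby(r.index.date).sum()                                   # daily log return
std_ret = (daily / rv).dropna()

print(stats.jarque_bera(std_ret))      # normality of standardized returns
print(stats.jarque_bera(np.log(rv)))   # approximate lognormality of volatility
```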


2020 ◽  
Vol 96 (314) ◽  
pp. 314-330
Author(s):  
Wenying Yao ◽  
Mardi Dungey ◽  
Vitali Alexeev

1999 ◽  
Vol 6 (5) ◽  
pp. 431-455 ◽  
Author(s):  
Andrea Beltratti ◽  
Claudio Morana

Author(s):  
Yuta Koike

A new approach for modeling lead–lag relationships in high-frequency financial markets is proposed. The model accommodates non-synchronous trading and market microstructure noise, as well as intraday variations of lead–lag relationships, which are essential for empirical applications. A simple statistical methodology for analyzing the proposed model is also presented. The methodology is illustrated by an empirical study to detect lead–lag relationships between the S&P 500 index and its two derivative products.
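The sketch below conveys the lead–lag idea in its simplest form: scan integer lags on a common grid and pick the one maximizing the cross-correlation of returns. The paper's estimator additionally copes with non-synchronous observation times and microstructure noise, which this illustration deliberately ignores; the data are simulated.

```python
# A simplified lead-lag scan: correlate x with shifted copies of y on a
# common grid; a positive lag means y follows x. The paper's estimator
# also handles non-synchronous trading and microstructure noise, which
# this sketch deliberately ignores.
import numpy as np
import pandas as pd

def lead_lag_corr(x: pd.Series, y: pd.Series, max_lag: int) -> pd.Series:
    """Correlation of x with y shifted by each lag (positive lag: y follows x)."""
    return pd.Series({k: x.corr(y.shift(-k)) for k in range(-max_lag, max_lag + 1)})

# Simulated example in which y reacts to x with a three-bar delay.
rng = np.random.default_rng(1)
x = pd.Series(rng.normal(size=10_000))
y = x.shift(3) + 0.3 * rng.normal(size=10_000)
cc = lead_lag_corr(x, y, max_lag=10)
print("estimated lag:", cc.idxmax())   # expected: 3
```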


2022 ◽  
Vol 15 (1) ◽  
pp. 1-20
Author(s):  
Ravinder Kumar ◽  
Lokesh Kumar Shrivastav

Designing a system for the analytics of high-frequency (big) data is a challenging and crucial task in data science. Big data analytics involves the development of efficient machine learning algorithms and big data processing techniques or frameworks, and systems that can process high-frequency data efficiently are in high demand. This paper proposes the processing and analytics of stochastic high-frequency stock market data using a modified version of the Gradient Boosting Machine (GBM). The experimental results obtained are compared with deep learning and Auto-Regressive Integrated Moving Average (ARIMA) methods. The modified GBM achieves the highest accuracy (R2 = 0.98) and the lowest error (RMSE = 0.85) of the three approaches.
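Since the modified GBM itself is not reproduced here, the sketch below shows only a baseline gradient-boosting setup on lagged prices with the same R2/RMSE evaluation, using simulated data and scikit-learn's standard GradientBoostingRegressor.

```python
# Baseline sketch only: the paper's modified GBM is not reproduced, so this
# uses scikit-learn's standard GradientBoostingRegressor on lagged prices;
# the price series is simulated, not the stock-market data from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0, 1, 5000))

lags = 5
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]

split = int(0.8 * len(y))                       # chronological train/test split
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

print("R2  :", round(r2_score(y[split:], pred), 3))
print("RMSE:", round(np.sqrt(mean_squared_error(y[split:], pred)), 3))
```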


2009 ◽  
Vol 20 (2) ◽  
pp. 128-136 ◽  
Author(s):  
Xi-Dong Shao ◽  
Yu-Jun Lian ◽  
Lian-Qian Yin
