The predictive capacity of GARCH-type models in measuring the volatility of crypto and world currencies

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245904
Author(s):  
Viviane Naimy ◽  
Omar Haddad ◽  
Gema Fernández-Avilés ◽  
Rim El Khoury

This paper provides a thorough overview and further clarification of the volatility behavior of the six major cryptocurrencies (Bitcoin, Ripple, Litecoin, Monero, Dash and Dogecoin) with respect to world currencies (Euro, British Pound, Canadian Dollar, Australian Dollar, Swiss Franc and the Japanese Yen), the relative performance of diverse GARCH-type specifications, namely the SGARCH, IGARCH (1,1), EGARCH (1,1), GJR-GARCH (1,1), APARCH (1,1), TGARCH (1,1) and CGARCH (1,1), and the forecasting performance of the Value at Risk measure. The sampled period extends from October 13, 2015 to November 18, 2019. The findings evidence the superiority of the IGARCH model, in both the in-sample and the out-of-sample contexts, in forecasting the volatility of world currencies, namely the British Pound, Canadian Dollar, Australian Dollar, Swiss Franc and the Japanese Yen. The CGARCH alternative modeled the Euro almost perfectly during both periods. Advanced GARCH models better depicted asymmetries in cryptocurrencies’ volatility and revealed persistence and “intensifying” levels in their volatility. The IGARCH was the best-performing model for Monero. As for the remaining cryptocurrencies, the GJR-GARCH model proved superior during the in-sample period, while the CGARCH and TGARCH specifications were optimal in the out-of-sample interval. VaR forecasting performance is enhanced by the use of asymmetric GARCH models. The VaR results provided a very accurate measure of the level of downside risk to which the selected exchange currencies are exposed at all confidence levels. The outcomes were far from uniform for the selected cryptocurrencies, however: convincing for Dash and Dogecoin, acceptable for Litecoin and Monero, and unconvincing for Bitcoin and Ripple, for which the (optimal) model was not rejected only at the 99% confidence level.
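As a minimal sketch of the models being compared, the code below filters a conditional variance series through the GARCH(1,1) recursion and through the IGARCH restriction (alpha + beta = 1), under which variance shocks persist. The parameter values and the simulated returns are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Filter the GARCH(1,1) conditional variance series for fixed parameters:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, 500)                                          # toy return series
sig2_garch = garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.90)   # stationary case
sig2_igarch = garch11_variance(r, omega=1e-6, alpha=0.05, beta=0.95)  # IGARCH: alpha + beta = 1
```

In practice these parameters are estimated by maximum likelihood rather than fixed; the point of the sketch is only the variance recursion that all GARCH-type specifications share.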

2015 ◽  
Vol 11 (4) ◽  
pp. 438-450 ◽  
Author(s):  
Stoyu I. Ivanov

Purpose – The purpose of this paper is to determine whether erosion of value exists in grantor-trust-structured exchange traded funds. The author examines the performance of six currency exchange traded funds’ tracking errors and pricing deviations on an intradaily one-minute interval basis. All of these exchange traded funds are grantor trusts. The author also studies which metric matters more to investors in these exchange traded funds by examining how the performance metrics relate to the funds’ arbitrage mechanism. Design/methodology/approach – The Australian Dollar ETF (FXA) is designed to be 100 times the US Dollar (USD) value of the Australian Dollar, the British Pound ETF (FXB) 100 times the USD value of the British Pound, the Canadian Dollar ETF (FXC) 100 times the USD value of the Canadian Dollar, the Euro ETF (FXE) 100 times the USD value of the Euro, the Swiss Franc ETF (FXF) 100 times the USD value of the Swiss Franc and the Japanese Yen ETF (FXY) 10,000 times the USD value of the Japanese Yen. The author uses these proportions to estimate pricing deviations, and a moving average model based on Elton et al. (2002) to estimate whether tracking error or pricing deviation is more relevant in ETF arbitrage and thus to investors. Findings – The average intradaily tracking errors for the six currency ETFs are relatively small and stable. The tracking errors are highest for FXF, at 0.000311 percent, and smallest for FXB, at −0.000014 percent; FXB is the only ETF with a negative tracking error. The average intradaily pricing deviations are negative for all six ETFs except FXA, whose pricing deviation is a positive $0.17; the remaining pricing deviations are −0.3778 for FXB, −0.3231 for FXC, −0.2697 for FXE, −0.6484 for FXF and −0.9273 for FXY. All exhibit skewness, kurtosis, very high levels of positive autocorrelation and negative trends, which suggests erosion of value. The author also finds that these exchange traded funds’ arbitrage mechanism is more closely related to pricing deviation than to tracking error. Research limitations/implications – The paper uses high-frequency one-minute interval data in the analysis of pricing deviation, which might artificially deflate standard errors and thus inflate t-test significance values. Originality/value – The paper is relevant to ETF investors and contributes to the finance literature’s continuing search for a better ETF performance metric.
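The two metrics compared in the abstract can be written down in a few lines. The sketch below uses the stated 100x design of the funds; the price and exchange-rate figures are hypothetical, not observations from the paper.

```python
def pricing_deviation(etf_price, spot_usd, multiplier=100):
    """Deviation of the traded ETF price from its indicative value,
    where the indicative value is a fixed multiple of the currency's USD spot."""
    return etf_price - multiplier * spot_usd

def tracking_error(etf_return, index_return):
    """Per-interval return difference between the fund and its index."""
    return etf_return - index_return

# Hypothetical example: FXE trading at $108.95 while EUR/USD is 1.0920,
# so the fund trades about $0.25 below its indicative value (a discount).
dev = pricing_deviation(108.95, 1.0920)
```

The arbitrage-mechanism question in the paper then amounts to asking which of these two series better predicts creation/redemption activity.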


Author(s):  
Bahram Adrangi ◽  
Mary Allender ◽  
Arjun Chatrath ◽  
Kambiz Raffiee

Employing daily bilateral exchange rates of the dollar against the Canadian dollar, the Swiss franc and the Japanese yen, we conduct a battery of tests for the presence of low-dimension chaos. The three stationary series are subjected to correlation dimension tests, BDS tests, and tests for entropy. While we find strong evidence of nonlinear dependence in the data, the evidence is not consistent with chaos. Our test results indicate that GARCH-type processes explain the nonlinearities in the data. We also show that employing seasonally adjusted index series enhances the robustness of results from the existing tests for chaotic structure.
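The correlation dimension test rests on the Grassberger–Procaccia correlation integral: the fraction of pairs of m-dimensional delay vectors lying within a radius eps. A bare-bones version is sketched below; the embedding dimension, radius, and simulated series are illustrative choices, not those used in the study.

```python
import numpy as np

def correlation_integral(x, m, eps):
    """Fraction of pairs of m-histories of x within sup-norm distance eps."""
    n = len(x) - m + 1
    emb = np.column_stack([x[i:i + n] for i in range(m)])  # delay embedding
    count, pairs = 0, 0
    for i in range(n):
        d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)   # sup-norm distances to later points
        count += np.sum(d < eps)
        pairs += n - i - 1
    return count / pairs

rng = np.random.default_rng(1)
c = correlation_integral(rng.normal(size=300), m=2, eps=0.5)
```

For a chaotic series, log C(eps) scales with log eps at a slope (the correlation dimension) that saturates as m grows; for stochastic data the slope keeps increasing with m, which is the distinction the tests exploit.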


2006 ◽  
Vol 4 (1) ◽  
pp. 55
Author(s):  
Marcelo C. Carvalho ◽  
Marco Aurélio S. Freire ◽  
Marcelo Cunha Medeiros ◽  
Leonardo R. Souza

The goal of this paper is twofold. First, using five of the most actively traded stocks in the Brazilian financial market, this paper shows that the normality assumption commonly used in the risk management area to describe the distributions of returns standardized by volatilities is not compatible with volatilities estimated by EWMA or GARCH models. In sharp contrast, when the information contained in high-frequency data is used to construct the realized volatility measures, we attain normality of the standardized returns, giving promise of improvements in Value-at-Risk statistics. We also describe the distributions of volatilities of the Brazilian stocks, showing that they are nearly lognormal. Second, we estimate a simple model of the log of realized volatilities that differs from the ones in other studies. The main difference is that we do not find evidence of long memory. The estimated model is compared with commonly used alternatives in an out-of-sample forecasting experiment.
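The diagnostic in the first part of the abstract can be illustrated simply: divide returns by a volatility estimate and check whether the standardized series looks Gaussian (excess kurtosis near zero). The sketch below uses an EWMA volatility with the common RiskMetrics smoothing constant 0.94 on simulated data, not the Brazilian stocks studied.

```python
import numpy as np

def ewma_vol(returns, lam=0.94):
    """Exponentially weighted volatility estimate (RiskMetrics-style)."""
    s2 = np.empty(len(returns))
    s2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        s2[t] = lam * s2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(s2)

def excess_kurtosis(z):
    """Zero for a Gaussian sample; positive values signal fat tails."""
    z = (z - z.mean()) / z.std()
    return np.mean(z ** 4) - 3.0

rng = np.random.default_rng(2)
r = rng.normal(0, 0.01, 2000)       # toy return series
z = r / ewma_vol(r)                 # returns standardized by estimated volatility
k = excess_kurtosis(z)
```

The paper's point is that on real data this standardization leaves fat tails when EWMA/GARCH volatilities are used, whereas standardizing by realized volatility built from intraday data restores approximate normality.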


2021 ◽  
Vol 14 (6) ◽  
pp. 251
Author(s):  
Yuhao Liu ◽  
Petar M. Djurić ◽  
Young Shin Kim ◽  
Svetlozar T. Rachev ◽  
James Glimm

We investigate a systemic risk measure known as CoVaR that represents the value-at-risk (VaR) of a financial system conditional on an institution being under distress. For characterizing and estimating CoVaR, we use the copula approach and introduce the normal tempered stable (NTS) copula based on the Lévy process. We also propose a novel backtesting method for CoVaR by a joint distribution correction. We test the proposed NTS model on the daily S&P 500 index and Dow Jones index with in-sample and out-of-sample tests. The results show that the NTS copula outperforms traditional copulas in accurately modeling both tail dependence and the marginal processes.
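Stripped of the NTS-copula machinery, the CoVaR definition in the abstract has a plain empirical counterpart: the VaR of the system computed only over days on which the institution breaches its own VaR. The sketch below uses simulated correlated returns standing in for the index data.

```python
import numpy as np

def empirical_covar(system, institution, q=0.05):
    """q-quantile of system returns on days the institution is in distress,
    i.e. on days its return is at or below its own q-quantile (VaR level)."""
    var_inst = np.quantile(institution, q)        # institution's VaR level
    distressed = system[institution <= var_inst]  # conditioning event
    return np.quantile(distressed, q)

rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=5000)
covar = empirical_covar(z[:, 0], z[:, 1])
uncond_var = np.quantile(z[:, 0], 0.05)
# With positive dependence, CoVaR sits deeper in the tail than the
# unconditional VaR; the gap measures systemic risk contribution.
```

The copula approach in the paper replaces this raw conditioning with a parametric joint model, which is what makes out-of-sample forecasting and backtesting feasible.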


2020 ◽  
Vol 18 (3) ◽  
pp. 80
Author(s):  
Paulo Fernando Marschner ◽  
Paulo Sergio Ceretta

This study aims to understand the volatility behavior of six highly representative cryptocurrencies. To do so, EGARCH and Markov-switching EGARCH models were estimated, combined with different statistical probability distributions. The predictive capacity of the best models resulting from these combinations was tested by predicting value-at-risk. The daily returns of the cryptocurrencies clearly show regime changes in their volatility dynamics. In the in-sample analysis, the regime-switching model confirms the existence of two states: the first characterized by a greater ARCH effect and less affected by asymmetries, and the second revealing a greater effect of the arrival of information, that is, greater sensitivity to asymmetric shocks. In the out-of-sample analysis, the value-at-risk predictions of the regime-switching model clearly outperform those of the single-regime model at the extreme 1% quantile.
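The two-state idea behind the Markov-switching specification can be illustrated with a toy simulation: volatility alternates between a calm and a turbulent state governed by a Markov chain. All parameters below (state volatilities, staying probabilities) are invented for illustration, not the paper's MS-EGARCH estimates.

```python
import numpy as np

def simulate_regime_switching(n, p_stay=(0.98, 0.95), vols=(0.01, 0.05), seed=4):
    """Simulate returns whose volatility is set by a two-state Markov chain.
    p_stay[s] is the probability of remaining in state s; vols[s] its volatility."""
    rng = np.random.default_rng(seed)
    states = np.empty(n, dtype=int)
    states[0] = 0
    for t in range(1, n):
        stay = rng.random() < p_stay[states[t - 1]]
        states[t] = states[t - 1] if stay else 1 - states[t - 1]
    returns = rng.normal(0, np.take(vols, states))
    return returns, states

r, s = simulate_regime_switching(3000)
calm_sd, turb_sd = r[s == 0].std(), r[s == 1].std()  # turbulent state is wider
```

Estimating such a model reverses this simulation: the states are latent and are inferred from the returns, with an EGARCH recursion (rather than a constant volatility) operating within each regime.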


2005 ◽  
Vol 25 (1) ◽  
pp. 43
Author(s):  
Leonardo Souza ◽  
Alvaro Veiga ◽  
Marcelo C. Medeiros

The important issue of forecasting volatilities brings the difficult task of back-testing forecasting performance. As volatility cannot be observed directly, one has to use an observable proxy for volatility, or a utility function, to assess prediction quality. This kind of procedure can easily lead to poor assessment. The goal of this paper is to compare different volatility models and different performance measures using White’s Reality Check. The Reality Check is a non-parametric test that checks whether any of a number of concurrent methods yields forecasts significantly better than a given benchmark method. For this purpose, a Monte Carlo simulation is carried out with four different processes, one of them a Gaussian white noise and the others following GARCH specifications. Two benchmark methods are used: the naive (predicting out-of-sample volatility by the in-sample variance) and the RiskMetrics method.
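A stripped-down sketch of the Reality Check logic follows: bootstrap the maximum average loss differential between each candidate method and the benchmark, and report how often the bootstrapped maximum exceeds the observed one. For brevity this uses a plain i.i.d. bootstrap, whereas White's procedure uses the stationary bootstrap; the loss series are simulated, not taken from the paper.

```python
import numpy as np

def reality_check_pvalue(bench_loss, cand_losses, n_boot=2000, seed=5):
    """Bootstrap p-value for 'the best candidate beats the benchmark'.
    bench_loss: (n,) losses of the benchmark; cand_losses: (n, k) candidate losses."""
    rng = np.random.default_rng(seed)
    d = bench_loss[:, None] - cand_losses        # positive mean = candidate better
    n = len(bench_loss)
    stat = np.sqrt(n) * d.mean(axis=0).max()     # observed max loss differential
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)              # i.i.d. resample (simplification)
        db = d[idx]
        boot[b] = np.sqrt(n) * (db.mean(axis=0) - d.mean(axis=0)).max()
    return np.mean(boot >= stat)                 # Reality Check p-value

rng = np.random.default_rng(6)
bench = rng.normal(1.0, 0.2, 500)                # benchmark losses
cands = rng.normal(1.0, 0.2, (500, 3))           # three equally good candidates
p = reality_check_pvalue(bench, cands)
```

Taking the maximum over candidates inside the bootstrap is what controls the data-snooping bias that motivates the test: comparing many methods to one benchmark and reporting only the best would otherwise overstate significance.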


2021 ◽  
pp. 1-29
Author(s):  
Yanhong Chen

In this paper, we study the optimal reinsurance contracts that minimize the convex combination of the Conditional Value-at-Risk (CVaR) of the insurer’s loss and the reinsurer’s loss over the class of ceded loss functions such that the retained loss function is increasing and the ceded loss function satisfies the Vajda condition. Among a general class of reinsurance premium principles that satisfy the properties of risk loading and convex-order preservation, the optimal solutions are obtained. Our results show that the optimal ceded loss functions take the form of five interconnected segments for general reinsurance premium principles, and can be further simplified to four interconnected segments if more properties are added to the reinsurance premium principles. Finally, we derive optimal parameters for the expected value premium principle and present a numerical study analyzing the impact of the weighting factor on the optimal reinsurance.
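Two numerical ingredients of the problem in the abstract are easy to state concretely: the empirical CVaR of a loss sample, and the split of a loss between insurer and reinsurer under a ceded loss function. The stop-loss shape and the retention level d below are illustrative choices, not optima derived in the paper.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Average of the losses in the worst (1 - alpha) tail (empirical CVaR)."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def stop_loss_cession(losses, d):
    """Ceded part max(L - d, 0); the insurer then retains min(L, d)."""
    return np.maximum(losses - d, 0.0)

rng = np.random.default_rng(7)
L = rng.lognormal(0.0, 1.0, 10_000)   # hypothetical loss sample
ceded = stop_loss_cession(L, d=2.0)
retained = L - ceded                  # equals min(L, 2.0)
```

The paper's objective weights cvar(retained) against cvar(ceded) and optimizes over admissible ceded loss functions, which is where the five-segment (or four-segment) structure emerges.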


2021 ◽  
Vol 14 (5) ◽  
pp. 201
Author(s):  
Yuan Hu ◽  
W. Brent Lindquist ◽  
Svetlozar T. Rachev

This paper investigates performance attribution measures as a basis for constraining portfolio optimization. We employ optimizations that minimize conditional value-at-risk and investigate two performance attributes, asset allocation (AA) and the selection effect (SE), as constraints on asset weights. The test portfolio consists of stocks from the Dow Jones Industrial Average index. Values for the performance attributes are established relative to two benchmarks, equi-weighted and price-weighted portfolios of the same stocks. Performance of the optimized portfolios is judged using comparisons of cumulative price and the risk-measures: maximum drawdown, Sharpe ratio, Sortino–Satchell ratio and Rachev ratio. The results suggest that achieving SE performance thresholds requires larger turnover values than that required for achieving comparable AA thresholds. The results also suggest a positive role in price and risk-measure performance for the imposition of constraints on AA and SE.
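Two of the comparison measures named in the abstract have compact empirical forms, sketched below; the simulated price path and the 5% tail levels are illustrative, not the paper's data or settings.

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough decline of a cumulative price series."""
    peaks = np.maximum.accumulate(prices)
    return np.max((peaks - prices) / peaks)

def rachev_ratio(returns, alpha=0.05, beta=0.05):
    """Average gain in the best alpha tail divided by the average loss
    magnitude in the worst beta tail."""
    upper = np.quantile(returns, 1 - alpha)
    lower = np.quantile(returns, beta)
    return returns[returns >= upper].mean() / -returns[returns <= lower].mean()

rng = np.random.default_rng(8)
r = rng.normal(0.0005, 0.01, 1000)    # toy daily returns
prices = 100 * np.cumprod(1 + r)      # cumulative price path
mdd = max_drawdown(prices)
rr = rachev_ratio(r)
```

A Rachev ratio above one indicates that the upper tail outweighs the lower tail, which is why it complements drawdown- and deviation-based measures when ranking the optimized portfolios.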

